VSTS Extension Template (w/ React and Webpack)

It is pretty clear to me nowadays that I always have a hard time focusing on a single subject to learn. Every time I start to study something cool, something else, usually also pretty cool, gets my attention. This time, VSTS extensions decided to get my attention.

Recently I realized that a feature that is pretty awesome on GitHub is not present on VSTS, and it is not even being tracked on VSTS UserVoice, so I decided to give it a shot and develop it myself, as an extension.

I am still working on my extension, but it took me some effort to put together the pieces I needed, so I decided it would be a good idea to share this template with everyone who is interested in developing a new extension.

The template code can be accessed here. On the project README file you should find all the information you need to start creating your own extension!

Starting a very simple VSTS extension is really easy, but in this template I have already set up some nice tools that should make your life much easier and your development much more productive.

Wait… What is VSTS?

VSTS, which stands for Visual Studio Team Services, is a cloud-based service offered by Microsoft for collaboration on code development.

As simple as it may sound, VSTS is actually a robust service that allows you to manage the entire flow of your application development. You can read a lot more about it on the links at the end of this post.

And, up to a certain limit, it is free! :)

Why would I need this template?

You don’t really need it. Starting a new VSTS extension is a really straightforward process, and you can probably start one in 10 minutes. The problem, or at least the hard and repetitive work, is in setting up the tools to make your development process decent and productive. That is where the template can help you.

The whole point of the template is to allow you to focus on your extension code, without having to worry about setting up the tools that you are going to need.

The tools set up in the template are:

  • React
    • For the user interface
    • It is, of course, set up to compile JSX
  • Webpack
    • Webpack is the module bundler that will take care of transpiling all the JSX code (and some other resources) to its final state
  • Jest
    • This is the unit testing framework for our extensions :)
    • I didn’t even know Jest until I decided to start this extension! It is created by Facebook and looks really nice and simple
  • TFX-CLI
    • This is the command line tool used by TFS and VSTS for a lot of smaller tasks, including managing extensions
    • This tool is a simple npm package, so it will be installed with the rest of the tools
  • Travis CI/CD
    • There is also a Travis CI/CD build partially set up on the project
    • This will require a little bit of manual work to make it work perfectly for your project
  • ESLint
    • To make our code look nice and follow minimum good practices.
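As a taste of what TFX-CLI does, packaging an extension into an installable .vsix file boils down to a couple of commands. This is just a sketch: vss-extension.json is the conventional manifest file name, so adjust it to whatever your project uses.

```shell
# Install the TFX command line tool globally
npm install -g tfx-cli

# Package the extension described by the manifest into a .vsix file
tfx extension create --manifest-globs vss-extension.json
```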

The Code Editor

I must mention that the template also contains some configurations that make it much easier to use with Visual Studio Code.

It is not really a requirement, but it will give you some extras, like a couple of suggested extensions and some pre-configured tasks for running builds and tests with the default shortcuts.

Feedback

Please let me know if you have any feedback on the template, and especially if you face any problems; I will definitely do my best to help.

You can also use the repository issues panel to report any problem or give any suggestions.

That is it for now :) I hope to be back really soon with more content.

Sources

Creating an OData v4 API with ASP.NET Core 2.0

Hallo Mensen :D Let’s talk OData, shall we?

In the last few years my work has revolved a lot around REST APIs. I think most of us will agree that REST APIs are a really good way of obtaining data from a server without caring too much about the details of how to get access to that data.

You call a URL with the proper parameters (including auth or not), headers, and HTTP verb, and you get some data back. Easy enough, right?

The negative side is that these APIs are either very rigid in the way they are implemented, meaning that you can only get the data in the exact way defined by the server, or very verbose in their implementation, meaning that you will have a lot of code to maintain on the server side to make the API flexible.

Then what?… Use OData!

What is the Open Data Protocol (OData)?

OData Logo

OData, or Open Data Protocol, is a set of ISO-approved standards for building and consuming RESTful APIs. But what does that mean, in practice?

It means that OData is not really an implementation that you simply use but documentation specifying how you must implement it yourself. You can read a lot more about OData here.

Why would I want to use OData?

There are a number of benefits that you will obtain from implementing OData standards, from an easier learning path on the usage of your API by your customers/consumers, to the fact that OData is easily readable by machines, and in this post I want to talk about the flexibility that implementing OData gives to your API through the OData URL Conventions.

Using URL Conventions you can expose a much cleaner and generic API and let the consumer specify their needs through the call.

OData URL Conventions

The OData URL Conventions are a set of commands that you can pass to the API through the HTTP call query string.

An OData URL is typically composed of three parts: the service root URL, the resource path and the query options.

OData Url Format

  • Service Root URL is the root address for the API.
  • Resource Path identifies exactly which resource you are trying to achieve.
  • Query Options are how you tell the API the format in which you need the data to be delivered.

How does that sound? Simple enough, right? But also, with the proper options, extremely powerful. Let me give you a list of possible options within the Query Options block of the call:

  • $select: Allows you to define a subset of properties to return from that Resource.
  • $expand: Allows you to include data from a related resource to your query results.
  • $orderby: Not surprisingly, allows you to define the ordering of the returned dataset.
  • $top: Allows you to select the top X results of the query.
  • $skip: Allows you to skip X results of the query.
  • $count: Allows you to get a count of the items that would result from that query.
  • $search: Allows for a free-text search on that particular resource.
  • $format: Allows you to define the format for the returned data in some query types.
  • $filter: Allows you to define a filter for your dataset.

As you can see, many of the commands are pretty similar to what you have in most of the common query languages.
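As a hypothetical illustration of how these options compose (the service, resource and property names below are placeholders, not part of the sample built later in this post), a single call can stack several of them in the query string:

```
https://example.com/odata/Products?$filter=Price gt 10&$orderby=Name&$top=5&$select=Name,Price
```

This would return the name and price of the first five products costing more than 10, ordered by name.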

I will go into a bit more detail in each of those options on the Code sample.

OData and ASP.NET

ASP.NET Core still doesn’t have a stable library implementing the OData protocol! But worry you not, as Microsoft has been working on it for some time, and right now we have a really promising beta version on NuGet. You can find it here.

ASP.NET Framework has a really good library to implement OData, and it is quite stable by now. You can find it here.

Enough with the theory, how can we implement this query protocol in my ASP.NET Core Application?

Let’s code!

Implementing your API

Let’s start by creating a simple ASP.NET Core Web API application in Visual Studio and creating our models.

Also, let’s create our DbContext…

…and configure our Services.
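The original code snippets are embedded in the blog; as a rough sketch of what these first steps look like (the entity properties are grounded in the rest of the post, but the context name and connection-string wiring are my assumptions):

```csharp
using System.Collections.Generic;
using Microsoft.EntityFrameworkCore;

// Models: plain entities with a one-to-many relation
public class Author
{
    public int Id { get; set; }
    public string Name { get; set; }
    public List<Book> Books { get; set; }
}

public class Book
{
    public int Id { get; set; }
    public string Name { get; set; }
    public Author Author { get; set; }
}

// DbContext exposing one DbSet per entity
public class LibraryContext : DbContext
{
    public LibraryContext(DbContextOptions<LibraryContext> options) : base(options) { }

    public DbSet<Author> Authors { get; set; }
    public DbSet<Book> Books { get; set; }
}

// In Startup.ConfigureServices: register the context
// services.AddDbContext<LibraryContext>(options => options.UseSqlServer(connectionString));
```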

Good! Our plumbing is set, now we are going to seed some initial data to our database.

And now we call our seeder on app startup.

We must not forget to add our migration.
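Assuming the EF Core CLI tools are available, adding the migration is just (the migration name is arbitrary):

```shell
# Create the initial migration and apply it to the database
dotnet ef migrations add InitialCreate
dotnet ef database update
```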

And last, but not least, let’s implement the simplest possible API for our three entities.
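A sketch of what one of those plain controllers might look like before OData (the context name is an assumption from my earlier sketch):

```csharp
using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;

[Route("api/[controller]")]
public class BooksController : Controller
{
    private readonly LibraryContext _context;

    public BooksController(LibraryContext context) => _context = context;

    // Returns every book, always eagerly including the related author
    [HttpGet]
    public IActionResult Get() =>
        Ok(_context.Books.Include(b => b.Author).ToList());
}
```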

Done. Now we can test it using Postman:

 


Wow! It certainly looks like a nice API, doesn’t it? What is the problem with it? Why would I ever want to add OData to it?

Well, there are two fundamental problems with this approach to our API: the payload size and the querying of the data from the database.

Payload Size

The payload format is completely defined on the API/server side of the application, and the client cannot define which data it really needs to receive.

This can be made more flexible by adding complexity to the code (more parameters? more calls?), but that is not what we want, right?

In the most common scenarios the client will simply have to ignore a lot of data that it doesn’t care about.

Look at the result of our query for books below and tell me: what should we do if we only want the name of the first book on the list?

We have no option here other than accepting all this data and filtering what we need on the client side.

Querying the Data

For much the same reason, all the queries to the database have to be done in a very rigid way, not allowing for smaller queries whenever possible.

In the same request as above, we just asked for a list of books. Let’s have a look at what was sent to the database:

All this huge query just to get the name of one book. That doesn’t sound good, right?

Let’s make it better with OData :)

Changing our API to OData

The good news is that we can use much of the same structure for our OData API, we just need to make a few configurations. Let’s start by installing the package.
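From the command line, installing it would look something like this (at the time of writing only a prerelease version was available, so your tooling may need a prerelease switch or an explicit `--version`):

```shell
# Add the OData package for ASP.NET Core to the project
dotnet add package Microsoft.AspNetCore.OData
```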

As you can see, the OData package for .NET Core is still in beta, as opposed to the .NET Framework version of the package, which has been stable for a long time already. I have high hopes that this package will be out of beta in no time!

Let’s configure our entities to understand the OData commands.

I commented on the function of each call in this class; pay close attention to it, as the calls are paramount for the full use of the URL Conventions. And now let’s wire our OData configuration into the rest of the API.
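The configuration being described might look roughly like this. This is a sketch assuming the 7.x beta API of Microsoft.AspNetCore.OData; the singular entity set names follow the URLs used later in the post, and `GetEdmModel` is my own helper name:

```csharp
using Microsoft.AspNet.OData.Builder;
using Microsoft.AspNet.OData.Extensions;
using Microsoft.OData.Edm;

public static IEdmModel GetEdmModel()
{
    var builder = new ODataConventionModelBuilder();
    // Singular entity set names, matching /odata/Book and /odata/Author
    builder.EntitySet<Book>("Book");
    builder.EntitySet<Author>("Author");
    return builder.GetEdmModel();
}

// In Startup.Configure (services.AddOData() must also be called in ConfigureServices):
// enable the query options we want to support, then map the OData route
app.UseMvc(routeBuilder =>
{
    routeBuilder.Select().Expand().Filter().OrderBy().Count().MaxTop(100);
    routeBuilder.MapODataServiceRoute("odata", "odata", GetEdmModel());
});
```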

All good! Finally, we must adjust our three controllers to have them accept OData URLs. Things you should notice were changed in the controllers:

  • All the controllers were renamed to the singular form. That is only necessary due to our configuration in the ModelBuilder; they can be configured to be plural.
  • The return types were all changed to IQueryable<T>.
  • The .Include() calls were removed, as they are no longer necessary. The OData package will take care of this for you.
  • We are no longer inheriting from Controller but from ODataController.
  • We have a new decorator for the API calls: [EnableQuery].
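Putting those bullet points together, an adjusted controller might look like the sketch below (the context name is again my assumption):

```csharp
using System.Linq;
using Microsoft.AspNet.OData;

public class BookController : ODataController
{
    private readonly LibraryContext _context;

    public BookController(LibraryContext context) => _context = context;

    // [EnableQuery] lets OData translate $select, $filter, $expand, etc.
    // into the IQueryable before the query ever hits the database
    [EnableQuery]
    public IQueryable<Book> Get() => _context.Books;
}
```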

And that is it! Our API is ready to be used with OData URL Conventions. Let’s try it?

New API and Results

You can play with the new API format on Postman:

 

The original call to get books would look like this:

http://localhost:5000/api/books

The new call will look like this:

http://localhost:5000/odata/Book

First of all, let’s try to get a list of books and look at the results:

MUCH cleaner, right? Let’s make it even smaller, as we just want the name of the first book on the list:

http://localhost:5000/odata/Book?$top=1

And let’s have a look at the database query for this call:

Yes! A much more efficient call! But wait… we just need the NAME of the book, so why don’t we make it more specific?

http://localhost:5000/odata/Book?$top=1&$select=Name

And that is an awesomely small payload! And the query is also more efficient.

What if you want details about a specific author and all their related books?

http://localhost:5000/odata/Author?$expand=Books&$filter=Name eq 'J.K. Rowling'

Amazing, isn’t it? That can really increase the quality of our APIs.

As a last piece of information, let’s not forget that OData is designed to be readable by machines! So we have a couple of out-of-the-box URLs with documentation on our API:

http://localhost:5000/odata/$metadata

Cool, isn’t it?

What’s Next?

Well, now that you know how simple it is to implement the OData protocol in .NET, I would recommend that you spend some time getting familiar with the protocol itself; you can find all the guidance that you need here.

Also, if you have the intention to use the protocol with .NET Core, I suggest that you keep a close eye on the Nuget Package page and also the feature request on GitHub.

Source Code

You can find the whole source code for this solution on my GitHub.

And that is all for today, folks :)

I hope you enjoyed it, and don’t hesitate to use the comments section if you were left with any questions.

Sources:

Azure Web Jobs – Creating your first Azure Web Job from Scratch

So, you have just created your first Web App in Azure, and it is working like a charm. You have all the scalability you need and you know exactly how to store your data in the many options that Azure provides. What else do you need? Well, what will you do if you need to run some asynchronous background processing, like processing images or batch data? That is when Azure Web Jobs come in handy.

What are Azure Web Jobs?

Straight from the Azure documentation website:

WebJobs is a feature of Azure App Service that enables you to run a program or script in the same context as a web app, API app, or mobile app. The purpose of the WebJobs SDK is to simplify the code you write for common tasks that a WebJob can perform, such as image processing, queue processing, RSS aggregation, file maintenance, and sending emails.

So, as you can see, it can perfectly fit the role of asynchronous background processing, with one interesting advantage: Web Jobs have very powerful integration with other Azure services, like Service Bus and the Storage services, making it incredibly easy not only to read/write data to these services but also to use them as TRIGGERS, avoiding the need for scheduled processing.

Let’s begin.

Creating your first ‘Azure Web Job’ from Scratch

The idea here is to create a simple Web Job that will be triggered after an item is added to an Azure Queue, and after very little parsing, save it to the Azure Tables Service.

The Setup

The first thing we are going to need is an Azure account. If you don’t already have one, you can create one here, for free. Then we will install the Azure CLI 2.0; the installation instructions for all platforms can be found here. You may also want to install the Azure Storage Explorer, a tool that we will use to manage our Storage Account for testing.

Now we need to set up the resources in Azure to which the Web Job will connect. I’ll give you a list of commands for the Azure CLI that must be executed in a shell prompt.

First things first, we need to login to Azure through the console.

Then we need to create the Resource Group that will group all our resources.
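Those first two steps might look like this with the Azure CLI (the resource group name and location below are placeholders of my own, pick whatever suits you):

```shell
# Authenticate the CLI against your Azure account
az login

# Create a resource group to hold everything we build here
az group create --name azure-coder-rg --location westeurope
```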

Storage Account

Next step, create the storage account.

Then we create the Queue, which will be the source of our data, and the Table, which will be the destination of the processed data.
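A sketch of those storage commands (the account and group names are my placeholders; the queue and table names match the ones used later in this post):

```shell
# Create the storage account inside the resource group
az storage account create --name azurecoderstorage --resource-group azure-coder-rg --sku Standard_LRS

# Create the queue (input) and the table (output)
az storage queue create --name azure-coder-queue --account-name azurecoderstorage
az storage table create --name AzureCoderTable --account-name azurecoderstorage
```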

That’s all we need from the Storage service. Now we need to create the App that will host our Web Jobs.

App Service

And last, but not least, let’s configure our Connection Strings for the App:
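And for the App Service part, something along these lines (the plan and app names are placeholders; AzureWebJobsStorage and AzureWebJobsDashboard are the two connection strings the WebJobs SDK expects):

```shell
# Create an App Service plan and the Web App that will host the WebJob
az appservice plan create --name azure-coder-plan --resource-group azure-coder-rg
az webapp create --name azure-coder-app --resource-group azure-coder-rg --plan azure-coder-plan

# Point the WebJobs SDK at our storage account
az webapp config connection-string set --name azure-coder-app --resource-group azure-coder-rg \
  --connection-string-type Custom \
  --settings AzureWebJobsStorage="<storage-connection-string>" AzureWebJobsDashboard="<storage-connection-string>"
```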

And that is it for the Setup. Now, we can begin the coding.

Finally! Web Jobs

The Coding

For the coding part we can use whatever code editor we prefer. I will use Visual Studio Code, as it offers great support for coding in C#.

As for the programming language, you may be surprised to know that Azure supports a lot of programming/scripting languages for Web Jobs, so you can choose the one you like the most. For this exercise, I will use C# because I feel like doing so :)

Start the Project

As I want this article to be as multi-platform as possible, I will create a new project using the dotnet CLI. Open a shell prompt, navigate to the directory of your Web Job and type the following command:
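Creating the project is a single command:

```shell
# Scaffold a new console application in the current directory
dotnet new console
```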

Unfortunately, the Web Jobs SDK has no support yet for .NET Core, so we have to make a small modification to the .csproj file in order to target .NET Framework 4.5.x.

The final file should look like this:
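Something along these lines, with the target framework switched to .NET Framework 4.5 (this is a sketch; your generated file may contain extra properties):

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <!-- Changed from netcoreapp* so we can use the Web Jobs SDK -->
    <TargetFramework>net45</TargetFramework>
  </PropertyGroup>
</Project>
```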

It is important to understand that a .NET Core application can be executed by a Web Job; it just cannot make use of the Web Jobs SDK, which is what makes integrating with other Azure services much easier.

Azure Web Jobs SDK

Back at the shell prompt, we need to do three things: add the Web Jobs SDK package, restore the packages and build the project.
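In commands, those three steps are:

```shell
# Add the Web Jobs SDK, then restore and build
dotnet add package Microsoft.Azure.WebJobs
dotnet restore
dotnet build
```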

Code Host

It is worth mentioning that a Web Job project has at least two main components:

  • The Code Host
    • This is the entry point of the application, and this is where we should do all the configuration for our Web Job
  • A Trigger
    • Our project must have at least one trigger, but it can have many others. In this case we will use only one trigger, fired by a new entry in our Queue.

Let’s start with the Code Host, which will be placed in the Main method of our code and will be the starting point of the application.
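For the classic Web Jobs SDK, the host boils down to a few lines (a minimal sketch, assuming the 2.x JobHost API):

```csharp
using Microsoft.Azure.WebJobs;

class Program
{
    static void Main()
    {
        // The JobHost discovers the triggered functions in this assembly
        // and keeps the process alive, listening for new queue messages
        var host = new JobHost(new JobHostConfiguration());
        host.RunAndBlock();
    }
}
```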

That’s it :D

Seriously Web Jobs?

No! I am not kidding. The Host is, as the name implies, a piece of code that will configure your Web Job and control the lifetime of the code. It can be a lot bigger than this, including all sorts of configurations, but that is all we need for a simple Web Job.

The Trigger

The trigger is where the real business logic of our Web Job lives or, at least, where it begins. In our case we will have a single method that takes a QueueTrigger and records the data to a Table. The table will also be represented through a parameter, to which we will save the data.

This is the method code:
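A sketch of that method and its models (the queue and table names match the ones created during setup, and ProcessQueue is the method name shown in the logs later; the CoderMessage/CoderEntity model names are my own placeholders):

```csharp
using System;
using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.WindowsAzure.Storage.Table;

// POCO deserialized from the queue message
public class CoderMessage
{
    public string Name { get; set; }
}

// Inherits from TableEntity so it can be persisted to Azure Tables
public class CoderEntity : TableEntity
{
    public string Name { get; set; }
}

public class Functions
{
    // Fired whenever a new message lands on azure-coder-queue;
    // the out parameter is written to AzureCoderTable
    public static void ProcessQueue(
        [QueueTrigger("azure-coder-queue")] CoderMessage message,
        [Table("AzureCoderTable")] out CoderEntity entry,
        TextWriter log)
    {
        entry = new CoderEntity
        {
            PartitionKey = "coders",
            RowKey = Guid.NewGuid().ToString(),
            Name = message.Name
        };
        log.WriteLine($"Processed message for: {message.Name}");
    }
}
```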

Simple enough, isn’t it?

The method gets three parameters. The first one is the trigger itself, the message in the queue; notice that the attribute of the parameter specifies which queue the message comes from. The second parameter has the “out” keyword, specifying that it is the output of the code, and its attribute specifies the Table to which the data must go. The last parameter is a TextWriter object that the Azure SDK injects so you can log information to the Azure log console.

The model for the Queue is a simple POCO class, nothing special about it. The model for the Table inherits from “TableEntity”, making it suitable to be saved to an Azure Table.

We are almost there! Only one step missing!

Deploy Web Jobs!

The Deploy

Now we have to deploy our Web Job to Azure, which is actually pretty simple. The first thing we want to do is build the project, to make sure that everything is in place. Back to the shell prompt!

We should get a result like this:

Deploy Web Jobs

Navigate to the directory shown in the build output, in my case C:\Code\AzureCoderWebJob\bin\Debug\net45\, and zip the contents of the folder.

Zipped Web Job

Now you have to open the Azure Portal, and login with your account.

In the Dashboard page, look for the App Service you created during the setup and open it. In the App Service Blade, under “Settings”, look for the option “WebJobs” and click on it. The WebJobs Blade will be opened, and at the top you should see an “Add” button; click on it. The Blade to add a new WebJob will be opened, and we will have to fill in some information:

Add Web Job

The name is simply an identifier for the WebJob; the File must be the Zip file we have just created.

The possible “Type”s are “Continuous” and “Triggered”. It may cause a little confusion, but we want to keep the Continuous type.

The scale is what defines whether the WebJob will run in only one instance of the App Service or in all the instances, scaling with the Web App. It is not so relevant for our WebJob, so we will keep the default “Multi-instance”.

After filling in all the fields, click “OK”, wait for the deployment to finish and… that’s it :) You have just deployed your first WebJob created from scratch! But how can we test it?

The Test

Let’s start by opening the “Logs” for our deployed WebJob. After we deploy the WebJob, we will be sent back to the WebJobs list Blade. In this list, select the WebJob and click on “Logs”. The logs will be opened in another tab of the web browser. We should see something like this:

Web Jobs Logs

Click on the title of the Web Job and you should see a small console, with the execution details for the Web Job:

Web Jobs Log

As you can see, our “ProcessQueue” method has been detected!

Now we must add a new item to the azure-coder-queue, using the Azure Storage Explorer…

…and then look at the log again:

Log Web Jobs

Our message has been successfully processed! Now, if we take a look at the contents of the AzureCoderTable, also using Azure Storage Explorer, we should see our item added to the table:

Web Jobs Table

Happy Web Job

And, finally, that is all you need to know to create a very simple WebJob from scratch! Most of what I showed here should work fine on Linux and macOS (apart from the usage of .NET Framework) but, unfortunately, I am not able to test every step on those platforms, so let me know if you try and run into any problems!

I am really sorry for the long post :)

Long Web Job Post

Sources: