Microsoft Build 2018 – Session Recommendations (Part 2)

As promised, I bring a couple more sessions from Microsoft Build 2018!

The first part of this list can be found here.

Be aware that I’m a .NET + C# lover, so these are the sessions I enjoyed the most. If you want a deeper look into the sessions catalog, you can go through this list on Channel 9; all the sessions are there (or will be very soon).

But let’s start with my session recommendations.

.NET Overview & Roadmap

This session was amazing. Presented by Scott Hanselman and Scott Hunter, it showed some impressive numbers about .NET Core and how bright (and fast) the future will be.

They also presented a couple of ideas they are working on, and I was particularly AMAZED by the demo of the microservices template they are playing with for ASP.NET Core 2.2. You will see that the session’s audience reaction was also very positive.


There were also great demos on .NET tooling and Blazor (Blazor is unbelievable so far).

I particularly loved it. Also, Scott Hanselman was joking non-stop in this one; it’s almost half tech, half stand-up comedy 😛

The session:

Pair Programming Made Awesome with Visual Studio Live Share

I tried really hard to get access to VS Live Share during the beta phase, but unfortunately it didn’t work out for me. Now it’s really available (in preview), and it’s awesome!

The whole idea is to be able to share code files during a LIVE session so that developers can collaborate and help solve each other’s problems (or any other idea you can come up with in a collaboration session).

But how is it different from screen sharing? Well, it has a number of benefits, which are enumerated in this session by Jonathan Carter and Jon Chu.

I played with it a little bit with some friends (even friends from Brazil!) and it really is super fun.

Past, Present, and Future of .NET

I did a post not long ago about a session by Richard Campbell on the history of .NET, which was really great. If you watched it, you might know that he is now working on a book about the history of .NET (!)

Now, in this Build 2018 Talk/Session, he is exploring some of this history with some major players from the .NET world! Scott Hunter, Beth Massi and Mads Torgersen.

This is really just a casual talk exploring some of the major events that led to the current state of the platform. It’s really nice to see how things played out to get us where we are right now.

The player is a bit different on this one because it’s straight out of Channel9 and not from YouTube.

Technology Keynote: Microsoft Azure

Also traditional at this point is the Microsoft Azure Technology Keynote by Scott Guthrie.

Again there was a big focus on the AI power that Microsoft is adding to the Azure Cloud every day. This one has a number of really interesting demos!

  • Visual Studio Live Share (Amanda Silver & Jonathan Carter)
    • I’m really super excited about this feature! So much potential.
  • VS App Center + GitHub Integration (Simina Pasat)
  • VSTS and a Lot on CI/CD (Donovan Brown)
    • I consume a lot of CI/CD and DevOps resources at work, but I’m not really a specialist in configuring or maintaining them. Still, this demo made me want to know a lot more about it!
  • Kubernetes + Azure + Visual Studio (Scott Hanselman)
    • Nice demo on how those three tools integrate really well to make handling containers with Kubernetes on Azure as smooth as possible.
    • There was a funny technical problem during this Demo 😛 Scott Guthrie added some value to the presentation.
  • Serverless + IoT (Jeff Hollan)
    • I don’t need to tell you how much Microsoft is investing in this, right? Azure Functions are getting better and better at an amazing speed.
  • CosmosDB (Rimma Nehme)
  • Azure Search + Cognitive Skills (Paige Bailey)
  • Azure DataBricks (Paige Bailey)

As you can see, there were two demos by Paige Bailey, a Cloud Developer Advocate for Azure who has a number of amazing posts on Twitter! You can follow her here.

And here is the session.

Nice wasn’t it? 😀

I still have plenty of videos in my queue, so I will probably make at least one more post with more session recommendations. Let me know if you are enjoying it!

Microsoft Build 2018 – Session Recommendations (Part 1)

As some of you might know, the Microsoft Build 2018 event happened this week (May 7-9), and it is always an awesome event for .NET developers!

As I am making my way through as many sessions as my time allows me, I would like to share some of the best ones (in my not so humble opinion) with all of you!

Vision Keynote

I will start here with the more obvious one, the keynote from Satya Nadella. This year’s Vision Keynote focused a lot on AI in the Cloud (Intelligent Cloud) and AI in IoT (Intelligent Edge), which is pretty interesting, as I still lack a lot of experience in both AI/ML and IoT.

This keynote actually made me very excited about what Microsoft envisions for AI and Machine Learning in the .NET world, and I will try to keep my radar on these subjects.

I am particularly excited about ML.NET and the ever-growing power of Cognitive Services.

As usual, there are some amazing numbers about the industry:

Industry Numbers

Here is the video:

The Future of C#

This session is a must for me by now, as every year Mads Torgersen and Dustin Campbell give an awesome presentation on what is coming next for my favorite language ever. (I love you, Python, but C# comes first.)

I was a bit disconnected from what is about to come in C# 8.0, and this session was amazing to bring me up to speed on where those feature proposals stand.

If you want to know more about C# language feature proposals (or maybe even propose your own!) you can check this repository on GitHub.


MACHINE LEARNING IN DOTNET! Do I need to say anything else? Okay, a bit more, then:

  • It’s cross-platform (Windows, Linux, and macOS)
  • It’s Open Source (check it out here)
  • It’s still in preview

The session is a really short one (20 minutes), and in this short time Rowan Miller walks through an application that can fix a piece of music (!) using machine learning with ML.NET.

It’s a relatively high-level overview of the framework’s capabilities, but it’s also super FUN, and powerful.

I am still waiting for them to upload some sessions that look really amazing.

But what do you think of these sessions so far?

I will post another list of my favorite sessions in a couple of days, after I’ve gone through most of my watch list 😛

Talk to you soon 🙂

VSTS Extension Template (w/ React and Webpack)

It is pretty clear to me nowadays that I always have a hard time trying to focus on a single subject to learn. Every time I start to study something cool, something else, which is usually also pretty cool, gets my attention. This time, VSTS extensions decided to get my attention.

Recently I realized that a feature that is pretty awesome on GitHub is not present in VSTS, and it is not even being tracked on VSTS UserVoice, so I decided to give it a shot and develop it myself as an extension.

I am still working on my extension, but it took me some effort to put the pieces together, so I decided it would be a good idea to share this template with everyone interested in developing a new extension.

The template code can be accessed here. On the project README file you should find all the information you need to start creating your own extension!

Starting a very simple VSTS extension is really easy, but in this template I have already set up some nice tools that should make your life much easier and your development much more productive.

Wait… What is VSTS?

VSTS, which stands for Visual Studio Team Services, is a cloud-based service offered by Microsoft for collaboration on code development.

As simple as it may sound, VSTS is actually a robust service that allows you to manage the entire flow of your application development. You can read a lot more about it on the links at the end of this post.

And, up to a certain limit, it is free! 🙂

Why would I need this template?

You don’t really need it. Starting a new VSTS extension is a really straightforward process, and you can probably start one in 10 minutes. The problem, or at least the hard and repetitive work, is in setting up the tools to make your development process decent and productive. That is where the template can help you.

The whole point of the template is to allow you to focus on your extension code, not having to worry about setting up the tools that you are going to need.

The tools set up in the template are:

  • React
    • For the user interface
    • It is, of course, set up to compile JSX
  • Webpack
    • Webpack is the module bundler that will take care of transpiling all the JSX code (and some other resources) to its final state
  • Jest
    • This is the Unit Testing Framework for our extensions 🙂
    • I hadn’t even heard of Jest until I decided to start this extension! It is created by Facebook and looks really nice and simple
  • TFX-CLI
    • This is the command line tool used by TFS and VSTS to do a lot of smaller tasks, including managing extensions
    • This tool is a simple npm package, so it will be installed with all the rest of the tools
  • Travis CI/CD
    • There is also a Travis CI/CD build partially set up in the project
    • It will require a little bit of manual work to make it work perfectly for your project
  • ESLint
    • To make our code look nice and follow minimum good practices.

The Code Editor

I should mention that the template also contains some configurations that make it much easier to use with Visual Studio Code.

It is not really a requirement, but it will give you some extras, like a couple of suggested extensions and some pre-configured tasks for running builds and tests with the default shortcuts.


Please let me know if you have any feedback on the template, and especially if you face any problems; I will definitely do my best to help.

You can also use the repository issues panel to report any problem or give any suggestions.

That is it for now 🙂 I hope to be back really soon with more content.


The History of .NET by Richard Campbell

Hi everyone,
I am preparing a nice post on Azure Functions for very soon, but today I would like to share some history with you.

Today I found a really amazing video on YouTube about the history of .NET. The video is actually a recording of a talk at NDC { London } 2018, just two months ago, by Richard Campbell (from .NET Rocks).

In the talk, Richard, who is a great storyteller, explains how Microsoft was cornered and had to reinvent itself many times over the last 20-25 years, and how that led to the creation of the .NET Framework and all the technologies entangled with it. He cannot avoid mentioning all the great people involved in this process, from the original designer of J++, Microsoft’s implementation of Java, to the current maintainers of the .NET Framework and the C# language, and even the current CEO.

It also, as it could not be otherwise, covers a lot of the history of the Windows OS, Microsoft’s web browsers, and the rise of Microsoft Azure!

If you code in any Microsoft language and like .NET and a bit of history, I assure you it is worth every second of its roughly 1 hour and 10 minutes.

Here is the video:

So, what do you think? Great, wasn’t it?


Playing with pandas in Jupyter

So, today I would like to talk with you about the fun time I had playing with pandas in Jupyter 🙂

What a nice and misleading title 😀 haha
Let me add some context.


I decided to spend some time playing with data using Python, just to get a feeling for how easy it is, given that Python is the language of choice of many data scientists.

“Why is this guy talking about python in the first place? Isn’t this an Azure/.NET Blog?”

Mainly, yes, but Python has a special place in my heart <3 and is, I could say, my second language. So, whenever I am not learning Azure/.NET, I am most likely learning Python 🙂

What did I do?

I found a data analysis tool called pandas and a web application called Jupyter Notebooks that allows you to visualize data while you play with code.

Let me make it clear that I am NOT an expert in any of the tools that I am going to list below and I was learning most of what I used while creating this post! So, if you see something terribly wrong, go easy on me and enlighten me, please! I would love to learn more from other people about this whole Data Science world.

The tools that I chose to do this are:


pandas

an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools for the Python programming language.

Jupyter Notebooks

an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. Uses include: data cleaning and transformation, numerical simulation, statistical modeling, data visualization, machine learning, and much more.


matplotlib

a Python 2D plotting library which produces publication quality figures in a variety of hardcopy formats and interactive environments across platforms.

Data.gov

The home of the U.S. Government’s open data

I was always told that this is an awesome place to get some nice datasets with data that you can use to generate visualizations, and now I can confirm this.

Before we begin

All the code I wrote for this post can be found on my GitHub. I plan to add more code to this repository, as I am still learning new things.

Also, I created a Twitter account for the blog, just to separate it from my personal account. You can find it here: @AzureCoderBlog. My personal account is @lucas_lra.

Let’s begin.

Setting up the Environment

The first thing we need to do is set up our environment with all the tools.

If you are on Windows, you can use the CreateEnvironment.bat script that is available as part of the source code. This script will create the entire environment for you. But if you don’t want to miss the fun, just follow the step-by-step below.

    1. Install Python 3
      • If you don’t know anything about python, just download the installer from this page.
      • You are going to LOVE it.
    2. Clone the GitHub repository
    3. Navigate to the project folder
    4. Create a Python Virtual Environment
    5. Activate your Virtual Environment
    6. Install the required packages (this step may take a while and needs an internet connection)
    7. Finally, start Jupyter Notebooks!

You should now see a screen like this:

Jupyter Notebook

As you can see, this is a file explorer that shows everything in the folder you are running from, and what we want to do now is open the notebook: World-Population-by-Continent-[1980-2010].ipynb

What you should be seeing now is some kind of in-browser text editor filled with text and Python code:

Population Notebook

I won’t go into the specifics of how to navigate a Jupyter Notebook in this post, but you can learn everything you are going to need in the documentation.

To execute each block of our notebook, we use the shortcut SHIFT+ENTER, which runs the current block and jumps to the next.

While I tried to make the notebook as self-explanatory as possible, I would like to go over the blocks of code and try to explain what is happening.

We start by importing all the packages we are going to need for the execution of our script.

As mentioned before, pandas is what we are going to use for the data analysis, matplotlib is responsible for the graph generation, and itertools is a standard Python package used to do lots of awesome stuff with iterable types.
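A minimal sketch of that imports cell (names as used in the rest of the post):

```python
import itertools                 # stdlib helpers for iterables (we use cycle() later)

import matplotlib
matplotlib.use("Agg")            # non-interactive backend so this also runs headless
import matplotlib.pyplot as plt  # graph generation
import pandas as pd              # data analysis
```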

Next we are going to import our dataset.

Really simple, isn’t it? pandas has lots of these methods for importing different data formats, like pd.read_excel() or pd.read_json(). This CSV file, as I mentioned before, was obtained from the Data.gov website.
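As an illustration, here is roughly what that import looks like; the real notebook reads the Data.gov CSV from disk, so the inline file contents below (shape and values) are made up:

```python
import io

import pandas as pd

# Stand-in for pd.read_csv("world-population.csv") with a tiny inline sample.
csv_data = """\
,1980,1990,2000,2010
Africa,477,630,811,1031
Asia,2626,3202,3714,4165
World,4444,5310,6127,6916
"""
df = pd.read_csv(io.StringIO(csv_data))
print(df.shape)  # (3, 5): three rows, a label column plus four year columns
```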

The next step is to try to make the data a little better. I started by naming the column with the names of the places.

This was tricky for me at first sight, but what is happening here is that I copy the titles of all the columns of the dataset into a separate list object, then rename the first item of that list, and finally apply the entire list as the new set of column names for the pandas.DataFrame. Looks weird, but works like a charm.
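That column-renaming trick, sketched on a toy DataFrame (the column names here are assumptions):

```python
import io

import pandas as pd

df = pd.read_csv(io.StringIO(",1980,1990\nAfrica,477,630\nAsia,2626,3202\n"))

columns = list(df.columns)  # copy all the column titles into a plain list...
columns[0] = "Region"       # ...rename the first one...
df.columns = columns        # ...and apply the list back as the new column names
print(df.columns.tolist())  # ['Region', '1980', '1990']
```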

The next problem we need to address is that the population data in the DataFrame is recognized as str! We need this data as numeric types if we want to do any operations with it, so let’s fix that.

So here we are basically iterating through the DataFrame and using the pandas.to_numeric() function to convert the values. Also, we are using the errors='coerce' option so that values that cannot be parsed become NaN instead of raising an error.
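A sketch of that conversion on a single column (the column values are made up):

```python
import pandas as pd

raw = pd.Series(["477", "2626", "N/A"])        # population figures arrive as str
numeric = pd.to_numeric(raw, errors="coerce")  # unparsable entries become NaN
print(numeric.dtype)              # float64
print(int(numeric.isna().sum()))  # 1 (the "N/A" entry)
```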

Great! Now we have all the data in the DataFrame prepared. So I started thinking: what if I wanted to do some data analysis based on the type of place (is it a country? a continent?)? I realized that I would need to add one extra piece of data to the DataFrame, and this is how I did it.

I decided that I just wanted to tag the continents in the DataFrame, so every other row is tagged with a simple - (dash), which we will ignore later. To be quite honest, I don’t like this approach but, so far, I don’t know a better one.

Next! Let’s effectively filter only the continent rows out of the dataset.
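Tagging and then filtering can be sketched like this (the continent list and the map-based tagging are my own take, not necessarily the notebook’s exact code):

```python
import pandas as pd

df = pd.DataFrame({"Region": ["Africa", "Asia", "Brazil", "World"],
                   "1980": [477, 2626, 122, 4444]})

continents = {"Africa", "Asia", "Europe", "North America", "Oceania", "South America"}

# Tag continents; everything else gets a "-" that we ignore later.
df["Region Type"] = df["Region"].map(lambda r: "Continent" if r in continents else "-")

# Keep only the continent rows.
continents_df = df[df["Region Type"] == "Continent"]
print(continents_df["Region"].tolist())  # ['Africa', 'Asia']
```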

Ok! Now let’s pause and have a look at the state of our DataFrame:

Split DataFrame

Looking good, isn’t it? We only have the five rows for the continents, we have our Region Type column correctly filled, and the columns are all there. Now what? First, let’s set up two small lists of markers and colors that we are going to use in our graph.

Those are all codes for markers and colors that matplotlib can understand; you can find more documentation about them here. Also, we are using itertools.cycle() to generate these lists. Why? Because this object type allows us to iterate through it any number of times: after reaching the last item, it always goes back to the first one, which means we can have any number of data entries in our DataFrame and still have enough markers and colors.
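For example, a cycle of five markers comfortably covers seven series:

```python
import itertools

# Single-character marker and color codes that matplotlib understands.
markers = itertools.cycle(["o", "s", "^", "D", "*"])
colors = itertools.cycle(["b", "g", "r", "c", "m"])

# After the last item, cycle() wraps back to the first one.
seven_markers = [next(markers) for _ in range(7)]
print(seven_markers)  # ['o', 's', '^', 'D', '*', 'o', 's']
```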

And with that, our preparations are done. Let’s start setting up our graph by configuring a Figure().

Here we are configuring our font size globally for matplotlib, which will allow us to use relative sizes later. We are also creating the Figure(), which will be the canvas for our plotting, and the actual subplot, which will contain our visualization.
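A sketch of that setup (the 18-point base size and the figure dimensions are guesses, not the notebook’s exact values):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the script runs anywhere
import matplotlib.pyplot as plt

plt.rcParams.update({"font.size": 18})  # global base font size; relative sizes build on it
fig = plt.figure(figsize=(16, 9))       # the Figure is the canvas we plot on
ax = fig.add_subplot(1, 1, 1)           # the subplot that will hold the visualization
```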

Now, let’s effectively plot our graph onto the Figure.

I’ll try to explain everything on this not so awesome looking code.

  • Lines 2-3
    • Here we are just converting our data to lists for easier handling
  • Lines 6;33 (Shame on me)
    • I couldn’t find a nice way of fitting all the axes within the graph width, so my trick was to add two fake, empty axes just to readjust the graph width and make it look better.
    • I am REALLY sorry for this one, it doesn’t look good AT ALL. I’ll find a better solution next time 😛
  • Line 7
    • Our for loop will iterate through all the columns that represent a year in the DataFrame
  • Lines 8-14
    • This is where we are adding our data ticks to the graph. I should say that this is the most important part of the process.
    • The first two parameters of the call are the ones that define our data tick; the rest is configuration, which you can learn more about here.
  • Lines 16-33
    • This is where we set some annotations (in this case, text) to our data ticks, ensuring that we can really understand the plotted data
  • Lines 36-37
    • Here we enable the Grid and the legend (upper right corner) for our Graph
  • Lines 40-41
    • Here we add some style to the labels around the graph, even rotating them by 30 degrees.
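Since the notebook cell itself isn’t reproduced here, below is a hedged sketch of that plotting loop, with made-up region names and values; the column names, figure size, and "M" label format are all assumptions:

```python
import itertools

import matplotlib
matplotlib.use("Agg")  # render in memory; no display required
import matplotlib.pyplot as plt
import pandas as pd

# Tiny stand-in for the continents DataFrame (values are made up).
df = pd.DataFrame({"Region": ["Africa", "Asia"],
                   "1980": [477, 2626],
                   "1990": [630, 3202]})
years = ["1980", "1990"]

markers = itertools.cycle(["o", "s", "^", "D", "*"])
colors = itertools.cycle(["b", "g", "r", "c", "m"])

fig, ax = plt.subplots(figsize=(12, 6))
for _, row in df.iterrows():
    values = [row[y] for y in years]
    # Plot one dashed series per region, cycling markers and colors...
    ax.plot(years, values, label=row["Region"], linestyle="--",
            marker=next(markers), color=next(colors))
    # ...and annotate each data tick so the plotted values are readable.
    for x, y in zip(years, values):
        ax.annotate(f"{y}M", (x, y), textcoords="offset points", xytext=(0, 8))

ax.grid(True)                  # enable the grid
ax.legend(loc="upper right")   # legend in the upper right corner
plt.setp(ax.get_xticklabels(), rotation=30)  # rotate the year labels 30 degrees
fig.canvas.draw()              # in the notebook, this is where the graph appears
```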

And this is it! That is the entire code. And here is our final result:

Final Graph

HA! It worked! Eureka!

And that is actually everything I have to show for my efforts on Data Science, so far 😀

I can’t stress enough that I am NOT a specialist in any data-science-related technology, so please don’t take anything from my code as a best practice.

I also can’t stress enough that I do love Python, and I bet you are going to like it too, if I don’t ruin it for you with my ugly code.

And that is all for today! Till next time!


Creating an OData v4 API with ASP.NET Core 2.0

Hallo Mensen 😀 Let’s talk OData, shall we?

Over the last few years my work has revolved a lot around REST APIs. I think most of us will agree that REST APIs are a really good way of obtaining data from a server without caring too much about the details of how to get access to that data.

You call a URL with the proper parameters (including auth or not), headers, and HTTP verb, and you get some data back. Easy enough, right?

The negative side is that these APIs are either very rigid in the way they are implemented, meaning that you can only get the data in the exact shape defined by the server, or very verbose in their implementation, meaning that you will have a lot of code to maintain on the server side to make the API flexible.

Then what?… Use OData!

What is the Open Data Protocol (OData)?

OData Logo
OData Logo

OData, or the Open Data Protocol, is a set of ISO-approved standards for building and consuming RESTful APIs. But what does that mean in practice?

It means that OData is not really an implementation that you simply use, but rather a specification describing how you must implement it yourself. You can read a lot more about OData here.

Why would I want to use OData?

There are a number of benefits to implementing the OData standards, from an easier learning path for the consumers of your API to the fact that OData is easily readable by machines. In this post I want to talk about the flexibility that implementing OData gives to your API through the OData URL Conventions.

Using URL Conventions you can expose a much cleaner and generic API and let the consumer specify their needs through the call.

OData URL Conventions

The OData URL Conventions are a set of commands that you can pass to the API through the HTTP call’s query string.

An OData URL is typically composed of three parts: the service root URL, the resource path, and the query options.

OData Url Format

  • The Service Root URL is the root address of the API.
  • The Resource Path identifies exactly which resource you are trying to reach.
  • The Query Options define the format in which you need the data to be delivered.

How does that sound? Simple enough, right? But also, with the proper options, extremely powerful. Here is a list of possible commands within the Query Options block of the call:

  • $select: Allows you to define a subset of properties to return from that Resource.
  • $expand: Allows you to include data from a related resource to your query results.
  • $orderby: Not surprisingly, allows you to define the ordering of the returned dataset.
  • $top: Allows you to select the top X results of the query.
  • $skip: Allows you to skip X results of the query.
  • $count: Allows you to get a count of items that would result from that query.
  • $search: Allows for a free-text search on that particular resource.
  • $format: Allows you to define the format of the returned data for some query types.
  • $filter: Allows you to define a filter for your dataset.

As you can see, many of the commands are pretty similar to what you have in most of the common query languages.
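To make the three-part URL structure concrete, here is a small Python sketch that assembles a query URL from these options; the service root and resource name are hypothetical, not tied to any real service:

```python
from urllib.parse import quote, urlencode

service_root = "http://localhost:5000/odata"  # hypothetical service root URL
resource_path = "Book"                        # hypothetical resource path

# Query options: two properties, ordered by price, first three results only.
options = {
    "$select": "Name,Price",
    "$orderby": "Price desc",
    "$top": "3",
}

# Keep '$' and ',' unescaped so the URL stays readable.
query = urlencode(options, safe="$,", quote_via=quote)
url = f"{service_root}/{resource_path}?{query}"
print(url)  # http://localhost:5000/odata/Book?$select=Name,Price&$orderby=Price%20desc&$top=3
```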

I will go into a bit more detail on each of those options in the code sample.

OData and ASP.NET

ASP.NET Core still doesn’t have a stable library implementing the OData protocol! But worry not, as Microsoft has been working on it for some time, and right now there is a really promising beta version on NuGet. You can find it here.

The .NET Framework has a really good library implementing OData, and it is quite stable by now. You can find it here.

Enough with the theory; how can we implement this query protocol in an ASP.NET Core application?


Implementing your API

Let’s start by creating a simple ASP.NET Core Web API application in Visual Studio, along with our models.

Also, let’s create our DbContext…

…and configure our Services.

Good! Our plumbing is set, now we are going to seed some initial data to our database.

And now we call our seeder on app startup.

We must not forget to add our migration.

And last, but not least, let’s implement the simplest API possible on our 3 entities.

Done. Now we can test it using Postman:


Wow! It certainly looks like a nice API, doesn’t it? What is the problem with it? Why would I ever want to add OData to it?

Well, there are two fundamental problems with this approach to our API: the payload size and the querying of the data from the database.

Payload Size

The payload format is completely defined on the API/server side of the application, and the client cannot define which data it really needs to receive.

This can be made more flexible by adding complexity to the code (more parameters? more calls?), but that is not what we want, right?

In the most common scenarios, the client will simply have to ignore a lot of data that it doesn’t care about.

Look at the result of our query for books below and tell me: what should we do if we only want the name of the first book on the list?

We have no option here other than accepting all this data and filtering what we need on the client side.

Querying the Data

For much the same reason, all the queries to the database have to be done in a very rigid way, not allowing for smaller queries whenever possible.

For the same request as above, where we just asked for a list of books, let’s have a look at what was sent to the database:

All this huge query just to get the name of one book. That doesn’t sound good, right?

Let’s make it better with OData 🙂

Changing our API to OData

The good news is that we can keep much of the same structure for our OData API; we just need a few configuration changes. Let’s start by installing the package.

As you can see, the OData package for .NET Core is still in beta, as opposed to the .NET Framework version of the package, which has been stable for a long time. I have high hopes that this package will be out of beta in no time!

Let’s configure our entities to understand the OData commands.

I commented on the purpose of each call in this class; pay close attention to it, as these calls are paramount for the full use of the URL Conventions. Now let’s wire our OData configuration into the rest of the API.

All good! Finally, we must adjust our three controllers to accept OData URLs. Things you should notice were changed in the controllers:

  • All the controllers were renamed to the singular form. That is only necessary due to our configuration in the ModelBuilder; they can be configured as plural.
  • The return types were all changed to IQueryable<T>.
  • The .Include() calls were removed, as they are no longer necessary; the OData package takes care of this for you.
  • We no longer inherit from Controller but from ODataController.
  • We have a new attribute on the API calls: [EnableQuery]

And that is it! Our API is ready to be used with OData URL Conventions. Let’s try it?

New API and Results

You can play with the new API format on Postman:


The original call to get books would look like this:


The new call will look like this


First of all, let’s try to get a list of books and look at the results:

MUCH cleaner, right? Let’s make it even smaller, as we just want the name of the first book on the list:


And let’s have a look at the database query for this call:

Yes! A much more efficient call! But wait… we just need the NAME of the book, so why don’t we make it more specific?


And that is an awesomely small payload! And the query is also more efficient

What if you want details about a specific author and all their related books?

http://localhost:5000/odata/Author?$expand=Books&$filter=Name eq 'J.K. Rowling'

Amazing, isn’t it? That can really increase the quality of our APIs.

As a last piece of information, let’s not forget that OData is designed to be readable by machines! So we get a couple of out-of-the-box URLs with documentation for our API:


Cool, isn’t it?

What’s Next?

Well, now that you know how simple it is to implement the OData protocol in .NET, I would recommend spending some time getting familiar with the protocol itself; you can find all the guidance you need here.

Also, if you intend to use the protocol with .NET Core, I suggest you keep a close eye on the NuGet package page and also the feature request on GitHub.

Source Code

You can find the whole source code for this solution on my GitHub.

And that is all for today folks 🙂

I hope you enjoyed it, and don’t hesitate to use the comments section if you were left with any questions.


.NET Core 2.1 is coming! (and I will be back)

Hallo Mensen 🙂
I know I’ve been away from my blog for a long time, and I won’t try to make excuses for it, but I want to make it clear that I intend to start writing again sometime this quarter!

Today I just wanted to share with you two new videos from Channel 9 with some cool demos of the new features in .NET Core 2.1. In particular, I would advise you to pay close attention to the improvements to HttpClient and the Entity Framework support for Cosmos DB. Enjoy!

What is new in .NET Core 2.1?

The Demos!!

One last thing to mention: pay close attention to the benchmarks of the build process for .NET Core 2.1, it is amazing!

Incremental Build Improvements for .NET Core 2.1 SDK

Really excited for the future of .NET Core 😀

.NET Core 2.1 should have its first previews released this February, and the RTM version is planned for this summer!

Source: .NET Core 2.1 Roadmap | .NET Blog


Announcing .NET Core 2.0 | .NET Blog

This post is a reblog from the Official .NET Blog on MSDN.

.NET Core 2.0 is available today as a final release. You can start developing with it at the command line, in your favorite text editor, in Visual Studio 2017 15.3, Visual Studio Code or Visual Studio for Mac. It is ready for production workloads, on your own hardware or your favorite cloud, like Microsoft Azure.

We are also releasing ASP.NET Core 2.0 and Entity Framework Core 2.0. Read the ASP.NET Core 2.0 and the Entity Framework Core 2.0 announcements for details. You can also watch the launch video on Channel 9 to see many of the new features in action.

The .NET Standard 2.0 spec is complete, finalized at the same time as .NET Core 2.0. .NET Standard is a key effort to improve code sharing and to make the APIs available in each .NET implementation more consistent. .NET Standard 2.0 more than doubles the set of APIs available for your projects.

.NET Core 2.0 has been deployed to Azure Web Apps. It is available today in a small number of regions and will expand globally quickly.

.NET Core 2.0 includes major improvements that make .NET Core easier to use and much more capable as a platform. The following improvements are the biggest ones and others are described in the body of this post. Please share feedback and any issues you encounter at dotnet/core #812.



Visual Studio

  • Live Unit Testing supports .NET Core
  • Code navigation improvements
  • C# Azure Functions support in the box
  • CI/CD support for containers

For Visual Studio users: You need to update to the latest versions of Visual Studio to use .NET Core 2.0. You will need to install the .NET Core 2.0 SDK separately for this update.


On behalf of the entire team, I want to express our gratitude for all the direct contributions that we received for .NET Core 2.0. Thanks! Some of the most prolific contributors for .NET Core 2.0 are from companies investing in .NET Core, other than Microsoft. Thanks to Samsung and Qualcomm for your contributions to .NET Core.

The .NET Core team shipped two .NET Core 2.0 previews (preview 1 and preview 2) leading up to today’s release. Thanks to everyone who tried out those releases and gave us feedback.

Using .NET Core 2.0

You can get started with .NET Core 2.0 in just a few minutes, on Windows, macOS or Linux.

You first need to install the .NET Core SDK 2.0.

You can create .NET Core 2.0 apps on the command line or in Visual Studio.

Creating new projects is easy. There are templates you can use in Visual Studio 2017. You can also create a new application at the command line with dotnet new, as you can see in the following example.
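For example (illustrative commands; `console` is one of the template short names shipped with the 2.0 SDK):

```shell
# Scaffold and run a new console app targeting .NET Core 2.0
dotnet new console -o HelloCore
cd HelloCore
dotnet run
```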

You can also upgrade an existing application to .NET Core 2.0. In Visual Studio, you can change the target framework of an application to .NET Core 2.0.

If you are working with Visual Studio Code or another text editor, you will need to update the target framework to netcoreapp2.0.
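In the project file, that means changing the TargetFramework element. A minimal sketch of an upgraded csproj:

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- Changed from netcoreapp1.1 to netcoreapp2.0 -->
    <TargetFramework>netcoreapp2.0</TargetFramework>
  </PropertyGroup>
</Project>
```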

It is not as critical to update libraries to .NET Standard 2.0. In general, libraries should target .NET Standard unless they require APIs only in .NET Core. If you do want to update libraries, you can do it the same way, either in Visual Studio or directly in the project file, as you can see with the following project file segment that targets .NET Standard 2.0.
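A minimal sketch of such a library project file:

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
</Project>
```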

You can read more in-depth instructions in the Migrating from ASP.NET Core 1.x to ASP.NET Core 2.0 document.

Relationship to .NET Core 1.0 and 1.1 Apps

You can install .NET Core 2.0 on machines with .NET Core 1.0 and 1.1. Your 1.0 and 1.1 applications will continue to use the 1.0 and 1.1 runtimes, respectively. They will not roll forward to the 2.0 runtime unless you explicitly update your apps to do so.

By default, the latest SDK is always used. After installing the .NET Core 2.0 SDK, you will use it for all projects, including 1.0 and 1.1 projects. As stated above, 1.0 and 1.1 projects will still use the 1.0 and 1.1 runtimes, respectively.

You can configure a directory (all the way up to a whole drive) to use a specific SDK by creating a global.json file that specifies the .NET Core SDK version to use. All dotnet invocations in or under that directory will use that version of the SDK. If you do that, make sure you have that version installed.
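The file is tiny; a sketch pinning the SDK to 2.0.0:

```json
{
  "sdk": {
    "version": "2.0.0"
  }
}
```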

.NET Core Runtime Improvements

The .NET Core 2.0 Runtime has the following improvements.

Performance Improvements

There are many performance improvements in .NET Core 2.0. The team published a few posts describing the improvements to the .NET Core Runtime in detail.

.NET Core 2.0 Implements .NET Standard 2.0

The .NET Standard 2.0 spec has been finalized at the same time as .NET Core 2.0.

We have more than doubled the set of available APIs in .NET Standard from 13k in .NET Standard 1.6 to 32k in .NET Standard 2.0. Most of the added APIs are .NET Framework APIs. These additions make it much easier to port existing code to .NET Standard, and, by extension, to any .NET implementation of .NET Standard, such as .NET Core 2.0 and the upcoming version of Universal Windows Platform (UWP).

.NET Core 2.0 implements the .NET Standard 2.0 spec: all 32k APIs that the spec defines.

You can see a diff between .NET Core 2.0 and .NET Standard 2.0 to understand the set of APIs that .NET Core 2.0 provides beyond the set required by the .NET Standard 2.0 spec.

Much easier to target Linux as a single operating system

.NET Core 2.0 treats Linux as a single operating system. There is now a single Linux build (per chip architecture) that works on all Linux distros that we’ve tested. Our support so far is specific to glibc-based distros and more specifically Debian- and Red Hat-based Linux distros.

There are other Linux distros that we would like to support, like those that use the musl C standard library, such as Alpine. Alpine will be supported in a later release.

Please tell us if the .NET Core 2.0 Linux build doesn’t work well on your favorite Linux distro.

Similar improvements have been made for Windows and macOS. You can now publish for the following “runtimes”.

  • linux-x64
  • linux-arm
  • win-x64
  • win-x86
  • osx-x64
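For example, publishing a self-contained app for 64-bit Linux uses the corresponding runtime identifier (run from a hypothetical project directory):

```shell
dotnet publish -c Release -r linux-x64
```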

Linux ARM32 is now supported, in Preview

The .NET Core team is now producing Linux ARM32 builds for .NET Core 2.0+. These builds are great for using on Raspberry Pi. These builds are not yet supported by Microsoft and have preview status.

The team is producing Runtime and not SDK builds for .NET Core. As a result, you need to build your applications on another operating system and then copy to a Raspberry Pi (or similar device) to run.

There are two good sources of .NET Core ARM32 samples that you can use to get started:

Globalization Invariant Mode

.NET Core 2.0 includes a new opt-in globalization mode that provides basic globalization-related functionality that is uniform across operating systems and languages. The benefit of this new mode is its uniformity, distribution size, and the absence of any globalization dependencies.

See .NET Core Globalization Invariant Mode to learn more about this feature, and decide whether the new mode is a good choice for your app or if it breaks its functionality.

.NET Core SDK Improvements

The .NET Core SDK 2.0 has the following improvements.

dotnet restore is implicit for commands that require it

The dotnet restore command has been a required set of keystrokes with .NET Core to date. The command installs required project dependencies and performs some other tasks. It's easy to forget to type it, and the error messages that tell you that you need to type it are not always helpful. It is now implicitly called on your behalf by commands like run, build and publish.

The following example workflow demonstrates the absence of a required dotnet restore command:
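A minimal workflow sketch; note that no explicit restore step appears anywhere:

```shell
dotnet new console -o RestoreDemo
cd RestoreDemo
dotnet run      # restore runs implicitly before the build
dotnet publish  # also restores implicitly if needed
```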

Reference .NET Framework libraries from .NET Standard

You can now reference .NET Framework libraries from .NET Standard libraries using Visual Studio 2017 15.3. This feature helps you migrate .NET Framework code to .NET Standard or .NET Core over time (start with binaries and then move to source). It is also useful when the source code of a .NET Framework library is no longer accessible or has been lost, enabling it to still be used in new scenarios.

We expect that this feature will be used most commonly from .NET Standard libraries. It also works for .NET Core apps and libraries. They can depend on .NET Framework libraries, too.

The supported scenario is referencing a .NET Framework library that happens to only use types within the .NET Standard API set. Also, it is only supported for libraries that target .NET Framework 4.6.1 or earlier (even .NET Framework 1.0 is fine). If the .NET Framework library you reference relies on WPF, the library will not work (or at least not in all cases). You can use libraries that depend on additional APIs, but only for code paths that avoid those APIs. In that case, you will need to invest significantly in testing.

You can see this feature in use in the following images.

The call stack for this app makes the dependency from .NET Core to .NET Standard to .NET Framework more obvious.

.NET Standard NuGet Packages no longer have required dependencies

.NET Standard NuGet packages no longer have any required dependencies if they target .NET Standard 2.0 or later. The .NET Standard dependency is now provided by the .NET Core SDK. It isn’t necessary as a NuGet artifact.

The following is an example nuspec (recipe for a NuGet package) targeting .NET Standard 2.0.
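A sketch of what that looks like (package id, version and metadata are hypothetical); note the absence of a dependencies section:

```xml
<?xml version="1.0" encoding="utf-8"?>
<package xmlns="http://schemas.microsoft.com/packaging/2012/06/nuspec.xsd">
  <metadata>
    <id>MyLibrary</id>
    <version>1.0.0</version>
    <authors>example</authors>
    <description>Sample library targeting .NET Standard 2.0.</description>
    <!-- No NETStandard.Library dependency needed for netstandard2.0 -->
  </metadata>
</package>
```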

The following is an example nuspec (recipe for a NuGet package) targeting .NET Standard 1.4.
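By contrast, a sketch of a pre-2.0 target (same hypothetical package), which still carries the NETStandard.Library dependency:

```xml
<?xml version="1.0" encoding="utf-8"?>
<package xmlns="http://schemas.microsoft.com/packaging/2012/06/nuspec.xsd">
  <metadata>
    <id>MyLibrary</id>
    <version>1.0.0</version>
    <authors>example</authors>
    <description>Sample library targeting .NET Standard 1.4.</description>
    <dependencies>
      <group targetFramework=".NETStandard1.4">
        <!-- Pre-2.0 targets still need the NETStandard.Library metapackage -->
        <dependency id="NETStandard.Library" version="1.6.1" />
      </group>
    </dependencies>
  </metadata>
</package>
```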

Visual Studio 2017 version 15.3 updates

Side-by-Side SDKs

Visual Studio now has the ability to recognize the install of an updated .NET Core SDK and light up the corresponding tooling within Visual Studio. With 15.3, Visual Studio provides side-by-side support for .NET Core SDKs and defaults to using the highest version installed on the machine when creating new projects, while giving you the flexibility to specify and use older versions if needed via a global.json file. Thus, a single version of Visual Studio can now build projects that target different versions of .NET Core.

Support for Visual Basic

In addition to supporting C# and F#, 15.3 now also supports using Visual Basic to develop .NET Core apps. Our aim with Visual Basic this release was to enable .NET Standard 2.0 class libraries. This means Visual Basic only offers templates for class libraries and console apps at this time, while C# and F# also include templates for ASP.NET Core 2.0 apps. Keep an eye on this blog for updates.

Live Unit Testing Support

Live Unit Testing (LUT) is a new feature we introduced in Visual Studio 2017 Enterprise edition, and with 15.3 it now supports .NET Core. Users who are passionate about Test Driven Development (TDD) will certainly love this new addition. Starting LUT is as simple as turning it on from the menu bar: Test -> Live Unit Testing -> Start.

When you enable LUT, you will get unit test coverage and pass/fail feedback live in the code editor as you type. Notice the green ticks and red x's shown in the code editor in the image below.


IDE Productivity enhancements

Visual Studio 2017 15.3 has several productivity enhancements to help you write better code faster. We now support .NET naming conventions and formatting rules in EditorConfig allowing your team to enforce and configure almost any coding convention for your codebase.
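A small .editorconfig sketch of the kind of rules this enables (the option names are real .NET code-style options; the chosen values are just examples):

```ini
root = true

[*.cs]
# Sort System.* usings before other usings
dotnet_sort_system_directives_first = true

# Prefer 'var' when the type is apparent, surfaced as a suggestion
csharp_style_var_when_type_is_apparent = true:suggestion

# Put opening braces on their own line (Allman style)
csharp_new_line_before_open_brace = all
```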

With regards to navigation improvements, we've added support for camelCase matching in Go To All (Ctrl+T), so that you can navigate to any file/type/member/symbol declaration just by typing its capital letters (e.g., "bh" for "BusHelpers.cs"). You'll also notice suggested variable names (Fig. 2) as you are typing (which will adhere to any code style configured in your team's EditorConfig).

We’ve added a handful of new refactorings including:

  • Resolve merge conflict
  • Add parameter (from callsite)
  • Generate overrides
  • Add named argument
  • Add null-check for parameters
  • Insert digit-separators into literals
  • Change base for numeric literals (e.g., hex to binary)
  • Convert if-to-switch
  • Remove unused variable

Project System simplifications

We further simplified the .csproj project file by removing some unnecessary elements that were confusing to users and wherever possible we now derive them implicitly. Simplification trickles down to Solution Explorer view as well. Nodes in Solution Explorer are now neatly organized into categories within the Dependencies node, like NuGet, project-to-project references, SDK, etc.

Another enhancement made to the .NET Core project system is that it is now more efficient when it comes to builds. If nothing changed and the project appears to be up to date since the last build, then it won’t waste build cycles.


Several important improvements were made to .NET Core support for Docker during the 2.0 project.

Support and Lifecycle

.NET Core 2.0 is a new release, supported by Microsoft. You can start using it immediately for development and production.

Microsoft has two support levels: Long Term Support (LTS) and Current release. LTS releases have three years of support and Current releases are shorter, typically around a year, but potentially shorter. .NET Core 1.0 and 1.1 are LTS releases. You can read more about these support levels in the .NET Support and Versioning post. In that post, “Current” releases are referred to as “Fast Track Support”.

.NET Core 2.0 is a Current release. We are waiting to get your feedback on quality and reliability before switching to LTS support. In general, we want to make sure that LTS releases are at the stage where we only need to provide security fixes for them. Once you deploy an app with an LTS release, you shouldn’t have to update it much, at least not due to platform updates.

.NET Core 1.1

.NET Core 1.1 has transitioned to LTS Support, adopting the same LTS timeframe as .NET Core 1.0.

.NET Core 1.0 and 1.1 will both go out of support on June 27, 2019 or 12 months after the .NET Core 2.0 LTS release, whichever is shorter.

We recommend that all 1.0 customers move to 1.1, if not to 2.0. .NET Core 1.1 has important usability fixes in it that make for a significantly better development experience than 1.0.

Red Hat

Red Hat also provides full support for .NET Core on RHEL and will be providing a distribution of .NET Core 2.0 very soon. We’re excited to see our partners like Red Hat follow our release so quickly. For more information head to RedHatLoves.NET.


We're very excited about this significant milestone for .NET Core. Not only is the 2.0 release our fastest version of .NET ever, .NET Standard 2.0 delivers on the promise of .NET everywhere. In conjunction with the Visual Studio family, .NET Core provides the most productive development platform for developers using macOS or Linux as well as Windows. We encourage you to download the latest .NET Core SDK and start working with this new version of .NET Core.

Please share feedback and any issues you encounter at dotnet/core #812.

Watch the launch video for .NET Core 2.0 to see this new release in action.

Original Post

[ASP.NET Core MVC Pipeline] Controller Initialization – Action Selection

So, we just finished looking at the Routing Middleware, and that also completes our walk-through of the Middleware Pipeline! What happens next? Now we enter the realm of Controller Initialization, and the first thing we need to do is Action Selection. Let's revisit our MVC Core Pipeline flow.

The ASP.NET Core MVC Pipeline

We will now focus on the green part of our pipeline, the Controller Initialization.

Controller Initialization

The objective of the process in the green box is:

  1. Find the most suitable Action in the application for that request
  2. Call the Controller Factory informing the required Action
  3. Get an instance of a Controller from the Controller Factory

That is all it does, and it is a very important job 🙂 But, how can we interfere with this process? The first thing we can do is add some rules to make the Action Selection behave as we want it to.

The easiest way

You don't have to customize everything

It’s true! You probably already used some alternatives to bend the process of Action Selection towards your objectives. The most common method is to use the Verb Attributes. Let’s imagine you want to create 4 actions in a single controller and each of them will respond to a different HTTP Verb:
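A sketch of what such a controller might look like (controller and model names are hypothetical; each method is meant for a different verb, but nothing says so yet):

```csharp
public class HomeController : Controller
{
    // All four methods claim the same /Home/Index route —
    // nothing tells the framework which HTTP verb each one handles.

    public IActionResult Index() => View();                       // meant for GET

    public IActionResult Index(ProductModel model)                // meant for POST
    {
        // create the product...
        return View();
    }

    public IActionResult Index(int id, ProductModel model)        // meant for PUT
    {
        // update the product...
        return View();
    }

    public IActionResult Index(int id)                            // meant for DELETE
    {
        // delete the product...
        return RedirectToAction("Index");
    }
}
```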

What is wrong with this code? All the methods claim the same action path on the route, and the Action Selector has no good way to define which one to call! What will happen when we try to access the /index route?

Ambiguous Exception

Ambiguous Exception! And that happens, as you can see in the highlighted line, because the framework does not have enough information to decide which one is the best candidate action. That is where we can use a MethodSelectorAttribute:
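The same sketch with verb attributes applied (hypothetical controller and model names), which is all the disambiguation the framework needs:

```csharp
public class HomeController : Controller
{
    [HttpGet]
    public IActionResult Index() => View();

    [HttpPost]
    public IActionResult Index(ProductModel model)
    {
        // create the product...
        return View();
    }

    [HttpPut]
    public IActionResult Index(int id, ProductModel model)
    {
        // update the product...
        return View();
    }

    [HttpDelete]
    public IActionResult Index(int id)
    {
        // delete the product...
        return RedirectToAction("Index");
    }
}
```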

Now the framework will know the best action to choose based on the HTTP Verb of the Request 🙂

That code exemplifies the kind of intervention that we can do in the process of choosing the most fitting Action Method. But, what if we want to change this behavior in a way that is specific to some kind of logic that we envisioned? That is when you should think about adding an Action Constraint.

What is an Action Constraint?

An Action Constraint is a way to tell the Action Selection process that some method is a better candidate than the other options for a given request. It is really that simple. An action constraint is a class that implements the following interface:
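The interface, as defined in Microsoft.AspNetCore.Mvc.ActionConstraints:

```csharp
namespace Microsoft.AspNetCore.Mvc.ActionConstraints
{
    public interface IActionConstraint : IActionConstraintMetadata
    {
        // Constraints with a lower Order are evaluated earlier
        int Order { get; }

        // Return true if the candidate action can handle the current request
        bool Accept(ActionConstraintContext context);
    }
}
```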

The Order property defines the priority in which that constraint is evaluated, and the Accept method is where the actual logic is implemented. Whenever an Action Constraint is evaluated and its Accept method returns TRUE, it tells the Action Selection process that this Action is a better match for the request.

Customizing the Action Selection – Custom Action Constraint

Now let's implement our own IActionConstraint and force the Action Selection process to work as we want it to. Let's imagine a scenario where we want to serve specific content to users who access our application through a Mobile Browser, and we want to handle that on the back-end, as we will really serve different data to these users. In this situation we have the following Action Methods:
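A sketch of those two actions (hypothetical names and views; both map to the same route and verb):

```csharp
public class HomeController : Controller
{
    // Default experience for desktop browsers
    [HttpGet]
    public IActionResult Index() => View();

    // Mobile-specific experience for the same /Home/Index route
    [HttpGet]
    [ActionName("Index")]
    public IActionResult IndexMobile() => View("IndexMobile");
}
```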

That would, again, give us the AmbiguousException because, as it is, it is impossible for the framework to choose between those two actions, so what can we do to help? Let's implement our action constraint:
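A sketch of the constraint, matching the description below: an attribute implementing IActionConstraint, with Order 0 and a deliberately simplistic user-agent check (the attribute name is hypothetical):

```csharp
using System;
using Microsoft.AspNetCore.Mvc.ActionConstraints;

public class MobileOnlyAttribute : Attribute, IActionConstraint
{
    // Evaluated early in the constraint pipeline
    public int Order => 0;

    public bool Accept(ActionConstraintContext context)
    {
        var userAgent = context.RouteContext.HttpContext
            .Request.Headers["User-Agent"].ToString();

        // True when the request (probably) comes from a mobile browser
        return userAgent.Contains("Android") || userAgent.Contains("iPhone");
    }
}
```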

I know, I know… there are surely better ways to implement this behavior and it is not fool-proof at all, but it is enough for our purposes. We set the Order property of our Action Constraint to 0 so it will be one of the first to be evaluated by the framework, and the implementation of our Accept method returns true if the request's user-agent contains either "Android" or "iPhone" in its value.

So, how do we hook this component up to our pipeline? Easy enough:
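Assuming the constraint is written as an attribute (a hypothetical MobileOnlyAttribute), hooking it up is just a matter of decorating the mobile action:

```csharp
[HttpGet]
[ActionName("Index")]
[MobileOnly]  // hypothetical attribute implementing IActionConstraint
public IActionResult IndexMobile() => View("IndexMobile");
```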

Ha! Simple, isn’t it?


Default Content

When accessing through a regular desktop browser, you will be presented with the default implementation of our View…

Mobile Content

…and when accessed through a Mobile Browser, you will be presented with the specific implementation of our View. Cool, right?

This is one of my favorite pluggable components in the entire framework pipeline. It doesn't feel too invasive, and it can help us bend the flow of the request in a very useful way!

What do you think of it? Can you think of a way to use it in your applications?