A couple of days ago, I ended up watching a video about Windows Server 2016 at Microsoft Virtual Academy. I think it was A Deep Dive into Nano Server, but I’m not sure to be honest. Anyhow, they started talking about Windows Server Containers and Docker, and I got very interested.
I really like the idea of Docker, but since I’m a .NET dev, the whole Linux dependency is a bit of a turn-off. And yes, I know that ASP.NET 5 will be cross-platform and so on, but the initial release of .NET Core will be very limited, which makes it a little less appealing. However, with Windows Server Containers, I get the same thing, but on Windows. So all of a sudden, it got interesting to look at Docker, and I decided to get an ASP.NET 5 app up and running in a Windows Server Container. Actually, I decided to do it in two ways, but in this post I will cover the simplest way, and then I will do another post about the other way, which is more complicated but has some benefits…
What is Docker?
So the first question, I guess, is “What is Docker?”. At least it is if you have little or no knowledge of Docker. If you already know a bit about Docker, you can skip to the next part!
Disclaimer: This is how I see it. It is probably VERY far from the technical version of it, and people who know more about it would probably say I am on drugs or something. But this is the way I see it, and the way that makes sense to me, and that got me to understand what I needed to do…
To me, Docker, or rather a Docker container, is a bit like a shim. When you create a Docker container on a machine, you basically insert a shim between what you are doing in that container and the actual server. Anything you do inside that container will be written to that shim, and not to the actual machine. That way, your “base machine” keeps its own state, but you can do things inside of the container to configure it the way you want it. The container is then run as a sort of virtual machine on that machine. It is a bit like a VM, but much simpler and more lightweight. You can then save the configuration that you have made, and re-use it over and over again. This means that you can capture the configuration for your environment in a Docker image, and then use it all over the place on different servers, and they will all have the same set-up.
You can then persist that container into what is called an image. The image is basically a pre-configured “shim” that you can then base new containers off of, or base other images on. This allows you to build up your machine from multiple “shims”. So you start out with the base machine, and then maybe you add the IIS image that activates IIS, then you add the one that adds your company’s framework to it, and finally, on top of that, you add your own layer with your actual application. Kind of like building your environment from Lego blocks.
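To make that Lego idea a bit more concrete, here is a purely hypothetical sketch of what the topmost layer could look like as a dockerfile — the image name mycompany/framework is made up for illustration, and would itself be an image built on top of an IIS image, which in turn sits on the windowsservercore base:

```dockerfile
# Hypothetical application layer. "mycompany/framework" is an invented
# name for a company-framework image that is itself layered on an IIS
# image, which is layered on the windowsservercore base image.
FROM mycompany/framework
ADD app /app
```

Each image in that chain is just another dockerfile with its own FROM line, so the layers stack up exactly like the blocks in the analogy.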
There are two ways to build the images. The first is to create an “interactive” container, which is a container you “go into” and make your changes to, and then commit to an image. The second is to create something called a dockerfile, which is a file containing all of the things that need to be done to whatever base image you are using to get it into the state that you want.
Using a dockerfile is a lot easier once you get the hang of how they work, as you don’t need to sit and do it all manually. Instead you just write the commands that need to be run, and Docker sorts it all out for you and hands you back a configured image (if everything you told it to do in the dockerfile worked).
If you are used to virtualized machines, it is a bit like differencing disks. Each layer just adds things to the previous disk, making incremental changes until you reach the level you want. However, the “base disk” is always the operating system you are running. So, in this case Windows Server 2016. And this is why they are lighter weight, and faster to start and so on. You don’t need to boot the OS first. It is already there. All you need to do is create your “area” and add your images/”shims” to it.
To view the images available on the machine, you can run
C:\>docker images
On a new Windows Server 2016 machine, like the one I am using here, you will only see a single image to begin with. It will be named windowsservercore, and represents the “base machine”. It is the image that all containers are based on.
Set up a server
There are a couple of different ways to set up a Windows Server 2016 (Technical Preview 3 at the time of writing) to try this out.
Option 1: On a physical machine, or existing VM. To do this, you need to download and execute a Powershell-script that enables containers on the machine. It is documented here.
Option 2: In a new VM. This was ridiculously simple to get working. You just download and execute a Powershell-script, and it sorts everything out for you. Kind of… It is documented here.
Option 3: In Azure. This is by far the simplest way of doing it. Just get a new machine that is configured and done, and up and running in the cloud. Documented here.
Warning: I went through options 2 and 3. I started with 2, and got a new VM in Hyper-V. However, my machine blue-screened every time I tried to bridge my network. Apparently, this is a known bug in the latest drivers for my Broadcom WLAN card. Unfortunately, downgrading the driver didn’t work on my machine, so I had to give up. So if you are running a new MacBook Pro, or any other machine with that chip, you might be screwed as well. Luckily, the Azure way solved that…
Warning 2: Since this is Windows Server Core, there is a VERY limited UI. Basically, you get a command line and have to do everything using that. That, and your trusty friend PowerShell…
Configure the server
The next step, after getting a server up and running is to configure it. This is not a big thing, there are only a few configurations that need to be made. Maybe even just one if you are lucky. It depends on where you are from, and how you intend to configure your containers.
The first step is to move from cmd.exe to PowerShell by executing
c:\>powershell
Being from Sweden, with a Swedish keyboard, I needed to make sure that I could type properly by setting the correct language. To do this I used the Set-WinUserLanguageList cmdlet
PS C:\>Set-WinUserLanguageList -LanguageList sv-SE
Next, you need to open a firewall rule for the port you intend to use. In this case, I intend to use port 80 as I am going to run a webserver. This is done using the following command
PS C:\>if (!(Get-NetFirewallRule | where {$_.Name -eq "TCP80"})) { New-NetFirewallRule -Name "TCP80" -DisplayName "HTTP on TCP/80" -Protocol tcp -LocalPort 80 -Action Allow -Enabled True}
It basically checks to see if there is a firewall rule called TCP80. If not, it creates one, opening port 80 for TCP.
Note: If you are running in Azure, you also need to set up an endpoint for your machine for it to work. It is documented in the resource linked above.
Next, I want to make sure that I can access the ports I want on the container(s) I am about to create.
When running containers, you will have the network connection that your machine uses, as well as a virtual switch that your containers will be connected to. In Azure, your machine will have a 10.0.x.x IP by default, and a virtual switch at 172.16.x.x that your containers will be connected to. However, both of them are behind the firewall. So by opening port 80 like we just did, we opened port 80 on both connections. As long as your container is only using port 80, you are good to go. But if you want to use other ports in your containers, and map port 80 from the host to port X on your container, you need to make some changes, as port X will be denied by the firewall.
Note: This part tripped me up a lot. In Microsoft’s demos, they map port 80 to port 80 when running an nginx server. But they never mention that this only works because the firewall implicitly opened the same port on the container connection. I assumed that since the 172-connection was internal, it wasn’t affected by the firewall. Apparently I thought the world was too simple.
So to make it simple, I have just turned off the firewall for the 172.16.x.x connection. The public connection is still secured, so I assume this should be ok… But beware that I am not a network or security guy! To be really safe, you could open just the ports you need on that connection. But while I am mucking about and trying things out, removing the firewall completely makes things easier!
The command needed to solve this “my way” is
PS C:\>New-NetFirewallRule -Name "TCP/Containers" -DisplayName "TCP for containers" -Protocol tcp -LocalAddress 172.16.0.1/255.240.0.0 -Action Allow -Enabled True
It basically says “any incoming TCP request on the 172.16.x.x connection is allowed”, i.e. the firewall is turned off for TCP on that connection. Just what I wanted!
Create and upload an application
As this is all about hosting the application, I don’t really care about what my application does. So I created a new ASP.NET 5 application using the Empty template in Visual Studio 2015, which in my case created an application based on ASP.NET 5 Beta 7.
The application as such is just a “Hello World”-app that has an OWIN middleware that just returns “Hello World!” for all requests. Once again, this is about hosting, not the app…
I decided to make one change though. As Kestrel is going to be the server used in the future for ASP.NET 5 apps, I decided to modify my application to use Kestrel instead of the WebListener server that is in the project by default. To do this, I just made a couple of changes to the project.json file.
Step 1, modify the dependencies. The only dependency needed to get this app going is Microsoft.AspNet.Server.Kestrel. So the dependencies node looks like this
"dependencies": { "Microsoft.AspNet.Server.Kestrel": "1.0.0-beta7" }
Step 2, change the commands to run Kestrel instead of WebListener
"commands": { "kestrel": "Microsoft.AspNet.Hosting --server Microsoft.AspNet.Server.Kestrel --server.urls http://localhost:5004" }
In that last step, I also removed the hosting.ini file, as I find it clearer to have all of the config in the project.json file instead of spread out…
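For reference, after both changes the relevant parts of my project.json would look something like this. This is just a sketch based on the Empty template; your generated file will contain additional nodes (such as frameworks) that I have left out here:

```json
{
  "webroot": "wwwroot",
  "version": "1.0.0-*",
  "dependencies": {
    "Microsoft.AspNet.Server.Kestrel": "1.0.0-beta7"
  },
  "commands": {
    "kestrel": "Microsoft.AspNet.Hosting --server Microsoft.AspNet.Server.Kestrel --server.urls http://localhost:5004"
  }
}
```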
Next I used Visual Studio to publish the application to a folder on my machine. This packages up everything needed to run the application, including a runtime and dependencies and so on. It also creates a cmd-file for the kestrel-command. So it is very easy to run this application on any Windows-machine.
Transfer the application to the server
With the app done, and published to the local machine, I went ahead and zipped it up for transfer to the server. And since I needed to get it to my server in Azure, I went and uploaded it to a blob in Azure Storage, making sure that it was publicly available.
The next step is to get the app onto the server. Luckily, this is actually not that hard, even from a CLI. It is just a matter of running the wget command
wget -uri 'http://[ACCOUNT_NAME].blob.core.windows.net/[CONTAINER]/[APP_ZIP_FILE_NAME].zip' -OutFile "c:\KestrelApp.zip"
And when the zip is on the machine, it is time to unzip it. As there will be a dockerfile to set everything up, the files need to be unzipped to a subfolder of a new directory. In my case, I decided to call the new directory “build”, and thus unzip my files to “build\app”. Like this
Expand-Archive -Path C:\KestrelApp.zip -DestinationPath C:\build\app -Force
Create a dockerfile
Now that we have the app on the server, it is time to create the dockerfile. To do this, I start out by making my way into the “build”-directory.
PS C:\>cd build
To create the actual file, you use the New-Item command
PS C:\build>New-Item -Type File dockerfile
And then you can open it in Notepad by running
PS C:\build>notepad dockerfile
Inside the Notepad it is time to define what needs to be done to get the container into the state we want.
The first step is to define what image we want to base our new image on. In this case, there is only one, and it is called “windowsservercore”. So, I tell Docker to use that image as my base-image, by writing
FROM windowsservercore
on the first line of the dockerfile.
Next, I want to include my application in the new container.
The base disk (windowsservercore) is an empty “instance” of the physical machine’s OS. So anything we want to have access to in our container needs to be added to the container using the ADD keyword. So to add the “app” directory I unzipped my app to, I add
ADD app /app
which says, add the app directory to a directory called “app” in my image.
Once I have my directory added, I also want to set it as the working directory when my container starts up. This is done using the WORKDIR keyword like this
WORKDIR /app
And finally, I need to tell it what to do when the container starts up. This can be done using either an ENTRYPOINT or a CMD keyword, or a combination of the two. However, being a Docker noob, I can’t tell you exactly what the differences between them are, or which way of using them is best, but I got it working by adding
CMD kestrel.cmd
which tells it to run kestrel.cmd when the container starts up.
So finally, the dockerfile looks like this
FROM windowsservercore
ADD app /app
WORKDIR /app
CMD kestrel.cmd
which says, start from the “windowsservercore” image, add the content of my app directory to my image under a directory called app. Then set the app directory as the working directory. And finally, run kestrel.cmd when the container starts.
Once I have the configuration that I want, I save the file and close Notepad.
Create a Docker image from the dockerfile
Now that we have a dockerfile that hopefully works, it is time to tell Docker to use it to create a new image. To do this, I run
PS C:\build>docker build -t kestrelapp .
This tells Docker to build an image named “kestrelapp”. The dot at the end tells it to use the current location, where it looks for a file called “dockerfile”.
Docker will then run through the dockerfile one line at a time, setting up the image as you want it.
And at the end, you will have a new image on the server. So if you run
PS C:\build>docker images
You will now see two images. The base “windowsservercore”, as well as your new “kestrelapp” image, which is based on “windowsservercore”.
Create, and run, a container based on the new image
Once the image is created, it is time to create, and start, a container based on that image. Once again it is just a matter of running a command using docker
docker run --name kestrelappcontainer -d -p 80:5004 kestrelapp
This command says “create a new container called kestrelappcontainer based on the kestrelapp image, map port 80 from the host to port 5004 on the container, and run it in the background for me”.
Running this will create the container and start it for us, and we should be good to go.
Note: Adding -p 80:5004 to map the ports will add a static NAT mapping between those ports. So if you want to re-use some ports, you might need to remove the mapping first. Or, if you want to re-use the same mapping, you can just skip adding the -p parameter. If you want to see your current mappings, you can run Get-NetNatStaticMapping, and remove any you don’t want by running “Remove-NetNatStaticMapping -StaticMappingID [ID]”.
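To put that note in concrete terms, a cleanup session could look something like this. The mapping ID 0 is just an example; use whichever ID Get-NetNatStaticMapping shows for the mapping you want to remove:

```powershell
# List all static NAT mappings currently defined on the host
Get-NetNatStaticMapping

# Remove the mapping with the given ID, freeing port 80 to be mapped again
Remove-NetNatStaticMapping -StaticMappingID 0
```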
If you want to see the containers currently on your machine, you can run
PS C:\build>docker ps -a
which will write out a list of all the available containers on the machine.
We should now be able to browse to the server and see “Hello World!”.
That’s it! That’s how “easy” it is to get up and going with Windows Server Containers and ASP.NET 5. At least it is now… It took a bit of time to figure everything out, considering that I had never even seen Docker before this.
If you want to remove a container, you can run
PS C:\build>docker rm [CONTAINER_NAME]
And if you have any mapping defined for the container you are removing, don’t forget to remove it using Remove-NetNatStaticMapping, as mentioned before.
If you want to remove an image, the command is
PS C:\build>docker rmi [IMAGE_NAME]
As this has a lot of dependencies, and not a lot of code that I can really share, there is unfortunately no demo source to download…
Cheers!