
Manually configuring OWIN WS-Federation middleware and accepting encrypted tokens


In my previous post, I showed how to do a simple configuration of WS-Federation using WIF, or whatever it is called now that it is part of the framework, to enable federated authentication in ASP.NET. Something that was previously done using a tool, but now has to be done either at the start of the application, or manually.

But what about OWIN? As all new security stuff is moving to OWIN, how do we get it to work there? Well, by default, it is ridiculously simple. And that has been the whole goal with this new model.

By default, all you need to do is get hold of a couple of NuGet packages. In this case, when integrating with WS-Federation, the package to get is Microsoft.Owin.Security.WsFederation. This package will pull in a whole heap of other NuGet packages that you will need, so you don’t have to worry about tracking them down yourself. However, you do also need Microsoft.Owin.Security.Cookies and Microsoft.Owin.Host.SystemWeb to get OWIN running in your application, and to get cookie authentication working.

Warning: As of today, the WsFederation NuGet package is a release candidate (RC-2 at the moment), so you will need to keep that in mind when adding it to your solution. If you use the package manager console, you will need to add “-pre”…and if you use the UI, you have to remember to select “Include Prereleases”

Once you have those packages, you create a Startup class and add the following piece of code

app.UseCookieAuthentication(new CookieAuthenticationOptions
{
    AuthenticationType = WsFederationAuthenticationDefaults.AuthenticationType
});
app.UseWsFederationAuthentication(new WsFederationAuthenticationOptions
{
    MetadataAddress = "https://sts/FederationMetadata/2007-06/metadata.xml",
    Wtrealm = "urn:myRealm",
    SignInAsAuthenticationType = WsFederationAuthenticationDefaults.AuthenticationType,
    TokenValidationParameters = new TokenValidationParameters
    {
        ValidAudience = "urn:myRealm"
    }
});

Ok, so “ridiculously simple” might be an exaggeration. But it isn’t that hard… So, what are we doing here? Well, we start off by telling the application to use cookies for authentication. This is used once the token has been validated… Next we tell it to use WS-Federation, and we pass in some configuration. In this case, we are telling it a few things. First off, we give it the address to the federation metadata document. This will be used to figure out all the other configuration that is needed to get it working. Next, we tell it the realm to use, and the authentication type to use.

The authentication type is used by the cookie middleware to figure out a bunch of things, so it is important that you have the same value here as you have for the authentication type in the cookie authentication registration.

Finally, we add some token validation parameters. In this case, I add only an audience to validate it against. In more complex scenarios, you would probably add more stuff here.

Ok, that is a bit simpler than having to use config files and stuff to get it configured. And on top of that, it is based on OWIN, so it will work with any OWIN based solution.

But what happens if you want to have encrypted tokens? Well, then it kind of falls apart…at the moment at least…

If you don’t manually add the SecurityTokenHandlers to use, the middleware will default to adding a bunch of standard ones. Right now, those standard ones are JwtSecurityTokenHandler, Saml2SecurityTokenHandler and SamlSecurityTokenHandler.

Warning! A bit of caution here! The SamlSecurityTokenHandler and Saml2SecurityTokenHandler in this case are from the namespace Microsoft.IdentityModel.Tokens and not System.IdentityModel.Tokens. They inherit from SecurityTokenHandler, but they also implement ISecurityTokenValidator, which is a new thing. The ones in System.IdentityModel do not implement that interface, and will not work in this pipeline.

As the “old” SecurityTokenHandlers don’t work in the new pipeline, you would kind of expect there to be nice replacements for them. Unfortunately, at the moment, there isn’t a SecurityTokenHandler in the Microsoft.IdentityModel namespace that can handle encrypted tokens. So how do we solve this?

Well, we will just have to manually add some custom token handlers to do the job. Let’s start with the encryption…

As there is an EncryptedSecurityTokenHandler in the System.IdentityModel namespace, this seems like a great place to start. So I start out by subclassing this handler. I call the new version EncryptedSecurityTokenHandlerEx. And as the class will need a SecurityTokenResolver to do the decryption, I take one of those as a parameter to the constructor. And finally, I add the ISecurityTokenValidator interface to my class.

It looks like this

public class EncryptedSecurityTokenHandlerEx : EncryptedSecurityTokenHandler, ISecurityTokenValidator
{
    public EncryptedSecurityTokenHandlerEx(SecurityTokenResolver securityTokenResolver)
    {
        Configuration = new SecurityTokenHandlerConfiguration
        {
            ServiceTokenResolver = securityTokenResolver
        };
    }

    public override bool CanReadToken(string securityToken) { ... }

    public ClaimsPrincipal ValidateToken(string securityToken, TokenValidationParameters validationParameters,
        out SecurityToken validatedToken) { ... }

    public int MaximumTokenSizeInBytes { get; set; }
}

Ok, nothing too complicated so far. The only thing is that I set the Configuration property to a new instance, and configure it to use the supplied SecurityTokenResolver.

One thing to note is that I am overriding the CanReadToken() method. This is part of the ISecurityTokenValidator interface. “Unfortunately”, the base class (SecurityTokenHandler) implements a method with the same signature, but just throws an exception…

Ok, so what do I do in my methods? Well, not much. I mostly delegate to the base class, but to the correctly implemented overloads. In the CanReadToken() method, I convert the incoming string to an XmlTextReader, and call the method overload from the base class that takes an XmlReader instead of a string.

Remember, I am just trying to get the handler to work nicely with the new pipeline. I don’t want to change the way it works. So delegating everything to the base class is great. I just need to make sure that I find methods that work. In this case, converting from string to XmlReader is all that is needed.

public override bool CanReadToken(string securityToken)
{
    return base.CanReadToken(new XmlTextReader(new StringReader(securityToken)));
}

In the ValidateToken() method, there is a bit more work to do. First of all, the base class doesn’t have a method that corresponds perfectly to this. So I have to start by calling ReadToken() instead. And I also need to remember to call the ReadToken() version that takes 2 parameters. The single parameter one will just throw an exception as it is implemented by SecurityTokenHandler and not the EncryptedSecurityTokenHandler class.

Next, I take my decrypted token and validate it. However, the EncryptedSecurityTokenHandler doesn’t include validation. Instead, it relies on another handler in the collection of handlers being able to validate it. So I will do the same.

Luckily, adding a handler to a SecurityTokenHandlerCollection will make sure that the handler gets access to the containing collection through the ContainingCollection property. Using this, I validate the token, and return a new ClaimsPrincipal.

If there is no ContainingCollection for some reason, I call the base class, which will throw an exception…

It looks like this

public ClaimsPrincipal ValidateToken(string securityToken, TokenValidationParameters validationParameters, out SecurityToken validatedToken)
{
    validatedToken = ReadToken(new XmlTextReader(new StringReader(securityToken)), Configuration.ServiceTokenResolver);
    if (ContainingCollection != null)
    {
        return new ClaimsPrincipal(ContainingCollection.ValidateToken(validatedToken));
    }
    return new ClaimsPrincipal(base.ValidateToken(validatedToken));
}

Cool, so now I have a way to decrypt my incoming token and redirect it to some other handler for processing. The problem is that I am still in need of that other handler…

Luckily, that part is already in the OWIN WS-Federation NuGet package, so I don’t have to create my own. All I need to do is add the existing one to the collection. Or can I? Actually, you can’t. Well, you might be able to, but as the SecurityTokenHandlerCollection is based on the “old” ways, it depends on the old methods from the handlers. And in this case, that means I can’t use the new handler out of the box, and subclassing and modifying that class is harder than subclassing the “old” handler. So that is what I will do.

Once again I create a new SecurityTokenHandler class and implement the ISecurityTokenValidator interface. I name it SamlSecurityTokenHandlerEx and inherit from the System.IdentityModel.Tokens.SamlSecurityTokenHandler.

This class does not need any special configuration so I don’t take any parameters in the constructor… Actually I will need to configure it, but I will do that when I use it instead…

Once again I need to override the CanReadToken() and implement the ValidateToken() method. Both of these implementations are almost identical to the one I just showed. So I won’t go in to too much detail, and instead just show the code

public class SamlSecurityTokenHandlerEx : SamlSecurityTokenHandler, ISecurityTokenValidator
{
    public override bool CanReadToken(string securityToken)
    {
        return base.CanReadToken(XmlReader.Create(new StringReader(securityToken)));
    }

    public ClaimsPrincipal ValidateToken(string securityToken, TokenValidationParameters validationParameters,
        out SecurityToken validatedToken)
    {
        validatedToken = ReadToken(new XmlTextReader(new StringReader(securityToken)), Configuration.ServiceTokenResolver);
        return new ClaimsPrincipal(ValidateToken(validatedToken));
    }

    public int MaximumTokenSizeInBytes { get; set; }
}

Ok, now that I have my handlers, how do I go about getting them hooked up and the application running? Well, the setup is pretty similar to the one used at the beginning of the post. However, instead of pointing to a federation metadata file, I set the configuration manually, which is a bit more complicated and, for obvious reasons, a bit more like the old way.

First of all, I need to define what audience restrictions I have, and what issuers I trust. I do this by creating an AudienceRestriction instance, and a ConfigurationBasedIssuerNameRegistry instance, and populating them with the required information. Like this

public void Configuration(IAppBuilder app)
{
    app.UseCookieAuthentication(new CookieAuthenticationOptions
    {
        AuthenticationType = WsFederationAuthenticationDefaults.AuthenticationType
    });

    var audienceRestriction = new AudienceRestriction(AudienceUriMode.Always);
    audienceRestriction.AllowedAudienceUris.Add(new Uri("urn:realm"));

    var issuerRegistry = new ConfigurationBasedIssuerNameRegistry();
    issuerRegistry.AddTrustedIssuer("xxxxxxxxxxxxxxxxxxxxxxxxx", "http://sts/");

    ...
}

Next I call the UseWsFederationAuthentication() method like before. But this time, I put in a heap of configuration instead of just pointing to a federation metadata file.

The first part of the config is in regard to what realm to use, what reply address should be used, and where the user should go to get a token. And I also need to tell it what authentication type it is. I don’t know why, as this is passed into the constructor, but it needs to be passed in as part of a TokenValidationParameters object as well…

app.UseWsFederationAuthentication(new WsFederationAuthenticationOptions(WsFederationAuthenticationDefaults.AuthenticationType)
{
    Wtrealm = "http://wsfedtest/",
    Wreply = "http://localhost:1949/secure",
    Configuration = new WsFederationConfiguration() { TokenEndpoint = "http://sts.kmd/SignIn" },
    TokenValidationParameters = new TokenValidationParameters
    {
        AuthenticationType = WsFederationAuthenticationDefaults.AuthenticationType
    },
    ...
});

The last bit is to get my own SecurityTokenHandlers in to the pipeline instead of the default ones. This is quite easy. I just have to set the SecurityTokenHandlers property of the WsFederationAuthenticationOptions. It needs to be set to a SecurityTokenHandlerCollection containing all the handlers I want to use. However, they also need to be configured before being put in there. They will not be reading any of the “common” configuration that you can set using the TokenValidationParameters. At least not the way I built it…

app.UseWsFederationAuthentication(new WsFederationAuthenticationOptions(WsFederationAuthenticationDefaults.AuthenticationType)
{
    ...
    SecurityTokenHandlers = new SecurityTokenHandlerCollection
    {
        new EncryptedSecurityTokenHandlerEx(new X509CertificateStoreTokenResolver(StoreName.My, StoreLocation.LocalMachine)),
        new SamlSecurityTokenHandlerEx
        {
            CertificateValidator = X509CertificateValidator.None,
            Configuration = new SecurityTokenHandlerConfiguration()
            {
                AudienceRestriction = audienceRestriction,
                IssuerNameRegistry = issuerRegistry
            }
        }
    }
});

As you can see, I create and add an EncryptedSecurityTokenHandlerEx instance, giving it a SecurityTokenResolver pointing to my encryption cert. I then create and add a SamlSecurityTokenHandlerEx instance, setting the required configuration. In this case that means disabling certificate validation and setting the AudienceRestriction and IssuerNameRegistry to my previously created and configured instances.

Note: Reading the encryption cert from the certificate store is the regular way to do it. However, if you are publishing your app to a location where this doesn’t work, for example an Azure Website, you can just create your own SecurityTokenResolver that reads the cert from somewhere else…
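
Just to sketch what that could look like, the code below loads the certificate from a .pfx file and wraps it using the built-in SecurityTokenResolver.CreateDefaultSecurityTokenResolver() factory, instead of writing a resolver from scratch. The file name, location and password are of course just made-up examples.

// Note: the path and password below are just placeholders
var certPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "App_Data", "encryption.pfx");
var cert = new X509Certificate2(certPath, "my-cert-password", X509KeyStorageFlags.MachineKeySet);

// Wrap the cert in a resolver, instead of pointing to the certificate store
var resolver = SecurityTokenResolver.CreateDefaultSecurityTokenResolver(
    new ReadOnlyCollection<SecurityToken>(new SecurityToken[] { new X509SecurityToken(cert) }),
    false);

// ...which can then be handed to the handler: new EncryptedSecurityTokenHandlerEx(resolver)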

That’s pretty much it!

Ok, that should be enough to get everything going! As usual, there is some sample code available here: DarksideCookie.Owin.WsFederation.Encrypted.zip (482.97 kb)

Just note that there is a bit of “external” configuration to get it running, such as adding certs etc... And it requires that you have an STS available. So look through the code and see if you can get it working…

The sample code only includes code for setting up the encrypted token stuff. Plain vanilla stuff should be pretty well documented online…

Cheers!


The code from my SweNug presentation about what OWIN is, and why it matters…


Most of you can ignore this post completely! But if you attended SweNug today (September 10th), you know that I promised to publish my code. So here it is!

Code: SweNug.Owin.zip (2.30 mb)

I’m sorry for the ridiculous size of the download, but all the solutions have NuGet package restore enabled, which places an exe to restore NuGet packages in the project. This exe is quite sizable, so the download gets a bit large. On the other hand, including all the NuGet packages would make it even larger…

If you have any questions, don’t hesitate to drop me a line!

Building a “front-end build pipeline” from a C# devs perspective - Part 1


I started building web-based software professionally around the year 2000, just before the big IT crash in Sweden. It started out being just hacking together HTML, mostly using tables, and a little JavaScript, but it slowly evolved into building ASP applications with VB Script and COM components in VB. Since then, I have been in and out of the web development scene a whole bunch of times, and very little has changed. It is still HTML/CSS/JavaScript over HTTP…

Yes, on the server side there have been some changes. First an abstraction into WebForms, and then back to MVC. And to me, ASP.NET MVC is pretty similar to classic ASP in many ways. But the front end has pretty much stayed the same. It is still good ol’ HTML and JavaScript…and CSS of course. However, having been away from it for a little while now, coming back I realize that the scene has changed. A lot… Yes, the languages are unfortunately the same, but the methods have changed a lot.

The thing that has changed the most is that we are using MUCH more JavaScript and CSS. MUCH more. And that creates new requirements. Requirements like bundling and minifying, as well as testing even our front-end code. And in a lot of cases, we are authoring our code in other languages and having it “compiled”, or “transpiled”, into JavaScript and CSS, to make up for their “shortcomings”. Whether it be using CoffeeScript or Dart for your JavaScript, or LESS or SASS for your CSS, it needs processing before we can use it… And this new way of building things has created the need for a front-end build pipeline… At least if you want to be really productive.

Most of the front-end stuff is built around tools we as C# devs are often unfamiliar with, so I thought I would just write down how we have solved it in the last couple of projects I have been working on. It might not be the best solution, but it is a working one. We all have different needs and workflows, and this has worked pretty well for us. But we are continuously tweaking our flow to make it better. And if you have opinions about it, feel free to voice them in the comments below, or by contacting me through some other channel you see fit (like the contact form on this blog).

Ok, so let’s start building our pipeline! The most obvious way to start out is by accepting that we will be doing a lot of this using Node.js and JavaScript. Yes, that is correct! We will use JavaScript to create the stuff that, among other things, builds JavaScript. Very meta…

So I am just going to assume that you have Node.js installed, and that you are at least a little familiar with it. If not, you need to go and get it. And probably go and get some basic node knowledge. At least enough to know how to run npm, the node package manager… Actually…that is all you need to know about node. The rest is just JavaScript…

In my last project, we used TypeScript and LESS, so that is what I will use for this walkthrough. But you can use whatever you want. However, if you use some other languages, you will need other node modules to transpile them. But you will still get the basic idea in this walkthrough…

I am going to use a very simple folder structure for my code in this demo. I have one folder called src, in which I store my TypeScript files, and I have one called styles, in which I will keep my LESS files.

The first thing we need is some form of task runner. There are 2 VERY common choices here: Gulp and Grunt. Grunt used to be the sheriff in town, but lately it seems as though Gulp is the new bee’s knees. And now, when you read it, there might be something else. Welcome to the wonderful world of front-end development. It is a very rapidly changing world, with LOTS of opinions.

Grunt is based on creating a bunch of configuration and using that to tell it what to do. Gulp on the other hand is code based. We write JavaScript telling Gulp what to do. In most cases you start out with defining what files to work with, then you chain the things you want to do to them, one after the other. A very smooth, and flexible, way of doing it in my mind.

Gulp is installed using npm. You can run Gulp both as a local and a global install, but when it comes to Gulp, I prefer a global install for different reasons and personal preference… Installing it globally lets you run Gulp tasks in the command window by just typing “gulp <task>”. If you install it locally you have to write “npm run gulp <task>”.

Next we need to install the modules we need to help us with the stuff we want Gulp to do. Remember, Gulp is just a smooth task runner; pretty much anything you want to do in it is a separate module. Let’s start by transpiling our TypeScript into JavaScript. For this, you can use a module called “gulp-tsc”. So just install that using npm…

Just like when choosing a task runner, there are often lots of different modules to choose from. I have looked at some of the options and just chosen one based on my gut feeling. Or, to be honest, by running a few of them to see which seems to work the best. Just have a look at the options and go for the one you like. If it doesn’t work well, then you can always just replace it!

Gulp assumes that there is a file in the root directory called gulpfile.js, so let’s add one of those. It will be used to define the different tasks you want to use Gulp to run. In this case, I want a single task that I will call “default”. Calling it default means that Gulp will default to using that task if no specific task name is supplied. It looks like this

var gulp = require('gulp');
var typescript = require('gulp-tsc');

gulp.task('default', function(){
    // Task functionality
});

As you can see, you need to start out by “requiring” gulp. In this case, as we need to compile TypeScript, we should also “require” the gulp-tsc module that we just installed.

The first thing to do inside the task is to figure out what files to work with. This is done by calling a function called src() on the gulp object. To the src() function you pass a string, or array of strings, containing Unix glob filters. In this case, using “src/*.ts” will suffice.

The return value of src() is chainable, so we can just tack on whatever we want to do with the files that have just been selected. In this case, “piping” them to the TypeScript compiler seems like a good idea. And then finally they are piped to gulp.dest(), which tells Gulp to write the result(s) to the defined location. Like this

gulp.task('default', function(){
    gulp.src('src/*.ts')
        .pipe(typescript())
        .pipe(gulp.dest('build/js/'))
});

Ok, so now we should be able to get the TypeScript transpiled to JavaScript using Gulp. So let’s try it. To do that, create a simple TypeScript class called Person.ts in the src folder, and add the following TypeScript to it

class Person {
    fullname : string;
    constructor(public firstname, public lastname) {
        this.fullname = firstname + " " + lastname;
    }
}

Next, open a command prompt in the root directory, and type gulp and press enter. This should result in something similar to this

[screenshot: console output from running gulp]

And if you look in the directory called “js” under the “build” directory in the root, which was just magically created, you will find a JavaScript file called Person.js containing the transpiled Person class. Sweet!

Sidenote: A neat feature that very few people seem to know about is that if you browse to the root directory in Explorer, you can just type “cmd” in the address bar and press enter to open a command line at that location…

Next up is to add some LESS. So let’s put a LESS file called styles.less in the “styles” folder. Add some simple LESS to it. Something like this

body {
    margin:0;
    padding:0;

    header {
        background-color: #ccc;
        font-family: Verdana;
        font-size: 14px;
    }
}

Next, let’s add LESS transpiling to our gulpfile. First off, we need to “require” gulp-less, and then use that in a new task called “styles”. This task selects all the LESS files in the “styles” directory, passes them to the gulp-less module, and finally writes them to “build/styles”. Like this

//...other require statements...
var less = require('gulp-less');

// default task

gulp.task('styles', function() {
    return gulp.src('styles/*.less')
        .pipe(less())
        .pipe(gulp.dest('build/styles'));
});

And this will obviously fail as the gulp-less module is missing. So use npm to install it, and then run “gulp styles” in the command prompt.

Once again you are greeted by some colored text output from Gulp, as well as a new “styles” directory under the “build” directory. This directory will contain styles.css, which contains the transpiled css.

Ok, so that is pretty sweet! But having to run two different commands to build our front-end stuff is a bit annoying. So let’s modify the gulpfile to run both at the same time. This is done by renaming the default task to “typescript”, and adding a new task called “default”. But instead of passing a function as the second parameter to the default task, we’ll pass an array of strings with the names of the tasks we want to run. In the end, the gulpfile will look like this

var gulp = require('gulp');
var typescript = require('gulp-tsc');
var less = require('gulp-less');

gulp.task('styles', function() {
    return gulp.src('styles/*.less')
        .pipe(less())
        .pipe(gulp.dest('build/styles'));
});

gulp.task('typescript', function(){
    gulp.src('src/*.ts')
        .pipe(typescript())
        .pipe(gulp.dest('build/js/'))
});

gulp.task('default', ['typescript','styles']);

If you run Gulp now, without passing any parameters to it, it will run both tasks, and create your files. But wouldn’t it be nice to not have to go and run Gulp after each LESS or TypeScript change you make? Well, luckily Gulp supports watching files and directories as well. So let’s add a watcher that watches for changes to the TypeScript and LESS files, and automatically transpiles them. This is done using a function called watch() on the gulp object. Just remember that watching things will lock the command window, so I prefer having the watchers in their own task. That way, I can easily run the transpiling manually if I want to, but start the watchers if that suits my needs better at that point.

So let’s add a new task called “watch”. Inside it, use the watch() function to define what files to watch, and what tasks to run when they change. Like this

gulp.task('watch', function() {
    gulp.watch('styles/*.less', ['styles']);
    gulp.watch('src/*.ts', ['typescript']);
});

Now, if you run “gulp watch”, you should be greeted by something like this

[screenshot: console output from running “gulp watch”]

And that window should now have locked itself for input, and if you go and change any of the LESS or TS files, it will automatically run the defined tasks. And if you want to stop the watching, just press Ctrl+C and confirm the termination by pressing Y and then enter.

So now we have a “build pipeline” that will handle all the transpile needs we have. And even doing it automagically in the background for us. All we need to do is to remember to start the watch task before we start coding. And if you are using VS for your development, there is even a Gulp task runner extension that you can install. It will enable starting Gulp tasks based on events in VS. Events like when the project is opened, when it is compiled and so on.

And obviously, you can configure as many tasks, and intricate task set-ups as you feel that you need. You can pretty much create whatever flow you can even think of using Gulp. This is just a very simple starter solution that works well for a small solution…

In the next part we will look at taking the whole thing one step further by adding bundling and minification to the pipeline so that we can optimize what we deliver to the client. We will also have a look at how we can write tests for our TypeScript, and have them run automatically, so that we can make sure we don’t break things.  But this is it for this time!

Building a “front-end build pipeline” from a C# devs perspective - Part 2


In the previous post, we looked at how we can use Gulp to run tasks for us. And in that post we used it to create tasks for transpiling LESS and TypeScript into CSS and JavaScript. But the example was very small and simple. It only contained 1 LESS file and 1 JavaScript file. But what if we have more than 1? Well, that’s when we need to start bundling the files together, and potentially minify them so that they are faster to download. Luckily, this is a piece of cake to do using Gulp. So in this post, we will have a look at how to do that, as well as how to get some TypeScript/JavaScript tests thrown in there as well.

Disclaimer: The solution will still be VERY small and simple. But at least it will be made big enough to be able to use bundling and minification. Which to be honest just means that we need more than one file of each type…

I assume that you have read the last post, and that if you are following along on your machine, you have done everything that was covered in that post…

The first thing that needs to be done to be able to show bundling and minification, is obviously to add more TypeScript files, and more LESS files. However, adding even just one of each is enough to show off the concept. 2 or 2000 files doesn’t matter, the concept is the same. So let’s add one more file of each type. The content, and naming, isn’t really important, but I will call my TypeScript file Animal.ts, and the LESS file animal-styles.less. And they will look like this on my machine

// animal-styles.less
body {
    .animal {
        background-color:brown;
    }
}


// Animal.ts
class Animal {
    constructor(public name) {
    }
    greet() {
        console.log(this.name + ' greets you!');
    }
}

As I said, the content doesn’t matter…

Next, let’s modify our gulpfile to include bundling. Something I have chosen to do using a module called “gulp-concat” for the JavaScript, and one called “gulp-concat-css” for the CSS. So after npm installing them, two require statements need to be added at the top, before modifying the “typescript” and “styles” tasks. Once those require statements are in place, all that is needed is to modify the existing tasks by “piping” the files to the corresponding bundler, like this

// other requires...
var concat = require('gulp-concat');
var concatCss = require('gulp-concat-css');

gulp.task('styles', function() {
    return gulp.src('styles/*.less')
        .pipe(less())
        .pipe(gulp.dest('build/styles'))
        .pipe(concatCss("bundle.css"))
        .pipe(gulp.dest('build/styles'));
});

gulp.task('typescript', function(){
    return gulp.src('src/*.ts')
        .pipe(typescript())
        .pipe(gulp.dest('build/js/'))
        .pipe(concat("scripts.js"))
        .pipe(gulp.dest('build/js/'));
});

// other tasks

As the files are bundled, the bundler needs to define a new filename for the resulting data. So this is passed into the concat() and concatCss() functions. These names are then used by the gulp.dest() function when writing the result to disk.

If you run “gulp” now, you will see that next to the transpiled files you now get bundled versions as well.

But this is only bundling. We really want minification as well… As we get large CSS and JavaScript files, we need to get them as small as possible to make the download as fast as possible for the client.

Once again we just install a couple of modules and pipe the data to those. In this case, my choice has fallen on “gulp-uglify” for the JavaScript, and “gulp-minify-css” for the CSS. So it is just a matter of installing those 2 modules using npm, and add them to the gulpfile using a couple of require statements.

However, for this step, I also want to rename the stream I am working with. I want my minified files to have a slightly modified name, compared to the bundles, so that I know that they are minified as well. To do this, I have chosen the “gulp-rename” module. So let’s install that module as well…and add another require for that one as well.

As soon as that is done, we can change the bundle tasks to include the minification as well, using a couple of extra pipes like this

// other requires...
var rename = require('gulp-rename');
var minifyCSS = require('gulp-minify-css');
var uglify = require('gulp-uglify');

gulp.task('styles', function() {
    return gulp.src('styles/*.less')
        .pipe(less())
        .pipe(gulp.dest('build/styles'))
        .pipe(concatCss("bundle.css"))
        .pipe(gulp.dest('build/styles'))
        .pipe(minifyCSS())
        .pipe(rename('styles.min.css'))
        .pipe(gulp.dest('dist/'));
});

gulp.task('typescript', function(){
    return gulp.src('src/*.ts')
        .pipe(typescript())
        .pipe(gulp.dest('build/js/'))
        .pipe(concat("scripts.js"))
        .pipe(gulp.dest('build/js/'))
        .pipe(rename('scripts.min.js'))
        .pipe(uglify())
        .pipe(gulp.dest('dist/'));
});

// other tasks...

As you can see, if you look at the code snippet above, I have added a couple of “pipes” to minify the data, and then renaming the output, before finally writing them to disk in a new directory called “dist”. The files in the “dist” directory are the ones we want to use when we publish our application.

If you run gulp now, you will end up with a “build” directory, containing all the transpiled files, as well as the bundled ones, and a “dist” directory with the bundled and minified version that you want to put into production. And since we have just modified the existing tasks, these new things will automatically be generated by the watchers we put in place in the last post. Easy peasy!

I could have done it all in one swoop, and not saved the steps along the way, but to me, having the transpiled files, and the bundled files available as well, makes it easier to figure out what went wrong if things don’t work. On top of that, it is a lot easier to use the non-bundled files during development. The bundled and minified files are not that great for some things. So in the solution I am working on at the moment, we only use the minified versions in production. During development, we use the unbundled files. And to be honest, you can even serve up the LESS files and have them compiled in the browser if you wanted to. It does slow down the solution a bit though…

You can get to a usable situation with the bundled and minified versions as well using sourcemaps, however, I still need to find a good way to work with them. So for now, we don’t use them. But if you want to, Gulp can obviously help you to output sourcemaps as well!

Now that we have a fully working pipeline for transpiling and bundling our front-end stuff, we are pretty well set up for development. However, there is one more thing that I want to have in place before then. And that is a way to run JavaScript tests as well. Or rather my TypeScript tests. So let’s see how we go about doing that.

First, we obviously need some tests, and as we are writing our code in TypeScript, we should obviously write our tests in TypeScript as well. But doing so is a bit interesting. Not hard, but interesting.

There are 2 obvious ways of doing it. One simple, and one not so simple. I have chosen the not so simple, as I don’t want to digress into another topic right now, but after the description of what I have done for this “demo”, I will explain briefly how to do it the easier way…

To create tests for the Person class, let’s create a new file in the src directory. Let’s call it Person.tests.ts. Inside it, add a TypeScript reference to the Person.ts file as that is what is about to be tested. Next we need to declare any of the Jasmine based methods we will be using. And if I potentially forgot to tell you that I use Jasmine tests, you have now been told… In this simple case, we will only be using the describe(), it(), beforeEach() and expect() functions. So the declarations look like this

/// <reference path="Person.ts" />

declare function describe(description: string, specDefinitions: () => void): void;
declare function it(expectation: string, assertion?: () => void): void;
declare function beforeEach(action: () => void): void;
declare function expect(actual: any): jasmine.Matchers;
declare module jasmine {
    interface Matchers {
        toBe(expected: any): boolean;
    }
}

Once we got that in place, we can start writing the tests. I won’t talk very much about them as they aren’t important as such. But the existence of them is… So they look like this

// declarations....

describe("Person", () => {

    var person: Person;

    beforeEach(() => {
        person = new Person('Chris','Klug');
    });

    it("should set fullname property", () => {
        expect(person.fullname).toBe("Chris Klug");
    });

    describe("greet", () => {

        it("should log a greeting to the user properly", () => {
            var oldLog = console.log;
            var loggedEntries = new Array<string>();
            console.log = function(str) { loggedEntries.push(str) };
            person.greet('Bill');
            console.log = oldLog;
            expect(loggedEntries[0]).toBe("Greetings Bill, says Chris Klug");
        });

    });

});

Ok, so now we have some tests to run! However, I did promise to tell you of another way. Well, declaring all of your external dependencies, like we did for the Jasmine functions above, can get a bit tedious, and hard, depending on how many external dependencies you have. Luckily, you can find pre-built TypeScript declarations for most bigger JavaScript frameworks. Most of them are available at DefinitelyTyped on GitHub, or through a node module called TSD. Using these pre-defined TypeScript declaration files, you just need to reference them and everything just magically works, instead of having to define every little thing you are working with in your tests…

For JavaScript test running, I have chosen a node module called “testem”. It is a nice “interactive” testrunner that you can set up using a simple json file, and then just start and have running in the background. It will automatically watch your files for you, serve up the things you want to use to run the tests, and do the testing in whatever browser you wish to use. In my case, I want to run it using PhantomJS, which is a headless browser that doesn’t require you to have a browser window open all the time.

To get started with testem, you need to use your npm skills to install it, as well as PhantomJS if you want to use that. Both can be installed locally, but will work globally as well. The only important thing to get things to work though, is to include the path to the PhantomJS executable location in the PATH environment variable. If you install it locally, setting the path to “node_modules\phantomjs\lib\phantom” works fine. The path can be relative, as long as it is there. If it isn’t, testem fails to start. And it doesn’t give you a good error message, so it can be a bit hard to figure out…

After installing testem, and PhantomJS, we need to set up the configuration. This is done by creating a new JSON-file in the root of the solution, called testem.json. This file will include the configuration needed by testem. For this solution, my testem.json file looks like this

{
    "framework": "jasmine2",
    "launch_in_dev": ["PhantomJS"],
    "src_files": [
        "src/*.ts"
    ],
    "serve_files": [
        "build/js/*.js"
    ],
    "before_tests": "gulp typescript"
}

As you can probably figure out from the above config, testem will run Jasmine2-based tests, and run them in development using the PhantomJS browser. It will also monitor all .ts files in the src directory, and whenever they change, it will re-run the tests. And the tests are run by serving up all the JavaScript files in the build/js directory. However, as I am watching for changes in the TypeScript files, I need to get them transpiled first. Luckily, as I already have this working in my gulpfile, that is just a matter of telling testem to run Gulp’s “typescript” task before running the tests.

Testem has a whole heap of configuration options available, but for this scenario this is enough. If you want to run more browsers, just modify the list in “launch_in_dev” (the available browser options for your machine are retrieved by running “testem launchers”). If you want to watch more, or other, files just modify the “src_files”. And so on… Testem can even be used to run the tests as part of the build pipeline on the build server. However, as we are currently running TFS, we do this using another tool, which I will show later on. So for us, just configuring the “launch_in_dev” is enough. But if you want to run it during builds, you can set that up too.

Ok, that should be it! Typing “testem” and pressing enter in the console should result in something like this

[screenshot: testem console output]

And changing any of the TypeScript files should cause an automatic transpile, and then a new test run. It might be a little slow, and can be optimized a bit depending on your solution set-up, but it works very well even in this simple set-up!

That’s it! You now have a fully working “front-end build pipeline” including a nice testrunner that you can have running while you develop your beautiful front-end code. The only problem is that this will now only run on your local machine, and you do NOT want to check in the generated files. They will create merge conflicts on every single check-in. Instead, we want to make sure we run these things on the build server during the build process. But that is a topic for the next post.

Uploading files using ASP.NET Web Api


I have once again been tasked with writing an ASP.NET Web Api action to handle the upload of a file. Or rather a form including a file. And once again, I Googled it looking for good solutions. And once again I got the same code thrown in my face over and over again. It seems to be the only one out there. And it isn’t what I want…

The code I am talking about is the one available at http://www.asp.net/web-api/overview/advanced/sending-html-form-data,-part-2. And if you don’t want to go and read that, the general gist is this

if (!Request.Content.IsMimeMultipartContent())
    throw new HttpResponseException(HttpStatusCode.UnsupportedMediaType);

string root = HttpContext.Current.Server.MapPath("~/App_Data");
var provider = new MultipartFormDataStreamProvider(root);

await Request.Content.ReadAsMultipartAsync(provider);

/* Get files using provider.FileData */

Ok, so it does what it is supposed to do. It gets the posted files… But once in a while, getting the job done isn’t enough…

If you look at the code above, there is a line that references HttpContext.Current.Server.MapPath(). It bugs me. For two reasons.

First of all, taking a dependency on HttpContext.Current means taking a dependency on System.Web, which in turn means taking a dependency on something I might not want in some cases.

Secondly, it also indicates that the code will be saving things on my hard-drive. Something that I definitely don’t want in this case. Why would I want to save the posted files on my drive before I can use them? And what happens if something fails in my controller? Well, the answer to the last question is that they will be sitting there on the disk waiting for some nice person to come by and manually delete them…

So, is there a way around this? Yes there is. And it isn’t even that hard…

Instead of instantiating a MultipartFormDataStreamProvider and passing it to the ReadAsMultipartAsync() method, we can just call the ReadAsMultipartAsync() without passing any parameters. This will return a MultipartMemoryStreamProvider instead of populating the MultipartFormDataStreamProvider like it did in the previous example. And as you can probably guess from the naming of the class, it uses MemoryStreams instead of writing things to disk…

Caution: Just a little caution here. If you are uploading large files, putting them all in-memory might not be such a great idea. In those cases, placing them on disk actually makes a bit of sense I guess. But for smaller files, storing them on permanent storage seems to offer very little.

Ok, after that little warning, let’s carry on!

The main problem with using the MultipartMemoryStreamProvider instead is that it doesn’t have quite as nice an API to work with. Instead, it exposes all posted data as a collection of HttpContent objects. This is a little bit more cumbersome to work with, but it still works fine. All you need to do is to cycle through them all, figure out which are files, and which are “regular” data.
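
In its rawest form, and just as a sketch of the idea before wrapping it up in something reusable, that looks something like this

var provider = await Request.Content.ReadAsMultipartAsync(); // no provider passed in, so nothing hits the disk
foreach (var content in provider.Contents)
{
    // A ContentDisposition header with a FileName means it is a file, otherwise it is a "regular" field
    var isFile = !string.IsNullOrEmpty(content.Headers.ContentDisposition.FileName);
    // ...and it can then be read using content.ReadAsByteArrayAsync() or content.ReadAsStringAsync() accordingly
}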

As I got this working, I decided to roll it into a reusable piece of code as I have been requested to do this more than once.

So I started out by defining the “return” value. The value to get after the parsing has completed. And I decided to build something like this

public class HttpPostedData
{
    public HttpPostedData(IDictionary<string, HttpPostedField> fields, IDictionary<string, HttpPostedFile> files)
    {
        Fields = fields;
        Files = files;
    }

    public IDictionary<string, HttpPostedField> Fields { get; private set; }
    public IDictionary<string, HttpPostedFile> Files { get; private set; }
}


A very simple class that encapsulates a dictionary containing the posted fields, and one containing the posted files. The HttpPostedFile and HttpPostedField classes are simple DTOs that look like this

public class HttpPostedFile
{
    public HttpPostedFile(string name, string filename, byte[] file)
    {
        //Property Assignment
    }

    public string Name { get; private set; }
    public string Filename { get; private set; }
    public byte[] File { get; private set; }
}

public class HttpPostedField
{
    public HttpPostedField(string name, string value)
    {
        //Property Assignment
    }

    public string Name { get; private set; }
    public string Value { get; private set; }
}

Ok, that was pretty simple! The next thing is to parse the incoming request and create a response like that. In a reusable fashion…

I decided to create an extension method called ParseMultipartAsync. The signature looks like this: Task<HttpPostedData> ParseMultipartAsync(this HttpContent postedContent). And as you can see from the signature, it is async using Tasks.

Inside the method I use ReadAsMultipartAsync() to asyncly (is that even a word?) parse the incoming content and get hold of one of the fabled MultipartMemoryStreamProvider instances.

Next, I new up a couple of generic dictionaries to hold the posted field values.

Tip: By using the constructor that takes a IEqualityComparer<>, and passing in StringComparer.InvariantCultureIgnoreCase I get case-insensitive key comparison, which can be really helpful in this case. And a lot of other cases…

I then loop through each of the HttpContent items in the provider. And for each one of them I determine whether it is a file or a field by looking at the FileName property of the ContentDisposition header. If it isn’t null, it is a file. If it is, it isn’t.

I then populate the corresponding dictionary with a new HttpPostedXXX instance. And then finally return a new HttpPostedData.

The code for that extension method looks like this

public static async Task<HttpPostedData> ParseMultipartAsync(this HttpContent postedContent)
{
    var provider = await postedContent.ReadAsMultipartAsync();

    var files = new Dictionary<string, HttpPostedFile>(StringComparer.InvariantCultureIgnoreCase);
    var fields = new Dictionary<string, HttpPostedField>(StringComparer.InvariantCultureIgnoreCase);

    foreach (var content in provider.Contents)
    {
        var fieldName = content.Headers.ContentDisposition.Name.Trim('"');
        if (!string.IsNullOrEmpty(content.Headers.ContentDisposition.FileName))
        {
            var file = await content.ReadAsByteArrayAsync();
            var fileName = content.Headers.ContentDisposition.FileName.Trim('"');
            files.Add(fieldName, new HttpPostedFile(fieldName, fileName, file));
        }
        else
        {
            var data = await content.ReadAsStringAsync();
            fields.Add(fieldName, new HttpPostedField(fieldName, data));
        }
    }

    return new HttpPostedData(fields, files);
}

It isn’t really that hard, but having it as an extension method makes it easy to use, and reuse.

Using it looks like this

[Route("upload")]
[HttpPost]
public async Task<HttpResponseMessage> UploadFile(HttpRequestMessage request)
{
if (!request.Content.IsMimeMultipartContent())
thrownew HttpResponseException(HttpStatusCode.UnsupportedMediaType);

var data = await Request.Content.ParseMultipartAsync();

if (data.Files.ContainsKey("file"))
{
// Handle the uploaded file "file"
// Ex: var fileName = data.Files["file"].Filename;
}

if (data.Fields.ContainsKey("description"))
{
// Handle the uploaded field "description"
// Ex: var description = data.Fields["description"].Value;
}

// Do something with the posted data

returnnew HttpResponseMessage(HttpStatusCode.OK);
}

As you can see, I take in a HttpRequestMessage instance as a parameter instead of specific parameters from the post. This gives us access to the request message in its “raw” form. It is the same that is available through the base class’ Request property. I just find it a bit more explicit to handle it like this. Through that parameter I can then access the HttpContent through the Content property and call the extension method, getting the required data in a simple way.

That was it for this time! Shorter than normal, but hopefully as useful!

And code? Of course! Here it is: DarksideCookie.AspNet.WebApi.FileUploads.zip (65.29 kb)

The source includes a web project with a simple AngularJS based form doing the upload. It doesn’t actually do anything with the uploaded file, but at least you can add a breakpoint and see it work…

Cheers!

Understanding OWIN (Katana) Authentication Middleware


As some of you might have noticed, I really like OWIN. I like the simplicity, and the extremely powerful things that you can do with it in a simple way. And the fact that I don’t have to create an IHttpModule implementation, and figure out the ASP.NET event to hook into, like I had to do to achieve the same thing before OWIN.

Katana, Microsoft’s implementation of OWIN, also offers a standardized way to handle authentication. And it is really easy to use, and not too hard to extend to work with your own identity providers. However, being me, I want to know how it works “under the hood”, and not just read a “how to build an authentication middleware” blog post…

Remember, knowing how things work “under the hood”, or “under the bonnet” if you speak the Queens English, makes it possible to do more things than just use it. Knowing how a combustion engine works (under the hood/bonnet of your car) makes it possible to add a turbo or two to it, or a compressor, or maybe tweak the fuel consumption and horse power you get from it. But let’s leave the car analogies and look at Katana authentication middleware.

Building, and using, authentication middleware means integrating with a few baseclasses, and obviously specific implementations for the authentication provider you choose. The baseclasses are all located in the assembly Microsoft.Owin.Security, in the namespace Microsoft.Owin.Security.Infrastructure, which is available in the NuGet package with the same name.

The classes we are interested in are the AuthenticationMiddleware<TOptions>, which inherits from OwinMiddleware, and the AuthenticationHandler and its subclass AuthenticationHandler<TOptions>. AuthenticationHandler<TOptions> actually does very little except hard type the options used by the handler.

Most information you find about Katana authentication middleware will only explain how to inherit these classes to get the desired behavior. Something that this post will also do…at the end… But I want to dig a little deeper as I am curious about how completely independent middleware can work together like they do, and enable an extendable authentication environment.

To be honest, it is mostly the CookieAuthenticationMiddleware that works together with the others, but seeing that CookieAuthenticationMiddleware is just another authentication middleware, it still means that completely separate middlewares manage to co-operate in some way…

Let’s start at the middleware end of things as that is what we plug in to the OWIN pipeline. The AuthenticationMiddleware<TOptions> is just another piece of OWIN middleware, and as such, its constructor takes the “standard” next middleware and “options” object. But being that AuthenticationMiddleware<TOptions> is generic, it types the options, and forces them to inherit from AuthenticationOptions.

Side note: The AuthenticationOptions baseclass contains 3 properties, AuthenticationType, AuthenticationMode and Description. AuthenticationType is the name of the authentication type, which is pretty much the “identifier” of it. AuthenticationMode defines whether the authentication is Active or Passive. Active means that it should authenticate the user actively on each request, while Passive will wait until told to do so. (Active would be something like Windows authentication that can be performed “actively” without redirecting the user and so on). And finally the Description is used by an application that wants to look at what authentication methods are available.

Note: It is worth noting that the Caption on the Description object is used by the default ASP.NET MVC templates when creating the list of external login providers. If the caption is not set, the provider will not be shown in the list…

Being that AuthenticationMiddleware<TOptions> inherits from OwinMiddleware, it is required to override the Invoke() method. In this override, it creates a new AuthenticationHandler<TOptions> by calling the abstract method CreateHandler(). So the implementing class will override this, and return a suitable AuthenticationHandler<TOptions>.
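
Just to make that a bit more concrete, here is a rough sketch of what a custom options class and middleware could look like. None of this is taken from a real provider; the “Acme” names, the callback path and so on are just made up for illustration (the handler itself is sketched a bit further down).

public class AcmeAuthenticationOptions : AuthenticationOptions
{
    public AcmeAuthenticationOptions()
        : base("Acme") // the AuthenticationType, i.e. the "identifier"
    {
        AuthenticationMode = AuthenticationMode.Passive;
        CallbackPath = new PathString("/signin-acme");
        Description.Caption = "Acme"; // used when listing available providers
    }

    public PathString CallbackPath { get; set; }
    public string SignInAsAuthenticationType { get; set; }
}

public class AcmeAuthenticationMiddleware : AuthenticationMiddleware<AcmeAuthenticationOptions>
{
    public AcmeAuthenticationMiddleware(OwinMiddleware next, IAppBuilder app, AcmeAuthenticationOptions options)
        : base(next, options)
    {
        // Fall back to the default "sign in as" type if none was configured
        if (string.IsNullOrEmpty(Options.SignInAsAuthenticationType))
            Options.SignInAsAuthenticationType = app.GetDefaultSignInAsAuthenticationType();
    }

    protected override AuthenticationHandler<AcmeAuthenticationOptions> CreateHandler()
    {
        return new AcmeAuthenticationHandler();
    }
}

Registering it would then just be a matter of calling app.Use(typeof(AcmeAuthenticationMiddleware), app, new AcmeAuthenticationOptions()), normally wrapped up in a nice UseAcmeAuthentication() extension method.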

Once it has an authentication handler instance, it initializes the handler using some internal stuff. The internal stuff is actually quite interesting though, and if we just ignore that some of it is done in AuthenticationHandler<TOptions> and some of it in its baseclass, this is what is being done (simplified).

First, the options object is stored in a protected property so that we can access it from our sub class. So is the Owin context, together with a few other protected properties. The only one really worth noting is the one called Helper though. The Helper property holds an object that can help out with some of the things our handler needs to do, for example set the current identity and verify that a challenge request has been issued for our handler (more about that later).

Next, it hooks the handler into 2 things. First, it uses the IOwinRequest.RegisterAuthenticationHandler() method to “add the handler to the system”. This can then be used by IOwinContext.Authentication to authenticate requests, as well as figure out what authentication middleware is available. Secondly it adds a callback to IOwinResponse.OnSendingHeaders(), which is called when headers are about to be sent out, which is the last point when we can change the request.

Once those hooks have been added, InitializeCoreAsync() is called. This makes it possible for our subclass to initialize itself if needed. Then it checks to see if the AuthenticationMode is set to Active. If it is, the handler is requested to authenticate the user straight away, which is done by calling AuthenticateCoreAsync (by way of the AuthenticateAsync method).

If AuthenticateCoreAsync returns an AuthenticationTicket with an Identity set, that identity is then set as the current user for the request. If not, the request just carries on unauthenticated.

Note: A handler with the AuthenticationMode set to Active will run on every request, setting up the user based on some form of evidence available in the request. It will not cause a cookie to be issued by the CookieAuthenticationMiddleware, and instead will expect the evidence used to do the authentication to be available in every request.

Once the handler has been initialized, the middleware calls InvokeAsync(), which needs to be overridden to actually do something. By default, it just returns false, which will cause the next middleware to be executed. However, if InvokeAsync() returns true, it stops the request execution and starts travelling “back out” the pipeline.

So, what should we be doing in InvokeAsync() considering that the default implementation doesn’t actually do anything? Well, generally, the handler will see if the current request is aimed at the configured callback path. If not, it will just return false, and request execution will just carry on.

If the request really is aimed at the callback path, it calls AuthenticateAsync, which causes AuthenticateCoreAsync() to be called. Inside AuthenticateCoreAsync(), which we need to override, we should use the current request to figure out who the user is. Once that is done, we should return an AuthenticationTicket containing a ClaimsIdentity if the user was authenticated. The ClaimsIdentity’s AuthenticationType should be set to the AuthenticationType defined in the handlers options.

Note: Most authentication middleware has a SignInAsAuthenticationType property on the options object. If this is set, the middleware will sign in using another authentication type than the default for the current middleware. If this is set, the ClaimsIdentity will need to have its AuthenticationType replaced before being sent to SignIn().

Note 2: If SignInAsAuthenticationType isn’t set manually, the middleware will in most cases set this to IAppBuilder.GetDefaultSignInAsAuthenticationType(), causing it to sign in as the default.

If the returned AuthenticationTicket includes an Identity, it indicates that the user has been authenticated, and we need to do something about it. And by “do something about it”, I mean that we should call IOwinContext.Authentication.SignIn() to sign in the user. Or rather prepare the system to sign in the user… SignIn() doesn’t actually do anything but place an AuthenticationResponseGrant instance in the OWIN environment dictionary.

If the authentication fails for some reason, InvokeAsync() should handle this in some way. Most likely by setting a corresponding HTTP status code, or set up a redirect, and return true to cause the execution of the request to stop.
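
Pulling those pieces together, a very simplified handler could look something like the sketch below. Once again, the “Acme” names, the “code” query string parameter and the ValidateCodeWithProvider() helper are all made up; a real handler would do whatever its identity provider requires.

public class AcmeAuthenticationHandler : AuthenticationHandler<AcmeAuthenticationOptions>
{
    // Called on every request. Returning true stops further execution of the pipeline.
    public override async Task<bool> InvokeAsync()
    {
        // Only act on requests aimed at the configured callback path
        if (Options.CallbackPath != Request.Path)
            return false;

        var ticket = await AuthenticateAsync(); // ends up calling AuthenticateCoreAsync()
        if (ticket != null && ticket.Identity != null)
        {
            // "Sign in" by placing an AuthenticationResponseGrant in the environment.
            // The cookie middleware turns that into a cookie on the way out.
            Context.Authentication.SignIn(ticket.Properties, ticket.Identity);
            Response.Redirect(ticket.Properties.RedirectUri ?? "/");
            return true;
        }

        // Authentication failed, handle it in some way
        Response.StatusCode = 401;
        return true;
    }

    // Figures out who the user is based on the current (callback) request
    protected override async Task<AuthenticationTicket> AuthenticateCoreAsync()
    {
        var code = Request.Query.Get("code"); // hypothetical "evidence" from the provider
        var identity = await ValidateCodeWithProvider(code); // hypothetical provider specific call

        if (identity == null)
            return new AuthenticationTicket(null, new AuthenticationProperties());

        // Re-issue the identity as the configured "sign in as" authentication type
        if (!string.IsNullOrEmpty(Options.SignInAsAuthenticationType))
            identity = new ClaimsIdentity(identity.Claims, Options.SignInAsAuthenticationType);

        return new AuthenticationTicket(identity, new AuthenticationProperties());
    }

    private Task<ClaimsIdentity> ValidateCodeWithProvider(string code)
    {
        // Provider specific validation would go here...
        throw new NotImplementedException();
    }
}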

As the request is turning back around, either through InvokeAsync() returning true, or by the next middleware returning, the handler is “torn down”, which causes 3 things to happen.

First it calls ApplyResponseAsync()/ApplyResponseCoreAsync, which I will get back to in a little while. After that, it calls TeardownCoreAsync() allowing us to remove anything that isn’t needed anymore. And finally it unregisters the handler using Request.UnregisterAuthenticationHandler().

One thing to note is that if InvokeAsync() returns true, and the request is “short circuited”, nothing is actually written to the response. It is up to the ApplyResponseCoreAsync(), which is called in the teardown of the handler, to do that if needed.

So what should we be doing in the ApplyResponseCoreAsync()? Well, normally we shouldn’t even override it as the default implementation will delegate the call to 2 other methods that we are more interested in. At least one of them…

First it calls ApplyResponseGrantAsync() and then it calls ApplyResponseChallengeAsync(). In most cases, we can ignore ApplyResponseGrantAsync(). This is used by for example the CookieAuthenticationMiddleware to take the AuthenticationResponseGrant that we added during user authentication, and turn that into a cookie that can be used in subsequent calls to authenticate the user.

Side note: And there it is! The way the middlewares communicate. The individual authentication middleware puts defined data in the Owin environment dictionary using IOwinContext.Authentication.SignIn() for example, and other middlewares can then pick up that data and act on it.

Side note 2: This is why the CookieAuthenticationMiddleware should always be registered before all other authentication middlewares in the OWIN pipeline. It needs to be able to look at the authentication responses that the other middlewares have added to the environment, and turn them into a cookie (or remove the cookie) if needed. And this means it needs to be the last authentication middleware to look at the response going out through the pipeline, and thus be the first registered.

Back to the ApplyResponseChallengeAsync() method, which we actually need to care about. Inside this method, we should check the response to see if it has a 401 status code. If it does, something further down the pipeline has said that the user is unauthorized, and needs to be authenticated. In that case, we need to see if there is an AuthenticationResponseChallenge placed in the OWIN Environment dictionary, and whether or not that challenge defines the current authentication method.

Tip: This is quite easily done by calling Helper.LookupChallenge(Options.AuthenticationType, Options.AuthenticationMode)

If there is such a challenge, with the correct authentication method defined, the handler should find a way to authenticate the user. Normally this means redirecting the user to some identity provider, which will then redirect the user back once the authentication has been performed.

Note: It is customary to use a correlation id to prevent CSRF when doing OAuth2 for example. This correlation id is easily created using the GenerateCorrelationId() method that is defined in the AuthenticationHandler base class. This method also has a corresponding ValidateCorrelationId() that can be used to verify the correlation id in AuthenticateCoreAsync() when the response from the identity provider is evaluated.

Remember that ApplyResponseAsync(), and subsequently ApplyResponseGrantAsync() and ApplyResponseChallengeAsync(), will also be called once when the headers are being written to the client due to the AuthenticationHandler hooking up a callback using Response.OnSendingHeaders() during the initialization phase. This makes sure that all handlers have a chance to create a challenge, or handle a response grant, right before the headers are sent to the user, which, as mentioned before, is the last point in time that we can modify the response. At least modify it by returning a redirect if needed…

Ok, that is pretty much all you need to know!

So, what do we need to do to create a new authentication middleware to support some new identity provider? Well, if we ignore the somewhat deep walkthrough above, and just do the quick walkthrough, it looks like this.

First we need to create an authentication middleware that inherits from AuthenticationMiddleware<TOptions>, as well as an options class that inherits from AuthenticationOptions. Inside the new middleware’s constructor, we need to check the passed in options, and set any defaults that we might want to use.

Note: One thing we might want to set, unless already set by the client, is the AuthenticationOptions.Description.Caption. This is, as mentioned before, the caption that will be displayed by an application that lists the available authentication options.
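To make the quick walkthrough a bit more concrete, here is a minimal sketch of such an options class. All names and defaults (the “Custom” authentication type, the “/signin-custom” callback path) are made up for the example and not taken from any real middleware:

public class CustomAuthenticationOptions : AuthenticationOptions
{
    public CustomAuthenticationOptions()
        : base("Custom") // made-up default AuthenticationType
    {
        // The caption shown by applications that list the available login options
        Description.Caption = "Custom";
        // The local, "virtual" path that the identity provider redirects back to
        CallbackPath = new PathString("/signin-custom");
        AuthenticationMode = AuthenticationMode.Passive;
    }

    public PathString CallbackPath { get; set; }
}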

Next, we need to override the CreateHandler() method, and implement it so that it creates a new handler and returns it.
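Sticking with the made-up names from the options sketch above, the middleware itself could look roughly like this:

public class CustomAuthenticationMiddleware : AuthenticationMiddleware<CustomAuthenticationOptions>
{
    public CustomAuthenticationMiddleware(OwinMiddleware next, CustomAuthenticationOptions options)
        : base(next, options)
    {
        // Verify the passed in options and set any missing defaults here
    }

    protected override AuthenticationHandler<CustomAuthenticationOptions> CreateHandler()
    {
        // A new handler instance is created for each request
        return new CustomAuthenticationHandler();
    }
}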

Obviously we need to create this handler as well… The handler needs to be a class that inherits from AuthenticationHandler<TOptions>, with the TOptions generic parameter being the same as the one used by the newly created middleware.

Inside the new handler, we need to override at least 3 things. First we need to override InvokeAsync(). Inside this override, we need to check if Request.Path is the same as the defined callback path. Normally this callback path is added to the options, so that the client can configure it to make sure it doesn’t collide with anything else.

Note: The callback path is purely “virtual”. It will never have an MVC route or anything backing it. It is just for the authentication middleware to figure out that a request is actually the identity provider passing back information about an authenticated user.

If the Request.Path is not equal to the callback path, we just return false and let the request go on through the pipeline. Otherwise we need to call AuthenticateAsync() to get hold of an AuthenticationTicket. If the ticket contains an identity, the user has been properly authenticated, and we should sign in the user using IOwinContext.Authentication.SignIn() before returning true. If the ticket does not contain an identity, something has gone wrong and we need to handle that by redirecting the user or something similar.
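A hedged sketch of such an InvokeAsync() override inside the handler, glossing over proper error handling and where to send the user after sign in, could look something like this:

public override async Task<bool> InvokeAsync()
{
    // Only handle requests aimed at the configured callback path
    if (Request.Path != Options.CallbackPath)
        return false;

    var ticket = await AuthenticateAsync();
    if (ticket != null && ticket.Identity != null)
    {
        // Prepare the sign in, and let the cookie middleware turn it into a cookie
        Context.Authentication.SignIn(ticket.Properties, ticket.Identity);
        Response.Redirect("/"); // or a return URL carried in ticket.Properties
        return true;
    }

    // Something went wrong, stop the request here
    Response.StatusCode = 400;
    return true;
}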

Next we need to override AuthenticateCoreAsync(). In the override we need to look at the current request to figure out who the user is. The callback generally includes data from the identity provider in some form. This data should be verified, a ClaimsIdentity with the corresponding information should be created, and it should be returned inside an AuthenticationTicket.
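As a rough example, assuming the identity provider passes back something like a token in the query string (the “token” parameter and the claim values are made up for the example), it could look like this:

protected override Task<AuthenticationTicket> AuthenticateCoreAsync()
{
    var properties = new AuthenticationProperties();

    // Whatever evidence the identity provider sent back, for example a token in the query string
    var token = Request.Query.Get("token");
    if (string.IsNullOrEmpty(token))
        return Task.FromResult(new AuthenticationTicket(null, properties));

    // Normally the token would be verified, and the user details looked up, before this point
    var identity = new ClaimsIdentity(Options.AuthenticationType);
    identity.AddClaim(new Claim(ClaimTypes.Name, "name retrieved using the token"));

    return Task.FromResult(new AuthenticationTicket(identity, properties));
}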

Finally we need to override ApplyResponseChallengeAsync(). In this method we need to check to see if the response has a status code of 401 (Unauthorized). If that is the case, we use Helper.LookupChallenge() to see if there is an AuthenticationResponseChallenge in the environment, and if it requests the current authentication handler to cause the user to log in.

If that is the case, we create a redirect URL to the identity provider, including all the required config values (retrieved from the options object), and redirect the user there. This URL generally includes a return URL, which obviously should correspond to the callback path checked in InvokeAsync().
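A sketch of that, with a completely made-up identity provider URL and query string format, might look something like this:

protected override Task ApplyResponseChallengeAsync()
{
    // Only react if something downstream returned a 401...
    if (Response.StatusCode != 401)
        return Task.FromResult<object>(null);

    // ...and that something actually asked for this middleware to issue the challenge
    var challenge = Helper.LookupChallenge(Options.AuthenticationType, Options.AuthenticationMode);
    if (challenge == null)
        return Task.FromResult<object>(null);

    // Build the redirect to the identity provider, including the return URL
    var returnUrl = Request.Scheme + "://" + Request.Host + Options.CallbackPath;
    var redirectUrl = "https://idp.example.com/authorize?returnUrl=" + Uri.EscapeDataString(returnUrl);

    Response.Redirect(redirectUrl);
    return Task.FromResult<object>(null);
}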

That is actually all there is to building a new piece of authentication middleware. At least if it is a Passive one. And one that relies on the CookieAuthenticationMiddleware (or something similar) to do the cookie stuff.

Note: You might want to create a nice IAppBuilder extension method to handle the registration as well… It isn’t strictly necessary, but it looks better to write app.UseCustomAuthentication(myOptions) than app.Use&lt;CustomAuthenticationMiddleware&gt;(myOptions)…
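Such an extension method is just a thin wrapper around the registration, something along these lines:

public static class CustomAuthenticationExtensions
{
    public static IAppBuilder UseCustomAuthentication(this IAppBuilder app, CustomAuthenticationOptions options)
    {
        return app.Use<CustomAuthenticationMiddleware>(options);
    }
}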

If you want to build something like the CookieAuthenticationMiddleware that handles more things, like sign out etc, it becomes more complicated. The same goes if you need to implement single sign out, like the WsFederation authentication middleware does. Luckily, most of these things have already been implemented by Microsoft in some form in other authentication handlers that can be used to get inspiration. And as it is all open source now, we can just go to CodePlex and see how they have solved those more complicated scenarios.

That’s it for this time! I hope it has given a bit more of an insight into how Katana authentication middlewares work, and what you need to do if you have a custom identity provider that you need to integrate with.

Getting the ASP.NET 5 samples to work on Windows with the “new” dnvm (and beta 5)


[UPDATE] If you clone the dev branch’s samples instead of the master branch, it should be easier. You will still need to update the Startup.cs file for now though. Pull-request made… (That was a long blog post to no use…still gives some insight into how it works though…) [END UPDATE]

[UPDATE 2] Now updated to beta-5 if anyone still wants it considering the update above… [END UPDATE 2]

Yesterday I finally had time to sit down and play with the new ASP.NET 5 runtime, which is something I have wanted to do for quite some time. However, it kind of annoyed me that I couldn’t just install the ASP.NET 5 runtime, clone the samples from GitHub and get started, as it generated a whole heap of errors. After an hour or so of Googling and trying things out, I finally got it working, so I thought I would write down what I did to get it to work.

Note: This codebase is moving ridiculously fast, so this post is going to be old within a very short while. Everything is based on the code as of today, March 20th 2015.

The first step is to install the ASP.NET 5 version manager on your machine. This is done by going to the ASP.NET GitHub repo and getting the PowerShell command needed.

A bit down on the home page, you will see a section called “Install the .NET Version Manager (DNVM)”. Under this, it mentions that it was formerly known as kvm… Kvm is still the, as they call it, “stable-ish” version, but the cool stuff is handled with the dnvm, which is called “optimistic”.

The thing we are looking for is the PowerShell script that looks like this

@powershell -NoProfile -ExecutionPolicy unrestricted -Command "&{$Branch='dev';iex ((new-object net.webclient).DownloadString('https://raw.githubusercontent.com/aspnet/Home/dev/dnvminstall.ps1'))}"

To install the .NET Version Manager, paste that string into a console window and press enter. This will generate a screen similar to this

image

Now, the dnvm is installed on your machine. To be honest, it is simply a PowerShell script and a cmd file located at %USERPROFILE%\.dnx. This path, or to be honest one that goes to %USERPROFILE%\.dnx\bin, is added to the PATH environment variable to make it accessible in the console.

Next, we need to install an actual runtime environment. This is easily done by running the command

dnvm upgrade

This will generate some nicely colored ASCII goodness running, and at the end of it, it will tell you that it has installed version XXX at some location on your machine. In my case, it installed version 1.0.0.0-beta4-1111566. Wait…what!? beta-4? I want beta-5!

Beta-4 is what you get from the “stable”/master branch on GitHub. However, dnvm now supports getting stuff from the “unstable”/dev branch as well. All you need to do is to append -u or -unstable

dnvm upgrade -u

Now it installed what I needed, the 1.0.0-beta5-11682.

Now you have one runtime installed. In my case, the full CLR beta5 (and 4) version(s). If you want to install the CoreCLR version, you can just run the upgrade command again, but add the switch -Runtime CoreCLR

dnvm upgrade -u -runtime CoreCLR

Note: When installing the CoreCLR version, it will also pre-compile the code to native images to make it faster to startup. So it takes a little bit longer to do…

If you want to see what versions you have installed, you can run the list command

dnvm list

In my case, that generates a list that looks like this at the moment

image

As you can see, the CoreCLR is set as the active version. This is because I installed that last. If I want to switch to the full CLR, I just run

dnvm use 1.0.0-beta5-11682 -r CLR

Yes, it is really that easy to switch between versions of the runtime… Try doing that in the current .NET world…

Ok, now that we have the runtime installed, let’s go ahead and get the samples from GitHub. They are in the same repo as the dnvm stuff that we used before, so all we need to do is to get the “clone URL” from that page, and clone the repo to a local folder on our machine. Like this

git clone https://github.com/aspnet/Home.git

Once we have the cloned repo, we can aim our command line at the sample we want to run. In my case, that would be the HelloMvc sample, which is located at “Home/samples/latest/HelloMvc”.

Next we need to restore the required packages. This is done using the tool called dnu.exe and the command restore

dnu restore

This will print out a whole heap of data on the screen explaining what it is doing. The short version being that it is downloading all the NuGet packages needed to run the application.

The samples in the /latest/ folder have a very simple definition of the versions for the packages needed. They are defined as 1.0.0-*, which will get the latest version available. At the time of writing that would be beta-5…

The new runtime is built around a much more modular framework, downloaded from NuGet. This way, we can get a much more lean deployment, and easier upgrades.

Once this is done, GitHub tells us that all we need to do is run the sample by running

dnx . web

Unfortunately, that results in an error like this

image

Yeah, a big fat System.InvalidOperationException with the message “Failed to resolve the following dependencies for target framework 'DNX,Version=v4.5.1'” and so on…

Ok, so that’s where the GitHub instructions fail, and you start scratching your head as to why.

Well, first of all, the project.json file for the samples is based on an “old” naming of the environments. That means that we have to update it to the new one.

The old file includes a “frameworks” fragment that says

    "frameworks": {
        "aspnet50": {},
        "aspnetcore50": {}
    }

Unfortunately, that is, as said before, the old naming standard. The new one is dnxXXX, not aspnet. So it needs to be changed to

"frameworks": {
        "dnx451": {},
        "dnxcore50": {}
  }

And while in the project.json, it is pretty obvious that all dependencies are based on the XXX-beta3 versions. So let’s update them to the latest. And why not make it the REALLY latest at all times: let’s just change the dependencies to 1.0.0-*, except for Mvc, which is 6.0.0-*.

Now that we have fixed that, we can try running “dnx . web” again.

Oops, same error! However, this time we have changed the project.json to use “another” framework. So let’s try running “dnu restore” again as suggested in the error message. And once again, it runs through and prints out a whole heap of information about what it is fetching from NuGet.

dnu restore

Ok, that just tells me that it can’t find the beta-5 packages… What’s up with that? Well, the beta-5 packages are on another NuGet feed, or rather MyGet feed. To get access to them, you need to add that feed to the available NuGet feeds.

If you open the Visual Studio 2015 Preview and go to “Tools > Options…” and then open the node “NuGet Package Manager > Package Sources”, you will get something like this

image

Add a new source by pressing the huge plus sign. Then enter some useful name together with the source “https://www.myget.org/F/aspnetvnext/api/v2/”, and then press “Update” to persist it

image

I’m not 100% sure about what feeds are used, but I have the following active (in the following order)

https://api.nuget.org/v3/index.json

https://www.myget.org/F/aspnetrelease/api/v2

https://www.myget.org/F/aspnetvnext/api/v2/

https://www.nuget.org/api/v2/

And that seems to work…

Close the Options window, return to the command line and run “dnu restore”…again... This time it should just run through without any errors.

Ok, let’s try it again, “dnx . web”. And OH NOES! it failed again…

image

But this time it is another exception.

This time it says “System.IO.FileLoadException: Could not load file or assembly 'HelloMvc' or one of its dependencies”. Ok, that doesn’t seem right… And if you keep on reading, it says “error CS1061: ‘IApplicationBuilder' does not contain a definition for 'UseServices' and no extension method 'UseServices' accepting a first argument of type 'IApplicationBuilder' could be found (are you missing a using directive or an assembly reference?)”. Hmm…that sounds really wrong…

That is because we don’t register services using the UseServices() extension method anymore; that extension method has been removed. Instead, we use a method called ConfigureServices(). So we need to modify the startup.cs file and remove the call to UseServices() and add a method called ConfigureServices(). Like this

namespace HelloMvc
{
    public class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddMvc();
        }

        public void Configure(IApplicationBuilder app)
        {
            app.UseErrorPage();

            app.UseMvc();

            app.UseWelcomePage();
        }
    }
}


Ok, let’s try again. “dnx . web”… image

And there it is, a running sample. And if you open your browser and navigate to http://localhost:5001/ you will see something like this

image

Success! We finally have the HelloMvc sample up and running!

However, it is still running on the full CLR. What if we want to switch to the Core CLR? Well, just run “dnvm use 1.0.0-beta5-XXX -r CoreCLR”, depending on your version, to switch to the CoreCLR. Then run “dnx . web” to start the server, and you should be good to go. If you get any errors, run “dnu restore” to make sure you got the right dependencies for the CoreCLR.

That’s it! Yes, there are a few steps to get it to work, but at least it works now! Or should I say “works for now”…this could change at any moment to be honest. But hopefully this will get you going with the samples.

ASP.NET 5 demo code from SweNug


As promised during my talk at SweNug Stockholm last week, I have now uploaded my code for anyone to play around with. I just want to highlight that it is based around VERY early bits, and new versions might cause problems. It is built using runtime 1.0.0-beta5-11533. But even if it changes, the code should give you a clue about how it works.

Code is available here: ASP.NET 5 Demo.zip (714.47 kb)


Trying to understand the versioning “mess” that is .NET 2015


Right now there is a lot of talk about the next iteration of the .NET platform, and the different versions and runtimes that are about to be released. Unfortunately, it has turned into a quite complicated situation when it comes to the versions of things being released.

I get quite a few questions about how it all fits together, and I try answering them as best as I can. However, as the questions keep popping up over and over again, I thought I would sum up the situation as I have understood it.

Disclaimer: Everything in this post is “as I have understood it”. I am not working for Microsoft, and I am in no way or form guaranteeing that this is the right description. This is just how I understand the situation. Hopefully it is fairly close to the real world.

Let’s start at the top…

.NET 2015

The first thing to clear up is the “.NET 2015” name. This is an umbrella name for the things that are about to be released in the .NET space. It includes a whole heap of things, including .NET 4.6, .NET Core, ASP.NET 5, which in turn contains ASP.NET MVC 6, Entity Framework 7, SignalR 3 etc. It is just an umbrella name, and not a new version name for .NET or anything like that.

.NET 4.6

Inside the .NET 2015 “release”, there is a new version of the .NET framework called .NET 4.6. It is “just” a new version of .NET, i.e. the same old .NET framework we know and love in a new version. It includes all the stuff we have today, with the addition of new features in a lot of areas. Just as we would expect a new version of the .NET framework to do.

It still includes WinForms, WPF, WCF and all the other things we are used to having. It also includes ASP.NET as we are used to, supporting all the current stuff, such as ASP.NET MVC, WebForms etc, as well as a new ASP.NET implementation (ASP.NET 5). And as expected with a new version of .NET, there are new versions of most of the areas, including WebForms 4.6.

So just to make it clear, with .NET 4.6 you will be able to build anything you can build today, including ASP.NET WebForms application (version 4.6), which seems to be what most people are asking about.

.NET Native

This is one of the areas I know the least about, but as I understand it, .NET Native is a special runtime used when building Universal Apps. Applications using this runtime are compiled into native code instead of IL, and in that way offer better performance on low-power devices like phones. This means faster startup times etc. It is as far as I know only used for Universal Apps, so unless you build those, you can pretty much ignore it.

.NET Core

.NET Core is a new framework for building web- and console applications. It is a subset of the full .NET framework, a bit like the .NET Client Profile that we used to have (or might even have today). However, there are some major differences between .NET Core and anything we have seen so far in the .NET space.

The .NET Core framework is a modular framework with all of its components deployed through NuGet. This way we get a framework that we can mix and match ourselves to get what we need for our applications. This in turn gives us a smaller package to deploy. Small enough to include it in the actual deployment. So when we deploy a .NET Core application, we actually deploy the “whole” framework with our application, meaning that we don’t need to install it on the machine we are deploying to. It also means that we can run .NET Core-based apps with different versions side by side on the same machine.

However, it is a subset, and a pretty small one at that at the moment. It includes the things needed to build web and console apps, and nothing else.

The parts inside this framework have been highly optimized to be as lean as possible. The goal being to make it small enough to deploy with the application, but also to make it lean enough from a performance point of view (memory etc), enabling a higher density of applications on a server.

It will probably grow in the future, but for now, it is a pretty small subset of functionality that has been “ported” to this framework.

It also has a pretty massive “kicker”. It works cross-platform, meaning that it runs on Windows, as well as Linux and Mac! However, being that it is a subset, it means that we can only run ASP.NET and console applications specifically targeted at this framework on these other platforms. It does NOT mean that we can just take any .NET app and use it on a Mac.

Unfortunately, the .NET Core framework has got the name .NET Core 5. However, it is not the next version of .NET 4.5. It is completely separate. I do agree with some people who have suggested naming this XNET 1.0 (cross-platform .NET 1.0) or something similar to make it obvious that it is related to .NET, but not the 5th version of it.

ASP.NET 5

ASP.NET 5 is a new ASP.NET implementation built so that it enables targeting both .NET 4.6 and .NET Core. This means for example that it does not have a dependency on System.Web, which has not been ported.

Note: It has some major changes in the way that it works, its feature set and so on, but that is all for another post. This post is just about trying to clear up the confusion with the versions.

When building ASP.NET 5 applications, we can choose to target .NET 4.6, and get ALL the bells and whistles that we are used to, or the .NET Core framework with its somewhat limited feature set. Or we can choose to target them both and use #if/#else compiler directives to get different implementations for the 2 different frameworks when using features that are not available on both.
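As a small, hedged example of what that could look like, assuming the dnx451/dnxcore50 monikers mentioned earlier (which, as far as I know, surface as the DNX451 and DNXCORE50 compilation symbols, but check your tooling):

public static class RuntimeInfo
{
    public static string Describe()
    {
#if DNX451
        // Full CLR: the complete .NET surface area, including things like AppDomain, is available
        return "Running on the full .NET framework in " + System.AppDomain.CurrentDomain.FriendlyName;
#else
        // CoreCLR: only the ported subset is available, so stick to the common API surface
        return "Running on .NET Core";
#endif
    }
}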

At the moment, I do believe that most developers will choose to target .NET 4.6 when building ASP.NET 5 applications. Why? Well, the subset of features offered in the .NET Core framework will probably be too limiting for a lot of scenarios. And any external libraries will need to be updated to support .NET Core to be able to use them. So at least in the beginning, there will be a limited number of third-party libraries that can be added to an application targeting .NET Core.

ASP.NET MVC 6

ASP.NET MVC 6 is a completely new implementation of the ASP.NET MVC framework that we have had for a while now. However, it is built to work using both .NET 4.6 and the .NET Core framework. It has also been rebuilt as an OWIN middleware, making it possible to host it outside of IIS. On top of that, it has also been united with ASP.NET Web Pages and ASP.NET Web Api, making them all one big, happy family.

It does include some new features like View Components, TagHelpers, attribute-based routing etc, but it seems as though most of the focus has gone into getting it to work with the .NET Core framework, which definitely is no small feat.

ASP.NET Web Api has also been changed around a bit, and works somewhat differently than before. Mostly due to the fact that it is now a part of ASP.NET MVC and not its own thing with its own base classes etc.

Entity Framework 7 (and 6)

.NET 2015 also comes with a new implementation of EF called Entity Framework 7. However, this is not just another version. Instead, version 7 is another complete rewrite made to work on .NET Core. This means that it is a much smaller framework with some limitations.

Being that I’m not a massive EF person, I haven’t looked into this, but it seems as though version 7 is somewhat of a step back feature-wise. However, it has the added bonus of working with .NET Core. And if you decide to target .NET 4.6 instead of .NET Core, you can still happily use EF 6 and get all the bells and whistles that you are used to.

SignalR 3

SignalR has also got a bumped version with support for .NET Core. I have not looked at all at this new version, but I do expect it to be pretty much on par with version 2, but with .NET Core support. But once again, if it isn’t, you can always just target the full .NET framework and use the current version.

C#

In this iteration, we also get a new version of C#, version 6. But that isn’t really why I have the headline C#. The reason is that I want to mention that this is all C# at the moment, with no VB support. I assume this will be added in the future, but so far I have heard nothing about it.

Conclusion

The next iteration of .NET will include some MAJOR changes. At least in the web-space. If you stay out of ASP.NET, not a whole lot will change as such. It will just be another version. However, if you do work with ASP.NET, and decide to move on to ASP.NET 5, there are some major changes. And by major, I mean MASSIVE. However, they are voluntary. You will still be able to use the current stuff if you don’t want to start targeting “the new stuff”.

It is very exciting times, with a LOT of changes on the way. Unfortunately, the versioning story has become very complicated. Hopefully it will sort itself out and become easier over time.

Integrating with Github Webhooks using OWIN


For some reason I got the urge to have a look at webhooks when using GitHub. Since it is a feature that is used extensively by build servers and other applications to do things when code is pushed to GitHub etc, I thought it might be cool to have a look at how it works under the hood. And maybe build some interesting integration in the future…

The basic idea behind it is that you tell GitHub that you want to get notified when things happen in your GitHub repo, and GitHub makes sure to do so. It does so using a regular HTTP call to an endpoint of your choice.

So to start off, I decided to create a demo repo on GitHub and then add a webhook. This is easily done by browsing to your repo and clicking the “Settings” link to the right.

image

In the menu on the left hand side in the resulting page, you click the link called “Webhooks & Services”.

image

Then you click “Add webhook” and add the “Payload URL”, the content type to use and a secret. You can also define whether you want more than just the “push” event, which tells you when someone pushed some code, or if that is good enough. For this post, that is definitely enough… Clicking the “Add webhook” button will do just that, set up a webhook for you. Don’t forget that the hook must be active to work…

image

Ok, that is all you need to do to get webhooks up and running from that end!

There is a little snag though… GitHub will call your “Payload URL” from the internet (obviously…). This can cause some major problems when working on your development machine. Luckily this can be easily solved by using a tool called ngrok.

ngrok is a secure tunneling service that makes it possible to easily expose a local http port on your machine on the net. Just download the program and run it in a console window passing it the required parameters. In this case, tunneling an HTTP connection on port 4567 would work fine.

ngrok http 4567

image

The important part to note here is the http://e5630ddd.ngrok.io forwarding address. This is what you need to use when setting up the webhook at GitHub. And if you want more information about what is happening while using ngrok, just browse to http://127.0.0.1:4040/.

Ok, so now we have a webhook set up, and a tunnel that will make sure it can be called. The next thing is to actually respond to it…

In my case, I created a Console application in VS2013 and added the NuGet packages Microsoft.Owin.SelfHost and Newtonsoft.Json. Next, I created a new Owin middleware called WebhookMiddleware, as well as a WebhookMiddlewareOptions and WebhookMiddlewareExtensions, to be able to follow the nice UseXXX() pattern for IAppBuilder. It looks something like this

public class WebhookMiddleware : OwinMiddleware
{
    private readonly WebhookMiddlewareOptions _options;

    public WebhookMiddleware(OwinMiddleware next, WebhookMiddlewareOptions options)
        : base(next)
    {
        _options = options;
    }

    public override async Task Invoke(IOwinContext context)
    {
        await Next.Invoke(context);
    }
}

public class WebhookMiddlewareOptions
{
    public WebhookMiddlewareOptions()
    {
    }

    public string Secret { get; set; }
}

public static class WebhookMiddlewareExtensions
{
    public static void UseWebhooks(this IAppBuilder app, string path, WebhookMiddlewareOptions options = null)
    {
        if (!path.StartsWith("/"))
            path = "/" + path;

        app.Map(path, (app2) =>
        {
            app2.Use<WebhookMiddleware>(options ?? new WebhookMiddlewareOptions());
        });
    }
}

As you can see, we use the passed in path to map when the middleware should be used…

Ok, now that all the middleware scaffolding is there, let’s just quickly add it to the IAppBuilder as well…

class Program
{
    static void Main(string[] args)
    {
        using (WebApp.Start<Startup>("http://127.0.0.1:4567"))
            Console.ReadLine();
    }
}

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        app.UseWebhooks("/webhook", new WebhookMiddlewareOptions { Secret = "12345" });
    }
}

Note: Pay attention to the address passed to the OWIN server. It isn’t the usual localhost. Instead, I use 127.0.0.1, which is kind of the same, but not quite. To get the ngrok tunnel to work, it needs to be 127.0.0.1.

That’s the baseline… Let’s add some actual functionality!

The first thing I need is a way to expose the information sent from GitHub. It is sent using JSON, but I would prefer it if I could get it statically typed for the end user. As well as readonly… So I added 4 classes that can represent at least the basic information sent back.

If we start from the top, we have the WebhookEvent. It is the object containing the basic information sent from GitHub. It looks like this

public class WebhookEvent
{
    protected WebhookEvent(string type, string deliveryId, string body)
    {
        Type = type;
        DeliveryId = deliveryId;

        var json = JObject.Parse(body);
        Ref = json["ref"].Value<string>();
        Before = json["before"].Value<string>();
        After = json["after"].Value<string>();
        HeadCommit = new GithubCommit(json["head_commit"]);
        Commits = json["commits"].Values<JToken>().Select(x => new GithubCommit(x)).ToArray();
        Pusher = new GithubUser(json["pusher"]);
        Sender = new GithubIdentity(json["sender"]);
    }

    public static WebhookEvent Create(string type, string deliveryId, string body)
    {
        return new WebhookEvent(type, deliveryId, body);
    }

    public string Type { get; private set; }
    public string DeliveryId { get; private set; }
    public string Ref { get; private set; }
    public string Before { get; private set; }
    public string After { get; private set; }
    public GithubCommit HeadCommit { get; set; }
    public GithubCommit[] Commits { get; set; }
    public GithubUser Pusher { get; private set; }
    public GithubIdentity Sender { get; private set; }
}

As you can see, it is just a basic DTO that parses the JSON sent from GitHub and puts it in a statically typed class…

The WebhookEvent class exposes referenced commits using the GitHubCommit class, which looks like this

public class GithubCommit
{
    public GithubCommit(JToken data)
    {
        Id = data["id"].Value<string>();
        Message = data["message"].Value<string>();
        TimeStamp = data["timestamp"].Value<DateTime>();
        Added = ((JArray)data["added"]).Select(x => x.Value<string>()).ToArray();
        Removed = ((JArray)data["removed"]).Select(x => x.Value<string>()).ToArray();
        Modified = ((JArray)data["modified"]).Select(x => x.Value<string>()).ToArray();
        Author = new GithubUser(data["author"]);
        Committer = new GithubUser(data["committer"]);
    }

    public string Id { get; private set; }
    public string Message { get; private set; }
    public DateTime TimeStamp { get; private set; }
    public string[] Added { get; private set; }
    public string[] Removed { get; private set; }
    public string[] Modified { get; private set; }
    public GithubUser Author { get; private set; }
    public GithubUser Committer { get; private set; }
}

Once again, it is just a JSON parsing DTO. And so are the last 2 classes, the GitHubIdentity and GitHubUser…

public class GithubIdentity
{
    public GithubIdentity(JToken data)
    {
        Id = data["id"].Value<string>();
        Login = data["login"].Value<string>();
    }

    public string Id { get; private set; }
    public string Login { get; private set; }
}

public class GithubUser
{
    public GithubUser(JToken data)
    {
        Name = data["name"].Value<string>();
        Email = data["email"].Value<string>();
        if (data["username"] != null)
            Username = data["username"].Value<string>();
    }

    public string Name { get; private set; }
    public string Email { get; private set; }
    public string Username { get; private set; }
}

Ok, those are all the boring scaffolding classes to get data from the JSON to C# code…

Let’s have a look at the actual middleware implementation…

The first thing it needs to do is to read out the values from the request. This is very easy to do. The webhook will get 3 headers and a body, and they are read like this

public override async Task Invoke(IOwinContext context)
{
    var eventType = context.Request.Headers["X-Github-Event"];
    var signature = context.Request.Headers["X-Hub-Signature"];
    var delivery = context.Request.Headers["X-Github-Delivery"];

    string body;
    using (var sr = new StreamReader(context.Request.Body))
    {
        body = await sr.ReadToEndAsync();
    }
}

Ok, now that we have all the data, we need to verify that the signature passed in the X-Hub-Signature header is correct.

The passed value will look like this: sha1=XXXXXXXXXXX, where XXXXXXXXXXX is an HMAC SHA1 hash generated using the body and the secret. To validate the hash, I add a method to the WebhookMiddlewareOptions class, and make it private. In just a minute I will explain how I can still let the user make modifications to it even if it is private…

It looks like this

private bool ValidateSignature(string body, string signature)
{
    var vals = signature.Split('=');
    if (vals[0] != "sha1")
        return false;

    var encoding = new System.Text.ASCIIEncoding();
    var keyByte = encoding.GetBytes(Secret);

    var hmacsha1 = new HMACSHA1(keyByte);

    var messageBytes = encoding.GetBytes(body);
    var hashmessage = hmacsha1.ComputeHash(messageBytes);
    var hash = hashmessage.Aggregate("", (current, t) => current + t.ToString("X2"));

    return hash.Equals(vals[1], StringComparison.OrdinalIgnoreCase);
}

As you can see, it is pretty much just a matter of generating a HMAC SHA1 hash based on the body and secret, and then verifying that they are equal. I do this case-insensitive as the .NET code will generate uppercase characters, and the GitHub signature is lowercase.

Now that we have this validation in place, it is time to hook it up. I do this by exposing an OnValidateSignature property of type Func&lt;string, string, bool&gt; on the options class, and assigning it to the private function in the constructor.

public class WebhookMiddlewareOptions
{
    public WebhookMiddlewareOptions()
    {
        OnValidateSignature = ValidateSignature;
    }

    ...

    public string Secret { get; set; }
    public Func<string, string, bool> OnValidateSignature { get; set; }
}

This way, the user can just leave that property, and it will verify the signature as defined. Or he/she can replace the func with their own implementation and override the way the validation is done.
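For example, a consumer of the middleware who wants to plug in their own check (or skip it entirely while testing locally) can just assign the delegate when registering it. A quick sketch, and obviously not something to keep in production:

app.UseWebhooks("/webhook", new WebhookMiddlewareOptions
{
    Secret = "12345",
    // Replace the built-in HMAC SHA1 check, for example while testing locally
    OnValidateSignature = (body, signature) =>
    {
        Console.WriteLine("Skipping validation of signature: {0}", signature);
        return true;
    }
});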

The next step is to make sure that we validate the signature in the middleware

public override async Task Invoke(IOwinContext context)
{
    ...

    if (!_options.OnValidateSignature(body, signature))
    {
        context.Response.ReasonPhrase = "Could not verify signature";
        context.Response.StatusCode = 400;
        return;
    }
}

And as you can see, if the validation doesn’t approve the signature, it returns an HTTP 400.

So why are we validating this? Well, considering that you are exposing this endpoint on the web, it could be discovered, and someone could start sending spoofed messages to your application…

Ok, the last thing to do is to make it possible for the middleware user to actually do something when the webhook is called. Once again I expose a delegate on my options class. In this case it is an Action&lt;WebhookEvent&gt;, called OnEvent. And once again I add a default implementation in the options class itself. However, the implementation doesn’t actually do anything in this case. But it means that I don’t have to do a null check…

public class WebhookMiddlewareOptions
{
    public WebhookMiddlewareOptions()
    {
        ...
        OnEvent = (obj) => { };
    }

    ...

    public Action<WebhookEvent> OnEvent { get; set; }
}

And now that we have a way to tell the user that the webhook has been called, we just need to do so…

public override async Task Invoke(IOwinContext context)
{
    ...

    _options.OnEvent(WebhookEvent.Create(eventType, delivery, body));

    context.Response.StatusCode = 200;
}

The last thing to do is to send an HTTP 200 back to GitHub, telling it that the request has been accepted and processed properly.

Now that we have a callback system in place, we can easily hook into the webhook and do whatever processing we want. In my case, that means doing a very exciting Console.WriteLine

public void Configuration(IAppBuilder app)
{
    app.UseWebhooks("/webhook", new WebhookMiddlewareOptions
    {
        Secret = "12345",
        OnEvent = (e) =>
        {
            Console.WriteLine("Incoming hook call: {0}\r\nCommits:\r\n{1}", e.Type, string.Join("\r\n", e.Commits.Select(x => x.Id)));
        }
    });

    app.UseWelcomePage("/");
}

That’s it! A fully working and configurable webhook integration using OWIN!

And as usual, there is code for you to download! It is available here: DarksideCookie.Owin.GithubWebhooks.zip (13KB)

Cheers!

Integrating a front-end build pipeline in ASP.NET builds


A while back I wrote a couple of blog posts about how to set up a “front-end build pipeline” using Gulp. The pipeline handled things like less to css conversion, bundling and minification, TypeScript to JavaScript transpile etc. However, this pipeline built on the idea that you would have Gulp watching your files as you worked, and doing the work as they changed. The problem with this is that it only runs on the development machine, and not the buildserver… I nicely avoided this topic by ending my second post with “But that is a topic for the next post”. I guess it is time for that next post…

So let’s start by looking at the application I will be using for this example!

The post is based on a very small SPA application based on ASP.NET MVC and AngularJS. The application itself is really not that interesting, but I needed something to demo the build with. And it needed to include some things that needed processing during the build. So I decided to build the application using TypeScript instead of JavaScript, and Less instead of CSS. This means that I need to transpile my code, and then bundle and minify it.

I also want to have a configurable solution where I can serve up the raw Less files, together with a client-side Less compiler, as well as the un-bundled and minified JavaScript files, during development, and the bundled and minified CSS and JavaScript in production. And on top of that, I want to be able to include a CDN in the mix as well if that becomes a viable option for my app.

Ok, so let’s get started! The application looks like this

image

As you can see, it is just a basic ASP.NET MVC application. It includes a single MVC controller, that returns a simple view that hosts the Angular application. The only thing that has been added on top of the empty MVC project is the Microsoft.AspNet.Web.Optimization NuGet package that I will use to handle the somewhat funky bundling that I need.

What the application does is fairly irrelevant to the post. All that is interesting is that it is an SPA with some TypeScript files, some Less files and some external libraries, that needs transpiling, bundling, minification and so on during the build.

As I set to work on my project, I use Bower to create a bower.json file, and then install the required Bower dependencies, adding them to the bower.json file using the --save switch. In this case AngularJS, Bootstrap and Less. This means that I can restore the Bower dependencies really easily by running “bower install” instead of having to check them into source control.

Next I need to set up my bundling using the ASP.NET bundling stuff in the Microsoft.AspNet.Web.Optimization NuGet package. This is done in the BundleConfig file.

If you create an ASP.NET project using the MVC template instead of the empty one that I used, you will get this package by default, as well as the BundleConfig.cs file. If you decided to do it like me, and use the empty one, you need to add a BundleConfig.cs file in the App_Start folder, and make sure that you call the BundleConfig.RegisterBundles() method during application start in Global.asax.
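That wiring in Global.asax.cs is just the standard System.Web.Optimization one-liner, roughly like this:

public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // Route and filter registrations would go here as well
        BundleConfig.RegisterBundles(System.Web.Optimization.BundleTable.Bundles);
    }
}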

Inside the BundleConfig.RegisterBundles I define what bundles I want, and what files should be included in them… In my case, I want 2 bundles. One for scripts, and one for styles.

Let’s start by the script bundle, which I call ~/scripts. It looks like this

public static void RegisterBundles(BundleCollection bundles)
{
    bundles.Add(new ScriptBundle("~/scripts", ConfigurationManager.AppSettings["Optimization.PathPrefix"] + "scripts.min.js") { Orderer = new ScriptBundleOrderer() }
        .Include("~/bower_components/angularjs/angular.js")
        .Include("~/bower_components/less/dist/less.js")
        .IncludeDirectory("~/Content/Scripts", "*.js", true));
    ...
}

As you can see from the snippet above, I include AngularJS and Less.js from my bower_components folder, as well as all of my JavaScript files located in the /Content/Scripts/ folder and its subfolders. This will include everything my application needs to run. However, there are 2 other things that are very important in this code. The first one is that second parameter that is passed to the ScriptBundle constructor. It is a string that defines the Url to use when the site is configured to use CDN. I pass in a semi-dynamically created string, by concatenating a value from my web.config, and “scripts.min.js”.

In my case, the Optimization.PathPrefix value in my web.config is defined as “/Content/dist/” at the moment. This means that if I were to tell the application to use CDN, it would end up writing out a script-tag with the source set to “/Content/dist/scripts.min.js”. However, if I ever did decide to actually use a proper CDN, I could switch my web.config setting to something like “//mycdn.cdnnetwork.com/”, which would mean that the source would be changed to //mycdn.cdnnetwork.com/scripts.min.js, and all of a sudden point to an external CDN instead of my local files. This is a very useful thing to be able to do if one were to introduce a CDN…

The second interesting thing to note in the above snippet is the { Orderer = new ScriptBundleOrderer() }. This is a way for me to control the order in which my files are added to the page. Unless you use some dynamic resource loading strategy in your code, the order in which your scripts are loaded is actually important… So for this application, I created this

private class ScriptBundleOrderer : DefaultBundleOrderer
{
    public override IEnumerable<BundleFile> OrderFiles(BundleContext context, IEnumerable<BundleFile> files)
    {
        var defaultOrder = base.OrderFiles(context, files).ToList();

        // Bower libraries go first, app.js goes last, and any *module.js files go just before it
        var firstFiles = defaultOrder.Where(x => x.VirtualFile.Name != null && x.VirtualFile.Name.StartsWith("~/bower_components", StringComparison.OrdinalIgnoreCase)).ToList();
        var lastFiles = defaultOrder.Where(x => x.VirtualFile.Name != null && x.VirtualFile.Name.EndsWith("module.js", StringComparison.OrdinalIgnoreCase)).ToList();
        var app = defaultOrder.Where(x => x.VirtualFile.Name != null && x.VirtualFile.Name.EndsWith("app.js", StringComparison.OrdinalIgnoreCase)).ToList();

        var newOrder = firstFiles
            .Concat(defaultOrder.Where(x => !firstFiles.Contains(x) && !lastFiles.Contains(x) && !app.Contains(x)).ToList())
            .Concat(lastFiles)
            .Concat(app)
            .ToList();

        return newOrder;
    }
}

It is pretty much a quick hack to make sure that all libraries from the bower_components folder are added first, then all JavaScript files that are not in that folder and do not end with “module.js” or “app.js”, then all files ending with “module.js” and finally “app.js”. This means that my SPA is loaded in the correct order. It all depends on the naming conventions one uses, but for me, and this solution, this will suffice…

Next it is time to add my styles. It looks pretty much identical, except that I add Less files instead of JavaScript files…and I use a StyleBundle instead of a ScriptBundle

public static void RegisterBundles(BundleCollection bundles)
{
    ...
    bundles.Add(new StyleBundle("~/styles", ConfigurationManager.AppSettings["Optimization.PathPrefix"] + "styles.min.css")
        .Include("~/bower_components/bootstrap/less/bootstrap.less")
        .IncludeDirectory("~/Content/Styles", "*.less", true));
    ...
}

As you can see, I supply a CDN-path here as well. However, I don’t need to mess with the ordering this time as my application only has 2 less files that are added in the correct order. But if you had a more complex scenario, you could just create another orderer.

And yes, I include Less files instead of CSS. I will then use less.js, which is included in the script bundle, to convert it to CSS in the browser…

Ok, that’s it for the RegisterBundles method, except for 2 more lines of code

public static void RegisterBundles(BundleCollection bundles)
{
    ...
    bundles.UseCdn = bool.Parse(ConfigurationManager.AppSettings["Optimization.UseBundling"]);
    BundleTable.EnableOptimizations = bundles.UseCdn;
}

These two lines make it possible to configure whether or not to use the bundled and minified versions of my scripts and styles by setting a value called Optimization.UseBundling in the web.config file. By default, the optimization stuff will serve unbundled files if the current compilation is set to debug in web.config, dynamically bundled files if set to “not debug”, and the CDN path if UseCdn is set to true. In my case, I short circuit this and make it all dependent on the web.config setting…

Tip: To be honest, I generally add a bit more config in here, making it possible to not only choose between the un-bundled, un-minified JavaScript files and the bundled and minified versions. Instead I like being able to use bundled but not minified files as well. This can make debugging a lot easier some times. But that is up to you if you want to or not…I just kept it simple here…

Ok, now all the bundling is in place! Now it is just a matter of adding it to the actual page. Something that is normally not a big problem. You just call Styles.Render() and Scripts.Render(). Unfortunately, when adding Less files instead of CSS, the link tags that are added need to have a different type defined. So to solve that, I created a little helper class called LessStyles. It looks like this

public static class LessStyles
{
    private const string LessLinkFormat = "<link rel=\"stylesheet/less\" type=\"text/css\" href=\"{0}\" />";

    public static IHtmlString Render(params string[] paths)
    {
        if (!bool.Parse(ConfigurationManager.AppSettings["Optimization.UseBundling"]) && HttpContext.Current.IsDebuggingEnabled)
        {
            return Styles.RenderFormat(LessLinkFormat, paths);
        }
        else
        {
            return Styles.Render(paths);
        }
    }
}

All it does is verify whether or not it is set to use bundling, and if it isn’t, it renders the paths by calling Styles.RenderFormat(), passing along a custom format for the link tag, which sets the correct link type. This will then be picked up by the less.js script, and converted into CSS on the fly.

Now that I have that helper, it is easy to render the scripts and styles to the page like this

@using System.Web.Optimization
@using DarksideCookie.AspNet.MSBuild.Web.Helpers
<!DOCTYPE html>

<html data-ng-app="MSbuildDemo">
<head>
    <title>MSBuild Demo</title>
    @LessStyles.Render("~/styles")
</head>
<body data-ng-controller="welcomeController as ctrl">
    <div>
        <h1>{{ctrl.greeting('World')}}</h1>
    </div>
    @Scripts.Render("~/scripts")
</body>
</html>

There you have it! My application is done… Running this in a browser, with Optimization.UseBundling set to false, returns


<!DOCTYPE html>

<html data-ng-app="MSbuildDemo">
<head>
    <title>MSBuild Demo</title>
    <link rel="stylesheet/less" type="text/css" href="/bower_components/bootstrap/less/bootstrap.less" />
    <link rel="stylesheet/less" type="text/css" href="/Content/Styles/site.less" />
</head>
<body data-ng-controller="welcomeController as ctrl">
    <div>
        <h1>{{ctrl.greeting('World')}}</h1>
    </div>
    <script src="/bower_components/angularjs/angular.js"></script>
    <script src="/bower_components/less/dist/less.js"></script>
    <script src="/Content/Scripts/Welcome/GreetingService.js"></script>
    <script src="/Content/Scripts/Welcome/WelcomeController.js"></script>
    <script src="/Content/Scripts/Welcome/Module.js"></script>
    <script src="/Content/Scripts/Welcome/App.js"></script>
</body>
</html>


As expected it returns the un-bundled Less files and JavaScript files. And setting Optimization.UseBundling to true returns


<!DOCTYPE html>

<html data-ng-app="MSbuildDemo">
<head>
    <title>MSBuild Demo</title>
    <link href="/Content/dist/styles.min.css" rel="stylesheet" />
</head>
<body data-ng-controller="welcomeController as ctrl">
    <div>
        <h1>{{ctrl.greeting('World')}}</h1>
    </div>
    <script src="/Content/dist/scripts.min.js"></script>
</body>
</html>

Bundled JavaScript and CSS from a folder called dist inside the Content folder.

Ok, sweet! So my bundling/CDN hack thingy worked. Now I just need to make sure that I get my scripts.min.js and styles.min.css created as well. To do this, I’m going to turn to node, npm and Gulp!

I use npm to install my node dependencies, and just like with Bower, I use the “--save” flag to make sure it is saved to the package.json file for restore in the future…and on the buildserver…

In this case, there are quite a few dependencies that need to be added… In the end, my package.json looks like this

{
    ...
    "dependencies": {
        "bower": "~1.3.12",
        "del": "^1.2.0",
        "gulp": "~3.8.8",
        "gulp-concat": "~2.4.1",
        "gulp-less": "~1.3.6",
        "gulp-minify-css": "~0.3.11",
        "gulp-ng-annotate": "^0.5.3",
        "gulp-order": "^1.1.1",
        "gulp-rename": "~1.2.0",
        "gulp-typescript": "^2.2.0",
        "gulp-uglify": "~1.0.1",
        "gulp-watch": "^1.1.0",
        "merge-stream": "^0.1.8",
        "run-sequence": "^1.1.1"
    }
}

But it all depends on what you are doing in your build…

Now that I have all the dependencies needed, I add a gulpfile.js to my project.

It includes a whole heap of code, so I will just write it out here, and then try to cover the main points

var gulp = require('gulp');
var typescript = require('gulp-typescript');
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');
var rename = require('gulp-rename');
var less = require('gulp-less');
var minifycss = require('gulp-minify-css');
var watch = require('gulp-watch');
var del = require('del');
var runSequence = require('run-sequence');
var ngAnnotate = require('gulp-ng-annotate');
var mergeStream = require('merge-stream');
var order = require('gulp-order');
var exec = require('child_process').exec;

var settings = {
    contentPath: "./Content/",
    buildPath: "./Content/build/",
    distPath: "./Content/dist/",
    bowerPath: "./bower_components/",
    bower: {
        "bootstrap": "bootstrap/dist/**/*.{map,css,ttf,svg,woff,eot}",
        "angular": "angularjs/angular.js"
    },
    scriptOrder: [
        '**/angular.js',
        '**/*Service.js',
        '!**/*(App|Module).js',
        '**/*Module.js',
        '**/App.js'
    ],
    stylesOrder: [
        '**/normalize.css',
        '**/*.css'
    ]
}

gulp.task('default', function (callback) {
    runSequence('clean', 'build', 'package', callback);
});
gulp.task('Debug', ['default']);
gulp.task('Release', ['default']);

gulp.task('build', function () {
    var lessStream = gulp.src(settings.contentPath + "**/*.less")
        .pipe(less())
        .pipe(gulp.dest(settings.buildPath));

    var typescriptStream = gulp.src(settings.contentPath + "**/*.ts")
        .pipe(typescript({
            declarationFiles: false,
            noExternalResolve: false,
            target: 'ES5'
        }))
        .pipe(gulp.dest(settings.buildPath));

    var stream = mergeStream(lessStream, typescriptStream);

    for (var destinationDir in settings.bower) {
        stream.add(gulp.src(settings.bowerPath + settings.bower[destinationDir])
            .pipe(gulp.dest(settings.buildPath + destinationDir)));
    }

    return stream;
});

gulp.task('package', function () {
    var cssStream = gulp.src(settings.buildPath + "**/*.css")
        .pipe(order(settings.stylesOrder))
        .pipe(concat('styles.css'))
        .pipe(gulp.dest(settings.buildPath))
        .pipe(minifycss())
        .pipe(rename('styles.min.css'))
        .pipe(gulp.dest(settings.distPath));

    var jsStream = gulp.src(settings.buildPath + "**/*.js")
        .pipe(ngAnnotate({
            remove: true,
            add: true,
            single_quotes: true,
            sourcemap: false
        }))
        .pipe(order(settings.scriptOrder))
        .pipe(concat('scripts.js'))
        .pipe(gulp.dest(settings.buildPath))
        .pipe(uglify())
        .pipe(rename('scripts.min.js'))
        .pipe(gulp.dest(settings.distPath));

    return mergeStream(cssStream, jsStream);
});

gulp.task('clean', function () {
    del.sync([settings.buildPath, settings.distPath]);
});


It starts with a “default” task which runs the “clean”, “build” and “package” tasks in sequence. It then has one task per defined build configuration in the project, in this case “Debug” and “Release”. These in turn just run the “default” task, but it makes it possible to run different tasks during different builds.

Note: Remember that task names are case-sensitive, so make sure that the task names use the same casing as the build configurations in your project.

The “build” task transpiles Less to CSS and TypeScript to JavaScript and puts them in the folder defined as “/Content/build/”. It also copies the defined Bower components to this folder.

The “package” task takes all the CSS and JavaScript files generated by the “build” task, and bundles them into a styles.css and a scripts.js file in the same folder. It then minifies them into a styles.min.css and a scripts.min.js file, and puts them in the folder defined as “/Content/dist/”. It also makes sure that it is all added in the correct order. Just as the BundleConfig class did.

The “clean” task does just that. It cleans up the folders that the other tasks have created. Why? Well, it is kind of a nice feature to have…

Ok, now I have all the Gulp tasks needed to generate the files needed, as well as clean up afterwards. And these are easy to run from the command line, or using the Task Runner Explorer extension in Visual Studio. But this will not work on a buildserver unfortunately… 

Note: Unless you can get your buildserver to run the Gulp stuff somehow. In TFS 2015, and a lot of other buildservers, you can run Gulp as a part of the build. In TFS 2013 for example, this is a bit trickier…

So how do we get it to run as a part of the build (if we can’t have the buildserver do it for us, or we just want to make sure it always runs with the build)?

Well, here is where it starts getting interesting! One way is to unload the .csproj file and start messing with the build settings in there. However, this is not really a great solution. It gets messy very fast, and it is very hard to understand what is happening when you open a project that someone else has created and it all of a sudden does magical things. It is just not very obvious to have it in the .csproj file… Instead, adding a file called <PROJECTNAME>.wpp.targets will enable us to do the same things, but in a more obvious way. This file will be read during the build, and works in the same way as if you were modifying the .csproj file.

In my case the file is called DarksideCookie.AspNet.MSBuild.Web.wpp.targets. The contents of it is XML, just like the .csproj file. It has a root element named Project in the http://schemas.microsoft.com/developer/msbuild/2003 namespace. Like this

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <ExcludeFoldersFromDeployment>.\Content\build\</ExcludeFoldersFromDeployment>
  </PropertyGroup>
</Project>

In this case, it starts out by defining a property called ExcludeFoldersFromDeployment. This tells the build not to include the “/Content/build/” folder when deploying the application. It is only a temporary storage folder for the build, so it isn’t needed…

Next, is the definition of the tasks, or targets as they are called in a targets file. They define what should happen, and when.

A target is defined by a Target element. It includes a name, when to run, and whether or not it depends on any other target before it can run, and if there are any conditions to take into account. Inside the Target element, you find the definition of what should happen.

In this case, I have a bunch of different targets, so let’s look at what they do.

The first one is called “BeforeBuild”.

Note: Oh…yeah…by giving your targets certain specific names, they will be run at specific times, and do not need to be told when to run… In this case, it will run before the build…

<Target Name="BeforeBuild">
  <Message Text="Running custom build things!" Importance="high" />
</Target>

All it does is print out that it is running the custom build. This makes it easy to see if everything is running as it should by looking at the build output.

The next target is called “RunGulp”, and it will do just that, run Gulp as a part of the build.

<Target Name="RunGulp" AfterTargets="BeforeBuild" DependsOnTargets="NpmInstall;BowerInstall">
  <Message Text="Running gulp task $(Configuration)" Importance="high" />
  <Exec Command="node_modules\.bin\gulp $(Configuration)" WorkingDirectory="$(ProjectDir)" />
  <OnError ExecuteTargets="DeletePackages" />
</Target>

As you can see, it is set to run after the target called “BeforeBuild”, and depends on the targets called “NpmInstall” and “BowerInstall”. This will make sure that the “NpmInstall” and “BowerInstall” targets are run before this target. In the target, it prints out that it is running the specified Gulp task, which once again simplifies debugging things using the build output and logs, and then runs Gulp using an element called “Exec”. “Exec” is basically like running a command in the command line. In this case, the “Exec” element is also configured to make sure the command is run in the correct working directory. And if it fails, it executes the target called “DeletePackages”.

Ok, so that explains how the Gulp task is run. But what do the “NpmInstall” and “BowerInstall” targets do? Well, they do pretty much exactly what they are called. They run “npm install” and “bower install” to make sure that all dependencies are installed. As I mentioned before, I don’t check in my dependencies. Instead I let my buildserver pull them in as needed.

Note: Yes, this has some potential drawbacks. Things like, the internet connection being down during the build, or the bower or npm repos not being available, and so on. But in most cases it works fine and saves the source control system from having megs upon megs of node and bower dependencies to store…

<Target Name="NpmInstall" Condition="'$(Configuration)' != 'Debug'">
  <Message Text="Running npm install" Importance="high" />
  <Exec Command="npm install --quiet" WorkingDirectory="$(ProjectDir)" />
  <OnError ExecuteTargets="DeletePackages" />
</Target>

<Target Name="BowerInstall" Condition="'$(Configuration)' != 'Debug'">
  <Message Text="Running bower install" Importance="high" />
  <Exec Command="node_modules\.bin\bower install --quiet" WorkingDirectory="$(ProjectDir)" />
  <OnError ExecuteTargets="DeletePackages" />
</Target>


As you can see, these targets also define conditions. They will only run if the current build configuration is something other than “Debug”. That way they will not run every time you build in VS. Instead they will only run when building for release, which is normally done on the buildserver.

The next two targets look like this

<Target Name="DeletePackages" Condition="'$(Configuration)' != 'Debug'" AfterTargets="RunGulp">
  <Message Text="Downloaded packages" Importance="high" />
  <Exec Command="..\tools\delete_folder node_modules" WorkingDirectory="$(ProjectDir)\" />
  <Exec Command="..\tools\delete_folder bower_components" WorkingDirectory="$(ProjectDir)\" />
</Target>

<Target Name="CleanGulpFiles" AfterTargets="Clean">
  <Message Text="Cleaning up node files" Importance="high" />
  <ItemGroup>
    <GulpGenerated Include=".\Content\build\**\*" />
    <GulpGenerated Include=".\Content\dist\**\*" />
  </ItemGroup>
  <Delete Files="@(GulpGenerated)" />
  <RemoveDir Directories=".\Content\build;.\Content\dist;" />
</Target>

The “DeletePackages” target is set to run after the “RunGulp” target. This will make sure that it removes the node_modules and bower_components folders when done. However, once again, only when not building in “Debug”. Unfortunately, the node_modules folder can get VERY deep, and cause some problems when being deleted on a Windows machine. Because of this, I have included a little script called delete_folder, which will take care of this problem. So instead of just deleting the folder, I call on that script to do the job.

The second target, called “CleanGulpFiles”, deletes the files and folders generated by Gulp, and is set to run after the target called “Clean”. This means that it will run when you right click your project in the Solution Explorer and choose Clean. This is a neat way to get rid of generated content easily.

In a simple world this would be it. This will run Gulp and generate the required files as a part of the build. So it does what I said it would do… However, if you use MSBuild or MSDeploy to create a deployment package, or deploy your solution to a server as a part of the build, which you normally do on a buildserver, the newly created files will not automatically be included. To get this solved, there is one final target, called “AddGulpFiles” in this case.

<Target Name="AddGulpFiles" BeforeTargets="CopyAllFilesToSingleFolderForPackage;CopyAllFilesToSingleFolderForMsdeploy">
  <Message Text="Adding gulp-generated files" Importance="high" />
  <ItemGroup>
    <CustomFilesToInclude Include=".\Content\dist\**\*.*" />
    <FilesForPackagingFromProject Include="%(CustomFilesToInclude.Identity)">
      <DestinationRelativePath>.\Content\dist\%(RecursiveDir)%(Filename)%(Extension)</DestinationRelativePath>
    </FilesForPackagingFromProject>
  </ItemGroup>
  <OnError ExecuteTargets="DeletePackages" />
</Target>

This target runs before “CopyAllFilesToSingleFolderForPackage” and “CopyAllFilesToSingleFolderForMsdeploy”, which will make sure that the defined files are included in the deployment.

In this case, as all the important files are added to the “/Content/dist/” folder, all we need to do is tell it to include all files in that folder…

That is it! Switching the build over to “Release” and asking VS to build, while watching the Output window, will confirm that our targets are running as expected. Unfortunately it will also run “npm install” and “bower install”, as well as delete the bower_components and node_modules folders as part of the build. So once you have had the fun of watching it work, you will have to run those commands manually again to get your dependencies back…

And if you want to see what is actually going to be deployed to a server, you can right-click the project in the Solution Explorer and choose Publish. Publishing to the file system, or to a WebDeploy package, will give you a way to look at what files would be sent to a server in a deployment scenario.

In my code download, the publish stuff is set to build either to the file system or a web deploy package, in a folder called DarksideCookie on C:. This can obviously be changed if you want to…

As usual, I have created a sample project that you can download and play with. It includes everything covered in this post. Just remember to run “npm install” and “bower install” before trying to run it locally.

Code available here: DarksideCookie.AspNet.MSBuild.zip (65.7KB)

Building a simple PicPaste replacement using Azure Web Apps and WebJobs


This post was supposed to be an introduction to Azure WebJobs, but it took a weird turn somewhere and became a guide to building a simple PicPaste replacement using just a simple Azure Web App and a WebJob.

As such, it might not be a really useful app, but it does show how simple it is to build quite powerful things using Azure.

So, what is the goal? Well, the goal is to build a website that you can upload images to, and then get a simple Url to use when sharing the image. This is not complicated, but as I want to resize the image, and add a little overlay to it as well before giving the user the Url, I might run into performance issues if it becomes popular. So, instead I want the web app to upload the image to blob storage, and then have a WebJob process it in the background. Doing it like this, I can limit the number of images that are processed at a time, and use a queue to handle any peaks.

Note: Is this a serious project? No, not really. Does it have some performance issues if it becomes popular? Yes. Did I build it as a way to try out background processing with WebJobs? Yes… So don’t take it too seriously.

The first thing I need is a web app that I can use to upload images through. So I create a new empty ASP.NET project, adding support for MVC. Next I add a NuGet package called WindowsAzure.Storage. And to be able to work with Azure storage, I need a new storage account. Luckily, that is as easy as opening the “Server Explorer” window, right-clicking the “Storage” node, and selecting “Create Storage Account…”. After that, I am ready to start building my application.

Inside the application, I add a single controller called HomeController. However, I don’t want to use the default route configuration. Instead I want to have a nice, short and simple route that looks like this

routes.MapRoute(
name: "Image",
url: "{id}",
defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
);

Ok, now that that is done, I add my “Index” view, which is ridiculously simple, and looks like this

<!DOCTYPE html>

<html>
<head>
    <meta name="viewport" content="width=device-width" />
    <title>AzurePaste</title>
</head>
<body>
    @using (Html.BeginForm("Index", "Home", FormMethod.Post, new { enctype = "multipart/form-data" }))
    {
        <label>Chose file: </label>
        <input type="file" name="file" accept="image/*" /><br />
        <input type="submit" value="Upload" />
    }
</body>
</html>

As you can see, it contains a simple form that allows the user to post a single file to the server using the Index method on the HomeController.

The only problem with this is that there’s no controller action called Index that accepts HTTP POSTs. So let’s add one.

[HttpPost]
public ActionResult Index(HttpPostedFileBase file)
{
    if (file == null)
    {
        return new HttpStatusCodeResult(HttpStatusCode.BadRequest);
    }

    var id = GenerateRandomString(10);
    var blob = StorageHelper.GetUploadBlobReference(id);
    blob.UploadFromStream(file.InputStream);

    StorageHelper.AddImageAddedMessageToQueue(id);

    return View("Working", (object)id);
}

So what does this action do? Well, first of all it returns an HTTP 400 if you forgot to include the file… Next it uses a quick and dirty helper method called GenerateRandomString(). It just generates a random string with the specified length… Next, I use a helper class called StorageHelper, which I will return to shortly, to get a CloudBlockBlob instance to which I upload my file. The name of the blob is the random string I just retrieved, and the container for it is a predefined one that the WebJob knows about. The WebJob will then pick it up there, make the necessary transformations, and then save it to the root container.
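
In case you are curious, a minimal sketch of what such a GenerateRandomString() helper could look like is shown below. This is just my own illustration, not necessarily the code from the sample download.

// Quick and dirty random string generator. Sketch only; the sample download may differ.
private static readonly Random Random = new Random();

private static string GenerateRandomString(int length)
{
    const string chars = "abcdefghijklmnopqrstuvwxyz0123456789";
    var result = new char[length];
    for (var i = 0; i < length; i++)
    {
        result[i] = chars[Random.Next(chars.Length)];
    }
    return new string(result);
}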

Note: By adding a container called $root to a blob storage account, you get a way to put files in the root of the Url. Otherwise you are constrained to a Url like https://[AccountName].blob.core.windows.net/[ContainerName]/[FileName]. But using $root, it can be reduced to https://[AccountName].blob.core.windows.net/[FileName].

Once the image is uploaded to Azure, I once again turn to my StorageHelper class to put a message on a storage queue.

Note: I chose to build a WebJob that listened to queue messages instead of blob creation, as it can take up to approximately 10 minutes before the WebJob is called after a blob is created. The reason for this is that the blob storage logs are buffered and only written approximately every 10 minutes. By using a queue, this delay is decreased quite a bit. But it is still not instantaneous… If you need to decrease it even further, you can switch to a ServiceBus queue instead.

Ok, if that is all I am doing, that StorageHelper class must be really complicated. Right? No, not really. It is just a little helper to keep my code a bit more DRY.

The StorageHelper has 4 public methods: EnsureStorageIsSetUp(), which I will come back to, GetUploadBlobReference(), GetRootBlobReference() and AddImageAddedMessageToQueue(). And they are pretty self-explanatory. The two GetXXXBlobReference() methods are just helpers to get hold of a CloudBlockBlob reference. By keeping it in this helper class, I can keep the logic of where blobs are placed in one place… The AddImageAddedMessageToQueue() method adds a simple CloudQueueMessage, containing the name of the added image, to a defined queue. And finally, the EnsureStorageIsSetUp() will make sure that the required containers and queues are set up, and that the root container has read permission turned on for everyone.

public static void EnsureStorageIsSetUp()
{
    UploadContainer.CreateIfNotExists();
    ImageAddedQueue.CreateIfNotExists();
    RootContainer.CreateIfNotExists();
    RootContainer.SetPermissions(new BlobContainerPermissions { PublicAccess = BlobContainerPublicAccessType.Blob });
}

public static CloudBlockBlob GetUploadBlobReference(string blobName)
{
    return UploadContainer.GetBlockBlobReference(blobName);
}

public static CloudBlockBlob GetRootBlobReference(string blobName)
{
    return RootContainer.GetBlockBlobReference(blobName);
}

public static void AddImageAddedMessageToQueue(string filename)
{
    ImageAddedQueue.AddMessage(new CloudQueueMessage(filename));
}

Kind of like that…
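
One thing the snippet above does not show is where UploadContainer, RootContainer and ImageAddedQueue come from. A rough sketch of what those private members could look like follows, assuming the storage connection string is kept in an app setting called “StorageConnectionString” (that setting name is my assumption, not something from the sample). The container and queue names match the ones the WebJob uses, “upload”, “$root” and “image-added”.

// Requires the Microsoft.WindowsAzure.Storage, .Blob and .Queue namespaces, plus System.Configuration.
// The "StorageConnectionString" app setting name is an assumption.
private static readonly CloudStorageAccount Account =
    CloudStorageAccount.Parse(ConfigurationManager.AppSettings["StorageConnectionString"]);

private static CloudBlobContainer UploadContainer
{
    get { return Account.CreateCloudBlobClient().GetContainerReference("upload"); }
}

private static CloudBlobContainer RootContainer
{
    get { return Account.CreateCloudBlobClient().GetContainerReference("$root"); }
}

private static CloudQueue ImageAddedQueue
{
    get { return Account.CreateCloudQueueClient().GetQueueReference("image-added"); }
}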

The returned view that the user gets after uploading the file looks like this

@model string

<!DOCTYPE html>

<html>
<head>
    <meta name="viewport" content="width=device-width" />
    <title>Azure Paste</title>
    <script src="~/Content/jquery.min.js"></script>
    <script type="text/javascript">
        $(function () {
            function checkCompletion() {
                setTimeout(function () {
                    $.ajax({ url: "/@Model" })
                        .done(function (data, status, jqXhr) {
                            if (jqXhr.status === 204) {
                                checkCompletion();
                                return;
                            }
                            document.location.href = data;
                        })
                        .fail(function (error) {
                            alert("Error...sorry about that!");
                            console.log("error", error);
                        });
                }, 1000);
            }
            checkCompletion();
        });
    </script>
</head>
<body>
    <div>Working on it...</div>
</body>
</html>

As you can see, the bulk of it is some JavaScript, while the actual content that the user sees is really tiny…

The JavaScript uses jQuery (no, not a fan, but it has some easy ways to do ajax calls) to poll the server every second. It calls the server at “/[FileName]”, which as you might remember from my changed routing, will call the Index method on the HomeController.

If the call returns an HTTP 204, the script keeps on polling. If it returns HTTP 200, it redirects the user to a location specified by the returned content. If something else happens, it just alerts that something went wrong…

Ok, so this kind of indicates that my Index() method needs to be changed a bit. It needs to do something different if the id parameter is supplied. So I start by handling that case

public ActionResult Index(string id)
{
if (string.IsNullOrEmpty(id))
{
return View();
}

...
}

That’s pretty much the same as it is by default. But what if the id is supplied? Well, then I start by looking for the blob that the user is looking for. If that blob exists, I return an HTTP 200, and the Url to the blob.

public ActionResult Index(string id)
{
    ...

    var blob = StorageHelper.GetRootBlobReference(id);
    if (blob.Exists())
    {
        return new ContentResult { Content = blob.Uri.ToString().Replace("/$root", "") };
    }

    ...
}

As you can see, I remove the “/$root” part of the Url before returning it. The Azure Storage SDK will include that container name in the Url even if it is a “special” container that isn’t needed in the Url. So by removing it I get this nicer Url.

If that blob does not exist, I look for the temporary blob in the upload folder. If it exists, I return an HTTP 204. And if it doesn’t, then the user is looking for a file that doesn’t exist, so I return a 404.

public ActionResult Index(string id)
{
    ...

    blob = StorageHelper.GetUploadBlobReference(id);
    if (blob.Exists())
    {
        return new HttpStatusCodeResult(HttpStatusCode.NoContent);
    }

    return new HttpNotFoundResult();
}

Ok, that is all there is to the web app. Well…not quite. I still need to ensure that the storage stuff is set up properly. So I add a call to the StorageHelper.EnsureStorageIsSetUp() in the Application_Start() method in Global.asax.cs.

protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();
    RouteConfig.RegisterRoutes(RouteTable.Routes);

    StorageHelper.EnsureStorageIsSetUp();
}

Next up is the WebJob that will do the heavy lifting. So I add a WebJob project to my solution. This gives me a project with 2 C# files. One called Program.cs, which is the “entry point” for the job, and one called Functions.cs, which contains a sample WebJob.

The first thing I want to do is to make sure that I don’t overload my machine by running too many of these jobs in parallel. Image manipulation hogs resources, and I don’t want it to get too heavy for the machine.

I do this by setting the batch size for queues in a JobHostConfiguration inside the Program.cs file

static void Main()
{
    var config = new JobHostConfiguration();
    config.Queues.BatchSize = 2;

    var host = new JobHost(config);
    host.RunAndBlock();
}

Now that that is done, I can start focusing on my WebJob… I start out by deleting the existing sample job, and add in a new one.

A WebJob is just a static method with a “trigger” as the first parameter. A trigger is just a method parameter that has an attribute set on it, which defines when the method should be run. In this case, I want to run it based on a queue, so I add a QueueTrigger attribute to my first parameter.

As my message contains a simple string with the name of the blob to work on, I can define my first parameter as a string. Had it been something more complicated, I could have added a custom type, which the calling code would have populated by deserializing the content in the message. Or, I could have chosen to go with CloudQueueMessage, which gives me total control. But as I said, just a string will do fine. It will also help me with my next three parameters.
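
Just to illustrate the custom type option: if the queue message had contained JSON instead of a plain string, the trigger parameter could have been a small class that the SDK deserializes for me. The ImageAddedMessage type and handler below are purely hypothetical and not part of the sample.

// Hypothetical message type, assuming the queue message body is JSON.
public class ImageAddedMessage
{
    public string BlobName { get; set; }
    public DateTime UploadedAt { get; set; }
}

// The WebJobs SDK would then populate the parameter by deserializing the message.
public static void HandleImageAdded([QueueTrigger("image-added")] ImageAddedMessage message, TextWriter log)
{
    log.WriteLine("Image {0} was uploaded at {1}", message.BlobName, message.UploadedAt);
}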

As a lot of WebJobs will be working with Azure storage, the SDK includes helping tools to make this easier. One such tool is an attribute called BlobAttribute. This makes it possible to get a blob reference, or its contents, passed into the method. In this case, getting references to the blobs I want to work with makes things a lot easier. I don’t have to handle getting references to them on my own. All I have to do, is to add parameters of type ICloudBlob, and add a BlobAttribute to them. The attribute takes a name-pattern as the first string. But in this case, the name of the blob will be coming from the queue message… Well, luckily, the SDK people have thought of this, and given us a way to access this by adding “{queueTrigger}” to the pattern. This will be replaced by the string in the message…

Ok, so the signature for my job method turns into this

public static void OnImageAdded([QueueTrigger("image-added")]string blobName,
    [Blob("upload/{queueTrigger}")]ICloudBlob blob,
    [Blob("$root/{queueTrigger}.png")]ICloudBlob outputImage,
    [Blob("$root/{queueTrigger}")]ICloudBlob htmlBlob,
    TextWriter log)
{
    ...
}

As you can see, I have also added a final parameter of type TextWriter, called log. This will be a log supplied by the WebJob host, making it possible to log things in a nice and uniform way.

But wait a little…why am I taking in 3 blobs? The first one is obviously the uploaded image. The second one is the target where I am going to put the transformed image. What is the last one? Well, I am going to make it a little more complicated than just hosting the image… I am going to host the image under a name which is [UploadedBlobName].png. I am then going to add a very simple HTML file, to show the image, in a blob with the same name as the uploaded blob. That way, the Url to the page to view the image will be a nice and simple one, and it will show the image and a little text.

The first thing I need to do is get the content of the blob. This could have been done by requesting a Stream instead of an ICloudBlob, but as I want to be able to delete it at the end, that didn’t work…unless I used more parameters, which felt unnecessary…

Once I have my stream, I turn it into a Bitmap class from the System.Drawing assembly. Next, I resize that image to a maximum width or height before adding a little watermark.

public static void OnImageAdded([QueueTrigger("image-added")]string blobName,
    [Blob("upload/{queueTrigger}")]ICloudBlob blob,
    [Blob("$root/{queueTrigger}.png")]ICloudBlob outputImage,
    [Blob("$root/{queueTrigger}")]ICloudBlob htmlBlob,
    TextWriter log)
{
    var ms = new MemoryStream();
    blob.DownloadToStream(ms);

    var image = new Bitmap(ms);
    var newImage = image.Resize(int.Parse(ConfigurationManager.AppSettings["AzurePaste.MaxSize"]));
    newImage.AddWatermark("AzurePaste FTW");

    ...
}

Ok, now that I have my transformed image, it is time to add it to the “target blob”. I do this by saving the image to a MemoryStream and then uploading that. However, by default, all blobs get the content type “application/octet-stream”, which isn’t that good for images. So I update the blob’s content type to “image/png”.

public static void OnImageAdded([QueueTrigger("image-added")]string blobName,
    [Blob("upload/{queueTrigger}")]ICloudBlob blob,
    [Blob("$root/{queueTrigger}.png")]ICloudBlob outputImage,
    [Blob("$root/{queueTrigger}")]ICloudBlob htmlBlob,
    TextWriter log)
{
    ...

    var outputStream = new MemoryStream();
    newImage.Save(outputStream, ImageFormat.Png);
    outputStream.Seek(0, SeekOrigin.Begin);
    outputImage.UploadFromStream(outputStream);
    outputImage.Properties.ContentType = "image/png";
    outputImage.SetProperties();

    ...
}

The last part is to add the fabled HTML page… In this case, I have just quickly hacked a simple HTML page into my assembly as an embedded resource. I could of course have used one stored in blob storage or something, making it easier to update. But I just wanted something simple…so I added it as an embedded resource… But before I add it to the blob, I make sure to replace the Url to the blob, which has been defined as “{0}” in the embedded HTML.

public static void OnImageAdded([QueueTrigger("image-added")]string blobName,
    [Blob("upload/{queueTrigger}")]ICloudBlob blob,
    [Blob("$root/{queueTrigger}.png")]ICloudBlob outputImage,
    [Blob("$root/{queueTrigger}")]ICloudBlob htmlBlob,
    TextWriter log)
{
    ...

    string html;
    using (var htmlStream = typeof (Functions).Assembly.GetManifestResourceStream("DarksideCookie.Azure.WebJobs.AzurePaste.WebJob.Resources.page.html"))
    using (var reader = new StreamReader(htmlStream))
    {
        html = string.Format(reader.ReadToEnd(), blobName + ".png");
    }
    htmlBlob.UploadFromStream(new MemoryStream(Encoding.UTF8.GetBytes(html)));
    htmlBlob.Properties.ContentType = "text/html";
    htmlBlob.SetProperties();

    blob.Delete();
}

As you can see, I also make sure to update the content type of this blob to “text/html”. And yeah…I also make sure to delete the original file.

That is about it… I guess I should mention that the Resize() and AddWatermark() methods on the Bitmap are extension methods I have added. They are not important for the topic, so I will just leave them out. But they are available in the downloadable code below.
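
That said, to give you an idea of what such extension methods could look like using System.Drawing, here is a rough sketch. It is just an illustration, not the actual code from the download.

// Illustrative sketch only. Requires System.Drawing and System.Drawing.Drawing2D.
public static class BitmapExtensions
{
    // Scales the image down so that neither width nor height exceeds maxSize.
    public static Bitmap Resize(this Bitmap source, int maxSize)
    {
        var scale = Math.Min((double)maxSize / source.Width, (double)maxSize / source.Height);
        if (scale >= 1) return source;

        var width = (int)(source.Width * scale);
        var height = (int)(source.Height * scale);
        var target = new Bitmap(width, height);
        using (var graphics = Graphics.FromImage(target))
        {
            graphics.InterpolationMode = InterpolationMode.HighQualityBicubic;
            graphics.DrawImage(source, 0, 0, width, height);
        }
        return target;
    }

    // Draws a semi-transparent text in the bottom right corner of the image.
    public static void AddWatermark(this Bitmap source, string text)
    {
        using (var graphics = Graphics.FromImage(source))
        using (var font = new Font("Arial", 12))
        using (var brush = new SolidBrush(Color.FromArgb(128, Color.White)))
        {
            var size = graphics.MeasureString(text, font);
            graphics.DrawString(text, font, brush, source.Width - size.Width - 5, source.Height - size.Height - 5);
        }
    }
}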

There is one more thing though… What happens if my code is borked, or someone uploads a file that isn’t an image? Well, in that case, the code will fail, and fail, and fail… If it fails, the WebJob will be re-run at a later point. Unfortunately, this can turn ugly, and turn into what is known as a poison message. Luckily, this is handled by default. After a configurable number of retries, the message is considered a poison message, and will be discarded. When that happens, a new message is automatically added to a dynamically created queue to notify us about it. So it might be a good idea for us to add a quick little job to that queue as well, and log any poison messages.

The name of the queue that is created is “[OriginalQueueName]-poison”, and the handler for it looks like any other WebJob. Just try not to add code in here that turns these messages into poison messages…

public static void LogPoisonBlob([QueueTrigger("image-added-poison")]string blobname, TextWriter logger)
{
    logger.WriteLine("WebJob failed: Failed to prep blob named {0}", blobname);
}

That’s it! Uploading an image through the web app will now place it in blob storage. The app will then wait until the WebJob has picked it up, transformed it, and stored it, together with a new html file, at the root of the storage account, giving the user a quick and easy address to share with his or her friends.

Note: If you want to run something like this in the real world, you probably want to add some form of clean-up solution as well. Maybe a WebJob on a schedule, that removes any images older than a certain age…
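
As a rough idea of what that clean-up could look like, here is a small sketch of a helper that a scheduled job could call. It just uses the storage SDK to delete blobs older than a given age, and is not part of the sample download.

// Sketch of a clean-up helper. Deletes any blob in the container older than maxAge.
// OfType<CloudBlockBlob>() requires System.Linq.
public static void CleanUpOldImages(CloudBlobContainer container, TimeSpan maxAge)
{
    foreach (var blob in container.ListBlobs().OfType<CloudBlockBlob>())
    {
        if (blob.Properties.LastModified.HasValue &&
            DateTimeOffset.UtcNow - blob.Properties.LastModified.Value > maxAge)
        {
            blob.DeleteIfExists();
        }
    }
}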

And as usual, there is a code sample to play with. Just remember that you need to set up a storage account for it, and set the correct connection strings in the web.config file, as well as in the WebJob project’s app.config if you want to run it locally.

Note: It might be good to know that logs and information about current WebJobs can be found at https://[ApplicationName].scm.azurewebsites.net/azurejobs/#/jobs

Source code: DarksideCookie.Azure.WebJobs.AzurePaste.zip (52.9KB)

Cheers!

Running ASP.NET 5 applications in Windows Server Containers using Windows Server 2016


A couple of days ago, I ended up watching a video about Windows Server 2016 at Microsoft Virtual Academy. I think it was A Deep Dive into Nano Server, but I’m not sure to be honest. Anyhow, they started talking about Windows Server Containers and Docker, and I got very interested.

I really like the idea of Docker, but since I’m a .NET dev, the whole Linux dependency is a bit of a turn-off to be honest. And yes, I know that ASP.NET 5 will be cross-platform and so on, but in the initial release of .NET Core, it will be very limited. So it makes it a little less appealing. However, with Windows Server Containers, I get the same thing, but on Windows. So all of the sudden, it got interesting to look at Docker. So I decided to get an ASP.NET 5 app up and running in a Windows Server Container. Actually, I decided to do it in 2 ways, but in this post I will cover the simplest way, and then I will do another post about the other way, which is more complicated but has some benefits…

What is Docker?

So the first question I guess is “What is Docker?”. At least if you have no knowledge of Docker at all, or very little. If you know a bit about Docker, you can skip to the next part!

Disclaimer: This is how I see it. It is probably VERY far from the technical version of it, and people who know more about it would probably say I am on drugs or something. But this is the way I see it, and the way that makes sense to me, and that got me to understand what I needed to do…

To me, Docker, or rather a Docker container, is a bit like a shim. When you create a Docker container on a machine, you basically insert a shim between what you are doing in that container, and the actual server. So anything you do inside that container will be written to that shim, and not to the actual machine. That way, your “base machine” keeps its own state, but you can do things inside of the container to configure it the way you want it. The container is then run as sort of a virtual machine on that machine. It is a bit like a VM, but much simpler and more lightweight. You can then save the configuration that you have made, and re-use it over and over again. This means that you can create the configuration for your environment in a Docker image, and then use it all over the place on different servers, and they will all have the same set-up.

You can then persist that container into what is called an image. The image is basically a pre-configured “shim” that you can then base new containers off of, or base other images on. This allows you to build up your machine based on multiple “shims”. So you start out with the base machine, and then maybe you add the IIS image that activates IIS, and then you add the one that adds your company’s framework to it, and finally on top of that, you add your own layer with your actual application. Kind of like building your environment from Lego blocks.

There are 2 ways to build the images. The first one is to create an “interactive” container, which is a container you “go into” and do your changes to, and then commit that to an image. The second one is to create something called a dockerfile, which is a file containing all of the things that need to be done to whatever base-image you are using, to get it up to the state that you want it to be.

Using a dockerfile is a lot easier once you get the hang of how they work, as you don’t need to sit and do it all manually. Instead you just write the commands that need to be run, and Docker sorts it all out for you, and hands you back a configured image (if everything you told it to do in the dockerfile worked).

If you are used to virtualized machines, it is a bit like differencing disks. Each layer just adds things to the previous disk, making incremental changes until you reach the level you want. However, the “base disk” is always the operating system you are running. So, in this case Windows Server 2016. And this is why they are lighter weight, and faster to start and so on. You don’t need to boot the OS first. It is already there. All you need to do is create your “area” and add your images/”shims” to it.

To view the images available on the machine, you can run

C:\>docker images

On a new Windows Server 2016, like I am using here, you will only see a single image to begin with. It will be named windowsservercore, and represents the “base machine”. It is the image that all containers will be based on.


Set up a server

There are a couple of different ways to set up a Windows Server 2016 (Technical Preview 3 at the time of writing) to try this out.

Option 1: On a physical machine, or existing VM. To do this, you need to download and execute a Powershell-script that enables containers on the machine. It is documented here.

Option 2: In a new VM. This was ridiculously simple to get working. You just download and execute a Powershell-script, and it sorts everything out for you. Kind of… It is documented here.

Option 3: In Azure. This is by far the simplest way of doing it. Just get a new machine that is configured and done, and up and running in the cloud. Documented here.

Warning: I went through options 2 and 3. I started with 2, and got a new VM in Hyper-V. However, my machine got a BSOD every time I tried to bridge my network. And apparently, this is a known bug in the latest drivers for my Broadcom WLAN card. Unfortunately it didn’t work to downgrade it on my machine, so I had to give up. So if you are running a new MacBook Pro, or any other machine with that chip, you might be screwed as well. Luckily, the Azure way solved that…

Warning 2: Since this is Windows Server Core, there is a VERY limited UI. Basically, you get a command line, and have to do everything using that. That, and your trusty friend PowerShell…

Configure the server

The next step, after getting a server up and running is to configure it. This is not a big thing, there are only a few configurations that need to be made. Maybe even just one if you are lucky. It depends on where you are from, and how you intend to configure your containers.

The first step is to move from cmd.exe to PowerShell by executing

c:\>powershell

To me, being from Sweden, with a Swedish keyboard, I needed to make sure that I could type properly by setting the correct language. To do this I used the Set-WinUserLanguageList cmdlet

PS C:\>Set-WinUserLanguageList -LanguageList sv-SE

Next, you need to open a firewall rule for the port you intend to use. In this case, I intend to use port 80 as I am going to run a webserver. This is done using the following command

PS C:\>if (!(Get-NetFirewallRule | where {$_.Name -eq "TCP80"})) { New-NetFirewallRule -Name "TCP80" -DisplayName "HTTP on TCP/80" -Protocol tcp -LocalPort 80 -Action Allow -Enabled True}

It basically checks to see if there is a firewall rule called TCP80. If not, it creates one, opening port 80 for TCP.

Note: If you are running in Azure, you also need to set up an endpoint for your machine for it to work. It is documented in the resource linked above.

Next, I want to make sure that I can access the ports I want on the container(s) I am about to create.

When running containers, you will have a network connection that your machine uses, as well as a virtual switch that your containers will be connected to. In Azure, your machine will have a 10.0.x.x IP by default, and a virtual switch at 172.16.x.x that your containers will be connected to. However, both of them are behind the firewall. So by opening port 80 like we just did, we opened port 80 on both connections. So as long as your container is only using port 80, you are good to go. But if you want to use other ports in your containers, and map port 80 from the host to port X on your container, you need to make some changes, as port X will be denied by the firewall.

Note: This part tripped me up a lot. In Microsoft’s demos, they map port 80 to port 80 running an nginx server. But they never mentioned that this worked because the firewall implicitly had opened the same port to the container connection. I assumed that since the 172-connection was internal, it wasn’t affected by the firewall. Apparently I thought the world was too simple.

So to make it simple, I have just turned off the firewall for the 172.16.x.x connection. The public connection is still secured, so I assume this should be ok… But beware that I am not a network or security guy! To be really safe, you could open just the ports you needed on that connection. But while I am mucking about and trying things out, removing the firewall completely makes things easier!

The command needed to solve this “my way” is

PS C:\>New-NetFirewallRule -Name "TCP/Containers" -DisplayName "TCP for containers" -Protocol tcp -LocalAddress 172.16.0.1/255.240.0.0 -Action Allow -Enabled True

It basically says “any incoming TCP request for the 172.16.x.x connection is allowed”, e.g. firewall turned off for TCP. Just what I wanted!

Create and upload an application

As this is all about hosting the application, I don’t really care about what my application does. So I created a new ASP.NET 5 application using the Empty template in Visual Studio 2015, which in my case created an application based on ASP.NET 5 Beta 7.

The application as such is just a “Hello World”-app that has an OWIN middleware that just returns “Hello World!” for all requests. Once again, this is about hosting, not the app…
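
For completeness, the middleware in an app like that is not much more than the following. This is a minimal sketch of a beta-era ASP.NET 5 Startup class, not necessarily identical to the one in my project.

// Minimal Startup that returns "Hello World!" for every request.
// Uses the Microsoft.AspNet.Builder and Microsoft.AspNet.Http namespaces.
public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        app.Run(async context =>
        {
            await context.Response.WriteAsync("Hello World!");
        });
    }
}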

I decided to make one change though. As Kestrel is going to be the server used in the future for ASP.NET 5 apps, I decided to modify my application to use Kestrel instead of the WebListener server that is in the project by default. To do this, I just made a couple of changes to the project.json file.

Step 1, modify the dependencies. The only dependency needed to get this app going is Microsoft.AspNet.Server.Kestrel. So the dependencies node looks like this

"dependencies": { "Microsoft.AspNet.Server.Kestrel": "1.0.0-beta7" }

Step 2, change the commands to run Kestrel instead of WebListener

"commands": {  "kestrel": "Microsoft.AspNet.Hosting --server Microsoft.AspNet.Server.Kestrel --server.urls http://localhost:5004"  }

In that last step, I also removed the hosting.ini file, as I find it clearer to just have the config in the project.json file, instead of spread out…

Next I used Visual Studio to publish the application to a folder on my machine. This packages up everything needed to run the application, including a runtime and dependencies and so on. It also creates a cmd-file for the kestrel-command. So it is very easy to run this application on any Windows-machine.

Transfer the application to the server

With the app done, and published to the local machine, I went ahead and zipped it up for transfer to the server. And since I needed to get it to my server in Azure, I went and uploaded it to a blob in Azure Storage, making sure that it was publicly available.

The next step is to get the app to the server. Luckily, this is actually not that hard, even from a CLI. It is just a matter of running the wget command

wget -uri 'http://[ACCOUNT_NAME].blob.core.windows.net/[CONTAINER]/[APP_ZIP_FILE_NAME].zip' -OutFile "c:\KestrelApp.zip"

And when the zip is on the machine it is time to unzip it. As there will be something called a dockerfile to set everything up, the files need to be unzipped to a subfolder in a new directory. In my case, I decided to call my new directory “build”, and thus unzip my files to “build\app”. Like this

Expand-Archive -Path C:\KestrelApp.zip -DestinationPath C:\build\app -Force

Create a dockerfile

Now that we have the app on the server, it is time to create the dockerfile. To do this, I start out by making my way into the “build”-directory.

PS C:\>cd build

To create the actual file, you use the New-Item command

PS C:\build>New-Item -Type File dockerfile

And then you can open it in Notepad by running

PS C:\build>notepad dockerfile

Inside the Notepad it is time to define what needs to be done to get the container into the state we want.

The first step is to define what image we want to base our new image on. In this case, there is only one, and it is called “windowsservercore”. So, I tell Docker to use that image as my base-image, by writing

FROM windowsservercore

on the first line of the dockerfile.

Next, I want to include my application in the new container.

The base disk (windowsservercore) is an empty “instance” of the physical machine’s OS. So anything we want to have access to in our container, needs to be added to the container using the ADD keyword. So to add the “app” directory I unzipped my app to, I add

ADD app /app

which says, add the app directory to a directory called “app” in my image.

Once I have my directory added, I also want to set it as the working directory when my container starts up. This is done using the WORKDIR keyword like this

WORKDIR /app

And finally, I need to tell it what to do when the container starts up. This can be done using either an ENTRYPOINT or a CMD keyword, or a combination of them. However, being a Docker noob, I can’t tell you exactly what the differences between them are, or which way of using them is best, but I got it working by adding

CMD kestrel.cmd

which tells it to run kestrel.cmd when the container starts up.

So finally, the dockerfile looks like this

FROM windowsservercore
ADD app /app
WORKDIR /app
CMD kestrel.cmd

which says, start from the “windowsservercore” image, add the content of my app directory to my image under a directory called app. Then set the app directory as the working directory. And finally, run kestrel.cmd when the container starts.

Once I have the configuration that I want, I save the file and close Notepad.

Create a Docker image from the dockerfile

Now that we have a dockerfile that hopefully works, it is time to tell Docker to use it to create a new image. To do this, I run

PS C:\build>docker build -t kestrelapp .

This tells Docker to build an image named “kestrelapp” from the current location. By adding just a dot as the location, it uses the current location, and looks for a file called “dockerfile”.

Docker will then run through the dockerfile one line at a time, setting up the image as you want it.


And at the end, you will have a new image on the server. So if you run

PS C:\build>docker images

You will now see 2 images. The base “windowsservercore”, as well as your new “kestrelapp” image that is based on “windowsservercore”.

 


 

Create, and run, a container based on the new image

Once the image is created, it is time to create, and start, a container based on that image. Once again it is just a matter of running a command using docker

docker run --name kestrelappcontainer -d -p 80:5004 kestrelapp

This command says “create a new container called kestrelappcontainer based on the kestrelapp image, map port 80 from the host to port 5004 on the container, and run it in the background for me”.

Running this will create the container and start it for us, and we should be good to go.


Note: Adding -p 80:5004 to map the ports will add a static NAT mapping between those ports. So if you want to re-use some ports, you might need to remove the mapping before it works. Or, if you want to re-use the same mapping, you can just skip adding the -p parameter. If you want to see your current mapping, you can run Get-NetNatStaticMapping, and remove any you don’t want by running “Remove-NetNatStaticMapping -StaticMappingID [ID]”

If you want to see the containers currently on your machine, you can run

PS C:\build>docker ps -a

which will write out a list of all the available containers on the machine.


We should now be able to browse to the server and see “Hello World!”.


That’s it! That’s how “easy” it is to get up and going with Windows Server Containers and ASP.NET 5. At least it is now… It took a bit of time to figure everything out, considering that I had never even seen Docker before this.

If you want to remove a container, you can run

PS C:\build>docker rm [CONTAINER_NAME]

And if you have any mapping defined for the container you are removing. Don’t forget to remove it using Remove-NetNatStaticMapping as mentioned before..

If you want to remove an image, the command is

PS C:\build>docker rmi [IMAGE_NAME]

As this has a lot of dependencies, and not a lot of code that I can really share, there is unfortunately no demo source to download…

Cheers!

Combining Windows Server 2016 Container, ASP.NET 5 and Application Request Routing (ARR) using Docker


I recently did a blog post about how to get an ASP.NET 5 application to run in a Windows Server container using Docker. However, I kept thinking about that solution, and started wondering if I could add IIS Application Request Routing to the mix as well. What if I could have containers at different ports, and have IIS and ARR routing incoming requests to different ports based on the host for example. And apparently I could. So I decided to write another post about how I got it going.

Disclaimer: There are still some kinks to work out regarding the routing. Right now, I have to manually change the routing to point to the correct container IP every time it is started, as I don’t seem to find a way to assign my containers static IP addresses…

Disclaimer 2: I have no clue about how this is supposed to be done, but this seems to work… Smile

Adding a domain name to my server (kind of)

The first thing I needed to do was to solve how to get a custom domain name to resolve to my server, which in this case was running in Azure. The easiest way to solve this is to go to the Azure portal, look at my machine’s virtual IP address, and add it to my hosts file with some random host.


This is a volatile address that changes on reboots, but it works for testing it out. However, it would obviously be much better to get a reserved IP, and maybe connect a proper domain name to it, but that was too much of a hassle…

Next, I opened my hosts file and added a couple of domain names connected to my machine’s IP address. In my case, I just added the below

40.127.129.213        site1.azuredemo.com
40.127.129.213        site2.azuredemo.com

Installing and configuring IIS and ARR 3.0

The next step was to install IIS on the machine. Using nothing but a command line. Luckily, that was a lot easier than I expected… All you have to do is run

C:\>Install-WindowsFeature Web-Server

and you are done. And yes, you can probably configure a whole heap of things to install and so on, but this works…

With IIS installed, it was time to add Application Request Routing (ARR) version 3.0, and even if there are other ways to do this, I decided to use the Web Platform Installer. So I downloaded the installer for that to my machine using Powershell and the following command

PS C:\>wget -uri "http://download.microsoft.com/download/C/F/F/CFF3A0B8-99D4-41A2-AE1A-496C08BEB904/WebPlatformInstaller_amd64_en-US.msi" -outfile "c:\installers\WebPI.msi"

and then followed that up by running the installer

C:\installers\WebPI.msi

With the Web Platform Installer installed, I got access to a tool called WebPICMD.exe. It is available under “%programfiles%\microsoft\web platform installer”, and can be used to browse the available things that the installer can install etc. It can also be used to install them, which is what I needed. So I ran

C:\>"%programfiles%\microsoft\web platform installer\WebPICMD.exe" /Install /Products:ARRv3_0

which installs ARR v.3.0 and all the prerequisites.

After the ARR bits had been installed, it was time to configure IIS to use it… The first step is to run

C:\>"%windir%\system32\inetsrv\appcmd.exe" set apppool "DefaultAppPool" -processModel.idleTimeout:"00:00:00" /commit:apphost

This sets the idle timeout for the default app pool to 0, which basically means “never timeout and release your resources”. This is pretty good for an application that has as its responsibility to make sure that all requests are routed to the correct place…

Next, it was time to enable the ARR reverse proxy using the following command

C:\>"%windir%\system32\inetsrv\appcmd.exe" set config -section:system.webServer/proxy /enabled:"True" /commit:apphost

With the IIS configured and up and running with the ARR bits, it was just a matter of opening up a port in the firewall so that I could reach it. So I did that using Powershell

PS C:\>New-NetFirewallRule -Name "TCP80" -DisplayName "HTTP on TCP/80" -Protocol tcp -LocalPort 80 -Action Allow -Enabled True

By now, the IIS is up and running and good to go, with a port opened for the traffic. The next step is to set up the websites for the containers that will sit behind the ARR bits.

Creating a couple of ASP.NET web applications

The websites I decided to create were ridiculously simple. I just created 2 empty ASP.NET 5 web applications, and modified their default OWIN middleware to return 2 different messages so that I could tell them apart. I also changed them to use Kestrel as the server, and put them on 2 different ports, 8081 and 8082. I then published them to my local machine, zipped them up and uploaded them to my server and added a simple dockerfile. All of this is covered in the previous post, so I won’t say anything more about that part. However, just for the sake of it, the dockerfile looks like this

FROM windowsservercore
ADD app /app
WORKDIR /app
CMD kestrel.cmd

With the applications unzipped and ready on the server, I used Docker to create 2 container images for my applications

C:\build\kestrel8081>docker build -t kestrel8081 .

C:\build\kestrel8082>docker build -t kestrel8082 .

and with my new images created, I created and started 2 containers to host the applications using Docker. However, I did not map through any port, as this will be handled by the ARR bits.

C:\>docker run --name kestrel8081 -d kestrel8081

C:\>docker run --name kestrel8082 -d kestrel8082

With the containers up and running, there is unfortunately still a firewall in the way for those ports. So, just as in my last blog post, I just opened up the firewall completely for anything on the network that the containers were running on, using the following Powershell command

PS C:\>New-NetFirewallRule -Name "TCP/Containers" -DisplayName "TCP for containers" -Protocol tcp -LocalAddress 172.16.0.1/255.240.0.0 -Action Allow -Enabled True

Configuring Application Request Routing

The next step was to configure the request routing. To do this, I went over to the wwwroot folder

C:\>cd inetpub\wwwroot\

I then created a new web.config file, by opening it in Notepad

C:\inetpub\wwwroot>notepad web.config

As there was no such file, Notepad asked if I wanted to create one, which I told it to do.

Inside the empty web.config file, I added the following

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="Reverse Proxy to Site 1" stopProcessing="true">
          <conditions trackAllCaptures="true">
            <add input="{HTTP_HOST}" pattern="^site1.azuredemo.com$" />
          </conditions>
          <action type="Rewrite" url="http://172.16.0.2:8081/{R:1}" />
          <match url="^(.*)" />
        </rule>
        <rule name="Reverse Proxy to Site 2" stopProcessing="true">
          <conditions trackAllCaptures="true">
            <add input="{HTTP_HOST}" pattern="^site2.azuredemo.com$" />
          </conditions>
          <action type="Rewrite" url="http://172.16.0.3:8082/{R:1}" />
          <match url="^(.*)" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>

This tells the system that any requests headed for http://site1.azuredemo.com/ should be redirected internally to http://172.16.0.2:8081/, and any request to http://site2.azuredemo.com/ should be redirected to http://172.16.0.3:8082/.

Disclaimer: Yes, I am well aware that redirecting to 172.16.0.2 & 3 like this is less than great. Every time the containers are started, the IP will change for them. Or rather, the IP actually seems to be assigned depending on the order they are requested. So if all containers are always started in the same order after a reboot, it should in theory work. But as I said, far from great. However, I don’t quite know how to solve this problem right now. Hopefully some smart person will read this and tell me…

The only thing left to do was to try and browse to my 2 applications, which returned

[Screenshot: the response from the first application]

when browsing to http://site1.azuredemo.com, and

[Screenshot: the response from the second application]

when browsing to http://site2.azuredemo.com. So it seems that adding IIS and ARR to my server, and using it to route my requests to my containers, is working.

That’s it! If you have any idea how to solve it in a better way, please tell me! If not, I hope you got something out of it!

Disclaimer: No, the azuredemo.com domain is not mine, and I have no affiliation to whomever does. I just needed something to play with, and thought that would work for this.

Disclaimer 2: My server is not online at that IP address anymore, so you can’t call it, or try and hack it or whatever you were thinking… Smile

Cheers!

Webucator made a video out of one of my blog posts


A while back, I was contacted by the people at Webucator in regards to one of my blog posts. In particular this one… They wanted to know if they could make a video version of it, and of course I said yes! And here it is! Hot off the presses!

So there it is! And they even managed to get my last name more or less correct, which is awesome! So if you are looking for some on-line ASP.NET Training, have a look at their website.


Setting Up Continuous Deployment of an ASP.NET app with Gulp from GitHub to an Azure Web App


I just spent some time trying to figure out how to set up continuous deployment to an Azure Web App from GitHub, including running Gulp as part of the build. It seems that there are a lot of blog posts and instructions on how to set up continuous deployment, but none of them seem to take into account that people actually use things like Gulp to generate client side resources during the build.

The application

So to kick it off, I needed a web app to deploy. And since I just needed something simple, I opened Visual Studio and created an empty ASP.NET app. However, as I knew that I would need to add some extra stuff to my repo, I decided to add my project in a subfolder called Src. Leaving me with a folder structure like this

Repo Root
      Src
            DeploymentDemo <----- Web App Project
                  [Project files and folders]
                  Scripts
                  Styles
                  DeploymentDemo.csproj
            DeploymentDemo.sln

The functionality of the actual web application doesn’t really matter. The only important thing is the stuff that is part of the “front-end build pipeline”. In this case, that would be the Less files that are placed in the Styles directory, and the TypeScript files that are placed in the Scripts directory.

Adding Gulp

Next, I needed a Gulpfile.js to do the transpilation, bundling and minification of my Less and TypeScript. So I used npm to install the following packages

gulp
bower
gulp-less
gulp-minify-css
gulp-order
gulp-concat
gulp-rename
gulp-typescript
gulp-uglify

making sure that I added --save-dev, adding them to the package.json file so that I could restore them during the build phase.

I then used bower to install

angular
less
bootstrap

once again adding --save-dev so that I could restore the Bower dependencies during build.

Finally, I created a gulpfile.js that looks like this

var gulp = require('gulp');
var ts = require('gulp-typescript');
var order = require("gulp-order");
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');
var rename = require("gulp-rename");
var less = require('gulp-less');
var minifyCSS = require('gulp-minify-css');

var jslibs = [
'bower_components/angular/angular.min.js'
];

var csslibs = [
'bower_components/bootstrap/dist/css/bootstrap.min.css'
];

gulp.task('default', ['build:scripts', 'build:styles']);

gulp.task('build:scripts', ['typescript:bundle', 'jslibs:copy'], function () {
return gulp.src('build/*.min.js')
.pipe(order([
"angular.min.js",
"deploymentdemo.min.js"
]))
.pipe(concat('deploymentdemo.min.js'))
.pipe(gulp.dest('dist/'));
});

gulp.task('typescript:bundle', function () {
return gulp.src('Scripts/**/*.ts')
.pipe(ts({
noImplicitAny: true,
target: 'ES5'
}))
.pipe(order([
"!App.js",
"App.js"
]))
.pipe(concat('deploymentdemo.js'))
.pipe(gulp.dest('build/'))
.pipe(uglify())
.pipe(rename("deploymentdemo.min.js"))
.pipe(gulp.dest('build/'));
});

gulp.task('jslibs:copy', function () {
return gulp.src(jslibs)
.pipe(gulp.dest('build/'));
});

gulp.task('build:styles', ['css:bundle', 'csslibs:copy'], function () {
return gulp.src('build/*.min.css')
.pipe(order(["bootstrap.min.css", "deploymentdemo.min.css"]))
.pipe(concat('deploymentdemo.min.css'))
.pipe(gulp.dest('dist/'));
});

gulp.task('css:bundle', function () {
return gulp.src('Styles/**/*.less')
.pipe(less())
.pipe(concat('deploymentdemo.css'))
.pipe(gulp.dest('build/'))
.pipe(minifyCSS())
.pipe(rename("deploymentdemo.min.css"))
.pipe(gulp.dest('build/'));
});

gulp.task('csslibs:copy', function () {
return gulp.src(csslibs)
.pipe(gulp.dest('build/'));
});

Yes, that is a long code snippet. The general gist is pretty simple though. The default task will transpile the TypeScript into JavaScript, minify it, concatenate it with the angular.min.js file from Bower, before adding it to the directory called dist in the root of my application. Naming it deploymentdemo.min.js. It will also transpile the Less to CSS, minify it, concatenate it with the bootstrap.min.css from Bower, before adding it to the same dist folder with the name deploymentdemo.min.css.

Running the default Gulp task produces a couple of new things, making the directory look like this

Repo Root
      Src
            DeploymentDemo
                  [Project files and folders]
                  build
                          deploymentdemo.css
                          deploymentdemo.js
                          deploymentdemo.min.css
                          deploymentdemo.min.js
                  dist
                          deploymentdemo.min.css
                          deploymentdemo.min.js
                  Scripts
                  Styles
                  DeploymentDemo.csproj
            DeploymentDemo.sln

The files placed under the build directory are just for debugging, and are not strictly required. But they do make it a bit easier to debug potential issues from the build…

Ok, so now I have a web app that I can use to try out my continuous deployment!

Adding Continuous Deployment

Next, I needed a GitHub repo to deploy from. So I went to GitHub and set one up. I then opened a command line at the root of my solution (Repo Root in the above directory structure), and initialized a new Git repo. However, to not get too much stuff into my repo, I added a .gitignore file that excluded all unnecessary files.

I got my .gitignore from a site called gitignore.io. On this site, I just searched for “VisualStudio” and then downloaded the .gitignore file that it generated to the root of my repo.

However, since I am also generating some files on the fly that I don’t want to commit to Git, I added the following 2 lines to the ignore file

build/
dist/

Other than that, the .gitignore file seems to be doing what it should. So I created my first commit to my repo, and then pushed to my new GitHub repo.
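For completeness, the Git side of that is just the usual routine (the remote URL below is obviously a placeholder):

git init
git add .
git commit -m "Initial commit"
git remote add origin https://github.com/<username>/<repo>.git
git push -u origin master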

Next, I needed to set up the actual continuous deployment. This is done through the Azure portal. So I opened up the portal (https://portal.azure.com/), and navigated to the Web App that I wanted to enable CD for. Under the settings for the app, there is a Continuous Deployment option that I clicked

image

After clicking this, you are asked to choose your source. Clicking the “choose source” button, gives you a list of supported source control solutions.

image

In this case, I chose GitHub (for obvious reasons). If you haven’t already set up CD from GitHub at some point before, you are asked to authenticate yourself. Once that is done, you get to choose what project/repo you want to deploy from, as well as what branch you want to use.

As soon as all of this is configured, and you click “Done”, Azure will start its first deployment. However, since it wouldn’t run the Gulp-stuff that I required, the result was a bit of a fail. The deployment will succeed, but your app won’t work, as it doesn’t have the required CSS and JavaScript.

Modifying the build process to run Gulp

Ok, so now that I had the actual CD configured in Azure, it was time to set up the build to run the required Gulp-task. To do this, I needed to modify the build process. Luckily, this is a simple thing to do…

When doing CD from GitHub, the system doing the actual work is called Kudu. Kudu is a fairly “simple” deployment engine that can be used to deploy pretty much anything you can think of to an Azure Web App. It also happens to be very easily modified. All you need is a .deployment file in the root of your repo to tell Kudu what to do. Or rather what command to run. In this case, the command to run was going to be a CMD-file. This CMD-file will replace the entire build process. However, you don’t have to go and re-create/re-invent the whole process. You can quite easily get a baseline of the whole thing created for you using the Azure CLI.

The Azure CLI can be installed in a couple of different ways, including downloading an EXE or by using npm. I hit some bumps trying to install it through npm though, so I recommend just getting the EXE, which is available here: https://azure.microsoft.com/en-us/documentation/articles/xplat-cli-install/.

Once that is installed, you can generate a .deployment and deploy.cmd file using a command that looks like this

azure site deploymentscript -s <PathToSln> --aspWAP <PathToCsProj>

So in this case, I ran the following command in the root of my repo

azure site deploymentscript -s Src\DeploymentDemo.sln --aspWAP Src\DeploymentDemo\DeploymentDemo.csproj

Note: It does not verify that the sln or csproj file even exists. The input parameters are just used to set some stuff in the generated deploy.cmd file.

The .deployment-file contains very little. Actually, it only contains

[config]
command = deploy.cmd

which tells Kudu that the command to run during build, is deploy.cmd. The deploy.cmd on the other hand, contains the whole build process. Luckily, I didn’t have to care too much about that, even if it is quite an interesting read. All I had to do, was to find the right place to interfere with the process, and do my thing.

Scrolling through the command file, I located the row where it did the actual build. It happened to be step number 2, and what it does is build the defined project, putting the result in %DEPLOYMENT_TEMP%. So all I had to do was go in and do my thing, making sure that the generated files that I wanted to deploy were added to the %DEPLOYMENT_TEMP% directory as well.

So I decided to add my stuff to the file after the error level check, right after step 2.

First off I made sure to move the working directory to the correct directory, which in my case was Src\DeploymentDemo. I did this using the command pushd, which allows me to return to the previous directory by just calling popd.

After having changed the working directory, I added code to run npm install. Luckily, npm is already installed on the Kudu agent, so that is easy to do. However, I made sure to call it in the same way that all the other code was called in the file, which meant calling :ExecuteCmd. This just makes sure that if the command fails, the error is echoed to the output automatically.

So the code I added so far looks like this

echo Moving to source directory
pushd "Src\DeploymentDemo"

echo Installing npm packages: Starting %TIME%
call :ExecuteCmd npm install
echo Installing npm packages: Finished %TIME%
IF !ERRORLEVEL! NEQ 0 goto error

As you can see, I also decided to add some echo calls in the code. This makes it a bit easier to debug any potential problems using the logs.

Once npm was done installing, it was time to run bower install. And once again, I just called it using :ExecuteCmd like this

echo Installing bower packages: Starting %TIME%
call :ExecuteCmd "bower" install
echo Installing bower packages: Finished %TIME%
IF !ERRORLEVEL! NEQ 0 goto error

And with my Bower-packages in place, it was time to run Gulp. And once again, same deal, :ExecuteCmd gulp. However, I also needed to make sure that any files generated by Gulp in the dist directory were added to the %DEPLOYMENT_TEMP% directory. Which I did by just calling xcopy, copying the dist directory to the temporary deployment directory. Like this

echo Running Gulp: Starting %TIME%
call :ExecuteCmd "gulp"
echo Running Gulp: Finished %TIME%

echo Publishing dist folder files to temporary deployment location
call :ExecuteCmd "xcopy""%DEPLOYMENT_SOURCE%\Src\DeploymentDemo\dist\*.*""%DEPLOYMENT_TEMP%\dist" /S /Y /I
echo Done publishing dist folder files to temporary deployment location

IF !ERRORLEVEL! NEQ 0 goto error

And finally, I just “popped” back to the original directory by calling popd… So the code I added between step 2 and 3 in the file ended up being this

// "Step 2" in the original script

echo Moving to source directory
pushd "Src\DeploymentDemo"

echo Installing npm packages: Starting %TIME%
call :ExecuteCmd npm install
echo Installing npm packages: Finished %TIME%
IF !ERRORLEVEL! NEQ 0 goto error

echo Installing bower packages: Starting %TIME%
call :ExecuteCmd "bower" install
echo Installing bower packages: Finished %TIME%
IF !ERRORLEVEL! NEQ 0 goto error

echo Running Gulp: Starting %TIME%
call :ExecuteCmd "gulp"
echo Running Gulp: Finished %TIME%

echo Publishing dist folder files to temporary deployment location
call :ExecuteCmd "xcopy""%DEPLOYMENT_SOURCE%\Src\DeploymentDemo\dist\*.*""%DEPLOYMENT_TEMP%\dist" /S /Y /I
echo Done publishing dist folder files to temporary deployment location
IF !ERRORLEVEL! NEQ 0 goto error

echo Moving back from source directory
popd

// "Step 3"in the original script

That’s it! After having added the new files, and modified the deploy.cmd file to do what I wanted, I committed my changes and pushed them to GitHub. And as soon as I did that, Azure picked up the changes and deployed my code, including the Gulp-generated files.

Simplifying development

However, there was still one thing that I wanted to solve… I wasn’t too happy about having my site using the bundled and minified JavaScript files at all times. During development, it made it hard to debug, and since VS is already transpiling my TypeScript to JavaScript on the fly, even adding source maps, why not use those files instead… That makes debugging much easier…

So in my cshtml-view, I added the following code where I previously just had an include for the minified JavaScript

@if (HttpContext.Current.IsDebuggingEnabled)
{
<script src="~/bower_components/angular/angular.js"></script>
<script src="~/bower_components/less/dist/less.min.js"></script>
<script src="~/Scripts/MainCtrl.js"></script>
<script src="~/Scripts/App.js"></script>
}
else
{
<script src="/dist/deploymentdemo.min.js"></script>
}

This code checks to see if the app is running in debug, and if it is, it uses the “raw” JavaScript files instead of the minified one. And also, it includes Angular in a non-minified version, as this adds some extra debugging capabilities. As you can see, I have also included less.min.js in the list of included JavaScript files. I will get back to why I did this in a few seconds…

Note: Yes, this is a VERY short list of files, and in a real project, this would be a PITA to work with. However, this can obviously be combined with smarter code to generate the list of files to include in a more dynamic way.
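As a rough illustration of that idea (hypothetical code, and ignoring the script ordering that a real solution would need to handle), the debug branch could enumerate the Scripts folder at runtime instead of listing each file:

@foreach (var script in System.IO.Directory.GetFiles(Server.MapPath("~/Scripts"), "*.js"))
{
    <script src="~/Scripts/@System.IO.Path.GetFileName(script)"></script>
}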

Next, I felt that having to wait for Gulp to transpile all my LESS to CSS was a bit of a hassle. Every so often, I ended up making a change to the LESS, and then refreshing the browser too fast, not giving Gulp enough time to run. So why not let the browser do the LESS transpiling on the fly during development?

To enable this, I changed the way that the view included the CSS. In a very similar way to what I did with the JavaScript includes

@if (HttpContext.Current.IsDebuggingEnabled)
{
<link href="~/bower_components/bootstrap/dist/css/bootstrap.min.css" rel="stylesheet"/>
<link rel="stylesheet/less" type="text/css" href="~/Styles/Site.less" />
}
else
{
<link href="/dist/deploymentdemo.min.css" rel="stylesheet" />
}

So, if the app is running in debug, it includes bootstrap.min.css and my LESS file. If not, it just includes deploymentdemo.min.css. However, to include the LESS file like this, you need to make sure that it is added with the rel attribute set to stylesheet/less, and that the less.min.js file is included, which I did in the previous snippet I showed.

That’s it! Running this app in debug will now give me a good way of handling my resources for development. And running it in the cloud will give the users a bundled and minified version of both the required JavaScript and CSS.

As usual on my blog, you can download the code used in this post. However, it is dependent on setting up a few things in Azure as well… So you can’t just download and try it out unfortunately. But, the code should at least give you a good starting point.

Code is available here: DeploymentDemo.zip (58.1KB)

I also want to mention that you might want to put in some more error checks and so on in there, but I assume that you know that code from blogs is generally just a quick demo of different concepts, not a complete solution… And if you didn’t, now you do!

Cheers!

PS: I have also blogged about how to upload the generated files to blob storage as part of the build. That is available here.

Uploading Resources to Blob Storage During Continuous Deployment using Kudu


In my last post, I wrote about how to run Gulp as part of your deployment process when doing continuous deployment from GitHub to an Azure Web App using Kudu. As part of that post, I used Gulp to generate bundled and minified JavaScript and CSS files that were to be served to the client.

The files were generated using Gulp, and included in the deployment under a directory called dist. However, they were still part of the website, so they still take up resources on the webserver, which has to serve them. They also take up precious connections from the browser to the server… By offloading them to Azure Blob Storage, we can decrease the number of requests the webserver gets, and increase the number of connections the browser can use to retrieve resources. And it isn’t that hard to do…

Modifying the deploy.cmd file

I’m not going to go through all the steps of setting the deployment up. All of that was already done in the previous post, so if you haven’t read that, I suggest you do that first…

The first thing that I need to do, is add some more functionality to my deploy.cmd file. So right after I “pop” back to the original directory, I add some more code. More specifically, I add the following code

echo Pushing dist folder to blobstorage
call :ExecuteCmd npm install azure-storage
call :ExecuteCmd "node" blobupload.js "dist""%STORAGE_ACCOUNT%""%STORAGE_KEY%"
echo Done pushing dist folder to blobstorage

Ok, so I start by echoing out that I am about to upload the dist folder to blob storage. Next, I use npm to install the azure-storage package, which includes code to help out with working with Azure storage in Node.

Next, I execute a script called blobupload.js using Node. I pass in 3 parameters: a string containing the name of the folder to upload, a storage account name, and a storage account key.

And finally I echo out that I’m done.

So what is in that blobupload.js file? Well, it is “just” a node script to upload the files in the specified folder. It starts out like this

var azure = require('azure-storage');
var fs = require('fs');

var distDirName = process.argv[2];
var accountName = process.argv[3];
var accountKey = process.argv[4];
var sourceDir = 'Src\\DeploymentDemo\\dist\\';

It “requires” the newly installed azure-storage package, and fs, which is what one uses to work with the file system in Node.

Next, it pulls out the arguments that were passed in. They are available in the process.argv array, from index 2 and forward (index 0 is the Node executable and index 1 is the script path, which I don’t need). And finally, it just defines the source directory to copy from.

Note: The source directory should probably be passed in as a parameter as well, making the script more generic. But it’s just a demo…
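If you wanted to make it generic, a tiny (hypothetical) tweak would be to read it as a fourth argument with a fallback, and pass it in from deploy.cmd after the storage key:

var sourceDir = process.argv[5] || 'Src\\DeploymentDemo\\dist\\';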

After all the important variables have been collected, it goes about doing the upload

var blobService = azure.createBlobService(accountName, accountKey);

blobService.createContainerIfNotExists(distDirName, { publicAccessLevel: 'blob' }, function(error, result, response) {
    if (error) {
        console.log(result);
        throw Error("Failed to create container");
    }

    var files = fs.readdirSync(sourceDir);
    for (i = 0; i < files.length; i++) {
        console.log("Uploading: " + files[i]);
        blobService.createBlockBlobFromLocalFile(distDirName, files[i], sourceDir + files[i], function(error, result, response) {
            if (error) {
                console.log(error);
                throw Error("Failed to upload file");
            }
        });
    }
});

It starts out by creating a blob service, which it uses to talk to blob storage. Next, it creates the target container if it doesn’t already exist. If that fails, it logs the result from the call and throws an error.

Once it knows that there is a target container, it uses fs to get the names of the files in the source directory. It then loops through those names, and uses the blob service’s createBlockBlobFromLocalFile method to upload each local file to blob storage.

That’s it… It isn’t harder than that…

Parameters

But wait a second! Where did those magical parameters called %STORAGE_ACCOUNT% and %STORAGE_KEY% that I used in deploy.cmd come from? Well, since Kudu runs in a context that knows about the Web App it is deploying to, it is nice enough to set up any app setting that you have configured for the target Web App, as a variable that you can use in your script using %[AppSettingName]%.

So I just went to the Azure Portal and added 2 app settings to the target Web App, and inserted the values there. This makes it very easy to have different values for different targets when using Kudu. It also means that you never have to check in your credentials.

Warning: You should NEVER EVER EVER EVER check in your credentials to places like GitHub. They should be kept VERY safe. Why? Well, read this, and you will understand.

Changing the website

Now that the resources are available in storage instead of in the local application, the web app needs to be modified to include them from there instead.

This could easily be done by changing

@if (HttpContext.Current.IsDebuggingEnabled)
{
<link href="~/bower_components/bootstrap/dist/css/bootstrap.min.css" rel="stylesheet"/>
<link rel="stylesheet/less" type="text/css" href="~/Styles/Site.less" />
}
else
{
<link href="https://[StorageAccountName].blob.core.windows.net/dist/deploymentdemo.min.css" rel="stylesheet" />
}

However, that is a bit limiting to me… I prefer changing it into

@if (HttpContext.Current.IsDebuggingEnabled) {
<link href="~/bower_components/bootstrap/dist/css/bootstrap.min.css" rel="stylesheet"/>
<link rel="stylesheet/less" type="text/css" href="~/Styles/Site.less" />
} else {
<link href="@ConfigurationManager.AppSettings["cdn.prefix"]/dist/deploymentdemo.min.css" rel="stylesheet" />
}

The difference, if it isn’t quite obvious, is that I am not hardcoding the storage account that I am using. Instead, I am reading it from the web app’s app settings. This gives me a few advantages. First of all, I can just not set the value at all, and it will default to using the files from the webserver. I can also set it to different storage accounts for different deployments. However, you might also have noticed that I called the setting cdn.prefix. The reason for this is that I can also just turn on the CDN in Azure, and then configure my setting to use that instead of the storage account straight up. So using this little twist, I can use my local files, files from any storage account, as well as a CDN if that is what I want…
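For completeness, locally that setting is just a regular appSettings entry in web.config (the value below is a made-up example; in Azure, an app setting with the same key configured on the Web App takes precedence):

<appSettings>
  <add key="cdn.prefix" value="https://mystorageaccount.blob.core.windows.net" />
</appSettings>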

This is a small twist to just using storage, but it offers a whole heap of more flexibility, so why wouldn’t you…?

That’s actually all there is to it! Not very complicated at all!

Uploading Resources to Blob Storage During Continuous Deployment using XAML Builds in Visual Studio Team Services


In my last blog post, I wrote about how we can set up continuous deployment to an Azure Web App, for an ASP.NET application that was using Gulp to generate client side resources. I have also previously written about how to do it using GitHub and Kudu (here and here). However, just creating the client side resources and uploading them to a Web App is really not the best use of Azure. It would be much better to offload those requests to blob storage, instead of having the webserver handle them. For several reasons…

So let’s see how we can modify the deployment from the previous post to also include uploading the created resources to blob storage as part of the build.

Creating a Script to Upload the Resources

The first thing we need to get this going, is some code that can do the actual uploading of the generated files to blob storage. And I guess one of the easiest ways is to just create a PowerShell script that does it.

So by calling in some favors, and Googling a bit, I came up with the following script

[CmdletBinding()]
param(
    [Parameter(Mandatory = $true)]
    [string]$LocalPath,

    [Parameter(Mandatory = $true)]
    [string]$StorageContainer,

    [Parameter(Mandatory = $true)]
    [string]$StorageAccountName,

    [Parameter(Mandatory = $true)]
    [string]$StorageAccountKey
)

function Remove-LeadingString
{
    [CmdletBinding()]
    param (
        [Parameter(Mandatory = $true, ValueFromPipeline = $true)]
        [AllowEmptyString()]
        [string[]]
        $String,

        [Parameter(Mandatory = $true)]
        [string]
        $LeadingString
    )

    process
    {
        foreach ($s in $String)
        {
            if ($s.StartsWith($LeadingString, $true, [System.Globalization.CultureInfo]::InvariantCulture))
            {
                $s.Substring($LeadingString.Length)
            }
            else
            {
                $s
            }
        }
    }
}

function Construct-ContentTypeProperty
{
    [CmdletBinding()]
    param (
        [Parameter(Mandatory = $true, ValueFromPipeline = $true)]
        [System.IO.FileInfo]
        $file
    )

    process
    {
        switch($file.Extension.ToLowerInvariant()) {
            ".svg" { return @{"ContentType"="image/svg+xml"} }
            ".css" { return @{"ContentType"="text/css"} }
            ".js" { return @{"ContentType"="application/javascript"} }
            ".json" { return @{"ContentType"="application/javascript"} }
            ".png" { return @{"ContentType"="image/png"} }
            ".html" { return @{"ContentType"="text/html"} }
            default { return @{"ContentType"="application/octetstream"} }
        }
    }
}

Write-Host "About to deploy to blobstorage"

# Check if Windows Azure Powershell is avaiable
if ((Get-Module -ListAvailable Azure) -eq $null)
{
throw"Windows Azure Powershell not found! Please install from http://www.windowsazure.com/en-us/downloads/#cmd-line-tools"
}

$context = New-AzureStorageContext -StorageAccountName $StorageAccountName -StorageAccountKey $StorageAccountKey

$existingContainer = Get-AzureStorageContainer -Context $context | Where-Object { $_.Name -like $StorageContainer }
if (!$existingContainer)
{
$newContainer = New-AzureStorageContainer -Context $context -Name $StorageContainer -Permission Blob
}

$dir = Resolve-Path ($LocalPath)
$files = (Get-ChildItem $dir -recurse | WHERE {!($_.PSIsContainer)})
foreach ($file in $files)
{
    $fqName = $file.Directory,'\',$file.Name
    $ofs = ''
    $fqName = [string]$fqName
    $prop = Construct-ContentTypeProperty $file
    $blobName = ($file.FullName | Remove-LeadingString -LeadingString "$($dir.Path)\")

    Set-AzureStorageBlobContent -Blob $blobName -Container $StorageContainer -File $fqName -Context $context -Properties $prop -Force
}

Yes, I needed to get some help with this. I have very little PowerShell experience, so getting some help just made it a lot faster…

Ok, so what does it do? Well, it isn’t really that complicated. It takes 4 parameters, the local path of the folder to upload, the name of the container to upload the files to, the name of the storage account, and the key to the account. All of these parameters will be passed in by the build definition in just a little while…

Next, it declares a function that can remove the beginning of a string if it starts with the specified string, as well as a function to get the content type of the file being uploaded based on the file extension.

After these two functions have been created, it verifies that the Azure PowerShell commandlets are available. If not, it throws an exception.

It then creates an Azure context, which is basically how you tell the commandlets you are calling what credentials to use.

This is then used to create the specified target container, if it doesn’t already exist. After that, it recursively walks through all files and folders in the specified upload directory, uploading one file at the time as it is found.

Not very complicated at all…although it could probably do with some optimization in the form of parallel uploads etc. I couldn’t quite figure that out though, and I had other things to solve as well…

Calling the Script During Deployment

Once the script is in place, it needs to be called during the build. Luckily, this is a piece of cake! All you need to do, is to add the following target in the XXX.wpp.targets file that was added in the previous post.

<Target Name="UploadToBlobStorage" AfterTargets="RunGulp" Condition="'$(Configuration)' != 'Debug'">
<Message Text="About to deploy front-end resources to blobstorage using the script found at $(ProjectDir)..\..\Tools\BlobUpload.ps1" />
<PropertyGroup>
<ScriptLocation Condition=" '$(ScriptLocation)'=='' ">$(ProjectDir)..\..\Tools\BlobUpload.ps1</ScriptLocation>
</PropertyGroup>
<Exec Command="powershell -NonInteractive -executionpolicy bypass -command &quot;&amp;{&amp;'$(ScriptLocation)' '$(ProjectDir)dist' 'dist' '$(StorageAccount)' '$(StorageAccountKey)'}&quot;" />
</Target>

As you can see, it is another Target, that is run after the Target called RunGulp. It also has that same condition that the NpmInstall target had, making sure that it isn’t run while building in Debug mode.

The only things that are complicated are the syntax of calling the PowerShell script, which is a bit wonky, and the magical $(StorageAccount) and $(StorageAccountKey) properties, which are actually properties that I have added at the top of the targets file like this

<PropertyGroup>
<StorageAccount></StorageAccount>
<StorageAccountKey></StorageAccountKey>
</PropertyGroup>

However, as you can see, they are empty. That’s because we will populate them from the build definition, so that we don’t have to check in our secret things into source control.

Modifying the Build Definition to Set the Storage Account Values

The final step is to edit the build definition, and make sure that the storage account name and key are set, so that the PowerShell script gets the values passed in properly.

To do this, you edit the build definition, going to the Process part of it and expanding the “5. Advanced” section. Like this

image

Under this section, you will find an “MSBuild arguments” item. In here, you can set the values of the properties used in the targets file. This is done by adding a /p:[PropertyName]=”[PropertyValue]” for each value you want to set. So in this case, I add

/p:StorageAccount="deploymentdemo" /p:StorageAccountKey="XXXXXXXX"

That’s it! If you just check in the modified targets file and the PowerShell script, you should be able to queue a new build and have it upload the generated files to blob storage for you.

Finally, to make sure you add the correct includes to the actual web page, I suggest having a look at this blog post. It is about doing this with Kudu from GitHub, but the part about “Changing the website” in that post is just as valid for this scenario. It enables the ability to easily switch between local resources, different blob storage accounts, and even a CDN.

Cheers!

Setting Up Continuous Deployment of an ASP.NET App with Gulp from VSTS to an Azure Web App using Scripted Build Definitions


A few weeks ago, I wrote a couple of blog posts on how to set up continuous deployment to Azure Web Apps, and how to get Gulp to run as a part of it. I covered how to do it from GitHub using Kudu, and how to do it from VSTS using XAML-based build definitions. However, I never got around to doing a post about how to do it using the new scripted build definitions in VSTS. So that is what this post is going to be about!

The Application

The application I’ll be working with is the same one that I have been using in the previous posts. So if you haven’t read them, you might want to go and have a look at them. Or at least the first part of the first post, which includes the description of the application in use. Without that knowledge, this post might be a bit hard to follow…

If you don’t feel like reading more than you need to, the basics are these. It’s an ASP.NET web application that uses TypeScript and LESS, and Gulp for generating transpiled, bundled and minified versions of these resources. The files are read from the Styles and Scripts directories, and built to a dist directory using the “default” task in Gulp. The source code for the whole project is placed in a Src directory in the root of the repo…and the application is called DeploymentDemo.

I think that should be enough to figure out most of the workings of the application…if not, read the first post!

Setting up a new build

Ok, so the first step is to set up a new build in our VSTS environment. And to do this, all you need to do, is to log into visualstudio.com, go to your project and click the “Build” tab

image

Next, click the fat, green plus sign, which gives you a modal window where you can select a template for the build definition you are about to create. However, as I’m not just going to build my application, but also deploy it, I will click on the “Deployment” tab. And since I am going to deploy an Azure Web App, I select the “Azure Website” template and click next.

image

Note: Yes, Microsoft probably should rename this template, but that doesn’t really matter. It will still do the right thing.

Warning: If you go to the Azure portal and set up CD from there, you will actually get a XAML-based build definition, and not a scripted one. So you have to do it from in here.

Note: VSTS has a preview feature right now, where you split up the build and deployment steps into multiple steps. However, even if this is a good idea, I am just going to keep it simple and do it as a one-step procedure.

On the next screen, you get to select where the source code should come from. In this case, I’ll choose Git, as my solution is stored in a Git-based VSTS project. And after that, I just make sure that the right repo and branch is selected.

Finally, I make sure to check the “Continuous integration…”-checkbox, making sure that the build is run every time I push a change.

image

That’s it! Just click “Create” to create the build definition!

Note: In this window you are also asked what agent queue to use by default. In this example, I’ll leave it on “Hosted”. This will give me a build agent hosted by Microsoft, which is nice. However, this solution can actually be a bit slow at times, and limited, as you only get a certain amount of minutes of free builds. So if you run into any of these problems, you can always opt in to having your own build agent in a VM in Azure. This way you get dedicated resources to do builds. Just keep in mind that the build agent will incur an extra cost.

Once that is done, you get a build definition that looks like this

image

Or at least you did when I wrote this post…

As you can see, the steps included are:

1. Visual Studio Build – A step that builds a Visual Studio solution

2. Visual Studio Test – A step that runs tests in the solution and makes sure that failing tests fail the build

3. Azure Web App Deployment – A step that publishes the build web application to a Web App in Azure

4. Index Sources & Publish Symbols – A step that creates and publishes pdb-files

5. Copy and Publish Artifacts – A step that copies build artifacts generated by the previous steps to a specified location

Note: Where is the step that downloads the source from the Git repo? Well, that is actually not its own step. It is part of the definition, and can be found under the “Repository” tab at the top of the screen.

In this case however, I just want to build and deploy my app. I don’t plan on running any tests, or generating pdb-files etc, so I’m just going to remove some of the steps… To be honest, the only steps I want to keep are steps 1 and 3. So it looks like this

image

Configuring the build step

Ok, now that I have the steps I need, I guess it is time to configure them. There is obviously something wrong with the “Azure Web App Deployment” step considering that it is red and bold…  But before I do anything about that, I need to make a change to the “Visual Studio Build” step.

As there will be some npm stuff being run, which generates that awesome, and very deep, folder structure inside of the “node_modules” folder, the “Visual Studio Build” step will unfortunately fail in its current configuration. It defines the solution to build as **/*.sln, which means “any file with an .sln-extension, in any folder”. This causes the build step to walk through _all_ the folders, including the “node_modules” folder, searching for solution files. And since the folder structure is too deep, it seems to fail if left like this. So it needs to be changed to point to the specific solution file to use. In this case, that means setting the Solution setting to Src/DeploymentDemo.sln. Like this

image

Configuring the deployment step

Ok, so now that the build step is set up, we need to have a look at the deployment part. Unfortunately, this is a bit more complicated than it might seem, and to be honest, than it really needed to be. At first look, it doesn’t look too bad

image

Ok, so all we need to do is to select the subscription to use, the Web App to deploy to and so on… That shouldn’t be too hard. Unfortunately all that becomes a bit more complicated when you open the “Azure Subscription” drop-down and realize that it is empty…

The first thing you need to do is to give VSTS access to your Azure account, which means adding a “Service Endpoint”. This is done by clicking the Manage link to the right of the drop-down, which opens a new tab where you can configure “Service Endpoints”.

image

The first thing to do is to click the fat, green plus sign and select Azure in the drop-down. This opens a new modal like this

image

There are 3 different ways to add a new connection: Credentials, Certificate Based and Service Principal Authentication. In this case, I’ll switch over to Certificate Based.

Note: If you want to use Service Principal Authentication you can find more information here

First, the connection needs a name. It can be whatever you want. It is just a name.

Next, you need to provide a bunch of information about your subscription, which is available in the publish settings file for your subscription. The easiest way to get hold of this file, is to hover over the tooltip icon, and then click the link called publish settings file included in the tool tip pop-up.

image

This brings you to a page where you can select what directory you want to download the publish settings for. So just select the correct directory, click “Submit”, and save the generated file to somewhere on your machine. Once that is done, you can close down the new tab and return to the “Add New Azure Connection” modal.

To get hold of the information you need, just open the newly downloaded file in a text editor. It will look similar to this

image

As you can see, there are a few bits of information in here. And it can be MUCH bigger than this if you have many subscriptions in the directory you have chosen. So remember to locate the correct subscription if you have more than one.
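If the screenshot is hard to make out, the file is just XML, shaped roughly like this (values elided, and the exact schema can vary a bit between versions):

<PublishData>
  <PublishProfile SchemaVersion="2.0" PublishMethod="AzureServiceManagementAPI">
    <Subscription Id="..." Name="..." ManagementCertificate="..." ServiceManagementUrl="https://management.core.windows.net" />
  </PublishProfile>
</PublishData>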

The parts that are interesting would be the attribute called Id, which needs to be inserted in the Subscription Id field in the modal, the attribute called Name, which should be inserted in Subscription Name, and finally the attribute called ManagementCertificate, which goes in the Management Certificate textbox. Like this

image

Once you click OK, the information will be verified, and if everything is ok, the page will reload, and you will have a new service endpoint to play with. Once that is done, you can close down the tab, and return to the build configuration set up.

The first thing you need to do here, is to click the “refresh” button to the right of the drop-down to get the new endpoint to show up. Next, you select the newly created endpoint in the drop-down.

After that, you would assume that the Web App Name drop-down would be populated with all the available web apps in your subscription. Unfortunately, this is not the case for some reason. So instead, you have to manually insert the name of the Web App you want to deploy to.

Note: You have two options when selecting the name of the Web App. Either, you choose the name of a Web App that you have already provisioned through the Azure portal, or you choose a new name, and if that name is available, the deployment script will create a new Web App for you with that name on the fly.

Next, select the correct region to deploy to, as well as any specific slot you might be deploying to. If you are deploying to the default slot, just leave the “slot” textbox empty.

The Web Deploy Package box is already populated with the value $(build.stagingDirectory)\**\*.zip, which works fine for this. If you have more complicated builds, or your application contains other zips that will be output by the build, you might have to change this.

Once that is done, all you have to do is click the Save button in the top left corner, give the build definition a name, and you are done with the configuration.

Finally, click the Queue build… button to queue a new build, and in the resulting modal, just click OK. This will queue a new build, and give you a screen like this while you wait for an agent to become available

image

Note: Yes, I have had a failing build before I took this “screen shot”. Yours might look a little bit less red…

And as soon as there is an agent available for you, the screen will change into something like this

image

where you can follow along with what is happening in the build. And finally, you should be seeing something like this

image

At least if everything goes according to plan

Adding Gulp to the build

So far, we have managed to configure a build and deployment of our solution. However, we are still not including the Gulp task that is responsible for generating the required client-side resources. So that needs to be sorted out.

The first thing we need to do is to run

npm install

To do this, click the fat, green Add build step… button at the top of the configuration

image

and in the resulting modal, select Package in the left hand menu, and then add an npm build step

image

Next, close the modal and drag the new build step to the top of the list of steps.

By default, the command to run is set to install, which is what we need. However, we need it to run in a different directory than the root of the repository. So in the settings for the npm build step, expand the Advanced area, and update the Working Directory to say Src/DeploymentDemo.

image

Ok, so now npm will install all the required npm packages for us before the application is built.

Next, we need to run

bower install

To do this, add a new build step of the type Command Line from the Utility section, and drag it so that it is right after the npm step. The configuration we need for this to work is the following

Tool should be $(Build.SourcesDirectory)\Src\DeploymentDemo\node_modules\.bin\bower.cmd, the arguments should be install, and the working folder should be Src/DeploymentDemo

image

This will execute the bower command file, which is the same as running Bower in the command line, passing in the argument install, which will install the required bower packages. And setting the working directory will make sure it finds the bower.json file and installs the packages in the correct folder.

Now that the Bower components have been installed, or at least been configured to be installed, we can run Gulp. To do this, just add a new Gulp build step, which can be found under the Build section. And then make sure that you put it right after the Command Line step.

As our gulpfile.js isn’t in the root of the repo, the Gulp File Path needs to be changed to Src/DeploymentDemo/gulpfile.js, and the working directory once again has to be set to Src/DeploymentDemo

image

As I’m using the default task in this case, I don’t need to set the Gulp Task(s) to get it to run the right task.

Finally, I want to remove any leftover files from the build agent, as these can cause potential problems. They really shouldn’t, at least not if you are running the hosted agent, but I have run into some weird stuff when running on my own agent, so I try to always clean up after the build. To do this, I will run the batch file called delete_folder.bat in the Tools directory of my repo. This uses RoboCopy to safely remove deep folder structures, like the node_modules and bower_components folders (a rough sketch of that trick is shown below).
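I haven’t included the exact contents of delete_folder.bat in this post, but the usual RoboCopy trick looks roughly like this (a sketch, assuming the folder to remove is passed as the first argument):

@echo off
rem Mirror an empty directory over the target, which empties even very deep trees, then remove both
set TARGET=%1
mkdir empty_for_delete
robocopy empty_for_delete %TARGET% /MIR > NUL
rmdir /S /Q %TARGET%
rmdir /S /Q empty_for_delete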

To wire this up, I add two new build steps to the end of the definition. Both of them are of the type Batch Script from the Utility section of the Add Build Step modal.

Both of them need to have their Path set to Tools/delete_folder.bat, their Working Folder set to Src/DeploymentDemo, and their Always run checkbox checked. However, the first step needs to have the Arguments set to node_modules and the second one have it set to bower_components

image

This will make sure that the bower_components and node_modules folders are removed after each build.

Finally save the build configuration and you should be done! It should look something like this

image

However, there is still one problem. Gulp will generate new files for us, as requested, but it won’t be added to the deployment unfortunately. To solve this, we need to tell MSDeploy that we want to have those files added to the deployment. To do this, a wpp.targets-file is added to the root of the project, and checked into source control. The file is in this case called DeploymentDemo.wpp.targets and looks like this

<?xml version="1.0" encoding="utf-8" ?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

  <Target Name="AddGulpFiles" BeforeTargets="CopyAllFilesToSingleFolderForPackage;CopyAllFilesToSingleFolderForMsdeploy">
    <Message Text="Adding gulp-generated files to deploy" Importance="high"/>
    <ItemGroup>
      <CustomFilesToInclude Include=".\dist\**\*.*"/>
      <FilesForPackagingFromProject Include="%(CustomFilesToInclude.Identity)">
        <DestinationRelativePath>.\dist\%(RecursiveDir)%(Filename)%(Extension)</DestinationRelativePath>
      </FilesForPackagingFromProject>
    </ItemGroup>
  </Target>

</Project>

It basically tells the system that any files in the dist folder should be added to the deployment.

Note: You can read more about wpp.targets-files and how/why they work here: http://chris.59north.com/post/Integrating-a-front-end-build-pipeline-in-ASPNET-builds

That’s it! Queuing a new build, or pushing a new commit should cause the build to run, and a nice new website should be deployed to the configured location, including the resources generated by Gulp. Unfortunately, due to the npm and Bower work, the build can actually be a bit slow-ish. But it works!

Cheers!

Integrating with Github Webhooks using OWIN


For some reason I got the urge to have a look at webhooks when using GitHub. Since it is a feature that is used extensively by build servers and other applications to do things when code is pushed to GitHub etc, I thought it might be cool to have a look at how it works under the hood. And maybe build some interesting integration in the future…

The basic idea behind it is that you tell GitHub that you want to get notified when things happen in your GitHub repo, and GitHub makes sure to do so. It does so using a regular HTTP call to an endpoint of your choice.
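In practice, that call is just an HTTP POST to your endpoint, with a JSON body describing the event and a few GitHub-specific headers, roughly along these lines (host and values are placeholders):

POST /webhook HTTP/1.1
Host: example.ngrok.io
Content-Type: application/json
X-Github-Event: push
X-Github-Delivery: <a guid identifying this delivery>
X-Hub-Signature: sha1=<HMAC SHA1 of the body, using your secret>

{ "ref": "refs/heads/master", "before": "...", "after": "...", "commits": [ ... ], "pusher": { ... }, "sender": { ... } }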

So to start off, I decided to create a demo repo on GitHub and then add a webhook. This is easily done by browsing to your repo and clicking the “Settings” link to the right.

image

In the menu on the left hand side in the resulting page, you click the link called “Webhooks & Services”.

image

Then you click “Add webhook” and add the “Payload URL”, the content type to use and a secret. You can also define whether you want more than just the “push” event, which tells you when someone pushed some code, or if that is good enough. For this post, that is definitely enough… Clicking the “Add webhook” button will do just that, set up a webhook for you. Don’t forget that the hook must be active to work…

image

Ok, that is all you need to do to get webhooks up and running from that end!

There is a little snag though… GitHub will call your “Payload URL” from the internet (obviously…). This can cause some major problems when working on your development machine. Luckily this can be easily solved by using a tool called ngrok.

ngrok is a secure tunneling service that makes it possible to easily expose a local http port on your machine on the net. Just download the program and run it in a console window passing it the required parameters. In this case, tunneling an HTTP connection on port 4567 would work fine.

ngrok http 4567

image

The important part to note here is the http://e5630ddd.ngrok.io forwarding address. This is what you need to use when setting up the webhook at GitHub. And if you want more information about what is happening while using ngrok, just browse to http://127.0.0.1:4040/.

Ok, so now we have a webhook set up, and a tunnel that will make sure it can be called. The next thing is to actually respond to it…

In my case, I created a Console application in VS2013 and added the NuGet packages Microsoft.Owin.SelfHost and Newtonsoft.Json. Next, I created a new OWIN middleware called WebhookMiddleware, as well as a WebhookMiddlewareOptions and WebhookMiddlewareExtensions, to be able to follow the nice UseXXX() pattern for IAppBuilder. It looks something like this

public class WebhookMiddleware : OwinMiddleware
{
    private readonly WebhookMiddlewareOptions _options;

    public WebhookMiddleware(OwinMiddleware next, WebhookMiddlewareOptions options)
        : base(next)
    {
        _options = options;
    }

    public override async Task Invoke(IOwinContext context)
    {
        await Next.Invoke(context);
    }
}

public class WebhookMiddlewareOptions
{
    public WebhookMiddlewareOptions()
    {
    }

    public string Secret { get; set; }
}

public static class WebhookMiddlewareExtensions
{
    public static void UseWebhooks(this IAppBuilder app, string path, WebhookMiddlewareOptions options = null)
    {
        if (!path.StartsWith("/"))
            path = "/" + path;

        app.Map(path, (app2) =>
        {
            app2.Use<WebhookMiddleware>(options ?? new WebhookMiddlewareOptions());
        });
    }
}

As you can see, we use the passed in path to map when the middleware should be used…

Ok, now that all the middleware scaffolding is there, let’s just quickly add it to the IAppBuilder as well…

class Program
{
    static void Main(string[] args)
    {
        using (WebApp.Start<Startup>("http://127.0.0.1:4567"))
            Console.ReadLine();
    }
}

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        app.UseWebhooks("/webhook", new WebhookMiddlewareOptions { Secret = "12345" });
    }
}

Note: Pay attention to the address passed to the OWIN server. It isn’t the usual localhost. Instead, I use 127.0.0.1, which is kind of the same, but not quite. To get the ngrok tunnel to work, it needs to be 127.0.0.1

That’s the baseline… Let’s add some actual functionality!

The first thing I need is a way to expose the information sent from GitHub. It is sent using JSON, but I would prefer it if I could get it statically typed for the end user. As well as readonly… So I added 4 classes that can represent at least the basic information sent back.

If we start from the top, we have the WebhookEvent. It is the object containing the basic information sent from GitHub. It looks like this

public class WebhookEvent
{
    protected WebhookEvent(string type, string deliveryId, string body)
    {
        Type = type;
        DeliveryId = deliveryId;

        var json = JObject.Parse(body);
        Ref = json["ref"].Value<string>();
        Before = json["before"].Value<string>();
        After = json["after"].Value<string>();
        HeadCommit = new GithubCommit(json["head_commit"]);
        Commits = json["commits"].Values<JToken>().Select(x => new GithubCommit(x)).ToArray();
        Pusher = new GithubUser(json["pusher"]);
        Sender = new GithubIdentity(json["sender"]);
    }

    public static WebhookEvent Create(string type, string deliveryId, string body)
    {
        return new WebhookEvent(type, deliveryId, body);
    }

    public string Type { get; private set; }
    public string DeliveryId { get; private set; }
    public string Ref { get; private set; }
    public string Before { get; private set; }
    public string After { get; private set; }
    public GithubCommit HeadCommit { get; set; }
    public GithubCommit[] Commits { get; set; }
    public GithubUser Pusher { get; private set; }
    public GithubIdentity Sender { get; private set; }
}

As you can see, it is just a basic DTO that parses the JSON sent from GitHub and puts it in a statically typed class…

The WebhookEvent class exposes referenced commits using the GitHubCommit class, which looks like this

public class GithubCommit
{
    public GithubCommit(JToken data)
    {
        Id = data["id"].Value<string>();
        Message = data["message"].Value<string>();
        TimeStamp = data["timestamp"].Value<DateTime>();
        Added = ((JArray)data["added"]).Select(x => x.Value<string>()).ToArray();
        Removed = ((JArray)data["removed"]).Select(x => x.Value<string>()).ToArray();
        Modified = ((JArray)data["modified"]).Select(x => x.Value<string>()).ToArray();
        Author = new GithubUser(data["author"]);
        Committer = new GithubUser(data["committer"]);
    }

    public string Id { get; private set; }
    public string Message { get; private set; }
    public DateTime TimeStamp { get; private set; }
    public string[] Added { get; private set; }
    public string[] Removed { get; private set; }
    public string[] Modified { get; private set; }
    public GithubUser Author { get; private set; }
    public GithubUser Committer { get; private set; }
}

Once again, it is just a JSON parsing DTO. And so are the last 2 classes, the GitHubIdentity and GitHubUser…

public class GithubIdentity
{
    public GithubIdentity(JToken data)
    {
        Id = data["id"].Value<string>();
        Login = data["login"].Value<string>();
    }

    public string Id { get; private set; }
    public string Login { get; private set; }
}

public class GithubUser
{
    public GithubUser(JToken data)
    {
        Name = data["name"].Value<string>();
        Email = data["email"].Value<string>();
        if (data["username"] != null)
            Username = data["username"].Value<string>();
    }

    public string Name { get; private set; }
    public string Email { get; private set; }
    public string Username { get; private set; }
}

Ok, those are all the boring scaffolding classes to get data from the JSON to C# code…

Let’s have a look at the actual middleware implementation…

The first thing it needs to do is to read out the values from the request. This is very easy to do. The webhook will get 3 headers and a body. And they are read like this

public override async Task Invoke(IOwinContext context)
{
    var eventType = context.Request.Headers["X-Github-Event"];
    var signature = context.Request.Headers["X-Hub-Signature"];
    var delivery = context.Request.Headers["X-Github-Delivery"];

    string body;
    using (var sr = new StreamReader(context.Request.Body))
    {
        body = await sr.ReadToEndAsync();
    }
}

Ok, now that we have all the data, we need to verify that the signature passed in the X-Hub-Signature header is correct.

The passed value will look like this: sha1=XXXXXXXXXXX, where the XXXXXXXXX part is an HMAC SHA1 hash generated using the body and the secret. To validate the hash, I add a method to the WebhookMiddlewareOptions class, and make it private. In just a minute I will explain how I can still let the user make modifications to it even if it is private…

It looks like this

private bool ValidateSignature(string body, string signature)
{
    var vals = signature.Split('=');
    if (vals[0] != "sha1")
        return false;

    var encoding = new System.Text.ASCIIEncoding();
    var keyByte = encoding.GetBytes(Secret);

    var hmacsha1 = new HMACSHA1(keyByte);

    var messageBytes = encoding.GetBytes(body);
    var hashmessage = hmacsha1.ComputeHash(messageBytes);
    var hash = hashmessage.Aggregate("", (current, t) => current + t.ToString("X2"));

    return hash.Equals(vals[1], StringComparison.OrdinalIgnoreCase);
}

As you can see, it is pretty much just a matter of generating a HMAC SHA1 hash based on the body and secret, and then verifying that they are equal. I do this case-insensitive as the .NET code will generate uppercase characters, and the GitHub signature is lowercase.

Now that we have this validation in place, it is time to hook it up. I do this by exposing a OnValidateSignature property of type Func<string, string, bool> on the options class, and assign it to the private function in the constructor.

public class WebhookMiddlewareOptions
{
    public WebhookMiddlewareOptions()
    {
        OnValidateSignature = ValidateSignature;
    }

    ...

    public string Secret { get; set; }
    public Func<string, string, bool> OnValidateSignature { get; set; }
}

This way, the user can just leave that property, and it will verify the signature as defined. Or he/she can replace the func with their own implementation and override the way the validation is done.
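So overriding it is just a matter of assigning the delegate when registering the middleware. A minimal (and deliberately silly) sketch, just to show the shape of it:

app.UseWebhooks("/webhook", new WebhookMiddlewareOptions
{
    Secret = "12345",
    OnValidateSignature = (body, signature) =>
    {
        // Hypothetical replacement logic - a real implementation should verify the HMAC properly
        return signature.StartsWith("sha1=");
    }
});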

The next step is to make sure that we validate the signature in the middleware

public override async Task Invoke(IOwinContext context)
{
    ...

    if (!_options.OnValidateSignature(body, signature))
    {
        context.Response.ReasonPhrase = "Could not verify signature";
        context.Response.StatusCode = 400;
        return;
    }
}

And as you can see, if the validation doesn’t approve the signature, it returns an HTTP 400.

So why are we validating this? Well, considering that you are exposing this endpoint on the web, it could get compromised and someone could be sending spoofed messages to your application…

Ok, the last thing to do is to make it possible for the middleware user to actually do something when the webhook is called. Once again I expose a delegate on my options class. In this case it is an Action of type WebhookEvent, called OnEvent. And once again I add a default implementation in the options class itself. However, the implementation doesn’t actually do anything in this case. But it means that I don’t have to do a null check…

public class WebhookMiddlewareOptions
{
    public WebhookMiddlewareOptions()
    {
        ...
        OnEvent = (obj) => { };
    }

    ...

    public Action<WebhookEvent> OnEvent { get; set; }
}

And now that we have a way to tell the user that the webhook has been called, we just need to do so…

public override async Task Invoke(IOwinContext context)
{
    ...

    _options.OnEvent(WebhookEvent.Create(eventType, delivery, body));

    context.Response.StatusCode = 200;
}

The last thing to do is also to send an HTTP 200 back to the server, telling it that the request has been accepted and processed properly.

Now that we have a callback system in place, we can easily hook into the webhook and do whatever processing we want. In my case, that means doing a very exciting Console.WriteLine

public void Configuration(IAppBuilder app)
{
    app.UseWebhooks("/webhook", new WebhookMiddlewareOptions
    {
        Secret = "12345",
        OnEvent = (e) =>
        {
            Console.WriteLine("Incoming hook call: {0}\r\nCommits:\r\n{1}", e.Type, string.Join("\r\n", e.Commits.Select(x => x.Id)));
        }
    });

    app.UseWelcomePage("/");
}

That’s it! A fully working and configurable webhook integration using OWIN!

And as usual, there is code for you to download! It is available here: DarksideCookie.Owin.GithubWebhooks.zip (13KB)

Cheers!
