
Integrating a front-end build pipeline in ASP.NET builds


A while back I wrote a couple of blog posts about how to set up a “front-end build pipeline” using Gulp. The pipeline handled things like Less to CSS conversion, bundling and minification, TypeScript to JavaScript transpilation etc. However, that pipeline was built on the idea that you would have Gulp watching your files as you worked, doing the work as they changed. The problem with this is that it only runs on the development machine, and not on the buildserver… I nicely avoided this topic by ending my second post with “But that is a topic for the next post”. I guess it is time for that next post…

So let’s start by looking at the application I will be using for this example!

The post is based on a very small SPA built on ASP.NET MVC and AngularJS. The application itself is really not that interesting, but I needed something to demo the build with, and it needed to include some things that require processing during the build. So I decided to build the application using TypeScript instead of JavaScript, and Less instead of CSS. This means that I need to transpile my code, and then bundle and minify it.

I also want a configurable solution where I can serve up the raw Less files, together with a client-side Less compiler, as well as the un-bundled and un-minified JavaScript files, during development, and the bundled and minified CSS and JavaScript in production. And on top of that, I want to be able to include a CDN in the mix as well, if that ever becomes a viable option for my app.

Ok, so let’s get started! The application looks like this

[screenshot of the sample application project]

As you can see, it is just a basic ASP.NET MVC application. It includes a single MVC controller that returns a simple view hosting the Angular application. The only thing that has been added on top of the empty MVC project is the Microsoft.AspNet.Web.Optimization NuGet package, which I will use to handle the somewhat funky bundling that I need.

What the application does is fairly irrelevant to the post. All that matters is that it is an SPA with some TypeScript files, some Less files and some external libraries that need transpiling, bundling, minification and so on during the build.

As I set to work on my project, I use Bower to create a bower.json file, and then install the required Bower dependencies, adding them to the bower.json file using the --save switch. In this case AngularJS, Bootstrap and Less. This means that I can restore the Bower dependencies really easily by running “bower install”, instead of having to check them into source control.

Next I need to set up my bundling using the ASP.NET bundling stuff in the Microsoft.AspNet.Web.Optimization NuGet package. This is done in the BundleConfig file.

If you create an ASP.NET project using the MVC template instead of the empty one that I used, you will get this package by default, as well as the BundleConfig.cs file. If you decided to do it like me, and use the empty one, you need to add a BundleConfig.cs file in the App_Start folder, and make sure that you call the BundleConfig.RegisterBundles() method during application start in Global.asax.
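For reference, the wiring in Global.asax is minimal. This is a rough sketch of what it amounts to, using the standard System.Web.Optimization types:

public class Global : HttpApplication
{
    protected void Application_Start()
    {
        // Register the script and style bundles defined in BundleConfig.
        BundleConfig.RegisterBundles(BundleTable.Bundles);
    }
}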

Inside the BundleConfig.RegisterBundles I define what bundles I want, and what files should be included in them… In my case, I want 2 bundles. One for scripts, and one for styles.

Let’s start with the script bundle, which I call ~/scripts. It looks like this

public static void RegisterBundles(BundleCollection bundles)
{
    bundles.Add(new ScriptBundle("~/scripts", ConfigurationManager.AppSettings["Optimization.PathPrefix"] + "scripts.min.js") { Orderer = new ScriptBundleOrderer() }
        .Include("~/bower_components/angularjs/angular.js")
        .Include("~/bower_components/less/dist/less.js")
        .IncludeDirectory("~/Content/Scripts", "*.js", true));
    ...
}

As you can see from the snippet above, I include AngularJS and less.js from my bower_components folder, as well as all of my JavaScript files located in the /Content/Scripts/ folder and its subfolders. This will include everything my application needs to run. However, there are 2 other things that are very important in this code. The first one is the second parameter passed to the ScriptBundle constructor. It is a string that defines the Url to use when the site is configured to use a CDN. I pass in a semi-dynamically created string, by concatenating a value from my web.config and “scripts.min.js”.

In my case, the Optimization.PathPrefix value in my web.config is defined as “/Content/dist/” at the moment. This means that if I were to tell the application to use the CDN path, it would end up writing out a script tag with the source set to “/Content/dist/scripts.min.js”. However, if I ever did decide to actually use a proper CDN, I could switch my web.config setting to something like “//mycdn.cdnnetwork.com/”, which would mean that the source would change to //mycdn.cdnnetwork.com/scripts.min.js, and all of a sudden point to an external CDN instead of my local files. This is a very useful thing to be able to do if one were to introduce a CDN…

The second interesting thing to note in the above snippet is the { Orderer = new ScriptBundleOrderer() }. This is a way for me to control the order in which my files are added to the page. Unless you use some dynamic resource loading strategy in your code, the order in which your scripts are loaded is actually important… So for this application, I created this

private class ScriptBundleOrderer : DefaultBundleOrderer
{
    public override IEnumerable<BundleFile> OrderFiles(BundleContext context, IEnumerable<BundleFile> files)
    {
        var defaultOrder = base.OrderFiles(context, files).ToList();

        var firstFiles = defaultOrder.Where(x => x.VirtualFile.Name != null && x.VirtualFile.Name.StartsWith("~/bower_components", StringComparison.OrdinalIgnoreCase)).ToList();
        var lastFiles = defaultOrder.Where(x => x.VirtualFile.Name != null && x.VirtualFile.Name.EndsWith("module.js", StringComparison.OrdinalIgnoreCase)).ToList();
        var app = defaultOrder.Where(x => x.VirtualFile.Name != null && x.VirtualFile.Name.EndsWith("app.js", StringComparison.OrdinalIgnoreCase)).ToList();

        var newOrder = firstFiles
            .Concat(defaultOrder.Where(x => !firstFiles.Contains(x) && !lastFiles.Contains(x) && !app.Contains(x)).ToList())
            .Concat(lastFiles)
            .Concat(app)
            .ToList();

        return newOrder;
    }
}

It is pretty much a quick hack to make sure that all libraries from the bower_components folder are added first, then all JavaScript files that are not in that folder and do not end with “module.js” or “app.js”, then all files ending with “module.js”, and finally “app.js”. This means that my SPA is loaded in the correct order. It all depends on the naming conventions one uses, but for me, and this solution, this will suffice…

Next it is time to add my styles. It looks pretty much identical, except that I add Less files instead of JavaScript files…and I use a StyleBundle instead of a ScriptBundle

public static void RegisterBundles(BundleCollection bundles)
{
    ...
    bundles.Add(new StyleBundle("~/styles", ConfigurationManager.AppSettings["Optimization.PathPrefix"] + "styles.min.css")
        .Include("~/bower_components/bootstrap/less/bootstrap.less")
        .IncludeDirectory("~/Content/Styles", "*.less", true));
    ...
}

As you can see, I supply a CDN path here as well. However, I don’t need to mess with the ordering this time, as my application only has 2 Less files, and they are added in the correct order. But if you had a more complex scenario, you could just create another orderer.
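Just to give an idea, such a style orderer could look something like this. The convention it encodes (reset/normalize files first) is my own example, not something from the original solution:

private class StyleBundleOrderer : DefaultBundleOrderer
{
    public override IEnumerable<BundleFile> OrderFiles(BundleContext context, IEnumerable<BundleFile> files)
    {
        var defaultOrder = base.OrderFiles(context, files).ToList();

        // Put any reset/normalize stylesheets first, then everything else in default order.
        var resets = defaultOrder
            .Where(x => x.VirtualFile.Name != null && x.VirtualFile.Name.EndsWith("normalize.less", StringComparison.OrdinalIgnoreCase))
            .ToList();

        return resets.Concat(defaultOrder.Except(resets)).ToList();
    }
}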

And yes, I include Less files instead of CSS. I will then use less.js, which is included in the script bundle, to convert it to CSS in the browser…

Ok, that’s it for the RegisterBundles method, except for 2 more lines of code

public static void RegisterBundles(BundleCollection bundles)
{
    ...
    bundles.UseCdn = bool.Parse(ConfigurationManager.AppSettings["Optimization.UseBundling"]);
    BundleTable.EnableOptimizations = bundles.UseCdn;
}

These two lines make it possible to configure whether or not to use the bundled and minified versions of my scripts and styles, by setting a value called Optimization.UseBundling in the web.config file. By default, the optimization stuff will serve un-bundled files if the current compilation is set to debug in web.config, dynamically bundled files if set to “not debug”, and the CDN path if UseCdn is set to true. In my case, I short-circuit this and make it all dependent on the web.config setting…

Tip: To be honest, I generally add a bit more config in here, making it possible to serve not only the un-bundled and un-minified JavaScript files or the bundled and minified versions. Instead, I like being able to use bundled but not minified files as well. This can make debugging a lot easier sometimes. But it is up to you whether you want that or not…I just kept it simple here…
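As a sketch of what that richer configuration could look like, something along these lines would work. The Optimization.Mode setting name and the enum are my own invention for illustration:

public enum OptimizationMode { None, Bundled, BundledAndMinified }

public static void RegisterBundles(BundleCollection bundles)
{
    // ...bundle definitions as before...

    var mode = (OptimizationMode)Enum.Parse(typeof(OptimizationMode), ConfigurationManager.AppSettings["Optimization.Mode"]);

    BundleTable.EnableOptimizations = mode != OptimizationMode.None;
    if (mode == OptimizationMode.Bundled)
    {
        // Bundle, but skip minification, by clearing the transforms on each bundle.
        foreach (var bundle in bundles)
        {
            bundle.Transforms.Clear();
        }
    }
}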

Ok, now all the bundling is in place! Now it is just a matter of adding it to the actual page. Something that is normally not a big problem. You just call Styles.Render() and Scripts.Render(). Unfortunately, when adding Less files instead of CSS, the link tags that are added need to have a different type defined. So to solve that, I created a little helper class called LessStyles. It looks like this

public static class LessStyles
{
    private const string LessLinkFormat = "<link rel=\"stylesheet/less\" type=\"text/css\" href=\"{0}\" />";

    public static IHtmlString Render(params string[] paths)
    {
        if (!bool.Parse(ConfigurationManager.AppSettings["Optimization.UseBundling"]) && HttpContext.Current.IsDebuggingEnabled)
        {
            return Styles.RenderFormat(LessLinkFormat, paths);
        }
        else
        {
            return Styles.Render(paths);
        }
    }
}

All it does is check whether or not the app is set to use bundling, and if it isn’t, it renders the paths by calling Styles.RenderFormat(), passing along a custom format for the link tag, which sets the correct link type. This will then be picked up by the less.js script, and converted into CSS on the fly.

Now that I have that helper, it is easy to render the scripts and styles to the page like this

@using System.Web.Optimization
@using DarksideCookie.AspNet.MSBuild.Web.Helpers
<!DOCTYPE html>

<html data-ng-app="MSbuildDemo">
<head>
    <title>MSBuild Demo</title>
    @LessStyles.Render("~/styles")
</head>
<body data-ng-controller="welcomeController as ctrl">
    <div>
        <h1>{{ctrl.greeting('World')}}</h1>
    </div>
    @Scripts.Render("~/scripts")
</body>
</html>

There you have it! My application is done… Running this in a browser, with Optimization.UseBundling set to false, returns


<!DOCTYPE html>

<html data-ng-app="MSbuildDemo">
<head>
    <title>MSBuild Demo</title>
    <link rel="stylesheet/less" type="text/css" href="/bower_components/bootstrap/less/bootstrap.less"/>
    <link rel="stylesheet/less" type="text/css" href="/Content/Styles/site.less"/>
</head>
<body data-ng-controller="welcomeController as ctrl">
    <div>
        <h1>{{ctrl.greeting('World')}}</h1>
    </div>
    <script src="/bower_components/angularjs/angular.js"></script>
    <script src="/bower_components/less/dist/less.js"></script>
    <script src="/Content/Scripts/Welcome/GreetingService.js"></script>
    <script src="/Content/Scripts/Welcome/WelcomeController.js"></script>
    <script src="/Content/Scripts/Welcome/Module.js"></script>
    <script src="/Content/Scripts/Welcome/App.js"></script>
</body>
</html>


As expected, it returns the un-bundled Less and JavaScript files. And setting Optimization.UseBundling to true returns


<!DOCTYPE html>

<html data-ng-app="MSbuildDemo">
<head>
    <title>MSBuild Demo</title>
    <link href="/Content/dist/styles.min.css" rel="stylesheet"/>
</head>
<body data-ng-controller="welcomeController as ctrl">
    <div>
        <h1>{{ctrl.greeting('World')}}</h1>
    </div>
    <script src="/Content/dist/scripts.min.js"></script>
</body>
</html>

Bundled JavaScript and CSS from a folder called dist inside the Content folder.

Ok, sweet! So my bundling/CDN hack thingy worked. Now I just need to make sure that I get my scripts.min.js and styles.min.css created as well. To do this, I’m going to turn to node, npm and Gulp!

I use npm to install my node dependencies, and just like with Bower, I use the “--save” flag to make sure it is saved to the package.json file for restore in the future…and on the buildserver…

In this case, there are quite a few dependencies that need to be added… In the end, my package.json looks like this

{
    ...
    "dependencies": {
        "bower": "~1.3.12",
        "del": "^1.2.0",
        "gulp": "~3.8.8",
        "gulp-concat": "~2.4.1",
        "gulp-less": "~1.3.6",
        "gulp-minify-css": "~0.3.11",
        "gulp-ng-annotate": "^0.5.3",
        "gulp-order": "^1.1.1",
        "gulp-rename": "~1.2.0",
        "gulp-typescript": "^2.2.0",
        "gulp-uglify": "~1.0.1",
        "gulp-watch": "^1.1.0",
        "merge-stream": "^0.1.8",
        "run-sequence": "^1.1.1"
    }
}

But it all depends on what you are doing in your build…

Now that I have all the dependencies needed, I add a gulpfile.js to my project.

It includes a whole heap of code, so I will just write it out here, and then try to cover the main points

var gulp = require('gulp');
var typescript = require('gulp-typescript');
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');
var rename = require('gulp-rename');
var less = require('gulp-less');
var minifycss = require('gulp-minify-css');
var watch = require('gulp-watch');
var del = require('del');
var runSequence = require('run-sequence');
var ngAnnotate = require('gulp-ng-annotate');
var mergeStream = require('merge-stream');
var order = require('gulp-order');
var exec = require('child_process').exec;

var settings = {
    contentPath: "./Content/",
    buildPath: "./Content/build/",
    distPath: "./Content/dist/",
    bowerPath: "./bower_components/",
    bower: {
        "bootstrap": "bootstrap/dist/**/*.{map,css,ttf,svg,woff,eot}",
        "angular": "angularjs/angular.js"
    },
    scriptOrder: [
        '**/angular.js',
        '**/*Service.js',
        '!**/*(App|Module).js',
        '**/*Module.js',
        '**/App.js'
    ],
    stylesOrder: [
        '**/normalize.css',
        '**/*.css'
    ]
}

gulp.task('default', function (callback) {
    runSequence('clean', 'build', 'package', callback);
});
gulp.task('Debug', ['default']);
gulp.task('Release', ['default']);

gulp.task('build', function () {
    var lessStream = gulp.src(settings.contentPath + "**/*.less")
        .pipe(less())
        .pipe(gulp.dest(settings.buildPath));

    var typescriptStream = gulp.src(settings.contentPath + "**/*.ts")
        .pipe(typescript({
            declarationFiles: false,
            noExternalResolve: false,
            target: 'ES5'
        }))
        .pipe(gulp.dest(settings.buildPath));

    var stream = mergeStream(lessStream, typescriptStream);

    for (var destinationDir in settings.bower) {
        stream.add(gulp.src(settings.bowerPath + settings.bower[destinationDir])
            .pipe(gulp.dest(settings.buildPath + destinationDir)));
    }

    return stream;
});

gulp.task('package', function () {
    var cssStream = gulp.src(settings.buildPath + "**/*.css")
        .pipe(order(settings.stylesOrder))
        .pipe(concat('styles.css'))
        .pipe(gulp.dest(settings.buildPath))
        .pipe(minifycss())
        .pipe(rename('styles.min.css'))
        .pipe(gulp.dest(settings.distPath));

    var jsStream = gulp.src(settings.buildPath + "**/*.js")
        .pipe(ngAnnotate({
            remove: true,
            add: true,
            single_quotes: true,
            sourcemap: false
        }))
        .pipe(order(settings.scriptOrder))
        .pipe(concat('scripts.js'))
        .pipe(gulp.dest(settings.buildPath))
        .pipe(uglify())
        .pipe(rename('scripts.min.js'))
        .pipe(gulp.dest(settings.distPath));

    return mergeStream(cssStream, jsStream);
});

gulp.task('clean', function () {
    del.sync([settings.buildPath, settings.distPath]);
});


It starts with a “default” task, which runs the “clean”, “build” and “package” tasks in sequence. It then has one task per defined build configuration in the project, in this case “Debug” and “Release”. These in turn just run “default” here, but it makes it possible to run different tasks during different builds.

Note: Remember that task names are case-sensitive, so make sure that the task names use the same casing as the build configurations in your project.

The “build” task transpiles Less to CSS and TypeScript to JavaScript and puts them in the folder defined as “/Content/build/”. It also copies the defined Bower components to this folder.

The “package” task takes all the CSS and JavaScript files generated by the “build” task, and bundles them into a styles.css and a scripts.js file in the same folder. It then minifies them into a styles.min.css and a scripts.min.js file, and puts them in a folder defined as “/Content/dist/”. It also makes sure that it is all added in the correct order, just as the BundleConfig class did.

The “clean” task does just that. It cleans up the folders that the other tasks have created. Why? Well, it is kind of a nice feature to have…

Ok, now I have all the Gulp tasks needed to generate the files needed, as well as clean up afterwards. And these are easy to run from the command line, or using the Task Runner Explorer extension in Visual Studio. But this will not work on a buildserver unfortunately… 

Note: Unless you can get your buildserver to run the Gulp stuff somehow. In TFS 2015, and a lot of other buildservers, you can run Gulp as a part of the build. In TFS 2013 for example, this is a bit trickier…

So how do we get it to run as a part of the build (if we can’t have the buildserver do it for us, or we just want to make sure it always runs with the build)?

Well, here is where it starts getting interesting! One way is to unload the .csproj file, and start messing with the build settings in there. However, this is not really a great solution. It gets messy very fast, and it is very hard to understand what is happening when you open a project that someone else has created, and it all of a sudden does magical things. It is just not very obvious to have it in the .csproj file… Instead, adding a file called <PROJECTNAME>.wpp.targets will enable us to do the same things, but in a more obvious way. This file will be read during the build, and works in the same way as if you were modifying the .csproj file.

In my case the file is called DarksideCookie.AspNet.MSBuild.Web.wpp.targets. The content of it is XML, just like the .csproj file. It has a root element named Project in the http://schemas.microsoft.com/developer/msbuild/2003 namespace. Like this

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
    <PropertyGroup>
        <ExcludeFoldersFromDeployment>.\Content\build\</ExcludeFoldersFromDeployment>
    </PropertyGroup>
</Project>

In this case, it starts out by defining a property called ExcludeFoldersFromDeployment. This tells the build not to include the “/Content/build/” folder when deploying the application. It is only a temporary storage folder for the build, so it isn’t needed…

Next is the definition of the tasks, or targets as they are called in a targets file. They define what should happen, and when.

A target is defined by a Target element. It includes a name, when to run, whether or not it depends on any other targets before it can run, and any conditions to take into account. Inside the Target element, you find the definition of what should happen.

In this case, I have a bunch of different targets, so let’s look at what they do.

The first one is called “BeforeBuild”.

Note: Oh…yeah…by giving your targets certain specific names, they will be run at specific times, and do not need to be told when to run… In this case, it will run before the build…

<TargetName="BeforeBuild">
<MessageText="Running custom build things!"Importance="high"/>
</Target>

All it does is print out that it is running the custom build. This makes it easy to see if everything is running as it should by looking at the build output.

The next target is called “RunGulp”, and it will do just that, run Gulp as a part of the build.

<TargetName="RunGulp"AfterTargets="BeforeBuild"DependsOnTargets="NpmInstall;BowerInstall">
<MessageText="Running gulp task $(Configuration)"Importance="high"/>
<ExecCommand="node_modules\.bin\gulp $(Configuration)"WorkingDirectory="$(ProjectDir)"/>
<OnErrorExecuteTargets="DeletePackages"/>
</Target>

As you can see, it is set to run after the target called “BeforeBuild”, and depends on the targets called “NpmInstall” and “BowerInstall”. This will make sure that the “NpmInstall” and “BowerInstall” targets are run before this target. In the target, it prints out that it is running the specified Gulp task, which once again simplifies debugging things using the build output and logs, and then runs Gulp using an element called “Exec”. “Exec” is basically like running a command in the command line. In this case, the “Exec” element is also configured to make sure the command is run in the correct working directory. And if it fails, it executes the target called “DeletePackages”.

Ok, so that explains how the Gulp task is run. But what do the “NpmInstall” and “BowerInstall” targets do? Well, they do pretty much exactly what they are called. They run “npm install” and “bower install” to make sure that all dependencies are installed. As I mentioned before, I don’t check in my dependencies. Instead I let my buildserver pull them in as needed.

Note: Yes, this has some potential drawbacks. Things like, the internet connection being down during the build, or the bower or npm repos not being available, and so on. But in most cases it works fine and saves the source control system from having megs upon megs of node and bower dependencies to store…

<TargetName="NpmInstall"Condition="'$(Configuration)' != 'Debug'">
<MessageText="Running npm install"Importance="high"/>
<ExecCommand="npm install --quiet"WorkingDirectory="$(ProjectDir)"/>
<OnErrorExecuteTargets="DeletePackages"/>
</Target>



<TargetName="BowerInstall"Condition="'$(Configuration)' != 'Debug'">
<MessageText="Running bower install"Importance="high"/>
<ExecCommand="node_modules\.bin\bower install --quiet"WorkingDirectory="$(ProjectDir)"/>
<OnErrorExecuteTargets="DeletePackages"/>
</Target>


As you can see, these targets also define conditions. They will only run if the current build configuration is something other than “Debug”. That way they will not run every time you build in VS. Instead they will only run when building for release, which is normally done on the buildserver.

The next two targets look like this

<TargetName="DeletePackages"Condition="'$(Configuration)' != 'Debug'"AfterTargets="RunGulp">
<MessageText="Downloaded packages"Importance="high"/>
<ExecCommand="..\tools\delete_folder node_modules"WorkingDirectory="$(ProjectDir)\"/>
<ExecCommand="..\tools\delete_folder bower_components"WorkingDirectory="$(ProjectDir)\"/>
</Target>

<TargetName="CleanGulpFiles"AfterTargets="Clean">
<MessageText="Cleaning up node files"Importance="high"/>
<ItemGroup>
<GulpGeneratedInclude=".\Content\build\**\*"/>
<GulpGeneratedInclude=".\Content\dist\**\*"/>
</ItemGroup>
<DeleteFiles="@(GulpGenerated)"/>
<RemoveDirDirectories=".\Content\build;.\Content\dist;"/>
</Target>

The “DeletePackages” target is set to run after the “RunGulp” target. This will make sure that it removes the node_modules and bower_components folders when done. However, once again, only when not building in “Debug”. Unfortunately, the node_modules folder can get VERY deep, and cause some problems when being deleted on a Windows machine. Because of this, I have included a little script called delete_folder, which will take care of this problem. So instead of just deleting the folder, I call on that script to do the job.

The second target, called “CleanGulpFiles”, deletes the files and folders generated by Gulp, and is set to run after the target called “Clean”. This means that it will run when you right click your project in the Solution Explorer and choose Clean. This is a neat way to get rid of generated content easily.

In a simple world this would be it. This will run Gulp and generate the required files as a part of the build. So it does what I said it would do… However, if you use MSBuild or MSDeploy to create a deployment package, or deploy your solution to a server as a part of the build, which you normally do on a buildserver, the newly created files will not automatically be included. To solve this, there is one final target, called “AddGulpFiles” in this case.

<TargetName="AddGulpFiles"BeforeTargets="CopyAllFilesToSingleFolderForPackage;CopyAllFilesToSingleFolderForMsdeploy">
<MessageText="Adding gulp-generated files"Importance="high"/>
<ItemGroup>
<CustomFilesToIncludeInclude=".\Content\dist\**\*.*"/>
<FilesForPackagingFromProjectInclude="%(CustomFilesToInclude.Identity)">
<DestinationRelativePath>.\Content\dist\%(RecursiveDir)%(Filename)%(Extension)</DestinationRelativePath>
</FilesForPackagingFromProject>
</ItemGroup>
<OnErrorExecuteTargets="DeletePackages"/>
</Target>

This target runs before “CopyAllFilesToSingleFolderForPackage” and “CopyAllFilesToSingleFolderForMsdeploy”, which will make sure that the defined files are included in the deployment.

In this case, as all the important files are added to the “/Content/dist/” folder, all we need to do is tell it to include all files in that folder…

That is it! Switching the build over to “Release” and asking VS to build, while watching the Output window, will confirm that our targets are running as expected. Unfortunately it will also run “npm install” and “bower install”, as well as delete the bower_components and node_modules folders as part of the build. So once you have had the fun of watching it work, you will have to run those commands manually again to get your dependencies back…

And if you want to see what is actually going to be deployed to a server, you can right-click the project in the Solution Explorer and choose Publish. Publishing to the file system, or to a WebDeploy package, will give you a way to look at what files would be sent to a server in a deployment scenario.

In my code download, the publish profiles are set up to publish either to the file system or to a web deploy package, in a folder called DarksideCookie on C:. This can obviously be changed if you want to…

As usual, I have created a sample project that you can download and play with. It includes everything covered in this post. Just remember to run “npm install” and “bower install” before trying to run it locally.

Code available here: DarksideCookie.AspNet.MSBuild.zip (65.7KB)


Building a simple PicPaste replacement using Azure Web Apps and WebJobs


This post was supposed to be an introduction to Azure WebJobs, but it took a weird turn somewhere and became a guide to building a simple PicPaste replacement using just a simple Azure Web App and a WebJob.

As such, it might not be a really useful app, but it does show how simple it is to build quite powerful things using Azure.

So, what is the goal? Well, the goal is to build a website that you can upload images to, and then get a simple Url to use when sharing the image. This is not complicated, but as I want to resize the image, and add a little overlay to it as well, before giving the user the Url, I might run into performance issues if it becomes popular. So instead, I want the web app to upload the image to blob storage, and then have a WebJob process it in the background. Doing it like this, I can limit the number of images that are processed at a time, and use a queue to handle any peaks.

Note: Is this a serious project? No, not really. Does it have some performance issues if it becomes popular? Yes. Did I build it as a way to try out background processing with WebJobs? Yes… So don’t take it too seriously.

The first thing I need is a web app that I can use to upload images through. So I create a new empty ASP.NET project, adding support for MVC. Next I add a NuGet package called WindowsAzure.Storage. And to be able to work with Azure storage, I need a new storage account. Luckily, that is as easy as opening the “Server Explorer” window, right-clicking the “Storage” node, and selecting “Create Storage Account…”. After that, I am ready to start building my application.

Inside the application, I add a single controller called HomeController. However, I don’t want to use the default route configuration. Instead I want a nice, short and simple route that looks like this

routes.MapRoute(
    name: "Image",
    url: "{id}",
    defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
);

Ok, now that that is done, I add my “Index” view, which is ridiculously simple, and looks like this

<!DOCTYPE html>

<html>
<head>
    <meta name="viewport" content="width=device-width"/>
    <title>AzurePaste</title>
</head>
<body>
    @using (Html.BeginForm("Index", "Home", FormMethod.Post, new { enctype = "multipart/form-data" }))
    {
        <label>Choose file: </label>
        <input type="file" name="file" accept="image/*"/><br/>
        <input type="submit" value="Upload"/>
    }
</body>
</html>

As you can see, it contains a simple form that allows the user to post a single file to the server using the Index method on the HomeController.

The only problem with this is that there’s no controller action called Index that accepts HTTP POSTs. So let’s add one.

[HttpPost]
public ActionResult Index(HttpPostedFileBase file)
{
    if (file == null)
    {
        return new HttpStatusCodeResult(HttpStatusCode.BadRequest);
    }

    var id = GenerateRandomString(10);
    var blob = StorageHelper.GetUploadBlobReference(id);
    blob.UploadFromStream(file.InputStream);

    StorageHelper.AddImageAddedMessageToQueue(id);

    return View("Working", (object)id);
}

So what does this action do? Well, first of all it returns an HTTP 400 if you forgot to include the file… Next it uses a quick and dirty helper method called GenerateRandomString(), which just generates a random string of the specified length… Next, I use a helper class called StorageHelper, which I will return to shortly, to get a CloudBlockBlob instance to which I upload my file. The name of the blob is the random string I just retrieved, and the container for it is a predefined one that the WebJob knows about. The WebJob will then pick it up there, make the necessary transformations, and then save it to the root container.
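The helper itself is not shown in the post, but one possible implementation could look something like this. This is my sketch, not necessarily the code from the download:

private static readonly Random Random = new Random();

private static string GenerateRandomString(int length)
{
    // Quick and dirty: pick random characters from a fixed alphabet.
    const string alphabet = "abcdefghijklmnopqrstuvwxyz0123456789";
    var buffer = new char[length];
    for (var i = 0; i < length; i++)
    {
        buffer[i] = alphabet[Random.Next(alphabet.Length)];
    }
    return new string(buffer);
}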

Note: By adding a container called $root to a blob storage account, you get a way to put files in the root of the Url. Otherwise you are constrained to a Url like https://[AccountName].blob.core.windows.net/[ContainerName]/[FileName]. But using $root, it can be reduced to https://[AccountName].blob.core.windows.net/[FileName].

Once the image is uploaded to Azure, I once again turn to my StorageHelper class to put a message on a storage queue.

Note: I chose to build a WebJob that listens to queue messages instead of blob creation, as it can take up to approximately 10 minutes before the WebJob is called after a blob is created. The reason for this is that the blob storage logs are buffered and only written approximately every 10 minutes. By using a queue, this delay is decreased quite a bit. But it is still not instantaneous… If you need to decrease it even further, you can switch to a ServiceBus queue instead.

Ok, if that is all I am doing, that StorageHelper class must be really complicated. Right? No, not really. It is just a little helper to keep my code a bit more DRY.

The StorageHelper has 4 public methods: EnsureStorageIsSetUp(), which I will come back to, GetUploadBlobReference(), GetRootBlobReference() and AddImageAddedMessageToQueue(). And they are pretty self-explanatory. The two GetXXXBlobReference() methods are just helpers to get hold of a CloudBlockBlob reference. By keeping them in this helper class, I can keep the logic of where blobs are placed in one place… The AddImageAddedMessageToQueue() method adds a simple CloudQueueMessage, containing the name of the added image, to a defined queue. And finally, EnsureStorageIsSetUp() will make sure that the required containers and queues are set up, and that the root container has read permission turned on for everyone.

public static void EnsureStorageIsSetUp()
{
    UploadContainer.CreateIfNotExists();
    ImageAddedQueue.CreateIfNotExists();
    RootContainer.CreateIfNotExists();
    RootContainer.SetPermissions(new BlobContainerPermissions { PublicAccess = BlobContainerPublicAccessType.Blob });
}

public static CloudBlockBlob GetUploadBlobReference(string blobName)
{
    return UploadContainer.GetBlockBlobReference(blobName);
}

public static CloudBlockBlob GetRootBlobReference(string blobName)
{
    return RootContainer.GetBlockBlobReference(blobName);
}

public static void AddImageAddedMessageToQueue(string filename)
{
    ImageAddedQueue.AddMessage(new CloudQueueMessage(filename));
}

Kind of like that…
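The container and queue members used above are not shown in the post. A minimal sketch of that plumbing could look like this; the connection string name is an assumption on my part, while the “upload” and “image-added” names match what the WebJob uses later:

private static CloudStorageAccount Account
{
    get { return CloudStorageAccount.Parse(ConfigurationManager.ConnectionStrings["StorageConnectionString"].ConnectionString); }
}

private static CloudBlobContainer UploadContainer
{
    get { return Account.CreateCloudBlobClient().GetContainerReference("upload"); }
}

private static CloudBlobContainer RootContainer
{
    // $root is the special container that maps to the root of the storage account Url.
    get { return Account.CreateCloudBlobClient().GetContainerReference("$root"); }
}

private static CloudQueue ImageAddedQueue
{
    get { return Account.CreateCloudQueueClient().GetQueueReference("image-added"); }
}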

The returned view that the user gets after uploading the file looks like this

@model string

<!DOCTYPE html>

<html>
<head>
    <meta name="viewport" content="width=device-width"/>
    <title>Azure Paste</title>
    <script src="~/Content/jquery.min.js"></script>
    <script type="text/javascript">
        $(function () {
            function checkCompletion() {
                setTimeout(function () {
                    $.ajax({ url: "/@Model" })
                        .done(function (data, status, jqXhr) {
                            if (jqXhr.status === 204) {
                                checkCompletion();
                                return;
                            }
                            document.location.href = data;
                        })
                        .fail(function (error) {
                            alert("Error...sorry about that!");
                            console.log("error", error);
                        });
                }, 1000);
            }
            checkCompletion();
        });
    </script>
</head>
<body>
    <div>Working on it...</div>
</body>
</html>

As you can see, the bulk of it is some JavaScript, while the actual content that the user sees is really tiny…

The JavaScript uses jQuery (no, not a fan, but it has some easy ways to do ajax calls) to poll the server every second. It calls the server at “/[FileName]”, which as you might remember from my changed routing, will call the Index method on the HomeController.

If the call returns an HTTP 204, the script keeps on polling. If it returns an HTTP 200, it redirects the user to a location specified by the returned content. If something else happens, it just alerts that something went wrong…

Ok, so this kind of indicates that my Index() method needs to be changed a bit. It needs to do something different if the id parameter is supplied. So I start by handling that case

public ActionResult Index(string id)
{
    if (string.IsNullOrEmpty(id))
    {
        return View();
    }

    ...
}

That’s pretty much the same as it is by default. But what if the id is supplied? Well, then I start by looking for the blob that the user is looking for. If that blob exists, I return an HTTP 200, and the Url to the blob.

public ActionResult Index(string id)
{
    ...

    var blob = StorageHelper.GetRootBlobReference(id);
    if (blob.Exists())
    {
        return new ContentResult { Content = blob.Uri.ToString().Replace("/$root", "") };
    }

    ...
}

As you can see, I remove the “/$root” part of the Url before returning it. The Azure Storage SDK will include that container name in the Url even if it is a “special” container that isn’t needed in the Url. So by removing it I get this nicer Url.

If that blob does not exist, I look for the temporary blob in the upload container. If it exists, I return an HTTP 204. And if it doesn’t, the user is looking for a file that doesn’t exist, so I return a 404.

public ActionResult Index(string id)
{
    ...

    blob = StorageHelper.GetUploadBlobReference(id);
    if (blob.Exists())
    {
        return new HttpStatusCodeResult(HttpStatusCode.NoContent);
    }

    return new HttpNotFoundResult();
}

Ok, that is all there is to the web app. Well…not quite. I still need to ensure that the storage stuff is set up properly. So I add a call to the StorageHelper.EnsureStorageIsSetUp() in the Application_Start() method in Global.asax.cs.

protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();
    RouteConfig.RegisterRoutes(RouteTable.Routes);

    StorageHelper.EnsureStorageIsSetUp();
}

Next up is the WebJob that will do the heavy lifting. So I add a WebJob project to my solution. This gives me a project with 2 C# files. One called Program.cs, which is the “entry point” for the job, and one called Functions.cs, which contains a sample WebJob.

The first thing I want to do is to make sure that I don’t overload my machine by running too many of these jobs in parallel. Image manipulation hogs resources, and I don’t want it to get too heavy for the machine.

I do this by setting the batch size for queues in a JobHostConfiguration inside the Program.cs file

static void Main()
{
    var config = new JobHostConfiguration();
    config.Queues.BatchSize = 2;

    var host = new JobHost(config);
    host.RunAndBlock();
}

Now that that is done, I can start focusing on my WebJob… I start out by deleting the existing sample job, and add in a new one.

A WebJob is just a static method with a “trigger” as the first parameter. A trigger is just a method parameter that has an attribute set on it, which defines when the method should be run. In this case, I want to run it based on a queue, so I add a QueueTrigger attribute to my first parameter.

As my message contains a simple string with the name of the blob to work on, I can define my first parameter as a string. Had it been something more complicated, I could have added a custom type, which the calling code would have populated by deserializing the content of the message. Or, I could have chosen to go with CloudQueueMessage, which gives me total control. But as I said, just a string will do fine. It will also help me with my next three parameters.
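Just to illustrate that “more complicated” variant: with a JSON message body, the parameter could be a custom type instead, and the SDK would deserialize the message into it. The ImageAddedMessage type here is purely hypothetical:

// Hypothetical message type, populated from a JSON queue message body.
public class ImageAddedMessage
{
    public string BlobName { get; set; }
    public DateTime UploadedAt { get; set; }
}

public static void OnImageAdded([QueueTrigger("image-added")]ImageAddedMessage message, TextWriter log)
{
    log.WriteLine("Got message for blob {0}", message.BlobName);
}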

As a lot of WebJobs will be working with Azure storage, the SDK includes helper tools to make this easier. One such tool is an attribute called BlobAttribute. This makes it possible to get a blob reference, or its contents, passed into the method. In this case, getting references to the blobs I want to work with makes things a lot easier. I don’t have to handle getting references to them on my own. All I have to do is add parameters of type ICloudBlob, and add a BlobAttribute to them. The attribute takes a name pattern as the first string. But in this case, the name of the blob will be coming from the queue message… Well, luckily, the SDK people have thought of this, and given us a way to access it by adding “{queueTrigger}” to the pattern. This will be replaced by the string in the message…

Ok, so the signature for my job method turns into this

public static void OnImageAdded([QueueTrigger("image-added")]string blobName,
    [Blob("upload/{queueTrigger}")]ICloudBlob blob,
    [Blob("$root/{queueTrigger}.png")]ICloudBlob outputImage,
    [Blob("$root/{queueTrigger}")]ICloudBlob htmlBlob,
    TextWriter log)
{
    ...
}

As you can see, I have also added a final parameter of type TextWriter, called log. This will be a log supplied by the WebJob host, making it possible to log things in a nice and uniform way.

But wait a minute…why am I taking in 3 blobs? The first one is obviously the uploaded image. The second one is the target where I am going to put the transformed image. What is the last one? Well, I am going to make it a little more complicated than just hosting the image… I am going to host the image under a name which is [UploadedBlobName].png. I am then going to add a very simple HTML file to show the image, in a blob with the same name as the uploaded blob. That way, the Url to the page to view the image will be a nice and simple one, and it will show the image and a little text.

The first thing I need to do is get the content of the blob. This could have been done by requesting a Stream instead of an ICloudBlob, but as I want to be able to delete the blob at the end, that didn’t work…unless I used more parameters, which felt unnecessary…

Once I have my stream, I turn it into a Bitmap, using the Bitmap class from the System.Drawing assembly. Next, I resize that image to a maximum width or height, before adding a little watermark.

public static void OnImageAdded([QueueTrigger("image-added")]string blobName,
    [Blob("upload/{queueTrigger}")]ICloudBlob blob,
    [Blob("$root/{queueTrigger}.png")]ICloudBlob outputImage,
    [Blob("$root/{queueTrigger}")]ICloudBlob htmlBlob,
    TextWriter log)
{
    var ms = new MemoryStream();
    blob.DownloadToStream(ms);

    var image = new Bitmap(ms);
    var newImage = image.Resize(int.Parse(ConfigurationManager.AppSettings["AzurePaste.MaxSize"]));
    newImage.AddWatermark("AzurePaste FTW");

    ...
}

Ok, now that I have my transformed image, it is time to add it to the “target blob”. I do this by saving the image to a MemoryStream and then uploading that. However, by default, all blobs get the content type “application/octet-stream”, which isn’t that good for images. So I update the blob’s content type to “image/png”.

public static void OnImageAdded([QueueTrigger("image-added")]string blobName,
    [Blob("upload/{queueTrigger}")]ICloudBlob blob,
    [Blob("$root/{queueTrigger}.png")]ICloudBlob outputImage,
    [Blob("$root/{queueTrigger}")]ICloudBlob htmlBlob,
    TextWriter log)
{
    ...

    var outputStream = new MemoryStream();
    newImage.Save(outputStream, ImageFormat.Png);
    outputStream.Seek(0, SeekOrigin.Begin);
    outputImage.UploadFromStream(outputStream);
    outputImage.Properties.ContentType = "image/png";
    outputImage.SetProperties();

    ...
}

The last part is to add the fabled HTML page… In this case, I have just quickly hacked a simple HTML page into my assembly as an embedded resource. I could of course have used one stored in blob storage or something, making it easier to update. But I just wanted something simple…so I added it as an embedded resource… Before I add it to the blob, I make sure to insert the Url to the image, by replacing a “{0}” placeholder defined in the embedded HTML.

public static void OnImageAdded([QueueTrigger("image-added")]string blobName,
    [Blob("upload/{queueTrigger}")]ICloudBlob blob,
    [Blob("$root/{queueTrigger}.png")]ICloudBlob outputImage,
    [Blob("$root/{queueTrigger}")]ICloudBlob htmlBlob,
    TextWriter log)
{
    ...

    string html;
    using (var htmlStream = typeof(Functions).Assembly.GetManifestResourceStream("DarksideCookie.Azure.WebJobs.AzurePaste.WebJob.Resources.page.html"))
    using (var reader = new StreamReader(htmlStream))
    {
        html = string.Format(reader.ReadToEnd(), blobName + ".png");
    }
    htmlBlob.UploadFromStream(new MemoryStream(Encoding.UTF8.GetBytes(html)));
    htmlBlob.Properties.ContentType = "text/html";
    htmlBlob.SetProperties();

    blob.Delete();
}

As you can see, I also make sure to update the content type of this blob to “text/html”. And yeah…I also make sure to delete the original file.

That is about it… I guess I should mention that the Resize() and AddWatermark() methods used on the Bitmap are extension methods I have added. They are not important for the topic, so I will just leave them out. But they are available in the downloadable code below, and there is a rough sketch of the idea right after this.
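Just to give an idea, a Resize() extension method along these lines would do the trick. This is my sketch, not the code from the download; AddWatermark() would be a similar extension using Graphics.DrawString():

public static Bitmap Resize(this Bitmap image, int maxSize)
{
    // Scale so that neither width nor height exceeds maxSize, keeping the aspect ratio.
    var scale = Math.Min((double)maxSize / image.Width, (double)maxSize / image.Height);
    if (scale >= 1) return image; // already small enough

    var resized = new Bitmap((int)(image.Width * scale), (int)(image.Height * scale));
    using (var graphics = Graphics.FromImage(resized))
    {
        graphics.InterpolationMode = InterpolationMode.HighQualityBicubic;
        graphics.DrawImage(image, 0, 0, resized.Width, resized.Height);
    }
    return resized;
}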

There is one more thing though… What happens if my code is borked, or someone uploads a file that isn’t an image? Well, in that case, the code will fail, and fail, and fail… When it fails, the WebJob will be re-run at a later point. Unfortunately, this can turn ugly, and turn into what is known as a poison message. Luckily, this is handled by default. After a configurable number of retries, the message is considered a poison message, and will be discarded. When that happens, a new message is automatically added to a dynamically created queue to notify us about it. So it might be a good idea to add a quick little job for that queue as well, and log any poison messages.
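If I recall correctly, that retry limit lives on the same JobHostConfiguration as the batch size, so Main() could be extended like this (the limit of 5 is, as far as I know, the SDK default):

static void Main()
{
    var config = new JobHostConfiguration();
    config.Queues.BatchSize = 2;
    // After this many failed attempts, a message is treated as a poison message.
    config.Queues.MaxDequeueCount = 5;

    var host = new JobHost(config);
    host.RunAndBlock();
}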

The name of the queue that is created is “[OriginalQueueName]-poison”, and the handler for it looks like any other WebJob. Just try not to add code in here that turns these messages into poison messages…

public static void LogPoisonBlob([QueueTrigger("image-added-poison")]string blobname, TextWriter logger)
{
    logger.WriteLine("WebJob failed: Failed to prep blob named {0}", blobname);
}

That’s it! Uploading an image through the web app will now place it in blob storage. The page will then poll until the WebJob has picked the image up, transformed it, and stored it, together with a new html file, at the root of the storage account, giving the user a quick and easy address to share with his or her friends.

Note: If you want to run something like this in the real world, you probably want to add some form of clean-up solution as well. Maybe a WebJob on a schedule, that removes any images older than a certain age…

And as usual, there is a code sample to play with. Just remember that you need to set up a storage account for it, and set the correct connection strings in the web.config file, as well as in the WebJob project’s app.config if you want to run it locally.

Note: It might be good to know that logs and information about current WebJobs can be found at https://[ApplicationName].scm.azurewebsites.net/azurejobs/#/jobs

Source code: DarksideCookie.Azure.WebJobs.AzurePaste.zip (52.9KB)

Cheers!

Running ASP.NET 5 applications in Windows Server Containers using Windows Server 2016


A couple of days ago, I ended up watching a video about Windows Server 2016 at Microsoft Virtual Academy. I think it was A Deep Dive into Nano Server, but I’m not sure to be honest. Anyhow, they started talking about Windows Server Containers and Docker, and I got very interested.

I really like the idea of Docker, but since I’m a .NET dev, the whole Linux dependency is a bit of a turn-off to be honest. And yes, I know that ASP.NET 5 will be cross-platform and so on, but in the initial release of .NET Core, it will be very limited. So it makes it a little less appealing. However, with Windows Server Containers, I get the same thing, but on Windows. So all of a sudden, it got interesting to look at Docker. So I decided to get an ASP.NET 5 app up and running in a Windows Server Container. Actually, I decided to do it in 2 ways, but in this post I will cover the simplest way, and then I will do another post about the other way, which is more complicated but has some benefits…

What is Docker?

So the first question I guess is “What is Docker?”. At least if you have no knowledge of Docker at all, or very little. If you know a bit about Docker, you can skip to the next part!

Disclaimer: This is how I see it. It is probably VERY far from the technical version of it, and people who know more about it would probably say I am on drugs or something. But this is the way I see it, and the way that makes sense to me, and that got me to understand what I needed to do…

To me, Docker, or rather a Docker container, is a bit like a shim. When you create a Docker container on a machine, you basically insert a shim between what you are doing in that container and the actual server. So anything you do inside that container will be written to that shim, and not to the actual machine. That way, your “base machine” keeps its own state, but you can do things inside of the container to configure it the way you want it. The container is then run as a sort of virtual machine on that machine. It is a bit like a VM, but much simpler and more lightweight. You can then save the configuration that you have made, and re-use it over and over again. This means that you can create your configuration for your environment in a Docker image, and then use it all over the place on different servers, and they will all have the same set-up.

You can then persist that container into what is called an image. The image is basically a pre-configured “shim” that you can base new containers on, or base other images on. This allows you to build up your machine based on multiple “shims”. So you start out with the base machine, and then maybe you add the IIS image that activates IIS, and then you add the one that adds your company’s framework to it, and finally, on top of that, you add your own layer with your actual application. Kind of like building your environment from Lego blocks.

There are 2 ways to build the images. The first one is to create an “interactive” container, which is a container you “go into” and make your changes to, and then commit to an image. The second one is to create something called a dockerfile, which is a file containing all of the things that need to be done to whatever base image you are using, to get it up to the state that you want it to be in.

Using a dockerfile is a lot easier once you get the hang of how they work, as you don’t need to sit and do it all manually. Instead you just write the commands that need to be run, and Docker sorts it all out for you, and hands you back a configured image (if all you told it to do in the dockerfile worked).

If you are used to virtualized machines, it is a bit like differencing disks. Each layer just adds things to the previous disk, making incremental changes until you reach the level you want. However, the “base disk” is always the operating system you are running. So, in this case, Windows Server 2016. And this is why containers are lighter weight, and faster to start and so on. You don’t need to boot the OS first. It is already there. All you need to do is create your “area” and add your images/”shims” to it.

To view the images available on the machine, you can run

C:\>docker images

On a new Windows Server 2016, like I am using here, you will only see a single image to begin with. It will be named windowsservercore, and represents the “base machine”. It is the image that all containers will be based on.

[screenshot: docker images output showing the windowsservercore base image]

Set up a server

There are a couple of different ways to set up a Windows Server 2016 (Technical Preview 3 at the time of writing) to try this out.

Option 1: On a physical machine, or existing VM. To do this, you need to download and execute a Powershell-script that enables containers on the machine. It is documented here.

Option 2: In a new VM. This was ridiculously simple to get working. You just download and execute a Powershell-script, and it sorts everything out for you. Kind of… It is documented here.

Option 3: In Azure. This is by far the simplest way of doing it. Just get a new machine that is configured and done, and up and running in the cloud. Documented here.

Warning: I went through options 2 and 3. I started with 2, and got a new VM in Hyper-V. However, my machine got a BSOD every time I tried to bridge my network. And apparently, this is a known bug in the latest drivers for my Broadcom WLAN card. Unfortunately it didn’t work to downgrade it on my machine, so I had to give up. So if you are running a new MacBook Pro, or any other machine with that chip, you might be screwed as well. Luckily, the Azure way solved that…

Warning 2: Since this is Windows Server Core, there is a VERY limited UI. Basically, you get a command line, and have to do everything using that. That and your trusty friend PowerShell…

Configure the server

The next step, after getting a server up and running, is to configure it. This is not a big thing; there are only a few configurations that need to be made. Maybe even just one if you are lucky. It depends on where you are from, and how you intend to configure your containers.

The first step is to move from cmd.exe to PowerShell by executing

c:\>powershell

Me, being from Sweden with a Swedish keyboard, I needed to make sure that I could type properly by setting the correct language. To do this I used the Set-WinUserLanguageList command

PS C:\>Set-WinUserLanguageList -LanguageList sv-SE

Next, you need to open a firewall rule for the port you intend to use. In this case, I intend to use port 80 as I am going to run a webserver. This is done using the following command

PS C:\>if (!(Get-NetFirewallRule | where {$_.Name -eq "TCP80"})) { New-NetFirewallRule -Name "TCP80" -DisplayName "HTTP on TCP/80" -Protocol tcp -LocalPort 80 -Action Allow -Enabled True}

It basically checks to see if there is a firewall rule called TCP80. If not, it creates one, opening port 80 for TCP.

Note: If you are running in Azure, you also need to set up an endpoint for your machine for it to work. It is documented in the resource linked above.

Next, I want to make sure that I can access the ports I want on the container(s) I am about to create.

When running containers, you will have a network connection that your machine uses, as well as a virtual switch that your containers will be connected to. In Azure, your machine will have a 10.0.x.x IP by default, and a virtual switch at 172.16.x.x that your containers will be connected to. However, both of them are behind the firewall. So by opening port 80 like we just did, we opened port 80 on both connections. So as long as your container is only using port 80, you are good to go. But if you want to use other ports in your containers, and map port 80 from the host to port X on your container, you need to make some changes, as port X will be denied by the firewall.

Note: This part tripped me up a lot. In Microsoft’s demos, they map port 80 to port 80 running an nginx server. But they never mentioned that this worked because the firewall implicitly had opened the same port to the container connection. I assumed that since the 172-connection was internal, it wasn’t affected by the firewall. Apparently I thought the world was too simple.

So to make it simple, I have just turned off the firewall for the 172.16.x.x connection. The public connection is still secured, so I assume this should be ok… But beware that I am not a network or security guy! To be really safe, you could open just the ports you needed on that connection. But while I am mucking about and trying things out, removing the firewall completely makes things easier!

The command needed to solve this “my way” is

PS C:\>New-NetFirewallRule -Name "TCP/Containers" -DisplayName "TCP for containers" -Protocol tcp -LocalAddress 172.16.0.1/255.240.0.0 -Action Allow -Enabled True

It basically says “any incoming TCP request for the 172.16.x.x connection is allowed”, i.e. the firewall is turned off for TCP. Just what I wanted!

Create and upload an application

As this is all about hosting the application, I don’t really care about what my application does. So I created a new ASP.NET 5 application using the Empty template in Visual Studio 2015, which in my case created an application based on ASP.NET 5 Beta 7.

The application as such is just a “Hello World” app with an OWIN-style middleware that returns “Hello World!” for all requests. Once again, this is about hosting, not the app…
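For reference, the whole app more or less boils down to a Startup class like this (namespaces as in the beta 7-era Microsoft.AspNet.* packages):

using Microsoft.AspNet.Builder;
using Microsoft.AspNet.Http;

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        // A single inline middleware that answers every request.
        app.Run(async context =>
        {
            await context.Response.WriteAsync("Hello World!");
        });
    }
}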

I decided to make one change though. As Kestrel is going to be the server used in the future for ASP.NET 5 apps, I decided to modify my application to use Kestrel instead of the WebListener server that is in the project by default. To do this, I just made a couple of changes to the project.json file.

Step 1, modify the dependencies. The only dependency needed to get this app going is Microsoft.AspNet.Server.Kestrel. So the dependencies node looks like this

"dependencies": { "Microsoft.AspNet.Server.Kestrel": "1.0.0-beta7" }

Step 2, change the commands to run Kestrel instead of WebListener

"commands": {  "kestrel": "Microsoft.AspNet.Hosting --server Microsoft.AspNet.Server.Kestrel --server.urls http://localhost:5004"  }

In that last step, I also removed the hosting.ini file, as I find it clearer to just have the config in the project.json file, instead of spread out…

Next I used Visual Studio to publish the application to a folder on my machine. This packages up everything needed to run the application, including a runtime and dependencies and so on. It also creates a cmd-file for the kestrel-command. So it is very easy to run this application on any Windows-machine.

Transfer the application to the server

With the app done, and published to the local machine, I went ahead and zipped it up for transfer to the server. And since I needed to get it to my server in Azure, I went and uploaded it to a blob in Azure Storage, making sure that it was publicly available.

The next step is to get the app to the server. Luckily, this is actually not that hard, even from a CLI. It is just a matter of running the wget command

wget -uri 'http://[ACCOUNT_NAME].blob.core.windows.net/[CONTAINER]/[APP_ZIP_FILE_NAME].zip' -OutFile "c:\KestrelApp.zip"

And when the zip is on the machine it is time to unzip it. As there will be something called a dockerfile to set everything up, the files need to be unzipped to a subfolder in a new directory. In my case, I decided to call my new directory “build”, and thus unzip my files to “build\app”. Like this

Expand-Archive -Path C:\KestrelApp.zip -DestinationPath C:\build\app -Force

Create a dockerfile

Now that we have the app on the server, it is time to create the dockerfile. To do this, I start out by making my way into the “build”-directory.

PS C:\>cd build

To create the actual file, you use the New-Item command

PS C:\build>New-Item -Type File dockerfile

And then you can open it in Notepad by running

PS C:\build>notepad dockerfile

Inside the Notepad it is time to define what needs to be done to get the container into the state we want.

The first step is to define what image we want to base our new image on. In this case, there is only one, and it is called “windowsservercore”. So, I tell Docker to use that image as my base-image, by writing

FROM windowsservercore

on the first line of the dockerfile.

Next, I want to include my application in the new container.

The base disk (windowsservercore) is an empty “instance” of the physical machine’s OS. So anything we want to have access to in our container, needs to be added to the container using the ADD keyword. So to add the “app” directory I unzipped my app to, I add

ADD app /app

which says, add the app directory to a directory called “app” in my image.

Once I have my directory added, I also want to set it as the working directory when my container starts up. This is done using the WORKDIR keyword like this

WORKDIR /app

And finally, I need to tell it what to do when the container starts up. This can be done using either an ENTRYPOINT or a CMD keyword, or a combination of them. However, being a Docker-noob, I can't tell you exactly what the differences between them are, or which way of using them is best, but I got it working by adding

CMD kestrel.cmd

which tells it to run kestrel.cmd when the container starts up.

So finally, the dockerfile looks like this

FROM windowsservercore
ADD app /app
WORKDIR /app
CMD kestrel.cmd

which says, start from the “windowsservercore” image, add the content of my app directory to my image under a directory called app. Then set the app directory as the working directory. And finally, run kestrel.cmd when the container starts.

Once I have the configuration that I want, I save the file and close Notepad.

Create a Docker image from the dockerfile

Now that we have a dockerfile that hopefully works, it is time to tell Docker to use it to create a new image. To do this, I run

PS C:\build>docker build -t kestrelapp .

This tells Docker to build an image named “kestrelapp” from the specified location. Passing just a dot as the location makes it use the current directory, where it looks for a file called “dockerfile”.

Docker will then run through the dockerfile one line at the time, setting up the image as you want it.

image

And at the end, you will have a new image on the server. So if you run

PS C:\build>docker images

You will now see 2 images. The base “windowsservercore”, as well as your new “kestrelapp” that is based on “windowsservercore”.

 

image

 

Create, and run, a container based on the new image

Once the image is created, it is time to create, and start, a container based on that image. Once again it is just a matter of running a command using docker

docker run --name kestrelappcontainer -d -p 80:5004 kestrelapp

This command says “create a new container called kestrelappcontainer based on the kestrelapp image, map port 80 from the host to port 5004 on the container, and run it in the background for me”.

Running this will create the container and start it for us, and we should be good to go.

image

Note: Adding -p 80:5004 to map the ports will add a static NAT mapping between those ports. So if you want to re-use some ports, you might need to remove the old mapping before it works. Or, if you want to re-use the same mapping, you can just skip adding the -p parameter. If you want to see your current mappings, you can run Get-NetNatStaticMapping, and remove any you don't want by running "Remove-NetNatStaticMapping -StaticMappingID [ID]"
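Just to show what that looks like (the mapping ID below is obviously just an example from my machine):

PS C:\>Get-NetNatStaticMapping
PS C:\>Remove-NetNatStaticMapping -StaticMappingID 0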

If you want to see the containers currently on your machine, you can run

PS C:\build>docker ps -a

which will write out a list of all the available containers on the machine.

image

We should now be able to browse to the server and see “Hello World!”.

image

That’s it! That’s how “easy” it is to get up and going with Windows Server Containers and ASP.NET 5. At least it is now… It took a bit of time to figure everything out, considering that I had never even seen Docker before this.

If you want to remove a container, you can run

PS C:\build>docker rm [CONTAINER_NAME]

And if you have any mapping defined for the container you are removing, don't forget to remove it using Remove-NetNatStaticMapping as mentioned before…

If you want to remove an image, the command is

PS C:\build>docker rmi [IMAGE_NAME]

As this has a lot of dependencies, and not a lot of code that I can really share, there is unfortunately no demo source to download…

Cheers!

Combining Windows Server 2016 Container, ASP.NET 5 and Application Request Routing (ARR) using Docker


I recently did a blog post about how to get an ASP.NET 5 application to run in a Windows Server container using Docker. However, I kept thinking about that solution, and started wondering if I could add IIS Application Request Routing to the mix as well. What if I could have containers at different ports, and have IIS and ARR routing incoming requests to different ports based on the host for example. And apparently I could. So I decided to write another post about how I got it going.

Disclaimer: There are still some kinks to work out regarding the routing. Right now, I have to manually change the routing to point to the correct container IP every time it is started, as I don't seem to find a way to assign my containers static IP addresses…

Disclaimer 2: I have no clue about how this is supposed to be done, but this seems to work… Smile

Adding a domain name to my server (kind of)

The first thing I needed to do, was to solve how to get a custom domain name to resolve to my server, which in this case was running in Azure. The easiest way to solve this is by going to the Azure portal and looking at my machine's Virtual IP address, and adding it to my hosts file with some random host.

image41

This is a volatile address that changes on reboots, but it works for testing it out. However, it would obviously be much better to get a reserved IP, and maybe connect a proper domain name to it, but that was too much of a hassle…

Next, I opened my hosts file and added a couple of domain names connected to my machine’s IP address. In my case, I just added the below

40.127.129.213        site1.azuredemo.com
40.127.129.213        site2.azuredemo.com

Installing and configuring IIS and ARR 3.0

The next step was to install IIS on the machine, using nothing but a command line. Luckily, that was a lot easier than I expected… All you have to do is run

PS C:\>Install-WindowsFeature Web-Server

and you are done. And yes, you can probably configure a whole heap of things to install and so on, but this works…

With IIS installed, it was time to add Application Request Routing (ARR) version 3.0, and even if there are other ways to do this, I decided to use the Web Platform Installer. So I downloaded the installer for that to my machine using Powershell and the following command

PS C:\>wget -uri "http://download.microsoft.com/download/C/F/F/CFF3A0B8-99D4-41A2-AE1A-496C08BEB904/WebPlatformInstaller_amd64_en-US.msi" -outfile "c:\installers\WebPI.msi"

and then followed that up by running the installer

C:\installers\WebPI.msi

With the Web Platform Installer installed, I got access to a tool called WebPICMD.exe. It is available under “%programfiles%\microsoft\web platform installer”, and can be used to browse the available things that the installer can install etc. It can also be used to install them, which is what I needed. So I ran

C:\>"%programfiles%\microsoft\web platform installer\WebPICMD.exe" /Install /Products:ARRv3_0

which installs ARR v.3.0 and all the prerequisites.
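And if you are curious about what else is available, the same tool can list products as well. I believe something like this would do it, even though I only used the /Install option myself:

C:\>"%programfiles%\microsoft\web platform installer\WebPICMD.exe" /List /ListOption:Available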

After the ARR bits had been installed, it was time to configure IIS to use it… The first step is to run

C:\>"%windir%\system32\inetsrv\appcmd.exe" set apppool "DefaultAppPool" -processModel.idleTimeout:"00:00:00" /commit:apphost

This sets the idle timeout for the default app pool to 0, which basically means “never time out and release your resources”. This is pretty good for an application whose responsibility is to make sure that all requests are routed to the correct place…

Next, it was time to enable the ARR reverse proxy using the following command

C:\>"%windir%\system32\inetsrv\appcmd.exe" set config -section:system.webServer/proxy /enabled:"True" /commit:apphost

With the IIS configured and up and running with the ARR bits, it was just a matter of opening up a port in the firewall so that I could reach it. So I did that using Powershell

PS C:\>New-NetFirewallRule -Name "TCP80" -DisplayName "HTTP on TCP/80" -Protocol tcp -LocalPort 80 -Action Allow -Enabled True

By now, the IIS is up and running and good to go, with a port opened for the traffic. The next step is to set up the websites for the containers that will sit behind the ARR bits.

Creating a couple of ASP.NET web applications

The websites I decided to create were ridiculously simple. I just created 2 empty ASP.NET 5 web applications, and modified their default OWIN middleware to return 2 different messages so that I could tell them apart. I also changed them to use Kestrel as the server, and put them on 2 different ports, 8081 and 8082. I then published them to my local machine, zipped them up and uploaded them to my server and added a simple dockerfile. All of this is covered in the previous post, so I won’t say anything more about that part. However, just for the sake of it, the dockerfile looks like this

FROM windowsservercore
ADD app /app
WORKDIR /app
CMD kestrel.cmd

With the applications unzipped and ready on the server, I used Docker to create 2 container images for my applications

C:\build\kestrel8081>docker build -t kestrel8081 .

C:\build\kestrel8082>docker build -t kestrel8082 .

and with my new images created, I created and started 2 containers to host the applications using Docker. However, I did not map through any port, as this will be handled by the ARR bits.

C:\>docker run --name kestrel8081 -d kestrel8081

C:\>docker run --name kestrel8082 -d kestrel8082

With the containers up and running, there is unfortunately still a firewall in the way for those ports. So, just as in my last blog post, I just opened up the firewall completely for anything on the network that the containers were running on, using the following Powershell command

PS C:\>New-NetFirewallRule -Name "TCP/Containers" -DisplayName "TCP for containers" -Protocol tcp -LocalAddress 172.16.0.1/255.240.0.0 -Action Allow -Enabled True

Configuring Application Request Routing

The next step was to configure the request routing. To do this, I went over to the wwwroot folder

C:\>cd inetpub\wwwroot\

I then created a new web.config file, by opening it in Notepad

C:\inetpub\wwwroot>notepad web.config

As there was no such file, Notepad asked if I wanted to create one, which I told it to do.

Inside the empty web.config file, I added the following

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="Reverse Proxy to Site 1" stopProcessing="true">
          <conditions trackAllCaptures="true">
            <add input="{HTTP_HOST}" pattern="^site1.azuredemo.com$" />
          </conditions>
          <action type="Rewrite" url="http://172.16.0.2:8081/{R:1}" />
          <match url="^(.*)" />
        </rule>
        <rule name="Reverse Proxy to Site 2" stopProcessing="true">
          <conditions trackAllCaptures="true">
            <add input="{HTTP_HOST}" pattern="^site2.azuredemo.com$" />
          </conditions>
          <action type="Rewrite" url="http://172.16.0.3:8082/{R:1}" />
          <match url="^(.*)" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>

This tells the system that any requests headed for http://site1.azuredemo.com/ should be redirected internally to http://172.16.0.2:8081/, and any request to http://site2.azuredemo.com/ should be redirected to http://172.16.0.3:8082/.

Disclaimer: Yes, I am well aware that redirecting to 172.16.0.2 & 3 like this is less than great. Every time the containers are started, the IP will change for them. Or rather, the IP actually seems to be assigned depending on the order they are requested. So if all containers are always started in the same order after a reboot, it should in theory work. But as I said, far from great. However, I don’t quite know how to solve this problem right now. Hopefully some smart person will read this and tell me…
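One thing that at least removes the guesswork is looking up the IP a container actually got once it has started. Assuming the Windows port of Docker supports the same inspect flags as mainline Docker, something like this should print it:

PS C:\>docker inspect -f "{{ .NetworkSettings.IPAddress }}" kestrel8081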

The only thing left to do was to try and browse to my 2 applications, which returned

image1

when browsing to http://site1.azuredemo.com, and

image411

when browsing to http://site2.azuredemo.com. So it seems that adding IIS and ARR to my server, and using it to route my requests to my containers, works.

That’s it! If you have any idea how to solve it in a better way, please tell me! If not, I hope you got something out of it!

Disclaimer: No, the azuredemo.com domain is not mine, and I have no affiliation to whomever does. I just needed something to play with, and thought that would work for this.

Disclaimer 2: My server is not online at that IP address anymore, so you can’t call it, or try and hack it or whatever you were thinking… Smile

Cheers!

Webucator made a video out of one of my blog posts


A while back, I was contacted by the people at Webucator regarding one of my blog posts. In particular this one… They wanted to know if they could make a video version of it, and of course I said yes! And here it is! Hot off the presses!

So there it is! And they even managed to get my last name more or less correct, which is awesome! So if you are looking for some on-line ASP.NET Training, have a look at their website.

Setting Up Continuous Deployment of an ASP.NET app with Gulp from GitHub to an Azure Web App


I just spent some time trying to figure out how to set up continuous deployment to an Azure Web App from GitHub, including running Gulp as part of the build. It seems that there are a lot of blog posts and instructions on how to set up continuous deployment, but none of them seem to take into account that people actually use things like Gulp to generate client side resources during the build.

The application

So to kick it off, I needed a web app to deploy. And since I just needed something simple, I opened Visual Studio and created an empty ASP.NET app. However, as I knew that I would need to add some extra stuff to my repo, I decided to add my project in a subfolder called Src. Leaving me with a folder structure like this

Repo Root
      Src
            DeploymentDemo <----- Web App Project
                  [Project files and folders]
                  Scripts
                  Styles
                  DeploymentDemo.csproj
            DeploymentDemo.sln

The functionality of the actual web application doesn’t really matter. The only important thing is the stuff that is part of the “front-end build pipeline”. In this case, that would be the Less files that are placed in the Styles directory, and the TypeScript files that are placed in the Scripts directory.

Adding Gulp

Next, I needed a Gulpfile.js to do the transpilation, bundling and minification of my Less and TypeScript. So I used npm to install the following packages

gulp
bower
gulp-less
gulp-minify-css
gulp-order
gulp-concat
gulp-rename
gulp-typescript
gulp-uglify

making sure that I added --save-dev, adding them to the package.json file so that I could restore them during the build phase.

I then used bower to install

angular
less
bootstrap

once again, adding --save-dev so that I could restore the Bower dependencies during build.

Finally, I created a gulpfile.js that looks like this

var gulp = require('gulp');
var ts = require('gulp-typescript');
var order = require("gulp-order");
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');
var rename = require("gulp-rename");
var less = require('gulp-less');
var minifyCSS = require('gulp-minify-css');

var jslibs = [
    'bower_components/angular/angular.min.js'
];

var csslibs = [
    'bower_components/bootstrap/dist/css/bootstrap.min.css'
];

gulp.task('default', ['build:scripts', 'build:styles']);

gulp.task('build:scripts', ['typescript:bundle', 'jslibs:copy'], function () {
    return gulp.src('build/*.min.js')
        .pipe(order([
            "angular.min.js",
            "deploymentdemo.min.js"
        ]))
        .pipe(concat('deploymentdemo.min.js'))
        .pipe(gulp.dest('dist/'));
});

gulp.task('typescript:bundle', function () {
    return gulp.src('Scripts/**/*.ts')
        .pipe(ts({
            noImplicitAny: true,
            target: 'ES5'
        }))
        .pipe(order([
            "!App.js",
            "App.js"
        ]))
        .pipe(concat('deploymentdemo.js'))
        .pipe(gulp.dest('build/'))
        .pipe(uglify())
        .pipe(rename("deploymentdemo.min.js"))
        .pipe(gulp.dest('build/'));
});

gulp.task('jslibs:copy', function () {
    return gulp.src(jslibs)
        .pipe(gulp.dest('build/'));
});

gulp.task('build:styles', ['css:bundle', 'csslibs:copy'], function () {
    return gulp.src('build/*.min.css')
        .pipe(order(["bootstrap.min.css", "deploymentdemo.min.css"]))
        .pipe(concat('deploymentdemo.min.css'))
        .pipe(gulp.dest('dist/'));
});

gulp.task('css:bundle', function () {
    return gulp.src('Styles/**/*.less')
        .pipe(less())
        .pipe(concat('deploymentdemo.css'))
        .pipe(gulp.dest('build/'))
        .pipe(minifyCSS())
        .pipe(rename("deploymentdemo.min.css"))
        .pipe(gulp.dest('build/'));
});

gulp.task('csslibs:copy', function () {
    return gulp.src(csslibs)
        .pipe(gulp.dest('build/'));
});

Yes, that is a long code snippet. The general gist is pretty simple though. The default task will transpile the TypeScript into JavaScript, minify it, and concatenate it with the angular.min.js file from Bower, before placing the result, named deploymentdemo.min.js, in a directory called dist in the root of my application. It will also transpile the Less to CSS, minify it, and concatenate it with the bootstrap.min.css from Bower, before placing the result in the same dist folder with the name deploymentdemo.min.css.

Running the default Gulp task produces a couple of new things, making the directory look like this

Repo Root
      Src
            DeploymentDemo
                  [Project files and folders]
                  build
                        deploymentdemo.css
                        deploymentdemo.js
                        deploymentdemo.min.css
                        deploymentdemo.min.js
                  dist
                        deploymentdemo.min.css
                        deploymentdemo.min.js
                  Scripts
                  Styles
                  DeploymentDemo.csproj
            DeploymentDemo.sln

The files placed under the build directory are just for debugging, and are not strictly required. But they do make it a bit easier to debug potential issues from the build…

Ok, so now I have a web app that I can use to try out my continuous deployment!

Adding Continuous Deployment

Next, I needed a GitHub repo to deploy from. So I went to GitHub and set one up. I then opened a command line at the root of my solution (Repo Root in the above directory structure), and initialized a new Git repo. However, to not get too much stuff into my repo, I added a .gitignore file that excluded all unnecessary files.

I got my .gitignore from a site called gitignore.io. On this site, I just searched for “VisualStudio” and then downloaded the .gitignore file that it generated to the root of my repo.

However, since I am also generating some files on the fly that I don't want to commit to Git, I added the following 2 lines to the ignore file

build/
dist/

Other than that, the gitignore-file seems to be doing what it should. So I created my first commit to my repo, and then pushed to my new GitHub repo.
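For completeness, the Git part of that looks something like this (the remote URL is obviously a placeholder):

git init
git add .
git commit -m "Initial commit"
git remote add origin https://github.com/[USER]/[REPO].git
git push -u origin master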

Next, I needed to set up the actual continuous deployment. This is done through the Azure portal. So I opened up the portal (https://portal.azure.com/), and navigated to the Web App that I wanted to enable CD for. Under the settings for the app, there is a Continuous Deployment option that I clicked

image

After clicking this, you are asked to choose your source. Clicking the “choose source” button, gives you a list of supported source control solutions.

image

In this case, I chose GitHub (for obvious reasons). If you haven’t already set up CD from GitHub at some point before, you are asked to authenticate yourself. Once that is done, you get to choose what project/repo you want to deploy from, as well as what branch you want to use.

As soon as all of this is configured, and you click “Done”, Azure will start its first deployment. However, since it wouldn't run the Gulp-stuff that I required, the build is a massive fail. The deployment will succeed, but your app won't work, as it doesn't have the required CSS and JavaScript.

Modifying the build process to run Gulp

Ok, so now that I had the actual CD configured in Azure, it was time to set up the build to run the required Gulp-task. To do this, I needed to modify the build process. Luckily, this is a simple thing to do…

When doing CD from GitHub, the system doing the actual work is called Kudu. Kudu is a fairly “simple” deployment engine that can be used to deploy pretty much anything you can think of to an Azure Web App. It also happens to be very easily modified. All you need is a .deployment file in the root of your repo to tell Kudu what to do. Or rather what command to run. In this case, the command to run was going to be a CMD-file. This CMD-file will replace the entire build process. However, you don’t have to go and re-create/re-invent the whole process. You can quite easily get a baseline of the whole thing created for you using the Azure CLI.

The Azure CLI can be installed in a couple of different ways, including downloading an EXE or by using npm. I hit some bumps trying to install it through npm though, so I recommend just getting the EXE, which is available here: https://azure.microsoft.com/en-us/documentation/articles/xplat-cli-install/.

Once that is installed, you can generate a .deployment and a deploy.cmd file using a command that looks like this

azure site deploymentscript -s <PathToSln> --aspWAP <PathToCsProj>

So in this case, I ran the following command in the root of my repo

azure site deploymentscript -s Src\DeploymentDemo.sln --aspWAP Src\DeploymentDemo\DeploymentDemo.csproj

Note: It does not verify that the sln or csproj file even exists. The input parameters are just used to set some stuff in the generated deploy.cmd file.

The .deployment-file contains very little. Actually, it only contains

[config]
command = deploy.cmd

which tells Kudu that the command to run during build, is deploy.cmd. The deploy.cmd on the other hand, contains the whole build process. Luckily, I didn’t have to care too much about that, even if it is quite an interesting read. All I had to do, was to find the right place to interfere with the process, and do my thing.

Scrolling through the command file, I located the row where it did the actual build. It happened to be step number 2, and what it does is that it builds the defined project, putting the result in %DEPLOYMENT_TEMP%. So all I had to do was to go in and do my thing, making sure that the generated files that I wanted to deploy were added to the %DEPLOYMENT_TEMP% directory as well.

So I decided to add my stuff to the file after the error level check, right after step 2.

First off I made sure to move the working directory to the correct directory, which in my case was Src\DeploymentDemo. I did this using the command pushd, which allows me to return to the previous directory by just calling popd.

After having changed the working directory, I added code to run npm install. Luckily, npm is already installed on the Kudu agent, so that is easy to do. However, I made sure to call it in the same way that all the other code was called in the file, which meant calling :ExecuteCmd. This just makes sure that if the command fails, the error is echoed to the output automatically.

So the code I added so far looks like this

echo Moving to source directory
pushd "Src\DeploymentDemo"

echo Installing npm packages: Starting %TIME%
call :ExecuteCmd npm install
echo Installing npm packages: Finished %TIME%
IF !ERRORLEVEL! NEQ 0 goto error

As you can see, I also decided to add some echo calls in the code as well. This makes it a bit easier to debug any potential problems using the logs.

Once npm was done installing, it was time to run bower install. And once again, I just called it using :ExecuteCmd like this

echo Installing bower packages: Starting %TIME%
call :ExecuteCmd "bower" install
echo Installing bower packages: Finished %TIME%
IF !ERRORLEVEL! NEQ 0 goto error

And with my Bower-packages in place, it was time to run Gulp. And once again, same deal, :ExecuteCmd gulp. However, I also needed to make sure that any files generated by Gulp in the dist directory were added to the %DEPLOYMENT_TEMP% directory. Which I did by just calling xcopy, copying the dist directory to the temporary deployment directory. Like this

echo Running Gulp: Starting %TIME%
call :ExecuteCmd "gulp"
echo Running Gulp: Finished %TIME%

echo Publishing dist folder files to temporary deployment location
call :ExecuteCmd "xcopy""%DEPLOYMENT_SOURCE%\Src\DeploymentDemo\dist\*.*""%DEPLOYMENT_TEMP%\dist" /S /Y /I
echo Done publishing dist folder files to temporary deployment location

IF !ERRORLEVEL! NEQ 0 goto error

And finally, I just “popped” back to the original directory by calling popd… So the code I added between steps 2 and 3 in the file ended up being this

// "Step 2" in the original script

echo Moving to source directory
pushd "Src\DeploymentDemo"

echo Installing npm packages: Starting %TIME%
call :ExecuteCmd npm install
echo Installing npm packages: Finished %TIME%
IF !ERRORLEVEL! NEQ 0 goto error

echo Installing bower packages: Starting %TIME%
call :ExecuteCmd "bower" install
echo Installing bower packages: Finished %TIME%
IF !ERRORLEVEL! NEQ 0 goto error

echo Running Gulp: Starting %TIME%
call :ExecuteCmd "gulp"
echo Running Gulp: Finished %TIME%

echo Publishing dist folder files to temporary deployment location
call :ExecuteCmd "xcopy""%DEPLOYMENT_SOURCE%\Src\DeploymentDemo\dist\*.*""%DEPLOYMENT_TEMP%\dist" /S /Y /I
echo Done publishing dist folder files to temporary deployment location
IF !ERRORLEVEL! NEQ 0 goto error

echo Moving back from source directory
popd

// "Step 3"in the original script

That’s it! After having added the new files, and modified the deploy.cmd file to do what I wanted, I committed my changes and pushed them to GitHub. And as soon as I did that, Azure picked up the changes and deployed my code, including the Gulp generated files.

Simplifying development

However, there was still one thing that I wanted to solve… I wasn’t too happy about having my site using the bundled and minified JavaScript files at all times. During development, it made it hard to debug, and since VS is already transpiling my TypeScript to JavaScript on the fly, even adding source maps, why not use those files instead… That makes debugging much easier…

So in my cshtml-view, I added the following code where I previously just had an include for the minified JavaScript

@if (HttpContext.Current.IsDebuggingEnabled)
{
    <script src="~/bower_components/angular/angular.js"></script>
    <script src="~/bower_components/less/dist/less.min.js"></script>
    <script src="~/Scripts/MainCtrl.js"></script>
    <script src="~/Scripts/App.js"></script>
}
else
{
    <script src="/dist/deploymentdemo.min.js"></script>
}

This code checks to see if the app is running in debug, and if it is, it uses the “raw” JavaScript files instead of the minified one. And also, it includes Angular in a non-minified version, as this adds some extra debugging capabilities. As you can see, I have also included less.min.js in the list of included JavaScript files. I will get back to why I did this in a few seconds…

Note: Yes, this is a VERY short list of files, and in a real project, this would be a PITA to work with. However, this can obviously be combined with smarter code to generate the list of files to include in a more dynamic way.

Next, I felt that having to wait for Gulp to transpile all my LESS to CSS was a bit of a hassle. Every so often, I ended up making a change to the LESS, and then refreshing the browser too fast, not giving Gulp enough time to run. So why not let the browser do the LESS transpiling on the fly during development?

To enable this, I changed the way that the view included the CSS. In a very similar way to what I did with the JavaScript includes

@if (HttpContext.Current.IsDebuggingEnabled)
{
    <link href="~/bower_components/bootstrap/dist/css/bootstrap.min.css" rel="stylesheet"/>
    <link rel="stylesheet/less" type="text/css" href="~/Styles/Site.less" />
}
else
{
    <link href="/dist/deploymentdemo.min.css" rel="stylesheet" />
}

So, if the app is running in debug, it includes bootstrap.min.css and my less file. If not, it just includes deploymentdemo.min.css. However, to include the LESS file like this, you need to make sure that it is added with the rel attribute set to stylesheet/less, and that the less.min.js file is included, which I did in the previous snippet I showed.

That’s it! Running this app in debug will now give me a good way of handling my resources for development. And running it in the cloud will give the users a bundled and minified version of both the required JavaScript and CSS.

As usual on my blog, you can download the code used in this post. However, it is dependent on setting up a few things in Azure as well… So you can’t just download and try it out unfortunately. But, the code should at least give you a good starting point.

Code is available here: DeploymentDemo.zip (58.1KB)

I also want to mention that you might want to put in some more error checks and so on in there, but I assume that you know that code from blogs are generally just quick demos of different concepts, not complete solutions… And if you didn’t, now you do! Winking smile

Cheers!

PS: I have also blogged about how to upload the generated files to blob storage as part of the build. That is available here.

Uploading Resources to Blob Storage During Continuous Deployment using Kudu


In my last post, I wrote about how to run Gulp as part of your deployment process when doing continuous deployment from GitHub to an Azure Web App using Kudu. As part of that post, I used Gulp to generate bundled and minified JavaScript and CSS files that was to be served to the client.

The files were generated using Gulp, and included in the deployment under a directory called dist. However, they were still part of the website. So they still take up resources on the webserver, as they need to be served from it. And they also take up precious connections from the browser to the server… By offloading them to Azure Blob Storage, we can decrease the number of requests the webserver gets, and increase the number of connections the browser can use to retrieve resources. And it isn't that hard to do…

Modifying the deploy.cmd file

I’m not going to go through all the steps of setting the deployment up. All of that was already done in the previous post, so if you haven’t read that, I suggest you do that first…

The first thing that I need to do, is add some more functionality to my deploy.cmd file. So right after I “pop” back to the original directory, I add some more code. More specifically, I add the following code

echo Pushing dist folder to blobstorage
call :ExecuteCmd npm install azure-storage
call :ExecuteCmd "node" blobupload.js "dist""%STORAGE_ACCOUNT%""%STORAGE_KEY%"
echo Done pushing dist folder to blobstorage

Ok, so I start by echoing out that I am about to upload the dist folder to blob storage. Next, I use npm to install the azure-storage package, which includes code to help out with working with Azure storage in Node.

Next, I execute a script called blobupload.js using Node. I pass in 3 parameters: a string containing the name of the folder to upload, a storage account name, and a storage account key.

And finally I echo out that I’m done.

So what is in that blobupload.js file? Well, it is “just” a node script to upload the files in the specified folder. It starts out like this

var azure = require('azure-storage');
var fs = require('fs');

var distDirName = process.argv[2];
var accountName = process.argv[3];
var accountKey = process.argv[4];
var sourceDir = 'Src\\DeploymentDemo\\dist\\';

It “requires” the newly installed azure-storage package, and fs, which is what one uses to work with the file system in node.

Next, it pulls out the arguments that were passed in. They are available in the process.argv array, from index 2 and forward. (Index 0 is “node” and index 1 is “blobupload.js”, neither of which I need.) And finally, it just defines the source directory to copy from.

Note: The source directory should probably be passed in as a parameter as well, making the script more generic. But it’s just a demo…

After all the important variables have been collected, it goes about doing the upload

var blobService = azure.createBlobService(accountName, accountKey);

blobService.createContainerIfNotExists(distDirName, { publicAccessLevel: 'blob' }, function (error, result, response) {
    if (error) {
        console.log(result);
        throw Error("Failed to create container");
    }

    var files = fs.readdirSync(sourceDir);
    for (var i = 0; i < files.length; i++) {
        console.log("Uploading: " + files[i]);
        blobService.createBlockBlobFromLocalFile(distDirName, files[i], sourceDir + files[i], function (error, result, response) {
            if (error) {
                console.log(error);
                throw Error("Failed to upload file");
            }
        });
    }
});

It starts out by creating a blob service, which it uses to talk to blob storage. Next, it creates the target container if it doesn't already exist. If that fails, it logs the result from the call and throws an error.

Once it knows that there is a target container, it uses fs to get the names of the files in the target directory. It then loops through those names, and uses the blob service's createBlockBlobFromLocalFile method to upload each local file to blob storage.

That’s it… It isn’t harder than that…

Parameters

But wait a second! Where did those magical parameters called %STORAGE_ACCOUNT% and %STORAGE_KEY% that I used in deploy.cmd come from? Well, since Kudu runs in a context that knows about the Web App it is deploying to, it is nice enough to set up any app setting that you have configured for the target Web App, as a variable that you can use in your script using %[AppSettingName]%.

So I just went to the Azure Portal and added 2 app settings to the target Web App, and inserted the values there. This makes it very easy to have different values for different targets when using Kudu. It also means that you never have to check in your credentials.
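If you prefer scripting over clicking around the portal, the same settings can be added with the Azure PowerShell cmdlets. A sketch, with placeholder values. Note that -AppSettings replaces the whole collection, which is why the existing settings are read first:

PS C:\>$site = Get-AzureWebsite -Name "[SITE_NAME]"
PS C:\>$site.AppSettings["STORAGE_ACCOUNT"] = "[ACCOUNT_NAME]"
PS C:\>$site.AppSettings["STORAGE_KEY"] = "[ACCOUNT_KEY]"
PS C:\>Set-AzureWebsite -Name "[SITE_NAME]" -AppSettings $site.AppSettings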

Warning: You should NEVER EVER EVER EVER check in your credentials to places like GitHub. They should be kept VERY safe. Why? Well, read this, and you will understand.

Changing the website

Now that the resources are available in storage instead of in the local application, the web app needs to be modified to include them from there instead.

This could easily be done by changing

@if (HttpContext.Current.IsDebuggingEnabled)
{
    <link href="~/bower_components/bootstrap/dist/css/bootstrap.min.css" rel="stylesheet"/>
    <link rel="stylesheet/less" type="text/css" href="~/Styles/Site.less" />
}
else
{
    <link href="https://[StorageAccountName].blob.core.windows.net/dist/deploymentdemo.min.css" rel="stylesheet" />
}

However, that is a bit limiting to me… I prefer changing it into

@if (HttpContext.Current.IsDebuggingEnabled)
{
    <link href="~/bower_components/bootstrap/dist/css/bootstrap.min.css" rel="stylesheet"/>
    <link rel="stylesheet/less" type="text/css" href="~/Styles/Site.less" />
}
else
{
    <link href="@ConfigurationManager.AppSettings["cdn.prefix"]/dist/deploymentdemo.min.css" rel="stylesheet" />
}

The difference, if it isn't quite obvious, is that I am not hardcoding the storage account that I am using. Instead, I am reading it from the web app's app settings. This gives me a few advantages. First of all, I can just not set the value at all, and it would default to using the files from the webserver. I can also set it to different storage accounts for different deployments. However, you might also have noticed that I called the setting cdn.prefix. The reason for this, is that I can also just turn on the CDN in Azure, and then configure my setting to use that instead of the storage account straight up. So using this little twist, I can use my local files, files from any storage account, as well as a CDN if that is what I want…

This is a small twist to just using storage, but it offers a whole heap of more flexibility, so why wouldn’t you…?

That’s actually all there is to it! Not very complicated at all!

Uploading Resources to Blob Storage During Continuous Deployment using XAML Builds in Visual Studio Team Services


In my last blog post, I wrote about how we can set up continuous deployment to an Azure Web App, for an ASP.NET application that was using Gulp to generate client side resources. I have also previously written about how to do it using GitHub and Kudu (here and here). However, just creating the client side resources and uploading them to a Web App is really not the best use of Azure. It would be much better to offload those requests to blob storage, instead of having the webserver handle them. For several reasons…

So let’s see how we can modify the deployment from the previous post to also include uploading the created resources to blob storage as part of the build.

Creating a Script to Upload the Resources

The first thing we need to get this going is some form of code that can do the actual uploading of the generated files to blob storage. And I guess one of the easiest ways is to just create a PowerShell script that does it.

So by calling in some favors, and Googling a bit, I came up with the following script

[CmdletBinding()]
param(
    [Parameter(Mandatory = $true)]
    [string]$LocalPath,

    [Parameter(Mandatory = $true)]
    [string]$StorageContainer,

    [Parameter(Mandatory = $true)]
    [string]$StorageAccountName,

    [Parameter(Mandatory = $true)]
    [string]$StorageAccountKey
)

function Remove-LeadingString
{
    [CmdletBinding()]
    param (
        [Parameter(Mandatory = $true, ValueFromPipeline = $true)]
        [AllowEmptyString()]
        [string[]]
        $String,

        [Parameter(Mandatory = $true)]
        [string]
        $LeadingString
    )

    process
    {
        foreach ($s in $String)
        {
            if ($s.StartsWith($LeadingString, $true, [System.Globalization.CultureInfo]::InvariantCulture))
            {
                $s.Substring($LeadingString.Length)
            }
            else
            {
                $s
            }
        }
    }
}

function Construct-ContentTypeProperty
{
    [CmdletBinding()]
    param (
        [Parameter(Mandatory = $true, ValueFromPipeline = $true)]
        [System.IO.FileInfo]
        $file
    )

    process
    {
        switch ($file.Extension.ToLowerInvariant()) {
            ".svg" { return @{"ContentType"="image/svg+xml"} }
            ".css" { return @{"ContentType"="text/css"} }
            ".js" { return @{"ContentType"="application/javascript"} }
            ".json" { return @{"ContentType"="application/json"} }
            ".png" { return @{"ContentType"="image/png"} }
            ".html" { return @{"ContentType"="text/html"} }
            default { return @{"ContentType"="application/octet-stream"} }
        }
    }
}

Write-Host "About to deploy to blobstorage"

# Check if Windows Azure Powershell is available
if ((Get-Module -ListAvailable Azure) -eq $null)
{
    throw "Windows Azure Powershell not found! Please install from http://www.windowsazure.com/en-us/downloads/#cmd-line-tools"
}

$context = New-AzureStorageContext -StorageAccountName $StorageAccountName -StorageAccountKey $StorageAccountKey

$existingContainer = Get-AzureStorageContainer -Context $context | Where-Object { $_.Name -like $StorageContainer }
if (!$existingContainer)
{
    $newContainer = New-AzureStorageContainer -Context $context -Name $StorageContainer -Permission Blob
}

$dir = Resolve-Path ($LocalPath)
$files = (Get-ChildItem $dir -Recurse | Where-Object { !($_.PSIsContainer) })
foreach ($file in $files)
{
    # Build the fully qualified file name ($ofs = '' makes the array-to-string
    # cast concatenate the parts without a separator)
    $fqName = $file.Directory, '\', $file.Name
    $ofs = ''
    $fqName = [string]$fqName

    $prop = Construct-ContentTypeProperty $file
    $blobName = ($file.FullName | Remove-LeadingString -LeadingString "$($dir.Path)\")

    Set-AzureStorageBlobContent -Blob $blobName -Container $StorageContainer -File $fqName -Context $context -Properties $prop -Force
}

Yes, I needed to get some help with this. I have very little PowerShell experience, so getting some help just made it a lot faster…

Ok, so what does it do? Well, it isn't really that complicated. It takes 4 parameters: the local path of the folder to upload, the name of the container to upload the files to, the name of the storage account, and the key to the account. All of these parameters will be passed in by the build definition in just a little while…

Next, it declares a function that can remove the beginning of a string if it starts with the specified string, as well as a function to get the content type of the file being uploaded based on the file extension.

After these two functions have been created, it verifies that the Azure PowerShell commandlets are available. If not, it throws an exception.

It then creates an Azure context, which is basically the way you tell the commandlets you are calling what credentials to use.

This is then used to create the specified target container, if it doesn't already exist. After that, it recursively walks through all files and folders in the specified upload directory, uploading one file at a time as it is found.

Not very complicated at all… It could probably do with some optimization in the form of parallel uploads etc., but I couldn't quite figure that out, and I had other things to solve as well…
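By the way, nothing stops you from test-running the script by hand before wiring it into the build. Something like this, with a placeholder key, should upload a local dist folder:

PS C:\>.\Tools\BlobUpload.ps1 -LocalPath .\Src\DeploymentDemo\dist -StorageContainer dist -StorageAccountName deploymentdemo -StorageAccountKey "[ACCOUNT_KEY]"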

Calling the Script During Deployment

Once the script is in place, it needs to be called during the build. Luckily, this is a piece of cake! All you need to do, is to add the following target in the XXX.wpp.targets file that was added in the previous post.

<Target Name="UploadToBlobStorage" AfterTargets="RunGulp" Condition="'$(Configuration)' != 'Debug'">
<Message Text="About to deploy front-end resources to blobstorage using the script found at $(ProjectDir)..\..\Tools\BlobUpload.ps1" />
<PropertyGroup>
<ScriptLocation Condition=" '$(ScriptLocation)'=='' ">$(ProjectDir)..\..\Tools\BlobUpload.ps1</ScriptLocation>
</PropertyGroup>
<Exec Command="powershell -NonInteractive -executionpolicy bypass -command &quot;&amp;{&amp;'$(ScriptLocation)' '$(ProjectDir)dist' 'dist' '$(StorageAccount)' '$(StorageAccountKey)'}&quot;" />
</Target>

As you can see, it is another Target, that is run after the Target called RunGulp. It also has that same condition that the NpmInstall target had, making sure that it isn’t run while building in Debug mode.

The only things that are complicated are the syntax of calling the PowerShell script, which is a bit wonky, and the magical $(StorageAccount) and $(StorageAccountKey) properties, which are actually properties that I have added at the top of the targets file like this

<PropertyGroup>
<StorageAccount></StorageAccount>
<StorageAccountKey></StorageAccountKey>
</PropertyGroup>

However, as you can see, they are empty. That's because we will populate them from the build definition, so that we don't have to check in our secret things into source control.
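And just to demystify the somewhat wonky Exec syntax above, once the XML escaping is undone and the properties are set, what actually ends up running on the build agent is roughly this (paths and values are examples):

powershell -NonInteractive -executionpolicy bypass -command "&{&'C:\src\Tools\BlobUpload.ps1' 'C:\src\Src\DeploymentDemo\dist' 'dist' 'deploymentdemo' '[ACCOUNT_KEY]'}"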

Modifying the Build Definition to Set the Storage Account Values

The final step is to edit the build definition, and make sure that the storage account name and key are set properly, so that the PowerShell script gets the right values passed in.

To do this, you edit the build definition, going to the Process part of it and expanding the “5. Advanced” section. Like this

image

Under this section, you will find an “MSBuild arguments” item. In here, you can set the values of the properties used in the targets file. This is done by adding a /p:[PropertyName]=”[PropertyValue]” for each value you want to set. So in this case, I add

/p:StorageAccount="deploymentdemo" /p:StorageAccountKey="XXXXXXXX"

That’s it! If you just check in the modified targets file, and the PowerShell script, you should be able to queue a new build and have it upload the generated files to blob storage for you.

Finally, to make sure you add the correct includes to the actual web page, I suggest having a look at this blog post. It is about doing this with Kudu from GitHub, but the part about “Changing the website” in that post is just as valid for this scenario. It enables the ability to easily switch between local resources, different blob storage accounts, and even CDN.

Cheers!


Setting Up Continuous Deployment of an ASP.NET App with Gulp from VSTS to an Azure Web App using Scripted Build Definitions


A few weeks ago, I wrote a couple of blog posts on how to set up continuous deployment to Azure Web Apps, and how to get Gulp to run as a part of it. I covered how to do it from GitHub using Kudu, and how to do it from VSTS using XAML-based build definitions. However, I never got around to doing a post about how to do it using the new scripted build definitions in VSTS. So that is what this post is going to be about!

The Application

The application I'll be working with is the same one that I have been using in the previous posts. So if you haven't read them, you might want to go and have a look at them. Or, at least the first part of the first post, which includes the description of the application in use. Without that knowledge, this post might be a bit hard to follow…

If you don't feel like reading more than you need to, the basics are these. It's an ASP.NET web application that uses TypeScript and LESS, and Gulp for generating transpiled, bundled and minified versions of these resources. The files are read from the Styles and Scripts directories, and built to a dist directory using the “default” task in Gulp. The source code for the whole project is placed in a Src directory in the root of the repo…and the application is called DeploymentDemo.

I think that should be enough to figure out most of the workings of the application…if not, read the first post!

Setting up a new build

Ok, so the first step is to set up a new build in our VSTS environment. And to do this, all you need to do, is to log into visualstudio.com, go to your project and click the “Build” tab

image

Next, click the fat, green plus sign, which gives you a modal window where you can select a template for the build definition you are about to create. However, as I’m not just going to build my application, but also deploy it, I will click on the “Deployment” tab. And since I am going to deploy an Azure Web App, I select the “Azure Website” template and click next.

image

Note: Yes, Microsoft probably should rename this template, but that doesn’t really matter. It will still do the right thing.

Warning: If you go to the Azure portal and set up CD from there, you will actually get a XAML-based build definition, and not a scripted one. So you have to do it from in here.

Note: VSTS has a preview feature right now, where you split up the build and deployment steps into multiple steps. However, even if this is a good idea, I am just going to keep it simple and do it as a one-step procedure.

On the next screen, you get to select where the source code should come from. In this case, I’ll choose Git, as my solution is stored in a Git-based VSTS project. And after that, I just make sure that the right repo and branch is selected.

Finally, I make sure to check the “Continuous integration…”-checkbox, making sure that the build is run every time I push a change.

image

That’s it! Just click “Create” to create build definition!

Note: In this window you are also asked what agent queue to use by default. In this example, I'll leave it on “Hosted”. This will give me a build agent hosted by Microsoft, which is nice. However, this solution can actually be a bit slow at times, and limited, as you only get a certain number of minutes of free builds. So if you run into any of these problems, you can always opt in to having your own build agent in a VM in Azure. This way you get dedicated resources to do builds. Just keep in mind that the build agent will incur an extra cost.

Once that is done, you get a build definition that looks like this

image

Or at least you did when I wrote this post…

As you can see, the steps included are:

1. Visual Studio Build – A step that builds a Visual Studio solution

2. Visual Studio Test – A step that runs tests in the solution and makes sure that failing tests fail the build

3. Azure Web App Deployment – A step that publishes the build web application to a Web App in Azure

4. Index Sources & Publish Symbols – A step that creates and published pdb-files

5. Copy and Publish Artifacts – A step that copies build artifacts generated by the previous steps to a specified location

Note: Where is the step that downloads the source from the Git repo? Well, that is actually not its own step. It is part of the definition, and can be found under the “Repository” tab at the top of the screen.

In this case however, I just want to build and deploy my app. I don't plan on running any tests, or generating pdbs etc., so I'm just going to remove some of the steps… To be honest, the only steps I want to keep are steps 1 and 3. So it looks like this

image

Configuring the build step

Ok, now that I have the steps I need, I guess it is time to configure them. There is obviously something wrong with the “Azure Web App Deployment” step considering that it is red and bold…  But before I do anything about that, I need to make a change to the “Visual Studio Build” step.

As there will be some npm stuff being run, which generates that awesome, and very deep, folder structure inside of the “node_modules” folder, the “Visual Studio Build” step will unfortunately fail in its current configuration. It defines the solution to build as **/*.sln, which means “any file with an .sln-extension, in any folder”. This causes the build step to walk through _all_ the folders, including the “node_modules” folder, searching for solution files. And since the folder structure is too deep, it seems to fail if left like this. So it needs to be changed to point to the specific solution file to use. In this case, that means setting the Solution setting to Src/DeploymentDemo.sln. Like this

image

Configuring the deployment step

Ok, so now that the build step is set up, we need to have a look at the deployment part. Unfortunately, this is a bit more complicated than it might seem, and to be honest, than it really needed to be. At first look, it doesn’t look too bad

image

Ok, so all we need to do is to select the subscription to use, the Web App to deploy to and so on… That shouldn’t be too hard. Unfortunately all that becomes a bit more complicated when you open the “Azure Subscription” drop-down and realize that it is empty…

The first thing you need to do is to give VSTS access to your Azure account, which means adding a “Service Endpoint”. This is done by clicking the Manage link to the right of the drop-down, which opens a new tab where you can configure “Service Endpoints”.

image

The first thing to do is to click the fat, green plus sign and select Azure in the drop-down. This opens a new modal like this

image

There are 3 different ways to add a new connection: Credentials, Certificate Based and Service Principal Authentication. In this case, I'll switch over to Certificate Based.

Note: If you want to use Service Principal Authentication you can find more information here

First, the connection needs a name. It can be whatever you want. It is just a name.

Next, you need to provide a bunch of information about your subscription, which is available in the publish settings file for your subscription. The easiest way to get hold of this file, is to hover over the tooltip icon, and then click the link called publish settings file included in the tool tip pop-up.

image

This brings you to a page where you can select what directory you want to download the publish settings for. So just select the correct directory, click “Submit”, and save the generated file to somewhere on your machine. Once that is done, you can close down the new tab and return to the “Add New Azure Connection” modal.

To get hold of the information you need, just open the newly downloaded file in a text editor. It will look similar to this

image

As you can see, there are a few bits of information in here. And it can be MUCH bigger than this if you have many subscriptions in the directory you have chosen. So remember to locate the correct subscription if you have more than one.

The parts that are interesting would be the attribute called Id, which needs to be inserted in the Subscription Id field in the modal, the attribute called Name, which should be inserted in Subscription Name, and finally the attribute called ManagementCertificate, which goes in the Management Certificate textbox. Like this

image
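If you'd rather not eyeball a potentially huge XML file, the same values can be picked out with a few lines of PowerShell. A sketch, assuming the newer publish settings schema where these attributes sit on the Subscription element, and with a placeholder file path and subscription name:

PS C:\>[xml]$publishSettings = Get-Content "C:\downloads\azure.publishsettings"
PS C:\>$sub = $publishSettings.PublishData.PublishProfile.Subscription | Where-Object { $_.Name -eq "[SUBSCRIPTION_NAME]" }
PS C:\>$sub.Id # Subscription Id
PS C:\>$sub.Name # Subscription Name
PS C:\>$sub.ManagementCertificate # Management Certificate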

Once you click OK, the information will be verified, and if everything is ok, the page will reload, and you will have a new service endpoint to play with. Once that is done, you can close down the tab, and return to the build configuration set up.

The first thing you need to do here, is to click the “refresh” button to the right of the drop-down to get the new endpoint to show up. Next, you select the newly created endpoint in the drop-down.

After that, you would assume that the Web App Name drop-down would be populated with all the available web apps in your subscription. Unfortunately, this is not the case for some reason. So instead, you have to manually insert the name of the Web App you want to deploy to.

Note: You have two options when selecting the name of the Web App. Either, you choose the name of a Web App that you have already provisioned through the Azure portal, or you choose a new name, and if that name is available, the deployment script will create a new Web App for you with that name on the fly.

Next, select the correct region to deploy to, as well as any specific slot you might be deploying to. If you are deploying to the default slot, just leave the “slot” textbox empty.

The Web Deploy Package box is already populated with the value $(build.stagingDirectory)\**\*.zip, which works fine for this. If you have more complicated builds, or your application contains other zips that will be output by the build, you might have to change this.

Once that is done, all you have to do is click the Save button in the top left corner, give the build definition a name, and you are done with the configuration.

Finally, click the Queue build… button to queue a new build, and in the resulting modal, just click OK. This will queue a new build, and give you a screen like this while you wait for an agent to become available

image

Note: Yes, I have had a failing build before I took this “screen shot”. Yours might look a little bit less red…

And as soon as there is an agent available for you, the screen will change into something like this

image

where you can follow along with what is happening in the build. And finally, you should be seeing something like this

image

At least if everything goes according to plan

Adding Gulp to the build

So far, we have managed to configure a build and deployment of our solution. However, we are still not including the Gulp task that is responsible for generating the required client-side resources. So that needs to be sorted out.

The first thing we need to do is to run

npm install

To do this, click the fat, green Add build step… button at the top of the configuration

image

and in the resulting modal, select Package in the left hand menu, and then add an npm build step

image

Next, close the modal and drag the new build step to the top of the list of steps.

By default, the command to run is set to install, which is what we need. However, we need it to run in a different directory than the root of the repository. So in the settings for the npm build step, expand the Advanced area, and update the Working Directory to say Src/DeploymentDemo.

image

Ok, so now npm will install all the required npm packages for us before the application is built.

Next, we need to run

bower install

To do this, add a new build step of the type Command Line from the Utility section, and drag it so that it is right after the npm step. The configuration we need for this to work is the following

Tool should be $(Build.SourcesDirectory)\Src\DeploymentDemo\node_modules\.bin\bower.cmd, the arguments should be install, and the working folder should be Src/DeploymentDemo

image

This will execute the bower command file, which is the same as running Bower in the command line, passing in the argument install, which will install the required bower packages. And setting the working directory will make sure it finds the bower.json file and installs the packages in the correct folder.
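In other words, what this step ends up doing is roughly the equivalent of running the following from the repo root:

cd Src\DeploymentDemo
.\node_modules\.bin\bower.cmd install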

Now that the Bower components have been installed, or at least been configured to be installed, we can run Gulp. To do this, just add a new Gulp build step, which can be found under the Build section. And then make sure that you put it right after the Command Line step.

As our gulpfile.js isn’t in the root of the repo, the Gulp File Path needs to be changed to Src/DeploymentDemo/gulpfile.js, and the working directory once again has to be set to Src/DeploymentDemo

image

As I’m using the default task in this case, I don’t need to set the Gulp Task(s) to get it to run the right task.

Finally, I want to remove any leftover files from the build agent, as these can cause potential problems. They really shouldn't, at least not if you are running the hosted agent, but I have run into some weird stuff when running on my own agent, so I try to always clean up after the build. So to do this, I will run the batch file called delete_folder.bat in the Tools directory of my repo. This will use RoboCopy to safely remove deep folder structures, like the node_modules and bower_components folders.

To do this, I add two new build steps to the end of the definition. Both of them are of the type Batch Script from the Utility section of the Add Build Step modal.

Both of them need to have their Path set to Tools/delete_folder.bat, their Working Folder set to Src/DeploymentDemo, and their Always run checkbox checked. However, the first step needs to have the Arguments set to node_modules, and the second one needs it set to bower_components

image

This will make sure that the bower_components and node_modules folders are removed after each build.
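I haven't included the contents of delete_folder.bat here, but the RoboCopy trick it relies on is the classic “mirror an empty folder over the target” one. A rough sketch of the idea, in PowerShell form:

New-Item -ItemType Directory -Force -Path C:\empty | Out-Null
robocopy C:\empty .\node_modules /MIR | Out-Null
Remove-Item .\node_modules -Force
Remove-Item C:\empty -Force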

Finally save the build configuration and you should be done! It should look something like this

image

However, there is still one problem. Gulp will generate new files for us, as requested, but they won't be added to the deployment, unfortunately. To solve this, we need to tell MSDeploy that we want to have those files added to the deployment. To do this, a wpp.targets-file is added to the root of the project, and checked into source control. The file is in this case called DeploymentDemo.wpp.targets and looks like this

<?xmlversion="1.0"encoding="utf-8" ?>
<ProjectToolsVersion="4.0"xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

<TargetName="AddGulpFiles"BeforeTargets="CopyAllFilesToSingleFolderForPackage;CopyAllFilesToSingleFolderForMsdeploy">
<MessageText="Adding gulp-generated files to deploy"Importance="high"/>
<ItemGroup>
<CustomFilesToIncludeInclude=".\dist\**\*.*"/>
<FilesForPackagingFromProjectInclude="%(CustomFilesToInclude.Identity)">
<DestinationRelativePath>.\dist\%(RecursiveDir)%(Filename)%(Extension)</DestinationRelativePath>
</FilesForPackagingFromProject>
</ItemGroup>
</Target>

</Project>

It basically tells the system that any files in the dist folder should be added to the deployment.

Note: You can read more about wpp.targets-files and how/why they work here: http://chris.59north.com/post/Integrating-a-front-end-build-pipeline-in-ASPNET-builds

That’s it! Queuing a new build, or pushing a new commit should cause the build to run, and a nice new website should be deployed to the configured location, including the resources generated by Gulp. Unfortunately, due to the npm and Bower work, the build can actually be a bit slow-ish. But it works!

Cheers!
