Channel: DarksideCookie - ASP.NET

Building a simple PicPaste replacement using Azure Web Apps and WebJobs


This post was supposed to be an introduction to Azure WebJobs, but it took a weird turn somewhere and became a guide to building a simple PicPaste replacement using just an Azure Web App and a WebJob.

As such, it might not be a really useful app, but it does show how simple it is to build quite powerful things using Azure.

So, what is the goal? Well, the goal is to build a website that you can upload images to, and then get a simple Url to use when sharing the image. This is not complicated, but as I want to resize the image, and add a little overlay to it as well before giving the user the Url, I might run into performance issues if it becomes popular. So, instead I want the web app to upload the image to blob storage, and then have a WebJob process it in the background. Doing it like this, I can limit the number of images that are processed at a time, and use a queue to handle any peaks.

Note: Is this a serious project? No, not really. Does it have some performance issues if it becomes popular? Yes. Did I build it as a way to try out background processing with WebJobs? Yes… So don’t take it too seriously.

The first thing I need is a web app that I can use to upload images through. So I create a new empty ASP.NET project, adding support for MVC. Next I add a NuGet package called WindowsAzure.Storage. And to be able to work with Azure storage, I need a new storage account. Luckily, that is as easy as opening the “Server Explorer” window, right-clicking the “Storage” node, and selecting “Create Storage Account…”. After that, I am ready to start building my application.

Inside the application, I add a single controller called HomeController. However, I don’t want to use the default route configuration. Instead, I want to have a nice, short and simple route that looks like this

routes.MapRoute(
    name: "Image",
    url: "{id}",
    defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
);

Ok, now that that is done, I add my “Index” view, which is ridiculously simple, and looks like this

<!DOCTYPE html>

<html>
<head>
    <meta name="viewport" content="width=device-width" />
    <title>AzurePaste</title>
</head>
<body>
    @using (Html.BeginForm("Index", "Home", FormMethod.Post, new { enctype = "multipart/form-data" }))
    {
        <label>Choose file: </label>
        <input type="file" name="file" accept="image/*" /><br />
        <input type="submit" value="Upload" />
    }
</body>
</html>

As you can see, it contains a simple form that allows the user to post a single file to the server using the Index method on the HomeController.

The only problem with this is that there’s no controller action called Index that accepts HTTP POSTs. So let’s add one.

[HttpPost]
public ActionResult Index(HttpPostedFileBase file)
{
    if (file == null)
    {
        return new HttpStatusCodeResult(HttpStatusCode.BadRequest);
    }

    var id = GenerateRandomString(10);
    var blob = StorageHelper.GetUploadBlobReference(id);
    blob.UploadFromStream(file.InputStream);

    StorageHelper.AddImageAddedMessageToQueue(id);

    return View("Working", (object)id);
}

So what does this action do? Well, first of all it returns an HTTP 400 if you forgot to include the file… Next it uses a quick and dirty helper method called GenerateRandomString(). It just generates a random string with the specified length… Next, I use a helper class called StorageHelper, which I will return to shortly, to get a CloudBlockBlob instance to which I upload my file. The name of the blob is the random string I just retrieved, and the container for it is a predefined one that the WebJob knows about. The WebJob will then pick it up there, make the necessary transformations, and then save it to the root container.
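The GenerateRandomString() helper itself is not shown in the post, but a minimal sketch of what it might look like is below. The character set and the use of Random are my own assumptions, not taken from the actual project.

```csharp
using System;

// Hypothetical implementation of the GenerateRandomString() helper
// mentioned above; the alphabet is an assumption.
public static class RandomStringHelper
{
    private static readonly Random Rnd = new Random();

    public static string GenerateRandomString(int length)
    {
        const string chars = "abcdefghijklmnopqrstuvwxyz0123456789";
        var result = new char[length];
        for (var i = 0; i < length; i++)
        {
            // Pick a random character from the allowed alphabet.
            result[i] = chars[Rnd.Next(chars.Length)];
        }
        return new string(result);
    }
}
```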

Note: By adding a container called $root to a blob storage account, you get a way to put files in the root of the Url. Otherwise you are constrained to a Url like https://[AccountName].blob.core.windows.net/[ContainerName]/[FileName]. But using $root, it can be reduced to https://[AccountName].blob.core.windows.net/[FileName].

Once the image is uploaded to Azure, I once again turn to my StorageHelper class to put a message on a storage queue.

Note: I chose to build a WebJob that listened to queue messages instead of blob creation, as it can take up to approximately 10 minutes before the WebJob is called after a blob is created. The reason for this is that the blob storage logs are buffered and only written approximately every 10 minutes. By using a queue, this delay is decreased quite a bit. But it is still not instantaneous… If you need to decrease it even further, you can switch to a ServiceBus queue instead.

Ok, if that is all I am doing, that StorageHelper class must be really complicated. Right? No, not really. It is just a little helper to keep my code a bit more DRY.

The StorageHelper has 4 public methods: EnsureStorageIsSetUp(), which I will come back to, GetUploadBlobReference(), GetRootBlobReference() and AddImageAddedMessageToQueue(). And they are pretty self-explanatory. The two GetXXXBlobReference() methods are just helpers to get hold of a CloudBlockBlob reference. By keeping it in this helper class, I can keep the logic of where blobs are placed in one place… The AddImageAddedMessageToQueue() adds a simple CloudQueueMessage, containing the name of the added image, to a defined queue. And finally, the EnsureStorageIsSetUp() will make sure that the required containers and queues are set up, and that the root container has read permission turned on for everyone.

public static void EnsureStorageIsSetUp()
{
    UploadContainer.CreateIfNotExists();
    ImageAddedQueue.CreateIfNotExists();
    RootContainer.CreateIfNotExists();
    RootContainer.SetPermissions(new BlobContainerPermissions { PublicAccess = BlobContainerPublicAccessType.Blob });
}

public static CloudBlockBlob GetUploadBlobReference(string blobName)
{
    return UploadContainer.GetBlockBlobReference(blobName);
}

public static CloudBlockBlob GetRootBlobReference(string blobName)
{
    return RootContainer.GetBlockBlobReference(blobName);
}

public static void AddImageAddedMessageToQueue(string filename)
{
    ImageAddedQueue.AddMessage(new CloudQueueMessage(filename));
}

Kind of like that…
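The private UploadContainer, RootContainer and ImageAddedQueue members that the methods above use are not shown in the post. A plausible sketch is below; the container and queue names ("upload", "$root", "image-added") match the attributes used in the WebJob later on, but the connection-string name is a placeholder of my own.

```csharp
// Assumed backing members for StorageHelper. The connection-string
// name "StorageConnectionString" is a placeholder, not from the post.
private static CloudStorageAccount Account
{
    get
    {
        return CloudStorageAccount.Parse(
            ConfigurationManager.ConnectionStrings["StorageConnectionString"].ConnectionString);
    }
}

private static CloudBlobContainer UploadContainer
{
    get { return Account.CreateCloudBlobClient().GetContainerReference("upload"); }
}

private static CloudBlobContainer RootContainer
{
    get { return Account.CreateCloudBlobClient().GetContainerReference("$root"); }
}

private static CloudQueue ImageAddedQueue
{
    get { return Account.CreateCloudQueueClient().GetQueueReference("image-added"); }
}
```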

The returned view that the user gets after uploading the file looks like this

@model string

<!DOCTYPE html>

<html>
<head>
    <meta name="viewport" content="width=device-width" />
    <title>Azure Paste</title>
    <script src="~/Content/jquery.min.js"></script>
    <script type="text/javascript">
        $(function () {
            function checkCompletion() {
                setTimeout(function () {
                    $.ajax({ url: "/@Model" })
                        .done(function (data, status, jqXhr) {
                            if (jqXhr.status === 204) {
                                checkCompletion();
                                return;
                            }
                            document.location.href = data;
                        })
                        .fail(function (error) {
                            alert("Error...sorry about that!");
                            console.log("error", error);
                        });
                }, 1000);
            }
            checkCompletion();
        });
    </script>
</head>
<body>
    <div>Working on it...</div>
</body>
</html>

As you can see, the bulk of it is some JavaScript, while the actual content that the user sees is really tiny…

The JavaScript uses jQuery (no, not a fan, but it has some easy ways to do ajax calls) to poll the server every second. It calls the server at “/[FileName]”, which as you might remember from my changed routing, will call the Index method on the HomeController.

If the call returns an HTTP 204, the script keeps on polling. If it returns HTTP 200, it redirects the user to a location specified by the returned content. If something else happens, it just alerts that something went wrong…

Ok, so this kind of indicates that my Index() method needs to be changed a bit. It needs to do something different if the id parameter is supplied. So I start by handling that case

public ActionResult Index(string id)
{
    if (string.IsNullOrEmpty(id))
    {
        return View();
    }

    ...
}

That’s pretty much the same as it is by default. But what if the id is supplied? Well, then I start by looking for the blob that the user is looking for. If that blob exists, I return an HTTP 200, and the Url to the blob.

public ActionResult Index(string id)
{
    ...

    var blob = StorageHelper.GetRootBlobReference(id);
    if (blob.Exists())
    {
        return new ContentResult { Content = blob.Uri.ToString().Replace("/$root", "") };
    }

    ...
}

As you can see, I remove the “/$root” part of the Url before returning it. The Azure Storage SDK will include that container name in the Url even if it is a “special” container that isn’t needed in the Url. So by removing it I get this nicer Url.

If that blob does not exist, I look for the temporary blob in the upload folder. If it exists, I return an HTTP 204. And if it doesn’t, then the user is looking for a file that doesn’t exist, so I return a 404.

public ActionResult Index(string id)
{
    ...

    blob = StorageHelper.GetUploadBlobReference(id);
    if (blob.Exists())
    {
        return new HttpStatusCodeResult(HttpStatusCode.NoContent);
    }

    return new HttpNotFoundResult();
}

Ok, that is all there is to the web app. Well…not quite. I still need to ensure that the storage stuff is set up properly. So I add a call to the StorageHelper.EnsureStorageIsSetUp() in the Application_Start() method in Global.asax.cs.

protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();
    RouteConfig.RegisterRoutes(RouteTable.Routes);

    StorageHelper.EnsureStorageIsSetUp();
}

Next up is the WebJob that will do the heavy lifting. So I add a WebJob project to my solution. This gives me a project with 2 C# files. One called Program.cs, which is the “entry point” for the job, and one called Functions.cs, which contains a sample WebJob.

The first thing I want to do is to make sure that I don’t overload my machine by running too many of these jobs in parallel. Image manipulation hogs resources, and I don’t want it to get too heavy for the machine.

I do this by setting the batch size for queues in a JobHostConfiguration inside the Program.cs file

static void Main()
{
    var config = new JobHostConfiguration();
    config.Queues.BatchSize = 2;

    var host = new JobHost(config);
    host.RunAndBlock();
}

Now that that is done, I can start focusing on my WebJob… I start out by deleting the existing sample job, and add in a new one.

A WebJob is just a static method with a “trigger” as the first parameter. A trigger is just a method parameter that has an attribute set on it, which defines when the method should be run. In this case, I want to run it based on a queue, so I add a QueueTrigger attribute to my first parameter.

As my message contains a simple string with the name of the blob to work on, I can define my first parameter as a string. Had it been something more complicated, I could have added a custom type, which the calling code would have populated by deserializing the content in the message. Or, I could have chosen to go with CloudQueueMessage, which gives me total control. But as I said, just a string will do fine. It will also help me with my next three parameters.
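As an illustration of the custom-type option: if the queue message contained JSON, the WebJobs SDK would deserialize it into a POCO automatically. The type below and its properties are hypothetical, not part of the actual project.

```csharp
// Hypothetical POCO variant of the queue trigger. The SDK deserializes
// the JSON message body into this type before invoking the method.
public class ImageAddedMessage
{
    public string BlobName { get; set; }
    public DateTime UploadedAt { get; set; }
}

public static void OnImageAddedPoco(
    [QueueTrigger("image-added")] ImageAddedMessage message,
    TextWriter log)
{
    log.WriteLine("Image {0} uploaded at {1}", message.BlobName, message.UploadedAt);
}
```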

As a lot of WebJobs will be working with Azure storage, the SDK includes helping tools to make this easier. One such tool is an attribute called BlobAttribute. This makes it possible to get a blob reference, or its contents, passed into the method. In this case, getting references to the blobs I want to work with makes things a lot easier. I don’t have to handle getting references to them on my own. All I have to do, is to add parameters of type ICloudBlob, and add a BlobAttribute to them. The attribute takes a name-pattern as the first string. But in this case, the name of the blob will be coming from the queue message… Well, luckily, the SDK people have thought of this, and given us a way to access this by adding “{queueTrigger}” to the pattern. This will be replaced by the string in the message…

Ok, so the signature for my job method turns into this

public static void OnImageAdded([QueueTrigger("image-added")] string blobName,
    [Blob("upload/{queueTrigger}")] ICloudBlob blob,
    [Blob("$root/{queueTrigger}.png")] ICloudBlob outputImage,
    [Blob("$root/{queueTrigger}")] ICloudBlob htmlBlob,
    TextWriter log)
{
    ...
}

As you can see, I have also added a final parameter of type TextWriter, called log. This will be a log supplied by the WebJob host, making it possible to log things in a nice and uniform way.

But wait a minute…why am I taking in 3 blobs? The first one is obviously the uploaded image. The second one is the target where I am going to put the transformed image. What is the last one? Well, I am going to make it a little more complicated than just hosting the image… I am going to host the image under the name [UploadedBlobName].png. I am then going to add a very simple HTML file to show the image, in a blob with the same name as the uploaded blob. That way, the Url to the page to view the image will be nice and simple, and it will show the image and a little text.

The first thing I need to do is get the content of the blob. This could have been done by requesting a Stream instead of an ICloudBlob, but as I want to be able to delete it at the end, that didn’t work…unless I used more parameters, which felt unnecessary…

Once I have my stream, I turn it into a Bitmap class from the System.Drawing assembly. Next, I resize that image to a maximum width or height before adding a little watermark.

public static void OnImageAdded([QueueTrigger("image-added")] string blobName,
    [Blob("upload/{queueTrigger}")] ICloudBlob blob,
    [Blob("$root/{queueTrigger}.png")] ICloudBlob outputImage,
    [Blob("$root/{queueTrigger}")] ICloudBlob htmlBlob,
    TextWriter log)
{
    var ms = new MemoryStream();
    blob.DownloadToStream(ms);

    var image = new Bitmap(ms);
    var newImage = image.Resize(int.Parse(ConfigurationManager.AppSettings["AzurePaste.MaxSize"]));
    newImage.AddWatermark("AzurePaste FTW");

    ...
}

Ok, now that I have my transformed image, it is time to add it to the “target blob”. I do this by saving the image to a MemoryStream and then uploading that. However, by default, all blobs get the content type “application/octet-stream”, which isn’t that good for images. So I update the blob’s content type to “image/png”.

public static void OnImageAdded([QueueTrigger("image-added")] string blobName,
    [Blob("upload/{queueTrigger}")] ICloudBlob blob,
    [Blob("$root/{queueTrigger}.png")] ICloudBlob outputImage,
    [Blob("$root/{queueTrigger}")] ICloudBlob htmlBlob,
    TextWriter log)
{
    ...

    var outputStream = new MemoryStream();
    newImage.Save(outputStream, ImageFormat.Png);
    outputStream.Seek(0, SeekOrigin.Begin);
    outputImage.UploadFromStream(outputStream);
    outputImage.Properties.ContentType = "image/png";
    outputImage.SetProperties();

    ...
}

The last part is to add the fabled HTML page… In this case, I have just quickly hacked a simple HTML page into my assembly as an embedded resource. I could of course have used one stored in blob storage or something, making it easier to update. But I just wanted something simple…so I added it as an embedded resource… But before I add it to the blob, I make sure to replace the Url to the blob, which has been defined as “{0}” in the embedded HTML.

public static void OnImageAdded([QueueTrigger("image-added")] string blobName,
    [Blob("upload/{queueTrigger}")] ICloudBlob blob,
    [Blob("$root/{queueTrigger}.png")] ICloudBlob outputImage,
    [Blob("$root/{queueTrigger}")] ICloudBlob htmlBlob,
    TextWriter log)
{
    ...

    string html;
    using (var htmlStream = typeof(Functions).Assembly.GetManifestResourceStream("DarksideCookie.Azure.WebJobs.AzurePaste.WebJob.Resources.page.html"))
    using (var reader = new StreamReader(htmlStream))
    {
        html = string.Format(reader.ReadToEnd(), blobName + ".png");
    }
    htmlBlob.UploadFromStream(new MemoryStream(Encoding.UTF8.GetBytes(html)));
    htmlBlob.Properties.ContentType = "text/html";
    htmlBlob.SetProperties();

    blob.Delete();
}

As you can see, I also make sure to update the content type of this blob to “text/html”. And yeah…I also make sure to delete the original file.

That is about it… I guess I should mention that the Resize() and AddWatermark() methods on the Bitmap are extension methods I have added. They are not important for the topic, so I will just leave them out. But they are available in the downloadable code below.
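For a rough idea of what such extension methods could look like, here is a sketch using System.Drawing. This is my own illustration under my own assumptions (scaling rule, font, watermark placement), not the actual code from the download.

```csharp
using System;
using System.Drawing;

// Illustrative sketches of Resize() and AddWatermark() extension
// methods; the real implementations are in the downloadable sample.
public static class BitmapExtensions
{
    public static Bitmap Resize(this Bitmap image, int maxSize)
    {
        // Scale so that neither dimension exceeds maxSize,
        // preserving the aspect ratio. Never upscale.
        var scale = Math.Min((double)maxSize / image.Width, (double)maxSize / image.Height);
        if (scale >= 1) return image;
        return new Bitmap(image, (int)(image.Width * scale), (int)(image.Height * scale));
    }

    public static void AddWatermark(this Bitmap image, string text)
    {
        // Draw a semi-transparent text overlay in the bottom-left corner.
        using (var g = Graphics.FromImage(image))
        using (var font = new Font("Arial", 12))
        using (var brush = new SolidBrush(Color.FromArgb(128, Color.White)))
        {
            g.DrawString(text, font, brush, 5, image.Height - 25);
        }
    }
}
```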

There is one more thing though… What happens if my code is borked, or someone uploads a file that isn’t an image? Well, in that case, the code will fail, and fail, and fail… If it fails, the WebJob will be re-run at a later point. Unfortunately, this can turn ugly, and turn into what is known as a poison message. Luckily, this is handled by default. After a configurable amount of retries, the message is considered a poison message, and will be discarded. As it is, a new message is automatically added to a dynamically created queue to notify us about it. So it might be a good idea for us to add a quick little job to that queue as well, and log any poison messages.

The name of the queue that is created is “[OriginalQueueName]-poison”, and the handler for it looks like any other WebJob. Just try not to add code in here that turns these messages into poison messages…

public static void LogPoisonBlob([QueueTrigger("image-added-poison")] string blobname, TextWriter logger)
{
    logger.WriteLine("WebJob failed: Failed to prep blob named {0}", blobname);
}

That’s it! Uploading an image through the web app will now place it in blob storage. It will then wait there until the WebJob has picked it up, transformed it, and stored it, along with a new HTML file, at the root of the storage account, giving the user a quick and easy address to share with his or her friends.

Note: If you want to run something like this in the real world, you probably want to add some form of clean-up solution as well. Maybe a WebJob on a schedule, that removes any images older than a certain age…

And as usual, there is a code sample to play with. Just remember that you need to set up a storage account for it, and set the correct connection strings in the web.config file, as well as in the WebJob project’s app.config if you want to run it locally.

Note: It might be good to know that logs and information about current WebJobs can be found at https://[ApplicationName].scm.azurewebsites.net/azurejobs/#/jobs

Source code: DarksideCookie.Azure.WebJobs.AzurePaste.zip (52.9KB)

Cheers!

