Nozus Step 1: Creating a Web API with MVC 6 – Project Setup

This article covers basic Visual Studio and project setup and should go fairly quickly.  I’m going to start out by creating a Web API using MVC 6.  For reference, I’m using Visual Studio 2015 Preview 6.  Instead of beginning from complete scratch, I’m going to start with the Web API template.

From Visual Studio, go to File –> New –> Project.

In the New Project dialog box, select Web from the template tree and ASP.Net Web Application as the template.


In the resulting web project template modal, select ASP.Net 5 Preview Web API. Note:  The Web API template is new with Preview 6.


Now let’s add a couple of basic projects to our solution to round out the API: a Domain class library for all domain entities and interfaces, and a Data project for any repositories or EF 7 DbContexts.

Right click the solution node in the Solution Explorer and select Add –> New Project


In the Add New Project dialog, select ASP.Net 5 Class Library as the template. Do this twice:  once for a .Domain project and once for a .Data project.  The initial structure should look something like the image to the right.

Now that some basic project structure is set up, let’s add in some needed Nuget packages.  The first thing to do is to make sure the Nuget Package Manager is up to date.  Go to Tools –> Extensions and Updates… and look in the Updates node of the navigation tree to see if there are any updates available for the Nuget Package Manager.

Now we need to set up the package manager such that it’s pointing to the correct source of packages for ASP.Net 5.

Open the Nuget settings by going to Tools –> Nuget Package Manager –> Package Manager Settings.  The Package Manager Settings dialog will open.

Navigate to Package Sources, and if it’s not already present, add an entry for AspNetVNext packages with source:


Before we get started adding packages, let’s make sure the packages we already have are up to date.  Right click the solution node in the Solution Explorer and select Manage Nuget Packages…  In the Nuget Package Manager dialog, select the AspNetVNext source with Upgrade available and Include prerelease as filters, then install any available official Microsoft upgrades.


If you open your project.json in the Web.Api project, it should look something like the structure below.  We’ll talk more about this structure in the next installment.

 {
   "webroot": "wwwroot",
   "version": "1.0.0-*",
   "dependencies": {
     "Microsoft.AspNet.Server.IIS": "1.0.0-beta3",
     "Microsoft.AspNet.Mvc": "6.0.0-beta3",
     "Microsoft.AspNet.StaticFiles": "1.0.0-beta3",
     "Microsoft.AspNet.Server.WebListener": "1.0.0-beta3",
     "System.Runtime": "4.0.20"
   },
   "frameworks": {
     "aspnet50": { },
     "aspnetcore50": { }
   },
   "exclude": [ ],
   "bundleExclude": [ ]
 }

Go ahead and run the Web.Api project.  The browser of your choice should launch and you should see a page like the following:


In the next article, we’ll set up basic logging and then identity management.

So many cool tools, which way to go…

I was beginning to work on a new exploratory project where I wanted to get a little deeper into some technologies that I don’t get to use every day at work. I’ve got a plan to build a cloud-based web site that does a few things.  So the choices were Angular or Aurelia on the front end and Node or ASP.Net 5/MVC 6 on the back end.

I’ve been doing various Node and Angular exercises and tutorials for the past year or so and have also been excited about the release of ASP.Net 5 and what it promises.  Although I’ve been put off by the recent Node schism, I’m fully confident that either a reconciliation will take place or IO.js will run away with the ball.  And the Node/IO community is so incredibly vibrant right now.  It seems like there’s a package for anything/everything. You can literally see the energy coming off of that system. Recently, I’ve also been impressed by the Aurelia project and its embrace of ES 6 and other emerging and established standards.  Aurelia will probably be a niche player in the SPA market when compared to Angular/React/Ember etc…, but it’s really cool and looks well put together, so why not try it out.

But as cool as Node is, there are things that I’ll miss terribly from the .Net ecosystem, like LINQ, (a hopefully more performant) EF, the truly awesome productivity tools in VS and R#, and all of my previous experience. I also want to support .Net’s move into OSS and embrace/trust its own community.  Already the source code out on GitHub has been an incredible help to me; I think that openness is already paying off.

I’ve decided to stick to my .Net roots and try out ASP.Net 5/MVC 6 as an API on the back end, learning it while it’s still relatively new, and attempt Aurelia on the front end with help from Bootstrap.  We’ll see how it works out…

Setting up Babel with Gulp

So after initially using the WebStorm file watcher mechanism to transpile to ES5 using Babel, I decided to instead do it the “correct” way: using Gulp.  In this case my project is a Node/Express REST API, so I’d end up using Gulp anyway for various other tasks.  Here’s the easy setup:

Add in the following dependencies using npm:

npm install gulp --save-dev
npm install gulp-babel --save-dev
npm install gulp-sourcemaps --save-dev
npm install require-dir --save-dev

My project structure is set up as:

  • src – The untranspiled source.
  • dist – The transpiled source.
  • build – The build files.  I typically put a paths file inside of build that has all of my project path information for conducting builds, and add a tasks directory under build for the actual build tasks.
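
To make the layout concrete, a tiny ES6 file under src might look like the following (greet.js is just a hypothetical example; any .js file under src gets transpiled into dist):

```javascript
// src/greet.js - a hypothetical ES6 source file; the build task below
// transpiles everything under src into dist.
const greet = (name = "world") => `Hello, ${name}!`; // arrow fn + default param

console.log(greet("gulp"));
```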

We can now add a build.js file under build–>tasks which will look like:

var gulp = require("gulp");
var sourceMaps = require("gulp-sourcemaps");
var babel = require("gulp-babel");

gulp.task("build", function () {
    return gulp.src("src/**/*.js") //get all js files under the src
        .pipe(sourceMaps.init()) //initialize source mapping
        .pipe(babel()) //transpile
        .pipe(sourceMaps.write(".")) //write source maps
        .pipe(gulp.dest("dist")); //pipe to the destination folder
});

Now define your main gulpfile.js in the root project directory.  It simply uses require-dir to require all files in the build/tasks folder (to pull in all tasks).
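Since all the gulpfile has to do is pull in the task files, a minimal sketch looks like this (a build config fragment; require-dir comes from the npm install step above):

```javascript
// gulpfile.js - pull in every task file under build/tasks;
// require-dir registers each gulp task as a side effect of the require.
var requireDir = require("require-dir");

requireDir("./build/tasks");
```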


That’s it!  Now run “gulp build” at the command prompt and you’re all set.  This is obviously pretty bare-bones.  Normally I might also be using some other packages to set up gulp tasks that enhance my build process by:

  • Cleaning the dist directory pre-transpile (del)
  • Setting up a linter (jshint)
  • Running the project with change monitoring (gulp-nodemon)
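
As a sketch of what those extras might look like (a hypothetical build/tasks/dev.js; assumes del and gulp-nodemon are installed as dev dependencies, and that the app’s entry point is dist/index.js — adjust to your project):

```javascript
// build/tasks/dev.js - hypothetical clean and serve tasks (gulp 3 style)
var gulp = require("gulp");
var del = require("del");
var nodemon = require("gulp-nodemon");

// clean: delete previously transpiled output before a fresh build
gulp.task("clean", function (done) {
    del(["dist/**/*"], done);
});

// serve: run the transpiled app, rebuilding and restarting on changes to src
gulp.task("serve", ["build"], function () {
    return nodemon({
        script: "dist/index.js", // assumed entry point
        watch: "src",
        tasks: ["build"]
    });
});
```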

Setting up Babel with WebStorm on Windows

FYI: this post refers to WebStorm 9.  Although the same approach should work with WebStorm 10, I found that WebStorm 10 already had a watcher set up for Babel.

So I decided to start working on a project combining Node and Aurelia, kind of a MEAN with Aurelia as the A.  I’m coming from a .Net background, so I’m on Windows.  I tried out both WebStorm and Sublime and was really drawn to WebStorm based on my familiarity with many of the shortcuts I’ve used forever in ReSharper.

So now I’m using WebStorm and I want to develop using ES6.  WebStorm comes with a transpiler plugin (basically a file watcher) for Traceur.  I’m sure Traceur works great, but Babel has gotten a lot of good reviews, and Aurelia uses 6to5 (old Babel) out of the box, so why not stick with the same tool.  So I wanted to set up Babel as a custom file watcher in WebStorm…here’s the easy way to do that.

First, install babel via npm:

npm install babel -g 

In order for WebStorm to run a command, at least on Windows, it has to be an .exe, .bat, or .cmd file.  So add a new file to the root of your project and call it “runbabel.cmd” with the following contents:

babel %*

This tells Babel to run with any arguments passed in.  Make sure to **not** name the command babel.cmd as it will just call itself in a tight loop instead of calling the Babel CLI.

Now in the main menu, select File –> Settings…  From the resulting popup, go to Tools –> File Watchers and click the + button to add a new watcher.

Add file watcher

From the resulting modal, set up the watcher with the following settings:

  • Name the watcher Babel (or whatever) and give it a description if you like.
  • Set the file type to JavaScript files.
  • Create a Scope and scope it to the directory containing your source files (in this project, that is src).
    • It’s important to set the scope properly because this is the directory that WebStorm watches, which is not necessarily the directory the program will operate on.  Initially I set this to the project directory, since Babel already accepts a directory to transpile, but I ran into issues because the watcher would get into a tight loop: Babel would output into the project directory, which would retrigger the watcher, which would transpile, which would output new files and trigger the watcher again…infinity!   For more information on setting up a scope, check out the WebStorm docs on this subject.
  • For the Program, select the runbabel.cmd that was created earlier.  If it’s within your project, you can use the $ProjectFileDir$ macro to locate the command.
  • The Arguments can now be any arguments that the Babel CLI accepts.  In this case we’re saying that it should run on the src directory and output to the lib directory.
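
For reference, those watcher arguments correspond to a standalone invocation of the Babel 5 CLI along these lines (--out-dir is the Babel CLI flag for directory output; src and lib are this project’s directories):

```shell
# transpile everything under src, writing the output to lib
babel src --out-dir lib
```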

Now just select the Babel watcher you created…and let it rip!


Next, we’ll talk about how to set up the transpiler using gulp instead of a file watcher.

New Version of FPR Available

Want a .Net object -> object mapper with lots of functionality that’s anywhere from 10-50x faster than AutoMapper?  Of course you do.  That’s why I created FPR:  The Mapper of Your Domain!

To be fair, I didn’t start this project.  I forked it a while back from FastMapper when I ran into some perf issues with AutoMapper. We still use AutoMapper in a lot of places, but have found it to be really slow in some situations, and we have a very high throughput SaaS API.  We do a lot of mapping Repo -> Domain -> Contracts, so we need our mapper to be lightning fast.  I found FastMapper, but discovered that while it was really fast, it had some critical bugs and gave very few actionable errors/feedback.  In addition, we needed a much more robust feature set to put it in the ballpark of AutoMapper.  So I forked it and enhanced it significantly.  A teammate suggested the original name, which err….had to be abbreviated to be slightly less controversial.

We don’t use a lot of EF and where we do we haven’t yet switched to FPR, so it hasn’t come up as an issue, but I got some pull requests recently to help with EF mapping support.  Those have been added to the latest release.

So try it out!  Pull requests welcome…

Introducing ClearScript Manager

So I wrote a wrapper for the ClearScript .Net V8 wrapper.  ClearScript Manager was created to encapsulate the use of the ClearScript V8 engine in multi-use scenarios, like in a hosted server project (Ex: for use in a Web App or Web API).

ClearScript is an awesome library that was created to allow execution of JavaScript via V8 from the .Net runtime. It’s a great project, but in some scenarios it needs a few extra things that aren’t in the core goals of ClearScript itself. ClearScript also runs VBScript and JScript, but those are not in the scope of ClearScript.Manager at the current time.

It should be noted that the package also installs the latest version of ClearScript.

Here are a couple of the related discussions on the clearscript forum:

And the ClearScript site:

Along those lines, ClearScript.Manager does the following to make certain things easier in your server project:

  • Downloads and adds the ClearScript dlls appropriately.
  • Creates a configurable pool of V8 Runtimes that are cached and reused.
    • Pools have a configurable number of max instances.
    • When all instances are in use, a request for a V8 Runtime blocks until an engine becomes available.
  • Because V8 Runtimes have affinity for compiled scripts, it compiles and caches scripts for each V8 Runtime instance.
  • Attempts to better contain running V8 scripts by:
    • Setting up a Task in which the script is run with a configurable timeout.
    • Allowing easy management of the memory usage of each V8 Runtime instance, setting the limits to a much lower threshold than the default V8 settings.

Check it out!  For more information go to the GitHub page.

Why Did I Create Burrows?

So there are already (at least) three great .Net implementations out there that support RabbitMQ; why create something different?  And why not just use MassTransit out of the box instead of forking it?  Good and valid questions.

It started a couple of years ago.  We had a project in hardcore dev mode and we wanted to use RabbitMQ as the core of our messaging system.  NServiceBus was there, but their Rabbit support was still somewhat suspect and not fully integrated. Our take was:  Why pay for a commercial product and then use a community add-on for the core implementation?  Of course now Rabbit is a fully supported transport, but then it was a different game and I’m still not sure NSB would be worth the money for us.
We then looked at EasyNetQ and in fact we started using it.  Let me make it clear that I love this product, but it was missing some things.  At the time it was really just getting up to speed and it didn’t support message object-type routing like NServiceBus and MassTransit.
So we went with MassTransit.  This worked great for a while (with a few bug fixes) until we really wanted to implement solid Publisher-Confirms.  MassTransit had stated they were going to add it, but it had been a while.  I put in a pull request with a naive implementation, but the guys instead created their own implementation.  The problem was that while it was there, it didn’t really work in our scenarios and would have resulted in message loss.  I contacted Mike Hadlow (from EasyNetQ) and asked if he’d be open to accepting a pull for real/full message inheritance support (which it still doesn’t support well).  Mike said that he would consider it, but didn’t want to risk EasyNetQ getting overly complicated.  So I stuck with MassTransit, created a fork, and added our own implementation of Publisher-Confirms.  
But then there was something else as well:  MassTransit started with MSMQ as the transport.  Although the transport is mostly abstracted, there was a lot of cruft in the core and configuration that was MSMQ related.  So I basically ripped all of that stuff out.  In addition, it looked like a few different coding styles had been used, and there was a pattern of using typical class names for interfaces and then an “impl” suffix for the actual implementation.  ***Brain explosion sound***  I’m not against that, but my brain just does not compute.  I’m used to the standard IWhatever interface naming convention, so I updated all of those and tried to make other class names more uniform.
Although we use it actively, I’ve had Burrows on the shelf for a little bit now.  But I think I’m getting ready to dive back in re-energized.  Frankly, we’ve been experiencing some issues under heavy load, and it doesn’t really leverage async well (true of both MassTransit and Burrows).  In addition, its basic message-handling approach can lead to thread starvation if your subscribers don’t process messages quickly, which isn’t obvious.  But then there’s the balance between performance and safety that must be considered.  Hopefully more is coming soon, but meanwhile, check out our current implementation:
For more information go to GitHub or Nuget.