Nozus Step 2: Setting up MVC 6 with Basic Logging

The previous post in this series covered basic MVC 6 API project setup. In this post, we’re going to build on that and set up some baseline logging functionality in the API.  We’ll continue to enhance logging as the project progresses.

We’ll start by setting up logging.  In our case, we’re going to use Serilog.  It’s the new kid on the block for .Net logging and looks like it combines some nice elements of structured data with basic log messages.  This can easily be swapped for NLog.  Right now there isn’t 5.0 support for Log4Net, but expect that to change.
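Serilog itself is a .Net library, but the structured-data idea is easy to illustrate in a few lines of plain JavaScript.  This is just a sketch of the concept (template names captured as properties alongside the rendered message), not Serilog’s actual API:

```javascript
// Sketch of structured logging: named holes in the message template are
// captured as first-class properties instead of being flattened into a string.
function logEvent(template, values) {
  const properties = {};
  const rendered = template.replace(/\{(\w+)\}/g, (match, name) => {
    properties[name] = values[name];
    return String(values[name]);
  });
  return { template, rendered, properties };
}

const evt = logEvent("User {userId} logged in from {ip}",
                     { userId: 42, ip: "10.0.0.1" });
// evt.rendered   -> "User 42 logged in from 10.0.0.1"
// evt.properties -> { userId: 42, ip: "10.0.0.1" }
```

Because the properties survive as data rather than text, a structured log store can later query on them (say, all events with a given userId).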

If it’s not already open, open the Package Manager Console: Tools –> Nuget Package Manager –> Package Manager Console.  Select your VNext package source and the Web.Api project as the default project. At the prompt:

PM> Install-Package Microsoft.Framework.ConfigurationModel.Json -includeprerelease
Installing NuGet package Microsoft.Framework.ConfigurationModel.Json.1.0.0-beta3.
PM> Install-Package Microsoft.Framework.Logging.Serilog -includeprerelease
Installing NuGet package Microsoft.Framework.Logging.Serilog.1.0.0-beta3.

So we just installed JSON configuration support (no more web.config) and support for Serilog.  Now let’s add a config.json file.  Right-click the Web.Api project and select Add –> New Item…  Select ASP.Net Configuration File, which corresponds to a config.json file.  Just add the default config.json to the project; we’ll add some things to it in a later installment.  I lowercase the file name, but that’s optional.

We’re going to add logging into Startup.cs now. Open Startup.cs, add a Configuration property, and set it in the constructor. This pulls config information from our config.json and also adds in configuration from any environment variables that may be set in the deployment environment.

public Startup(IHostingEnvironment env)
{
    // Set up configuration sources.
    Configuration = new Configuration()
        .AddJsonFile("config.json")
        .AddEnvironmentVariables();
}

public IConfiguration Configuration { get; set; }

Now we’ll update the ConfigureServices method, which registers services in our DI container.  We’ll add in logging by calling AddLogging(), and while we’re in here, we’ll go ahead and remove XML as a potential output format.  Notice that AddLogging consumes the configuration we built in the constructor.  We’re going JSON-only with these services. Why? Because I don’t really want to support XML. Leave it if you like.

public void ConfigureServices(IServiceCollection services)
{
    services.AddLogging(Configuration);
    services.AddMvc();
    services.Configure<MvcOptions>(options =>
    {
        options.OutputFormatters.RemoveAll(formatter =>
            formatter.Instance is XmlDataContractSerializerOutputFormatter);
    });
}

Next, we’ll configure the actual middleware pipeline.  For this we use the Configure method. It’s a little confusing that we have ConfigureServices (DI) and Configure (pipeline), but that’s the convention.  We’ll add Serilog as our logger and create a method to set up our logger configuration. Serilog can alternatively read from app settings if that’s preferred, and I’ll probably switch it over to do that at some point.  Notice here that I’m writing out to a rolling file on my D drive.  You can write to whatever location or sink works for you.

public void Configure(IApplicationBuilder app,
    IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    loggerFactory.AddSerilog(GetLoggerConfiguration());
    app.UseStaticFiles();

    // Add MVC to the request pipeline.
    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller}/{action}/{id?}",
            defaults: new { controller = "Home", action = "Index" });
    });
}

private static LoggerConfiguration GetLoggerConfiguration()
{
    return new LoggerConfiguration()
        .Enrich.WithMachineName()
        .Enrich.WithProcessId()
        .Enrich.WithThreadId()
        .MinimumLevel.Debug()
        .WriteTo.RollingFile(@"D:\Logs\Nozus.Web.Api\file-{Date}.txt",
            outputTemplate:
                "{Timestamp:yyyy-MM-dd HH:mm:ss.fff zzz} {Level}:{EventId} [{SourceContext}] {Message}{NewLine}{Exception}");
}

Now that we have the Serilog package installed, if you try to compile you might notice a problem.  The compiler has some complaints, and if you look closely, you’ll notice that it’s complaining not just about Serilog, but also about ASP.Net Core 5.0.

CoreErrors

So if you’re not already aware, we have two flavors of .Net 5.0: the Core flavor, which is being touted as the minimal/side-by-side/deployable/cloud flavor, versus the full framework.  The problem here is that Serilog (and most other legacy .Net assemblies) isn’t compatible with the Core flavor.  Expect this to change as .Net 5.0 goes live and the migration begins.  Right now we are compiling for both the Core and full flavors of the framework.  For now, we’re going to remove Core compilation from our project.  Open the project.json file and delete the "aspnetcore50" entry so only the full framework remains:

 "frameworks": {
     "aspnet50": { }
 }

Now compilation should succeed.  Next, let’s put in some basic logging as an error catch-all.  There are a few ways to accomplish this.  One way is to use the built-in error-handling middleware.  To me, that’s more geared towards MVC, where one wants to perform some logging and then perhaps render an alternate view from the standard one.  In my case, since this is an API, I just want to log the error and send the standard 500 out the door.  Perhaps I’ll add a Production mode later that sends out an alternate response.  The nice thing is that since ASP.Net is now open source, we can see what the error-handling middleware does and just make a simpler version of it.  Since I just need something stupid simple, I added my own middleware.

In the Web.Api project add a Middleware folder.  In this folder create a new class called ErrorLoggingMiddleware.  It should look like this:

using System;
using System.Threading.Tasks;
using Microsoft.AspNet.Builder;
using Microsoft.AspNet.Http;
using Microsoft.Framework.Logging;

namespace Nozus.Web.Api.Middleware
{
    public class ErrorLoggingMiddleware
    {
        private readonly RequestDelegate _next;
        private readonly ILogger _logger;

        public ErrorLoggingMiddleware(RequestDelegate next,
            ILoggerFactory loggerFactory)
        {
            _next = next;
            _logger = loggerFactory.Create<ErrorLoggingMiddleware>();
        }

        public async Task Invoke(HttpContext context)
        {
            try
            {
                await _next(context);
            }
            catch (Exception ex)
            {
                _logger.WriteError("An unhandled exception has occurred: "
                    + ex.Message, ex);
                throw; // Don't stop the error
            }
        }
    }
}

All we’re doing is logging the error and throwing it up the chain for eventual handling by the framework. One really nice thing you’ll notice is that our dependencies are injected into the middleware for us.  So middleware is now hooked into the core DI mechanism and is truly first class.  Very nice…  You can also see the basic middleware pattern.  This is the same basic OWIN pattern you may be used to: middleware is a Russian doll, where the next middleware component is injected into and wrapped by the current one.  So in our case, we just call the next middleware component and log something if there’s an error. Really easy, and the possibilities for open-source/third-party middleware are huge.  I expect this to explode with ASP.Net 5.
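If the Russian-doll shape is hard to picture from the C# alone, here’s a minimal sketch of the same composition idea in plain JavaScript (illustrative only; the compose helper is my own name, not part of any framework):

```javascript
// Minimal sketch of the middleware pattern: each component receives the
// next one and wraps it, so the pipeline is built inside-out.
function compose(middlewares, terminal) {
  // Start from the innermost handler and wrap outward.
  return middlewares.reduceRight(
    (next, middleware) => (ctx) => middleware(ctx, next),
    terminal);
}

const log = [];
const errorLogger = (ctx, next) => {
  try { next(ctx); }
  catch (err) { log.push("error: " + err.message); throw err; }
};
const handler = (ctx) => {
  if (ctx.boom) throw new Error("Ghost!");
  log.push("ok");
};

const pipeline = compose([errorLogger], handler);
pipeline({ boom: false });                               // handler runs normally
try { pipeline({ boom: true }); } catch (e) { /* re-thrown, as in the C# version */ }
```

Same shape as the C# Invoke method: call the next component, log on failure, re-throw.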

So now we need to call our middleware.  In the Configure method, we’ll use the UseMiddleware extension method to add in our custom middleware.

public void Configure(IApplicationBuilder app,
    IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    loggerFactory.AddSerilog(GetLoggerConfiguration());

    app.UseStaticFiles();

    app.UseMiddleware<ErrorLoggingMiddleware>();

    // Add MVC to the request pipeline.
    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller}/{action}/{id?}",
            defaults: new { controller = "Home", action = "Index" });
    });
}

So easy, and now we’re set.  One thing to remember is that the order in which middleware is added determines the call sequence, so it’s important to add your component in the right place, which may differ depending on what you are trying to accomplish.

So now as a quick example, let’s log something from our home page by throwing an error from our default HomeController.  Just open the HomeController and have the Index method throw an error, something like:

[HttpGet("/")]
public IActionResult Index()
{
    throw new InvalidOperationException("Ghost in the machine!");
    return View();
}

Let’s start up our app.  It should immediately error, since the exception gets re-thrown up the chain by our middleware.

ErrorShot

But if we go to our log file location that we set earlier, we should now have a log file with our error properly logged.

2015-03-11 21:02:58.723 -05:00 Error: [Nozus.Web.Api.Middleware.ErrorLoggingMiddleware] An unhandled exception has occurred: Ghost in the machine!
System.InvalidOperationException: Ghost in the machine!
 at Nozus.Web.Api.Controllers.HomeController.Index() in C:\Users\Visual Studio 14\Projects\Another\Nozus.Web.Api\Controllers\HomeController.cs:line 14
--- End of stack trace from previous location where exception was thrown --etc....

Just remember to remove the thrown error before proceeding further.  In the next installment we’ll set up basic identity management; then we may switch over to Aurelia before getting back to social logins.

Nozus Step 1: Creating a Web API with MVC 6 – Project Setup

This article involves basic Visual Studio and project setup and should go fairly quickly.  I’m going to start out by creating a Web Api using MVC 6.  For reference, I’m using Visual Studio 2015 Preview 6.  Instead of beginning from complete scratch, I’m going to start with the Web API template.

From Visual Studio, go to File –> New –> Project.

In the New Project dialog box, select Web from the template tree and ASP.Net Web Application as the template.

NewProject

In the resulting web project template modal, select ASP.Net 5 Preview Web API. Note:  The Web API template is new with Preview 6.

NewProject2

Now let’s add a couple of basic projects to our solution to round out the API: a Domain class library for all domain entities and interfaces, and a Data project for any repositories or EF 7 DbContexts.

Right click the solution node in the Solution Explorer and select Add –> New Project

NewProjectClassLib

InitialSolution

In the Add New Project dialog, select ASP.Net 5 Class Library as the template. Do this twice:  Once for a .Domain project and once for a .Data project.  The initial structure should look something like the image to the right.

Now that some basic project structure is set up, let’s add in some needed Nuget packages.  The first thing to do is to make sure the Nuget Package Manager is up to date.  Go to Tools –> Extensions and Updates… and look in the Updates node of the navigation tree to see if there are any updates available for the Nuget Package Manager.

Now we need to set up the package manager such that it’s pointing to the correct source of packages for ASP.Net 5.

Open the Nuget settings by going to Tools –> Nuget Package Manager –> Package Manager Settings.  The Package Manager Settings dialog will open.

Navigate to Package Sources, and if it’s not already present, add an entry for AspNetVNext packages with source: https://www.myget.org/F/aspnetmaster/api/v2.

NugetSetup

Before we get started adding packages, let’s make sure the packages we have are up to date.  Right-click the solution node in the Solution Explorer and select Manage Nuget Packages…  In the Nuget Package Manager dialog, select the AspNetVNext source with the Upgrade available and Include prerelease filters. Install any available official M$ upgrades.

UpgradePackages

If you open your project.json in the Web.Api project, it should look something like the structure below.  We’ll talk more about this structure in the next installment.

{
    "webroot": "wwwroot",
    "version": "1.0.0-*",
    "dependencies": {
        "Microsoft.AspNet.Server.IIS": "1.0.0-beta3",
        "Microsoft.AspNet.Mvc": "6.0.0-beta3",
        "Microsoft.AspNet.StaticFiles": "1.0.0-beta3",
        "Microsoft.AspNet.Server.WebListener": "1.0.0-beta3",
        "System.Runtime": "4.0.20"
    },
    "frameworks": {
        "aspnet50": { },
        "aspnetcore50": { }
    },
    "exclude": [
        "wwwroot",
        "node_modules",
        "bower_components"
    ],
    "bundleExclude": [
        "node_modules",
        "bower_components",
        "**.kproj",
        "**.user",
        "**.vspscc"
    ]
}

Go ahead and run the Web.Api project.  The browser of your choice should launch and you should see a page like the following:

AppRunning

In the next article, we’ll set up basic logging and then identity management.

So many cool tools, which way to go…

I was beginning to work on a new exploratory project where I wanted to get a little deeper into some technologies that I don’t get to use every day at work. I’ve got a plan to build a cloud-based web site that does a few things.  So the choices were Angular or Aurelia on the front end and Node or ASP.Net 5/MVC 6 on the back end.

I’ve been doing various Node and Angular exercises and tutorials for the past year or so and have also been excited about the release of ASP.Net 5 and what it promises.  Although I’ve been put off by the recent Node schism, I’m fully confident that either a reconciliation will take place or IO.js will run away with the ball.  And the Node/IO community is so incredibly vibrant right now.  It seems like there’s a package for anything and everything; you can literally see the energy coming off of that ecosystem. Recently, I’ve also been impressed by the Aurelia project and its embrace of ES6 and other emerging and established standards.  Aurelia will probably be a niche player in the SPA market when compared to Angular/React/Ember etc., but it’s really cool and looks well put together, so why not try it out.

But as cool as Node is, there are things in the .Net ecosystem that I’ll miss terribly, like LINQ and (a hopefully more performant) EF, the truly awesome productivity tools in VS and R#, as well as all of my previous experience. I also want to support .Net’s move into OSS and embrace/trust its own community.  The source code out on GitHub has already been an incredible help to me, so I think that move is paying off.

I’ve decided to stick to my .Net roots and try out ASP.Net 5/MVC 6 as an API on the back end, learning it while it’s still relatively new, and attempt Aurelia on the front end with help from Bootstrap.  We’ll see how it works out…

Setting up Babel with Gulp

So after initially using the WebStorm file-watcher mechanism to transpile to ES5 using Babel, I decided to instead do it the “correct” way: using Gulp.  In this case my project is a Node/Express REST API, so I’d end up using Gulp anyway for various other tasks.  Here’s the easy setup:

Add in the following dependencies using npm:

npm install gulp --save-dev
npm install gulp-babel --save-dev
npm install gulp-sourcemaps --save-dev
npm install require-dir --save-dev

My project structure is set up as:

  • src – The untranspiled source.
  • dist – The transpiled source.
  • build – The build files.  I typically put a paths file inside of build that has all of my project path information for conducting builds, and add a tasks directory under build for the actual build tasks.

We can now add a build.js file under build–>tasks which will look like:

var gulp = require("gulp");
var sourceMaps = require("gulp-sourcemaps");
var babel = require("gulp-babel");

gulp.task("build", function () {
    return gulp.src("src/**/*.js") //get all js files under the src
        .pipe(sourceMaps.init()) //initialize source mapping
        .pipe(babel()) //transpile
        .pipe(sourceMaps.write(".")) //write source maps
        .pipe(gulp.dest("dist")); //pipe to the destination folder
});

Now define your main gulpfile.js in the root project directory.  It simply uses require-dir to require all files in the build/tasks folder (to pull in all tasks).

require('require-dir')('build/tasks');

That’s it!  Now run “gulp build” at the command prompt…all set.  This is obviously pretty bare-bones.  Normally I might also use some other packages to set up gulp tasks that enhance my build process by:

  • Cleaning the dist directory pre-transpile (del)
  • Setting up a linter (jshint)
  • Running the project with change monitoring (gulp-nodemon)
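As a sketch of the first bullet, a hypothetical clean task (gulp 3-era syntax, using the del package) might look like this; the glob and the task wiring here are assumptions, so adjust to taste:

```javascript
var gulp = require("gulp");
var del = require("del");

// Wipe the previous transpiler output; del returns a promise,
// so gulp knows when the task has finished.
gulp.task("clean", function () {
    return del(["dist/**/*"]);
});

// Declaring clean as a dependency of build would then run the wipe
// before every transpile:
// gulp.task("build", ["clean"], function () { ... });
```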

Setting up Babel with WebStorm on Windows

FYI: this post refers to WebStorm 9.  Although the same approach should work with WebStorm 10, I found that WebStorm 10 already had a watcher set up for Babel.

So I decided to start working on a project combining Node and Aurelia, kind of a MEAN with Aurelia as the A.  I’m coming from a .Net background, so I’m on Windows.  I tried out both WebStorm and Sublime and was really drawn to WebStorm based on my familiarity with many of the shortcuts I’ve used forever in ReSharper.

So now I’m using WebStorm and I want to develop using ES6.  WebStorm comes with a transpiler plugin (basically a file watcher) for Traceur.  I’m sure Traceur works great, but Babel has gotten a lot of good reviews, and Aurelia uses 6to5 (old Babel) out of the box, so why not stick with the same thing.  So I wanted to set up Babel as a custom file watcher in WebStorm…here’s the easy way to do that.

First, install babel via npm:

npm install babel -g 

In order to run a WebStorm command, at least on Windows, it has to be an exe, bat, or cmd file.  So add a new file to the root of your project, call it “runbabel.cmd”, and give it the following contents:

babel %*

This tells Babel to run with any arguments passed in.  Make sure to **not** name the command babel.cmd as it will just call itself in a tight loop instead of calling the Babel CLI.

Now in the main menu, select File –> Settings…  From the resulting popup, go to Tools –> File Watchers and click the + button to add a new watcher.

Add file watcher

From the resulting modal, set up the watcher with the following settings:

Create watcher

  • Name the watcher Babel (or whatever) and give it a description if you like.
  • Set the file type to JavaScript files.
  • Create a Scope and scope it to the directory containing your source files (in this project, that is src).
    • It’s important to set the scope properly, because this is the directory that WebStorm watches, which is not necessarily the directory the program will operate on.  Initially I set this to the project directory, since Babel already accepts a directory to transpile, but I ran into issues because the watcher would get into a tight loop: Babel would output into the project directory, which would retrigger the watcher, which would transpile, which would output new files and trigger the watcher again…infinity!   For more information on setting up a scope, check out the WebStorm docs on this subject.
  • For the Program select the runbabel.cmd that was created earlier, if you have it within your project, you can use the $ProjectFileDir$ macro to locate the command.
  • The Arguments can now be any arguments that the Babel CLI accepts.  In this case we’re saying that it should run on the src directory and output to the lib directory.

Now just select the Babel watcher you created…and let it rip!

SelectFileWatcher

Next, we’ll talk about how to set up the transpiler using gulp instead of a file watcher.

New Version of FPR Available

Want a .Net object -> object mapper with lots of functionality that’s anywhere from 10-50x faster than AutoMapper?  Of course you do.  That’s why I created FPR:  The Mapper of Your Domain!

To be fair, I didn’t start this project.  I forked it a while back from FastMapper when I ran into some perf issues with AutoMapper. We still use AutoMapper in a lot of places, but have found it to be really slow in some situations, and we have a very high-throughput SaaS API.  We do a lot of mapping Repo -> Domain -> Contracts, so we need our mapper to be lightning fast.  I found FastMapper, but discovered that while it was really fast, it had some critical bugs and gave very few actionable errors/feedback.  In addition, we needed a much more robust feature set, to put it in the ballpark of AutoMapper.  So I forked it and enhanced it significantly.  A teammate suggested the original name, which err….had to be abbreviated to be slightly less controversial.

We don’t use a lot of EF and where we do we haven’t yet switched to FPR, so it hasn’t come up as an issue, but I got some pull requests recently to help with EF mapping support.  Those have been added to the latest release.

So try it out!  Pull requests welcome…

Introducing ClearScript Manager

So I wrote a wrapper for the ClearScript .Net V8 wrapper.  ClearScript Manager was created to encapsulate the use of the ClearScript V8 engine in multi-use scenarios, like in a hosted server project (Ex: for use in a Web App or Web API).

ClearScript is an awesome library that was created to allow execution of JavaScript via V8 from the .Net runtime. It’s a great project but it needed a few extra things in certain situations that aren’t in the core goals of ClearScript itself. ClearScript also runs VBScript and JScript but those are not in the scope of ClearScript.Manager at the current time.

It should be noted that the package also installs the latest version of ClearScript.

Here are a couple of the related discussions on the clearscript forum:

https://clearscript.codeplex.com/discussions/535693
https://clearscript.codeplex.com/discussions/533516

And the ClearScript site: https://clearscript.codeplex.com

Along those lines, ClearScript.Manager does the following to make certain things easier in your server project:

  • Downloads and adds the ClearScript dlls appropriately.
  • Creates a configurable pool of V8 Runtimes that are cached and reused.
    • Pools have a configurable number of max instances.
    • Behavior when attempting to retrieve a V8 Runtime is to block until a V8 engine becomes available.
  • Because V8 Runtimes have affinity for compiled scripts, it compiles and caches scripts for each V8 Runtime instance.
  • Attempts to better contain running V8 scripts by:
    • Setting up a Task in which the script is run with a configurable timeout.
    • Allowing easy management of the memory usage of each V8 Runtime instance, setting limits to a much lower threshold than the default V8 settings.
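The pool-with-blocking behavior described above is easy to sketch in any language.  Here’s an illustrative JavaScript version (this is not ClearScript.Manager’s API): a fixed-size pool that hands out instances and parks callers in a queue when the pool is exhausted.

```javascript
// Sketch of a fixed-size resource pool with blocking acquisition: at most
// maxInstances are ever created, and extra callers wait in a queue until
// an instance is released back to the pool.
class Pool {
  constructor(factory, maxInstances) {
    this.idle = [];
    this.created = 0;
    this.waiters = [];
    this.factory = factory;
    this.maxInstances = maxInstances;
  }
  acquire() {
    if (this.idle.length > 0) return Promise.resolve(this.idle.pop());
    if (this.created < this.maxInstances) {
      this.created++;
      return Promise.resolve(this.factory());
    }
    // Pool exhausted: the caller "blocks" until someone releases.
    return new Promise((resolve) => this.waiters.push(resolve));
  }
  release(instance) {
    const waiter = this.waiters.shift();
    if (waiter) waiter(instance);       // hand straight to the oldest waiter
    else this.idle.push(instance);      // otherwise return to the idle list
  }
}
```

Release hands the instance directly to the oldest waiter if one is queued, which is what gives the blocking-acquire behavior.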

Check it out!  For more information go to the GitHub page.

Why Did I Create Burrows?

There are already (at least) three great .Net implementations out there that support RabbitMQ, so why create something different?  And why not just use MassTransit out of the box instead of forking it?  Good and valid questions.

It started a couple of years ago.  We had a project in hardcore dev mode and we wanted to use RabbitMQ as the core of our messaging system.  NServiceBus was there, but their Rabbit support was still somewhat suspect and not fully integrated. Our take was:  Why pay for a commercial product and then use a community add-on for the core implementation?  Of course now Rabbit is a fully supported transport, but then it was a different game and I’m still not sure NSB would be worth the money for us.
 
We then looked at EasyNetQ and in fact we started using it.  Let me make it clear that I love this product, but it was missing some things.  At the time it was really just getting up to speed and it didn’t support message object-type routing like NServiceBus and MassTransit.
 
So we went with MassTransit.  This worked great for a while (with a few bug fixes) until we really wanted to implement solid publisher confirms.  MassTransit had stated they were going to add it, but it had been a while.  I put in a pull request with a naive implementation, but the guys instead created their own implementation.  The problem was that while it was there, it didn’t really work in our scenarios and would have resulted in message loss.  I contacted Mike Hadlow (from EasyNetQ) and asked if he’d be open to accepting a pull for real/full message-inheritance support (which it still doesn’t support well).  Mike said that he would consider it, but didn’t want to risk EasyNetQ getting overly complicated.  So I stuck with MassTransit, created a fork, and added our own implementation of publisher confirms.
 
But then there was something else as well:  MassTransit started with MSMQ as the transport.  Although the transport is mostly abstracted, there was a lot of cruft in the core and configuration that was MSMQ-related, so I basically ripped all of that stuff out.  In addition, it looked like a few different coding styles had been used, and there was a pattern of using typical class names for interfaces and then an “Impl” suffix for the actual implementation.  ***Brain explosion sound***  I’m not against that, but my brain just does not compute.  I’m used to the standard IWhatever interface naming convention, so I updated all of those and tried to make other class names more uniform.
 
Although we use it actively, I’ve had Burrows on the shelf for a little while now, but I think I’m getting ready to dive back in re-energized.  Frankly, we’ve been experiencing some issues under heavy load, and it doesn’t really leverage async well (true of both MassTransit and Burrows).  In addition, its basic message-handling approach can lead to thread starvation if your subscribers don’t process messages quickly, which isn’t obvious.  But then there’s the balance between performance and safety that must be considered.  Hopefully more is coming soon, but meanwhile, check out our current implementation:
For more information go to GitHub or Nuget.