Monday, July 30, 2012

Finito

With this last post, our trip to Italy has come to an end. We have spent these last few days practicing several indispensable tourist activities: testing the ice-cold lake water, only to wriggle back between the other sunbathers on the packed beach right after; eating and drinking exuberantly; reading novels; and sauntering between trinket shops.

Fortuitously, one of our last evenings here was upgraded by fireworks over an already moonlit Lake Garda. The remote setting took the fireworks show up a notch. With each rocket, the pitch-dark beach was illuminated for a few seconds; with each bang, the sound echoed over the mountains, sending ripples over the water until they stranded on the pebble beach.

Friday, July 27, 2012

Milano

After Venice on Wednesday, we decided to stay in Torbole for a relaxing hike yesterday. As it turned out, the hiking trail described in the brochures as suitable for everyone, with a duration of 1h20, in reality took us 2h30 and a pair of sore calves to complete. The views of the lake's valley and its school of windsurfers defying the wind were worth the sweat though. Arriving at the endpoint and discovering we had to wait over an hour for the bus back, we stumbled upon one of the best hidden lakeside terraces we've seen yet.



Since the days of our dolce vita are numbered, we made our last big trip today: Milano. We parked our car in the suburbs, where you pay only a quarter an hour and have a direct metro connection to the Duomo. While we've had it with churches in general, we couldn't let this one slip. The Milanese Duomo is the third biggest church in all of Christendom and could easily host a Metallica concert. Entrance to the church is free, but if you want to visit the rooftop, you pay €7 on foot (250 stairs) or €12 for the elevator. Although the panorama wasn't as striking as the one we had seen yesterday, the architecture and craftsmanship of the rooftop details are worth seeing up close. After that, we continued the 'Old City' itinerary of our travel guide, which includes the Galleria Vittorio Emanuele II and the Teatro alla Scala.



Thursday, July 26, 2012

The Floating City

Yesterday we cruised along the eastern coastline of Lake Garda. Plan A was to visit Malcesine and take the cableway up to the top of Monte Baldo, but since the line at the ticket center resembled that of a Justin Bieber concert, we skipped the cableway. Maybe we'll give it another try later this week, but then earlier in the day. We continued our journey along the coast, passing several genuinely picturesque villages. We stopped in Garda for an elaborate lakeside lunch, before continuing to our last destination of the day, Sirmione. There we visited the medieval Castello Scaligero, and after climbing the numerous steps inside the castle, we rewarded ourselves with an oversized, typically Italian gelato.


Today we made a trip to Venice. After a two-hour drive, we parked our car at Venezia Mestre train station and entered the Floating City by train. We soon found ourselves lost in a maze of fairy-tale alleys and waterways, searching for the main tourist attractions. First we headed for the Mercato Rialto, which is said to be one of the best markets in the world, but we can't confirm that, since it had already been cleared out when we got there at 2PM. After having some delicious fresh fruit at the foot of the Ponte di Rialto, we crossed the famous bridge stacked with souvenir shops. Arriving at Piazza San Marco, we took the time to visit the impressive Basilica San Marco and the gigantic Doge's Palace. While these classics were definitely worthwhile, you shouldn't miss out on strolling around the less touristy neighborhoods to experience the authentic Venice.







Tuesday, July 24, 2012

Ferrari red. Lamborghini yellow.

After 14 hours and 34 minutes in the car, covering 1148 km over the German Autobahn, swerving Austrian roads and the Italian autostrade, defying overwhelming hail storms, roadworks and miles-long tunnel congestion, we arrived in Torbole, Italy.

Torbole is a small village nestled between the mountains, right at the edge of Lake Garda. While the lakeside view is astonishing, you have to pick the right time to truly experience it. In the daytime the village is overwhelmed by tourists: miles-long queues of cars wait to enter the village, pizzerias and ristorantes sit on every corner, Germans galore. Still, the location makes a great launching point for visiting other places in northern Italy.





After a lazy day yesterday, we drove two hours south today, to Sant'Agata Bolognese and Maranello, to visit two legendary car manufacturers: Lamborghini and Ferrari. While both museums take only around 40 minutes to visit, I was definitely more impressed by the Lamborghini museum, which exhibits far more eccentric and rare (non-racing) models than the Ferrari museum does.






Since both museums failed to entertain us for longer than an hour, we visited Verona on our way back. While we didn't enter any tourist attractions, we enjoyed strolling through the old town, blending into the crowd, only halting on a Roman square to pay way too much for an iced cappuccino, which was served lukewarm.


Sunday, July 15, 2012

Should I unit- or integration test my ASP.NET Web API services?

Over the last two weeks, while preparing for a talk, I have been doing some research on ASP.NET Web API. After working my way through the API and the implementation of certain features, I looked at testing.

Similar to ASP.NET MVC, Web API lets you create relatively small building blocks, which can replace parts of, or be added to, an existing default global setup. This makes it possible to test each component in isolation: controllers, dependency resolvers, filters, serialization, type formatters, message handlers and routing.

Testing in isolation helps a great deal to limit the number of things you have to keep in your head, and when you break something, you can quickly pinpoint the origin of the error. What unit testing fails to prove, however, is the correctness of your code once all the little pieces are put together and configured. And that is extremely important when you're exposing an API.

Looking at Web API, I would probably test most infrastructure in isolation - filters, type formatters, message handlers and serialization - because those tests help pinpoint errors in components that affect a lot of other code. I wouldn't test controllers and routing in isolation though.
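As an aside, testing a message handler in isolation doesn't require any hosting at all: a DelegatingHandler can be invoked directly through an HttpMessageInvoker, with a stub as its inner handler. A minimal sketch, assuming a hypothetical MethodOverrideHandler that rewrites POSTs based on the X-HTTP-Method-Override header (the post doesn't show its implementation, so this one is made up for illustration):

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical handler under test: turns POST + X-HTTP-Method-Override into
// the overridden HTTP method before passing the request on.
public class MethodOverrideHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        IEnumerable<string> values;
        if (request.Method == HttpMethod.Post &&
            request.Headers.TryGetValues("X-HTTP-Method-Override", out values))
        {
            request.Method = new HttpMethod(values.First());
        }
        return base.SendAsync(request, cancellationToken);
    }
}

// Stub for the rest of the pipeline: echoes the method it received back
// in the reason phrase, so the test can assert on it.
public class EchoMethodHandler : HttpMessageHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        return Task.FromResult(new HttpResponseMessage(HttpStatusCode.OK)
        {
            ReasonPhrase = request.Method.Method
        });
    }
}

[TestClass]
public class MethodOverrideHandlerTests
{
    [TestMethod]
    public void Post_With_Override_Header_Is_Rewritten_To_Put()
    {
        var handler = new MethodOverrideHandler { InnerHandler = new EchoMethodHandler() };
        var invoker = new HttpMessageInvoker(handler);

        var request = new HttpRequestMessage(HttpMethod.Post, "http://test/api/resume");
        request.Headers.Add("X-HTTP-Method-Override", "PUT");

        var response = invoker.SendAsync(request, CancellationToken.None).Result;

        Assert.AreEqual("PUT", response.ReasonPhrase);
    }
}
```

No server, no routing: just the handler and a stub, which is exactly why these components are cheap to cover in isolation.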

I would test controllers and routing from a client's perspective: I send a request to an endpoint on the server, let it go through the infrastructure, and assert the response. This excludes the false positives and false negatives that can originate when you unit test controllers and have to fake a bunch of infrastructure just to get the test working, while it does include testing the effect the real infrastructure has on your incoming requests and outgoing responses.

An obvious counterargument might be the need to start and stop a web server in your tests, and the associated performance hit. This isn't something to worry about with Web API though; HttpServer is just another HttpMessageHandler, which makes it possible to consume it in-memory using an HttpClient.

So let me show you some code I wrote trying out these ideas. The first thing I did was expose the hosting server's configuration to my tests. This can be as simple as this.
public class ServerSetup 
{
    public static HttpSelfHostConfiguration GetConfiguration(string baseAddress)
    {
        var config = new HttpSelfHostConfiguration(baseAddress);
        
        var kernel = new StandardKernel();
        kernel.Bind<IResumeStore>().To<ResumeStore>();
        
        config.Routes.MapHttpRoute(
            "DefaultApi", "api/{controller}/{id}",
            new { id = RouteParameter.Optional });
        config.MessageHandlers.Add(new MethodOverrideHandler());
        config.DependencyResolver = new NinjectDependencyResolver(kernel);

        return config;
    }
}
Now in my test I can grab this configuration, and just overwrite the dependencies and the error detail policy. I can initialize an HttpClient by passing in an HttpServer instance which uses the modified configuration.
private HttpClient _client;

[TestInitialize]
public void Setup()
{
    var kernel = new StandardKernel();
    kernel.Bind<IResumeStore>().ToConstant(new Mock<IResumeStore>().Object);

    var config = ServerSetup.GetConfiguration("http://test");
    config.IncludeErrorDetailPolicy = IncludeErrorDetailPolicy.Always;                       
    config.DependencyResolver = new NinjectDependencyResolver(kernel);

    _client = new HttpClient(new HttpServer(config));
}

[TestMethod]
public void Post_Returns_HttpStatus_Code_Created()
{         
    var result = _client.PostAsync<Resume>(
          "http://test/api/resume", 
          new Resume("Jef", "Claes"), 
          new JsonMediaTypeFormatter()).Result;

    result.EnsureSuccessStatusCode();

    Assert.AreEqual(HttpStatusCode.Created, result.StatusCode);
}
Now I'm consuming my API almost exactly as a client would: my request goes through the routing, the infrastructure and the controller. The infrastructure is still tested in isolation, so finding problems there is easy, but I now have the advantage of testing the routing, the effect of my real infrastructure and the logic in my controller actions in one simple integration test. Remember, we are testing a delivery mechanism, not an application; Web API controllers should be skinny as well.
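For completeness, the controller exercised by the test above might look as simple as the sketch below. The post doesn't show it, so both the shape of `ResumeController` and the `IResumeStore.Save` method are assumptions; the point is only that a skinny controller coordinates and nothing more.

```csharp
using System.Net;
using System.Net.Http;
using System.Web.Http;

public class ResumeController : ApiController
{
    private readonly IResumeStore _store;

    // The dependency is resolved by the NinjectDependencyResolver
    // configured in ServerSetup.
    public ResumeController(IResumeStore store)
    {
        _store = store;
    }

    // Coordinate only: persist the resume and reply 201 Created.
    public HttpResponseMessage Post(Resume resume)
    {
        _store.Save(resume); // Save is assumed; the post doesn't show IResumeStore

        return Request.CreateResponse(HttpStatusCode.Created, resume);
    }
}
```

With a controller this thin, the integration test above covers virtually all the behavior worth testing.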

One drawback I stumbled upon is the discoverability of controller dependencies, but surprisingly that didn't bother me much. I can still see an overview of all my dependencies in the controller's constructor; not having IntelliSense for them isn't a disaster.

In general, I think this pragmatic approach to testing Web API implementations extracts as much value as possible from automated testing, while writing as little test code as possible and without adding too much complexity.

What do you think about this approach to testing Web API solutions? 

Sunday, July 8, 2012

HtmlHelper to generate a top-level menu for areas

Last week, we had to set up a new ASP.NET MVC web application, using a somewhat customized Twitter Bootstrap build. Because the application has multiple functional contexts, we divided it into multiple parts using areas. Since these areas mapped one-to-one to the top-level menu items, we tried abstracting the creation of the menu items, and the management of the active item, into an HtmlHelper.

Let's say, for this example, that we have six areas: Images, Maps, Play, Search, Video and Blog, and we want to render a list item for each one of them.
<div class="nav-collapse collapse">
    <ul class="nav">
        <!-- Add list items -->
    </ul>
</div>
The first solution we tried assumed we wanted an extremely low-maintenance solution: write some infrastructure once, then create new areas without ever having to think about updating the top-level menu.

This solution reflects over all the types, looking for classes which inherit from AreaRegistration. Once you have a list of all the area names, you can iterate over them and create a list item for each one, using an instance of UrlHelper to resolve the associated url. You do have to impose some routing convention to make the url lookup robust though; in this example, I assume the default route is sufficient. To mark the active area with a css class, you can get the active area name from the view context and compare it to the iterand value.
public static class TopMenuExtensions
{
    private static IEnumerable<string> _areaNames;

    public static MvcHtmlString RenderTopMenuItems(this HtmlHelper helper)
    {
        var areaNames = GetAreaNames();
        var currentArea = helper.ViewContext.RouteData.DataTokens["area"] as string;

        var html = new StringBuilder();
        foreach (var areaName in areaNames)
        {
            var urlHelper = new UrlHelper(helper.ViewContext.RequestContext);
            var url = urlHelper.Action(string.Empty, string.Empty, new { area = areaName });
            // or similar
            // var url = urlHelper.RouteUrl(areaName + "_default");

            html.AppendLine(areaName.Equals(
                currentArea, StringComparison.OrdinalIgnoreCase) ? 
                "<li class='active'>" : "<li>");
            html.AppendLine(string.Format("<a href='{0}'>{1}</a>", url, areaName));
            html.AppendLine("</li>");
        }

        return new MvcHtmlString(html.ToString());
    }

    private static IEnumerable<string> GetAreaNames()
    {
        if (_areaNames == null)
        {
            _areaNames = Assembly
                .GetExecutingAssembly()
                .GetTypes()
                .Where(t => t.IsClass && typeof(AreaRegistration).IsAssignableFrom(t))
                .Select(a => (AreaRegistration)Activator.CreateInstance(a))
                .Select(r => r.AreaName);
        }

        return _areaNames;
    }
}
Now we can add the following line to our _Layout file and be done with it.
@Html.RenderTopMenuItems()  
While this works, we stumbled upon an annoyance pretty quickly: we wanted to change the order of the menu items, but couldn't. We took a step back and momentarily considered decorating the area registrations with an ordering attribute, but since the added value was so small compared to the extra complexity introduced, we decided to throw the overengineering out and simply pass in the area names ourselves.
public static MvcHtmlString RenderTopMenuItems(
             this HtmlHelper helper, IEnumerable<string> areaNames)
{        
    var currentArea = helper.ViewContext.RouteData.DataTokens["area"] as string;

    var html = new StringBuilder();
    foreach (var areaName in areaNames)
    {
        var urlHelper = new UrlHelper(helper.ViewContext.RequestContext);
        var url = urlHelper.Action(string.Empty, string.Empty, new { area = areaName });
        
        if (url == null)
            throw new NullReferenceException(
                string.Format("Couldn't find an url for the area {0}.", areaName));                
        html.AppendLine(areaName.Equals(
                          currentArea, StringComparison.OrdinalIgnoreCase) ? 
                          "<li class='active'>" : "<li>");
        html.AppendLine(string.Format("<a href='{0}'>{1}</a>", url, areaName));
        html.AppendLine("</li>");
    }

    return new MvcHtmlString(html.ToString());
}       
The top-level menu items can now be rendered like this.
@Html.RenderTopMenuItems(new [] { "Search", "Images", "Blog", "Maps", "Play", "Video" } )
And the result looks like this.



While you can attack this problem in a lot of different ways, I think this is one of the most robust and compact solutions I have been able to write so far. How have you solved this in the past?

Sunday, July 1, 2012

On crime and document stores

Having worked with several storage paradigms over these last few months - from flat files, to NoSQL, to the big enterprisey relational databases - I have spent plenty of time trying to make sense of all the options out there. It wasn't until I watched one of the last episodes of The Wire season 3 that I had an epiphany regarding modeling data in document stores. Yes, I know, I tend to take those things home with me.

Somewhere halfway through that episode, you see a detective going through one of those old-school, gray and clumsy file cabinets, looking for a dossier on one of the recent murders. Once he finds the dossier, he takes it out of the drawer, scribbles down the contact information of an eyeball witness, puts it back in the drawer, and closes the drawer again with a loud stomp.

And that file cabinet actually isn't very different from a MongoDB collection; it stores and categorizes documents of the same type, and the trade-offs you have to consider when modeling dossiers, or documents, are basically the same.

Let me work the homicide department angle a little further.

A few months later, the murder is still not solved, and one of the detectives, temporarily out of work, starts working the case again. He walks over to that same file cabinet, pulls the file and goes through the data one more time. Short on leads, he decides to interrogate the witness again. So he takes the file, steps into his black Buick, and drives over to the address scribbled down next to the name of the witness. When he arrives, after a grueling ride through morning rush hour, he finds himself standing in front of a vacant house. Son of a... He drives over to city hall, waits in line for 24 minutes, gets the new address, and heads over there.

When he gets back to the office, empty-handed and several hours later, he is determined to prevent this from happening again. He suggests the other detectives either check and update their dossiers, or documents, every time somebody moves, or that they just write down a reference to the person in the dossier, and look up the data in the file cabinets at city hall. Since going through all the files every time somebody moves is not feasible, he convinces his chief to enforce the second option.

Over time, people get the hang of it, and are content to be relieved of stale data in the dossiers. However, the more they apply this system in other scenarios, the more they get frustrated doing the manual look-ups. They now first have to fetch the document in the file cabinet, and then go through five more cabinets just to collect all the bits of the file.
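The detectives' dilemma is exactly the embedding-versus-referencing trade-off in a document store. A minimal sketch, with hypothetical Dossier and Witness types invented for illustration:

```csharp
using System;

// Embedding: the witness data lives inside the dossier document.
// One read fetches everything, but the copy can go stale when the
// witness moves - the "vacant house" scenario.
public class Dossier
{
    public Guid Id { get; set; }
    public string CaseNumber { get; set; }
    public Witness Witness { get; set; }   // embedded document
}

public class Witness
{
    public string Name { get; set; }
    public string Address { get; set; }    // duplicated, can go stale
}

// Referencing: the dossier only stores the witness' id; the current
// address is looked up in a separate collection - always fresh, but
// at the cost of an extra query (the trip to city hall).
public class NormalizedDossier
{
    public Guid Id { get; set; }
    public string CaseNumber { get; set; }
    public Guid WitnessId { get; set; }    // reference to another collection
}
```

Neither shape is wrong in itself; the trouble starts, as the story shows, when referencing is applied everywhere by decree.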

To make matters worse, since it was often hard to find things in the dossiers, the chain of command has introduced templates for each of the documents. Now each document has a fixed schema: a list of fields, each in a fixed position on the document, some of them even required. When you want to add a new field, it has to be signed off by your superior first. Sigh.

Having a drink after work, the detectives discuss the new system. 'It's great that we now have a single source of truth, are rid of stale data, and don't have to manually update everything when something changes. But damn, all the extra paperwork, all the fuss over formats, and the going back and forth between file cabinets is getting old quickly.' 'Still, I'm happy we could at least keep our OT slips and expense notes simple: just one document, which we can fill in and update as we please.' They collegially gulp down their drinks and signal the bartender for another round.

In this short story, you witnessed the detectives totally ruining their document store. By normalizing their documents and putting constraints on the formats, they no longer reap the benefits of using a document store; they would now be far better off with a relational solution.

I hope these analogies made some sense, and maybe made you think about, or even challenge, the SQL dogma. What it all comes down to is having enough knowledge to pick the right tool for the job. Each paradigm has its merits, and as with any other decision in our field, trade-offs have to be considered. The way you can or want to model your data isn't the only consideration though. While for some it is scalability and performance that make NoSQL the obvious choice, for me it is the simplicity that does it. You don't have to be an IT pro to install a server instance locally, nor to migrate your application to the cloud (MongoDB, for example, creates its collections on the fly). Talking to the database also becomes easier: the mismatch between your code and your storage can become a lot smaller, while you also rid yourself of some of the SQL foo. This doesn't mean you shouldn't be considerate about how you query your data though; you still need common sense, but there is a lot less black magic to master.

NoSQL solutions seem to put the developer first. I see NoSQL, and particularly the document store flavor, not as a silver bullet, but as a great new asset in my toolbox.