Sunday, October 30, 2011

The gift of legacy


Just after graduating, I hated legacy with the heat of a thousand suns. I felt unfortunate, having to work on old code, built using outdated technologies, while software is all about making new and shiny things. Right, guys? Those naïve rookie expectations crumbled very soon. Legacy is a constant in our industry. You can try to ignore it as long as possible, but it's impossible to keep that up forever.
Over the years, I have come to accept that. Lately, I have even been fully embracing it.

Legacy is an opportunity to learn from the past: to learn from the costly mistakes our predecessors made, so we can avoid them in the present. Often legacy also helps me fully comprehend the motivation behind more modern architectures and software practices. Finding yourself all alone in thousands of lines of code, where the smallest change can impact an unknown number of long-forgotten scenarios, makes it perfectly clear why we test our code, and why it should be a crime not to.

It's a shame that there often isn't a whole lot of glory in maintaining old software. I do think some programmers should be awarded a medal of honor after serving multiple months in the trenches, though. There is, however, a lot of room for small refactorings, improving the code quality and in the meantime giving you the small wins you need to keep going.

Not all legacy is bad per se, though. Once in a while you discover a ruby in the dust: a creative solution or a small but interesting trick that you might be able to apply later on.

Legacy is a gift; learn to embrace it.

Wednesday, October 26, 2011

Oops, Pluralsight

Only a few weeks ago, we got an annual Pluralsight subscription at Euricom. Since then, I have downloaded a couple of videos, raw WMVs, to watch offline on my laptop while commuting.

Yesterday, being out of material, I headed over to the Pluralsight website to download some more content. I discovered that the site had been redesigned, removing the download links next to the videos.

Heading over to their blog, I found out more.

On October 24, they announced the site redesign, dropping the download links in favor of support for mobile devices.
However, this change does require you to have a supported mobile device to take advantage of offline viewing moving forward. We no longer support offline viewing on laptops/desktops, at least for now. 
Just one day later, after receiving - probably a shitload of - negative feedback, they promised to bring back offline viewing for laptops and desktops in the near future.
We’ve listened to your feedback and have decided to support offline viewing for laptop/desktop users. We’re going to implement a desktop app that will provide offline viewing as soon as possible. We’re going to work around the clock to get this desktop app ready for release as soon as possible and hope to have the initial beta out within a few weeks from now.
After reading the initial post, I was somewhat dissatisfied as a customer. I knew the decision had to make sense from a business perspective, though. Supporting fewer platforms means less code, which is good. And mobile is big; it feels big, anyway.

Based on the time it took to make the second announcement, I'm pretty sure mobile is nowhere near big enough to forget about laptops and desktops. Hey, being able to have a desktop on my lap still feels mobile enough to me.

I'm satisfied again, though. I'm content Pluralsight didn't pull a 'Steve Jobs' and simply push through their vision. Or maybe they just can't afford to make a small (?) percentage of customers unhappy.

Some other random thoughts: Pluralsight, is it hard to put the download links back on the website for now? Or is all the old infrastructure gone? What were the other (business?) motivations to completely get rid of the raw videos? Pirates, aay?

Monday, October 17, 2011

Viewmodel extractors in ASP.NET MVC


Last week, I wrote something on assembling viewmodels in ASP.NET MVC. In that post, I said it would be nice to have a layer between my controller and my domain services that would assemble viewmodels for me. This would work one-way. In the other direction - from controller to domain services - I would just take a piece of my composite viewmodel and pass that directly to my domain services.

Well, that last part didn't really work out. I found it hard to find real-world scenarios where I could just pick a piece of my viewmodel and pass it directly to the domain services.

I was in need of something that could extract my domain models from my dumb viewmodels. I really didn't want this logic to be a fixed part of my viewmodel, nor did I want to make helper classes for these utility methods. Looking for a place to put this, I settled on a set of extension methods that pull out every useful domain model per viewmodel.
public static class AddEntryViewModelExtractors
{
    public static Entry ExtractEntry(this AddEntryViewModel addEntryViewModel) 
    {
        var entry = new Entry();
 
        entry.Activity = new Activity();
        entry.Activity.Name = addEntryViewModel.ActivityName;            
        entry.Meta = addEntryViewModel.Meta;
 
        return entry;
    }
}

This makes it possible to do something like this in my controller.

public ActionResult Add(AddEntryViewModel addEntryViewModel)
{
    if (ModelState.IsValid)
    {
        _entryService.AddEntry(addEntryViewModel.ExtractEntry());
 
        return RedirectToAction("Index", "Home");
    }
    else
    {
        return View(addEntryViewModel);
    }
}

So far, I'm liking this approach; pushing code away from the controller helps me keep my controllers as lean as possible. I also enjoy that it's trivial to test these extractor methods.
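
To illustrate that last point, a test for such an extractor can stay very small. Here is a minimal sketch using NUnit; the test values are made up, and the properties are the ones used in the extractor above.

[TestFixture]
public class AddEntryViewModelExtractorsTests
{
    [Test]
    public void ExtractEntry_maps_the_activity_name()
    {
        // Arrange: a viewmodel as it would come in from the form.
        var addEntryViewModel = new AddEntryViewModel { ActivityName = "Running" };

        // Act: pull the domain model out of the viewmodel.
        var entry = addEntryViewModel.ExtractEntry();

        // Assert: the extractor copied the name onto the nested Activity.
        Assert.AreEqual("Running", entry.Activity.Name);
    }
}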

Oh btw, I had a chance to look at AutoMapper, but I haven't decided yet whether I find it helpful or not. I hardly came across scenarios where simple mappings were sufficient.

As always, I welcome your feedback and thoughts!

Wednesday, October 12, 2011

Viewmodel assemblers in ASP.NET MVC


Working on a new ASP.NET MVC side-project, I have the luxury to experiment with new technologies, but also with different patterns and naming conventions.

Something which bugged me in a previous project was that we made our service layer return viewmodels. That service layer was just another layer in front of the real domain services; most of its work was creating viewmodels from domain objects, or translating viewmodels into domain objects so they could be passed on to the domain services. Although it worked rather well, it felt dirty. Mostly because the name 'service' is so overloaded and overused that it's often not clear what its responsibility is.

Searching for a more meaningful name, I thought of an assembler: a simple object which fetches some domain objects and assembles them into a clean viewmodel. I'm also considering making the assemblers work one way only, from domain objects to viewmodels. Wrapping communication in the other direction feels like overhead, bringing no added value. I'm comfortable making the controller responsible for taking a piece of my composite viewmodel and passing it to the domain services, avoiding layers of unnecessary abstraction where possible.
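
To make that a bit more concrete, here is a rough sketch of what such a one-way assembler could look like. Every type and member in this snippet is made up for illustration; it isn't code from an actual project.

public class EntryOverviewAssembler
{
    private readonly IEntryService _entryService;

    public EntryOverviewAssembler(IEntryService entryService)
    {
        _entryService = entryService;
    }

    // One way only: fetch domain objects and flatten them into a viewmodel.
    public EntryOverviewViewModel AssembleEntryOverview()
    {
        var rows = new List<EntryRowViewModel>();

        foreach (var entry in _entryService.GetRecentEntries())
        {
            rows.Add(new EntryRowViewModel
            {
                ActivityName = entry.Activity.Name,
                Meta = entry.Meta
            });
        }

        return new EntryOverviewViewModel { Entries = rows };
    }
}

The controller then simply asks the assembler for a viewmodel and hands it to the view, keeping the assembly logic out of the controller.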

I tried to represent this in a nice PowerPoint drawing.


Something for me to find out in the coming days is how AutoMapper can help me assemble these viewmodels.

Anyway, as always, I appreciate your feedback. How do you handle these scenarios? I'm also interested in hearing what naming conventions work for you.

Saturday, October 8, 2011

Commute hacking: Chromapaper


Since I changed jobs, I do my daily commute by train. This adds two hours of leisure time to the day, hours which I used to lose in traffic.

So far, I have been spending these two reclaimed hours programming and/or reading. I have been looking for a way to read online content offline on my laptop. Searching for software that could make this happen, I mostly came across programs which would mirror a whole site locally. Eventually I asked Twitter, and Lee Dumond pointed me to Chromapaper.

Chromapaper is a free Chrome application which caches Instapaper entries locally. Throughout the day, I mark interesting but lengthy articles for reading later, so that I can simply synchronise them to my machine before leaving the connected cocoon.

You can find the Instapaper and the Chromapaper plugins in the Chrome Web Store.


Add articles to Instapaper, and start an offline sync.


Open Chromapaper offline.

Thursday, October 6, 2011

Book review: The Art of Unit Testing

I think The Art of Unit Testing targets a broad audience. Beginners will find every part of the book useful, while intermediate readers might be more interested in the final two parts.

Roy Osherove starts this book by laying a solid foundation for the unit testing concept. Why is testing important? What defines a good unit test, and how does a unit test differ from an integration test? In the second part of the book, he demonstrates the use of two core unit testing techniques: stubs and mocks. After showing you how these techniques work, he shows off various isolation frameworks which can help you create stubs and mocks at runtime (fakes), greatly reducing the effort of writing these objects by hand.

If you have some experience with unit testing, you might not be impressed with the content of the book so far. If you're somewhat like me, and have been writing tests for a while but often find yourself wondering whether your tests will still be considered solid six months from now, you will find the content of part three very useful. In this part, Roy talks about organizing your tests and about striving for the three pillars of good tests: trustworthiness, maintainability and readability.

The final part wasn't something I expected to find in this book. In this part it's very clear that the author has been an agent of change himself for a long time. Besides sharing successful strategies for introducing testing into an organization, he answers a bunch of hard questions you will be asked when you're on your own quest to bring change.

As I said before, there's something in the book for everyone. While a lot of topics are discussed, the book comes in at under 300 pages. It's well written for the most part, sometimes a bit sloppy, but in general a smooth read. All of the concepts are explained using examples written in C#, where the problems are small enough to keep things simple, yet comprehensive enough to remain plausible scenarios.

If you want to start out with unit testing, are looking for confirmation of how you're doing, or want to refine some techniques, this book might be the one to get.

Sunday, October 2, 2011

Ninjecting MVC3


Dabbling with a new MVC3 side-project, I chose Ninject to handle dependency injection. The creative project name might have something to do with that decision. So far, I am really impressed. Setting up Ninject is a breeze, and the framework does a really good job of not getting in your way.

In this post, I will walk you through a simple MVC3 project using Ninject.

Adding Ninject

Adding Ninject to your MVC3 project is simple enough.

Open the NuGet console and type 'Install-Package Ninject.MVC3'. This will make NuGet go to work, fetching the Ninject.MVC3 package and its dependencies.
Attempting to resolve dependency 'Ninject (= 2.2.0.0 && < 2.3.0.0)'.
Attempting to resolve dependency 'WebActivator (= 1.4)'.
Attempting to resolve dependency 'Microsoft.Web.Infrastructure (= 1.0.0.0)'.
...
Successfully installed 'Ninject 2.2.1.4'.
Successfully installed 'Microsoft.Web.Infrastructure 1.0.0.0'.
Successfully installed 'WebActivator 1.4.4'.
Successfully installed 'Ninject.MVC3 2.2.2.0'.
...
Successfully added 'Ninject 2.2.1.4' to PrettyDIBaby.
Successfully added 'Microsoft.Web.Infrastructure 1.0.0.0' to PrettyDIBaby.
Successfully added 'WebActivator 1.4.4' to PrettyDIBaby.
Successfully added 'Ninject.MVC3 2.2.2.0' to PrettyDIBaby.
Making up some dependencies

Let's add an ITimeService to our models.
public interface ITimeService
{
    DateTime GetCurrentDateTime();
}
The implementation could look like this.
public class TimeService : ITimeService
{
    public DateTime GetCurrentDateTime()
    {
        return DateTime.Now;
    }
}
Now, we can make our HomeController depend on the ITimeService by adding it as a constructor argument.
public class HomeController : Controller
{
    private readonly ITimeService _timeService;

    public HomeController(ITimeService timeService)
    {
        _timeService = timeService;
    }
}
The Index action on our controller will make use of this service to display the date on the Index view.
public ActionResult Index()
{
    ViewData["Time"] = _timeService.GetCurrentDateTime();

    return View();
}
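
For completeness, the Index view just renders that value. Here is a minimal sketch of what Views/Home/Index.cshtml could contain; the markup below is my own, not the generated template.
@* Only the relevant line is shown; the rest of the view is omitted. *@
<p>The time service says it is @ViewData["Time"].</p>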
Binding them dependencies

So far we have added Ninject to our project, added a silly ITimeService, and made our HomeController dependent on that service.

The only thing that's left for us to do is bind the ITimeService interface to the TimeService implementation.

If you look in your project root, you will find an App_Start folder containing a NinjectMVC3 class. All of this was added for us by NuGet. The NinjectMVC3 class is used to initialize and shut down Ninject using the bootstrapper; WebActivator makes sure this happens when the application starts and stops. Once the Ninject bootstrapper is initialized, we need to bind our dependencies.

In our example, I modified the RegisterServices method to look like this.
private static void RegisterServices(IKernel kernel)
{
    kernel.Bind<ITimeService>().To<TimeService>();
}       
Finishing strong

Et voilà.


There is no need to mark your dependencies in the controller, nor is there any need to resolve them through the dependency container yourself. Ninject will automagically resolve your dependencies.

You can download the source here.