Sunday, December 28, 2014

TDD as the crack cocaine of software

The psychologist Mihaly Csikszentmihalyi popularized the term "flow" to describe states of absorption in which attention is so narrowly focused on an activity that a sense of time fades, along with the troubles and concerns of day-to-day life. "Flow provides an escape from the chaos of the quotidian," he wrote. 
This is a snippet from the highly recommended book Addiction by Design, which not only gives you an incredibly complete overview of the gambling industry, but also insights into the human psyche which apply far outside the domain of gambling.

For me, this book was an eye-opener, with the biggest realization being that most gamblers don't play to win. They play to lose. To lose themselves. Slot machines and video poker are for many people the quickest and surest way to reach flow. It's this phenomenon that has earned machine gambling the title of "the crack cocaine of gambling."

It's not just gamblers who crave flow though; we all do.

Some of us get up early on the weekend to drive halfway across the country for a few hours of intensive mountain biking. Others come home after work, throw their laptop in the corner and lose themselves in an online shooter, zoning out for a good hour. Others accidentally waste their entire Sunday morning solving a crossword puzzle they bumped into while reading the newspaper.

All these activities meet a specific set of preconditions.
Csikszentmihalyi identified four preconditions of flow: first, each moment of the activity must have a little goal; second, the rules of attaining that goal must be clear; third, the activity must give immediate feedback so that one has certainty, from moment to moment, on where one stands; fourth, the tasks of the activity must be matched with operational skills, bestowing a sense of simultaneous control and challenge.
Machine play has all these properties. Let's look at video poker. The goal is to make a winning combination. The set of winning combinations should be easy enough to remember; they're similar to those in live poker. After pushing "deal" you get five cards. The player decides which cards to "hold". Pushing the "deal" button a second time draws new cards from the same virtual deck. After the draw, you immediately know whether you've won or lost. Feedback is instantaneous. A game is over in a few seconds. Although the outcome is determined by chance, there is some degree of skill involved; it's up to you to hold the right cards.

As programmers we're lucky enough to inadvertently end up in the zone frequently. Without a doubt, it's in this state most of us do our best work. In the zone, it's constant feedback and a sense of moving forward that keep me going. One could argue that the zone is inherent to the activity of programming. I'd say that the length of the feedback loop and the size of the goals are critical and hard to maintain without working at it.

In this regard, there are a few techniques that help me reach a state of flow. At first I could get by just trying to get the code to compile or to just launch whatever it was I was working on. But once you're comfortable with a code base, getting it to compile isn't much of a challenge, and having to start your application to get feedback gets old real quick. Most often it's TDD that helps me get there these days. You start off with a failing test, your mission: to make it pass. The rules are simple; when your test goes from red to green, you're allowed to move on. It's important that tests are fast to be able to give you that immediate feedback. How fast? Fast enough for you not to lose focus. It goes without saying that the fourth precondition is met too; you're writing the code, doing your best to bend it your way.

When TDD is sold as a productivity booster, it is often strengths such as automated continuous validation of correctness, partitioning of work into smaller units, and cleaner and better designed code that are used as arguments. While these are valid arguments, it's a shame that the power of TDD as a consistent gateway to flow gets neglected or undersold.

Getting in the zone by yourself is one thing, getting there surrounded by a group of people is often out of the question. Here Event Storming has helped me out. Small goals; what happens before this event? Rules; write the previous event down on a yellow post-it. Feedback; once the post-it is up, we see that we're reaching a better understanding of the big picture. Control and challenge; you're the one searching for deeper insight, writing and putting up the post-its.

The activities that get me in a state of flow are the ones that I enjoy the most and which enable me to output my best results. If you reread the four preconditions, and assess the things that get you going, you might learn that the same goes for you.

Sunday, December 14, 2014

Spinning the wheel: clustering and near misses

The previous post showed a simple model casinos use to manipulate the odds. Instead of relying on the physical wheel for randomness, they rely on a virtual list of indexes that maps to the physical wheel.

Using that same model, it's easy to fiddle with the virtual indexes so that they map to misses right next to the winning pocket, creating "near misses". "Near misses" make players feel less like they're losing, since they "almost won". Casinos use this technique to get the next spin out of you.

Let's create more specific labels - a label for each individual pocket.

The winning pocket sits at index two of the physical wheel. We need the virtual indexes to form clusters next to the winning label: four indexes map to Miss2, one maps to Win and three map to Miss3. We intentionally ignore Miss1.
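The original snippet isn't embedded in this text, so here's a minimal sketch of how it could look in F#, building on the model from the previous posts - the names and the exact list of virtual indexes are my own assumptions.
open System

// One label per pocket; the winning pocket sits at index two of the physical wheel.
type Label = Miss1 | Miss2 | Win | Miss3

let physicalWheel = [ Miss1; Miss2; Win; Miss3 ]

// Virtual indexes clustered around the winning pocket: four map to Miss2,
// one maps to Win, three map to Miss3 - and none map to Miss1.
let virtualIndexes = [ 1; 1; 1; 1; 2; 3; 3; 3 ]

let random = Random()

// Pick a random virtual index and map it back onto the physical wheel.
let spin () =
    physicalWheel.[virtualIndexes.[random.Next(virtualIndexes.Length)]]

// Spin one million times and count the outcomes per label.
Seq.init 1000000 (fun _ -> spin ())
|> Seq.countBy id
|> Seq.iter (fun (label, count) -> printfn "%A: %d" label count)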


Spinning the wheel one million times reveals the pattern; Miss1 gets ignored, while we hardly ever win but very often "just" miss.

Since the law treats randomness and visualization as two separate concepts, casinos are free to operate in this gray zone, as long as the randomness itself stays untouched.

Thursday, December 11, 2014

Spinning the wheel: manipulating the odds

The previous post defined a basic set of data structures and functions to spin a wheel of fortune in F#.

There was very little mystery to that implementation though. The physical wheel had four pockets and spinning the wheel would land you a win one out of four spins. As a casino, it's impossible to come up with an interesting payout using this model.

To juice up the pot, casinos started adding more pockets to the wheel of fortune. This meant that the odds were lower, but the possible gain was higher. More pockets also allowed casinos to play with alternative payouts, such as multiple smaller pots instead of one big one.

Adding pockets to the wheel didn't turn out the way casinos hoped for though. Although players were drawn to a bigger prize pot, they were also more intimidated by the size of the wheel - it was obvious that the chances of winning were now very slim.

Today, instead of having the physical wheel determine randomness, randomness is determined virtually.

Casinos now define a second set of virtual indexes that map to the indexes of the physical wheel.


There are seven virtual indexes; six map to a miss pocket and only one maps to a win pocket - one out of seven is a win.

Instead of picking a random index in the physical wheel, we now pick a random index in the virtual indexes and map that back to an index in the physical wheel.
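The snippet isn't included here; a minimal sketch of the idea in F# could look like this - the type, value and function names are mine, not necessarily the post's.
open System

type Label = Miss | Win

// The physical wheel still has four pockets; the winning pocket sits at index two.
let physicalWheel = [ Miss; Miss; Win; Miss ]

// Seven virtual indexes: six map to a miss pocket, only one maps to the win pocket.
let virtualIndexes = [ 0; 0; 1; 1; 2; 3; 3 ]

let random = Random()

// Pick a random virtual index and map it back to a pocket on the physical wheel.
let spin () =
    physicalWheel.[virtualIndexes.[random.Next(virtualIndexes.Length)]]

// Spinning one million times should now only win roughly one out of seven times.
Seq.init 1000000 (fun _ -> spin ())
|> Seq.countBy id
|> Seq.iter (fun (label, count) -> printfn "%A: %d" label count)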

When we now spin the wheel a million times, the outcome is different. Although the physical wheel has four pockets, we now only win one out of seven times or 14% of the time.

Using this technique, the physical wheel only serves for interaction and visualization. Randomness is determined virtually, not physically.

In my next post, I'll describe how casinos have tweaked this model to create "near misses", making players feel as if they just missed the big pot.

Tuesday, December 9, 2014

Spinning the wheel

In this post, I'll define a basic set of data structures and functions to spin a wheel of fortune. In the next post, I'll show you the simple model casinos use to build a bigger, more attractive pot, without touching the physical wheel and without losing money. Finally, I'll show you how casinos tweak that model to bend the odds and create near misses.


Let's say we have a physical wheel with four pockets, which are labeled either miss or win.

Three out of four pockets are labeled as a miss, one is labeled as a win. This makes the odds of winning one out of four, or 25%.
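The code itself isn't included in this text; as a minimal sketch (the names are mine), the wheel could simply be a list of labeled pockets.
type Label = Miss | Win

// A physical wheel with four pockets; only one of them is the winning pocket.
let physicalWheel = [ Miss; Miss; Win; Miss ]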

Spinning the wheel, we should end up in one of four pockets. We can do this by picking a random index of the physical wheel.

To avoid a shoulder injury spinning the wheel multiple times, we'll define a function that does this for us instead.
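Sketching this in F# again - assuming the wheel defined above - the function just lands on a random pocket of the wheel.
open System

let random = Random()

// Spin the wheel by picking a random index into the physical wheel.
let spin (wheel: Label list) =
    wheel.[random.Next(wheel.Length)]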

Now I can spin the wheel one million times.

If the math adds up, we should win 25% of the time. To verify this, we'll group the results by label and print them.
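Again as a sketch, reusing the wheel and spin function from above:
// Spin one million times, group the outcomes by label and print the counts.
Seq.init 1000000 (fun _ -> spin physicalWheel)
|> Seq.countBy id
|> Seq.iter (fun (label, count) -> printfn "%A: %d" label count)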

Give or take a few hundred spins, we're pretty close to winning 25% of the time.

When the odds are this fair, it's impossible to come up with an attractive enough payout without the casino going broke. What if we wanted to advertise a bigger pot, while keeping the same physical wheel, without losing money? Tomorrow, I'll write about the simple model casinos have been using to achieve this.

Sunday, November 16, 2014

Hot aggregate detection using F#

Last week, I wrote about splitting a hot aggregate. Discovering that specific hot aggregate was easy; it would cause transactional failures from time to time.

Long-lived hot aggregates are often an indication of a missing concept and an opportunity for teasing things apart. Last week, I took one long-lived hot aggregate and pulled smaller, short-lived hot aggregates out of it, identifying two missing concepts.

Hunting for more hot aggregates, I could visualize event streams and use my eyes to detect bursts of activity, or I could have a little function analyze the event streams for me.

Looking at an event stream, we can identify a hot aggregate by having a lot of events in a short window of time.


Let's say that when six events occur within five seconds from each other, we're dealing with a hot aggregate.

What I came up with is a function that folds over an event stream. It will walk over each event, maintaining the time window, allowing us to look back in time. When the window size exceeds the threshold, the event stream will be identified as hot. Once identified, the remaining events won't be analyzed.
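The original function isn't reproduced here, but a minimal sketch of the idea - working on a plain list of event timestamps, with names of my own - could look like this.
open System

// An event stream is considered hot when maxEvents events occur within the given window of each other.
let isHot (maxEvents: int) (window: TimeSpan) (timestamps: DateTime list) =
    let folder (recentEvents: DateTime list, hot: bool) (timestamp: DateTime) =
        if hot then
            // already identified as hot; skip analyzing the remaining events
            (recentEvents, true)
        else
            // maintain the time window: keep only the events close enough to the current one
            let windowed = timestamp :: (recentEvents |> List.filter (fun t -> timestamp - t <= window))
            (windowed, List.length windowed >= maxEvents)

    timestamps
    |> List.sortBy id
    |> List.fold folder ([], false)
    |> snd

With the numbers from above, calling isHot 6 (TimeSpan.FromSeconds 5.0) on a stream's timestamps flags it as hot as soon as six events fall within a five-second window.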

Sunday, November 9, 2014

Splitting hot aggregates

When you visit a real casino, the constant busy-ness is overwhelming; players spamming buttons, pulling levers, spinning the wheel, gambling on the outcome of sports games, playing cards, feeding the machine, cashing out, breaking bills, ordering drinks or even buying a souvenir. A single player will easily generate a thousand transactions in one sitting.

When you look at an online casino, this isn't very different. In the system we inherited, the biggest and busiest aggregate by far is a user's account. Basically every action that involves money leads to activity on this aggregate.
This makes sense. An account is an important consistency boundary, if not the most important one. Casinos can't afford to have people spend more than their account is worth.

Since we're applying optimistic concurrency, bursts of activity would occasionally lead to transactional failures. Looking at a real casino, it's easy to see why they aren't running into these types of issues.
In a physical casino, it's only the owner of the wallet that gets to access it. Casino employees are not allowed to take a player's wallet out of his pocket to make a transaction. There is no concurrent use of a player's wallet: single spender principle.
Online on the other hand, we aren't constrained by common courtesy and have no problem reaching into a user's pocket. It's common to have a user playing the slots, while we automatically try to pay out a sportsbetting win once the results of a game are in.

Mapping out an aggregate's eventstream on a timeline is a great way to visualize its lifecycle and usage patterns. When we did this for an account, we came up with something that looked like this.


Activity peaks when a user starts a game. Each bet and each win drags in the account aggregate. When you know that some players make thirty bets per minute, it should come as no surprise that other processes accessing the account in the background might introduce transactional failures.

Inspired by a real casino, I wonder if users online would appreciate it if we stayed out of their pockets and let them do it for us instead.
Instead of paying out sportsbetting winnings automatically, we could notify a user that his bet was settled and that he can head over to his bet and cash out the winnings to his account any time.
The same goes for games; instead of cashing out wins to a player's account after each bet, we could - like in a casino - accumulate all winnings in the slot machine itself, also known as a game session, for the player to cash out by pushing a button once he's done playing. To reduce the number of small bets taken from the account, we could also encourage users to feed the slot machine before they start playing.

In practice, we would extract behaviour out of the account aggregate and move it into the sportsbet and game session aggregates. It wouldn't be until the end of their lifecycles that we would involve the account aggregate to move money around.


By spreading activity over other, shorter-lived aggregates, and having the player do a bit of our work, we could reduce the amount of concurrency on the account aggregate and end up with fewer transactional failures.

But can we really expect users to cash out manually? Probably not, but we can still use most of the mechanics we just came up with, and cash out automatically. We can cash out winnings automatically when a user leaves a game session. We can queue up sportsbetting winnings and cash out when a user isn't playing a game.

By exploring alternatives, we discovered that we can work the model to reduce activity and concurrency on the account aggregate, lowering the chances for transactional failures. Now, it's only fair to say that there are other, more technical, options. The most obvious one would probably be making the existing transactions on the account aggregate shorter, also lowering the chance of concurrent use of the account.

I can't help but wonder whether the actor model might be a better fit for this type of problem.

Sunday, October 26, 2014

Programmatically force create a new LocalDB database

I have spent the last week working in an integration test suite that seemed to be taking ages to run its first test. I ran a profiler on the setup, and noticed a few things that were cheap to improve. The first one was how a new LocalDB database was being created.
An empty database file was included in the project. When running the setup, this file would replace the existing test database. However, when there were open connections to the test database - SQL Server Management Studio for example - replacing it would fail. To avoid this, the SQL Server process was killed before copying the file, and the setup then waited for it to come back up.

Another way to create a new database is by running a script on master. You can force close open connections to the existing database by putting it in single user mode and rolling back open transactions. You can also take advantage of creation by script to set a sane initial size, avoiding the database having to grow while running your tests. When you specify the database size, you also need to specify the filename; I'm using the default location.
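The script itself isn't included here; below is a rough F# sketch of the approach - the database name, file location, size and LocalDB instance name are placeholders of mine, not the post's values.
open System.Data.SqlClient

let master = @"Data Source=(LocalDB)\v11.0;Initial Catalog=master;Integrated Security=True"

// Force close open connections by switching to single user mode, then drop the old database.
let dropScript = """
    IF DB_ID('TestDatabase') IS NOT NULL
    BEGIN
        ALTER DATABASE [TestDatabase] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
        DROP DATABASE [TestDatabase];
    END"""

// Recreate it with a sane initial size so it doesn't have to grow while the tests run.
let createScript = """
    CREATE DATABASE [TestDatabase] ON PRIMARY (
        NAME = N'TestDatabase',
        FILENAME = N'C:\Temp\TestDatabase.mdf',
        SIZE = 100MB)"""

let execute script =
    use connection = new SqlConnection(master)
    connection.Open()
    use command = new SqlCommand(script, connection)
    command.ExecuteNonQuery() |> ignore

execute dropScript
execute createScript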

Switching to this approach shaved seven seconds off database creation.

Sunday, October 19, 2014

Print out the maximum depth of recursion allowed

Karl Seguin tweeted the following earlier this week: "An interview question I sometimes ask: Write code that prints out the maximum depth of recursion allowed."

This question is interesting for a couple of reasons. First, it's a shorter FizzBuzz; can the candidate open an IDE, write a few lines of code, compile and run them? And second, does he know what recursion is?

Now let's say the interviewee knows how to write code and is familiar with the concept of recursion. If he had to do this exercise in C#, he might come up with something along these lines.

Before you let him run his code, you ask him to guess the output of this little program. If he's smart, he won't give you much of an answer. Instead he will point out that the result depends on the runtime, compiler, compiler switches, machine architecture, the amount of available memory and what not.

If he's not familiar with the C# compiler and runtime, he might even say there's a chance the integer will overflow before the call stack does.
The recursive method call is the last call in this method, making it tail-recursive. A smart compiler might detect the tail-recursion and convert the recursive call into a plain loop, avoiding recursion.

Running this program shows that the C# compiler isn't that smart, and will yield the maximum depth of recursion just before crashing.

If we were to port this snippet to F#, a functional language in which recursion is a first class citizen, the results are a bit different.
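The port itself isn't shown here; a minimal sketch of what it might have looked like (my own naming):
let rec depth (n: int) =
    printfn "%d" n
    // the recursive call is the very last thing the function does, so it's tail-recursive
    depth (n + 1)

depth 1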

This just kept running until I killed it when the count was far over 171427. Looking at the generated IL, you can see that the compiler was smart enough to turn this recursive function into a loop.

If we want the F# implementation to behave more like the C# one, we need to make sure the compiler doesn't optimize for tail recursion.
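One way to do that - again a sketch, not necessarily the post's exact code - is to keep the recursive call out of tail position:
let rec depth (n: int) : int =
    printfn "%d" n
    // the addition happens after the recursive call returns,
    // so the call is no longer in tail position and can't be turned into a loop
    1 + depth (n + 1)

depth 1 |> ignore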

Running this also ends in a StackOverflowException pretty early on.

I love how this question seems shallow at the surface, but gives away more and more depth the harder you look.

Sunday, October 5, 2014

Read an Event Store stream as a sequence of slices using F#

I'm slowly working on some ideas I've been playing around with all summer. Since that's taking me into unknown territory, I guess I'll be putting out more technical bits here in the next few weeks.

Using the Event Store, I tried to read all events of a specific event type. This stream turned out to be a tad too large to be sent over the wire in one piece, leading to a PackageFramingException: Package size is out of bounds.

The Event Store already has a concept of reading slices, allowing you to read a range of events in the stream, avoiding sending too much over the wire at once. This means that if you want to read all slices, you have to read from the start, moving up one slice at a time, until you've reached the end of the stream.

Avoiding mutability, I ended up with a recursive function that returns a sequence of slices.
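The function itself isn't included here; below is a minimal sketch of its shape. To avoid committing to the exact client API, readSlice stands in for the Event Store call that reads one page of events starting at a given position - all names are mine.
// A slice is a page of events plus enough information to know where to continue reading.
type Slice<'event> =
    { Events: 'event list
      NextEventNumber: int
      IsEndOfStream: bool }

// Recursively read slices until the end of the stream is reached,
// lazily yielding each slice as part of a sequence.
let readAllSlices (readSlice: int -> int -> Slice<'event>) (sliceSize: int) =
    let rec read start =
        seq {
            let slice = readSlice start sliceSize
            yield slice
            if not slice.IsEndOfStream then
                yield! read slice.NextEventNumber
        }
    read 0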

Sunday, September 14, 2014

The road to Großglockner

Sixteen days after leaving home, we're now on our way back to Antwerp. After Croatia, we've driven through Slovenia, Italy, Austria and Switzerland, arriving in France to meet up with my parents for a few days.

France offered us the typical vineyards and chateaux. Next to the good company, what will stick with me the most is the High Alpine road in Austria. We paid 34 euros to be allowed on the road, but the surroundings of that piece of asphalt are extraordinary.
The drive is rough, maybe even more so coming down than going up. No wonder car manufacturers use it to test drive their close-to-production prototypes.

It has been quite the trip. I had never expected to give my eyes such a show so close to home.



Wednesday, September 10, 2014

Northbound again

Leaving Plitvice behind us, we crossed the country heading for the sea. We didn't need to cover a lot of distance to discover how diverse the Croatian landscape and climate are. In only two hours we went from cold, foggy lakes and waterfalls to green meadows to dusty mountains to sunbathed coasts.



We followed the coastline for a little while, strolling through old coast towns, eating seafood, drinking something cold. Driving up North and more inland, we spent two days in Motovun, a picturesque town high on a mountain.


We'll be continuing up North, slowly making our way home.

Sunday, September 7, 2014

Tolkien's inspiration

The next morning we woke up to another rainy day. We caught up with the rain in Austria and we have been following it ever since. Or the other way around. Even the locals can't seem to stop talking about how much rain is falling.

Between showers, we were able to hike the Vintgar trail. The one-hour hike takes you along dangerously rapid waters, bringing you to a big waterfall. If I ever wondered where J.R.R. Tolkien found inspiration for his famous novels, now I have a good idea.



On Tuesday, exploring the Postojna cave system by train and on foot offered similar enchanting images.



After exchanging the Slovenian scenery for that of Croatia - still raining - we visited the Plitvice Lakes on Thursday. The non-stop rain had caused a big part of the park to flood. If you didn't mind getting wet up to your knees, wading through the lakes that were now on top of the trails was definitely an option - one that we took.




We'll spend our last few days in Croatia along the coastline, hoping to at least warm up a little after our wet and cold experience in the national park. 

Tuesday, September 2, 2014

Porsches and a setting that sells them

With the Belgian 'Summer' ending, we are exploring South Eastern Europe. This trip will take us through Germany, Austria, Slovenia and Croatia.

On Saturday, we stopped in Stuttgart to complete our list of visits to the three most popular European sports car museums, after visiting Lamborghini and Ferrari two years ago: the Porsche museum. In my teens, my affinity for fast cars was ignited by spending rainy afternoons at one of my best friends' houses, attempting to crush each other's track records playing Need For Speed: Porsche Unleashed. The brand succeeded in making its way into my subconscious early on. It would still be a Porsche I'd buy today, if I had too much money to spend.

In the meantime, my interest in fast cars has tempered, making way for a more general interest in the combination of good design and raw engineering. There was certainly a place for this at the Porsche museum. The architecture sets high expectations. Every detail in the building met Porsche's standards, from the toilet pictograms to the elevator interior.

Unfortunately, like in most car museums, they fail to really capture the driving experience. Not once did I hear the trademark roar of a Porsche engine, which I learned to love playing video games.



Leaving Stuttgart behind us, we set out for Bled, Slovenia. This took us through Austria, where we diverged far from the optimal route to visit Hallstatt. This small village, famous around the world, even touted as one of the most beautiful places on earth, is hidden deep in the inhospitable Austrian landscape. Being so out of the way might just be what has made it possible to preserve the charm of this little place.




Getting back into the car, it started raining, and it didn't stop until we crossed the Slovenian border. The type of rain that gives you that claustrophobic car wash feeling. We arrived in Bled well after dark, and went to bed curious about what our surroundings would look like in the morning sun.

Tuesday, August 26, 2014

Solomon, the architect

Two junior developers who started working for the company at the same time, had been quite competitive with each other from the get-go. They had once been assigned to the same team, but because of the constant bickering, which had put a serious amount of stress on the team, one of them was pulled off the project and reassigned.

A good year later, just the two of them were assigned to a new smallish, but interesting in-house project. When management assigned them to the same project again, they had just been shuffling resources around, and had no idea of the history these two had. An architect was also assigned to the project, but this was not more than a formality. As soon as the enterprise architecture diagram was updated and the paper work was out of the way, he would do an official hand over, but he would only occasionally check in on the project from then on.

The Friday morning after the hand over, the architect was making small talk with one of the project managers in the coffee corner. His agenda for the day was almost empty - exactly how he liked it on Fridays. He planned on making the rounds this morning to check in on projects he was involved with, to attend a meeting at noon, and to spend the afternoon reading up on microservices before going home early. While he charged the coffee machine with another coffee pod, he heard an uproar that had to be coming from the other side of the floor. He couldn't quite make out what it was about, but he recognized the voices immediately: the two juniors.

He hurried over to the source of the noise. Prying eyes looked curiously over their monitors, ignoring the architect as he passed by. He found the two juniors standing next to the white board, yelling at each other, gesturing vigorously, crossing out parts of the drawings. The architect broke up the fight, and commanded them to get in the meeting room right away. Startled at first, but red-faced just a few seconds later, they shuffled towards the meeting room, not saying one more word, staring at their shoes.

The architect closed the door behind him, and started questioning them. What was this all about? He learned they had a big disagreement on how they should design a part of the new system. The architect, after hearing them out, thought he understood both their solutions. Given the information he had at that time, he thought both solutions were good enough for now - each made trade-offs, and only time would tell them more.
"There are still dozens of possible alternatives out there. I should try to show them how to come to a consensus together," he thought. "Before I do that, I'm curious to discover their incentives though, so let me try something here."

He told them that he liked both of their solutions, but that he couldn't decide which one was better. Instead, he would take random parts of each solution and throw them together to come up with a hybrid solution. This way, nobody loses.

Junior number one seemed relieved. He nodded and glanced at his partner. To his surprise, his partner didn't look very happy. Junior number two blurted out that he'd rather see his own solution in the bin than give up conceptual integrity just because two people can't agree.

With that, the architect learned that junior number one had fallen into the trap of making it personal, and was trying to save face. Number two however, was wise enough to favor conceptual integrity over getting his way. The architect complimented the kid, and acknowledged that sometimes giving in might be the wise thing to do - not always, you have to pick your battles. Then he rolled up his sleeves, picked up a bunch of post-its and a pen for both of them and said that it was time for them to "fight the problem together instead of fighting each other". But not before he made another coffee.

Tuesday, August 19, 2014

Thinking No Computers

The other day I happened to see a curious exchange in one of our businesses. The cashier swapped a torn euro bill - carefully restored by taping it back together - for a new one with the manager. Inquisitive, I asked the manager what he was planning to do with that euro bill. "Once a month, I take all those ripped up or badly worn bills to the National Bank and trade them for new ones. All you need is one piece of the bank note for them to give you a newly printed one."

While he started taking care of some other paper work, my mind started racing towards how the National Bank made this system work. I had noticed before that bank notes have an identifier on each side. I figured the National Bank probably stores the identifier of each note they trade, so people don't go ripping up bank notes with the intent of trading them twice. That seems easy enough, no?


Once the manager finished up his paperwork, I wanted to confirm my idea and asked if he knew how that system worked. How do they avoid people cheating? "It's really simple actually, you need to own more than 50% of a bill for it to be tradable."

Well... that's a much simpler solution. No need to store all the traded notes somewhere, you probably don't even need a computer at all.

I regularly catch myself defaulting to solving problems with computers. Taking a step back and thinking of how it would be done without one often exposes a simpler model. Sometimes you will realize that you don't need a computer at all. If not, you get to steal from models that have been molded and battle-tested for years.

Look at existing organizational structures and search for boundaries. You might find that aligning your software with existing boundaries makes the pieces of the puzzle fit. Learn how departments communicate; passing forms, by phone or by email. Maybe it shows that you need synchronous communication with strong consistency based on a formal protocol or that you might just get away with asynchronous communication with a less strong schema. Go through paper work, formulas, legislation, research papers, books and what not, and that hard to crack nut might become a bit softer. Look at where people are making decisions and how they are deciding on them, do you spot the patterns?

People have been solving problems for hundreds of years, tinkering with and perfecting models, way before computers were a commodity. Natural selection made sure only the strongest made it this far. Those solutions didn't stop working all of a sudden, nor should they be discarded as a whole. A great deal of them will likely survive another few hundred years, in one form or the other. 

Sunday, June 8, 2014

Paper notes: A Study and Toolkit for Asynchronous Programming in C#

The .NET framework mainly provides two models for asynchronous programming: (1) the Asynchronous Programming Model (APM), which uses callbacks, and (2) the Task-based Asynchronous Pattern (TAP), which uses Tasks, similar to the concept of futures.

The Task represents the operation in progress, and its future result. The Task can be (1) queried for the status of the operation, (2) synchronized upon to wait for the result of the operation, or (3) set up with a continuation that resumes in the background when the task completes.

When a method has the async keyword modifier in its signature, the await keyword can be used to define pausing points. The code following the await expression can be considered a continuation of the method, exactly like the callback that needs to be supplied explicitly when using APM or plain TAP.

Do Developers Misuse async/await?
  1. One in five async methods violate the principle that an async method should be awaitable unless it is the top level event handler.
  2. Adding the async modifier comes at a price: the compiler generates some code in every async method and generated code complicates the control flow which results in decreased performance. There is no need to use async/await in 14% of async methods.
  3. 1 out of 5 apps miss opportunities in at least one async method to increase asynchronicity.
  4. 99% of the time, developers did not use ConfigureAwait(false) where this was needed.
The async/await feature is a powerful abstraction. Asynchronous methods are more complicated than regular methods in three ways. (1) Control flow of asynchronous methods: control is returned to the caller when awaiting, and the continuation is resumed later on. (2) Exception handling: exceptions thrown in asynchronous methods are automatically captured and returned through the Task. The exception is then re-thrown when the Task is awaited. (3) Non-trivial concurrent behavior.

Each of these is a leak in the abstraction, which requires an understanding of the underlying technology - which developers do not yet seem to grasp.

Another problem might simply be the naming of the feature: asynchronous methods. However, the first part of the method executes synchronously, and possibly the continuations do as well. Therefore, the name asynchronous method might be misleading: the term pauseable could be more appropriate.

Source

Sunday, June 1, 2014

Not about the UI and the database

When you ask an outsider which components an average application consists of, he will most likely be able to identify the user interface and the database. He will also recognize that there is something in between that takes the input from the user interface, applies some logic and persists the result in the database.


In the past, trying to make sense of what goes on in the middle, we started - with the best intentions - layering things. Each layer had its own responsibility and would build upon previous layers. Although there was a layer for business logic, we never really succeeded in capturing the essence. In the end we would still be orchestrating database calls, but now we would be forced to go through a bunch of indirections in the form of anemic layers and objects.


Some people saw these designs for what they were, broke free and started optimizing for the shortest path - from user interface to database with the least amount of effort. By aiming to serve the common denominator and by putting their trust in dark magic, frameworks popped up that would allow you to slap together an application in a matter of hours.


The problem with these frameworks is that they leave very little room for your own design, and you often end up jumping through hoops when you need to deviate from the path carved out for you.


That's not the only problem though - applications are a lot more than a user interface and a database. What lives between those two is more than a technical necessity - it's a place where you get to build a model of the problem you are solving. The model gives you an opportunity to learn from and to communicate with domain experts, peers and users. And that's exactly where most businesses make the difference, not by having a fancy user interface or a carefully designed database schema, but by really understanding and by being absorbed by the problem they are solving. It's the user interface and the database that are the necessary evil we bring upon ourselves by solving problems using computers.

The state that lives in your database is a side effect - the result of the model's behavior. The user interface tries to make it as easy as possible for users to drive and use the model.


Although the user interface and the database are important, it's the model that is the heart and soul of your application.

(*) Disclaimer: all these drawings are simplistic by design.

Sunday, May 25, 2014

Eventual consistency in the Wild West

San Francisco, 1852. With the California Gold Rush at its peak, successful gold-seekers wanted to protect all their precious gold nuggets by storing them in a strong safe. At the time, it wasn't that easy to have access to a safe though. At the very beginning, it was just a few local merchants that owned one. Not much later, bankers swamped the area hoping to get their piece of the pie - bringing the strongest safes money could buy.

James King of William - who had made a fortune himself mining gold - was one of the first to found a trustworthy bank in San Francisco.
With the city growing from a mere 200 residents in 1846 to about 36,000 in 1852, it became harder and harder for the bank to accommodate all its customers in that one office. It needed to expand its operations.
Three months later a new branch opened up on the other side of town. James, determined to build a strong brand, wanted to allow customers to go to either of the two branches to deposit and withdraw their money. This meant that the books of the two branches had to be kept consistent. To maintain the books, James commanded all the clerks to duplicate all new records. Two hired horsemen would then come in every few hours and bring those records to the other branch. Since the books were now the same in both branches, customers could deposit and withdraw money on both sides of town. For James, life was good; he was now raking in twice as much.

Pancho and Lefty, two bandits who had fled Mexico to try their luck in California, were spending their Friday afternoon not in search of gold, but trying to forget their gold drought by playing cards and drinking cheap whiskey in the local saloon. While Lefty kept rambling on about how unlucky they had been these last few weeks, Pancho paid closer attention to the conversation going on between the saloon keeper and a well-dressed rider who had just entered. The leather-bound notebook the rider was carrying stood out immediately - it carried a familiar looking mark and looked expensive. Even though Lefty kept on rambling, Pancho could make out quite a bit eavesdropping on the saloon keeper and the rider's conversation. That suddenly came to an end though, when the rider knocked back his drink and got ready to leave - "Well, I better get going before the boss man notices that his books are no longer up to date."
Pancho watched the rider step outside and get back on his horse. Once the rider was out of sight, Pancho with an intense gleam in his eyes, cut off Lefty and ordered him to shut his mouth - "Shut it fool, I have a plan that's going to make us some easy money."

That same day, Pancho and Lefty gathered all their coins, opened an account for Pancho at James King's bank and made their first deposit. Afterwards, they stuck around the bank's side entrance just to make sure one of the horsemen came by to pick up the books. Twenty minutes later, the same rider they had seen at the saloon showed up - Pancho and Lefty spontaneously looked the other way, avoiding being seen. They hung around a little longer to verify that when the rider exited the bank, he was carrying a notebook - a notebook containing the records of the account they had just opened and their first deposit.

The next day, Pancho and Lefty got up early. They had only gotten a few hours of shut-eye; they had been up all night sneaking around, scouting the biggest ranch in town. To make their plan work, they needed to borrow a fast horse.
Impatiently waiting in front of the bank, Pancho breathed a sigh of relief when the clerk opened the bank at nine AM sharp. Before he entered, he looked one more time to the left, where Lefty was standing in the shadows holding the horse - ready to go. Being the first customer to enter the bank, he walked straight up to the counter, pulled out his token of authentication and told the clerk he wanted to withdraw his entire balance. The clerk looked at the token carefully, but didn't ask any further questions - he still remembered seeing Pancho yesterday, and it wasn't like he owned a fortune.

Money in hand, Pancho firmly walked out of the bank. To make the plan work, he now needed to jump on that horse as quickly as possible and ride off to the branch on the other side of town - if he made it there before the hired horsemen had the chance to transfer the latest records, he might be able to double his assets in a single morning.

Pancho didn't spare the horse at all, frantically digging his spurs into its sides. He made sure to take the dirt road through town, avoiding obstacles and people where he could.
Tying the horse down next to the bank, he watched one of the horsemen walk out of the bank. Damn. Had he been too slow, were the books already consistent again?
Pancho broke out in a cold sweat, but he had to try. He shuffled into the bank and got in line - there were three customers in front of him. Every time he looked up at the clock, he got more anxious. By the time his turn came, he was terrified. Without looking up, he pulled out his token of authentication once more, and mumbled to the clerk that he wanted to withdraw all of his money. The clerk noticed the drops of sweat on Pancho's forehead and how Pancho's hands were shaking. "Poor soul," the clerk thought, "he must have caught cholera like many others, they're falling like flies." The clerk went into the back, looked into the books, opened the safe and handed Pancho everything he was worth.

James King spent very little of his time at his banks - he was too busy looking for new ventures to fund. When he was in town and had the time, he did make a habit of dropping by to look at the books right before closing time. Today, one of the clerks walked up to James King as soon as he arrived; "I'm afraid a mistake has been made in the books, sir. The books show that one customer withdrew all of his money twice today. First here, and then in the other branch. Something must have gone wrong copying the records." Going through the records, it was obvious to James King what had happened; they had been robbed.

"Thank the lord that it's only for a small amount. Let's make sure this doesn't happen again." James started by hiring two extra horsemen, but to be even more sure he also had to introduce some new rules. A customer could now only withdraw $5 a day, unless he had proven to be a good and reliable customer. If a customer needed to withdraw extraordinary large amounts, he needed to inform a specific branch one day in advance. This allowed both branches to keep serving all customers smoothly while not giving up on operating in a semi-autonomous fashion. James was even considering opening a new branch in the famous city of angels, Los Angeles.

Next time; isolation levels in the Wild West.

Friday, May 16, 2014

NCrafts Eventstorming slides

Tom and I just finished our Event Storming workshop at NCrafts Paris. Although we made a few mistakes along the way, feedback on the workshop was great. I hope to put something out later this week about what we learned facilitating. People talked, discovered and eventually learned a new domain in under two hours. The domain? Two minutes before the workshop we found a domain expert prepared to talk about his coupon start-up.
Modeling a business process is often associated with people in suits having long and dull meetings. Way too much time gets wasted for an outcome that's far from reality and which will be obsolete in weeks. Event storming is a workshop format that brings modeling back to all stakeholders, and aims to create usable models.  
In this workshop, you won't be writing any code, but you will be using lots of paper, post-its and a marker. After going over a bit of theory and a small example, we will present you with a real business problem, and in a few short playful sessions you will experience how powerful event storming can be in helping a team gain insight into complex problems.  
The techniques you will learn in this workshop will pay off immediately. 
The slides I used are now up on Slideshare and embedded below.



To learn more, you can join the EventStormers community, but more importantly, you should start experimenting yourself!

Monday, May 12, 2014

What if we stored events instead of state? - slides

I just returned from Croatia, where I got to speak twice at the second edition of The Geek Gathering.

Being such a young conference, I had no idea what to expect really. Turns out they have a good thing going on; a small, local and very personal approach to conferences. Speakers both local and international, covering topics that serve the community, not their employer.

Together with Tom, I preached Alberto's Event Storming during a four-hour workshop. As always, people were impressed by how quickly you can gain an understanding of a new domain using this technique. Slides of this workshop will be online after I make some tweaks and try it in Paris on Friday.

In my second talk, I got to share what I've learned these last two years on event sourcing. You can find the slides of that talk embedded below or on Slideshare. Thanks Tom, Mathias, Stijn, Yves and Bart for reviewing them!


Sunday, May 4, 2014

Glueing the browser and POS devices together

I have been occupied building a modest Point of Sale system over these last few weeks. Looking at implementing the client, there were two constraints; it needed to run on Windows and it should be able to talk to devices such as a ticket printer and a card reader.

Although we could use any Windows client framework, we like building things in the browser better for a number of reasons; platform-independence, familiar user experience, JavaScript's asynchronous programming model and its incredibly rich ecosystem. Having to talk to devices ruled out leveraging the browser to deliver our application though - or did it?

Most Windows client frameworks give you a browser component which can be used to host web applications inside of your application. We used this component to host our web application, which turned the hosting application into not much more than a bridge between our web application and the devices.

This bridge processes commands sent by the browser (or the application itself), and produces events which are returned to the browser. I ended up not needing much code to implement this.

I defined two thread-safe queues - one to put commands on, and one to put events on. 
private readonly BlockingCollection<ICommand> _commandQueue = 
    new BlockingCollection<ICommand>(); 
private readonly BlockingCollection<IEvent> _eventQueue = 
    new BlockingCollection<IEvent>();
Then I start consuming the command queue in the background by turning it into an observable and subscribing to it. Processing commands in the background ensures that command processing never blocks the UI thread.
Task.Factory.StartNew(() =>
{
    var processor = new CommandProcessor(_eventQueue);

    _commandQueue
        .GetConsumingEnumerable()
        .ToObservable()
        .Subscribe(processor.Execute);
});
When a command is dequeued, the associated handler will be invoked. The handler then does its work while raising events when appropriate.
public class DoSomethingHandler : IHandle<DoSomething>
{
    private readonly BlockingCollection<IEvent> _eventQueue;

    public DoSomethingHandler(BlockingCollection<IEvent> eventQueue) 
    {
        _eventQueue = eventQueue;
    }

    public void Execute(DoSomething cmd)
    {
        _eventQueue.Add(new DoingSomething());

        // do work

        _eventQueue.Add(new FinishedDoingSomething());
    }
}
In the meantime, the event queue is being processed in the background as well - sending events to the browser as fast as they can be dequeued.
Task.Factory.StartNew(() =>
{
    _eventQueue
        .GetConsumingEnumerable()
        .ToObservable()
        .Subscribe(SendToBrowser);
});
Sending events to the browser is done by invoking a script through the browser control.
private void SendToBrowser(IEvent @event)
{
    object[] args = { string.Format("app.bus.send({0})", EventSerializer.Serialize(@event)) };

    if (WebBrowser.InvokeRequired)
    {
        WebBrowser.BeginInvoke((MethodInvoker)delegate
        {
            if (WebBrowser.Document != null)
                WebBrowser.Document.InvokeScript("eval", args);
        });
    }
    else
    {
        if (WebBrowser.Document != null)
            WebBrowser.Document.InvokeScript("eval", args);
    }
}
In the browser, we can now transparently subscribe to these events. As an implementation detail on that side, we're using Postman for pub-sub in the browser.

With this, we've come full circle; commands come in, they get processed, leading to events being produced, which eventually go out to the browser.

This way, we provide a consistent web experience for users and for developers, while not having to jump through too many hoops to make it work.


I also thought of hosting the communication with the devices in a Windows service and having that component expose its functionality over HTTP, so that the browser could talk to a local endpoint instead of being hosted in an application. While this is a valid alternative, it raised some deployment concerns in our scenario (we can't push changes to these clients; they need to come get them). With the existing set-up, I think that even if we wanted to change to such a model, it wouldn't be much trouble.

If you've pieced together a similar solution, feel free to let me know what I'm getting myself into.

Monday, April 21, 2014

Solving Mavericks with VMware Fusion 6 and Windows 8.1 hangs

Since I intended to avoid Windows at home, I got a MacBook Pro when starting at my new job. Overall it has been a great machine for development; it's fast, light enough to carry around, its battery life is outstanding, it has a screen that's gentle on the eyes, and full screen apps together with powerful mouse gestures allow me to quickly shuffle between things without missing touch or a second monitor.

Most of my professional work is still on the Microsoft stack though, so I'm running a Windows 8.1 VM on VMware Fusion 6. Much to my frustration, this setup would gradually slow down my system until it would completely grind to a halt every few hours. After complaining about it on Twitter, people said that having 8GB of RAM with half of that allocated to the VM might not be enough.

However after applying some tweaks, I got my system to chug away for a week without any hangs.

Here is what I changed:
  1. Turn off App Nap for VMware
  2. Install Memory Clean
  3. Disable Windows visual effects (Advanced System Settings - Visual Effects)
  4. Turn off Resharper Solution-Wide Analysis
  5. Turn off Visual Studio rich client visual experience 
Hope it helps.

Sunday, April 6, 2014

Rebinding a knockout view model

As you might have noticed reading my last two posts, I have been doing a bit of front-end work using knockout.js. Here is something that had me scratching my head for a little while.

In one of our pages we're subscribing to a specific event. As soon as that event arrives, we need to reinitialize the model that is bound to our container element. Going through snippets earlier, I remembered seeing the cleanNode function being used a few times - which I thought would remove all knockout data and event handlers from an element. I used this function to clean the element the view model was bound to, for then to reapply the bindings to that same element.

This seemed to work fine, until I used a foreach binding. If you look at the snippet below, what is the result you would expect?
<div id="books">
    <ul data-bind="foreach: booksImReading">
        <li data-bind="text: name"></li>
    </ul>
</div>

var books = document.getElementById('books');

var bookModel = {
    booksImReading: [
        { name: "Effective Akka" }, 
        { name: "Node.js the Right Way" }]
};

ko.applyBindings(bookModel, books);

var bookModel2 = {
    booksImReading: [
        { name: "SQL Performance Explained" },
        { name: "Code Connected" }]
};

ko.cleanNode(books);
ko.applyBindings(bookModel2, books);
Two list-items? One for "SQL Performance Explained" and one for "Code Connected"? That's what I would expect too. The actual result shows two list-items for "SQL Performance Explained" and two for "Code Connected" - four in total. The cleanNode function is apparently not cleaning the foreach binding completely.

Looking for documentation on the cleanNode function, I couldn't find any. What I did find was a year-old Stack Overflow answer advising against using this function - since it's intended for internal use only.

I ended up making the book model itself an observable. The element is now being bound to a parent model that contains my original book model as an observable. When the event arrives now, I create a new book model and set it to that observable property. This results in my list being rerendered with just two items - like expected.
<div id="books">
    <ul data-bind="foreach: bookModel().booksImReading">
        <li data-bind="text: name"></li>
    </ul>
</div>

var books = document.getElementById('books');

var page = {
    bookModel : ko.observable({
        booksImReading: [
            { name: "Effective Akka" }, 
            { name: "Node.js the Right Way" }]
    })
};

ko.applyBindings(page, books);

page.bookModel({
    booksImReading: [
        { name: "SQL Performance Explained" },
        { name: "Code Connected" }]
});
Don't use the cleanNode function to rebind a model - instead make the model an observable too.

Sunday, March 23, 2014

Sending commands from a knockout.js view model

While I got to use angular.js for a good while last year, I found myself returning to knockout.js for the current application I'm working on. Where angular.js is a heavy, intrusive, opinionated, but also very complete framework, knockout.js is a small and lightweight library giving you not much more than a dynamic model binder. So instead of blindly following the angular-way, I'll have to introduce my own set of abstractions and plumbing again; I assume that I'll end up with a lot less.

Let's say that I have a view model for making a deposit.
var DepositViewModel = function() {
    var self = this;

    self.account = ko.observable('');
    self.amount = ko.observable(0);

    self.depositEnabled = ko.computed(function() {
        return self.account() !== '' && self.amount() > 0;
    });
    
    self.deposit = function() {
        if (!self.depositEnabled()) {
            throw new Error('Deposit should be enabled.');
        }

        $.ajax({ 
            url: '/Commands/Deposit', 
            data: { account: self.account(), amount: self.amount() }, 
            success: function() { self.amount(0); },
            type: 'POST', 
            dataType: 'json' 
        });
    };
};

ko.applyBindings(new DepositViewModel());
Writing a test for this, it was obvious that I couldn't have my deposit function make requests directly. An abstraction that has served me well in the past, is a command executor. 
CommandExecutor = function() {
    this.execute = function(command, success) { };
};
We can have an implementation that handles each command individually, or we can have it send requests to our server by convention. The implementation below assumes that the name of our command has a corresponding endpoint on the server. 
CommandExecutor = function() {

    this.execute = function(command, success) {

        if (console) {
            console.log('Executing command..');
            console.log(command);
        }

        $.ajax({ 
            url: '/Commands/' + command.name, data: command.data, 
            success: success,
            type: 'POST', dataType: 'json' 
        });
    };
};
While angular.js has dependency management built in, we can get away with injecting dependencies manually and a bit of bootstrapping - it's not that I often have large dependency graphs in the browser, or that I care much about the life cycles of my components.
var DepositViewModel = function(dependencies) {
    var self = this;

    self.account = ko.observable('');
    self.amount = ko.observable(0);

    self.depositEnabled = ko.computed(function() {
        return self.account() !== '' && self.amount() > 0;
    });
    
    self.deposit = function() {
        if (!self.depositEnabled()) {
            throw new Error('Deposit should be enabled.');
        }

        var command = { 
            name: 'Deposit', 
            data: { account: self.account(), amount: self.amount() } };
        var callback = function() { self.amount(0); };
        dependencies.commandExecutor.execute(command, callback);
    };
};

ko.applyBindings(new DepositViewModel({ commandExecutor: new CommandExecutor() }));
See, very little magic required.

Writing a test, we now only need to replace the command executor with an implementation that will record commands instead of actually sending them to the server.
var CommandExecutorMock = function () {

    var commands = [];

    this.execute = function (command, success) {
        commands.push(command);
        success();
    };
    this.verifyCommandWasExecuted = function(command) {
        for (var i = 0; i < commands.length; i++) {
            if (JSON.stringify(commands[i]) === JSON.stringify(command)) {
                return true;                        
            }
        }
        return false;
    };

};

describe("When a deposit is invoked", function () {
    var commandExecutor = new CommandExecutorMock();
    
    var model = new DepositViewModel({ commandExecutor: commandExecutor });
    model.account('MyAccount');
    model.amount(100);
    model.deposit();

    it("a deposit command is sent.", function() {
        var command = {
            name: 'Deposit', 
            data: { account: 'MyAccount', amount: 100 }
        };

        expect(commandExecutor.verifyCommandWasExecuted(command)).toBe(true);
    });  
});
I did something similar for queries, and ended up with not that much code, which didn't even take that long to write. I'm curious to see how this application will evolve.

Sunday, March 16, 2014

Building a live dashboard with some knockout

Last week, we added a dashboard to our back office application that shows some actionable data about what's going on in our system. Although we have infrastructure in place to push changes to the browser, it seemed more reasonable to have the browser fetch fresh data every few minutes.

We split the dashboard up into a few functionally cohesive widgets. On the server, we built a view-optimized read model for each widget. On the client, we wrote a generic view model that would fetch the raw read models periodically.
var ajaxWidgetModel = function (options) {
    var self = this;

    self.data = ko.observable();
    self.tick = function () {
        $.get(options.url, function (data) {
            self.data(ko.mapping.fromJS(data));
        });
    };

    self.tick();
    setInterval(self.tick, options.interval);
};
We then used knockout.js to bind the view models to the widgets.
ko.applyBindings(
    new ajaxWidgetModel({ url: "/api/dashboard/tickets", interval: 30000 }), 
    document.getElementById('widget_tickets'));

<div class="widget-title">
    <h5>Tickets</h5>
</div>
<div class="widget-content" id="widget_tickets" data-bind="with: data">
    <table class="table">
        ...
    </table>
</div>
The with data-binding ensures that the content container only gets shown when the read model data has been fetched from the server.

Building dumb view-optimized read models on the server, binding them to a widget with one line of code, and some templating allowed us to quickly build a live dashboard in a straightforward fashion.

Sunday, March 9, 2014

Tests as part of your code

In the last project I worked on - processing financial batches - we put a lot of effort into avoiding being silently wrong. The practice that contributed most was being religious about never allowing structures to be in an invalid state. Preconditions, invariants, value objects and immutability were key.
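As an illustration of that style - a hypothetical sketch, not code from the actual project - a value object with a precondition makes it impossible to even construct an invalid amount, and immutability guarantees it stays valid.
public class Amount
{
    public decimal Value { get; private set; }

    public Amount(decimal value)
    {
        // Precondition: an amount can never be negative.
        if (value < 0)
            throw new ArgumentOutOfRangeException("value", "An amount can't be negative.");

        Value = value;
    }

    // Immutable: operations return a new value object instead of mutating this one.
    public Amount Add(Amount other)
    {
        return new Amount(Value + other.Value);
    }
}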

One of the things we had to do with these structures was writing them to disk in a specific banking format; all the accounts with their transactions for a specific day. To verify the outcome of these functions, we had a decent test suite in place. But still, we felt like we had to do more; the person on the team who had been working in this domain for thirty years had been relentlessly emphasizing - nagging - that bugs here would be disastrous, and would have us end up in the newspaper. That's when we decided to add postconditions, putting the tests closer to the production code. These would make sure we crashed hard, instead of silently producing something that was wrong.

To make sure we correctly wrote all transactions for one account to disk, we added a postcondition that looked something like this.
Ensure.That(txSumWrittenToDisk.Equals(account.Balance.Difference()));
A few weeks later, running very large batches in test, we had this assertion fail randomly. An account can have hundreds of thousands of transactions a day. This is why the account structure did not contain its transactions - there were too many to hold in memory. To make sure an account and its transactions added up, we did do set validations earlier on - no faulty state there. Since the assertion would only fail randomly, and the function had no dependencies on time or mutable state, the only culprit could be the data fed into the function. Since all transactions for one account wouldn't fit in memory, we were streaming them in pages from the database, and this is where we forgot to sort the whole result first, resulting in random pages - doh.
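The paging code itself isn't part of this post, but the essence of the fix is easy to sketch. A hypothetical example - the connection field, the Transaction type and the column names are made up, and it assumes a Dapper-style Query method with SQL Server-style paging - where the ORDER BY clause is what makes consecutive pages deterministic; without it, the database is free to return rows in a different order for every page.
public IEnumerable<Transaction> StreamTransactions(string accountId, int pageSize)
{
    var page = 0;
    while (true)
    {
        // Sorting the whole result first is what keeps each page stable.
        var transactions = _connection.Query<Transaction>(
            @"SELECT * FROM Transactions
              WHERE AccountId = @AccountId
              ORDER BY TransactionId
              OFFSET @Offset ROWS FETCH NEXT @PageSize ROWS ONLY",
            new { AccountId = accountId, Offset = page * pageSize, PageSize = pageSize })
            .ToList();

        if (!transactions.Any())
            yield break;

        foreach (var transaction in transactions)
            yield return transaction;

        page++;
    }
}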

Without this postcondition, we probably would have ended up in the newspaper. While putting your code under test is super valuable, having some crucial assertions as an integral part of your code might strengthen it even more(*).

* This concept is central to the Eiffel programming language.

Sunday, March 2, 2014

Alternatives to Udi's domain events

Almost four years ago Udi Dahan introduced an elegant technique that allows you to have your domain model dispatch events without injecting a dispatcher into the model - keeping your model focused on the business at hand.

This works by having a static DomainEvents class which dispatches raised events.
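The static class itself isn't shown in this post; a minimal sketch - loosely modelled after Udi Dahan's original, with a pluggable dispatcher so tests can swap it out - is enough for what follows.
public interface IDispatcher
{
    void Dispatch<T>(T domainEvent);
}

public static class DomainEvents
{
    // Tests replace this with a recording implementation.
    public static IDispatcher Dispatcher { get; set; }

    public static void Raise<T>(T domainEvent)
    {
        if (Dispatcher != null)
            Dispatcher.Dispatch(domainEvent);
    }
}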

This customer aggregate raises an event when a customer moves to a new address.
public class Customer
{
    private readonly string _id;
    private Address _address;
    private Name _name;

    public Customer(string id, Name name, Address address)
    {
        Guard.ForNullOrEmpty(id, "id");
        Guard.ForNull(name, "name");
        Guard.ForNull(address, "address");

        _id = id;
        _name = name;
        _address = address;
    }

    public void Move(Address newAddress)
    {
        Guard.ForNull(newAddress, "newAddress");

        _address = newAddress;

        DomainEvents.Raise(new CustomerMoved(_id));
    }
}
By having a dispatcher implementation that records the events instead of dispatching them, we can test whether the aggregate raised the correct domain event.
var recordingDispatcher = new RecordingDispatcher();
DomainEvents.Dispatcher = recordingDispatcher;

var customer = new Customer(
    "customer/1",
    new Name("Jef", "Claes"),
    new Address("Main Street", "114B", "Antwerp", "2018"));
customer.Move(new Address("Baker Street", "89", "Antwerp", "2018"));

recordingDispatcher.Raised(new CustomerMoved("customer/1")); // true
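The RecordingDispatcher isn't spelled out either; a possible implementation - assuming domain events such as CustomerMoved implement value equality - simply keeps the dispatched events in a list.
public class RecordingDispatcher : IDispatcher
{
    private readonly List<object> _raised = new List<object>();

    public void Dispatch<T>(T domainEvent)
    {
        _raised.Add(domainEvent);
    }

    public bool Raised<T>(T domainEvent)
    {
        // Relies on value equality of the domain events.
        return _raised.Contains(domainEvent);
    }
}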
This worked out great for a good while, but when I redid some of my infrastructure I bumped into difficulties scoping my unit of work and such. While there are ways to have your container address these issues, getting rid of the static components altogether is simpler.

A popular event sourcing pattern is to have your aggregate record events. There is no reason why we couldn't apply the same pattern here. Using this technique, we still avoid having to inject something into our models, plus we get rid of that static DomainEvents component. Responsibility for dispatching the events is now delegated to an upper layer.
public class Customer : IRecordEvents
{
    private readonly EventRecorder _recorder = new EventRecorder();

    private readonly string _id;
    private Address _address;
    private Name _name;

    public Customer(string id, Name name, Address address)
    {
        Guard.ForNullOrEmpty(id, "id");
        Guard.ForNull(name, "name");
        Guard.ForNull(address, "address");

        _id = id;
        _name = name;
        _address = address;
    }

    public EventStream RecordedEvents() 
    {
        return _recorder.RecordedEvents();
    }

    public void Move(Address newAddress)
    {
        Guard.ForNull(newAddress, "newAddress");

        _address = newAddress;

        _recorder.Record(new CustomerMoved(_id));
    }
}

var customer = new Customer(
    "customer/1",
    new Name("Jef", "Claes"),
    new Address("Main Street", "114B", "Antwerp", "2018"));
customer.Move(new Address("Baker Street", "89", "Antwerp", "2018"));

customer.RecordedEvents().Contains(new CustomerMoved("customer/1")); // true
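The EventRecorder and EventStream types aren't spelled out in the post either; a minimal sketch - again assuming value equality on the events, with an IRecordEvents interface that only exposes RecordedEvents() - could look like this.
public interface IRecordEvents
{
    EventStream RecordedEvents();
}

public class EventRecorder
{
    private readonly List<object> _recorded = new List<object>();

    public void Record(object domainEvent)
    {
        _recorded.Add(domainEvent);
    }

    public EventStream RecordedEvents()
    {
        return new EventStream(_recorded);
    }
}

public class EventStream
{
    private readonly IEnumerable<object> _events;

    public EventStream(IEnumerable<object> events)
    {
        _events = events;
    }

    public bool Contains(object domainEvent)
    {
        // Relies on value equality of the domain events.
        return _events.Contains(domainEvent);
    }
}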
Another alternative is to return events from your methods. This technique pushes the responsibility of aggregating all events onto a higher layer; I'd rather keep that closer to the aggregate.
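For completeness, a sketch of what that alternative could look like; the caller now has to collect and dispatch the returned event itself.
public CustomerMoved Move(Address newAddress)
{
    Guard.ForNull(newAddress, "newAddress");

    _address = newAddress;

    // Instead of recording or raising, the event is handed back to the caller.
    return new CustomerMoved(_id);
}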

What patterns are you using? 

Sunday, February 23, 2014

Strategic DDD in a nutshell

There are two big parts to Domain Driven Design: strategy and tactics. Strategy helps set out a high-level grand design, while tactics enable us to execute on that strategy.

Practicing strategic design, we generally first try to list all of the different parts that make our business a whole; these are sub-domains. When you look at a supermarket chain, you would find sub-domains like real estate management, advertising, suppliers, stock, sales, human resources, finance, security and so on. Sub-domains will often relate to existing structures like departments and functions.

Once you've defined your sub-domains, it's useful to determine how important a role they play. First of all, you should figure out which sub-domain is most important to your business; the core domain. This is the sub-domain that differentiates you from other businesses, or more bluntly put: this is where the money is at. For our supermarket chain, this might not be that obvious for an outsider. A first uneducated guess would be sales, but if you gave it some more thought, you would realize that sales are very similar for most supermarkets. Digging deeper, we would find that supermarkets really compete with each other by squeezing the last bit of value out of suppliers and by collecting data to use for targeted advertising. Supplier management and advertising can't stand on their own though; they need other sub-domains like stock and sales. These are supporting sub-domains; they are not core, but our business couldn't do without them either - they still add a bunch of value. Other sub-domains like property management, human resources or security are generic sub-domains; these problems have been widely addressed and solving them yourself won't make you any money.

Having a map of which areas are most important to your business makes it easy to distribute brain power accordingly. Make sure your core domain gets the most capable team assigned, before any other supporting sub-domain. Try to buy solutions off the shelf for generic sub-domains.

The concept of sub-domains lives in the problem space. The solution space on the other hand is where bounded contexts are at. Domain Driven Design tries to define natural boundaries between parts of your solution by putting the language first. These boundaries allow us to keep a language and model consistent inside of them, protecting conceptual integrity.
If you would ask a marketer what a product is, he would talk about images, campaigns, weekly promotions and so on. If you'd ask sales on the other hand, they would only mention price, quantity and loyalty points. The same concept can turn into something completely different depending on how you look at it. Bounded contexts enable us to build a ubiquitous understanding of concepts in a clearly defined context.
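A hypothetical sketch of what that could look like in code; the same business concept gets its own model in each bounded context, shaped by the language of that context.
// In the marketing context, a product is all about campaigns and imagery.
namespace Marketing
{
    public class Product
    {
        public string Id { get; set; }
        public string ImageUrl { get; set; }
        public string CurrentPromotion { get; set; }
    }
}

// In the sales context, the same concept is about price, quantity and loyalty points.
namespace Sales
{
    public class Product
    {
        public string Id { get; set; }
        public decimal Price { get; set; }
        public int Quantity { get; set; }
        public int LoyaltyPoints { get; set; }
    }
}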

Mapping a bounded context to exactly one sub-domain would be DDD nirvana; addressing one problem with one focused solution. In the real world, things are more messy though. There will always be systems out of our control; for example legacy and third party software. As our understanding of the business grows, keeping our software aligned can be hard too. If we would lay out a map of sub-domains and bounded contexts we would see lots of overlap.

Bounded contexts will often be worthless on their own though; most useful systems consist of interconnected parts. If you have worked in the enterprise, you know how complex communication between teams and departments can be. This isn't very different when integrating bounded contexts; you need to consider politics. This is where concepts like up-stream, down-stream, bandwidth, partnership, shared kernel, customer-supplier, conformist, anti-corruption layer etc. come into play. The activity of thinking about and capturing how all these systems play together is called context mapping.
In our example, we notice that supplier- and stock management would fail or succeed together; they have a partnership where the bandwidth is very high - the teams sit across the hall from each other. Human resources and security on the other hand have a very different relationship. A product was bought for human resources, while a solution for security was outsourced. Security relies quite heavily on what the human resources' open host service is exposing. If a product version bump changes those exposed contracts, security needs to comply as soon as possible; security is down-stream from human resources - shit floats down-stream.

For me, strategic DDD in one sentence is the constant exercise of trying to see and understand your business at large, and aligning your software with it as efficiently as possible.