CQRS in Microservices - Breaking the rules

Since learning about CQRS, it’s something I’ve taken into almost every new data-based microservice I build. Separating how data is created from how it’s retrieved gives you a lot of power.

Take a relational database, where a table has a related table, which in turn has two more relationships.

public class TopLevelObject
{
    public string Name { get; private set; }

    public MyChildObject ChildObject { get; private set; }
}

public class MyChildObject
{
    public int Id { get; set; }
    public string AccessCode { get; set; }
    public int TopLevelObjectId { get; set; }
    public virtual TopLevelObject TopLevelObject { get; set; }
    public RelatedObject RelatedObject { get; set; }
    public Category Category { get; set; }
}

public class RelatedObject
{
    public int Id { get; set; }
    public string ContentDescription { get; set; }
    public string ContentType { get; set; }
}

public class Category
{
    public int Id { get; set; }
    public string Description { get; set; }
}

Take the extremely trivial example above. If somebody wants to retrieve a specific top-level object along with its category description, a direct Entity Framework query would return some rather messy JSON.
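For illustration, serialising that nested result for a single top-level object might produce something like the following (the values here are made up):

```json
{
  "name": "Example",
  "childObject": {
    "id": 1,
    "accessCode": "ABC123",
    "topLevelObjectId": 7,
    "relatedObject": {
      "id": 2,
      "contentDescription": "Some content",
      "contentType": "text/plain"
    },
    "category": {
      "id": 3,
      "description": "Books"
    }
  }
}
```

The caller only wanted a name and a category description, but gets the whole object graph, including IDs and foreign keys that mean nothing outside the service.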

Having a completely different data access method for querying data avoids this.

public class TopLevelObjectDTO
{
    public string Name { get; private set; }
    public string AccessCode { get; private set; }
    public string ContentDescription { get; private set; }
    public string ContentType { get; private set; }
    public string CategoryDescription { get; private set; }
}

Using a direct SQL query with a micro-ORM such as Dapper allows the much simpler response model detailed above.

Of course, this would also be possible using EF. But once a complex object model is involved, LINQ queries can get messy and perform poorly.
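As a sketch of the query side, a flat SQL join mapped straight onto the DTO with Dapper might look like this. The table names, `Id` columns, and foreign keys (`RelatedObjectId`, `CategoryId`) are assumptions about the underlying schema, not something defined in the snippets above:

```csharp
using Microsoft.Data.SqlClient;
using Dapper;

public class TopLevelObjectQueries
{
    private readonly string _connectionString;

    public TopLevelObjectQueries(string connectionString) =>
        _connectionString = connectionString;

    public TopLevelObjectDTO GetByName(string name)
    {
        // One flat join instead of EF navigation properties,
        // mapped directly onto the read-side DTO by column name.
        const string sql = @"
            SELECT t.Name,
                   c.AccessCode,
                   r.ContentDescription,
                   r.ContentType,
                   cat.Description AS CategoryDescription
            FROM   TopLevelObjects t
            JOIN   MyChildObjects  c   ON c.TopLevelObjectId = t.Id
            JOIN   RelatedObjects  r   ON r.Id = c.RelatedObjectId
            JOIN   Categories      cat ON cat.Id = c.CategoryId
            WHERE  t.Name = @Name;";

        using var connection = new SqlConnection(_connectionString);
        return connection.QuerySingleOrDefault<TopLevelObjectDTO>(sql, new { Name = name });
    }
}
```

The response is exactly the five fields the caller asked for, with no nested graph to strip out afterwards.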

C Microservice & Q Microservice

There’s an idea I’ve been toying with recently around taking CQRS to an extreme with Microservices.

I often find myself writing an API that would see huge improvements from running multiple instances. While this is entirely possible with containers, things get messy when it comes to data access.

I’ve always been nervous about multiple services having the ability to manipulate my databases; I’ve been stung by race conditions in the past.

However, having 10 workers that simply let people read the data is another matter. Knock yourselves out: query all the data you want.

Breaking the rules

For as long as I have been working with microservices, the hard-and-fast rule has been one DB per service. Multiple services should NOT share the same database.

But what about having two services that share a database, one being a Command service and the other a Query service?

That way, 100,000 instances of the query service could run (imagining a world in which a DB could handle that many connections), with its own view model.

Alongside that, a more controlled data manipulation service could run at the same time.

In Practice

This is purely at a conceptual level at the moment, so I apologise right now if I’ve missed something or if I’m reinventing the wheel.

How I actually envisage this working, at a functional level, is that each of the 100,000 read services would hold its own data cache.

When a new request comes in, instance X first checks its own internal knowledge of the data and returns that if found.

If no query results are found, the database is then queried directly.

If a result is found in the DB, that is then stored in the local cache for next time and the response is returned.
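The three steps above amount to a read-through cache per query-service instance. A minimal sketch, using the standard .NET `IMemoryCache` and taking the database lookup as a delegate (the five-minute TTL is an assumption, chosen to bound how stale a read can be after the command service writes):

```csharp
using System;
using Microsoft.Extensions.Caching.Memory;

public class CachedReader
{
    private readonly IMemoryCache _cache = new MemoryCache(new MemoryCacheOptions());
    private readonly Func<string, TopLevelObjectDTO> _loadFromDb; // direct DB query

    public CachedReader(Func<string, TopLevelObjectDTO> loadFromDb) =>
        _loadFromDb = loadFromDb;

    public TopLevelObjectDTO Get(string name)
    {
        // 1. The instance first checks its own internal knowledge of the data.
        if (_cache.TryGetValue(name, out TopLevelObjectDTO cached))
            return cached;

        // 2. No cached result: the database is queried directly.
        var result = _loadFromDb(name);

        // 3. A DB hit is stored locally for next time, with a short TTL
        //    so stale reads are bounded after the command service writes.
        if (result is not null)
            _cache.Set(name, result, TimeSpan.FromMinutes(5));

        return result;
    }
}
```

Because each instance owns its cache, no coordination is needed between the read services; the trade-off is that each instance may briefly serve data the command service has since changed.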

At a command level, there is simply one service running that handles all data manipulation.

I’d love to hear your thoughts, including if that thought is ‘you’ve been building microservices wrong your whole life’.