
On Patterns of Modern App Development

We build modern applications on common building blocks. An application may use one of them or all of them. They are:

  - User-facing applications
  - Background applications
  - Storage
  - Integration
  - Caching
  - Observability
  - Orchestration

This isn’t an exhaustive list. But it gives us a set of building blocks, first principles, that form the basis for thinking about modern applications.

User-facing applications

The first, and probably most obvious, is the web application. I use the term application loosely here. It could be an API, a server-side rendered web app, or a frontend. The critical thing to remember is that someone external consumes these applications. That external person could be another team inside your organisation or someone browsing your site on their mobile.

User-facing applications, regardless of who the user is, share a set of common characteristics. They need to be:

  - Performant
  - Available
  - Reliable

First, user-facing applications need to be performant. In my experience, users are always unhappy; things could always be more performant. An application’s performance can directly impact a user’s job performance and satisfaction. If you’ve used Salesforce, you’ll know exactly what I mean.

The second and third characteristics are related. In addition to being fast, if I’m a user calling a web application, I want it to be there, and I want it to work. ‘Working’ is a broad term, and what it means will depend on the application, the context, and the use case (a rarely called, non-critical service can probably accept some downtime).

But within whatever SLA you’ve set, you want the web app you’re calling to work and to exist behind the DNS name you use to access it.

“I’m happy with this slow service that only works some of the time” - No user ever

User-facing applications use synchronous communication. This article would grow exponentially if I discussed the ideas of coupling, so let’s leave it at that.

Background applications

I differentiate a background application as one that doesn’t have a user (remember, the user could be a human or another service) accessing it synchronously. A background application reacts to messages arriving from an external location.

Background applications run, wait for a message to arrive, and then pick it up and process it. You don’t want your application sitting idle, consuming resources and costing money when there is no work to do. I’d argue the most important characteristic of a background service is its ability to adjust itself according to throughput, in both directions.

You don’t want to worry about your application if your message volumes increase, and if they decrease, you want your applications to scale back automatically.
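That scale-with-throughput behaviour is usually driven by queue depth. As a minimal sketch (the function name and thresholds here are my own, not from any particular platform), a queue-based autoscaler such as KEDA effectively computes a desired worker count like this:

```python
import math

def desired_workers(queue_depth: int, msgs_per_worker: int = 100,
                    min_workers: int = 0, max_workers: int = 20) -> int:
    """Queue-depth-based scaling: aim for enough workers that each one
    handles roughly msgs_per_worker messages, clamped to a min and max."""
    wanted = math.ceil(queue_depth / msgs_per_worker)
    return max(min_workers, min(max_workers, wanted))

desired_workers(0)       # no backlog: scale to zero
desired_workers(250)     # moderate backlog: a few workers
desired_workers(10_000)  # heavy backlog: capped at max_workers
```

Real autoscalers add cooldowns and smoothing so the worker count doesn’t flap, but the core rule is this simple.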

Idempotency is also incredibly important. Most messaging services guarantee ‘at least once’ delivery. Some give you the option to switch to exactly-once delivery, but you lose throughput with it. Honestly, even if you use a messaging system that supports exactly-once delivery, YOU STILL NEED TO THINK ABOUT IDEMPOTENCY.

If you aren’t familiar, idempotency is the idea that the outcome should be the same if the same message/request hits the same service multiple times. If I receive a TakePayment request for order 12345 and then receive the same request a minute later, I don’t want to take the payment again. Of course, this also applies to user-facing applications.
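As a minimal sketch of that idea (all names here are hypothetical, and a real system would record seen IDs in a durable store like a database or Redis rather than an in-memory set), deduplicating on a message ID looks like this:

```python
# Set of message IDs we've already processed. In production this must be a
# durable, shared store so a redelivered message is caught across restarts.
processed: set[str] = set()

def take_payment(order_id: str) -> str:
    """Hypothetical side effect we must not repeat."""
    return f"charged {order_id}"

def handle(message_id: str, order_id: str) -> str:
    if message_id in processed:
        return "duplicate - skipped"   # same outcome, no second charge
    result = take_payment(order_id)
    processed.add(message_id)          # only mark done after success
    return result

first = handle("msg-1", "12345")   # charges the order
second = handle("msg-1", "12345")  # redelivered a minute later: no-op
```

Marking the message as processed only after the side effect succeeds means a crash mid-processing results in a retry rather than a lost payment.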

As a developer, these are the two components you have direct control over: user-facing and background applications. You can write code that addresses some of these challenges. But the code you write isn’t the only thing that makes up your application.

Storage

When you store data, you want it to be there again later. Whether that’s file storage, blob storage or a more typical relational database, the ability to persist state is an important part of almost every application, particularly if you are building your application layer to be completely stateless (hint: you probably should be).

What that storage looks like would change from system to system, but you’ll likely need to persist things.

Integration

Unless you’re building a monolith, things need to communicate. Even if you are building a monolith, you may still use queues to offload some processing from your main user flow. You can still use asynchronous processing inside a monolithic application.

That said, messaging and integration are more common in a microservices architecture. If you’re a developer working on a single microservice, it’s not your single microservice that makes up ‘the system’. All the individual services work together and form something greater than the sum of their parts: the system.

For that to work, your components need to integrate, whether synchronous or asynchronous. Systems need to talk!

Caching

As systems grow bigger and more distributed, data spreads across multiple backend services. Or your users may complain that your app needs to be faster. Caching is one solution to that problem.

Caches aren’t a long-term storage mechanism, hence they’re grouped separately here, but they are an excellent lever to pull to increase your app’s performance. A cache could be something traditional like Redis, or in-memory state you hold inside your application. Think of a cache as an extremely fast temporary store.

Caches bring their own set of challenges.

There are a bunch of questions you need to answer with caching. And if your cache isn’t set up correctly, it can add latency to your system. It’s not a silver bullet but a core building block of modern distributed apps. Hint: companies like Momento give you fully serverless caching.
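To make the ‘extremely fast temporary store’ idea concrete, here is a minimal cache-aside sketch with a TTL. The class and function names are my own; a production system would typically put Redis or a serverless cache behind the same pattern:

```python
import time

class TTLCache:
    """Tiny in-memory cache with per-entry expiry (illustrative only)."""
    def __init__(self, ttl_seconds: float) -> None:
        self.ttl = ttl_seconds
        self._store: dict[str, tuple[float, object]] = {}

    def get(self, key: str):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() > expires_at:
            del self._store[key]     # stale entries are evicted lazily
            return None
        return value

    def set(self, key: str, value: object) -> None:
        self._store[key] = (time.monotonic() + self.ttl, value)

calls = 0
def load_user(user_id: str) -> dict:
    """Stand-in for a slow backend call."""
    global calls
    calls += 1
    return {"id": user_id, "name": "Ada"}

cache = TTLCache(ttl_seconds=60)

def get_user(user_id: str) -> dict:
    # Cache-aside: check the cache first, hit the backend only on a miss.
    cached = cache.get(user_id)
    if cached is not None:
        return cached
    value = load_user(user_id)
    cache.set(user_id, value)
    return value

get_user("u1")  # miss: calls the backend
get_user("u1")  # hit: served from the cache
```

Even this toy version surfaces the real questions: how long should the TTL be, what happens when an entry goes stale, and what do concurrent misses do to your backend.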

Observability

Observability should be number one. If you can’t understand what is happening inside your system, how can you possibly expect to fix a bug or make it better? Observability is the most important part of building modern applications. You need the ability to ask questions about your system to understand individual requests and the context around them. That should be where you start.

Think about it. A product manager comes along and says the system needs to be faster. How do you find the bottleneck? The first-line support team calls you with an odd bug they can’t diagnose. How do you know where to start? You’ve added a cache to your system, and performance has worsened. Why?

Observability helps you answer these questions. A good observability strategy lets you play detective and ask questions of your system that you didn’t know you needed to ask.
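One concrete, low-tech starting point is structured logging with a correlation ID, so every line a single request emits can be tied back together when you’re playing detective. This sketch is illustrative, not any specific tool’s API:

```python
import json
import time
import uuid

def log(event: str, **fields) -> str:
    """Emit one structured log line; JSON makes it queryable later."""
    record = {"ts": time.time(), "event": event, **fields}
    line = json.dumps(record)
    print(line)
    return line

def handle_request(order_id: str) -> None:
    request_id = str(uuid.uuid4())  # correlates every line for this request
    start = time.monotonic()
    log("request.start", request_id=request_id, order_id=order_id)
    # ... do the work ...
    log("request.end", request_id=request_id,
        duration_ms=round((time.monotonic() - start) * 1000, 2))

handle_request("12345")
```

From here, a tracing system (OpenTelemetry, for example) does the same thing with more structure: spans instead of log lines, propagated across service boundaries.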

There’s probably a whole other blog post on this… Stay tuned, or y’know, you could subscribe…? :)

Orchestration

The final one, which I wasn’t sure about including, is orchestration. You could replace the word orchestration with workflow. Software systems exist to solve problems, most of which are multi-step. If I order a pizza, software in the background is going to:

  1. Check the stock
  2. Take the payment
  3. Send to the kitchen
  4. Wait for the kitchen to prepare the pizza
  5. Find a delivery driver
  6. Deliver the order

Whether you’re building a monolith or distributed event-driven microservices, you’ll need to write some code somewhere in your code base to determine what happens next. After payment is taken, what do I do? If the payment fails, what do I do?
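The pizza steps above, and those ‘what happens next’ decisions, can be sketched as plain code. Every function here is a hypothetical stub; the point is that the ordering and failure handling live somewhere, whether in an orchestrator or in your own code:

```python
# Hypothetical stubs for each step of the pizza workflow.
def check_stock(order: dict) -> bool:
    return True                       # assume the ingredients are in stock

def take_payment(order: dict) -> bool:
    return order.get("card_ok", True) # payment succeeds unless flagged

def send_to_kitchen(order: dict) -> str:
    return "preparing"

def find_driver(order: dict) -> str:
    return "driver-7"

def place_order(order: dict) -> str:
    """The orchestration: step ordering and failure handling in one place."""
    if not check_stock(order):
        return "rejected: out of stock"
    if not take_payment(order):
        # "If the payment fails, what do I do?" - answered explicitly here;
        # nothing downstream runs.
        return "rejected: payment failed"
    send_to_kitchen(order)
    find_driver(order)
    return "confirmed"

place_order({"id": "12345"})                    # → "confirmed"
place_order({"id": "12346", "card_ok": False})  # → "rejected: payment failed"
```

A dedicated orchestrator (a workflow engine or state machine service) moves this logic out of your code and adds retries, timeouts and durable state, but the shape of the problem is the same.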

That’s why I’ve included orchestration in this list. Your system is probably orchestrating something, even if you are not explicitly using an orchestrator.

This gives us the building blocks—the overarching ‘patterns’ of modern apps, if you will. Inside each of these components, there is a range of options, services, and decisions. For the vast majority of use cases I’ve encountered in my career, you would benefit from trying to make each component as serverless as possible.

But what does serverless even mean? (If you want to dive deeper into modern compute you might like this article).

There are almost definitely some things I have missed here, and I’d love to hear from you!

Serverless? What does it even mean?

Historically, serverless has been defined by a set of core principles. Momento wrote a great article introducing the Litmus Test for Serverless. For a service to be considered serverless it must:

  1. Have nothing to provision or manage
  2. Usage-based pricing with no minimums
  3. Ready with a single API call
  4. No planned downtime
  5. No instances

I like this definition, but I would add one caveat. Viewing serverless as ‘all or nothing’, either serverless or not, can rule out some valuable services.

Take a service like Azure Container Apps (ACA). ACA is a container orchestrator that provides an abstraction on top of Kubernetes. You can deploy an application by providing a container image, CPU/memory requirements and, if required, scaling behaviour. There is almost zero operational overhead in running an application this way.

Looking at the Litmus Test, ACA meets criteria 1, 3, 4 and 5. Criterion 2 is more nuanced. An application running on ACA won’t automatically scale to zero; you can configure scaling rules, but it doesn’t ‘just happen’. Stopped replicas don’t cost you anything, and you pay only while your app is running. But if your app runs all the time, you pay even when no requests are coming in.

This application is still serverless. No, it doesn’t automatically scale to zero. Yes, you would pay for the application running when no requests are coming in. But you can deploy an application with next to zero operational overhead.

Serverless is a spectrum, not a binary decision. You can be more or less serverless based on the requirements of your application.

In this repository, I demonstrate how to run modern web applications on various cloud providers with little to no operational overhead.

What does it mean for you?

If you’re a developer, or at least a developer anything like me, you want to run your application with as few infrastructure worries as possible. “Here’s my application. It needs this CPU and memory, scale it like this, and frankly, I don’t care about anything else.”

If your company has invested time, energy, and money into building a Kubernetes platform, then great. That can feel serverless to you as a developer; leverage it. This isn’t to say Kubernetes isn’t valuable—it very much is. But it’s valuable when your application needs it.

If your ability to dynamically scale and manage infrastructure is a core differentiator for your business and your customers, great. Go for it. Otherwise, you don’t need Kubernetes.

Are you training a machine learning model, doing heavy GPU computation, or ingesting dynamic/large amounts of data? Virtual machines and Kubernetes are probably useful.

The conversation around managed services vs. Kubernetes becomes more interesting when you zoom out and look at the bigger picture. Does your organisation have an excellent reason to invest in building a Kubernetes platform (which is just rebuilding Cloud Run/Fargate/Container Apps)? If you have a good reason to do it (that isn’t CV-driven development), then great.

Otherwise, use managed serverless services, be as serverless as possible, and build your application in a way that keeps you portable.