If you’ve been following our blog for the last few months, you might remember a post back in October that described why we decided to take an event-driven microservice approach to our architecture. That post was mainly about microservices in general, and we promised we’d explain why we went down the event-driven path in a future post. This is us making good on that promise!

A quick refresher on microservices: a microservice is the minimum unit of functionality, built around a business entity, that can operate independently of everything else. In plain terms, instead of each core piece of the product (in our case, employees, jobs, locations, etc.) serving as a piston in a larger engine, each of those pieces is its own independent engine.

We knew we wanted many small engines instead of one large, monolithic engine. What we didn’t yet know was what approach we would take to using these microservices. Specifically, how would they communicate with each other and pass data back and forth?

There are two broad options for how microservices communicate, which can be split roughly into a regular, request-driven approach and an event-driven approach.

In the regular approach, each microservice is the single source of truth for its entity. When you need employees, you call the employee service and ask it for employees matching a certain filter. The employee microservice acts as a provider to all the other microservices, and that creates a potential problem: each microservice can become a bottleneck, because it’s the only service that can provide its information, and it has to serve everyone.
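The request-driven pattern above can be sketched in a few lines. This is a minimal illustration, not our production code; the names (`EmployeeService`, `SchedulingService`, `get_employees`) are hypothetical.

```python
class EmployeeService:
    """Single source of truth for employee records in the regular approach."""

    def __init__(self):
        # Stand-in for the employee service's database.
        self._employees = {
            1: {"id": 1, "name": "Ada", "location": "Berlin"},
            2: {"id": 2, "name": "Grace", "location": "London"},
        }

    def get_employees(self, location=None):
        """Every consumer in the system funnels through this one method."""
        return [
            e for e in self._employees.values()
            if location is None or e["location"] == location
        ]


class SchedulingService:
    """A consumer: it holds no employee data and must call out every time."""

    def __init__(self, employee_service):
        # Hard dependency: if the employee service is down, scheduling stalls.
        self._employees = employee_service

    def employees_to_schedule(self, location):
        return self._employees.get_employees(location=location)


scheduling = SchedulingService(EmployeeService())
print(scheduling.employees_to_schedule("Berlin"))
# [{'id': 1, 'name': 'Ada', 'location': 'Berlin'}]
```

Note how every read goes through `EmployeeService`: that synchronous, central dependency is exactly the bottleneck described above.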

In some cases, if a particular service is down, dependent actions can’t proceed at all. You can mitigate this by scaling services out so you have substitutes, and by scaling your databases, but that gets very expensive. To escape those costs you can move toward a hybrid approach, batching calls and introducing queueing, but that puts a heavy burden on developers and remains a resource drain. And the data itself is used wastefully: when you need a huge amount of it, you grab it, load it, and the moment you’re done, you throw it all away.

An event-driven approach solves the bottleneck issues described above by introducing events into the system. Our services emit an event any time they process something, like an employee update. Every microservice interested in that update can grab the information and store it locally, the moment it happens. So we’re introducing not just an event-driven approach, but also data storage local to each service: data becomes distributed across services, and each service stores only the minimum amount it needs. This is still a tradeoff; we save on bandwidth but pay more for storage because of the duplication. Typically, though, storage is the cheaper resource. Ultimately, this means there’s no single bottleneck in the system. We think the benefits of this approach have the most immediate, positive impact on our customers, as well as on the team that’s delivering for them.
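To make the contrast concrete, here’s a minimal sketch of the event-driven version, assuming a simple in-process event bus. Again, the names (`EventBus`, `employee.updated`, the service classes) are illustrative, not our actual implementation.

```python
from collections import defaultdict


class EventBus:
    """Tiny in-process pub/sub bus standing in for a real message broker."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def emit(self, event_type, payload):
        for handler in self._subscribers[event_type]:
            handler(payload)


class EmployeeService:
    """Owns employee writes and emits an event for every update."""

    def __init__(self, bus):
        self._bus = bus

    def update_employee(self, employee):
        # ...persist to this service's own database, then announce the change.
        self._bus.emit("employee.updated", employee)


class SchedulingService:
    """Keeps a local copy of only the fields it needs, so it never has to
    call the employee service at read time."""

    def __init__(self, bus):
        self._local_employees = {}
        bus.subscribe("employee.updated", self._on_employee_updated)

    def _on_employee_updated(self, employee):
        # Store the minimum necessary data locally; ignore the rest.
        self._local_employees[employee["id"]] = {
            "id": employee["id"],
            "location": employee["location"],
        }


bus = EventBus()
employees = EmployeeService(bus)
scheduling = SchedulingService(bus)

employees.update_employee({"id": 1, "name": "Ada", "location": "Berlin"})
print(scheduling._local_employees[1]["location"])
# Berlin
```

The scheduling service duplicates a slice of the employee data (the storage cost mentioned above), but its reads are now entirely local, so the employee service is no longer a bottleneck, and scheduling keeps working even if the employee service goes down.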