Darby Frey is Director of Platform Engineering at Belly, the leading loyalty marketing platform in the U.S. Read our Belly customer story to learn more about how Heroku has helped Belly scale its business.
How did you approach migrating to a microservices architecture?
Originally, we built the entire business on one Rails app. Then, a couple of years ago, we pivoted to a microservices approach. It’s still a work in progress, but we’re migrating components of the monolithic app whenever it makes sense. For example, when we need to add or expand a feature, or scale something independently, it makes sense to pull that piece out into a microservice. We don’t have a grand plan to break everything out and rewrite it all at once.
So far, this approach is working pretty well for us. The ability to independently scale components has been really helpful. When we see spikes, we can address them individually and they’re no longer such a big deal. We’re also happy with how well the microservices approach fits with the Heroku platform. We currently have over 100 apps running on Heroku.
How do you standardize across your 100+ apps?
We’ve done a good job of standardizing. We made our own small Ruby framework for our microservices that we call “Napa.” It’s basically a stripped-down version of Rails. It gives us a low memory footprint, which works great on Heroku because we can run a lot of our apps on 1x or 2x dynos without overtaxing their memory. Everything is standard, so any engineer can jump across services and see the same structure.
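Napa itself isn’t shown in the interview, but to make the low-memory-footprint idea concrete, here is a minimal sketch of the kind of slim, Rack-based JSON service that style implies. This is not Napa’s actual API; the service name and endpoints are made up.

```ruby
# config.ru -- a minimal sketch of a slim JSON microservice (not Napa's actual API).
require 'rack'
require 'json'

class PointsService
  def call(env)
    req = Rack::Request.new(env)
    case [req.request_method, req.path]
    when ['GET', '/health']
      json(200, status: 'ok')
    when ['GET', '/points']
      # Hypothetical endpoint: return a customer's loyalty point balance.
      json(200, customer_id: req.params['customer_id'], points: 120)
    else
      json(404, error: 'not found')
    end
  end

  private

  def json(status, body)
    [status, { 'Content-Type' => 'application/json' }, [JSON.generate(body)]]
  end
end

run PointsService.new
```

A bare Rack process like this keeps memory usage low, which is what makes it practical to pack many small services onto 1x and 2x dynos.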
How do you handle requests between your front-end apps and microservices?
We started building front-end apps on top of our backend microservice architecture. All our apps are either native mobile apps or single-page web apps built in AngularJS. These apps get deployed to S3 and connect to our backend, but we didn’t want them to connect to the microservices directly. We wanted to put a layer of security between the public Internet and our microservices, so we decided to build an API proxy.
How did you get started using API proxies?
Our first approach was to build a Rails app to act as a proxy between our single-page web apps and our backend microservices. We’d direct traffic to the Rails app, which would do some security checks and then proxy the request to a backend service. It kind of worked: we had a layer of security in between that allowed us to control access, but it was slow, and it was difficult to build some of the more complex interactions we needed.
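The proxy code itself isn’t included in the interview; as a rough sketch of the pattern being described - a security check followed by a forwarded request - it might have looked something like this. The route, the service map, and the token check are all hypothetical.

```ruby
# app/controllers/proxy_controller.rb -- a rough sketch of the pattern, not Belly's
# actual code. SERVICE_HOSTS, the route, and the token header are placeholders.
require 'net/http'

class ProxyController < ApplicationController
  SERVICE_HOSTS = {
    'users'      => 'https://users.internal.example.com',
    'businesses' => 'https://businesses.internal.example.com'
  }.freeze

  before_action :authenticate_client!

  # Hypothetical route: GET /proxy/:service/*path
  def forward
    host = SERVICE_HOSTS.fetch(params[:service]) { return head :not_found }
    uri  = URI.join(host, '/' + params[:path])
    uri.query = request.query_string if request.query_string.present?

    backend_response = Net::HTTP.get_response(uri)   # one blocking hop per request
    render json: backend_response.body, status: backend_response.code.to_i
  end

  private

  def authenticate_client!
    # Simplified security check: reject requests without an API token.
    head :unauthorized unless request.headers['X-Api-Token'].present?
  end
end
```

The extra blocking hop through a Ruby process on every request is one reason a layer like this can feel slow, which lines up with the pain described above.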
How did you solve these problems?
We started looking at external tools for API management, such as 3scale, Mashery, and Apigee. We ended up going with Apigee, which offers a nice customizable tool called “Edge.” It’s an environment where we can build a proxy layer that sits on top of our app stack.
Compared to our Ruby proxy service, Apigee Edge does a much better job of taking a response and transforming it before sending it to the client. We can make parallel requests from Apigee to our backend services, take all those responses, mash them up, and create appropriate response payloads for the consumer. We define the mashup logic in JavaScript, and Apigee also allows us to share it across proxies, so in many cases we can DRY up our code at that level. We use that feature quite a bit.
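The mashup logic itself runs in Apigee Edge’s JavaScript environment; purely as a conceptual illustration of the fan-out-and-compose idea, here is a hedged Ruby equivalent. The service URLs and field names are made up.

```ruby
# A conceptual sketch (in Ruby, not Apigee's JavaScript) of the mashup idea:
# fan out to several backend services in parallel, then compose one
# client-shaped payload. URLs and field names are illustrative only.
require 'net/http'
require 'json'
require 'uri'

def fetch_json(url)
  JSON.parse(Net::HTTP.get(URI(url)))
end

def customer_dashboard(customer_id)
  # Parallel requests to independent microservices.
  threads = {
    profile: Thread.new { fetch_json("https://users.internal.example.com/users/#{customer_id}") },
    points:  Thread.new { fetch_json("https://points.internal.example.com/points?customer_id=#{customer_id}") },
    rewards: Thread.new { fetch_json("https://rewards.internal.example.com/rewards?customer_id=#{customer_id}") }
  }
  results = threads.transform_values(&:value)   # wait for every response

  # Mash the responses into a single payload shaped for a specific client.
  {
    name:           results[:profile]['name'],
    points_balance: results[:points]['balance'],
    rewards:        results[:rewards]['items']
  }
end
```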
So now, the way we think about it is that we’re not building one API to serve all our clients. We actually build custom API interfaces for each consumer: Android, iOS, iPad, external partners, etc. We can optimize the response for each client at the Apigee layer - without affecting our backend services. That flexibility lets us move faster and create more optimized interfaces for each consumer.
How do you handle versioning of your microservices and proxies?
We still have to deal with versioning our microservices on the platform side. Our approach is to make additive changes that maintain backwards compatibility. If we have to make a significant change to an API, we’ll just version it: version 1 continues to do what it needs to do, and version 2 carries the new features. The Apigee proxies can be configured to call one version or the other. In the future we’ll have to do a major overhaul of this, but for now it works well for us.
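The interview doesn’t show how the versioned endpoints are wired up. Assuming a Rails-style routes file for the sake of the sketch (the real services are built on Napa, whose routing may differ), the additive v1/v2 approach might look roughly like this, with hypothetical resource names.

```ruby
# config/routes.rb -- a minimal sketch of the additive versioning approach;
# paths and resource names are hypothetical.
Rails.application.routes.draw do
  namespace :v1 do
    # Version 1 keeps doing what existing Apigee proxies expect.
    resources :rewards, only: [:index, :show]
  end

  namespace :v2 do
    # Version 2 carries the new behavior; proxies opt in by switching the path.
    resources :rewards, only: [:index, :show, :create]
  end
end
```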
Apigee also allows us to add some basic logging to our proxies, which we send to Loggly. The logs tell us which Apigee proxies are calling which microservices, so if we have to make a significant change to an endpoint, we can use them to determine the impact on the Apigee side. This has worked well for us so far.
How do you deploy your proxies?
To run on Apigee, we build our proxies using their XML specification. We do our mashup logic using a simple JavaScript environment they provide. We maintain a GitHub repo for all our Apigee proxies and use the same pull request model that we use for all our code. From there we use a CLI tool that sends an API request to Apigee to deploy the code.
We have about 70-80 proxies now, and each one is very focused, with 4 or 5 endpoints. We can deploy each one independently, so we don’t have to deploy the whole thing every time, and we don’t run the risk of breaking our entire API proxy setup if a single proxy has a syntax error. Apigee also compiles the proxy before deploying it, so if the compile fails for some reason the proxy won’t go into production. That gives us an extra sanity check.
How many microservices does a client typically interact with on each request?
Good question. The most we’ll see for one request is maybe 8 or 9 calls, but on average it’s probably more like 2 to 4. We try to avoid over-segmenting our microservices.
The trend seems to be that when companies decide to do microservices, they explode outward and build far more microservices than they can manage. Then, over time, they pull some of the more granular ones together into more consolidated services once they start to feel the pain. We’ve gone that route as well.
For us, the pain comes in when a common dependency is affected by a security CVE and we then have to patch and deploy 50 apps - it’s a real headache, and it takes part of the team out of development for a couple of days to get it done.
Has taking this proxy approach changed the way you think about an API?
Definitely. Our proxy APIs are more like views into our backend. We don’t want to have a ton of logic in there, but we can have a little bit. The real business logic happens at the service layer. We will do things like field mapping or structuring responses at the proxy layer.
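As a small, hypothetical example of that kind of proxy-layer field mapping (shown conceptually in Ruby; the real logic lives in Apigee’s JavaScript, and these field names are invented):

```ruby
# A conceptual example of proxy-layer field mapping: reshape a backend response
# into the flat structure a particular client expects. Field names are made up.
def present_business(backend_business)
  {
    id:    backend_business['business_id'],
    name:  backend_business['display_name'],
    city:  backend_business.dig('address', 'city'),
    state: backend_business.dig('address', 'state')
  }
end
```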
What advice would you give others on how to size microservices?
I tell people - try to think of a component that you can break off that would have little to no interaction with your primary app. Usually you can spot this by looking at which data models are used for a given feature. That’s the starting point. Once you’ve broken out one service, it becomes easier to see where the seams are in your application, and you can start to carve out those services as well.
If you have common joins across components, then it makes more sense to keep them in the same service. For example, we have a businesses table and a users table and we’re always joining across those, so it doesn’t make sense to keep them separate. There’s not enough complexity in each individually for them to live on their own.