API Gateway - Distributed Swiss Army Knife

Goal

  • To review the purpose of an API gateway in a microservices architecture, and to understand some of the nuances of introducing an API gateway into a Kubernetes environment.

Discussion

API Gateway - What and Why?

An API gateway is a key component of a microservice architecture. It provides a single point of entry into your distributed system and abstracts away all the internals, like the fact that we may have multiple microservices coordinating to provide a service. API gateways provide a plethora of goodness (a couple of these duties are sketched right after the list):

  • Perform the role of a Reverse Proxy: we don’t want our actual microservice to be exposed as is to the outside world. A reverse proxy abstracts away the server implementation from clients.
  • Perform most L7 load balancing functions, including TLS/SSL termination
  • Provide a single URI and route to multiple microservices behind-the-scenes
  • Aggregate responses from multiple microservices
  • Support specialized protocols like WebSockets and gRPC in addition to HTTP and HTTPS
  • Protect microservices by providing authentication and authorization (role-based access control)
  • Provide IP block-listing and allow-listing to manage which clients can access which services
  • Provide the ability to throttle access to services via rate limiting
  • Help observability by providing logging, monitoring and analytics
  • Help deployment by supporting multiple versions, canary and blue/green deployments
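
To make a couple of those duties concrete, here is a toy sketch (definitely not a production gateway) of path-based reverse proxying plus per-client rate limiting, using nothing but the Python standard library. The /hello and /goodbye routes, the upstream addresses and the ports are all made-up placeholders for illustration.

```python
# Toy API gateway sketch: reverse-proxy by path prefix + naive per-IP rate limit.
# Routes, upstream addresses and ports below are assumptions for illustration.
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Path prefix -> upstream microservice (assumed to be running locally).
ROUTES = {
    "/hello": "http://127.0.0.1:9001",
    "/goodbye": "http://127.0.0.1:9002",
}

WINDOW_SECONDS = 1.0
MAX_REQUESTS_PER_WINDOW = 5
_hits = {}  # client IP -> timestamps of recent requests


def allowed(client_ip):
    """Tiny sliding-window rate limiter, per client IP."""
    now = time.time()
    recent = [t for t in _hits.get(client_ip, []) if now - t < WINDOW_SECONDS]
    _hits[client_ip] = recent + [now]
    return len(recent) < MAX_REQUESTS_PER_WINDOW


class Gateway(BaseHTTPRequestHandler):
    def do_GET(self):
        if not allowed(self.client_address[0]):
            self.send_error(429, "Too Many Requests")
            return
        for prefix, upstream in ROUTES.items():
            if self.path.startswith(prefix):
                # Reverse proxy: the client only ever sees the gateway's address.
                with urllib.request.urlopen(upstream + self.path) as resp:
                    body = resp.read()
                self.send_response(resp.status)
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
                return
        self.send_error(404, "No route")


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), Gateway).serve_forever()
```

A real gateway product does all of this (and the rest of the list) far more robustly; the sketch is only meant to show the shape of the job.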

All this goodness. How do I get it?

Read more

Load Balancing Primer

Goal

  • To review Load Balancing basics and understand how it helps modern web and data clusters.

Discussion

We have been playing with our helloworld toys for some time now. Time to take a break and do a theory session. Why are we doing this? So far, we have built a single-server microservice and figured out mechanisms for effective deployability and some scalability/reliability using orchestration. Before we expand our service features, we need to understand a few key backend technologies, which serve as the entry point to a cluster built from our service. Here are a few keywords we have heard of or used multiple times: load balancing, API gateway, reverse proxy, ingress, service mesh. Are these the same? How are they different? Let’s attempt to demystify them a bit. Let’s start with Load Balancing.
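
As a tiny teaser before the primer proper: the simplest load-balancing strategy is round robin, where incoming requests are handed to each backend in turn. Here is a minimal sketch of that idea in Python; the backend addresses are placeholders, not anything from our setup.

```python
# Round-robin backend selection: requests are spread evenly across a pool.
# Backend addresses are made up for the example.
from itertools import cycle

BACKENDS = ["10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080"]
_next_backend = cycle(BACKENDS)


def pick_backend():
    """Return the next backend in strict rotation."""
    return next(_next_backend)


if __name__ == "__main__":
    for request_id in range(6):
        print(f"request {request_id} -> {pick_backend()}")
```

Real load balancers add health checks, weighting and smarter strategies (least connections, consistent hashing), but rotation is the core idea.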

Read more

Orchestration for Scalability and Reliability

Goal

  • To scale our containerized helloworld service beyond a single instance and improve its reliability by introducing orchestration via Kubernetes.

Discussion

In the previous post, we improved the deployability of our service by containerizing it. That was nice. Still, it was just on one instance. Our journey is going to be long and arduous before we reach the utopian helloworld.

In this post, we will see how we can achieve some scale and improve reliability. Traditionally, this is where we would introduce a load balancer, deploy our container across a few instances and put them all behind the load balancer. That is still a very logical thing to do here, but I thought I would take a slightly different direction and attempt to introduce Orchestration via Kubernetes.
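
As a taste of what that direction looks like, here is a hedged sketch that uses the official Kubernetes Python client to ask the orchestrator for three replicas of our container. The image tag helloworld:1.0, the port, and the presence of a working kubeconfig and cluster are all assumptions for illustration.

```python
# Sketch: declare a Deployment with 3 replicas via the Kubernetes Python client.
# Assumes a reachable cluster, a local kubeconfig, and an image tagged helloworld:1.0.
from kubernetes import client, config


def make_deployment(replicas=3):
    container = client.V1Container(
        name="helloworld",
        image="helloworld:1.0",  # assumed image name
        ports=[client.V1ContainerPort(container_port=8080)],
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "helloworld"}),
        spec=client.V1PodSpec(containers=[container]),
    )
    spec = client.V1DeploymentSpec(
        replicas=replicas,  # Kubernetes keeps this many pods running for us
        selector=client.V1LabelSelector(match_labels={"app": "helloworld"}),
        template=template,
    )
    return client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="helloworld"),
        spec=spec,
    )


if __name__ == "__main__":
    config.load_kube_config()  # reads the local kubeconfig
    apps = client.AppsV1Api()
    apps.create_namespaced_deployment(namespace="default", body=make_deployment())
```

The point is the declarative shift: we state the desired replica count, and the orchestrator keeps reality matching it, restarting or rescheduling pods as needed.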

Read more

Containerization for Deployability

Goal

  • To package our helloworld microservice into an independently deployable Docker container image and deploy it on a single instance.

Discussion

As the previous post highlighted, the deployability of our service may not be up to snuff.

Well, technically, we haven’t discussed a deployment strategy yet, but it is easy to visualize an approach driven by version control and some deployment automation (a.k.a. the good old days). Even with CI/CD, deployment is narrowly focused on the code artifacts, without worrying about the consistency of environments across multiple production machines. In a planet-scale application, this is insufficient.

Virtualization could solve this, but it is too resource-intensive and cumbersome, since consistent packaging in virtualization means creating an image with an entire OS and the whole application footprint with all the bells and whistles.

This is the context in which we speak about Containerization. A container is much more lightweight and nimble compared to virtualized packaging: it bundles not the OS and the entire application footprint, but just a service and its dependencies. So, it is easier to deploy different containers for various services and have them make effective use of resources, while still being consistent across the whole footprint at the container level.
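
To make that concrete, here is a small sketch using the Docker SDK for Python to build an image and run it on a single instance. It assumes a Dockerfile sitting in the current directory and the tag helloworld:1.0; both are placeholders for illustration, and the same can of course be done with plain docker build / docker run on the command line.

```python
# Sketch: build the helloworld image once and run it as a container.
# Assumes a local Docker daemon and a Dockerfile in the current directory.
import docker


def build_and_run():
    client = docker.from_env()  # talk to the local Docker daemon
    image, _build_logs = client.images.build(path=".", tag="helloworld:1.0")
    container = client.containers.run(
        image.tags[0],
        detach=True,
        ports={"8080/tcp": 8080},  # map host port 8080 to container port 8080
    )
    print(f"started {container.short_id} from {image.tags}")


if __name__ == "__main__":
    build_and_run()
```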

Read more

Hello-world Problems

Goal

  • To review our helloworld microservice against the tenets of microservice architecture, and see where and how it falls short.

Discussion

Here is a quick review of where our little microservice stands, in terms of the “microservician” utopia:

Tenet | State | Observation
Loosely coupled | Neutral | N/A? We just have one service. Nothing to couple yet. We will get there :)
Cohesive | Neutral | Not a lot of cohesion to discuss with helloworld. So, same as above.
Deployable | Bad | Sure, it is deployable, but if we were to deploy this across a bunch of VMs, it quickly becomes a chore. On complex applications with a lot of dependencies, it is a disaster waiting to happen.
Reliable | Bad | Well, if our VM goes down, we don’t have anywhere to get our hello, do we?
Scalable | Bad | If a lot of people want our precious hello service, my puny little 2-core 2GB RAM VM will surely buckle under the load and choke. Now, see Reliable.
Maintainable | Bad | How do I deploy patches to my code or dependencies? How do I roll out the inevitable security update that needs to go out yesterday? Bring down my service, take an outage and perform maintenance?
Observable | Bad | I know next to nothing about how my service is being used. I don’t have any logs, metrics or end-to-end tracing.
Evolvable | Neutral | Same as coupling and cohesiveness, we can discuss evolvability only after adding more complexity to our setup.

Summary

Before building further, we did a quick review of our service and convinced ourselves that we have a long way to go.


Creative Commons License

Unless otherwise specified, this work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.