20 Nov 2021
Goal
- To review our helloworld microservice against the tenets of microservice architecture, and see whether we are in a better place than where we began.
Discussion
Here is a quick review of where our little microservice stands now, in terms of the “microservician” utopia:
| Tenet | State | Observation |
| --- | --- | --- |
| Loosely coupled | Neutral | N/A? We just have one service. Nothing to couple yet. We will get there :) |
| Cohesive | Neutral | Not a lot of cohesion to discuss with helloworld. So, same as above. |
| Deployable | Better | By using Docker-based containerization and by managing and orchestrating container deployments via Kubernetes, we have improved the deployability of our microservice. By using the canary release feature of Ambassador, we have gained fine-grained control over our deployments. |
| Reliable | Better | Thanks to the orchestrated and scaled environment, individual node failures no longer bring down the entire service, improving the reliability and availability of our microservice. |
| Scalable | Better | Since Kubernetes can quickly scale pods up and down as necessary, we have improved the scalability of our microservice and the elasticity of our production environment. |
| Maintainable | Better | With Kubernetes, patches to code or dependencies can be applied without bringing down the pods, reducing the need for maintenance windows and increasing the maintainability of the overall architecture. |
| Observable | Better | Using Ambassador, Prometheus and Grafana, we now have ways to capture, monitor and visualize the metrics that matter in understanding the health and performance of our infrastructure. |
| Evolvable | Beginnings | While our code is too simple to be discussing evolvability, we have versioned our APIs, solving one piece of the puzzle for an evolvable architecture. |
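To make the API-versioning point concrete, here is a minimal sketch of how versioned routes can be expressed as Ambassador Mappings. The Mapping names, prefixes and service names below are illustrative, not our actual manifests:

```yaml
# Route /v1/ and /v2/ URL prefixes to separate Kubernetes services,
# so each API version can be deployed and retired independently.
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: helloworld-v1
spec:
  prefix: /v1/hello/
  service: helloworld-v1
---
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: helloworld-v2
spec:
  prefix: /v2/hello/
  service: helloworld-v2
```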
Summary
We have achieved a lot with just a simple helloworld microservice. We also realize that there is a lot of untapped potential in the software we have deployed and configured. For example, we have much more to get from our API gateway: we need to explore rate limiting, authentication, logging, tracing and so on. We also need to push the boundaries of the infrastructure and see how it performs under load, and try out various load balancing algorithms. We have barely scratched the surface of Kubernetes.
But we will force ourselves to take a break from this line of exploration here. After all, these tools only come into their own when multiple services interplay and we deal with high coupling, complex computations, big data, concurrency and the like. We need complexity. We need code. We need data. So, we bid adieu to our helloworld service here, as well as to Part 1. We will come back with a new area to explore in the next part. Onward we shall go!
19 Nov 2021
Goal
- To share statistics produced by Envoy proxy and Ambassador with Prometheus to generate time-series metrics, and visualize them as dashboards in Grafana
Discussion
Helloworld is getting better and better while acquiring more and more components and complexity. This makes good visibility into what is happening in the cluster and in our service essential. Unfortunately, that is an area where we are seriously lacking. Thankfully, our edge stack is capable of producing a wealth of telemetry ripe for consumption.
The first step in any monitoring system is being able to create and track metrics in a time-series data store. This provides both a real-time view into what is happening at the moment and the ability to look back into the recent past to make sense of what might have happened. The raw data is also available for further processing and analytics. A popular option in this space is Prometheus.
Once we generate time-series metrics, we need the ability to query them, as well as to build rich visualizations and dashboards on top of them. Prometheus has a query language called PromQL and an expression browser, but we will be using Grafana for its extensive dashboarding capabilities.
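As a rough sketch of what the Prometheus side looks like: Ambassador's admin endpoint exposes Envoy statistics in Prometheus format, and a scrape job points Prometheus at it. The target address, port and intervals below are assumptions for illustration, not our exact configuration:

```yaml
# prometheus.yml fragment: scrape Envoy/Ambassador stats periodically.
scrape_configs:
  - job_name: 'ambassador'
    scrape_interval: 15s          # how often to pull metrics
    metrics_path: /metrics        # Prometheus-format stats endpoint
    static_configs:
      - targets: ['ambassador-admin.ambassador:8877']  # admin service (illustrative)
```

Grafana then uses Prometheus as a data source and queries these metrics with PromQL to drive its dashboards.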
Read more
18 Nov 2021
Goal
- To introduce a new version of our helloworld microservice, deploy it via Kubernetes using a rolling update strategy, and then set up a canary release using Ambassador, so that only a portion of the requests will be directed to the new version.
Discussion
We have been YAMLing for a while now. So, maybe it is good to get back to a couple of extra lines of code?
One of the desired features of an API gateway is support for canary releases. A canary release is a deployment strategy where a portion of incoming traffic is diverted to an early release of a new version of a service. This way, we can verify the functionality with a small subset of requests or users, and validate the new version before a large-scale release to all users. It also lets us roll back quickly, since reverting is just a matter of switching all traffic back to the older version that is already serving the other users.
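In Ambassador, a canary can be expressed as a second Mapping on the same prefix that carries a `weight`. The sketch below is illustrative (the Mapping and service names are assumptions): roughly 10% of matching requests go to the new version, while the original Mapping keeps serving the rest:

```yaml
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: helloworld-canary
spec:
  prefix: /hello/
  service: helloworld-v2   # the canary build
  weight: 10               # ~10% of requests on this prefix go to the canary
```

Raising the weight gradually promotes the canary; deleting this Mapping sends everything back to the stable version.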
Read more
16 Nov 2021
Goal
- To secure communications to our service using TLS (real certificates from Let’s Encrypt) and terminate it at our API gateway.
Discussion
So far, we have dealt with plain old HTTP for our helloworld service. I have referred to TLS termination in my previous posts on load balancing and the API gateway. Now it is time to enable HTTPS for our web service, as we have the necessary sophistication via the Ambassador gateway.
Quick recall: TLS stands for Transport Layer Security. It provides encryption, authentication and data integrity on communication between ‘clients’ and ‘servers’ (client and server can be a wide variety of things here). It evolved from SSL (Secure Socket Layer). HTTPS is HTTP carried over TLS. One of the fundamental building blocks of TLS is the digital certificate, issued by a Certificate Authority (CA) to the entity that owns the server domain.
In our case, we have a few interesting nuances before we TLS away to glory. Let’s dig deeper!
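As a preview of where this lands: Ambassador Edge Stack can request and renew Let’s Encrypt certificates itself via a Host resource with an ACME provider. The hostname and email below are placeholders, and the exact spec depends on the Ambassador version in use:

```yaml
# Host resource: Ambassador obtains a Let's Encrypt certificate via ACME
# and terminates TLS for this hostname at the gateway.
apiVersion: getambassador.io/v2
kind: Host
metadata:
  name: helloworld-host
spec:
  hostname: hello.example.com           # placeholder domain
  acmeProvider:
    authority: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com            # placeholder contact for Let's Encrypt
```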
Read more
14 Nov 2021
Goal
- To set up the MetalLB external network load balancer, introduce the Ambassador API Gateway into our helloworld architecture, and configure it for basic usage.
Discussion
In the previous post, we introduced API gateways and discussed some nuances when using them in a Kubernetes environment. In particular, we realized that we needed an external network (L4) load balancer that worked with Kubernetes.
MetalLB will function as that external load balancer, receiving requests from the outside world and routing them to the edge of our Kubernetes cluster. We will introduce Ambassador Edge Stack as our API gateway at the edge, where it will wait for requests with a ‘Listener’. The listener will be armed with ‘Mapping’ information on how to route requests to ‘Hosts’ in the cluster, which would be Kubernetes services like our helloworld. Let’s get this setup going!
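For a sense of what the MetalLB piece involves: in its layer-2 mode, MetalLB is told which IP addresses it may hand out to Services of type LoadBalancer. A minimal sketch using the ConfigMap-style configuration (the address range is an assumption; it must come from your own network):

```yaml
# MetalLB layer-2 configuration: the address pool from which
# LoadBalancer Services (like Ambassador's) get their external IPs.
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # illustrative range on the local network
```

When Ambassador's Service requests a LoadBalancer, MetalLB assigns it one of these addresses and answers ARP for it, so traffic from outside the cluster reaches the gateway.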
Read more