Setting up our API Gateway
14 Nov 2021

Goal
- To set up the MetalLB external network load balancer, introduce the Ambassador API Gateway into our helloworld architecture, and configure it for basic usage.
Discussion
In the previous post, we introduced API gateways and discussed some nuances when using them in a Kubernetes environment. In particular, we realized that we needed an external network (L4) load balancer that worked with Kubernetes.
MetalLB will function as that external load balancer, receiving and routing requests from the outside world to the edge of our Kubernetes cluster. We will introduce Ambassador Edge Stack as our API gateway at the edge, where it will wait for requests with a ‘Listener’. The listener will be armed with ‘Mapping’ information on how to route requests to ‘Hosts’ in the cluster, which would be Kubernetes services, like our helloworld. Let’s get this setup going!
Infra
3 VM instances running on top of Hyper-V, provisioned with Ubuntu 20.04 Live Server. One will function as the Kubernetes Control Plane while the other two will function as the Kubernetes worker nodes. We will wire up Ambassador with Kubernetes in the control plane node.
Stack
The microservice hasn’t changed. So, it is the same stack as the helloworld post.
Containerization and Orchestration
- Docker and Kubernetes
API Gateway and Load Balancing
- Ambassador Edge Stack, MetalLB
Architecture
Setup
Same initial setup as the Kubernetes post.
If you recall, we removed the ‘taint’ on the control plane so that it could run normal workloads. That step is necessary to get MetalLB working, since it needs to deploy a controller workload on the control plane.
MetalLB installation is straightforward. I will use YAML manifests created by MetalLB to install it. The first step is to create a namespace (it creates metallb-system). The next step is to create a secret called ‘memberlist’ using OpenSSL. This will be used in the third step by the YAML manifest for the actual installation, which creates a controller pod and as many speaker pods as there are nodes in the cluster.
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.11.0/manifests/namespace.yaml
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.11.0/manifests/metallb.yaml
The installation creates and runs the MetalLB pods. You can inspect them by:
kubectl get pods -n metallb-system
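If the install went well, the output should look roughly like the following: one controller pod plus one speaker per node. (The pod name hashes below are illustrative; yours will differ, and with our 3-node cluster there should be three speakers.)

```shell
kubectl get pods -n metallb-system
# NAME                          READY   STATUS    RESTARTS   AGE
# controller-7dcc8764f4-q5vtk   1/1     Running   0          60s
# speaker-8bqnx                 1/1     Running   0          60s
# speaker-jv6kt                 1/1     Running   0          60s
# speaker-ptlqn                 1/1     Running   0          60s
```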
These pods aren’t doing anything yet, since we haven’t actually configured the LB setup.
MetalLB operates in a couple of different modes. Its basic functionality is to take ownership of a few IP addresses in the network, map them to the virtual IPs of our services, and advertise these IPs using ARP or BGP. We will use the ‘Layer 2 configuration’, in which it advertises through ARP. We create this configuration through a config map. First, we create a file called metallb-config.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.250-192.168.1.255
This instructs MetalLB to work in layer 2 and gives it control over the specified IP range. Note that your router must not hand out these IPs via DHCP, so it is best to exclude or reserve that range in your router’s configuration to avoid IP clashes in the future.
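The config map still needs to be applied before MetalLB will act on it. Assuming the file was saved as metallb-config.yaml as above:

```shell
# Apply the layer 2 address-pool configuration; MetalLB picks it up
# from the 'config' ConfigMap in the metallb-system namespace.
kubectl apply -f metallb-config.yaml
```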
Now, the LB is ready to roll. To use it, we will delete the helloworld service that is currently exposed via NodePort.
kubectl delete svc helloworld
Once done, we expose it again, this time with the type set to LoadBalancer, and then query the created service.
kubectl expose deployment helloworld --type=LoadBalancer --port=80 --target-port=8000
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
helloworld LoadBalancer 10.96.119.144 192.168.1.251 80:32724/TCP 42s
Note that this external IP is from the pool controlled by MetalLB. That’s cool. Now, by simply visiting that external IP, we get our helloworld response. Note that, just like before, we can still use that port 32724 and the IP addresses of our nodes (control plane or workers) to hit the service. This is because, internally they are still exposed as a NodePort, but the LB will now abstract that away. So, we no longer need to know the actual node IPs, which may go down or change any time. Instead, we can just use the IP reserved by the LB, and forget all the underlying complexities!
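To convince yourself that MetalLB is really answering for that address, you can hit it from another machine on the LAN and then inspect that machine’s neighbor (ARP) table; the MAC address shown should belong to one of the cluster nodes. (192.168.1.251 is the IP from my setup above; substitute whatever EXTERNAL-IP your cluster assigned.)

```shell
# Hit the service through the LB-assigned IP
curl http://192.168.1.251/

# See which MAC answered ARP for the LB IP (Linux client)
ip neigh show 192.168.1.251
```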
Now, we have an L4 load balancer managing our service, and giving all that L4 goodness. But, we are hungry for more! Time for Ambassador to enter the chat.
Ambassador is best installed using Helm, which is a popular package manager for Kubernetes. I used the typical ‘apt’ way to install Helm:
curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -
sudo apt-get install apt-transport-https --yes
echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
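A quick check that the Helm client landed correctly (your version string will vary):

```shell
# Print the installed Helm client version
helm version --short
```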
Once Helm is at the helm, the Ambassador install is very similar to the ‘apt’ approach. We add a repo to Helm, update the Helm cache, and install the package we want.
# Add the Repo:
helm repo add datawire https://app.getambassador.io
helm repo update
# Create Namespace and Install:
kubectl create namespace ambassador && \
kubectl apply -f https://app.getambassador.io/yaml/edge-stack/latest/aes-crds.yaml
kubectl wait --timeout=90s --for=condition=available deployment emissary-apiext -n emissary-system
helm install edge-stack --namespace ambassador datawire/edge-stack && \
kubectl -n ambassador wait --for condition=available --timeout=90s deploy -lproduct=aes
Please note that Ambassador’s documentation lists an incorrect CRD URL; I have added the correct one above. The install takes a few minutes to complete, at which point we will have Ambassador ready. Make a note of the IP address of Ambassador’s edge-stack service:
kubectl get svc -n ambassador
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
edge-stack LoadBalancer 10.101.14.213 192.168.1.250 80:31830/TCP,443:30377/TCP
edge-stack-admin ClusterIP 10.109.139.203 <none> 8877/TCP,8005/TCP
edge-stack-redis ClusterIP 10.99.90.171 <none> 6379/TCP
Not much else to see though. To see something working, we are going to wire up Ambassador with our helloworld service. Remember, when we left the service alone a few paragraphs above, it was already running behind (lol) the load balancer. Now, we are going to pull it back and put Ambassador in-between MetalLB and our service.
To get this wired up, we will delete our helloworld service again, and bring it back up, this time as a ClusterIP service. We no longer need it to be directly exposed to the external load balancer.
kubectl delete svc helloworld
kubectl expose deployment helloworld --port=8000
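A quick sanity check that the service is back as a plain ClusterIP, with no external IP this time (the cluster IP below is illustrative; yours will differ):

```shell
kubectl get svc helloworld
# NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
# helloworld   ClusterIP   10.103.22.18   <none>        8000/TCP   5s
```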
Then, we create an Ambassador Listener for HTTP on port 8080 (note that this is the port Ambassador listens on, not our service’s port 8000):
kubectl apply -f - <<EOF
---
apiVersion: getambassador.io/v3alpha1
kind: Listener
metadata:
  name: ambassador-listener-8080
  namespace: ambassador
spec:
  port: 8080
  protocol: HTTP
  securityModel: XFP
  hostBinding:
    namespace:
      from: ALL
EOF
Finally, we create a Mapping that routes all traffic with the /helloworld/ prefix to our service running on port 8000.
kubectl apply -f - <<EOF
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: helloworld-backend
spec:
  hostname: "*"
  prefix: /helloworld/
  service: helloworld:8000
EOF
That’s all! We have now exposed our service via Ambassador to the external world. It can be accessed from the browser to see our familiar message.
http://<ambassador-ip-addr>/helloworld/
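Or, from a shell (192.168.1.250 is the edge-stack EXTERNAL-IP from my earlier `kubectl get svc -n ambassador` output; substitute your own). Edge Stack redirects plain HTTP to HTTPS by default and serves a self-signed certificate until TLS is configured, so we let curl follow the redirect and skip certificate verification:

```shell
# -L follows the HTTP->HTTPS redirect, -k accepts the self-signed cert
curl -Lk http://192.168.1.250/helloworld/
```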
Code
No code. All config!
Summary
By using Ambassador Edge Stack, we have provided a single point of entry into our microservice architecture, abstracting away the internal server implementation. While it also provides a lot of additional functionality, we have only done the basic set up so far. In upcoming posts, we will explore it further by configuring a few other key features like TLS, observability, progressive delivery etc.