Implementing native gRPC load balancing with Kubernetes
Last year, the Terrarium Team decided to migrate our internal network communication from REST and WebSockets to gRPC. TerrariumDB is a distributed database system, so choosing the right protocol can improve its performance significantly, much like optimizing code or algorithms can. Our architecture was designed to ensure high availability, so we need to distribute traffic between at least two TerrariumDB clusters. Several tools are available for traffic distribution, and HAProxy had been our choice for several years. However, as our traffic increased, keeping HAProxy running properly became increasingly challenging, demanding more resources and posing a risk to system performance.
As we were looking for a new solution to replace HAProxy, our eyes turned to gRPC. It has many advantages over a typical REST setup, such as bi-directional streaming, load balancing out of the box, and less network overhead because it reuses previously established TCP connections. It is built on top of the HTTP/2 protocol and uses Protobuf for message serialization. It is also possible to compress messages on both the server and the client side, which is not only more time-efficient but, in some cases, also cost-efficient when additional traffic incurs extra charges.
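For example, with the grpc-java API, enabling compression on the client is a one-line change on the stub. Below is a minimal sketch; `GreeterGrpc` stands in for any Protobuf-generated service stub, and the endpoint is made up:

```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

class CompressedClient {
  // "GreeterGrpc" is a placeholder for any Protobuf-generated stub;
  // "aggregator.internal:9090" is a hypothetical endpoint.
  static GreeterGrpc.GreeterBlockingStub create() {
    // A single channel multiplexes all RPCs over one long-lived HTTP/2 connection.
    ManagedChannel channel = ManagedChannelBuilder
        .forAddress("aggregator.internal", 9090)
        .usePlaintext()
        .build();
    // withCompression("gzip") asks the client to compress outgoing request messages.
    return GreeterGrpc.newBlockingStub(channel).withCompression("gzip");
  }
}
```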
As the setup of our primary environment is simple, the implementation of gRPC went smoothly. Our infrastructure is designed around a robust and scalable architecture within the Synerise platform, centred on Kubernetes for orchestrating containerized applications. At the heart of this setup is a proxy application that serves as the primary gateway for all requests executed on TerrariumDB. This pivotal component ensures streamlined communication and data flow between the client interfaces and our backend databases. Complementing this setup are two separate TerrariumDB clusters deployed on virtual machines. These clusters are dedicated to handling diverse data workloads efficiently, providing high performance and reliability for our data storage and processing needs. Initially, we relied on HAProxy for load balancing, but as our gRPC implementation matured, we decided to remove HAProxy from our architecture and use the native client-side load balancing provided by the Netty-based gRPC Java library. We do not use any auto-discovery method, so the Aggregators' hosts are hardcoded in the Netty configuration. Netty also requires a service name, which would normally be used by an external auto-discovery method. As load balancing was working like a charm, we decided to move on with the rollout to other environments. To our surprise, we quickly hit a roadblock.
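To make this more concrete, here is a minimal sketch of how such a client-side setup can look with grpc-java and the Netty transport. The service name `terrariumdb-aggregator.internal` and the Aggregator addresses are hypothetical, and the real configuration is of course more elaborate:

```java
import io.grpc.EquivalentAddressGroup;
import io.grpc.ManagedChannel;
import io.grpc.NameResolver;
import io.grpc.NameResolverProvider;
import io.grpc.NameResolverRegistry;
import io.grpc.netty.NettyChannelBuilder;

import java.net.InetSocketAddress;
import java.net.URI;
import java.util.List;
import java.util.stream.Collectors;

/** Resolves the "static" scheme to a hardcoded list of Aggregator addresses. */
final class StaticAggregatorResolverProvider extends NameResolverProvider {

  private final List<InetSocketAddress> backends;

  StaticAggregatorResolverProvider(List<InetSocketAddress> backends) {
    this.backends = backends;
  }

  @Override
  public NameResolver newNameResolver(URI targetUri, NameResolver.Args args) {
    if (!"static".equals(targetUri.getScheme())) {
      return null; // let other providers handle non-"static" targets
    }
    // The service name from the target URI, e.g. "terrariumdb-aggregator.internal".
    // gRPC uses it as the :authority (host) of every request, no matter which
    // backend address a given call is actually routed to.
    String serviceName = targetUri.getPath().substring(1);
    return new NameResolver() {
      @Override public String getServiceAuthority() { return serviceName; }

      @Override
      public void start(Listener2 listener) {
        List<EquivalentAddressGroup> addresses = backends.stream()
            .map(EquivalentAddressGroup::new)
            .collect(Collectors.toList());
        listener.onResult(ResolutionResult.newBuilder().setAddresses(addresses).build());
      }

      @Override public void shutdown() {}
    };
  }

  @Override public String getDefaultScheme() { return "static"; }
  @Override protected boolean isAvailable() { return true; }
  @Override protected int priority() { return 5; }
}

/** Builds a round-robin channel over the static host list. */
class AggregatorChannelFactory {
  static ManagedChannel create(List<InetSocketAddress> aggregators) {
    // Register once at startup; the provider answers for the "static" scheme.
    NameResolverRegistry.getDefaultRegistry()
        .register(new StaticAggregatorResolverProvider(aggregators));
    return NettyChannelBuilder
        .forTarget("static:///terrariumdb-aggregator.internal") // hypothetical service name
        .defaultLoadBalancingPolicy("round_robin")               // client-side load balancing
        .usePlaintext()
        .build();
  }
}
```

With `round_robin`, the channel keeps a subchannel (and therefore a reusable TCP connection) to each Aggregator and spreads outgoing calls across them.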
During testing, we discovered that everything worked perfectly fine until we started using load balancing. When we passed a list of hosts to the application, we started receiving unusual Nginx errors indicating that requests were not reaching the TerrariumDB Aggregators. The only major difference from the primary environment was that the TerrariumDB clusters were running on separate Kubernetes clusters instead of plain virtual machines.
Ensuring that everything works fine with this setup was crucial for us, as our goal was to migrate completely to a Kubernetes-based architecture. After a thorough debugging process, we decided to investigate the logs of the ingress controller. What we discovered was that incoming HTTP requests carried a rather strange host that didn't match any of the hosts configured in the client-side application.
We had no choice but to delve into the internals of the Netty implementation. It turned out that no matter which host is picked from the given list to execute a request, the request itself always carries the host derived from the service name provided in the configuration. To better understand why it worked in the primary environment but not in the new one, let's examine the typical lifetime of a request.
Initially, the gRPC client selects a host from the provided list to execute the request. Then it injects the host derived from the service name in the configuration. The request is sent over a previously established TCP connection and received by the target server. The server doesn't need to verify whether the request's host matches the VM's host. It is also worth mentioning that we don't use a firewall in this particular case, which would typically reject such a request. The server is only interested in the remaining part of the URL path, as it uses this information to determine which service method should be invoked.
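This behaviour is easy to observe from the client side. In the following sketch, which reuses the hypothetical factory from the snippet above, the channel's authority (the value that ends up as the HTTP/2 `:authority`, i.e. the request's host) is the service name, not the address a given call is physically sent to:

```java
import io.grpc.ManagedChannel;
import java.net.InetSocketAddress;
import java.util.List;

class AuthorityCheck {
  public static void main(String[] args) {
    // Hypothetical Aggregator endpoints, passed to the factory sketched earlier.
    List<InetSocketAddress> aggregators = List.of(
        new InetSocketAddress("10.0.1.10", 9090),
        new InetSocketAddress("10.0.2.10", 9090));

    ManagedChannel channel = AggregatorChannelFactory.create(aggregators);

    // Prints "terrariumdb-aggregator.internal": the authority comes from the
    // service name in the target URI, regardless of which backend address the
    // round-robin picker chooses for a particular RPC.
    System.out.println(channel.authority());

    channel.shutdownNow();
  }
}
```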
As mentioned above, the TCP connection was established using the hosts provided in the static host list. Seems pretty straightforward, right? So now let's find out how Kubernetes determines which application running on the cluster should receive the incoming request.
Let's drop the topic of gRPC for a moment and take a look at how a simple HTTP request reaches applications running on Kubernetes. First, the host given in an outgoing request must be configured to point to the cluster's IP address (normally achieved by a DNS entry). All incoming requests then pass through a mechanism called an ingress controller. It can be simplified to a switch-case clause based on the request's host and path. If the host matches the provided configuration, the request is redirected to the target application running on the cluster.
Now we have a comprehensive understanding of why this setup with gRPC wasn't working. The issue arose because Netty injected the same host into every request, and this host wasn't recognized by the ingress controller, resulting in an error from Nginx. The easiest solution was to add the same rule to both Kubernetes clusters, with the same host pointing to the respective TerrariumDB Aggregators. These hosts don't require DNS entries, so it is impossible to reach the cluster using only these wildcard hosts; connections are still established using the URIs provided in the static load-balancing list.
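For illustration, a rule of this shape, applied separately to each Kubernetes cluster, could look roughly like the hypothetical Ingress manifest below (assuming the ingress-nginx controller). The host matches the service name the client injects into every request, while the backend service points to that cluster's own Aggregators; all names and ports are made up:

```yaml
# Hypothetical Ingress, applied to each Kubernetes cluster separately.
# The host has no DNS entry, so the cluster cannot be reached through it
# directly - connections are still opened against the static host list.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: terrariumdb-aggregator
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"  # proxy HTTP/2 to the backend
spec:
  ingressClassName: nginx
  rules:
    - host: terrariumdb-aggregator.internal   # the host the gRPC client injects
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: terrariumdb-aggregator  # this cluster's Aggregator service
                port:
                  number: 9090
```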
What we learned during the implementation of gRPC:
- gRPC does not use the request's host to determine where to send it; it reuses connections established earlier.
- Load balancing between different Kubernetes clusters requires additional setup (e.g. our approach with wildcard hosts).
- Our new client-side load balancing gave us scalability and significantly increased performance.
Being a developer at Synerise means a constant journey over new obstacles, which leads to gaining broad knowledge not only about application development but also about infrastructure and its maintenance. We constantly tune our software and hardware together to unleash the best performance we can give to our users.