Sidecar Pattern for Single Node Multi-Container Applications
Understanding Sidecar Design Pattern for Containerized Applications
The Problem
Software applications need functionalities such as monitoring, logging, service discovery, etc., to provide better reliability and failure detection. These extras can be built into the main service or split out into separate services/components. Coupling them with the main application allows efficient use of resources and simplifies continuous integration and deployment. However, a performance regression in any of these functionalities can then degrade the entire service, which is not ideal: issues in monitoring, logging, etc., should not affect the main service. For example, if logging (a best-effort service) is delayed or not working, the user's experience of the app's core functionality shouldn't suffer. Tightly integrating these functionalities with the main service also makes it difficult to share them across multiple services.
On the other hand, keeping these functionalities loosely coupled, in separate containers, allows them to be developed independently, with flexibility in the choice of programming language, and ensures that their failures have no impact on the main service (separating core services from best-effort ones).
Sidecar Pattern
The name comes from the sidecar attached to a motorcycle, which provides extended features such as additional passenger space. Similarly, a sidecar container runs alongside the main app container to enhance its functionality. A sidecar can also add capabilities to an existing application that would otherwise be difficult to modify (for example, adding HTTPS support to a legacy application).
Sidecar containers are co-located on the same host machine as the main app container. This is usually done by keeping both containers in a single atomic unit (known as a "pod" in the Kubernetes world). This co-location lets them share host resources such as storage and the network namespace, and the resulting low inter-container latency makes it easy for them to communicate with each other. Think of it as running the backend and the frontend on localhost (127.0.0.1) but on different ports.
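As a minimal sketch of this co-location (the pod name and images below are illustrative, not from this article), the following Kubernetes pod runs a main container and a sidecar that reach each other over the shared network namespace:

```yaml
# Minimal sketch of co-location in a Kubernetes pod; names and images are
# illustrative. Both containers share the pod's network namespace, so the
# sidecar can reach the main container on 127.0.0.1.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
    - name: main-app
      image: nginx:1.25                 # stands in for the main application, listening on :80
    - name: sidecar
      image: curlimages/curl:8.8.0      # stands in for any helper container
      command: ["sh", "-c", "while true; do curl -s http://127.0.0.1:80/ >/dev/null; sleep 10; done"]
```

Because both containers live in one pod, they are scheduled, started, and scaled together as a single unit.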
However, as with any decoupled design, the sidecar pattern may also increase resource consumption and add design and debugging complexity. So, just like any other architecture pattern, the trade-offs should be weighed before adopting it.
Use-Case Scenarios
Some use-case scenarios with production examples are listed below.
Logging and Monitoring
In this scenario, the main application container dumps sequential log entries into a text file on shared host storage, and a sidecar container reads them at its own pace and syncs them with a logging service, which may be hosted remotely by a different service provider. For example, Beats by Elastic can run as a sidecar container to ship logs to remote services like Logstash or Elasticsearch.
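A minimal sketch of this setup, using generic busybox containers and a hypothetical log path rather than a real Beats deployment: the app appends to a log file on a shared emptyDir volume, and the sidecar tails it at its own pace (in production, the sidecar would be a shipper such as Filebeat forwarding to Logstash or Elasticsearch).

```yaml
# Minimal logging-sidecar sketch; images, paths, and names are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-shipper
spec:
  containers:
    - name: main-app
      image: busybox:1.36
      # The "application" appends a log line every few seconds.
      command: ["sh", "-c", "while true; do echo \"$(date) request served\" >> /var/log/app/app.log; sleep 5; done"]
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app
    - name: log-shipper
      image: busybox:1.36
      # The sidecar reads the same file at its own pace; a real deployment
      # would run a shipper like Filebeat here instead of `tail`.
      command: ["sh", "-c", "until [ -f /var/log/app/app.log ]; do sleep 1; done; tail -f /var/log/app/app.log"]
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app
          readOnly: true
  volumes:
    - name: app-logs
      emptyDir: {}
```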
Reverse Proxy and Load Balancing
In this scenario, the main application is set up to serve traffic only on localhost, and the sidecar container routes traffic from the internet to it. This lets the sidecar provide functionalities such as SSL/TLS termination and response caching on behalf of the main application. When a single sidecar fronts multiple application containers, it can also load-balance traffic across them. For example, an Nginx container can be deployed as a reverse-proxy sidecar.
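A minimal sketch of such a reverse-proxy sidecar (the names, image tags, and port numbers are illustrative assumptions): Nginx listens on port 80 and forwards requests to the main app, which binds only to 127.0.0.1:8080 inside the pod.

```yaml
# Minimal reverse-proxy sidecar sketch; names, images, and ports are illustrative.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-proxy-conf
data:
  default.conf: |
    server {
      listen 80;
      location / {
        proxy_pass http://127.0.0.1:8080;   # main app on the pod's localhost
      }
    }
---
apiVersion: v1
kind: Pod
metadata:
  name: app-with-proxy
spec:
  containers:
    - name: main-app
      image: my-app:latest                  # hypothetical app image, binds to 127.0.0.1:8080
    - name: nginx-proxy                     # sidecar: reverse proxy / TLS termination point
      image: nginx:1.25
      ports:
        - containerPort: 80
      volumeMounts:
        - name: proxy-conf
          mountPath: /etc/nginx/conf.d
  volumes:
    - name: proxy-conf
      configMap:
        name: nginx-proxy-conf
```

TLS termination could then be added by mounting certificates into the Nginx container and serving on port 443, with no change to the main application.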
Secret Sharing
Applications can offload responsibilities such as retrieving and managing application secrets, authentication, authorization, encryption, etc., to sidecar containers. For example, the HashiCorp Vault Agent container provides these sidecar functionalities to applications that consume secrets from HashiCorp Vault.
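As a rough sketch, assuming the Vault Agent Injector webhook is installed in the cluster, pod annotations like the following ask it to inject a Vault Agent sidecar that fetches a secret and writes it to a shared volume for the app to read (the role name, secret path, and image are illustrative):

```yaml
# Sketch of Vault Agent sidecar injection via pod annotations; the role and
# secret path are illustrative and assume the Vault Agent Injector is installed.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-vault-agent
  annotations:
    vault.hashicorp.com/agent-inject: "true"
    vault.hashicorp.com/role: "my-app"                                          # hypothetical Vault role
    vault.hashicorp.com/agent-inject-secret-db-creds: "secret/data/my-app/db"   # hypothetical secret path
spec:
  containers:
    - name: main-app
      image: my-app:latest        # hypothetical app image; reads the injected /vault/secrets/db-creds file
```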
Dynamic Configuration
Applications that depend on external configuration, such as feature flags, can use a sidecar to communicate with an external configuration server, pull the latest configuration, and share it with the main application container. This allows configuration to be changed on the fly without modifying the application code. For example, Flipt can be deployed as a sidecar container to serve feature flags to the main application.
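A minimal, generic sketch of this idea, deliberately not tied to Flipt's own deployment options (the config URL, file path, and images are hypothetical placeholders): a sidecar periodically pulls configuration from an external server into a shared volume, and the main application re-reads the file.

```yaml
# Generic dynamic-configuration sidecar sketch; the config URL, images, and
# file path are hypothetical placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-config-sidecar
spec:
  containers:
    - name: main-app
      image: my-app:latest                  # hypothetical app image; re-reads /etc/app-config/flags.json
      volumeMounts:
        - name: app-config
          mountPath: /etc/app-config
          readOnly: true
    - name: config-sidecar
      image: curlimages/curl:8.8.0
      # Poll the (hypothetical) configuration server and refresh the shared file.
      command: ["sh", "-c", "while true; do curl -fsS https://config.example.com/flags.json -o /etc/app-config/flags.json || true; sleep 30; done"]
      volumeMounts:
        - name: app-config
          mountPath: /etc/app-config
  volumes:
    - name: app-config
      emptyDir: {}
```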