Rise of “Service Mesh” in Application Modernisation
The What, the Why and the How
Author: Ravi Cheetirala
Technical Architect (Cloud & DevSecOps) at TL Consulting
Learn how a service mesh brings safety and reliability to all aspects of service communication. Read on to find out more about the following:
- What is a Service Mesh?
- Key Features of a Service Mesh
- Why do we need Service Mesh?
- How does it work?
- Case Study
What is a Service Mesh?
A service mesh is a programmable software layer that sits on top of the services in a Kubernetes cluster. It enables effective management of service-to-service communication, also known as "East-West" traffic.
The objective of a service mesh is to allow services to communicate securely with each other, share data and redirect traffic in the event of application or service failures. Quite often a service mesh acts as an overlay of a network load balancer, API gateway and network security groups.
Key Features of Service Mesh
Rate limiting, ingress gateway, traffic splitting, service discovery, circuit breaking and service retries
A service mesh enables traffic routing between services in one or more clusters. It also helps resolve cross-cutting concerns such as service discovery, circuit breaking and traffic splitting.
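As a sketch of what traffic splitting looks like in practice, assuming an Istio-based mesh (one common implementation; the service name "reviews" and the subsets "v1" and "v2" are illustrative, not from the case study), a VirtualService can weight traffic between two versions of a service:

```yaml
# Hypothetical Istio VirtualService: route 90% of traffic to subset v1
# and 10% to subset v2 of an illustrative "reviews" service.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90        # stable version keeps most traffic
        - destination:
            host: reviews
            subset: v2
          weight: 10        # canary version receives a small share
```

The subsets referenced here would be defined in a companion DestinationRule; adjusting the weights shifts traffic gradually without redeploying either version.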
Securing the Services
Authentication, Authorization, encryption and decryption, Zero Trust Security
The service mesh can also encrypt and decrypt data in transit, removing that complexity from each of the services. The usual implementation for encrypting traffic is mutual TLS, where a public key infrastructure (PKI) generates and distributes certificates and keys for use by the sidecar proxies. It can also authenticate and authorise requests made within and outside the application, forwarding only authorised requests to service instances.
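As an illustration, assuming an Istio-based mesh, strict mutual TLS can be enforced mesh-wide with a single PeerAuthentication resource, with no change to application code:

```yaml
# Hypothetical Istio policy: require mutual TLS for all sidecar traffic.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # applied in the root namespace, so it is mesh-wide
spec:
  mtls:
    mode: STRICT            # sidecars accept only mutual-TLS connections
```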
Monitoring, Event Management, Logging, Tracing (M.E.L.T)
A service mesh comes with many monitoring and tracing plugins out of the box to understand and trace issues such as communication latency, service failures and routing errors. It captures telemetry data for service calls, including access logs, error rates and number of requests served per second, which gives operators and developers a base from which to troubleshoot and fix errors. Out-of-the-box plugins include Kiali, Jaeger and Grafana.
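As one concrete example, assuming an Istio-based mesh, Envoy access logging can be switched on through the mesh configuration, after which every sidecar records each request it proxies:

```yaml
# Hypothetical Istio operator configuration fragment enabling access logs.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    accessLogFile: /dev/stdout   # sidecars write Envoy access logs to stdout
```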
Why do we need Service Mesh?
Most modern applications, and many existing monoliths, are being written or transformed using the microservice architecture style and deployed to a Kubernetes cluster as cloud-native applications, because this offers agility, speed and flexibility. However, the exponential growth of services in this architecture brings challenges in peer-to-peer communication, data encryption, securing traffic and so on.
Adopting the service mesh pattern helps address these issues, in particular the traffic management between services, which otherwise involves a considerable amount of manual workarounds. A service mesh brings safety and reliability to all aspects of service communication.
How does it work?
Most service meshes are implemented using the sidecar pattern, where a sidecar proxy (commonly Envoy) is injected into each Pod. Sidecars handle tasks abstracted from the service itself, such as monitoring and security.
The services, their respective sidecar proxies and the interactions between them are called the data plane of a service mesh. Another layer, called the control plane, manages tasks such as creating instances, monitoring, and implementing policies such as network management or network security policies. The control plane is the brain behind service mesh operations.
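A minimal sketch of how the injection is typically triggered, assuming an Istio-based mesh: labelling a namespace tells the control plane's admission webhook to inject the sidecar proxy into every Pod created there (the namespace name "shop" is illustrative):

```yaml
# Hypothetical namespace manifest opting in to automatic sidecar injection.
apiVersion: v1
kind: Namespace
metadata:
  name: shop                     # illustrative namespace name
  labels:
    istio-injection: enabled     # control plane injects the proxy into new Pods
```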
A Case Study
The client in question is a large online retailer with a global presence. The application is a legacy e-commerce platform built as a giant monolith.
The client's architecture consists of a multi-channel (mobile and web) front-end application developed using React JS, tied together by a backend service developed with legacy Java/J2EE technology and hosted in their own data centre.
There is an ongoing project to split this giant application into a microservice-based architecture using a modern technology stack hosted on a public cloud.
The client's organisation needed to set up a deployment platform that ensures high availability, scalability and resilience. It also needed to be cost-effective and secure, and to support a high deployment frequency for releases and maintenance. Key requirements included:
- Zero-downtime/no-outage deployments and support for various deployment strategies to test new releases/features
- Improved deployment frequency
- Secure communication between the services
- Tracing the service-to-service communication response times and troubleshooting the performance bottlenecks
- Everything as code
Role of Service Mesh in the project:
The client was able to achieve these goals by adopting the service mesh pattern in their microservice architecture:
- Achieved Zero downtime deployments with 99.99% availability.
- Enabled secure communication using the service mesh's TLS/mTLS features, in a language-agnostic way.
- Used traffic splitting to test new features and gauge sentiment within their customer base.
- Chaos testing was conducted using the service mesh fault injection features.
- Improved operational efficiency and optimised infrastructure cost.
- Identified latency issues through distributed tracing.
- Placed no additional burden on development teams to write code for these concerns.
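The chaos-testing point above can be sketched, again assuming an Istio-based mesh, as a fault-injection rule that delays a fraction of requests to an illustrative "checkout" service to verify how its callers cope:

```yaml
# Hypothetical Istio fault-injection rule for chaos testing.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: checkout
spec:
  hosts:
    - checkout
  http:
    - fault:
        delay:
          percentage:
            value: 10.0      # inject a delay into 10% of requests
          fixedDelay: 5s     # each delayed request waits five seconds
      route:
        - destination:
            host: checkout
```

Because the fault lives in mesh configuration rather than application code, it can be applied and removed without redeploying the service.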
A service mesh provides a robust set of features that resolve the key challenges faced by DevOps and SRE teams running microservice applications on a cloud-native stack, by abstracting most of this functionality away from the services themselves. It is now a widely adopted pattern and a critical component in Kubernetes implementations.
TL Consulting can help solve these complex technology problems by simplifying IT engineering and delivery.
We are an industry leader delivering specialised solutions and advisory in DevOps, Data Migration & Quality Engineering with Cloud at the core.