Scale stateful classical application on kubernetes.pptx

unmeshv02, 9 slides, Aug 04, 2024

About This Presentation

Video link: https://www.youtube.com/watch?v=z7Z1QONfo8A

Scaling stateful applications in Kubernetes.
Points to consider when scaling a highly stateful, HTTP session based application in Kubernetes.

It will result in:
1. Cost savings.
2. High availability and fault tolerance.
3. Seamless experience to ...


Slide Content

Scale session-based, stateful application on cloud to save operational cost
- Low operational cost
- High availability
- Fault tolerance
- Load balancing
- Respond to change in workload
- Optimal operational cost

1. Use sticky sessions to avoid initializing the session again and again
- Once a session is established, all subsequent requests go to the same pod.
- Directs all requests from a user to a particular pod.
- Avoids initializing heavy session state again and again.
- Minimizes delay on subsequent requests.

1. Use sticky sessions (continued): ingress controller configuration for session affinity, sketched below.
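The slide references an ingress configuration without showing it; a minimal sketch of cookie-based session affinity, assuming the NGINX ingress controller and a hypothetical session-app Service and host, might look like this:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: session-app                      # hypothetical application name
  annotations:
    # ingress-nginx cookie-based affinity: requests carrying the cookie
    # keep being routed to the same backend pod.
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"
    nginx.ingress.kubernetes.io/session-cookie-name: "APP_SESSION"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "3600"
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com              # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: session-app
                port:
                  number: 8080
```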

2. Store session state in an external cache to restore the user session in case of pod failures (see the sketch after this list)
- Re-initializes the user session on a different pod after a pod/server failure.
- Provides high availability.
- Makes the system fault tolerant.
- Allows scaling down smoothly.
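The slide does not show code; as an illustration only, a minimal Python sketch of keeping session state in Redis (the host name session-cache and the TTL are hypothetical) could be:

```python
import json
import redis

# Hypothetical Redis endpoint; in a cluster this would typically be a Service name.
r = redis.Redis(host="session-cache", port=6379, decode_responses=True)

SESSION_TTL_SECONDS = 1800  # expire idle sessions after 30 minutes (illustrative value)

def save_session(session_id: str, state: dict) -> None:
    """Write the session state to the external cache so any pod can load it."""
    r.setex(f"session:{session_id}", SESSION_TTL_SECONDS, json.dumps(state))

def load_session(session_id: str) -> dict | None:
    """Restore the session on whichever pod receives the next request."""
    raw = r.get(f"session:{session_id}")
    return json.loads(raw) if raw else None
```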

3. Use a queue and worker setup for async operations (see the sketch after this list)
- Performs operations asynchronously.
- Avoids overloading the pods.
- Serializes concurrent operations.
- Avoids losing tasks in case of pod/node failures.
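As a rough sketch of the queue-and-worker idea, here using a Redis list as the queue (the slide does not prescribe a specific broker; RabbitMQ or Kafka would fit the same pattern), with hypothetical host and key names:

```python
import json
import redis

# Hypothetical Redis-backed work queue shared by producer and worker pods.
r = redis.Redis(host="task-queue", port=6379, decode_responses=True)
QUEUE_KEY = "tasks"

def enqueue(task: dict) -> None:
    """Producer side: push the task and return immediately, keeping web pods responsive."""
    r.rpush(QUEUE_KEY, json.dumps(task))

def process(task: dict) -> None:
    """Hypothetical task handler."""
    print("processing", task)

def worker_loop() -> None:
    """Worker pod: block until a task arrives, then process it.

    Because tasks live in the queue rather than in pod memory,
    they are not lost when a pod or node fails before picking them up.
    """
    while True:
        _, raw = r.blpop(QUEUE_KEY)   # blocks until a task is available
        process(json.loads(raw))
```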

4. Use distributed locking for concurrent operations (see the sketch after this list)
- Safeguards concurrent operations.
- Avoids inconsistent application state.
- Use time-based locks with an expiry time.
- Locks can be created using MySQL, MongoDB, or Redis.
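One possible shape of a time-based lock with expiry, shown here with Redis as an example backend (the slide also lists MySQL and MongoDB as options); the host and resource names are hypothetical:

```python
import uuid
import redis

r = redis.Redis(host="lock-store", port=6379, decode_responses=True)

def acquire_lock(resource: str, ttl_seconds: int = 10) -> str | None:
    """Try to take a time-based lock; the expiry prevents deadlocks if a pod dies mid-operation."""
    token = str(uuid.uuid4())
    # SET key value NX EX ttl: succeeds only if no other pod currently holds the lock.
    if r.set(f"lock:{resource}", token, nx=True, ex=ttl_seconds):
        return token
    return None

def release_lock(resource: str, token: str) -> None:
    """Release only if we still own the lock (it may have expired and been taken by another pod).

    Note: this check-then-delete is not atomic; a small Lua script would make it strictly safe.
    """
    key = f"lock:{resource}"
    if r.get(key) == token:
        r.delete(key)
```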

5. Use the K8s Horizontal Pod Autoscaler to scale up/down based on workload
- Scales the number of pods up/down based on load.
- Uses an optimal number of pods for the current load.
- Optimizes operational cost.
- Provides high availability under heavy load.

5. Use the K8s Horizontal Pod Autoscaler (continued): HPA configuration to scale an application up/down. If average CPU or memory utilization goes above 50%, new pods are provisioned (sketch below).
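A sketch of the HPA configuration the slide describes, assuming the autoscaling/v2 API and a hypothetical Deployment named session-app; the replica bounds are illustrative:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: session-app-hpa              # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: session-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50     # add pods when average CPU rises above 50%
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 50     # same threshold for memory
```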

6. Set the K8s grace period to clean up before a pod is removed (see the sketch after this list)
- The grace period is the time Kubernetes allows a pod to shut down before forcibly removing it.
- Implement a preStop hook: use the preStop lifecycle hook in the pod specification; it is executed before the container is terminated.
- Grace period: set terminationGracePeriodSeconds in the pod spec.
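A minimal pod spec sketch combining the two settings the slide mentions; the pod name, image, sleep duration, and grace period value are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: session-app                       # hypothetical pod name
spec:
  terminationGracePeriodSeconds: 60       # time Kubernetes waits before force-killing the pod
  containers:
    - name: app
      image: example/session-app:latest   # hypothetical image
      lifecycle:
        preStop:
          exec:
            # Runs before the container is terminated: a place to drain requests
            # and flush session state to the external cache.
            command: ["/bin/sh", "-c", "sleep 10"]
```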