eChai Developer Meetup | Cloud Native Learnings with AWS

dhavaln · 30 slides · Jul 24, 2024

About This Presentation

This presentation was part of the eChai Developer Meetup. In it, I highlight some of our recent use cases and how we built the solutions on top of AWS services.


Slide Content

Cloud Native
Learnings with AWS
Dhaval Nagar, AppGambit

About Me
●Founder - AppGambit, AWS Consulting Partner
●Partner - Vizabli Inc.
●AWS Serverless Hero
●AWS Certification Subject Matter Expert
●AWS Community Leader, Surat
●Certified in AWS, Google Cloud, Docker and Kubernetes
●Web3 Enthusiast ⛓
●Practicing Barista ☕

Our Recent Implementations
●Building a Cost-effective, Low-Maintenance Serverless Data Pipeline
●Optimising Cloud Resources with minimal refactoring and process disruption
●Building a micro-service architecture that can work in heterogeneous environments

Eco-system of Amazon Web Services
●Modern Compute
○EC2 - Virtual Machines
○ECS and EKS - Container Services
○Lambda - Serverless Compute
●Modern Databases
○DynamoDB
○Aurora Serverless (MySQL and Postgres)
○RDS Proxy for pooled, managed database connections
●Storage Service - AWS S3
●Developer Tools
○CodeCommit
○CodeBuild, CodePipeline, CodeDeploy
○AWS SDKs in various languages
●Cloud Observability and Monitoring
○CloudWatch Logs, Events
○CloudTrail

AGONICS - Optimising Cloud Assets
●Storage is the most critical part of any application
○Raw Data, Logs, Media Assets, etc.
○The usual pattern is store and forget
○Many times the data sits on the same VM where the application is running
○Increasing data means increasing disk size
○At times, there are no metrics on which data is used when
●Most applications use Multiple Environments
○Dev, QA, Customer QA, Production
○On average, non-Production environments sit idle 33% of the time
○As applications become more complicated, they depend on many services that also run with minimal usage
○Many Cloud Services still charge per second, irrespective of actual usage

●S3 is the primary storage for their application
●Cloud storage is cheap, but you can still overpay
●As the data grows, it becomes extremely difficult to monitor unless you have a dedicated team

●After understanding their core application, we realised the data follows a very standard pattern
●Each process receives input data and generates output data
●Once processed, the input data becomes less important
●After a certain period, the input data loses its intrinsic value in the application

●Lifecycle rules help to simplify the data transition
●S3 has various Storage Tiers for Hot, Warm and Cold data types
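As an illustration, a lifecycle configuration along the following lines moves aging input data from the hot tier into cheaper storage classes and eventually expires it. This is a minimal boto3 sketch; the bucket name, prefix, transition periods, and storage classes are assumptions, not the actual Agonics values.

import boto3

s3 = boto3.client("s3")

# Transition processed input data to cheaper tiers and eventually expire it.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-data-bucket",            # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "age-out-processed-input",
                "Filter": {"Prefix": "input/"},   # placeholder prefix
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},  # warm tier
                    {"Days": 90, "StorageClass": "GLACIER"},      # cold tier
                ],
                "Expiration": {"Days": 365},
            }
        ]
    },
)

The same rules can equally be declared through the console, CloudFormation, or Terraform; the API call is shown only to make the tier transitions concrete.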

●With growing demand and a growing team, Agonics started utilising a lot of AWS services
●Many of these services have low or infrequent usage
●But these services have a high per-second cost
●We decided to deploy Schedule Rules to scale in or hibernate resources based on a time schedule (see the sketch after this list)
●Resource Tags are an important feature through which we can attach metadata to Cloud resources like EC2 Instances, RDS Databases, Load Balancers, etc.
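A schedule rule can be created directly in EventBridge. The following is a minimal boto3 sketch, assuming a Lambda function called stop-tagged-resources; the rule name, cron expression, account, and region in the ARN are all illustrative placeholders.

import boto3

events = boto3.client("events")

# Scale-in rule: fire every weekday evening (EventBridge cron expressions are in UTC).
events.put_rule(
    Name="scale-in-dev-environments",
    ScheduleExpression="cron(0 14 ? * MON-FRI *)",
    State="ENABLED",
)

# Point the rule at the Lambda function that does the actual scale-in.
# The ARN is a placeholder; the function also needs a resource-based
# permission (lambda add_permission) so EventBridge is allowed to invoke it.
events.put_targets(
    Rule="scale-in-dev-environments",
    Targets=[{
        "Id": "stop-tagged-resources",
        "Arn": "arn:aws:lambda:ap-south-1:123456789012:function:stop-tagged-resources",
    }],
)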

●Lambda functions, along with CloudWatch Events or EventBridge, allow you to trigger Serverless compute processes on a schedule
●Combined with AWS Resource Tags, we can implement fairly complicated scheduling rules that let the team operate different environments on their own schedule
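The scheduled Lambda itself can then filter resources by tag. Below is a minimal handler sketch for EC2 instances; the tag key "Environment" and the values "dev"/"qa" are assumptions for illustration.

import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    # Find running instances tagged as non-production.
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:Environment", "Values": ["dev", "qa"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]

    instance_ids = [
        instance["InstanceId"]
        for reservation in reservations
        for instance in reservation["Instances"]
    ]

    # Stop everything that matched; a mirror rule/function can start them
    # again in the morning.
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return {"stopped": instance_ids}

The same pattern extends to other tagged resources, for example stopping RDS instances with stop_db_instance or reducing ECS service desired counts, depending on which resources carry the tags.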

Learnings
●Understand the lifecycle of your application data
●Keep the important data outside of the compute environment (stateless compute)
●Monitor and measure the data usage
●If data must be retained (compliance, backups), batch and compress it to optimise further
●Your Cloud usage will grow as your application or team grows, but not all usage is CRITICAL
●Identify opportunities to TURN OFF the SERVER LIGHTS

Listen4Good - Building Serverless Data Pipeline
●500+ NGOs under Listen4Good in the United States
●They use SurveyMonkey as their survey platform to create and distribute surveys that collect performance feedback for NGOs
●NGOs create hundreds of surveys with thousands of respondents
●L4G uses quantitative analysis to generate performance reports
●SurveyMonkey is a solid platform for survey management but Extremely Expensive when it comes to providing custom solutions
●The problem was to extract meaningful data from SurveyMonkey and build the analysis reports independently
●Their team wanted a No-Ops solution

●We decided to go for a Serverless option
●The use case was predictable and had no dependencies on particular hardware or OS-level libraries
●AWS Lambda is the core compute service in the AWS Serverless ecosystem
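At its simplest, such a pipeline is a scheduled Lambda that pulls responses from the SurveyMonkey API and stages the raw JSON in S3 for downstream analysis. Below is a minimal sketch, assuming a bulk-responses endpoint and environment variables SM_ACCESS_TOKEN and RAW_BUCKET; all of these names are illustrative, not the actual L4G implementation.

import os
import urllib.request

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Pull the raw responses for one survey and stage them in S3 for later analysis.
    # The endpoint, environment variable names, and key layout are illustrative assumptions.
    survey_id = event["survey_id"]
    url = f"https://api.surveymonkey.com/v3/surveys/{survey_id}/responses/bulk"
    request = urllib.request.Request(
        url, headers={"Authorization": f"Bearer {os.environ['SM_ACCESS_TOKEN']}"}
    )
    with urllib.request.urlopen(request) as response:
        payload = response.read()

    s3.put_object(
        Bucket=os.environ["RAW_BUCKET"],
        Key=f"raw/{survey_id}.json",
        Body=payload,
    )
    return {"survey_id": survey_id, "bytes": len(payload)}

Downstream steps, such as flattening the responses and aggregating them per survey, can be further Lambda functions chained through S3 events or Step Functions.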

Learnings
●Each problem is different
●Explore different ways to solve the problem in an optimal way
●In the last 16 months, the solution has had 0 incidents and requires no dedicated resource for management
●AWS is a leader in the Serverless space and has a growing list of services in the Serverless ecosystem

Vizabli - Designing Hybrid Solution
●Vizabli is an Acute Care Engagement platform
●The solution includes hardware devices along with the software stack
●Due to compliance and data privacy, the solution is preferred to run inside the Hospital's private network
●However, the solution is eventually required to run on the AWS Cloud as well
●Our challenge was to design a scalable solution that can work easily across heterogeneous environments

●Containers are a foundational technology in the modern software development process
●Docker is still a very popular way to build application containers
●We PREFER to package our software irrespective of the technology or infrastructure we use
●Writing the whole application as a coordinated set of service containers helped us create a modular solution

●Containers can be orchestrated depending on your target environment
●We use Docker Compose for our minimal setup (see the sketch below)
●And Kubernetes for our High-Availability Production setup
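As an illustration, a minimal Docker Compose file for a small two-service stack could look like the following; the service names, images, ports, and volumes are placeholders, not the actual Vizabli composition.

# Illustrative two-service compose file; names and images are placeholders.
services:
  api:
    build: ./api
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:

The same container images can then be described as Deployments and Services in Kubernetes manifests for the high-availability setup, so the packaging stays identical across environments.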

Learnings
●Understand your target environment and design flexible services
●It is very likely that the same software may be required to run in different environments in the future
●AWS Developer Tools is a set of services that helps lay out DevOps practices for any type of software
●Most of these services are pay-per-use and easy to set up
●A consistent build and deployment process helps improve developer productivity and the software release cycle

Cloud Governance
●Isolate each Environment
●Use IAM Roles for each user and resource, and prefer a least-privilege permission model
●Separate Dev/Staging vs Production
●Use AWS Organizations with Service Control Policies to limit unwanted usage
●ALWAYS PACKAGE your software
●Practice the 12 Factor App methodology https://12factor.net/
●Prefer Process/Practice Automation over human interaction
●Keep an eye on your Cloud spending

Thank You!