Cloud Resource Prediction using a Hybrid GRU-LSTM Deep Learning Model
Presented by: S. Radhika, PhD Scholar (220506102003)
Under the guidance of: Dr. Sangram Keshari Swain, Dr. S. Adinarayana
Contents
•Progress of Work
•Introduction
•Objectives
•Proposed Work
•Methodology
•Experimental Analysis
•Results
•Conclusion
•References
Progress of Work
Introduction
The significant expansion of cloud computing has created a need for
advanced technologies to manage cloud resources effectively and efficiently.
Cloud workloads are dynamic and unpredictable, making resource estimation
difficult and leading to either over-provisioning or under-provisioning of resources.
Ensuring accurate resource allocation is crucial for maintaining high service
performance, cost-efficiency, and long-term sustainability in cloud
operations.
Deep learning models excel at capturing complex patterns and temporal
relationships in cloud usage data, offering significant improvements in
prediction accuracy compared to traditional statistical methods.
Efficiently managing and predicting cloud resource utilization is a critical
challenge in the dynamic cloud computing landscape.
Objectives
Design and develop resource allocation algorithms that
optimize the utilization of cloud resources
Predictive analytics: use machine learning models to forecast future
resource and energy needs, then allocate resources based on those
predictions, optimizing energy consumption while maintaining application
performance.
Hybrid Cloud and Edge Computing Optimization: Investigate
AI algorithms that optimize resource allocation across a
hybrid cloud and edge computing environment, considering
the energy efficiency of both cloud data centers and edge
devices.
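The prediction-driven allocation idea above can be sketched as a toy policy: provision each interval's capacity as the forecast usage plus a safety headroom. The forecast values and the 15% headroom factor below are hypothetical, not taken from the experiments.

```python
def allocate(forecast_usage, headroom=0.15, capacity_cap=100.0):
    """Provision capacity per interval: forecast plus headroom, capped.

    forecast_usage: predicted usage per interval (e.g. CPU %).
    headroom: hypothetical safety margin over the forecast.
    capacity_cap: hard limit of the available capacity.
    """
    return [min(u * (1.0 + headroom), capacity_cap) for u in forecast_usage]

# Example: the last forecast (95) plus headroom would exceed the cap,
# so the allocation saturates at 100.
plan = allocate([40.0, 80.0, 95.0])
```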
Proposed Work
Forecast cloud resource use with improved accuracy
and efficiency.
An advanced hybrid deep learning model combining
GRU (Gated Recurrent Unit) and LSTM (Long
Short-Term Memory) layers.
Leverage the strengths of GRU and LSTM to
capture both short-term and long-term dependencies
in sequential data.
Uses GRU and LSTM layers along with dropout
layers for regularization and dense layers for feature
extraction.
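A minimal sketch of the stacked architecture described above, in Keras. The layer sizes, dropout rate, input window length, and optimizer are illustrative assumptions, since the slides do not state the exact hyperparameters:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW = 24   # assumed: 24 past usage readings per input sequence
FEATURES = 1  # assumed: univariate CPU-usage series

model = models.Sequential([
    layers.Input(shape=(WINDOW, FEATURES)),
    layers.GRU(64, return_sequences=True),  # short-term dependencies
    layers.Dropout(0.2),                    # regularization
    layers.LSTM(32),                        # long-term dependencies
    layers.Dropout(0.2),
    layers.Dense(16, activation="relu"),    # feature extraction
    layers.Dense(1),                        # next-step usage forecast
])
model.compile(optimizer="adam", loss="mse")
```

The GRU layer returns its full sequence so the LSTM layer can process every time step; the final dense layer emits a single next-step prediction.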
Research Methodology
LSTM layers: capture long-term dependencies in sequential data, using
memory cells and gates (input, forget, output) to regulate data flow;
well suited to tasks that require understanding long-term patterns.
Dense (fully connected) layers: connect every neuron in one layer to every
neuron in the next, enhancing the model's ability to capture complex
relationships in the data.
The repeating module of an LSTM is composed
of four layers that interact with one another.
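One step of this repeating module, with its input, forget, and output gates, can be sketched in NumPy. The weight layout and dimensions here are illustrative, not the implementation used in the experiments:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM time step.

    W has shape (4*H, D+H), b has shape (4*H,), with gate rows
    ordered [input, forget, candidate, output] (an assumed layout).
    """
    H = h_prev.shape[0]
    z = W @ np.concatenate([x_t, h_prev]) + b
    i = sigmoid(z[:H])          # input gate: what to write
    f = sigmoid(z[H:2 * H])     # forget gate: what to keep
    g = np.tanh(z[2 * H:3 * H]) # candidate cell state
    o = sigmoid(z[3 * H:])      # output gate: what to expose
    c = f * c_prev + i * g      # update the memory cell
    h = o * np.tanh(c)          # new hidden state / output
    return h, c
```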
Forget gate
Input gate
Output gate
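The three gates above are usually written as follows (the standard LSTM formulation, with $\sigma$ the logistic sigmoid and $\odot$ element-wise multiplication):

```latex
\begin{align*}
f_t &= \sigma(W_f [h_{t-1}, x_t] + b_f) && \text{(forget gate)} \\
i_t &= \sigma(W_i [h_{t-1}, x_t] + b_i) && \text{(input gate)} \\
\tilde{c}_t &= \tanh(W_c [h_{t-1}, x_t] + b_c) && \text{(candidate cell state)} \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t && \text{(cell state update)} \\
o_t &= \sigma(W_o [h_{t-1}, x_t] + b_o) && \text{(output gate)} \\
h_t &= o_t \odot \tanh(c_t) && \text{(hidden state)}
\end{align*}
```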
Results
[Figure: Max CPU usage]
[Figure: Min CPU usage]
[Figure: Average CPU usage]
Performance Analysis

Metric   Value
RMSE     21359.49
MAE      117.47
MAPE     0.9212
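The three metrics in the table are the standard forecast-error measures; a small sketch of how they are computed (the arrays here are illustrative, not the experimental data):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error: penalizes large errors heavily."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mae(y_true, y_pred):
    """Mean absolute error: average error magnitude."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs(y_true - y_pred)))

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0)
```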
Conclusion
Our proposed hybrid deep learning model marks a
significant step forward in predicting cloud resource usage.
By combining GRU and LSTM layers with dropout and
dense layers, our model effectively captures the complex
patterns of cloud resource consumption.
Our model outperforms traditional prediction methods,
demonstrating high accuracy and robustness in forecasting
cloud resource usage.
This model supports better management of cloud resources, leading to
more efficient and cost-effective cloud computing operations.