Week 1 Lecture Materials: Petroleum Python

Uploaded by kalaiselvansolairaj7, Nov 02, 2025

About This Presentation

These materials provide an introduction to petroleum data analytics using Python.


Slide Content

COURSE TYPE: Elective
COURSE LEVEL: Undergraduate/Postgraduate

COURSE LAYOUT
Week 1: Significance of Python and Petroleum Data Analysis. Introduction to Python and Programming Fundamentals: environment set-up, installation of Python and Anaconda, Python packages, basics of data structures.
Week 2: Programming fundamentals: data types (immutable and mutable), operator types, loops, functions, conditions, objects, and classes.
Week 3: Implementation of Python libraries: Pandas: environment set-up, Pandas Series, DataFrame, reading CSV files, cleaning data, correlations, plotting, Panel, basic functionality, descriptive statistics, function application, iteration, and sorting.
Week 4: Implementation of Python libraries: NumPy: introduction and environment set-up, data types, arrays, indexing and slicing, binary operators, string functions, mathematical functions, arithmetic operations, statistical functions, sorting, searching, and counting functions. Plotting in Python: installation of Matplotlib, Pyplot, plotting, markers, lines, labels and titles, grids, subplots, scatter plots, bar charts, histograms, and pie charts.

Week 5: Data wrangling and preprocessing on reservoir/unconventional resources data: understanding data wrangling using subsetting, filtering, and grouping; detecting outliers and handling missing values; concatenating, merging, and joining.
Week 6: Data wrangling and preprocessing on reservoir/unconventional resources data: encoding categorical data, splitting a dataset into training and test data, feature scaling.
Week 7: Data manipulation: data cleaning, data preprocessing, feature engineering.
Week 8: Algorithms and application to petroleum data: supervised learning.
Week 9: Algorithms and application to petroleum data: unsupervised learning.
Week 10: Regression for petroleum engineering applications: linear regression and multiple linear regression used for regression and classification.
Week 11: Regression for petroleum engineering applications: logistic regression and decision trees for regression and classification.
Week 12: Regression for petroleum engineering applications: KNN used for regression and classification; overfitting and underfitting.

BOOKS AND REFERENCES
1. Python for Everybody: Exploring Data in Python, by Dr. Charles R. Severance. Shroff Publishers, First edition (10 October 2017). ISBN-13: 978-9352136278.
2. Machine Learning for Subsurface Characterization, by Siddharth Misra, Hao Li, and Jiabo He. Gulf Professional Publishing, 1st edition (12 October 2019).
3. Applied Statistical Modeling and Data Analytics: A Practical Guide for the Petroleum Geosciences, by Srikanta Mishra and Akhil Datta-Gupta. Elsevier, 1st edition (27 October 2017).
4. Python Data Science Handbook: Essential Tools for Working with Data, by Jake VanderPlas. O'Reilly Media, 1st edition (21 November 2016). ISBN: 1491912057.
5. Machine Learning, by Tom M. Mitchell. McGraw Hill Education, First edition (1 July 2017). ISBN-13: 978-1259096952.

Course Objective: To provide introductory knowledge of Python and its applications to petroleum data analysis.

Learning Outcomes:
- Learn to implement Python programming for petroleum data analysis
- Understand and implement various statistical methods for petroleum data analysis
- Implement advanced algorithms for executing petroleum data related projects

Week 1, Lecture 1: Significance of Python and Petroleum Data Analysis

Topics to be covered in Week 1: Significance of Python and Petroleum Data Analysis. Introduction to Python and Programming Fundamentals: environment set-up, installation of Python and Anaconda, Python packages, basics of data structures.

In the evolving landscape of the energy sector, particularly in the petroleum industry, data has become the new oil—fueling strategic decisions, operational efficiency, and innovations. The convergence of data science and petroleum engineering has led to powerful transformations, enabling more accurate forecasts, efficient resource management, and cost-effective operations. At the heart of this transformation is Python, a high-level programming language that is becoming the industry standard for data analysis and machine learning.

1. The Data-Driven Era in Petroleum Engineering

Petroleum engineering is inherently a data-intensive domain.

Importance of Data Management in the Petroleum Industry
The petroleum industry generates vast amounts of data from various sources, including sensors, surveys, and operational systems. Effective data management is critical for:
- Improving operational efficiency and reducing costs
- Enhancing decision-making through data-driven insights
- Optimizing asset performance and maximizing returns
- Ensuring regulatory compliance and mitigating risks

From exploration and drilling to production and reservoir management, each phase generates massive volumes of data. These include:

1. Operational Data:
- Production data: volumes of oil, gas, and other products extracted from wells, as well as information on well performance and reservoir characteristics.
- Pipeline data: monitors the flow, pressure, and integrity of pipelines used to transport petroleum products.
- Downtime and maintenance data: records the duration and causes of equipment downtime and maintenance activities, crucial for optimizing operational efficiency.
- Drilling data: tracks drilling progress, downhole conditions, and borehole characteristics during well construction.
- Refining data: process data, yield data, and product quality data from refining operations.
- Supply chain data: tracks inventory levels, transportation routes, and logistics associated with petroleum products.

2. Environmental Data:
- Water and air quality data: monitors levels of pollutants and other contaminants in the surrounding environment.
- Emissions data: tracks greenhouse gas emissions and other atmospheric pollutants from petroleum operations.
- Environmental impact data: assesses the overall impact of petroleum activities on the surrounding ecosystem.

3. Market Data:
- Commodity prices: records the fluctuating prices of crude oil, refined products, and other petroleum-related commodities.
- Supply and demand trends: analyzes the balance between supply and demand for petroleum products.
- Market trends data: tracks broader market conditions and economic factors that influence the petroleum industry.

4. Safety Data:
- Incident data: records accidents, near misses, and other safety-related incidents.
- Injury rates: tracks the number and severity of injuries sustained by workers.
- Equipment failure data: documents failures of machinery and equipment, which can indicate potential safety hazards.

5. Geological and Geophysical Data:
- Seismic data: used to create images of subsurface structures and identify potential hydrocarbon reservoirs.
- Well log data: characterizes the geology and properties of formations encountered during drilling.
- Reservoir data: used to model and simulate the behavior of petroleum reservoirs.
- Geologic models: three-dimensional representations of subsurface formations.

However, with the advent of digital transformation and the Fourth Industrial Revolution, there is a pressing need to integrate big data analytics, machine learning, and automation. This is where Python plays a crucial role.

2. Why Python?

Python has gained enormous popularity among data scientists and engineers due to its simplicity, readability, vast library ecosystem, and vibrant community. Its advantages in petroleum data analysis include:

a. Open-Source and Free: Python is free to use, which reduces the cost barriers associated with proprietary software like MATLAB or SAS.

b. Extensive Libraries: Libraries like NumPy, pandas, Matplotlib, SciPy, scikit-learn, TensorFlow, and PyTorch allow engineers to perform everything from basic data manipulation to complex neural network modeling.

c. Easy Integration: Python can be integrated with SQL databases, Excel, cloud services, and APIs. It also supports legacy systems, which are still prevalent in the oil and gas industry.

d. Community Support and Documentation: Python has strong community support, especially through platforms like GitHub, Stack Overflow, and dedicated petroleum data science forums.

3. Applications of Python in Petroleum Data Analysis

Python's applications in the petroleum industry are vast and diverse. Here are some key use cases:

a. Exploratory Data Analysis (EDA): Python enables geoscientists and engineers to perform EDA using libraries like pandas, seaborn, and Matplotlib. This helps in identifying trends, anomalies, and relationships within the data, providing initial insights before building predictive models.

b. Predictive Modeling: Machine learning models built using Python (e.g., via scikit-learn or XGBoost) are used for:
- Predicting reservoir performance
- Estimating production rates
- Predicting equipment failures (predictive maintenance)
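The EDA use case can be sketched in a few lines of pandas. This is a minimal illustration on synthetic production records with hypothetical column names, not real field data:

```python
import pandas as pd

# Hypothetical daily production records for three wells (synthetic data).
df = pd.DataFrame({
    "well": ["W1", "W1", "W2", "W2", "W3", "W3"],
    "oil_bbl": [1200, 1150, 980, 1005, 1430, 1390],    # oil rate, barrels/day
    "water_cut": [0.12, 0.14, 0.22, 0.21, 0.08, 0.09]  # water fraction
})

# Summary statistics: the usual first step of EDA.
summary = df[["oil_bbl", "water_cut"]].describe()

# A pairwise correlation hints at relationships worth modelling later.
corr = df["oil_bbl"].corr(df["water_cut"])

# Per-well averages via grouping.
mean_oil = df.groupby("well")["oil_bbl"].mean()
```

In this toy sample, the correlation comes out negative (higher water cut coincides with lower oil rate), which is exactly the kind of quick insight EDA is meant to surface before any model is built.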

c. Time Series Forecasting: In production monitoring and price forecasting, Python's time series modeling capabilities (using statsmodels, Prophet, etc.) help in anticipating trends and making data-driven decisions.

d. Image and Signal Processing: Python's image processing libraries, like OpenCV and PIL, combined with geophysical data, can enhance the interpretation of seismic images and well log plots.

e. Reservoir Simulation and Modeling: Although reservoir simulation is traditionally performed using specialized software (like Eclipse or CMG), Python is increasingly used for pre- and post-processing of simulation data, as well as for sensitivity analysis.
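As a toy illustration of the time-series idea in (c), here is a pure-Python moving-average sketch on synthetic monthly rates; a production workflow would typically reach for statsmodels or Prophet instead:

```python
# Smooth recent production with a trailing moving average and use the
# last smoothed value as a naive next-month estimate (synthetic data).

def moving_average(series, window):
    """Trailing moving average over the last `window` points."""
    return [
        sum(series[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(series))
    ]

rates = [510, 498, 505, 490, 484, 470, 465, 455]  # declining well, bbl/day
smoothed = moving_average(rates, 3)
naive_forecast = smoothed[-1]  # naive next-month estimate
```

Even this crude smoother makes the downward trend obvious; proper models add seasonality, confidence intervals, and exogenous drivers on top of the same idea.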

f. Automation and Workflow Optimization: Python scripts can automate repetitive tasks such as report generation, data cleaning, and integration between software tools (like Petrel, Excel, and ArcGIS).

4. Case Studies and Real-World Examples

a. Shell and Python for Predictive Maintenance: Shell has adopted Python for its predictive maintenance systems, using machine learning to monitor equipment health and reduce downtime.

b. BP's Use of Python in Reservoir Engineering: BP engineers use Python for reservoir performance prediction, leveraging historical production data and physics-informed models.

c. Schlumberger and Digital Solutions: Schlumberger's DELFI environment supports Python scripting, allowing petroleum engineers to build custom machine learning models for various subsurface analysis tasks.
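A minimal sketch of the data-cleaning automation described in (f), using only the standard library; the file format and column names here are hypothetical, not a real export format:

```python
import csv
import io

# A raw sensor export with a missing reading (hypothetical format).
RAW = """timestamp,pressure_psi
2024-01-01,3050
2024-01-02,
2024-01-03,2980
"""

def clean_rows(text):
    """Drop rows with missing readings and convert pressures to float."""
    rows = []
    for row in csv.DictReader(io.StringIO(text)):
        if row["pressure_psi"].strip():            # skip missing readings
            row["pressure_psi"] = float(row["pressure_psi"])
            rows.append(row)
    return rows

cleaned = clean_rows(RAW)
```

A script like this, pointed at a folder of daily exports and scheduled to run automatically, replaces a repetitive manual spreadsheet chore, which is the essence of workflow automation in practice.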

5. Challenges in Adopting Python in the Petroleum Industry

Despite its advantages, there are challenges associated with integrating Python into petroleum data workflows:

a. Data Quality and Standardization: Legacy data formats, missing values, and inconsistent logging pose hurdles for machine learning applications.

b. Skill Gap: Many traditional petroleum engineers may lack programming experience, requiring cross-training or hiring of data science specialists.

c. Computational Load: Large-scale simulations and deep learning models may require high-performance computing resources, adding complexity to Python-based workflows.

6. Education and Training: Bridging the Gap

To bridge the skill gap, academic institutions and industry organizations are introducing interdisciplinary programs that combine petroleum engineering and data science. Python is central to these curricula. Platforms like Coursera, Udemy, and SPE (Society of Petroleum Engineers) now offer Python-centric training specifically for the oil and gas domain.

7. The Future Outlook

The petroleum industry is under increasing pressure to improve operational efficiency, reduce environmental impact, and maximize resource recovery. Python and data analysis will continue to play a vital role in:
- Digital Twin Development: creating real-time digital replicas of physical assets for simulation and optimization.
- Carbon Capture and Storage (CCS): modeling CO₂ injection and monitoring subsurface behavior.
- Enhanced Oil Recovery (EOR): optimizing injection strategies using machine learning.
- Sustainability Analytics: tracking emissions, optimizing energy use, and supporting ESG reporting.

As AI becomes more integrated into upstream and downstream operations, Python's role will only grow. Its adaptability ensures it remains relevant as new data types, like real-time streaming data and satellite imagery, become more prevalent.

Conclusion

Python has become an indispensable tool in petroleum data analysis, bridging the gap between engineering expertise and modern data science. Its simplicity, flexibility, and power enable petroleum professionals to unlock insights from complex datasets, improve predictions, and drive smarter decision-making. As the industry continues to embrace digital transformation, the synergy between Python and petroleum data analysis will be pivotal in shaping a more efficient, innovative, and sustainable energy future.

Exploratory Data Analysis

EDA is a process through which an available dataset is examined to discover patterns, detect irregularities, test hypotheses, and statistically check assumptions. The main purpose of EDA is to understand what the data reveal before modeling or formulating hypotheses. EDA was promoted among statisticians by John Tukey (Mukhiya & Ahmed, 2020). Determining data requirements, data collection, data processing, and data cleaning are the stages that precede EDA. Appropriate decisions need to be made from the data collected about different fields, which are primarily stored in electronic databases. Data mining is the process that gives insight into raw data, and EDA forms the first stage of data mining.

Different approaches to data analysis

There are several approaches to data analysis; three important ones, viz. classical data analysis, exploratory data analysis, and the Bayesian data analysis approach, are shown in the following figure.

Stages of EDA

Mukhiya & Ahmed (2020) put forth four stages of EDA:
1. Definition of the problem: defining the primary objective of the analysis alongside the main deliverables, roles and responsibilities, the present state of the data, a timeline, and the cost-to-benefit ratio.
2. Preparation of data: the characteristics of the data are comprehended, the dataset is cleaned, and irrelevant data are deleted.
3. Analyzing the data: the data are summarized, hidden correlations are derived, predictive models are developed and evaluated, and summary tables are generated.
4. Representation of results: finally, the findings are presented to the target audience in the form of graphs and summary tables.

Crude Oil Consumption Forecasting Using Classical and Machine Learning Methods (Ref-1)

The global oil market is the most important of all the world energy markets. Since crude oil is a non-renewable source, its quantity is fixed and limited. To manage the available oil reserves, it helps to have an estimate of future consumption requirements beforehand. This paper describes methods to forecast crude oil consumption for the next 5 years using the past 17 years' data (2000-2017). The decision-making process comprised: (1) preprocessing of the dataset, (2) designing the forecasting model, (3) training the model, (4) testing the model on the test set, and (5) forecasting results for the next 5 years. The proposed methods fall into two categories: (a) classical methods and (b) machine learning methods. These were applied to global data as well as to three major countries: the USA, China, and India. The results showed that the best accuracy, 97.8%, was obtained with polynomial regression.
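The polynomial-regression approach the paper reports can be sketched with NumPy's least-squares polynomial fit. The numbers below are synthetic placeholders, not the paper's dataset:

```python
import numpy as np

# Synthetic yearly consumption (arbitrary units) for years 2000-2017,
# generated from a known quadratic so the fit is easy to verify.
t = np.arange(0, 18)                      # years since 2000
consumption = 76 + 1.1 * t + 0.02 * t**2  # hypothetical trend

# Fit a degree-2 polynomial by least squares, then extrapolate 5 years.
coeffs = np.polyfit(t, consumption, deg=2)
future_t = np.arange(18, 23)              # 2018-2022
forecast = np.polyval(coeffs, future_t)
```

Centering the x-axis on "years since 2000" rather than raw calendar years keeps the Vandermonde matrix well conditioned; with raw years the same fit can lose precision.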

Artificial Intelligence-Based Prediction of Crude Oil Prices Using Multiple Features under the Effect of Russia–Ukraine War and COVID-19 Pandemic (Ref-2)

The effect of the COVID-19 pandemic on crude oil prices had just faded when the Russia–Ukraine war brought a new crisis. In this paper, a new application is developed that predicts the change in crude oil prices by incorporating these two global effects. Unlike most existing studies, this work uses a dataset collected over twenty-two years containing seven different features, such as crude oil opening, closing, intraday highest, and intraday lowest values. The work applies cross-validation to predict crude oil prices using machine learning algorithms (support vector machine, linear regression, and random forest) and deep learning algorithms (long short-term memory and bidirectional long short-term memory), and the results obtained by the two families of algorithms are compared. A high-performance estimate is achieved, with an average mean absolute error value of 0.3786.

A subsurface machine learning approach at hydrocarbon production recovery & resource estimates for unconventional reservoir systems: Making subsurface predictions from multidimensional data analysis (Ref-3)

An innovative, practical, and successful subsurface machine learning workflow was introduced that utilizes any structured reservoir, geologic, engineering, and production data. This workflow, called the Artificial Learning Integrated Characterization Environment (ALICE), has changed the way Chevron manages its tight rock and unconventional assets. The workflow guides users from framing and data gathering to geospatial assembly, quality control and ingestion, then on through machine-learning feature selection, modeling, validation, and acceptance for results reporting. The ultimate products of the workflow can be visualized in both map and log (depth) space to help identify key areas for optimization or landing zones, respectively. The results from ALICE have been used within Chevron to aid exploration review assessments, type curve adjustments, landing strategies, well performance lookbacks, and more.

Prediction method and application of shale reservoirs core gas content based on machine learning (Ref-4)

To improve the recovery of shale gas, it is very important to accurately determine the gas content of shale reservoirs. Traditional methods such as empirical formulas and regression fitting suffer from low accuracy, strong local limitations, and poor adaptability to seismic data. Based on machine learning (ML) algorithms such as support vector regression, decision trees, random forests, BP neural networks, and convolutional neural networks, an intelligent prediction method for shale reservoir core gas content was established using three parameters: P-wave velocity (Vp), S-wave velocity (Vs), and density (RHOB). The method was compared with traditional predictions, core tests, and other gas content data, which verified its effectiveness and high precision. Additionally, among the support vector regression (SVR), decision tree (DT), random forest (RF), BP neural network (BP), and convolutional neural network (CNN) algorithms, support vector regression was the most stable, robust, and accurate.

Studying the direction of hydraulic fracture in carbonate reservoirs: Using machine learning to determine reservoir pressure (Ref-5) Hydraulic fracturing (HF) is an effective way to intensify oil production, which is currently widely used in various conditions, including complex carbonate reservoirs. In the conditions of the field under consideration, hydraulic fracturing leads to a significant differentiation of technological efficiency indicators, which makes it expedient to study the patterns of crack formation in detail. The developed indirect method was used for this purpose, the reliability of which was confirmed by geophysical methods. During the analysis, it was found that in all cases, the crack is oriented in the direction of the section of the development system element characterized by the maximum reservoir pressure. At the same time, the reservoir pressure values for all wells were determined at one point in time (at the beginning of HF) using machine learning methods. The reliability of the machine learning methods used is confirmed by the high convergence with the actual (historical) reservoir pressures obtained during hydrodynamic studies of wells. The obtained conclusion about the influence of the reservoir pressure on the patterns of fracture formation should be taken into account when planning hydraulic fracturing under the conditions studied.

Machine-learning-assisted high-temperature reservoir thermal energy storage optimization (Ref-6)

High-temperature reservoir thermal energy storage (HT-RTES) has the potential to become an indispensable component in achieving a net-zero carbon economy, given its capability to balance the intermittent nature of renewable energy generation. In this study, a machine-learning-assisted computational framework is presented to identify HT-RTES sites with optimal performance metrics by combining physics-based simulation with stochastic hydrogeologic formation and thermal energy storage operation parameters, artificial neural network regression of the simulation data, and genetic-algorithm-enabled multi-objective optimization. Neural-network-based surrogate models are developed for the two scenarios and applied to generate the Pareto fronts of HT-RTES performance for four potential HT-RTES sites. The Pareto optimal solutions indicate that HT-RTES performance is operation-scenario (i.e., fluid cycle) and reservoir-site dependent, and that the performance metrics have competing effects for a given site and fluid cycle. The developed neural network models can be applied to identify suitable sites for HT-RTES, and the proposed framework sheds light on the design of resilient HT-RTES systems.

Application of machine learning to predict CO2 trapping performance in deep saline aquifers (Ref-7)

Deep saline formations are considered potential sites for geological carbon storage. To better understand the CO2 trapping mechanism in saline aquifers, it is necessary to develop robust tools to evaluate CO2 trapping efficiency. This paper introduces the application of Gaussian process regression (GPR), support vector machine (SVM), and random forest (RF) to predict CO2 trapping efficiency in saline formations. First, uncertainty variables, including geologic parameters, petrophysical properties, and other physical characteristics, were utilized to create a training dataset. In total, 101 reservoir simulations were then performed, and residual trapping, solubility trapping, and cumulative CO2 injection were analyzed. The results indicated that the three machine learning (ML) models, ranked from high to low performance (GPR, SVM, and RF), can be selected to predict CO2 trapping efficiency in deep saline formations. The GPR model had excellent CO2 trapping prediction efficiency, with the highest correlation factor (R² = 0.992) and the lowest root mean square error (RMSE = 0.00491). The predictive models also showed good agreement between the simulated field and the predicted trapping index. These findings indicate that GPR ML models can support numerical simulation as a robust predictive tool for estimating CO2 trapping performance in the subsurface.

Machine Learning-Assisted Prediction of Oil Production and CO2 Storage Effect in CO2-Water-Alternating-Gas Injection (CO2-WAG) (Ref-8)

In recent years, CO2 flooding has emerged as an efficient method for improving oil recovery, with the added advantage of storing CO2 underground. As one of the promising types of CO2-enhanced oil recovery (CO2-EOR), CO2 water-alternating-gas injection (CO2-WAG) can suppress the CO2 fingering and early breakthrough problems that occur during oil recovery by CO2 flooding. However, the evaluation of CO2-WAG is strongly dependent on the injection parameters, which makes numerical simulation computationally expensive. In this work, machine learning is used to predict how well CO2-WAG will perform under different injection parameters. A total of 216 models were built using CMG numerical simulation software to represent CO2-WAG development scenarios with various injection parameters; 70% were used as training sets and 30% as testing sets. A random forest regression algorithm was used to predict CO2-WAG performance in terms of oil production, CO2 storage amount, and CO2 storage efficiency. The CO2-WAG period, CO2 injection rate, and water-gas ratio were chosen as the three main injection parameters. The predicted values of the test set were very close to the true values: the average absolute prediction deviations of cumulative oil production, CO2 storage amount, and CO2 storage efficiency were 1.10%, 3.04%, and 2.24%, respectively. Furthermore, it takes only about 10 s to predict the results of all 216 scenarios using machine learning, while the CMG simulation takes about 108 min. This demonstrates that the proposed machine learning method can rapidly predict CO2-WAG performance with high accuracy and high computational efficiency under various injection parameters. This work gives further insight into the optimization of injection parameters for CO2-EOR.

References
1. Fatima, Z., Kumar, A., Bhargava, L. and Saxena, A., 2019. Crude oil consumption forecasting using classical and machine learning methods. International Journal of Knowledge-Based Computer Systems, 7(1), pp. 10-18.
2. Jahanshahi, H., Uzun, S., Kaçar, S., Yao, Q. and Alassafi, M.O., 2022. Artificial intelligence-based prediction of crude oil prices using multiple features under the effect of Russia–Ukraine war and COVID-19 pandemic. Mathematics, 10(22), p. 4361.
3. Prochnow, S.J., Raterman, N.S., Swenberg, M., Reddy, L., Smith, I., Romanyuk, M. and Fernandez, T., 2022. A subsurface machine learning approach at hydrocarbon production recovery & resource estimates for unconventional reservoir systems: Making subsurface predictions from multidimensional data analysis. Journal of Petroleum Science and Engineering, 215, p. 110598.
4. Luo, S., Xu, T. and Wei, S., 2022. Prediction method and application of shale reservoirs core gas content based on machine learning. Journal of Applied Geophysics, 204, p. 104741.
5. Martyushev, D.A., Ponomareva, I.N. and Filippov, E.V., 2023. Studying the direction of hydraulic fracture in carbonate reservoirs: Using machine learning to determine reservoir pressure. Petroleum Research, 8(2), pp. 226-233.
6. Jin, W., Atkinson, T.A., Doughty, C., Neupane, G., Spycher, N., McLing, T.L., Dobson, P.F., Smith, R. and Podgorney, R., 2022. Machine-learning-assisted high-temperature reservoir thermal energy storage optimization. Renewable Energy, 197, pp. 384-397.
7. Thanh, H.V. and Lee, K.K., 2022. Application of machine learning to predict CO2 trapping performance in deep saline aquifers. Energy, 239, p. 122457.
8. Li, H., Gong, C., Liu, S., Xu, J. and Imani, G., 2022. Machine learning-assisted prediction of oil production and CO2 storage effect in CO2-water-alternating-gas injection (CO2-WAG). Applied Sciences, 12(21), p. 10958.

1. Automation of Data Processing: Python allows petroleum engineers to automate the cleaning, sorting, and preprocessing of large datasets from sensors, logs, and drilling reports.
2. Handling Big Data: Python libraries like Pandas and Dask efficiently handle the vast amounts of data generated in upstream and downstream petroleum activities.
3. Real-Time Monitoring: With tools like PyDash, Flask, or Plotly Dash, Python can support dashboards for real-time reservoir and production monitoring.
4. Machine Learning Applications: Python libraries (like scikit-learn and TensorFlow) enable predictive maintenance, reservoir modeling, and production forecasting using machine learning.
5. Cost Optimization: Data analysis with Python helps identify inefficiencies in drilling, production, and transportation, saving time and money.

6. Reservoir Simulation: Python can be used alongside simulation tools (e.g., Eclipse or CMG) to process and visualize reservoir models and simulation outputs.
7. Enhanced Decision-Making: Python analytics empower petroleum engineers to make data-driven decisions on well placement, enhanced oil recovery, and field development.
8. Integration with GIS and Remote Sensing: Python integrates with spatial analysis tools (like ArcPy and GeoPandas) to analyze geospatial data important for exploration.
9. Open-Source Ecosystem: Python is open source, reducing software costs and enabling easy sharing of code and workflows in petroleum research and operations.
10. Customizable Solutions: Python allows the creation of domain-specific tools, plugins, and apps tailored to the unique challenges of petroleum engineering.
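As an example of the domain-specific tools mentioned above, petroleum engineers often script standard textbook models directly. The sketch below implements Arps exponential decline, q(t) = qi * exp(-D * t), with illustrative (not field-measured) values for the initial rate qi and decline constant D:

```python
import math

def exponential_decline(qi, D, t):
    """Arps exponential decline: rate at time t (same time units as D)."""
    return qi * math.exp(-D * t)

# Illustrative numbers: 1000 bbl/day initial rate, 15%/year decline.
q0 = exponential_decline(qi=1000.0, D=0.15, t=0)  # initial rate
q5 = exponential_decline(qi=1000.0, D=0.15, t=5)  # rate after 5 years
```

Wrapping such formulas in small, tested functions is what turns one-off spreadsheet calculations into reusable, shareable engineering tools.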

Week 1, Lecture 2: Significance of Python and Petroleum Data Analysis

Week 1: Significance of Python and Petroleum Data Analysis. Introduction to Python and Programming Fundamentals: environment set-up, installation of Python and Anaconda, Python packages, basics of data structures.

Recap of the last lecture: the importance of ML and data analytics; why Python; how and where machine learning is applied to different petroleum-related problems (from published research papers).

Types of Data Analysis Techniques Data analysis techniques  have significantly evolved, providing a comprehensive toolkit for understanding, interpreting, and predicting data patterns. These methods are crucial in extracting actionable insights from data, enabling organizations to make informed decisions.

Descriptive Data Analysis
Descriptive analysis is the starting point of the analytic journey and strives to answer questions about what happened. The technique involves ordering, manipulating, and interpreting varied data from diverse sources and turning it into valuable insights, presented in a streamlined way. This analysis does not estimate future outcomes or identify the specific reasons behind a result; rather, it keeps the data organized and makes it simpler to conduct a thorough evaluation later.
Examples of Descriptive Data Analysis:
Sales Performance: A retail company might use descriptive statistics to understand the average sales volume per store or to find which products are the best sellers.
Customer Satisfaction Surveys: Analyzing survey data to find the most common responses or average scores.

Descriptive analysis plays a crucial role in addressing various challenges within the petroleum engineering field. By analyzing historical data, it helps in understanding past performance, identifying trends, and making informed decisions for future operations. This includes optimizing production, managing resources, and mitigating risks in upstream, midstream, and downstream operations.
How Descriptive Analysis Helps in Petroleum Engineering:
Production Optimization: Descriptive analysis of production data can reveal patterns in reservoir behavior, well performance, and operational efficiency. This information can be used to optimize well placement, production rates, and enhanced oil recovery (EOR) strategies.
Resource Management: Analyzing historical data on resource consumption, material usage, and waste generation can help in developing more efficient resource management practices and minimizing environmental impact.

Risk Assessment: Descriptive analysis can identify potential risks and hazards associated with different operations. For example, analyzing past incidents and near misses can help in developing preventative measures and improving safety protocols.
Economic Analysis: By analyzing historical costs, revenue, and profitability, descriptive analysis can provide valuable insights for the economic evaluation of projects, resource allocation, and investment decisions.
Maintenance Planning: Analyzing equipment performance data, failure rates, and maintenance records can help in developing predictive maintenance schedules, minimizing downtime, and optimizing maintenance costs.

Qualitative Data Analysis
Qualitative data cannot be measured directly, so this technique is used when an organization needs to make decisions based on subjective interpretation. For instance, qualitative data can involve evaluating customer feedback, the impact of survey questions, the effectiveness of social media posts, specific changes or features of a product, and more. The focus of this technique is identifying meaningful insights or answers in unstructured data such as transcripts, vocal feedback, and more.
Examples of Qualitative Data Analysis:
Market Analysis: A business might analyze why a product's sales spiked in a particular quarter by looking at marketing activities, price changes, and market trends.
Medical Diagnosis: Clinicians interpret patient interviews and case notes, alongside lab results, to understand the cause of symptoms.

Qualitative data analysis, while less common in petroleum engineering than quantitative methods, can still be valuable for understanding complex issues related to human factors, project management, and operational challenges within the field. It involves analyzing non-numerical data such as interview transcripts, field notes, or documents to identify patterns, themes, and insights.
1. Understanding Human Factors in Drilling Operations:
Challenge: Drilling accidents or inefficiencies can often be traced back to human error, communication breakdowns, or inadequate training.
Qualitative Approach: Interviews with drilling crews, supervisors, and engineers can reveal insights into safety protocols, communication practices, and decision-making processes. Analyzing these interviews for recurring themes (e.g., communication breakdowns, lack of proper training, or risky decision-making) can help identify areas for improvement.

Predictive Data Analysis
This analysis looks into the future by answering the question: what will happen? It builds on the results of descriptive, exploratory, and diagnostic analysis, combined with machine learning and artificial intelligence. Using this method, you can get an overview of future trends, identify potential issues and gaps in your dataset, and develop initiatives to enhance operational processes and your competitive edge. With easy-to-understand insights, businesses can tap into trends, common patterns, or the reasons behind a specific event, making further strategic decisions easier.
Examples of Predictive Data Analysis:
Credit Scoring: Financial institutions use predictive models to assess a customer's likelihood of defaulting on a loan.
Weather Forecasting: Meteorologists use predictive models to forecast weather conditions based on historical weather data.

Predictive data analysis offers significant benefits to the oil and gas industry by enabling proactive decision-making and optimizing various processes. It helps in predicting equipment failures, optimizing production, and improving safety by analyzing historical and real-time data. However, challenges such as data quality, model complexity, and implementation costs need to be addressed.
Applications of Predictive Data Analysis in Petroleum Engineering:
Predictive Maintenance: Analyzing sensor data from equipment such as pumps, pipelines, and drilling rigs to anticipate potential failures and schedule maintenance proactively, minimizing downtime and costs. For example, detecting corrosion or pressure anomalies in pipelines before they cause leaks.
Production Optimization: Forecasting future production rates based on historical data and machine learning models, allowing for better resource allocation and maximizing output.

Reservoir Characterization: Analyzing seismic, mud logging, and other data to predict reservoir characteristics, optimize drilling strategies, and improve safety.
Risk Management: Identifying potential hazards and risks associated with drilling, production, and transportation of oil and gas, enabling proactive mitigation strategies.
Pipeline Integrity: Predicting strain accumulation in pipelines and assessing the effectiveness of risk-assessment algorithms.

Diagnostic Data Analysis
When you know why something happened, it becomes easy to identify the "how" of that specific aspect. For instance, with diagnostic analysis you can identify why your sales results are declining and then explore the exact factors that led to the loss. This technique offers actionable answers to specific questions and is among the most commonly used methods in research across varied domains.
Examples of Diagnostic Data Analysis:
Inventory Analysis: Checking whether lower sales correlate with stockouts or overstock situations.
Promotion Effectiveness: Analyzing the impact of different promotional campaigns to see which failed to attract customers.

Diagnostic data analysis is crucial in petroleum engineering for identifying and understanding problems in oil and gas operations, leading to optimized production and reduced risks. It involves analyzing historical data to diagnose the root cause of issues such as decreased production rates, equipment failures, or anomalous reservoir behavior. This analysis helps in making informed decisions about well management, preventative maintenance, and future development strategies.
Key Aspects of Diagnostic Data Analysis in Petroleum Engineering:
Data Quality and Interpretation: Ensuring data accuracy and proper interpretation is vital for reliable analysis. Poor data quality can lead to misleading results, while misinterpreting plots or correlations can result in incorrect conclusions about reservoir behavior or well performance.

Diagnostic Plots: These plots are visual tools used to identify inconsistencies and anomalies in production data. For example, correlating pressure and rate data can highlight inconsistencies that might indicate problems with permeability, skin, or gas-in-place.
Data Decomposition and Mining: Analyzing the data to understand its various components and identifying patterns or relationships between variables is a key aspect of diagnostic analysis.
Causality and Correlation: Determining the root causes of problems often involves identifying correlations between different datasets and understanding the causal relationships between variables. For instance, correlating operating parameters with equipment downtime can help diagnose the causes of failures.

Applications:
Well Performance Analysis: Identifying issues such as decreased production rates, pressure decline, or increased water cut using diagnostic plots and data analysis techniques.
Reservoir Management: Understanding reservoir behavior, including permeability, porosity, and fluid flow characteristics, is critical for optimizing production. Data analytics can help refine reservoir models and improve prediction accuracy.
Equipment Failure Prediction: By analyzing operational data, diagnostic analysis can help identify potential equipment failures and enable proactive maintenance, preventing costly downtime.
Drilling Optimization: Data from drilling operations, such as MWD and LWD data, can be analyzed to optimize drilling parameters and improve efficiency.
Production Optimization: Analyzing production data, including flow rates, pressures, and fluid compositions, can help optimize production strategies and maximize recovery.

Inferential data analysis, in the context of petroleum engineering, involves using statistical methods to draw conclusions and make predictions about reservoir behavior, production optimization, and other aspects of oil and gas operations based on limited data. It is about generalizing from sample data to a larger population, or making inferences about the underlying processes that generated the data.
Inferential Data Analysis in Petroleum Engineering:
Reservoir Characterization: Inferential techniques can be used to estimate reservoir properties such as porosity, permeability, and fluid saturation based on limited well log data, seismic surveys, and core samples.
Production Optimization: Engineers can analyze historical production data to predict future performance, optimize well placement, and adjust production strategies to maximize recovery.

Risk Assessment: Inferential statistics can help assess the uncertainty associated with different operational decisions, such as the potential for water or gas coning, or the likelihood of encountering unexpected geological formations during drilling.
Equipment Failure Prediction: By analyzing sensor data and maintenance records, engineers can use inferential methods to predict potential equipment failures and schedule preventative maintenance.
Geological Modeling: Inferential techniques are used to build subsurface models from sparse data, improving the accuracy of predictions about reservoir structure and fluid distribution.

Key Techniques Used:
Regression Analysis: To model the relationship between variables and predict future outcomes (e.g., predicting production rates based on pressure and permeability).
Hypothesis Testing: To determine whether there is enough evidence to support a claim about the population based on sample data (e.g., testing whether a new stimulation technique has significantly increased production).
Confidence Intervals: To estimate the range of values within which the true population parameter is likely to fall.
Time Series Analysis: To analyze data collected over time to identify trends and patterns, which can be used for forecasting and optimization.
Machine Learning Algorithms: Supervised learning (e.g., regression, classification) can be used to predict outcomes, while unsupervised learning (e.g., clustering) can be used to identify patterns and anomalies in the data.

Specialized Techniques: Regression Analysis
This method uses historical data to understand how the value of the dependent variable changes when one or more independent variables change or remain the same. Determining the relationship between the variables and past developments or initiatives enables you to predict potential future outcomes and make informed decisions.
Examples of Regression Analysis:
Market Trend Assessment: Evaluating how changes in the economic environment (e.g., interest rates) affect property prices.
Predictive Pricing: Using historical data to predict future price trends based on current market dynamics.
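As a petroleum-flavored sketch of regression, the snippet below fits a least-squares line relating (hypothetical, made-up) flowing pressure to production rate, then uses the fitted line to predict a rate. It is a minimal illustration of the technique, not a production workflow:

```python
# Minimal least-squares linear regression in pure Python.
# The pressure/rate values below are invented for illustration only.

def fit_line(x, y):
    """Return slope and intercept of the least-squares line y = a*x + b."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    a = sxy / sxx                       # slope
    b = mean_y - a * mean_x             # intercept
    return a, b

pressure = [1000, 1500, 2000, 2500, 3000]   # psi (hypothetical)
rate = [120, 180, 240, 300, 360]            # bbl/day (hypothetical)

slope, intercept = fit_line(pressure, rate)
predicted = slope * 2750 + intercept        # predicted rate at 2750 psi
```

In practice one would reach for `numpy.polyfit`, `scipy.stats.linregress`, or scikit-learn, but the closed-form computation above is the same idea.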

Time Series Analysis
Time series analysis examines data points collected over a period of time. You can use this method to monitor data repeatedly over an interval; it is not suited to data collected at only a single point in time. The technique is ideal for determining whether a variable changed over the evaluation interval, how the variables depend on one another, and how a specific result was reached. Additionally, you can rely on time series analysis to determine market trends and patterns over time, and to forecast future events from the data.
Examples of Time Series Analysis:
Demand Forecasting: Estimating sales volume for the next season based on historical sales data from similar periods.
Resource Planning: Adjusting production schedules and inventory levels to meet anticipated demand.
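A common first step in time series work is smoothing a noisy series with a moving average. The sketch below applies a trailing 3-point moving average to an invented monthly oil-rate series (the numbers are illustrative, not field data):

```python
# Trailing moving average for smoothing a production time series.
# The monthly rates below are made-up example values.

def moving_average(series, window):
    """Return the trailing moving average; the first window-1 points are skipped."""
    return [sum(series[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(series))]

monthly_oil = [500, 480, 470, 455, 450, 445, 430, 425]  # bbl/day (illustrative)
smoothed = moving_average(monthly_oil, 3)
```

The smoothed series makes the decline trend easier to see; pandas' `Series.rolling(window).mean()` does the same job on real datasets.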

Cluster Analysis
Cluster analysis describes data and identifies common patterns. It is often used when data lacks clear labels or has ambiguous categories. The process consists of recognizing similar observations and grouping them into clusters, then naming and categorizing the groups. This technique helps identify similarities and disparities in databases and presents them in a visually organized way so factors can be compared easily; scatter plots and box plots are commonly used to showcase data clusters.
Clustering in Python refers to applying algorithms that group similar data points together into clusters. It is a fundamental unsupervised machine learning technique for discovering patterns and structures within unlabeled datasets.
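To show the core assign-then-update idea behind clustering algorithms such as k-means, here is a dependency-free 1-D sketch with two clusters; the permeability readings are invented for illustration (real work would typically use `sklearn.cluster.KMeans` on multi-dimensional data):

```python
# Tiny 1-D k-means sketch (k=2): assign each point to the nearest centroid,
# recompute centroids as group means, and repeat.

def kmeans_1d(points, iters=10):
    centroids = [min(points), max(points)]      # crude initialization
    groups = [[], []]
    for _ in range(iters):
        groups = [[], []]
        for p in points:
            idx = 0 if abs(p - centroids[0]) <= abs(p - centroids[1]) else 1
            groups[idx].append(p)
        centroids = [sum(g) / len(g) for g in groups]
    return centroids, groups

# e.g. permeability readings (mD) falling into two rock types (made-up numbers)
perm = [12, 14, 11, 13, 95, 102, 98, 100]
centroids, groups = kmeans_1d(perm)
```

With this data the algorithm separates the low-permeability and high-permeability readings into two groups, each summarized by its centroid.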

Examples of Cluster Analysis:
Market Segmentation: Dividing customers into groups that exhibit similar behaviors and preferences for more targeted marketing.
Campaign Customization: Designing unique marketing strategies for each cluster to maximize engagement and conversions.
Cluster analysis is used in market research, pattern identification, data analysis, and image processing.

Exploratory Data Analysis (EDA) is an important step in data science: it visualizes data to understand its main features, find patterns, and discover how different parts of the data are connected.
Why is Exploratory Data Analysis Important?
EDA matters for several reasons in data science and statistical modeling. Some of the key reasons:
It helps in understanding the dataset: how many features it has, what type of data each feature contains, and how the data is distributed.
It helps identify hidden patterns and relationships between data points, which aids feature selection and model building.
It allows us to spot errors or unusual data points (outliers) that could affect our results.
The insights gained from EDA help identify the most important features for building models and guide how to prepare them for better performance.
By deepening our understanding of the data, it helps in choosing the best modeling techniques and tuning them for better results.

Types of Exploratory Data Analysis
There are various types of EDA based on the nature of the records. Depending on the number of columns being analyzed, EDA can be divided into three types.
1. Univariate Analysis
Univariate analysis studies one variable at a time to understand its characteristics, describing the data and finding patterns within a single feature. Common methods include histograms to show the data distribution, box plots to detect outliers and understand data spread, and bar charts for categorical data. Summary statistics such as the mean, median, mode, variance, and standard deviation describe the central tendency and spread of the data.
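The univariate summary statistics listed above are all available in Python's standard `statistics` module; the porosity values below are illustrative only:

```python
# Univariate summary statistics with the standard-library statistics module.
import statistics

porosity = [0.12, 0.15, 0.14, 0.15, 0.18, 0.13, 0.15]  # fraction (made-up data)

mean = statistics.mean(porosity)      # central tendency
median = statistics.median(porosity)  # middle value of the sorted data
mode = statistics.mode(porosity)      # most frequent value
stdev = statistics.stdev(porosity)    # sample standard deviation (spread)
```

For tabular datasets, `pandas.DataFrame.describe()` produces the same kind of summary for every column at once.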

2. Bivariate Analysis
Bivariate analysis identifies the relationship between two variables to find connections, correlations, and dependencies, showing how the two variables interact. Some key techniques:
Scatter plots visualize the relationship between two continuous variables.
The correlation coefficient measures how strongly two variables are related; Pearson's correlation is commonly used for linear relationships.
Cross-tabulations (contingency tables) show the frequency distribution of two categorical variables and help in understanding their relationship.
Line graphs are useful for comparing two variables over time in time series data to identify trends or patterns.
Covariance measures how two variables change together; it is usually paired with the correlation coefficient for a clearer, more standardized view of the relationship.
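Pearson's correlation coefficient can be computed directly from its definition. The depth/temperature pairs below are invented and constructed to be exactly linear, so r comes out as 1.0:

```python
# Pearson correlation coefficient from first principles.
import math

def pearson(x, y):
    """Return Pearson's r for two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

depth = [1000, 1200, 1400, 1600, 1800]   # m (illustrative)
temp = [40, 46, 52, 58, 64]              # deg C, exactly linear in depth here

r = pearson(depth, temp)
```

On real data you would typically use `statistics.correlation` (Python 3.10+), `numpy.corrcoef`, or `pandas.DataFrame.corr()` instead.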

3. Multivariate Analysis
Multivariate analysis identifies relationships among two or more variables in the dataset and aims to understand how the variables interact with one another, which is important for statistical modeling. It includes techniques such as:
Pair plots, which show the relationships between multiple variables at once and help in understanding how they interact.
Principal Component Analysis (PCA), which reduces the complexity of large datasets by simplifying them while keeping the most important information.
Spatial analysis is used for geographical data, employing maps and spatial plotting to understand the geographical distribution of variables. Time series analysis is used for datasets with time-based data, modeling patterns and trends over time; common techniques include line plots, autocorrelation analysis, moving averages, and ARIMA models.
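To make PCA concrete, here is a minimal sketch of its mechanics via the eigendecomposition of the covariance matrix (assuming NumPy is available; the small two-feature dataset is purely illustrative). Libraries such as `sklearn.decomposition.PCA` wrap exactly this computation:

```python
# Minimal PCA: center the data, eigendecompose the covariance matrix,
# and project onto the principal axes ordered by explained variance.
import numpy as np

X = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2],
              [3.1, 3.0], [2.3, 2.7], [2.0, 1.6], [1.0, 1.1]])

Xc = X - X.mean(axis=0)                 # center each feature
cov = np.cov(Xc, rowvar=False)          # 2x2 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)  # eigh returns ascending eigenvalues
order = eigvals.argsort()[::-1]         # largest variance first
components = eigvecs[:, order]          # principal axes as columns
scores = Xc @ components                # data expressed in the new axes
```

The first column of `scores` captures the direction of greatest variance; dropping later columns is the dimensionality reduction the slide describes.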

1. Data Analysis is Critical for Decision-Making: Effective data analysis transforms raw data into actionable insights, supporting informed decision-making across industries.
2. Choice of Technique Depends on Data Type: Different techniques (descriptive, inferential, predictive, etc.) suit different data types (qualitative vs. quantitative) and business objectives.
3. Data Cleaning is Foundational: No analysis technique can produce reliable results if the data is incomplete, inconsistent, or contains errors; clean data is essential.
4. Visualization Enhances Understanding: Tools like charts, graphs, and dashboards help communicate complex data insights clearly and concisely to stakeholders.
5. Statistical Techniques Uncover Relationships: Statistical methods such as regression and correlation are powerful for identifying trends, patterns, and causal relationships.

6. Machine Learning Enables Predictive Power: Modern analysis often uses machine learning to forecast outcomes, detect anomalies, or automate classification, especially with large datasets.
7. Exploratory Data Analysis (EDA) Sets the Stage: EDA helps analysts understand the structure and characteristics of data before applying advanced models or statistical tests.
8. Analysis is Iterative: Effective data analysis often involves repeated cycles of hypothesis testing, refinement, and validation, not a one-time process.
9. Interpretation Requires Domain Knowledge: Data alone does not tell the full story; understanding the context is vital to interpreting results correctly and avoiding misinformed conclusions.
10. Ethical Considerations Are Essential: Bias, privacy, and transparency must be considered in any data analysis to ensure responsible use of data and avoid harmful outcomes.

Week 1 : Lecture 3 Significance of Python and Petroleum Data Analysis

Week 1: Significance of Python and Petroleum Data Analysis
Introduction to Python and Programming Fundamentals: environment setup, installation of Python and Anaconda, Python packages, basics of data structures.
Last Lecture's Learning:
Types of data analytics
Applications of data analytics in different scenarios, including the oil & gas field
Today's discussion focuses on environment setup: installation of Python and Anaconda.

Step 1: Go to https://www.python.org Step 2: Click on Downloads

Step 3: After clicking on Downloads, you will see the latest version available for Windows (currently Python 3.13.5).
Step 4: Click on Python 3.13.5.
Step 5: Once the download finishes, go to the Downloads folder and locate the executable installer, file name <python-3.13.5-amd64>.

Step 6: Double-click the file <python-3.13.5-amd64>.
Step 7: Select the box labeled "Add python.exe to PATH".

Step 8: Click on Customize Installation.
Step 9: Click on Next, leaving the boxes that are selected by default.

Uncheck the rest; only keep the install location as <C:\Python313>.
Step 10: To finish, click Install.

Step 11: When the window shown on the left appears, click on Close. Python has now been installed successfully on your system.

Now we will discuss a couple of ways to run Python:
Through the command prompt
Through IDLE
Through Jupyter Notebook

Alternatively, you may navigate to the Microsoft Store -> Python 3.13 -> click Get to install the Python interpreter.

To open the command prompt in Windows, press the Windows key + R, type cmd, and press Enter.

Now, let's do some mathematical operations and print a message on the screen using the print() function.
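A first session at the prompt might look like this (the message text is just an example):

```python
# A first interactive session: printing a message and doing arithmetic.
print("Hello, Petroleum Python!")   # prints a message
print(5 + 3)                        # addition
print(10 / 4)                       # true division, always gives a float
print(2 ** 10)                      # exponentiation
```

Each expression's result is written to the screen; in the interactive interpreter, typing a bare expression also echoes its value without print().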

Double-click the IDLE (Python's bundled interpreter and editor) icon on the desktop; the following window will appear.

Installing Jupyter Notebook on Windows 11:
Anaconda is a cross-platform Python distribution that you can install on Windows, macOS, or various distributions of Linux.
NOTE: If you already have Python installed, you do not need to uninstall it. You can go ahead and install Anaconda and use the Python version that comes with the Anaconda distribution.
Steps to follow: Open a web browser, search for Anaconda, and press Enter.

Click on “Skip Registration”

<Anaconda3-2025.06-0-Windows-x86_64> installer will get downloaded

Click the "Launch" button under Jupyter Notebook; the window above will open on your system.

Here, click on "New" and then select the "Python 3" option.

The above window will open

In the search box, type "Anaconda".

Anaconda Navigator is a GUI (graphical user interface) that allows you to manage your Anaconda distribution without using commands.
Next, click "Anaconda Prompt"; now we will open Jupyter Notebook.
Jupyter Notebook is an open-source web application that allows you to create and share documents containing live code, text, and more.
Next, type jupyter notebook and press Enter.

Now type jupyter notebook and press Enter.

Another way to work with Python, without installing anything on your local drive: go to Google Colab. The following page will open.

Click on Welcome to Colaboratory

The following page will open; it will be linked to your Google/Gmail account.

Click on "New notebook"; the following page will be shown.

We learned the different options available for working with Python:
Through the command prompt
Through IDLE
Through Jupyter Notebook
Using Google Colab

Week 1 : Lecture 4 Significance of Python and Petroleum Data Analysis

Week 1: Significance of Python and Petroleum Data Analysis
Introduction to Python and Programming Fundamentals: environment setup, installation of Python and Anaconda, Python packages, basics of data structures.
Last Lecture's Learning: environment setup, installation of Python and Anaconda.
Today's discussion focuses on programming fundamentals and the basics of data structures.

Python Fundamentals
Python Introduction
Input and Output in Python
Python Variables
Python Operators
Python Keywords
Python Data Types
Conditional Statements in Python
Loops in Python: for, while, and nested loops

Starting with Python: PYTHON OVERVIEW
Python is Interpreted: Python is processed at runtime by the interpreter. You do not need to compile your program before executing it. This is similar to Perl and PHP.
Python is Interactive: You can sit at a Python prompt and interact with the interpreter directly to write your programs.
Python is Object-Oriented: Python supports the object-oriented style of programming, which encapsulates code within objects.
Python is a Beginner's Language: Python is a great language for beginner-level programmers and supports the development of a wide range of applications, from simple text processing to web browsers to games. It uses English keywords frequently, whereas other languages rely on punctuation, and it has fewer syntactical constructions than many other languages.

Python was developed by Guido van Rossum in the late eighties and early nineties at the National Research Institute for Mathematics and Computer Science (CWI) in the Netherlands. Python is derived from many other languages, including ABC, Modula-3, C, C++, Algol-68, Smalltalk, the Unix shell, and other scripting languages. Python's source code is available under an open-source, GPL-compatible license (the Python Software Foundation License). Python is now maintained by a core development team; Guido van Rossum guided its direction as "Benevolent Dictator for Life" until stepping down from that role in 2018.

# Write a program to demonstrate the print() built-in function
Input and Output in Python:
Taking input in Python
Printing output using print() in Python
Printing variables
Taking multiple inputs in Python

Taking Input in Python
Python's input() function is used to take user input. By default, it returns the user input as a string.
Printing Output using print() in Python
Printing Variables: We can use the print() function to print single and multiple variables; print multiple variables by separating them with commas.
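A short sketch of printing variables (the well name and depth are made-up example values):

```python
# Printing single and multiple variables with print().
well = "W-101"
depth = 2500

print(well)                   # one variable
print(well, depth)            # multiple variables, separated by a space
print(well, depth, sep=", ")  # custom separator via the sep parameter

message = f"{well} is {depth} m deep"   # f-string combining variables
print(message)
```

Note that input() pauses the program and waits for the user to type a line, so it is usually paired with print() prompts like these.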

Taking Multiple Inputs in Python: We can take multiple inputs from the user on a single line by splitting the entered text into separate variables with the split() method, then printing the values with corresponding labels (two or three, depending on the number of inputs the user provided).
Changing the Type of Input in Python: By default, the input() function returns user input as a string. To take input as an int or a float, we just need to typecast it.
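The split-and-typecast pattern looks like this; to keep the example self-contained, a fixed string stands in for what a user would type at input():

```python
# input().split() pattern: one line of text becomes several typed values.
line = "2500 14.7 3"          # stand-in for input("depth pressure wells: ")

d, p, n = line.split()        # split() returns a list of strings
depth = int(d)                # typecast each piece as needed
pressure = float(p)
wells = int(n)

print("Depth:", depth, "Pressure:", pressure, "Wells:", wells)
```

With real user input, the first line would be `line = input("depth pressure wells: ")`; everything after that is identical.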

How to Find the Data Type of Input in Python
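The built-in type() function reveals what input() (or any expression) produced; a fixed string stands in for user input here:

```python
# type() shows a value's data type; input() always yields a str.
raw = "42"                   # what input() would return
as_int = int(raw)            # after typecasting
as_float = float(raw)

print(type(raw))             # <class 'str'>
print(type(as_int))          # <class 'int'>
print(type(as_float))        # <class 'float'>
```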

Output Formatting
Python offers several output-formatting techniques: the format() method, the sep and end parameters of print(), f-strings, and the versatile % operator. These methods enable precise control over how data is displayed, enhancing the readability and effectiveness of your Python programs.
Using format()
Using the sep and end parameters
Using f-strings
Using the % operator
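All four techniques side by side, formatting the same (made-up) API gravity value to two decimal places:

```python
# Four common output-formatting styles producing the same text.
api = 32.5

s1 = "API gravity: {:.2f}".format(api)    # format() method
s2 = f"API gravity: {api:.2f}"            # f-string (Python 3.6+)
s3 = "API gravity: %.2f" % api            # % operator

print(s1)
print("depth", "pressure", sep=" | ", end="\n")   # sep/end parameters of print()
```

All three strings read "API gravity: 32.50"; f-strings are generally preferred in modern code for their readability.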

Python Variables
Variables are used to store data that can be referenced and manipulated during program execution. A variable is essentially a name that is assigned to a value. Unlike many other programming languages, Python variables do not require explicit declaration of type: the type of the variable is inferred from the value assigned.
Variables act as placeholders for data. They allow us to store and reuse values in our program.
Rules for Naming Variables
To use variables effectively, we must follow Python's naming rules:
Variable names can contain only letters, digits, and underscores (_).
A variable name cannot start with a digit.
Variable names are case-sensitive (myVar and myvar are different).
Avoid using Python keywords (e.g., if, else, for) as variable names.

Type casting It refers to the process of converting the value of one data type into another. Python provides several built-in functions to facilitate casting, including int(), float() and str() among others. Basic Casting Functions int()  - Converts compatible values to an integer. float()  - Transforms values into floating-point numbers. str()  - Converts any data type into a string.
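The basic casting functions in action (the values are arbitrary examples):

```python
# Built-in type-casting functions.
depth = int("2500")         # str -> int
pressure = float("14.7")    # str -> float
label = str(3.13)           # float -> str
truncated = int(9.99)       # int() truncates toward zero, it does not round
```

A cast fails with a ValueError when the value is incompatible, e.g. int("abc") or int("9.99") (the latter must go through float() first).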

Scope of a Variable
Python variables have two kinds of scope: local and global.
Local variables: Variables defined inside a function are local to that function.
Global variables: Variables defined outside any function are global; to rebind them inside a function, use the global keyword.
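A small sketch of the local/global distinction (the names counter, increment, and local_demo are chosen for illustration):

```python
# Local vs. global variable scope.
counter = 0                    # global variable

def increment():
    global counter             # required to rebind the global name
    counter += 1

def local_demo():
    counter = 100              # a separate local variable; the global is untouched
    return counter

increment()                    # global counter is now 1
```

Without the global statement, `counter += 1` inside increment() would raise an UnboundLocalError, because assignment makes the name local to the function.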

Delete a Variable Using del Keyword We can remove a variable from the namespace using the del keyword. This effectively deletes the variable and frees up the memory it was using.
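For example:

```python
# del removes a name from the namespace.
x = 10
print(x)          # the name exists and prints 10
del x             # remove the name x
try:
    print(x)      # referencing x now fails
except NameError:
    print("x is no longer defined")
```

Strictly speaking, del deletes the name binding; the underlying object's memory is reclaimed once no other references to it remain.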

Python Variables and Their Uses

Week 1 : Lecture 5 Significance of Python and Petroleum Data Analysis

Last Lecture's Learning:
Python Introduction
Input and Output in Python
Python Variables

Today's discussion focuses on Python operators and keywords.
Python Operators: Types of Operators in Python
Arithmetic Operators
Comparison Operators
Logical Operators
Bitwise Operators
Assignment Operators
Identity and Membership Operators

Python Arithmetic Operators
Addition (+): adds two operands; syntax x + y
Subtraction (-): subtracts the second operand from the first; syntax x - y
Multiplication (*): multiplies two operands; syntax x * y
Division (/): float division; divides the first operand by the second; syntax x / y
Floor Division (//): floor division; divides the first operand by the second, discarding the remainder; syntax x // y
Modulus (%): returns the remainder when the first operand is divided by the second; syntax x % y
Exponentiation (**): returns the first operand raised to the power of the second; syntax x ** y
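All seven arithmetic operators on a single pair of operands:

```python
# Every Python arithmetic operator applied to x = 17 and y = 5.
x, y = 17, 5

print(x + y)    # 22       addition
print(x - y)    # 12       subtraction
print(x * y)    # 85       multiplication
print(x / y)    # 3.4      float division
print(x // y)   # 3        floor division
print(x % y)    # 2        modulus (remainder)
print(x ** y)   # 1419857  exponentiation
```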

Python Comparison (Relational) Operators
Comparison operators compare the values of two operands and return True or False according to the condition. They can be used with various data types, including numbers, strings, and booleans. When comparing strings, the comparison is based on the alphabetical order of their characters (lexicographic order).
> Greater than: True if the left operand is greater than the right; syntax x > y
< Less than: True if the left operand is less than the right; syntax x < y
== Equal to: True if both operands are equal; syntax x == y
!= Not equal to: True if the operands are not equal; syntax x != y
>= Greater than or equal to: True if the left operand is greater than or equal to the right; syntax x >= y
<= Less than or equal to: True if the left operand is less than or equal to the right; syntax x <= y
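A few comparisons, including a numeric cross-type case and a lexicographic string case:

```python
# Comparison operators return booleans.
print(10 > 3)            # True
print(10 == 10.0)        # True  (int and float compare by numeric value)
print(5 != 5)            # False
print("apple" < "bp")    # True  ('a' precedes 'b' lexicographically)
```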

Logical Operators in Python
Python logical operators are used to combine conditional statements, allowing you to perform operations based on multiple conditions and produce a Boolean result. These operators, alongside arithmetic operators, are special symbols used to carry out computations on values and variables. Python's logical operators perform logical AND, logical OR, and logical NOT operations.
Logical AND: returns True if both operands are true; otherwise it returns False.
Logical OR: returns True if at least one of the operands is True; otherwise it returns False.
Logical NOT: returns the opposite Boolean value of the operand: if the operand is True, it returns False, and if the operand is False, it returns True.

The precedence of logical operators in Python, from highest to lowest:
1. Logical NOT
2. Logical AND
3. Logical OR
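The three operators and the precedence rule in one example; note how the unparenthesized expression is grouped:

```python
# Logical operators; not binds tighter than and, and tighter than or.
a, b = True, False

r1 = a and b             # False: both must be true
r2 = a or b              # True: at least one is true
r3 = not b               # True: negation
r4 = not a or b and a    # parsed as (not a) or (b and a) -> False
```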

Python Bitwise Operators: Python bitwise operators are used to perform bitwise calculations on integers. The integers are first converted into binary, and the operation is then applied to each bit or corresponding pair of bits, hence the name bitwise operators.

& : bitwise AND. Syntax: x & y
| : bitwise OR. Syntax: x | y
~ : bitwise NOT. Syntax: ~x
^ : bitwise XOR. Syntax: x ^ y
>> : bitwise right shift. Syntax: x >> y
<< : bitwise left shift. Syntax: x << y

Bitwise AND Operator: the Python bitwise AND (&) operator takes two bit patterns as operands and compares them position by position. If the bits in the compared positions are both 1, the resulting bit is 1; otherwise it is 0.
Bitwise OR Operator: the Python bitwise OR (|) operator likewise compares two bit patterns position by position; if both bits in the compared position are 0, the resulting bit is 0; otherwise it is 1.
Bitwise NOT Operator: unlike AND, OR, and XOR, which are binary operators requiring two operands, bitwise NOT is a unary operator that works with only one operand.
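The AND and OR rules above can be sketched with two small bit patterns (the operands 0b1010 and 0b0110 are illustrative):

```python
x, y = 0b1010, 0b0110  # 10 and 6

# AND: resulting bit is 1 only where both operands have a 1.
print(bin(x & y))  # 0b10    (decimal 2)

# OR: resulting bit is 0 only where both operands have a 0.
print(bin(x | y))  # 0b1110  (decimal 14)
```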

Bitwise XOR Operator: the Python bitwise XOR (^) operator, also known as the exclusive OR operator, performs the XOR operation on two operands. XOR stands for "exclusive or": it returns true if and only if exactly one of the operands is true. In the context of bitwise operations, it compares the corresponding bits of the two operands; if the bits differ, the resulting bit is 1; otherwise it is 0.
Bitwise NOT (~) Operator: it works with a single value and returns its one's complement. It toggles every bit in the value, turning 0 bits into 1 and 1 bits into 0, producing the one's complement of the binary number.
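A sketch of XOR and NOT on small sample values. Note that because Python integers have arbitrary precision, ~x is displayed as the negative number -x - 1 rather than a fixed-width bit pattern:

```python
x, y = 0b1010, 0b0110  # 10 and 6

# XOR: resulting bit is 1 where the operands' bits differ.
print(bin(x ^ y))  # 0b1100 (decimal 12)

# NOT: one's complement; for Python ints, ~x == -x - 1.
print(~x)          # -11
```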

BITWISE SHIFT: these operators shift the bits of a number left or right, thereby multiplying or floor-dividing the number by two for each position shifted. They can be used whenever a number must be multiplied or divided by a power of two.
BITWISE RIGHT SHIFT: shifts the bits of the number to the right; the vacated positions on the left are filled with 0 (with 1 in the case of a negative number, since the sign is preserved).
BITWISE LEFT SHIFT: shifts the bits of the number to the left; the vacated positions on the right are filled with 0.
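The shift behavior above can be sketched as follows (the sample values are illustrative; note that Python's right shift of a negative number floors toward negative infinity, matching the // operator):

```python
x = 5  # 0b101

print(x << 1)   # 10, one left shift multiplies by 2
print(x << 3)   # 40, shifting by 3 multiplies by 2 ** 3
print(x >> 1)   # 2, one right shift floor-divides by 2
print(-5 >> 1)  # -3, sign-preserving shift: same as -5 // 2
```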

Discussion on Python Operators