Machine Learning UNIT 1 Presentation.pptx

PravinDChristopher, 56 slides, Aug 31, 2025

Slide Content

What is Machine Learning?
Machine Learning (ML) is a subfield of Artificial Intelligence (AI) that enables computers to learn from data and improve their performance without being explicitly programmed for every task.

How Machine Learning Works:
1. Input Data: Data is collected (e.g., images, text, numbers).
2. Model Training: An algorithm is used to find patterns or make predictions based on the data.
3. Prediction/Decision: The model applies what it has learned to new data.
4. Evaluation: The system is tested and improved based on its performance.

Why is Machine Learning Important?
- It automates complex decision-making tasks.
- It can handle large-scale, high-dimensional data.
- It improves over time with more data (self-learning).

Applications by domain:
- Healthcare: disease prediction, drug discovery
- Finance: fraud detection, stock forecasting
- Retail: customer recommendation systems
- Autonomous systems: self-driving cars, drones
- Natural language: chatbots, language translation
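
A minimal sketch of this input → train → predict → evaluate loop, assuming scikit-learn is available; the iris dataset and logistic regression model are illustrative choices, not part of the slides:

```python
# Sketch of the four-step ML workflow: input data, training, prediction, evaluation.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)                          # 1. Input data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000)                   # 2. Model training
model.fit(X_train, y_train)

predictions = model.predict(X_test)                         # 3. Prediction on new data
print("Accuracy:", accuracy_score(y_test, predictions))     # 4. Evaluation
```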

How do neural networks work?
The human brain is the inspiration behind neural network architecture. Human brain cells, called neurons, form a complex, highly interconnected network and send electrical signals to each other to help humans process information. Similarly, an artificial neural network is made of artificial neurons that work together to solve a problem. Artificial neurons are software modules, called nodes, and artificial neural networks are software programs or algorithms that, at their core, use computing systems to perform mathematical calculations.

Simple neural network architecture
A basic neural network has interconnected artificial neurons in three layers:

Input Layer: Information from the outside world enters the artificial neural network through the input layer. Input nodes process the data, analyze or categorize it, and pass it on to the next layer.

Hidden Layer: Hidden layers take their input from the input layer or from other hidden layers. Artificial neural networks can have a large number of hidden layers. Each hidden layer analyzes the output from the previous layer, processes it further, and passes it on to the next layer.

Output Layer: The output layer gives the final result of all the data processing by the artificial neural network. It can have a single node or multiple nodes. For instance, in a binary (yes/no) classification problem, the output layer has one output node, which gives the result as 1 or 0. In a multi-class classification problem, the output layer may consist of more than one output node.
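
A small sketch of a forward pass through the three layers described above, using NumPy; the layer sizes, random weights, and activation functions are illustrative assumptions:

```python
# Forward pass: input layer -> one hidden layer -> single-node output layer.
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(4)                            # input layer: 4 feature values

W1 = rng.standard_normal((4, 5))             # weights: input -> hidden (5 nodes)
W2 = rng.standard_normal((5, 1))             # weights: hidden -> output (1 node)

hidden = np.maximum(0, x @ W1)               # hidden layer with ReLU activation
output = 1 / (1 + np.exp(-(hidden @ W2)))    # sigmoid output squashed into (0, 1)

print("P(class = 1) =", output[0])           # binary (yes/no) case: one output node
```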

Components of a Learning System
- Data Input: The raw or preprocessed data fed into the system serves as the foundation for training the model. The quality and quantity of data significantly impact the system's performance.
- Learning Algorithm: The core engine that drives the learning process. Algorithms like Linear Regression, Neural Networks, and Decision Trees are tailored to solve specific problems and optimize model performance.
- Model Output: After training, the system generates predictions, classifications, or recommendations based on the input data. This output is continuously refined through iterative learning.
- Feedback Mechanism: A critical component ensuring improvement, feedback compares predictions with actual results, helping the system adjust and reduce errors. This iterative loop enables the model to learn and adapt.
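
A tiny sketch of how the feedback mechanism closes the loop: predictions are compared with actual results and a single weight is adjusted to reduce the error. The data and learning rate below are made-up illustrations:

```python
# Feedback loop: predict, compare with the actual value, adjust, repeat.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]   # (input, actual output)

w = 0.0                      # model: prediction = w * x
learning_rate = 0.01

for epoch in range(200):     # iterative learning loop
    for x, actual in data:
        predicted = w * x                  # model output
        error = predicted - actual         # feedback: compare with actual result
        w -= learning_rate * error * x     # adjust the parameter to reduce error

print("Learned weight:", round(w, 3))      # approaches ~2, the underlying slope
```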

Step 1: Defining the Problem and Objectives
The foundation of a learning system begins with a clear problem definition. Start by identifying the type of task the system aims to solve, such as classification (e.g., predicting whether an email is spam), regression (e.g., forecasting sales figures), or clustering (e.g., customer segmentation). Alongside the task, define measurable performance objectives using metrics like accuracy, precision, recall, or F1-score.

Step 2: Data Collection and Preparation
Data gathering and data preprocessing.

Step 3: Choosing the Training Experience
Supervised learning, unsupervised learning, or reinforcement learning.

Step 4: Selecting the Target Function
The target function defines the relationship between inputs (features) and desired outputs (predictions).

Step 5: Choosing a Representation for the Target Function
Decision trees, neural networks, or linear models.

Step 6: Selecting a Function Approximation Algorithm
- Gradient Descent: used in neural networks to minimize error.
- Support Vector Machines (SVM): used for classification tasks requiring clear decision boundaries.
- K-Means Clustering: effective for grouping data points in unsupervised learning scenarios.
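
As a concrete illustration of the Step 1 metrics, the following sketch computes accuracy, precision, recall, and F1-score by hand for a hypothetical set of binary predictions:

```python
# Binary-classification metrics computed from the confusion-matrix counts.
actual    = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # hypothetical ground-truth labels
predicted = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]   # hypothetical model predictions

tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)  # true positives
fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)  # false positives
fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)  # false negatives
tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)  # true negatives

accuracy  = (tp + tn) / len(actual)
precision = tp / (tp + fp)
recall    = tp / (tp + fn)
f1        = 2 * precision * recall / (precision + recall)

print(f"accuracy={accuracy}, precision={precision}, recall={recall}, f1={f1}")
```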

Step 7: Training the Model
Train the chosen model using the prepared data. This involves:
- Iteratively feeding the data into the algorithm.
- Adjusting model parameters to minimize errors using techniques like backpropagation in neural networks.
- Monitoring the training process to avoid issues like overfitting.

Step 8: Evaluating Model Performance
Evaluating the model ensures it generalizes well to new data. Use techniques and metrics such as validation splits, cross-validation, accuracy, Mean Squared Error (MSE), and ROC-AUC.

Step 9: Iterative Refinement
Hyperparameter tuning, retraining with updated data, and reevaluating performance. Designing effective learning systems also requires adhering to best practices that ensure long-term functionality, scalability, and ethical integrity.

Conclusion
Designing a learning system in machine learning requires a structured approach to ensure effectiveness and efficiency. By carefully defining the problem, preparing data, selecting appropriate algorithms, and iteratively refining models, organizations can build robust systems that deliver accurate and meaningful results. Adopting best practices further enhances scalability, maintainability, and ethical compliance, paving the way for impactful machine learning solutions.
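
A brief sketch of Step 8 style evaluation using 5-fold cross-validation, assuming scikit-learn; the breast-cancer dataset and decision tree are illustrative choices:

```python
# k-fold cross-validation: train on 4 folds, validate on the held-out fold, repeat.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
model = DecisionTreeClassifier(max_depth=3, random_state=0)

scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print("Fold accuracies:", scores.round(3))
print("Mean accuracy:  ", scores.mean().round(3))
```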

Find-S Algorithm
Introduction: The Find-S algorithm is a basic concept learning algorithm in machine learning. It finds the most specific hypothesis that fits all the positive examples; note that the algorithm considers only positive training examples. Find-S starts with the most specific hypothesis and generalizes it each time it fails to classify an observed positive training example. Hence, the Find-S algorithm moves from the most specific hypothesis toward the most general hypothesis.

Important representation:
- ? indicates that any value is acceptable for the attribute.
- A specific value (e.g., Cold) indicates that a single value is required for the attribute.
- ϕ indicates that no value is acceptable.
- The most general hypothesis is represented by {?, ?, ?, ?, ?, ?}.
- The most specific hypothesis is represented by {ϕ, ϕ, ϕ, ϕ, ϕ, ϕ}.

Steps involved in Find-S:
1. Start with the most specific hypothesis: h = {ϕ, ϕ, ϕ, ϕ, ϕ, ϕ}.
2. Take the next example; if it is negative, make no changes to the hypothesis.
3. If the example is positive and the current hypothesis is too specific to cover it, generalize the hypothesis just enough to cover the example.
4. Repeat the above steps until all the training examples have been processed.
5. After all the training examples are processed, the final hypothesis can be used to classify new examples.
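
A compact sketch of the Find-S steps above in Python; the EnjoySport-style training data is illustrative and not taken from the slides:

```python
# Find-S: start with the most specific hypothesis and generalize on each positive example.
def find_s(examples):
    n = len(examples[0][0])
    h = ['0'] * n                       # most specific hypothesis (all phi)
    for x, label in examples:
        if label != 'Yes':              # negative examples are ignored
            continue
        for i, value in enumerate(x):
            if h[i] == '0':             # first positive example: copy its values
                h[i] = value
            elif h[i] != value:         # mismatch: generalize this attribute to '?'
                h[i] = '?'
    return h

training = [
    (['Sunny', 'Warm', 'Normal', 'Strong', 'Warm', 'Same'],   'Yes'),
    (['Sunny', 'Warm', 'High',   'Strong', 'Warm', 'Same'],   'Yes'),
    (['Rainy', 'Cold', 'High',   'Strong', 'Warm', 'Change'], 'No'),
    (['Sunny', 'Warm', 'High',   'Strong', 'Cool', 'Change'], 'Yes'),
]
print(find_s(training))   # ['Sunny', 'Warm', '?', 'Strong', '?', '?']
```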

Candidate Elimination Algorithm
The candidate elimination algorithm incrementally builds the version space, given a hypothesis space H and a set E of examples. The examples are added one by one; each example may shrink the version space by removing the hypotheses that are inconsistent with it. The candidate elimination algorithm does this by updating the general and specific boundaries for each new example. You can consider it an extended form of the Find-S algorithm that considers both positive and negative examples: positive examples are used, as in Find-S, to generalize the specific boundary, while negative examples are used to specialize the general boundary.
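
A simplified sketch of candidate elimination for conjunctive hypotheses over nominal attributes; it maintains the specific boundary S and the general boundary G, assumes positive examples appear before the first negative one (as in the classic EnjoySport trace), and omits the pruning steps of the full algorithm:

```python
# Simplified candidate elimination: generalize S on positives, specialize G on negatives.
def consistent(h, x):
    return all(hv == '?' or hv == xv for hv, xv in zip(h, x))

def candidate_elimination(examples):
    n = len(examples[0][0])
    S = ['0'] * n                        # most specific boundary
    G = [['?'] * n]                      # most general boundary
    for x, label in examples:
        if label == 'Yes':
            # drop general hypotheses inconsistent with the positive example
            G = [g for g in G if consistent(g, x)]
            # minimally generalize S, as in Find-S
            for i, value in enumerate(x):
                if S[i] == '0':
                    S[i] = value
                elif S[i] != value:
                    S[i] = '?'
        else:
            # minimally specialize each member of G so it excludes the negative example
            new_G = []
            for g in G:
                if not consistent(g, x):
                    new_G.append(g)
                    continue
                for i in range(n):
                    if g[i] == '?' and S[i] not in ('?', '0', x[i]):
                        spec = list(g)
                        spec[i] = S[i]   # specialize toward the specific boundary
                        new_G.append(spec)
            G = new_G
    return S, G

training = [
    (['Sunny', 'Warm', 'Normal', 'Strong', 'Warm', 'Same'],   'Yes'),
    (['Sunny', 'Warm', 'High',   'Strong', 'Warm', 'Same'],   'Yes'),
    (['Rainy', 'Cold', 'High',   'Strong', 'Warm', 'Change'], 'No'),
    (['Sunny', 'Warm', 'High',   'Strong', 'Cool', 'Change'], 'Yes'),
]
S, G = candidate_elimination(training)
print("S:", S)   # ['Sunny', 'Warm', '?', 'Strong', '?', '?']
print("G:", G)   # [['Sunny', '?', '?', '?', '?', '?'], ['?', 'Warm', '?', '?', '?', '?']]
```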