Machine learning _new.pptx for a presentation

RahulS66654 58 views 50 slides Jul 05, 2024



Slide Content

AI & Machine Learning

Artificial Intelligence (AI) is a branch of computer science concerned with creating intelligent machines that can behave like a human, think like a human, and make decisions. AI refers to techniques that help machines and computers mimic human behaviour: intelligence demonstrated by machines, as opposed to the natural intelligence displayed by humans or animals. Put another way, artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems; it is a field that combines computer science and robust datasets to enable problem-solving.

AI type-1: Based on Capabilities

Weak AI or Narrow AI: Narrow AI is a type of AI that can perform a dedicated task with intelligence. It is the most common and currently available form of AI. Narrow AI cannot perform beyond its field or limitations, as it is only trained for one specific task; hence it is also termed weak AI, and it can fail in unpredictable ways if pushed beyond its limits. Apple Siri is a good example of Narrow AI, operating within a limited, pre-defined range of functions. IBM's Watson supercomputer also comes under Narrow AI, as it uses an expert-system approach combined with machine learning and natural language processing. Other examples of Narrow AI include playing chess, self-driving cars, speech recognition, and image recognition.

General AI: General AI is a type of intelligence that could perform any intellectual task as efficiently as a human. The idea behind general AI is to build a system that is smart enough to think like a human on its own. Currently, no system exists that qualifies as general AI and can perform any task as well as a human. Researchers worldwide are focused on developing machines with general AI, but such systems are still under research and will take considerable time and effort to develop.

Super AI: Super AI is a level of system intelligence at which machines could surpass human intelligence and perform any task better than a human, with cognitive properties. It would be an outcome of general AI. Key characteristics of such strong AI include the ability to think, reason, solve puzzles, make judgments, plan, learn, and communicate on its own. Super AI is still a hypothetical concept of artificial intelligence; building such systems in reality remains a world-changing task.

Artificial Intelligence type-2: Based on functionality. Reactive Machines: Purely reactive machines are the most basic type of artificial intelligence. Such AI systems do not store memories or past experiences for future actions; they only consider the current scenario and react to it with the best possible action. IBM's Deep Blue system is an example of a reactive machine, as is Google's AlphaGo.

Limited Memory: Limited-memory machines can store past experiences or some data for a short period of time, and can use the stored data for a limited time period only. Self-driving cars are one of the best examples of limited-memory systems: these cars can store the recent speed of nearby cars, the distance to other cars, speed limits, and other information needed to navigate the road.

Self-Awareness: Self-aware AI is the future of artificial intelligence. These machines would be super-intelligent and would have their own consciousness, sentiments, and self-awareness; they would be smarter than the human mind. Self-aware AI does not yet exist in reality; it is a hypothetical concept.

Machine Learning: Machine learning is a branch of AI and computer science that focuses on the use of data and algorithms to imitate the way humans learn, gradually improving its accuracy. Examples include Netflix's recommendation engine and self-driving cars. Machine learning can be: a. supervised, b. unsupervised, c. semi-supervised, or d. reinforcement learning.

Machine Learning: Supervised learning. Supervised learning, also known as supervised machine learning, is defined by its use of labelled datasets to train algorithms to classify data or predict outcomes accurately. As input data is fed into the model, the model adjusts its weights until it has been fitted appropriately. This occurs as part of the cross-validation process, which ensures that the model avoids overfitting or underfitting. Supervised learning helps organizations solve a variety of real-world problems at scale, such as classifying spam into a separate folder from your inbox. Methods used in supervised learning include neural networks, naïve Bayes, linear regression, logistic regression, random forest, and support vector machines (SVM).
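
The workflow above (labelled data in, a fitted classifier out) can be sketched with scikit-learn's logistic regression; the toy "spam" features and labels below are invented for illustration:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Each row: [number of links, number of ALL-CAPS words]; label 1 = spam.
X = [[0, 0], [1, 0], [0, 1], [5, 8], [7, 6], [6, 9], [1, 1], [8, 7]]
y = [0, 0, 0, 1, 1, 1, 0, 1]

# Hold out a test set; stratify so both classes appear in training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

model = LogisticRegression()
model.fit(X_train, y_train)      # the model adjusts its weights here

# A message with many links and ALL-CAPS words lands in the spam class.
print(model.predict([[6, 7]]))
```

The held-out test set plays the role of the cross-validation check mentioned above: performance on unseen data reveals overfitting.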

Supervised Machine Learning

Unsupervised machine learning Unsupervised learning, also known as unsupervised machine learning, uses machine learning algorithms to analyze and cluster unlabeled datasets. These algorithms discover hidden patterns or data groupings without the need for human intervention. This method’s ability to discover similarities and differences in information makes it ideal for exploratory data analysis, cross-selling strategies, customer segmentation, and image and pattern recognition. It’s also used to reduce the number of features in a model through the process of dimensionality reduction. Principal component analysis (PCA) and singular value decomposition (SVD) are two common approaches for this. Other algorithms used in unsupervised learning include neural networks, k-means clustering, and probabilistic clustering methods.

Unsupervised machine learning Unsupervised learning is a branch of machine learning that focuses on discovering patterns and relationships within data that lacks pre-existing labels or annotations. Unlike supervised learning, unsupervised learning algorithms do not rely on labeled examples to learn from. Instead, they aim to discover inherent structures or clusters within the data.
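
The clustering idea described above can be sketched with scikit-learn's k-means: the algorithm receives only unlabelled points (the coordinates below are invented) and discovers the two groupings on its own:

```python
from sklearn.cluster import KMeans

# Unlabelled 2-D points with two natural groupings.
X = [[1.0, 1.1], [0.9, 1.0], [1.2, 0.8],   # one natural group
     [8.0, 8.2], [7.9, 8.1], [8.3, 7.8]]   # another natural group

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Points in the same natural group receive the same cluster label,
# even though no labels were ever provided.
print(km.labels_)
```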

Semi-supervised learning  Semi-supervised learning offers a happy medium between supervised and unsupervised learning. During training, it uses a smaller labeled data set to guide classification and feature extraction from a larger, unlabeled data set. Semi-supervised learning can solve the problem of not having enough labeled data for a supervised learning algorithm.  It also helps if it’s too costly to label enough data. 
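
scikit-learn ships a self-training wrapper that illustrates this idea: a classifier is fitted on the few labelled points, then pseudo-labels the confident unlabelled ones and retrains. The one-dimensional data below is invented; unlabelled samples are marked with -1:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

# Four labelled samples guide learning over four unlabelled ones (-1).
X = [[0.0], [1.0], [9.0], [10.0], [0.5], [9.5], [1.5], [8.5]]
y = [0, 0, 1, 1, -1, -1, -1, -1]

clf = SelfTrainingClassifier(LogisticRegression())
clf.fit(X, y)   # confident pseudo-labels are folded back into training

print(clf.predict([[0.2], [9.8]]))
```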

Reinforcement machine learning: Reinforcement machine learning is a machine learning model similar to supervised learning, except that the algorithm isn't trained using sample data. Instead, the model learns as it goes through trial and error: a sequence of successful outcomes is reinforced to develop the best recommendation or policy for a given problem.
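
The trial-and-error loop can be sketched without any library: a tabular Q-learning agent in an invented five-state corridor learns, from reward alone, that moving right reaches the goal (the states, actions, and reward below are made up for illustration):

```python
import random

random.seed(0)
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]          # move left / move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.3

for _ in range(500):        # episodes of trial and error
    s = 0
    while s != GOAL:
        # Explore occasionally; otherwise take the best-known action.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0   # only success is rewarded
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2

# After training, "move right" should be the learned policy everywhere.
print(all(Q[(s, +1)] > Q[(s, -1)] for s in range(GOAL)))
```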

Machine Learning: Machine learning (ML) is a subdomain of artificial intelligence (AI) that focuses on developing systems that learn, or improve performance, based on the data they ingest. Artificial intelligence is a broad term that refers to systems or machines that resemble human intelligence. Machine learning and AI are frequently discussed together, and the terms are occasionally used interchangeably, although they do not mean the same thing. A crucial distinction is that while all machine learning is AI, not all AI is machine learning.

What is Cloud Computing? Cloud computing is a technology that provides access to various computing resources over the internet. All you need to do is use your computer or mobile device to connect to your cloud service provider through the internet. Once connected, you get access to computing resources, which may include serverless computing, virtual machines, storage, and various other things. Basically, cloud service providers run massive data centers containing hundreds of servers, storage systems, and components that are crucial for many kinds of organizations. These data centers are in secure locations and store large amounts of data; users connect to them to store or retrieve data when required. Users can take advantage of various services; for example, if we want a notification every time someone sends us a text or an email, cloud services can help. The best part about cloud platforms is that you pay only for the services you use, with no upfront charges.

Machine Learning in Microsoft Azure. What is Microsoft Azure? Azure is a cloud computing platform and an online portal that allows you to access and manage cloud services and resources provided by Microsoft. These services and resources include storing our data and transforming it, depending on our requirements. To access these resources and services, all we need is an active internet connection and the ability to connect to the Azure portal.

Azure: It was launched on February 1, 2010, significantly later than its main competitor, AWS. It's free to start and follows a pay-per-use model, which means you pay only for the services you opt for. Interestingly, 80 percent of Fortune 500 companies use Azure services for their cloud computing needs. Azure supports multiple programming languages, including Java, Node.js, and C#. Another benefit of Azure is the number of data centers it has around the world: there are 42 Azure data centers spread around the globe, the highest number for any cloud platform, and Azure plans to add 12 more shortly, bringing the total to 54.

Azure Services. AI + machine learning: Create the next generation of applications using artificial intelligence capabilities for any developer and any scenario.
•    AI Anomaly Detector: Easily add anomaly detection capabilities to our apps.
•    Azure AI Bot Services: Create bots and connect them across channels.
•    Azure Open Datasets: A cloud platform to host and share curated open datasets to accelerate development of machine learning models.
•    Azure AI Services: Add cognitive capabilities to apps with APIs and AI services.

AI and Machine Learning: Build business-critical machine learning models at scale. Azure Machine Learning empowers data scientists and developers to build, deploy, and manage high-quality models faster and with confidence. It accelerates time to value with industry-leading machine learning operations and integrated tools. This trusted platform is designed for responsible AI applications in machine learning.

Bot A bot is an automated software application that performs repetitive tasks over a network. It follows specific instructions to imitate human behavior but is faster and more accurate. A bot can also run independently without human intervention. For example, bots can interact with websites, chat with site visitors, or scan through content. While most bots are useful, outside parties design some bots with malicious intent. Organizations secure their systems from malicious bots and use helpful bots for increased operational efficiency. A small group of skilled automation engineers and domain experts may be able to automate many of the most tedious tasks of entire teams.

Anomaly detection: Anomaly detection is the examination of specific data points to detect rare occurrences that seem suspicious because they differ from the established pattern of behaviour. Anomaly detection isn't new, but as data volumes grow, manual tracking becomes impractical. Importance of anomaly detection: Anomaly detection is especially important in industries like finance, retail, and cybersecurity, but every business should consider an anomaly detection solution. It provides an automated means of detecting harmful outliers and protects your data. Banking, for example, is an industry that benefits from anomaly detection: banks can use it to identify fraudulent activity and inconsistent patterns and to protect data. Data is the lifeline of a business, and without anomaly detection we could lose revenue and brand equity that took years to cultivate. A business that suffers security breaches and the loss of sensitive customer information stands to lose a level of customer trust that may be unrecoverable.
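
A minimal sketch of the idea, using a simple z-score rule rather than a production anomaly detector: values far from the established pattern are flagged. The transaction amounts below are invented, with one suspicious spike:

```python
# Invented daily transaction amounts; 950 breaks the usual pattern.
amounts = [102, 98, 101, 99, 103, 97, 100, 950]

mean = sum(amounts) / len(amounts)
std = (sum((x - mean) ** 2 for x in amounts) / len(amounts)) ** 0.5

# Flag any point more than two standard deviations from the mean.
anomalies = [x for x in amounts if abs(x - mean) > 2 * std]
print(anomalies)   # only the 950 spike is flagged
```

Real services such as AI Anomaly Detector replace this fixed rule with models that adapt to seasonality and trends.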

Azure AI Metrics Advisor: Protect your organization's growth. Embed AI-powered monitoring features to stay one step ahead of incidents, with no machine learning expertise required. Monitor the performance of an organization's growth engines, including sales revenue and manufacturing operations, with Azure AI Metrics Advisor, which is built on AI Anomaly Detector and is part of Azure AI Services. Quickly identify and fix problems through a powerful combination of near-real-time monitoring, models that adapt to your scenario, and granular analysis with diagnostics and alerting.

Computer Vision Computer vision is a field of artificial intelligence (AI) that enables computers and systems to derive meaningful information from digital images, videos and other visual inputs — and take actions or make recommendations based on that information. If AI enables computers to think, computer vision enables them to see, observe and understand. Computer vision works much the same as human vision. Human sight has the advantage of lifetimes of context to train how to tell objects apart, how far away they are, whether they are moving and whether there is something wrong in an image.

Computer Vision Computer vision trains machines to perform these functions, but it has to do it in much less time with cameras, data and algorithms rather than retinas, optic nerves and a visual cortex. Because a system trained to inspect products or watch a production asset can analyse thousands of products or processes a minute, noticing imperceptible defects or issues, it can quickly surpass human capabilities.

How does computer vision work? Computer vision needs lots of data. It runs analyses of that data over and over until it discerns distinctions and ultimately recognizes images. For example, to train a computer to recognize automobile tires, it needs to be fed vast quantities of tire images and tire-related items so it can learn the differences and recognize a tire, especially one with no defects. Two essential technologies are used to accomplish this: a type of machine learning called deep learning, and a type of neural network called a convolutional neural network (CNN).
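
The core operation inside a CNN can be sketched in a few lines of NumPy: a small filter slides over an image and responds where pixel intensities change. The 6x6 image and the vertical-edge kernel below are invented for illustration:

```python
import numpy as np

image = np.zeros((6, 6))
image[:, 3:] = 1.0              # left half dark, right half bright

kernel = np.array([[-1, 0, 1],  # a classic vertical-edge detector
                   [-1, 0, 1],
                   [-1, 0, 1]])

# Slide the 3x3 kernel over every position (a "valid" convolution).
h, w = image.shape
out = np.zeros((h - 2, w - 2))
for i in range(h - 2):
    for j in range(w - 2):
        out[i, j] = (image[i:i + 3, j:j + 3] * kernel).sum()

print(out[0])   # strongest response sits over the dark-to-bright edge
```

In a real CNN the kernel values are not hand-chosen: they are learned from data during training.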

Computer vision & human vision

Computer Vision & human vision Computer vision shares a lot of similarities with human vision, but there are significant differences between the two. Human vision is a complex process which is still not understood completely. Computer vision is a technological implementation of human vision that enables computers to achieve human vision capabilities. We take a look at the two and try to understand the differences between them.

What is Human Vision? Human vision is a complex process that is still not completely understood. Vision is clearly one of the most important of the five senses, and it is the one humans have come to depend upon above all others. Vision is the special sense of sight that revolves around light, and it is fascinating how the human vision system perceives and interprets things. We see things as they are: trees in a forest, books on a shelf, widgets in a factory, cars on the road, and clouds in the sky. No obvious deductions are needed, and no extra effort is required to interpret each object or scene.

Applications of Computer Vision

Computer Vision in Healthcare. X-Ray Analysis: Computer vision can be successfully applied to medical X-ray imaging. Although most doctors still prefer manual analysis of X-ray images to diagnose and treat diseases, computer vision can automate X-ray analysis with enhanced efficiency and accuracy. State-of-the-art image recognition algorithms can detect patterns in an X-ray image that are too subtle for the human eye. Cancer Detection: Computer vision is being successfully applied to breast and skin cancer detection. With image recognition, doctors can identify anomalies by comparing cancerous and non-cancerous cells in images, and with automated cancer detection they can diagnose cancer faster from an MRI scan. CT Scan and MRI: Computer vision is now widely applied to CT scan and MRI analysis. AI with computer vision enables systems that analyse radiology images with a high level of accuracy, similar to a human doctor, while reducing the time needed for disease detection and thereby enhancing the chances of saving a patient's life. It also includes deep learning algorithms that enhance the resolution of MRI images and hence improve patient outcomes.

Computer Vision in Manufacturing. Defect Detection: This is perhaps the most common application of computer vision. Until now, defect detection has been carried out by trained people on selected batches, and total production control is usually impossible. With computer vision, we can detect defects such as cracks in metals, paint defects, and bad prints in sizes smaller than 0.05 mm. Analyzing text and barcodes (OCR): Nowadays, almost every product carries a barcode on its packaging, which can be analyzed or read with the computer vision technique of optical character recognition (OCR). OCR helps us detect and extract printed or handwritten text from visual data such as images; it also enables us to extract text from documents like invoices, bills, and articles, and to verify it against databases. Fingerprint recognition and biometrics: Computer vision technology is used to detect fingerprints and biometrics to validate a user's identity. Biometrics is the measurement or analysis of the physiological characteristics that make a person unique, such as the face, fingerprints, and iris patterns. It combines computer vision with knowledge of human physiology and behaviour.

How is an image analyzed with computer vision? 1. A sensing device captures an image. The sensing device is often just a camera, but it could be a video camera, a medical imaging device, or any other type of device that captures an image for analysis. 2. The image is sent to an interpreting device, which uses pattern recognition to break the image down, compare the patterns in the image against its library of known patterns, and determine whether any of the content in the image is a match. The pattern could be something general, like the appearance of a certain type of object, or it could be based on unique identifiers such as facial features. 3. A user requests specific information about the image, and the interpreting device provides that information based on its analysis.
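
The "compare against a library of known patterns" step can be sketched with exact template matching in NumPy. The tiny image and template below are invented, and real systems use tolerant similarity measures rather than exact equality:

```python
import numpy as np

image = np.array([[0, 0, 0, 0],
                  [0, 1, 1, 0],
                  [0, 1, 1, 0],
                  [0, 0, 0, 0]])
template = np.array([[1, 1],    # the known pattern to search for
                     [1, 1]])

# Slide the template over the image and record exact matches.
matches = []
th, tw = template.shape
for i in range(image.shape[0] - th + 1):
    for j in range(image.shape[1] - tw + 1):
        if np.array_equal(image[i:i + th, j:j + tw], template):
            matches.append((i, j))

print(matches)   # the pattern occurs once, at row 1, column 1
```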

What computer vision is used for ? Content organization Computer vision can be used to identify people or objects in photos and organize them based on that identification. Photo recognition applications like this are commonly used in photo storage and social media applications. Text extraction Optical character recognition can be used to boost content discoverability for information contained in large amounts of text and to enable document processing for robotic processing automation scenarios.

What computer vision is used for ? Agriculture : Images of crops taken from satellites, drones, or planes can be analysed to monitor harvests, detect weed emergence, or identify crop nutrient deficiency. Autonomous vehicles : Self-driving cars use real-time object identification and tracking to gather information about what's happening around a car and route the car accordingly. Manufacturing: Computer vision can monitor manufacturing machinery for maintenance purposes. It can also be used to monitor product quality and packaging on a production line.

Natural Language Processing Natural language processing (NLP) is a machine learning technology that gives computers the ability to interpret, manipulate, and comprehend human language.  Organizations today have large volumes of voice and text data from various communication channels like emails, text messages, social media newsfeeds, video, audio, and more. They use NLP software to automatically process this data, analyse the intent or sentiment in the message, and respond in real time to human communication.

Natural Language Processing. Natural language processing (NLP) is critical to fully and efficiently analyze text and speech data. It can work through the differences in dialects, slang, and grammatical irregularities typical in day-to-day conversation. Companies use it for several automated tasks, such as to:
•    Process, analyze, and archive large documents
•    Analyze customer feedback or call center recordings
•    Run chatbots for automated customer service
•    Classify and extract text
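
The "classify text" task above can be sketched with scikit-learn: TF-IDF features feed a naive Bayes classifier trained on a tiny, invented set of customer feedback:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented customer feedback with sentiment labels.
texts = ["great product, works well", "love it, excellent quality",
         "terrible, broke after a day", "awful support, very slow",
         "works well, great value", "very slow and terrible"]
labels = ["positive", "positive", "negative",
          "negative", "positive", "negative"]

clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(texts, labels)

print(clf.predict(["excellent quality, works great"]))
```

Production systems train the same kind of pipeline on far larger labelled corpora, or use pretrained language models instead.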

Why NLP? We can also integrate NLP into customer-facing applications to communicate more effectively with customers. Example: A chatbot analyzes and sorts customer queries, responding automatically to common questions and redirecting complex queries to customer support. This automation helps reduce costs, saves agents from spending time on redundant queries, and improves customer satisfaction.

How does NLP work? Natural language processing (NLP) combines computational linguistics, machine learning, and deep learning models to process human language. Computational linguistics: Computational linguistics is the science of understanding and constructing human language models with computers and software tools. Researchers use computational linguistics methods, such as syntactic and semantic analysis, to create frameworks that help machines understand conversational human language. Tools like language translators, text-to-speech synthesizers, and speech recognition software are based on computational linguistics. 

How does NLP work? Machine learning: Machine learning is a technology that trains a computer with sample data to improve its efficiency. Human language has several features, like sarcasm, metaphors, variations in sentence structure, plus grammar and usage exceptions, that take humans years to learn. Programmers use machine learning methods to teach NLP applications to recognize and accurately understand these features from the start. Deep learning: Deep learning is a specific field of machine learning that teaches computers to learn and think like humans. It involves a neural network consisting of data-processing nodes structured to resemble the human brain. With deep learning, computers recognize, classify, and correlate complex patterns in the input data.

Conversational AI: Conversational AI is a type of artificial intelligence (AI) that can simulate human conversation. It is made possible by natural language processing (NLP), a field of AI that allows computers to understand and process human language, and by Google's foundation models that power new generative AI capabilities.

Conversational AI: Conversational AI combines natural language processing (NLP) with machine learning. These NLP processes flow into a constant feedback loop with machine learning processes to continuously improve the AI algorithms. Conversational AI has principal components that allow it to process, understand, and generate responses in a natural way. Machine learning (ML) is a sub-field of artificial intelligence made up of a set of algorithms, features, and data sets that continuously improve themselves with experience. As the input grows, the AI platform gets better at recognizing patterns and uses them to make predictions.

Conversational AI Natural language processing  is the current method of analyzing language with the help of machine learning used in conversational AI. Before machine learning, the evolution of language processing methodologies went from linguistics to computational linguistics to statistical natural language processing.

Conversational AI: NLP in conversational AI consists of four steps: input generation, input analysis, dialogue management, and reinforcement learning. Input generation: Users provide input through a website or an app; the input can be either voice or text. Input analysis: If the input is text-based, the conversational AI solution uses natural language understanding (NLU) to decipher the meaning of the input and derive its intention. If the input is speech-based, it leverages a combination of automatic speech recognition (ASR) and NLU to analyze the data. Dialogue management: During this stage, natural language generation (NLG), a component of NLP, formulates a response. Reinforcement learning: Finally, machine learning algorithms refine responses over time to ensure accuracy.
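
The first three steps can be sketched as plain Python, with keyword rules standing in for NLU and canned strings standing in for NLG. The intents and responses below are invented; a real system would use trained models at each step:

```python
def analyze(text):
    """Step 2, input analysis: a toy stand-in for NLU intent detection."""
    text = text.lower()
    if "price" in text or "cost" in text:
        return "pricing"
    if "hours" in text or "open" in text:
        return "hours"
    return "unknown"

# Step 3, dialogue management: a toy stand-in for NLG.
RESPONSES = {
    "pricing": "Our plans start at $10/month.",
    "hours": "We are open 9am-5pm, Monday to Friday.",
    "unknown": "Let me connect you with a human agent.",
}

def reply(user_input):
    """Step 1 is the user_input argument; steps 2 and 3 run here."""
    return RESPONSES[analyze(user_input)]

print(reply("What does it cost?"))
print(reply("When are you open?"))
```

Step 4, reinforcement learning, would log which replies satisfied users and adjust the rules or models over time.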

What is hyperparameter tuning? When we train machine learning models, each dataset and model needs a different set of hyperparameters, which are a kind of external configuration variable. The only way to determine good values is through multiple experiments, in which we pick a set of hyperparameters and run them through our model. This is called hyperparameter tuning: in essence, we train our model sequentially with different sets of hyperparameters. The process can be manual, or we can pick one of several automated hyperparameter tuning methods.

What are hyperparameters? Hyperparameters are external configuration variables that data scientists use to manage machine learning model training. Sometimes called model hyperparameters, they are set manually before training a model. They are different from parameters, which are internal values derived automatically during the learning process rather than set by data scientists. Examples of hyperparameters include the number of nodes and layers in a neural network and the number of branches in a decision tree. Hyperparameters determine key features such as model architecture, learning rate, and model complexity.
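
The "multiple experiments" loop is exactly what scikit-learn's grid search automates: it refits the same model with each hyperparameter setting and keeps the cross-validated best. The data and the max_depth grid below are invented for illustration:

```python
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

# A tiny invented dataset with two separable classes.
X = [[0, 0], [1, 1], [0, 1], [1, 0], [2, 2], [3, 3], [2, 3], [3, 2]]
y = [0, 0, 0, 0, 1, 1, 1, 1]

# max_depth is a hyperparameter: chosen before training, not learned.
grid = {"max_depth": [1, 2, 3]}
search = GridSearchCV(DecisionTreeClassifier(random_state=0), grid, cv=2)
search.fit(X, y)   # one experiment per hyperparameter setting

print(search.best_params_)   # the depth that cross-validated best
```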