This is a short presentation on deep learning. It covers what deep learning is, how it works, the benefits of using it, and how it helps machines learn from image or video files.
Size: 1.74 MB
Language: en
Added: Sep 19, 2024
Slides: 30 pages
Slide Content
Course: Big Data Analytics  Course Code: CSE-5110  Submitted By: Arpita Das  Batch: 15th  ID: M240105069  Session: Winter 2024  Department of Computer Science and Engineering  Submitted To: Dr. Md. Manowarul Islam (MMI), Assistant Professor, Department of CSE, Jagannath University
Deep Learning
Introduction Deep Learning is a subfield of machine learning that focuses on learning data representations using neural networks with many layers, also called deep neural networks. It has driven significant advancements in fields such as computer vision, natural language processing, and speech recognition.
What is Deep Learning? Deep learning is a subset of machine learning, which itself is a branch of artificial intelligence (AI). It involves the use of artificial neural networks, particularly those with many layers, to model complex patterns in data.
Deep Learning
Inspired by the Brain Deep learning was inspired by the connections of neurons and synapses in the brain. The visual cortex has multiple stages, which implies that our thought process is hierarchical in nature. Deep learning can "theoretically" learn any function.
Classification model of Deep Learning
Neural Network Neural networks are a type of machine learning model inspired by the human brain's structure and functioning. They consist of layers of interconnected nodes, or "neurons," each of which processes input data and passes information to subsequent layers.
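The "neuron" described above can be sketched in a few lines: each unit computes a weighted sum of its inputs plus a bias, then applies an activation function. This is a minimal illustration with hypothetical weight and bias values, not a trained model.

```python
import numpy as np

def neuron(x, w, b):
    """One artificial neuron: weighted sum of inputs plus bias,
    passed through a sigmoid activation."""
    z = np.dot(w, x) + b
    return 1.0 / (1.0 + np.exp(-z))

# Two inputs with illustrative (hypothetical) weights and bias.
out = neuron(np.array([0.5, -1.0]), np.array([0.8, 0.2]), 0.1)
```

Stacking many such units into layers, where each layer's outputs feed the next layer's inputs, gives the networks discussed in the following slides.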
Neural Network
Feedforward Neural Network Feedforward neural networks are a type of artificial neural network where connections between the nodes do not form a cycle. The information flows in one direction—from the input nodes, through the hidden layers, to the output nodes. Input Layer: This layer receives the initial data. Hidden Layers: These layers process the inputs received from the previous layer through weighted connections and activation functions. Output Layer: This layer produces the final prediction or classification.
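The three-layer structure above can be sketched as a single forward pass, assuming hypothetical layer sizes (3 inputs, 4 hidden units, 2 outputs) and random weights in place of trained ones:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    # Common hidden-layer activation: zero out negative values.
    return np.maximum(0.0, z)

def forward(x, W1, b1, W2, b2):
    """One forward pass: input -> hidden (ReLU) -> output.
    Information flows in one direction only, with no cycles."""
    h = relu(W1 @ x + b1)   # hidden layer
    return W2 @ h + b2      # output layer (raw scores)

# Hypothetical sizes: 3 inputs, 4 hidden units, 2 outputs.
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)
y = forward(rng.normal(size=3), W1, b1, W2, b2)
```

In practice the weights `W1, W2` would be learned by backpropagation rather than drawn at random.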
Feedforward Neural Network
Convolutional Neural Network Convolutional Neural Networks (CNNs) are a specialized class of neural networks designed to process grid-like data, such as images, and are a powerful tool for machine learning, especially in computer vision. How CNNs Work: Feature Extraction: CNNs automatically learn to detect features from raw image data through the convolutional and pooling layers. Early layers might detect simple features like edges, while deeper layers detect more complex patterns. Feature Mapping: The detected features are mapped into feature maps, which represent different aspects of the input data. Classification: The final feature maps are flattened and passed through fully connected layers to classify the input into categories or predict continuous values.
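The convolution and pooling steps described above can be sketched with numpy. This example uses a hand-written vertical-edge kernel (the kind of simple feature an early CNN layer might learn) rather than learned filters:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as in most deep
    learning libraries): slide the kernel over the image."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool2(fmap):
    """2x2 max pooling: keep the strongest response in each patch."""
    h, w = fmap.shape[0] // 2 * 2, fmap.shape[1] // 2 * 2
    f = fmap[:h, :w]
    return f.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# A vertical-edge detector kernel (hypothetical hand-set weights).
edge_kernel = np.array([[1., 0., -1.],
                        [1., 0., -1.],
                        [1., 0., -1.]])
image = np.zeros((6, 6))
image[:, 3:] = 1.0                 # dark-to-bright vertical edge
fmap = conv2d(image, edge_kernel)  # (4, 4) feature map
pooled = max_pool2(fmap)           # (2, 2) map after pooling
```

The feature map responds strongly where the edge sits, and pooling shrinks the map while keeping the strongest responses; a real CNN learns many such kernels per layer.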
Convolutional Neural Network
Autoencoders Autoencoders are a specialized class of artificial neural networks designed for unsupervised learning: they learn efficient representations of input data with no need for labels. An autoencoder consists of two main parts: Encoder: This part of the network compresses the input data into a smaller, lower-dimensional representation. It maps the input X to a latent space Z, reducing the dimensionality while capturing the most important features of the data. Decoder: The decoder takes the compressed representation and attempts to reconstruct the original input from it. It maps the latent space Z back to the input space, producing an output X′ that is as close as possible to the original input X.
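The encoder/decoder pair above can be sketched as two small linear maps, using hypothetical sizes (8-dimensional input, 3-dimensional latent space) and random weights in place of trained ones:

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, We, be):
    """Compress input X into a lower-dimensional latent code Z."""
    return np.tanh(We @ x + be)

def decoder(z, Wd, bd):
    """Reconstruct X' from the latent code Z."""
    return Wd @ z + bd

# Hypothetical sizes: 8-dimensional input, 3-dimensional latent space.
We, be = rng.normal(size=(3, 8)) * 0.1, np.zeros(3)
Wd, bd = rng.normal(size=(8, 3)) * 0.1, np.zeros(8)

x = rng.normal(size=8)
z = encoder(x, We, be)            # latent representation Z (3 values)
x_rec = decoder(z, Wd, bd)        # reconstruction X'
loss = np.mean((x - x_rec) ** 2)  # reconstruction error to minimize
```

Training would adjust `We, be, Wd, bd` by gradient descent to drive the reconstruction error down; the bottleneck forces the network to keep only the most important features of X.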
Autoencoders
Deep learning in big data
Unsupervised learning (Clustering): Data is not labeled, no prior knowledge. Group points that are "close" to each other. Identify structure or patterns in the data. Unknown number of classes.
Supervised learning (Classification): Labeled data points, based on a training set. Want a "rule" that assigns labels to new points. Known number of classes. Used to classify future observations.
Deep learning in big data
Why Deep learning is growing Processing power needed for deep learning is readily becoming available through GPUs, distributed computing, and powerful CPUs. Moreover, as the amount of data grows, deep learning models seem to outperform machine learning models. Explosion of features and datasets. Focus on customization and real-time decision making. Uncover hard-to-detect patterns (missed by traditional techniques) when the incidence rate is low. Find latent features (super variables) without significant manual feature engineering. Real-time fraud detection and self-learning models using streaming data (Kafka, MapR). Ensure consistent customer experience and regulatory compliance. Higher operational efficiency.
Deep learning is growing
Why do we need Deep Learning?
Deep learning process The learning process, particularly in the context of machine learning, involves several key steps that allow models to learn from data and make predictions: Data Collection, Data Preparation, Model Selection, Training the Model, Evaluation, Tuning the Model (Hyperparameter Optimization), Deployment, and Monitoring and Maintenance.
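The steps above, from data preparation through evaluation, can be sketched end to end on a toy problem. This is a minimal sketch using a one-parameter linear model and synthetic data (y = 2x + 1 plus noise), not a deep network, but the pipeline is the same:

```python
import numpy as np

rng = np.random.default_rng(42)

# 1. Data collection / preparation: a synthetic dataset, split
#    into training and validation sets.
X = rng.uniform(-1, 1, size=100)
y = 2.0 * X + 1.0 + rng.normal(scale=0.05, size=100)
X_train, X_val = X[:80], X[80:]
y_train, y_val = y[:80], y[80:]

# 2. Model selection: a linear model y = w*x + b.
w, b = 0.0, 0.0
lr = 0.1  # learning rate, a hyperparameter to tune

# 3. Training: gradient descent on mean squared error.
for _ in range(200):
    err = (w * X_train + b) - y_train
    w -= lr * 2 * np.mean(err * X_train)  # dMSE/dw
    b -= lr * 2 * np.mean(err)            # dMSE/db

# 4. Evaluation on held-out data.
val_mse = np.mean((w * X_val + b - y_val) ** 2)
```

Tuning would repeat steps 3 and 4 with different hyperparameters (here, `lr` and the iteration count); deployment and monitoring then track the same evaluation metric on live data.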
Deep learning process
Challenges with deep learning
Data Requirements: Deep learning models typically require large amounts of labeled data to perform well, and acquiring such data can be time-consuming and expensive.
Computational Resources: Deep learning models, especially deep neural networks (DNNs) and transformers, are computationally expensive to train, often requiring specialized hardware (e.g., GPUs, TPUs).
Overfitting: Deep learning models can easily overfit to the training data, especially when the dataset is small or noisy, leading to poor generalization on new data.
Interpretability: Deep learning models, especially deep neural networks, are often considered "black boxes," making it difficult to interpret why a model made a specific decision.
Lack of Generalization: Deep learning models sometimes fail to generalize well to unseen data or new domains, particularly if the training data does not cover a wide variety of cases.
Long Training Times: Training large models, particularly those used in natural language processing (NLP) and computer vision, can take days or even weeks.
Challenges with deep learning
Hyperparameter Tuning: Deep learning models often have many hyperparameters (e.g., learning rate, batch size, network depth), and finding the optimal values can be difficult.
Bias and Fairness: Deep learning models can inherit biases present in training data, leading to unfair or unethical outcomes.
Adversarial Attacks: Deep learning models, especially in computer vision, can be vulnerable to adversarial examples: small, often imperceptible changes to input data that cause the model to make incorrect predictions.
Ethical and Societal Implications: Deploying deep learning systems, particularly in sensitive areas like healthcare, law enforcement, and hiring, raises ethical concerns related to privacy, accountability, and transparency.
Challenges with deep learning
Machine Learning vs Deep Learning
Aspect | Machine Learning | Deep Learning
Approach | Manual feature extraction | Automatic feature extraction
Data Requirement | Can work with smaller datasets | Requires large datasets
Computational Power | Standard CPUs often sufficient | Requires GPUs or TPUs
Training Time | Faster, depending on model complexity | Longer, due to complexity
Accuracy | Good for simpler tasks | Superior for complex tasks
Interpretability | More interpretable (e.g., decision trees) | Less interpretable ("black box")
Applications | Fraud detection, recommendation systems | Image recognition, autonomous driving