Muhammad Shahid Khan (Senior Assistant Professor)
Email: [email protected]
For lecture notes and study material: LMS (Learning Management System)
Introduction to Neural Networks
What are neural networks?
Historical context and applications
Biological neuron vs. artificial neuron
Perceptrons and McCulloch-Pitts neurons
Background of Artificial Intelligence
https://www.analyticsvidhya.com/blog/2021/06/machine-learning-vs-artificial-intelligence-vs-deep-learning/
Neural Networks
Neural networks, also known as artificial neural networks (ANNs) or simulated neural networks (SNNs), are a subset of machine learning and are at the heart of deep learning algorithms. Their name and structure are inspired by the human brain, mimicking the way that biological neurons signal to one another.
https://www.ibm.com/topics/neural-networks
Neural Networks History
Neural Networks
The history of neural networks is a fascinating journey through the development of artificial intelligence and machine learning. Here is a brief overview of the key milestones:
McCulloch-Pitts Neuron (1943): The foundational concept of artificial neurons, inspired by the behavior of biological neurons, was introduced by Warren McCulloch and Walter Pitts. They proposed a simple mathematical model for neurons that could perform logical operations.
Perceptron (1957): Frank Rosenblatt developed the perceptron, an early type of neural network. It could be trained to recognize and classify patterns based on linear decision boundaries. The perceptron gained attention as a potential tool for image recognition.
Perceptron's Limitations (1969): Marvin Minsky and Seymour Papert published a book titled "Perceptrons," which pointed out the limitations of single-layer perceptrons in solving complex problems that required non-linear decision boundaries. This led to a period of reduced interest in neural networks.
Neural Networks
Backpropagation (1986): The backpropagation algorithm, a method for training multi-layer neural networks, was independently rediscovered and popularized by several researchers. This breakthrough allowed for the training of deeper and more complex networks.
Neural Networks Renaissance (Late 20th Century): Advances in backpropagation and the availability of more powerful computers sparked renewed interest in neural networks. Researchers developed various network architectures and applied them to tasks like image recognition, speech processing, and more.
Convolutional Neural Networks (CNNs, 1998): Yann LeCun and others introduced CNNs, a specialized type of neural network architecture designed for image processing. CNNs have become a fundamental component of computer vision systems.
Neural Networks
Recurrent Neural Networks (RNNs, 1990s): RNNs, which can handle sequential data, gained popularity for tasks like natural language processing and speech recognition.
Deep Learning (2000s): Deep learning, characterized by deep neural networks with many layers (deep networks), began to achieve breakthroughs in various domains, including computer vision, speech recognition, and natural language processing.
ImageNet Competition (2012): The ImageNet Large Scale Visual Recognition Challenge marked a significant milestone in computer vision. The winning model, AlexNet, was a deep convolutional neural network that demonstrated the power of deep learning.
Neural Networks
AI Resurgence (2010s): Deep learning and neural networks experienced a resurgence in the 2010s, leading to breakthroughs in areas like autonomous vehicles, AlphaGo's victory over a human Go champion, and advances in natural language understanding (e.g., GPT-3).
Current Developments (2020s): Neural networks continue to evolve, with larger and more complex models, such as GPT-4, and neural architecture search (NAS) techniques. They are applied to a wide range of applications, including healthcare, finance, and climate modeling.
Neural Networks
Artificial neural networks (ANNs) are composed of node layers: an input layer, one or more hidden layers, and an output layer. Each node, or artificial neuron, connects to others and has an associated weight and threshold. If the output of any individual node is above the specified threshold value, that node is activated, sending data to the next layer of the network.
https://www.ibm.com/topics/neural-networks
https://medium.com/mit-6-s089-intro-to-quantum-computing/quantum-neural-networks-7b5bc469d984
Neural Networks Applications
Neural networks have found widespread application across many domains. Here are some of the most notable examples:
Computer Vision:
Image Classification: Convolutional Neural Networks (CNNs) have excelled in tasks like image classification. The ImageNet competition winners, such as AlexNet and ResNet, have demonstrated remarkable accuracy in identifying objects in images.
Object Detection: Faster R-CNN, YOLO (You Only Look Once), and other models enable real-time object detection in images and videos, with applications in autonomous vehicles and surveillance systems.
Facial Recognition: Neural networks are used in facial recognition systems for security, unlocking smartphones, and organizing photo libraries.
Neural Networks Applications
Natural Language Processing (NLP):
Machine Translation: Neural networks have revolutionized machine translation with models like Google's Transformer, enabling accurate and context-aware translation between languages.
Chatbots and Virtual Assistants: Chatbots built on large language models like GPT-3, and virtual assistants like Siri and Alexa, use neural networks for natural language understanding and generation.
Sentiment Analysis: Neural networks are used to determine sentiment in text data, aiding businesses in analyzing customer reviews and social media sentiment.
Neural Networks Applications
Autonomous Vehicles:
Self-Driving Cars: Neural networks play a crucial role in self-driving car technology, enabling perception, path planning, and decision-making based on sensor data.
Healthcare:
Medical Imaging: CNNs assist radiologists in detecting diseases from medical images like X-rays, MRIs, and CT scans.
Drug Discovery: Neural networks are employed in drug discovery and molecular design, speeding up the identification of potential drug candidates.
Finance:
Algorithmic Trading: Neural networks are used for predicting stock prices, optimizing trading strategies, and managing portfolios.
Credit Scoring: Banks and financial institutions use neural networks to assess credit risk and make lending decisions.
Neural Networks Applications
Gaming:
Game AI: Neural networks power intelligent behavior in video games, enabling non-player characters (NPCs) to adapt, learn, and provide a challenging gaming experience.
DeepMind's AlphaGo: AlphaGo, a neural network-based AI developed by DeepMind, defeated the world champion Go player, marking a significant milestone in AI.
Art and Creativity:
Artistic Style Transfer: Neural networks can transform photographs into artistic styles mimicking famous artists like Van Gogh or Picasso.
Music Composition: AI systems use neural networks to compose music, generating original compositions based on specific styles or genres.
Neural Networks Applications
Recommendation Systems:
Personalized Recommendations: Companies like Netflix and Amazon use neural networks to provide personalized recommendations for movies, products, and content.
Climate Modeling:
Weather Prediction: Neural networks are applied to weather forecasting, improving the accuracy of predictions and helping predict extreme weather events.
Biological Neuron vs. Artificial Neuron
Biological neurons are the counterparts that exist in living organisms, while artificial neurons are simplified mathematical models created to mimic their basic behavior. Artificial neurons are the building blocks of artificial neural networks, which are used in various applications, including machine learning and deep learning.
The Structure of Neurons
A neuron has a cell body, a branching input structure (the dendrites) and a branching output structure (the axon).
Axons connect to dendrites via synapses.
Electro-chemical signals are propagated from the dendritic input, through the cell body, and down the axon to other neurons.
Neuron vs. Perceptron
Neuron vs. Perceptron

Aspect | Biological Neuron | Perceptron
Origin | Natural (in living organisms) | Man-made (created for artificial neural networks)
Structure | Complex (soma, dendrites, axon, synapses) | Simplified (input weights, summing function, activation function)
Processing | Analog (electrochemical signals) | Digital (numeric inputs and outputs)
Learning and Plasticity | Exhibits learning and plasticity | Can have fixed or adaptive weights, but no inherent learning
Parallel Processing | Massive parallelism in the brain | Scalable in artificial neural networks
McCulloch-Pitts Neuron Model
Proposed by Warren McCulloch and Walter Pitts in 1943, this model imitates the functionality of a biological neuron and is therefore also called the Artificial Neuron. An artificial neuron accepts binary inputs and produces a binary output based on a certain threshold value, which can be adjusted. It is mainly used for classification problems.
https://medium.com/@manushaurya/mcculloch-pitts-neuron-vs-perceptron-model-8668ed82c36
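The McCulloch-Pitts neuron described above can be sketched in a few lines of Python. This is an illustrative sketch, not code from the lecture; the function name and thresholds are assumptions chosen to show binary inputs, a threshold, and a binary output.

```python
# Minimal sketch of a McCulloch-Pitts neuron (illustrative, not from the
# lecture): it fires (outputs 1) when enough binary inputs are active.

def mcp_neuron(inputs, threshold):
    """Return 1 if the count of active binary inputs meets the threshold."""
    return 1 if sum(inputs) >= threshold else 0

# Logical AND of two inputs: threshold 2 (both inputs must be 1).
print(mcp_neuron([1, 1], threshold=2))  # 1
print(mcp_neuron([1, 0], threshold=2))  # 0

# Logical OR: threshold 1 (any single active input fires the neuron).
print(mcp_neuron([0, 1], threshold=1))  # 1
```

Note that only the threshold changes between the AND and OR cases, which is what makes this simple model useful for basic classification.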
Perceptron Model
This model was developed by Frank Rosenblatt in 1957. It is a slightly tweaked version of the Artificial Neuron model we saw earlier; here the neurons are also called Linear Threshold Units (LTUs). This model can work on non-boolean values, where each input connection is associated with a weight. The function calculates the weighted sum and, based on the threshold value provided, gives a binary output.
https://medium.com/@manushaurya/mcculloch-pitts-neuron-vs-perceptron-model-8668ed82c36
A Simple Model of a Neuron (Perceptron)
Each neuron has a threshold value
Each neuron has weighted inputs from other neurons
The input signals form a weighted sum
If the activation level exceeds the threshold, the neuron "fires"
A Simple Model of a Neuron (Perceptron)
Each hidden or output neuron has weighted input connections from each of the units in the preceding layer.
The unit performs a weighted sum of its inputs, and subtracts its threshold value, to give its activation level.
The activation level is passed through a sigmoid activation function to determine the output.
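The unit just described (weighted sum, minus threshold, through a sigmoid) can be sketched as follows. The function names and the example weights are illustrative assumptions, not values from the slides.

```python
import math

# Sketch of the unit above: weighted sum of inputs, minus the threshold,
# passed through a sigmoid to produce the output (names are illustrative).

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def unit_output(inputs, weights, threshold):
    activation = sum(w * x for w, x in zip(weights, inputs)) - threshold
    return sigmoid(activation)

# Example: two inputs with equal weights; activation = 0.5 - 0.2 = 0.3.
y = unit_output([1.0, 0.0], weights=[0.5, 0.5], threshold=0.2)
print(round(y, 3))  # 0.574
```

The sigmoid squashes any activation level into the range (0, 1), which is what lets multi-layer networks of these units be trained with gradient-based methods like backpropagation.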
Supervised Learning
The data is divided into training and test sets.
Each training example pairs an input with a target output.
Learning algorithm
OR Function Using a Perceptron
Learning in Neural Networks
Learn values of weights from I/O pairs:
Start with random weights
Load a training example's input
Observe the computed output
Modify the weights to reduce the difference from the target
Iterate over all training examples
Terminate when the weights stop changing OR when the error is very small
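The steps above can be sketched as a perceptron learning the OR function mentioned earlier. This is an illustrative sketch: the learning rate, epoch count, and zero initial weights (used instead of random ones, for reproducibility) are assumptions, not values from the slides.

```python
# Sketch of the perceptron learning rule applied to the OR function.
# Learning rate, epochs, and zero initial weights are assumptions.

def step(x):
    return 1 if x >= 0 else 0

def train_perceptron(examples, lr=0.1, epochs=20):
    w = [0.0, 0.0]  # start weights (zero here for reproducibility)
    b = 0.0         # bias plays the role of the (negated) threshold
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = step(w[0] * x1 + w[1] * x2 + b)  # observe computed output
            err = target - out                     # difference from target
            w[0] += lr * err * x1                  # modify weights to
            w[1] += lr * err * x2                  # reduce the difference
            b += lr * err
    return w, b

or_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(or_data)
for (x1, x2), target in or_data:
    assert step(w[0] * x1 + w[1] * x2 + b) == target
print("learned OR with weights", w, "and bias", b)
```

Because OR is linearly separable, the perceptron learning rule is guaranteed to converge to weights that classify all four input pairs correctly.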
Decision boundaries
In simple cases, the feature space can be divided by drawing a hyperplane across it, known as a decision boundary.
A discriminant function returns different values on opposite sides of the boundary (a straight line in two dimensions).
Problems that can be classified this way are linearly separable.
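A two-dimensional discriminant function of the kind described above can be sketched as follows; the particular weights and the line x1 = x2 are illustrative assumptions.

```python
# Sketch of a linear discriminant function in 2-D: it returns values of
# opposite sign on the two sides of the line w1*x1 + w2*x2 + b = 0.
# The weights below (giving the boundary x1 = x2) are illustrative.

def discriminant(x1, x2, w1=1.0, w2=-1.0, b=0.0):
    return w1 * x1 + w2 * x2 + b

# Points on opposite sides of the line x1 = x2 get opposite signs.
print(discriminant(2.0, 1.0))  # 1.0  (positive side)
print(discriminant(1.0, 2.0))  # -1.0 (negative side)
```

Classifying a point then amounts to checking the sign of the discriminant, which is exactly what a step-activation perceptron does with its weighted sum.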
Decision Surface of a Perceptron
Linear Separability
Rugby players & Ballet dancers
Hyperplane partitions
A single perceptron (i.e., output unit) with connections from each input can perform, and learn, a linear separation.
Perceptrons have a step-function activation.