Presentation on deep learning: GoogLeNet.pptx

edyvictor3 2 views 9 slides Oct 29, 2025

About This Presentation

A presentation on deep learning.


Slide Content

Deep Learning
Deep Learning Toolbox™ provides a framework for designing and implementing deep neural networks with algorithms, pretrained models, and apps. You can use convolutional neural networks (ConvNets, CNNs) and long short-term memory (LSTM) networks to perform classification and regression on image, time-series, and text data. You can build network architectures such as generative adversarial networks (GANs) and Siamese networks using automatic differentiation, custom training loops, and shared weights. With the Deep Network Designer app, you can design, analyze, and train networks graphically. The Experiment Manager app helps you manage multiple deep learning experiments, keep track of training parameters, analyze results, and compare code from different experiments. You can visualize layer activations and graphically monitor training progress.

Convolutional Neural Network
Convolutional neural networks are inspired by the biological structure of the visual cortex, which contains arrangements of simple and complex cells [1]. These cells were found to activate based on subregions of the visual field, called receptive fields. Inspired by this finding, the neurons in a convolutional layer connect only to subregions of the preceding layer, instead of being fully connected as in other types of neural networks. The neurons are unresponsive to areas outside these subregions of the image. Because the subregions can overlap, the neurons of a ConvNet produce spatially correlated outputs, whereas in fully connected networks the neurons share no connections and produce independent outputs. In addition, in a network with fully connected neurons, the number of parameters (weights) can grow quickly as the size of the input increases. A convolutional neural network reduces the number of parameters through its reduced number of connections, shared weights, and downsampling.
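To make the parameter savings concrete, here is a back-of-the-envelope comparison for a 224-by-224 RGB input. The layer sizes (1000 fully connected neurons, 64 filters of 5-by-5) are illustrative assumptions, not values from the slides:

```matlab
% Illustrative parameter counts for a 224x224x3 input image.
inputSize = 224 * 224 * 3;            % 150,528 input values

% Fully connected layer mapping every input value to 1000 neurons:
% one weight per input per neuron, plus one bias per neuron.
fcParams = inputSize * 1000 + 1000;   % 150,529,000 parameters

% Convolutional layer with 64 filters of size 5x5 over 3 channels:
% each filter has 5*5*3 weights and 1 bias, shared across the image.
convParams = 64 * (5 * 5 * 3 + 1);    % 4,864 parameters

fprintf('FC: %d  Conv: %d\n', fcParams, convParams);
```

Weight sharing makes the convolutional layer's parameter count independent of the input resolution, which is the point the slide is making.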

Transfer Learning
Transfer learning is commonly used in deep learning applications. You can take a pretrained network and use it as a starting point to learn a new task. Fine-tuning a network with transfer learning is usually much faster and easier than training a network from scratch with randomly initialized weights. You can quickly transfer learned features to a new task using a smaller number of training images. Pretrained models: VGG-16, VGG-19, GoogLeNet, AlexNet, Inception-v3, DarkNet-53, ResNet-50, NASNet-Mobile, SqueezeNet.

GoogLeNet
GoogLeNet is a convolutional neural network that is 22 layers deep. You can load a version of the network pretrained on either the ImageNet [1] or Places365 [2][3] data set. The network trained on ImageNet classifies images into 1000 object categories, such as keyboard, mouse, pencil, and many animals. The network trained on Places365 is similar, but classifies images into 365 place categories, such as field, park, runway, and lobby. These networks have learned rich feature representations for a wide range of images. Both pretrained networks have an image input size of 224-by-224. Classify an image and calculate the class probabilities using classify; the network correctly classifies the example image as a bell pepper. A classification network is trained to output a single label for each input image, even when the image contains multiple objects.
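The bell-pepper example described above can be sketched as follows. This assumes the Deep Learning Toolbox Model for GoogLeNet Network support package is installed, and uses peppers.png, a sample image shipped with MATLAB:

```matlab
% Load the pretrained GoogLeNet and read its expected input size.
net = googlenet;
inputSize = net.Layers(1).InputSize;   % [224 224 3]

% Load a sample image and resize it to match the network input.
I = imread('peppers.png');
I = imresize(I, inputSize(1:2));

% Predict the class label and per-class probabilities.
[label, scores] = classify(net, I);
fprintf('%s (%.2f)\n', string(label), max(scores));
```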

Data Preparation
Image loading, (optional) image augmentation, pre-processing; training:validation:testing = 60:10:30. There are 8 classes: Gametocyte (100, 100), Ring (50, 45), Schizont (200, 205), Trophozoite (53, 200).

MalariaNet = googlenet;

% Split the image datastore into 60% train, 10% validation, 30% test.
[MimdsTrain, MimdsVal, MimdsTest] = splitEachLabel(Mimds, 0.6, 0.1, 0.3);

lgraph = layerGraph(MalariaNet);
numClasses = numel(categories(MimdsTrain.Labels));

% Replace the final fully connected layer and classification layer.
newFC = fullyConnectedLayer(numClasses, 'Name', 'fc8', ...
    'WeightLearnRateFactor', 10, ...
    'BiasLearnRateFactor', 10);
lgraph = replaceLayer(lgraph, 'loss3-classifier', newFC);

newClassLayer = classificationLayer('Name', 'Malaria Class Output');
lgraph = replaceLayer(lgraph, 'output', newClassLayer);

Train Network
pixelRange = [-50 50];
imgAugmenter = imageDataAugmenter( ...
    'RandXReflection', true, ...
    'RandXTranslation', pixelRange, ...
    'RandYTranslation', pixelRange);
augITrain = augmentedImageDatastore([224 224], MimdsTrain, ...
    'DataAugmentation', imgAugmenter);
augIVal = augmentedImageDatastore([224 224], MimdsVal);
augITest = augmentedImageDatastore([224 224], MimdsTest);
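The slides go from augmentation straight to evaluating newMalariaNet, so a training step is implied. A minimal sketch of that missing step is below; the solver and hyperparameter values are assumptions, not taken from the presentation:

```matlab
% Hypothetical training options; the actual values used in the
% presentation are not shown on the slides.
options = trainingOptions('sgdm', ...
    'MiniBatchSize', 32, ...
    'MaxEpochs', 10, ...
    'InitialLearnRate', 1e-4, ...
    'ValidationData', augIVal, ...
    'Plots', 'training-progress');

% Train the modified GoogLeNet (lgraph from the Data Preparation slide).
newMalariaNet = trainNetwork(augITrain, lgraph, options);
```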

Performance Testing
[YPred, scores] = classify(newMalariaNet, augITrain);
YTrain = MimdsTrain.Labels;
TrainAccuracy = mean(YPred == YTrain) * 100;

[YPred, scores] = classify(newMalariaNet, augIVal);
YVal = MimdsVal.Labels;
ValAccuracy = mean(YPred == YVal) * 100;

[YPred, scores] = classify(newMalariaNet, augITest);
YTest = MimdsTest.Labels;
TestAccuracy = mean(YPred == YTest) * 100;

fprintf("Accuracy of Training:Val:Testing = %.2f %.2f %.2f\n", ...
    TrainAccuracy, ValAccuracy, TestAccuracy)
confusionchart(YTest, YPred);

Result Accuracy of Training:Val:Testing = 99.83 96.88 98.95