Graph neural networks in human pose estimation.pptx
About This Presentation
GNN presentation in human pose estimation
Size: 24.73 MB
Language: en
Added: Aug 11, 2024
Slides: 13
Slide Content
Outline (Aug-2023 report):
- Feature extraction illustration
- GNN schematic
- Model summary
- Loss curve
- Some examples of predictions vs ground truth
Input: graph & feature maps
Figure 1: graph input into the GNN [left]; feature maps (tags) with the graph overlaid [right]. (The values of the tag maps are shifted so they can be viewed as images.)
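The deck does not show the graph-creation code, so the following is only a minimal sketch of how the inputs above could be turned into a graph: keypoint candidates picked from heatmap peaks, vertex features sampled from the tag maps. The threshold, the extra position features, and the fully connected edge list are assumptions, not details taken from the slides.

```python
import torch

def build_graph(heatmaps, tags, threshold=0.3):
    """Sketch: heatmaps [14, 256, 256] and tag maps [14, 128, 128] ->
    vertex features and an edge list (hypothetical implementation)."""
    num_joints, H, W = heatmaps.shape
    features = []
    for j in range(num_joints):
        # candidate keypoints = heatmap locations above the (assumed) threshold
        ys, xs = torch.nonzero(heatmaps[j] > threshold, as_tuple=True)
        for y, x in zip(ys.tolist(), xs.tolist()):
            ty, tx = y // 2, x // 2                      # tag maps are half resolution
            feat = torch.cat([
                tags[:, ty, tx],                         # tag values at this location
                torch.tensor([float(j), x / W, y / H]),  # joint id + normalized position
            ])
            features.append(feat)

    x = torch.stack(features) if features else torch.empty(0, tags.shape[0] + 3)
    # fully connected edge list over all candidate vertices (assumption)
    n = x.shape[0]
    src, dst = torch.meshgrid(torch.arange(n), torch.arange(n), indexing="ij")
    mask = src != dst
    edge_index = torch.stack([src[mask], dst[mask]])     # shape [2, n*(n-1)]
    return x, edge_index
```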
Edge-classifier graph neural network (schematic):
- Graph creation & feature extraction: heatmaps [14x256x256] and tags [14x128x128] → vertex features + edge list
- Vertex encoder: a stack of linear (fc) layers with BatchNorm1d, producing the vertex feature embedding
- GNN: three EdgeConv layers
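A minimal PyTorch Geometric sketch of the blocks named on this slide. Layer widths, the ReLU activations in the encoder, and the exact wiring are assumptions; `EdgeConv` is the standard operator from torch_geometric, which the slide appears to reference.

```python
import torch.nn as nn
from torch_geometric.nn import EdgeConv

class VertexEncoder(nn.Module):
    """Hypothetical reading of the 'vertex encoder' box: fc layers with
    BatchNorm1d producing the vertex feature embedding."""
    def __init__(self, in_dim, hidden_dim, embed_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.BatchNorm1d(hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.BatchNorm1d(hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, embed_dim),
        )

    def forward(self, x):
        return self.net(x)

def make_edgeconv(in_dim, out_dim):
    # EdgeConv applies an MLP to [x_i, x_j - x_i] for every edge, then aggregates
    return EdgeConv(nn.Sequential(nn.Linear(2 * in_dim, out_dim), nn.ReLU()), aggr="max")

class PoseGNN(nn.Module):
    """Vertex encoder followed by three EdgeConv layers, matching the block
    counts on the slide; all dimensions are placeholders."""
    def __init__(self, in_dim=17, hidden_dim=64, embed_dim=64):
        super().__init__()
        self.encoder = VertexEncoder(in_dim, hidden_dim, embed_dim)
        self.conv1 = make_edgeconv(embed_dim, embed_dim)
        self.conv2 = make_edgeconv(embed_dim, embed_dim)
        self.conv3 = make_edgeconv(embed_dim, embed_dim)

    def forward(self, x, edge_index):
        h = self.encoder(x)
        h = self.conv1(h, edge_index)
        h = self.conv2(h, edge_index)
        h = self.conv3(h, edge_index)
        return h  # per-vertex embeddings consumed by the edge classifier
```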
Edge classifier (schematic):
- Input: the vertex feature embeddings of an edge's two endpoints
- Three blocks of linear (fc) + BatchNorm1d + ReLU
- Output compared against the ground truth (GT) with a BCE loss
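A sketch of the edge-classifier head as read from this slide, assuming the two endpoint embeddings are concatenated and the output is a per-edge logit trained with binary cross entropy; the exact block order and widths are assumptions.

```python
import torch
import torch.nn as nn

class EdgeClassifier(nn.Module):
    """Hypothetical edge-classifier head: the embeddings of an edge's two
    endpoints are concatenated and passed through fc + BatchNorm1d + ReLU
    blocks, ending in a single edge score."""
    def __init__(self, embed_dim=64, hidden_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * embed_dim, hidden_dim), nn.BatchNorm1d(hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.BatchNorm1d(hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, h, edge_index):
        src, dst = edge_index
        pair = torch.cat([h[src], h[dst]], dim=-1)  # one row per candidate edge
        return self.mlp(pair).squeeze(-1)           # raw logit per edge

# BCE against the ground-truth edge labels (1 = both keypoints belong to the same person)
criterion = nn.BCEWithLogitsLoss()
```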
Model summary
Figure 2: the graph model contains ~ parameters.
Average loss per epoch (binary cross entropy)
- Trained for 50 epochs
- Learning rate:
- Optimizer: Adam
- Loss function: binary cross entropy
- Batch size: 5
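A minimal training loop consistent with the reported setup (Adam, BCE, 50 epochs, batch size 5). The learning rate is not stated on the slide, so the default below is a placeholder, and the `loader` fields (`x`, `edge_index`, `edge_labels`) are assumptions about the data format.

```python
import torch
from torch.optim import Adam

def train(model, loader, epochs=50, lr=1e-3):
    """Sketch of the training loop; `loader` is assumed to yield graphs with
    .x, .edge_index and ground-truth .edge_labels (batch_size=5)."""
    optimizer = Adam(model.parameters(), lr=lr)
    criterion = torch.nn.BCEWithLogitsLoss()
    for epoch in range(epochs):
        total, n = 0.0, 0
        for batch in loader:
            optimizer.zero_grad()
            logits = model(batch.x, batch.edge_index)
            loss = criterion(logits, batch.edge_labels.float())
            loss.backward()
            optimizer.step()
            total, n = total + loss.item(), n + 1
        print(f"epoch {epoch}: average BCE = {total / n:.4f}")
```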
Examples [predictions vs GT]
Questions
- Should feature normalization ( ) be calculated feature map by feature map or over the whole dataset? (The two options are sketched below.)
- Why does the loss start low and then increase? Is that overfitting? Too high a learning rate?
- Could adding edge features to the message-passing formula improve model fitting?
- Adam or SGD as the optimizer?
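For the first question, the two normalization options could look like this; the shapes follow the tag maps above, and neither variant is claimed to be what the model actually uses.

```python
import torch

def normalize_per_map(tags):
    """Option A: statistics computed independently for each feature map.
    `tags` is a batch of tag maps, e.g. shape [N, 14, 128, 128]."""
    mean = tags.mean(dim=(-2, -1), keepdim=True)
    std = tags.std(dim=(-2, -1), keepdim=True)
    return (tags - mean) / (std + 1e-6)

def normalize_dataset(tags, dataset_mean, dataset_std):
    """Option B: per-channel statistics computed once over the whole dataset."""
    return (tags - dataset_mean) / (dataset_std + 1e-6)
```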
ToDo:
- Calculate MS-COCO's score metric "AP @ OKS-50, OKS-75, etc." after complete training (OKS = Σᵢ exp(−dᵢ² / (2 s² kᵢ²)) δ(vᵢ > 0) / Σᵢ δ(vᵢ > 0), where the kᵢ are the per-keypoint falloff constants).
- Calculate accuracy (percentage of correctly predicted edges).
- Incorporate edge attributes into the graph convolution operation (one possible formulation is sketched below).
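The equation the last item points to is not in the extracted slide text, so the layer below is only one possible way to fold edge attributes into the convolution: an EdgeConv-style layer, written with PyTorch Geometric's MessagePassing base class, whose message MLP also sees the edge attribute e_ij.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import MessagePassing

class EdgeFeatureConv(MessagePassing):
    """Hypothetical EdgeConv variant: messages are computed from
    [x_i, x_j - x_i, e_ij] instead of just [x_i, x_j - x_i]."""
    def __init__(self, in_dim, edge_dim, out_dim):
        super().__init__(aggr="max")
        self.mlp = nn.Sequential(
            nn.Linear(2 * in_dim + edge_dim, out_dim),
            nn.ReLU(),
        )

    def forward(self, x, edge_index, edge_attr):
        return self.propagate(edge_index, x=x, edge_attr=edge_attr)

    def message(self, x_i, x_j, edge_attr):
        # per-edge message that combines both endpoint features and the edge attribute
        return self.mlp(torch.cat([x_i, x_j - x_i, edge_attr], dim=-1))
```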