“Transformer Networks: How They Work and Why They Matter,” a Presentation from Synthpop AI
About This Presentation
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2025/10/transformer-networks-how-they-work-and-why-they-matter-a-presentation-from-synthpop-ai/
Rakshit Agrawal, Principal AI Scientist at Synthpop AI, presents the “Transformer Networks: How They Work and Why They Matter” tutorial at the May 2025 Embedded Vision Summit.
Transformer neural networks have revolutionized artificial intelligence by introducing an architecture built around self-attention mechanisms. This has enabled unprecedented advances in understanding sequential data, such as human languages, while also dramatically improving accuracy on nonsequential tasks like object detection.
In this talk, Agrawal explains the technical underpinnings of transformer architectures, from input data tokenization and positional encoding to the self-attention mechanism, which is the core component of these networks. He also explores how transformers have influenced the direction of AI research and industry innovation. Finally, he touches on trends that will likely influence how transformers evolve in the near future.
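The slides themselves are not reproduced on this page. As a rough illustration of the self-attention mechanism the abstract refers to, here is a minimal NumPy sketch of scaled dot-product self-attention; all names, dimensions, and weight values are illustrative and are not taken from the presentation.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of token embeddings.

    X:          (seq_len, d_model) token embeddings (positional encoding assumed already added)
    Wq, Wk, Wv: (d_model, d_k) projection matrices (randomly chosen here for illustration)
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv                 # project tokens to queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # pairwise similarity between positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ V                               # each output mixes information from all positions

# Toy example: 4 tokens with 8-dimensional embeddings
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 8)
```

Because every output position attends to every input position in a single step, the mechanism captures long-range dependencies without the sequential recurrence of earlier architectures.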
Size: 1.43 MB
Language: English
Added: Oct 09, 2025
Slides: 26 pages
Slide Content
Transformer Networks: How They Work and Why They Matter
Rakshit Agrawal
Principal AI Scientist
Synthpop AI