Writing GPU-Ready AI Models in Pure Java with Babylon
About This Presentation
Project Babylon introduces the experimental Code Reflection technology that lets you define machine learning logic in plain Java code, without needing Python or external model files. It then uses the Foreign Function & Memory (FFM) API to connect your code to native runtimes such as ONNX Runtime for fast inference, including GPU acceleration. Furthermore, the Heterogeneous Accelerator Toolkit (HAT) provides a developer-facing programming model for writing and composing compute kernels, which can be applied more broadly, allowing Java libraries to seamlessly harness GPU power for high-performance computing tasks.
Presented at Devoxx Belgium 2025 by Ana-Maria Mihalceanu and Lize Raes
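The FFM binding mentioned above works roughly as in the sketch below, which calls the C library's strlen through the standard java.lang.foreign API (JDK 22+). Here strlen merely stands in for a native entry point such as one exposed by ONNX Runtime; the class name is ours, not from the deck.

```java
import java.lang.foreign.*;
import java.lang.invoke.MethodHandle;

public class FfmStrlen {
    public static void main(String[] args) throws Throwable {
        Linker linker = Linker.nativeLinker();
        // Bind a Java method handle to the native strlen symbol:
        // long strlen(const char *s)
        MethodHandle strlen = linker.downcallHandle(
                linker.defaultLookup().find("strlen").orElseThrow(),
                FunctionDescriptor.of(ValueLayout.JAVA_LONG, ValueLayout.ADDRESS));
        try (Arena arena = Arena.ofConfined()) {
            // Copy a Java string into native memory as a NUL-terminated C string.
            MemorySegment cString = arena.allocateFrom("Babylon");
            long len = (long) strlen.invokeExact(cString);
            System.out.println(len); // 7
        }
    }
}
```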
Slide Content
Writing GPU-Ready AI Models in Pure Java with Babylon
Ana-Maria Mihalceanu
Senior Developer Advocate
Java Platform Group @ Oracle
Lize Raes
Senior Developer Advocate
Java Platform Group @ Oracle
@CodeReflection marks areas of Java source code to reflect over, giving access to them as code models at compile time and at runtime.
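A minimal sketch of what that looks like, assuming the jdk.incubator.code package from a current Babylon early-access build (the API is experimental and the package has moved between builds, so treat the imports as indicative):

```java
import jdk.incubator.code.CodeReflection;
import jdk.incubator.code.Op;
import java.lang.reflect.Method;

public class Reflectable {
    // The annotation asks the compiler to retain a code model
    // for this method alongside the usual bytecode.
    @CodeReflection
    static float mulAdd(float a, float b, float c) {
        return a * b + c;
    }

    public static void main(String[] args) throws Exception {
        Method m = Reflectable.class.getDeclaredMethod(
                "mulAdd", float.class, float.class, float.class);
        // At runtime the code model can be recovered and inspected,
        // here printed in its textual form.
        Op.ofMethod(m).ifPresent(op -> System.out.println(op.toText()));
    }
}
```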
Extend Java's Reach to Foreign Programming Models with Project Babylon
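As a taste of that reach, here is a sketch in the style of the HAT samples from the Babylon repository: a square kernel dispatched across an array through an Accelerator. The hat.* types (Accelerator, ComputeContext, KernelContext, Backend, S32Array) are experimental and may differ between builds.

```java
import static java.lang.invoke.MethodHandles.lookup;

import hat.Accelerator;
import hat.ComputeContext;
import hat.KernelContext;
import hat.backend.Backend;
import hat.buffer.S32Array;
import jdk.incubator.code.CodeReflection;

public class SquareCompute {
    @CodeReflection
    public static int square(int v) {
        return v * v;
    }

    // Kernel: one invocation per element, indexed by kc.x.
    @CodeReflection
    public static void squareKernel(KernelContext kc, S32Array arr) {
        if (kc.x < kc.maxX) {
            arr.array(kc.x, square(arr.array(kc.x)));
        }
    }

    // Compute entry point: dispatches the kernel across the array.
    @CodeReflection
    public static void square(ComputeContext cc, S32Array arr) {
        cc.dispatchKernel(arr.length(), kc -> squareKernel(kc, arr));
    }

    public static void main(String[] args) {
        // Pick the first available backend (e.g. OpenCL or CUDA).
        var accelerator = new Accelerator(lookup(), Backend.FIRST);
        var arr = S32Array.create(accelerator, 32);
        for (int i = 0; i < arr.length(); i++) {
            arr.array(i, i);
        }
        accelerator.compute(cc -> SquareCompute.square(cc, arr));
    }
}
```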