ZGC: A Decade of Innovation by Stefan Johansson

ScyllaDB · 22 slides · Oct 22, 2025

About This Presentation

ZGC has been in development for roughly a decade now. This talk explores the current state of ZGC in JDK 25 and looks at how its performance has improved over the years.


Slide Content

A ScyllaDB Community
ZGC: JDK 25 Performance Update
Stefan Johansson
OpenJDK GC Engineer

A quick look at ZGC

The goals of ZGC
Low latency
Pause times below 1 ms
Scalability
Handle TB sized heaps
Auto-tuning
Minimal configuration required

ZGC Milestones
JDK 11: Experimental feature
JDK 12: Concurrent class unloading
JDK 14: Available on Windows and macOS
JDK 15: Production ready
JDK 16: Concurrent stack scanning
JDK 17: Automatic scaling of worker threads
JDK 21: Generational mode (off by default), enabled with -XX:+ZGenerational
JDK 25: ZGC always generational
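As a rough sketch of how these milestones translate into command-line flags (the application jar name is a placeholder):

  # JDK 11: experimental, must be unlocked first
  java -XX:+UnlockExperimentalVMOptions -XX:+UseZGC -jar app.jar

  # JDK 15 and later: production ready, a single flag is enough
  java -XX:+UseZGC -jar app.jar

  # JDK 21: opt in to the generational mode
  java -XX:+UseZGC -XX:+ZGenerational -jar app.jar

  # JDK 25: ZGC is always generational, no extra flag needed
  java -XX:+UseZGC -jar app.jar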

Current status
Low latency
•Pause only for synchronization
•Heavy work done concurrently
Scalability
•Support 16 TB heaps
•Pauses still short
Auto-tuning
•Little configuration required
•Just set the heap size
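A minimal sketch of what "just set the heap size" means in practice (the 16 GB heap and jar name are placeholders); ZGC scales its worker threads and other settings automatically:

  java -XX:+UseZGC -Xmx16g -jar my-app.jar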

Performance overview

What is good performance?
■It all depends
●Throughput
●Latency
●Footprint

■Trade-offs between those aspects

How does ZGC achieve good performance?
■Great low latency alternative
●Basically no pauses

■By doing work concurrently
●Determining liveness
●Moving live objects
●Freeing memory

■Keep the concurrency overhead low
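One way to observe that marking, relocation, and memory reclamation run concurrently with the application is to enable unified GC logging; the tag selection below is just one reasonable choice:

  java -XX:+UseZGC -Xlog:gc,gc+phases -jar my-app.jar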

Low concurrency overhead
■Return memory as soon as possible
●Avoid allocation stalls

■Keep CPU overhead at a minimum
●Efficient barriers
●Smart algorithms

■Colored pointers
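The colored-pointer idea can be illustrated with a small, purely conceptual Java sketch. ZGC itself is implemented in C++ inside HotSpot, and the real bit layout and slow-path logic are more involved; the masks and names below are made up for illustration only.

  // Conceptual sketch: metadata ("color") bits embedded in a 64-bit reference,
  // checked by a load barrier before the reference is used.
  final class ColoredPointerSketch {
      // Hypothetical layout: low 44 bits hold the address, one higher bit is the "good" color.
      static final long ADDRESS_MASK = (1L << 44) - 1;
      static final long GOOD_COLOR   = 1L << 44;

      // Load barrier sketch: fast path is a mask-and-compare, slow path heals the pointer.
      static long loadBarrier(long coloredPointer) {
          if ((coloredPointer & ~ADDRESS_MASK) == GOOD_COLOR) {
              return coloredPointer;          // fast path: color already good
          }
          return slowPath(coloredPointer);    // slow path: mark/relocate/remap, then recolor
      }

      static long slowPath(long coloredPointer) {
          // A real collector would mark the object, move it, or update the reference here.
          return (coloredPointer & ADDRESS_MASK) | GOOD_COLOR;
      }
  }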

Performance details

Pause times (six chart slides; the underlying numbers are not captured in the slide text)
Application throughput

Application latency

Thank you! Let’s connect.
Stefan Johansson
@kstefanj
kstefanj.github.io