Memory, CPU and garbage collector considerations to note when deploying your Java project into a container-based orchestrator
Size: 1.47 MB
Language: en
Added: Oct 11, 2024
Slides: 29 pages
Slide Content
Considerations when deploying Java applications to Kubernetes* (*or any container-based environment) @superserch
Agenda
- Kubernetes – request and limit
- JVM – threads and memory spaces
- Default ergonomics on the JVM
- Conflict
- Tuning recommendations
- Cloud Foundry Paketo buildpacks
Kubernetes – request and limit
- Platform that provides services to deploy Pods (one or more containers)
- Splits the CPU and memory of worker nodes between all containers on that node
- Uses Linux cgroup v2 technology to constrain resources
- CPU is shared as processing power per time unit – throttling might happen
- Memory covers all processes running inside a container – the container can be OOM-killed
- Request – what needs to be available on a node for the Pod to be scheduled
- Limit – the usage cap that the Linux kernel will enforce on the container
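The request/limit split above can be sketched in a Pod spec; all names and values here are illustrative placeholders, not taken from the deck:

```yaml
# Illustrative Pod spec: the scheduler places the Pod using "requests";
# cgroup v2 enforces "limits" at runtime.
apiVersion: v1
kind: Pod
metadata:
  name: java-app            # placeholder name
spec:
  containers:
    - name: app
      image: example/java-app:latest   # placeholder image
      resources:
        requests:
          cpu: "1"          # must be free on a node for scheduling
          memory: "1Gi"
        limits:
          cpu: "2"          # beyond this the container is throttled
          memory: "1Gi"     # beyond this the container is OOM-killed
```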
JVM – threads and memory spaces
- Java Virtual Machine – memory is managed by the JVM, not the programmer
- Every Java application is multithreaded
- The JVM configures itself based on the detected environment
- The JVM has different memory areas where it stores data:
- Heap memory – user data
- Metaspace memory – class metadata and the constant pool
- Stack and direct memory
- Not every OutOfMemoryError causes the JVM to crash
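A minimal sketch of how a program can ask the running JVM what it configured for itself, using the standard `java.lang.management` beans:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryPoolMXBean;

// Prints the sizes the JVM chose for its memory areas and the
// processor count it detected in the current environment.
public class JvmAreas {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        System.out.println("Heap max:      " + mem.getHeapMemoryUsage().getMax());
        System.out.println("Non-heap used: " + mem.getNonHeapMemoryUsage().getUsed());
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            // e.g. heap pools (Eden, Old Gen) and non-heap pools (Metaspace, Code Cache)
            System.out.println(pool.getType() + " pool: " + pool.getName());
        }
        System.out.println("Processors: " + Runtime.getRuntime().availableProcessors());
    }
}
```

Running this inside a container is a quick way to verify what the JVM actually detected under the cgroup limits.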
Ergonomics and heuristics on the JVM
- Ergonomics: the study of working conditions, especially the design of equipment and furniture, in order to help people work more efficiently
- Heuristics: a method of solving problems by finding practical ways of dealing with them, learning from past experience
Ergonomics on JVM – garbage collector
- Server-class machine: G1GC
- Not a server-class machine: SerialGC
- A server-class machine has more than 1 available processor and more than 2 GB - 256 MB (1792 MB) of available memory
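The server-class heuristic above can be sketched as a small predicate; the method names are illustrative, not the JVM's internal API:

```java
// Sketch of the "server-class machine" heuristic: more than one
// available processor AND more than 2 GB - 256 MB (1792 MB) of memory.
public class ServerClass {
    static final long MB = 1024L * 1024L;

    static boolean isServerClass(int processors, long memoryBytes) {
        return processors > 1 && memoryBytes > 1792 * MB;
    }

    static String defaultGc(int processors, long memoryBytes) {
        return isServerClass(processors, memoryBytes) ? "G1GC" : "SerialGC";
    }

    public static void main(String[] args) {
        System.out.println(defaultGc(1, 4096 * MB)); // SerialGC: only one CPU
        System.out.println(defaultGc(4, 1024 * MB)); // SerialGC: memory too small
        System.out.println(defaultGc(4, 4096 * MB)); // G1GC: server-class
    }
}
```

This is why a container limited to 1 CPU or under ~1792 MB silently falls back to SerialGC even on a large node.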
Ergonomics on JVM – Memory Heap
- Memory > 512 MB: MaxHeap is 25% of available memory
- Memory < 512 MB: MaxHeap varies from 25% up to 50%
- jmap -histo[:live] <pid> lists instances and bytes per class
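A deliberately simplified model of the default heap sizing described above, assuming the small-container case lands at the 50% endpoint (the real ergonomics code interpolates between MinRAMPercentage and MaxRAMPercentage and has more cases):

```java
// Simplified sketch of default max-heap sizing: 25% of memory for
// containers at or above 512 MB, up to 50% below that. This is a
// teaching model of the heuristic, not the exact HotSpot formula.
public class DefaultHeap {
    static final long MB = 1024L * 1024L;

    static long defaultMaxHeap(long physBytes) {
        return physBytes >= 512 * MB
                ? physBytes / 4   // 25% (MaxRAMPercentage default)
                : physBytes / 2;  // up to 50% (MinRAMPercentage default)
    }

    public static void main(String[] args) {
        System.out.println(defaultMaxHeap(4096 * MB) / MB + " MB"); // 1024 MB
        System.out.println(defaultMaxHeap(256 * MB) / MB + " MB");  // 128 MB
    }
}
```

The practical takeaway: a 4 GB container gets only a ~1 GB heap by default unless you size it explicitly.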
Ergonomics on JVM – Memory Metaspace
- This is off-heap memory
- Initial reserved space: 1 GB
- If CompressedOops are on, Metaspace will have a location table
- Each class uses ~1 KB of class space and ~8 KB of non-class space
- jcmd <pid> VM.metaspace shows the details
- Can be limited with -XX:MaxMetaspaceSize and -XX:CompressedClassSpaceSize
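Metaspace can also be inspected from inside the process, a programmatic cousin of `jcmd <pid> VM.metaspace`; this assumes a HotSpot JVM, where the pool is literally named "Metaspace":

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

// Finds the Metaspace memory pool and reports its current usage.
public class MetaspacePeek {
    static long metaspaceUsed() {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getName().equals("Metaspace")) {
                return pool.getUsage().getUsed();
            }
        }
        return -1; // pool not found (non-HotSpot JVM)
    }

    public static void main(String[] args) {
        System.out.println("Metaspace used: " + metaspaceUsed() + " bytes");
    }
}
```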
Conflict
- Kubernetes will try to maintain the desired state:
- If a container goes beyond its memory limit, it will be killed
- If a container goes beyond its CPU quota, it will be throttled
- If a container does not respond to its liveness/readiness probes, it will be restarted
- The JVM will use as many threads as possible for concurrent tasks:
- If too constrained, internal processes will be limited (GC, default executors)
- When under load, the JVM might be paused by the Linux kernel because of the CPU quota
- If memory limits are not set, the Java heap might get surprising values
Tuning recommendations – CPU
- Use -XX:ActiveProcessorCount to set the desired concurrency level
- The JVM won't consume more CPU than its quota, but this setting helps it select the parallelism of internal structures
- It is a good idea to know the CPU profile of your application; you could use JConsole, JFR, or even top
- Monitor Kubernetes for CPU throttling of your containers
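A quick way to see the concurrency level the JVM settled on; internal pools such as the common ForkJoinPool derive their parallelism from the detected processor count, which `-XX:ActiveProcessorCount` overrides:

```java
import java.util.concurrent.ForkJoinPool;

// Reports the processor count the JVM detected and the parallelism
// the common pool derived from it.
public class CpuProfile {
    public static void main(String[] args) {
        int cpus = Runtime.getRuntime().availableProcessors();
        int commonPool = ForkJoinPool.commonPool().getParallelism();
        System.out.println("Detected processors:     " + cpus);
        System.out.println("Common pool parallelism: " + commonPool);
        // Relaunch with e.g. -XX:ActiveProcessorCount=2 and both
        // values will follow the override instead of the cgroup quota.
    }
}
```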
Tuning recommendations – garbage collectors: SerialGC
- Default if fewer than 2 processors or less than 1792 MB of memory
- No overhead because it stops the world to GC
- GC pauses might be an issue if the heap is over 1 GB or the container is throttled
- High tail-latency effect
- -XX:+UseSerialGC
Tuning recommendations – garbage collectors: ParallelGC
- No overhead because it stops the world to GC
- GC pauses might be an issue if the heap is over 4 GB or the container is throttled
- Great for batch workloads
- High tail-latency effect
- Configure at least a 2000m cpu_limit
- -XX:+UseParallelGC
Tuning recommendations – garbage collectors: G1GC
- Default if at least 2 processors and more than 1792 MB of memory
- Some overhead because it marks regions that have changed
- GC pauses might be an issue if allocation rates are too high
- High tail-latency effect
- Configure at least a 2000m cpu_limit
- -XX:+UseG1GC
Tuning recommendations – garbage collectors: ZGC
- Moderate overhead because it does marking concurrently with the application
- GC pauses are usually under 1 ms
- Low tail-latency effect
- JDK 17+
- Since JDK 23 it is generational
- Configure at least a 2000m cpu_limit
- -XX:+UseZGC
Tuning recommendations – garbage collectors: ShenandoahGC
- Moderate overhead because it does marking concurrently with the application
- GC pauses are usually under 10 ms
- Moderate tail-latency effect
- JDK 11+
- Configure at least a 2000m cpu_limit
- -XX:+UseShenandoahGC
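After choosing one of the flags above, it is worth confirming which collector the JVM actually selected; the standard GC beans expose collector-specific names (for example "G1 Young Generation" and "G1 Old Generation" under G1):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Lists the active garbage collectors and their collection counts,
// confirming which GC the launch flags selected.
public class WhichGc {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName() + " - collections: " + gc.getCollectionCount());
        }
    }
}
```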
Tuning recommendations – memory
- It is convenient to know what your application has and does:
- Number of classes to be loaded
- Use of ByteBuffer
- Size of the live data set
- Size of the work area
- Number of threads
Tuning recommendations – memory
- Set the heap size:
- -Xmx, e.g. -Xmx3g
- -XX:MaxRAMPercentage, e.g. -XX:MaxRAMPercentage=75
- Consider that memory_limit > heap + Metaspace + stack + direct memory + spare
- Monitor Kubernetes OOM restarts and adjust the heap size or memory_limit
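A back-of-the-envelope check of the inequality above; every size here is an illustrative assumption, not a measured value:

```java
// memory_limit must exceed: heap + Metaspace + (stack * threads)
// + direct memory + spare. All inputs below are example figures.
public class MemoryBudget {
    static final long MB = 1024L * 1024L;

    static long requiredLimit(long heap, long metaspace, long stackPerThread,
                              int threads, long direct, long spare) {
        return heap + metaspace + stackPerThread * threads + direct + spare;
    }

    public static void main(String[] args) {
        // 3 GB heap, 256 MB Metaspace, 200 threads at 1 MB stack each,
        // 64 MB direct memory, 128 MB spare for the OS and native code.
        long needed = requiredLimit(3072 * MB, 256 * MB, MB, 200, 64 * MB, 128 * MB);
        System.out.println("memory_limit should exceed " + needed / MB + " MB"); // 3720 MB
    }
}
```

Sizing the container at exactly -Xmx is the classic mistake this arithmetic catches: the non-heap terms push total usage past the limit and the kernel OOM-kills the container.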
Tuning recommendations – Kubernetes
- Make cpu_request = cpu_limit and memory_request = memory_limit
- Kubernetes schedules pods based on the request values
- Assign enough CPU so the application and its probes can run concurrently
Tuning recommendations
- What about setting -Xms = -Xmx?
- It not only reserves but also commits the memory for the heap
- It avoids heap resizing
Cloud Foundry Paketo buildpacks
- Nice tool to build containers
- At this moment it defaults to the BellSoft Liberica JRE 17
- It can pack a JRE or even a jlink-built version of the selected JDK
- It can configure some application servers or Spring Boot applications
- Adds a launcher layer that inspects the environment and sets JVM parameters
- This configuration is aimed at avoiding OOM restarts
Cloud Foundry Paketo buildpacks
- Heap = Total container memory - Non-heap - Headroom
- Non-heap = Direct memory + Metaspace + Reserved code cache + (Thread stack * Thread count)
- Headroom defaults to 0
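The calculator formula above, sketched with illustrative inputs (the real calculator derives the Metaspace term from the loaded-class count rather than taking it directly):

```java
// Sketch of the Paketo memory-calculator arithmetic:
// non-heap = direct + metaspace + code cache + stack * threads
// heap     = total - non-heap - headroom (headroom defaults to 0)
public class PaketoCalc {
    static final long MB = 1024L * 1024L;

    static long nonHeap(long direct, long metaspace, long codeCache,
                        long threadStack, int threadCount) {
        return direct + metaspace + codeCache + threadStack * threadCount;
    }

    static long heap(long total, long nonHeap, long headroom) {
        return total - nonHeap - headroom;
    }

    public static void main(String[] args) {
        // Example: 1 GiB container, 10 MB direct, 90 MB Metaspace,
        // 240 MB code cache, 250 threads at 1 MB stack each.
        long nh = nonHeap(10 * MB, 90 * MB, 240 * MB, MB, 250);
        System.out.println("non-heap = " + nh / MB + "M");        // 590M
        System.out.println("-Xmx = " + heap(1024 * MB, nh, 0) / MB + "M"); // 434M
    }
}
```

This also makes the next slides concrete: the non-heap terms are fixed up front, so only the heap term shrinks as the container budget tightens.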
Paketo – change defaults
- The calculator can be configured by passing environment variables:
- BPL_JVM_HEAD_ROOM (percentage of memory to leave unused)
- BPL_JVM_LOADED_CLASS_COUNT (number of loaded classes)
- BPL_JVM_THREAD_COUNT (number of expected threads)
- BPL_JVM_CLASS_ADJUSTMENT (as a %, increments MaxMetaspace)
- BPL_JAVA_NMT_ENABLED (true or false; disabling it skips the native-memory report on OOM)
- JAVA_TOOL_OPTIONS (JVM launch flags; overrides the Paketo flags)
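In a Kubernetes deployment these land in the container's environment; the values below are placeholders, not recommendations:

```yaml
# Illustrative container env block overriding the Paketo
# memory-calculator defaults at launch time.
env:
  - name: BPL_JVM_THREAD_COUNT
    value: "100"            # expected thread count (placeholder)
  - name: BPL_JVM_HEAD_ROOM
    value: "5"              # leave 5% of memory unused (placeholder)
  - name: JAVA_TOOL_OPTIONS
    value: "-XX:MaxDirectMemorySize=64M"   # overrides the Paketo flag
```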
Paketo conclusion
- Sets parameters that fix the size of the non-heap memory areas
- This can cause OOMEs if those sizes are not adjusted properly
- Containers may fail to start with ~512 MB of memory or less
- Once it calculates a fixed non-heap size for a container, if the container grows the only parameter that changes is -Xmx
- -XX:MaxDirectMemorySize=10M might be too small (the JVM's own default is the -Xmx value); it can be changed by passing it in JAVA_TOOL_OPTIONS