
Java HotSpot VM Options

Please note that this page only applies to JDK 7 and earlier releases. For JDK 8, please see the Windows, Solaris, Linux and Mac OS X reference pages. This document provides information on typical command-line options and environment variables that can affect the performance characteristics of the Java HotSpot Virtual Machine. Unless otherwise noted, all information in this document pertains to both the Java HotSpot Client VM and the Java HotSpot Server VM.

Categories of Java HotSpot VM Options

Standard options recognized by the Java HotSpot VM are described on the Java Application Launcher reference pages for Windows and Solaris & Linux. Options that begin with -X are non-standard (not guaranteed to be supported on all VM implementations) and are subject to change without notice in subsequent releases of the JDK.

Some Useful -XX Options

Default values are listed for Java SE 6 for Solaris SPARC with -server. The options below are loosely grouped into categories.

Behavioral Options

Getting alerts when Java processes crash

When bugs occur in the Java runtime environment, most administrators want to be notified so they can take corrective action. These actions can range from restarting a Java process or collecting postmortem data to calling in application support personnel to debug the situation further. The Java runtime has a number of useful options that can be used for this purpose. The first option is "-XX:OnOutOfMemoryError", which allows a command to be run when the runtime environment incurs an out-of-memory condition.

$ java -XX:OnOutOfMemoryError="logger Java process %p encountered an OOM condition" …

Syslog entries similar to the following will be generated each time an OOM event occurs:

Jan 21 19:59:17 nevadadev root: [ID 702911 daemon.notice] Java process 19001 encountered an OOM condition

Another super useful option is "-XX:OnError", which allows a command to be run when the runtime environment incurs a fatal error (i.e., a hard crash).
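A quick way to see the -XX:OnOutOfMemoryError hook fire is to provoke an OOM deliberately. This is a minimal sketch, not part of the original post; the absurd allocation size is chosen so the error is thrown immediately rather than after slowly exhausting the heap. The hook runs when the VM first throws the error, even if the application catches it.

```java
public class OomDemo {
    // Provokes an OutOfMemoryError with a deliberately absurd allocation
    // (~16 GiB, also beyond HotSpot's maximum array length), so it fails fast.
    static String provoke() {
        try {
            long[] huge = new long[Integer.MAX_VALUE];
            return "allocated " + huge.length; // unreachable in practice
        } catch (OutOfMemoryError e) {
            return "caught OutOfMemoryError";
        }
    }

    public static void main(String[] args) {
        System.out.println(provoke());
    }
}
```

Running it as `java -XX:OnOutOfMemoryError="logger Java process %p hit OOM" OomDemo` should produce a syslog entry like the one shown above.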

More Odds and Ends of Hprof heap dump format

The hprof binary file format is back in my head again, as I consider whether to use it as a native heap dump format. When I wrote about it previously, I was more interested in how to parse it. Now I must consider its strengths and limitations.

Strengths: The format is widely supported by profilers and heap analyzers. It is written by the JVM directly and by the hprof agent. It has a compact encoding of both primitive and reference fields. Hprof class and object identifiers are stable across multiple heap dumps written in the same JVM lifetime. The JVM implementation is very fast. The hprof implementation is very readable, excellent code.

Weaknesses: The JVM uses machine addresses as object identifiers, which change with every GC, so two heap dumps from the same JVM can't be compared object-wise. JVM heap dumps consistently have dangling references to objects that are not reported in the heap dump and, even if -live is specified, objects that are unreachable from any root.

Conclusion
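For reference, an hprof dump of the kind discussed above can be produced programmatically on HotSpot via the HotSpotDiagnosticMXBean (the same mechanism jmap uses). A small sketch, assuming a HotSpot JVM; the binary format begins with the ASCII header "JAVA PROFILE", which makes a handy sanity check:

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.io.File;
import java.io.FileInputStream;
import java.lang.management.ManagementFactory;

public class HprofDump {
    // Writes a heap dump to 'path'; live=true dumps only reachable objects.
    static void dump(String path, boolean live) throws Exception {
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        bean.dumpHeap(path, live);
    }

    // Returns the first 12 bytes of the file as ASCII: the hprof format header.
    static String header(String path) throws Exception {
        byte[] buf = new byte[12];
        try (FileInputStream in = new FileInputStream(path)) {
            in.read(buf);
        }
        return new String(buf, "US-ASCII");
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("demo", ".hprof");
        f.delete(); // dumpHeap refuses to overwrite an existing file
        dump(f.getPath(), true);
        System.out.println(header(f.getPath()));
        f.delete();
    }
}
```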

Joshua Bloch: Performance Anxiety – on Performance Unpredictability, Its Measurement and Benchmarking

Joshua Bloch gave a great talk called Performance Anxiety (30 min, via Parleys; slides also available) at Devoxx 2010. The main message, as I read it, was: Nowadays, performance is completely non-predictable. You have to measure it and employ proper statistics to get some meaningful results. Microbenchmarking is very, very hard to do correctly. No, you misunderstand me, I mean even harder than that! There has been another blog post about it, but I'd like to record some more detailed remarks here. Today we can't estimate performance, we must measure it, because the systems (JVM, OS, processor, …) are very complex, with many different heuristics on various levels, and thus performance is highly unpredictable. Example: results during a single JVM run may be consistent (warm-up, then faster) but can vary between JVM executions by as much as 20%. "Benchmarking is really, really hard!" Joshua mentions a couple of interesting papers; you should check the slides for them.

Personal touch Conclusion
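The measurement discipline the talk argues for can be sketched in a few lines: warm up first so the JIT has compiled the code, take many samples, and report mean and standard deviation rather than a single timing. This is an illustrative sketch only (the workload and iteration counts are arbitrary); for serious work a harness such as JMH or Caliper handles far more of the pitfalls.

```java
public class MeasureDemo {
    // An arbitrary workload for the JIT to chew on.
    static long workload() {
        long sum = 0;
        for (int i = 0; i < 100_000; i++) sum += (long) i * i;
        return sum;
    }

    // Returns {mean, stddev} over 'samples' timed runs, in nanoseconds,
    // after 'warmup' untimed runs to let the JIT compile the workload.
    static double[] measure(int warmup, int samples) {
        for (int i = 0; i < warmup; i++) workload();
        double[] t = new double[samples];
        for (int i = 0; i < samples; i++) {
            long start = System.nanoTime();
            workload();
            t[i] = System.nanoTime() - start;
        }
        double mean = 0;
        for (double x : t) mean += x;
        mean /= samples;
        double var = 0;
        for (double x : t) var += (x - mean) * (x - mean);
        return new double[] { mean, Math.sqrt(var / samples) };
    }

    public static void main(String[] args) {
        double[] r = measure(1_000, 50);
        System.out.printf("mean %.0f ns, stddev %.0f ns%n", r[0], r[1]);
    }
}
```

Run it twice in separate JVMs and the reported means will often differ noticeably, which is exactly the run-to-run variance the talk describes.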

Visualising Garbage Collection in the JVM

Recently, I have been working with a number of customers on JVM tuning exercises. It seems that knowledge of how garbage collection works, and of how the JVM uses memory, is not widespread among developers and administrators. So I decided to write a very basic introduction and an example that will let you see it happening in real time! This post is about the HotSpot JVM – that's the 'normal' JVM from Oracle (previously Sun). First, let's take a look at the way the JVM uses memory.

The Permanent Generation: The permanent generation is used only by the JVM itself, to keep data that it requires. The size of the permanent generation is controlled by two JVM parameters.

The Heap: The heap is the main area of memory. The size of the heap is also controlled by JVM parameters. When you create an object, e.g. when you say byte[] data = new byte[1024], that object is created in the area called Eden. The following explanation has been simplified for the purposes of this post. Like this:
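The regions described above are visible at runtime through the standard java.lang.management API: each GC-managed region appears as a MemoryPoolMXBean. A small sketch (the exact pool names vary by collector, e.g. "PS Eden Space" vs "Eden Space"):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.util.ArrayList;
import java.util.List;

public class PoolDemo {
    // Returns the names of all memory pools the JVM reports, typically
    // including Eden, the survivor spaces, the old generation, and
    // (on JDK 7 and earlier) the permanent generation.
    static List<String> poolNames() {
        List<String> names = new ArrayList<>();
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            names.add(pool.getName());
        }
        return names;
    }

    public static void main(String[] args) {
        for (String name : poolNames()) {
            System.out.println(name);
        }
    }
}
```

This is the same information JConsole and VisualVM plot in their memory tabs, so it is a handy way to watch Eden fill and drain in real time.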

How to tame Java GC pauses? Surviving 16 GiB heaps and greater.

Memory is cheap and abundant on modern servers. Unfortunately, there is a serious obstacle to using these memory resources to the full in Java programs. Garbage collector pauses are a serious threat for a JVM with a large heap size. There are very few good sources of information about practical tuning of Java GC, and unfortunately they seem to be relevant only for 512 MiB - 2 GiB heap sizes. You may also want to look at two articles explaining particular aspects of HotSpot collectors in more detail: "Understanding GC pauses in JVM, HotSpot's minor GC" and "Understanding GC pauses in JVM, HotSpot's CMS collector".

Target application domain: GC tuning is very application specific. The heap is used to store data structures in memory.

Economy of garbage collection: Garbage collection algorithms can be either compacting or non-compacting. The solution to this Gordian knot lies in the "weak generational hypothesis": most objects become garbage shortly after creation (die young).

Object demography
Pauses in CMS
See also
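The "die young" pattern behind the weak generational hypothesis is easy to demonstrate: allocate a stream of short-lived objects and watch the collector counts climb through GarbageCollectorMXBean. A hedged sketch (with a very large or idle heap the delta may be zero; the allocation sizes are arbitrary):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcDemo {
    // Sums collection counts across all collectors (young and old generation).
    static long totalCollections() {
        long n = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long c = gc.getCollectionCount();
            if (c > 0) n += c; // getCollectionCount() may return -1 if undefined
        }
        return n;
    }

    // Churns out short-lived objects; returns a checksum so the JIT
    // cannot optimize the allocation away.
    static long churn(int iterations) {
        long sum = 0;
        for (int i = 0; i < iterations; i++) {
            byte[] garbage = new byte[1024]; // dead almost immediately
            sum += garbage.length;
        }
        return sum;
    }

    public static void main(String[] args) {
        long before = totalCollections();
        churn(5_000_000);
        System.out.println("collections triggered: " + (totalCollections() - before));
    }
}
```

Because nearly all of that garbage dies in Eden, the collections triggered are cheap minor GCs, which is precisely why generational collection pays off.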

Java: All about 64-bit programming

I found this article series very interesting: All about 64-bit programming in one place, which collects "a lot of links on the topic of 64-bit C/C++ software development." However, some of these issues are relevant to Java and can make a difference.

The size_t in C++ is 64-bit: Java uses the int type for sizes, and this doesn't change in a 64-bit JVM. As of mid 2011, for £1K you can buy a PC with 24 GB of memory and for £21K you can buy a server with 512 GB of memory. This is already a problem for memory mapping a file larger than 2 GB. BTW: I have tried using the underlying library directly using reflection, which supports `long` lengths, and I could get this working for reading files larger than 2 GB, but not writing.

x64 has more registers than x86: This is a small advantage.

64-bit JVM can access more memory: This is essential if you need more than 1.2-3.5 GB of memory (depending on the OS). Link: What 64-bit systems are.

Support for 32-bit programs
Switching between int and long
Shifting puzzle
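The 2 GB memory-mapping limit mentioned above comes from FileChannel.map: the offset parameter is a long, but each MappedByteBuffer is capped at Integer.MAX_VALUE bytes, so a larger file must be covered by multiple mappings. A sketch of the chunking arithmetic plus one mapping (the chunk size and file are illustrative):

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class ChunkedMap {
    // Number of mappings needed to cover 'fileSize' bytes in 'chunk'-byte pieces.
    static long chunksNeeded(long fileSize, long chunk) {
        return (fileSize + chunk - 1) / chunk;
    }

    public static void main(String[] args) throws Exception {
        long fiveGb = 5L * 1024 * 1024 * 1024;
        // Each mapping is limited to Integer.MAX_VALUE bytes.
        System.out.println(chunksNeeded(fiveGb, Integer.MAX_VALUE)
                + " mappings needed for a 5 GB file");

        // FileChannel.map takes a long offset, so a chunk can start past 2 GB.
        File f = File.createTempFile("map-demo", ".bin");
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
            raf.setLength(1024); // small file for the demo
            FileChannel ch = raf.getChannel();
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, 1024);
            buf.put(0, (byte) 42); // write through the mapping
        } finally {
            f.delete();
        }
    }
}
```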

Collections Library for millions of elements

If you want to efficiently store large collections of data in memory, this library can dramatically reduce Full GC times and reduce memory consumption as well. When you have a data type which can be represented by an interface and you want a List for this type:

List<InterfaceType> list = new HugeArrayBuilder<InterfaceType>() {}.create();

The type needs to be described using an interface so its representation can be changed. The HugeArrayBuilder builds generated classes for the InterfaceType on demand. A more complex example is:

HugeArrayList hugeList = new HugeArrayBuilder() {{
    allocationSize = 1024*1024;
    classLoader = myClassLoader;
}}.create();

How does the library differ? It uses long for sizes and indices. It uses column-based data, making the per-element overhead minimal and speeding up scans over a single attribute or a small set of attributes.

Performance comparison: This test compares using HugeCollection vs an ArrayList of JavaBeans. In both cases, the amount of memory used was halved. The source
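To make the column-based idea concrete, here is a toy sketch of the technique (my own illustration, not the library's actual code): instead of N bean objects each carrying an object header, each field lives in its own primitive array, so a scan over one attribute reads one contiguous array and there is no per-element object at all.

```java
public class ColumnStore {
    // One primitive array per field, instead of one object per element.
    private final int[] ids;
    private final double[] prices;
    private int size;

    ColumnStore(int capacity) {
        ids = new int[capacity];
        prices = new double[capacity];
    }

    void add(int id, double price) {
        ids[size] = id;
        prices[size] = price;
        size++;
    }

    // Scanning one attribute touches only one contiguous primitive array.
    double totalPrice() {
        double sum = 0;
        for (int i = 0; i < size; i++) sum += prices[i];
        return sum;
    }

    public static void main(String[] args) {
        ColumnStore store = new ColumnStore(3);
        store.add(1, 10.0);
        store.add(2, 2.5);
        store.add(3, 0.5);
        System.out.println(store.totalPrice()); // 13.0
    }
}
```

The GC benefit follows directly: two large primitive arrays are far fewer objects for the collector to trace than millions of small beans.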

False Sharing

Memory is stored within the cache system in units known as cache lines. Cache lines are a power-of-2 number of contiguous bytes, typically 32-256 bytes in size; the most common cache line size is 64 bytes. False sharing is a term which applies when threads unwittingly impact the performance of each other while modifying independent variables that share the same cache line. To achieve linear scalability with the number of threads, we must ensure no two threads write to the same variable or cache line. Figure 1 above illustrates the issue of false sharing.

Java Memory Layout: For the Hotspot JVM, all objects have a 2-word header, followed by the fields in this order:

doubles (8) and longs (8)
ints (4) and floats (4)
shorts (2) and chars (2)
booleans (1) and bytes (1)
references (4/8)
<repeat for sub-class fields>

With this knowledge we can pad a cache line between any fields with 7 longs. To show the performance impact, let's take a few threads, each updating their own independent counters.

Results: So there you have it.
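The padding trick can be sketched as follows: surround the hot long with seven pad longs so that, assuming 64-byte lines and 8-byte fields, each counter occupies its own cache line. This is an illustrative sketch; the JVM is free to reorder or eliminate unused fields, so later JVMs offer @Contended (JDK 8) as a more robust mechanism.

```java
public class PaddedCounters {
    // Seven pad longs around the hot field, aiming for one 64-byte line per counter.
    static class PaddedLong {
        public long p1, p2, p3, p4, p5, p6, p7; // padding, never read
        public volatile long value;
    }

    // Two threads, each hammering its OWN counter; with padding the two
    // counters should not share a cache line, so the threads do not contend.
    static long runPair(final int iterations) throws InterruptedException {
        final PaddedLong a = new PaddedLong();
        final PaddedLong b = new PaddedLong();
        Thread t1 = new Thread(new Runnable() {
            public void run() { for (int i = 0; i < iterations; i++) a.value++; }
        });
        Thread t2 = new Thread(new Runnable() {
            public void run() { for (int i = 0; i < iterations; i++) b.value++; }
        });
        t1.start(); t2.start();
        t1.join(); t2.join();
        return a.value + b.value;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runPair(1_000_000)); // each thread owns its counter
    }
}
```

Timing runPair with and without the pad fields is the experiment the post describes: the workload is identical, but the padded version typically scales far better across cores.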

Endre's Tech Corner!: Linux Java Thread Priorities workaround

For some annoying reason, Sun has decided that to run threads with working thread priorities on Linux, you have to be root. The logic behind this decision is that to raise thread priorities, you have to be root. But, say many of us, could you not just let us lower thread priorities, at least? No, says Sun. Anyway - Akshaal found out that if you just set "ThreadPriorityPolicy" to something other than the legal values 0 or 1 (say 2 or 42, or 666 as he suggests himself), a slight logic bug in Sun's JVM code kicks in and sets the policy as if running as root - thus you get exactly what one desires. I wrote a little program to test out priorities and these flags. As user, with the following arguments (for the rest of the runs, the things changing are user vs. root, and the ThreadPriorityPolicy):

-XX:ThreadPriorityPolicy=0 -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintCompilation

As user, with -XX:ThreadPriorityPolicy=1. As root, with -XX:ThreadPriorityPolicy=1:
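A little test program of the kind described could look like the sketch below (my reconstruction, not the author's original). Note that Thread.getPriority() always reports the Java-level priority you set; whether that maps to a real OS-level priority is exactly what -XX:ThreadPriorityPolicy controls, so you need a tool like top to see the difference between the runs.

```java
public class PriorityDemo {
    // Sets a Java-level priority on a fresh thread and returns what the
    // JVM reports. Valid priorities run from Thread.MIN_PRIORITY (1)
    // to Thread.MAX_PRIORITY (10).
    static int reportedPriority(int requested) throws InterruptedException {
        Thread t = new Thread(new Runnable() {
            public void run() { /* nothing to do; we only inspect the priority */ }
        });
        t.setPriority(requested);
        int p = t.getPriority();
        t.start();
        t.join();
        return p;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("min: " + reportedPriority(Thread.MIN_PRIORITY));
        System.out.println("max: " + reportedPriority(Thread.MAX_PRIORITY));
    }
}
```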

Troubleshooting connection problems in JConsole (JMX, SNMP, Java, etc...)

I've seen a few posts in the Java and JMX forums from developers who were wondering how to find out why JConsole wouldn't connect to their application, so I have decided to write this short blog entry to outline a few diagnosing tips. Note: if you are using JConsole, you might also want to try Java VisualVM. If you have read the JConsole FAQ but are still experiencing difficulties, here are a few additional tips.

Processes not displayed in the JDK 6 JConsole connection window: This may be due to weird permissions on the TMP dir. See also this post where David explains how your TMP dir settings can prevent the tutorial examples from working under Windows systems, and how to solve it. My problem was a little bit different: I could see process PIDs but could not connect to them (nor see the main class).

The source of the problem? Security: The most common troubles which prevent JConsole from connecting to a remote application are linked to SSL/security configurations.

Linux
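One more diagnosing tip worth adding as a sketch of my own: what JConsole ultimately talks to is an MBeanServer, so before blaming SSL or the network it can help to confirm in-process that the platform MBeanServer and the standard platform MBeans are actually registered.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class MBeanPeek {
    // Counts the MBeans registered on the platform MBeanServer, the same
    // server JConsole browses once a connection succeeds.
    static int mbeanCount() {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        return server.getMBeanCount();
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        System.out.println(server.getMBeanCount() + " MBeans registered");
        // e.g. the runtime MXBean JConsole reads for uptime and VM arguments:
        ObjectName runtime = new ObjectName("java.lang:type=Runtime");
        System.out.println("Runtime MXBean registered: " + server.isRegistered(runtime));
    }
}
```

If this prints a healthy MBean count locally but JConsole still cannot connect remotely, the problem really is in the transport layer (ports, SSL, authentication) rather than in the application's JMX setup.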