In this second part of the series, I give an introduction to the different categories of flags offered by the HotSpot JVM. I also discuss some interesting flags for JIT compiler diagnostics.
JVM flag categories
The HotSpot JVM offers three categories of flags. The first category contains the standard flags. As the name suggests, both the functionality and the output of a standard flag are stable and will most likely not change in future releases of the JVM. You can retrieve a list of all standard flags by running
java without any parameter (or with the parameter
-help, which incidentally is a standard flag itself). We have already seen some of the standard flags, e.g.,
-server, in the first part of this series.
The second category consists of the X flags: non-standardized flags whose behavior or availability may change in future releases. All flags of this category start with “-X” and can be listed with
java -X. Note that this list is not guaranteed to be complete; one notable missing flag is
-Xcomp.
The third (and by far the largest) category consists of the XX flags, which are also non-standardized and could not even be listed for a long time (recently, this has changed, and we will return to this topic in the third part of the series). For practical use, however, there is no real difference between X flags and XX flags. The behavior of the X flags is quite stable, whereas many of the XX flags are rather experimental (and are primarily used by JVM developers for debugging and tuning of the JVM implementation itself). A worthwhile read is the HotSpot JVM documentation for non-standardized flags which explicitly states that XX flags should not be used in a careless fashion. This is true, but in my view this recommendation equally holds for the X flags (and some of the standard flags as well). Regardless of the category, it pays off to try to understand the intended behavior (and possible side effects) of a flag before using it.
One word about the syntax of XX flags. All XX flags start with “-XX:”, but then the syntax differs depending on the type of flag.
- For a boolean flag, the name of the JVM option is preceded by either a “+” or a “-”. For example,
-XX:+<name> activates the option
<name>, while
-XX:-<name> deactivates it.
- For a flag that takes a non-boolean value like a text string or an integer number, we first have the name of the flag followed by a “=”, and then the value to set. For example,
-XX:<name>=<value> sets the option
<name> to the value
<value>.
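Combined on one command line, the two syntax variants might look like this (a sketch; the particular options are only illustrative examples, and -version just makes the JVM exit right after startup):

```shell
# Boolean XX flag: "+" enables an option, "-" disables it
java -XX:+PrintCompilation -version

# Valued XX flag: <name>=<value>, here setting the maximum heap size to 64 MB
java -XX:MaxHeapSize=64m -version
```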
Now let us take a look at some XX flags from the area of JIT compilation.
-XX:+PrintCompilation and -XX:+CITime
It is quite revealing to watch the JIT compiler work while a Java application is running. By setting the flag
-XX:+PrintCompilation we can enable some simple output regarding the bytecode to native code compilation. Let us consider an example (output shortened) for a run of the server VM:
$ java -server -XX:+PrintCompilation Benchmark
1 java.lang.String::hashCode (64 bytes)
2 java.lang.AbstractStringBuilder::stringSizeOfInt (21 bytes)
3 java.lang.Integer::getChars (131 bytes)
4 java.lang.Object::<init> (1 bytes)
--- n java.lang.System::arraycopy (static)
5 java.util.HashMap::indexFor (6 bytes)
6 java.lang.Math::min (11 bytes)
7 java.lang.String::getChars (66 bytes)
8 java.lang.AbstractStringBuilder::append (60 bytes)
9 java.lang.String::<init> (72 bytes)
10 java.util.Arrays::copyOfRange (63 bytes)
11 java.lang.StringBuilder::append (8 bytes)
12 java.lang.AbstractStringBuilder::<init> (12 bytes)
13 java.lang.StringBuilder::toString (17 bytes)
14 java.lang.StringBuilder::<init> (18 bytes)
15 java.lang.StringBuilder::append (8 bytes)
29 java.util.regex.Matcher::reset (83 bytes)
Whenever a method is compiled, a line is printed to the output of
-XX:+PrintCompilation. Each line consists of a running number (a unique compiler task ID) and the name and size of the compiled method. Thus, line 1 represents the compilation of the String::hashCode method to native code. Depending on the type of method and compiler task, additional output characters are printed. For example, the creation of a native wrapper method is marked by an “n”, as with System::arraycopy in the example above. Note that such a line contains neither a running number nor a method size, because nothing is actually compiled to native code. It is also possible to see methods being recompiled; see the output for StringBuilder::append in lines 11 and 15. The output stops at line 29, which tells us that a total of 29 methods were compiled while this Java application was running.
There is no official documentation of
-XX:+PrintCompilation, but this description is one of the few good resources on the output of this flag. I highly recommend it for further study.
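The source of the Benchmark class is not shown in this article, so here is a hypothetical stand-in that would produce similar output; it deliberately exercises StringBuilder::append, StringBuilder::toString, and String::hashCode, all of which appear in the compilation log above:

```java
// Hypothetical stand-in for the Benchmark class used in the example runs.
// Run with: java -server -XX:+PrintCompilation Benchmark
public class Benchmark {

    // Called often enough to cross the JIT compilation threshold.
    static String buildLine(int i) {
        return new StringBuilder().append("line ").append(i).toString();
    }

    public static void main(String[] args) {
        int hash = 0;
        for (int i = 0; i < 1000000; i++) {
            hash ^= buildLine(i).hashCode();
        }
        // Consume the result so the work cannot be optimized away.
        System.out.println(hash);
    }
}
```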
The JIT compiler output helps us to understand some differences between the client VM and the server VM. With the server VM, our example application produces 29 lines of compilation output while the use of the client VM results in 55 lines (no figure shown). This may seem strange because the server VM is supposed to do “more” compilation than the client VM. However, given their respective default settings, the server VM watches methods much longer than the client VM before deciding whether they are hot spots and need to be compiled at all. Thus, it is not surprising that, with the server VM, some potential method compilations may only happen at a later stage.
By additionally setting the flag
-XX:+CITime we request various statistics about compilations to be printed on JVM shutdown. Let us take a look at just one particular section of the statistics:
$ java -server -XX:+CITime Benchmark
Accumulated compiler times (for compiled methods only)
Total compilation time : 0.178 s
Standard compilation : 0.129 s, Average : 0.004
On stack replacement : 0.049 s, Average : 0.024
A total of 0.178 seconds was spent on the 29 compiler tasks. Of these, 0.049 seconds were used for “on stack replacement”, i.e., the compilation of methods while they are executing on the stack. This technique is not simple to implement efficiently, but it is highly important: without on stack replacement, methods with long execution times (for example, because they contain long-running loops) could never be replaced on the fly by their compiled counterparts.
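A minimal sketch of the scenario that on stack replacement addresses (class and method names are my own): main() is entered exactly once, so a compile-on-next-invocation strategy would never get to use the compiled code.

```java
// Sketch: a hot loop inside a method that is entered only once. With OSR,
// the JVM can swap the interpreted frame for a compiled one mid-loop.
public class OsrDemo {

    static long sum(long n) {
        long s = 0;
        for (long i = 0; i < n; i++) { // long-running loop: an OSR candidate
            s += i;
        }
        return s;
    }

    public static void main(String[] args) {
        // A single call with a large bound; under -XX:+PrintCompilation,
        // OSR compilations are marked with a "%" in many HotSpot versions.
        System.out.println(sum(500000000L));
    }
}
```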
Once again, the comparison between the client VM and the server VM is interesting. The corresponding statistics for the client VM (no figure shown) indicate that even though 55 methods were compiled, only a total of 0.021 seconds was spent on these compilations. Thus, the server VM performed fewer compilations than the client VM but spent much more time on them. The reason for this behavior is that the server VM performs more optimizations when generating native code.
In the first part of this series, we learned about the flags
-Xint and
-Xcomp. With
-XX:+PrintCompilation and
-XX:+CITime we can now get a better picture of how the JIT compiler behaves in both cases. With
-Xint, the flag
-XX:+PrintCompilation produces exactly zero lines of output in both cases. Also,
-XX:+CITime confirms that no time at all has been spent on compilation. Now with
-Xcomp things are different. The client VM produces 726 lines of output immediately after startup and then no more, because every relevant method has already been compiled. With the server VM we even see 993 lines of output, which tells us that more aggressive optimizations are performed. Also, the statistics printed on JVM shutdown show a huge difference between the two VMs. Consider the run with the server VM:
$ java -server -Xcomp -XX:+CITime Benchmark
Accumulated compiler times (for compiled methods only)
Total compilation time : 1.567 s
Standard compilation : 1.567 s, Average : 0.002
On stack replacement : 0.000 s, Average : -1.#IO
The 1.567 seconds spent on compilations using
-Xcomp are about 10 times as much as with the default settings, i.e., mixed mode. Still, the application runs slower than with mixed mode. For comparison, the client VM only takes 0.208 seconds to compile its 726 methods using
-Xcomp, and it is even slower than the server VM with
-Xcomp.
As a side note, no “on stack replacement” takes place because every method gets compiled when it is called for the first time. The corrupted output “Average: -1.#IO” (correct would be: 0) demonstrates once more that the output of non-standardized flags is nothing to rely on too much.
Sometimes when setting a particular JVM flag, the JVM terminates right after startup printing the message “Unrecognized VM option”. If that happens, you should first check if you mistyped the flag. However, if the name is spelled correctly and the JVM still does not recognize the flag, you may need to unlock the flag by setting
-XX:+UnlockDiagnosticVMOptions. It is not quite clear to me why this safety mechanism is needed, but I suspect that the flags guarded this way tend to affect the stability of the JVM if used improperly (for example, because they may write excessive debug output to some log file).
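As an illustration, -XX:+LogCompilation is such a guarded diagnostic flag; attempting to use it without unlocking fails, while prefixing the unlock flag succeeds (a sketch, assuming a HotSpot JVM on the PATH):

```shell
# Rejected: prints a message like "Unrecognized VM option 'LogCompilation'"
java -XX:+LogCompilation -version

# Accepted: the unlock flag must appear before the guarded flag
java -XX:+UnlockDiagnosticVMOptions -XX:+LogCompilation -version
```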
Some flags are only intended to be used for JVM development but not for actual use with Java applications. If a flag cannot be enabled by
-XX:+UnlockDiagnosticVMOptions, but you really need to use that flag for whatever reason, then you may try your luck with a debug build of the JVM. For the Java 6 HotSpot JVM you can find debug builds here.
-XX:+LogCompilation and -XX:+PrintOptoAssembly
If you find yourself in a situation where the information given by
-XX:+PrintCompilation is not detailed enough, you can use the flag
-XX:+LogCompilation to write extended compilation output into the file “hotspot.log”. In addition to many details about the compiled methods you can also see which compiler threads were started for which tasks. Note that
-XX:+LogCompilation needs to be unlocked by
-XX:+UnlockDiagnosticVMOptions.
The JVM even allows us to take a look at the native code resulting from bytecode compilation. With the flag
-XX:+PrintOptoAssembly, the native code generated by the compiler threads gets written both to stdout and to the file “hotspot.log”. Using this flag requires you to run a debug build of the server VM. We can study the output of
-XX:+PrintOptoAssembly in order to understand what kind of optimizations the JVM actually performs, for example regarding dead code elimination. An interesting article presenting an example can be found here.
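To make the dead code elimination point concrete, here is a hypothetical micro-benchmark of the kind whose generated code is worth inspecting (class and method names are my own): the computed value is discarded, so the optimizer is free to remove the work entirely.

```java
// Hypothetical micro-benchmark pitfall: the result of compute() is never
// used, making the whole loop a dead code elimination candidate. Inspecting
// the generated native code (e.g., with -XX:+PrintOptoAssembly on a debug
// build of the server VM) is how one verifies whether that happened.
public class DeadCodeDemo {

    static int compute(int n) {
        int sum = 0;
        for (int i = 0; i < n; i++) {
            sum += i * i;      // the result of this work ...
        }
        return sum;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 100000; i++) {
            compute(1000);     // ... is discarded here
        }
        System.out.println("done");
    }
}
```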
Further information about XX flags
If this article tickled your fancy, just take a look at the XX flags of the HotSpot JVM yourself. A good starting point is this list.