Ideally, a Java application runs just fine with the default JVM settings so that there is no need to set any flags at all. However, in case of performance problems (which unfortunately arise quite often) some knowledge about relevant JVM flags is a welcome companion. In this part of our series, we will take a look at some JVM flags from the area of memory management. Knowing and understanding these flags will prove highly useful for developers and operations staff alike.
All established HotSpot memory management and garbage collection algorithms are based on the same basic partitioning of the heap: The “young generation” contains newly allocated and short-lived objects while the “old generation” contains long-lived objects beyond a certain age. In addition to that, the “permanent generation” contains objects expected to live throughout the whole JVM lifecycle, e.g., the object representations of loaded classes or the String intern cache. For the following discussion, we assume that the heap is partitioned according to this classic strategy of young, old, and permanent generations. However, note that other strategies are also promising, one prominent example being the new G1 garbage collector, which blurs the distinction between the young and old generations. Also, current developments seem to indicate that some future version of the HotSpot JVM will not have the separation between the old and permanent generations anymore.
-Xms and -Xmx (or: -XX:InitialHeapSize and -XX:MaxHeapSize)
Arguably the most popular JVM flags of all are -Xms and -Xmx, which allow us to specify the initial and maximum JVM heap size, respectively. Both flags expect a value in bytes but also support a shorthand notation where “k” or “K” represent “kilo”, “m” or “M” represent “mega”, and “g” or “G” represent “giga”. For example, the following command line starts the Java class “MyApp” setting an initial heap size of 128 megabytes and a maximum heap size of 2 gigabytes:
$ java -Xms128m -Xmx2g MyApp
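To double-check which values a running JVM actually picked up, a small program can query the Runtime API. This is a minimal sketch (the class name HeapInfo is made up for the example); note that totalMemory() reports the currently committed heap, which may be lower than the maximum:

```java
// HeapInfo.java – the class name is made up for this example
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // totalMemory() is the currently committed heap,
        // maxMemory() corresponds to the -Xmx limit
        System.out.println("total: " + rt.totalMemory() / (1024 * 1024) + " MB");
        System.out.println("max:   " + rt.maxMemory() / (1024 * 1024) + " MB");
    }
}
```

Started with the command line above (java -Xms128m -Xmx2g HeapInfo), the reported maximum should be close to 2048 MB; the JVM may reserve a small part of the heap for internal purposes, so the exact number can differ slightly.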
Note that, in practice, the initial heap size turns out to also be a lower bound for the heap size, i.e., a minimum heap size. While it is true that the JVM may dynamically resize the heap at run time, and thus in theory we might observe the heap size fall below its initial size, I never witnessed such a case in practice even with very low heap utilization. This behavior is convenient for developers and operations because, if desired, it allows them to specify a static heap size simply by setting -Xms and -Xmx to the same value.
It is useful to know that both -Xms and -Xmx are only shortcuts which are internally mapped to -XX:InitialHeapSize and -XX:MaxHeapSize. These two XX flags may also be used directly, to the same effect:
$ java -XX:InitialHeapSize=128m -XX:MaxHeapSize=2g MyApp
Note that all JVM output regarding initial and maximum heap size uses the long names exclusively. Thus, when looking for information about the heap size of a running JVM, e.g., by checking the output of
-XX:+PrintCommandLineFlags or by querying the JVM via JMX, we should look for “InitialHeapSize” or “MaxHeapSize” and not for “Xms” or “Xmx”.
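On Java 7 or later, these long names can also be queried programmatically through the HotSpotDiagnosticMXBean, the same bean that backs the JMX diagnostic interface. A minimal sketch (the class name ShowHeapFlags is illustrative):

```java
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

// ShowHeapFlags is an illustrative class name
public class ShowHeapFlags {
    public static void main(String[] args) {
        HotSpotDiagnosticMXBean bean =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // The VM option names match the long flag names, not -Xms/-Xmx
        System.out.println("InitialHeapSize = " + bean.getVMOption("InitialHeapSize").getValue());
        System.out.println("MaxHeapSize     = " + bean.getVMOption("MaxHeapSize").getValue());
    }
}
```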
-XX:+HeapDumpOnOutOfMemoryError and -XX:HeapDumpPath
If we refrain from setting
-Xmx to an adequate value, we run the risk of being hit by an OutOfMemoryError, one of the most dreadful beasts that we may face when dealing with the JVM. As detailed in our blog series on this subject, the root cause of an OutOfMemoryError needs to be diagnosed carefully. Often, a good start for a deep analysis is a heap dump – too bad if none is available, in particular if the JVM has already crashed and the error only appeared on a production system after the application had run smoothly for several hours or days.
Luckily, there is a way to tell the JVM to generate a heap dump automatically when an OutOfMemoryError occurs, by setting the flag
-XX:+HeapDumpOnOutOfMemoryError. Having this flag set “just in case” can save a lot of time when facing an unexpected OutOfMemoryError. By default, the heap dump is stored in a file
java_pid<pid>.hprof in the directory where the JVM was started (here,
<pid> is the process ID of the JVM process). To change the default, we may specify a different location using the flag -XX:HeapDumpPath=<path>, with <path> being a relative or absolute path to the file where the heap dump should be stored.
While all this sounds pretty nice, there is one caveat that we need to keep in mind. A heap dump can get large, and especially so when an OutOfMemoryError arises. Thus, it is recommended to always set a custom location using
-XX:HeapDumpPath, and to choose a place with enough disk space available.
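As an aside, the same HotSpotDiagnosticMXBean can be used to take a heap dump on demand, without waiting for an OutOfMemoryError. A minimal sketch (the class name and the target path are just examples; dumpHeap fails if the target file already exists):

```java
import java.io.IOException;
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

// DumpHeap is an illustrative class name; the target path is just an example
public class DumpHeap {
    public static void main(String[] args) throws IOException {
        HotSpotDiagnosticMXBean bean =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // second argument: true = dump only live objects (triggers a full GC first)
        bean.dumpHeap("/tmp/heapdump-manual.hprof", true);
    }
}
```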
We can even execute an arbitrary sequence of commands when an OutOfMemoryError happens, e.g., to send an e-mail to an admin or to perform some cleanup job. This is made possible by the flag
-XX:OnOutOfMemoryError, which expects a list of commands and, if applicable, their parameters. We will not go into the details here but just show an example configuration. With the following command line, should an OutOfMemoryError occur, we will write a heap dump to the file
/tmp/heapdump.hprof and execute the shell script
cleanup.sh in the home directory of the user running the JVM.
$ java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/heapdump.hprof -XX:OnOutOfMemoryError="sh ~/cleanup.sh" MyApp
-XX:PermSize and -XX:MaxPermSize
The permanent generation is a separate heap area which contains, among others, the object representations of all classes loaded by the JVM. To successfully run applications that load lots of classes (e.g., because they depend on lots of third-party libraries, which in turn depend on and load classes from even more libraries) it may be necessary to increase the size of the permanent generation. This can be done using the flags -XX:PermSize and -XX:MaxPermSize. Here, -XX:MaxPermSize sets the maximum size of the permanent generation while -XX:PermSize sets its initial size on JVM startup. A quick example:
$ java -XX:PermSize=128m -XX:MaxPermSize=256m MyApp
Note that the permanent generation size is not counted as part of the heap size as specified by -XX:MaxHeapSize. That is, the amount of permanent generation memory specified by -XX:MaxPermSize may be required in addition to the heap memory specified by -XX:MaxHeapSize.
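This separation is visible at run time: in the MemoryPoolMXBean API, the permanent generation appears as a NON_HEAP pool, alongside the HEAP pools of the young and old generations. A small sketch to list all pools (the class name is illustrative; the exact pool names depend on the garbage collector in use):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

// Lists all memory pools; the permanent generation shows up with
// type NON_HEAP, i.e., outside the heap limited by -Xmx
public class MemoryPools {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.println(pool.getType() + "  " + pool.getName()
                    + "  max=" + pool.getUsage().getMax());
        }
    }
}
```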
-XX:InitialCodeCacheSize and -XX:ReservedCodeCacheSize
An interesting but often neglected memory area of the JVM is the “code cache”, which is used to store the native code generated for compiled methods. The code cache rarely causes performance problems, but once a code cache problem occurs, its effects may be devastating. If the code cache is fully utilized, the JVM prints a warning message and then switches to interpreted-only mode: The JIT compiler gets deactivated and no bytecode will be compiled into native code anymore. Thus, the application will continue to run, but slower by an order of magnitude, until someone notices.
Like with the other memory areas, we may specify the size of the code cache ourselves. The relevant flags are -XX:InitialCodeCacheSize and -XX:ReservedCodeCacheSize, and they expect byte values just like the flags introduced above.
If the code cache grows constantly, e.g., because of a memory leak caused by hot deployments, increasing the code cache size will only delay its inevitable overflow. To avoid overflow, we can try an interesting and relatively new option: to let the JVM dispose of some of the compiled code when the code cache fills up. This may be done by specifying the flag -XX:+UseCodeCacheFlushing. Using this flag, we can at least avoid the switch to interpreted-only mode when we face code cache problems. However, I would still recommend tackling the root cause as soon as possible once a code cache problem has manifested itself, i.e., identifying the memory leak and fixing it.
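To notice a filling code cache before it overflows, its utilization can be monitored through the memory pool API as well. A sketch, assuming a HotSpot JVM (older versions expose a single pool named “Code Cache”, newer ones split it into several “CodeHeap …” pools, hence the substring match):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class CodeCacheUsage {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            // match both the single "Code Cache" pool of older JVMs
            // and the "CodeHeap ..." pools of newer ones
            if (pool.getName().contains("Code")) {
                System.out.println(pool.getName() + ": "
                        + pool.getUsage().getUsed() + " of "
                        + pool.getUsage().getMax() + " bytes used");
            }
        }
    }
}
```

Hooking such a check into regular monitoring makes it possible to raise an alert well before the JVM falls back to interpreted-only mode.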