
JVM tuning


Understanding sun.misc.Unsafe. The biggest competitor to the Java virtual machine might be Microsoft's CLR, which hosts languages such as C#. The CLR allows writing unsafe code as an entry gate for low-level programming, something that is hard to achieve on the JVM. If you need such advanced functionality in Java, you might be forced to use JNI, which requires you to know some C and quickly leads to code that is tightly coupled to a specific platform. With sun.misc.Unsafe, however, there is another route to low-level programming on the Java platform through a Java API, even though this alternative is discouraged. Nevertheless, several projects rely on sun.misc.Unsafe, for example Objenesis, and with it all libraries that build on it, such as Kryo, which in turn is used in, for example, Twitter's Storm. Therefore, it is time to have a look, especially since the functionality of sun.misc.Unsafe is expected to become part of Java's public API in Java 9.

public static Unsafe getUnsafe() {
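The listing above is cut off in this excerpt. As a minimal, hedged sketch of the reflection trick commonly used to obtain the Unsafe instance (Unsafe.getUnsafe() itself rejects callers not loaded by the bootstrap class loader; the class name UnsafeAccess and the off-heap demo in main are illustrative only):

    import java.lang.reflect.Field;
    import sun.misc.Unsafe;

    public class UnsafeAccess {
        // Unsafe.getUnsafe() throws a SecurityException unless the caller was loaded
        // by the bootstrap class loader, so application code usually reads the
        // private static "theUnsafe" field via reflection instead.
        public static Unsafe getUnsafe() {
            try {
                Field f = Unsafe.class.getDeclaredField("theUnsafe");
                f.setAccessible(true);
                return (Unsafe) f.get(null);
            } catch (ReflectiveOperationException e) {
                throw new IllegalStateException("sun.misc.Unsafe not available", e);
            }
        }

        public static void main(String[] args) {
            Unsafe unsafe = getUnsafe();
            // Allocate 8 bytes off-heap, write and read a long, then free the memory.
            long address = unsafe.allocateMemory(8);
            unsafe.putLong(address, 42L);
            System.out.println(unsafe.getLong(address));
            unsafe.freeMemory(address);
        }
    }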

Getting C/C++ Performance from Java Object Serialisation. HeapAudit – JVM Memory Profiler for the Real World. HeapAudit is not a monitoring tool, but rather an engineering tool that collects actionable data – information sufficient to drive code changes directly. It is built for the real world and can be applied to live, running production servers. HeapAudit is a Foursquare open source project designed for understanding JVM heap allocations. It is implemented as a Java agent built on top of ASM. Understanding JVM Memory Allocations Performance and scalability issues are generally attributed to bottlenecks in code execution (CPU), memory allocations (RAM) and I/O (disk, network, etc.).
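To illustrate what "a Java agent built on top of ASM" means mechanically, here is a minimal skeleton of the java.lang.instrument entry point such an agent hooks into. This is not HeapAudit's actual source; the class name AllocationAgent is hypothetical, and the transformer body only marks where ASM-based rewriting would happen:

    import java.lang.instrument.ClassFileTransformer;
    import java.lang.instrument.Instrumentation;
    import java.security.ProtectionDomain;

    public class AllocationAgent {
        // Invoked by the JVM before main() when started with -javaagent:agent.jar.
        public static void premain(String agentArgs, Instrumentation inst) {
            inst.addTransformer(new ClassFileTransformer() {
                @Override
                public byte[] transform(ClassLoader loader, String className,
                                        Class<?> classBeingRedefined,
                                        ProtectionDomain protectionDomain,
                                        byte[] classfileBuffer) {
                    // A real agent such as HeapAudit would rewrite the bytecode here
                    // (e.g. with ASM) to record allocation sites.
                    // Returning null means "leave this class unchanged".
                    return null;
                }
            });
        }
    }

The agent jar's manifest names this class in a Premain-Class entry, and the JVM is started with -javaagent pointing at the jar.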

Garbage Collection & Memory Profilers For instance, if the GC information tells you the JVM is garbage collecting hundreds of thousands of String objects every second, or that the JVM heap summary shows several million active String objects, it is not apparent where those String objects were allocated. HeapAudit Java Agent Performance Overhead License and Availability - Norbert Hu (@norberthu) Java 7: How to write really fast Java code. When I first wrote this blog my intention was to introduce you to ThreadLocalRandom, a class new in Java 7 for generating random numbers. I have analyzed the performance of ThreadLocalRandom in a series of micro-benchmarks to find out how it performs in a single-threaded environment.

The results were relatively surprising: although the code is very similar, ThreadLocalRandom is twice as fast as Math.random()! The results drew my interest and I decided to investigate this a little further. I have documented my analysis process. It is an exemplary introduction to the analysis steps, technologies and some of the JVM diagnostic tools required to understand differences in the performance of small code segments.
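For orientation, a bare-bones timing loop that reproduces the kind of comparison described (this is not the micro-benchmarking framework referenced below; the class name RandomBenchmark and the iteration count are arbitrary, and a single-threaded run like this ignores many benchmarking pitfalls):

    import java.util.concurrent.ThreadLocalRandom;

    public class RandomBenchmark {
        private static final int ITERATIONS = 100000000;

        public static void main(String[] args) {
            // Several rounds so the JIT-compiled, steady-state runs are visible.
            for (int run = 0; run < 5; run++) {
                long t0 = System.nanoTime();
                double sum = 0;
                for (int i = 0; i < ITERATIONS; i++) {
                    sum += Math.random();
                }
                long mathRandomMs = (System.nanoTime() - t0) / 1000000;

                t0 = System.nanoTime();
                ThreadLocalRandom rnd = ThreadLocalRandom.current();
                for (int i = 0; i < ITERATIONS; i++) {
                    sum += rnd.nextDouble();
                }
                long tlrMs = (System.nanoTime() - t0) / 1000000;

                // Print sum so the JIT cannot eliminate the loops as dead code.
                System.out.printf("run %d: Math.random() %d ms, ThreadLocalRandom %d ms (sum=%f)%n",
                        run, mathRandomMs, tlrMs, sum);
            }
        }
    }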

Some experience with the described toolset and technologies will enable you to write faster Java code for your specific HotSpot target environment. OK, that's enough talk, let's get started! Again, I am using a tiny micro-benchmarking framework presented in one of Heinz's blogs. Everything I Ever Learned About JVM Performance Tuning @Twitter. How to fix the dreaded "java.lang.OutOfMemoryError: PermGen space" exception (classloader leaks) (Frank Kieviet). By fkieviet on Oct 19, 2006.

Frank Kieviet has written a very interesting article on how to use existing free tools to easily track down hanging references to code that has been unloaded from a JVM. Troubleshooting connection problems in JConsole (JMX, SNMP, Java, etc...)

... I've seen a few posts in the Java and JMX forums from developers who were wondering how to find out why JConsole wouldn't connect to their application. So I have decided to write this short blog entry in order to outline a few diagnostic tips... Note: if you are using JConsole, you might also want to try Java VisualVM, which has shipped with the JDK since JDK 6 update 7. If you have read the JConsole FAQ but are still experiencing difficulties, here are a few additional tips: Processes not displayed in JDK 6 JConsole connection window This may be due to unusual permissions on the TMP directory.
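As an aside, for remote (rather than local) connections it helps to recall what the target JVM needs to expose; the following standard management properties are a commonly used, test-only configuration (the host name, port and MyApp are placeholders, and disabling authentication and SSL is only appropriate for diagnosis on a trusted network):

    $ java -Dcom.sun.management.jmxremote \
           -Dcom.sun.management.jmxremote.port=9010 \
           -Dcom.sun.management.jmxremote.authenticate=false \
           -Dcom.sun.management.jmxremote.ssl=false \
           -Djava.rmi.server.hostname=myhost.example.com \
           MyApp

JConsole or VisualVM can then attach to myhost.example.com:9010.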

See also this post where David explains how your TMP dir settings can prevent the tutorial examples from working on Windows systems - and how to solve it. My problem was a little bit different: I could see the processes' PIDs but could not connect to them (nor see the main class). The source of the problem? Security. Security issues are further explained here. Firewall and RMI Linux Conclusion. Endre's Tech Corner!: Linux Java Thread Priorities workaround. For some annoying reason, Sun has decided that, to run threads with working thread priorities on Linux, you have to be root. The logic behind this decision is that, to raise thread priorities, you have to be root.

But, many of us say, could you not just let us lower thread priorities, at least? No, says Sun. I believe they just don't quite understand what is being requested. Anyway - Akshaal found out that if you set "ThreadPriorityPolicy" to something other than the legal values 0 or 1 - say 2 or 42, or 666 as he suggests himself - a slight logic bug in Sun's JVM code kicks in and sets the policy as if you were running as root - so you get exactly what you desire. I wrote a little program to test out priorities and these flags. As a normal user, with the following arguments (for the rest of the runs, the things changing are user vs. root, and the ThreadPriorityPolicy): -XX:ThreadPriorityPolicy=0 -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintCompilation $ cat test.sh #!
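The script and test program themselves are cut off in this excerpt. A hedged stand-in for such a priority test (the class name PriorityTest and the ten-second duration are arbitrary) is two busy-looping threads at minimum and maximum priority whose relative progress reveals whether priorities take effect:

    public class PriorityTest {
        static volatile boolean running = true;

        // Busy-loops and counts iterations so the two threads' progress can be compared.
        static class Spinner extends Thread {
            long count;
            Spinner(String name, int priority) {
                super(name);
                setPriority(priority);
            }
            @Override
            public void run() {
                while (running) {
                    count++;
                }
            }
        }

        public static void main(String[] args) throws InterruptedException {
            Spinner low = new Spinner("low", Thread.MIN_PRIORITY);
            Spinner high = new Spinner("high", Thread.MAX_PRIORITY);
            low.start();
            high.start();
            Thread.sleep(10000);   // let them compete for the CPU for ten seconds
            running = false;
            low.join();
            high.join();
            System.out.println("low  priority iterations: " + low.count);
            System.out.println("high priority iterations: " + high.count);
        }
    }

Run once with -XX:ThreadPriorityPolicy=1 as a normal user and once with a value such as 42; with working priorities the high-priority thread should complete noticeably more iterations (ideally pin the JVM to a single core, e.g. with taskset, so the threads actually compete).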

False Sharing. Memory is stored within the cache system in units known as cache lines. Cache lines are a power-of-two number of contiguous bytes, typically 32-256 bytes in size. The most common cache line size is 64 bytes. False sharing is the term applied when threads unwittingly impact each other's performance while modifying independent variables that share the same cache line. Write contention on cache lines is the single most limiting factor on achieving scalability for parallel threads of execution in an SMP system. I’ve heard false sharing described as the silent performance killer because it is far from obvious when looking at code.
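A minimal sketch of the classic mitigation (not the code from the article): padding a hot, per-thread counter so that two counters cannot land on the same 64-byte cache line. The field layout is an assumption about a typical HotSpot object layout; other JVMs may rearrange or eliminate unused fields.

    public final class PaddedCounter {
        // The value two threads would otherwise contend on.
        public volatile long value;
        // Six long fields of padding; together with the object header and 'value',
        // this pushes neighbouring PaddedCounter instances onto different 64-byte
        // cache lines on a typical HotSpot layout.
        public long p1, p2, p3, p4, p5, p6;
    }

In the kind of benchmark described below, each thread updates its own PaddedCounter; without the padding fields, counters allocated next to each other share a line and the threads slow each other down dramatically.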

To achieve linear scalability with the number of threads, we must ensure no two threads write to the same variable or cache line. Figure 1. above illustrates the issue of false sharing. Java Memory Layout For the Hotspot JVM, all objects have a 2-word header. To show the performance impact, let’s take a few threads each updating their own independent counters. Results. Collections Library for millions of elements. If you want to efficiently store large collections of data in memory, this library can dramatically reduce Full GC times and reduce memory consumption as well. When you have a data type which can be represented by an interface and you want a List for this type: List<InterfaceType> list = new HugeArrayBuilder<InterfaceType>() {}.create(); The type needs to be described using an interface so its representation can be changed. (Using generated byte code) The HugeArrayBuilder builds generated classes for the InterfaceType on demand.

A more complex example is HugeArrayList hugeList = new HugeArrayBuilder() {{ allocationSize = 1024*1024; classLoader = myClassLoader; }}.create(); How does the library differ? It uses long for sizes and indices, and it uses column-based data, making the per-element overhead minimal and speeding up scans over a single attribute or a small set of attributes. Performance comparison This test compares using HugeCollection vs an ArrayList of JavaBeans. The project web site The source mvn test. Identifying which Java Thread is consuming most CPU | Nomad Labs Code. Java without the GC Pauses: Keeping Up with Moore’s Law and Living in a Virtualized World. Java: All about 64-bit programming. I found this article series, All about 64-bit programming in one place, very interesting; it collects "a lot of links on the topic of 64-bit C/C++ software development."

" However some of these issues are relevant to Java and can make a difference. The size_t in C++ is 64-bit Java uses the int type for sizes and this doesn't change in a 64-bit JVM. This gives backward compatibility but limits arrays, collections and ByteBuffer's to this size. As of mid 2011, for £1K you can buy a PC with 24 GB of memory and for £21K you can buy a server with 512 GB of memory. This is already a problem for memory mapping a file larger than 2 GB.

BTW: I have tried using the underlying library directly via reflection, which supports long lengths, and I could get this working for reading files larger than 2 GB, but not for writing. x64 has more registers than x86. This is a small advantage. A 64-bit JVM can access more memory. This is essential if you need more than 1.2-3.5 GB of memory (depending on the OS). Trove. How to tame java GC pauses? Surviving 16GiB heap and greater.

Memory is cheap and abundant on modern servers. Unfortunately there is a serious obstacle to using these memory resources to the full in Java programs. Garbage collector pauses are a serious threat for a JVM with a large heap size. There are very few good sources of information about practical tuning of Java GC, and unfortunately they seem to be relevant only for 512 MiB - 2 GiB heap sizes. Recently I have spent a good amount of time investigating the performance of various JVMs with a 32 GiB heap. In this article I would like to provide practical guidelines for tuning the HotSpot JVM for large heap sizes. You may also want to look at two articles explaining particular aspects of HotSpot collectors in more detail: “Understanding GC pauses in JVM, HotSpot's minor GC” and “Understanding GC pauses in JVM, HotSpot's CMS collector”.
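As a concrete starting point (illustrative values only, not the article's recommendations, and heavily workload-dependent), a large-heap, low-pause configuration on HotSpot of that era typically revolved around flags like these; MyApp and the sizes are placeholders:

    $ java -Xms32g -Xmx32g \
           -XX:+UseConcMarkSweepGC -XX:+UseParNewGC \
           -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly \
           -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
           MyApp

Fixing -Xms equal to -Xmx avoids heap-resizing pauses, CMS keeps old-generation collection mostly concurrent, and the initiating-occupancy flags make CMS start early enough to avoid a stop-the-world full GC.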

Target application domain GC tuning is very application specific. The heap is used to store data structures in memory. Economy of garbage collection Object demography Pauses in CMS. Visualising Garbage Collection in the JVM. Recently, I have been working with a number of customers on JVM tuning exercises. It seems that there is not widespread knowledge amongst developers and administrators about how garbage collection works and how the JVM uses memory.

So, I decided to write a very basic introduction and an example that will let you see it happening in real time! This post does not try to cover everything about garbage collection or JVM tuning – that is a huge area, and there are some great resources on the web already, only a Google away. This post is about the HotSpot JVM – that’s the ‘normal’ JVM from Oracle (previously Sun). First, let’s take a look at the way the JVM uses memory. The Permanent Generation The permanent generation is used only by the JVM itself, to keep data that it requires. The size of the permanent generation is controlled by two JVM parameters. The Heap The heap is the main area of memory. The size of the heap is also controlled by JVM parameters. Garbage collection is great! Joshua Bloch: Performance Anxiety – on Performance Unpredictability, Its Measurement and Benchmarking.

Joshua Bloch had a great talk called Performance Anxiety (30 min, via Parleys; slides also available) at Devoxx 2010. The main message, as I read it, was: nowadays, performance is completely unpredictable. You have to measure it and employ proper statistics to get meaningful results. Microbenchmarking is very, very hard to do correctly. No, you misunderstand me, I mean even harder than that! From the resources: Profiles and result evaluation methods may be very misleading unless used correctly. There has been another blog about it but I’d like to record here more detailed remarks. Today we can’t estimate performance, we must measure it, because the systems (JVM, OS, processor, …) are very complex, with many different heuristics on various levels, and thus performance is highly unpredictable.
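A minimal sketch of the measurement discipline this implies (my own illustration, not from the talk; the class name MeasureRoughly and the workload are arbitrary): repeat the measurement, discard warm-up rounds, and report more than a single number.

    import java.util.Arrays;

    public class MeasureRoughly {
        public static void main(String[] args) {
            int rounds = 20, warmup = 10;
            long[] times = new long[rounds - warmup];
            for (int r = 0; r < rounds; r++) {
                long t0 = System.nanoTime();
                workload();
                long elapsed = System.nanoTime() - t0;
                // Discard the first rounds: class loading, the interpreter and JIT
                // compilation dominate them and say nothing about steady-state speed.
                if (r >= warmup) {
                    times[r - warmup] = elapsed;
                }
            }
            Arrays.sort(times);
            System.out.println("median " + times[times.length / 2] / 1000000.0 + " ms, "
                    + "min " + times[0] / 1000000.0 + " ms, "
                    + "max " + times[times.length - 1] / 1000000.0 + " ms");
        }

        // Stand-in workload; replace with the code under test.
        static long sink;
        static void workload() {
            long sum = 0;
            for (int i = 0; i < 10000000; i++) {
                sum += i * 31L;
            }
            sink = sum;    // keep the result alive so the loop is not eliminated
        }
    }

Even then, the numbers should be collected across several separate JVM runs, which is exactly the point of the example that follows.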

Example: Results during a single JVM run may be consistent (warm-up, then faster) but can vary between JVM executions by as much as 20%. “Benchmarking is really, really hard!” Personal touch Conclusion. Java HotSpot VM Options. Please note that this page only applies to JDK 7 and earlier releases. For JDK 8 please see the Windows, Solaris, Linux and Mac OS X reference pages. This document provides information on typical command-line options and environment variables that can affect the performance characteristics of the Java HotSpot Virtual Machine. Unless otherwise noted, all information in this document pertains to both the Java HotSpot Client VM and the Java HotSpot Server VM. Categories of Java HotSpot VM Options Standard options recognized by the Java HotSpot VM are described on the Java Application Launcher reference pages for Windows and Solaris & Linux.

Options that begin with -X are non-standard (not guaranteed to be supported on all VM implementations) and are subject to change without notice in subsequent releases of the JDK. Some Useful -XX Options Default values are listed for Java SE 6 for Solaris SPARC with -server. The options below are loosely grouped into categories. Behavioral Options. Java Virtual Machine (JVM) - a JVM option to kill the VM when an. A Collection of JVM Options. Getting alerts when Java processes crash. When bugs occur in the Java runtime environment, most administrators want to get notified so they can take corrective action. These actions can range from restarting a Java process or collecting postmortem data to calling in application support personnel to debug the situation further.
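As an aside on the -XX options above: a quick way to list every -XX flag a particular HotSpot build recognises, together with its effective default value (available in recent JDK 6 and JDK 7 builds), is:

    $ java -XX:+PrintFlagsFinal -version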

The Java runtime has a number of useful options for getting such notifications. The first option is "-XX:OnOutOfMemoryError", which allows a command to be run when the runtime environment incurs an out-of-memory condition. When this option is combined with the logger command-line utility: $ java -XX:OnOutOfMemoryError="logger Java process %p encountered an OOM condition" … Syslog entries similar to the following will be generated each time an OOM event occurs: Jan 21 19:59:17 nevadadev root: [ID 702911 daemon.notice] Java process 19001 encountered an OOM condition A second option, "-XX:OnError", runs a command when the JVM hits a fatal error: $ java -XX:OnError="logger -p Java process %p encountered a fatal condition" …
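For the "collecting postmortem data" case, two more standard HotSpot options (not from the article; the dump path and MyApp are arbitrary examples) are commonly combined with the ones above:

    $ java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/tmp/app.hprof \
           -XX:OnOutOfMemoryError="logger Java process %p encountered an OOM condition" \
           MyApp

The resulting .hprof file can then be opened in jhat, VisualVM or Eclipse MAT.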

More Odds and Ends of Hprof heap dump format. The hprof binary file format is back in my head again, as I consider whether to use it as a native heap dump format. When I wrote about it previously, I was more interested in how to parse it. Now I must consider its strengths and limitations. Strengths: The format is widely supported by profilers and heap analyzers. It is written directly by the JVM and by the hprof agent. Compact encoding of both primitive and reference fields. Hprof class and object identifiers are stable across multiple heap dumps written in the same JVM lifetime. The JVM implementation is very fast. The hprof implementation is very readable, excellent code. Weaknesses: The JVM uses machine addresses as object identifiers, which change with every GC, so two heap dumps from the same JVM can't be compared object by object. JVM heap dumps consistently have dangling references to objects that are not reported in the heap dump and, even if -live is specified, objects that are unreachable from any root.
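For context, the two usual ways such an .hprof file is produced with standard JDK tooling (not specific to the post; MyApp, heap.hprof and <pid> are placeholders) are the hprof demo agent and an on-demand jmap dump:

    $ java -agentlib:hprof=heap=dump,format=b MyApp      # agent writes a binary dump when the JVM exits
    $ jmap -dump:format=b,file=heap.hprof <pid>          # JVM writes a binary heap dump on demand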

Conclusion.