Presentation: Understanding Java Garbage Collection (And What You Can Do About It)

Mechanical Sympathy: Lock-Based vs Lock-Free Concurrent Algorithms

Last week I attended a review session of the new JSR 166 StampedLock run by Heinz Kabutz at the excellent JCrete unconference.
StampedLock is an attempt to address the contention issues that arise when multiple readers concurrently access shared state. It is designed to perform better than ReentrantReadWriteLock by taking an optimistic approach to reads. While attending the session, a couple of things occurred to me. Firstly, I thought it was about time I reviewed the current status of Java lock implementations. Secondly, although StampedLock looks like a good addition to the JDK, it seems to miss the fact that lock-free algorithms are often a better solution to the multiple-reader case.
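To make the optimistic-read idea concrete, here is a minimal sketch in the style of the StampedLock javadoc example; the Point class and its fields are illustrative, not something presented at the session:

```java
import java.util.concurrent.locks.StampedLock;

// A 2-D point guarded by a StampedLock. An optimistic read takes no
// lock at all; it only validates afterwards that no writer intervened,
// falling back to a full read lock if validation fails.
public class Point {
    private final StampedLock lock = new StampedLock();
    private double x, y;

    public void move(double dx, double dy) {
        long stamp = lock.writeLock();          // exclusive write lock
        try {
            x += dx;
            y += dy;
        } finally {
            lock.unlockWrite(stamp);
        }
    }

    public double distanceFromOrigin() {
        long stamp = lock.tryOptimisticRead();  // non-blocking, just a stamp
        double curX = x, curY = y;
        if (!lock.validate(stamp)) {            // a writer got in: retry pessimistically
            stamp = lock.readLock();
            try {
                curX = x;
                curY = y;
            } finally {
                lock.unlockRead(stamp);
            }
        }
        return Math.sqrt(curX * curX + curY * curY);
    }
}
```

In the common uncontended case the optimistic path never touches the lock's write state, which is exactly where the contention advantage over ReentrantReadWriteLock comes from.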
Test Case

To compare implementations I needed an API test case that would not favour a particular approach. Multiple implementations are built for each spaceship and exercised by a test harness. Note: other CPUs and operating systems can produce very different results.

JVM Memory settings

Caliper - Microbenchmarking framework for Java

Performance and Memory Java Profiler - YourKit Java Profiler

Java Magic. Part 4: sun.misc.Unsafe - mishadoff thoughts
Java is a safe programming language and prevents the programmer from making many stupid mistakes, most of which stem from memory management. But there is a way to make such mistakes intentionally, using the Unsafe class. This article is a quick overview of the sun.misc.Unsafe public API and a few interesting cases of its usage.

Unsafe instantiation

Before using it, we need to obtain an instance of the Unsafe object. There is no simple way to do it like Unsafe unsafe = new Unsafe(), because the Unsafe class has a private constructor.
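Since the constructor is private, the usual workaround (the one this excerpt goes on to describe) is to read the private static theUnsafe field via reflection. A minimal sketch — the holder class name is my own, and this may warn or break on newer JDKs that restrict access to sun.misc.*:

```java
import java.lang.reflect.Field;
import sun.misc.Unsafe;

// Obtains the singleton Unsafe instance by reflecting on the private
// static field `theUnsafe`, bypassing the inaccessible constructor.
public class UnsafeHolder {
    public static Unsafe getUnsafe() {
        try {
            Field f = Unsafe.class.getDeclaredField("theUnsafe");
            f.setAccessible(true);           // bypass the private modifier
            return (Unsafe) f.get(null);     // static field, so null receiver
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("sun.misc.Unsafe not available", e);
        }
    }
}
```

With the instance in hand you can call methods like allocateMemory and putLong directly, which is exactly the kind of memory-management "mistake" the class makes possible.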
This is how Java validates that the calling code is trusted. We could make our own code "trusted", but it's too hard. Instead, note that the Unsafe class contains its own instance, called theUnsafe, which is marked private. Note: ignore your IDE's complaints about using it.

Java Performance Tuning Guide. By Mikhail Vorontsov

JMH is a new microbenchmarking framework (first released in late 2013).
Its distinctive advantage over other frameworks is that it is developed by the same people at Oracle who implement the JIT. In particular I want to mention Aleksey Shipilev and his brilliant blog. JMH is likely to stay in sync with the latest Oracle JRE changes, which makes its results very reliable. You can find JMH examples here. JMH has only 2 requirements (everything else is a recommendation):

5 things you didn't know about ... Java performance monitoring, Part 2

Full-featured, built-in profilers like JConsole and VisualVM sometimes cost more than they're worth in performance overhead — particularly in systems running on production hardware.
So, in this second article focusing on Java performance monitoring, I'll introduce five command-line profiling tools that enable developers to focus on just one aspect of a running Java process. The JDK includes many command-line utilities that can be used to monitor and manage Java application performance. Although most of them are labeled "experimental" and therefore technically unsupported, they're still useful. Some might even be seed material for special-purpose tools that could be built using JVMTI or JDI (see Resources).

iNikem: Monitoring and detecting memory leaks in your Java application

So your application is running out of memory, and you're spending days and nights analyzing it, hoping to catch the memory holes in your objects.
The next steps will explain how to monitor and detect your memory leaks to make sure your app is on the safe side.

1. Memory leak suspicion. If you suspect there is a memory leak, a convenient way to confirm it's really there is to use jconsole. You can connect jconsole to your app locally or remotely and let it monitor for a while (an hour, half a day, overnight, a week...).

2. Jmap - Memory Map.

Eclipse Memory Analyzer (MAT) - Memory Analyzer Open Source Project

Martin Anderson - threads v actors

11 Best Practices for Low Latency Systems

It's been 8 years since Google noticed that an extra 500ms of latency dropped traffic by 20% and Amazon realized that 100ms of extra latency dropped sales by 1%.
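Returning to the memory-leak monitoring steps above: the heap curve that jconsole plots can also be sampled in-process via the standard java.lang.management API. A minimal sketch, assuming nothing beyond the JDK (the HeapSampler class name is my own); a used-heap figure that keeps climbing across samples, even after GC activity, is the leak suspect worth a jmap/MAT investigation:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Samples current heap occupancy through the platform MemoryMXBean —
// the same source jconsole reads remotely over JMX.
public class HeapSampler {
    public static long usedHeapBytes() {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();
        return heap.getUsed();
    }
}
```

Logging this value periodically from inside the app gives you a crude long-running trend line when attaching jconsole for a week is impractical.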