High performance
Solving OutOfMemoryErrors. OutOfMemoryError - GC overhead limit exceeded. March 25, 2010. Someone asked me recently about the following exception on their ColdFusion server: java.lang.OutOfMemoryError: GC overhead limit exceeded. This exception is thrown by the garbage collector (in the underlying JVM; it is not specific to ColdFusion) when it is spending far too much time collecting garbage. This error essentially means that you need to add more memory or reconfigure your garbage collection arguments.
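As a hedged sketch of "add more memory or reconfigure your garbage collection arguments" (the flag names are standard HotSpot options; the heap sizes and the jar name are placeholders, not from the article):

```shell
# Raise the maximum heap, the usual first response to
# "GC overhead limit exceeded". Sizes below are placeholders.
java -Xms512m -Xmx2048m -jar yourapp.jar
```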

You can suppress this error by adding -XX:-UseGCOverheadLimit to your JVM startup arguments. Here's what Sun has to say about it: the parallel/concurrent collector will throw an OutOfMemoryError if too much time is being spent in garbage collection; if more than 98% of the total time is spent in garbage collection and less than 2% of the heap is recovered, an OutOfMemoryError will be thrown.

Increase the heap size in Eclipse. Some JVMs put restrictions on the total amount of memory available on the heap.

If you are getting OutOfMemoryErrors while running Eclipse, the VM can be told to let the heap grow to a larger amount by passing the -vmargs command to the Eclipse launcher. For example, the following command would run Eclipse with a heap size of 256MB: eclipse [normal arguments] -vmargs -Xmx256M [more VM args]. The arguments after -vmargs are passed directly to the VM. Run java -X for the list of options your VM accepts. Options starting with -X are implementation-specific and may not be applicable to all VMs. You can also put the extra options in eclipse.ini. This FAQ was originally published in Official Eclipse 3.0 FAQs.
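A minimal eclipse.ini sketch (the heap sizes are placeholders; in this file each option goes on its own line, and everything after -vmargs is handed to the VM):

```
-vmargs
-Xms40m
-Xmx256M
```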

Performance Tuning. I/O Performance. Tuning tips by category. Java I/O Performance Tuning.

Summary: Many Java programs that utilize I/O are excellent candidates for performance tuning. One of the more common problems in Java applications is inefficient I/O. A profile of Java applications and applets that handle significant volumes of data will show significant time spent in I/O routines, implying that substantial gains can be had from I/O performance tuning. In fact, I/O performance issues usually overshadow all other performance issues, making them the first area to concentrate on when tuning performance.

Once an application's reliance upon I/O is established and I/O is determined to account for a substantial slice of the application's execution time, performance tuning can be undertaken. Introduction: Java performance is currently a topic of great interest. Because Java is a relatively new language, optimizing compiler features are less sophisticated than those available for C and C++, leaving room for more "hand-crafting".

Performance Tuning Through Stream Chaining.

How to improve Java's I/O performance. Java's I/O performance has been a bottleneck for many Java applications because of the poorly designed and implemented JDK 1.0.2 java.io package. A key problem is buffering: most classes in java.io are not buffered. In fact, the only classes with buffers are BufferedInputStream and BufferedOutputStream, but they provide very limited methods. For example, in most file-related applications you need to parse a file line by line, yet the only class that provides the readLine method is DataInputStream, and it has no internal buffer.
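As a small illustration of buffered, line-by-line reading with the later Reader classes (the LineCount class and its use of StringReader are illustrative choices, not from the article):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

// Line-by-line reading through a buffered Reader, the buffered
// alternative to the unbuffered DataInputStream.readLine().
public class LineCount {
    // Count lines in a text; StringReader stands in for a file reader
    // so the example is self-contained.
    static int countLines(String text) throws IOException {
        BufferedReader in = new BufferedReader(new StringReader(text));
        int lines = 0;
        while (in.readLine() != null) {
            lines++;
        }
        in.close();
        return lines;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(countLines("a\nb\nc")); // prints 3
    }
}
```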

The new JDK 1.1 improves I/O performance with the addition of a collection of Reader and Writer classes. How to tackle the I/O problem: to tackle the problem of inefficient file I/O, we need a buffered RandomAccessFile class:

public class Braf extends RandomAccessFile { }

For efficiency reasons, we define a byte buffer instead of a char buffer:

byte buffer[];
int buf_end = 0;
int buf_pos = 0;
long real_pos = 0;

Synchronization turn-off: an extra tip.

Optimization techniques in I/O.
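The Braf skeleton and its four buffer fields can be filled in as follows. This is my own minimal sketch, not the article's full implementation: only the single-byte read() path is shown, and the refill logic is an assumption.

```java
import java.io.IOException;
import java.io.RandomAccessFile;

// Minimal buffered RandomAccessFile sketch. Field and class names
// follow the article; the fill/read logic is illustrative.
public class Braf extends RandomAccessFile {
    byte buffer[];
    int buf_end = 0;   // number of valid bytes currently in the buffer
    int buf_pos = 0;   // next buffer position to hand out
    long real_pos = 0; // position of the underlying file pointer

    public Braf(String filename, String mode, int bufsize) throws IOException {
        super(filename, mode);
        buffer = new byte[bufsize];
    }

    // Refill the buffer from the file; returns bytes read, or -1 at EOF.
    private int fillBuffer() throws IOException {
        int n = super.read(buffer, 0, buffer.length);
        if (n >= 0) {
            real_pos += n;
            buf_end = n;
            buf_pos = 0;
        }
        return n;
    }

    @Override
    public final int read() throws IOException {
        if (buf_pos >= buf_end && fillBuffer() < 0) {
            return -1; // end of file
        }
        return buffer[buf_pos++] & 0xff;
    }
}
```

Each underlying read now fetches a whole buffer, so most calls to read() never touch the disk.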

Performance improvement techniques in I/O. This topic illustrates performance improvement techniques in I/O with the following sections: Overview of I/O. I/O stands for input and output streams. We use streams to read from or write to devices such as a file, a network, or the console. The java.io package provides I/O classes to manipulate streams. This package supports two types of streams: binary streams, which handle binary data, and character streams, which handle character data.

InputStream and OutputStream are the high-level interfaces for manipulating binary streams. Reader and Writer are the high-level interfaces for manipulating character streams. The following figure shows the relationship of the different I/O classes addressed in this section. The examples in this section were tested on Windows Millennium with 320 MB RAM and JDK 1.3. Note: This section assumes that the reader has some basic knowledge of Java I/O.
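A small sketch of the binary-stream interfaces in action, with buffering layered on top (the class name, the 8 KB buffer size, and the use of ByteArrayInputStream in place of a file are illustrative choices):

```java
import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Wrap an unbuffered InputStream in a BufferedInputStream so each
// read() is served from an in-memory buffer rather than a separate
// underlying read of the device.
public class IOTest {
    static int sumBytes(InputStream raw) throws IOException {
        InputStream in = new BufferedInputStream(raw, 8192); // 8 KB buffer
        int sum = 0, b;
        while ((b = in.read()) != -1) { // one refill per 8 KB, not per byte
            sum += b;
        }
        in.close();
        return sum;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = {1, 2, 3, 4};
        System.out.println(sumBytes(new ByteArrayInputStream(data))); // prints 10
    }
}
```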

Optimization with I/O buffering. By default, most of the streams read or write one byte at a time. IOTest.java.

Java Large Files Disk IO Performance.

High-Performance I/O with Java NIO.

VM Garbage Collection Tuning. Note: For Java SE 8, see the Java Platform, Standard Edition HotSpot Virtual Machine Garbage Collection Tuning Guide. The Java Platform, Standard Edition (Java SE) is used for a wide variety of applications, from small applets on desktops to web services on large servers. In support of this diverse range of deployments, the Java HotSpot virtual machine implementation (Java HotSpot VM) provides multiple garbage collectors, each designed to satisfy different requirements.

This is an important part of meeting the demands of both large and small applications. However, users, developers, and administrators who need high performance are burdened with the extra step of selecting the garbage collector that best meets their needs. This choice of garbage collector is generally an improvement, but it is by no means always the best choice for every application. When does the choice of a garbage collector matter? A feature referred to here as ergonomics was introduced in J2SE 5.0.
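As a hedged sketch of selecting a collector explicitly rather than relying on ergonomics (the flag names are standard HotSpot options of this era; the application name and heap sizes are placeholders):

```shell
# Each line selects one HotSpot collector explicitly.
java -XX:+UseSerialGC        -Xmx256m YourApp  # small heaps, single CPU
java -XX:+UseParallelGC      -Xmx2g   YourApp  # throughput-oriented
java -XX:+UseConcMarkSweepGC -Xmx2g   YourApp  # low-pause (CMS)
```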

External Sorting. Sometimes you want to sort large files without first loading them into memory. The solution is to use external sorting: you divide the file into small blocks, sort each block in RAM, and then merge the results. Many database engines and the Unix sort command support external sorting. But what if you want to avoid a database? Or what if you want to sort in a non-lexicographic order? Or maybe you just want a simple external sorting example? When we could not find such a simple program, we wrote one. Download: you can grab a copy from the Maven repository. Current source code: please see our Subversion tree. Usage for developers: ExternalSort.sort(inputfile, outputfile); This will output a sorted version of inputfile to outputfile. For actual applications, you may want customized row comparators.
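To illustrate what a call like ExternalSort.sort does internally, here is a minimal sketch of the divide/sort/merge steps. This is my own simplified version, not the library's actual code; the class name, block-size parameter, and use of temporary files are illustrative.

```java
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.PriorityQueue;

// Minimal external merge sort: split the input into blocks that fit in
// memory, sort each block into a temporary "run" file, then k-way merge
// the runs with a priority queue keyed on each run's head line.
public class SimpleExternalSort {

    // One sorted run plus its current head line.
    static class Run {
        BufferedReader reader;
        String head;
        Run(File f) throws IOException {
            reader = new BufferedReader(new FileReader(f));
            head = reader.readLine();
        }
        void advance() throws IOException { head = reader.readLine(); }
    }

    // Sort one in-memory block and write it out as a run file.
    static File writeRun(List<String> block) throws IOException {
        Collections.sort(block);
        File run = File.createTempFile("run", ".txt");
        run.deleteOnExit();
        BufferedWriter w = new BufferedWriter(new FileWriter(run));
        for (String s : block) { w.write(s); w.newLine(); }
        w.close();
        return run;
    }

    public static void sort(File input, File output, int blockSize) throws IOException {
        // Phase 1: produce sorted runs of at most blockSize lines.
        List<File> runs = new ArrayList<>();
        BufferedReader in = new BufferedReader(new FileReader(input));
        List<String> block = new ArrayList<>();
        for (String line = in.readLine(); line != null; line = in.readLine()) {
            block.add(line);
            if (block.size() >= blockSize) { runs.add(writeRun(block)); block.clear(); }
        }
        if (!block.isEmpty()) runs.add(writeRun(block));
        in.close();

        // Phase 2: merge the runs, always emitting the smallest head line.
        PriorityQueue<Run> pq = new PriorityQueue<>((a, b) -> a.head.compareTo(b.head));
        for (File f : runs) {
            Run r = new Run(f);
            if (r.head != null) pq.add(r);
        }
        BufferedWriter out = new BufferedWriter(new FileWriter(output));
        while (!pq.isEmpty()) {
            Run r = pq.poll();
            out.write(r.head);
            out.newLine();
            r.advance();
            if (r.head != null) pq.add(r); else r.reader.close();
        }
        out.close();
    }
}
```

Only blockSize lines are ever held in memory at once; everything else lives in the run files, which is the whole point of the technique.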

Building with Maven: Usage for end-users: you can download the jar file and run the program as follows. License:

Extendible Hashing. An alternative to B-trees that extends digital searching algorithms to apply to external searching was developed in 1978 by Fagin, Nievergelt, Pippenger, and Strong. Their method, called extendible hashing, leads to a search implementation that requires just one or two probes for typical applications.

The corresponding insert implementation also (almost always) requires just one or two probes. Extendible hashing combines features of hashing, multiway-trie algorithms, and sequential-access methods. Like the hashing methods of Chapter 14, extendible hashing is a randomized algorithm: the first step is to define a hash function that transforms keys into integers (see Section 14.1). For simplicity, in this section we simply consider keys to be random fixed-length bitstrings. Suppose that the number of disk pages that we have available is a power of 2, say 2^d. The figure illustrates the two basic concepts behind extendible hashing (see Property 16.4 and Property 16.5).
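The directory-of-pages idea above can be sketched in a few dozen lines. This is an illustration under stated assumptions, not the book's implementation: keys are ints hashed by their own low bits, the bucket "page" holds only two keys so that splits happen quickly, and deletion is omitted.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal extendible hashing: a directory of 2^globalDepth entries,
// where entry i points to the bucket for keys whose low globalDepth
// bits equal i. An overflowing bucket splits on its next bit, doubling
// the directory only when its local depth equals the global depth.
public class ExtendibleHash {
    static final int BUCKET_CAPACITY = 2; // tiny "disk page" for illustration

    static class Bucket {
        int localDepth;
        List<Integer> keys = new ArrayList<>();
        Bucket(int depth) { localDepth = depth; }
    }

    int globalDepth = 1;
    List<Bucket> directory = new ArrayList<>();

    public ExtendibleHash() {
        directory.add(new Bucket(1)); // keys with low bit 0
        directory.add(new Bucket(1)); // keys with low bit 1
    }

    int dirIndex(int key) { return key & ((1 << globalDepth) - 1); }

    // Search: exactly one directory lookup, then one bucket probe.
    public boolean contains(int key) {
        return directory.get(dirIndex(key)).keys.contains(key);
    }

    public void insert(int key) {
        Bucket b = directory.get(dirIndex(key));
        if (b.keys.contains(key)) return;
        if (b.keys.size() < BUCKET_CAPACITY) { b.keys.add(key); return; }
        split(b);
        insert(key); // retry after the split
    }

    void split(Bucket b) {
        if (b.localDepth == globalDepth) { // directory must double first
            int n = directory.size();
            for (int i = 0; i < n; i++) directory.add(directory.get(i));
            globalDepth++;
        }
        int bit = 1 << b.localDepth; // the bit that now distinguishes keys
        Bucket b0 = new Bucket(b.localDepth + 1), b1 = new Bucket(b.localDepth + 1);
        for (int k : b.keys) ((k & bit) == 0 ? b0 : b1).keys.add(k);
        for (int i = 0; i < directory.size(); i++) // repoint affected entries
            if (directory.get(i) == b)
                directory.set(i, (i & bit) == 0 ? b0 : b1);
    }
}
```

Note how a split usually touches only the overflowing bucket and its directory entries; the directory doubles only when the splitting bucket is already at the global depth, which is why typical searches stay at one or two probes.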