Links of Interest
Recently we open sourced the LMAX Disruptor , the key to what makes our exchange so fast. Why did we open source it? Well, we've realised that conventional wisdom around high performance programming is... a bit wrong. We've come up with a better, faster way to share data between threads, and it would be selfish not to share it with the world. Plus it makes us look dead clever.
Fifteen years ago, multiprocessor systems were highly specialized systems costing hundreds of thousands of dollars (and most of them had two to four processors). Today, multiprocessor systems are cheap and plentiful, nearly every major microprocessor has built-in support for multiprocessing, and many support dozens or hundreds of processors. To exploit the power of multiprocessor systems, applications are generally structured using multiple threads. But as anyone who's written a concurrent application can tell you, simply dividing up the work across multiple threads isn't enough to achieve good hardware utilization -- you must ensure that your threads spend most of their time actually doing work, rather than waiting for more work to do, or waiting for locks on shared data structures.
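The point about threads doing work rather than waiting on locks can be made concrete with a minimal sketch (class and variable names are my own, not from the article): each thread sums only its own slice of an array, so there is no shared mutable state to lock, and results are combined exactly once at the end.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.LongStream;

// Illustrative only: partition the work so threads never contend on a lock.
public class PartitionedSum {
    public static long sum(long[] data, int threads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        int chunk = (data.length + threads - 1) / threads;
        Future<Long>[] parts = new Future[threads];
        for (int t = 0; t < threads; t++) {
            final int lo = t * chunk;
            final int hi = Math.min(data.length, lo + chunk);
            // Each task touches only its own slice: no shared mutable state.
            parts[t] = pool.submit(() -> {
                long s = 0;
                for (int i = lo; i < hi; i++) s += data[i];
                return s;
            });
        }
        long total = 0;
        for (Future<Long> f : parts) total += f.get(); // combine once, at the end
        pool.shutdown();
        return total;
    }

    public static void main(String[] args) throws Exception {
        long[] data = LongStream.rangeClosed(1, 1000).toArray();
        System.out.println(sum(data, 4)); // 500500
    }
}
```

The design choice is the point: contention is avoided structurally, not by making the lock faster.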
Flash Crisis Robert X. Cringely absolutely nails it in his recent column about some of the consequences of rapidly reducing IO times on programming languages. His major point was that slow but expressive high-level scripting languages such as Ruby and Python have been getting away with their lack of performance due to slow disks. With super-fast seekless flash expected to replace, or at least complement, spinning disks in the storage hierarchy, the long honeymoon of Python and Ruby will come to an end when profiling reveals that IO is fast, and the runtime or interpreter is the bottleneck. This impending “flash crisis” is well known in system circles.
When talking to customers, partners and colleagues about Oracle Solaris ZFS performance, one topic almost always seems to pop up: synchronous writes and the ZIL. In fact, most ZFS performance problems I see are related to synchronous writes, how they are handled by ZFS through the ZIL, and how they impact IOPS load on the pool's disks. Many people blame the ZIL for bad performance, and they even try to turn it off, but that's not good.
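The usual fix the article alludes to, instead of turning the ZIL off, is to move it to a fast dedicated log device so synchronous writes stop competing with the pool's data disks. A sketch with hypothetical pool, dataset, and device names:

```shell
# Give the ZIL a fast dedicated log device (SLOG); names are illustrative.
zpool add tank log /dev/disk/by-id/nvme-fastssd

# sync=standard is the safe default; sync=disabled discards the durability
# guarantee and is the "turning it off" the article warns against.
zfs set sync=standard tank/db
```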
To mitigate the risk of data corruption during power loss, some storage devices use battery-backed write caches. Generally, high-end arrays and some hardware controllers use battery-backed write caches. However, because the cache's volatility is not visible to the kernel, Red Hat Enterprise Linux 6 enables write barriers by default on all supported journaling file systems. Write caches are designed to increase I/O performance. However, enabling write barriers means constantly flushing these caches, which can significantly reduce performance.
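When the controller's cache really is battery-backed, barriers can be disabled per mount. A hypothetical /etc/fstab entry (device and mount point are illustrative, not from the original):

```shell
# Only safe when the write cache is battery-backed or non-volatile:
/dev/sdb1  /data  ext4  defaults,nobarrier  0 0
```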
We all love Linux... sometimes it is better not to look under its hood, though, as you never know what you might find. I stumbled across a very interesting discussion on a Linux kernel mailing list. It is dated August 2009, so you may have already read it. There is a related RH bug. I'm a little bit surprised by RH's attitude in this ticket.
... otherwise known as when is a sync() not a sync()? Recently I ran some performance tests on disk I/O, from both Java and C-based applications. The nature of the applications is such that they require transactional logging for reliability, and therefore need a guarantee that data has been written to disk. After running some simple write tests, I noticed an order of magnitude difference in performance between a couple of machines.
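The guarantee the article needs is what in Java is exposed as `FileChannel.force()` (the analogue of `fsync`): the write is not durable until the OS and drive caches are flushed. A minimal sketch of a transactional-log style append, with hypothetical names:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Illustrative: a record is only considered committed once force() returns.
public class DurableLog {
    public static void append(Path log, byte[] record) throws IOException {
        try (FileChannel ch = FileChannel.open(log,
                StandardOpenOption.CREATE,
                StandardOpenOption.WRITE,
                StandardOpenOption.APPEND)) {
            ByteBuffer buf = ByteBuffer.wrap(record);
            while (buf.hasRemaining()) ch.write(buf); // may stay in OS cache
            ch.force(false); // flush file content; pass true to flush metadata too
        }
    }

    public static void main(String[] args) throws IOException {
        Path log = Files.createTempFile("txn", ".log");
        append(log, "commit-1\n".getBytes());
        System.out.println(Files.size(log)); // 9
    }
}
```

The order-of-magnitude differences the author saw typically come down to whether that flush actually reaches the platter or stops at a drive write cache.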
Update: Please see this post for updated information about this event. This is possibly the fastest that Team FOSS.IN has ever put an event together. As promised in my last post, here is some information about the new event series that we are putting together.
E is an object-oriented programming language for secure distributed computing, created by Mark S. Miller, Dan Bornstein, and others at Electric Communities in 1997. E is mainly descended from the concurrent language Joule and from Original-E, a set of extensions to Java for secure distributed programming. E combines message-based computation with Java-like syntax. A concurrency model based on event loops and promises ensures that deadlock can never occur.
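This is not E itself, but the promise style it pioneered has a rough Java analogue in `CompletableFuture`: instead of blocking on a result (and risking deadlock), a callback is chained onto the promise and runs when it resolves. Names below are my own:

```java
import java.util.concurrent.CompletableFuture;

// Illustrative analogue of promise-style composition, the shape of
// computation E's event loops enforce: react to a result, don't wait for it.
public class Promises {
    public static CompletableFuture<Integer> fetchPrice() {
        return CompletableFuture.supplyAsync(() -> 40); // pretend remote call
    }

    public static void main(String[] args) {
        CompletableFuture<Integer> total =
            fetchPrice().thenApply(p -> p + 2); // chained, non-blocking
        System.out.println(total.join()); // join() only to print in this demo
    }
}
```

E goes further than this sketch: its event loops never block at all, which is what makes the no-deadlock guarantee possible.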
LMAX is a new retail financial trading platform. As a result it has to process many trades with low latency. The system is built on the JVM platform and centers on a Business Logic Processor that can handle 6 million orders per second on a single thread. The Business Logic Processor runs entirely in-memory using event sourcing. The Business Logic Processor is surrounded by Disruptors - a concurrency component that implements a network of queues that operate without needing locks.
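The "queues without locks" idea can be sketched in miniature. This is not LMAX's code: it is a minimal single-producer/single-consumer ring of my own construction, showing the Disruptor-style principle of a pre-allocated buffer coordinated through sequence counters rather than locks.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of a Disruptor-style queue: slots indexed by ever-increasing
// sequences; one producer and one consumer coordinate via those sequences.
public class SpscRing {
    private final long[] slots;
    private final int mask;
    private final AtomicLong head = new AtomicLong(0); // next slot to write
    private final AtomicLong tail = new AtomicLong(0); // next slot to read

    public SpscRing(int sizePow2) {
        slots = new long[sizePow2];
        mask = sizePow2 - 1; // cheap modulo, since size is a power of two
    }

    public boolean offer(long v) {
        long h = head.get();
        if (h - tail.get() == slots.length) return false; // ring full
        slots[(int) (h & mask)] = v;
        head.set(h + 1); // volatile write publishes the slot to the consumer
        return true;
    }

    public Long poll() {
        long t = tail.get();
        if (t == head.get()) return null; // ring empty
        long v = slots[(int) (t & mask)];
        tail.set(t + 1); // hand the slot back to the producer
        return v;
    }

    public static void main(String[] args) {
        SpscRing ring = new SpscRing(4);
        for (long i = 1; i <= 3; i++) ring.offer(i);
        System.out.println(ring.poll() + "," + ring.poll()); // 1,2
    }
}
```

The real Disruptor adds batching, cache-line padding, and multi-consumer dependency graphs on top of this basic sequence-counter scheme.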
Catherine & Raj have been working on Enterprise Agile transitions in large hardware manufacturers; they share their experiences and advice on leadership and bringing Scrum to hardware teams. Resistance from management is recognized as a bottleneck in agile adoption. When will we reach the tipping point where organizations unshackle themselves from the limitations of command & control?
Tiago Garcez, Apr 01, 2013. Martin Thompson explores performance testing: how to avoid the common pitfalls, how to profile when the results cause your team to pull a funny face, and what you can do about that funny face. Martin Thompson, Mar 29, 2013.
NOTE: This post is quite outdated; stuff has changed since I wrote this. While you can somewhat safely ignore the alterations for increased address space of entities, the Property store has changed in a fundamental way. Please find the new implementation here.