
Architecture


Scalable NIO Servers – Part 3 – Features « Z|NET Development

We have now analyzed various open source NIO servers for performance and memory consumption. Per my quick initial testing, only Grizzly, Mina, and Netty were comparable. Now let's analyze features and how each of these frameworks supports them. For my purposes, I am going to look into the following features, which I personally consider most important for my project:

- Intercepting pattern (i.e. filters)
- Access to high-level, yet efficient, buffers rather than lower-level byte buffers
- Protocol independence and abstraction
- Socket independence and abstraction
- Custom protocol support
- POJO support for encoding/decoding
- Custom thread model support
- HTTP support
- User documentation (user guide, javadoc, source code, examples)

Intercepting Pattern: Filters/Handlers

Let's look at the first feature: the intercepting pattern. Netty provides this functionality through channel handlers, via ChannelUpstreamHandler and ChannelDownstreamHandler.
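As a rough illustration of how this looks in practice, here is a minimal sketch assuming the Netty 3.x API (org.jboss.netty packages); the LoggingUpstreamHandler class and the pipeline wiring are invented for the example and are not from the article:

    import org.jboss.netty.channel.ChannelHandlerContext;
    import org.jboss.netty.channel.ChannelPipeline;
    import org.jboss.netty.channel.Channels;
    import org.jboss.netty.channel.MessageEvent;
    import org.jboss.netty.channel.SimpleChannelUpstreamHandler;

    // Hypothetical handler that intercepts every inbound message, logs it,
    // and then forwards the event to the next handler in the pipeline.
    public class LoggingUpstreamHandler extends SimpleChannelUpstreamHandler {

        @Override
        public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
            System.out.println("Received: " + e.getMessage());
            super.messageReceived(ctx, e); // default behavior passes the event upstream
        }

        // Handlers are chained in a ChannelPipeline, which is what gives Netty
        // its filter-like intercepting behavior.
        public static ChannelPipeline newPipeline() {
            ChannelPipeline pipeline = Channels.pipeline();
            pipeline.addLast("logger", new LoggingUpstreamHandler());
            return pipeline;
        }
    }

The same chaining applies on the outbound path via ChannelDownstreamHandler.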

Mina also provides the intercepting pattern through actual filters.

Understanding Throughput-Oriented Architectures | November 2010

By Michael Garland and David B. Kirk, Communications of the ACM, Vol. 53 No. 11, Pages 58-66, DOI: 10.1145/1839676.1839694.

Much has been written about the transition of commodity microprocessors from single-core to multicore chips, a trend most apparent in CPU processor families.

Commodity PCs are now typically built with CPUs containing from two to eight cores, with even higher core counts on the horizon. These chips aim to deliver higher performance by exploiting modestly parallel workloads, arising either from the need to execute multiple independent programs or from individual programs that themselves consist of multiple parallel tasks, while maintaining the same level of performance as single-core chips on sequential workloads. A related architectural trend is the growing prominence of throughput-oriented microprocessor architectures.
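To make the second case concrete, here is a small sketch (my own illustration, not from the article) of a single program that consists of multiple parallel tasks, the kind of modestly parallel workload a multicore CPU is designed to exploit: each core sums one slice of an array, and the partial results are combined at the end.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    // One task per available core: each task sums its own slice of the array.
    public class ParallelSum {
        public static void main(String[] args) throws Exception {
            int cores = Runtime.getRuntime().availableProcessors();
            long[] data = new long[10_000_000];
            for (int i = 0; i < data.length; i++) data[i] = i;

            ExecutorService pool = Executors.newFixedThreadPool(cores);
            int chunk = data.length / cores;
            List<Future<Long>> parts = new ArrayList<>();
            for (int c = 0; c < cores; c++) {
                final int lo = c * chunk;
                final int hi = (c == cores - 1) ? data.length : lo + chunk;
                parts.add(pool.submit((Callable<Long>) () -> {
                    long sum = 0;
                    for (int i = lo; i < hi; i++) sum += data[i];
                    return sum;
                }));
            }
            long total = 0;
            for (Future<Long> part : parts) total += part.get(); // combine partial sums
            pool.shutdown();
            System.out.println("total = " + total);
        }
    }

A GPU takes the same idea much further, running thousands of such fine-grained tasks at once, which is the throughput-oriented design the article turns to next.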

Modern GPUs are fully programmable and designed to meet the needs of a problem domain—real-time computer graphics—with tremendous inherent parallelism.

GPUs

YouTube Architecture

Update 3: 7 Years Of YouTube Scalability Lessons In 30 Minutes and YouTube Strategy: Adding Jitter Isn't A Bug. Update 2: YouTube Reaches One Billion Views Per Day. That's at least 11,574 views per second (one billion views divided by the 86,400 seconds in a day), 694,444 views per minute, and 41,666,667 views per hour. Update: YouTube: The Platform. YouTube adds a new, rich set of APIs in order to become your video platform leader--all for free. Upload, edit, watch, search, and comment on video from your own site without visiting YouTube. Compose your site internally from APIs because you'll need to expose them later anyway.

YouTube grew incredibly fast, to over 100 million video views per day, with only a handful of people responsible for scaling the site.

Information Sources: Google Video

Platform: Apache, Python, Linux (SuSe), MySQL, psyco (a dynamic Python-to-C compiler), lighttpd for video instead of Apache

What's Inside?

The Stats: Supports the delivery of over 100 million videos per day.

Recipe for handling rapid growth: This loop runs many times a day.

This is what 128 GB of RAM looks like.

Www.akkadia.org/drepper/cpumemory.pdf

Expanding the Cloud - Adding the Incredible Power of the Amazon EC2 Cluster GPU Instances

Today Amazon Web Services takes another step on the continuous innovation path by announcing a new Amazon EC2 instance type: the Cluster GPU Instance. Based on the Cluster Compute instance type, the Cluster GPU instance adds two NVIDIA Tesla M2050 GPUs, offering GPU-based computational power of over one TeraFLOPS per instance. This incredible power is available for anyone to use in the usual pay-as-you-go model, removing the investment barrier that has kept many organizations from adopting GPUs for their workloads even though they knew there would be significant performance benefit.

From financial processing and traditional oil & gas exploration HPC applications to integrating complex 3D graphics into online and mobile applications, the applications of GPU processing appear to be limitless. We believe that making these GPU resources available for everyone to use at low cost will drive new innovation in the application of highly parallel programming models.

From CPU to GPU

CPU and/or GPU

Eventually Consistent - Revisited

I wrote a first version of this posting on consistency models about a year ago, but I was never happy with it, as it was written in haste and the topic is important enough to receive a more thorough treatment. ACM Queue asked me to revise it for use in their magazine, and I took the opportunity to improve the article. This is that new version.

Eventually Consistent - Building reliable distributed systems at a worldwide scale demands trade-offs between consistency and availability.

At the foundation of Amazon's cloud computing are infrastructure services such as Amazon's S3 (Simple Storage Service), SimpleDB, and EC2 (Elastic Compute Cloud) that provide the resources for constructing Internet-scale computing platforms and a great variety of applications. Under the covers these services are massive distributed systems that operate on a worldwide scale.

Historical Perspective

In the mid-'90s, with the rise of larger Internet systems, these practices were revisited.

Client-side Consistency

Pragmatic Programming Techniques
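To make the consistency/availability trade-off described in the Eventually Consistent excerpt above a little more concrete, here is a toy sketch of my own (not code from the article; the replica maps, key names, and replication delay are invented for illustration). A write is acknowledged by the primary immediately, while a client reading from a replica that has not yet received the update briefly sees stale data until the replicas converge.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Toy model of eventual consistency: writes land on a primary replica and are
    // propagated to a secondary replica asynchronously, so reads against the
    // secondary can briefly return stale data before the replicas converge.
    public class EventualConsistencyDemo {
        private final Map<String, String> primary = new ConcurrentHashMap<>();
        private final Map<String, String> secondary = new ConcurrentHashMap<>();
        private final ScheduledExecutorService replicator =
                Executors.newSingleThreadScheduledExecutor();

        // The write is acknowledged as soon as the primary has it (availability);
        // replication to the secondary happens later (eventual consistency).
        void put(String key, String value, long replicationDelayMs) {
            primary.put(key, value);
            replicator.schedule(() -> secondary.put(key, value),
                    replicationDelayMs, TimeUnit.MILLISECONDS);
        }

        // A client reading from the secondary may not see the latest write yet.
        String readFromSecondary(String key) {
            return secondary.get(key);
        }

        public static void main(String[] args) throws Exception {
            EventualConsistencyDemo store = new EventualConsistencyDemo();
            store.put("profile:42", "new-avatar.png", 200); // replicate after ~200 ms

            System.out.println("right after the write: " + store.readFromSecondary("profile:42")); // likely null (stale)
            Thread.sleep(500);
            System.out.println("after convergence:     " + store.readFromSecondary("profile:42")); // new-avatar.png

            store.replicator.shutdown();
        }
    }

Stronger consistency would require the write to wait for the secondary as well, trading availability and latency for the guarantee that every read sees the latest value.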