Scalability

"Building on Quicksand" Paper for CIDR (Conference on ... Towards Robust Distributed Systems. Trading Consistency for Scalability in Distributed Architectures. One of the key aspects of the system architect's role is to weigh up conflicting requirements and decide on a solution, often by trading off one aspect against another.

Trading Consistency for Scalability in Distributed Architectures

One of the key aspects of the system architect's role is to weigh up conflicting requirements and decide on a solution, often by trading off one aspect against another. As systems become larger and more complex, more and more of the conventional wisdom about how applications should be built is being challenged. At last year's QCon conference in London in March, for example, Dan Pritchett gave a talk about eBay's architecture. One of the main take-away points from his presentation, which got a lot of subsequent coverage, was the fact that eBay don't use transactions, trading a loss of easy data consistency for considerable improvements in the overall scalability and performance of their system.

Following on from his talk, InfoQ spoke to Dan Pritchett to get some more information. Why doesn't eBay use transactions, or how might one decide against application-level transactions? "It's not that we don't use transactions."
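To make that trade-off concrete, here is a minimal, purely illustrative sketch (not eBay's actual design) of handling two updates without a single distributed transaction: each step commits locally in its own store, and a failure in the second step is handled by a compensating action rather than a rollback. All names and data structures below are invented for the example.

    # Illustrative only: two business steps commit in their own data stores, with a
    # compensating action instead of a cross-store transaction or rollback.
    import queue

    orders = {}                     # stands in for the orders database
    stock = {"book-123": 1}         # stands in for the inventory database
    work = queue.Queue()            # stands in for a message bus between the two

    def place_order(order_id, sku):
        orders[order_id] = {"sku": sku, "status": "PENDING"}  # local commit no. 1
        work.put((order_id, sku))                             # hand off asynchronously

    def reserve_stock_worker():
        while not work.empty():
            order_id, sku = work.get()
            if stock.get(sku, 0) > 0:
                stock[sku] -= 1                               # local commit no. 2
                orders[order_id]["status"] = "CONFIRMED"
            else:
                orders[order_id]["status"] = "CANCELLED"      # compensating action

    place_order("o-1", "book-123")
    place_order("o-2", "book-123")  # only one copy in stock, so this one is undone
    reserve_stock_worker()
    print(orders)                   # o-1 CONFIRMED, o-2 CANCELLED, no 2PC anywhere

The point of the sketch is that each store sees only small local commits, which scale independently, while correctness is recovered through explicit compensation rather than a coordinated transaction.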

Two data streams for a happy website

One of the most important architectural decisions that must be made early on in a scalable web site project is splitting the data flow into two streams: one that is user-specific and one that is generic. If this is done properly, the system will be able to grow easily. On the other hand, if the data streams are not separated from the start, the growth options will be severely limited. Trying to make such a web site scale will be just painting the corpse, and the change will cost a whole lot more when you need to introduce it later (and it is "when" in this case, not "if"). In a classic online book-store example, book details, prices and shop categories are all generic: they do not depend on any particular user. User-specific data is subject to different constraints. Keeping session footprint low, caching and partitioning are tried and tested best practices for scaling web systems; for example, most of the generic data flow is completely stateless, while user-specific actions are often stateful.
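As a rough sketch of that split (the store, cache, and session layout here are invented for illustration), the catalogue lookup below belongs to the generic, stateless stream and can be cached for every visitor, while the cart update belongs to the user-specific, stateful stream and lives in a per-user session store.

    # Illustrative only: the catalogue lookup is the generic, stateless stream and is
    # cached for everyone; the cart update is the user-specific, stateful stream.
    import time

    CATALOGUE_DB = {"book-123": {"title": "Distributed Systems", "price": 42.0}}
    catalogue_cache = {}                 # generic stream: shareable across all users
    sessions = {"alice": {"cart": []}}   # user stream: one entry per signed-in user

    def get_book(book_id):
        # Identical for every visitor, so cache aggressively and keep it stateless.
        if book_id not in catalogue_cache:
            catalogue_cache[book_id] = dict(CATALOGUE_DB[book_id], cached_at=time.time())
        return catalogue_cache[book_id]

    def add_to_cart(user, book_id):
        # Never shared between users, so it lives in the per-user session store.
        sessions[user]["cart"].append(book_id)

    print(get_book("book-123"))          # served from (and primes) the shared cache
    add_to_cart("alice", "book-123")     # touches only alice's session
    print(sessions["alice"])

Because the two streams have such different caching and state requirements, separating them early is what leaves room to scale each one on its own terms later.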

Asynchronous Architectures

All computers wait at the same speed. -- Dr. Thomas E. Bell, Performance of Distributed Systems, presentation to ICCM Capacity Management Forum 7, October 1993, San Francisco

In Five Scalability Principles, I reviewed an article published by MySQL about the five performance principles that apply to all application scaling efforts. When discussing the first principle -- don't think synchronously -- I stated that "Decoupled processes and multi-transaction workflows are the optimal starting point for the design of high-performance (distributed) systems." That's a quote from High-Performance Client/Server, from a section on Abandoning the Single Synchronous Transaction Paradigm in Chapter 15, Architecture for High Performance. So I am planning some more posts built around excerpts from the manuscript.
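A minimal sketch of the "don't think synchronously" idea discussed here (and in the Five Scalability Principles item below), using Python's standard queue and threading modules as stand-ins for a real message broker and worker pool; the workload and timings are invented. The request path only enqueues the job and returns, and a decoupled worker finishes it later.

    # Illustrative only: queue + thread stand in for a message broker and worker pool.
    import queue
    import threading
    import time

    jobs = queue.Queue()

    def handle_request(order_id):
        jobs.put(order_id)               # enqueue and return immediately
        return f"order {order_id} accepted"

    def worker():
        while True:
            order_id = jobs.get()
            if order_id is None:         # shutdown signal for the demo
                break
            time.sleep(0.1)              # stands in for slow downstream work
            print(f"order {order_id} processed asynchronously")
            jobs.task_done()

    t = threading.Thread(target=worker)
    t.start()
    print(handle_request(1))             # returns without waiting for the slow work
    print(handle_request(2))
    jobs.join()                          # only the demo waits, so it can exit cleanly
    jobs.put(None)
    t.join()

The caller never waits on the slow step, which is the essence of a decoupled, multi-transaction workflow: latency in one stage does not hold up the stage in front of it.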

Five Scalability Principles

Don't think synchronously ... don't think vertically, don't mix transactions with business intelligence, avoid mixing hot and cold data, and don't forget the power of memory. -- MySQL site, 2007

The 12 Days of Scale-Out is a section of the MySQL site. It consists of a series of twelve articles, eleven of which are case studies describing large-scale MySQL implementations. But Day Six is a bit different -- it spells out five fundamental performance principles that apply to all application scaling efforts. This subject is vitally important to MySQL, whose server replication and high availability features ... allow high-traffic sites to horizontally 'Scale-Out' their applications, using multiple commodity machines to form one logical database -- as opposed to 'Scaling Up', starting over with more expensive and complex hardware and database technology. A toy sketch of this read/write split across replicas follows below.

Scalability Best Practices
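The following toy sketch illustrates the horizontal scale-out pattern described in the Five Scalability Principles excerpt above: writes go to a single primary while reads are spread round-robin across replicas. It is not MySQL code; the in-memory dictionaries merely stand in for database servers, and replication is copied synchronously here only to keep the example self-contained (real replication is typically asynchronous).

    # Illustrative only: dictionaries stand in for one primary and two read replicas.
    import itertools

    primary = {}
    replicas = [{}, {}]                          # read-only copies on commodity machines
    next_replica = itertools.cycle(range(len(replicas)))

    def write(key, value):
        primary[key] = value
        for r in replicas:                       # replication, done synchronously here
            r[key] = value                       # only to keep the toy self-contained

    def read(key):
        i = next(next_replica)                   # round-robin across the replicas
        print(f"read '{key}' from replica {i}")
        return replicas[i].get(key)

    write("user:1", {"name": "alice"})
    print(read("user:1"))                        # served by replica 0
    print(read("user:1"))                        # served by replica 1

Adding read capacity then means adding another commodity replica rather than buying a bigger machine, which is the scale-out versus scale-up distinction the excerpt draws.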

Eventually Consistent

I wrote a first version of this posting on consistency models in December 2007, but I was never happy with it as it was written in haste, and the topic is important enough to receive a more thorough treatment. ACM Queue asked me to revise it for use in their magazine, and I took the opportunity to improve the article. I posted an update to this article in December 2008 under the title Eventually Consistent - Revisited; please read that article instead of this one. I am leaving this one here for transparency and historical reasons, and because the comments helped me improve the article, for which I am grateful. Recently there has been a lot of discussion about the concept of eventual consistency in the context of data replication. There are two ways of looking at consistency.
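A toy model, invented for illustration and not taken from the article, of the inconsistency window that eventual consistency allows: a write is acknowledged after updating one replica, other replicas serve stale data until a background propagation step runs, and afterwards all copies converge on the same value.

    # Illustrative only: three replicas, with propagation deferred to a background step.
    replicas = [{"x": 0}, {"x": 0}, {"x": 0}]
    pending = []                            # updates not yet applied everywhere

    def write(replica_id, key, value):
        replicas[replica_id][key] = value   # acknowledged after a single local write
        pending.append((key, value))        # the other copies are updated "eventually"

    def read(replica_id, key):
        return replicas[replica_id][key]

    def anti_entropy():
        while pending:                      # background propagation; until it runs,
            key, value = pending.pop(0)     # the replicas are allowed to disagree
            for r in replicas:
                r[key] = value

    write(0, "x", 42)
    print(read(0, "x"))    # 42 -> read-your-writes holds only on the replica written to
    print(read(2, "x"))    # 0  -> stale read inside the inconsistency window
    anti_entropy()
    print(read(2, "x"))    # 42 -> all replicas have converged

The two reads before convergence show the client-side view of consistency, while the anti_entropy step is a crude stand-in for the server-side replication mechanics.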

Building a Non-blocking TCP server using OTP principles

Author: Serge Aleynikov <saleyn at gmail.com>. Overview: the reader of this tutorial is assumed to be familiar with the gen_server and gen_fsm behaviours, TCP socket communications using the gen_tcp module, active and passive socket modes, and OTP supervision principles. OTP provides a convenient framework for building reliable applications. This is accomplished in part by abstracting common functionality into a set of reusable behaviours, such as gen_server and gen_fsm, that are linked into OTP's supervision hierarchy. There are several known TCP server designs; a rough non-OTP analogue is sketched at the end of this page.

Life beyond Distributed Transactions

Building on Quicksand
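To show the non-blocking server idea from the Aleynikov tutorial above without assuming Erlang, here is a minimal sketch of a non-blocking TCP echo server using Python's asyncio. It is only a loose analogue of the tutorial's gen_server/gen_fsm design, not a translation of it; the port number and echo behaviour are arbitrary choices for the example.

    # Illustrative only: an asyncio echo server, not the tutorial's Erlang/OTP design.
    import asyncio

    async def handle_client(reader, writer):
        while True:
            data = await reader.read(1024)   # yields to other clients instead of blocking
            if not data:                     # client closed the connection
                break
            writer.write(data)               # echo the bytes back
            await writer.drain()
        writer.close()
        await writer.wait_closed()

    async def main():
        server = await asyncio.start_server(handle_client, "127.0.0.1", 8888)
        async with server:
            await server.serve_forever()     # many concurrent clients, one event loop

    if __name__ == "__main__":
        asyncio.run(main())

As in the non-blocking designs the tutorial discusses, no single slow client can stall the accept loop, because each connection is handled by a lightweight concurrent task rather than a blocking call.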