
CS Research


Cloud Computing Patterns. I attended a presentation by Simon Guest from Microsoft on their cloud computing architecture. Although no new concept or idea was introduced, Simon provided an excellent summary of the major patterns of doing cloud computing. I have to admit that I am not familiar with Azure, and this was my first time hearing a Microsoft cloud computing presentation. I felt Microsoft explained their Azure platform in a very comprehensible way. I am quite impressed. Simon talked about 5 patterns of cloud computing. Let me summarize them (and mix in a lot of my own thoughts) ...

1. The passive listener model uses a synchronous communication pattern: the client pushes a request to the server and synchronously waits for the processing result. In the passive listener model, machine instances typically sit behind a load balancer (a minimal client-side sketch follows this list).
2. The approach here is to provide sufficient "customization" capability for the customer: UI branding, business rules for decision criteria, and data schema.
3.
4.
5.
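A minimal sketch of the passive listener model's client side, in Java: the client sends a request to a load-balanced endpoint and blocks until the result comes back. The URL and payload here are placeholders, not anything from the presentation.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PassiveListenerClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // The load balancer (hypothetical address) fans requests out to
        // the pool of machine instances sitting behind it.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://lb.example.com/api/work"))
                .POST(HttpRequest.BodyPublishers.ofString("payload"))
                .build();
        // send(...) is synchronous: the client pushes the request and
        // waits for the processing result, as the pattern describes.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```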

The Writings of Leslie Lamport. This document contains descriptions of almost all my technical papers and electronic versions of many of them for downloading. Omitted are papers for which I no longer have copies and papers that are incomplete. I have also omitted early versions of some of these papers--even in cases where the title changed. Included are some initial drafts of papers that I abandoned before fixing errors or other problems in them. A table of contents precedes the descriptions. Each description attempts to explain the genesis of the work.

Where I think it's interesting, I give the story behind the publication or non-publication of a paper. I would like to have ordered my papers by the date they were written. Whenever possible, I have included electronic versions of the works. Mathematics Bulletin of the Bronx High School of Science (1957), pages 6, 7, and 9. This appears to be my first publication, written when I was a high school student. In the summer of 1966, I worked at M.I.T. ... Distributed Systems Reading List. Developer blog: Efficiency & Scalability.

Software engineers know that distributed systems are often hard to scale, and many can intuitively point to reasons why this is the case by bringing up points of contention, bottlenecks, and latency-inducing operations. Indeed, there exists a plethora of reasons and explanations as to why most distributed systems are inherently hard to scale, from the CAP theorem to scarcity of certain resources, e.g., RAM, network bandwidth ...

It's said that good engineers know how to identify resources that may not appear to be relevant to scaling initially but will become more significant as particular kinds of demand grow. If that's the case, then great engineers know that system architecture is often the determining factor in system scalability, that a system's own architecture may be its worst enemy, so they define and structure systems in order to avoid fundamental flaws. Before we go any further, it's helpful to formulate a definition of efficiency applicable to our context: ... More succinctly, we'll write: ...

FSS

Universe of Distributed Computing. Cloud Computing. www.cercs.gatech.edu/tech-reports/tr2009/git-cercs-09-13.pdf. Aggregates + Event Sourcing distilled. I have been following the excellent BTW podcast and thinking a lot about CQRS and Event Sourcing. Inspired by Greg Young's lecture on Functional Programming with DDD, I have tried to distill the minimal set of things I need to implement an Aggregate with Event Sourcing in Java. In this blog post I will focus on the command side of things. The purpose here is to explain Event Sourcing in a way that is easy to understand, without any unnecessary overhead. If you want to get directly to the example code, it is available on GitHub. A simple domain: To start, we need a minimal domain that is simple to understand but still interesting enough to be challenging.

To keep it really simple, the game only has two commands:

- Player 1: Create game and make first move
- Player 2: Make move

The business rules:

- The players of a game must be different
- When the second player has moved, no more moves can be made
- When the second player has moved, either it is a tie or one of the players is declared winner

A minimal sketch of the command side follows below.
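To make the shape of this concrete, here is a small, hypothetical Java sketch of an event-sourced aggregate for such a game. This is not the post's actual GitHub code; the names and the elided winner logic are placeholders. The key idea: command methods validate business rules and return events, state changes only by applying events, and the aggregate is rebuilt by replaying its history.

```java
import java.util.List;

class Game {
    enum State { WAITING_FOR_SECOND_MOVE, FINISHED }

    interface Event {}
    record GameCreated(String gameId, String player, String move) implements Event {}
    record MoveMade(String gameId, String player, String move) implements Event {}

    private State state;
    private String firstPlayer;

    // Rebuild the aggregate's current state by replaying its event history.
    static Game from(List<Event> history) {
        Game game = new Game();
        history.forEach(game::apply);
        return game;
    }

    // Commands validate business rules and return new events;
    // they never mutate state directly.
    List<Event> create(String gameId, String player, String move) {
        if (state != null) throw new IllegalStateException("game already created");
        return List.of(new GameCreated(gameId, player, move));
    }

    List<Event> makeMove(String gameId, String player, String move) {
        if (state == State.FINISHED)
            throw new IllegalStateException("no more moves can be made");
        if (player.equals(firstPlayer))
            throw new IllegalArgumentException("the players of a game must be different");
        return List.of(new MoveMade(gameId, player, move));
    }

    // Applying events is the only place state changes.
    private void apply(Event event) {
        if (event instanceof GameCreated created) {
            state = State.WAITING_FOR_SECOND_MOVE;
            firstPlayer = created.player();
        } else if (event instanceof MoveMade) {
            state = State.FINISHED; // deciding tie vs. winner is elided here
        }
    }
}
```

The command side then reduces to: load the event history, rebuild the aggregate, execute the command, and append the returned events to the event store.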

Distributed Storing & Compute Platforms

Distributed Computing. Peer-to-Peer Communication Across Network Address Translators. Bryan Ford, Massachusetts Institute of Technology, baford (at) mit.edu; Pyda Srisuresh, Caymas Systems, Inc., srisuresh (at) yahoo.com; Dan Kegel, dank (at) kegel.com. "I make holes, little holes, always little holes" - S. Gainsbourg. Abstract: Network Address Translation (NAT) causes well-known difficulties for peer-to-peer (P2P) communication, since the peers involved may not be reachable at any globally valid IP address. The combined pressures of tremendous growth and massive security challenges have forced the Internet to evolve in ways that make life difficult for many applications.

The Internet's new de facto address architecture is suitable for client/server communication in the typical case when the client is on a private network and the server is in the global address realm. One of the most effective methods of establishing peer-to-peer communication between hosts on different private networks is known as "hole punching." The rest of this paper is organized as follows ...
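Hole punching is easiest to see in code. Below is a rough, illustrative Java sketch of the UDP variant, assuming each peer has already learned the other's public IP and port from a rendezvous server (that exchange is not shown); all addresses and ports are placeholders. Each peer sends from the same socket it used to contact the server, which opens an outbound mapping in its own NAT; once both sides have done so, packets flow directly.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class HolePunchPeer {
    public static void main(String[] args) throws Exception {
        // The other peer's public endpoint, as reported by the
        // rendezvous server (placeholder values).
        InetAddress peerAddr = InetAddress.getByName("203.0.113.7");
        int peerPort = 40000;

        // Reuse the local port that was used to talk to the rendezvous
        // server, so the NAT mapping the server observed stays valid.
        try (DatagramSocket socket = new DatagramSocket(40001)) {
            byte[] hello = "hello".getBytes(StandardCharsets.UTF_8);
            // "Punch" repeatedly: the first packets may be dropped by the
            // remote NAT until the other peer's own outbound packets have
            // created a mapping on its side.
            for (int i = 0; i < 5; i++) {
                socket.send(new DatagramPacket(hello, hello.length, peerAddr, peerPort));
                Thread.sleep(500);
            }
            byte[] buf = new byte[1500];
            DatagramPacket reply = new DatagramPacket(buf, buf.length);
            socket.receive(reply); // blocks until a packet gets through
            System.out.println("direct P2P path established with " + reply.getAddress());
        }
    }
}
```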

Difference Between Grid Computing and Distributed Computing. Definition of Distributed Computing: Distributed Computing is an environment in which a group of independent and geographically dispersed computer systems take part in solving a complex problem, each solving a part of it and then combining the results from all computers. These are loosely coupled systems working in coordination toward a common goal. It can be defined as a computing system in which services are provided by a pool of computers collaborating over a network, or as a computing environment that may involve computers of differing architectures and data representation formats that share data and system resources. Definition of Grid Computing: The basic idea behind Grid Computing is to utilize the idle CPU cycles and storage of millions of computer systems across a worldwide network so that they function as a flexible, pervasive, and inexpensively accessible pool that can be harnessed by anyone who needs it, similar to the way power companies and their users share the electrical grid. The 10 rules of scalability. Enterprise Platform and Integration Concepts - New Approach for a Cloud-Based OLTP System. In this blog entry, I want to summarize some emerging ideas about how OLTP can be performed on distributed systems by weakening consistency.

Scaling out is a common approach to achieving higher performance or higher throughput: more servers are used to handle the workload. Using more servers, however, often involves distributed transactions and partitioning, and distributed transactions with full ACID guarantees, typically achieved using two-phase commit, are expensive: they increase the latency of a transaction and weaken availability and resilience to network partitions. CAP Theorem and ACID 2.0: According to the CAP theorem, consistency, availability, and partition tolerance cannot all be achieved at the same time. According to Helland (Helland 2007), a transaction should therefore only involve single entities that are stored on the same instance. Fault Tolerance: Another issue is fault resilience. The ideas proposed by Helland are very interesting (a sketch of the single-entity approach follows). References.
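As an illustration of Helland's single-entity rule, here is a small hypothetical Java sketch (the class and method names are invented): each transaction touches exactly one entity on one instance, and cross-entity effects are deferred to asynchronous messages instead of a two-phase commit.

```java
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

class EntityStore {
    private final Map<String, Long> balances = new ConcurrentHashMap<>();
    private final Queue<String> outbox = new ConcurrentLinkedQueue<>();

    // A local, single-entity transaction: atomic on this instance only,
    // so no cross-server coordination (and no 2PC latency or blocking).
    void debit(String accountId, long amount) {
        balances.compute(accountId, (id, balance) -> {
            long current = (balance == null) ? 0L : balance;
            if (current < amount) throw new IllegalStateException("insufficient funds");
            return current - amount;
        });
        // The matching credit lives on another entity, possibly on another
        // instance, so it is requested via a message and applied there in
        // its own local transaction. In practice this needs at-least-once
        // delivery plus idempotent handling on the receiving side.
        outbox.add("credit-request:" + accountId + ":" + amount);
    }
}
```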

Matternet. Mechanism Design on Trust Networks.

BibTeX:

@MISC{Ghosh_mechanismdesign,
  author = {Arpita Ghosh and Mohammad Mahdian and Daniel M. Reeves and David M. Pennock and Ryan Fugger},
  title = {Mechanism Design on Trust Networks},
  year = {}
}

Structure and Interpretation of Computer Programs.

Cloud Robotics. What if robots and automation systems were not limited by onboard computation, memory, or programming? This is now conceivable with wireless networking and rapidly expanding Internet resources. In 2010, James Kuffner at Google introduced the term "Cloud Robotics" to describe a new approach to robotics that takes advantage of the Internet as a resource for massively parallel computation and sharing of vast data resources. The Google autonomous driving project exemplifies this approach: the system indexes maps and images that are collected and updated by satellite, Streetview, and crowdsourcing to facilitate accurate localization. Another example is Kiva Systems' new approach to warehouse automation and logistics, which uses large numbers of mobile platforms to move pallets, with a local network to coordinate platforms and update tracking data.

These are just two new projects that build on resources from the Cloud. Beyond Webcams: An Introduction to Online Robots. OnApp to add compute to its expanding federated cloud portfolio. London’s OnApp closed a new round of financing last month, taking its total funding to $20 million. So what’s it going to do with the (undisclosed) new tranche of cash? Add yet another string to its bow, that’s what. Bear in mind that OnApp was only spun out of British hosting provider UK2 a couple of years ago, with software that lets other providers build their own public clouds.

The idea there is to help these other hosting providers (OnApp now counts more than 500 of them as customers) ward off the threat that is Amazon, but in the process the company has steadily used that growing federation to diversify into new lines of business. In 2011, OnApp launched a content delivery network (CDN) based on those service providers' spare network capacity. All that is made possible through OnApp's marketplace, and now, flush with fresh funding, OnApp is going to use that marketplace to do the same thing with compute capacity, chief commercial officer Kosten Metreweli told me:

HyperDex.org: Home of the Searchable Key-Value Store. PPL. Why We Exist: New heterogeneous architectures continue to provide increases in achievable performance, but programming these devices to reach maximum performance levels is not straightforward. The goal of the PPL is to make heterogeneous parallelism accessible to average software developers through domain-specific languages (DSLs) so that it can be freely used in all computationally demanding applications. What We Do: The core of our research agenda is to allow the domain expert to develop parallel software without becoming an expert in parallel programming.

Our approach is to use a layered system based on DSLs, a common parallel compiler and runtime infrastructure, and an underlying architecture that provides efficient mechanisms for communication, synchronization, and performance monitoring.
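As a toy illustration of the separation this layering aims for (this is not PPL's actual DSL stack, which as I understand it builds on Scala-embedded DSLs and a shared compiler/runtime infrastructure): the domain expert states only what to compute, and a lower layer decides how to run it in parallel. Here the "DSL" is just Java's stream API and the "runtime" is the common fork/join pool.

```java
import java.util.Arrays;
import java.util.stream.IntStream;

public class DotProduct {
    public static void main(String[] args) {
        double[] a = new double[1_000_000];
        double[] b = new double[1_000_000];
        Arrays.fill(a, 1.5);
        Arrays.fill(b, 2.0);

        // The "domain" layer states the what: a reduction over an index
        // domain. The "runtime" layer (here, the common fork/join pool)
        // decides the how: partitioning, scheduling, and combining.
        double dot = IntStream.range(0, a.length)
                .parallel()
                .mapToDouble(i -> a[i] * b[i])
                .sum();
        System.out.println(dot); // 1.5 * 2.0 over 1,000,000 elements = 3000000.0
    }
}
```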