C10k problem

The C10k problem is the problem of optimising network sockets to handle a large number of clients at the same time. The name C10k is a numeronym for concurrently handling ten thousand connections. Note that concurrent connections are not the same as requests per second, though the two are related: handling many requests per second requires high throughput (processing each request quickly), while supporting a high number of concurrent connections requires efficient scheduling of connections. In other words, handling many requests per second is concerned with the speed of handling requests, whereas a system capable of handling a high number of concurrent connections does not necessarily have to be a fast system — only one in which each request will deterministically return a response within a finite (though not necessarily fixed) amount of time. The problem of socket server optimisation has been studied because a number of factors must be considered to allow a web server to support many clients.

History
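The "efficient scheduling of connections" mentioned above is typically achieved with readiness-based I/O multiplexing: one thread watches many sockets and services only those that are ready. The following is a minimal illustrative sketch (not from the original article) using Python's selectors module, which wraps epoll/kqueue where available; the function names are my own.

```python
import selectors
import socket

# One selector multiplexes many connections in a single thread --
# the core technique behind servers that scale toward C10k and beyond.
sel = selectors.DefaultSelector()

def register(conn):
    """Add a connected socket to the event loop."""
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ)

def poll_once(timeout=1.0):
    """Handle every socket that is currently readable; echo its data back."""
    for key, _events in sel.select(timeout):
        conn = key.fileobj
        data = conn.recv(4096)
        if data:
            conn.sendall(data)       # echo payload back to the peer
        else:
            sel.unregister(conn)     # peer closed the connection
            conn.close()
```

A single `poll_once()` call services however many of the registered sockets have pending data, so cost scales with activity rather than with the total number of open connections.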
Charms

Scalable application services, defined. Charms give Juju its power. They encapsulate application configurations, and define how services are deployed, how they connect to other services, and how they are scaled. Charms are easily shared, and there are hundreds of Charms already rated and reviewed in our Charm store.

Best practice, built in. Juju is designed to encourage collaboration on the optimal ways to deploy, configure, and connect applications to other services.

Attributor raises $3.2M to crack down on plagiarism

Attributor, a site that helps publishers track down unauthorized copies of their content, has raised another $3.2 million in funding. The San Mateo, Calif. company says it offers a sophisticated way to detect when text or photos have been copied: it takes a “fingerprint” of a paragraph’s or image’s essential features, then scans 35 billion Web pages to see where the content has been duplicated. It then helps publishers contact the offending sites and have them link back to the original article, share their advertising revenue, or simply take the content down altogether. Attributor also announced some big customers today, namely the Magazine Publishers of America and the United Kingdom’s Periodical Publishers Association. The company already serves large news organizations including Reuters and The Associated Press. The round brings Attributor’s total funding to $25.2 million.
Attributor

Digimarc is a digital watermarking technology provider that enables embedding of information into many forms of content, including printed material, audio, video, imagery, and certain objects. Digimarc technology provides solutions for media identification and management, counterfeit and piracy deterrence, and digital commerce.

History

Digimarc was founded by Geoff Rhoads, an astrophysicist with a background in deep-space imaging. The initial inspiration for the company came while he was photographing images of the planet Jupiter: he felt that his digital images were vulnerable on the internet, even with copyright protection. In 1996, after initial venture funding, Digimarc released its first product, a digital watermarking plug-in bundled with Adobe Photoshop, Corel, and Micrografx. After a second round of venture funding and increased investment in research and technology, Digimarc signed a multi-year contract with a consortium of central banks.
Hadoop – The Power of the Elephant — eBay Tech Blog

In a previous post, Junling discussed data mining and our need to process petabytes of data to gain insights from information. We use several tools and systems to help us with this task; the one I’ll discuss here is Apache Hadoop. Created in 2006 by Doug Cutting, who named it after his son’s stuffed yellow elephant, and based on Google’s 2004 MapReduce paper, Hadoop is an open-source framework for fault-tolerant, scalable, distributed computing on commodity hardware.

MapReduce is a flexible programming model for processing large data sets: Map takes key/value pairs as input and generates an intermediate output of another type of key/value pairs, while Reduce takes the keys produced in the Map step, along with a list of values associated with the same key, to produce the final output of key/value pairs.

Map(key1, value1) -> list(key2, value2)
Reduce(key2, list(value2)) -> list(key3, value3)
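The two signatures above can be made concrete with the classic word-count example. This is a toy in-memory sketch of the MapReduce data flow only — the function names are mine, and a real Hadoop job would instead implement Mapper/Reducer classes and run distributed with a framework-managed shuffle.

```python
from itertools import groupby
from operator import itemgetter

def map_fn(key, value):
    # Map(key1, value1) -> list(key2, value2): emit (word, 1) per word.
    return [(word, 1) for word in value.split()]

def reduce_fn(key, values):
    # Reduce(key2, list(value2)) -> list(key3, value3): sum the counts.
    return [(key, sum(values))]

def run_mapreduce(records, map_fn, reduce_fn):
    """Run the Map, shuffle/sort, and Reduce phases over (key, value) records."""
    intermediate = []
    for k, v in records:
        intermediate.extend(map_fn(k, v))
    intermediate.sort(key=itemgetter(0))            # shuffle/sort: group by key
    output = []
    for k, group in groupby(intermediate, key=itemgetter(0)):
        output.extend(reduce_fn(k, [v for _, v in group]))
    return output
```

For example, `run_mapreduce([("doc1", "to be or not to be")], map_fn, reduce_fn)` yields one (word, count) pair per distinct word, with the shuffle/sort step playing the role of Hadoop's grouping of intermediate values by key.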