The RoboEarth Cloud Engine

Forget Dunbar's Number, Our Future Is in Scoble's Number February 16, 2009 by Hutch Carpenter I probably don't know about your latest job project. I don't know what your kids are up to. I don't know about that vacation you've got coming up. I can't say what city you're visiting for business. But I do know you've got a really strong take on where social software helps companies. Why? From Wikipedia, here's what Dunbar's Number is: Dunbar's number is a theoretical cognitive limit to the number of people with whom one can maintain stable social relationships. This is a recurring issue in social networks. I like to break people down into three types. Three Types of Social Network Participants I'm oversimplifying here, but this is a useful way to segment how people view their social network participation: Close Friends: These folks view social networks as sites for staying up to date on a limited set of close connections. Power Networkers: These folks amass thousands of connections. Then there are the rest of us.

Gridcosm Gridcosm is a collaborative art project in which artists from around the world contribute images to a compounding series of graphical squares. Each level of Gridcosm is made up of nine square images arranged into a 3x3 grid. The middle image is a one-third size version of the previous level. Artists add images around that center image until a new 3x3 grid is completed, then that level itself shrinks and becomes the "seed" for the next level. Participation Everyone is invited to explore and participate in Gridcosm, which is free. History This project was created in 1997 by Ed Stastny and has subsequently been under more-or-less constant refinement by Ed and Jon Van Oast. Alternate Views Flash Viewer (new version, Nov. 8, 2005). Articles Gridcosm Primer, an overview written by one of Gridcosm's most active participants, Mark Sunshine. Videos Flashy Gridcosm, a video on Big Time TV: a screen capture of the Flash interface flying through levels 2222-2192.
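A minimal sketch (not code from the Gridcosm site) of the nesting rule described above: a completed 3x3 level shrinks into the centre cell of the next level, which artists then fill in around it. The `Level` class and `seed_next_level` function are hypothetical names used only for illustration.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Level:
    """One Gridcosm level: a 3x3 grid of image slots."""
    number: int
    cells: List[Optional[str]] = field(default_factory=lambda: [None] * 9)

    def is_complete(self) -> bool:
        return all(cell is not None for cell in self.cells)


def seed_next_level(finished: Level) -> Level:
    """A completed level shrinks and becomes the centre cell of the next one."""
    assert finished.is_complete(), "a level seeds the next only once all 9 cells are filled"
    nxt = Level(number=finished.number + 1)
    nxt.cells[4] = f"one-third-size copy of level {finished.number}"  # centre of the 3x3 grid
    return nxt


# Artists then fill the eight cells around the centre until this level is complete too.
level = Level(number=2191, cells=[f"artwork_{i}" for i in range(9)])
print(seed_next_level(level).cells)
```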

Artificial Intelligence, Powered by Many Humans Personal assistants such as Apple’s Siri may be useful, but they are still far from matching the smarts and conversational skills of a real person. Researchers at the University of Rochester have demonstrated a new, potentially better approach that creates a smart artificial chat partner from fleeting contributions from many crowdsourced workers. Crowdsourcing typically involves posting simple tasks to a website such as Amazon Mechanical Turk, where Web users complete them for a reward of a few cents. When people talk to the new crowd-powered chat system, called Chorus, using an instant messaging window, they get an experience practically indistinguishable from chatting with a single real person. Tests where Chorus was asked for travel advice showed that it could be smarter than any one individual in the crowd, because around seven people were contributing to its responses at any one time. Chorus does that with three simple types of task.
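The article does not spell out Chorus's three task types, but crowd-powered dialogue systems of this kind typically have several workers propose candidate replies and vote on one another's suggestions. The sketch below illustrates only that aggregation step, under that assumption; `choose_reply`, the vote threshold, and the canned fallback replies are hypothetical, not Chorus's actual implementation.

```python
from collections import Counter
from typing import Dict, List


def choose_reply(proposals: List[str], votes: Dict[str, int], min_votes: int = 2) -> str:
    """Pick the candidate reply that the most workers voted for, if enough of them agree."""
    if not proposals:
        return "Could you tell me a bit more about what you're looking for?"
    ranked = Counter({p: votes.get(p, 0) for p in proposals}).most_common()
    best, count = ranked[0]
    # Only forward a reply the crowd agrees on; otherwise stall politely
    # while more workers weigh in.
    return best if count >= min_votes else "Let me look into that for you."


# Around seven workers contribute to each turn; their proposals are filtered by votes.
proposals = ["Try the night market near your hotel.", "Book a harbour cruise."]
votes = {"Try the night market near your hotel.": 4, "Book a harbour cruise.": 1}
print(choose_reply(proposals, votes))
```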

Human Workers, Managed by an Algorithm Global workforce: Remote digital workers earned $0.32 each for producing these self-portraits. Stephanie Hamilton is part of something larger than herself. She's part of a computer program. The 38-year-old resident of Kingston, Jamaica, recently began performing small tasks assigned to her by an algorithm running on a computer in Berkeley, California. By assigning such tasks to people in emerging economies, MobileWorks hopes to get good work for low prices. The best-known crowd marketplace is Mechanical Turk, which Amazon launched in 2005. Amazon's marketplace was a revolutionary idea, but quality has been a persistent problem. "If you put something on Mechanical Turk, it's easy to get cheated," says Luis von Ahn, a crowdsourcing expert at Carnegie Mellon University. Now several startups, including CrowdFlower and CrowdSource, have written software that works on top of Mechanical Turk, adding ways to test and rank workers, match them up to tasks, and organize work so it gets double- or triple-checked.
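A generic illustration of what "double- or triple-checked" work can mean in practice: the same microtask goes to several independent workers, and the result is accepted only when enough of them agree. This is a hedged sketch of the general pattern, not CrowdFlower's or CrowdSource's actual software; `checked_answer` and its agreement threshold are made-up names.

```python
from collections import Counter
from typing import List, Optional


def checked_answer(answers: List[str], required_agreement: int = 2) -> Optional[str]:
    """Accept a microtask result only when enough independent workers agree on it."""
    if not answers:
        return None
    answer, count = Counter(answers).most_common(1)[0]
    # Returning None signals that the task should be reissued to another worker.
    return answer if count >= required_agreement else None


# The same task is routed to two or three workers before its result is trusted.
print(checked_answer(["cat", "cat", "dog"]))  # agreement reached -> "cat"
print(checked_answer(["cat", "dog"]))         # no agreement yet -> None
```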

Adding Human Intelligence to Software Amazon's Mechanical Turk service has long provided a cheap source of labor for jobs that are simple for humans but difficult for computers. Tasks such as describing a picture, for example, can be completed online by remote human workers. Programmers already use groups of these workers, called turkers, to perform many such tasks at the same time. But Mechanical Turk offers no easy way for programmers developing new software applications to combine and coordinate the turkers' efforts. "Usually in JavaScript, you wouldn't be able to access Mechanical Turk without a lot of work," explains Greg Little, a PhD candidate at MIT's Computer Science and Artificial Intelligence Laboratory, who created TurKit. With TurKit, human input is stored in a database, so a program can be re-run without repeating work that people have already done. Thanks to TurKit, researchers have already created human computation algorithms stable enough to incorporate into functioning software.
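TurKit itself is a JavaScript toolkit; the Python sketch below only illustrates the idea described above, in which human answers are persisted so that re-running a script reuses earlier results instead of re-posting the tasks. `post_hit`, `human`, and the JSON store are hypothetical stand-ins, not TurKit's API.

```python
import json
import os

DB_PATH = "hit_results.json"  # hypothetical store for completed human tasks


def _load() -> dict:
    if not os.path.exists(DB_PATH):
        return {}
    with open(DB_PATH) as f:
        return json.load(f)


def _save(db: dict) -> None:
    with open(DB_PATH, "w") as f:
        json.dump(db, f)


def post_hit(prompt: str) -> str:
    # Hypothetical stand-in: a real system would create a Mechanical Turk HIT
    # and block until a worker submits an answer.
    return f"<worker answer to: {prompt}>"


def human(prompt: str) -> str:
    """Return a stored answer if this prompt was asked before; otherwise ask the crowd once."""
    db = _load()
    if prompt not in db:
        db[prompt] = post_hit(prompt)  # the expensive human step happens at most once
        _save(db)
    return db[prompt]


# Re-running a script that makes this call reuses the stored answer.
print(human("Suggest a caption for photo_001.jpg"))
```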

Human-based genetic algorithm In evolutionary computation, a human-based genetic algorithm (HBGA) is a genetic algorithm that allows humans to contribute solution suggestions to the evolutionary process. For this purpose, an HBGA has human interfaces for initialization, mutation, and recombinant crossover. It may also have interfaces for selective evaluation. In short, an HBGA outsources the operations of a typical genetic algorithm to humans. Evolutionary genetic systems and human agency: Among evolutionary genetic systems, HBGA is the computer-based analogue of genetic engineering (Allan, 2005). In such systems, the selector is the agent that decides fitness, and the innovator is the agent of genetic change. Functional features: HBGA is a method of collaboration and knowledge exchange.
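Read this way, an HBGA is the standard genetic algorithm loop with each operator delegated to a human interface. The sketch below is a generic illustration under that reading, not any published HBGA implementation; the `ask_*` callables stand in for whatever human interfaces (web forms, crowd tasks) a real system would provide.

```python
import random
from typing import Callable


def hbga(ask_init: Callable[[], str],
         ask_mutate: Callable[[str], str],
         ask_crossover: Callable[[str, str], str],
         ask_score: Callable[[str], float],
         pop_size: int = 6,
         generations: int = 3) -> str:
    """Run a tiny GA loop in which every genetic operator is delegated to people."""
    population = [ask_init() for _ in range(pop_size)]            # humans seed the population
    for _ in range(generations):
        ranked = sorted(population, key=ask_score, reverse=True)  # humans judge fitness
        parents = ranked[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = ask_crossover(a, b)                           # humans recombine two solutions
            if random.random() < 0.3:
                child = ask_mutate(child)                         # humans vary a solution
            children.append(child)
        population = parents + children
    return max(population, key=ask_score)


# Example wiring, with console prompts standing in for real human interfaces:
# best = hbga(ask_init=lambda: input("Suggest a slogan: "),
#             ask_mutate=lambda s: input(f"Tweak this slogan: {s}\n"),
#             ask_crossover=lambda a, b: input(f"Combine: '{a}' and '{b}'\n"),
#             ask_score=lambda s: float(input(f"Rate 0-10: {s}\n")))
```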

Human-based computation Human-based computation (HBC) is a computer science technique in which a machine performs its function by outsourcing certain steps to humans. This approach uses differences in abilities and alternative costs between humans and computer agents to achieve symbiotic human-computer interaction. In traditional computation, a human employs a computer[1] to solve a problem; a human provides a formalized problem description and an algorithm to a computer, and receives a solution to interpret. Human-based computation frequently reverses the roles; the computer asks a person or a large group of people to solve a problem, then collects, interprets, and integrates their solutions. Early work: Human-based computation (apart from the historical meaning of "computer") research has its origins in the early work on interactive evolutionary computation. The concept of the automatic Turing test pioneered by Moni Naor (1996) is another precursor of human-based computation.
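To make the role reversal concrete, here is a minimal sketch in which the program formulates the questions, outsources the step it cannot do well (describing an image) to a person, and integrates the answers. `ask_person` is a hypothetical placeholder, not any particular platform's API.

```python
from typing import Dict, List


def ask_person(question: str) -> str:
    # Hypothetical stand-in for posting a microtask and waiting for a worker's answer.
    return "a dog playing in the snow"


def describe_images(image_urls: List[str]) -> Dict[str, str]:
    """The program organizes the work and integrates results; each answer comes from a human."""
    return {url: ask_person(f"Describe the image at {url}") for url in image_urls}


print(describe_images(["http://example.com/img1.jpg"]))
```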

Interactive evolutionary computation Interactive evolutionary computation (IEC) or aesthetic selection is a general term for methods of evolutionary computation that use human evaluation. Usually human evaluation is necessary when the form of the fitness function is not known (for example, visual appeal or attractiveness, as in Dawkins, 1986[1]) or when the result of optimization should fit a particular user preference (for example, the taste of coffee or the color set of a user interface). IEC design issues: The number of evaluations that IEC can receive from one human user is limited by user fatigue, which many researchers have reported as a major problem. In addition, human evaluations are slow and expensive compared to fitness function computation. Hence, one-user IEC methods should be designed to converge using a small number of evaluations, which necessarily implies very small populations. However, IEC implementations that can concurrently accept evaluations from many users overcome the limitations described above.
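As a concrete illustration of why user fatigue forces small populations, here is a minimal sketch in which a human rating replaces the fitness function; `rate_by_human` is a hypothetical callable that would, in practice, display the candidate to the user and collect a score. It is a sketch of the general IEC loop, not any particular system.

```python
import random
from typing import Callable, List

Genome = List[float]  # e.g. parameters of a color scheme or image filter


def iec(rate_by_human: Callable[[Genome], float],
        pop_size: int = 4,        # tiny population: every evaluation costs a user rating
        generations: int = 5,
        genome_len: int = 3) -> Genome:
    """Evolve candidates whose fitness is a human rating rather than a computed function."""
    population = [[random.random() for _ in range(genome_len)] for _ in range(pop_size)]
    best = population[0]
    for _ in range(generations):
        ratings = [rate_by_human(ind) for ind in population]   # slow, fatiguing human step
        best = population[ratings.index(max(ratings))]
        # Next generation: keep the user's favourite and show mutated variants of it.
        population = [best] + [
            [g + random.gauss(0, 0.1) for g in best] for _ in range(pop_size - 1)
        ]
    return best
```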

Environmental Modelling & Software - Putting humans in the loop: Social computing for Water Resources Management Abstract The advent of online services, social networks, crowdsourcing, and serious Web games has promoted the emergence of a novel computation paradigm, where complex tasks are solved by exploiting the capacity of human beings and computer platforms in an integrated way. Water Resources Management systems can take advantage of human and social computation in several ways: collecting and validating data, complementing the analytic knowledge embodied in models with tacit knowledge from individuals and communities, using human sensors to monitor the variation of conditions at a fine grain and in real time, and activating human networks to perform search tasks or actuate management actions. This exploratory paper overviews different forms of human and social computation and analyzes how they can be exploited to enhance the effectiveness of ICT-based Water Resources Management.
