
Evolutionary programming


Robots evolve to exploit inadvertent cues. Human interaction heavily depends on inadvertent cues: a competitor's sweaty handshake before a negotiation, a girl blushing when introducing herself, or the trace of a smile crossing the face of a poker player all convey important information. Sara Mitri and colleagues at the Laboratory of Intelligent Systems (disclaimer: my former lab) at the EPFL in Switzerland have now shown that it is not just humans who can develop, detect and use inadvertent cues to their advantage (PNAS: "Evolution of Information Suppression in Communicating Robots with Conflicting Interests"). The researchers set up a group of S-bots equipped with omnidirectional cameras and light-emitting rings around their bodies in a bio-inspired foraging task. Like many animals, the robots used visual cues to forage for two food sources in the arena. Rather than pre-programming the robots' control rules, the researchers used artificial evolution to develop the robots' control systems. Thanks, Sara!
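As a rough idea of what such artificial evolution involves, here is a minimal sketch of a generational evolutionary loop over real-valued controller genomes (for example, neural-network weights). The evaluate function, genome length, and parameters below are placeholders for a simulated foraging trial, not the actual setup used in the paper.

import random

GENOME_LEN = 60        # e.g. number of neural-network weights (illustrative)
POP_SIZE = 50
GENERATIONS = 100
MUTATION_STD = 0.1

def evaluate(genome):
    # Stand-in fitness: replace with a simulated or real foraging trial that
    # returns, for example, the amount of food collected minus energy spent.
    return -sum(w * w for w in genome)   # dummy score so the sketch runs

def tournament(pop, scores, k=3):
    # Pick the fittest of k randomly chosen individuals.
    contenders = random.sample(range(len(pop)), k)
    return pop[max(contenders, key=lambda i: scores[i])]

def crossover(a, b):
    # One-point crossover of two parent genomes.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome):
    # Add small Gaussian noise to every weight.
    return [w + random.gauss(0, MUTATION_STD) for w in genome]

def evolve():
    pop = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        scores = [evaluate(g) for g in pop]
        pop = [mutate(crossover(tournament(pop, scores), tournament(pop, scores)))
               for _ in range(POP_SIZE)]
    return pop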

Langton's loops. Langton's loops are a particular "species" of artificial life in a cellular automaton, created in 1984 by Christopher Langton. They consist of a loop of cells containing genetic information, which flows continuously around the loop and out along an "arm" (or pseudopod) that will become the daughter loop. The "genes" instruct it to make three left turns, completing the loop, which then disconnects from its parent. History: In 1952 John von Neumann created the first cellular automaton (CA) with the goal of creating a self-replicating machine.[1] This automaton was necessarily very complex due to its computation- and construction-universality.

Specification: Langton's loops run in a CA that has 8 states and uses the von Neumann neighborhood with rotational symmetry. As with Codd's CA, Langton's loops consist of sheathed wires. Colonies: the maximum population of a colony of loops is asymptotically proportional to A, where A is the total area of the space in cells.
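To make that specification concrete, here is a minimal sketch of how such a CA update can be organised: an 8-state grid, a transition table keyed on the von Neumann neighborhood (the centre cell plus its four orthogonal neighbours), and rotational symmetry handled by looking rules up under a canonical rotation. The RULES table below is a hypothetical stand-in; Langton's actual table has a few hundred entries and is not reproduced here.

N_STATES = 8  # states 0..7; state 0 is the quiescent background

# Hypothetical transition table mapping (centre, N, E, S, W) to the centre's
# next state.  Langton's real rule table would be loaded here instead.
RULES = {
    (0, 0, 0, 0, 0): 0,   # empty space stays empty
    # ... remaining rule entries ...
}

def canonical(key):
    # Rotational symmetry: a rule applies to all four 90-degree rotations of
    # its neighbourhood, so only one canonical rotation is stored and looked up.
    c, n, e, s, w = key
    return min((c, n, e, s, w), (c, e, s, w, n), (c, s, w, n, e), (c, w, n, e, s))

def step(grid):
    # One synchronous update of every cell, with toroidal (wrap-around) edges.
    h, w = len(grid), len(grid[0])
    new = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            key = (grid[y][x],
                   grid[(y - 1) % h][x],    # north
                   grid[y][(x + 1) % w],    # east
                   grid[(y + 1) % h][x],    # south
                   grid[y][(x - 1) % w])    # west
            # Neighbourhoods missing from the table default to leaving the cell unchanged.
            new[y][x] = RULES.get(canonical(key), grid[y][x])
    return new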

Introduction to Genetic Algorithms - Tutorial with Interactive Java Applets. These pages introduce some fundamentals of genetic algorithms. They are intended to be used for learning about genetic algorithms without any previous knowledge of the area; only some knowledge of computer programming is assumed. You can find here several interactive Java applets demonstrating how genetic algorithms work. As the area of genetic algorithms is very wide, it is not possible to cover everything in these pages, but you should get some idea of what genetic algorithms are and what they can be useful for. Do not expect any sophisticated mathematical theory here.
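As a taste of the building blocks such a tutorial covers, here is a small, self-contained sketch of the classic binary-chromosome GA: fitness-proportionate (roulette-wheel) selection, one-point crossover, and bit-flip mutation, applied to the toy "maximise the number of 1s" problem. The problem and parameters are chosen only to keep the example short; they are not taken from the tutorial itself.

import random

def fitness(chrom):
    return sum(chrom)                      # toy objective: maximise the number of 1s

def roulette(pop, fits):
    # Fitness-proportionate (roulette-wheel) selection of one parent.
    pick = random.uniform(0, sum(fits))
    acc = 0
    for chrom, f in zip(pop, fits):
        acc += f
        if acc >= pick:
            return chrom
    return pop[-1]

def one_point_crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def bit_flip(chrom, rate=0.01):
    return [1 - g if random.random() < rate else g for g in chrom]

pop = [[random.randint(0, 1) for _ in range(32)] for _ in range(40)]
for generation in range(50):
    fits = [fitness(c) for c in pop]
    pop = [bit_flip(one_point_crossover(roulette(pop, fits), roulette(pop, fits)))
           for _ in pop]
print(max(sum(c) for c in pop), "ones out of 32")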

Translations of these pages are also available: a Portuguese one by Hermelindo Pinheiro Manoel, a Japanese one by Ishii Manabu, and a Bulgarian one by Todor Dimitrov Balabanov.

BugFest - Homepage. BUG-FEST Artificial Life Experiment (primary coding: Jan 5 '98 to Jan 9 '98). Welcome to Bug Fest!!

Latest version: BUGFEST 1, v1.13 (bug1v113.zip) [97k]. Eureka! I have found it! I have managed to run a self-balancing simulation with a stable predator/prey balance! It won't necessarily do it every time, but things are looking up. I've been quite pleased with this experiment, even though it did not balance itself the way I would have hoped at first (therefore requiring a few "cheats", which are now optional "failsafes"). The information below is the same information that you will find in the documentation for the program, as of version 1.01.

BugFest runs in DOS (though it is somewhat Windows-friendly, meaning that running it under Windows will not fry the system). Welcome to the Bug-Fest! My goal was to make a self-sustaining system that would also be interesting to watch.
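For readers curious what a "self-sustaining" predator/prey system of this kind can look like in code, here is a minimal grid-world sketch: prey graze on regrowing food, predators eat prey, every move costs energy, and well-fed agents split off offspring. All rules and parameters here are invented for illustration; they are not BugFest's actual mechanics.

import random

GRID = 40
FOOD_REGROWTH = 0.02          # chance per cell per step that food (re)grows
MOVE_COST, EAT_GAIN, BIRTH_AT = 1, 8, 30

class Agent:
    def __init__(self, kind, x=None, y=None, energy=15):
        self.kind = kind
        self.x = x if x is not None else random.randrange(GRID)
        self.y = y if y is not None else random.randrange(GRID)
        self.energy = energy

    def wander(self):
        # Random walk on a wrap-around grid; moving costs energy.
        self.x = (self.x + random.choice((-1, 0, 1))) % GRID
        self.y = (self.y + random.choice((-1, 0, 1))) % GRID
        self.energy -= MOVE_COST

def step(agents, food):
    prey = [a for a in agents if a.kind == "prey"]
    predators = [a for a in agents if a.kind == "predator"]

    # Prey move and graze on food cells.
    for p in prey:
        p.wander()
        if food[p.x][p.y]:
            food[p.x][p.y] = False
            p.energy += EAT_GAIN

    # Predators move and eat any prey sharing their cell.
    prey_at = {(p.x, p.y): p for p in prey}
    for pred in predators:
        pred.wander()
        victim = prey_at.pop((pred.x, pred.y), None)
        if victim is not None:
            victim.energy = 0          # eaten
            pred.energy += EAT_GAIN

    # Reproduction: an agent with enough energy splits it with an offspring.
    offspring = []
    for a in prey + predators:
        if a.energy >= BIRTH_AT:
            a.energy //= 2
            offspring.append(Agent(a.kind, a.x, a.y, a.energy))

    # Food slowly regrows.
    for x in range(GRID):
        for y in range(GRID):
            food[x][y] = food[x][y] or random.random() < FOOD_REGROWTH

    return [a for a in prey + predators + offspring if a.energy > 0], food

# One possible run: start with more prey than predators and watch the counts.
food = [[True] * GRID for _ in range(GRID)]
agents = [Agent("prey") for _ in range(150)] + [Agent("predator") for _ in range(30)]
for t in range(500):
    agents, food = step(agents, food)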

Genetic Algorithm used to build a car with the Box2D physics library. The colors show the crossover and mutation for each member of the population.

Neuro Evolving Robotic Operatives. Neuro-Evolving Robotic Operatives, or NERO for short, is a unique computer game that lets you play with adapting intelligent agents hands-on. Evolve your own robot army by tuning their artificial brains for challenging tasks, then pit them against your friends' teams in online competitions! New features in NERO 2.0 include an interactive game mode called territory capture, as well as a new user interface and more extensive training tools. NERO is the result of an academic research project in artificial intelligence, based on the rtNEAT algorithm. It is also a platform for future research on intelligent agent technology. The NERO project is run by the Neural Networks Group of the Department of Computer Sciences at the University of Texas at Austin. To learn more about NERO, check the About page and the illustrative videos. Currently, we are developing an open-source successor to NERO, OpenNERO, a game platform for AI research and education.