
Bionics

Bionics (also known as bionical creativity engineering) is the application of biological methods and systems found in nature to the study and design of engineering systems and modern technology. The transfer of technology between lifeforms and manufactured systems is, according to proponents of bionic technology, desirable because evolutionary pressure typically forces living organisms, including fauna and flora, to become highly optimized and efficient. A classic example is the development of dirt- and water-repellent paints (coatings) from the observation that practically nothing sticks to the surface of the lotus plant (the lotus effect). Ekso Bionics is currently developing and manufacturing intelligently powered exoskeleton bionic devices that can be strapped on as wearable robots to enhance the strength, mobility, and endurance of soldiers and paraplegics. The term "biomimetic" is preferred when reference is made to chemical reactions.

Hierarchical temporal memory Hierarchical temporal memory (HTM) is an online machine learning model developed by Jeff Hawkins and Dileep George of Numenta, Inc. that models some of the structural and algorithmic properties of the neocortex. HTM is a biomimetic model based on the memory-prediction theory of brain function described by Jeff Hawkins in his book On Intelligence. HTM is a method for discovering and inferring the high-level causes of observed input patterns and sequences, thus building an increasingly complex model of the world. Jeff Hawkins states that HTM does not present any new idea or theory, but combines existing ideas to mimic the neocortex with a simple design that provides a large range of capabilities. Each HTM node has the same basic functionality. Each HTM region learns by identifying and memorizing spatial patterns: combinations of input bits that often occur at the same time.
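The excerpt above says an HTM region learns by memorizing combinations of input bits that often occur together. As an illustration of just that one idea (frequency-based memorization plus nearest-pattern inference), here is a minimal Python sketch; the class name, the min_count threshold, and the overlap rule are illustrative assumptions, not Numenta's algorithms.

# Minimal sketch (not Numenta's code) of the idea that an HTM region
# memorizes spatial patterns: binary input combinations that recur often.
from collections import Counter

class ToySpatialMemory:
    def __init__(self, min_count=3):
        self.counts = Counter()      # how often each input pattern was seen
        self.min_count = min_count   # seen at least this often => "memorized"

    def learn(self, bits):
        """bits: tuple of 0/1 input bits observed at one time step."""
        self.counts[tuple(bits)] += 1

    def memorized(self):
        """Patterns that occurred often enough to be treated as causes."""
        return [p for p, c in self.counts.items() if c >= self.min_count]

    def infer(self, bits):
        """Return the memorized pattern with the largest bit overlap."""
        candidates = self.memorized()
        if not candidates:
            return None
        overlap = lambda p: sum(a == b for a, b in zip(p, bits))
        return max(candidates, key=overlap)

region = ToySpatialMemory()
for _ in range(5):
    region.learn((1, 0, 1, 1, 0))    # a frequently recurring pattern
region.learn((0, 1, 0, 0, 1))        # seen only once: treated as noise
print(region.infer((1, 0, 1, 0, 0))) # -> (1, 0, 1, 1, 0)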

Robotics Robotics is the branch of mechanical engineering, electrical engineering and computer science that deals with the design, construction, operation, and application of robots,[1] as well as computer systems for their control, sensory feedback, and information processing. These technologies deal with automated machines that can take the place of humans in dangerous environments or manufacturing processes, or resemble humans in appearance, behavior, and/or cognition. Many of today's robots are inspired by nature, contributing to the field of bio-inspired robotics. The concept of creating machines that can operate autonomously dates back to classical times, but research into the functionality and potential uses of robots did not grow substantially until the 20th century.[2] Throughout history, robotics has often been seen as mimicking human behavior and managing tasks in a similar fashion.

Geology.com - Earth Science News, Maps, Dictionary, Articles, Jobs

Autonomic Computing The system makes decisions on its own, using high-level policies; it constantly checks and optimizes its status and automatically adapts itself to changing conditions. An autonomic computing framework is composed of autonomic components (AC) interacting with each other. An AC can be modeled in terms of two main control loops (local and global) with sensors (for self-monitoring), effectors (for self-adjustment), knowledge, and a planner/adapter for exploiting policies based on self- and environment awareness. Driven by this vision, a variety of architectural frameworks based on "self-regulating" autonomic components has recently been proposed. A very similar trend has recently characterized significant research in the area of multi-agent systems. Autonomy-oriented computation is a paradigm proposed by Jiming Liu in 2001 that uses artificial systems imitating social animals' collective behaviours to solve difficult computational problems.
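The control loop with sensors, effectors, knowledge, and a planner described above can be pictured as a simple monitor-analyze-plan-execute cycle. The Python sketch below is only an illustrative reading of that description; the names (control_loop, policy, gain) and the proportional adjustment rule are assumptions, not any framework's actual API.

# Illustrative sketch of one autonomic component's local control loop:
# a sensor feeds self-monitoring data, a policy supplies the target,
# and an effector applies the self-adjustment. All names are invented.

def control_loop(sensor, effector, policy, knowledge, steps=5):
    for _ in range(steps):
        reading = sensor()                      # monitor: observe own state
        knowledge.append(reading)               # accumulate knowledge
        error = policy["target"] - reading      # analyze against a high-level policy
        if abs(error) > policy["tolerance"]:    # plan: decide whether to act
            effector(error * policy["gain"])    # execute: self-adjust

# Toy usage: keep a simulated load near a target value.
state = {"load": 0.9}
history = []
control_loop(
    sensor=lambda: state["load"],
    effector=lambda delta: state.update(load=state["load"] + delta),
    policy={"target": 0.5, "tolerance": 0.05, "gain": 0.5},
    knowledge=history,
)
print(round(state["load"], 3), len(history))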

Hierarchical Temporal Memory We've completed a functional (and much better) version of our .NET-based Hierarchical Temporal Memory (HTM) engines (great job Rob). We're also still working on an HTM-based robotic behavioral framework (and our 1st quarter goal -- yikes -- we're late). Also, we are NOT using Numenta's recently released run-time and/or code... since we're professional .NET consultants/developers, we decided to author our own implementation from initial prototypes authored over the summer of 2006 during an infamous sabbatical -- please don't ask about the "Hammer" stories. I've been feeling that the team has not been in sync in terms of HTM concepts, theory, and implementation. We have divided our HTM node implementation into two high-level types: 1) Sensor Node and 2) Cortical Node. An HTM sensor node provides a mechanism to memorize sensor inputs and sequences of those inputs. EXAMPLE: temp = temperature sensor, pressure = barometric sensor, light = luminosity sensor, moisture = humidity sensor.
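The post describes a .NET implementation, but as a rough, language-neutral sketch of what "memorize sensor inputs and sequences of those inputs" could mean, here is a small Python illustration. The SensorNode class and its internals are assumptions, not the authors' code; only the four sensor names come from the example above.

# Hypothetical sketch of a "sensor node" that memorizes inputs and the
# sequences (transitions) between them; not the authors' .NET code.
from collections import defaultdict

class SensorNode:
    def __init__(self, name):
        self.name = name
        self.patterns = set()                  # distinct inputs seen so far
        self.transitions = defaultdict(set)    # input -> inputs that followed it
        self.previous = None

    def observe(self, value):
        self.patterns.add(value)
        if self.previous is not None:
            self.transitions[self.previous].add(value)  # remember the sequence step
        self.previous = value

# One node per sensor in the example above.
nodes = {name: SensorNode(name) for name in ("temp", "pressure", "light", "moisture")}
for reading in (18, 19, 21, 19):
    nodes["temp"].observe(reading)
print(nodes["temp"].patterns)           # {18, 19, 21}
print(dict(nodes["temp"].transitions))  # {18: {19}, 19: {21}, 21: {19}}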

Nanotechnologie New materials such as fullerenes or carbon nanotubes are nanotechnology and are already being used in many fields. Even today, the transistors of a commercial microprocessor are on a scale within the realm of nanotechnology; structures 22 nm wide are being achieved. The umbrella term rests on the fact that all fields of nano research deal with the same order of magnitude of nanoparticles, from the single atom up to a structure size of 100 nanometres (nm): one nanometre is one billionth of a metre (10−9 m). This order of magnitude marks a boundary region in which the surface properties of materials play an ever greater role compared with their bulk properties, and quantum-physical effects increasingly have to be taken into account. Nanotechnology therefore reaches length scales on which it is above all size that determines the properties of an object.

Evolvable hardware Evolvable hardware (EH) is a field that uses evolutionary algorithms (EA) to create specialized electronics without manual engineering. It brings together reconfigurable hardware, artificial intelligence, fault tolerance, and autonomous systems. Evolvable hardware refers to hardware that can change its architecture and behavior dynamically and autonomously by interacting with its environment. Each candidate circuit can either be simulated or physically implemented in a reconfigurable device. The concept was pioneered by Adrian Thompson at the University of Sussex, England, who in 1996 evolved a tone discriminator using fewer than 40 programmable logic gates and no clock signal in an FPGA. Why evolve circuits? In many cases, conventional design methods (formulas, etc.) can be used to design a circuit. In other cases, an existing circuit must adapt, i.e., modify its configuration, to compensate for faults or perhaps a changing operational environment.
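To make the evolve-and-evaluate loop concrete, here is a toy Python sketch under stated assumptions: the "circuit" is reduced to a configuration bit-string, and the fitness function stands in for simulating or physically measuring a candidate circuit. It illustrates the generic EA cycle only, not Thompson's tone-discriminator experiment.

# Toy sketch of the evolve-and-evaluate loop used in evolvable hardware.
# A candidate "circuit" is just a configuration bit-string here, and the
# fitness function is a stand-in for simulating or measuring a real device.
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]            # pretend ideal configuration

def fitness(bits):
    # In real EH this would simulate the circuit or measure an FPGA.
    return sum(b == t for b, t in zip(bits, TARGET))

def mutate(bits, rate=0.1):
    return [b ^ (random.random() < rate) for b in bits]

random.seed(0)
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                  # keep the best candidates
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]
best = max(population, key=fitness)
print(best, fitness(best))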

On Intelligence - Welcome

Cermet A cermet is a composite material composed of ceramic (cer) and metallic (met) materials. A cermet is ideally designed to have the optimal properties of both a ceramic, such as high-temperature resistance and hardness, and those of a metal, such as the ability to undergo plastic deformation. The metal is used as a binder for an oxide, boride, or carbide. Generally, the metallic elements used are nickel, molybdenum, and cobalt. Depending on the physical structure of the material, cermets can also be metal matrix composites, but cermets are usually less than 20% metal by volume. Cermets are used in the manufacture of resistors (especially potentiometers), capacitors, and other electronic components which may experience high temperatures. Cermets are being used instead of tungsten carbide in saws and other brazed tools due to their superior wear and corrosion properties. After World War II, the need to develop high-temperature and high-stress-resistant materials became clear.

Autonomous agent An autonomous agent is an intelligent agent operating on an owner's behalf but without any interference from that ownership entity. An intelligent agent, however, is described as follows in a widely cited statement from a no-longer-accessible IBM white paper: Intelligent agents are software entities that carry out some set of operations on behalf of a user or another program with some degree of independence or autonomy, and in so doing, employ some knowledge or representation of the user's goals or desires. Non-biological examples include intelligent agents, autonomous robots, and various software agents, including artificial life agents and many computer viruses. Biological examples are not yet defined.
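A minimal Python sketch of the quoted IBM-style definition, assuming invented goal and observation structures: the agent carries out operations on the user's behalf, with some autonomy, guided by a simple representation of the user's goals.

# Hypothetical sketch of the definition quoted above: an agent that acts
# on behalf of a user, with some autonomy, guided by a representation of
# the user's goals. All names here are illustrative.

class AutonomousAgent:
    def __init__(self, goals):
        self.goals = goals            # representation of the user's desires

    def act(self, observation):
        # Decide without asking the user: pick the first goal the
        # observation does not yet satisfy and return an action for it.
        for goal, satisfied in self.goals.items():
            if not satisfied(observation):
                return f"work toward: {goal}"
        return "idle"

agent = AutonomousAgent({
    "inbox empty": lambda obs: obs["unread"] == 0,
    "disk below 80%": lambda obs: obs["disk_used"] < 0.8,
})
print(agent.act({"unread": 3, "disk_used": 0.5}))   # -> work toward: inbox empty
print(agent.act({"unread": 0, "disk_used": 0.5}))   # -> idle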

Redwood Center for Theoretical Neuroscience

Bangkok Bangkok (English pronunciation: /ˈbæŋkɒk/[5]) is the capital and the most populous city of Thailand. It is known in Thai as Krung Thep Maha Nakhon (กรุงเทพมหานคร, pronounced [krūŋ tʰêːp mahǎː nákʰɔ̄ːn]) or simply Krung Thep. Bangkok traces its roots to a small trading post during the Ayutthaya Kingdom in the 15th century, which eventually grew in size and became the site of two capital cities: Thonburi in 1768 and Rattanakosin in 1782. The Asian investment boom in the 1980s and 1990s led many multinational corporations to locate their regional headquarters in Bangkok. Bangkok's rapid growth amidst little urban planning and regulation has resulted in a haphazard cityscape and inadequate infrastructure systems. Administration of the city was first formalized by King Chulalongkorn in 1906, with the establishment of Monthon Krung Thep Phra Maha Nakhon (มณฑลกรุงเทพพระมหานคร) as a national subdivision.

Technological Singularity The technological singularity is the hypothesis that accelerating progress in technologies will cause a runaway effect wherein artificial intelligence will exceed human intellectual capacity and control, thus radically changing civilization in an event called the singularity.[1] Because the capabilities of such an intelligence may be impossible for a human to comprehend, the technological singularity is an occurrence beyond which events may become unpredictable, unfavorable, or even unfathomable.[2] The first use of the term "singularity" in this context was by mathematician John von Neumann. Proponents of the singularity typically postulate an "intelligence explosion",[5][6] in which superintelligences design successive generations of increasingly powerful minds; such an explosion might occur very quickly and might not stop until the agent's cognitive abilities greatly surpass those of any human.
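As a purely illustrative piece of arithmetic (not a claim about real AI systems), the toy Python loop below shows why "successive generations of increasingly powerful minds" is usually framed as compounding rather than linear growth; the 25% per-generation gain is an arbitrary assumption.

# Toy model of the "intelligence explosion" idea quoted above: each
# generation designs a successor slightly better than itself, so
# capability compounds. The numbers are arbitrary.
capability = 1.0          # generation 0, normalized to baseline ability
gain_per_generation = 1.25

for generation in range(1, 11):
    capability *= gain_per_generation       # successor designed by predecessor
    print(f"generation {generation:2d}: capability {capability:6.2f}x baseline")
# After 10 generations: ~9.3x; after 30: ~808x -- compounding, not linear, growth.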

Hugo de Garis Hugo de Garis (born 1947, Sydney, Australia) was a researcher in the sub-field of artificial intelligence (AI) known as evolvable hardware. He became known in the 1990s for his research on the use of genetic algorithms to evolve neural networks using three-dimensional cellular automata inside field programmable gate arrays. He claimed that this approach would enable the creation of what he terms "artificial brains", which would quickly surpass human levels of intelligence.[1] He has more recently been noted for his belief that a major war between the supporters and opponents of intelligent machines, resulting in billions of deaths, is almost inevitable before the end of the 21st century.[2]:234 He suggests AIs may simply eliminate the human race, and humans would be powerless to stop them because of the technological singularity. De Garis originally studied theoretical physics, but he abandoned this field in favour of artificial intelligence.
