
Numenta - numenta.com

Related: Artificial Intelligence

Redwood Center for Theoretical Neuroscience

Open Journal of Databases The Open Journal of Databases (OJDB) provides a platform for researchers and practitioners of databases to share their ideas, experiences and research results. OJDB publishes the following four types of scientific articles. Short communications: reports of novel research ideas; the work presented should be technically sound and significantly advance the state of the art. Short communications also include exploratory studies and methodological articles. Regular research papers: full original findings with adequate experimental research, making substantial theoretical and empirical contributions to the research field. OJDB welcomes scientific papers in all the traditional and emerging areas of database research. Topics relevant to this journal include, but are not limited to:

On Intelligence - Welcome

The Apache Cassandra Project

Hierarchical Temporal Memory We've completed a functional (and much better) version of our .NET-based Hierarchical Temporal Memory (HTM) engines (great job, Rob). We're also still working on an HTM-based robotic behavioral framework (and our 1st-quarter goal -- yikes -- we're late). Also, we are NOT using Numenta's recently released run-time and/or code: since we're professional .NET consultants/developers, we decided to author our own implementation from initial prototypes authored over the summer of 2006 during an infamous sabbatical -- please don't ask about the "Hammer" stories. I've been feeling that the team has not been in sync in terms of HTM concepts, theory and implementation, so we decided to spend the last couple of meetings purely focused on discussions concerning HTMs. We have divided our HTM node implementation into two high-level types: 1) Sensor Node and 2) Cortical Node. An HTM sensor node provides a mechanism to memorize sensor inputs and sequences of those inputs.
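The post doesn't include code, but the sensor-node idea it describes (memorize sensor inputs and the sequences in which they occur) can be sketched roughly as follows, in Python rather than the team's .NET; the class and method names are invented for illustration, not taken from their implementation:

```python
# Hypothetical sketch of a sensor node that memorizes inputs and the
# transitions between successive inputs. Names invented; this is not
# the .NET implementation described in the post.

class SensorNode:
    def __init__(self):
        self.inputs = {}        # input pattern -> learned index
        self.transitions = {}   # (from_index, to_index) -> count
        self.prev = None

    def observe(self, pattern):
        # Memorize a novel input by assigning it the next free index.
        idx = self.inputs.setdefault(pattern, len(self.inputs))
        # Memorize the sequence by counting observed transitions.
        if self.prev is not None:
            key = (self.prev, idx)
            self.transitions[key] = self.transitions.get(key, 0) + 1
        self.prev = idx
        return idx

node = SensorNode()
for reading in ["A", "B", "A", "B", "C"]:
    node.observe(reading)
print(node.transitions)  # {(0, 1): 2, (1, 0): 1, (1, 2): 1}
```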

Hierarchical temporal memory Hierarchical temporal memory (HTM) is an online machine learning model developed by Jeff Hawkins and Dileep George of Numenta, Inc., that models some of the structural and algorithmic properties of the neocortex. HTM is a biomimetic model based on the memory-prediction theory of brain function described by Jeff Hawkins in his book On Intelligence. HTM is a method for discovering and inferring the high-level causes of observed input patterns and sequences, thus building an increasingly complex model of the world. Jeff Hawkins states that HTM does not present any new idea or theory, but combines existing ideas to mimic the neocortex with a simple design that provides a large range of capabilities. HTM structure and algorithms: (Figure: an example of an HTM hierarchy used for image recognition.) Each HTM node has the same basic functionality. Each HTM region learns by identifying and memorizing spatial patterns, i.e. combinations of input bits that often occur at the same time.
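As a rough illustration of that last point, a region that memorizes spatial patterns and later recognizes noisy inputs by overlap might look like the toy sketch below. This is not Numenta's spatial pooler; the class, the 70% match threshold and the bit indices are all invented for the example:

```python
# Toy illustration of memorizing spatial patterns (sets of co-occurring
# input bits) and recognizing later inputs by overlap. Not Numenta's
# algorithm; threshold and structure invented for the sketch.

class ToyRegion:
    def __init__(self, match_threshold=0.7):
        self.patterns = []                  # memorized sets of active bit indices
        self.match_threshold = match_threshold

    def learn(self, active_bits):
        # Memorize a spatial pattern: a set of bits seen active together.
        pattern = frozenset(active_bits)
        if pattern not in self.patterns:
            self.patterns.append(pattern)

    def recognize(self, active_bits):
        # Return the stored pattern with the highest overlap, if that
        # overlap clears the match threshold; otherwise None.
        active = set(active_bits)
        best, best_score = None, 0.0
        for p in self.patterns:
            score = len(active & p) / len(p)
            if score > best_score:
                best, best_score = p, score
        return best if best_score >= self.match_threshold else None

region = ToyRegion()
region.learn([1, 5, 9, 12])         # pattern seen during training
print(region.recognize([1, 5, 9]))  # 3 of 4 bits present -> still matches
```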

Bionics Bionics (also known as bionical creativity engineering) is the application of biological methods and systems found in nature to the study and design of engineering systems and modern technology. The transfer of technology between lifeforms and manufactured systems is, according to proponents of bionic technology, desirable because evolutionary pressure typically forces living organisms, including fauna and flora, to become highly optimized and efficient. A classical example is the development of dirt- and water-repellent paint (coatings) from the observation that practically nothing sticks to the surface of the lotus plant (the lotus effect). Ekso Bionics is currently developing and manufacturing intelligently powered exoskeleton bionic devices that can be strapped on as wearable robots to enhance the strength, mobility, and endurance of soldiers and paraplegics. The term "biomimetic" is preferred when reference is made to chemical reactions.

Hugo de Garis Hugo de Garis (born 1947, Sydney, Australia) was a researcher in the sub-field of artificial intelligence (AI) known as evolvable hardware. He became known in the 1990s for his research on the use of genetic algorithms to evolve neural networks using three-dimensional cellular automata inside field programmable gate arrays. He claimed that this approach would enable the creation of what he terms "artificial brains" which would quickly surpass human levels of intelligence.[1] He has more recently been noted for his belief that a major war between the supporters and opponents of intelligent machines, resulting in billions of deaths, is almost inevitable before the end of the 21st century.[2]:234 He suggests AIs may simply eliminate the human race, and that humans would be powerless to stop them because of the technological singularity. De Garis originally studied theoretical physics, but he abandoned this field in favour of artificial intelligence.

Evolvable hardware Evolvable hardware (EH) is a field that applies evolutionary algorithms (EAs) to create specialized electronics without manual engineering. It brings together reconfigurable hardware, artificial intelligence, fault tolerance and autonomous systems. Evolvable hardware refers to hardware that can change its architecture and behavior dynamically and autonomously by interacting with its environment. Each candidate circuit can either be simulated or physically implemented in a reconfigurable device. The concept was pioneered by Adrian Thompson at the University of Sussex, England, who in 1996 evolved a tone discriminator using fewer than 40 programmable logic gates and no clock signal in an FPGA. Why evolve circuits? In many cases, conventional design methods (formulas, etc.) can be used to design a circuit. In other cases, an existing circuit must adapt (i.e., modify its configuration) to compensate for faults or perhaps a changing operational environment. Garrison W.
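A minimal sketch of the evolve-and-evaluate loop described above, under deliberately toy assumptions: each candidate "circuit" is a four-bit genome read directly as the output column of a 2-input truth table and scored in simulation against XOR. Real evolvable hardware decodes genomes into gate or routing configurations and may evaluate them on an FPGA rather than in software:

```python
# (1+1) evolutionary loop over toy "circuit" genomes. The encoding
# (genome = truth-table output column) and the XOR target are
# invented for this sketch.

import random

TARGET = [0, 1, 1, 0]  # desired truth table: 2-input XOR

def fitness(genome):
    # Simulate the candidate: count truth-table rows it gets right.
    return sum(int(g == t) for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.25):
    # Flip each bit independently with the given probability.
    return [bit ^ int(random.random() < rate) for bit in genome]

parent = [random.randint(0, 1) for _ in range(4)]
for generation in range(200):
    child = mutate(parent)
    if fitness(child) >= fitness(parent):  # keep the fitter candidate
        parent = child
    if fitness(parent) == len(TARGET):
        break

print(generation, parent)  # e.g. 7 [0, 1, 1, 0]
```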

Dossier: from weak AI to strong AI. 9 July 2008, by Jean-Paul Baquiast and Christophe Jacquemin. Artificial intelligence (here, AI) underwent rapid development, mainly in the United States, during the 1960s and 1970s, in step with the appearance of the first scientific computers. Today we also see the development of an AI that aims to reproduce as many as possible of the functions and performances of animal and human brains. In practice, these strong AIs are coupled with robots, on which they confer increasingly marked autonomy. Let us propose our own definition of AI: it aims to simulate, on computers and electronic networks and by means of computer programs, a certain number of the cognitive behaviours, or ways of thinking, of animal and human brains. This is indeed what is happening with AI. 1. Expert systems

Applications of artificial intelligence Artificial intelligence has been used in a wide range of fields including medical diagnosis, stock trading, robot control, law, remote sensing, scientific discovery and toys. However, many AI applications are not perceived as AI: "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore," Nick Bostrom reports.[1] "Many thousands of AI applications are deeply embedded in the infrastructure of every industry." In the late 1990s and early 21st century, AI technology became widely used as elements of larger systems, but the field is rarely credited for these successes. AI researchers have created many tools to solve the most difficult problems in computer science. Banks use artificial intelligence systems to organize operations, invest in stocks, and manage properties.

Artificial consciousness Artificial consciousness (AC), also known as machine consciousness (MC) or synthetic consciousness (Gamez 2008; Reggia 2013), is a field related to artificial intelligence and cognitive robotics whose aim is to "define that which would have to be synthesized were consciousness to be found in an engineered artifact" (Aleksander 1995). Neuroscience hypothesizes that consciousness is generated by the interoperation of various parts of the brain, called the neural correlates of consciousness or NCC. Proponents of AC believe it is possible to construct machines (e.g., computer systems) that can emulate this NCC interoperation. Artificial consciousness can be viewed as an extension of artificial intelligence, assuming that the notion of intelligence in its commonly used sense is too narrow to include all aspects of consciousness. As there are many designations of consciousness, there are many potential types of AC.

Autonomic Computing In autonomic computing, the system makes decisions on its own, using high-level policies; it constantly checks and optimizes its status and automatically adapts itself to changing conditions. An autonomic computing framework is composed of autonomic components (AC) interacting with each other. An AC can be modeled in terms of two main control loops (local and global) with sensors (for self-monitoring), effectors (for self-adjustment), knowledge and a planner/adapter for exploiting policies based on self- and environment awareness. Driven by such a vision, a variety of architectural frameworks based on “self-regulating” autonomic components has recently been proposed. A very similar trend has recently characterized significant research in the area of multi-agent systems. Autonomy-oriented computation is a paradigm proposed by Jiming Liu in 2001 that uses artificial systems imitating social animals' collective behaviours to solve difficult computational problems.
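The local control loop just described (a sensor for self-monitoring, a planner applying a high-level policy, an effector for self-adjustment) can be sketched as follows; the auto-scaling scenario, thresholds and names are invented for the example:

```python
# Sketch of an autonomic component's local control loop:
# sense -> plan (policy) -> effect. Scenario invented for illustration.

class AutonomicComponent:
    def __init__(self, target_load=0.7):
        self.target_load = target_load  # high-level policy: desired utilisation
        self.workers = 2                # managed configuration

    def sense(self, metrics):
        # Sensor: read the monitored state.
        return metrics["load"]

    def plan(self, load):
        # Planner: derive an adjustment from the policy and the observation.
        if load > self.target_load:
            return +1
        if load < self.target_load / 2 and self.workers > 1:
            return -1
        return 0

    def effect(self, delta):
        # Effector: apply the self-adjustment.
        self.workers += delta

    def step(self, metrics):
        self.effect(self.plan(self.sense(metrics)))

component = AutonomicComponent()
for load in [0.9, 0.9, 0.2]:
    component.step({"load": load})
print(component.workers)  # 2 -> 3 -> 4 -> 3
```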

Autonomous agent An autonomous agent is an intelligent agent operating on an owner's behalf but without any interference from that ownership entity. An intelligent agent, in turn, is described in a widely cited statement from a no-longer-accessible IBM white paper as follows: Intelligent agents are software entities that carry out some set of operations on behalf of a user or another program with some degree of independence or autonomy, and in so doing, employ some knowledge or representation of the user's goals or desires. Non-biological examples include intelligent agents, autonomous robots, and various software agents, including artificial life agents and many computer viruses. Biological examples are not yet defined.
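The quoted definition lends itself to a minimal sketch: an entity that carries out operations for a user, with some autonomy, using a representation of the user's goals. The goal representation and the actions below are invented for the example:

```python
# Illustrative sketch of the IBM-style definition above. Goal
# representation and actions are invented, not from any source.

class AutonomousAgent:
    def __init__(self, user_goals):
        self.user_goals = user_goals  # representation of the user's desires

    def perceive(self, environment):
        # Knowledge of the user's goals filters what is relevant.
        return [item for item in environment if item in self.user_goals]

    def act(self, environment):
        # Acts with autonomy: picks an operation without asking the user.
        relevant = self.perceive(environment)
        return f"acquire {relevant[0]}" if relevant else "idle"

agent = AutonomousAgent(user_goals={"cheap flight", "hotel deal"})
print(agent.act(["weather report", "cheap flight"]))  # acquire cheap flight
```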
