Autonomic Computing
The system makes decisions on its own, using high-level policies; it constantly checks and optimizes its status and automatically adapts itself to changing conditions. An autonomic computing framework is composed of autonomic components (ACs) interacting with each other. An AC can be modeled in terms of two main control loops (local and global) with sensors (for self-monitoring), effectors (for self-adjustment), knowledge, and a planner/adapter for exploiting policies based on self- and environment awareness. Driven by this vision, a variety of architectural frameworks based on "self-regulating" autonomic components have recently been proposed. A very similar trend has recently characterized significant research in the area of multi-agent systems. Autonomy-oriented computation is a paradigm proposed by Jiming Liu in 2001 that uses artificial systems imitating the collective behaviours of social animals to solve difficult computational problems.
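
As a concrete illustration of the sensor/effector control loop described above, here is a minimal sketch in Python. It is not drawn from any particular framework; the component, the simulated cpu_load metric, and the scale-out policy are all hypothetical.

```python
# Minimal sketch of an autonomic component's control loop (sense ->
# plan -> act over a shared knowledge base). All names here, including
# the cpu_load metric and the scaling policy, are hypothetical.
import random
import time

class AutonomicComponent:
    def __init__(self, policy):
        self.policy = policy     # high-level policy: knowledge -> action
        self.knowledge = {}      # knowledge base shared across the loops

    def sense(self):
        # Self-monitoring: a real system would read hardware or service
        # metrics; here we simulate a fluctuating CPU load.
        self.knowledge["cpu_load"] = random.uniform(0.0, 1.0)

    def plan(self):
        # Exploit the policy against current self-knowledge.
        return self.policy(self.knowledge)

    def act(self, action):
        # Self-adjustment via an effector; here we just record it.
        self.knowledge["last_action"] = action

    def run(self, cycles=3):
        for _ in range(cycles):
            self.sense()
            self.act(self.plan())
            print(self.knowledge)
            time.sleep(0.1)

# The owner states the high-level policy; the component runs itself.
ac = AutonomicComponent(lambda k: "scale_out" if k["cpu_load"] > 0.8 else "steady")
ac.run()
```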

Artificial consciousness
Artificial consciousness (AC), also known as machine consciousness (MC) or synthetic consciousness (Gamez 2008; Reggia 2013), is a field related to artificial intelligence and cognitive robotics whose aim is to "define that which would have to be synthesized were consciousness to be found in an engineered artifact" (Aleksander 1995). Neuroscience hypothesizes that consciousness is generated by the interoperation of various parts of the brain, called the neural correlates of consciousness or NCC. Proponents of AC believe it is possible to construct machines (e.g., computer systems) that can emulate this NCC interoperation. Artificial consciousness can be viewed as an extension of artificial intelligence, assuming that the notion of intelligence in its commonly used sense is too narrow to include all aspects of consciousness. On the philosophical side, as there are many designations of consciousness, there are many potential types of AC.

UCSB scientists discover how the brain encodes memories at a cellular level
(Santa Barbara, Calif.) Scientists at UC Santa Barbara have made a major discovery in how the brain encodes memories. The finding, published in the December 24 issue of the journal Neuron, could eventually lead to the development of new drugs to aid memory. The team of scientists is the first to uncover a central process in encoding memories that occurs at the level of the synapse, where neurons connect with each other. "When we learn new things, when we store memories, there are a number of things that have to happen," said senior author Kenneth S. Kosik. "One of the most important processes is that the synapses, which cement those memories into place, have to be strengthened," said Kosik. Part of strengthening a synapse involves making new proteins, and the production of new proteins can only occur when the RNA that will make them is turned on. Until then the RNA is held silent by a wrapping protein; when the signal comes in, the wrapping protein degrades or gets fragmented, freeing the RNA.

Evolvable hardware
Evolvable hardware (EH) is a field concerning the use of evolutionary algorithms (EAs) to create specialized electronics without manual engineering. It brings together reconfigurable hardware, artificial intelligence, fault tolerance, and autonomous systems. Evolvable hardware refers to hardware that can change its architecture and behavior dynamically and autonomously by interacting with its environment. Each candidate circuit can either be simulated or physically implemented in a reconfigurable device. The concept was pioneered by Adrian Thompson at the University of Sussex, England, who in 1996 evolved a tone discriminator using fewer than 40 programmable logic gates and no clock signal in an FPGA. Why evolve circuits? In many cases, conventional design methods (formulas, etc.) can be used to design a circuit. In other cases, an existing circuit must adapt, i.e., modify its configuration, to compensate for faults or a changing operational environment.
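
To make the evolutionary loop concrete, here is a minimal sketch of the kind of search an EH system performs: a bitstring stands in for a circuit configuration, and a toy fitness function stands in for simulating or physically testing each candidate. The target pattern and all parameters are hypothetical and not tied to any real FPGA toolchain.

```python
# Toy evolutionary loop over bitstring "circuit configurations".
# fitness() is a stand-in for simulating or physically testing a
# candidate circuit; the target behavior is hypothetical.
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]

def fitness(cfg):
    # Count how many configuration bits produce the desired behavior.
    return sum(a == b for a, b in zip(cfg, TARGET))

def mutate(cfg, rate=0.05):
    # Flip each bit with small probability.
    return [b ^ (random.random() < rate) for b in cfg]

def evolve(pop_size=30, generations=200):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for gen in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            return gen, pop[0]
        # Keep the best half, refill with mutated copies of survivors.
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return generations, pop[0]

gen, best = evolve()
print(f"generation {gen}: {best} (fitness {fitness(best)}/{len(TARGET)})")
```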

Autonomous agent
An autonomous agent is an intelligent agent operating on an owner's behalf but without any interference from that ownership entity. An intelligent agent, in turn, was characterized in a widely cited (and no longer accessible) IBM white paper as follows: intelligent agents are software entities that carry out some set of operations on behalf of a user or another program with some degree of independence or autonomy, and in so doing, employ some knowledge or representation of the user's goals or desires. Non-biological examples include intelligent agents, autonomous robots, and various software agents, including artificial life agents and many computer viruses. Biological examples are not yet defined.
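
That definition is essentially operational, so a minimal sketch can make it concrete: the agent below holds a representation of its owner's goal (a target temperature) and chooses its own actions without further interference. The scenario, names, and toy environment are all hypothetical.

```python
# Minimal sketch of an agent acting on a user's behalf with a degree
# of autonomy: the user supplies the goal, not the individual actions.
class ThermostatAgent:
    def __init__(self, goal_temp):
        self.goal_temp = goal_temp   # representation of the user's desire

    def perceive(self, env):
        return env["temp"]

    def decide(self, temp):
        # The agent picks each action itself, within a dead band.
        if temp < self.goal_temp - 1:
            return "heat"
        if temp > self.goal_temp + 1:
            return "cool"
        return "idle"

    def act(self, env, action):
        env["temp"] += {"heat": 1.0, "cool": -1.0, "idle": 0.0}[action]
        return action

env = {"temp": 15.0}
agent = ThermostatAgent(goal_temp=21.0)
for step in range(8):
    action = agent.act(env, agent.decide(agent.perceive(env)))
    print(f"step {step}: {action}, temp={env['temp']:.1f}")
```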

Simulated reality
Simulated reality is the hypothesis that reality could be simulated (for example, by computer simulation) to a degree indistinguishable from "true" reality. It could contain conscious minds which may or may not be fully aware that they are living inside a simulation. This is quite different from the current, technologically achievable concept of virtual reality. Virtual reality is easily distinguished from the experience of actuality; participants are never in doubt about the nature of what they experience. Simulated reality, by contrast, would be hard or impossible to separate from "true" reality. There has been much debate over this topic, ranging from philosophical discourse to practical applications in computing. Proposed types of simulation include brain-computer interfaces and virtual people; in a virtual-people simulation, every inhabitant is a native of the simulated world. Arguments discussed in this context include the simulation argument, the relativity of reality, computationalism, and dreaming.

Bionics
Bionics (also known as bionical creativity engineering) is the application of biological methods and systems found in nature to the study and design of engineering systems and modern technology. The transfer of technology between lifeforms and manufactured systems is, according to proponents of bionic technology, desirable because evolutionary pressure typically forces living organisms, including fauna and flora, to become highly optimized and efficient. A classical example is the development of dirt- and water-repellent paint (coating) from the observation that practically nothing sticks to the surface of the lotus flower plant (the lotus effect). Ekso Bionics is currently developing and manufacturing intelligently powered exoskeleton bionic devices that can be strapped on as wearable robots to enhance the strength, mobility, and endurance of soldiers and paraplegics. The term "biomimetic" is preferred when reference is made to chemical reactions.

On Intelligence
Hawkins outlines the book as follows: "The book starts with some background on why previous attempts at understanding intelligence and building intelligent machines have failed. I then introduce and develop the core idea of the theory, what I call the memory-prediction framework. In chapter 6 I detail how the physical brain implements the memory-prediction model, in other words, how the brain actually works." The first chapter is a brief personal history of Hawkins' interest in neuroscience, juxtaposed against a history of artificial intelligence research; Hawkins is an electrical engineer by training, and a neuroscientist by inclination. In the theory itself, the cortical hierarchy is capable of memorizing frequently observed sequences of patterns (cognitive modules) and developing invariant representations. Hebbian learning is part of the framework, in which the event of learning physically alters neurons and connections as learning takes place.
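
As a rough illustration of the Hebbian principle the framework invokes (connections strengthen between units that are active together), here is a minimal sketch. The toy activity patterns and learning rate are hypothetical; this illustrates Hebbian weight updates in general, not Hawkins' memory-prediction model itself.

```python
# Minimal Hebbian-learning sketch: connection weights strengthen when
# pre- and post-synaptic units are active together. Toy data only.
import numpy as np

rng = np.random.default_rng(0)
patterns = rng.integers(0, 2, size=(50, 8))   # 50 observed activity patterns

w = np.zeros((8, 8))
eta = 0.1                                      # learning rate (hypothetical)
for x in patterns:
    w += eta * np.outer(x, x)                  # co-active units wire together
np.fill_diagonal(w, 0.0)                       # no self-connections

print(np.round(w, 1))   # strong weights mark units that often fired together
```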

D-Wave Systems
D-Wave Systems, Inc. is a quantum computing company based in Burnaby, British Columbia. On May 11, 2011, D-Wave Systems announced D-Wave One, labeled "the world's first commercially available quantum computer," operating on a 128-qubit chipset[1] using quantum annealing[2][3][4][5] to solve optimization problems. In May 2013 it was announced that a collaboration between NASA, Google and the Universities Space Research Association (USRA) had launched a Quantum Artificial Intelligence Lab based on the D-Wave Two 512-qubit quantum computer, to be used for research into machine learning, among other fields of study.[6] The D-Wave One was built on early prototypes such as D-Wave's Orion quantum computer. D-Wave maintains a list of peer-reviewed technical publications on their website, authored by D-Wave scientists and by third-party researchers.
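
To give a feel for the optimization problems quantum annealing targets, here is a classical simulated-annealing sketch over a tiny QUBO (quadratic unconstrained binary optimization) instance, the problem format annealers minimize. The Q matrix and cooling schedule are made up for illustration; this is not D-Wave's actual API or hardware behavior.

```python
# Classical simulated annealing over a toy QUBO instance: minimize
# E(x) = x^T Q x over binary x. Q and the schedule are hypothetical.
import math
import random

Q = [[-1.0,  0.5,  0.0],
     [ 0.5, -1.0,  0.5],
     [ 0.0,  0.5, -2.0]]

def energy(x):
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

x = [random.randint(0, 1) for _ in Q]
T = 2.0
while T > 0.01:
    i = random.randrange(len(x))
    candidate = x[:]
    candidate[i] ^= 1                          # flip one bit
    dE = energy(candidate) - energy(x)
    if dE < 0 or random.random() < math.exp(-dE / T):
        x = candidate                          # accept downhill, sometimes uphill
    T *= 0.99                                  # cool the "temperature"

print(x, energy(x))                            # best configuration found
```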

Technological Singularity
The technological singularity is the hypothesis that accelerating progress in technologies will cause a runaway effect wherein artificial intelligence will exceed human intellectual capacity and control, thus radically changing civilization in an event called the singularity.[1] Because the capabilities of such an intelligence may be impossible for a human to comprehend, the technological singularity is an occurrence beyond which events may become unpredictable, unfavorable, or even unfathomable.[2] The first use of the term "singularity" in this context was by mathematician John von Neumann. Proponents of the singularity typically postulate an "intelligence explosion",[5][6] in which superintelligences design successive generations of increasingly powerful minds; this might occur very quickly and might not stop until the agent's cognitive abilities greatly surpass those of any human. Basic concepts discussed include superintelligence, non-AI singularities, the intelligence explosion, exponential growth, and the plausibility of the hypothesis.
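
The intelligence-explosion idea is essentially a feedback recurrence: each generation's capability determines how much it can improve the next. The toy model below illustrates only that feedback; the growth constant is hypothetical and the numbers carry no empirical weight.

```python
# Toy recurrence for an "intelligence explosion": each generation of
# designers improves its successor in proportion to its own capability,
# so growth feeds back on itself. Purely illustrative numbers.
capability = 1.0                         # generation 0: human-level by definition
for generation in range(1, 9):
    capability *= 1 + 0.3 * capability   # smarter designers make bigger leaps
    print(f"generation {generation}: capability {capability:,.1f}x human")
```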
