
Autonomic Computing

The system makes decisions on its own, using high-level policies; it constantly checks and optimizes its status and automatically adapts itself to changing conditions. An autonomic computing framework is composed of autonomic components (ACs) interacting with each other. An AC can be modeled in terms of two main control loops (local and global), with sensors (for self-monitoring), effectors (for self-adjustment), and knowledge and a planner/adapter for exploiting policies based on self- and environment awareness. Driven by this vision, a variety of architectural frameworks based on "self-regulating" autonomic components has recently been proposed. A very similar trend has recently characterized significant research in the area of multi-agent systems. Autonomy-oriented computation is a paradigm proposed by Jiming Liu in 2001 that uses artificial systems imitating the collective behaviours of social animals to solve difficult computational problems.
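The local control loop described above can be sketched as a minimal monitor-plan-act cycle around a knowledge base. Everything here (the class name, the simulated load sensor, the scaling policy) is illustrative, not taken from any specific autonomic framework:

```python
import random

class AutonomicComponent:
    """Illustrative autonomic component: a monitor-plan-act loop
    around knowledge holding a high-level policy."""

    def __init__(self, target_load=0.5):
        # Knowledge: the high-level policy and the current configuration.
        self.knowledge = {"target_load": target_load, "capacity": 1}

    def sense(self):
        # Sensor (self-monitoring): observe current load, simulated here.
        return random.random()

    def plan(self, load):
        # Planner/adapter: compare the observation against the policy.
        target = self.knowledge["target_load"]
        if load > target:
            return +1            # overloaded: scale up
        if load < target / 2:
            return -1            # underloaded: scale down
        return 0

    def act(self, adjustment):
        # Effector (self-adjustment): change capacity, never below 1.
        self.knowledge["capacity"] = max(1, self.knowledge["capacity"] + adjustment)

    def step(self):
        # One pass of the local control loop.
        self.act(self.plan(self.sense()))
        return self.knowledge["capacity"]

ac = AutonomicComponent()
for _ in range(10):
    ac.step()
print(ac.knowledge["capacity"])   # always at least 1, per the effector's guard
```

The component keeps running with no operator in the loop; the high-level policy (the target load) is the only externally supplied input.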

Artificial consciousness
Artificial consciousness (AC), also known as machine consciousness (MC) or synthetic consciousness (Gamez 2008; Reggia 2013), is a field related to artificial intelligence and cognitive robotics whose aim is to "define that which would have to be synthesized were consciousness to be found in an engineered artifact" (Aleksander 1995). Neuroscience hypothesizes that consciousness is generated by the interoperation of various parts of the brain, called the neural correlates of consciousness (NCC). Proponents of AC believe it is possible to construct machines (e.g., computer systems) that can emulate this NCC interoperation. Artificial consciousness can be viewed as an extension of artificial intelligence, on the assumption that the notion of intelligence in its commonly used sense is too narrow to include all aspects of consciousness. As there are many designations of consciousness, there are many potential types of AC.

UCSB scientists discover how the brain encodes memories at a cellular level
(Santa Barbara, Calif.) Scientists at UC Santa Barbara have made a major discovery in how the brain encodes memories. The finding, published in the December 24 issue of the journal Neuron, could eventually lead to the development of new drugs to aid memory. The team of scientists is the first to uncover a central process in the encoding of memories that occurs at the level of the synapse, where neurons connect with each other. "When we learn new things, when we store memories, there are a number of things that have to happen," said senior author Kenneth S. Kosik. "One of the most important processes is that the synapses, which cement those memories into place, have to be strengthened," said Kosik. Part of strengthening a synapse involves making new proteins, and the production of new proteins can only occur when the RNA that will make the required proteins is turned on. When the signal comes in, the protein wrapping the RNA degrades or gets fragmented, turning the RNA on.

Autonomous agent
An autonomous agent is an intelligent agent that operates on an owner's behalf but without any interference from that owner. An intelligent agent, in turn, is described by a widely cited statement from a no-longer-accessible IBM white paper as follows: intelligent agents are software entities that carry out some set of operations on behalf of a user or another program with some degree of independence or autonomy, and in doing so employ some knowledge or representation of the user's goals or desires. Non-biological examples include intelligent agents, autonomous robots, and various software agents, including artificial life agents and many computer viruses. Biological examples are not yet defined.
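As a hedged illustration of the IBM-style definition, the hypothetical agent below carries out a small set of operations on a user's behalf, using a representation of the user's goals, with no further input from the user. The class name and the goal representation are assumptions made for the example:

```python
class ReminderAgent:
    """Hypothetical software agent: it carries out operations on a user's
    behalf, with some autonomy, using a representation of the user's goals."""

    def __init__(self, goals):
        # Knowledge of the user's goals, represented as (task, due_time) pairs.
        self.goals = sorted(goals, key=lambda g: g[1])

    def run(self, now):
        # The agent decides on its own which reminders to fire,
        # with no interference from the owner.
        fired = [task for task, due in self.goals if due <= now]
        self.goals = [(t, d) for t, d in self.goals if d > now]
        return fired

agent = ReminderAgent([("pay rent", 1), ("call Bob", 5)])
print(agent.run(now=2))   # ['pay rent']: only the due goal is acted on
```

The "degree of independence" in the definition shows up here as the agent choosing, without being asked, which goals are due.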

Simulated reality
Simulated reality is the hypothesis that reality could be simulated, for example by computer simulation, to a degree indistinguishable from "true" reality. It could contain conscious minds which may or may not be fully aware that they are living inside a simulation. This is quite different from the current, technologically achievable concept of virtual reality: virtual reality is easily distinguished from the experience of actuality, and participants are never in doubt about the nature of what they experience. Simulated reality, by contrast, would be hard or impossible to separate from "true" reality. There has been much debate over this topic, ranging from philosophical discourse to practical applications in computing. In a virtual-people simulation, every inhabitant is a native of the simulated world.

On Intelligence
Hawkins outlines the book as follows: "The book starts with some background on why previous attempts at understanding intelligence and building intelligent machines have failed. I then introduce and develop the core idea of the theory, what I call the memory-prediction framework. In chapter 6 I detail how the physical brain implements the memory-prediction model—in other words, how the brain actually works." The first chapter is a brief history of Hawkins' interest in neuroscience, juxtaposed against a history of artificial intelligence research. Hawkins is an electrical engineer by training, and a neuroscientist by inclination. In the theory, the hierarchy is capable of memorizing frequently observed sequences of patterns and developing invariant representations. Hebbian learning is part of the framework: the event of learning physically alters neurons and connections as learning takes place.
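Hebbian learning, the one concrete mechanism named above, can be sketched in a few lines. The learning rate and the activity vectors are illustrative assumptions, not values from the book:

```python
import numpy as np

def hebbian_update(w, pre, post, lr=0.1):
    # "Cells that fire together wire together": each co-activation of a
    # pre- and postsynaptic unit strengthens the weight between them.
    return w + lr * np.outer(post, pre)

pre = np.array([1.0, 0.0])     # presynaptic activity (only unit 0 active)
post = np.array([1.0])         # postsynaptic activity
w = np.zeros((1, 2))           # connection weights, initially zero
for _ in range(5):
    w = hebbian_update(w, pre, post)
print(w[0])   # only the co-active connection grew: [0.5 0. ]
```

Because the inactive input never co-fires with the output, its weight stays at zero: learning physically alters only the connections that participate.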

D-Wave Systems
D-Wave Systems, Inc. is a quantum computing company based in Burnaby, British Columbia. On May 11, 2011, D-Wave Systems announced D-Wave One, labeled "the world's first commercially available quantum computer," operating on a 128-qubit chipset[1] using quantum annealing[2][3][4][5] to solve optimization problems. In May 2013 it was announced that a collaboration between NASA, Google and the Universities Space Research Association (USRA) had launched a Quantum Artificial Intelligence Lab, based on the D-Wave Two 512-qubit quantum computer, that would be used for research into machine learning, among other fields of study.[6] The D-Wave One was built on early prototypes such as D-Wave's Orion quantum computer. D-Wave maintains a list of peer-reviewed technical publications on their website, authored by D-Wave scientists and by third-party researchers.
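Quantum annealing itself requires quantum hardware, but its classical cousin, simulated annealing, can be sketched on the same kind of problem a quantum annealer targets: minimizing a small QUBO (quadratic unconstrained binary optimization) instance. This is an illustrative classical analogue, not D-Wave's algorithm; the instance and cooling schedule are made up for the example:

```python
import math
import random

def energy(x, Q):
    # QUBO objective: x^T Q x over binary variables x.
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def anneal(Q, steps=2000, t0=2.0):
    # Classical simulated annealing: accept worsening bit-flips with a
    # probability that shrinks as the temperature t is lowered.
    random.seed(0)                        # fixed seed for reproducibility
    n = len(Q)
    x = [random.randint(0, 1) for _ in range(n)]
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9   # linear cooling schedule
        i = random.randrange(n)
        y = x[:]
        y[i] ^= 1                         # candidate: flip one bit
        d = energy(y, Q) - energy(x, Q)
        if d <= 0 or random.random() < math.exp(-d / t):
            x = y
    return x

# Toy instance: the coupling makes x = [1, 1] the unique minimum (energy -3).
Q = [[-1, -1],
     [0, -1]]
best = anneal(Q)
print(best, energy(best, Q))
```

A quantum annealer replaces the thermal escape from local minima with quantum tunneling, but the problem encoding (a QUBO or Ising energy function) is the same.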

Evolvable hardware
Evolvable hardware (EH) is a field that uses evolutionary algorithms (EAs) to create specialized electronics without manual engineering. It brings together reconfigurable hardware, artificial intelligence, fault tolerance and autonomous systems. Evolvable hardware refers to hardware that can change its architecture and behavior dynamically and autonomously by interacting with its environment. Each candidate circuit can either be simulated or physically implemented in a reconfigurable device. The concept was pioneered by Adrian Thompson at the University of Sussex, England, who in 1996 evolved a tone discriminator using fewer than 40 programmable logic gates and no clock signal in an FPGA. Why evolve circuits? In many cases, conventional design methods (formulas, etc.) can be used to design a circuit. In other cases, an existing circuit must adapt, i.e., modify its configuration, to compensate for faults or perhaps a changing operational environment.
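The evolve-and-evaluate loop can be sketched as a (1+1) evolutionary algorithm searching for a gate assignment that implements XOR in a fixed three-gate netlist. The topology and gate set are illustrative simplifications of what a real EH system would evolve on an FPGA:

```python
import random

# Primitive gates the genome can choose from at each netlist position.
GATES = {
    0: lambda a, b: a and b,           # AND
    1: lambda a, b: a or b,            # OR
    2: lambda a, b: not (a and b),     # NAND
    3: lambda a, b: not (a or b),      # NOR
}

def circuit(genome, a, b):
    # Fixed topology: two gates read the inputs, a third combines them.
    x = GATES[genome[0]](a, b)
    y = GATES[genome[1]](a, b)
    return bool(GATES[genome[2]](x, y))

def fitness(genome):
    # Evaluate the candidate against the XOR truth table; 4/4 is perfect.
    return sum(circuit(genome, a, b) == (a != b)
               for a in (False, True) for b in (False, True))

random.seed(1)
parent = [random.randrange(4) for _ in range(3)]
while fitness(parent) < 4:
    child = parent[:]
    child[random.randrange(3)] = random.randrange(4)   # mutate one gate choice
    if fitness(child) >= fitness(parent):              # keep the better circuit
        parent = child
print(fitness(parent))   # 4: the loop exits only once XOR has been evolved
```

In real evolvable hardware the fitness evaluation runs on the reconfigurable device itself (as in Thompson's tone discriminator), which lets evolution exploit physical effects a simulator would miss.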

Generational list of programming languages
Here, a genealogy of programming languages is shown. Languages are categorized under the ancestor language with the strongest influence. Of course, any such categorization has a large arbitrary element, since programming languages often incorporate major ideas from multiple sources. The families covered are: ALGOL-based, APL-based, BASIC-based, batch languages, C-based, COBOL-based, COMIT-based, DCL-based (including Windows PowerShell, also listed under C#, ksh and Perl), ed-based, Eiffel-based, Forth-based, Fortran-based, FP-based, HyperTalk-based, Java-based, JOSS-based, Lisp-based, ML-based, PL/I-based, Prolog-based, SASL-based, SETL-based, sh-based, Simula-based, Tcl-based, and others. External link: Diagram & history of programming languages.

Hugo de Garis
Hugo de Garis (born 1947, Sydney, Australia) was a researcher in the sub-field of artificial intelligence (AI) known as evolvable hardware. He became known in the 1990s for his research on the use of genetic algorithms to evolve neural networks using three-dimensional cellular automata inside field-programmable gate arrays. He claimed that this approach would enable the creation of what he terms "artificial brains", which would quickly surpass human levels of intelligence.[1] He has more recently been noted for his belief that a major war between the supporters and opponents of intelligent machines, resulting in billions of deaths, is almost inevitable before the end of the 21st century.[2]:234 He suggests AIs may simply eliminate the human race, and humans would be powerless to stop them because of the technological singularity. De Garis originally studied theoretical physics, but he abandoned this field in favour of artificial intelligence.