
Autonomic Computing

The system makes decisions on its own, using high-level policies; it constantly checks and optimizes its status and automatically adapts itself to changing conditions. An autonomic computing framework is composed of autonomic components (ACs) interacting with each other. An AC can be modeled in terms of two main control loops (local and global) with sensors (for self-monitoring), effectors (for self-adjustment), and knowledge and a planner/adapter for exploiting policies based on self- and environment awareness. Driven by this vision, a variety of architectural frameworks based on "self-regulating" autonomic components has recently been proposed. A very similar trend has recently characterized significant research in the area of multi-agent systems. Autonomy-oriented computation is a paradigm proposed by Jiming Liu in 2001 that uses artificial systems imitating the collective behaviours of social animals to solve difficult computational problems.
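As a rough illustration of the local control loop described above, here is a minimal Python sketch of an autonomic component with a sensor, an effector, and a simple policy threshold; all names (read_cpu_load, scale_workers, TARGET_LOAD) are hypothetical, not taken from any particular framework:

    # Minimal sketch of an autonomic component's local control loop.
    # Sensor, effector, and policy names are illustrative assumptions.
    import random
    import time

    TARGET_LOAD = 0.75  # high-level policy: keep utilization near this value

    def read_cpu_load():
        """Sensor (self-monitoring); stubbed with random data here."""
        return random.uniform(0.0, 1.0)

    def scale_workers(delta):
        """Effector (self-adjustment); a real system would add or remove capacity."""
        print(f"adjusting worker pool by {delta:+d}")

    def control_loop(iterations=5):
        """Monitor, analyze against policy, plan, execute, repeat."""
        for _ in range(iterations):
            load = read_cpu_load()       # monitor via sensor
            error = load - TARGET_LOAD   # analyze against the policy
            if abs(error) > 0.1:         # plan: act only outside a dead band
                scale_workers(1 if error > 0 else -1)  # execute via effector
            time.sleep(0.1)

    control_loop()

A full autonomic component would also participate in the global loop and consult a shared knowledge base, per the two-loop model above.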

http://en.wikipedia.org/wiki/Autonomic_computing

Related: Artificial Intelligence

Artificial consciousness Artificial consciousness (AC), also known as machine consciousness (MC) or synthetic consciousness (Gamez 2008; Reggia 2013), is a field related to artificial intelligence and cognitive robotics whose aim is to "define that which would have to be synthesized were consciousness to be found in an engineered artifact" (Aleksander 1995). Neuroscience hypothesizes that consciousness is generated by the interoperation of various parts of the brain, called the neural correlates of consciousness or NCC. Proponents of AC believe it is possible to construct machines (e.g., computer systems) that can emulate this NCC interoperation. Artificial consciousness can be viewed as an extension to artificial intelligence, assuming that the notion of intelligence in its commonly used sense is too narrow to include all aspects of consciousness.

Evolvable hardware Evolvable hardware (EH) is a field concerning the use of evolutionary algorithms (EAs) to create specialized electronics without manual engineering. It brings together reconfigurable hardware, artificial intelligence, fault tolerance and autonomous systems. Evolvable hardware refers to hardware that can change its architecture and behavior dynamically and autonomously by interacting with its environment. Machine learning Machine learning is a subfield of computer science[1] that evolved from the study of pattern recognition and computational learning theory in artificial intelligence.[1] Machine learning explores the construction and study of algorithms that can learn from and make predictions on data.[2] Such algorithms operate by building a model from example inputs in order to make data-driven predictions or decisions,[3]:2 rather than following strictly static program instructions. Machine learning is closely related to and often overlaps with computational statistics, a discipline that also specializes in prediction-making. It has strong ties to mathematical optimization, which delivers methods, theory and application domains to the field.
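To make the evolutionary-algorithm idea behind evolvable hardware concrete, here is a minimal Python sketch that evolves a bitstring "configuration" toward a target pattern; the target and fitness function are illustrative assumptions, since a real EH system would score candidate configurations on actual reconfigurable hardware:

    # Minimal sketch of the EA loop behind evolvable hardware: evolve a
    # bitstring configuration toward a target. Target and fitness are
    # illustrative stand-ins for real circuit evaluation.
    import random

    TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # stand-in for a desired circuit behavior

    def fitness(cfg):
        """Count bits matching the target (higher is better)."""
        return sum(c == t for c, t in zip(cfg, TARGET))

    def mutate(cfg, rate=0.1):
        """Flip each bit with a small probability."""
        return [1 - b if random.random() < rate else b for b in cfg]

    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
    for generation in range(50):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            break
        # keep the best half, refill with mutated copies of survivors
        survivors = population[:10]
        population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

    print("best configuration:", population[0], "fitness:", fitness(population[0]))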

Autonomous agent An autonomous agent is an intelligent agent operating on an owner's behalf but without any interference from that owner. An intelligent agent is characterized, according to a widely cited statement from a no-longer-accessible IBM white paper, as follows: Intelligent agents are software entities that carry out some set of operations on behalf of a user or another program with some degree of independence or autonomy, and in so doing, employ some knowledge or representation of the user's goals or desires. Non-biological examples include intelligent agents, autonomous robots, and various software agents, including artificial life agents and many computer viruses. Bionics Bionics (also known as bionical creativity engineering) is the application of biological methods and systems found in nature to the study and design of engineering systems and modern technology. The transfer of technology between lifeforms and manufactured systems is, according to proponents of bionic technology, desirable because evolutionary pressure typically forces living organisms, including fauna and flora, to become highly optimized and efficient.
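A minimal Python sketch of the quoted definition: an entity that acts on the user's behalf with some autonomy, guided by a stored representation of the user's goals. The mail-filtering scenario and all names here are hypothetical:

    # Minimal sketch of an intelligent agent per the IBM-style definition:
    # it acts for the user, per message, without consulting the user.
    class MailFilterAgent:
        """Carries out operations on the user's behalf with some autonomy."""

        def __init__(self, user_goal_keywords):
            # knowledge/representation of the user's goals
            self.keywords = set(user_goal_keywords)

        def act(self, message):
            # autonomous decision: the user is not consulted per message
            if self.keywords & set(message.lower().split()):
                return "flag"
            return "archive"

    agent = MailFilterAgent(user_goal_keywords={"invoice", "deadline"})
    print(agent.act("Reminder: project deadline moved to Friday"))  # flag
    print(agent.act("Weekly newsletter: new recipes"))              # archive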

Computational learning theory In theoretical computer science, computational learning theory is a mathematical field related to the analysis of machine learning algorithms. Theoretical results in machine learning mainly deal with a type of inductive learning called supervised learning. In supervised learning, an algorithm is given samples that are labeled in some useful way. For example, the samples might be descriptions of mushrooms, and the labels could be whether or not the mushrooms are edible. On Intelligence Hawkins outlines the book as follows: The book starts with some background on why previous attempts at understanding intelligence and building intelligent machines have failed. I then introduce and develop the core idea of the theory, what I call the memory-prediction framework.
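A minimal Python sketch of supervised learning on labeled samples, echoing the mushroom example above; the features, data, and the 1-nearest-neighbor rule are illustrative assumptions rather than anything prescribed by the theory:

    # Minimal sketch of supervised learning: labeled samples in, a
    # prediction rule out. Data and features are invented for illustration.
    samples = [
        ({"cap": "flat",  "odor": "none"},    "edible"),
        ({"cap": "flat",  "odor": "foul"},    "poisonous"),
        ({"cap": "bell",  "odor": "almond"},  "edible"),
        ({"cap": "conic", "odor": "pungent"}, "poisonous"),
    ]

    def distance(a, b):
        """Count features on which two descriptions disagree."""
        return sum(a[k] != b[k] for k in a)

    def predict(description):
        """1-nearest-neighbor: return the label of the most similar sample."""
        nearest = min(samples, key=lambda s: distance(s[0], description))
        return nearest[1]

    print(predict({"cap": "flat", "odor": "almond"}))  # edible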

Technological Singularity The technological singularity is the hypothesis that accelerating progress in technologies will cause a runaway effect wherein artificial intelligence will exceed human intellectual capacity and control, thus radically changing civilization in an event called the singularity.[1] Because the capabilities of such an intelligence may be impossible for a human to comprehend, the technological singularity is an occurrence beyond which events may become unpredictable, unfavorable, or even unfathomable.[2] The first use of the term "singularity" in this context was by mathematician John von Neumann. Proponents of the singularity typically postulate an "intelligence explosion",[5][6] in which superintelligences design successive generations of increasingly powerful minds; such an explosion might occur very quickly and might not stop until the agent's cognitive abilities greatly surpass those of any human.

Meta learning (computer science) Meta learning is a subfield of machine learning in which automatic learning algorithms are applied to metadata about machine learning experiments. Although different researchers hold different views as to what the term exactly means, the main goal is to use such metadata to understand how automatic learning can become flexible in solving different kinds of learning problems, and hence to improve the performance of existing learning algorithms. Flexibility is important because each learning algorithm is based on a set of assumptions about the data, its inductive bias, and it will only learn well if that bias matches the data in the learning problem.
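As a minimal sketch of that goal, the following Python snippet uses metadata from past experiments (dataset properties paired with the algorithm that performed best) to recommend an algorithm for a new problem; all records, properties, and algorithm names are invented for illustration:

    # Minimal sketch of meta-learning: recommend an algorithm for a new
    # problem from metadata about past experiments. All data is illustrative.
    past_experiments = [
        ({"n_samples": 100,    "linear": True},  "linear_model"),
        ({"n_samples": 100,    "linear": False}, "decision_tree"),
        ({"n_samples": 100000, "linear": False}, "neural_net"),
    ]

    def recommend(n_samples, linear):
        """Nearest past experiment by a crude distance over the metadata."""
        def dist(record):
            props, _ = record
            return (props["linear"] != linear) + abs(props["n_samples"] - n_samples) / 100000
        return min(past_experiments, key=dist)[1]

    print(recommend(n_samples=80000, linear=False))  # -> neural_net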
