
Multi-agent system
Concept: Multi-agent systems consist of agents and their environment. Typically, multi-agent systems research refers to software agents. However, the agents in a multi-agent system could equally well be robots,[5] humans or human teams. Agents can be divided into different types: very simple agents, such as passive agents[6] or agents without goals (like an obstacle, apple or key in a simple simulation); active agents[6] with simple goals (like birds in a flocking model, or wolves and sheep in a prey-predator model); and very complex agents (like cognitive agents, which carry out complex calculations). The environment can likewise be divided into virtual, discrete and continuous environments. Characteristics: The agents in a multi-agent system have several important characteristics.[10] Systems paradigms: Many multi-agent systems are implemented as computer simulations, stepping the system through discrete "time steps".
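The time-stepped simulation paradigm mentioned above can be sketched in a few lines of code. The example below is a hypothetical, minimal illustration (the class names and the one-dimensional world are invented for this sketch, not taken from any multi-agent framework): passive agents do nothing, active agents pursue a simple goal, and the environment advances all agents one discrete time step at a time.

```python
# Minimal sketch of a discrete time-step multi-agent simulation.
# All names here are illustrative, not from any particular framework.
import random

class PassiveAgent:
    """Agent without goals (e.g. an obstacle or 'food'): it occupies a position and does nothing."""
    def __init__(self, position):
        self.position = position

    def step(self, world):
        pass  # passive agents take no action

class ActiveAgent:
    """Agent with a simple goal: move one unit toward the nearest passive agent."""
    def __init__(self, position):
        self.position = position

    def step(self, world):
        targets = [a for a in world.agents if isinstance(a, PassiveAgent)]
        if targets:
            nearest = min(targets, key=lambda a: abs(a.position - self.position))
            self.position += 1 if nearest.position > self.position else -1
        else:
            self.position += random.choice([-1, 1])  # wander if nothing to pursue

class World:
    """The environment: here a discrete, unbounded 1-D line of integer positions."""
    def __init__(self, agents):
        self.agents = agents

    def step(self):
        for agent in self.agents:  # one discrete "time step"
            agent.step(self)

world = World([PassiveAgent(10), ActiveAgent(0), ActiveAgent(25)])
for t in range(5):
    world.step()
    print(t, [a.position for a in world.agents])
```

Running the loop shows the active agents converging on the passive one over successive time steps; richer models (flocking, prey-predator) follow the same step-the-whole-system pattern with more elaborate per-agent rules.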

Group intelligence Group intelligence is a term used in a subset of the social psychology literature to refer to a process by which large numbers of people converge upon the same knowledge through group interaction. The term is not commonplace in the mainstream academic study of human intelligence. Social psychologists study group intelligence and related topics such as decentralized decision making and group wisdom, using demographic information to study the ramifications for long-term social change. Marketing and behavioral finance experts use similar research to forecast consumer behavior (e.g. buying patterns) for corporate strategic purposes. Definition: The term group intelligence describes how, under the best circumstances, large numbers of people simultaneously converge upon the same knowledge. James Surowiecki, in The Wisdom of Crowds, claims that, counterintuitively, group intelligence requires independence of thought as well as superior judgment.

Category:Animal intelligence Animal intelligence research examines the origins of intelligence by studying the mental processes of other species. The basic premise of this research is that we need to understand the processes of association and learning in other animals in order to understand how human culture, art, religion, mathematics and more may have developed.

Systems thinking A system is composed of interrelated parts or components (structures) that cooperate in processes (behavior). Natural systems include biological entities, ocean currents, the climate, the solar system and ecosystems. Designed systems include airplanes, software systems, technologies and machines of all kinds, government agencies and business systems. Systems thinking has at least some of its roots in the General System Theory advanced by Ludwig von Bertalanffy in the 1940s and furthered by Ross Ashby in the 1950s. Systems thinking has been applied to problem solving by viewing "problems" as parts of an overall system, rather than reacting to specific parts, outcomes or events, an approach that risks contributing to the further development of unintended consequences. Systems thinking attempts to illustrate how small catalytic events, separated by distance and time, can cause significant changes in complex systems.

Ambient intelligence In computing, ambient intelligence (AmI) refers to electronic environments that are sensitive and responsive to the presence of people. Ambient intelligence is a vision of the future of consumer electronics, telecommunications and computing that was originally developed in the late 1990s for the time frame 2010–2020. In an ambient intelligence world, devices work in concert to support people in carrying out their everyday life activities, tasks and rituals in an easy, natural way, using information and intelligence hidden in the network connecting these devices (see Internet of Things). As these devices grow smaller, more connected and more integrated into our environment, the technology disappears into our surroundings until only the user interface remains perceivable by users. A typical context for an ambient intelligence environment is the home (Bieliková & Krajcovic 2001).

Intelligence ambiante Ambient intelligence is what computing could become in the first half of the 21st century by pushing past the technological limits it faced at the end of the 20th century [citation needed]. The concept can be read as a loose translation of ideas born in North America under the original names of ubiquitous computing, pervasive systems, or the disappearing computer [citation needed]. In this approach, the very concept of an information system or computer changes: from a processing activity centred exclusively on the user, computing becomes an interface between communicating objects and people, and between people [citation needed].

g factor (psychometrics) The g factor (short for "general factor") is a construct developed in psychometric investigations of cognitive abilities. It is a variable that summarizes positive correlations among different cognitive tasks, reflecting the fact that an individual's performance at one type of cognitive task tends to be comparable to his or her performance at other kinds of cognitive tasks. The g factor typically accounts for 40 to 50 percent of the between-individual variance in IQ test performance, and IQ scores are frequently regarded as estimates of individuals' standing on the g factor.[1] The terms IQ, general intelligence, general cognitive ability, general mental ability, or simply intelligence are often used interchangeably to refer to the common core shared by cognitive tests.[2] The existence of the g factor was originally proposed by the English psychologist Charles Spearman in the early years of the 20th century. Mental tests may be designed to measure different aspects of cognition.
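As a toy illustration of what "a variable that summarizes positive correlations" means in practice, the sketch below simulates a small test battery whose scores all load on a single latent factor and then checks how much of the variance the first principal component of the correlation matrix accounts for. The loadings, sample size, and use of a plain principal component (rather than the formal factor-analysis models psychometricians actually fit) are simplifying assumptions made only for this example.

```python
# Toy illustration of the "positive manifold": tests that all load on one
# latent factor are positively correlated, and a single component summarizes
# much of the shared variance. All numbers below are made up.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_tests = 1000, 6

g = rng.normal(size=n_people)                    # latent general ability
loadings = rng.uniform(0.5, 0.8, size=n_tests)   # each test loads positively on the factor
noise = rng.normal(size=(n_people, n_tests))     # test-specific variance
scores = g[:, None] * loadings + noise           # observed test scores

corr = np.corrcoef(scores, rowvar=False)         # all off-diagonal correlations positive
eigvals = np.linalg.eigvalsh(corr)               # eigenvalues, ascending
share = eigvals[-1] / eigvals.sum()              # variance captured by the first component

print(np.round(corr, 2))
print(f"first component explains {share:.0%} of the variance")
```

With these made-up loadings the first component captures a share of variance in the same rough range the article quotes for g, which is the point of the exercise rather than a property of any real test battery.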

Fluid and crystallized intelligence Fluid intelligence or fluid reasoning is the capacity to think logically and solve problems in novel situations, independent of acquired knowledge. It is the ability to analyze novel problems, identify the patterns and relationships that underpin those problems, and extrapolate from them using logic. It is necessary for all logical problem solving, e.g., in scientific, mathematical, and technical problem solving. Crystallized intelligence is the ability to use skills, knowledge, and experience. Crystallized intelligence is one's lifetime of intellectual achievement, as demonstrated largely through one's vocabulary and general knowledge. The terms are somewhat misleading because one is not a "crystallized" form of the other. Fluid and crystallized intelligence are nonetheless correlated with each other, and most IQ tests attempt to measure both varieties.

Intelligence Intelligence is most widely studied in humans, but has also been observed in non-human animals and in plants. Artificial intelligence is the simulation of intelligence in machines. Within the discipline of psychology, various approaches to human intelligence have been adopted. The psychometric approach is especially familiar to the general public, as well as being the most researched and by far the most widely used in practical settings.[1] History of the term: Intelligence derives from the Latin verb intelligere, to comprehend or perceive. Definitions: The definition of intelligence is controversial. From "Mainstream Science on Intelligence" (1994), an editorial statement by fifty-two researchers: "A very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience." What is considered intelligent varies with culture.

Intelligence quotient IQ scores have been shown to be associated with such factors as morbidity and mortality,[2][3] parental social status,[4] and, to a substantial degree, biological parental IQ. While the heritability of IQ has been investigated for nearly a century, there is still debate about the significance of heritability estimates[5][6] and the mechanisms of inheritance.[7] Early history: The English statistician Francis Galton made the first attempt at creating a standardised test for rating a person's intelligence. The French psychologist Alfred Binet, together with Victor Henri and Théodore Simon, had more success in 1905, when they published the Binet-Simon test, which focused on verbal abilities. The score on the Binet-Simon scale would reveal the child's mental age. General factor (g): The many different kinds of IQ tests use a wide variety of methods.
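The mental age obtained from the Binet-Simon scale feeds into the classic ratio IQ, a convention introduced after Binet's work (William Stern's mental quotient, later scaled by 100 in Lewis Terman's Stanford revision): mental age divided by chronological age, times 100. A worked example with invented ages:

$$\mathrm{IQ} = \frac{\text{mental age}}{\text{chronological age}} \times 100, \qquad \text{e.g. } \frac{10\ \text{years}}{8\ \text{years}} \times 100 = 125.$$

Modern tests no longer use this ratio; they report deviation IQs defined relative to the score distribution of a norming sample.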

Flynn effect The Flynn effect is the substantial, long-sustained increase in intelligence test scores measured in many parts of the world over the 20th century. Test score increases have been continuous and approximately linear from the earliest years of testing to the present. For the Raven's Progressive Matrices test, subjects born over a 100-year period were compared in Des Moines, Iowa, and separately in Dumfries, Scotland. Improvements were remarkably consistent across the whole period, in both countries.[1] This apparent increase in IQ has also been observed in various other parts of the world, though the rates of increase vary.[2] There are numerous proposed explanations of the Flynn effect, as well as some skepticism about its implications. Rise in IQ: IQ tests are updated periodically. Ulric Neisser estimates that, using the IQ values of today, the average IQ of the United States in 1932, according to the first Stanford–Binet Intelligence Scales standardization sample, was 80. Some studies have found the gains of the Flynn effect to be particularly concentrated at the lower end of the distribution.
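Neisser's figure is consistent with the rate most commonly quoted for the Flynn effect, roughly three IQ points per decade. A back-of-envelope check (the endpoint year of about 1997, around when Neisser wrote on the topic, is an assumption made only for this calculation):

$$100 - 3\ \tfrac{\text{points}}{\text{decade}} \times \frac{1997 - 1932}{10}\ \text{decades} \approx 100 - 20 = 80.$$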

PASS theory of intelligence Description: The PASS theory is based on A. R. Luria's work on the functional organization of the brain. Assessment of PASS processes: The PASS theory provides the theoretical framework for a measurement instrument called the Das-Naglieri Cognitive Assessment System (CAS), published in 1997.[5] This test is designed to provide a nuanced assessment of the individual's intellectual functioning, providing information about cognitive strengths and weaknesses in each of the four processes. Links to general intelligence: Contemporary theories about intelligence can be divided into two classes, psychometric and cognitive. Links to brain activity: It is useful to link PASS processes to the brain. Remediation and cognitive enhancement: One unusual property of the PASS theory of cognitive processes is that it has proven useful for both intellectual assessment (e.g. the CAS) and educational intervention. Challenges: A frequently cited criticism is based on the factor analysis of the test battery.

Triarchic theory of intelligence Different components of information processing: Sternberg associated the workings of the mind with a series of components. These components he labeled the metacomponents, performance components, and knowledge-acquisition components (Sternberg, 1985). The metacomponents are executive processes used in problem solving and decision making that handle most of the management of the mind. Sternberg's next set of components, the performance components, are the processes that actually carry out the actions the metacomponents dictate. The last set of components, the knowledge-acquisition components, are used in obtaining new information. Whereas Sternberg explains that the basic information-processing components underlying the three parts of his triarchic theory are the same, different contexts and different tasks require different kinds of intelligence (Sternberg, 2001).
