Liste de concepts logiques (List of logical concepts) An article from Wikipedia, the free encyclopedia. This article lists the main logical concepts, in the philosophical sense of the term, that is, in general logic (derived from dialectics). Note: list of the logical concepts of philosophy. LIDA (cognitive architecture) The LIDA (Learning Intelligent Distribution Agent) cognitive architecture is an integrated artificial cognitive system that attempts to model a broad spectrum of cognition in biological systems, from low-level perception and action to high-level reasoning. Developed primarily by Stan Franklin and colleagues at the University of Memphis, the LIDA architecture is empirically grounded in cognitive science and cognitive neuroscience. In addition to providing hypotheses to guide further research, the architecture can support control structures for software agents and robots. Providing plausible explanations for many cognitive processes, the LIDA conceptual model is also intended as a tool with which to think about how minds work. Though it is neither symbolic nor strictly connectionist, LIDA is a hybrid architecture in that it employs a variety of computational mechanisms, chosen for their psychological plausibility.
Computational-representational understanding of mind Computational-representational understanding of mind (abbreviated CRUM) is a hypothesis in cognitive science which proposes that thinking is performed by computations operating on representations. This hypothesis assumes that the mind has mental representations analogous to data structures and computational procedures analogous to algorithms, such that computer programs applying algorithms to data structures can model the mind and its processes. CRUM takes into consideration several theoretical approaches to understanding human cognition, including logic-, rule-, concept-, analogy-, image-, and connection-based systems. There is much disagreement about this hypothesis, but CRUM has been the most theoretically and experimentally successful approach to mind ever developed (Paul Thagard, 2005).
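CRUM's analogy of representations-as-data-structures and procedures-as-algorithms can be made concrete with a toy rule-based system, one of the approaches the hypothesis covers. This is an illustrative sketch only: the facts, rules, and function names below are invented examples, not drawn from the article.

```python
# Representations as data structures: a set of facts and a list of
# "if premises then conclusion" rules (all names here are illustrative).
facts = {"bird(tweety)"}
rules = [
    ({"bird(tweety)"}, "can_fly(tweety)"),
    ({"can_fly(tweety)"}, "has_wings(tweety)"),
]

def forward_chain(facts, rules):
    """A computational procedure over the representations: repeatedly
    apply every rule whose premises already hold, until nothing changes."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived
```

On this tiny example, forward chaining derives both `can_fly(tweety)` and `has_wings(tweety)` from the single starting fact; the point is only the shape of the CRUM analogy, not a claim about how rule-based cognitive models are actually implemented.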
Biologically inspired cognitive architectures Biologically Inspired Cognitive Architectures (BICA) was a DARPA project administered by the Information Processing Technology Office (IPTO) which began in 2005 and was designed to create the next generation of cognitive-architecture models of human artificial intelligence. Its first phase (Design) ran from September 2005 to around October 2006, and was intended to generate new ideas for biological architectures that could be used to create embodied computational architectures of human intelligence. The second phase (Implementation) of BICA was set to begin in the spring of 2007, and would have involved the actual construction of new intelligent agents that live and behave in a virtual environment. However, this phase was canceled by DARPA, reportedly because it was seen as being too ambitious. Today, BICA refers to a transdisciplinary field of study that aims to design, characterise and implement human-level cognitive architectures.
Robotic Software | NooTriX It is no secret that smartphones are attractive for robotics. They pack into a small case a display, wireless communication, computing power, and a bunch of sensors. That's cool! In this tutorial, we will be talking about ROS namespaces, which allow nodes to be combined in ways their developers did not plan for. This is actually what ROS is about: allowing you to build systems from nodes that were developed independently. ROS Groovy was released on December 31st, 2012.
Cognitive Modeling Tu, Th 14:00-15:15 ECOT 831 Instructors Course Overview Cognitive modeling involves the design of computer simulations and mathematical models of human cognition and perception. We will read state-of-the-art research in the field of cognitive modeling, critique the work, and discuss its contributions to the field. In 2008, we plan to focus on sequential dependencies in human cognition, i.e., how one experience influences subsequent perceptions, decisions, and judgements. The instructors believe that sequential dependencies offer deep insight into mechanisms and principles of learning in the brain. Prerequisites The course is open to any students who have some background in cognitive science or artificial intelligence. Course Requirements Readings Written Commentaries For some of the readings, we'll ask you to write a one-page commentary on the paper. The commentary consists of approximately one page of comments, questions, or critiques of the assigned reading(s) for that class. Presentation
Marvin Minsky Marvin Lee Minsky (born August 9, 1927) is an American cognitive scientist in the field of artificial intelligence (AI), co-founder of the Massachusetts Institute of Technology's AI laboratory, and author of several texts on AI and philosophy. Biography Isaac Asimov described Minsky as one of only two people he would admit were more intelligent than he was, the other being Carl Sagan. Minsky is mentioned by name in Arthur C. Clarke's novel 2001: A Space Odyssey, which imagines that "in the 1980s, Minsky and Good had shown how neural networks could be generated automatically—self replicated—in accordance with any arbitrary learning program." In the early 1970s at the MIT Artificial Intelligence Lab, Minsky and Seymour Papert started developing what came to be called the Society of Mind theory. Awards and affiliations Marvin Minsky is affiliated with a number of organizations. Minsky is a critic of the Loebner Prize. Personal life Minsky is an atheist.
Cognitive model A cognitive model is an approximation of animal cognitive processes (predominantly human) for the purposes of comprehension and prediction. Cognitive models can be developed within or outside of a cognitive architecture, though the two are not always easily distinguishable. History Cognitive modeling historically developed within cognitive psychology/cognitive science (including human factors), and has received contributions from the fields of machine learning and artificial intelligence, to name a few. There are many types of cognitive models, and they can range from box-and-arrow diagrams to a set of equations to software programs that interact with the same tools that humans use to complete tasks (e.g., computer mouse and keyboard). Box-and-arrow models A number of key terms are used to describe the processes involved in the perception, storage, and production of speech. Computational models Symbolic Subsymbolic Hybrid Dynamical systems Locomotion
Perceptron The perceptron algorithm was invented in 1957 at the Cornell Aeronautical Laboratory by Frank Rosenblatt. Definition The perceptron is a binary classifier which maps its input x (a real-valued vector) to an output value f(x) (a single binary value): f(x) = 1 if w · x + b > 0, and 0 otherwise, where w is a vector of real-valued weights, w · x is the dot product (which here computes a weighted sum), and b is the 'bias', a constant term that does not depend on any input value. The value of f(x) (0 or 1) is used to classify x as either a positive or a negative instance, in the case of a binary classification problem. If b is negative, then the weighted combination of inputs must produce a positive value greater than |b| in order to push the classifier neuron over the 0 threshold. In the context of artificial neural networks, a perceptron is an artificial neuron using the Heaviside step function as the activation function. Learning algorithm Below is an example of a learning algorithm for a (single-layer) perceptron. Definitions To represent the weights:
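The definition above can be sketched directly in code. This is a minimal illustration of the single-layer perceptron with a Heaviside step activation and the classic error-driven weight update; the AND-gate training data, learning rate, and epoch count are illustrative choices, not taken from the article.

```python
def predict(weights, bias, x):
    """Heaviside step activation over the weighted sum w . x + b."""
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0

def train(samples, epochs=20, lr=0.1):
    """samples: list of (input_vector, target) pairs, targets in {0, 1}.
    Applies the perceptron rule: adjust weights only on misclassification."""
    n = len(samples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(weights, bias, x)  # -1, 0, or +1
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Example: logical AND, a linearly separable problem the perceptron can learn.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(data)
```

After training, `predict(w, b, x)` classifies all four AND inputs correctly; on data that is not linearly separable (e.g. XOR), a single-layer perceptron cannot converge to a correct classifier.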
CCNBook/Sims/Language/Dyslexia - Computational Cognitive Neuroscience Wiki The project file: dyslex.proj (click and Save As to download, then open in Emergent). IMPORTANT: this project requires at least version 5.3.0, which fixes the unit lesioning functionality. Additional file for pretrained weights (required): dyslex_trained.wts.gz Back to CCNBook/Sims/All or Language Chapter. This model simulates normal and disordered (dyslexic) reading performance in terms of a distributed representation of word-level knowledge across Orthography, Semantics, and Phonology. Because the network takes some time to train (250 epochs), we will just load in a pre-trained network to begin with. Normal Reading Performance For our initial exploration, we will just observe the behavior of the network as it "reads" the words presented to the orthographic input layer. You will see the activation flow through the network, and it should settle into the correct pronunciation and semantics for the first word, "tart" (the bakery food). Reading with Complete Pathway Lesions