OpenAI - About Us
The OpenAI site is centered on an open-source project and community involving artificial intelligence. "Open source" means that the project's source code is freely available and can be used by others at no charge. Artificial intelligence refers to the general aim of intelligent computing: making machines think and learn. The project itself is the creation of a set of tools that are considered to be models of human intelligence. These tools are intended to be integrated into programs or used standalone for research. We're looking for this site to be:
- a home for the OpenAI project
- a place for the community to connect
- an information repository for AI in general
- a showcase for the tools
The project itself is geared toward developing a specification for AI-related tools. OpenAI will also provide the details of the specification online as it develops, so that the community can help in its creation by giving insight and criticism.

Meditation and ADHD involve overlapping regions of the brain
Monday, 08 April 2013, 10:00, Journal FORUM, by Marie Lambert-Chan. People who practice meditation are more skilled at concentrating on a specific source of information. (Image: iStockphoto) Practitioners of mindfulness meditation appear to have greater cortical thickness in brain regions responsible for regulating attention. Some of these same areas appear to be thinner in individuals suffering from attention deficit disorder with or without hyperactivity (ADHD). Researchers at the Université de Montréal and McGill University established this connection, which is the subject of an article published in the journal Biological Psychology. "This research suggests it might be worth conducting well-controlled clinical studies to verify whether people struggling with ADHD could benefit from the effects of meditation," says Pierre Rainville.

An Introduction to Neural Networks
Prof. Leslie Smith, Centre for Cognitive and Computational Neuroscience, Department of Computing and Mathematics, University of Stirling. lss@cs.stir.ac.uk
Last major update: 25 October 1996. Minor updates 22 April 1998 and 12 September 2001; links updated (they were out of date) 12 September 2001; fix to math font (thanks Sietse Brouwer) 2 April 2003.
This document is a roughly HTML-ised version of a talk given at the NSYN meeting in Edinburgh, Scotland, on 28 February 1996, then updated a few times in response to comments received. Please email me comments, but remember that this was originally just the slides from an introductory talk!
Contents:
- Why would anyone want a 'new' sort of computer?
- What is a neural network?
- Some algorithms and architectures.
- Where have they been applied?
- What new applications are likely?
- Some useful sources of information.
- Some comments added Sept 2001
- NEW: questions and answers arising from this tutorial
Why would anyone want a 'new' sort of computer?
[Table: what conventional computers are good at (e.g. fast arithmetic) versus what they are not so good at; remainder not preserved]

Lotus Artificial Life - Hardware Artificial Life
This applet displays a cellular-automaton substrate capable of supporting evolving, self-reproducing organisms that are capable of universal computation. The applet is fully interactive, allowing you to apply selection based on the organisms' visual characteristics using a variety of implements. Selection may also be applied automatically; currently the built-in selection methods are for size and shape only. The cellular automaton uses a strict von Neumann neighbourhood and is based on an innovative, multi-layered design. The whole architecture is designed to be implemented on massively parallel hardware. Note: if you're playing with wiping out organisms manually, you'll probably want to have the 'No selection at all' checkbox ticked; this causes all cells to be born pregnant and removes some constraints that abort malformed offspring.
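To make the "strict von Neumann neighbourhood" concrete, here is a minimal sketch of a 2D cellular automaton where each cell's next state depends only on itself and its four orthogonal neighbours. The update rule here is an illustrative majority vote, not the applet's actual multi-layered rule, which the text does not specify.

```python
def step(grid):
    """Advance a 0/1 grid one generation using a strict von Neumann
    neighbourhood (cell + north, south, west, east), with wrap-around edges.
    The rule here is a simple majority vote over the five cells."""
    h, w = len(grid), len(grid[0])
    nxt = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            neighbourhood = (
                grid[y][x]
                + grid[(y - 1) % h][x]   # north
                + grid[(y + 1) % h][x]   # south
                + grid[y][(x - 1) % w]   # west
                + grid[y][(x + 1) % w]   # east
            )
            nxt[y][x] = 1 if neighbourhood >= 3 else 0
    return nxt

grid = [
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 1, 1],
]
print(step(grid))
```

A real substrate like the applet's would carry several state layers per cell (hence "multi-layered"), but the neighbourhood access pattern is the same.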

Introduction to Feed-Forward Artificial Neural Networks
Let's dive into the world of pattern recognition. In particular, we will look at the recognition of digits (0, 1, ..., 9). Imagine a program that must recognize a digit from an image. More generally, a neural network allows the approximation of a function. In the rest of the article, we will denote each example by a vector whose components are the n pieces of information describing it. Now let's see where the theory of artificial neural networks comes from. How do humans reason, speak, calculate, learn...? Approaches adopted in artificial intelligence research: the first is to carry out a logical analysis of the tasks involved in human cognition and to try to reconstruct them by program. The second approach led to the definition and study of formal neural networks, which are complex networks of interconnected elementary computing units.
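The computation such a feed-forward network performs can be sketched in a few lines: each layer multiplies its input vector by a weight matrix, adds a bias, and applies a squashing function. The weights below are random and untrained, and the layer sizes are illustrative; the point is only the shape of the function being approximated.

```python
import math
import random

def sigmoid(z):
    """Standard logistic squashing function."""
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    """One fully connected layer: out_j = sigmoid(sum_i w_ji * x_i + b_j)."""
    return [
        sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
        for row, b in zip(weights, biases)
    ]

def forward(x, w1, b1, w2, b2):
    """Forward pass of a two-layer feed-forward network."""
    return layer(layer(x, w1, b1), w2, b2)

random.seed(0)
n_in, n_hidden, n_out = 4, 3, 2   # illustrative sizes, not from the article
w1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
b1 = [0.0] * n_hidden
w2 = [[random.uniform(-1, 1) for _ in range(n_hidden)] for _ in range(n_out)]
b2 = [0.0] * n_out

print(forward([0.5, -0.2, 0.1, 0.9], w1, b1, w2, b2))
```

For digit recognition the input vector would hold the image's pixel values and the output layer would have ten units, one per digit; training the weights (e.g. by backpropagation) is what turns this random map into a useful approximation.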

g, a Statistical Myth
Attention Conservation Notice: About 11,000 words on the triviality of finding that positively correlated variables are all correlated with a linear combination of each other, and why this becomes no more profound when the variables are scores on intelligence tests. Unlikely to change the opinion of anyone who's read enough about the area to have one, but also unlikely to give enough information about the underlying statistical techniques to clarify them to novices. Includes multiple simulations, exasperation, and lots of unwarranted intellectual arrogance on my part. Follows, but is independent of, two earlier posts on the subject of intelligence and its biological basis, and their own sequel on heritability and malleability. This doubtless more than exhausts your interest in reading about the subject; it has certainly exhausted my interest in writing about it.
Sections:
- The origin of g: Spearman's original general factor theory
- The modern g
- (And it's not just me)
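The statistical triviality the notice describes can be reproduced in a toy simulation: generate several "test scores" that are positively correlated only because each mixes many small independent causes, then observe that every score correlates strongly with their sum, a crude stand-in for an extracted general factor. All names and parameters here are illustrative, not taken from the essay's own simulations.

```python
import random
import statistics

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(42)
n_people, n_tests, n_causes = 2000, 6, 50

# Each person has many small, mutually independent abilities...
causes = [[random.gauss(0, 1) for _ in range(n_causes)] for _ in range(n_people)]
# ...and each test samples a random positive mixture of them.
loadings = [[random.uniform(0, 1) for _ in range(n_causes)] for _ in range(n_tests)]

scores = [
    [sum(l * c for l, c in zip(loadings[t], causes[p])) for p in range(n_people)]
    for t in range(n_tests)
]
total = [sum(scores[t][p] for t in range(n_tests)) for p in range(n_people)]

for t in range(n_tests):
    print(f"test {t}: corr with total = {corr(scores[t], total):.2f}")
```

Every test comes out highly correlated with the total even though no single underlying factor exists, which is exactly the point: a "general factor" falls out of any battery of positively correlated variables.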

New Pattern Found in Prime Numbers
(PhysOrg.com) -- Prime numbers have intrigued curious thinkers for centuries. On one hand, prime numbers seem to be randomly distributed among the natural numbers, with no law other than that of chance. But on the other hand, the global distribution of primes reveals a remarkably smooth regularity. This combination of randomness and regularity has motivated researchers to search for patterns in the distribution of primes that may eventually shed light on their ultimate nature. In a recent study, Bartolo Luque and Lucas Lacasa of the Universidad Politécnica de Madrid in Spain have discovered a new pattern in primes that has surprisingly gone unnoticed until now. "Mathematicians have studied prime numbers for centuries," Lacasa told PhysOrg.com. Benford's law (BL), named after physicist Frank Benford, who formulated it in 1938, describes the distribution of the leading digits of the numbers in a wide variety of data sets and mathematical sequences. "BL is a specific case of GBL," Lacasa explained, referring to a generalized Benford's law.
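Benford's law predicts P(leading digit = d) = log10(1 + 1/d), so digit 1 should lead about 30.1% of the time. The sketch below checks this against the powers of 2, a sequence classically known to obey the law; it does not reproduce the generalized form (GBL) that the study applies to primes.

```python
import math
from collections import Counter

def leading_digit(n):
    """First decimal digit of a positive integer."""
    return int(str(n)[0])

# Leading digits of 2^1 .. 2^1000, a sequence known to follow Benford's law.
counts = Counter(leading_digit(2 ** k) for k in range(1, 1001))

for d in range(1, 10):
    observed = counts[d] / 1000
    predicted = math.log10(1 + 1 / d)
    print(f"digit {d}: observed {observed:.3f}, Benford {predicted:.3f}")
```

Primes themselves do not match this simple distribution; the study's result is that their leading digits follow a size-dependent generalization of it.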

OVERVIEW OF NEURAL NETWORKS
This installment addresses the subject of computer models of neural networks and the relevance of those models to the functioning brain. The computer field of Artificial Intelligence is a vast bottomless pit which would lead this series too far from biological reality -- and too far into speculation -- to be included. Neural network theory will be the singular exception, because the model is so persuasive and so important that it cannot be ignored. Neurobiology provides a great deal of information about the physiology of individual neurons as well as about the function of nuclei and other gross neuroanatomical structures. The building block of computer-model neural networks is a processing unit called a neurode, which captures many essential features of biological neurons. In the diagram, three neurodes are shown, which can perform the logical operation "AND", i.e., the output neurode will fire only if the two input neurodes are both firing. Neural networks are "black boxes" of memory.
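The three-neurode AND circuit described above can be sketched in a few lines: two input neurodes feed one output neurode, which fires only when the weighted sum of its inputs reaches a threshold. The weights and threshold here are one conventional choice, not values taken from the diagram.

```python
def neurode(inputs, weights, threshold):
    """A threshold unit: fires (1) iff the weighted input sum >= threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

def and_network(x1, x2):
    # With weights (1, 1) and threshold 2, the output neurode fires
    # only when both input neurodes are firing.
    return neurode([x1, x2], [1.0, 1.0], 2.0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", and_network(a, b))
```

Lowering the threshold to 1 would turn the same circuit into an OR gate, which is why the weights, rather than the wiring, are said to hold the network's "memory".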