
Machine Learning


Cs.gmu.edu/~eclab/projects/ecj/docs/manual/manual.pdf. Johnmyleswhite/MLNotes. Self-organizing map.

Self-organizing map

Self-organizing (or self-adaptive) maps are a class of artificial neural network based on unsupervised learning methods. They are often referred to by the English term self-organizing maps (SOM), or as Kohonen maps after Teuvo Kohonen, the statistician who developed the concept in 1984. They are used to map a real space, that is, to study how data are distributed in a high-dimensional space.

In practice, this mapping can be used for discretization, vector quantization, or classification tasks. Self Organizing Map AI for Pictures. Introduction: this article is about creating an app to cluster and search for related pictures. I got the basic idea from a Longhorn demo in which they showed similar functionality. In the demo, they selected an image of the sunset, and the program was able to search the other images on the hard drive and return similar images. There are other photo library applications that offer similar functionality. Honestly, I thought that was pretty cool and wanted to have some idea of how they might be doing that. I do not know how they actually operate internally, but this article will show one possibility. I am also writing this article to continue my AI training. Kohonen SOM: luckily, there is a type of NN that works with unsupervised training. I'm guessing that it is the second or third most popular type of NN?
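To make the Kohonen SOM idea above concrete, here is a minimal training-loop sketch plus a helper that maps a feature vector to its closest node. This is not the article's implementation: the NumPy approach, the grid size, the decay schedule, and the idea of using colour histograms as picture features are all assumptions made for illustration.

# Minimal self-organizing map (Kohonen SOM) sketch in NumPy.
# Grid size, learning-rate/neighbourhood decay and the use of colour
# histograms as picture features are assumptions, not the article's code.
import numpy as np

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0):
    """Train a 2-D SOM on `data` (n_samples x n_features)."""
    rng = np.random.default_rng(0)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))   # codebook vectors
    # grid coordinates of every node, used by the neighbourhood function
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            frac = step / n_steps
            lr = lr0 * (1.0 - frac)               # decaying learning rate
            sigma = sigma0 * (1.0 - frac) + 0.5   # shrinking neighbourhood radius
            # best-matching unit: node whose weight vector is closest to x
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), (h, w))
            # pull every node toward x, weighted by its grid distance to the BMU
            grid_dist2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
            influence = np.exp(-grid_dist2 / (2 * sigma ** 2))[..., None]
            weights += lr * influence * (x - weights)
            step += 1
    return weights

def best_matching_unit(weights, x):
    """Map one feature vector to its closest SOM node (its 'cluster')."""
    dists = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(dists), dists.shape)

# Usage: represent each picture as a feature vector (e.g. a colour histogram),
# train the map, then treat pictures landing on the same or neighbouring
# nodes as "similar".
features = np.random.rand(200, 48)   # 200 hypothetical picture histograms
som = train_som(features)
print(best_matching_unit(som, features[0]))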

Self Organizing Map AI for Pictures

Anyways, that is my current understanding; here are some other articles I recommend. SOM tutorial part 1. Kohonen's Self Organizing Feature Maps. Introductory note: this tutorial is the first of two related to self-organising feature maps.

SOM tutorial part 1

Initially, this was just going to be one big comprehensive tutorial, but work demands and other time constraints have forced me to divide it into two. Nevertheless, part one should provide you with a pretty good introduction. Nh4. Support vector machine.

Support vector machine

SVMs were developed in the 1990s from Vladimir Vapnik's theoretical work on building a statistical theory of learning: the Vapnik-Chervonenkis theory. SVMs were quickly adopted for their ability to handle high-dimensional data, their small number of hyperparameters, their theoretical guarantees, and their good results in practice. Memetics.
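As a hedged illustration of those points (high-dimensional data, few hyperparameters to tune), here is a minimal SVM classification sketch; the choice of scikit-learn and of its bundled 30-feature breast-cancer dataset is my assumption, not something taken from the excerpt.

# Minimal SVM classification sketch (library and dataset chosen for illustration).
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = datasets.load_breast_cancer(return_X_y=True)   # 30-dimensional inputs
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Few hyperparameters: mainly the regularization strength C and the kernel choice.
clf = SVC(C=1.0, kernel="rbf", gamma="scale")
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))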

Zeigarnik effect.

Zeigarnik effect

Artificial intelligence. AI research is highly technical and specialized, and is deeply divided into subfields that often fail to communicate with each other.[5] Some of the division is due to social and cultural factors: subfields have grown up around particular institutions and the work of individual researchers.

Artificial intelligence

AI research is also divided by several technical issues. Some subfields focus on the solution of specific problems. Others focus on one of several possible approaches, on the use of a particular tool, or on particular applications. The central problems (or goals) of AI research include reasoning, knowledge, planning, learning, natural language processing (communication), perception and the ability to move and manipulate objects.[6] General intelligence is still among the field's long-term goals.[7] Currently popular approaches include statistical methods, computational intelligence and traditional symbolic AI.

History. Machine learning. Machine learning is a subfield of computer science[1] that evolved from the study of pattern recognition and computational learning theory in artificial intelligence.[1] Machine learning explores the construction and study of algorithms that can learn from and make predictions on data.[2] Such algorithms operate by building a model from example inputs in order to make data-driven predictions or decisions,[3]:2 rather than following strictly static program instructions.

Machine learning

Machine learning is closely related to and often overlaps with computational statistics, a discipline that also specializes in prediction-making. It has strong ties to mathematical optimization, which delivers methods, theory and application domains to the field. Machine learning is employed in a range of computing tasks where designing and programming explicit, rule-based algorithms is infeasible.
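As a toy sketch of that idea, the example below fits a model to labelled example messages instead of relying on hand-written filtering rules, and then predicts labels for new messages (spam filtering is one of the applications listed next). The tiny message set, the bag-of-words features and the naive Bayes classifier are assumptions chosen purely for illustration.

# Toy spam-filtering sketch: the model is learned from labelled examples
# rather than from explicit, hand-written rules. Messages and labels are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "win a free prize now", "cheap meds limited offer",
    "meeting moved to 3pm", "lunch tomorrow?",
]
labels = ["spam", "spam", "ham", "ham"]

# bag-of-words features + naive Bayes classifier
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["free offer, claim your prize"]))    # likely 'spam'
print(model.predict(["are we still meeting tomorrow?"]))  # likely 'ham'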

Example applications include spam filtering, optical character recognition (OCR),[4] search engines and computer vision. L-2011-114. Ikonomakis-et.-al._Text-Classification-Using-Machine-Learning-Techniques. Introduction%20to%20machine%20learning. Schloss Dagstuhl : Seminar Homepage. September 9–14, 2012, Dagstuhl Perspectives Workshop 12371. Organizers: Anthony D.

Schloss Dagstuhl : Seminar Homepage

Joseph (University of California – Berkeley, US), Pavel Laskov (Universität Tübingen, DE), Fabio Roli (Università di Cagliari, IT), Doug Tygar (University of California – Berkeley, US). Coordinators: Blaine Nelson (Univ. Documents: Dagstuhl Report, Volume 2, Issue 9; Dagstuhl Manifesto, Volume 3, Issue 3. Press Room: "Intelligenter Virenschutz für Computer" by Peter Welchering, SWR2, 01.12.2012 (in German); "Mit jeder Cyberattacke wird der Computer schlauer" by Peter Welchering, FAZ.net, 23.11.2012 (in German); "Der Computer schlägt zurück" by Peter Welchering, Deutschlandfunk "Forschung aktuell", 15.09.2012 (in German); "Selbstlernende Software", Simone Mir Haschemi in conversation with Paval Laskov.