
Wikipedia
A self-organizing map (SOM) or self-organizing feature map (SOFM) is a type of artificial neural network (ANN) that is trained using unsupervised learning to produce a low-dimensional (typically two-dimensional), discretized representation of the input space of the training samples, called a map. Self-organizing maps differ from other artificial neural networks in that they use a neighborhood function to preserve the topological properties of the input space. This makes SOMs useful for visualizing low-dimensional views of high-dimensional data, akin to multidimensional scaling. The model was first described as an artificial neural network by the Finnish professor Teuvo Kohonen, and is sometimes called a Kohonen map or network.[1][2] Like most artificial neural networks, SOMs operate in two modes: training, in which the map is built from input examples, and mapping, in which a new input vector is classified by the trained map. A self-organizing map consists of components called nodes or neurons, and large SOMs display emergent properties.
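To make the two modes concrete, here is a minimal training-and-mapping sketch in Python/NumPy. It is an illustration only: the grid size, the Gaussian neighborhood function, and the exponential decay schedules are assumptions chosen for the example, not details fixed by the article.

```python
import numpy as np

def train_som(data, grid_h=10, grid_w=10, n_iter=1000, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal SOM training sketch: one weight (model) vector per grid node."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    weights = rng.random((grid_h, grid_w, dim))   # one model vector per node
    gy, gx = np.mgrid[0:grid_h, 0:grid_w]         # grid coordinates for the neighborhood

    for t in range(n_iter):
        x = data[rng.integers(len(data))]         # pick a training sample
        # Training mode, step 1: find the best-matching unit (BMU).
        dists = np.linalg.norm(weights - x, axis=2)
        by, bx = np.unravel_index(dists.argmin(), dists.shape)
        # Training mode, step 2: move the BMU and its grid neighbors toward x.
        lr = lr0 * np.exp(-t / n_iter)            # decaying learning rate
        sigma = sigma0 * np.exp(-t / n_iter)      # shrinking neighborhood width
        grid_dist2 = (gy - by) ** 2 + (gx - bx) ** 2
        h = np.exp(-grid_dist2 / (2 * sigma ** 2))  # Gaussian neighborhood function
        weights += lr * h[..., None] * (x - weights)
    return weights

def map_sample(weights, x):
    """Mapping mode: return the grid coordinates of the node closest to x."""
    dists = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(dists.argmin(), dists.shape)
```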

Teuvo Kohonen Teuvo Kohonen (born July 11, 1934) is a prominent Finnish academician (Dr. Eng.) and researcher. He is currently professor emeritus of the Academy of Finland. Prof. Kohonen has made many contributions to the field of artificial neural networks, including the Learning Vector Quantization algorithm, fundamental theories of distributed associative memory and optimal associative mappings, the learning subspace method, and novel algorithms for symbol processing such as redundant hash addressing. He spent most of his career at the Helsinki University of Technology. Prof. Kohonen was elected the First Vice President of the International Association for Pattern Recognition from 1982 to 1984, and acted as the first president of the European Neural Network Society from 1991 to 1992. For his scientific achievements, Prof. Kohonen has received a number of awards, including the IEEE Neural Networks Council Pioneer Award (1991), the Technical Achievement Award of the IEEE Signal Processing Society (1995), and the Frank Rosenblatt Technical Field Award (2008).

Multilayer perceptron A multilayer perceptron (MLP) is a feedforward artificial neural network model that maps sets of input data onto a set of appropriate outputs. An MLP consists of multiple layers of nodes in a directed graph, with each layer fully connected to the next one. Except for the input nodes, each node is a neuron (or processing element) with a nonlinear activation function. MLP utilizes a supervised learning technique called backpropagation for training the network.[1][2] MLP is a modification of the standard linear perceptron and can distinguish data that are not linearly separable.[3] Theory Activation function If a multilayer perceptron has a linear activation function in all neurons, that is, a linear function that maps the weighted inputs to the output of each neuron, then it is easily proved with linear algebra that any number of layers can be reduced to the standard two-layer input-output model (see perceptron). For this reason the hidden neurons use a nonlinear activation function such as the hyperbolic tangent y(v_i) = tanh(v_i) or the logistic function y(v_i) = (1 + e^(-v_i))^(-1), where y_i is the output of the ith node (neuron) and v_i is the weighted sum of its input connections.
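The collapse of purely linear layers can be checked numerically in a few lines. The snippet below is only an illustration with made-up shapes and random weights:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(4)            # toy input vector
W1 = rng.random((5, 4))      # first "layer" of weights
W2 = rng.random((3, 5))      # second "layer" of weights

# With linear (identity) activations, two layers compose into one matrix:
deep = W2 @ (W1 @ x)                     # multilayer network, no nonlinearity
shallow = (W2 @ W1) @ x                  # equivalent single linear map
print(np.allclose(deep, shallow))        # True: depth adds nothing here

# A nonlinearity such as tanh breaks this equivalence:
nonlinear = W2 @ np.tanh(W1 @ x)
print(np.allclose(nonlinear, shallow))   # False in general
```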

Scholarpedia Figure 1: The array of nodes in a two-dimensional SOM grid. The Self-Organizing Map (SOM), commonly also known as the Kohonen network (Kohonen 1982, Kohonen 2001), is a computational method for the visualization and analysis of high-dimensional data, especially experimentally acquired information. Introduction The Self-Organizing Map defines an ordered mapping, a kind of projection from a set of given data items onto a regular, usually two-dimensional grid. A model of the data is associated with each grid node; like a codebook vector in vector quantization, the model is usually a certain weighted local average of the given data items in the data space. The SOM was originally developed for the visualization of distributions of metric vectors, such as ordered sets of measurement values or statistical attributes, but it can be shown that a SOM-type mapping can be defined for any data items for which mutual pairwise distances can be defined. History Figure 2: Models of acoustic spectra of Finnish phonemes, organized on an SOM.
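The phrase "weighted local average" can be written down directly as a one-step batch computation: each node's model becomes the average of the data items, weighted by how close each item's best-matching node is on the grid. The sketch below is illustrative only; the Gaussian neighborhood and all array shapes are assumptions, not something specified in the excerpt.

```python
import numpy as np

def batch_som_step(weights, data, sigma=1.5):
    """One batch update: each model becomes a neighborhood-weighted average of the data."""
    grid_h, grid_w, dim = weights.shape
    gy, gx = np.mgrid[0:grid_h, 0:grid_w]
    nodes = np.stack([gy.ravel(), gx.ravel()], axis=1)      # grid coordinates of all nodes

    # Best-matching node (index into the flattened grid) for every data item.
    flat = weights.reshape(-1, dim)
    bmu = np.linalg.norm(data[:, None, :] - flat[None, :, :], axis=2).argmin(axis=1)

    # Neighborhood weight between each item's BMU and every node on the grid.
    d2 = ((nodes[bmu][:, None, :] - nodes[None, :, :]) ** 2).sum(axis=2)
    h = np.exp(-d2 / (2 * sigma ** 2))                      # shape (n_items, n_nodes)

    # New model for each node: weighted local average of the data items.
    new_flat = (h.T @ data) / h.sum(axis=0)[:, None]
    return new_flat.reshape(grid_h, grid_w, dim)
```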

Protein Secondary Structure Prediction with Neural Nets: Feed-Forward Networks Introduction to feed-forward nets Feed-forward nets are the most well-known and widely used class of neural network. The popularity of feed-forward networks derives from the fact that they have been applied successfully to a wide range of information processing tasks in such diverse fields as speech recognition, financial prediction, image compression, medical diagnosis and protein structure prediction; new applications are being discovered all the time. (For a useful survey of practical applications for feed-forward networks, see [Lisboa, 1992].) In common with all neural networks, feed-forward networks are trained, rather than programmed, to carry out the chosen information processing tasks. The feed-forward architecture Feed-forward networks have a characteristic layered architecture, with each layer comprising one or more simple processing units called artificial neurons or nodes. Diagram of a 2-layer perceptron. Training a feed-forward net

Kohonen Networks Kohonen Networks Introduction In this tutorial you will learn about: unsupervised learning, Kohonen networks, and learning in Kohonen networks. Unsupervised Learning In all the forms of learning we have met so far, the answer that the network is supposed to give for the training examples is known. Kohonen Networks The objective of a Kohonen network is to map input vectors (patterns) of arbitrary dimension N onto a discrete map with 1 or 2 dimensions. Learning in Kohonen Networks The learning process is roughly as follows:

    initialise the weights for each output unit
    loop until weight changes are negligible
        for each input pattern
            present the input pattern
            find the winning output unit
            find all units in the neighbourhood of the winner
            update the weight vectors for all those units
        reduce the size of neighbourhoods if required

The winning output unit is simply the unit with the weight vector that has the smallest Euclidean distance to the input pattern.
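A direct, loop-for-loop rendering of that procedure for a one-dimensional map might look like the following (it complements the two-dimensional grid sketch earlier in this collection). All of the concrete numbers here, such as the number of units, the initial radius, the learning rate and the stopping threshold, are illustrative assumptions:

```python
import numpy as np

def kohonen_train(patterns, n_units=16, radius=4, lr=0.3, tol=1e-4, max_epochs=200, seed=0):
    """Literal translation of the loop above for a 1-D map with n_units output units."""
    rng = np.random.default_rng(seed)
    dim = patterns.shape[1]
    weights = rng.random((n_units, dim))          # initialise the weights for each output unit
    for _ in range(max_epochs):                   # loop until weight changes are negligible
        total_change = 0.0
        for x in patterns:                        # for each input pattern
            dists = np.linalg.norm(weights - x, axis=1)
            winner = int(dists.argmin())          # winner: smallest Euclidean distance
            lo, hi = max(0, winner - radius), min(n_units, winner + radius + 1)
            for j in range(lo, hi):               # all units in the neighbourhood of the winner
                delta = lr * (x - weights[j])     # update the weight vectors for those units
                weights[j] += delta
                total_change += np.abs(delta).sum()
        radius = max(1, radius - 1)               # reduce the size of neighbourhoods
        if total_change < tol:
            break
    return weights
```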

Feature learning Feature learning or representation learning[1] is a set of techniques in machine learning that learn a transformation of "raw" inputs to a representation that can be effectively exploited in a supervised learning task such as classification. Feature learning algorithms themselves may be either unsupervised or supervised, and include autoencoders,[2] dictionary learning, matrix factorization,[3] restricted Boltzmann machines[2] and various forms of clustering.[2][4][5] When feature learning can be performed in an unsupervised way, it enables a form of semi-supervised learning in which features are first learned from an unlabeled dataset and then employed to improve performance in a supervised setting with labeled data.[6][7] Clustering as feature learning K-means clustering can be used for feature learning by clustering an unlabeled set to produce k centroids, then using these centroids to produce k additional features for a subsequent supervised learning task.
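A minimal sketch of that recipe, using scikit-learn's KMeans. The random data and the choice of distance-to-centroid features are assumptions made for illustration; one-hot cluster membership is another common encoding.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X_unlabeled = rng.random((500, 10))     # unlabeled data used only to learn features
X_labeled = rng.random((100, 10))       # smaller labeled set for the supervised task

k = 8
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_unlabeled)

# k new features per sample: distances to each of the k learned centroids.
new_features = km.transform(X_labeled)               # shape (100, k)
X_augmented = np.hstack([X_labeled, new_features])   # original features + k extras
```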

Extended Kohonen Maps A Kohonen map is a Self-Organizing Map (SOM) used to order a set of high-dimensional vectors. It can be used to clarify relations in a complex set of data by revealing some inherent order. This webpage gives access to software that can be used to create standard Kohonen maps, as well as some extensions. Update 2001/11/16: Recompiled koh.exe for Windows allows for processing of much larger data files. Note: Images on this webpage use grey-scales to convey information. Literature The primary source on Kohonen maps is: Teuvo Kohonen. Self-Organization and Associative Memory. The extensions were first described in: Peter Kleiweg. Neurale netwerken: Een inleidende cursus met practica voor de studie Alfa-Informatica (Neural networks: an introductory course with practicals for the Alfa-Informatica programme). Kohonen's algorithm A Kohonen map is created using Artificial Neural Network techniques. The result of the training is that a pattern of organization emerges in the map. To demonstrate this algorithm, Kohonen used the set of 32 vectors reproduced in the table below.

Feedforward neural network In a feedforward network, information always moves in one direction; it never goes backwards. A feedforward neural network is an artificial neural network where connections between the units do not form a directed cycle. This is different from recurrent neural networks. The feedforward neural network was the first and simplest type of artificial neural network devised. In this network, the information moves in only one direction, forward, from the input nodes, through the hidden nodes (if any) and to the output nodes. Single-layer perceptron The simplest kind of neural network is a single-layer perceptron network, which consists of a single layer of output nodes; the inputs are fed directly to the outputs via a series of weights. A perceptron can be created using any values for the activated and deactivated states as long as the threshold value lies between the two. Perceptrons can be trained by a simple learning algorithm that is usually called the delta rule: the error between the computed output and the target output is used to adjust each weight by the error times the corresponding input (times a learning rate), implementing a form of gradient descent.
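Here is a small sketch of delta-rule training for such a single-layer perceptron. The threshold activation, the learning rate, and the toy AND dataset are assumptions chosen for illustration:

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20, seed=0):
    """Delta-rule training for a single-layer perceptron with a threshold activation."""
    rng = np.random.default_rng(seed)
    w = rng.random(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x, target in zip(X, y):
            out = 1.0 if np.dot(w, x) + b > 0 else 0.0   # threshold (step) activation
            error = target - out
            w += lr * error * x          # delta rule: error times input times learning rate
            b += lr * error
    return w, b

# Toy usage on a linearly separable problem (logical AND); the data is illustrative.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)
w, b = train_perceptron(X, y)
print([1.0 if np.dot(w, x) + b > 0 else 0.0 for x in X])  # expected: [0.0, 0.0, 0.0, 1.0]
```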

SOM tutorial Kohonen's Self Organizing Feature Maps Introductory Note This tutorial is the first of two related to self organising feature maps. Initially, this was just going to be one big comprehensive tutorial, but work demands and other time constraints have forced me to divide it into two. I will appreciate any feedback you are willing to give - good or bad. Overview Kohonen Self Organising Feature Maps, or SOMs as I shall be referring to them from now on, are fascinating beasts. A common example used to help teach the principles behind SOMs is the mapping of colours from their three-dimensional components - red, green and blue - into two dimensions. Figure 1: Screenshot of the demo program (left) and the colours it has classified (right). One of the most interesting aspects of SOMs is that they learn to classify data without supervision. Before I get on with the nitty gritty, it's best for you to forget everything you may already know about neural networks!
