PyBrain Videos
This video presentation was shown at the ICML Workshop for Open Source ML Software on June 25, 2010. It explains some of the features and algorithms of PyBrain and gives tutorials on how to install and use PyBrain for different tasks. This video shows some of the learning features in PyBrain in action.

Algorithms
We implemented many useful standard and advanced algorithms in PyBrain, and in some cases created interfaces to existing libraries (e.g. LIBSVM):

Supervised Learning: Back-Propagation, R-Prop, Support-Vector-Machines (LIBSVM interface), Evolino
Unsupervised Learning: K-Means Clustering, PCA/pPCA, LSH for Hamming and Euclidean Spaces, Deep Belief Networks
Reinforcement Learning:
  Value-based: Q-Learning (with/without eligibility traces), SARSA, Neural Fitted Q-iteration
  Policy Gradients: REINFORCE, Natural Actor-Critic
  Exploration Methods: Epsilon-Greedy Exploration (discrete), Boltzmann Exploration (discrete), Gaussian Exploration (continuous), State-Dependent Exploration (continuous)
Black-box Optimization

Networks

Tools

Sampling Distribution of Difference Between Means
Author(s): David M. Lane
Prerequisites: Sampling Distributions, Sampling Distribution of the Mean, Variance Sum Law I

Learning Objectives:
- State the mean and variance of the sampling distribution of the difference between means
- Compute the standard error of the difference between means
- Compute the probability of a difference between means being above a specified value

The sampling distribution of the difference between means can be thought of as the distribution that would result if we repeated the following three steps over and over again: (1) sample n1 scores from Population 1 and n2 scores from Population 2, (2) compute the means of the two samples (M1 and M2), and (3) compute the difference between means, M1 - M2. As you might expect, the mean of the sampling distribution of the difference between means is μ_{M1-M2} = μ_1 - μ_2, which says that the mean of the distribution of differences between sample means is equal to the difference between population means.
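
As a concrete illustration of the three-step procedure, here is a minimal R sketch that simulates the sampling distribution and compares its spread to the analytic standard error; the population parameters and sample sizes are made-up values for the example:

# Simulate the sampling distribution of M1 - M2
# (population values below are illustrative assumptions, not from the text)
set.seed(1)
mu1 <- 10; sigma1 <- 2; n1 <- 25
mu2 <- 8;  sigma2 <- 3; n2 <- 16
diffs <- replicate(10000, {
  m1 <- mean(rnorm(n1, mu1, sigma1))  # steps 1-2: sample and average
  m2 <- mean(rnorm(n2, mu2, sigma2))
  m1 - m2                             # step 3: difference between means
})
mean(diffs)                           # close to mu1 - mu2 = 2
sd(diffs)                             # close to the analytic standard error
sqrt(sigma1^2 / n1 + sigma2^2 / n2)   # SE from the variance sum law: 0.85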

Weka 3 - Data Mining with Open Source Machine Learning Software in Java
Weka is tried and tested open source machine learning software that can be accessed through a graphical user interface, standard terminal applications, or a Java API. It is widely used for teaching, research, and industrial applications, contains a plethora of built-in tools for standard machine learning tasks, and additionally gives transparent access to well-known toolboxes such as scikit-learn, R, and Deeplearning4j.

Dirichlet Process, Infinite Mixture Models, and Clustering

# Generate some fake data with some uniform random means
generateFakeData <- function(num.vars=3, n=100, num.clusters=5, seed=NULL) {
  if (is.null(seed)) { set.seed(runif(1, 0, 100)) } else { set.seed(seed) }
  data <- data.frame(matrix(NA, nrow=n, ncol=num.vars+1))
  mu <- NULL
  for (m in 1:num.vars) {
    mu <- cbind(mu, rnorm(num.clusters, runif(1, -10, 15), 5))
  }
  for (i in 1:n) {
    cluster <- sample(1:num.clusters, 1)
    data[i, 1] <- cluster
    for (j in 1:num.vars) {
      data[i, j+1] <- rnorm(1, mu[cluster, j], 1)
    }
  }
  data$X1 <- factor(data$X1)
  var.names <- paste("VAR", seq(1, ncol(data)-1), sep="")
  names(data) <- c("cluster", var.names)
  return(data)
}

# Set up a procedure to calculate the cluster means using squared distance
dirichletClusters <- function(orig.data, disp.param=NULL, max.iter=100, tolerance=.001) {
  n <- nrow(orig.data)
  data <- as.matrix(orig.data)
  pick.clusters <- rep(1, n)
  k <- 1
  mu <- matrix(apply(data, 2, mean), nrow=1, ncol=ncol(data))
  is.converged <- FALSE
  iteration <- 0
  ss.old <- Inf
  ss.curr <- Inf
  # The loop body was truncated in the original; what follows is the standard
  # DP-means update that the surrounding code implies.
  while (!is.converged && iteration < max.iter) {
    iteration <- iteration + 1
    for (i in 1:n) {
      # squared Euclidean distance to each current cluster mean
      dists <- apply(mu, 1, function(m) sum((data[i, ] - m)^2))
      if (min(dists) > disp.param) {
        # point is far from every mean: open a new cluster at this point
        k <- k + 1
        pick.clusters[i] <- k
        mu <- rbind(mu, data[i, ])
      } else {
        pick.clusters[i] <- which.min(dists)
      }
    }
    # recompute each cluster mean from its assigned points
    for (j in 1:k) {
      if (any(pick.clusters == j)) {
        mu[j, ] <- apply(data[pick.clusters == j, , drop=FALSE], 2, mean)
      }
    }
    # converged when the within-cluster sum of squares stops changing
    ss.curr <- sum((data - mu[pick.clusters, ])^2)
    is.converged <- abs(ss.old - ss.curr) < tolerance
    ss.old <- ss.curr
  }
  list(centers=mu, cluster=factor(pick.clusters), k=k, iterations=iteration)
}
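
A quick usage sketch of the two functions above; the dispersion parameter value here is an illustrative guess, not a recommended setting:

fake <- generateFakeData(num.vars=2, n=200, num.clusters=4, seed=42)
fit  <- dirichletClusters(fake[, -1], disp.param=25)
# cross-tabulate true generating clusters against recovered clusters
table(truth=fake$cluster, found=fit$cluster)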

About: BayesOpt is an efficient C++ implementation of the Bayesian optimization methodology for nonlinear optimization, experimental design and stochastic bandits. In the literature it is also called Sequential Kriging Optimization (SKO) or Efficient Global Optimization (EGO). There are also interfaces for C, Matlab/Octave and Python.

Changes:
- Complete refactoring of inner parts of the library.
- Updated to the latest version of NLOPT (2.4.1).
- Error codes replaced with exceptions in the C++ interface.
- API modified to support new learning methods for kernel hyperparameters (e.g. MCMC).
- Added configuration of random numbers (can be fixed for debugging).
- Improved numerical results (e.g. hyperparameter optimization is done in log space).
- More examples and tests.
- Fixed bugs.
- The number of inner iterations has been increased by default, so overall optimization time using the default configuration might be slower, but with improved results.

Generalized linear model
In statistics, the generalized linear model (GLM) is a flexible generalization of ordinary linear regression that allows for response variables that have error distribution models other than a normal distribution. The GLM generalizes linear regression by allowing the linear model to be related to the response variable via a link function and by allowing the magnitude of the variance of each measurement to be a function of its predicted value.

Intuition
Ordinary linear regression predicts the expected value of a given unknown quantity (the response variable, a random variable) as a linear combination of a set of observed values (predictors). This implies that a constant change in a predictor leads to a constant change in the response variable (a linear-response model). However, these assumptions are inappropriate for many types of response variables, such as counts or quantities that must remain positive and can vary over a wide range. Similarly, a model that predicts a probability of making a yes/no choice (a Bernoulli variable) is even less suitable as a linear-response model, since probabilities are bounded on both ends (they must be between 0 and 1).

Overview
Model components
A GLM consists of three elements:
1. a probability distribution for the response variable, drawn from the exponential family;
2. a linear predictor η = Xβ;
3. a link function g such that E(Y) = μ = g⁻¹(η).
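
A short R sketch of the three components in action, fitting a binomial-family GLM with a logit link to simulated yes/no data; the data and coefficient values are assumptions for the example:

set.seed(1)
x <- rnorm(200)
# assumed true model for illustration: logit(p) = -0.5 + 1.5 * x
p <- 1 / (1 + exp(-(-0.5 + 1.5 * x)))
y <- rbinom(200, size=1, prob=p)
# binomial distribution + logit link keeps predictions inside (0, 1)
fit <- glm(y ~ x, family=binomial(link="logit"))
summary(fit)$coefficients          # estimates near the assumed -0.5 and 1.5
# the inverse link g^{-1} maps the linear predictor back to probabilities
head(predict(fit, type="response"))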

SourceForge.net: boost your Machine Learning projects - Project Web Hosting - Open Source Software
projects:lasvm [Léon Bottou]

1. Introduction
LASVM is an approximate SVM solver that uses online approximation. It reaches accuracies similar to that of a real SVM after performing a single sequential pass through the training examples. Further benefits can be achieved using selective sampling techniques to choose which example should be considered next. LASVM also requires considerably less memory than a regular SVM solver. See the LASVM paper for the details.

2. We provide a complete implementation of LASVM under the well-known GNU General Public License. This source code contains a small C library implementing the kernel cache and the basic process and reprocess operations. These programs can handle three data file formats:

LIBSVM/SVMLight files: These files represent examples using a simple text format:
<line> = <target> <feature>:<value> ...
The target value and each of the feature/value pairs are separated by a space character.

Binary files: Binary files take less space and load faster.

Split files
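
As a rough illustration of the text format above, here is a minimal R sketch that parses one such line into its target and feature/value pairs (the function name is made up for this example and is not part of LASVM):

# Parse one LIBSVM/SVMLight-format line such as "+1 3:0.5 7:1.2"
parse_libsvm_line <- function(line) {
  tokens <- strsplit(trimws(line), " +")[[1]]
  target <- as.numeric(tokens[1])              # first token is the target
  pairs  <- strsplit(tokens[-1], ":", fixed=TRUE)
  list(target = target,
       index  = as.integer(vapply(pairs, `[`, "", 1)),  # feature indices
       value  = as.numeric(vapply(pairs, `[`, "", 2)))  # feature values
}
parse_libsvm_line("+1 3:0.5 7:1.2")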

Linear regression
In statistics, linear regression is an approach for modeling the relationship between a scalar dependent variable y and one or more explanatory variables denoted X. The case of one explanatory variable is called simple linear regression. For more than one explanatory variable, the process is called multiple linear regression. (This term should be distinguished from multivariate linear regression, where multiple correlated dependent variables are predicted, rather than a single scalar variable.) In linear regression, data are modeled using linear predictor functions, and unknown model parameters are estimated from the data. Linear regression was the first type of regression analysis to be studied rigorously, and to be used extensively in practical applications. Linear regression has many practical uses. If the goal is prediction, forecasting, or error reduction, linear regression can be used to fit a predictive model to an observed data set of y and X values. In the simple case of one explanatory variable, the model takes the form y_i = β₀ + β₁x_i + ε_i, where β₀ and β₁ are the unknown parameters and ε_i is an unobserved error term.
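
A minimal R sketch of the predictive use described above, fitting a simple linear regression to simulated data; the intercept and slope are made-up values for the example:

set.seed(2)
x <- runif(100, 0, 10)
y <- 3 + 0.8 * x + rnorm(100, sd=1)    # assumed true intercept 3, slope 0.8
fit <- lm(y ~ x)
coef(fit)                              # estimates close to c(3, 0.8)
predict(fit, newdata=data.frame(x=5))  # predictive use: forecast y at x = 5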
