
Machine Learning Resources


yooreeka - Google Code The Yooreeka project started with the code of the book "Algorithms of the Intelligent Web" (Manning, 2009). Although the term "Web" prevailed in the title, the algorithms are, in essence, valuable in any software application. An errata page for the book has been posted here. The second major revision of the code (v. 2.x) will introduce some enhancements and new features, and it will restructure the packages under the root org.yooreeka. You can find the Yooreeka 2.0 API (Javadoc) here, and you can also visit us at our Google+ home. Lastly, Yooreeka 2.0 will be licensed under the Apache License rather than the somewhat more restrictive LGPL.

Machine Learning Department - Carnegie Mellon University

Fisher's method Under Fisher's method, two small p-values P1 and P2 combine to form a smaller p-value. The yellow-green boundary defines the region where the meta-analysis p-value is below 0.05. For example, if both p-values are around 0.10, or if one is around 0.04 and the other around 0.25, the meta-analysis p-value is around 0.05. In statistics, Fisher's method,[1][2] also known as Fisher's combined probability test, is a technique for data fusion or "meta-analysis" (analysis of analyses). It was developed by and named for Ronald Fisher. In its basic form, it is used to combine the results from several independent tests bearing upon the same overall hypothesis (H0). Application to independent test statistics: Fisher's method combines the extreme-value probabilities from each test, commonly known as "p-values", into one test statistic X² using the formula X² = -2 Σ_{i=1}^{k} ln(p_i), where p_i is the p-value for the i-th of the k hypothesis tests. Under H0 (with independent tests), X² follows a chi-squared distribution with 2k degrees of freedom, from which the combined p-value is read off. The independence assumption is essential; extensions exist for dependent test statistics.
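The combination rule above can be sketched in a few lines of Python. The closed-form chi-squared tail used here (valid because the degrees of freedom, 2k, are always even) is an implementation convenience, not part of the source text:

```python
import math

def fisher_combine(pvalues):
    """Combine independent p-values with Fisher's method.

    X^2 = -2 * sum(ln(p_i)) follows a chi-squared distribution with
    2k degrees of freedom under H0, where k = number of tests.
    """
    k = len(pvalues)
    x2 = -2.0 * sum(math.log(p) for p in pvalues)
    # Chi-squared survival function for even df = 2k has the closed form:
    #   P(X^2 > x) = exp(-x/2) * sum_{j=0}^{k-1} (x/2)^j / j!
    half = x2 / 2.0
    term, total = 1.0, 1.0
    for j in range(1, k):
        term *= half / j
        total += term
    return math.exp(-half) * total

# Two tests with p = 0.10 each combine to roughly 0.056, matching the
# "around 0.05" example in the text; so does the pair (0.04, 0.25),
# since Fisher's statistic depends only on the product of the p-values.
print(fisher_combine([0.10, 0.10]))
print(fisher_combine([0.04, 0.25]))
```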

cool ML links commendo research & consulting GmbH International Expedia Challenge: First Place for commendo "Learning to rank hotels to maximize purchases": commendo research performed best and took first place in the international Expedia competition, which sought models to better personalize Expedia hotel searches! more information 2012: The journey continues... commendo is now part of Opera Solutions Opera Solutions Adds Leading Recommender Engine Capabilities with the Acquisition of Commendo Research & Consulting more information The year 2009: The journey begins... fabulously! Netflix Prize and Federal State Award of Austria winner commendo came out on top in both the Netflix Prize and the 2009 Austrian Federal State Award for Consulting and Information Technology! more information

Mining the Web: Additional readings Soumen Chakrabarti Here I will post comments and additional readings organized by chapters in the book, or propose new sections and chapters.
Chapter 1, Introduction: General additional reading: Baeza-Yates, R. and Ribeiro-Neto, B. (1999). Modern Information Retrieval.
Chapter 2, Crawling and monitoring the Web: Additional open-source crawlers: Archive.org's crawler, UbiCrawler. The first edition has no discussion of maintaining crawls and keeping them fresh: Brewington and Cybenko; Squillante et al.; Tomlin et al.; Cho, Olston, Pandey, Ntoulas.
Chapter 3, Indexing and search: Blelloch's papers on index compression via document and term ID assignment. Text search support in XML: ELIXIR, XIRQL, XRank, TeXQuery.
Chapter 4, Similarity and clustering: Corpus models: mixture, aspect, latent Dirichlet, GaP. Simpler treatment of EM, more discussion of pitfalls. NMF with square loss and divergence loss.
Chapter 5, Supervised learning from feature vectors
Chapter 6, Semi-supervised learning: Learning graphical models

Recommender system Recommender systems or recommendation systems (sometimes replacing "system" with a synonym such as platform or engine) are a subclass of information filtering system that seek to predict the 'rating' or 'preference' that a user would give to an item.[1][2] Recommender systems have become extremely common in recent years and are applied in a variety of domains. The most popular are probably movies, music, news, books, research articles, search queries, social tags, and products in general. However, there are also recommender systems for experts, jokes, restaurants, financial services, life insurance, persons (online dating), and Twitter followers.[3] Overview: The differences between collaborative and content-based filtering can be demonstrated by comparing two popular music recommender systems - Last.fm and Pandora Radio. Each type of system has its own strengths and weaknesses. Recommender systems are an active research area in the data mining and machine learning communities.
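The collaborative side of that comparison can be sketched with user-based collaborative filtering: predict a rating as a similarity-weighted average of what similar users gave the item. The user names, items, and ratings below are made up for illustration:

```python
import math

# Toy user-item ratings (unrated items simply absent); values are invented.
ratings = {
    "alice": {"item_a": 5, "item_b": 3, "item_c": 4},
    "bob":   {"item_a": 4, "item_b": 3, "item_c": 5, "item_d": 4},
    "carol": {"item_a": 1, "item_b": 5, "item_d": 2},
}

def cosine(u, v):
    """Cosine similarity over the items both users rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    nu = math.sqrt(sum(u[i] ** 2 for i in common))
    nv = math.sqrt(sum(v[i] ** 2 for i in common))
    return dot / (nu * nv)

def predict(user, item):
    """Similarity-weighted average of other users' ratings for `item`."""
    num = den = 0.0
    for other, r in ratings.items():
        if other == user or item not in r:
            continue
        s = cosine(ratings[user], r)
        num += s * r[item]
        den += s
    return num / den if den else None

# Alice never rated item_d; borrow from bob (very similar) and carol (less so).
print(predict("alice", "item_d"))
```

A content-based system like Pandora would instead compare item attributes (e.g. musical features) to a profile of what the user already likes, with no need for other users' ratings.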

Boltzmann machine A graphical representation of an example Boltzmann machine. Each undirected edge represents dependency. In this example there are 3 hidden units and 4 visible units. This is not a restricted Boltzmann machine. A Boltzmann machine is a type of stochastic recurrent neural network invented by Geoffrey Hinton and Terry Sejnowski in 1985. Boltzmann machines can be seen as the stochastic, generative counterpart of Hopfield nets. They are named after the Boltzmann distribution in statistical mechanics, which is used in their sampling function. Structure: A Boltzmann machine, like a Hopfield network, is a network of units with a global "energy" defined for the network. The energy E in a Boltzmann machine is identical in form to that of a Hopfield network: E = -(Σ_{i<j} w_ij s_i s_j + Σ_i θ_i s_i), where w_ij is the connection strength between unit j and unit i, s_i ∈ {0, 1} is the state of unit i, and θ_i is the bias of unit i. The connections in a Boltzmann machine have two restrictions: no unit has a connection with itself (w_ii = 0 for all i), and all connections are symmetric (w_ij = w_ji). Often the weights are represented in matrix form with a symmetric matrix W, with zeros along the diagonal.

Challenges in Building Large-Scale Information Retrieval Systems Building and operating large-scale information retrieval systems used by hundreds of millions of people around the world provides a number of interesting challenges. Designing such systems requires making complex design tradeoffs in a number of dimensions, including (a) the number of user queries that must be handled per second and the response latency to these requests, (b) the number and size of various corpora that are searched, (c) the latency and frequency with which documents are updated or added to the corpora, and (d) the quality and cost of the ranking algorithms that are used for retrieval. In this talk I'll discuss the evolution of Google's hardware infrastructure and information retrieval systems and some of the design challenges that arise from ever-increasing demands in all of these dimensions. I'll also describe how we use various pieces of distributed systems infrastructure when building these retrieval systems.

Fast Gradient Descent - Machine Learning (Theory) Fast Gradient Descent Nic Schraudolph has been developing a fast gradient descent algorithm called Stochastic Meta-Descent (SMD). Gradient descent is currently untrendy in the machine learning community, but there remain a large number of people using gradient descent on neural networks or other architectures from when it was trendy in the early 1990s. Gradient descent does not necessarily produce easily reproduced results. Many people would add point (4): gradient descent on many architectures does not result in a global optimum. SMD addresses point (3).
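The excerpt names SMD but does not reproduce it, so as a baseline here is plain gradient descent on a toy quadratic (the function and learning rate are invented for illustration). SMD's contribution is adapting a separate step size per parameter online; this sketch uses a single fixed learning rate:

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Plain gradient descent: repeatedly step against the gradient."""
    x = list(x0)
    for _ in range(steps):
        g = grad(x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

# Minimize f(x, y) = (x - 3)^2 + 2 * (y + 1)^2; its gradient is below.
grad = lambda p: [2 * (p[0] - 3), 4 * (p[1] + 1)]
print(gradient_descent(grad, [0.0, 0.0]))  # approaches the minimum at (3, -1)
```

Note the different curvatures along x and y: a single global learning rate must be small enough for the steepest direction, which is exactly the inefficiency that per-parameter adaptive methods like SMD target.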

Related: