
Hilarymason.com


Machine Learning in Gradient Descent

In machine learning, gradient descent is a very popular learning mechanism based on a greedy, hill-climbing approach.

Gradient Descent

The basic idea of gradient descent is to use a feedback loop to adjust the model based on the error it observes between its predicted output and the actual output. The adjustment (note that there are multiple model parameters, so it should be treated as a vector) points in the direction in which the error decreases most steeply (hence the term "gradient"). We intentionally leave the following items vaguely defined so the approach remains applicable to a wide range of machine learning scenarios:

- The model
- The loss function
- The learning rate

Gradient descent is a very popular method for the following reasons ...

Batch vs. Online Learning

In batch learning, all training data is fed to the model, which estimates the output for all data points. In online learning, the learning rate η is typically decayed over time, for example η = η_initial / √t.

Parallel Learning
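The feedback loop described above can be sketched for the simplest case: a one-dimensional linear model trained with batch gradient descent under a squared-error loss. This is an illustrative sketch, not code from the original post; the data, learning rate, and step count are assumptions chosen so the loop converges.

```python
# Batch gradient descent for 1-D linear regression: predict y = w*x + b.
# Loss: mean squared error. The gradient points uphill in the loss,
# so each update steps against it, scaled by the learning rate eta.

def gradient_descent(xs, ys, eta=0.05, steps=500):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Prediction error for each point: the feedback signal.
        errs = [(w * x + b) - y for x, y in zip(xs, ys)]
        grad_w = (2.0 / n) * sum(e * x for e, x in zip(errs, xs))
        grad_b = (2.0 / n) * sum(errs)
        w -= eta * grad_w
        b -= eta * grad_b
    return w, b

# Toy data generated by y = 2x + 1, so we expect w ≈ 2 and b ≈ 1.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]
w, b = gradient_descent(xs, ys)
```

Swapping the fixed `eta` for a decaying schedule such as `eta / (t ** 0.5)` inside the loop gives the online-learning variant mentioned in the excerpt.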

Data Mining: Text Mining, Visualization and Social Media

The Turing Test for artificial intelligence is a reasonably well understood idea: if, through a written form of communication, a machine can convince a human that it too is a human, then it passes the test. The elegance of this approach (which I believe is its primary attraction) is that it avoids any troublesome definition of intelligence and appeals to an innate ability in humans to detect entities which are not 'one of us'. This form of AI is the one generally presented in entertainment (films, novels, etc.). However, to an engineer, there are some problems with this as the accepted popular idea of artificial intelligence. I believe that software engineering can be evaluated in a simple measure of productivity. Turing AI, while clearly an interesting intellectual concept, is like building an artificial bird instead of building an aeroplane: When we achieve this, we will have built an AI, but it won't be a Turing AI and it may not even pass the Turing Test.

About

I am the Founder of We Wire People, and this is where I blog: on Enterprise IT and Social mostly, but sometimes also on more personal items such as spirituality and what moves me in my personal life. After almost 15 years of working for Global 100 customers via an employer, I decided to follow my heart and passion and become self-employed in order to do what I like to do best: connect people through technology in the broadest sense of the word. Across departments, companies, industries, entire countries or even continents. On this blog I share my ideas and opinions, welcoming comments and reactions: I believe in exchanging ideas and information and mutual enthusiasm in order to create new or better ways. Here's some history on my education, my work, and my spiritual journey; that last bit is a big part of my life.

My education: 30 years ago I programmed my first game, in BASIC.

My work: After university, I joined Capgemini. I really, really love the work. My business goal?

Reading and Text Mining a PDF-File in R

Here is an R-script that reads a PDF-file into R and does some text mining with it:

Culture War: Classical Statistics vs. Machine Learning

'Statistical Modeling: The Two Cultures' by L. Breiman (Statistical Science 2001, Vol. 16, No. 3, 199–231) is an interesting paper that is a must-read for anyone traditionally trained in statistics but new to the concept of machine learning. It gives perspective and context to anyone who may attempt to learn to use data mining software such as SAS Enterprise Miner, or who may take a course in machine learning (like Dr. Ng's (Stanford) YouTube lectures on machine learning). From the article, two cultures are defined: "There are two cultures in the use of statistical modeling to reach conclusions from data."

- Classical statistics / stochastic data modeling paradigm: "assumes that the data are generated by a given stochastic data model."
- Algorithmic or machine learning paradigm: "uses algorithmic models and treats the data mechanism as unknown."

In classical statistics the focus is on hypothesis testing of causes and effects, and on the interpretability of models. As Breiman states: The above article concluded:
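The R-script referenced above is not included in this excerpt. As a rough stand-in for the text-mining step it describes (assuming the PDF's text has already been extracted to a string; the stop-word list and sample text are illustrative), a minimal term-frequency sketch:

```python
import re
from collections import Counter

def term_frequencies(text, stopwords=frozenset({"the", "a", "and", "of", "to"})):
    """Tokenize, lowercase, drop short and stop words, and count terms."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return Counter(t for t in tokens if len(t) > 2 and t not in stopwords)

sample = "Statistical modeling: the two cultures. Two cultures of statistical modeling."
freqs = term_frequencies(sample)
```

Here `freqs["statistical"]` and `freqs["cultures"]` both come out as 2; a real pipeline would additionally stem the tokens and feed the counts into a document-term matrix, as typical R text-mining workflows do.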

An Introduction to WEKA - Machine Learning in Java

WEKA (Waikato Environment for Knowledge Analysis) is an open source library for machine learning, bundling lots of techniques, from Support Vector Machines to C4.5 decision trees, in a single Java package. My examples in this article are based on binary classification, but what I say is also valid for regression and in many cases for unsupervised learning. Why and when would you use a library? I'm not a fan of integrating libraries and frameworks just because they exist, but machine learning is an area where you have to rely on a library: its codified algorithms are implemented more efficiently than what you and I could possibly code in an afternoon. Efficiency means a lot in machine learning, as supervised learning is one of the few workloads that is genuinely CPU-bound and can't be optimized further with I/O improvements. For example, instantiating an unpruned C4.5 decision tree:

    J48 classifier = new J48();
    classifier.setOptions(new String[] { "-U" });

With respect to a support vector machine:

    SMO classifier = new SMO();

    double targetIndex;
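To ground the binary-classification setting the article assumes, here is a hand-rolled toy (deliberately not WEKA, whose tuned implementations are the article's whole point): a nearest-centroid classifier, with made-up two-dimensional data.

```python
def train_centroids(points, labels):
    """Compute the mean feature vector (centroid) of each class."""
    sums, counts = {}, {}
    for p, y in zip(points, labels):
        acc = sums.setdefault(y, [0.0] * len(p))
        for i, v in enumerate(p):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in acc] for y, acc in sums.items()}

def classify(centroids, p):
    """Assign p to the class whose centroid is nearest (squared Euclidean)."""
    return min(centroids,
               key=lambda y: sum((a - b) ** 2 for a, b in zip(centroids[y], p)))

# Two well-separated clusters, labelled 0 and 1.
X = [[0.0, 0.1], [0.2, 0.0], [1.0, 1.1], [0.9, 1.0]]
y = [0, 0, 1, 1]
model = train_centroids(X, y)
label = classify(model, [0.95, 1.05])  # a point near the class-1 cluster
```

A library like WEKA wraps this same train-then-classify contract (`buildClassifier` / `classifyInstance`) around far stronger algorithms such as J48 and SMO.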
