Rationality, DT and Statistics
Hello to all, Like the rest of you, I'm an aspiring rationalist. I'm also a software engineer. I design software solutions automatically.
Last Update: 12.31am, Thur 7 Aug 2003
Mr. Malfoy is new to the business of having ideas, and so when he has one, he becomes proud of himself for having it. He has not yet had enough ideas to unflinchingly discard those that are beautiful in some aspects and impractical in others; he has not yet acquired confidence in his own ability to think of better ideas as he requires them. What we are seeing here is not Mr. Malfoy's best idea, I fear, but rather his only idea. - Harry Potter and the Methods of Rationality
Consciousness, information integration, and the brain Based on a phenomenological analysis, we have argued that consciousness corresponds to the capacity to integrate information. We have then considered how such capacity can be measured, and we have developed a theoretical framework for consciousness as information integration.
Bayes' Theorem for the curious and bewildered; an excruciatingly gentle introduction. Your friends and colleagues are talking about something called "Bayes' Theorem" or "Bayes' Rule", or something called Bayesian reasoning. They sound really enthusiastic about it, too, so you google and find a webpage about Bayes' Theorem and...
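Numerically, the rule such introductions build toward can be sketched as follows. The 1% / 80% / 9.6% figures are the classic mammography numbers these tutorials typically use, assumed here purely for illustration:

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
def bayes(prior, likelihood, false_positive_rate):
    """Posterior probability of hypothesis H given positive evidence E."""
    # P(E) expands over both ways the evidence can occur: H true or H false.
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# Illustrative (assumed) numbers: 1% base rate, 80% test sensitivity,
# 9.6% false-positive rate.
posterior = bayes(prior=0.01, likelihood=0.8, false_positive_rate=0.096)
print(round(posterior, 3))  # 0.078: a positive test lifts 1% to only ~7.8%
```

The counterintuitive smallness of the posterior, despite a "reliable" test, is exactly the intuition pump these introductions rely on.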
In algorithmic information theory (a subfield of computer science), the Kolmogorov complexity (also known as descriptive complexity, Kolmogorov–Chaitin complexity, algorithmic entropy, or program-size complexity) of an object, such as a piece of text, is a measure of the computational resources needed to specify the object. It is named after Andrey Kolmogorov, who first published on the subject in 1963.[1][2] For example, consider the following two strings of length 64, each containing only lowercase letters and digits:
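Kolmogorov complexity itself is uncomputable, but a general-purpose compressor gives a rough feel for the distinction the two strings are meant to illustrate: a highly regular string compresses far better than a random-looking one of the same length. A minimal sketch, using Python's zlib as a crude upper bound (the scrambled string below is generated for illustration and is not the article's original example):

```python
import random
import string
import zlib

random.seed(0)
regular = "ab" * 32  # 64 characters, generated by one short rule
scrambled = "".join(random.choice(string.ascii_lowercase + string.digits)
                    for _ in range(64))  # 64 characters, no obvious rule

# Compressed size approximates (an upper bound on) description length:
# the patterned string shrinks, the structureless one barely does.
print(len(zlib.compress(regular.encode())))    # small
print(len(zlib.compress(scrambled.encode())))  # close to 64 plus overhead
```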
Around 1960, Ray Solomonoff founded the theory of universal inductive inference, the theory of prediction based on observations; for example, predicting the next symbol based upon a given series of symbols. The only assumption is that the environment follows some unknown but computable probability distribution. It achieves excellent theoretical results and is based on solid philosophical foundations:[1] it is a mathematically formalized Occam's razor:[2][3][4][5][6] shorter computable theories have more weight when calculating the probability of the next observation, using all computable theories that perfectly describe previous observations. Marcus Hutter's universal artificial intelligence builds upon this to calculate the expected value of an action.
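The weighting scheme can be sketched with a toy example: give each candidate theory a prior of 2^-length and renormalize over the theories that fit the data. The theory names, description lengths, and predictions below are made-up illustrative values; real Solomonoff induction sums over all computable theories and is uncomputable:

```python
observations = "010101"

# (theory name, description length in bits, predicted next symbol)
# All values are invented for illustration; each "theory" is assumed
# to reproduce the observations exactly.
theories = [
    ("repeat '01'", 5, "0"),
    ("repeat '010101'", 10, "0"),
    ("literal string then '1'", 20, "1"),
]

def posterior(theories):
    """Weight each fitting theory by 2^-length, then normalize."""
    weights = {name: 2.0 ** -length for name, length, _ in theories}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

probs = posterior(theories)
# Shorter theories dominate, so the predicted next symbol is almost surely "0".
prob_next_0 = sum(probs[name] for name, _, pred in theories if pred == "0")
print(probs)
print(prob_next_0)
```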
A sequence is a series of multiple posts on Less Wrong on the same topic, written to coherently and fully explore a particular thesis. Reading the sequences is the most systematic way to approach the Less Wrong archives. If you'd like an abridged index of the sequences, try XiXiDu's guide, or Academian's guide targeted at people who already have a science background. If you prefer books over blog posts, Thinking and Deciding by Jonathan Baron and Good and Real by Gary Drescher have been mentioned as books that overlap significantly with the sequences. (Read more about how the sequences fit in with work done by others.)
Honest disagreement is often a good sign of progress. - Gandhi Now that most communication is remote rather than face-to-face, people are comfortable disagreeing more often. How, then, can we disagree well?
A list of references and resources for LW
Updated: 2011-05-24
F = Free
E = Easy (adequate for a low educational background)
M = Memetic Hazard (controversial ideas or works of fiction)
Summary
For years, my self-education was stupid and wasteful. I learned by consuming blog posts, Wikipedia articles, classic texts, podcast episodes, popular books, video lectures, peer-reviewed papers, Teaching Company courses, and Cliff's Notes. How inefficient!
Many cognitive biases have been demonstrated by research in psychology and behavioral economics . These are systematic deviations from a standard of rationality or good judgment.
In the aftermath of many natural and man-made disasters, people often wonder why those affected were underprepared, especially when the disaster was the result of known or regularly occurring hazards (e.g., hurricanes). We study one contributing factor: prior near-miss experiences. Near misses are events that have some nontrivial expectation of ending in disaster but, by chance, do not. We demonstrate that when near misses are interpreted as disasters that did not occur, people illegitimately underestimate the danger of subsequent hazardous situations and make riskier decisions (e.g., choosing not to engage in mitigation activities for the potential hazard).
This website is devoted to the art of rationality, and as such, is a wonderful corrective to wrong facts and, more importantly, wrong procedures for finding out facts. There is, however, another type of cognitive phenomenon that I’ve come to consider particularly troublesome, because it militates against rationality in the irrationalist, and fights against contentment and curiosity in the rationalist. For lack of a better word, I’ll call it perverse-mindedness.
In statistics, the multiple comparisons, multiplicity, or multiple testing problem occurs when one considers a set of statistical inferences simultaneously[1] or infers a subset of parameters selected based on the observed values.[2] Errors in inference, including confidence intervals that fail to include their corresponding population parameters or hypothesis tests that incorrectly reject the null hypothesis, are more likely to occur when one considers the set as a whole. Several statistical techniques have been developed to prevent this from happening, allowing significance levels for single and multiple comparisons to be directly compared. These techniques generally require a stronger level of evidence to be observed in order for an individual comparison to be deemed "significant", so as to compensate for the number of inferences being made. Interest in the problem of multiple comparisons began in the 1950s with the work of Tukey and Scheffé.
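One standard technique of the kind described, demanding stronger evidence for each individual comparison, is the Bonferroni correction; a minimal sketch with made-up p-values:

```python
# With 20 independent tests at alpha = 0.05, the chance of at least one
# false positive under the null is already large.
alpha, m = 0.05, 20
p_any_false_positive = 1 - (1 - alpha) ** m
print(round(p_any_false_positive, 2))  # 0.64

# Bonferroni correction: require p < alpha/m per test, which bounds the
# family-wise error rate at alpha.
p_values = [0.001, 0.004, 0.03, 0.2]  # illustrative, made-up p-values
significant = [p for p in p_values if p < alpha / m]  # threshold = 0.0025
print(significant)  # only 0.001 survives the corrected threshold
```

The trade-off is the one the excerpt notes: the corrected threshold buys fewer false positives at the cost of reduced power for each individual comparison.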