# math

## Principle of maximum entropy

The principle of maximum entropy states that, subject to precisely stated prior data (such as a proposition expressing testable information), the probability distribution that best represents the current state of knowledge is the one with the largest entropy. Stated another way: take precisely stated prior data or testable information about a probability distribution function, and consider the set of all trial probability distributions that would encode that prior data. Of those, the one with maximal information entropy is the proper distribution, according to this principle.
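As a concrete sketch (my own illustration, using the classic "Brandeis dice" setup rather than anything from the article above): among all distributions over a die's six faces with a prescribed mean, the maximum-entropy distribution has the exponential form p_i ∝ exp(λ·i), and λ can be found by bisection on the resulting mean.

```python
import math

def maxent_die(target_mean, faces=range(1, 7)):
    """Maximum-entropy distribution over die faces with a fixed mean.

    The maxent solution under a mean constraint has exponential form
    p_i proportional to exp(lam * i); we find lam by bisection, since
    the tilted mean is monotonically increasing in lam.
    """
    def mean_for(lam):
        w = [math.exp(lam * f) for f in faces]
        z = sum(w)
        return sum(f * wi for f, wi in zip(faces, w)) / z

    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(lam * f) for f in faces]
    z = sum(w)
    return [wi / z for wi in w]

# With no constraint beyond the trivial mean 3.5, maxent recovers the
# uniform 1/6 distribution; a biased mean of 4.5 tilts weight upward.
print(maxent_die(3.5))
print(maxent_die(4.5))
```

Note that the unconstrained case reproduces the uniform distribution, which is also what the principle of indifference assigns.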
## Principle of indifference

The principle of indifference (also called the principle of insufficient reason) is a rule for assigning epistemic probabilities. Suppose that there are n > 1 mutually exclusive and collectively exhaustive possibilities. The principle of indifference states that if the n possibilities are indistinguishable except for their names, then each possibility should be assigned a probability equal to 1/n. In Bayesian probability, this is the simplest non-informative prior. The principle of indifference is meaningless under the frequency interpretation of probability, in which probabilities are relative frequencies rather than degrees of belief in uncertain propositions, conditional upon a state of information.

## Simpson's paradox

In probability and statistics, Simpson's paradox, or the Yule–Simpson effect, is a paradox in which a trend that appears in different groups of data disappears when these groups are combined, and the reverse trend appears for the aggregate data. For continuous data, for example, a positive trend can appear in each of two separate groups while a negative trend appears when the data are combined. A discrete example: though the percentage of male students who obtained the scholarship for maths is higher than the percentage of female students who obtained that scholarship, and the percentage of male students who obtained the scholarship for physics is likewise higher, the percentage of male students who obtained a scholarship overall (for maths or for physics) is lower than the percentage of female students who obtained one.
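The scholarship example can be made concrete with illustrative counts (invented here for demonstration; they are not from the original article). The reversal arises because the two genders apply to the two subjects in very different proportions:

```python
# (won, applied) per gender and subject -- invented illustrative counts.
maths   = {"male": (9, 10),   "female": (80, 100)}
physics = {"male": (30, 100), "female": (5, 20)}

def rate(won, applied):
    return won / applied

# Within each subject, the male success rate is higher ...
assert rate(*maths["male"]) > rate(*maths["female"])      # 0.90 > 0.80
assert rate(*physics["male"]) > rate(*physics["female"])  # 0.30 > 0.25

# ... yet in aggregate the female rate is higher: Simpson's paradox.
male_total = rate(9 + 30, 10 + 100)     # 39/110
female_total = rate(80 + 5, 100 + 20)   # 85/120
assert female_total > male_total
print(male_total, female_total)
```

The driver is the unequal group sizes: most men applied to the low-success subject (physics), most women to the high-success one (maths), so the aggregate weights the subjects differently for each gender.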

## The Champions League of Data Mining | Technology Review

10 Feb 2012 – Rachel Metz. 27,000 experts have by now signed up for the Kaggle competitions. (Kaggle) The US start-up Kaggle organizes well-funded competitions on how new algorithms might coax usable predictions out of mounds of data. What takes academic research years sometimes succeeds here in a few weeks. There are things that have become easy to predict.

## Complexity theory

Complexity theory, a branch of theoretical computer science, deals with the complexity of algorithmically treatable problems on various mathematically defined formal machine models. The complexity of algorithms is measured by their resource consumption, usually running time or memory requirements. More specialized complexity measures are also studied, such as the size of a circuit or the number of processors required by parallel algorithms. Complexity theory differs from computability theory, which is concerned with the question of which problems can be solved algorithmically at all.
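A minimal illustration of measuring complexity as resource consumption (my own sketch, not from the text): counting comparison steps, linear search over a sorted list of n elements needs up to n comparisons, while binary search needs only about log₂ n.

```python
def linear_search_steps(xs, target):
    """Scan left to right, counting comparisons until target is found."""
    steps = 0
    for x in xs:
        steps += 1
        if x == target:
            break
    return steps

def binary_search_steps(xs, target):
    """Halve the sorted search range each round, counting comparisons."""
    steps, lo, hi = 0, 0, len(xs) - 1
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if xs[mid] == target:
            break
        elif xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

xs = list(range(1_000_000))
linear_steps = linear_search_steps(xs, 999_999)   # worst case: n steps
binary_steps = binary_search_steps(xs, 999_999)   # about log2(n) steps
print(linear_steps, binary_steps)
```

Both procedures solve the same problem; complexity theory is about exactly this gap in resource use between algorithms, and between problems.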
## Feigenbaum constants

The two Feigenbaum constants δ and α are mathematical constants that play an important role in chaos research. The numerical value of δ was first published in 1977 by the physicists Siegfried Großmann and Stefan Thomae. Mitchell Feigenbaum, who had already discovered this number in 1975 while studying the fixed points of iterated functions, published a paper on the universality of this constant in 1978.
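A rough numerical illustration (my own, not from the text): the first period-doubling bifurcations of the logistic map x ↦ r·x·(1−x) occur at r₁ = 3 and r₂ = 1 + √6 exactly; r₃ ≈ 3.544090 is a commonly cited numerical approximation. The ratios of successive gaps between bifurcation parameters converge to δ ≈ 4.6692, and even the first ratio lands nearby.

```python
import math

# Parameter values r_n at which the logistic map x -> r*x*(1-x)
# doubles the period of its attractor. r1 and r2 are exact;
# r3 is a commonly cited numerical approximation.
r1 = 3.0
r2 = 1.0 + math.sqrt(6.0)   # ~ 3.449490
r3 = 3.544090

# Successive gap ratios approximate the Feigenbaum constant delta.
delta_estimate = (r2 - r1) / (r3 - r2)
print(delta_estimate)  # first ratio, already close to delta ~ 4.6692
```

Later bifurcations crowd together geometrically at this universal rate, which is why the same δ appears across very different one-humped maps.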

## Transcendental number

A real number (or, more generally, a complex number) is called transcendental if it does not arise as a solution of an algebraic equation of any (finite) degree, i.e. an equation a_n x^n + … + a_1 x + a_0 = 0 with rational coefficients not all zero.

## Gödel's incompleteness theorems

Gödel's incompleteness theorems are two theorems of mathematical logic that establish inherent limitations of all but the most trivial axiomatic systems capable of doing arithmetic. The theorems, proven by Kurt Gödel in 1931, are important both in mathematical logic and in the philosophy of mathematics. The two results are widely, but not universally, interpreted as showing that Hilbert's program to find a complete and consistent set of axioms for all mathematics is impossible, giving a negative answer to Hilbert's second problem. The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by an "effective procedure" (e.g., a computer program, but it could be any sort of algorithm) is capable of proving all truths about the relations of the natural numbers (arithmetic). For any such system, there will always be statements about the natural numbers that are true, but that are unprovable within the system.
Baron Münchhausen pulls himself out of a mire by his own hair (illustration by Oskar Herrfurth). The Münchhausen trilemma (after Baron Münchhausen, who allegedly pulled himself and the horse on which he was sitting out of a swamp by his own hair), also called Agrippa's trilemma (after Agrippa the Skeptic), is a philosophical term coined to stress the purported impossibility of proving any truth, even in the fields of logic and mathematics. It is the name of an argument in the theory of knowledge going back to the German philosopher Hans Albert and, more traditionally, to Agrippa. If we ask of any knowledge, "How do I know that it's true?", we can offer a proof, but the same question then applies to that proof, leaving only three options: an infinite regress, a circular argument, or a dogmatic termination of the chain.