
Power law
An example power-law graph can be used to demonstrate ranking of popularity: to the right is the long tail, and to the left are the few that dominate (also known as the 80–20 rule). In statistics, a power law is a functional relationship between two quantities, where a relative change in one quantity results in a proportional relative change in the other quantity, independent of the initial size of those quantities: one quantity varies as a power of another. Scale invariance: one attribute of power laws is their scale invariance. Given a relation $f(x) = ax^{-k}$, scaling the argument $x$ by a constant factor $c$ causes only a proportionate scaling of the function itself: $f(cx) = a(cx)^{-k} = c^{-k}f(x) \propto f(x)$. That is, scaling by a constant $c$ simply multiplies the original power-law relation by the constant $c^{-k}$. Taking logarithms gives $\log f(x) = \log a - k \log x$, so a power law appears as a straight line of slope $-k$ on log-log axes, and this straight line on the log-log plot is often called the signature of a power law. Lack of well-defined average value: a power law $x^{-k}$ has a well-defined mean over $x \in [1, \infty)$ only if $k > 2$, and it has a finite variance only if $k > 3$.
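To see the straight-line signature concretely, here is a minimal Python sketch (my own illustration, not from the article) that samples $f(x) = ax^{-k}$ and recovers the exponent from a linear fit in log-log space:

```python
import numpy as np

# Illustrative sketch: generate y = a * x**(-k) and recover k
# from the slope of a straight-line fit in log-log space.
a, k = 2.0, 1.5            # assumed power-law parameters
x = np.logspace(0, 3, 50)  # x from 1 to 1000
y = a * x ** (-k)

# log y = log a - k * log x, so the slope of the fit is -k.
slope, intercept = np.polyfit(np.log(x), np.log(y), 1)
print(f"recovered exponent k = {-slope:.3f}")              # ~1.500
print(f"recovered prefactor a = {np.exp(intercept):.3f}")  # ~2.000
```

With noisy empirical data the same fit gives a quick (if statistically crude) estimate of the exponent.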

Table of mathematical symbols. When reading the list, it is important to recognize that a mathematical concept is independent of the symbol chosen to represent it. For many of the symbols below, the symbol is usually synonymous with the corresponding concept (ultimately an arbitrary choice made as a result of the cumulative history of mathematics), but in some situations a different convention may be used. For example, depending on context, the triple bar "≡" may represent congruence or a definition. Each symbol is shown both in HTML, whose display depends on the browser's access to an appropriate font installed on the particular device, and in TeX, as an image. Guide: the list is organized by symbol type and is intended to facilitate finding an unfamiliar symbol by its visual appearance. The sections cover basic symbols (those widely used in mathematics, roughly through first-year calculus), symbols based on the equality sign, symbols that point left or right, brackets, and other non-letter symbols.
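As a concrete illustration of that context dependence, here is a short LaTeX fragment (my own example, not from the list) using the triple bar first for congruence and then for a definition:

```latex
% Same triple bar, two different conventions:
a \equiv b \pmod{n}   % congruence: a and b leave the same remainder mod n
f(x) \equiv x^2 + 1   % definition/identity: f(x) is x^2 + 1 for all x
```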

The Small-World Phenomenon: An Algorithmic Perspective. Jon Kleinberg. Abstract: Long a matter of folklore, the "small-world phenomenon" -- the principle that we are all linked by short chains of acquaintances -- was inaugurated as an area of experimental study in the social sciences through the pioneering work of Stanley Milgram in the 1960s. This work was among the first to make the phenomenon quantitative, allowing people to speak of the "six degrees of separation" between any two people in the United States. Since then, a number of network models have been proposed as frameworks in which to study the problem analytically. But existing models are insufficient to explain the striking algorithmic component of Milgram's original findings: that individuals using only local information are collectively very effective at actually constructing short paths between two points in a social network. Milgram's basic small-world experiment remains one of the most compelling ways to think about the problem.
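The algorithmic component in question is decentralized routing: each person forwards the letter to the acquaintance who seems closest to the target. A minimal sketch of that greedy forwarding rule, on a ring with one random long-range contact per node (my own simplification for illustration; Kleinberg's model uses a two-dimensional grid and a distance-dependent contact distribution):

```python
import random

def greedy_route(n, source, target, long_range):
    """Greedy routing on an n-node ring where each node also has one
    long-range contact. At each step, forward to whichever neighbor
    (left, right, or long-range) is closest to the target."""
    def dist(a, b):
        d = abs(a - b)
        return min(d, n - d)  # distance measured around the ring

    path = [source]
    current = source
    while current != target:
        neighbors = [(current - 1) % n, (current + 1) % n, long_range[current]]
        current = min(neighbors, key=lambda v: dist(v, target))
        path.append(current)
    return path

n = 100
random.seed(0)
# Each node gets one uniformly random long-range contact. Kleinberg's
# result is that the *distribution* of these contacts determines whether
# greedy routing finds short paths; uniform contacts are not optimal.
long_range = {v: random.randrange(n) for v in range(n)}
print(greedy_route(n, source=0, target=50, long_range=long_range))
```

The routing always terminates because the ring neighbor in the target's direction strictly decreases the remaining distance at every step.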

Miller's law. Miller's law can refer to three different principles. In communication: Miller's law was formulated by George Miller, a Princeton professor and psychologist, as part of his theory of communication. It instructs us to suspend judgment about what someone is saying so we can first understand them without imbuing their message with our own personal interpretations. The law states: "To understand what another person is saying, you must assume that it is true and try to imagine what it could be true of." The point is not to blindly accept what people say, but to do a better job of listening for understanding. In psychology: the observation, also by George Armitage Miller, that the number of objects an average person can hold in working memory is about seven.[3] In software development: Miller's Law was formulated by Mike Beltzner and is named in respect of Dave Miller, long-standing owner of the Bugzilla product.

THE LAST DAYS OF THE POLYMATH. People who know a lot about a lot have long been an exclusive club, but now they are an endangered species. Edward Carr tracks some down ... From INTELLIGENT LIFE Magazine, Autumn 2009. CARL DJERASSI can remember the moment when he became a writer. His wife, Diane Middlebrook, thought it was a ridiculous idea. Even at 85, slight and snowy-haired, Djerassi is a determined man. Eventually Djerassi got the bound galleys of his book. Diane Middlebrook died of cancer in 2007 and, as Djerassi speaks, her presence grows stronger. Carl Djerassi is a polymath. His latest book, “Four Jews on Parnassus”, is an imagined series of debates between Theodor Adorno, Arnold Schönberg, Walter Benjamin and Gershom Scholem, which touches on art, music, philosophy and Jewish identity. The word “polymath” teeters somewhere between Leonardo da Vinci and Stephen Fry. “To me, promiscuity is a way of flitting around.” Djerassi is right to be suspicious of flitting. Young's achievements are staggering.

Clustering coefficient. In graph theory, a clustering coefficient is a measure of the degree to which nodes in a graph tend to cluster together. Evidence suggests that in most real-world networks, and in particular social networks, nodes tend to create tightly knit groups characterised by a relatively high density of ties; this likelihood tends to be greater than the average probability of a tie randomly established between two nodes (Holland and Leinhardt, 1971;[1] Watts and Strogatz, 1998[2]). Two versions of this measure exist: the global and the local. The global version was designed to give an overall indication of the clustering in the network, whereas the local gives an indication of the embeddedness of single nodes. Global clustering coefficient: the global clustering coefficient is based on triplets of nodes. Watts and Strogatz defined the clustering coefficient as follows: "Suppose that a vertex $v$ has $k_v$ neighbours; then at most $k_v(k_v - 1)/2$ edges can exist between them (this occurs when every neighbour of $v$ is connected to every other neighbour of $v$). Let $C_v$ denote the fraction of these allowable edges that actually exist. Define $C$ as the average of $C_v$ over all $v$."
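A minimal Python sketch of that local coefficient (the toy graph and function names are illustrative, not from the article):

```python
from itertools import combinations

def local_clustering(adj, v):
    """Fraction of possible edges among v's neighbours that exist.
    adj maps each vertex to the set of its neighbours."""
    neighbours = adj[v]
    k = len(neighbours)
    if k < 2:
        return 0.0  # no pair of neighbours; conventionally 0
    possible = k * (k - 1) / 2
    actual = sum(1 for a, b in combinations(neighbours, 2) if b in adj[a])
    return actual / possible

# Toy graph: a triangle 0-1-2 plus a pendant vertex 3 attached to 0.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
print(local_clustering(adj, 0))  # 1 edge among 3 possible pairs -> 0.333...

# Averaging over all vertices gives the network clustering coefficient C.
print(sum(local_clustering(adj, v) for v in adj) / len(adj))
```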

Hick's law. Hick's law, or the Hick–Hyman law, named after British psychologist William Edmund Hick and Ray Hyman, describes the time it takes for a person to make a decision as a function of the number of possible choices available: increasing the number of choices increases the decision time logarithmically. The Hick–Hyman law assesses cognitive information capacity in choice reaction experiments. The amount of time taken to process a given number of bits in the Hick–Hyman law is known as the rate of gain of information. Background: Hick first began experimenting with this theory in 1951. Hick performed a second experiment using the same task, while keeping the number of alternatives at 10. While Hick was stating that the relationship between reaction time and the number of choices was logarithmic, Hyman wanted to better understand the relationship between the reaction time and the mean number of choices. Law: given $n$ equally probable choices, the average reaction time $T$ is approximately $T = b \log_2(n + 1)$, where $b$ is a constant determined empirically by fitting. In the case of choices with unequal probabilities $p_i$, the law can be generalized as $T = bH$, where $H = \sum_i p_i \log_2(1/p_i + 1)$ is the information-theoretic entropy of the decision.
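A small worked sketch of both forms in Python (the value of b is arbitrary, chosen only for illustration; in experiments it is fitted to reaction-time data):

```python
import math

def hick_equal(n, b=0.2):
    """Decision time for n equally likely choices: T = b * log2(n + 1)."""
    return b * math.log2(n + 1)

def hick_general(probs, b=0.2):
    """Generalized form for unequal choice probabilities:
    T = b * sum(p_i * log2(1/p_i + 1))."""
    return b * sum(p * math.log2(1 / p + 1) for p in probs)

print(hick_equal(10))  # ~0.69 (arbitrary time units)

# Sanity check: four equally likely choices reduce the general form
# to the simple one, since each p_i = 1/4.
print(hick_equal(4), hick_general([0.25] * 4))  # both ~0.464
```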

An Introduction to Wavelets: What Do Some Wavelets Look Like? Wavelet transforms comprise an infinite set. The different wavelet families make different trade-offs between how compactly the basis functions are localized in space and how smooth they are. Some of the wavelet bases have fractal structure. This figure was generated using the WaveLab command: wave = MakeWavelet(2, -4, 'Daubechies', 4, 'Mother', 2048). Within each family of wavelets (such as the Daubechies family) are wavelet subclasses distinguished by the number of coefficients and by the level of iteration. The number next to the wavelet name represents the number of vanishing moments (a stringent mathematical definition related to the number of wavelet coefficients) for the subclass of wavelet. Other examples: wave = MakeWavelet(2, -4, 'Daubechies', 6, 'Mother', 2048); wave = MakeWavelet(2, -4, 'Coiflet', 3, 'Mother', 2048); wave = MakeWavelet(0, 0, 'Haar', 4, 'Mother', 512); wave = MakeWavelet(2, -4, 'Symmlet', 6, 'Mother', 2048).
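For readers without WaveLab, a rough Python equivalent using the PyWavelets library samples comparable mother wavelets (assuming PyWavelets' 'db', 'coif', 'sym', and 'haar' identifiers correspond to the families named above):

```python
import pywt

# Sample the mother wavelet psi for a few families: 'db' is Daubechies,
# 'coif' is Coiflet, 'sym' is Symmlet, 'haar' is the Haar wavelet.
for name in ["db4", "coif3", "sym6", "haar"]:
    wavelet = pywt.Wavelet(name)
    # For orthogonal wavelets, wavefun returns the scaling function phi,
    # the wavelet psi, and the x grid on which they are sampled.
    phi, psi, x = wavelet.wavefun(level=8)
    print(f"{name}: {len(psi)} samples, support [{x[0]:.1f}, {x[-1]:.1f}]")
```

Plotting psi against x for each name reproduces the kind of family comparison the figure shows.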

Exclusive: How Google's Algorithm Rules the Web | Wired Magazine. Want to know how Google is about to change your life? Stop by the Ouagadougou conference room on a Thursday morning. It is here, at the Mountain View, California, headquarters of the world’s most powerful Internet company, that a room filled with three dozen engineers, product managers, and executives figures out how to make their search engine even smarter. This year, Google will introduce 550 or so improvements to its fabled algorithm, and each will be determined at a gathering just like this one. You might think that after a solid decade of search-market dominance, Google could relax. Still, the biggest threat to Google can be found 850 miles to the north: Bing. Team Bing has been focusing on unique instances where Google’s algorithms don’t always satisfy. Even the Bingers confess that, when it comes to the simple task of taking a search term and returning relevant results, Google is still miles ahead. Google’s response can be summed up in four words: mike siwek lawyer mi.

Fitts's law. From Wikipedia, the free encyclopedia. The model: mathematically, Fitts's law has been formulated in several different ways. One common form is the Shannon formulation (proposed by Scott MacKenzie, and named for its resemblance to the Shannon–Hartley theorem), for movement along a single dimension: $T = a + b \log_2(1 + D/L)$, where $T$ is the average time taken to complete the movement; $a$ and $b$ are parameters that can be determined empirically by linear regression; $D$ is the distance from the starting point to the centre of the target; and $L$ is the width of the target measured along the axis of motion. $L$ can also be thought of as the tolerance on the final position, since the end point of the movement may fall anywhere within plus or minus $L/2$ of the target's centre.
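A small Python sketch of the Shannon formulation (the coefficients a and b are invented for illustration; in practice they come from linear regression on measured movement times):

```python
import math

def fitts_time(distance, width, a=0.1, b=0.15):
    """Average movement time by the Shannon formulation:
    T = a + b * log2(1 + D / L)."""
    return a + b * math.log2(1 + distance / width)

# Widening the target lowers the index of difficulty, and hence the time:
print(fitts_time(distance=512, width=16))  # harder: small, distant target
print(fitts_time(distance=512, width=32))  # easier: same distance, wider target
```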

Primality Proving 2.1: Finding very small primes. For finding all the small primes, say all those less than 10,000,000,000, one of the most efficient ways is the Sieve of Eratosthenes (ca. 240 BC): make a list of all the integers less than or equal to n (and greater than one) and strike out the multiples of all primes less than or equal to the square root of n; the numbers that are left are the primes. (See also our glossary page.) For example, to find all the odd primes less than or equal to 100, we first list the odd numbers from 3 to 100 (why even list the evens?). The first number is 3, so it is the first odd prime; cross out all of its multiples. Now the first number left is 5, the second odd prime; cross out all of its multiples. This method is so fast that there is no reason to store a large list of primes on a computer: an efficient implementation can find them faster than a computer can read from a disk. To find individual small primes, trial division works well.
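A compact Python sketch of the sieve as described (storing all integers rather than only the odds, for clarity):

```python
def sieve(n):
    """Sieve of Eratosthenes: return all primes <= n by striking out
    multiples of each prime up to sqrt(n)."""
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    p = 2
    while p * p <= n:
        if is_prime[p]:
            # Strike out multiples; start at p*p, since smaller
            # multiples were already removed by smaller primes.
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
        p += 1
    return [i for i, prime in enumerate(is_prime) if prime]

print(sieve(100))  # the 25 primes up to 100, beginning 2, 3, 5, 7, ...
```

An odds-only variant, as in the worked example above, halves the memory at the cost of slightly fiddlier indexing.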
