
P versus NP problem

[Figure: diagram of the complexity classes provided that P ≠ NP. The existence of problems within NP but outside both P and NP-complete, under that assumption, was established by Ladner's theorem.[1]]

The P versus NP problem is a major unsolved problem in computer science. Informally, it asks whether every problem whose solution can be quickly verified by a computer can also be quickly solved by a computer. It was essentially first mentioned in a 1956 letter written by Kurt Gödel to John von Neumann, in which Gödel asked whether a certain NP-complete problem could be solved in quadratic or linear time.[2] The precise statement of the P = NP problem was introduced in 1971 by Stephen Cook in his seminal paper "The complexity of theorem proving procedures"[3] and is considered by many to be the most important open problem in the field.[4] It is one of the seven Millennium Prize Problems selected by the Clay Mathematics Institute to carry a US$1,000,000 prize for the first correct solution.
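To make "quickly verified" concrete, here is a minimal Python sketch (an illustration added here, not part of the article) using Subset Sum, a problem in NP: checking a proposed certificate takes polynomial time, while the only obvious way to find one is exhaustive search.

```python
# Polynomial-time verification for Subset Sum (an NP-complete problem).
# Illustrative sketch: verification is fast, the known solver is exponential.

from itertools import combinations
from typing import Sequence

def verify_certificate(numbers: Sequence[int], target: int, subset: Sequence[int]) -> bool:
    """Check a proposed solution in time polynomial in the input size."""
    counts = list(numbers)
    for x in subset:                 # each chosen number must come from the input
        if x in counts:
            counts.remove(x)
        else:
            return False
    return sum(subset) == target     # and the chosen numbers must hit the target

def solve_by_search(numbers: Sequence[int], target: int):
    """Brute-force solver: exponential time, illustrating why 'quickly solved' is the open part."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None

if __name__ == "__main__":
    nums, target = [3, 34, 4, 12, 5, 2], 9
    cert = solve_by_search(nums, target)                  # slow, exhaustive search
    print(cert, verify_certificate(nums, target, cert))   # fast check of the certificate
```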

NP - Wikipedia

NP may refer to:

Places
- NP postcode area, Newport, Wales, United Kingdom
- Nepal (ISO 3166-1 alpha-2 country code NP)
- .np, the country code top-level domain (ccTLD) for Nepal
- Nichols Point, Australia

Physics and chemistry
- NP junction or PN junction, the simplest electronic device, used to make diodes and transistors
- Neper (Np), a dimensionless logarithmic unit for ratios of measurements of physical field and power quantities
- Neptunium, a chemical element with symbol Np
- Power number (Np), a dimensionless number relating the resistance force to the inertia force

NP-hardness - Wikipedia

Definition
A decision problem H is NP-hard when for every problem L in NP, there is a polynomial-time reduction from L to H.[1]:80 An equivalent definition is to require that every problem L in NP can be solved in polynomial time by an oracle machine with an oracle for H.[7] Informally, an algorithm that can call such an oracle machine as a subroutine for solving H solves L in polynomial time, provided each subroutine call counts as only one step. Another definition is to require that there is a polynomial-time reduction from an NP-complete problem G to H.[1]:91 Since any problem L in NP reduces in polynomial time to G, L in turn reduces to H in polynomial time, so this new definition implies the previous one. Note, however, that it does not restrict the class NP-hard to decision problems; for instance, it also includes search problems and optimization problems.

Consequences
If P ≠ NP, then NP-hard problems cannot be solved in polynomial time.
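As an illustration of the reduction-based definition (a sketch of the usual textbook construction, not taken from the article), the following builds the classic polynomial-time reduction from 3-SAT to Independent Set: the formula is satisfiable exactly when the constructed graph has an independent set with one vertex per clause.

```python
# Classic polynomial-time reduction: 3-SAT -> Independent Set (illustrative sketch).
# Literals are ints (+v for x_v, -v for NOT x_v); a formula is a list of clauses.

from itertools import combinations

def sat_to_independent_set(clauses):
    """Return (vertices, edges, k): the formula is satisfiable iff the graph
    has an independent set of size k = number of clauses."""
    vertices = [(i, lit) for i, clause in enumerate(clauses) for lit in clause]
    edges = set()
    # Connect literal occurrences within the same clause (pick at most one per clause).
    for i, clause in enumerate(clauses):
        for a, b in combinations(range(len(clause)), 2):
            edges.add(((i, clause[a]), (i, clause[b])))
    # Connect complementary occurrences (never pick both x and NOT x).
    for u, v in combinations(vertices, 2):
        if u[1] == -v[1]:
            edges.add((u, v))
    return vertices, edges, len(clauses)

if __name__ == "__main__":
    # (x1 or x2 or x3) and (not x1 or not x2 or x3)
    formula = [[1, 2, 3], [-1, -2, 3]]
    V, E, k = sat_to_independent_set(formula)
    print(len(V), "vertices,", len(E), "edges, target independent-set size", k)
```

The construction takes time quadratic in the number of literal occurrences, so it is a polynomial-time reduction in the sense of the definition above.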

Computational complexity theory - Wikipedia

Computational complexity theory is a branch of the theory of computation in theoretical computer science and mathematics that focuses on classifying computational problems according to their inherent difficulty, and relating those classes to each other. A computational problem is understood to be a task that is in principle amenable to being solved by a computer, which is equivalent to stating that the problem may be solved by mechanical application of mathematical steps, such as an algorithm. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying the amount of resources needed to solve them, such as time and storage. Closely related fields in theoretical computer science are analysis of algorithms and computability theory.
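One way to see what "quantifying the amount of resources" means is to count a basic operation as a function of the input size n; the rough sketch below (an invented example, not from the article) counts comparisons for linear and binary search as n grows.

```python
# Rough illustration of measuring time as a function of input size n
# by counting comparisons for two search algorithms. Illustrative only.

def linear_search_steps(items, key):
    steps = 0
    for x in items:
        steps += 1
        if x == key:
            break
    return steps                     # O(n) comparisons in the worst case

def binary_search_steps(items, key):
    steps, lo, hi = 0, 0, len(items) - 1
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if items[mid] == key:
            break
        if items[mid] < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps                     # O(log n) comparisons in the worst case

for n in (1_000, 10_000, 100_000):
    data = list(range(n))
    print(n, linear_search_steps(data, n - 1), binary_search_steps(data, n - 1))
```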

Machine learning - Wikipedia

Machine learning is the subfield of computer science that, according to Arthur Samuel in 1959, gives "computers the ability to learn without being explicitly programmed."[1] Evolved from the study of pattern recognition and computational learning theory in artificial intelligence,[2] machine learning explores the study and construction of algorithms that can learn from and make predictions on data[3] – such algorithms overcome the limitations of strictly static program instructions by making data-driven predictions or decisions,[4]:2 building a model from sample inputs. Machine learning is employed in a range of computing tasks where designing and programming explicit algorithms with good performance is difficult or infeasible; example applications include email filtering, detection of network intruders or malicious insiders working towards a data breach,[5] optical character recognition (OCR),[6] learning to rank, and computer vision.
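The contrast with strictly static program instructions can be seen in even a tiny data-driven model; the following sketch (toy data and a 1-nearest-neighbour rule, chosen only for illustration) derives its predictions from sample inputs rather than hand-coded rules.

```python
# Minimal data-driven prediction: 1-nearest-neighbour classification.
# The "model" is just the training sample; predictions come from the data,
# not from explicitly programmed rules. Toy data, for illustration only.

import math

train = [((1.0, 1.0), "spam"), ((1.2, 0.8), "spam"),
         ((4.0, 4.2), "ham"),  ((3.8, 4.0), "ham")]

def predict(x):
    nearest = min(train, key=lambda example: math.dist(x, example[0]))
    return nearest[1]

print(predict((1.1, 0.9)))   # -> "spam"
print(predict((4.1, 3.9)))   # -> "ham"
```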

Radial basis function network - Wikipedia

In the field of mathematical modeling, a radial basis function network is an artificial neural network that uses radial basis functions as activation functions. The output of the network is a linear combination of radial basis functions of the inputs and neuron parameters. Radial basis function networks have many uses, including function approximation, time series prediction, classification, and system control. They were first formulated in a 1988 paper by Broomhead and Lowe, both researchers at the Royal Signals and Radar Establishment.[1][2][3]

Network architecture
[Figure 1: Architecture of a radial basis function network. An input vector $\mathbf{x}$ is used as input to all radial basis functions, each with different parameters.]

Radial basis function (RBF) networks typically have three layers: an input layer, a hidden layer with a non-linear RBF activation function, and a linear output layer. The input can be modeled as a vector of real numbers $\mathbf{x} \in \mathbb{R}^n$; the output of the network is then a scalar function of the input vector, $\varphi : \mathbb{R}^n \to \mathbb{R}$, given by

$$\varphi(\mathbf{x}) = \sum_{i=1}^{N} a_i \, \rho\!\left(\lVert \mathbf{x} - \mathbf{c}_i \rVert\right),$$

where $N$ is the number of neurons in the hidden layer, $\mathbf{c}_i$ is the center vector for neuron $i$, and $a_i$ is the weight of neuron $i$ in the linear output neuron. The radial basis function $\rho$ is commonly taken to be Gaussian, $\rho\!\left(\lVert \mathbf{x} - \mathbf{c}_i \rVert\right) = \exp\!\left(-\beta_i \lVert \mathbf{x} - \mathbf{c}_i \rVert^2\right)$.
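A minimal sketch of that architecture (the centers, widths and output weights below are hand-picked for illustration; in practice they are learned, e.g. centers by clustering and output weights by least squares): the hidden layer evaluates the Gaussian basis functions and the output is their linear combination.

```python
# Minimal RBF network forward pass: Gaussian hidden units, linear output layer.
# Parameters c_i, beta_i, a_i are illustrative values, not learned here.

import math

def gaussian_rbf(x, center, beta):
    dist2 = sum((xi - ci) ** 2 for xi, ci in zip(x, center))
    return math.exp(-beta * dist2)

def rbf_network(x, centers, betas, weights):
    # phi(x) = sum_i a_i * rho(||x - c_i||)
    return sum(a * gaussian_rbf(x, c, b) for a, c, b in zip(weights, centers, betas))

centers = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]   # c_i
betas   = [1.0, 1.0, 1.0]                        # beta_i
weights = [0.5, -1.0, 2.0]                       # a_i

print(rbf_network((1.0, 0.5), centers, betas, weights))
```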

Lloyd's algorithm

[Figure: example of Lloyd's algorithm over several iterations (1, 2, 3 and 15). The Voronoi diagram of the current points at each iteration is shown; the plus signs denote the centroids of the Voronoi cells. By the last iteration shown, the points are very near the centroids of the Voronoi cells.]

In computer science and electrical engineering, Lloyd's algorithm, also known as Voronoi iteration or relaxation, is an algorithm named after Stuart P. Lloyd.

Algorithm description
Lloyd's algorithm starts with an initial placement of some number k of point sites in the input domain. It then repeatedly executes the following relaxation step:

1. The Voronoi diagram of the k sites is computed.
2. Each cell of the Voronoi diagram is integrated, and its centroid is computed.
3. Each site is then moved to the centroid of its Voronoi cell.

Convergence
The algorithm converges slowly or, due to limitations in numerical precision, may not converge.
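A minimal sketch of the relaxation step, assuming the common discrete setting in which the domain is a finite set of points, so each cell's centroid is simply the mean of the points assigned to that site (this is the k-means special case, not the continuous construction described above).

```python
# Lloyd-style relaxation on a finite point set (the k-means special case):
# assign each point to its nearest site, then move each site to the mean
# (centroid) of its cell. Data and iteration count are illustrative.

import math
import random

def lloyd(points, k, iterations=15, seed=0):
    rng = random.Random(seed)
    sites = rng.sample(points, k)                 # initial placement of k sites
    for _ in range(iterations):
        cells = [[] for _ in range(k)]
        for p in points:                          # nearest-site ("Voronoi") assignment
            j = min(range(k), key=lambda i: math.dist(p, sites[i]))
            cells[j].append(p)
        for j, cell in enumerate(cells):          # move each site to its cell's centroid
            if cell:
                sites[j] = (sum(x for x, _ in cell) / len(cell),
                            sum(y for _, y in cell) / len(cell))
    return sites

points = [(random.random(), random.random()) for _ in range(200)]
print(lloyd(points, k=3))
```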

Rocchio algorithm - Wikipedia

Algorithm
The modified query vector is given by the Rocchio formula

$$\vec{Q}_m = a\,\vec{Q}_o + \frac{b}{|D_r|}\sum_{\vec{D}_j \in D_r}\vec{D}_j - \frac{c}{|D_{nr}|}\sum_{\vec{D}_k \in D_{nr}}\vec{D}_k,$$

where the associated weights (a, b, c) are responsible for shaping the modified vector in a direction closer to, or farther away from, the original query, the related documents, and the non-related documents. In particular, the values for b and c should be incremented or decremented in proportion to the set of documents classified by the user. If the user decides that the modified query should not contain terms from the original query, the related documents, or the non-related documents, then the corresponding weight (a, b, or c) for that category should be set to 0. In the later part of the algorithm, the variables $D_r$ and $D_{nr}$ are the sets of vectors containing the coordinates of the related and non-related documents, and $\vec{D}_j$ and $\vec{D}_k$ are the vectors used to iterate through the two sets and form the vector summations.
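A minimal sketch of that update on dense term vectors (the default weights a, b, c and the example vectors below are illustrative assumptions, not values from the article).

```python
# Rocchio relevance-feedback update on dense term vectors (illustrative sketch).
# q_m = a*q0 + b * mean(related docs) - c * mean(non-related docs)

def rocchio(q0, related, non_related, a=1.0, b=0.75, c=0.15):
    dims = len(q0)
    def mean(docs):
        if not docs:
            return [0.0] * dims
        return [sum(d[i] for d in docs) / len(docs) for i in range(dims)]
    r, nr = mean(related), mean(non_related)
    # Negative components are often clipped to zero in practice.
    return [max(0.0, a * q0[i] + b * r[i] - c * nr[i]) for i in range(dims)]

q0   = [1.0, 0.0, 1.0, 0.0]                            # original query vector
D_r  = [[1.0, 1.0, 0.0, 0.0], [1.0, 0.5, 0.0, 0.0]]    # related documents
D_nr = [[0.0, 0.0, 1.0, 1.0]]                          # non-related documents
print(rocchio(q0, D_r, D_nr))
```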

Named-entity recognition - Wikipedia

Named-entity recognition (NER) (also known as entity identification, entity chunking and entity extraction) is a subtask of information extraction that seeks to locate and classify named entities mentioned in unstructured text into pre-defined categories such as person names, organizations, locations, medical codes, time expressions, quantities, monetary values, percentages, etc. Most research on NER systems has been structured as taking an unannotated block of text, such as this one:

Jim bought 300 shares of Acme Corp. in 2006.

and producing an annotated block of text that highlights the names of entities:

[Jim]Person bought 300 shares of [Acme Corp.]Organization in [2006]Time.

In this example, a person name consisting of one token, a two-token company name, and a temporal expression have been detected and classified. State-of-the-art NER systems for English produce near-human performance.
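A toy sketch of the input/output contract described above, using a hand-written gazetteer and a year pattern (the entity list and labels are invented; real NER systems use statistical or neural sequence models, not lookup tables).

```python
# Toy "NER" over the article's example sentence: a gazetteer plus a year regex.
# This only illustrates the annotated-output format, not a practical approach.

import re

GAZETTEER = {"Jim": "Person", "Acme Corp.": "Organization"}   # made-up entity list

def annotate(text):
    # Longest entries first so "Acme Corp." wins over any shorter overlap.
    for surface, label in sorted(GAZETTEER.items(), key=lambda kv: -len(kv[0])):
        text = text.replace(surface, f"[{surface}]{label}")
    # Tag four-digit years as Time expressions.
    text = re.sub(r"\b(1[0-9]{3}|20[0-9]{2})\b", r"[\1]Time", text)
    return text

print(annotate("Jim bought 300 shares of Acme Corp. in 2006."))
# -> [Jim]Person bought 300 shares of [Acme Corp.]Organization in [2006]Time.
```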

Vector space model - Wikipedia

Definitions
Documents and queries are represented as vectors. Vector operations can be used to compare documents with queries.

Applications
In practice, it is easier to calculate the cosine of the angle between the vectors, instead of the angle itself:

$$\cos\theta = \frac{\mathbf{d_2} \cdot \mathbf{q}}{\lVert \mathbf{d_2} \rVert \, \lVert \mathbf{q} \rVert},$$

where $\mathbf{d_2} \cdot \mathbf{q}$ is the intersection (i.e. the dot product) of the document ($d_2$ in the figure to the right) and the query ($q$ in the figure) vectors, $\lVert \mathbf{d_2} \rVert$ is the norm of vector $d_2$, and $\lVert \mathbf{q} \rVert$ is the norm of vector $q$. As all vectors under consideration by this model are elementwise nonnegative, a cosine value of zero means that the query and document vector are orthogonal and have no match (i.e. the query term does not exist in the document being considered).

Example: tf-idf weights
In the classic vector space model proposed by Salton, Wong and Yang,[1] the term-specific weights in the document vectors are products of local and global parameters:

$$w_{t,d} = \mathrm{tf}_{t,d} \cdot \log\frac{|D|}{|\{d' \in D : t \in d'\}|},$$

where $\mathrm{tf}_{t,d}$ is the term frequency of term $t$ in document $d$ (a local parameter) and $\log\frac{|D|}{|\{d' \in D : t \in d'\}|}$ is the inverse document frequency (a global parameter), with $|D|$ the total number of documents in the document set and $|\{d' \in D : t \in d'\}|$ the number of documents containing term $t$.
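A compact sketch of both formulas on a toy corpus (the documents, query, and whitespace tokenisation are illustrative assumptions): tf-idf weights are built per term, and documents are ranked by cosine similarity to the query.

```python
# tf-idf weighting and cosine similarity for the vector space model (toy corpus).

import math
from collections import Counter

docs = ["shipment of gold damaged in a fire",
        "delivery of silver arrived in a silver truck",
        "shipment of gold arrived in a truck"]
query = "gold silver truck"

tokenized = [d.split() for d in docs]
vocab = sorted({t for d in tokenized for t in d})
N = len(docs)
# idf_t = log(N / df_t), where df_t is the number of documents containing t.
idf = {t: math.log(N / sum(1 for d in tokenized if t in d)) for t in vocab}

def tfidf_vector(tokens):
    tf = Counter(tokens)
    return [tf[t] * idf[t] for t in vocab]       # w_{t,d} = tf_{t,d} * idf_t

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

q_vec = tfidf_vector(query.split())
for d, tokens in zip(docs, tokenized):
    print(round(cosine(q_vec, tfidf_vector(tokens)), 3), d)
```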
