
Computational complexity theory
Computational complexity theory is a branch of the theory of computation in theoretical computer science and mathematics that focuses on classifying computational problems according to their inherent difficulty, and relating those classes to each other. A computational problem is understood to be a task that is in principle amenable to being solved by a computer, which is equivalent to stating that the problem may be solved by mechanical application of mathematical steps, such as an algorithm. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and by quantifying the amount of resources needed to solve them, such as time and storage. Closely related fields in theoretical computer science are analysis of algorithms and computability theory.

P versus NP problem Diagram of complexity classes under the assumption that P ≠ NP. Ladner's theorem established that, under this assumption, problems exist within NP that are neither in P nor NP-complete.[1] The P versus NP problem is a major unsolved problem in computer science. Informally, it asks whether every problem whose solution can be quickly verified by a computer can also be quickly solved by a computer. Consider the subset sum problem, an example of a problem that is easy to verify but whose answer may be difficult to compute. An answer to the P = NP question would determine whether problems that can be verified in polynomial time, like subset sum, can also be solved in polynomial time. Aside from being an important problem in computational theory, a proof either way would have profound implications for mathematics, cryptography, algorithm research, artificial intelligence, game theory, multimedia processing, philosophy, economics and many other fields. Is P equal to NP?
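The verify-versus-solve asymmetry for subset sum can be sketched in a few lines. This is an illustrative example, not part of the excerpt: checking a claimed solution (a "certificate") takes time linear in its size, while the naive search below examines up to 2^n subsets.

```python
from itertools import combinations

def verify(numbers, target, certificate):
    """Polynomial-time check: is `certificate` a sub-multiset of `numbers`
    whose elements sum to `target`?"""
    remaining = list(numbers)
    for x in certificate:
        if x in remaining:
            remaining.remove(x)       # each element may be used once
        else:
            return False
    return sum(certificate) == target

def solve_brute_force(numbers, target):
    """Exponential-time search: try every subset until one sums to `target`."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [3, 34, 4, 12, 5, 2]
cert = solve_brute_force(nums, 9)     # finds [4, 5]
print(verify(nums, 9, cert))          # True
```

Whether the exponential search can always be replaced by a polynomial-time one is exactly the P = NP question.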

Stockholm syndrome Stockholm syndrome, or capture-bonding, is a psychological phenomenon in which hostages express empathy and sympathy and have positive feelings toward their captors, sometimes to the point of defending and identifying with the captors. These feelings are generally considered irrational in light of the danger or risk endured by the victims, who essentially mistake a lack of abuse from their captors for an act of kindness.[1][2] The FBI's Hostage Barricade Database System shows that roughly 8 percent of victims show evidence of Stockholm syndrome.[3] Stockholm syndrome can be seen as a form of traumatic bonding, which does not necessarily require a hostage scenario, but which describes "strong emotional ties that develop between two persons where one person intermittently harasses, beats, threatens, abuses, or intimidates the other."[4] One commonly used hypothesis to explain the effect of Stockholm syndrome is based on Freudian theory.

Complexity There is no absolute definition of what complexity means; the only consensus among researchers is that there is no agreement about the specific definition of complexity. However, a characterization of what is complex is possible.[1] Complexity is generally used to characterize something with many parts where those parts interact with each other in multiple ways. The study of these complex linkages is the main goal of complex systems theory. In science,[2] there are at this time a number of approaches to characterizing complexity, many of which are reflected in this article. Overview Definitions of complexity often depend on the concept of a "system"—a set of parts or elements that have relationships among them differentiated from relationships with other elements outside the relational regime. Some definitions relate to the algorithmic basis for the expression of a complex phenomenon or model or mathematical expression, as later set out herein.

Astrophysics Astrophysics (from Greek astron, ἄστρον "star", and physis, φύσις "nature") is the branch of astronomy that deals with the physics of the universe, especially with "the nature of the heavenly bodies, rather than their positions or motions in space."[1][2] Among the objects studied are galaxies, stars, planets, extrasolar planets, the interstellar medium and the cosmic microwave background.[3][4] Their emissions are examined across all parts of the electromagnetic spectrum, and the properties examined include luminosity, density, temperature, and chemical composition. Because astrophysics is a very broad subject, astrophysicists typically apply many disciplines of physics, including mechanics, electromagnetism, statistical mechanics, thermodynamics, quantum mechanics, relativity, nuclear and particle physics, and atomic and molecular physics. In practice, modern astronomical research often involves substantial work in both theoretical and observational physics.

70 Things Every Computer Geek Should Know. | Arrow Webzine The term 'geek', once used to label a circus freak, has morphed in meaning over the years. What was once an unusual profession turned into a word indicating social awkwardness. As time has gone on, the word has morphed yet again to indicate a new type of individual: someone who is obsessive over one (or more) particular subjects, whether it be science, photography, electronics, computers, media, or any other field. How to become a real computer geek? Little known to most, there are many benefits to being a computer geek. You may get the answer here: The Meaning of Technical Acronyms
USB – Universal Serial Bus
GPU – Graphics Processing Unit
CPU – Central Processing Unit
ATA – AT Attachment (ATAPI – AT Attachment Packet Interface)
SATA – Serial ATA
HTML – Hypertext Markup Language
HTTP – Hypertext Transfer Protocol
FTP – File Transfer Protocol
P2P – peer-to-peer
1. One of the best lists of default passwords.

Time complexity Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, where an elementary operation takes a fixed amount of time to perform. Thus the amount of time taken and the number of elementary operations performed by the algorithm differ by at most a constant factor. Since an algorithm's running time may vary with different inputs of the same size, one commonly uses the worst-case time complexity of an algorithm, denoted T(n), which is defined as the maximum amount of time taken on any input of size n. Table of common time complexities The following table summarizes some classes of commonly encountered time complexities. Constant time Despite the name "constant time", the running time does not have to be independent of the problem size; rather, an upper bound on the running time must hold independently of the problem size. Accessing a single element of an array, for example, runs in constant time. Logarithmic time
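The excerpt's constant-time and logarithmic-time classes can be sketched with two small functions. This is an illustrative sketch, not code from the source: the first performs one array access regardless of input size, and the second halves its search interval on each step, giving at most about log2(n) iterations.

```python
def get_first(items):
    """O(1): a single indexing operation, regardless of len(items)."""
    return items[0]

def binary_search(sorted_items, target):
    """O(log n): each loop iteration halves the remaining interval."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1                          # not found

data = list(range(1, 1025))            # 1024 sorted elements
print(get_first(data))                 # 1: one elementary operation
print(binary_search(data, 700))        # 699: at most ~10 halvings for n = 1024
```

Counting loop iterations here is exactly the "number of elementary operations" estimate the excerpt describes: get_first always performs one, while binary_search performs at most about log2(1024) = 10.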

Disney algorithm builds high-res 3D models from ordinary photos Disney Research has developed an algorithm which can generate detailed 3D computer models from 2D images, sufficient, it says, to meet the needs of video game and film makers. The technique requires multiple images to capture the scene from a variety of vantage points. The 3D model is somewhat limited in that it is only coherent within the field of view encompassed by the original images; it does not appear to fill in missing data. However, judging from Disney Research's demo video, the detail achieved is impressive. [Images: a photo from Disney's sample set and the corresponding 3D model] Unlike other systems, the algorithm calculates depth for every pixel, proving most effective at the edges of objects. The algorithm also demands less of computer hardware than is ordinarily the case when constructing 3D models from high-res images, in part because it does not require all of the input data to be held in memory at once. The system is not yet perfect. Source: Disney Research

Materials science Depiction of two "Fullerene Nano-gears" with multiple teeth. Materials science, also commonly known as materials engineering, is an interdisciplinary field applying the properties of matter to various areas of science and engineering. This relatively new scientific field investigates the relationship between the structure of materials at atomic or molecular scales and their macroscopic properties. It incorporates elements of applied physics and chemistry, with significant media attention focused on nanoscience and nanotechnology. History Before the 1960s (and in some cases decades after), many materials science departments were named metallurgy departments, reflecting a 19th- and early 20th-century emphasis on metals. Fundamentals The basis of materials science involves relating the desired properties and relative performance of a material in a certain application to the structure of the atoms and phases in that material through characterization. Classes of materials

How Computers Boot Up : Gustavo Duarte The previous post described motherboards and the memory map in Intel computers to set the scene for the initial phases of boot. Booting is an involved, hacky, multi-stage affair - fun stuff. Here's an outline of the process: Things start rolling when you press the power button on the computer. If all is well the CPU starts running. Most registers in the CPU have well-defined values after power up, including the instruction pointer (EIP), which holds the memory address of the instruction being executed by the CPU. The motherboard ensures that the instruction at the reset vector is a jump to the memory location mapped to the BIOS entry point. The CPU then starts executing BIOS code, which initializes some of the hardware in the machine. After the POST, the BIOS wants to boot an operating system, which must be found somewhere: hard drives, CD-ROM drives, floppy disks, etc. The BIOS now reads the first 512-byte sector (sector zero) of the hard disk.
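The last step above, reading sector zero, can be illustrated with a small sketch. By convention the BIOS treats a 512-byte sector zero (the Master Boot Record) as bootable only if its last two bytes are the signature 0x55 0xAA; this simulation of that check on an in-memory sector image is illustrative, not code from the article.

```python
SECTOR_SIZE = 512

def is_bootable(sector: bytes) -> bool:
    """Mimic the BIOS check: a 512-byte sector ending in 0x55 0xAA."""
    return len(sector) == SECTOR_SIZE and sector[-2:] == b"\x55\xaa"

# Build a fake MBR: 510 bytes of (empty) boot code plus the 2-byte signature.
fake_mbr = bytes(510) + b"\x55\xaa"
print(is_bootable(fake_mbr))        # True: signature present
print(is_bootable(bytes(512)))      # False: all-zero sector, no signature
```

On real hardware the BIOS copies this sector to memory at address 0x7C00 and jumps to it; the check above is the gatekeeping step before that jump.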
