
Lambda calculus

The lowercase lambda (λ), the 11th letter of the Greek alphabet, is used to symbolize the lambda calculus. Because of the importance of the notions of variable binding and substitution, there is not just one system of lambda calculus; in particular, there are typed and untyped variants. Historically, the most important system was the untyped lambda calculus, in which function application has no restrictions (so the notion of the domain of a function is not built into the system). In the Church–Turing thesis, the untyped lambda calculus is claimed to be capable of computing all effectively calculable functions. The typed lambda calculus is a variant that restricts function application, so that a function can only be applied to arguments of the "type" of data it accepts.
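To give a flavor of computation by functions alone, the following sketch (in Python rather than the calculus's own notation) encodes natural numbers as Church numerals; the names zero, succ, add, and to_int are illustrative choices, not part of the calculus itself.

```python
# Church numerals: the number n is the function that applies its
# first argument n times to its second argument.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

# Addition applies f "m times", then "n times" more.
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Convert a Church numeral to a native integer for inspection."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))  # 5
```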

Computable functions are a fundamental concept within computer science and mathematics.

Church–Turing thesis

Several independent attempts were made in the first half of the 20th century to formalize the notion of computability:

- American mathematician Alonzo Church created a method for defining functions called the λ-calculus;
- British mathematician Alan Turing created a theoretical model for machines, now called Turing machines, that could carry out calculations from inputs;
- Austrian-American mathematician Kurt Gödel, with Jacques Herbrand, created a formal definition of a class of functions whose values could be calculated by recursion (a brief sketch follows this list).
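As a minimal sketch of what "calculated by recursion" means, addition can be defined by recursion on its second argument; the Python rendering below is illustrative, not Gödel and Herbrand's original formalism.

```python
def add(m, n):
    # add(m, 0)     = m
    # add(m, n + 1) = add(m, n) + 1   (successor of the recursive value)
    if n == 0:
        return m
    return add(m, n - 1) + 1

print(add(3, 4))  # 7
```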

All three computational processes (recursion, the λ-calculus, and the Turing machine) were shown to be equivalent: all three approaches define the same class of functions.[2][3] This has led mathematicians and computer scientists to believe that the concept of computability is accurately characterized by these three equivalent processes. The thesis can be stated as follows: every effectively calculable function is a computable function.[8]

Reduced instruction set computing

Reduced instruction set computing, or RISC, is a CPU design strategy based on the insight that simplified (as opposed to complex) instructions can provide higher performance if this simplicity enables much faster execution of each instruction. A computer based on this strategy is a reduced instruction set computer, also called a RISC. The opposing architecture is called complex instruction set computing (CISC). Various suggestions have been made regarding a precise definition of RISC, but the general concept is that of a system using a small, highly optimized set of instructions rather than the more specialized set often found in other types of architectures.
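As a rough illustration of the load/store discipline behind this idea, the toy machine below (a Python sketch; the instruction names and addresses are invented, not any real ISA) performs a memory-to-memory addition as four simple register instructions.

```python
# A toy register machine in the RISC style: arithmetic happens only
# between registers; memory is touched only by explicit LOAD and STORE.
memory = {0x10: 7, 0x14: 35, 0x18: 0}
regs = {"r1": 0, "r2": 0}

def execute(program):
    for op, *args in program:
        if op == "LOAD":       # reg <- memory[addr]
            reg, addr = args
            regs[reg] = memory[addr]
        elif op == "ADD":      # dst <- a + b (registers only)
            dst, a, b = args
            regs[dst] = regs[a] + regs[b]
        elif op == "STORE":    # memory[addr] <- reg
            reg, addr = args
            memory[addr] = regs[reg]

# A CISC-style "add memory to memory" becomes four simple steps:
execute([
    ("LOAD", "r1", 0x10),
    ("LOAD", "r2", 0x14),
    ("ADD", "r1", "r1", "r2"),
    ("STORE", "r1", 0x18),
])
print(memory[0x18])  # 42
```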

(Figures: an IBM PowerPC 601 RISC microprocessor; co-designer Yunsup Lee holding a RISC-V prototype chip in 2013.)

IEEE 754-2008

The IEEE Standard for Floating-Point Arithmetic (IEEE 754) is a technical standard for floating-point computation established in 1985 by the Institute of Electrical and Electronics Engineers (IEEE).

Many hardware floating-point units use the IEEE 754 standard. The current version, IEEE 754-2008, published in August 2008, includes nearly all of the original IEEE 754-1985 standard and the IEEE Standard for Radix-Independent Floating-Point Arithmetic (IEEE 854-1987). The international standard ISO/IEC/IEEE 60559:2011 (with content identical to IEEE 754) has been approved for adoption through JTC1/SC 25 under the ISO/IEEE PSDO Agreement[1] and published.[2] The standard defines arithmetic formats, interchange formats, rounding rules, operations, and exception handling. It also includes extensive recommendations for advanced exception handling, additional operations (such as trigonometric functions), expression evaluation, and achieving reproducible results.

Formats

An IEEE 754 format is a "set of representations of numerical values and symbols".
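For a concrete look at one such format, the sketch below unpacks a Python float, which is an IEEE 754 binary64 value on virtually all platforms, into its sign, exponent, and significand fields; the helper name decode_binary64 is ours, not part of any standard library API.

```python
import struct

def decode_binary64(x):
    """Split a Python float (IEEE 754 binary64) into its bit fields."""
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))
    sign = bits >> 63
    exponent = (bits >> 52) & 0x7FF        # 11-bit biased exponent field
    significand = bits & ((1 << 52) - 1)   # 52-bit fraction field
    return sign, exponent - 1023, significand

print(decode_binary64(1.0))   # (0, 0, 0): 1.0 = (-1)**0 * 1.0 * 2**0
print(decode_binary64(-2.5))  # sign 1, exponent 1: -2.5 = -1.25 * 2**1
```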

A format comprises finite numbers (each described by a sign, a significand, and an exponent in a given base), two infinities (+∞ and −∞), and two kinds of NaN ("not a number"), quiet and signaling.

Turing machine

(Figure: an artistic representation of a Turing machine; the rules table is not represented.)

A Turing machine is a hypothetical device that manipulates symbols on a strip of tape according to a table of rules. Despite its simplicity, a Turing machine can be adapted to simulate the logic of any computer algorithm, and is particularly useful in explaining the functions of a CPU inside a computer. The Turing machine was invented in 1936 by Alan Turing,[1] who called it an "a-machine" (automatic machine). The Turing machine is not intended as practical computing technology, but rather as a hypothetical device representing a computing machine. Turing machines help computer scientists understand the limits of mechanical computation.
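To make the description concrete, here is a minimal simulator sketch in Python; the rules table implements binary increment, and the state names and tape encoding are invented for illustration.

```python
# A minimal Turing machine: a sparse tape, a head, and a rules table
# mapping (state, symbol) -> (symbol to write, head move, next state).
def run(tape, rules, state="scan", blank="_"):
    tape = dict(enumerate(tape))  # model the unbounded tape sparsely
    head = 0
    while state != "halt":
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += {"R": 1, "L": -1, "N": 0}[move]
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Binary increment: move to the rightmost digit, then carry leftwards.
rules = {
    ("scan", "0"): ("0", "R", "scan"),
    ("scan", "1"): ("1", "R", "scan"),
    ("scan", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),  # 1 + carry -> 0, keep carrying
    ("carry", "0"): ("1", "N", "halt"),   # 0 + carry -> 1, done
    ("carry", "_"): ("1", "N", "halt"),   # overflow: new leading 1
}

print(run("1011", rules))  # 1100 (11 + 1 = 12)
```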

Turing gave a succinct definition of the experiment in his 1948 essay, "Intelligent Machinery". Referring to his 1936 publication, Turing wrote that the Turing machine, here called a Logical Computing Machine, consisted of an unlimited memory capacity obtained in the form of an infinite tape marked out into squares, on each of which a symbol could be printed.

Floating point

(Figure: a representation of a decimal floating-point number using a mantissa and an exponent.)

In computing, floating point describes a method of representing an approximation of a real number in a way that can support a wide range of values. The numbers are, in general, represented approximately to a fixed number of significant digits (the significand) and scaled using an exponent.

The base for the scaling is normally 2, 10, or 16. The typical number that can be represented exactly is of the form

    significand × base^exponent

The term floating point refers to the fact that a number's radix point (decimal point, or, more commonly in computers, binary point) can "float": it can be placed anywhere relative to the significant digits of the number. Over the years, a variety of floating-point representations have been used in computers.

Overview

There are several mechanisms by which strings of digits can represent numbers.
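One such mechanism, the significand-and-exponent form above, can be inspected directly in Python: math.frexp splits a float into the two parts (with the significand normalized into [0.5, 1)), and the Decimal constructor shows the rounding a base-2 format must apply to a value like 0.1.

```python
import math
from decimal import Decimal

# 6.25 is stored exactly as significand * 2**exponent.
m, e = math.frexp(6.25)
print(m, e, m * 2**e)  # 0.78125 3 6.25

# 0.1 has no finite base-2 expansion, so the stored value is the
# nearest representable number, not exactly 1/10.
print(Decimal(0.1))  # 0.1000000000000000055511151231257827021181583404541015625
```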

Extended Backus–Naur Form

In computer science, Extended Backus–Naur Form (EBNF) is a family of metasyntax notations, any of which can be used to express a context-free grammar. EBNF is used to make a formal description of a formal language, such as a computer programming language. EBNF notations are extensions of the basic Backus–Naur Form (BNF) metasyntax notation.

Basics

EBNF is a code that expresses the grammar of a formal language. An EBNF consists of terminal symbols and non-terminal production rules, which are the restrictions governing how terminal symbols can be combined into a legal sequence. Examples of terminal symbols include alphanumeric characters, punctuation marks, and white space characters.

    digit excluding zero = "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9" ;
    digit = "0" | digit excluding zero ;

This production rule defines the nonterminal digit, which is on the left side of the assignment.
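A grammar this small can be checked by hand-written code; the sketch below is one way to render the two rules in Python (the function names are ours, and a real system would typically generate a parser from the EBNF instead).

```python
DIGIT_EXCLUDING_ZERO = set("123456789")

def is_digit_excluding_zero(s):
    # digit excluding zero = "1" | "2" | ... | "9" ;
    return len(s) == 1 and s in DIGIT_EXCLUDING_ZERO

def is_digit(s):
    # digit = "0" | digit excluding zero ;
    return s == "0" or is_digit_excluding_zero(s)

print(is_digit("7"), is_digit("0"), is_digit("x"))  # True True False
```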

A production rule can also include a sequence of terminals or nonterminals, each separated by a comma:

    twelve = "1", "2" ;

Deterministic finite-state machine

(Figure: a deterministic finite automaton that accepts only binary numbers that are multiples of 3; the state S0 is both the start state and an accept state.)

In automata theory, a branch of theoretical computer science, a deterministic finite automaton (DFA), also known as a deterministic finite-state machine, is a finite state machine that accepts or rejects finite strings of symbols and produces a unique computation (or run) of the automaton for each input string.[1] "Deterministic" refers to the uniqueness of the computation.

In search of the simplest models to capture finite-state machines, McCulloch and Pitts were among the first researchers to introduce a concept similar to the finite automaton, in 1943.[2][3] A DFA is defined as an abstract mathematical concept, but due to its deterministic nature, it is implementable in hardware and software for solving various specific problems.

Formal definition

A deterministic finite automaton M is a 5-tuple (Q, Σ, δ, q0, F), consisting of a finite set of states Q, a finite set of input symbols called the alphabet Σ, a transition function δ : Q × Σ → Q, an initial state q0 ∈ Q, and a set of accept states F ⊆ Q. Let w = a1a2 ... an be a string over the alphabet Σ. The automaton M accepts w if a sequence of states r0, r1, ..., rn exists in Q such that r0 = q0, ri+1 = δ(ri, ai+1) for i = 0, ..., n − 1, and rn ∈ F.
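The multiples-of-3 automaton from the figure caption can be written down directly from its 5-tuple; in the Python sketch below, each state name S0, S1, S2 records the value, modulo 3, of the bits read so far.

```python
# The 5-tuple (Q, Sigma, delta, q0, F) of the multiples-of-3 DFA.
states = {"S0", "S1", "S2"}
alphabet = {"0", "1"}
delta = {
    ("S0", "0"): "S0", ("S0", "1"): "S1",
    ("S1", "0"): "S2", ("S1", "1"): "S0",
    ("S2", "0"): "S1", ("S2", "1"): "S2",
}
start, accepting = "S0", {"S0"}

def accepts(w):
    # A DFA has exactly one run per input: follow delta symbol by symbol.
    state = start
    for symbol in w:
        state = delta[(state, symbol)]
    return state in accepting

print(accepts("110"))  # True: 0b110 == 6, a multiple of 3
print(accepts("101"))  # False: 0b101 == 5
```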

Complex instruction set computing

Examples of CISC instruction set architectures are System/360 through z/Architecture, PDP-11, VAX, Motorola 68k, and x86.

Historical design context

Before the RISC philosophy became prominent, many computer architects tried to bridge the so-called semantic gap, i.e. to design instruction sets that directly supported high-level programming constructs such as procedure calls, loop control, and complex addressing modes, allowing data structure and array accesses to be combined into single instructions. Instructions are also typically highly encoded in order to further enhance code density. The compact nature of such instruction sets results in smaller program sizes and fewer (slow) main memory accesses, which at the time (early 1960s and onwards) resulted in tremendous savings on the cost of computer memory and disc storage, as well as faster execution.

Von Neumann architecture

(Figure: Von Neumann architecture scheme.)

The design of a Von Neumann architecture is simpler than that of the more modern Harvard architecture, which is also a stored-program system but has one dedicated set of address and data buses for reading data from and writing data to memory, and another set for fetching instructions. A stored-program digital computer is one that keeps its program instructions, as well as its data, in read-write, random-access memory (RAM). Stored-program computers were an advancement over the program-controlled computers of the 1940s, such as the Colossus and the ENIAC, which were programmed by setting switches and inserting patch leads to route data and control signals between various functional units.
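A stored-program machine is easy to caricature in a few lines: the sketch below keeps a tiny, invented instruction set and plain data in the same Python list and runs a fetch-decode-execute loop over it; nothing here corresponds to a real ISA.

```python
# Instructions and data share one memory, as in the Von Neumann model.
memory = [
    ("LOADI", "a", 2),   # a <- 2
    ("LOADI", "b", 40),  # b <- 40
    ("ADD", "a", "b"),   # a <- a + b
    ("HALT",),
    0,                   # plain data can sit in the same memory
]
registers = {"a": 0, "b": 0}
pc = 0  # program counter

while True:
    instruction = memory[pc]  # fetch
    pc += 1
    op = instruction[0]       # decode
    if op == "LOADI":         # execute
        registers[instruction[1]] = instruction[2]
    elif op == "ADD":
        registers[instruction[1]] += registers[instruction[2]]
    elif op == "HALT":
        break

print(registers["a"])  # 42
```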

In the vast majority of modern computers, the same memory is used for both data and program instructions, and the Von Neumann vs. Harvard distinction applies to the cache architecture, not main memory.

Regular expression

The regular expression (?<=\.) {2,}(?=[A-Z]) matches at least two spaces occurring after a period (.) and before an upper-case letter. Each character in a regular expression is either understood to be a metacharacter with its special meaning, or a regular character with its literal meaning. Together, they can be used to identify textual material of a given pattern, or process a number of instances of it, where a match can vary from a precise equality to a very general similarity to the pattern.
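The pattern above can be tried directly with Python's re module; the sample text is, of course, made up.

```python
import re

# (?<=\.)   lookbehind: the match must follow a period
#  {2,}     two or more spaces (the match itself)
# (?=[A-Z]) lookahead: the match must precede an upper-case letter
pattern = re.compile(r"(?<=\.) {2,}(?=[A-Z])")

text = "One.  Two. three.   Four."
print(pattern.findall(text))   # ['  ', '   ']
print(pattern.sub(" ", text))  # collapses each matched run to one space
```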

A regular expression processor translates a regular expression into an internal representation and then examines the target text string, parsing it to identify substrings that are members of the language the regular expression describes.

History

Regular expressions originated in 1956, when mathematician Stephen Cole Kleene described regular languages using his mathematical notation called regular sets.

Basic concepts

Boolean "or": a vertical bar separates alternatives, so gray|grey can match "gray" or "grey".
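A quick demonstration of alternation with Python's re module (the sample string is invented):

```python
import re

# The vertical bar tries each alternative in turn.
print(re.findall(r"gray|grey", "gray and grey, but not groy"))  # ['gray', 'grey']
```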