Computer science
Computer science deals with the theoretical foundations of information and computation, together with practical techniques for the implementation and application of these foundations. The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks such as the abacus have existed since antiquity, aiding in computations such as multiplication and division. Further, algorithms for performing computations have existed since antiquity, even before sophisticated computing equipment was created. Blaise Pascal designed and constructed the first working mechanical calculator, Pascal's calculator, in 1642.[3] In 1673 Gottfried Leibniz demonstrated a digital mechanical calculator, called the 'Stepped Reckoner'.[4] He may be considered the first computer scientist and information theorist because, among other reasons, he documented the binary number system.

Educational psychology Educational psychology is the study of human learning. The study of learning processes, both cognitive and affective, allows researchers to understand individual differences in behavior, personality, intellect, and self-concept. The field of educational psychology relies heavily on testing, measurement, assessment, evaluation, and training to enhance educational activities and learning processes.[1] This can involve studying instructional processes within the classroom setting. Educational psychology can in part be understood through its relationship with other disciplines; it draws on the study of memory, conceptual processes, and individual differences (via cognitive psychology) in conceptualizing new strategies for learning processes in humans. Educational psychology is a fairly new and growing field of study, yet its roots date back to the time of Aristotle and Plato.

Data mining Data mining is the process of extracting and discovering patterns in large data sets involving methods at the intersection of machine learning, statistics, and database systems.[1] It is an interdisciplinary subfield of computer science and statistics with the overall goal of extracting information (with intelligent methods) from a data set and transforming the information into a comprehensible structure for further use.[1][2][3][4] Data mining is the analysis step of the "knowledge discovery in databases" process, or KDD.[5] Aside from the raw analysis step, it also involves database and data management aspects, data pre-processing, model and inference considerations, interestingness metrics, complexity considerations, post-processing of discovered structures, visualization, and online updating.[1] The manual extraction of patterns from data has occurred for centuries.
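
The kind of pattern extraction described above can be made concrete with a toy sketch. The Python fragment below performs the basic counting step behind frequent-itemset mining, finding item pairs that co-occur in at least a minimum number of transactions; the data, names, and threshold are invented for illustration, not taken from the article.

    # Count item pairs that co-occur in at least min_support transactions,
    # the elementary pattern-discovery step of frequent-itemset mining.
    from collections import Counter
    from itertools import combinations

    transactions = [
        {"bread", "milk"},
        {"bread", "butter", "milk"},
        {"butter", "milk"},
        {"bread", "butter"},
    ]
    min_support = 2  # hypothetical threshold for this toy data

    pair_counts = Counter()
    for basket in transactions:
        for pair in combinations(sorted(basket), 2):
            pair_counts[pair] += 1

    frequent = {p: n for p, n in pair_counts.items() if n >= min_support}
    print(frequent)  # all three pairs appear in exactly 2 transactions here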

HTML HTML, or HyperText Markup Language, is the standard markup language used to create web pages. HTML is written in the form of HTML elements consisting of tags enclosed in angle brackets (like <html>). HTML tags most commonly come in pairs like <h1> and </h1>, although some tags represent empty elements and so are unpaired, for example <img>. The first tag in a pair is the start tag, and the second tag is the end tag (they are also called opening and closing tags). The purpose of a web browser is to read HTML documents and compose them into visible or audible web pages. Web browsers can also refer to Cascading Style Sheets (CSS) to define the look and layout of text and other material. In 1980, physicist Tim Berners-Lee, then a contractor at CERN, proposed and prototyped ENQUIRE, a system for CERN researchers to use and share documents. Further development under the auspices of the IETF was stalled by competing interests.
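
The pairing of start and end tags described above can be observed programmatically. This is a minimal Python sketch using the standard library's html.parser module; the sample markup is invented for illustration.

    # Report start and end tags in a tiny HTML document. <h1>...</h1>
    # arrives as a matched pair, while <img> yields only a start tag,
    # since it is an empty (unpaired) element.
    from html.parser import HTMLParser

    class TagReporter(HTMLParser):
        def handle_starttag(self, tag, attrs):
            print("start tag:", tag)

        def handle_endtag(self, tag):
            print("end tag:", tag)

    TagReporter().feed("<html><h1>Hello</h1><img src='logo.png'></html>")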

Cognitive science Cognitive science is the interdisciplinary scientific study of the mind and its processes.[1] It examines what cognition is, what it does, and how it works. It includes research on intelligence and behavior, especially focusing on how information is represented, processed, and transformed (in faculties such as perception, language, memory, reasoning, and emotion) within nervous systems (human or other animal) and machines (e.g. computers). Cognitive science consists of multiple research disciplines, including psychology, artificial intelligence, philosophy, neuroscience, linguistics, and anthropology.[2] It spans many levels of analysis, from low-level learning and decision mechanisms to high-level logic and planning, and from neural circuitry to modular brain organization. The fundamental concept of cognitive science is "that thinking can best be understood in terms of representational structures in the mind and computational procedures that operate on those structures."[2]

Machine learning Machine learning is a subfield of computer science[1] that evolved from the study of pattern recognition and computational learning theory in artificial intelligence.[1] Machine learning explores the construction and study of algorithms that can learn from and make predictions on data.[2] Such algorithms operate by building a model from example inputs in order to make data-driven predictions or decisions,[3]:2 rather than following strictly static program instructions. Machine learning is closely related to, and often overlaps with, computational statistics, a discipline that also specializes in prediction-making. It also has strong ties to mathematical optimization, which delivers methods, theory, and application domains to the field. Machine learning is employed in a range of computing tasks where designing and programming explicit, rule-based algorithms is infeasible. Tom M. Mitchell provided a widely quoted, more formal definition: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E."
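
As a concrete illustration of building a model from example inputs and then making a data-driven prediction, here is a minimal Python sketch that fits a least-squares line to a handful of invented (x, y) pairs; the data and variable names are assumptions for illustration only.

    # "Learn" the parameters of y = a*x + b from examples rather than
    # hand-coding them, then predict the output for an unseen input.
    import numpy as np

    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])  # example inputs
    y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])  # observed outputs

    a, b = np.polyfit(x, y, deg=1)           # least-squares fit
    print(f"learned model: y = {a:.2f}*x + {b:.2f}")
    print("prediction for x = 5:", a * 5 + b)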

Category:Image processing Image processing is the application of signal processing techniques to the domain of images, that is, two-dimensional signals such as photographs or video. Image processing typically involves filtering an image using various types of filters. Related categories: computer vision and imaging.

Algorithm In mathematics and computer science, an algorithm (/ˈælɡərɪðəm/ AL-gə-ri-dhəm) is a step-by-step procedure for calculations. A classic example is Euclid's algorithm for calculating the greatest common divisor (g.c.d.) of two numbers a and b held in locations named A and B. The algorithm proceeds by successive subtractions in two loops: IF the test B ≥ A yields "yes" (or true) (more accurately, the number b in location B is greater than or equal to the number a in location A) THEN the algorithm specifies B ← B − A (meaning the number b − a replaces the old b). Similarly, IF A > B, THEN A ← A − B. The process terminates when (the contents of) B is 0, yielding the g.c.d. in A. While there is no generally accepted formal definition of "algorithm," an informal definition could be "a set of rules that precisely defines a sequence of operations"; Boolos & Jeffrey (1974, 1999) offer a similar informal characterization.
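
The subtraction-based procedure just described translates directly into code. Below is a short Python rendering of that exact two-loop structure; the variable names mirror the locations A and B from the description.

    # Euclid's algorithm by successive subtraction: while B is nonzero,
    # subtract the smaller value from the larger. The g.c.d. ends up in A.
    def gcd_by_subtraction(a, b):
        A, B = a, b
        while B != 0:
            if B >= A:
                B = B - A   # B gets B minus A
            else:
                A = A - B   # A gets A minus B
        return A

    print(gcd_by_subtraction(1071, 462))  # 21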

Anthropology Anthropology /ænθrɵˈpɒlədʒi/ is the study of humankind, past and present,[1][2] that draws and builds upon knowledge from the social and biological sciences, as well as the humanities and the natural sciences.[3][4] Since the work of Franz Boas and Bronisław Malinowski in the late 19th and early 20th centuries, anthropology in Great Britain and the US has been distinguished from ethnology[5] and from other social sciences by its emphasis on cross-cultural comparisons, long-term in-depth examination of context, and the importance it places on participant-observation or experiential immersion in the area of research. In those European countries that did not have overseas colonies, ethnology (a term coined and defined by Adam F. Kollár) had greater currency. The term anthropology originates from the Greek anthrōpos (ἄνθρωπος), "human being" (understood to mean humankind or humanity), and -λογία -logia, "study."

Internet The Internet is a global system of interconnected computer networks that use the standard Internet protocol suite (TCP/IP) to link several billion devices worldwide. It is a network of networks[1] that consists of millions of private, public, academic, business, and government networks of local to global scope, linked by a broad array of electronic, wireless, and optical networking technologies. The Internet carries an extensive range of information resources and services, such as the inter-linked hypertext documents and applications of the World Wide Web (WWW), the infrastructure to support email, and peer-to-peer networks for file sharing and telephony. Most traditional communications media, including telephony and television, are being reshaped or redefined by the Internet, giving birth to new services such as voice over Internet Protocol (VoIP) and Internet Protocol television (IPTV).
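
As a small, concrete view of the TCP/IP suite mentioned above, the Python sketch below opens a TCP connection and sends a minimal HTTP request over it; example.com is a placeholder host, and running this requires network access.

    # Open a TCP connection (transport layer of the TCP/IP suite) and
    # issue a minimal HTTP/1.1 request, reading back the raw response.
    import socket

    with socket.create_connection(("example.com", 80), timeout=5) as sock:
        sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n"
                     b"Connection: close\r\n\r\n")
        response = b""
        while chunk := sock.recv(4096):
            response += chunk

    print(response.split(b"\r\n", 1)[0].decode())  # status line, e.g. HTTP/1.1 200 OK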

Data (computing) At its heart, a single datum is a value stored at a specific location. In an alternate usage, binary files (which are not human-readable) are sometimes called "data" as distinguished from human-readable "text".[4] The total amount of digital data in 2007 was estimated to be 281 billion gigabytes (281 exabytes).[5][6] To store data bytes in a file, they have to be serialized in a "file format". Typically, programs are stored in special file types, different from those used for other data. Keys in data provide the context for values, and sorting data on a key gives it useful properties: retrieving a small subset from a much larger unsorted set implies searching through the data sequentially, whereas data sorted on a key can be searched far more efficiently (for example, by binary search). Computer main memory, or RAM, is arranged as an array of "sets of electronic on/off switches", or locations, beginning at address 0. The advent of databases introduced a further layer of abstraction for persistent data storage.
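
To make the serialization and key/value points concrete, here is a brief Python sketch that stores the same record both as human-readable text (JSON) and as packed binary data; the record and file names are invented for illustration.

    # Serialize one record two ways. The JSON form is self-describing
    # through its keys; the binary form is compact but meaningless
    # without knowing the layout (a 4-byte int then an 8-byte float).
    import json
    import struct

    record = {"id": 42, "temperature": 21.5}

    with open("record.json", "w") as f:      # human-readable "text"
        json.dump(record, f)

    with open("record.bin", "wb") as f:      # binary "data"
        f.write(struct.pack("<id", record["id"], record["temperature"]))

    with open("record.bin", "rb") as f:      # reading back needs the layout
        rec_id, temp = struct.unpack("<id", f.read())
    print(rec_id, temp)  # 42 21.5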

E-unification Just as unification plays a crucial role in the study of term rewrite systems (see Chapter III of LMCS), one has E-unification for work with ETRSs. Indeed, the equational theorem prover EQP that William McCune used to verify the Robbins Conjecture (discussed at the end of Chapter III of LMCS) uses AC-unification. In the following, E denotes a set of equations. One can no longer assume that an E-unifiable pair has a most general E-unifier; whether one exists depends on the choice of E. An E-unifiable pair s, t is unitary if there is an E-unifier µ of s and t which is more general than any E-unifier of s and t. Using this we can give the unification types for sets of equations E: E is unitary if every E-unifiable s, t is unitary; E is finitary if every E-unifiable s, t is unitary or finitary, and some E-unifiable s, t is finitary. Given a term s, one works with its E-equivalence class (which is actually a coset of terms).
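
A standard example, written out in LaTeX here for concreteness, shows why a most general E-unifier can fail to exist: under a commutativity axiom a single pair can have two incomparable unifiers.

    \[
      E = \{\, x + y \approx y + x \,\}, \qquad
      \text{unify } x + y \text{ with } a + b .
    \]
    Both $\sigma_1 = \{x \mapsto a,\ y \mapsto b\}$ and
    $\sigma_2 = \{x \mapsto b,\ y \mapsto a\}$ are $E$-unifiers, and
    neither is more general than the other, so the pair has no most
    general $E$-unifier; commutative unification is finitary.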

Neuroplasticity Contrary to conventional thought, brain functions are not confined to certain fixed locations. Neuroplasticity, also known as brain plasticity, is an umbrella term that encompasses both synaptic plasticity and non-synaptic plasticity; it refers to changes in neural pathways and synapses that are due to changes in behavior, environment, and neural processes, as well as changes resulting from bodily injury.[1] Neuroplasticity has replaced the formerly held position that the brain is a physiologically static organ, and explores how, and in which ways, the brain changes throughout life.[2] Neuroplasticity occurs on a variety of levels, ranging from cellular changes due to learning to large-scale changes involved in cortical remapping in response to injury. Merzenich and DT Blake (2002, 2005, 2006) used cortical implants to study the evolution of plasticity in both the somatosensory and auditory systems.
