
Fundamentals


Harry Foundalis - The Bongard Problems. The domain in which I did my research in cognitive science is the Bongard Problems. These are problems in visual pattern recognition that first appeared in the appendix of a book published by the Russian scientist M. M. Bongard in 1967, in what was then the USSR. They became more widely known to the Western world when D. R. Hofstadter mentioned them in his book, “Gödel, Escher, Bach: an Eternal Golden Braid”, and speculated on how an automated system could be built to solve such problems. Rather than tiring the reader with words, I prefer to show what Bongard Problems (BP’s) are by presenting a — rather trivial — BP, below. There are six boxes on the left, and another six on the right. As mentioned, the above example is trivial. Did it take you more than a few seconds to see what is going on? Maybe you are quick — I don’t know — but clearly, solving BP’s fast is evidence for only some specific aspects of human intelligence.

Now, not all problems can be handled so easily.

Ideone.com | Online IDE & Debugging Tool >> C/C++, Java, PHP, Python, Perl and 40+ compilers and interpreters.

10 Papers Every Programmer Should Read (At Least Twice). I spent most of yesterday afternoon working on a paper I’m co-writing. It was one of those days when the writing came easily. I was moving from topic to topic, but then I realized that I was reaching too far backward – I was explaining things that I shouldn’t have had to explain to the audience I was trying to reach. When I first started writing, one of the pieces of advice I heard was that you should always imagine that you are writing to a particular person. It gets your juices going – you’re automatically in an explanatory state of mind, and you know what you can expect from your audience.

I was doing that, but I noticed that I was drifting. I was losing my sense of audience. The problem I was experiencing is only getting worse. So, I was thinking about this and trying not to get too glum. We’ve taken an interesting turn in the industry over the past ten years. Here’s the original list. On the criteria to be used in decomposing systems into modules – David Parnas.

Introduction to Algorithms | MIT Video Course.

Process calculus. In computer science, the process calculi (or process algebras) are a diverse family of related approaches for formally modelling concurrent systems. Process calculi provide a tool for the high-level description of interactions, communications, and synchronizations between a collection of independent agents or processes. They also provide algebraic laws that allow process descriptions to be manipulated and analyzed, and permit formal reasoning about equivalences between processes (e.g., using bisimulation).
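To give these ideas a slightly more concrete shape, here is a minimal sketch in Python (not from the excerpt above) of two independent processes that run in parallel and synchronize by exchanging a message over a named channel, the kind of interaction a process calculus describes algebraically. The channel name `greet` and the `sender`/`receiver` processes are invented purely for illustration.

```python
# A toy model of parallel composition and channel communication.
# This only gestures at what a process calculus formalizes: two
# processes run concurrently and synchronize on a shared channel.
# All names here (greet, sender, receiver) are made up for the example.
import threading
import queue

def sender(chan: queue.Queue) -> None:
    # Process P: emit a value on the channel, then terminate.
    chan.put("hello")

def receiver(chan: queue.Queue, results: list) -> None:
    # Process Q: block until a value arrives on the channel.
    results.append(chan.get())

if __name__ == "__main__":
    greet = queue.Queue(maxsize=1)      # the shared channel
    results = []
    # Parallel composition P | Q: start both processes together.
    p = threading.Thread(target=sender, args=(greet,))
    q = threading.Thread(target=receiver, args=(greet, results))
    p.start(); q.start()
    p.join(); q.join()
    print(results)                      # ['hello']
```

A real process calculus would go further and give algebraic laws for such compositions, so that two process descriptions can be proved equivalent (for example, by bisimulation) rather than merely run and observed.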

Leading examples of process calculi include CSP, CCS, ACP, and LOTOS.[1] More recent additions to the family include the π-calculus, the ambient calculus, PEPA, the fusion calculus and the join-calculus. Essential features: parallel composition of processes; specification of which channels to use for sending and receiving data; sequentialization of interactions; hiding of interaction points; recursion or process replication.

Turing completeness. In computability theory, a system of data-manipulation rules (such as a computer's instruction set, a programming language, or a cellular automaton) is said to be Turing complete or computationally universal if it can be used to simulate any single-taped Turing machine.

The concept is named after Alan Turing. A classic example is lambda calculus. Computability theory includes the closely related concept of Turing equivalence. Two computers P and Q are called Turing equivalent if P can simulate Q and Q can simulate P. Thus, a Turing-complete system is one that can simulate a Turing machine; and since, per the Church–Turing thesis, any real-world computer can be simulated by a Turing machine, such a system is Turing equivalent to a Turing machine. In colloquial usage, the terms "Turing complete" or "Turing equivalent" are used to mean that any real-world general-purpose computer or computer language can approximately simulate any other real-world general-purpose computer or computer language.
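As a small, hedged illustration of why the lambda calculus is the classic example, the sketch below encodes natural numbers as Church numerals in Python, using nothing but single-argument functions; addition and multiplication then fall out of function composition. The helper names (`zero`, `succ`, `plus`, `mult`, `to_int`) are our own, not anything from the excerpt.

```python
# Church numerals: encode a natural number n as a function that applies
# its argument f to x exactly n times. Everything here is just lambdas,
# hinting at how the untyped lambda calculus expresses computation
# without any built-in data types.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mult = lambda m: lambda n: lambda f: m(n(f))

def to_int(n):
    # Convert a Church numeral back to a Python int for inspection.
    return n(lambda k: k + 1)(0)

one = succ(zero)
two = succ(one)
three = plus(one)(two)

print(to_int(three))             # 3
print(to_int(mult(two)(three)))  # 6
```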

c2.com.

Turing machine. [Figure: an artistic representation of a Turing machine; the rules table is not shown.] A Turing machine is a hypothetical device that manipulates symbols on a strip of tape according to a table of rules. Despite its simplicity, a Turing machine can be adapted to simulate the logic of any computer algorithm, and is particularly useful in explaining the functions of a CPU inside a computer.

The "Turing" machine was invented in 1936 by Alan Turing[1] who called it an "a-machine" (automatic machine). The Turing machine is not intended as practical computing technology, but rather as a hypothetical device representing a computing machine. Turing machines help computer scientists understand the limits of mechanical computation. Turing gave a succinct definition of the experiment in his 1948 essay, "Intelligent Machinery". ...an unlimited memory capacity obtained in the form of an infinite tape marked out into squares, on each of which a symbol could be printed.
