
Modal Logic

1. What is Modal Logic? Narrowly construed, modal logic studies reasoning that involves the use of the expressions ‘necessarily’ and ‘possibly’. A list describing the best known of these logics follows. 2. The most familiar logics in the modal family are constructed from a weak logic called K (after Saul Kripke). Necessitation Rule: If A is a theorem of K, then so is □A. (In these principles we use ‘A’ and ‘B’ as metavariables ranging over formulas of the language.) The operator ◊ (for ‘possibly’) can be defined from □ by letting ◊A = ~□~A. The system K is too weak to provide an adequate account of necessity. The axiom (M), □A → A, claims that whatever is necessary is the case. Many logicians believe that M is still too weak to correctly formalize the logic of necessity and possibility. S4 is the system that results from adding the axiom (4), □A → □□A, to M. In S4 and S5, strings of modal operators collapse as follows:

S4: □□…□ = □ and ◊◊…◊ = ◊
S5: 00…□ = □ and 00…◊ = ◊, where each 0 is either □ or ◊

The system B (for the logician Brouwer) is formed by adding the axiom (B), A → □◊A, to M.
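The truth conditions behind □ and ◊ can be made concrete with a toy Kripke model. The sketch below is illustrative only: the worlds `w1`–`w3`, the accessibility relation `R`, and the valuation `V` are invented for the example, not drawn from the entry.

```python
# A toy Kripke model: a set of worlds, an accessibility relation R,
# and a valuation V giving the worlds at which each atom is true.
worlds = {"w1", "w2", "w3"}
R = {("w1", "w2"), ("w1", "w3"), ("w2", "w2"), ("w3", "w3")}
V = {"p": {"w2", "w3"}}

def box(atom, w):
    """Box(atom) holds at w iff atom is true at every world accessible from w."""
    return all(v in V[atom] for (u, v) in R if u == w)

def diamond(atom, w):
    """Diamond(atom) = ~Box~(atom): atom is true at some accessible world."""
    return any(v in V[atom] for (u, v) in R if u == w)

print(box("p", "w1"))      # True: p holds at both successors w2 and w3
print(diamond("p", "w1"))  # True
```

Note that `box` is vacuously true at a world with no accessible worlds, which is one way to see why K alone does not validate □A → A.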

Epistemic Logic First published Wed Jan 4, 2006 Epistemic logic is the logic of knowledge and belief. It provides insight into the properties of individual knowers, has provided a means to model complicated scenarios involving groups of knowers, and has improved our understanding of the dynamics of inquiry. 1. Epistemic logic gets its start with the recognition that expressions like ‘knows that’ or ‘believes that’ have systematic properties that are amenable to formal study. Modern treatments of the logic of knowledge and belief grow out of the work of a number of philosophers and logicians writing from 1948 through the 1950s. While this article deals with modern developments, epistemic logic has a venerable history. Contemporary epistemic logic may appear quite technical and removed from traditional epistemological reflections. For the most part, epistemic logic focuses on propositional knowledge: an agent knows that A, and similarly for belief, for some arbitrary proposition A. In the standard possible-worlds semantics, propositions are modeled as sets of worlds, that is, as elements of P(W), where P denotes the powerset operation.
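The possible-worlds reading of knowledge can be sketched in a few lines. Everything below (the worlds, the indistinguishability partition, the particular propositions) is an invented illustration, not an example from the entry.

```python
from itertools import combinations

W = {"w1", "w2", "w3"}
# The agent cannot tell w1 from w2; w3 is distinguishable from both.
indist = {"w1": {"w1", "w2"}, "w2": {"w1", "w2"}, "w3": {"w3"}}

def powerset(s):
    """P(W): every subset of W, i.e. every proposition over W."""
    return [set(c) for r in range(len(s) + 1) for c in combinations(sorted(s), r)]

def knows(A, w):
    """K A holds at w iff A is true at every world the agent cannot rule out."""
    return indist[w] <= A

print(len(powerset(W)))           # 8 propositions over a 3-world space
print(knows({"w1", "w2"}, "w1"))  # True
print(knows({"w1"}, "w1"))        # False: the agent cannot rule out w2
```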

Epistemology 1. The Varieties of Cognitive Success There are many different kinds of cognitive success, and they differ from one another along various dimensions. Exactly what these various kinds of success are, how they differ from each other, how they are explanatorily related to each other, and how they can be achieved or obstructed are all matters of controversy. This section provides some background to these various controversies. 1.1 What Kinds of Things Enjoy Cognitive Success? Cognitive successes can differ from each other by virtue of qualifying different kinds of things. Some of the recent controversies concerning the objects of cognitive success concern the metaphysical relations among the cognitive successes of various kinds of objects: Does the cognitive success of a process involve anything over and above the cognitive success of each state in the succession of states that comprise the execution of that process? 1.2 Constraints and Values 1.3 Substantive and Structural

Belief First published Mon Aug 14, 2006; substantive revision Sun Nov 21, 2010 Contemporary analytic philosophers of mind generally use the term “belief” to refer to the attitude we have, roughly, whenever we take something to be the case or regard it as true. To believe something, in this sense, needn't involve actively reflecting on it: Of the vast number of things ordinary adults believe, only a few can be at the fore of the mind at any single time. Most contemporary philosophers characterize belief as a “propositional attitude”. 1.1 Representationalism It is common to think of believing as involving entities—beliefs—that are in some sense contained in the mind. It is also common to suppose that beliefs play a causal role in the production of behavior. One strand of representationalism, endorsed by Fodor, takes mental representations to be sentences in an internal language of thought. 1.1.1 Supposing representations are structured, then, what kind of structure do they have?

The Mathematics of Boolean Algebra First published Fri Jul 5, 2002; substantive revision Fri Feb 27, 2009 Boolean algebra is the algebra of two-valued logic with only sentential connectives, or equivalently of algebras of sets under union and complementation. The rigorous concept is that of a certain kind of algebra, analogous to the mathematical notion of a group. 1. A Boolean algebra (BA) is a set A together with binary operations + and · and a unary operation −, and elements 0, 1 of A such that the following laws hold: commutative and associative laws for addition and multiplication, distributive laws both for multiplication over addition and for addition over multiplication, and the following special laws:

x + (x · y) = x
x · (x + y) = x
x + (−x) = 1
x · (−x) = 0

These laws are better understood in terms of the basic example of a BA, consisting of a collection A of subsets of a set X closed under the operations of union, intersection, and complementation with respect to X, with members ∅ and X.
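The special laws can be checked mechanically on the basic example. The sketch below takes X = {1, 2, 3} (an arbitrary choice) and verifies the four special laws for every pair of subsets, with union, intersection, and set complement playing the roles of +, ·, and −.

```python
from itertools import combinations

X = {1, 2, 3}
subsets = [set(c) for r in range(len(X) + 1) for c in combinations(sorted(X), r)]

def comp(a):
    """Complement with respect to X plays the role of the unary operation -."""
    return X - a

for x in subsets:
    for y in subsets:
        assert x | (x & y) == x   # x + (x . y) = x   (absorption)
        assert x & (x | y) == x   # x . (x + y) = x   (absorption)
    assert x | comp(x) == X       # x + (-x) = 1, with X as the element 1
    assert x & comp(x) == set()   # x . (-x) = 0, with the empty set as 0
print("all four special laws hold on the powerset of X")
```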

The Algebra of Logic Tradition First published Mon Mar 2, 2009; substantive revision Fri May 1, 2009 The algebra of logic as introduced by Boole in his Mathematical Analysis of Logic (1847) was designed to provide an algorithmic alternative (via a slight modification of ordinary algebra) to the traditional catalog approach of Aristotelian logic. However, three-fourths of the way through this book, after finishing his discussion of Aristotelian logic, Boole started to develop the general tools that would be used in his Laws of Thought (1854) to greatly extend Aristotelian logic by permitting an argument to have many premises and to involve many classes. To handle the infinitely many possible logical arguments of this expanded logic, he presented theorems that provided key tools for an algorithmic analysis (a catalog was no longer feasible). 1. 1847—The Beginnings of the Modern Versions of the Algebra of Logic 2. 1854—Boole's Final Presentation of his Algebra of Logic 3. The Reflexive Law (A=A)

Diagrams 1. Introduction Diagrams or pictures probably rank among the oldest forms of human communication. Challenging a long-standing prejudice against diagrammatic representation, those working on multi-modal reasoning have taken different kinds of approaches, which we may categorize into three distinct groups. We have the following goals for this entry. For further discussion, we need to clarify two related but distinct uses of the word ‘diagram’: diagram as internal mental representation and diagram as external representation. External diagrammatic representations: These are constructed by the agent in a medium in the external world (paper, etc.), but are meant as representations by the agent. As we will see below, logicians focus on external diagrammatic systems, the imagery debate among philosophers of mind and cognitive scientists is mainly about internal diagrams, and research on the cognitive role of diagrams touches on both forms. 2.1 Euler Diagrams [Figure 1: Euler Diagrams]
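The logical content an Euler diagram carries through containment and disjointness can be mimicked with plain set operations. The sketch below encodes the syllogism Barbara ("All A are B; all B are C; therefore all A are C") with invented extensions for A, B, C, and D.

```python
# "All A are B" is drawn as circle A inside circle B; set inclusion
# plays the same role here. The extensions are illustrative only.
A = {"socrates"}
B = {"socrates", "plato"}            # All A are B:  A <= B
C = {"socrates", "plato", "zeno"}    # All B are C:  B <= C

assert A <= B and B <= C
assert A <= C    # the conclusion "All A are C" is forced, as nesting shows

# "No A are D" corresponds to two disjoint circles:
D = {"thales"}
assert not (A & D)
print("Barbara checks out")
```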

Second-order and Higher-order Logic First published Thu Dec 20, 2007; substantive revision Wed Mar 4, 2009 Second-order logic is an extension of first-order logic where, in addition to quantifiers such as “for every object (in the universe of discourse),” one has quantifiers such as “for every property of objects (in the universe of discourse).” This augmentation of the language increases its expressive strength, without adding new non-logical symbols, such as new predicate symbols. For classical extensional logic (as in this entry), properties can be identified with sets, so that second-order logic provides us with the quantifier “for every set of objects.” There are two approaches to the semantics of second-order logic. 1. In symbolic logic, the formula (Px → Px) will be true, no matter what object in the universe of discourse is assigned to the variable x. In first-order languages, there are some things we can say, and some that we cannot.

∃x Px → ∃x(Px & ∀y(Py → (y = x ∨ x < y)))
∀X[X0 & ∀y(Xy → XSy) → ∀y Xy] (the second-order axiom of induction)
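Second-order quantifiers range over sets of individuals, so on a finite domain they can be checked by brute force over the powerset. The domain `D` and the capped successor `S` below are an invented finite structure, used only to illustrate what it means to quantify over every property.

```python
from itertools import combinations

D = list(range(5))

def S(n):
    """Successor, capped at 4 so that S never leaves the finite domain."""
    return min(n + 1, 4)

def induction_holds():
    """Check  forall X [ X(0) & forall y (X(y) -> X(S(y))) -> forall y X(y) ]
    by enumerating every subset X of D (every 'property' of individuals)."""
    for r in range(len(D) + 1):
        for X in map(set, combinations(D, r)):
            closed = 0 in X and all(S(y) in X for y in X)
            if closed and not all(y in X for y in D):
                return False
    return True

print(induction_holds())  # True: this structure satisfies the axiom
```

The brute-force loop over `combinations` is exactly what standard (full) semantics demands: the set variable X ranges over the whole powerset of the domain.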

The Identity of Indiscernibles First published Wed Jul 31, 1996; substantive revision Sun Aug 15, 2010 The Identity of Indiscernibles is a principle of analytic ontology first explicitly formulated by Gottfried Wilhelm Leibniz in his Discourse on Metaphysics, Section 9 (Loemker 1969: 308). It states that no two distinct things exactly resemble each other. This is often referred to as ‘Leibniz's Law’ and is typically understood to mean that no two objects have exactly the same properties. The Identity of Indiscernibles is of interest because it raises questions about the factors which individuate qualitatively identical objects. 1. The Identity of Indiscernibles (hereafter called the Principle) is usually formulated as follows: if, for every property F, object x has F if and only if object y has F, then x is identical to y. ∀F(Fx ↔ Fy) → x=y. The converse of the Principle, x=y → ∀F(Fx ↔ Fy), is called the Indiscernibility of Identicals. Another useful distinction is between the pure and the impure.
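Relative to a fixed stock of properties, the antecedent ∀F(Fx ↔ Fy) becomes a finite check. The objects and property extensions below are invented; the point of the example is that whether the Principle looks true depends on how rich the stock of properties is.

```python
objects = ["a", "b", "c"]
# Each property is identified with its extension: the set of objects having it.
properties = {
    "red":   {"a", "b"},
    "round": {"a", "b"},
}

def indiscernible(x, y):
    """True iff x and y agree on every property F in the stock."""
    return all((x in ext) == (y in ext) for ext in properties.values())

print(indiscernible("a", "b"))  # True: distinct objects this stock cannot separate
print(indiscernible("a", "c"))  # False: they differ on "red"
# With such an impoverished, purely qualitative stock the Principle fails for
# a and b; adding an impure property like "is identical to a" would restore it.
```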

Possible Objects First published Fri Apr 15, 2005; substantive revision Mon Dec 16, 2013 Deep theorizing about possibility requires theorizing about possible objects. One popular approach regards the notion of a possible object as intertwined with the notion of a possible world. 1. Possible objects—possibilia (sing. possibile)—are objects that are possible. There is a widespread conservative view on objects, which says that any object is an actual object.

(1) Any object is an actual existing object;
(2) Any object is an actual object, that is, it is either an actual existing object or an actual non-existing object;
(3) Any object is an existing object, that is, it is either an actual existing object or a non-actual existing object;
(4) Any object that is actual is an existing object;
(5) Any object that exists is an actual object.

We shall first examine possibilism. 2. In general, any object that actually exists possibly exists. 2.1 Possibilist Realism 2.1.1 Modal Counterpart Theory

Interpretations of Probability First published Mon Oct 21, 2002; substantive revision Mon Dec 19, 2011 ‘Interpreting probability’ is a commonly used but misleading characterization of a worthy enterprise. The so-called ‘interpretations of probability’ would be better called ‘analyses of various concepts of probability’, and ‘interpreting probability’ is the task of providing such analyses. Whatever we call it, the project of finding such interpretations is an important one. 1. Probability theory was a relative latecomer in intellectual history. Kolmogorov's axiomatization assigns probabilities to members of a field F of subsets of a set Ω:

(Non-negativity) P(A) ≥ 0, for all A ∈ F.
(Normalization) P(Ω) = 1.
(Finite additivity) P(A ∪ B) = P(A) + P(B) for all A, B ∈ F such that A ∩ B = ∅.

Call P a probability function, and (Ω, F, P) a probability space. The assumption that P is defined on a field guarantees that these axioms are non-vacuously instantiated, as are the various theorems that follow from them. We could instead attach probabilities to members of a collection S of sentences of a formal language, closed under (countable) truth-functional combinations, with a corresponding counterpart axiomatization.
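On a finite space the probability axioms can be verified exhaustively. The sample space and weights below (a fair coin) are an illustrative choice; F is taken to be the full powerset of Ω, which is in particular a field.

```python
from itertools import combinations

Omega = frozenset({"H", "T"})
F = [frozenset(c) for r in range(len(Omega) + 1)
     for c in combinations(sorted(Omega), r)]
weight = {"H": 0.5, "T": 0.5}

def P(A):
    """Probability of an event A as the sum of its outcome weights."""
    return sum(weight[w] for w in A)

assert all(P(A) >= 0 for A in F)   # Non-negativity
assert P(Omega) == 1               # Normalization
for A in F:
    for B in F:
        if not (A & B):            # disjoint events only
            assert abs(P(A | B) - (P(A) + P(B))) < 1e-12  # Finite additivity
print("(Omega, F, P) is a probability space")
```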

Dutch Book Arguments First published Wed Jun 15, 2011 The Dutch Book argument (DBA) for probabilism (namely, the view that an agent's degrees of belief should satisfy the axioms of probability) traces to Ramsey's work in “Truth and Probability”. He mentioned only in passing that an agent who violates the probability axioms would be vulnerable to having a book made against him, and this has led to considerable debate and confusion, both about exactly what Ramsey intended to show and about whether, and how, a cogent version of the argument can be given. The basic idea behind the argument has also been applied in defense of a variety of principles, some of which place additional constraints on an agent's current beliefs, while others, such as Conditionalization, purport to govern how degrees of belief should evolve over time. 1. 1.1 The Probability Axioms and the Dutch Book Theorem Condition 2, which requires only that the truths of propositional logic are given the value one, is sometimes replaced with stronger requirements.
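The sure-loss construction at the heart of the argument is easy to exhibit. The credences below, which violate finite additivity for disjoint propositions A and B, are an invented example; each bet is at unit stakes and is priced at the agent's credence in it.

```python
# Credences violating additivity: cr(A) + cr(B) != cr(A or B), A and B disjoint.
cr = {"A": 0.3, "B": 0.3, "A or B": 0.8}

def agent_net(a_true, b_true):
    """Agent buys the bet on 'A or B' and sells the bets on A and B.
    Each bet has stake 1 and is priced at the agent's credence in it."""
    upfront = -cr["A or B"] + cr["A"] + cr["B"]
    payoff = ((1 if (a_true or b_true) else 0)
              - (1 if a_true else 0)
              - (1 if b_true else 0))
    return upfront + payoff

# A and B are disjoint, so the outcome (True, True) cannot occur.
for outcome in [(True, False), (False, True), (False, False)]:
    print(agent_net(*outcome))  # negative in every possible outcome: a sure loss
```

Because the betting payoffs cancel in every outcome, the agent's loss is fixed in advance by the gap between cr(A or B) and cr(A) + cr(B), which is exactly what the theorem's constructive direction exploits.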
