Past events

Logic Colloquium

April 27, 2018, 4:10 PM (60 Evans Hall)

Vaughan Pratt
Professor Emeritus of Computer Science, Stanford University

The Class CAT of Locally Small Categories as a Functor-Free Framework for Foundations and Philosophy

Exploiting an evident circularity, we give three elementary properties of the category Set that uniquely determine it up to equivalence within the class CAT of locally small categories. The analogous three properties extend this result to many other categories of interest in algebra, topology, type theory, and philosophy via a framework which structures categories with nothing more than distinguished objects, each serving as either a primitive representative of a given sort such as cat or box, or a palette of sorted values for a given property such as color or mass. The notion of property thereby defined is as extensional as the notion of sort. In those categories catering for both sorts and properties, qualia of sort s for property p arise ambiguously as sort-s members of the palette p and as p-states of the primitive s, for example calico as a possible color for a cat. This framework provides a paradox-free interpretation of C.I. Lewis’s notion of qualia as well as serving the function of the pineal gland in Descartes’ mind-body dualism. While functors and natural transformations underpin the whole framework, we exploit the Yoneda embedding to hide them beneath the framework’s more elementary conceptual language.

Slides

Logic Colloquium

April 13, 2018, 4:00 PM (60 Evans)

Carlos Eduardo Areces
Professor of Computer Science, Universidad Nacional de Córdoba, Argentina

Henkin Completeness in Modal Logic

Saul Kripke published in 1959 a proof of completeness for first-order S5 modal logic [7] and in 1963 he extended the method to cover the propositional modal systems T, S4, S5, and B [8]. His method employed a generalization of Beth’s tableaux and completeness was established by showing how to derive a proof from a failed attempt to find a counter model.

In 1966, Kaplan criticized Kripke’s proof in his review of Kripke’s article [6] and suggested a different and, arguably, more elegant approach based on an adaptation of Henkin’s model-theoretic completeness proof for first-order logic [5]. (Actually, a completeness proof for S5 following Henkin’s ideas had already been published in 1959 in an article by Bayart [2], and other proofs were published at about the same time as Kaplan’s, by Makinson in 1966 [9] and Cresswell in 1967 [4].)

Henkin’s proof of completeness for first-order logic used two important ideas: 1) that a consistent set of formulas can be extended to a maximally consistent set of formulas, and 2) that existential quantifiers can be witnessed using constants, which can then be used to form the domain of the model. The first of these two ideas is fundamental in modern completeness proofs for classical propositional modal logics, which build a canonical model (satisfying all consistent formulas) that has as domain the (uncountable) set of all maximally consistent sets of formulas. The second idea (the use of witnesses) seemed less useful in the propositional setting — until the arrival of hybrid modal logics [1].
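To make the first idea concrete, here is a minimal sketch, not material from the talk, of Lindenbaum’s construction for classical propositional logic over three atoms; the formula list, the names, and the brute-force consistency test are illustrative assumptions chosen purely for the example.

```python
# Illustrative sketch of Lindenbaum's construction (Henkin's first idea):
# extend a consistent set of formulas to a maximally consistent one by scanning
# an enumeration and keeping every formula that preserves consistency.
# Formulas here are toy objects: (name, truth-function on assignments to three atoms).
from itertools import product

ATOMS = ["p", "q", "r"]

def assignments():
    for values in product([False, True], repeat=len(ATOMS)):
        yield dict(zip(ATOMS, values))

def is_consistent(formulas):
    # Consistent iff some assignment satisfies every formula in the set.
    return any(all(f(a) for _, f in formulas) for a in assignments())

def lindenbaum(start, enumeration):
    extended = list(start)
    for formula in enumeration:
        if is_consistent(extended + [formula]):
            extended.append(formula)
    return extended

enumeration = [
    ("~p", lambda a: not a["p"]),
    ("q", lambda a: a["q"]),
    ("p -> r", lambda a: (not a["p"]) or a["r"]),
    ("~r", lambda a: not a["r"]),
]
start = [("p", lambda a: a["p"])]
print([name for name, _ in lindenbaum(start, enumeration)])  # ['p', 'q', 'p -> r']
```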

One of the main characteristics of hybrid modal logics is the inclusion of nominals, which are special propositional symbols that name particular states in the model. This “naming” is achieved by restricting the interpretation of nominals to singleton subsets of the domain. Nominals can be used as witnesses for the modal existential operators and, in this way, the completeness construction needs to build just a single maximal, witnessed consistent set of formulas from which a countable model is built (see, e.g., [3, Chap. 7.3] and [10]). As a side effect, a particularly strong completeness result can be proved: it is possible to give a complete axiomatic system for the basic hybrid logic which remains complete, with respect to the corresponding class of models, under extension with a particular family of axioms.

[1] C. Areces and B. ten Cate, Hybrid logics, Handbook of Modal Logic (P. Blackburn, F. Wolter, and J. van Benthem, editors), Elsevier, 2006, pp. 821–868.

[2] A. Bayart, Quasi-adéquation de la logique modale de 2ème ordre, Logique et Analyse, vol. 2 (1959), no. 6/7, pp. 99–121.

[3] P. Blackburn, M. de Rijke, and Y. Venema, Modal Logic, Cambridge University Press, Cambridge, 2002.

[4] M. Cresswell, A Henkin completeness for T, Notre Dame Journal of Formal Logic, vol. 8 (1967), no. 3, pp. 186–190.

[5] L. Henkin, The completeness of the first-order functional calculus, The Journal of Symbolic Logic, vol. 14 (1949), no. 3, pp. 159–166.

[6] D. Kaplan, Review: Saul A. Kripke, Semantical analysis of modal logic I. Normal modal propositional calculi, The Journal of Symbolic Logic, vol. 31 (1966), no. 1, pp. 120–122.

[7] S. Kripke, A completeness theorem in modal logic, The Journal of Symbolic Logic, vol. 24 (1959), no. 1, pp. 1–14.

[8] S. Kripke, Semantical analysis of modal logic I. Normal modal propositional calculi, Zeitschrift für mathematische Logik und Grundlagen der Mathematik, vol. 9 (1963), no. 5–6, pp. 67–96.

[9] D. Makinson, On some completeness theorems in modal logic, Zeitschrift für mathematische Logik und Grundlagen der Mathematik, vol. 12 (1966), no. 1, pp. 379–384.

[10] B. ten Cate, Model theory for extended modal languages, PhD thesis, University of Amsterdam, 2005.

Logic Colloquium

March 23, 2018, 4:10 PM (60 Evans Hall)

Nam Trang
Visiting Assistant Professor, Department of Mathematics, University of California, Irvine

Forcing, elementary embeddings, and determinacy

Forcing and elementary embeddings are central topics in set theory. Most of what set theorists have focused on is the study of forcing and elementary embeddings over models of ZFC.

In this talk, we focus on forcing and elementary embeddings over models of the Axiom of Determinacy (AD). In particular, we focus on answering the following questions. Work in V, which models AD. Let P be a forcing poset and let g ⊆ P be V-generic. 1. Does V[g] model AD? 2. Is there an elementary embedding from V to V[g]?

Regarding question (1), we want to classify which forcings preserve AD. We show that forcings adding Cohen reals or random reals, as well as many other well-known forcings, do not preserve AD. We do, however, give an example of a forcing that preserves AD. Regarding question (2), an analogue of Kunen’s famous theorem for models of ZFC can be shown: suppose V = L(X) for some set X and V models AD; then there is no (nontrivial) elementary embedding from V to itself that moves some ordinal. We conjecture that there are no (nontrivial) elementary embeddings from V to itself. Woodin has shown that there are (P, g) that give rise to nontrivial elementary embeddings from V to V[g].

We present some of the results discussed above. There is still much work to do to completely answer these questions. This is an ongoing joint work with D. Ikegami.

Alfred Tarski Lectures

March 15, 2018, 4:00 PM (126 Barrows Hall)

Hugh Woodin
Professor of Philosophy and Mathematics, Harvard University

The Ultimate L Conjecture

Alfred Tarski Lectures

March 14, 2018, 4:00 PM (1 LeConte Hall)

Hugh Woodin
Professor of Philosophy and Mathematics, Harvard University

The HOD Dichotomy

Alfred Tarski Lectures

March 13, 2018, 4:00 PM (1 LeConte Hall)

Hugh Woodin
Professor of Philosophy and Mathematics, Harvard University

Ultimate L

Logic Colloquium

March 09, 2018, 4:10 PM (60 Evans Hall)

Ray Briggs
Professor of Philosophy, Stanford University

Real-Life Newcomb Problems

Decision theorists have spilled much ink over the difference between causal and evidential decision rules, whose recommendations differ in puzzle cases. Recently, a class of rivals to causal and evidential rules has emerged, aimed at solving more complicated puzzle cases.

But how realistic are the puzzle cases where these decision rules disagree? I argue that such cases are rare, and can often be handled without addressing the difference between decision rules, by changing other parts of the problem description.

Logic Colloquium

February 23, 2018, 4:10 PM (60 Evans Hall)

Douglas Marshall
Assistant Professor of Philosophy, Carleton College

An Approach to Two Puzzles Concerning the Applicability of Mathematics

In its applications to empirical science, mathematics allows us to represent and reason about the physical world. A similar phenomenon occurs internally to mathematics when one branch of mathematics allows us to represent and reason about the subject matter of another. I argue that taking seriously the close analogies between these two kinds of case will help us resolve two philosophical puzzles concerning the applicability of mathematics: 1. If (as some platonists contend) mathematics studies abstract objects, but the empirical sciences study concrete objects, then how is mathematics relevant to the empirical sciences? 2. How can one branch of mathematics be applied to another if (as some empiricists contend) mathematics has no factual content?

Logic Colloquium

February 09, 2018, 4:10 PM (60 Evans Hall)

Michael Beeson
Professor Emeritus, San Jose State University

Proof-checking Euclid

We used computer proof-checking methods to verify the correctness of our proofs of the propositions in Euclid Book I. We used axioms as close as possible to those of Euclid, in a language closely related to that used in Tarski’s formal geometry. We used proofs as close as possible to those given by Euclid, but filling Euclid’s gaps and correcting errors. Then we checked those proofs in the well-known and trusted proof checkers HOL Light and Coq. The talk will contain many geometrical diagrams and discuss both the geometry and the proof-checking. (This is joint work with Julien Narboux and Freek Wiedijk.)

Logic Colloquium

January 26, 2018, 4:10 PM (60 Evans Hall)

John MacFarlane
Professor of Philosophy, University of California, Berkeley

Is Logic a Normative Discipline?

Various philosophers, including Kant, Frege, and more recently Hartry Field and Greg Restall, have held that logic is in some interesting sense a normative discipline. Field, in particular, has argued that we need a normative characterization of logical validity in order to understand what proponents of alternative logics are disagreeing about. I will discuss what one needs to say in order to sustain an interesting version of this claim, and I will consider whether Field gives good reasons for accepting it.

Logic Colloquium

December 01, 2017, 4:10 PM (60 Evans Hall)

Marianna Antonutti
Marie Skłodowska-Curie Postdoctoral Fellow, Munich Center for Mathematical Philosophy

What is ordinary mathematics?

The term “ordinary mathematics” is used to denote a collection of areas that are, in some sense, central to the practice of most working mathematicians, typically taken to contain such fields as number theory, real and complex analysis, geometry, and algebra. This notion has been taken for granted by both philosophers and logicians; for example, reverse mathematics is often described as the study of what set existence axioms are necessary to prove theorems of ordinary mathematics, and the ability to recover ordinary mathematics has been considered a key measure of the success of a foundational framework at least since Russell. However, the precise extent of this notion has not been explicitly discussed. This talk will propose two ways in which this notion can be made more precise. According to the first, ordinary mathematics is that part of mathematics that concerns countable or countably representable objects. According to the second, ordinary mathematics is that part of mathematics that does not make essential use of intrinsically set theoretic methods or concepts. I will discuss potential counterexamples to both views, and assess the prospects for formulating a precise account of the notion of ordinary mathematics.

This is joint work with Benedict Eastaugh.

Logic Colloquium

November 17, 2017, 4:10 PM (60 Evans Hall)

Wesley H. Holliday
Associate Professor of Philosophy, UC Berkeley

Possibilities for Boolean, Heyting, and modal algebras

In this talk, we describe an alternative to the standard representation theory for Boolean, Heyting, and modal algebras descending from Stone (1934, 1937) and Jónsson and Tarski (1951). While the standard theory leads to the well-known “possible world semantics” in logic, the alternative theory forms the basis of the recently investigated “possibility semantics” in logic. We will give an overview of this line of research, making connections to longstanding open problems and highlighting the contributions of University of California students.

The talk will be based on the following papers:

W. H. Holliday, “Possibility Frames and Forcing for Modal Logic,” 2015.

G. Bezhanishvili and W. H. Holliday, “Locales, Nuclei, and Dragalin Frames,” 2016.

G. Bezhanishvili and W. H. Holliday, “A Semantic Hierarchy for Intuitionistic Logic,” 2017.

N. Bezhanishvili and W. H. Holliday, “Choice-Free Stone Duality,” 2017.

Logic Colloquium

November 03, 2017, 4:10 PM (60 Evans Hall)

Lenore Blum
Distinguished Career Professor of Computer Science, Carnegie Mellon University

Alan Turing and the Other Theory of Computation

Most logicians and theoretical computer scientists are familiar with Alan Turing’s 1936 seminal paper setting the stage for the foundational (discrete) theory of computation. Most, however, remain unaware of Turing’s 1948 seminal paper introducing the notion of condition, setting the stage for a natural theory of complexity for what I call the “other theory of computation” emanating from the classical tradition of numerical analysis, equation solving and the continuous mathematics of calculus. This talk will recognize Alan Turing’s work in the foundations of numerical computation (in particular, his 1948 paper “Rounding-Off Errors in Matrix Processes”), how it provides a unifying concept for the two major traditions of the Theory of Computation, and its influence on complexity theory today.
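As a small illustration of the notion of condition (my own example using NumPy, not taken from the talk), the condition number of a matrix bounds how much relative errors in the data can be amplified when solving a linear system:

```python
# Illustrative sketch: a nearly singular 2x2 system, where a tiny perturbation of
# the right-hand side changes the solution drastically, in line with the large
# condition number.
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])   # nearly singular, hence ill-conditioned
b = np.array([2.0, 2.0001])

x = np.linalg.solve(A, b)                 # exact-data solution: [1, 1]
b_perturbed = b + np.array([0.0, 1e-4])   # tiny change in the data
x_perturbed = np.linalg.solve(A, b_perturbed)

print("condition number:", np.linalg.cond(A))      # roughly 4e4
print("relative change in b:", np.linalg.norm(b_perturbed - b) / np.linalg.norm(b))
print("relative change in x:", np.linalg.norm(x_perturbed - x) / np.linalg.norm(x))
```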

Logic Colloquium

October 20, 2017, 4:10 PM (60 Evans Hall)

Thomas Icard
Assistant Professor of Philosophy, Stanford University

Some Philosophical Uses of Randomized Computation

The aim of this talk will be to convey some of the ways that familiar ideas and tools from randomized and probabilistic computation might bear on issues of philosophical interest, focusing especially on questions about cognition, representation, and reasoning. A first question is when it would ever make sense for an agent to employ randomization in the course of decision making. Drawing on ideas from game theory, reinforcement learning, and statistics, we tentatively propose a unified answer to the question. A second set of questions revolves around the idea of characterizing an agent’s implicit causal knowledge of the world by appeal to the explicit causal structure of a probabilistic algorithm. This second topic raises novel questions about the logic of counterfactuals beyond the usual propositional setting. Much of this is work in progress.
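One standard game-theoretic illustration of why randomization can be rational (a toy example of my own, not from the talk) is Matching Pennies, where the only equilibrium strategy is mixed:

```python
# Illustrative sketch: in Matching Pennies, any deterministic policy can be
# exploited for a sure loss, while the uniform mixed strategy guarantees an
# expected payoff of 0 against every opponent move.
import random

def matcher_payoff(my_move, opponent_move):
    return 1 if my_move == opponent_move else -1

def mixed_strategy(rng):
    return rng.choice(["H", "T"])

rng = random.Random(0)
trials = 100_000
for opponent_move in ["H", "T"]:
    avg = sum(matcher_payoff(mixed_strategy(rng), opponent_move)
              for _ in range(trials)) / trials
    print(opponent_move, round(avg, 3))  # both averages are close to 0.0
```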

Logic Colloquium

October 06, 2017, 4:10 PM (60 Evans Hall)

Yann Pequignot
Postdoctoral Fellow, Department of Mathematics, UCLA

Finite versus infinite: an insufficient shift

Analytic sets enjoy a classical representation theorem based on well-founded relations. I will explain a similar representation theorem for Σ¹₂ sets due to Marcone which is based on an intriguing topological graph: the Shift Graph. The chromatic number of this graph is 2, but its Borel chromatic number is infinite. We use this representation theorem to show that the Shift Graph is not minimal among the graphs of Borel functions which have infinite Borel chromatic number. While this answers negatively the primary outstanding question from (Kechris, Solecki and Todorcevic; 1999), our proof unfortunately does not construct any explicit example of a Borel function whose graph has infinite Borel chromatic number and admits no homomorphism from the Shift Graph.

Logic Colloquium

September 22, 2017, 4:10 PM (60 Evans Hall)

Nadja Hempel
Assistant Adjunct Professor of Mathematics, UCLA

Mekler construction in generalized stability

Given a so-called nice graph (no triangles, no squares, for any choice of two distinct vertices there is a third vertex which is connected to one and not the other), Mekler considered the 2-nilpotent subgroup generated by the vertices of the graph in which two elements given by vertices commute if and only if there is an edge between them. These groups form an interesting collection of examples from a model-theoretic point of view. It was shown that such a group is stable if and only if the corresponding graph is stable, and Baudisch generalized this fact to the simple theory context. In a joint work with Chernikov, we were able to verify this result for k-dependent and NTP2 theories. This leads to the existence of groups which are (k+1)-dependent but not k-dependent, providing the first algebraic objects witnessing the strictness of this hierarchy.
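For concreteness, here is a small sketch (the 5-cycle test graph is my own choice, not from the talk) of checking the “nice graph” conditions quoted above:

```python
# Illustrative sketch: a graph is "nice" in the sense quoted above if it has no
# triangles, no squares (4-cycles), and for any two distinct vertices some third
# vertex is adjacent to exactly one of them.
from itertools import combinations

def is_nice(vertices, edges):
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    for u, v in combinations(vertices, 2):
        common = adj[u] & adj[v]
        if v in adj[u] and common:   # an edge with a common neighbour is a triangle
            return False
        if len(common) >= 2:         # two common neighbours give a 4-cycle
            return False
        separating = (adj[u] ^ adj[v]) - {u, v}
        if not separating:           # no third vertex sees exactly one of u, v
            return False
    return True

# The 5-cycle satisfies all three conditions.
five_cycle = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(is_nice(range(5), five_cycle))  # True
```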

Logic Colloquium

September 08, 2017, 4:10 PM (60 Evans Hall)

Cameron Freer
Founder, Borelian Corporation and Chief Scientist, Remine LLC

Model theory of invariant probabilistic constructions

There is a long history of studying the logic obtained by assigning probabilities, instead of truth values, to first-order formulas. In a 1964 paper, Gaifman studied probability distributions on countable structures that are invariant under renaming of the underlying set – which he called “symmetric measure-models”, and which are essentially equivalent to what today are known as S∞-invariant measures. In that paper, he asked the question of which first-order theories admit invariant measures concentrated on the models of the theory.

We answer this question of Gaifman, a key first step towards understanding the model theory of these measures, which can be thought of as “probabilistic structures”. In this talk, we will also discuss related questions, such as how many probabilistic structures are models of a given theory, and when probabilistic structures are almost surely isomorphic to a single classical model.

Joint work with Nathanael Ackerman and Rehana Patel.

Logic Colloquium

August 25, 2017, 4:10 PM (60 Evans Hall)

Andrew Marks
U.C. Los Angeles

A constructive solution to Tarski’s circle squaring problem

In 1925, Tarski posed the problem of whether a disc in ℝ² can be partitioned into finitely many pieces which can be rearranged by isometries to form a square of the same area. Unlike the Banach-Tarski paradox in ℝ³, it can be shown that two Lebesgue measurable sets in ℝ² cannot be equidecomposed by isometries unless they have the same measure. Hence, the disc and square must necessarily be of the same area.

In 1990, Laczkovich showed that Tarski’s circle squaring problem has a positive answer using the axiom of choice. We give a completely constructive solution to the problem and describe an explicit (Borel) way to equidecompose a circle and a square. This answers a question of Wagon.

Our proof has three main ingredients. The first is work of Laczkovich in Diophantine approximation. The second is recent progress in a research program in descriptive set theory to understand how the complexity of a countable group is related to the Borel cardinality of the equivalence relations generated by its Borel actions. The third ingredient is ideas coming from the study of flows in networks.

This is joint work with Spencer Unger.

May 06, 2017, 9:30 AM (North Gate Hall, Room 105)

Jeremy Avigad, Johan van Benthem, Ronald Fagin, Barbara Partee

Logic at UC Berkeley

A two-day conference in mathematical logic and related areas organized by The Group in Logic and the Methodology of Science at UC Berkeley (logic.berkeley.edu). The conference is partly occasioned by the fact that the Group in Logic turns sixty this year.

In 1957, a group of faculty members, most of them from the departments of Mathematics and Philosophy, initiated a pioneering interdisciplinary graduate program leading to the degree of Ph.D. in Logic and the Methodology of Science. The Group has fostered interdisciplinary work in which logic has interacted with mathematics, philosophy, statistics, computer science, linguistics, physics and other disciplines.

While mathematical logic at UC Berkeley cannot be identified only with the Group in Logic, the Group has played a vital role in Berkeley’s worldwide prominence in mathematical logic and significantly contributed to making Berkeley a mecca since the fifties for people interested in mathematical logic and its applications. A full list of all those researchers in logic who taught at UC Berkeley, or studied at UC Berkeley, or visited Berkeley for shorter or longer periods would result in a who’s who of mathematical logic.

While marking an important moment for logic at UC Berkeley, the conference will be forward looking rather than merely celebratory. We have invited eight internationally prominent scholars to talk about the future of mathematical logic in their respective areas of specialization.

The first day of the conference will have four invited speakers in the so-called “foundational” areas: set theory, model theory, recursion theory, and proof theory. The second day will have four invited speakers in areas where mathematical logic plays a prominent role, namely philosophy of logic and mathematics, formal semantics for natural languages, modal logic, and logic in computer science. The line-up is given below.

May 5 (141 McCone Hall 9AM-2PM; 3 LeConte Hall 2-5PM)

introductory remarks (9-9:30am): Paolo Mancosu (UC Berkeley), “A brief history of the Group in Logic and the Methodology of Science” (slides)

set theory (9:30-10:45am): John Steel (UC Berkeley), “Absolutely ordinal definable sets” (slides)

model theory (11am-12:15pm): Ehud Hrushovski (Hebrew University, Oxford University), “Reflections on model theory and foundations”

recursion theory (2:15-3:30pm): Denis Hirschfeldt (University of Chicago), “Computability Theory” (slides)

proof theory (3:45-5:00pm): Michael Rathjen (Leeds University), “On relating type theories to (intuitionistic) set theories” (slides)

May 6 (105 North Gate Hall)

philosophy of logic and mathematics (9:30-10:45am): Jeremy Avigad (Carnegie Mellon University), “Modularity of Mathematics” (slides)

formal semantics for natural languages (11am-12:15pm): Barbara Partee (University of Massachusetts at Amherst), “The Intertwining Influences of Logic, Philosophy, and Linguistics in the Development of Formal Semantics and Pragmatics” (slides)

modal logic (2:15-3:30pm): Johan van Benthem (University of Amsterdam, Stanford University, Tsinghua University), “Modal logic, on the cusp of philosophy and mathematics” (handout)

logic in computer science (3:45-5:00pm): Ronald Fagin (IBM Almaden Research Center), “Applying logic to practice in computer science” (slides)

May 05, 2017, 9:00 AM (McCone Hall, Room 141 from 9 AM to 2 PM; LeConte Hall, Room 3 from 2 PM to 5 PM)

Denis Hirschfeldt, Ehud Hrushovski, Michael Rathjen, John Steel

Logic at UC Berkeley

Logic Colloquium

April 21, 2017, 4:10 PM (60 Evans Hall)

Jana Marikova
Assistant Professor of Mathematics, Western Illinois University

Convexly valued o-minimal fields

O-minimal structures are ordered structures in which the definable sets behave in a particularly tame fashion. For example, definable one-variable functions are piecewise continuous and differentiable. A prototypical example is the ordered real field in which the definable sets are precisely the semialgebraic sets.

Valuations yield a way of understanding an (expansion of) a field in terms of two, often simpler, associated structures, namely its residue field and value group.

We shall investigate o-minimal fields with valuations that are determined by convex subrings. In particular, we shall see that o-minimal fields with convex subrings that give rise to o-minimal residue fields have various desirable properties, including quantifier elimination in a suitable language.

Slides

Alfred Tarski Lectures

April 14, 2017, 4:10 PM (50 Birge Hall)

Lou van den Dries
Professor, Department of Mathematics, University of Illinois at Urbana-Champaign

Model Theory of Transseries: Results and Open Problems

The book Asymptotic Differential Algebra and Model Theory of Transseries (arXiv:1509.02588) by Aschenbrenner, van der Hoeven, and myself will soon appear as number 195 in the Annals of Mathematics Studies. It contains the main results of twenty years of investigating 𝕋. One of these results is easy to state: the theory of this differential field is completely axiomatized by the requirements of being a Liouville closed H-field with small derivation and the intermediate value property for differential polynomials. I will explain the meaning of these terms, further results on 𝕋, and some consequences for what is definable in 𝕋.

I will also discuss some of the many attractive open problems in this area.

Alfred Tarski Lectures

April 12, 2017, 4:10 PM (50 Birge Hall)

Lou van den Dries
Professor, Department of Mathematics, University of Illinois at Urbana-Champaign

Orders of Infinity and Transseries

The “orders of infinity” of the title are du Bois-Reymond’s growth rates of functions, put on a firm basis by Hardy and Hausdorff. This led to the notion of a Hardy field (Bourbaki). Other ways of describing these orders of infinity are transseries (formal series containing powers of the variable x as well as exponential and logarithmic terms like e^x and log x), and Conway’s surreal numbers. These topics are all closely related, with the differential field 𝕋 of transseries taking center stage.

Alfred Tarski Lectures

April 10, 2017, 4:10 PM (50 Birge Hall)

Lou van den Dries
Professor, Department of Mathematics, University of Illinois at Urbana-Champaign

Model Theory as a Geography of Mathematics

I like to think of model theory as a geography of mathematics, especially of its “tame” side. Here tame roughly corresponds to geometric as opposed to combinatorial-arithmetic. In this connection I will discuss Tarski’s work on the real field, and the notion of o-minimality that it suggested.

A structure M carries its own mathematical territory with it, via interpretability: its own posets, groups, fields, and so on. Understanding this “world according to M” can be rewarding. Stability-like properties of M forbid certain combinatorial patterns, thus providing highly intrinsic and robust information about this world.

Logic Colloquium

April 07, 2017, 4:10 PM (60 Evans Hall)

Pierre Simon
Assistant Professor of Mathematics, UC Berkeley

On omega-categorical structures and their automorphism groups

The study of omega-categorical structures and their automorphism groups has been for several decades a very active area of research and a point of contact between model theory, group theory, combinatorics, and descriptive set theory. A countable structure is omega-categorical if its automorphism group acts oligomorphically on it, that is, has finitely many orbits on n-tuples for all n. In this talk, I will introduce this subject by presenting the various angles of approach and talk about a selection of topics, in particular concerning countable subgroups of automorphism groups. I will present recent work with Itay Kaplan on the existence of finitely generated dense subgroups and ask a number of questions.

Logic Colloquium

March 24, 2017, 4:10 PM (60 Evans Hall)

Françoise Point
FNRS-FRS (UMons)

On expansions of (ℤ, +, 0)

In the special case of the structure (ℤ, +, 0), we will consider the following model-theoretic question: given a well-behaved first-order structure, which kinds of predicates can one add while retaining model-theoretic properties of the structure one started with, such as stability-like properties, quantifier elimination in a reasonable language, or decidability. We will review recent results on stability properties of expansions of (ℤ, +, 0) by a unary predicate, and we will make the comparison with former results on the decidability and model-completeness of the corresponding expansions of (ℤ, +, 0, <) (and, if time permits, with the corresponding expansions of the field of real numbers).

Slides

Logic Colloquium

March 10, 2017, 4:10 PM (60 Evans Hall)

Caroline Terry
Brin Postdoctoral Fellow, University of Maryland, College Park, and Research Fellow, Simons Institute for the Theory of Computing

Structure and enumeration theorems for hereditary properties of ℒ-structures

The study of structure and enumeration for hereditary graph properties has been a major area of research in extremal combinatorics. Over the years such results have been extended to many combinatorial structures other than graphs. This line of research has developed an informal strategy for how to prove these results in various settings. In this talk we use tools from model theory to formalize this strategy. In particular, we generalize certain definitions, tools, and theorems which appear commonly in approximate structure and enumeration theorems in extremal combinatorics. Our results apply to classes of finite ℒ-structures which are closed under isomorphism and model-theoretic substructure, where ℒ is any finite relational language.

Slides

Logic Colloquium

February 24, 2017, 4:10 PM (60 Evans Hall)

Sean Walsh
Department of Logic and Philosophy of Science, University of California, Irvine

Realizability Semantics for Quantified Modal Logic

In 1985, Flagg produced a model of first-order Peano arithmetic and a modal principle known as Epistemic Church’s Thesis, which roughly expresses that any number-theoretic function known to be total is recursive. In some recent work ([1]), this construction was generalized to allow a construction of models of quantified modal logic on top of just about any of the traditional realizability models of various intuitionistic systems, such as fragments of second-order arithmetic and set theory. In this talk, we survey this construction and indicate what is known about the reduct of these structures to the non-modal language.

References: [1] B. G. Rin and S. Walsh. Realizability semantics for quantified modal logic: Generalizing Flagg’s 1985 construction. The Review of Symbolic Logic, 9(4):752–809, 2016.

Logic Colloquium

February 10, 2017, 4:10 PM (60 Evans Hall)

Clifton Ealy
Associate Professor of Mathematics, Western Illinois University

Residue field domination in real closed valued fields

In an algebraically closed valued field, as shown by Haskell, Hrushovski, and Macpherson, the residue field and the value group control the rest of the structure: tp(L/Ck(L)Γ(L)) will have a unique extension to M ⊇ C, as long as the residue field and value group of M are independent from those of L. (Here k(L) and Γ(L) denote the residue field and value group, respectively, of L, and C is a maximal field.)

This behaviour is striking, because it is what typically occurs in a stable structure (where types over algebraically closed sets have unique extensions to independent sets) but valued fields are far from stable, due to the order on the value group.

Real closed valued fields are even further from stable since the main sort is ordered. One might expect the analogous theorem about real closed valued fields to be that tp(L/M) is implied by tp(L/Ck(L)Γ(L)) together with the order type of L over M. In fact we show that the order type is unnecessary, that just as in the algebraically closed case, one has that tp(L/M) is implied by tp(L/k(L)Γ(L)). This is joint work with Haskell and Marikova.

No knowledge of valued fields will be assumed.

Slides

Logic Colloquium

January 27, 2017, 4:10 PM (60 Evans Hall)

Ilijas Farah
Professor of Mathematics, York University

Model theory of operator algebras

Unexpected and deep connections between the theory of operator algebras and logic have been discovered in the past fifteen years. Even more remarkably, diverse areas of logic that are generally (and arguably) thought to have little connection to one another have found natural - and sometimes necessary - applications to operator algebras. In this talk I will emphasize applications of model theory to operator algebras.

Logic Colloquium

December 02, 2016, 4:10 PM (60 Evans Hall)

Prakash Panangaden
Professor, School of Computer Science, McGill University

A Logical Characterization of Probabilistic Bisimulation

Labelled Markov Processes (LMPs) are a combination of traditional labelled transition systems and Markov processes. Discrete versions of such systems have been around for a while and were thoroughly explored by Larsen and Skou in the late 1980s and early 1990s. Our contribution has been to extend this study to systems with continuous state spaces.

The main technical contribution that I will discuss in this talk is a definition of probabilistic bisimulation for such systems and a logical characterization for it. The surprise is that a very simple modal logic with no negative constructs or infinitary conjunctions suffices to characterize bisimulation, even with uncountable branching. This is quite different from the traditional situation with Kripke structures where van Benthem and Hennessy and Milner characterized bisimulation using negation in an essential way.
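To fix ideas in the discrete case mentioned above, here is a minimal sketch in the Larsen-Skou style (the states, label, transition probabilities, and candidate partition are my own toy example, not from the talk) of checking whether a given partition is a probabilistic bisimulation:

```python
# Illustrative sketch for the discrete case: an equivalence relation, given as a
# partition into classes, is a probabilistic bisimulation if related states assign
# equal probability to every class, for every label.
def is_probabilistic_bisimulation(labels, trans, classes):
    def prob_into(s, a, cls):
        return sum(p for t, p in trans.get((s, a), {}).items() if t in cls)
    return all(
        abs(prob_into(s, a, cls) - prob_into(t, a, cls)) < 1e-9
        for block in classes for s in block for t in block
        for a in labels for cls in classes
    )

trans = {
    ("s0", "a"): {"t": 1.0},
    ("s1", "a"): {"t": 0.5, "s1": 0.5},
}
classes = [{"s0", "s1"}, {"t"}]
# False: s0 and s1 assign different probabilities to the class {t}.
print(is_probabilistic_bisimulation({"a"}, trans, classes))
```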

Logic Colloquium

November 18, 2016, 4:10 PM (60 Evans Hall)

Phokion G. Kolaitis
UC Santa Cruz & IBM Research - Almaden

Dependence Logic vs. Constraint Satisfaction

During the past decade, dependence logic has emerged as a formalism suitable for expressing and analyzing notions of dependence and independence that arise in different scientific areas. The sentences of dependence logic have the same expressive power as those of existential second-order logic, hence dependence logic captures the complexity class NP on the collection of all finite structures over a relational vocabulary. We identify a natural fragment of universal dependence logic and show that, in a precise sense, this fragment captures constraint satisfaction, a ubiquitous algorithmic problem encountered across different areas of computer science. This tight connection between dependence logic and constraint satisfaction contributes to the descriptive complexity of constraint satisfaction and elucidates the expressive power of universal dependence logic.

This is joint work with Lauri Hella, University of Tampere, Finland.

Slides

Working Group in the History and Philosophy of Logic, Mathematics, and Science

November 16, 2016, 6:00 PM (234 Moses Hall)

Juliette Kennedy
Associate Professor, Department of Mathematics and Statistics, University of Helsinki

Squeezing arguments and strong logics

G. Kreisel has suggested that squeezing arguments, originally formulated for the informal concept of first order validity, should be extendable to second order logic, although he points out obvious obstacles. We develop this idea in the light of more recent advances and delineate the difficulties across the spectrum of extensions of first order logics by generalised quantifiers and infinitary logics. This is joint work with Jouko Väänänen.

Logic Colloquium

November 04, 2016, 4:10 PM (60 Evans Hall)

Mai Gehrke
Senior Research Director, CNRS & Université Paris Diderot - Paris 7

Canonical extensions and applications

In 1951-52 Jónsson and Tarski published a two-part paper in which they cast Stone duality for Boolean algebras in algebraic form as so-called canonical extensions, and used this formulation to extend Stone duality to Boolean algebras with operators (BAOs). That is, Boolean algebras with additional operations which preserve join in each coordinate. In this way they obtained the relational semantics of modal logic. Since then the theory of canonical extensions, and thus duality for additional structure on lattices and Boolean algebras, has been generalised and better understood. These tools have played a central role in the discovery of the connection between formal language theory and Stone duality, in particular in the fact that the profinite completion of any abstract algebra is the Stone dual of the Boolean algebra of recognisable languages over the algebra, equipped with certain residuation operations.

In this talk, I will give an introduction to these ideas and an overview of recent developments.

Logic Colloquium

October 21, 2016, 4:10 PM (60 Evans Hall)

Erich Grädel
Professor of Mathematical Foundations of Computer Science, RWTH Aachen University

Back and Forth Between Team Semantics, Games, and Tarski Semantics

Team semantics, due to Wilfrid Hodges, is the mathematical basis of modern logics of dependence and independence in which, following a proposal by Väänänen, dependencies are considered as atomic statements, and not as annotations of quantifiers. In team semantics, a formula is evaluated not against a single assignment of values to the free variables, but against a set of such assignments.
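As a toy example of evaluation against a team (the team and the variable names are my own, not from the talk), here is the semantics of the dependence atom =(x, y), which holds in a team X iff any two assignments in X that agree on x also agree on y:

```python
# Illustrative sketch: in team semantics a formula is evaluated against a *set*
# of assignments (a "team"). The dependence atom =(x, y) holds in a team iff
# the value of x functionally determines the value of y across the team.
def dependence_atom_holds(team, x, y):
    return all(s[y] == t[y] for s in team for t in team if s[x] == t[x])

team = [
    {"x": 0, "y": 1, "z": 5},
    {"x": 0, "y": 1, "z": 7},
    {"x": 1, "y": 2, "z": 5},
]
print(dependence_atom_holds(team, "x", "y"))  # True: y is determined by x
print(dependence_atom_holds(team, "z", "y"))  # False: z = 5 occurs with y = 1 and y = 2
```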

Logics with team semantics have strong expressive power and some surprising properties. They can be analyzed in several different ways. First-order formulae with team semantics and dependencies can be translated into sentences of existential second-order logic, with an additional predicate for the team (and with classical Tarski semantics). We can thus understand the power of a specific logic of dependence or independence by identifying the fragment of existential second-order logic to which it corresponds. Further, logics can be understood through their games. In general the model-checking games for logics with team semantics turn out to be second-order reachability games.

We shall then have a closer look at the specific case of logics with inclusion dependencies, and reveal their connection with safety games and logics with greatest fixed-points.

Slides

Logic Colloquium

October 07, 2016, 4:10 PM (60 Evans Hall)

Dag Westerståhl
Professor of Theoretical Philosophy and Logic, Stockholm University

A Carnapian approach to the meaning of logical constants: the case of modal logic

Does a consequence relation in a language L, as a syntactic relation between sets of L-sentences and L-sentences, fix the meaning of the logical constants in L? In his 1943 book The Formalization of Logic, Carnap worried that this seems to fail even for classical propositional logic CL. However, by applying the viewpoint of modern formal semantics, which in particular requires meaning assignment to be compositional, Carnap’s worries about CL can be allayed. More importantly, it provides a precise framework for asking Carnap’s question about any logic. (To what extent) does classical first-order consequence determine the meaning of ∀? What about other (generalized) quantifiers? What about the intuitionistic meaning of the connectives? Or □ in (classical) modal logic? [1] answers the first of these questions. In this talk I focus on the last one. Despite obvious similarities between ∀ and □, there are important differences. Roughly, while there is essentially just one first-order logic, there are innumerable modal logics. That makes answers to Carnap-style questions about the meaning of □ more intricate, and more interesting, than in the case of ∀. Although we in a sense cover extremely familiar ground, the perspective one gets, in particular of neighborhood semantics for modal logic, seems novel. This is joint work with Denis Bonnay.

[1] D. Bonnay and D. Westerståhl, ‘Compositionality solves Carnap’s Problem’, Erkenntnis 81 (4), 2016 (721–739).

Slides

Logic Colloquium

September 23, 2016, 4:10 PM (60 Evans Hall)

Andreas Blass
Professor of Mathematics, University of Michigan, and Visiting Scientist, Simons Institute for the Theory of Computing

Between countable and continuum

I plan to give a general introduction to cardinal characteristics of the continuum, including several examples of characteristics and of the sorts of results that the theory obtains. Afterward, if time permits, I’ll describe some recent work on a specific characteristic suggested by a familiar theorem from calculus. The talk will be aimed at people who are not set theorists.

Logic Colloquium

September 09, 2016, 4:10 PM (60 Evans Hall)

Sam Buss
Professor of Mathematics and Computer Science, UC San Diego

Frege Proofs, Extended Frege Proofs, and Total NP Search Problems

This talk will discuss some of the fundamental problems of proof complexity and their connection to computational complexity. The talk focuses on the proof complexity of the Frege and extended Frege proof systems for propositional logic. Recent work includes new proofs for the pigeonhole principle (!), Frankl’s theorem, the AB=I theorem, and the Kneser-Lovasz theorem on chromatic numbers. These problems are closely related to Total NP Search Problems (TFNP), a complexity class that lies midway between P and NP. The consistency statements for exponentially large Frege and extended Frege proofs can be used to characterize the provably total functions of second-order theories of bounded arithmetic for polynomial space and exponential time.

Logic Colloquium

August 26, 2016, 4:10 PM (60 Evans Hall)

Martin Otto
Professor in Mathematical Logic, Department of Mathematics, Technische Universitaet Darmstadt (currently visiting the Simons Institute for Theory of Computing, Berkeley)

Back & Forth Between Malleable Finite Models

This talk attempts to survey some model-theoretic techniques, model transformations and constructions - especially for logics of a modal/guarded flavour - that can replace classical compactness arguments, which are not available in restriction to finite models or other non-elementary classes of interest. Even where these algebraic-combinatorial and game-oriented techniques concur with the usually smoother infinitary methods of classical model theory they occasionally offer additional constructive or quantitative information. Some of the underlying combinatorial constructions to be discussed involve rather generic finite coverings for graphs and hypergraphs and are of independent interest. They can also be used towards alternative proofs of powerful extension properties for partial isomorphisms, which also have applications for the finite model theory of related logics.

Special Logic Seminar

May 10, 2016, 4:10 PM (234 Moses Hall)

Johan van Benthem
Amsterdam and Stanford

Decidable Versions of First-Order Predicate Logic

There are two ways of finding decidability inside first-order logic: one is by restricting attention to language fragments, the other is by generalizing the usual semantics. In this talk, I explain a recent generalized semantics by Aldo Antonelli, which leads to a decidable version of predicate logic that induces an effective translation into the Guarded Fragment. I will discuss this proposal and the resulting program.

Also, I make a comparison with existing decidable first-order semantics via general assignment models and via games, and with the move in modal logic from relational semantics to neighborhood semantics. Finally, I discuss the resulting landscape of weak decidable first-order logics, and its links with decidable fragments of the first-order language. This suggests many open problems that I will flag throughout.

Ref. H. Andréka, J. van Benthem & I. Németi, 2016, ‘On a New Semantics for First‐Order Predicate Logic’, Journal of Philosophical Logic, to appear.

Logic Colloquium

May 06, 2016, 4:10 PM (60 Evans Hall)

Omer Ben Neria
Hedrick Assistant Adjunct Professor of Mathematics, UCLA

The distance between HOD and V

The pursuit of better understanding of the universe of set theory V motivated an extensive study of definable inner models M whose goal is to serve as good approximations to V.
A common property of these inner models is that they are contained in HOD, the universe of hereditarily ordinal definable sets. Motivated by the question of how “close” HOD is to V, we consider various related forcing methods and survey known and new results. This is a joint work with Spencer Unger.

Logic Colloquium

April 22, 2016, 4:10 PM (60 Evans Hall)

Kai Wehmeier
Professor of Logic and Philosophy of Science, UC Irvine

The Role of Variables in Predicate Logic: Frege vs. Tarski

Standard presentations of predicate logic, following Tarski, segment a quantified statement such as ∃x(P(x) ∧ R(x, a)) into the constituents ∃x and P(x) ∧ R(x, a). At least on the face of it, Frege held a different view about the logical form of quantification; for him the same sentence decomposes into the constituents ∃x(… x … x …) and P( ) ∧ R( , a). In order to adjudicate between these views, I will develop some of the details of Fregean predicate logic, with special emphasis on the compositionality of its semantics. I will then compare Tarskian and Fregean treatments of variables and draw some philosophical conclusions from this comparison. In particular, I will address the extensionality of predicate logic and Kit Fine’s so-called antinomy of the variable.

Alfred Tarski Lectures

April 08, 2016, 4:10 PM (20 Barrows)

William W. Tait
Professor Emeritus, Department of Philosophy and CHSS, University of Chicago

Cut-Elimination for Subsystems of Classical Second-Order Number Theory: Cut-Elimination for Π¹₁-CA with the ω-Rule—and Beyond(?)

I will present a simplification of the proof of cut-eliminability of Gaisi Takeuti and Mariko Yasugi for this system. In particular, I will avoid the use of Takeuti’s ordinal diagrams. (I do use the finite part of the Veblen hierarchy, though.) Given time and sufficient conviction, I may speak about the possibility of extending the result to Π¹₂-CA and beyond.

Alfred Tarski Lectures

April 06, 2016, 4:10 PM (20 Barrows)

William W. Tait
Professor Emeritus, Department of Philosophy and CHSS, University of Chicago

Cut-Elimination for Subsystems of Classical Second-Order Number Theory: The Predicative Case

I will present the classical results of Gentzen and Schuette for first-order and ramified second-order number theory.

Alfred Tarski Lectures

April 04, 2016, 4:10 PM (20 Barrows)

William W. Tait
Professor Emeritus, Department of Philosophy and CHSS, University of Chicago

On Skepticism about the Ideal

There is a historic skepticism about mathematics that hangs on the fact that the objects of mathematics, structures and their elements—numbers, functions, sets, etc., are ideal, i.e. that empirical facts, facts about the natural world, have no relevance to the truth of propositions about them.

Of course, the view that ‘existence’ simply means empirical existence, so that the term is misused as applied to ideal things, can be countered only on pragmatic grounds: that we use the term in other contexts and that it is very useful there. The rejection of ideal existence becomes meaningful only if one has a transcendental ground on which to stand and judge applications of the term. I believe that the ground, at least implicitly, has been a wrong thesis about how language works, namely the view that genuine reference to objects presupposes a non-linguistic interaction with them. And that is what I want to talk about. My argument draws on a reading of Wittgenstein’s Philosophical Investigations, a reading which is more positive than a more common one according to which he, himself, was a skeptic.

Logic Colloquium

March 18, 2016, 4:10 PM (60 Evans Hall)

Guram Bezhanishvili
Professor of Mathematical Sciences, New Mexico State University

The algebra of topology: Tarski’s program 70 years later

In a slogan, Tarski’s program can be described as “creating an algebraic apparatus adequate for the treatment of portions of point-set topology” (McKinsey-Tarski, 1944). The “algebraic apparatus” can be developed in several ways. For example, we can look at the algebra of open sets of a topological space X or the powerset algebra of X equipped with topological closure (or interior). Both of these approaches are not only closely related, but are also widely used in logic. The first approach yields a topological representation of Heyting algebras, and hence provides topological completeness of intuitionistic logic (one of the first completeness results for intuitionistic logic, established by Tarski in the 1930s). On the other hand, the second approach provides a topological representation of closure algebras, thus yielding topological completeness of Lewis’s modal system S4. McKinsey and Tarski observed a close connection between these two approaches, which allowed them to prove that Gödel’s translation of intuitionistic logic into S4 is full and faithful. I will review this line of research of Tarski and his collaborators, discuss further advances in this direction, obtained in the second half of the twentieth century, and finish with the latest new developments.
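For orientation, and as a standard definition rather than material specific to this talk: a closure algebra in the McKinsey-Tarski sense is a Boolean algebra with an operator C satisfying the Kuratowski-style axioms below; reading C as the modal ◇ gives the S4 axioms, and the interior operator I a = ¬C¬a plays the role of □.

```latex
% Closure-algebra axioms (Kuratowski / McKinsey--Tarski), with C read as the S4 diamond:
\[
  C\,0 = 0, \qquad a \le C\,a, \qquad C\,C\,a = C\,a, \qquad C(a \vee b) = C\,a \vee C\,b.
\]
% Dually, the interior operator $I\,a = \neg\, C\, \neg a$ satisfies
% $I\,1 = 1$, $I\,a \le a$, $I\,a \le I\,I\,a$, and $I(a \wedge b) = I\,a \wedge I\,b$,
% matching the S4 axioms for the box.
```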

Slides

Logic Colloquium

March 04, 2016, 4:10 PM (60 Evans Hall)

Katrin Tent
Professor of Mathematics and Mathematical Logic, Universität Münster

Describing finite groups by short sentences

We say that a class of finite structures for a finite first-order signature is r-compressible for an unbounded function r : ℕ → ℕ⁺ if each structure G in the class has a first-order description of size at most O(r(|G|)). We show that the class of finite simple groups is log-compressible, and the class of all finite groups is log³-compressible. The result relies on the classification of finite simple groups, the bi-interpretability of the twisted Ree groups with finite difference fields, the existence of profinite presentations with few relators for finite groups, and group cohomology.

Logic Colloquium

February 19, 2016, 4:10 PM (60 Evans Hall)

Christoph Benzmüller
Stanford University and Freie Universität Berlin

A Success Story of Higher-Order Theorem Proving in Computational Metaphysics

I will report on the discovery and verification of the inconsistency in Gödel’s ontological argument with reasoning tools for higher-order logic. Despite the popularity of the argument since the appearance of Gödel’s manuscript in the early 70’s, the inconsistency of the axioms used in the argument remained unnoticed until 2013, when it was detected automatically by the higher-order theorem prover LEO-II.

Understanding and verifying the refutation generated by LEO-II turned out to be a time-consuming task. Its completion required the reconstruction of the refutation in the Isabelle/HOL proof assistant. This effort also led to a more efficient way of automating higher-order modal logic S5 with a universal accessibility relation within the logic embedding technique we utilize in our work.

The development of an improved syntactical hiding for the logic embedding technique allows the refutation to be presented in a human-friendly way, suitable for non-experts in the technicalities of higher-order theorem proving. This should bring us a step closer to wider adoption of logic-based artificial intelligence tools by philosophers.

If time permits, I will also point to alternative, ongoing applications of the logic embeddings technique, such as the encoding of Zalta’s theory of abstract objects or the mechanization of Scott’s free logic.

Slides

Logic Colloquium

February 05, 2016, 4:10 PM (60 Evans Hall)

Vijay D’Silva
Google Research

Interpolant Construction and Applications

In the last decade, algorithms for the construction of interpolants from proofs have found numerous practical applications such as the analysis of circuits, generation of Floyd/Hoare proofs of program correctness, and parallelization of constraint solvers. The first part of this talk will trace how interpolation has spread from mathematical logic to commercial software via computational complexity theory and computer aided verification. The second part will delve into details of the algorithms for construction of interpolants in logics and theories. In the propositional case, the space of interpolant constructions has a semi-lattice structure that determines the logical strength and size of the interpolants obtained. In the case of first-order theories, this structure is still being unravelled. The talk will conclude with a review of open theoretical and empirical problems concerning interpolant construction.
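For readers new to interpolation, here is a minimal sketch (the formulas are mine, chosen for illustration) of the defining property: a Craig interpolant for a valid implication A → B is a formula over the shared vocabulary that is implied by A and implies B.

```python
# Illustrative sketch: brute-force check that I = p is a Craig interpolant for
# the valid implication (p & q) -> (p | r); p is the only variable shared by A and B.
from itertools import product

def valid_implication(premise, conclusion, variables):
    return all(conclusion(a) or not premise(a)
               for a in (dict(zip(variables, v))
                         for v in product([False, True], repeat=len(variables))))

VARS = ["p", "q", "r"]
A = lambda a: a["p"] and a["q"]
B = lambda a: a["p"] or a["r"]
I = lambda a: a["p"]            # uses only the shared variable

print(valid_implication(A, B, VARS))  # True: A -> B is valid
print(valid_implication(A, I, VARS))  # True: A -> I
print(valid_implication(I, B, VARS))  # True: I -> B
```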

Slides

Model Theory Seminar

January 29, 2016, 4:10 PM (60 Evans Hall)

Pierre Simon
CNRS, Université Claude Bernard - Lyon 1

Decomposition of dependent structures

A structure is a set equipped with a family of predicates and functions on it, for example a group or a field. A structure is said to be dependent if it is not as complicated as a random graph (in some precise sense). The fields of complex numbers, real numbers and p-adic numbers are examples of dependent structures. If a dependent structure contains no definable order, then it is stable: a much stronger property that is now very well understood. Stability can be thought of as an abstraction of algebraic geometry. At the other extreme are dependent structures which are very much controlled by linear orders. We call such structures distal. Distality is meant to serve as a general model for semi-algebraic (or analytic) geometry. In this talk, I will explain those notions and state a recent theorem saying that any dependent structure can be locally decomposed into a stable-like and a distal-like part.

Logic Colloquium

January 22, 2016, 4:10 PM (60 Evans Hall)

John R. Steel
Professor of Mathematics, UC Berkeley

Ordinal Definability in Models of the Axiom of Determinacy

Let HOD be the class of all hereditarily ordinal definable sets, and let M be a model of the Axiom of Determinacy. The inner model HOD^M consisting of all sets that are hereditarily ordinal definable in the sense of M is of great interest, for a variety of reasons. In the first part of the talk, we shall give a general introduction that explains some of these reasons.

We shall then describe a general Comparison Lemma for “hod mice” (structures that approximate HOD^M). Modulo still-open conjectures on the existence of iteration strategies, this lemma yields models M of the Axiom of Determinacy such that HOD^M can be analyzed fine structurally (for example, satisfies the GCH), and yet satisfies very strong large cardinal hypotheses (for example, that there are superstrong cardinals).

Logic Colloquium

December 04, 2015, 4:10 PM (60 Evans Hall)

Natarajan Shankar
Computer Scientist, SRI International

PVS and the Pragmatics of Formal Proof

A formal system establishes a trinity between language, meaning, and inference. Many domains of thought can be captured using formal systems so that sound conclusions can be drawn through correct inferences. Philosophers have long speculated about machines that can distinguish sound arguments from flawed ones, and with modern computing, we can realize this dream to a significant extent. A research program to mechanize formal proof inevitably confronts a range of foundational and pragmatic choices. Should the foundations be classical or constructive, and should they be based on set theory, type theory, or category theory? What definitional principles should the language admit? How can formalizations be constructed from reusable modules? Should proofs be represented in a Hilbert calculus, natural deduction, or sequent calculus? Which inference steps in the proof calculus can and should be effectively mechanized in order to close the gap between informal arguments and their formal counterparts?

SRI’s Prototype Verification System (PVS) is an interactive proof assistant aimed at capturing the pragmatic features of mathematical expression and argumentation. The PVS language augments a simply typed higher-order logic with subtypes, dependent types, and algebraic datatypes. The PVS interactive proof checker integrates a number of decision procedures into the sequent calculus proof system. We examine the implications of these choices for the original goal of constructing mathematically sound arguments.

Slides

Logic Colloquium

November 20, 2015, 4:10 PM (60 Evans Hall)

Edward Zalta
Senior Research Scholar, Center for the Study of Language and Information, Stanford University

Reflections on Foundations for Mathematical Structuralism

In a paper recently published in Mind (“Foundations for Mathematical Structuralism”), Nodelman and I offer a rigorous formulation of mathematical structuralism. A mathematics-free theory of structures and structural objects is presented in a background formalism and then the whole is applied to the analysis of mathematics. While the principal structuralist intuitions about structures and the elements of structures are preserved, distinctions formalized in the theory undermine a variety of arguments that have been put forward concerning the nature of structural elements. Arguments from Russell, Dedekind, Benacerraf, Shapiro, Linnebo, Awodey, Keranen, and others, are considered. For those interested in reading the paper in advance, a preprint is available at: http://mally.stanford.edu/Papers/structuralism.pdf

Logic Colloquium

November 06, 2015, 4:10 PM (60 Evans Hall)

Silvain Rideau
Morrey Visiting Assistant Professor, Department of Mathematics, UC Berkeley

Transferring imaginaries

Since it was first formulated by Poizat thirty years ago, the question of eliminating imaginaries has become an important, if not necessary, step in understanding the model theory of given algebraic structures; it consists in classifying all the definable equivalence relations in a structure. An ever-growing collection of such results is now available, but this question remains open in many otherwise well understood structures.

Some of the recent elimination of imaginaries results are actually reductions to previously known elimination of imaginaries results. The goal of this talk will be to consider these reductions from an abstract standpoint and provide various settings in which one might derive elimination of imaginaries in a theory from elimination of imaginaries in some other theory.

Slides

Logic Colloquium

October 23, 2015, 4:10 PM (60 Evans Hall)

Larry Moss
Professor of Mathematics, Indiana University, Bloomington

Natural Logic

Much of modern logic originates in work on the foundations of mathematics. My talk reports on work in logic that has a different goal, the study of inference in language. This study leads to what I will call “natural logic”, the enterprise of studying logical inference in languages that look more like natural language than standard logical systems. An early paper in this field is George Lakoff’s 1970 paper “Linguistics and Natural Logic.”

By now there is a growing body of work which presents logical systems that differ from first-order logic in various ways. Most of the systems are complete and decidable. Some are modern versions of syllogistic logic, but with additional features not present in syllogistic logics. And then there are flavors of logic which look rather far from either traditional or modern logic.
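
To give a flavour of what a machine-checkable syllogistic system involves, here is a minimal Python sketch that decides simple All/Some inferences by brute force over small models; the three-term vocabulary and the three-element domain are illustrative assumptions, not any of the systems discussed in the talk.

from itertools import combinations, product

DOMAIN = (0, 1, 2)
TERMS = ("A", "B", "C")

def subsets(xs):
    return [frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

def all_models():
    # A model interprets each term as a subset of the domain.
    for assignment in product(subsets(DOMAIN), repeat=len(TERMS)):
        yield dict(zip(TERMS, assignment))

def holds(sentence, m):
    quant, x, y = sentence                     # e.g. ("All", "A", "B")
    return m[x] <= m[y] if quant == "All" else bool(m[x] & m[y])

def valid(premises, conclusion):
    return all(holds(conclusion, m) for m in all_models()
               if all(holds(p, m) for p in premises))

print(valid([("All", "A", "B"), ("All", "B", "C")], ("All", "A", "C")))  # True (Barbara)
print(valid([("All", "A", "B")], ("All", "B", "A")))                     # False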

The talk will be programmatic and far-ranging rather than detailed. I hope to touch on computer implementations of natural logics, teaching materials on this topic, and interactions of logic and cognitive science. And in keeping with most other talks in the Logic Colloquium, I’ll show that the whole subject of natural logic is full of questions of mathematical interest.

Slides

Logic Colloquium

October 09, 2015, 4:10 PM (60 Evans Hall)

Takayuki Kihara
JSPS Postdoctoral Fellow, Department of Mathematics, UC Berkeley

Recursion Theoretic Methods in Descriptive Set Theory and Infinite Dimensional Topology

The notion of degree spectrum of a structure in computable model theory is defined as the collection of all Turing degrees of presentations of the structure. We introduce the degree spectrum of a represented space as the class of collections of all Turing degrees of presentations of points in the space. The notion of point degree spectrum creates a connection among various areas of mathematics including computability theory, descriptive set theory, infinite dimensional topology and Banach space theory.

Through this new connection, for instance, we construct a family of continuum many infinite dimensional Cantor manifolds with property C in the sense of Haver/Addis-Gresham whose Borel structures at an arbitrary finite rank are mutually non-isomorphic. This provides new examples of Banach algebras of real-valued Baire class two functions on metrizable compacta, and strengthens various theorems in infinite dimensional topology such as Pol’s solution to Alexandrov’s old problem.

To prove our main theorem, an invariant which we call degree co-spectrum, a collection of Turing ideals realized as lower Turing cones of points of a Polish space, plays a key role. The key idea is measuring the quantity of all possible Scott ideals (omega-models of WKL) realized within the degree co-spectrum (on a cone) of a given space. This is joint work with Arno Pauly (University of Cambridge, UK).

Slides

Logic Colloquium

September 25, 2015, 4:10 PM (60 Evans Hall)

Dana Scott
University Professor, Emeritus, Carnegie Mellon University; Visiting Scholar, UC Berkeley

Can Modalities Save Naive Set Theory?

The late “Grisha” Mints once asked the speaker whether a naive set theory could be consistent in modal logic. Specifically he asked whether restricting the comprehension scheme to necessary properties was safe. Scott was working on a set theory in the Lewis system S4 of modal logic and Mints was happy to position his question in the same modal system. Obviously a very, very weak modality can avoid paradoxes, but such results are not especially interesting. At that time (2009) Scott could not answer the consistency question, and neither could Mints. Last November Scott noted that CMU Philosophy was hosting a seminar on a naive set theory by Harvey Lederman. Scott wrote him for his paper and said, “By the way, there is this question of Grisha Mints, and I wonder if you have an opinion?” Lederman sent back a sketch of a proof of the inconsistency of a strengthened version of comprehension. That proof at first did not quite work out, but was repaired in correspondence. Lederman mentioned the questions to two of his colleagues, and in March of 2015 Tiankai Liu suggested a possible model of a weaker comprehension scheme, which after a small correction gave a consistency proof. A few days later, Peter Fritz came up with essentially the same model. A paper has now been submitted for publication jointly by Fritz, Lederman, Liu, and Scott.

Slides

Paper

Logic Colloquium

September 11, 2015, 4:10 PM (60 Evans Hall)

Thomas Icard
Assistant Professor of Philosophy and Symbolic Systems, Stanford University

A Survey on Topological Semantics for Provability Logics

This will be a survey of ideas and results on topological interpretations of provability logics, especially polymodal provability logics. Esakia first proved completeness for the basic Gödel-Löb logic of provability with respect to scattered spaces. Abashidze and Blass (independently) proved completeness w.r.t. a particular scattered space defined on the ordinal omega^omega. I will discuss older work of my own that extended the Abashidze-Blass result to a polymodal provability logic, which is complete with respect to a polytopological space on epsilon_0. I will then discuss more recent developments, including work by Beklemishev, Fernandez, Joosten, Gabelaia, Aguilera, and others, generalizing and improving upon much of this.

Slides

Logic Colloquium

August 28, 2015, 4:10 PM (60 Evans Hall)

Antonio Montalban
Associate Professor of Mathematics, UC Berkeley

Analytic Equivalence Relations with ℵ₁-many Classes

We will survey the new results connecting Vaught’s conjecture and computability theory. We will look at some of these results from the more general viewpoint of analytic equivalence relations.

There are two types of results we will concentrate on. On the one hand are the ones about equivalence relations satisfying hyperarithmetic-is-recursive, which provide a purely computability-theoretic equivalent to Vaught’s conjecture. On the other hand are the ones about equivalence relations on the natural numbers that are “intermediate,” which I hope will eventually provide another equivalent statement.

Slides

Logic Colloquium

May 01, 2015, 4:10 PM (60 Evans Hall)

Stuart Russell
Professor of Computer Science and Smith-Zadeh Professor in Engineering, University of California, Berkeley

Unifying logic and probability: The BLOG language

Logic and probability are ancient subjects whose unification holds significant potential for the field of artificial intelligence. The BLOG (Bayesian LOGic) language provides a way to write probability models using syntactic and semantic devices from first-order logic. In modern parlance, it is a relational, open-universe probabilistic programming language that allows one to define probability distributions over the entire space of first-order model structures that can be constructed given the constant, function, and predicate symbols of the program. I will describe the language mainly through examples and cover its application to monitoring the Comprehensive Nuclear-Test-Ban Treaty.
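
The open-universe idea can be conveyed without reproducing BLOG’s actual syntax: in the generative model below, written in plain Python rather than BLOG, the number of events is itself uncertain and observations are generated from whatever events happen to exist. All names, distributions, and parameters are invented for illustration; inferring which events explain which detections is the hard problem such languages address.

import random

def sample_world(rng=random):
    # Open-universe flavour: the number of events is not fixed in advance.
    n_events = rng.choice([0, 1, 2, 3])                  # stand-in for a Poisson prior
    events = [{"magnitude": rng.uniform(3.0, 6.0),
               "location": rng.uniform(0.0, 360.0)}
              for _ in range(n_events)]
    # Each event is detected with probability increasing in its magnitude.
    detections = [{"reading": e["magnitude"] + rng.gauss(0.0, 0.2)}
                  for e in events
                  if rng.random() < (e["magnitude"] - 2.5) / 4.0]
    return events, detections

events, detections = sample_world()
print(len(events), "events generated,", len(detections), "detections observed")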

Logic Colloquium

April 17, 2015, 4:10 PM (60 Evans Hall)

Tadeusz Litak
FAU Erlangen-Nürnberg

Whence Cometh Incompleteness?

I will explain the background of my recent collaboration with Wes Holliday. It is well known among modal logicians that while “naturally” obtained modal logics tend to be Kripke complete, things look very different when one looks at the class of all modal logics. In the 1970s, S.K. Thomason found a fundamental reason why this phenomenon is unavoidable: modal consequence over Kripke frames is every bit as complex as full second-order consequence. Shortly afterward, Wim Blok proved another amazing result: even those logics that are complete tend to be surrounded by uncountably many incomplete ones, which are sound w.r.t. precisely the same class of Kripke structures. Kripke incompleteness turned out to be much more of a norm than anybody had expected; and in a very precise sense, the problem is irreparable, at least as long as the notion of proof remains decidable.

The question then arises whether some sensible generalization of Kripke semantics would make the phenomenon of incompleteness disappear. In a way, the answer was known even before Kripke frames dominated the picture: every modal logic is complete w.r.t. Boolean Algebras with Operators (BAOs). As it turns out, this is the best general completeness result we can hope for. It is possible to find naturally motivated semantics properly contained between Kripke semantics and algebraic semantics which do not suffer from the “Thomason effect”: the induced notion of consequence can be axiomatized. However, as soon as these notions nontrivially depart from the full generality of algebraic semantics and try to capture at least some of the defining properties of Kripke frames, not only can one still find examples of incomplete logics, but the limitative result of Blok survives.

These questions were investigated in my PhD dissertation a decade ago, building on several decades of work in the modal community. By the time the dissertation was completed, there was still one possible escape route left: one of the three defining properties of (dual algebras of) Kripke frames could perhaps allow a general completeness result. More recently, Wes Holliday’s work on “possibility semantics” for modal logic rekindled the interest in that particular notion of completeness. Now we have found that even this escape route is blocked. Furthermore, the story turns out to involve an unjustly forgotten example, paper and program of Johan van Benthem.

Alfred Tarski Lectures

April 03, 2015, 4:10 PM (2 LeConte Hall)

Julia F. Knight
Charles L. Huisking Professor of Mathematics, University of Notre Dame

Computability and complexity of uncountable structures

Alfred Tarski Lectures

April 01, 2015, 4:10 PM (2 LeConte Hall)

Julia F. Knight
Charles L. Huisking Professor of Mathematics, University of Notre Dame

Comparing classes of countable structures

Alfred Tarski Lectures

March 30, 2015, 4:10 PM (2 LeConte Hall)

Julia F. Knight
Charles L. Huisking Professor of Mathematics, University of Notre Dame

Computability and complexity of mathematical structures

Logic Colloquium

March 20, 2015, 4:10 PM (60 Evans Hall)

Keita Yokoyama
Assistant Professor, School of Information Science, Japan Advanced Institute of Science and Technology

Reverse mathematics and program termination

It is well known that proof theory and computer science are closely connected. In this talk, I will focus on the relation between reverse mathematics and the study of program termination. In the study of reverse mathematics, Ramsey’s theorem is one of the most important topics, and the strength of Ramsey’s theorem is well investigated from both the proof-theoretic and the computability-theoretic viewpoints. On the other hand, Ramsey’s theorem is often used in computer science. In the study of termination analysis, Podelski/Rybalchenko used Ramsey’s theorem for pairs to introduce a new algorithm for termination checking. Looking at the above connection with Ramsey’s theorem, we study the relation between reverse mathematics and termination analysis.

This is joint work with Stefano Berardi and Silvia Steila.
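
The Podelski/Rybalchenko use of Ramsey’s theorem can be illustrated on the standard two-counter loop: along any run, any two states i < j are related by “x decreased” or “y decreased”, and Ramsey’s theorem for pairs turns this disjunctively well-founded transition invariant into a termination argument. The Python sketch below merely spot-checks that pairwise property on sampled runs of a hypothetical loop; it is not a termination prover and is not drawn from the talk.

import random

def run(x, y, rng=random):
    """A hypothetical loop: each step either decrements x (and may reset y)
    or decrements y.  Returns the finite trace of (x, y) states."""
    trace = [(x, y)]
    while x > 0 and y > 0:
        if rng.random() < 0.5:
            x, y = x - 1, rng.randint(0, 20)   # x decreases, y may be reset
        else:
            y = y - 1                          # y decreases
        trace.append((x, y))
    return trace

def transition_invariant_holds(trace):
    """For every pair i < j: x decreased or y decreased between i and j."""
    return all(trace[j][0] < trace[i][0] or trace[j][1] < trace[i][1]
               for i in range(len(trace)) for j in range(i + 1, len(trace)))

for _ in range(100):
    assert transition_invariant_holds(run(5, 5))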

Logic Colloquium

March 06, 2015, 4:10 PM (60 Evans Hall)

Valeria de Paiva
AI and NLU Sunnyvale Lab, Nuance Communications

Constructive Modal Logics: 15 years later

This talk will review the program of intuitionistic modal logic in its two original sources: modal logicians interested in Intuitionism and functional programmers/type theorists interested in (computational) modalities. I will discuss some trends in this kind of work since the first Intuitionistic Modal Logic and Applications (IMLA) workshop in Trento in 1999 and the tendencies that we can see in this line of work today. http://www.cs.bham.ac.uk/~vdp/

Slides

Logic Colloquium

February 20, 2015, 4:10 PM (60 Evans Hall)

Andrew Marks
NSF Postdoctoral Fellow, Department of Mathematics, California Institute of Technology

Baire measurable paradoxical decompositions and a Baire category solution to the dynamical von Neumann-Day problem

The Banach-Tarski paradox states that the unit ball in ℝ³ is equidecomposable with two unit balls in ℝ³ by rigid motions. In 1930, Marczewski asked whether there is such an equidecomposition where each piece has the Baire property. Using an intricate construction, Dougherty and Foreman gave a positive answer to this question.

We generalize Dougherty and Foreman’s result to completely characterize which Borel group actions have Baire measurable paradoxical decompositions. We show that if a group acting by Borel automorphisms on a Polish space has a paradoxical decomposition, then it admits a paradoxical decomposition using pieces having the Baire property. We also obtain a Baire category solution to the dynamical von Neumann-Day problem, in the spirit of Whyte and Gaboriau-Lyons: if a is a nonamenable action of a group on a Polish space X by Borel automorphisms, then there is a free Baire measurable action of 𝔽2 on X which is Lipschitz with respect to a. The main tool we use to prove these theorems is a version of Hall’s matching theorem for Borel graphs.

This is joint work with Spencer Unger at UCLA.

Logic Colloquium

February 06, 2015, 4:10 PM (60 Evans Hall)

Pierre Simon
CNRS, Université Claude Bernard - Lyon 1

Around finite VC-dimension and the (p,k)-theorem

Let N points be chosen at random in some finite domain of the plane. With high probability, for every circle, the proportion of points inside the circle is very close to the area of the circle. (Hence for example a computer can approximately reconstruct a disc given random points inside and outside of it.) This works because the family of discs satisfies a combinatorial property known as finite VC-dimension. This notion originates in machine learning theory: the property described implies that a class of finite VC-dimension is “learnable”. In the first part of my talk, I will present this and state a measure-free analog of this learnability property called the (p,k)-theorem (due to Alon-Kleitman and Matousek).
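
The opening example is easy to simulate; in the Python sketch below (sample size, domain, and the particular disc are arbitrary choices), the empirical proportion of uniform random points falling inside a disc tracks the disc’s area, which is the uniform-convergence phenomenon that finite VC-dimension guarantees.

import random

N = 100_000
points = [(random.random(), random.random()) for _ in range(N)]   # uniform in the unit square

def empirical_mass(cx, cy, r):
    inside = sum((x - cx) ** 2 + (y - cy) ** 2 <= r ** 2 for x, y in points)
    return inside / N

cx, cy, r = 0.5, 0.5, 0.3                 # a disc contained in the unit square
print(empirical_mass(cx, cy, r))          # close to its area, pi * r**2 ≈ 0.283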

The definition of finite VC-dimension also appeared independently in model theory under the name NIP (negation of the Independence Property). A formula f(x,y) is NIP if the family of definable sets it defines has finite VC-dimension. In the second part of the talk, I will discuss some interesting consequences of the (p,k)-theorem in model theory and mention open problems.

Logic Colloquium

January 23, 2015, 4:10 PM (60 Evans Hall)

Samson Abramsky
Christopher Strachey Professor of Computing, Oxford University

Contextual Semantics: From Quantum Mechanics to Logic, Databases, Constraints and Natural Language Semantics

Quantum Mechanics presents a radically different perspective on physical reality compared with the world of classical physics. In particular, results such as the Bell and Kochen-Specker theorems highlight the essentially non-local and contextual nature of quantum mechanics. The rapidly developing field of quantum information seeks to exploit these non-classical features of quantum physics to transcend classical bounds on information processing tasks.

In this talk, we shall explore the rich mathematical structures underlying these results. The study of non-locality and contextuality can be expressed in a unified and generalised form in the language of sheaves or bundles, in terms of obstructions to global sections. There are also strong connections with logic. For example, Bell inequalities, one of the major tools of quantum information and foundations, arise systematically from logical consistency conditions.

These general mathematical characterisations of non-locality and contextuality also allow precise connections to be made with a number of seemingly unrelated topics, in classical computation, logic, and natural language semantics. By varying the semiring in which distributions are valued, the same structures and results can be recognised in databases and constraint satisfaction as in probability models arising from quantum mechanics. A rich field of contextual semantics, applicable to many of the situations where the pervasive phenomenon of contextuality arises, promises to emerge.

Slides

Logic Colloquium

December 12, 2014, 4:10 PM (60 Evans Hall)

Christos Papadimitriou
C. Lester Hogan Professor of Electrical Engineering and Computer Science, UC Berkeley

Exponential existence proofs and complexity

Many computational problems in NP have the property that a solution is guaranteed to exist for every instance, and yet these problems do not seem to belong in P. Examples are factoring, Nash equilibrium, finding a short vector in a lattice, or a prime in [n, 2n], among many others. Such problems can be categorized by the corresponding existence proof. Rather surprisingly, it turns out that, in all known cases, this existence proof contains a simple graph-theoretic argument (such as “every graph has an even number of odd nodes,” or “every directed acyclic graph has a sink” or the pigeonhole principle) albeit applied to an exponentially large graph. Several natural problems are known to be complete for the corresponding classes. I shall introduce and discuss this intriguing aspect of complexity theory, including certain recent developments.

Logic Colloquium

November 21, 2014, 4:10 PM (60 Evans Hall)

Juliet Floyd
Professor of Philosophy, Boston University

The Sheffer Box

In May 2012, with the help of Bernard Linsky, I discovered a missing box of original documents and research materials pertaining to the work of H.M. Sheffer (1882-1964), the logician known for his strokes. The box had been assembled and saved by Burton Dreben, with whom I worked on Sheffer in 1988. This talk will report on my assembly and analysis of these primary resource materials.

Sheffer is of interest for our understanding of the reception of logic in the United States: the work of Frege, Russell, Zermelo, the early Wittgenstein, and others. A student of Royce, William James, Huntington, and Russell, he made some contributions to the algebra of logic tradition, though his own “general theory of notational relativity” never reached mathematical fruition. He influenced many, including C.H. Langford, and, slightly and indirectly, Zermelo.

In addition, however, the philosophical strands of his thinking form an interesting comparison and contrast to those of his contemporaries Wittgenstein and C.I. Lewis, on whom he lectured at Harvard, and those of Emil Post, with whom Sheffer corresponded about his “general theory of notational relativity”.

Sheffer took down the only extant notes of Bertrand Russell’s 1910 lectures at Cambridge University, and these, now being edited by Bernard Linsky, are among the materials I shall discuss. Other materials in the box included correspondence with Russell in which Sheffer describes meeting with Frege, Peano, Hadamard and others in 1911; notes of Sheffer’s spring 1922 logic lectures at Harvard, taken down by Marvin Farber; Farber’s correspondence with Sheffer concerning his work with Zermelo in 1923, as well as other lecture notes of relevance to our understanding of the history and presentation of logic in the United States in the 1910s and 1920s.

Slides

Handout

Logic Colloquium

November 07, 2014, 4:10 PM (60 Evans Hall)

Balder ten Cate
Computer Scientist at LogicBlox and Associate Adjunct Professor of Computer Science, UC Santa Cruz

Craig interpolation theorems and database applications

The Craig interpolation theorem, proved by William Craig in the 1950s, states that every valid first-order implication A |= B has an interpolant C such that A |= C and C |= B and all non-logical symbols occurring in C occur both in A and in B. Moreover, such an interpolant C can be effectively constructed from a proof of the implication A |= B.
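
In the propositional case the statement can be checked mechanically. The Python sketch below verifies, by brute force over truth assignments, that q is an interpolant for the (invented) valid implication (p ∧ q) |= (q ∨ r); here q is exactly what one obtains by existentially quantifying the A-local variable p out of A, one standard way of constructing a strongest interpolant.

from itertools import product

VARS = ("p", "q", "r")

A = lambda v: v["p"] and v["q"]      # uses p, q
B = lambda v: v["q"] or v["r"]       # uses q, r
C = lambda v: v["q"]                 # candidate interpolant over the shared symbol q

def entails(F, G):
    return all((not F(dict(zip(VARS, bits)))) or G(dict(zip(VARS, bits)))
               for bits in product((False, True), repeat=len(VARS)))

assert entails(A, B)                       # the implication A |= B is valid
assert entails(A, C) and entails(C, B)     # C interpolates it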

The Craig interpolation theorem is considered one of the cornerstones in our understanding of first-order model theory and proof theory.

Over the last few years, we have seen the emergence of several new applications of Craig interpolation in the area of data management. In particular, interpolation-based techniques have been proposed for querying under access restrictions and semantic query optimization.

These applications, in turn, have led to the study of new variants of Craig interpolation. I will present two such new theorems, namely an interpolation theorem for first-order formulas with relational access restrictions, and an effective interpolation theorem for the guarded-negation fragment of first-order logic.

Slides

Logic Colloquium

October 24, 2014, 4:10 PM (60 Evans Hall)

Thomas Scanlon
Professor of Mathematics, University of California, Berkeley

Differential algebraic equations from definability

Multivalued “functions” play an important role in complex analysis. In many cases of interest, these may be converted to honest functions through the application of a differential operator. For example, the complex logarithm is only well defined up to the addition of an integral multiple of twice pi times the square root of negative one, but the logarithmic derivative, which takes a function f(z) to the derivative of log(f(z)), is a well-defined function from (nowhere zero, differentiable, complex-valued) functions to functions. Curiously, the logarithmic derivative is actually a differential rational function. That is, it is given by an expression which is a rational function of its argument and some of the derivatives of the argument. Other functions obtained by applying differential operators to eliminate the ambiguity of multivalued functions share this feature. I will describe a general context in which multivalued functions arising as inverses to certain analytic covering maps may be converted to well-defined differential algebraic functions by composing them with a well-chosen differential operator.
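
Assuming SymPy is available, the logarithmic-derivative example can be checked symbolically in a couple of lines: differentiating log(f(z)) yields the differential rational expression f'(z)/f(z), in which the ambiguity of the logarithm’s branches has disappeared.

import sympy as sp

z = sp.Symbol("z")
f = sp.Function("f")

log_deriv = sp.diff(sp.log(f(z)), z)       # d/dz log(f(z))
print(log_deriv)                           # Derivative(f(z), z)/f(z)

# Branches of log differ by constants (integer multiples of 2*pi*I), so they
# all have the same derivative:
assert sp.simplify(log_deriv - sp.diff(f(z), z) / f(z)) == 0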

While the theorem itself has some nice consequences in the theories of functional transcendence and differential algebra, in this lecture I will focus on two ideas from logic underlying the proof. First, the notion of elimination of imaginaries in the theories of algebraically closed and differentially closed fields will supply the requisite differential operator. Secondly, a beautiful theorem of Peterzil and Starchenko on definable complex analytic sets will be used to show that the classical analytic construction of such functions is necessarily algebraic.

Slides

Logic Colloquium

October 17, 2014, 4:10 PM (60 Evans Hall)

Dominic Hughes
Visiting Scholar, Department of Mathematics, Stanford University

First-order proofs without syntax

Proofs of first-order logic are traditionally syntactic, built inductively from symbolic inference rules. This talk reformulates classical first-order logic (predicate calculus) with proofs which are combinatorial rather than syntactic. A combinatorial proof is defined by graph-theoretic conditions which can be verified easily (linear time complexity).

To be accessible to a broad audience (mathematicians, computer scientists, logicians and philosophers—including undergraduates) the lecture uses many colorful pictorial examples. Niche technicalities (such as relationships with Gentzen’s sharpened Hauptsatz for LK and Herbrand’s theorem) are deferred to the end.

The work extends ‘Proofs Without Syntax’ [Annals of Mathematics, 2006], which treated the propositional case.

Logic Colloquium

October 03, 2014, 4:10 PM (60 Evans Hall)

Sanjit Seshia
Associate Professor of Electrical Engineering and Computer Science, University of California, Berkeley

The Logic of Cars: Reasoning about Cyber-Physical Systems with Computational Logic

Cyber-physical systems (CPS) are those that tightly integrate computational processes with the physical world. Examples include medical devices and systems, automotive systems, intelligent power systems, and robots. In this talk, we explore how computational logic can be an effective way to model, design, and verify cyber-physical systems. Using today’s automobiles as a motivating application, we will explore the use of logic, especially temporal logic, in reasoning about such systems. The talk will survey some core theoretical problems along with current approaches to solve them, and sketch directions for future work.
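
As a small, invented illustration of the kind of temporal-logic property at stake, the Python sketch below monitors a bounded-response requirement ("every brake request is followed by brake application within k steps") over a recorded trace; the signal names, the trace, and the bound are hypothetical and not tied to any particular tool.

def bounded_response(trace, request, response, k):
    """True iff every state satisfying `request` is followed, within k steps
    (counting the same step), by a state satisfying `response`."""
    return all(any(state[response] for state in trace[i:i + k + 1])
               for i, state in enumerate(trace) if state[request])

trace = [
    {"brake_request": False, "brake_applied": False},
    {"brake_request": True,  "brake_applied": False},
    {"brake_request": False, "brake_applied": True},
    {"brake_request": False, "brake_applied": False},
]
print(bounded_response(trace, "brake_request", "brake_applied", k=2))   # True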

Logic Colloquium

September 05, 2014, 4:10 PM (60 Evans Hall)

Paolo Mancosu
Professor of Philosophy, University of California, Berkeley

In good company? On Hume’s principle and the assignment of numbers to infinite concepts

In a recent article, I have explored the historical, mathematical, and philosophical issues related to the new theory of numerosities. The theory of numerosities provides a context in which to assign numerosities to infinite sets of natural numbers in such a way as to preserve the part-whole principle, namely if a set A is properly included in B then the numerosity of A is strictly less than the numerosity of B. Numerosity assignments differ from the standard assignment of size provided by Cantor’s cardinality assignments. In this talk I generalize some specific worries, raised by Richard Heck, emerging from the theory of numerosities to a line of thought resulting in what I call a ‘good company’ objection to Hume’s principle (HP). The talk has four main parts. The first takes a historical look at nineteenth-century attributions of equality of numbers in terms of one-one correlation and argues that there was no agreement as to how to extend such determinations to infinite sets of objects. This leads to the second part, where I show that there are countably infinitely many abstraction principles that are ‘good’, in the sense that they share the same virtues as HP and from which we can derive the axioms of second-order arithmetic. All the principles I present agree with HP in the assignment of numbers to finite concepts but diverge from it in the assignment of numbers to infinite concepts. The third part connects this material to a debate on the Finite Hume Principle between Heck and MacBride and states the ‘good company’ objection as a generalization of Heck’s objection to the analyticity of HP based on the theory of numerosities. I then give a taxonomy of possible neo-logicist responses to the ‘good company’ objection.

Logic Colloquium

April 25, 2014, 4:10 PM (60 Evans Hall)

Lou van den Dries
Professor of Mathematics, University of Illinois (Urbana-Champaign); Research Professor, MSRI (Spring 2014)

The Differential Field of Transseries

Transseries originated some 25 years ago, mainly in Ecalle’s work, but also independently in the study of Tarski’s problem on the field of reals with exponentiation. Some 20 years ago I formulated some conjectures about the differential field of transseries, and started exploring its model-theoretic properties, first jointly with Matthias Aschenbrenner, and a few years later also in collaboration with Joris van der Hoeven. Around 2011 we finally saw a clear path towards a proof, sharpening the conjectures in the meantime. In the last few weeks we finished the job, with one proviso: as part of a division of labor and for lack of time, some final details have only been checked by some of us, but not yet by all three of us. In any case, I will motivate these conjectures, and give an account of their status.

Slides

Logic Colloquium

April 12, 2014, 4:10 PM (60 Evans Hall)

Zoe Chatzidakis
Senior Researcher, CNRS, Paris; Research Professor, MSRI, Berkeley (Spring 2014)

Model Theory of Difference Fields and Applications

I will present classical results on the model theory of difference fields, and some recent applications by Hrushovski and myself.

Alfred Tarski Lectures

April 04, 2014, 4:10 PM (3 LeConte Hall)

Stevo Todorcevic
University of Toronto and CNRS, Paris

Combinatorial and Set-theoretic Forcing

Alfred Tarski Lectures

April 02, 2014, 4:10 PM (3 LeConte Hall)

Stevo Todorcevic
University of Toronto and CNRS, Paris

Chain-conditions of Horn-Tarski

Alfred Tarski Lectures

March 31, 2014, 4:10 PM (3 LeConte Hall)

Stevo Todorcevic
University of Toronto and CNRS, Paris

The Measurability Problem for Boolean Algebras

Logic Colloquium

March 14, 2014, 4:10 PM (60 Evans Hall)

G. Aldo Antonelli
Professor of Philosophy, University of California, Davis

General First-Order Models: Concepts and Results

In his 1950 dissertation, Leon Henkin showed how to provide higher-order quantifiers with non-standard, or “general”, interpretations, on which, for instance, second-order quantifiers are taken to range over collections of subsets of the domain that may fall short of the full power-set. In contrast, first-order quantifiers are usually regarded as immune to this sort of non-standard interpretation, since their semantics is ordinarily taken to be completely determined once a first-order domain of objects is selected.

The asymmetry is particularly evident from the point of view of the modern theory of generalized quantifiers, according to which a first-order quantifier is construed as a predicate of subsets of the domain. But the generalized conception still views first-order quantifiers as predicates over the full power-set. Accordingly, the possibility that they, similarly to their second-order counterparts, might denote arbitrary collections of subsets has not been pursued in full generality.

This talk introduces a Henkin-style semantics for arbitrary first-order quantifiers, exploring some of the resulting properties, and emphasizing the effects of imposing various further closure conditions on the second-order component of the interpretation. Among other results, we show by a model-theoretic argument that in certain cases the notion of validity relative to models satisfying the closure conditions is axiomatizable.
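
The generalized-quantifier picture, and its Henkin-style relaxation, can be made concrete in a few lines of Python: a quantifier denotes a family of subsets of the domain, and "Qx φ(x)" holds just in case the extension of φ belongs to that family. The particular domain and families below are invented for illustration.

from itertools import combinations

DOMAIN = frozenset({0, 1, 2})

def powerset(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# The standard universal quantifier is the family containing only the full
# domain; a "general" interpretation may be any other family of subsets.
FORALL_STANDARD = frozenset({DOMAIN})
AT_LEAST_TWO = frozenset(s for s in powerset(DOMAIN) if len(s) >= 2)

def quantify(Q, phi):
    """Qx. phi(x) holds iff the extension of phi lies in the family Q."""
    return frozenset(a for a in DOMAIN if phi(a)) in Q

even = lambda a: a % 2 == 0
print(quantify(FORALL_STANDARD, even))   # False: not every element is even
print(quantify(AT_LEAST_TWO, even))      # True: the extension {0, 2} is in the family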

Logic Colloquium

February 28, 2014, 4:10 PM (60 Evans Hall)

Alexander Melnikov
Lecturer, Massey University, Auckland, New Zealand, and Postdoctoral Scholar, University of California, Berkeley

Infinitely Generated Abelian Groups with Solvable Word Problem

Logic Colloquium

February 14, 2014, 4:10 PM (60 Evans Hall)

Sherrilyn Roush
Professor of Philosophy, University of California, Berkeley

The Difference Between Knowledge and Understanding

In the classic Gettier problem, cases are imagined where a person has very good reasons to believe a proposition p, and p is true, but many people think the person still doesn’t have knowledge. I characterize what is missing in these cases probabilistically, as a failure of what I call relevance matching. Relevance matching is best interpreted as understanding why p is true, and it is deterministically related to but distinct from the tracking conditions that in my view define knowledge. This view of understanding makes it a simulation among your mental states of the dispositions of factors probabilistically relevant to p’s being true, and as such avoids requiring unrealistic information- and computation-intensive beliefs and inferences about why p is true in order to understand.

Logic Colloquium

January 31, 2014, 4:10 PM (60 Evans Hall)

Jan Reimann
Assistant Professor of Mathematics, The Pennsylvania State University

Effective Multifractal Spectra

Multifractal measures play an important role in the study of point processes and strange attractors. A central component of the theory is the multifractal formalism, which connects local properties of a measure (pointwise dimensions) with its global properties (average scaling behavior).

In this talk I will introduce a new, effective multifractal spectrum, where we replace pointwise dimension by asymptotic compression ratio. It turns out that the underlying measure can be seen as a universal object for the multifractal analysis of computable measures. The multifractal spectrum of a computable measure can be expressed as a “deficiency of multifractality” spectrum with respect to the universal measure. This in turn allows for developing a quantitative theory of dimension estimators based on Kolmogorov complexity. I will discuss some applications to seismological dynamics.
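
A crude flavour of "dimension via compression" can be had by letting an off-the-shelf compressor stand in for Kolmogorov complexity; the Python sketch below only conveys the idea of a compression ratio distinguishing more and less random sequences, and is not one of the estimators developed in the talk.

import random, zlib

def compression_ratio(bits):
    """Compressed size over raw size of a 0/1 sequence; zlib is only a crude,
    computable stand-in for Kolmogorov complexity."""
    raw = bytes(bits)
    return len(zlib.compress(raw, 9)) / len(raw)

biased = [1 if random.random() < 0.1 else 0 for _ in range(50_000)]
fair = [random.randint(0, 1) for _ in range(50_000)]
print(compression_ratio(biased))   # smaller: a biased source is more compressible
print(compression_ratio(fair))     # larger: fair coin flips are harder to compress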

Logic Colloquium

December 06, 2013, 4:10 PM (60 Evans Hall)

Alexander S. Kechris
Professor of Mathematics, California Institute of Technology

Topological Dynamics and Ergodic Theory of Automorphism Groups of Countable Structures

I will discuss some aspects of the topological dynamics and ergodic theory of automorphism groups of countable first-order structures and their connections with logic, finite combinatorics, and probability theory. This is joint work with Omer Angel and Russell Lyons.

Logic Colloquium

November 15, 2013, 4:10 PM (60 Evans Hall)

Dana S. Scott
University Professor, Emeritus, Carnegie Mellon University; Visiting Scholar, University of California, Berkeley

Stochastic Lambda-Calculi

Many authors have suggested ways of adding random elements and probability assessments to versions of Church’s Lambda-Calculus. When asked recently about models, the speaker realized that the so-called Graph Model (based on using enumeration operators acting on the power set of the integers) could easily be expanded to allow random variables taking values in the model. Other models can be treated similarly. The talk will discuss randomizing algorithms and how a continuation-passing semantics can be used for modeling a branching combinator using random coin tossing. These ideas can also be employed for introducing many other kinds of random combinators.
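
One of the ingredients mentioned, a branching combinator handled by continuation passing and coin tossing, can be sketched in a few lines of Python; this is only a toy illustration of the style of semantics, not the Graph Model construction discussed in the talk.

import random

def unit(x):
    """A computation in continuation-passing style: it takes a continuation k
    and passes its value to it."""
    return lambda k: k(x)

def flip(p, m, n):
    """A random branching combinator: run m with probability p, otherwise n."""
    return lambda k: m(k) if random.random() < p else n(k)

def run(comp):
    return comp(lambda x: x)          # the identity continuation extracts a value

coin = flip(0.5, unit("heads"), unit("tails"))
print([run(coin) for _ in range(10)])   # a random mix of 'heads' and 'tails'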

Logic Colloquium

November 01, 2013, 4:10 PM (60 Evans Hall)

Lara Buchak
Assistant Professor of Philosophy, University of California, Berkeley

Risk and Inequality

Decision theory concerns the evaluation of gambles. When choosing among gambles, individuals are forced to consider how they will turn out under various circumstances, and decide how to trade off the possibility that a gamble will turn out well against the possibility that it will turn out poorly. How should we aggregate the values one might get in different possible circumstances, in order to arrive at a single value for a gamble? The orthodox view is that there is only one acceptable way to do this: rational individuals must maximize expected (i.e. average) utility. The contention of my recent book, however, is that the orthodox theory (expected utility theory) dictates an overly narrow way in which considerations about risk can play a role in an individual’s choices. There, I argued for an alternative, more permissive, theory of decision-making: risk-weighted expected utility theory (REU theory). This theory allows individuals to pay proportionally more attention to the worst-case scenario than the best-case scenario.
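
To make the contrast concrete, the Python sketch below compares expected utility with a rank-dependent evaluation in the spirit of REU theory, in which a risk function r reweights the probability of doing at least so well; the utilities, probabilities, risk function, and indeed the exact formula are my own illustrative gloss, not a quotation from the book.

def expected_utility(gamble, u=lambda x: x):
    return sum(p * u(x) for x, p in gamble)

def risk_weighted_eu(gamble, r=lambda p: p ** 2, u=lambda x: x):
    """Start from the worst outcome and add each further improvement weighted
    by r(probability of doing at least that well).  A convex r such as p**2
    models an agent who gives the worst-case scenario extra weight."""
    outcomes = sorted(gamble)                      # (outcome, probability), worst first
    value = u(outcomes[0][0])
    for i in range(1, len(outcomes)):
        tail = sum(p for _, p in outcomes[i:])     # P(doing at least this well)
        value += r(tail) * (u(outcomes[i][0]) - u(outcomes[i - 1][0]))
    return value

gamble = [(0, 0.5), (100, 0.5)]            # a fifty-fifty gamble between 0 and 100
print(expected_utility(gamble))            # 50.0
print(risk_weighted_eu(gamble))            # 25.0: the worst case weighs more heavily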

Social choice theory concerns the evaluation of social distributions: distributions of goods or outcomes to individuals. To determine which social distribution is better, we must consider how distributions go for various individuals, and decide how to trade off the fact that one distribution is better for some people against the fact that it is worse for others. How should we aggregate the values that go to each person, in order to arrive at a single value for a social distribution and to say which of two social distributions is better? A traditional answer to this question is that the value of a social distribution is the average of the utility values that each individual in the society receives (utilitarianism). And one traditional justification of utilitarianism relies on the assumption that expected utility theory is the correct decision theory.

There are two ways in which decision theory has been used to justify a particular aggregation method in social choice theory. The first is to propose that facts about the values of social distributions are determined or discerned from individuals’ preferences about gambles. This is the method employed by John Rawls and John Harsanyi, for example: both consider individual preferences about gambles over social distributions in which one does not know which place one will occupy in society, and use these to determine the aggregative social welfare function. The second way to use decision theory to justify a particular aggregation method in social choice theory is to examine the conditions on the preference relation in decision theory, and explore whether analogous conditions might hold of the betterness relation. This is the rough strategy behind John Broome’s justification of utilitarianism in his book Weighing Goods.

Existing philosophical versions of these justifications are based on views about decision-making that I’ve argued are incorrect: thus, I claim, their conclusions for social choice theory are unjustified.

In this work-in-progress talk, I explore the prospects for using REU theory to justify an alternative evaluation of social distributions, one that allows us to pay special attention (but not exclusive attention) to the worst-off person as opposed to the best-off person.

Logic Colloquium

October 18, 2013, 4:10 PM (60 Evans Hall)

James Freitag
National Science Foundation Postdoctoral Fellow, University of California, Berkeley

Differential Algebra and the Model Theory of Groups

Sacks called differential fields the “least misleading” examples of omega-stable theories. This talk will reinforce that philosophy, because the theorems we will discuss fall into two general situations: (1) theorems from model theory which inspired results in differential algebra or (2) theorems from differential algebra which were generalized to the model-theoretic setting. Specifically, we will be discussing theorems regarding differential algebraic groups and groups definable in superstable theories (a generalization of omega-stability). We will discuss three types of theorems: (1) Jordan-Hölder theorems, (2) indecomposability theorems, and (3) results on central extensions. To open the talk, we will give a survey of the model theory of differential fields.

Logic Colloquium

October 04, 2013, 4:10 PM (60 Evans Hall)

Richard Tieszen
Professor of Philosophy, San Jose State University

Monads and Mathematics: Gödel, Leibniz, and Husserl

On the basis of his discussions with Kurt Gödel, Hao Wang (A Logical Journey: From Gödel to Philosophy, p. 166) tells us that “Gödel’s own main aim in philosophy was to develop metaphysics – specifically, something like the monadology of Leibniz transformed into exact theory – with the help of phenomenology”. Gödel began to study Edmund Husserl’s phenomenology in 1959. In 1928 Husserl (“Phenomenology”, Encyclopedia Britannica draft) wrote that “The ideal of the future is essentially that of phenomenologically based (‘philosophical’) sciences, in unitary relation to an absolute theory of monads”. In the Cartesian Meditations and other works Husserl identifies ‘monads’ (in his sense) with ‘transcendental egos in their full concreteness’. In the first part of my talk I explore some prospects for a Gödelian monadology that result from this identification, with reference to texts of Gödel, Wang’s reports, and aspects of Leibniz’s original monadology. The latter part of the talk will be on human monads, the incompleteness theorems, and (Turing) machines. (For background see my recent book, After Gödel: Platonism and Rationalism in Mathematics and Logic, Oxford University Press.)

Logic Colloquium

September 20, 2013, 4:10 PM (60 Evans Hall)

Lotfi A. Zadeh
Professor Emeritus of Electrical Engineering and Computer Sciences, Director, Berkeley Initiative in Soft Computing (BISC), University of California, Berkeley

Truth and Meaning

The theory which is outlined in my lecture, call it RCT for short, is a departure from traditional theories of truth and meaning, including correspondence theory, coherence theory, possible-world semantics and truth-conditional semantics. The principal objective of RCT is construction of a procedure which on application to a proposition, p, drawn from a natural language leads to: (a) a mathematically well-defined meaning of p; and (b) truth value of p. 

The centerpiece of RCT is the concept of a restriction. Informally, a restriction, R(X), on a variable, X, is a limitation on the values which X can take. Typically, a restriction is described in a natural language. Simple example. Usually X is significantly larger than a, where a is a real number. The canonical form of a restriction is: X isr R, where X is the restricted variable, R is the restricting relation, and r is an indexical variable which defines the way in which R restricts X.

There are two key postulates in RCT. First, the meaning postulate, MP. MP asserts that the meaning of p is a restriction, X isr R, in which X, R and r are implicit in p. This restriction is referred to as the canonical form of p. Second, the truth postulate, TP, which asserts that the truth value of p is the degree to which X satisfies R.
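
The truth postulate can be given a small numerical illustration: below, the restricting relation "significantly larger than a" is modelled by a hand-picked membership function (the linear ramp and the margin are my assumptions, not part of RCT, and the usuality modifier is dropped), and the truth value of p is the degree to which a given value of X satisfies R.

def significantly_larger_than(a, margin=10.0):
    """Degree to which x counts as significantly larger than a (illustrative
    linear ramp from 0 at x = a to 1 at x = a + margin)."""
    return lambda x: max(0.0, min(1.0, (x - a) / margin))

# p: "X is significantly larger than 20", with canonical form X is R.
R = significantly_larger_than(20.0)
for x in (18.0, 24.0, 35.0):
    print(x, "->", R(x))        # truth value of p given X = x: 0.0, 0.4, 1.0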

In RCT, a proposition, p, is associated with two truth values—internal truth value and external truth value. The internal truth value modifies the meaning of p. The external truth value relates to the degree of agreement of p with factual information. In the definition of truth value which was stated earlier, the truth value is internal. In RCT, truth values are numerical, taking values in the unit interval, or linguistic, e.g., very true, not quite true, more or less true, usually true, etc.

Slides

Logic Colloquium

September 06, 2013, 4:10 PM (60 Evans Hall)

Theodore A. Slaman
Professor of Mathematics, University of California, Berkeley

On Normal Numbers

A real number is simply normal in base b if in its base-b expansion each digit appears with asymptotic frequency 1/b. It is normal in base b if it is simply normal in all powers of b, and absolutely normal if it is simply normal in every integer base. By a theorem of E. Borel, almost every real number is absolutely normal. We will present three main results. We give an efficient algorithm, which runs in nearly quadratic time, to compute the binary expansion of an absolutely normal number. We demonstrate the full logical independence between normality in one base and another. We will give a necessary and sufficient condition on a set of natural numbers M for there to exist a real number X such that X is simply normal to base b if and only if b is an element of M.
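
Simple normality in base 10 is easy to probe empirically. The Python sketch below tabulates digit frequencies in an initial segment of the Champernowne constant 0.123456789101112…, a classical example of a number normal in base 10; the cutoff is arbitrary, and at any finite cutoff the frequencies only hover near 1/10 (with 0 lagging, since leading zeros never occur).

from collections import Counter

# Digits of the Champernowne constant 0.123456789101112..., normal in base 10.
digits = "".join(str(n) for n in range(1, 100_000))

counts = Counter(digits)
for d in "0123456789":
    print(d, round(counts[d] / len(digits), 4))   # each frequency is near 1/10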

Tarski Lectures

April 26, 2013, 4:10 PM (60 Evans Hall)

Jonathan Pila
Reader in Mathematical Logic, University of Oxford

The Zilber-Pink Conjecture

Tarski Lectures

April 24, 2013, 4:10 PM (TBA)

Jonathan Pila
Reader in Mathematical Logic, University of Oxford

Special Points and Ax-Lindemann

Tarski Lectures

April 22, 2013, 4:10 PM (TBA)

Jonathan Pila
Reader in Mathematical Logic, University of Oxford

Rational Points of Definable Sets and Diophantine Problems

Logic Colloquium

April 19, 2013, 4:10 PM (60 Evans Hall)

W. Hugh Woodin
Professor of Mathematics, University of California, Berkeley

Ultimate Truth or Ultimate Chaos?

Set Theory is arguably reaching a tipping point. But which way will it tip?

Logic Colloquium

April 05, 2013, 4:10 PM (60 Evans Hall)

C. Ward Henson
Professor Emeritus of Mathematics, University of Illinois at Urbana-Champaign

Continuous Model Theory and Gurarij’s Universal, Homogeneous, Separable Banach Space

Gurarij’s Banach space was constructed in the 1960s using a metric version of a Fraisse construction; it is universal isometrically (for separable Banach spaces) and is homogeneous in an almost-isometric sense relative to its finite-dimensional subspaces. It is the analogue (for Banach spaces) of such structures as the random graph and Urysohn’s metric space. General results in Banach space theory from the 1960s show that its dual space is of the form L1(μ) for some measure μ, so it falls into the important class of “classical Banach spaces”, a fact that is far from obvious based on the original construction. Wolfgang Lusky showed in the 1970s that the Gurarij space is isometrically unique, a surprising result. He also indicated that the set of smooth points of norm 1 is an orbit of its automorphism group. In this talk it will be shown how these results can be seen and extended using continuous model theory. In particular, the class of separable Gurarij spaces can be realized as the class of separable models of a certain continuous theory T (of unit balls of Banach spaces); this theory has quantifier elimination and is the model completion of the theory of all Banach spaces. An optimal amalgamation result yields a simple formula for the induced metric on the type spaces of T over sets of parameters, which is a key to the applications that will be discussed in this talk. A highlight of recent research is the following: let X be Gurarij’s space and let E be any finite-dimensional space. Then there is an isometric linear embedding J of E into X such that J(E) has the unique Hahn-Banach extension property in X; moreover, the set of all such embeddings forms a full orbit under the action of the automorphism group of X. Model-theoretically this situation is equivalent to saying that X expanded by naming all elements of J(E) is an atomic model of its theory. This is joint work with Itai Ben Yaacov, and our paper is available at the arXiv. [Note: Earlier Gurarij’s name was transliterated “Gurarii” and in the west it is always pronounced “Ger-ar-eee”.]

Logic Colloquium

March 15, 2013, 4:10 PM

Farmer Schlutzenberg
Visiting Scholar in Mathematics, University of California, Berkeley

Jónsson Cardinals in L(R)

The standard axioms of set theory leave many natural questions undecided. Determinacy axioms and large cardinal axioms give a much more complete picture, one that is natural and compelling. Strong connections are known to exist between these two families of axioms.

The Axiom of Determinacy states that in any infinite two-player game of perfect information on the natural numbers one player has a winning strategy. AD implies large cardinal axioms. It also contradicts the Axiom of Choice, and even implies all sets of reals are Lebesgue measurable. Therefore AD fails in the full universe of sets, where Choice holds. However, there are important subuniverses where AD may hold, among them the universe L(R) of all sets constructible from the reals. Assuming AD holds in L(R), a detailed analysis of L(R) is possible, extending classical results of descriptive set theory on analytic and Borel sets.

A cardinal is Jónsson if every structure of that size has a proper elementary substructure of that size. They were introduced by Bjarni Jónsson in 1972. We will survey the above background and sketch a proof that, given AD, there are many Jónsson and Rowbottom cardinals in L(R). Both results use the same key argument. (The result for Jónsson cardinals was first announced by Woodin.)

Logic Colloquium

March 01, 2013, 4:10 PM

Wesley H. Holliday
Assistant Professor of Philosophy, University of California, Berkeley

Recent Work in Epistemic Logic

Epistemic logic provides a formal framework for modeling the knowledge of agents. Used by philosophers, theoretical computer scientists, AI researchers, game theorists, and others, epistemic logic has become one of the main application areas for modal logic. In this talk, I will survey some recent work in epistemic logic related to my research. The focus will be on how the formalization of philosophical ideas has led to interesting logical issues. Examples will include new kinds of preservation theorems and axiomatization results, as well as open problems. Some of the papers mentioned in the talk are available at http://philosophy.berkeley.edu/people/page/128.

Slides

Logic Colloquium

February 15, 2013, 4:10 PM

Itay Neeman
Professor of Mathematics, University of California, Los Angeles

Forcing Axioms

Forcing axioms are statements about the existence of filters over partially ordered sets, meeting given collections of dense open subsets. They have been used for several decades as center points for consistency proofs. A forcing axiom is shown consistent, typically through an iterated forcing construction, and then the consistency of other statements can be established by deriving them from the axiom.

In my talk I will begin with background on forcing and on some of the most important forcing axioms, including Martin’s Axiom (MA), and the Proper Forcing Axiom (PFA). I will discuss some key points in the axioms’ consistency proofs, and difficulties in adapting the proofs to obtain analogues of PFA that involve meeting more than aleph-sub-one dense open sets. I will end with recent work on new consistency proofs, and higher analogues of PFA.

Logic Colloquium

February 01, 2013, 4:10 PM

John MacFarlane
Professor of Philosophy, University of California, Berkeley

On the Motivation for the Medieval Distinction between Formal and Material Consequence

Fourteenth-century logicians define formal consequences as consequences that remain valid under uniform substitutions of categorematic terms, but they say little about the significance of this distinction. Why does it matter whether a consequence is formal or material? One possible answer is that, whereas the validity of a material consequence depends on both its structure and “the nature of things”, the validity of a formal consequence depends on its structure alone. But this claim does not follow from the definition of formal consequence by itself, and the fourteenth-century logicians do not give an argument for it. For that we must turn to Abelard, who argues explicitly that consequences that hold under uniform substitutions of their terms take their validity from their construction alone, and not from “the nature of things”. I will look at Abelard’s argument in its historical context. If my reconstruction is correct, the argument, and hence also the significance of the distinction between formal and material consequence, depends on a conception of “the nature of things” that we can no longer accept. This should give pause to contemporary thinkers who look to medieval notions of formal consequence as antecedents of their own.

Logic Colloquium

November 30, 2012, 4:10 PM

Martin Davis
Professor Emeritus of Mathematics and of Computer Science, Courant Institute, New York University, and Visiting Scholar in Mathematics, University of California, Berkeley

Pragmatic Platonism

It will be argued that Platonism in mathematics comes from the empirical fact of the success of mathematicians in dealing with the infinite and not from metaphysical speculation. Thus, absolute certainty is no more to be expected of mathematical knowledge than it is of empirical science.

Logic Colloquium

November 16, 2012, 4:10 PM (60 Evans Hall)

Grigori Mints
Professor of Philosophy and Mathematics, Stanford University; Senior Computer Scientist, SRI International, Menlo Park (July 2012 - April 2013)

Failure of Interpolation for Intuitionistic Logic of Constant Domains

The intuitionistic logic CD of constant domains is axiomatized by adding the schema (∀x)(C ∨ A(x)) → C ∨ (∀x)A(x) (x is not free in C) to the familiar axiomatization of intuitionistic predicate logic. CD is sound and complete for Kripke semantics with constant individual domains. The interpolation theorem says that for any provable implication A → B there is an interpolant I such that A → I and I → B are both provable, and I contains only predicates common to A and B. There are at least two claimed proofs of the interpolation theorem for CD, both published in The Journal of Symbolic Logic (v. 42, 1977 and v. 46, 1981). We prove that in fact the interpolation theorem fails for CD: the provable implication ((∀x)(∃y)(Py & (Qy → Rx)) & ¬(∀x)Rx) → ((∀x)(Px → (Qx ∨ r)) → r) does not have an interpolant. Posted as http://arxiv.org/abs/1202.3519. This is joint work with Grigory Olkhovikov (Ural State University) and Alasdair Urquhart (University of Toronto).

Logic Colloquium

November 02, 2012, 4:10 PM (60 Evans Hall)

Joseph Mileti
Assistant Professor of Mathematics and Statistics, Grinnell College

Computable Combinatorics

We survey the history and some recent results analyzing theorems about trees, graph colorings, and other combinatorial structures from the viewpoint of computable mathematics. By analyzing the complexity of solutions to computable instances of these problems, we discover some striking connections across different theorems and gain insight into the techniques required in every possible proof.

Slides

Logic Colloquium

October 19, 2012, 4:10 PM (60 Evans Hall)

Thomas Scanlon
Professor of Mathematics, University of California, Berkeley

A Logic for General Differential Equations

Differential and difference equations have been studied from the point of view of model theory through the theories of model-complete differential and difference fields. The known theorems are similar, but the theories have been developed in parallel. I will report on joint work with Rahim Moosa in which we propose a general theory of D-fields, proving the existence of model companions, describing the definable sets, and establishing the fine-structural Zilber trichotomy principle uniformly. Our theory also grounds a general Galois theory for difference/differential equations and provides a rigorous framework for understanding the confluence from difference to differential equations.

Logic Colloquium

October 05, 2012, 4:10 PM (60 Evans Hall)

Seth Yalcin
Associate Professor of Philosophy, University of California, Berkeley

Some Aspects of the Logical Structure of Conversation

Normally, uttering a sentence changes the state of a conversation. Suppose we thought of a natural language as determining a state transition system, where each state corresponds to a state of a possible conversation, and each sentence corresponds to a label inducing a state transition. Then we can ask: what properties are characteristic of the state transition systems appropriate to modeling conversation in natural language (or various fragments thereof)? A leading idea in the philosophy of language, and in linguistic semantics and pragmatics, is that the central effect of uttering a declarative sentence in conversation is to add a proposition to the common background assumptions of the interlocutors. We give a representation theorem indicating what formal properties are characteristic of this effect from a state transition system perspective. We then go on to discuss some fragments of language that appear to lack these properties. I connect the results to the debate about how ‘dynamic’ natural language is, and to debates about what a compositional theory of meaning should look like. (This work is joint with Daniel Rothschild, All Souls College, Oxford.)
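As a rough illustration of the “add a proposition to the common ground” picture just described, here is a minimal sketch; the world names and propositions are hypothetical, and the intersective update is only the textbook starting point, not the speakers’ representation theorem.

```python
# Minimal sketch (hypothetical names): conversation states as sets of possible
# worlds, declarative utterances as intersective updates on the common ground.
from typing import Callable, FrozenSet

World = str
State = FrozenSet[World]
Update = Callable[[State], State]

def assert_proposition(prop: FrozenSet[World]) -> Update:
    """The 'add a proposition to the common ground' effect: intersect."""
    return lambda state: state & prop

# A toy conversation over four worlds.
worlds: State = frozenset({"w1", "w2", "w3", "w4"})
it_rains = frozenset({"w1", "w2"})
it_is_cold = frozenset({"w2", "w3"})

s1 = assert_proposition(it_rains)(worlds)
s2 = assert_proposition(it_is_cold)(s1)

# Intersective updates are eliminative, idempotent, and order-insensitive --
# the kind of global properties a representation theorem can isolate.
assert s2 <= worlds
assert assert_proposition(it_rains)(s2) == s2
assert assert_proposition(it_rains)(assert_proposition(it_is_cold)(worlds)) == s2
print(sorted(s2))  # ['w2']
```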

Slides

Handout

Logic Colloquium

September 21, 2012, 4:10 PM (60 Evans Hall)

Antonio Montalban
Associate Professor of Mathematics, University of California, Berkeley

A Computability-Theoretic Equivalent to Vaught’s Conjecture

We find two computability-theoretic properties on the models of a theory T which hold if and only if T is a counterexample to Vaught’s conjecture.

Logic Colloquium

September 07, 2012, 4:10 PM (60 Evans Hall)

Maryanthe Malliaris
L. E. Dickson Instructor in Mathematics, University of Chicago

Saturation of Ultrapowers and the Structure of Unstable Theories

The talk will be about some very recent progress on Keisler’s order, a far-reaching program of understanding basic model-theoretic structure through the lens of regular ultrapowers. The focus of the talk will be on classification of unstable theories; some applications to problems in set theory/general topology will also be mentioned, including a recent result of Malliaris and Shelah solving the oldest problem on cardinal invariants of the continuum.

I will assume basic familiarity with ultraproducts, Łoś’s theorem, and saturation, and plan to give most other relevant definitions. Papers mentioned in the talk are available at http://math.uchicago.edu/~mem.

Logic Colloquium

August 24, 2012, 4:10 PM (60 Evans Hall)

Dana Scott
University Professor Emeritus, Carnegie Mellon University; Visiting Scholar, University of California, Berkeley

Lambda Calculus: Then and Now

A very fast development in the early 1930’s following Hilbert’s codification of Mathematical Logic led to the Incompleteness Theorems, Computable Functions, Undecidability Theorems, and the general formulation of Recursive Function Theory. The so-called Lambda Calculus played a key role. The history of these developments will be traced, and the much later place of Lambda Calculus in Mathematics and Programming-Language Theory will be outlined.

Slides

Logic Colloquium

April 20, 2012, 4:10 PM (60 Evans Hall)

Umesh Vazirani
Roger A. Strauch Professor of Electrical Engineering and Computer Sciences; Director, Berkeley Quantum Computation Center, University of California, Berkeley

A Turing Test for Randomness

Is it possible to certify that the n-bit output of a random number generator is “really random”? A possible approach to this question is via the theory of algorithmic randomness due to Kolmogorov, Chaitin, and Solomonoff, which identifies randomness with uncompressibility by any Turing Machine. Unfortunately this definition does not result in an efficient test for randomness. Indeed, in the classical World it seems impossible to provide such a test.

Quantum mechanics allows for a remarkable random number generator: its output is certifiably random in the sense that if the output passes a simple statistical test, and there is no information communicated between the two boxes in the randomness generating device (based, say, on the speed of light limit imposed by special relativity), then the output is certifiably random. Moreover, the proof that the output is truly random does not even depend upon the correctness of quantum mechanics! Based on joint work with Thomas Vidick.
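The abstract does not name the statistical test; the CHSH game is the standard example of such a test for two non-communicating boxes, and the following toy simulation (hypothetical code, not from the talk) shows the classical 75% ceiling that certified-randomness protocols exploit.

```python
# Illustrative sketch (the abstract does not name the test): the CHSH game is
# the standard "simple statistical test" for two non-communicating boxes.
# The boxes receive random bits x, y and answer bits a, b; they win a round
# when a XOR b = x AND y. Classical (pre-programmed) boxes win at most 75%
# of rounds; winning noticeably more certifies genuinely unpredictable output.
import random

def classical_box(x, y):
    return 0, 0                      # a deterministic strategy: always answer 0

def chsh_win_rate(box, rounds=100_000):
    wins = 0
    for _ in range(rounds):
        x, y = random.randint(0, 1), random.randint(0, 1)
        a, b = box(x, y)
        wins += ((a ^ b) == (x & y))
    return wins / rounds

print(chsh_win_rate(classical_box))  # about 0.75; quantum strategies reach ~0.853
```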

Logic Colloquium

April 06, 2012, 4:10 PM (60 Evans Hall)

Jouko Väänänen
Professor of Mathematics, University of Helsinki, and Professor of Foundations of Mathematics, University of Amsterdam

Second-Order Logic and Model Theory

Second-order logic is so strong that it makes sense to ask, are complete finitely axiomatized second-order theories always categorical? Carnap claimed that the answer is yes, but his proof did not work. Ajtai and Solovay showed that the answer depends on set theory. We present some recent new results in this field. This is joint work with Hyttinen and Kangas.

Logic Colloquium

March 16, 2012, 4:10 PM (60 Evans Hall)

George M. Bergman
Professor Emeritus of Mathematics, University of California, Berkeley

Linear Maps and Ultrafilters

Most of my talk will be devoted to showing that if k is an infinite field, I an infinite set, and g : k^I → V a linear map to a finite-dimensional k-vector-space, then g has elements where “almost all” is made precise in terms of a finite set of card(k)^+-complete ultrafilters on I. In particular, if I is not enormous, “almost all” means “all but finitely many”.

Before proving the above, I will show why such results are of value in studying homomorphisms from an infinite direct product of algebras (e.g., of associative algebras, or Lie algebras) to another algebra, and sketch the proofs of some easier results on linear maps k^I → V.

(The result of the first paragraph above is Lemma 7 of a paper with N. Nahlus, which can be read in preprint form at http://math.berkeley.edu/~gbergman/papers/prod_Lie2.pdf or in published form at http://dx.doi.org/10.1016/j.jalgebra.2012.01.004. The discussion of how such results are used, and proofs of easier results of the same nature, can be found in §§1-2 of that paper.)

Logic Colloquium

March 02, 2012, 4:10 PM (60 Evans Hall)

Alf Onshuus
Professor Titular, University of the Andes

VC-Density and dp-Rank in Model Theory

VC-dimension was introduced by Vapnik and Chervonenkis to measure, in some way, the complexity of a family of subsets of a given universe. This notion has had important applications in statistics and learning theory, usually in the framework where one looks at a definable family of definable sets (typically in a cartesian power of the real field or of the real exponential field). In this context the model theory becomes clearly relevant: a structure in which every uniformly definable family of sets has finite VC-dimension is precisely a model of a theory without the “independence property” introduced by Shelah, that is, of a dependent theory (also known as a theory with NIP). This “VC-theory” of a given uniformly definable family of definable sets in a fixed structure, and the model-theoretic notions surrounding it, form the setting for the talk.

I will survey some of the results on VC-density (a very important notion in VC-theory) that arise from the model-theoretic analysis, introduce and show some results about dp-rank (a rank introduced by Shelah in the context of dependent theories), and show some evidence of a possible strong relation between the two notions.
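To make the notion of VC-dimension concrete, here is a brute-force sketch for finite families of finite sets; the examples (half-lines and intervals on a small grid) are illustrative and not taken from the talk.

```python
# Illustrative brute-force computation of the VC-dimension of a finite family
# of subsets of a finite universe (hypothetical toy example, not from the talk).
from itertools import combinations

def shatters(family, points):
    """The family shatters `points` if every subset of `points` is cut out by some set."""
    traces = {frozenset(s & set(points)) for s in family}
    return len(traces) == 2 ** len(points)

def vc_dimension(family, universe):
    dim = 0
    for k in range(1, len(universe) + 1):
        if any(shatters(family, pts) for pts in combinations(universe, k)):
            dim = k
        else:
            break
    return dim

universe = range(6)

# Half-lines {x : x <= t} on a finite grid have VC-dimension 1:
half_lines = [frozenset(x for x in universe if x <= t) for t in universe]
print(vc_dimension(half_lines, list(universe)))  # 1

# Intervals {x : a <= x <= b} have VC-dimension 2:
intervals = [frozenset(range(a, b + 1)) for a in universe for b in universe if a <= b]
print(vc_dimension(intervals, list(universe)))   # 2
```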

Alfred Tarski Lectures

February 24, 2012, 4:10 PM (60 Evans Hall)

Per Martin-Löf
Emeritus Professor of Logic, Departments of Mathematics and Philosophy, Stockholm University

Tarski’s Metamathematical Reconstruction of the Notions of Truth and Logical Consequence

Alfred Tarski Lectures

February 22, 2012, 4:10 PM (TBA)

Per Martin-Löf
Emeritus Professor of Logic, Departments of Mathematics and Philosophy, Stockholm University

Propositions, Truth and Consequence

Alfred Tarski Lectures

February 21, 2012, 4:10 PM (TBA)

Per Martin-Löf
Emeritus Professor of Logic, Departments of Mathematics and Philosophy, Stockholm University

Assertion and Inference

Logic Colloquium

February 03, 2012, 4:10 PM (60 Evans Hall)

Leo Harrington
Professor of Mathematics, University of California, Berkeley

Is There a Philosophical View Already in Mathematical Logic?

This talk will suggest that a certain “view”, readily available from basic ingredients of mathematical logic, bears a resemblance with views found respectively in the philosophers Parmenides, Hegel, and Heidegger. And this talk will attempt to indicate the possibility that, while staying inside mathematical logic, this view can be elaborated to maintain such a resemblance with a philosophical view found in common in the works of these philosophers.

Logic Colloquium

January 20, 2012, 4:10 PM (60 Evans Hall)

Sherrilyn Roush
Associate Professor of Philosophy and Chair, Group in Logic and the Methodology of Science, University of California, Berkeley

Reasoning and the Growth of Error

I address several questions about controlling the growth of error introduced by reasoning. For example, if every step of reasoning introduces a new source of error, how can we possibly improve our reliability – as we think we do – by proof-checking, filling in gaps in proofs, or consulting an expert, all of which add steps to our reasoning? I will devote most attention to the topic of closure of knowledge under known implication. How can knowledge be closed if in one step of valid deductive reasoning I can go from knowledge that my car is parked on Main Street to a belief that it has not been stolen since I parked it, which it seems I do not know? I argue that this closure problem is entirely a matter of how fast false positive error in the conclusion grows over deductive reasoning from the premise, and develop a graded model with upper bounds on this growth. It turns out that problem examples like the car theft case, and more radical skeptical cases, can be generated only by committing an error of reasoning.
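As a toy illustration of how error can compound with each added step of reasoning (an assumption-laden simplification, not the speaker's model), one can bound the accumulated false-positive risk with the union bound.

```latex
% Toy bound (illustrative only, not the speaker's model): if step i of an
% n-step chain of reasoning introduces a false-positive probability of at most
% \varepsilon_i, then by the union bound
\[\Pr[\text{error in the conclusion}] \;\le\; \sum_{i=1}^{n} \varepsilon_i .\]
% So an added proof-checking step pays off only if it lowers the \varepsilon_i
% of the existing steps by more than the error it introduces itself.
```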

Logic Colloquium

December 02, 2011, 4:10 PM (60 Evans Hall)

Lara Buchak
Assistant Professor of Philosophy, University of California, Berkeley

Risk and Tradeoffs

The prevailing view is that subjective expected utility theory is the correct theory of instrumental rationality. Subjective expected utility theory (hereafter, EU theory) is thought to characterize the preferences of all rational decision makers. And yet, there are some preferences that violate EU theory that seem both intuitively appealing and prima facie consistent. An important group of these preferences stems from how ordinary decision makers take risk into account: ordinary decision makers seem to care about “global” properties of gambles, but EU theory rules out their doing so.

EU theory allows agents to subjectively determine how much they value outcomes (their utility function) and how likely they think various states are to obtain (their probability function). However, there is arguably a third subjective component of instrumental rationality, namely one’s norm for translating these two values into preferences among risky acts. On EU theory, every rational agent must use the same norm, a norm that is insensitive to global properties. I propose a theory on which decision makers subjectively determine how they want to take risk into account, and thus subjectively determine all three components of instrumental rationality. By providing a “representation theorem,” I show that the third component corresponds at the level of preferences to how an individual values tradeoffs in different structural parts of the gamble, e.g., to whether he cares more about what happens in the worst-case scenario or the best-case scenario. And I therefore show how non-EU maximizers can be seen as instrumentally rational: as taking the means to their ends.
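For a sense of how a third, risk-sensitive component can change evaluations, here is an illustrative sketch contrasting expected utility with a rank-dependent, risk-weighted aggregation; the risk function and gambles are hypothetical, and the sketch is not claimed to be the speaker's exact proposal.

```python
# Illustrative sketch only: one standard way (rank-dependent / risk-weighted
# aggregation) of letting "global" features of a gamble matter, offered as a
# toy contrast with expected utility, not as the speaker's exact theory.

def expected_utility(gamble, u):
    """gamble: list of (probability, outcome) pairs."""
    return sum(p * u(x) for p, x in gamble)

def risk_weighted_utility(gamble, u, r):
    """Order outcomes from worst to best and weight the chance of doing at
    least this well by a risk function r (r(0)=0, r(1)=1; convex r = averse)."""
    ordered = sorted(gamble, key=lambda px: u(px[1]))
    total = u(ordered[0][1])
    tail = 1.0
    for i in range(1, len(ordered)):
        tail -= ordered[i - 1][0]          # prob. of getting at least outcome i
        total += r(tail) * (u(ordered[i][1]) - u(ordered[i - 1][1]))
    return total

u = lambda x: x                 # risk-neutral utility, to isolate the effect of r
r = lambda p: p ** 2            # convex r: worst-case scenarios weigh more

coin_flip = [(0.5, 0), (0.5, 100)]
sure_thing = [(1.0, 50)]

print(expected_utility(coin_flip, u), expected_utility(sure_thing, u))                    # 50.0 50.0
print(risk_weighted_utility(coin_flip, u, r), risk_weighted_utility(sure_thing, u, r))    # 25.0 50
```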

Logic Colloquium

November 18, 2011, 4:10 PM (60 Evans Hall)

Theodore A. Slaman
Professor and Chair of the Department of Mathematics, University of California, Berkeley

The First-Order Consequences of the Existence of an Infinite Random Sequence

We will discuss the question, “What first-order statements follow from the existence of an infinite random sequence by effective means?” The answer depends on the degree of randomness in the infinite source. In one case, it isolates a mysterious subtheory of arithmetic strictly between Σ1-Induction and Σ2-Bounding.

Logic Colloquium

October 28, 2011, 4:10 PM (60 Evans Hall)

Tomasz Placek
Professor of Philosophy, Jagiellonian University, Krakow, Poland; Visiting Scholar in Philosophy, University of California, Berkeley

Possibilities without Possible Worlds/Histories

Possible worlds have turned out to be a particularly useful tool of modal metaphysics, although their global character makes them philosophically suspect. Hence, it would be desirable to arrive at some local modal notions that could be used instead of possible worlds. In this talk I will focus on what is known as historical (or real) modalities, an example of which is tomorrow’s sea-battle. The modalities involved in this example are local since they refer to relatively small chunks of our world: a gathering of inimical fleets in a nearby bay has two alternative possible future continuations, one with a sea-battle and the other without. The objective of this talk is to sketch a theory of such modalities that is framed in terms of possible continuations rather than possible worlds or possible histories. The proposal will be tested as a semantic theory for a language with historical modalities, tenses, and indexicals. The talk builds on my JPL paper (cf. http://www.springerlink.com/content/q2423241l6525063/).

Logic Colloquium

October 14, 2011, 4:10 PM (60 Evans Hall)

Daisuke Ikegami
Japan Society for the Promotion of Science Postdoctoral Fellow, University of California, Berkeley

Gale-Stewart Games and Blackwell Games

In 1953, Gale and Stewart developed the general theory of infinite games (the so-called Gale-Stewart games), which are two-player zero-sum infinite games with perfect information. The theory of Gale-Stewart games has been investigated deeply by many logicians; it has been one of the main topics in set theory, and it has connections with other areas of set theory as well as with model theory and computer science.

In 1969, Blackwell proved an extension of von Neumann’s minimax theorem in which he introduced infinite games with imperfect information – nowadays called Blackwell games. Although Blackwell games have not been investigated as thoroughly as Gale-Stewart games, in 1998 Martin proved that the Axiom of Determinacy (AD) implies the Axiom of Blackwell Determinacy (Bl-AD) and conjectured the converse, which is still not known to be true.

In this talk, we introduce Blackwell games and their determinacy and talk about the connection between the determinacy of Gale-Stewart games and that of Blackwell games. A part of the work is with Benedikt Löwe & Devid de Kloet and another part is with Hugh Woodin.

Logic Colloquium

September 30, 2011, 4:10 PM (60 Evans Hall)

Lotfi A. Zadeh
Professor in the Graduate School, University of California, Berkeley; Director, Berkeley Initiative in Soft Computing

Can Mathematics Deal with Computational Problems Which Are Stated in a Natural Language?

Here are a few very simple examples of computational problems which are stated in a natural language. (a) Most Swedes are tall. What is the average height of Swedes? (b) Probably John is tall. What is the probability that John is short? What is the probability that John is very short? What is the probability that John is not very tall? (c) Usually Robert leaves his office at about 5 pm. Usually it takes Robert about an hour to get home from work. At what time does Robert get home? (d) X is a real-valued random variable. Usually X is much larger than approximately a. Usually X is much smaller than approximately b. What is the probability that X is approximately c, where c is a number between a and b? (e) A and B are boxes, each containing 20 balls of various sizes. Most of the balls in A are large, a few are medium, and a few are small. Most of the balls in B are small, a few are medium, and a few are large. The balls in A and B are put into a box C. What is the number of balls in C which are neither large nor small? For convenience, such problems will be referred to as CNL problems.

It is a long-standing tradition in mathematics to view computational problems which are stated in a natural language as being outside the purview of mathematics. Such problems are dismissed as ill-posed and not worthy of attention. In the instance of CNL problems, mathematics has nothing constructive to say. In my lecture, this tradition is questioned and a system of computation is suggested which opens the door to construction of mathematical solutions of CNL problems. The system draws on the fuzzy-logic-based formalism of computing with words (CW) (Zadeh 2006). A concept which plays a pivotal role in CW is that of precisiation of meaning. More concretely, precisiation involves translation of natural language into a mathematical language in which the objects of computation are well-defined — though not conventional — mathematical constructs.

A key idea involves representation of the meaning of a proposition, p, drawn from a natural language, as a restriction on the values which a variable, X, can take. Generally, X is a variable which is implicit in p. The restriction is represented as an expression of the form X isr R, where X is the restricted variable, R is the restricting relation and r is an indexical variable which defines the way in which R restricts X. This expression is referred to as the canonical form of p. Canonical forms of propositions in a natural language statement of a computational problem serve as objects of computation in CW.
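As a toy illustration of precisiation in the spirit of the canonical form X isr R, the following sketch treats “Most Swedes are tall” as a restriction on the proportion of tall Swedes; the membership functions and the sample of heights are made up for illustration and are not taken from the lecture.

```python
# Toy illustration (hypothetical numbers and membership functions) of treating
# "Most Swedes are tall" as a restriction on a variable -- here, on the
# proportion of tall Swedes -- in the spirit of the canonical form X isr R.

def tall(height_cm):
    """Made-up fuzzy membership of 'tall' (0 below 170 cm, 1 above 185 cm)."""
    return min(1.0, max(0.0, (height_cm - 170.0) / 15.0))

def most(proportion):
    """Made-up fuzzy membership of 'most' (0 below 0.5, 1 above 0.9)."""
    return min(1.0, max(0.0, (proportion - 0.5) / 0.4))

heights = [168, 174, 179, 182, 186, 191]          # a hypothetical sample
proportion_tall = sum(tall(h) for h in heights) / len(heights)

# Degree to which "Most Swedes are tall" holds on this sample:
print(round(proportion_tall, 3), round(most(proportion_tall), 3))
```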

Construction and use of mathematical solutions of CNL problems is an unexplored domain in mathematics. The importance of this domain derives from the fact that much of human knowledge, and particularly world knowledge, is described in natural language.

Logic Colloquium

September 16, 2011, 4:10 PM (60 Evans Hall)

Adam Day
Miller Research Fellow, University of California, Berkeley

Randomness and Neutral Measures

Algorithmic randomness provides a way to define a random outcome. The underlying idea is that a random outcome should have no ‘rare’ mathematical properties. A combination of measure theory and recursion theory provides a framework for defining what a rare property is and consequently for defining randomness itself.

Most research in algorithmic randomness has been conducted using computable measures. However, there are some interesting results that have come from considering noncomputable measures. In a surprising result, Levin established the existence of probability measures for which all infinite binary sequences are random. These measures are termed neutral measures. In this talk I will introduce Levin’s neutral measures. I will also outline some recent joint work with Joseph Miller in which we determine some key properties of neutral measures.

Logic Colloquium

September 02, 2011, 4:10 PM (60 Evans Hall)

Paolo Mancosu
Professor and Chair of Philosophy, University of California, Berkeley

Axiomatics and purity of methods: On the relationship between plane and solid geometry

Traditional geometry concerns itself with planimetric and stereometric considerations, which are at the root of the division between plane and solid geometry. To raise the issue of the relation between these two areas brings with it a host of different problems that pertain to mathematical practice, epistemology, semantics, ontology, methodology, and logic. In addition, issues of psychology and pedagogy are also important here.

In this talk (which is based on joint work with Andy Arana), my major concern is with methodological issues of purity. In the first part I will give a rough sketch of some key episodes in mathematical practice that relate to the interaction between plane and solid geometry. In the second part, I will look at a late nineteenth century debate (on “fusionism”) in which for the first time methodological and foundational issues related to aspects of the mathematical practice covered in the first part of the paper came to the fore. I conclude this part of the talk by remarking that only through an axiomatic and analytical effort could the issues raised by the debate on “fusionism” be made precise. The third part of the talk focuses on Hilbert’s axiomatic and foundational analysis of the plane version of Desargues’ theorem on homological triangles and its implications for the relationship between plane and solid geometry. Finally, building on the foundational case study analyzed in the third section, in the fourth section I point the way to the analytic work necessary for exploring various important claims on “purity”, “content”, and other relevant notions.

Logic Colloquium

April 22, 2011, 4:10 PM (60 Evans Hall)

Matthew D. Foreman
Professor of Mathematics, University of California, Irvine

Classifying Measure Preserving Diffeomorphisms of the Torus

The statistical behavior of deterministic dynamical systems can be completely random. The ergodic theorem tells us that the statistical behavior of a system carrying an invariant ergodic measure is captured by the isomorphism class of that measure. Motivated by this type of phenomenon, in 1932 von Neumann proposed the program of classifying the ergodic measure preserving systems.

In this talk we will describe applications of the tools of descriptive set theory to show that this program is impossible. In earlier joint work with Weiss, it was shown that the isomorphism relation is “turbulent” and hence no generic class can be reduced to an S-infinity action. With Rudolph and Weiss it was shown that the isomorphism relation is a complete analytic set.

But what about concrete systems: C-infinity measure preserving transformations of the torus? Recent work with Weiss extends these results to such diffeomorphisms. Finally a very recent result shows that the “graph-isomorphism problem” can be reduced to isomorphism of measure preserving diffeomorphisms. As a corollary the isomorphism relation is complete for S-infinity actions.

Logic Colloquium

April 08, 2011, 4:10 PM (60 Evans Hall)

Ronald Fagin
Manager, Foundations of Computer Science, IBM Almaden Research Center

P vs. NP, and Community Refereeing in the Web Era

P and NP are “complexity classes”. The problem of whether they are the same is renowned in computer science and mathematics, and a solution yields a Clay Millennium Prize of one million dollars, mathematical immortality, and deep insight into efficient computation. In August of 2010 there was a lot of excitement over the announcement of a possible solution: a claimed mathematical proof that P and NP are different, using ideas from mathematical logic and statistical physics. We will describe the P vs. NP problem at a high level for a general scientific audience. What is this problem? Why is it important? We will also discuss the fascinating sociological phenomenon of mathematical refereeing in “internet time”. Finally, we will discuss the idea of the attempted proof, and issues that were raised during the “internet refereeing” process.

Herbert Enderton Memorial Gathering

March 15, 2011, 11:00 AM (Berkeley City Club, 2315 Durant Avenue, Berkeley, California)

You are invited to join us to celebrate Herbert’s life on April 15, his 75th birthday. The gathering will be in Berkeley, CA at the Berkeley City Club from 11 am to 3 pm. Lunch will be served.

Whether or not you can attend, we would love to have you send us short written stories and memories of Herbert that we can draw on for the eulogy and share with his loved ones.

Warmly, Cathy, Eric, and Bert

(Please forward this note to anyone whose address we might not have. All who wish to attend are welcome; we just need a head count. RSVP: cenderton@hotmail.com)

(A group rate is available for lodging at the Berkeley City Club. (510) 848-7800.)

Logic Colloquium

March 11, 2011, 4:10 PM (60 Evans Hall)

Bjørn Kjos-Hanssen
Associate Professor of Mathematics, University of Hawaii at Manoa, and Visiting Associate Professor of Mathematics, University of California, San Diego

Discerning Randomness from an Asymptotic Hamming Distance

A notion of asymptotic Hamming distance suitable for the study of algorithmic randomness of infinite binary sequences is developed. As an application, it is shown that there is no fixed procedure that computes a (weakly) Mises-Wald-Church stochastic sequence from an arbitrary complex sequence. Here a sequence is complex if its prefixes have Kolmogorov complexity bounded below by an unbounded, nondecreasing computable function.

Logic Colloquium

February 25, 2011, 4:10 PM (60 Evans Hall)

Phokion G. Kolaitis
Professor of Computer Science, University of California, Santa Cruz, and Research Staff Member, IBM Almaden Research Center

Random Graphs and the Parity Quantifier [Slides]

The classical zero-one law for first-order logic asserts that the asymptotic probability of every first-order definable property of finite graphs always exists and is either zero or one. Over the years zero-one laws have been established for numerous extensions of first-order logic, including least-fixed point logic, finite-variable infinitary logics, and certain fragments of existential second-order logic. The zero-one law, however, fails to hold for any logical formalism that is powerful enough to express parity, that is, the property “there is an odd number of elements”; in fact, for such logical formalisms, even the convergence law fails to hold. In this work, we turn the parity barrier into a feature and systematically investigate the asymptotic probabilities of properties of finite graphs expressible in first-order logic augmented with the parity quantifier. Our main result is a “modular convergence law” that captures the limiting behavior of properties expressible in this extension of first-order logic on finite graphs. This is joint work with Swastik Kopparty, Institute for Advanced Study.
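The contrast the abstract draws can be illustrated with a small simulation (not from the talk): a first-order property of G(n, 1/2) such as “there is an isolated vertex” has a limiting probability, while the parity of the number of elements oscillates and so has none.

```python
# Illustrative simulation (not from the talk): a first-order property of the
# random graph G(n, 1/2) has a limiting probability (here it tends to 0),
# while a parity property need not even converge.
import random

def random_graph(n, p=0.5):
    return {(i, j) for i in range(n) for j in range(i + 1, n) if random.random() < p}

def has_isolated_vertex(n, edges):        # first-order: (exists x)(forall y) not E(x,y)
    touched = {v for e in edges for v in e}
    return len(touched) < n

def estimate(prop, n, trials=200):
    return sum(prop(n, random_graph(n)) for _ in range(trials)) / trials

for n in (4, 5, 16, 17, 32, 33):
    # Pr[isolated vertex] -> 0 as n grows (zero-one law behaviour), while
    # Pr[odd number of vertices] is 1 for odd n and 0 for even n: no limit.
    print(n, estimate(has_isolated_vertex, n), n % 2 == 1)
```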

Logic Colloquium

February 11, 2011, 4:10 PM (60 Evans Hall)

Grigori Mints
Professor of Philosophy and of Mathematics, Stanford University

Extension of Epsilon Substitution to Second-Order Systems

The problem of extending the epsilon substitution method to second-order systems was posed by Hilbert in 1928 and repeated in “Grundlagen der Mathematik” (1939). In this talk I will describe the first-order formulation (due to Hilbert, Ackermann, von Neumann, and Bernays) and outline possible extensions to the second order, progress achieved up to now, and obstacles to a termination proof.

Logic Colloquium

January 28, 2011, 4:10 PM (60 Evans Hall)

Thomas Scanlon
Professor of Mathematics, University of California, Berkeley

Number-theoretic Consequences of the Model Theory of Real Geometry

We say that a first-order structure (R, <, …) in some language expanding the language of orders is o-minimal if < linearly orders R and every definable subset of R is a finite union of points and intervals. Tarski’s quantifier elimination theorem for the real field implies that the real numbers, considered as an ordered ring, form an o-minimal structure, and subsequent work by various authors has established the o-minimality of more sophisticated structures on the real numbers. The definable sets in an o-minimal structure (in any number of variables) admit a very tame geometric structure theory, very much at odds with the Gödel phenomena in arithmetic. Nevertheless, Pila and Wilkie bridged the divide between geometry and arithmetic by establishing bounds on the number of rational points in sets definable in o-minimal expansions of the real numbers. In recent work, Pila employed this counting theorem to prove the celebrated André-Oort conjecture on algebraic relations on special points of modular curves. In this talk, I will describe the Pila-Wilkie counting theorem and some of my own work relating arithmetic questions about algebraic dynamical systems to o-minimality and the Pila-Wilkie bounds.
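For reference, the Pila-Wilkie counting theorem mentioned above can be stated roughly as follows (standard formulation, paraphrased rather than quoted from the talk).

```latex
% Rough statement. Let X \subseteq \mathbb{R}^n be definable in an o-minimal
% expansion of the real field, and let X^{trans} be X with all connected,
% positive-dimensional semialgebraic subsets removed. Then for every
% \varepsilon > 0 there is a constant c = c(X,\varepsilon) such that
\[\#\bigl\{\, x \in X^{\mathrm{trans}} \cap \mathbb{Q}^{n} \;:\; H(x) \le T \,\bigr\} \;\le\; c\,T^{\varepsilon} \qquad \text{for all } T \ge 1,\]
% where H(x) is the usual multiplicative height of a tuple of rational numbers.
```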

Logic Colloquium

December 03, 2010, 4:10 PM (60 Evans Hall)

John R. Steel
Professor of Mathematics, University of California, Berkeley

The Triple Helix

Many different foundational theories have been developed and studied by set theorists. Remarkably, no divergence at the level of statements about “concrete” objects (for example, natural numbers) has emerged, and indeed, these theories are linearly ordered by the inclusion order on their sets of arithmetic consequences. The place of a theory in this order is known as its consistency strength. Our methods for comparing the consistency strengths of set theories center on three intertwined canonical hierarchies of sets. In this talk we shall describe those three hierarchies in very general terms. We shall focus on the philosophical significance of the results and open questions concerning them.

Logic Colloquium

November 12, 2010, 4:10 PM (60 Evans Hall)

G. Aldo Antonelli
Professor of Philosophy, University of California, Davis

The Abstraction Mystique

The logical status of abstraction principles, and especially Hume’s Principle, has long been debated, but the best currently available tool for explicating a notion’s logical character – the notion of permutation invariance introduced by Tarski in 1966 – has not received a lot of attention in this debate. This talk aims to fill this gap. After characterizing abstraction principles as particular classifiers, i.e., mappings from the subsets of a domain into that domain, and exploring some of their properties, the paper introduces several distinct notions of permutation invariance for such principles, assessing the philosophical significance of each.
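As a toy rendering of this terminology (hypothetical example, not from the talk), a classifier on a finite domain can be tested for permutation invariance by brute force; notably, no total classifier on a domain with at least two elements can be invariant in the straightforward sense, since its value on the empty set would have to be fixed by every permutation.

```python
# Toy illustration (hypothetical example, not from the talk): a "classifier"
# maps subsets of a domain into that domain; it is permutation invariant in the
# Tarskian sense if it commutes with every permutation of the domain.
from itertools import chain, combinations, permutations

DOMAIN = (0, 1, 2)

def subsets(xs):
    return [frozenset(c) for c in chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))]

def is_invariant(classifier):
    for perm in permutations(DOMAIN):
        pi = dict(zip(DOMAIN, perm))
        for s in subsets(DOMAIN):
            if classifier(frozenset(pi[x] for x in s)) != pi[classifier(s)]:
                return False
    return True

# "Pick the minimum element, defaulting to 0" is not invariant; indeed no total
# classifier on a domain with two or more elements can be, because its value on
# the empty set would need to be fixed by every permutation of the domain.
print(is_invariant(lambda s: min(s) if s else 0))  # False
```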

Logic Colloquium

October 29, 2010, 4:10 PM (60 Evans Hall)

Isaac Goldbring
Hedrick Assistant Adjunct Professor of Mathematics, University of California, Los Angeles

A Survey of the Model Theory of Urysohn’s Metric Space

Urysohn’s metric space U is the unique (up to isometry) Polish (i.e. complete, separable) metric space which is universal, that is, it contains an isometric copy of all Polish metric spaces, and ultrahomogeneous, that is, any isometry between finite subspaces of U extends to an isometry of U. Urysohn’s metric space (and its isometry group) has been studied by topologists and descriptive set theorists for a plethora of reasons. In this talk, I will outline much of what is known on the model theory of Urysohn’s metric space in the context of model theory for metric structures, which I will introduce at the beginning of my talk. I will discuss matters such as axiomatizability, quantifier elimination, independence relations, and definability.

Logic Colloquium

October 08, 2010, 4:10 PM (60 Evans Hall)

Henry Towsner
Hedrick Assistant Adjunct Professor of Mathematics, University of California, Los Angeles

The Infinitary Approach to Finite Combinatorics

The “Correspondence Principle” makes it possible to take combinatorial problems about finite structures and translate them into questions about infinitary, measure-theoretic structures. While there are fairly elementary combinatorial approaches, it has recently become clear that the model-theoretic approach creates a particularly rich and flexible infinitary structure. We will outline the model-theoretic proof of the correspondence principle and point out some of its new applications. To illustrate the flexibility of this method, we will give a short proof of the Szemeredi regularity lemma, an important tool in the study of finite graphs.
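For reference, the Szemerédi regularity lemma mentioned at the end can be stated as follows (standard formulation, not quoted from the talk).

```latex
% Standard statement (not quoted from the talk). For disjoint vertex sets A, B
% in a finite graph, write d(A,B) for the edge density between them; the pair
% (A,B) is \varepsilon-regular if
\[|d(A',B') - d(A,B)| \le \varepsilon \qquad \text{whenever } A' \subseteq A,\ B' \subseteq B,\ |A'| \ge \varepsilon|A|,\ |B'| \ge \varepsilon|B|.\]
% Regularity lemma: for every \varepsilon > 0 there is M(\varepsilon) such that
% every finite graph admits a partition of its vertices into k \le M(\varepsilon)
% parts of nearly equal size in which all but at most \varepsilon k^2 of the
% pairs of parts are \varepsilon-regular.
```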

Logic Colloquium

October 01, 2010, 4:10 PM (60 Evans Hall)

Solomon Feferman
Professor Emeritus of Mathematics and Philosophy, Stanford University

What’s Definite? What’s not?

Most axiomatizations of set theory that have been treated metamathematically have been based either entirely on classical logic or entirely on intuitionistic logic. But a natural conception of the set-theoretic universe is as an indefinite (or “potential”) totality, to which intuitionistic logic is more appropriately applied, while each set is taken to be a definite (or “completed”) totality, for which classical logic is appropriate; so on that view, set theory should be axiomatized on some correspondingly mixed basis. Similarly, in the case of predicative analysis, the natural numbers are conceived to form a definite totality, while the universe of sets (or functions) of natural numbers is viewed as an indefinite totality, so that, again, a mixed semi-constructive logic should be the appropriate one to treat the two together. In the first part of the talk I will present ways of formulating such semi-constructive systems of analysis and set theory and survey some results characterizing their proof-theoretic strength. Interestingly, though the logic is weakened, one can usefully strengthen certain principles. In the last part of the talk I will relate this work to the controversial discussion as to whether certain statements in the language of set theory such as the Continuum Hypothesis express definite or indefinite problems.

Logic Colloquium

September 17, 2010, 4:10 PM (60 Evans Hall)

Alice Medvedev
Visiting Assistant Professor of Mathematics, University of California, Berkeley

The Recursive Spectrum of a Strongly Minimal Modular Theory of Groups in a Finite Language

The recursive spectrum of a fixed strongly minimal theory T is the set of integers n such that the model of T of dimension n admits a recursive presentation. How complicated can this set be? Some interesting examples can be constructed by coding complicatedness into an infinite language, or by using Hrushovski constructions. We are working towards showing that the recursive spectrum of a strongly minimal locally modular theory in a finite language must be nothing, everything, or only the prime model. The title describes the current state of this joint work with Uri Andrews.

Logic Colloquium

September 03, 2010, 4:10 PM (60 Evans Hall)

W. Hugh Woodin
Professor of Mathematics, University of California, Berkeley

Strong Axioms of Infinity and the Search for V

Professor Woodin plans to give a broad and nontechnical overview of the prospects of the existence of an ultimate version of Gödel’s Axiom of Constructibility. The talk will be essentially a repeat of the Plenary Lecture in Logic at the International Congress of Mathematicians in Hyderabad, India, which he delivered on Thursday, August 26.

Logic Colloquium

April 23, 2010, 4:10 PM (60 Evans Hall)

Richard A. Shore
Professor of Mathematics, Cornell University

The Limits of Determinacy in Second-Order Arithmetic

Logic Colloquium

April 16, 2010, 4:10 PM (60 Evans Hall)

Umesh V. Vazirani
Strauch Professor of Electrical Engineering and Computer Sciences, University of California, Berkeley, and Director of the Berkeley Quantum Computation Center

Is There a Quantum Analog of the PCP (Probabilistically Checkable Proofs) Theorem?

Logic Colloquium

April 12, 2010, 4:10 PM (60 Evans Hall)

Grigor Sargsyan
National Science Foundation Postdoctoral Fellow, and Assistant Adjunct Professor, University of California, Los Angeles

Descriptive Inner Model Theory

It is by now a well-known theorem of Martin, Steel, and Woodin that if there is a measurable cardinal above omega-many Woodin cardinals then AD holds in L(R). Since this theorem was proved, many theorems of a similar nature have appeared in set theory. For instance, Steel recently showed that PFA, the proper forcing axiom, implies that AD holds in L(R). Many results that followed the Martin-Steel-Woodin theorem, including Steel’s aforementioned result, used a substantial amount of inner model theory to prove that all sets in a certain “canonical” collection of sets of reals are determined. In this talk we will take “canonical” to stand for Universally Baire. Letting Gamma be the collection of Universally Baire sets, many of the theorems proven after the Martin-Steel-Woodin result are about the size of Gamma; in particular, the earlier results just show that under certain hypotheses all sets of reals in L(R) are in Gamma. Descriptive inner model theory can then be defined as the study of the properties of Gamma in various mathematically rich extensions of ZFC, using methods from inner model theory. Inner model theory enters into the study of the properties of Gamma through a conjecture known as the Mouse Set Conjecture, which allows one to translate questions about sets of reals into questions about iteration strategies for a certain kind of mouse known as a hod mouse. In this talk, we will explain all this terminology and will outline some of the more recent theorems, including the following.

Theorem (S.). If there is a Woodin limit of Woodins, then there is a Gamma^* which is a subset of Gamma such that L(Gamma^*, R) satisfies “AD_R + Theta is regular”.

We will also explain some applications in calibrating lower bounds of consistency strength and in the inner model problem.

Logic Colloquium

April 02, 2010, 4:10 PM (60 Evans Hall)

Seth Yalcin
Assistant Professor of Philosophy, University of California, Berkeley

Modus Tollens Denied

I present and defend a number of counterexamples to Modus Tollens, construing Modus Tollens as a pattern of inference expressible in ordinary language. I ask what these counterexamples teach us about the meaning of conditional sentences and about the concept of logical consequence appropriate to understanding natural language inference. Since under classical assumptions, Modus Tollens can be proved from Modus Ponens, I ask in particular whether we should abandon Modus Ponens (as Vann McGee has independently argued we should), or give up aspects of the classical reasoning that allows us to prove Modus Tollens from Modus Ponens. I develop the latter approach.

Logic Colloquium

February 26, 2010, 4:10 PM (60 Evans Hall)

Lotfi A. Zadeh
Professor in the Graduate School and Director of the Berkeley Initiative in Soft Computing, University of California, Berkeley

Toward a Logic of Everyday Reasoning

In one form or another, attempts to construct a logic of everyday reasoning go back to antiquity. The classical, Aristotelean, bivalent logic may be viewed as a product of such attempts. However, Aristotelean logic does not qualify as a logic of everyday reasoning because it does not come to grips with the core issue — the intrinsic imprecision of everyday reasoning. In modern times, logical systems — driven by a quest for the ultimate in precision, rigor and depth — have become more and more estranged from everyday reasoning. But as we move further into the age of machine intelligence and automated decision-making, the need for a logic of everyday reasoning and natural language understanding becomes increasingly apparent.

Today, a logic of everyday reasoning is not in existence. In my lecture, I sketch two attempts to construct one. The first attempt relates to what may be called precisiable everyday reasoning. A very simple example is: Icy roads are slippery; slippery roads are dangerous; therefore icy roads are dangerous. In this case, it can be shown that the seemingly reasonable conclusion is incorrect, and that a correct conclusion can be inferred. A simple example of unprecisiable reasoning is the following. I hail a cab and ask the driver to take me to address A the fastest way. The driver takes me to A but it is not possible to prove or disprove that the route which he chose is the fastest way. Construction of a logic of unprecisiable everyday reasoning requires a radical departure from traditional approaches to logical reasoning.

Logic Colloquium

February 12, 2010, 4:10 PM (60 Evans Hall)

Jennifer Chubb
Assistant Professor of Mathematics, University of San Francisco

Degree Spectra of Relations

We consider algorithmic properties of additional relations on computable structures. For example, for a computable linear ordering we may consider the successor relation, which does not have to be computable. The degree spectrum of a relation on a computable structure is the set of degrees (Turing or other) of images of that relation under isomorphism to another computable copy of the structure. We’ll see a variety of examples of degree spectra of definable relations, and will use algorithmic information theory to analyze the strong degree spectra of the omega-type initial segment of computable linear orderings of type omega + omega*. I will discuss some general results, and present some examples from recent collaborative projects.  
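A small sketch (hypothetical code, not from the talk) of why the successor relation of a computable linear ordering need not be computable: given a decidable comparison, one can only enumerate evidence against adjacency, never confirm it by a finite search. The ordering below has type omega + omega*, matching the orderings discussed above.

```python
# Sketch (hypothetical example): for a computable linear ordering, given by a
# decidable comparison on natural-number codes, the successor relation is only
# co-c.e. in general -- we can enumerate witnesses *against* "b is the successor
# of a" by searching for an element strictly in between, but no finite search
# can ever certify that no such witness exists.

def less(x, y):
    """A computable linear ordering of the naturals of type omega + omega*:
    evens first in increasing order, then odds in decreasing order."""
    if x % 2 == 0 and y % 2 == 0:
        return x < y
    if x % 2 == 1 and y % 2 == 1:
        return x > y
    return x % 2 == 0   # every even element precedes every odd element

def refuted_successor(a, b, search_bound):
    """Semi-decision procedure: returns True once some c with a < c < b is found."""
    return any(less(a, c) and less(c, b) for c in range(search_bound))

# 0 and 1 are the least and greatest elements; they are *not* adjacent,
# and a large enough search eventually reveals a witness:
print(refuted_successor(0, 1, 10))     # True (e.g. c = 2 lies between them)
# 0 and 2 really are adjacent, but no finite search can certify that:
print(refuted_successor(0, 2, 10**4))  # False, no matter how far we search
```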

Logic Colloquium

January 29, 2010, 4:10 PM (60 Evans Hall)

John MacFarlane
Associate Professor of Philosophy, University of California, Berkeley

Fuzzy Epistemicism

It is taken for granted in much of the literature on vagueness that semantic and epistemic approaches to vagueness are fundamentally at odds. If we can analyze borderline cases and the sorites paradox in terms of degrees of truth, then we don’t need an epistemic explanation. Conversely, if an epistemic explanation suffices, then there is no reason to depart from the familiar simplicity of classical bivalent semantics. I question this assumption, showing that there is an intelligible motivation for adopting a many-valued semantics even if one accepts a form of epistemicism. The resulting hybrid view has advantages over both classical epistemicism and traditional many-valued approaches.

Logic Colloquium

December 04, 2009, 4:10 PM (60 Evans Hall)

Sherrilyn Roush
Associate Professor of Philosophy, University of California, Berkeley

Rational Self-Doubt

If one is highly confident that Rosco is the murderer from having seen the crime, and then learns of the substantial experimental psychology evidence that human beings are very unreliable at eyewitness testimony, is one thereby obligated to reduce one’s confidence about Rosco? Current Bayesian rationality theory forbids any such revision because of idealizations that I argue are unnecessary. I develop a more general probabilistic framework that shows how we can learn about the world from information about our own unreliability, through higher-order updating that re-calibrates us. It shows why taking doubt about one’s own judgment seriously does not end up in incoherence or runaway skepticism, and what the added value of this kind of evidence is.

Logic Colloquium

November 20, 2009, 4:10 PM (60 Evans Hall)

Denis R. Hirschfeldt
Professor of Mathematics, University of Chicago

Lowness Properties and Cost Functions

One of the byproducts of recent developments in the theory of algorithmic randomness is an increased interest in notions of computability-theoretic lowness. We will focus on two such notions: K-triviality, which is a natural notion of “randomness-theoretic weakness” with a number of interesting characterizations, and strong jump traceability, which has a more combinatorial definition but is also strongly connected to algorithmic randomness. In particular, we will see how these notions can be characterized in terms of two other powerful ideas arising from the theory of algorithmic randomness: cost functions and diamond classes.
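For concreteness, the standard definition of K-triviality (not quoted from the talk) is the following.

```latex
% Standard definition (not quoted from the talk): A \subseteq \mathbb{N} is
% K-trivial if its initial segments are as compressible as possible, i.e. there
% is a constant c with
\[K(A \upharpoonright n) \;\le\; K(n) + c \qquad \text{for all } n,\]
% where K denotes prefix-free Kolmogorov complexity.
```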

We will use these themes to explain some of the issues of current interest in the rapidly developing area of algorithmic randomness, including a discussion of some major open questions.

Logic Colloquium

November 06, 2009, 4:10 PM (60 Evans Hall)

Martin Davis
Professor Emeritus of Mathematics and of Computer Science, Courant Institute, New York University, and Visiting Scholar in Mathematics, University of California, Berkeley

Reflections on Hilbert’s 10th Problem with a New Conjecture

Hilbert’s 10th Problem asked for an algorithm to determine whether a given polynomial equation with integer coefficients has an integer solution. In this talk I will discuss the negative solution to the problem as well as various extensions.
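For reference, the theorem behind the negative solution (the MRDP theorem) can be stated as follows.

```latex
% The MRDP theorem (Matiyasevich, building on Davis-Putnam-Robinson): a set
% S \subseteq \mathbb{N}^k is recursively enumerable if and only if it is
% Diophantine, i.e. of the form
\[S \;=\; \bigl\{\, \bar a \in \mathbb{N}^{k} : \exists x_1,\dots,x_m \in \mathbb{N}\;\; p(\bar a, x_1,\dots,x_m) = 0 \,\bigr\}\]
% for some polynomial p with integer coefficients. Since some recursively
% enumerable sets are undecidable, no algorithm of the kind Hilbert requested
% can exist.
```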

Logic Colloquium

October 23, 2009, 4:10 PM (60 Evans Hall)

Lara Buchak
Assistant Professor of Philosophy, University of California, Berkeley

Different Decision or Different Decision Theory?: Modeling Risk Aversion

Decision theory is supposed to accommodate any preferences that a rational decision maker might have. I show that standard decision theory – expected utility theory – cannot accommodate common preferences that stem from the way apparently rational decision makers account for risk. I propose a generalization of expected utility theory that can accommodate these preferences, and show that it has all the theoretical power of the standard theory: in particular, it has a representation theorem that allows us to derive an agent’s beliefs, desires, and attitudes towards risk from his preferences. I then explore a classic strategy for responding to counterexamples of the type I present: re-characterizing the choice problem facing an agent, by individuating outcomes more finely. This introduces an important question in the methodology of decision theory: under what circumstances should we interpret the agent as facing a different choice problem rather than adopt a different theory to describe his behavior? I approach this question formally, and show that we cannot individuate the outcomes to make his behavior compatible with expected utility theory without also making his behavior compatible with many other opposing theories: thus, I suggest, we cannot save the standard theory.

Logic Colloquium

October 09, 2009, 4:10 PM (60 Evans Hall)

Nate Ackerman
Visiting Scholar in Mathematics, University of California, Berkeley

Trees, Sheaves, and Definition by Recursion

We will show there is a topological space for which presheaves are the same thing as trees. We will further show that there is a sheaf on this topological space which has an important relationship with Baire space.

We will then use these connections to show how a definition by transfinite recursion can be thought of as an operation on sheaves, and how the well-definedness of such a definition can be thought of as a property of the sheaf we are working on.

This will then allow us to define a second-order tree as a sheaf on a tree and to expand our notion of definition by transfinite recursion to all well-founded second-order trees (even those which are ill-founded as normal trees).

We will then mention how these techniques can be used to prove a variant of the Suslin-Kleene Separation theorem.

Logic Colloquium

September 25, 2009, 4:10 PM (60 Evans Hall)

Dana S. Scott
University Professor, Emeritus, Carnegie Mellon University, and Visiting Scholar, University of California, Berkeley

Mixing Modality and Probability

Orlov first [1928] and Gödel later [1933] pointed out the connection between the Lewis System S4 and Intuitionistic Logic. McKinsey and Tarski gave an algebraic formulation and proved completeness theorems for propositional systems using as models topological spaces with the interior operator corresponding to the necessitation modality. Earlier, Tarski and Stone had each shown that the lattice of open subsets of a topological space models intuitionistic propositional logic. Expanding on a suggestion of Mostowski about interpreting quantifiers, Rasiowa and Sikorski used the topological models to model many first-order logics. After the advent of Solovay’s recasting of Cohen’s independence proofs as using Boolean-valued models, topological models for modal higher-order logic were studied by Gallin and others. For Boolean-valued logic, the complete Boolean algebra M = Meas([0,1])/Null of measurable subsets of the unit interval modulo sets of measure zero gives every proposition a probability. Perhaps not as well known is the observation that the measure algebra also carries a nontrivial S4 modality defined with the aid of the sublattice Open([0,1])/Null of open sets modulo null sets. This sublattice is closed under arbitrary joins and finite meets in the measure algebra, but it is not the whole of the measure algebra. By working by analogy to the construction of Boolean-valued models for ZF, we can construct over M a model for a modal ZF (MZF) where membership and equality predicates have interesting and natural modal properties. In such a universe the real numbers correspond to random variables, and – following a suggestion of Alex Simpson (Edinburgh) – there is also a well-motivated modeling of random reals. A modal set theory, however, requires a reexamination of comprehension principles, and much work remains to be done to organize methods of proof to take account of the new distinctions encountered. It is also possible to recast well-known theorems of Ergodic Theory as principles about this modal universe, and the question to be considered is whether the new perspective can lead to some new results.
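One natural way to read the modality described above (an assumption consistent with the abstract, not a quotation from the talk) is as the interior-style operator induced by the open-sets-mod-null sublattice.

```latex
% One natural reading (an assumption, not a quotation from the talk): with
% M = Meas([0,1])/Null and O = Open([0,1])/Null viewed as a sublattice of M
% closed under arbitrary joins and finite meets, define
\[\Box a \;=\; \bigvee\,\{\, u \in O : u \le a \,\} \qquad (a \in M).\]
% This interior-style operator validates the S4 laws
\[\Box a \le a, \qquad \Box a \le \Box\Box a, \qquad \Box(a \wedge b) = \Box a \wedge \Box b, \qquad \Box 1 = 1.\]
```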

Logic Colloquium

September 11, 2009, 4:10 PM (60 Evans Hall)

Theodore A. Slaman
Professor and Chair of Mathematics, University of California, Berkeley

Degree Invariant Functions

We will discuss the issues surrounding Martin’s conjectured characterization of the functions on reals which are invariant with respect to Turing degree. We will give a recent application, obtained jointly with Montalban and Reimann, to the question of whether Turing equivalence is universal among countable Borel equivalence relations.

Logic Colloquium

August 28, 2009, 4:10 PM (60 Evans Hall)

Paolo Mancosu
Professor of Philosophy, University of California, Berkeley

Measuring the size of infinite collections of natural numbers: Was Cantor’s theory of infinite number inevitable?

Cantor’s theory of cardinal numbers offers a way to generalize arithmetic from finite sets to infinite sets using the notion of one-to-one correspondence between two sets. As is well known, all countably infinite sets have the same ‘size’ in this account, namely that of the cardinality of the natural numbers. However, throughout the history of reflections on infinity another powerful intuition has played a major role: if a collection A is properly included in a collection B then the ‘size’ of A should be less than the ‘size’ of B (part-whole principle). This second intuition was not developed mathematically in a satisfactory way until quite recently. In this talk I begin by reviewing the contributions of some thinkers who argued in favor of the assignment of different sizes to infinite collections of natural numbers. Then, I present some recent mathematical developments that generalize the part-whole principle to infinite sets in a coherent fashion. Finally, I show how these new developments are important for a proper evaluation of a number of positions in philosophy of mathematics which argue either for the inevitability of the Cantorian notion of infinite number (Gödel) or for the rational nature of the Cantorian generalization as opposed to that, offered by Bolzano, based on the part-whole principle (Kitcher).

Alfred Tarski Lectures

April 10, 2009, 4:10 PM (60 Evans Hall)

Anand Pillay
Professor of Mathematical Logic, University of Leeds, Swanlund Chair and Professor Emeritus, University of Illinois at Urbana-Champaign

Measures and Domination

We introduce Keisler measures on definable sets, and in suitable contexts generalize uniqueness of Haar measure on compact groups to definable groups, using the Vapnik-Chervonenkis theorem from learning theory/probability theory.

Alfred Tarski Lectures

April 08, 2009, 4:10 PM (TBA)

Anand Pillay
Professor of Mathematical Logic, University of Leeds, Swanlund Chair and Professor Emeritus, University of Illinois at Urbana-Champaign

Lie Groups from Nonstandard Models

We discuss various kinds of standard part or reduction maps in the light of the logic topology, and relate model-theoretic standard part maps for definable groups to algebraic geometric reduction maps in the case of elliptic curves.

Alfred Tarski Lectures

April 06, 2009, 4:10 PM (60 Evans Hall)

Anand Pillay
Professor of Mathematical Logic, University of Leeds, Swanlund Chair and Professor Emeritus, University of Illinois at Urbana-Champaign

The Logic Topology

We introduce first-order theories, concentrating on their “type spaces”. We discuss various notions of definable set, and define the “logic topology” on suitable definable sets.

Colloquia 2002–2009