Events 2011-2012

Logic Colloquium

September 02, 2011, 4:10 PM (60 Evans Hall)

Paolo Mancosu
Professor and Chair of Philosophy, University of California, Berkeley

Axiomatics and purity of methods: On the relationship between plane and solid geometry

Traditional geometry concerns itself with planimetric and stereometric considerations, which are at the root of the division between plane and solid geometry. To raise the issue of the relation between these two areas brings with it a host of different problems that pertain to mathematical practice, epistemology, semantics, ontology, methodology, and logic. Issues of psychology and pedagogy are also important here.

In this talk (which is based on joint work with Andy Arana), my major concern is with methodological issues of purity. In the first part I will give a rough sketch of some key episodes in mathematical practice that relate to the interaction between plane and solid geometry. In the second part, I will look at a late nineteenth-century debate (on “fusionism”) in which, for the first time, methodological and foundational issues related to aspects of the mathematical practice covered in the first part of the paper came to the fore. I conclude this part of the talk by remarking that only through an axiomatic and analytical effort could the issues raised by the debate on “fusionism” be made precise. The third part of the talk focuses on Hilbert’s axiomatic and foundational analysis of the plane version of Desargues’ theorem on homological triangles and its implications for the relationship between plane and solid geometry. Finally, building on the foundational case study analyzed in the third section, in the fourth section I point the way to the analytic work necessary for exploring various important claims on “purity”, “content”, and other relevant notions.
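
For reference, the plane version of Desargues’ theorem at issue in the third part can be stated as follows (a standard formulation, not taken from the talk): if two triangles ABC and A′B′C′ are perspective from a point, i.e., the lines AA′, BB′, CC′ pass through a common point O, then they are perspective from a line, i.e., the intersection points of the corresponding sides AB and A′B′, BC and B′C′, CA and C′A′ are collinear. Hilbert showed that this planar statement cannot be derived from the plane axioms of incidence and order alone, although it follows easily once the spatial incidence axioms are available, which is what makes it a natural test case for purity.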

Logic Colloquium

September 16, 2011, 4:10 PM (60 Evans Hall)

Adam Day
Miller Research Fellow, University of California, Berkeley

Randomness and Neutral Measures

Algorithmic randomness provides a way to define a random outcome. The underlying idea is that a random outcome should have no ‘rare’ mathematical properties. A combination of measure theory and recursion theory provides a framework for defining what a rare property is and consequently for defining randomness itself.
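
One standard way of making “rare” precise, given here for orientation (this is the usual Martin-Löf formulation; the talk may use an equivalent variant): a rare property is one that determines an effectively null set. With respect to a computable probability measure μ on the space of infinite binary sequences, a sequence X is Martin-Löf random iff

\[ X \notin \bigcap_{n} U_n \quad \text{for every uniformly c.e. sequence } (U_n) \text{ of open sets with } \mu(U_n) \le 2^{-n}. \]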

Most research in algorithmic randomness has been conducted using computable measures. However, there are some interesting results that have come from considering noncomputable measures. In a surprising result, Levin established the existence of probability measures for which all infinite binary sequences are random. These measures are termed neutral measures. In this talk I will introduce Levin’s neutral measures. I will also outline some recent joint work with Joseph Miller in which we determine some key properties of neutral measures.
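
In this terminology, and sketching the standard definition: a measure μ is neutral if every sequence X is random with respect to μ, where randomness with respect to a possibly noncomputable measure is taken in the sense of Levin’s uniform tests. Levin’s theorem is that neutral measures exist; it can be shown that no computable measure is neutral, which is one reason noncomputable measures enter the picture.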

Logic Colloquium

September 30, 2011, 4:10 PM (60 Evans Hall)

Lotfi A. Zadeh
Professor in the Graduate School, University of California, Berkeley; Director, Berkeley Initiative in Soft Computing

Can Mathematics Deal with Computational Problems Which Are Stated in a Natural Language?

Here are a few very simple examples of computational problems which are stated in a natural language. (a) Most Swedes are tall. What is the average height of Swedes? (b) Probably John is tall. What is the probability that John is short? What is the probability that John is very short? What is the probability that John is not very tall? (c) Usually Robert leaves his office at about 5 pm. Usually it takes Robert about an hour to get home from work. At what time does Robert get home? (d) X is a real-valued random variable. Usually X is much larger than approximately a. Usually X is much smaller than approximately b. What is the probability that X is approximately c, where c is a number between a and b? (e) A and B are boxes, each containing 20 balls of various sizes. Most of the balls in A are large, a few are medium, and a few are small. Most of the balls in B are small, a few are medium, and a few are large. The balls in A and B are put into a box C. What is the number of balls in C which are neither large nor small? For convenience, such problems will be referred to as CNL problems.
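
To give a flavor of what a numerical treatment of such a problem could look like, here is a minimal sketch for problem (e). Everything in it, the hypothetical ball sizes, the trapezoidal membership functions for “small” and “large”, and the use of sigma-counts with min for conjunction and 1 − μ for negation, is an illustrative assumption of mine, not the method of the lecture.

    # A sketch of one way to give problem (e) numerical content.
    # All sizes and membership functions below are hypothetical.

    def trapezoid(x, a, b, c, d):
        """Trapezoidal membership: 0 outside (a, d), 1 on [b, c], linear in between."""
        if x <= a or x >= d:
            return 0.0
        if b <= x <= c:
            return 1.0
        if x < b:
            return (x - a) / (b - a)
        return (d - x) / (d - c)

    def mu_small(x):
        return trapezoid(x, -1.0, 0.0, 2.0, 4.0)

    def mu_large(x):
        return trapezoid(x, 6.0, 8.0, 10.0, 11.0)

    # Hypothetical ball sizes (in cm), 20 per box, consistent with
    # "most large, a few medium, a few small" in A and the reverse in B.
    box_a = [9.0] * 14 + [5.0] * 3 + [1.5] * 3
    box_b = [1.5] * 14 + [5.0] * 3 + [9.0] * 3
    box_c = box_a + box_b

    # Sigma-count of "neither large nor small", with min as conjunction
    # and 1 - mu as negation.
    sigma_count = sum(min(1 - mu_large(x), 1 - mu_small(x)) for x in box_c)
    print(f"Fuzzy count of balls in C that are neither large nor small: {sigma_count:.1f}")

With the sizes assumed above the sigma-count comes to 6.0, the six medium balls; on the CW view the answer would in general be a fuzzy number reflecting the imprecision of “most” and “few”.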

It is a long-standing tradition in mathematics to view computational problems which are stated in a natural language as being outside the purview of mathematics. Such problems are dismissed as ill-posed and not worthy of attention. In the instance of CNL problems, mathematics has nothing constructive to say. In my lecture, this tradition is questioned and a system of computation is suggested which opens the door to construction of mathematical solutions of CNL problems. The system draws on the fuzzy-logic-based formalism of computing with words (CW) (Zadeh 2006). A concept which plays a pivotal role in CW is that of precisiation of meaning. More concretely, precisiation involves translation of natural language into a mathematical language in which the objects of computation are well-defined (though not conventional) mathematical constructs.

A key idea involves representation of the meaning of a proposition, p, drawn from a natural language, as a restriction on the values which a variable, X, can take. Generally, X is a variable which is implicit in p. The restriction is represented as an expression of the form X isr R, where X is the restricted variable, R is the restricting relation and r is an indexical variable which defines the way in which R restricts X. This expression is referred to as the canonical form of p. Canonical forms of propositions in a natural language statement of a computational problem serve as objects of computation in CW.
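
As a schematic illustration of a canonical form, consider example (a) above. The following precisiation follows Zadeh’s published treatment of this example; the notation is mine. For p = “Most Swedes are tall”, the implicit variable is the proportion of tall Swedes,

\[ X \;=\; \frac{1}{n}\sum_{i=1}^{n}\mu_{\mathrm{tall}}(h_i), \]

where h_i is the height of the i-th Swede and μ_tall is the membership function of “tall”; the restricting relation R is the fuzzy quantifier most, a fuzzy subset of [0, 1]; and r marks an ordinary possibilistic restriction, so the canonical form reads “X is most”. Question (a) then asks what restriction this places on the average height (1/n)∑ h_i, which is computed via the extension principle.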

Construction and use of mathematical solutions of CNL problems is an unexplored domain in mathematics. The importance of this domain derives from the fact that much of human knowledge, and particularly world knowledge, is described in natural language.

Logic Colloquium

October 14, 2011, 4:10 PM (60 Evans Hall)

Daisuke Ikegami
Japan Society for the Promotion of Science Postdoctoral Fellow, University of California, Berkeley

Gale-Stewart Games and Blackwell Games

In 1953, Gale and Stewart developed the general theory of infinite games (the so-called Gale-Stewart games), two-player zero-sum infinite games with perfect information. The theory of Gale-Stewart games has been deeply investigated by many logicians and has been one of the main topics in set theory, with connections to other areas of set theory as well as to model theory and computer science.
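
For orientation, the standard setting (not specific to this talk): given a payoff set A ⊆ ω^ω, in the Gale-Stewart game G(A) the two players alternately choose natural numbers,

\[ \mathrm{I}:\ n_0,\ n_2,\ n_4,\ \dots \qquad \mathrm{II}:\ n_1,\ n_3,\ n_5,\ \dots \]

and player I wins iff the resulting sequence (n_0, n_1, n_2, ...) belongs to A. The game G(A) is determined if one of the players has a winning strategy, and the Axiom of Determinacy asserts that G(A) is determined for every A ⊆ ω^ω.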

In 1969, Blackwell proved an extension of von Neumann’s minimax theorem in which he introduced infinite games with imperfect information, nowadays called Blackwell games. Although Blackwell games have not been investigated as extensively as Gale-Stewart games, in 1998 Martin proved that the Axiom of Determinacy (AD) implies the Axiom of Blackwell Determinacy (Bl-AD) and conjectured the converse, which is still not known to be true.
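
Roughly, and following Martin’s formulation as a sketch (presentations vary in detail): in a Blackwell game the two players move simultaneously at each round, and a pair of mixed strategies σ, τ for the two players induces a probability measure μ_{σ,τ} on the space of plays. The game with payoff set A is determined if

\[ \sup_{\sigma}\,\inf_{\tau}\ \mu_{\sigma,\tau}(A) \;=\; \inf_{\tau}\,\sup_{\sigma}\ \mu_{\sigma,\tau}(A), \]

the measurability of A with respect to each μ_{σ,τ} being part of the assertion; Bl-AD states that every such game is determined.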

In this talk, we introduce Blackwell games and their determinacy and discuss the connection between the determinacy of Gale-Stewart games and that of Blackwell games. Part of this work is joint with Benedikt Löwe and David de Kloet, and another part with Hugh Woodin.

Logic Colloquium

October 28, 2011, 4:10 PM (60 Evans Hall)

Tomasz Placek
Professor of Philosophy, Jagiellonian University, Krakow, Poland; Visiting Scholar in Philosophy, University of California, Berkeley

Possibilities without Possible Worlds/Histories

Possible worlds have turned out to be a particularly useful tool of modal metaphysics, although their global character makes them philosophically suspect. Hence it would be desirable to arrive at some local modal notions that could be used instead of possible worlds. In this talk I will focus on what are known as historical (or real) modalities, an example of which is tomorrow’s sea-battle. The modalities involved in this example are local, since they refer to relatively small chunks of our world: a gathering of hostile fleets in a nearby bay has two alternative possible future continuations, one with a sea-battle and one without. The objective of this talk is to sketch a theory of such modalities that is framed in terms of possible continuations rather than possible worlds or possible histories. The proposal will be tested as a semantic theory for a language with historical modalities, tenses, and indexicals. The talk builds on my Journal of Philosophical Logic paper (cf. http://www.springerlink.com/content/q2423241l6525063/).

Logic Colloquium

November 18, 2011, 4:10 PM (60 Evans Hall)

Theodore A. Slaman
Professor and Chair of the Department of Mathematics, University of California, Berkeley

The First-Order Consequences of the Existence of an Infinite Random Sequence

We will discuss the question, “What first-order statements follow from the existence of an infinite random sequence by effective means?” The answer depends on the degree of randomness in the infinite source. In one case, it isolates a mysterious subtheory of arithmetic strictly between Σ1-Induction and Σ2-Bounding.
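
For reference, the two schemes bracketing this subtheory are standard:

\[ \mathrm{I}\Sigma_1:\quad \big[\varphi(0) \wedge \forall x\,(\varphi(x) \to \varphi(x+1))\big] \;\to\; \forall x\,\varphi(x) \qquad (\varphi \in \Sigma_1), \]

\[ \mathrm{B}\Sigma_2:\quad \forall x{<}a\,\exists y\,\varphi(x,y) \;\to\; \exists b\,\forall x{<}a\,\exists y{<}b\,\varphi(x,y) \qquad (\varphi \in \Sigma_2). \]

It is classical (Kirby-Paris) that Σ1-induction is strictly weaker than Σ2-bounding, so the interval in question is a genuine one.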

Logic Colloquium

December 02, 2011, 4:10 PM (60 Evans Hall)

Lara Buchak
Assistant Professor of Philosophy, University of California, Berkeley

Risk and Tradeoffs

The prevailing view is that subjective expected utility theory is the correct theory of instrumental rationality. Subjective expected utility theory (hereafter, EU theory) is thought to characterize the preferences of all rational decision makers. And yet, there are some preferences that violate EU theory but seem both intuitively appealing and prima facie consistent. An important group of these preferences stems from how ordinary decision makers take risk into account: ordinary decision makers seem to care about “global” properties of gambles, but EU theory rules out their doing so.

EU theory allows agents to subjectively determine how much they value outcomes (their utility function) and how likely they think various states are to obtain (their probability function). However, there is arguably a third subjective component of instrumental rationality, namely one’s norm for translating these two values into preferences among risky acts. On EU theory, every rational agent must use the same norm, a norm that is insensitive to global properties. I propose a theory on which decision makers subjectively determine how they want to take risk into account, and thus subjectively determine all three components of instrumental rationality. By providing a “representation theorem,” I show that the third component corresponds at the level of preferences to how an individual values tradeoffs in different structural parts of the gamble, e.g., to whether he cares more about what happens in the worst-case scenario or the best-case scenario. And I therefore show how non-EU maximizers can be seen as instrumentally rational: as taking the means to their ends.
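
In the speaker’s risk-weighted expected utility framework, the third component appears as a risk function r; for a gamble g yielding outcomes x_1, ..., x_n (ordered from worst to best) with probabilities p_1, ..., p_n, the value of g takes roughly the following form (a sketch of the published formulation, not necessarily the notation of the talk):

\[ \mathrm{REU}(g) \;=\; u(x_1) \;+\; \sum_{j=2}^{n} r\Big(\sum_{i=j}^{n} p_i\Big)\,\big(u(x_j) - u(x_{j-1})\big), \]

where r : [0,1] → [0,1] is non-decreasing with r(0) = 0 and r(1) = 1. Taking r(p) = p recovers EU theory, while a convex r, e.g. r(p) = p^2, weights the worst-case scenario more heavily, which is exactly the kind of “global” sensitivity that EU theory rules out.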

Logic Colloquium

January 20, 2012, 4:10 PM (60 Evans Hall)

Sherrilyn Roush
Associate Professor of Philosophy and Chair, Group in Logic and the Methodology of Science, University of California, Berkeley

Reasoning and the Growth of Error

I address several questions about controlling the growth of error introduced by reasoning. For example, if every step of reasoning introduces a new source of error, how can we possibly improve our reliability – as we think we do – by proof-checking, filling in gaps in proofs, or consulting an expert, all of which add steps to our reasoning? I will devote most attention to the topic of closure of knowledge under known implication. How can knowledge be closed if in one step of valid deductive reasoning I can go from knowledge that my car is parked on Main Street to a belief that it has not been stolen since I parked it, which it seems I do not know? I argue that this closure problem is entirely a matter of how fast false positive error in the conclusion grows over deductive reasoning from the premise, and develop a graded model with upper bounds on this growth. It turns out that problem examples like the car theft case, and more radical skeptical cases, can be generated only by committing an error of reasoning.
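
As a toy illustration of the quantity at issue (my example, not the speaker’s model): if the premise is mistaken with probability at most δ and each of k reasoning steps introduces a new error with probability at most ε, then the union bound gives

\[ P(\text{error in conclusion}) \;\le\; \delta + k\varepsilon, \]

so error grows at worst linearly in the number of steps, and the puzzle is how adding steps such as proof-checking can nevertheless lower overall error.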

Logic Colloquium

February 03, 2012, 4:10 PM (60 Evans Hall)

Leo Harrington
Professor of Mathematics, University of California, Berkeley

Is There a Philosophical View Already in Mathematical Logic?

This talk will suggest that a certain “view”, readily available from basic ingredients of mathematical logic, bears a resemblance to views found in the philosophers Parmenides, Hegel, and Heidegger, respectively. The talk will also attempt to indicate the possibility that, while staying inside mathematical logic, this view can be elaborated so as to maintain such a resemblance to a philosophical view common to the works of these philosophers.

Alfred Tarski Lectures

February 21, 2012, 4:10 PM (TBA)

Per Martin-Löf
Emeritus Professor of Logic, Departments of Mathematics and Philosophy, Stockholm University

Assertion and Inference

Alfred Tarski Lectures

February 22, 2012, 4:10 PM (TBA)

Per Martin-Löf
Emeritus Professor of Logic, Departments of Mathematics and Philosophy, Stockholm University

Propositions, Truth and Consequence

Alfred Tarski Lectures

February 24, 2012, 4:10 PM (60 Evans Hall)

Per Martin-Löf
Emeritus Professor of Logic, Departments of Mathematics and Philosophy, Stockholm University

Tarski’s Metamathematical Reconstruction of the Notions of Truth and Logical Consequence

Logic Colloquium

March 02, 2012, 4:10 PM (60 Evans Hall)

Alf Onshuus
Full Professor, University of the Andes

VC-Density and dp-Rank in Model Theory

VC-dimension was introduced by Vapnik and Chervonenkis to measure, in some way, the complexity of a family of subsets of a given universe. This notion has had important applications in statistics and learning theory, usually in a framework where one looks at a definable family of definable sets (typically in a Cartesian power of the real field or the real exponential field). In this context the model theory becomes clearly relevant: the structures in which every uniformly definable family of sets has finite VC-dimension are precisely the models of theories without the “independence property” introduced by Shelah, the so-called dependent theories (also known as NIP theories). This setting, the “VC-theory” of a given uniformly definable family of definable sets in a fixed structure, together with the model-theoretic notions around it, is the setting for the talk.
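
For reference, the standard definitions: a family S of subsets of a set U shatters a finite set F ⊆ U if every subset of F has the form A ∩ F for some A ∈ S, and the VC-dimension of S is the largest cardinality of a shattered set (infinite if arbitrarily large finite sets are shattered). In a structure M, a formula φ(x; y) gives rise to the uniformly definable family

\[ S_{\varphi} \;=\; \{\, \varphi(M; b) \;:\; b \in M^{|y|} \,\}, \]

and a theory is dependent (NIP) precisely when S_φ has finite VC-dimension for every formula φ.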

I will survey some of the results on VC-density (a very important notion in VC-theory) that arise from the model-theoretic analysis, introduce and show some results about dp-rank (a rank introduced by Shelah in the context of dependent theories), and show some evidence of a possible strong relation between the two notions.

Logic Colloquium

March 16, 2012, 4:10 PM (60 Evans Hall)

George M. Bergman
Professor Emeritus of Mathematics, University of California, Berkeley

Linear Maps and Ultrafilters

Most of my talk will be devoted to showing that if k is an infinite field, I an infinite set, and g : k^I → V a linear map to a finite-dimensional k-vector-space, then g is determined by the behavior of its arguments at “almost all” elements of I, where “almost all” is made precise in terms of a finite set of card(k)^+-complete ultrafilters on I. In particular, if I is not enormous, “almost all” means “all but finitely many”.
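
For reference, the completeness notion involved (a standard definition): an ultrafilter U on I is κ-complete if the intersection of fewer than κ members of U is again in U, so card(k)^+-complete means closed under intersections of card(k) or fewer members. By a classical observation going back to Ulam, a nonprincipal ultrafilter with this degree of completeness can exist only if there is a measurable cardinal ≤ card(I), which is presumably what “enormous” alludes to: below such sizes the relevant ultrafilters are all principal, i.e., each concentrates on a single index.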

Before proving the above, I will show why such results are of value in studying homomorphisms from an infinite direct product of algebras (e.g., of associative algebras, or Lie algebras) to another algebra, and sketch the proofs of some easier results on linear maps k^I → V.

(The result of the first paragraph above is Lemma 7 of a paper with N. Nahlus, which can be read in preprint form at http://math.berkeley.edu/~gbergman/papers/prod_Lie2.pdf or in published form at http://dx.doi.org/10.1016/j.jalgebra.2012.01.004. The discussion of how such results are used, and proofs of easier results of the same nature, can be found in §§1-2 of that paper.)

Logic Colloquium

April 06, 2012, 4:10 PM (60 Evans Hall)

Jouko Väänänen
Professor of Mathematics, University of Helsinki, and Professor of Foundations of Mathematics, University of Amsterdam

Second-Order Logic and Model Theory

Second-order logic is so strong that it makes sense to ask whether complete, finitely axiomatized second-order theories are always categorical. Carnap claimed that the answer is yes, but his proof did not work. Ajtai and Solovay showed that the answer depends on set theory. We present some recent results in this field. This is joint work with Hyttinen and Kangas.
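
For orientation, the standard background (not specific to the new results): a theory is categorical if all of its models are isomorphic. The classical example behind Carnap’s claim is second-order Peano arithmetic, which is finitely axiomatized thanks to the single second-order induction axiom

\[ \forall X\,\big[\,X(0) \wedge \forall y\,(X(y) \to X(y+1)) \;\to\; \forall y\,X(y)\,\big], \]

and which, by Dedekind’s theorem, is categorical under the full semantics and hence complete. Carnap’s claim, in these terms, was that completeness conversely implies categoricity for finitely axiomatized second-order theories.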

Logic Colloquium

April 20, 2012, 4:10 PM (60 Evans Hall)

Umesh Vazirani
Roger A. Strauch Professor of Electrical Engineering and Computer Sciences; Director, Berkeley Quantum Computation Center, University of California, Berkeley

A Turing Test for Randomness

Is it possible to certify that the n-bit output of a random number generator is “really random”? A possible approach to this question is via the theory of algorithmic randomness due to Kolmogorov, Chaitin, and Solomonoff, which identifies randomness with incompressibility by any Turing machine. Unfortunately this definition does not yield an efficient test for randomness. Indeed, in the classical world it seems impossible to provide such a test.
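
For reference, the standard notions (not the formulation of the talk): the Kolmogorov complexity C(x) of a string x is the length of a shortest program producing x on a fixed universal Turing machine, and x is incompressible if C(x) ≥ |x|. A counting argument shows that

\[ |\{\, x \in \{0,1\}^n \;:\; C(x) < n - c \,\}| \;<\; 2^{\,n-c}, \]

so the vast majority of n-bit strings are nearly incompressible; but C is not computable, so incompressibility yields no test at all, let alone an efficient one.
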

Quantum mechanics allows for a remarkable random number generator: its output is certifiably random in the sense that if the output passes a simple statistical test, and there is no information communicated between the two boxes in the randomness generating device (based, say, on the speed of light limit imposed by special relativity), then the output is certifiably random. Moreover, the proof that the output is truly random does not even depend upon the correctness of quantum mechanics! Based on joint work with Thomas Vidick.