
Close Enough? Reasoning Mistakes Great and Small

When a person makes a mistake in logical reasoning, does it matter how close they come to the mark, or is wrong just wrong? Philosopher Julia Staffel investigates the differences between slightly and highly irrational thinkers.

“Close” only counts in horseshoes and hand grenades.

 Major League Baseball player and manager Frank Robinson, as quoted in Time magazine, July 31, 1973

 

Philosopher Julia Staffel is more generous than the barrier-breaking baseball star: Sometimes “close” is good enough, she says, and there’s a difference between missing by a little and missing by a mile — at least when it comes to logical reasoning. Below, Staffel, an assistant professor of philosophy and a humanities center Faculty Fellow, offers a peek at her new work, which will address a critical void in the philosophical study of reasoning and uncertainty.


Your project is based on the premise that the way that philosophy traditionally talks about the reasoning people engage in doesn’t actually reflect what goes on. What is the traditional view, and what new modes of thinking do you suggest instead?
Traditionally, philosophers and psychologists thought of human reasoning on the model of formal logic. They assumed that when we reason, we go through arguments in our minds that have valid logical forms, such as syllogisms (“All men are mortal, Socrates is a man, therefore Socrates is mortal.”) or perhaps more complicated patterns.

In traditional systems of logic, probabilities and uncertainties don’t really have a place, so for a long time, it was more or less ignored that much of human reasoning involves dealing with uncertainties and probabilities. It was only with the development of modern probability theory and statistics (which hasn’t been around that long) that the question came into focus — how people deal with uncertainty in their reasoning, and whether or not they do so in a rational manner.

Philosophers and psychologists have both taken up the project of investigating this question, but from different angles. Psychologists have focused on finding out how people, in fact, reason when they are uncertain, whereas philosophers have focused on what makes such reasoning rational.

A popular position in philosophy is the view that people’s degrees of uncertainty should be representable by a probability function in order to be considered rational. For example, suppose someone is 90% confident that she locked the front door, and 50% confident that she turned down the heater before leaving the house. If this person is rational, and she treats these two claims as independent, then she must be 45% confident that she both locked the door and turned down the heater. This is because the laws of probability prescribe that the probability of a conjunction of independent claims is the product of their individual probabilities (0.9 x 0.5 = 0.45).
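For readers who like to see the arithmetic spelled out, here is a minimal sketch of that product rule in Python; the function and variable names are just illustrative, and the independence of the two claims is the assumption stated above.

```python
# Minimal sketch of the product rule from the example above. The 90% and
# 50% credences come from the interview; treating the two claims as
# independent is the stated assumption.

def conjunction_credence(cred_a: float, cred_b: float) -> float:
    """Rational credence in 'A and B', assuming A and B are independent."""
    return cred_a * cred_b

cred_locked_door = 0.90  # 90% confident she locked the front door
cred_heater_down = 0.50  # 50% confident she turned down the heater

print(conjunction_credence(cred_locked_door, cred_heater_down))  # prints 0.45
```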

This is, of course, not to say that people really have those numbers in mind when they reason, or that they always consciously realize how their reasoning actually generates its outputs. The numbers merely function as convenient representations of quantities that we can find in our attitudes, such as a degree of confidence.

Interestingly, psychologists have found that there are lots of instances in which people systematically violate the laws of probability when reasoning with uncertainties. This led to a widespread pessimistic view of human rationality in the literature, since it seemed like human reasoners were fundamentally irrational.

What will your research add to this picture?
In my view, rationality is not an all-or-nothing matter. My goal is to better understand to what degree people are irrational in their reasoning.

Since philosophical theories of what makes our attitudes rational usually just focus on characterizing the ideal case, they don’t have good resources to capture the extent to which someone’s degree of confidence diverges from the degree of confidence they should ideally have. But this can make a big difference. Suppose you go to the doctor, who has to estimate how likely it is that a particular test result indicates that you have a serious disease. And let’s suppose that, had she reasoned well with her available information, the doctor would be about 5% confident that you have the disease. It seems that if the doctor were slightly more or less confident, say 3% or 8%, this would be a lot less irrational than if she were 90% confident instead. But the standard philosophical views can’t systematically make this distinction: they simply register whether a reasoner has reached a rational conclusion or not.
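To make the gradation concrete, here is an illustrative sketch that grades the doctor’s possible credences by their plain absolute distance from the ideal 5%; the choice of distance measure is an assumption made for simplicity, not necessarily the one a worked-out theory would use.

```python
# Illustrative sketch: grading divergence from an ideal credence by absolute
# distance. The ideal of 5% and the alternatives of 3%, 8%, and 90% come from
# the doctor example; the distance measure is an assumption for illustration.

ideal = 0.05  # credence the doctor's evidence ideally supports

for credence in (0.03, 0.08, 0.90):
    divergence = abs(credence - ideal)
    print(f"credence {credence:.2f} -> divergence from ideal {divergence:.2f}")

# credence 0.03 -> divergence from ideal 0.02
# credence 0.08 -> divergence from ideal 0.03
# credence 0.90 -> divergence from ideal 0.85
```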

In my work, I use mathematical models of people’s belief systems and reasoning strategies to capture differences between slightly and highly irrational thinkers. It is obviously worrisome if we are highly irrational in particular contexts, but small divergences from the rational ideal often aren’t troublesome. Many reasoning problems are very difficult, and our minds have to use simplifying shortcuts or heuristics to deal with them. But that seems fine as long as they generate close-to-optimal conclusions.
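As a toy illustration of what such a graded model might look like (it is not Staffel’s actual formalism), one can measure how far a set of credences falls from probabilistic coherence. In the simplest case, coherence requires one’s credences in a claim and its negation to sum to 1, and the size of the shortfall or excess gives a degree of incoherence.

```python
# Toy sketch, not Staffel's actual formalism: for credences in a claim p and
# its negation, probabilistic coherence requires c(p) + c(not-p) = 1, so the
# gap from 1 gives a simple degree of incoherence. The credence pairs below
# are made-up examples.

def incoherence(cred_p: float, cred_not_p: float) -> float:
    """Distance of a credence pair from satisfying c(p) + c(not-p) = 1."""
    return abs(cred_p + cred_not_p - 1.0)

for cred_p, cred_not_p in ((0.60, 0.40), (0.60, 0.45), (0.90, 0.80)):
    print(f"c(p)={cred_p:.2f}, c(not-p)={cred_not_p:.2f} "
          f"-> incoherence {incoherence(cred_p, cred_not_p):.2f}")

# c(p)=0.60, c(not-p)=0.40 -> incoherence 0.00  (fully coherent)
# c(p)=0.60, c(not-p)=0.45 -> incoherence 0.05  (slightly irrational)
# c(p)=0.90, c(not-p)=0.80 -> incoherence 0.70  (highly irrational)
```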

How do other fields draw on these ideas about reasoning?
Fields as diverse as education, computer science and cognitive psychology all draw on these ideas. The connection to cognitive psychology is obvious — we want to know whether people are really reasoning in ways that philosophers have argued to be rational.

But the ideas also relate in interesting ways to problems in computer science. Computers are reasoners, and in many applications we want them to process probabilistic information. This information might also be conflicting or contradictory, depending on its source. Formal models of rationality are of interest to computer scientists because they help them make computers better and faster reasoners.

Another important field that draws on these ideas is education. Teaching students to be critical thinkers who can distinguish good from bad arguments and who can find their own argumentative mistakes is really a holy grail of education. But doing so is very difficult, and understanding how people ought to reason and what kinds of reasoning mistakes they make can be very useful in developing better teaching methods.

What impact could your work have in the public sphere?
I hope that my work will make a contribution to the bigger project of helping people be better reasoners. Reasoning well is beneficial for individuals, because it helps them be more successful in life, but it is also essential for the functioning of society. Perhaps right now this is more obvious than ever.