## How Not to Be Wrong: The Power of Mathematical Thinking

by Jordan Ellenberg

Read: 2018-01-02, Rating: 6/10.

I wanted to like this book, but some concepts are explained only briefly while others are strung out without conclusions. The title is misleading: the book is more about the power of mathematical thinking than about how not to be wrong.

#### My Notes:

WHEN AM I GOING TO USE THIS?

Math is a science of not being wrong about things, its techniques and habits hammered out by centuries of hard work and argument. With the tools of mathematics in hand, you can understand the world in a deeper, sounder, and more meaningful way.

ABRAHAM WALD AND THE MISSING BULLET HOLES

A mathematician is always asking, “What assumptions are you making? And are they justified?”

survivorship bias

What makes that math? Isn’t it just common sense? Yes. Mathematics is common sense.

Without the rigorous structure that math provides, common sense can lead you astray.

#### PART I Linearity

False linearity

Nonlinear thinking means which way you should go depends on where you already are.

Horace’s famous remark “Est modus in rebus, sunt certi denique fines, quos ultra citraque nequit consistere rectum” (“There is a proper measure in things. There are, finally, certain boundaries short of and beyond which what is right cannot exist”)

not all curves are straight lines

It doesn’t matter whether it’s a circle or a polygon with very many very short sides.
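This is Archimedes' trick for the circle: replace the curve with a many-sided polygon and the perimeter closes in on the circumference. A small sketch (the function name is mine; the formula for the side of an inscribed regular n-gon is standard trigonometry):

```python
import math

def inscribed_polygon_perimeter(n_sides: int, radius: float = 1.0) -> float:
    """Perimeter of a regular n-gon inscribed in a circle of the given radius."""
    side_length = 2 * radius * math.sin(math.pi / n_sides)
    return n_sides * side_length

# As the number of sides grows, the perimeter approaches the
# circumference 2 * pi * r of the circle itself.
for n in (6, 96, 10_000):
    print(n, inscribed_polygon_perimeter(n))
```

With 96 sides (the polygon Archimedes actually used) the perimeter already agrees with 2π to about three decimal places.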

The slogan to keep in mind: straight locally, curved globally.

Every smooth curve, when you zoom in enough, looks just like a line.
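You can watch this happen numerically: take a chord through two nearby points of a smooth curve and shrink the window. The chord's slope settles on a single number, the slope of the line the curve locally resembles (a sketch with sin(x) as the example curve; the function name is mine):

```python
import math

def secant_slope(f, x0: float, h: float) -> float:
    """Slope of the chord through (x0 - h, f(x0 - h)) and (x0 + h, f(x0 + h))."""
    return (f(x0 + h) - f(x0 - h)) / (2 * h)

# As the window shrinks, the chord slope of sin(x) at x0 = 1.0
# settles on cos(1.0): zoomed in far enough, the curve is a line.
x0 = 1.0
for h in (0.5, 0.01, 1e-5):
    print(h, secant_slope(math.sin, x0, h))
```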

Newton said, look, let's go all the way. Reduce your field of view until it's infinitesimal.

Zeno was a fifth-century-BCE Greek philosopher of the Eleatic school who specialized in asking innocent-seeming questions about the physical world that inevitably blossomed into huge philosophical brouhahas.

By Zeno's reasoning, to walk to the ice cream store is impossible: first you must cover half the distance, then half of what remains, and so on, through infinitely many steps.

Zeno's paradox is much like another conundrum: is the repeating decimal 0.999… equal to 1?

Everyone knows that 0.333… = 1/3. Multiply both sides by 3 and you'll see 0.999… = 3/3 = 1.
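The same conclusion falls out of watching the partial sums directly: each extra 9 closes exactly nine tenths of the remaining gap to 1 (a small sketch; the function name is mine):

```python
def partial_sum(n_digits: int) -> float:
    """The number 0.999...9 with n_digits nines: 9/10 + 9/100 + ..."""
    return sum(9 / 10 ** k for k in range(1, n_digits + 1))

# The gap below 1 after n digits is exactly 10**(-n), which can be made
# as small as you like -- in Cauchy's sense, the limit of the sums is 1.
for n in (1, 5, 15):
    print(n, partial_sum(n))
```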

What’s the numerical value of an infinite sum? It doesn’t have one—until we give it one. That was the great innovation of Augustin-Louis Cauchy, who introduced the notion of limit into calculus in the 1820s.

In the mathematical context, the good choices are the ones that settle unnecessary perplexities without creating new ones.

linear regression, the statistical technique that is to social science as the screwdriver is to home repair. It’s the one tool you’re pretty much definitely going to use, whatever the task.
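The screwdriver itself fits in a dozen lines: ordinary least squares for one variable has a closed form using only means and deviations (a sketch on made-up data; the function name and numbers are mine):

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x; returns (intercept, slope)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance of x and y divided by variance of x.
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
        (x - mean_x) ** 2 for x in xs
    )
    intercept = mean_y - slope * mean_x
    return intercept, slope

# Made-up data scattered around the line y = 2 + 3x.
xs = [0, 1, 2, 3, 4]
ys = [2.1, 4.9, 8.2, 10.8, 14.1]
a, b = fit_line(xs, ys)
print(a, b)
```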

When doing any serious mathematical thinking, you’re going to have to multiply 6 by 8 sometimes, and if you have to reach for your calculator each time you do that, you’ll never achieve the kind of mental flow that actual thinking requires. You can’t write a sonnet if you have to look up the spelling of each word as you go.

The ideas of mathematics can sound abstract, but they make sense only in reference to concrete computations. William Carlos Williams put it crisply: no ideas but in things.

An important rule of mathematical hygiene: when you’re field-testing a mathematical method, try computing the same thing several different ways. If you get several different answers, something’s wrong with your method.
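A tiny instance of this hygiene rule: the average squared deviation of a data set can be computed directly from its definition or via the algebraic identity E[x²] − (E[x])², and the two routes must agree (function names and data are mine):

```python
def variance_direct(xs):
    """Average squared deviation from the mean, computed from the definition."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def variance_shortcut(xs):
    """The same quantity via the identity E[x^2] - (E[x])^2."""
    n = len(xs)
    return sum(x * x for x in xs) / n - (sum(xs) / n) ** 2

# Field test: two independent routes to the same number should agree.
data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(variance_direct(data), variance_shortcut(data))
```

If the two printed numbers differed, something would be wrong with at least one of the methods.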

Siméon-Denis Poisson came up with the pithy name "la loi des grands nombres" ("the law of large numbers") to describe it.

Jakob Bernoulli had worked out a precise statement and mathematical proof of the Law of Large Numbers. It was now no longer an observation but a theorem.

And the theorem tells you that the Big-Small game isn’t fair. The Law of Large Numbers will always push the Big players’ scores toward 50%, while those of the Smalls are apt to vary much more widely.
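A quick simulation makes the unfairness concrete (a sketch, assuming the Big player tosses 1,000 coins per game and the Small player 10; the function names and parameters are mine):

```python
import random

def heads_proportion(n_tosses: int, rng: random.Random) -> float:
    """Proportion of heads in n_tosses fair-coin flips."""
    return sum(rng.random() < 0.5 for _ in range(n_tosses)) / n_tosses

def spread(proportions):
    """Average absolute deviation of the scores from 50%."""
    return sum(abs(p - 0.5) for p in proportions) / len(proportions)

rng = random.Random(0)  # fixed seed for reproducibility
small_scores = [heads_proportion(10, rng) for _ in range(1000)]
big_scores = [heads_proportion(1000, rng) for _ in range(1000)]

# The Big player's proportions cluster tightly around 50%;
# the Small player's wander much more widely.
print(spread(small_scores), spread(big_scores))
```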

De Moivre’s insight is that the size of the typical discrepancy* is governed by the square root of the number of coins you toss.

If you want to make the error bar half as big, you need to survey four times as many people.
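De Moivre's square-root law in two lines: the standard error of a sample proportion is √(p(1−p)/n), so quadrupling n divides the error bar by exactly √4 = 2 (a sketch; the function name is mine):

```python
import math

def standard_error(n_people: int, p: float = 0.5) -> float:
    """Typical discrepancy (standard error) of a sample proportion."""
    return math.sqrt(p * (1 - p) / n_people)

# Quadrupling the sample from 1,000 to 4,000 halves the error bar.
se_1000 = standard_error(1000)
se_4000 = standard_error(4000)
print(se_1000, se_4000, se_1000 / se_4000)
```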

But de Moivre wasn’t done. He found that the discrepancies from 50-50, in the long run, always tend to form themselves into a perfect bell curve

It feels like something is making it happen.

It’s dangerous to feel this way.

Proportions can be misleading even in simpler, seemingly less ambiguous cases.

Don’t talk about percentages of numbers when the numbers might be negative.

I blame word problems. They give a badly wrong impression of the relation between mathematics and reality.

#### PART II Inference

The Baltimore stockbroker con works because, like all good magic tricks, it doesn’t try to fool you outright. That is, it doesn’t try to tell you something false—rather, it tells you something true from which you’re likely to draw incorrect conclusions.

Aristotle, as usual, was here first: “it is probable that improbable things will happen. Granted this, one might argue that what is improbable is probable.”

British statistician R. A. Fisher’s famous formulation, “the ‘one chance in a million’ will undoubtedly occur, with no less and no more than its appropriate frequency, however surprised we may be that it should occur to us.”

When you’re trying to draw reliable inferences from improbable events, wiggle room is the enemy.

the standard methods of assessing results, the way we draw our thresholds between a real phenomenon and random static, come under dangerous pressure in this era of massive data sets, effortlessly obtained.

Long story. The point is, reverse engineering is hard.

We use probability even to talk about events that cannot possibly be thought of as subject to chance.

The null hypothesis significance test:

1. Run an experiment.

2. Suppose the null hypothesis is true, and let p be the probability (under that hypothesis) of getting results as extreme as those observed.

3. The number p is called the p-value. If it is very small, rejoice; you get to say your results are statistically significant. If it is large, concede that the null hypothesis has not been ruled out.
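A worked example under the null hypothesis "the coin is fair": compute the exact two-sided p-value by summing binomial probabilities for every outcome at least as far from 50 heads as the one observed (a sketch; the function name and the toss counts are mine):

```python
from math import comb

def p_value_fair_coin(n_tosses: int, n_heads: int) -> float:
    """Two-sided p-value: probability, under the null hypothesis of a fair
    coin, of a head count at least as far from n/2 as the one observed."""
    observed_gap = abs(n_heads - n_tosses / 2)
    extreme_outcomes = sum(
        comb(n_tosses, k)
        for k in range(n_tosses + 1)
        if abs(k - n_tosses / 2) >= observed_gap
    )
    return extreme_outcomes / 2 ** n_tosses

# 65 heads in 100 tosses: tiny p-value, declare statistical significance.
print(p_value_fair_coin(100, 65))
# 55 heads in 100 tosses: large p-value, the null has not been ruled out.
print(p_value_fair_coin(100, 55))
```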

Effects are often so minuscule that they can be safely ignored. Just because we can detect them doesn't always mean they matter.

Scientists, subject to the intense pressure to publish lest they perish, are not immune to the same wiggly temptations. If you run your analysis and get a p-value of .06, you’re supposed to conclude that your results are statistically insignificant. But it takes a lot of mental strength to stuff years of work in the file drawer.

When you're a believer, it's easy to come up with reasons that the analysis that gives a publishable p-value is the one you should have done in the first place.