
Two new theorems in number theory

In grade school we learn how to divide one whole number by another. Sometimes nothing is left over, but often the division leaves a “remainder”. One learns to say “Eleven divided by five is two remainder one”. Numbers that always leave a remainder when divided by any number other than themselves or 1 are called prime. All other numbers are called composite (except 1, which is neither prime nor composite).
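To put the grade-school characterization in code: a minimal Python sketch (illustrative only; checking remainders one by one is far too slow for the large primes discussed below).

```python
# "Eleven divided by five is two remainder one."
quotient, remainder = divmod(11, 5)
print(quotient, remainder)  # prints: 2 1

def is_prime(n):
    """A number greater than 1 is prime when division by every number
    from 2 up to n - 1 leaves a remainder."""
    if n < 2:
        return False  # 1 is neither prime nor composite
    return all(n % d != 0 for d in range(2, n))

print([n for n in range(2, 30) if is_prime(n)])
# prints: [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```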
If you want to get to the new mathematics immediately, click here. If you want to take a more leisurely route, keep reading.
The fascination of primes, as Emily Riehl notes, often draws people into mathematics. Oliver Sacks describes a pair of twins who, though quite incapable of calculating, could produce, and recognize, very large prime numbers. Sacks, when he understood that the numbers they murmured to each other were primes, joined in, contributing eight- and ten-digit primes. The twins, pleased at his comprehension, and delighted to find even larger primes to contemplate than their own six-figure numbers, soon were “swapping twenty-figure primes” (The Man Who Mistook His Wife for a Hat (1985) 203), far larger than any computer at the time could calculate. Though I haven’t the insight of Sacks’s twins, smallish primes do have a different feeling for me, a sort of spikiness. Composite numbers, especially even numbers with lots of divisors, feel smooth and rather dull.
A more precise characterization of primes: say that m divides n if there is a number k such that km = n. Write m|n for this relation. A little work will show that it is a partial order relation, i.e. reflexive (n divides itself), antisymmetric (if m|n and n|m then m = n) and transitive (if m|n and n|q then m|q). The divisibility order has a least element, namely 1, which divides every number. In any partial order, an atom is an element x, other than the least element itself, such that if y precedes x in that order, then y is either the least element or x itself: there is, in other words, nothing “between” x and the least element. Primes are the atoms of the divisibility order.
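That claim can be checked mechanically for small numbers. A short Python sketch (brute force, purely illustrative): it verifies that, up to a modest bound, the atoms of the divisibility order are exactly the primes.

```python
def divides(m, n):
    """m | n: there is a k with k * m == n."""
    return n % m == 0

def is_atom(x):
    """x is an atom: not the least element 1, and every y dividing x
    is either 1 or x itself (nothing strictly 'between')."""
    return x != 1 and all(y in (1, x) for y in range(1, x + 1) if divides(y, x))

def is_prime(x):
    return x > 1 and all(x % d != 0 for d in range(2, x))

# The atoms of the divisibility order coincide with the primes.
assert all(is_atom(x) == is_prime(x) for x in range(1, 500))
print([x for x in range(1, 40) if is_atom(x)])  # [2, 3, 5, 7, 11, 13, ...]
```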
Number theory concerns itself with relations among the natural numbers, especially those definable in terms of the elementary operations +, –, ·, ÷ (the last more often denoted with a slash).
Being prime is a one-place relation definable in terms of division, hence ultimately in terms of addition: a prime is a number p such that for no number q other than p and 1 is it true that p = q + q + … + q for any number of iterations of ‘+ q’. Another such relation is being at a distance: p and q are at a distance of 2, for example, just in case either p – q = 2 or q – p = 2.
To say that the primes are the atoms of the divisibility order is to say that every number greater than 1, because it lies “above” one or more atoms, is divisible by at least one prime. The number 360, for example, is divisible by 5: 360 = 5 · 72. 72 is in turn divisible by 3: 72 = 3 · 24. Notice that when we extract a divisor, what’s left is always smaller than what we started with. Since we cannot extract a divisor from 1 (remember that 1 is not a prime), the process of extracting divisors must end; the result will be a list of atoms—i.e. primes—each of which divides 360, and 360 will be the result of multiplying all the primes in the list together. More generally: every number greater than 1 is a product of primes. Our number 360 is 2 · 2 · 2 · 3 · 3 · 5.
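The extraction process can be written out as a short Python sketch. (It extracts the smallest divisor at each step, so the primes come out in increasing order; the paragraph above extracted 5 first, but any order of extraction yields the same list of primes.)

```python
def prime_factors(n):
    """Repeatedly extract the smallest divisor greater than 1 (necessarily a
    prime); what's left is always smaller, so the process must end."""
    factors = []
    while n > 1:
        d = 2
        while n % d != 0:
            d += 1
        factors.append(d)
        n //= d  # what's left after extracting the divisor d
    return factors

print(prime_factors(360))  # [2, 2, 2, 3, 3, 5]
```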
Twin primes
A little thought will show that the primes must “thin out”. After all, no even number except 2 is prime: half the numbers greater than 2 are ruled out. Likewise, no number divisible by 3 except 3 itself is prime: one-third of the numbers greater than 3 are ruled out (but half of those, that is, one-sixth of all the numbers greater than 3, were ruled out already because they are divisible by 2). (A sophisticated elaboration of this line of reasoning yields the Prime Number Theorem, which gives the odds that a randomly chosen number in the interval between 1 and an arbitrary number N will be prime, in terms of N.) Every time we encounter a new prime, we can cross off all its multiples, some of which will coincide with numbers already crossed off, but some of which will be crossed off for the first time. The method of crossing off numbers in this way so as to determine which are composite (and so also which are prime) is called the Sieve of Eratosthenes. It is guaranteed to exhibit all the primes below any given number, and it suggests strongly that the “density” of primes among all numbers must decline monotonically as we move further away from 1.
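A minimal Python sketch of the sieve (one common way of writing it; the crossing-off starts at each prime’s square because the smaller multiples have already been crossed off):

```python
def sieve_of_eratosthenes(limit):
    """Return all primes up to limit by crossing off multiples."""
    crossed_off = [False] * (limit + 1)
    primes = []
    for n in range(2, limit + 1):
        if not crossed_off[n]:
            primes.append(n)  # never crossed off, so n is prime
            for multiple in range(n * n, limit + 1, n):
                crossed_off[multiple] = True  # some for the first time, some again
    return primes

print(sieve_of_eratosthenes(60))
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59]
```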
The first of the two theorems proved recently concerns the distribution of primes. Call two primes at a distance of 2 twin primes. One might well wonder whether, if we move far enough away from 1, the distance between primes is always more than 2, or in other words whether there are only finitely many twin primes.
More generally, one might wonder whether for each distance d, however large, there is some number k, depending on d, such that no two primes which are both larger than k are at a distance of d or less. Not only would the gaps between primes get bigger “on average” (as they must if the primes are to thin out), but the gaps would get bigger uniformly, so that after some point there would be no more small gaps, and in particular no twin primes.
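The question can at least be explored empirically. A rough Python sketch (suggestive only, since no finite computation settles it): the average gap grows as we move away from 1, and yet gaps of exactly 2 keep turning up.

```python
def sieve(limit):
    """Primes up to limit, by the Sieve of Eratosthenes."""
    flags = [True] * (limit + 1)
    flags[0] = flags[1] = False
    for n in range(2, int(limit ** 0.5) + 1):
        if flags[n]:
            for m in range(n * n, limit + 1, n):
                flags[m] = False
    return [n for n, is_p in enumerate(flags) if is_p]

primes = sieve(100_000)
gaps = [q - p for p, q in zip(primes, primes[1:])]

# Gaps grow "on average" as the primes thin out ...
print("average gap among the first 1000 primes:", sum(gaps[:1000]) / 1000)
print("average gap among the last 1000 primes: ", sum(gaps[-1000:]) / 1000)

# ... and yet gaps of exactly 2 (twin primes) keep appearing.
twins = [(p, p + 2) for p, g in zip(primes, gaps) if g == 2]
print(len(twins), "twin-prime pairs below 100000; the largest is", twins[-1])
```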
The graphics below suggest that
  • (i) the primes do thin out, but slowly
  • (ii) twin primes thin out, but even more slowly
  • (iii) the distribution of primes (and of twin primes) is irregular
Primes up to 699. Twin primes are in orange. Each column contains 10 numbers. The blue row contains 5 and its multiples, the top row 10 and its multiples.
Primes up to 699, compressed and blurred. All the rows of even numbers, and the row of 5 and its multiples, have been removed.
The theorem just proved by Yitang Zhang of the University of New Hampshire states that for a large enough distance d (“large” being 7 · 10^7), we are guaranteed that there will be pairs of primes at that distance (or less) no matter how far we travel from 1. I infer that their distribution is so irregular that even though their density decreases monotonically as we move away from 1, still there will always be “clumps” of primes that are close together. The clumps will get rarer and rarer, but at no point will there cease to be clumps.
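Zhang’s proof is analytic, nothing like a search, and no finite computation could establish his theorem; but a small search can at least exhibit the sort of “clumps” I have in mind. A Python sketch (brute-force trial division over a tiny window, nothing to do with the bound 7 · 10^7 itself):

```python
def is_prime(n):
    """Trial division up to the square root -- adequate for numbers near 10**9."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

# Even far from 1, where the primes have thinned out considerably,
# pairs at distance 2 still turn up in a short window.
start = 10**9
pairs = [(n, n + 2) for n in range(start + 1, start + 10_000, 2)
         if is_prime(n) and is_prime(n + 2)]
print(len(pairs), "twin pairs between", start, "and", start + 10_000)
print("the first few:", pairs[:3])
```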
The Odd Goldbach Conjecture
Readers of Gödel, Escher, Bach will remember the Goldbach Conjecture, which Hofstadter entwines with the Goldberg Variations and with their composer Johann Sebastian Bach, himself no stranger to number symbolism. The Conjecture, contained in a letter of Christian Goldbach to Leonhard Euler, states that every even number greater than 2 is the sum of two primes. (If you’re thinking of looking for a counterexample, start with a number greater than 4 · 10^18.)
Various weakenings of the Conjecture have been proposed. The “odd” Goldbach conjecture states that every odd number greater than seven is the sum of three odd primes. In a paper recently posted to the arXiv, Harald Helfgott has proved the Odd Conjecture (the paper is supplemented by Helfgott and Platt’s numerical verification for numbers up to 8.875 · 10^30). Only last year Terence Tao proved that every odd number greater than 1 is the sum of at most five primes. Helfgott’s work improves both on Tao’s result and on older work which showed that the Odd Conjecture is true for every “sufficiently large” odd number.
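Neither conjecture can be settled by checking cases, but the decompositions themselves are easy to exhibit for small numbers. A brute-force Python sketch (nothing like the methods of Helfgott, Platt, or Tao; the bound of 1000 is arbitrary):

```python
from itertools import combinations_with_replacement

def sieve(limit):
    flags = [True] * (limit + 1)
    flags[0] = flags[1] = False
    for n in range(2, int(limit ** 0.5) + 1):
        if flags[n]:
            for m in range(n * n, limit + 1, n):
                flags[m] = False
    return [n for n, is_p in enumerate(flags) if is_p]

PRIMES = sieve(1000)
PRIME_SET = set(PRIMES)

def goldbach_pair(even_n):
    """Two primes summing to an even number greater than 2 (the Conjecture)."""
    return next((p, even_n - p) for p in PRIMES if even_n - p in PRIME_SET)

def odd_goldbach_triple(odd_n):
    """Three odd primes summing to an odd number greater than 7 (the Odd Conjecture)."""
    odd_primes = PRIMES[1:]  # drop 2
    return next((p, q, odd_n - p - q)
                for p, q in combinations_with_replacement(odd_primes, 2)
                if odd_n - p - q in PRIME_SET)

print(goldbach_pair(360))       # one of several decompositions, e.g. (7, 353)
print(odd_goldbach_triple(99))  # e.g. (3, 7, 89)
```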
Helfgott’s lucid introduction lays out the strategy of the proof. It has been known since Vinogradov’s publication in 1937 that there exists a constant N such that for all odd numbers greater than N the Odd Goldbach Conjecture holds. An explicit bound was soon given by Borodzin: N = 3^(3^15), which is much larger than the number of neutrinos in the universe (10^102; Borodzin’s number is roughly 10^(10^7)). Helfgott’s proof has two components. The first, traditional, component consisted in lowering the constant N to a range such that all instances of the Conjecture below that N could be calculated, “by hand” as it were, rather than by argument; the second is the numerical verification, with Platt, of the cases below that bound. N = 10^30 turned out to be feasible (it would not have been ten years ago: as it was, the authors needed 40,000 core hours on the computers they used).
I must admit to being a bit unhappy with the style of proof. The issue could have been raised with the famous computer-assisted proof of the Four-Color Theorem, but at the time people were preoccupied with deciding whether proofs in which the role of computers was ineliminable (and which were therefore not übersehbar, “overseeable”, in French perhaps contrôlable) were admissible. To illustrate the problem, let me use a much simpler case. I want to show that for all n, the sum of the first n numbers is (n + 1)n/2, so that, for example, 1+2+3 = (3 · 4)/2 = 6. Now suppose I have a proof in the traditional mode for all n greater than 1 million; for n less than a million, I verify all instances on a computer.
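In this toy scenario the “numerical” half of the hybrid proof would amount to nothing more than the following sketch (with a bound of ten thousand rather than a million, purely so that it runs in a moment):

```python
# Check every instance of the formula below the bound, one by one.
BOUND = 10_000
for n in range(1, BOUND + 1):
    assert sum(range(1, n + 1)) == (n + 1) * n // 2
print("1 + 2 + ... + n == (n + 1) * n / 2 verified for every n up to", BOUND)
```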
The issue is not that the traditional proof holds for only part of the range of the relevant variable (in this case N). Many traditional proofs have limitations of that sort. But typically there is a reason why the domain of validity of the proof should have whatever limitations it has. Here is a slightly artificial example (which requires some technicality). Galois and Abel showed that if n ≥ 5, then a general polynomial equation of degree n cannot be solved using elementary operations and radicals. The domain of validity, one might say, of their proof of unsolvability is bounded. But we know perfectly well why polynomial equations of degree 4 and below can be solved: it’s because the associated Galois groups are, in the group-theoretical sense, solvable, a result we can prove “by hand” just by determining all the subgroups of the alternating group A4 and so on.
If I understand Helfgott’s strategy correctly, the choice of the bound N = 10^30 was based on the feasibility of numerical verification for all odd numbers below that N. If so, then the boundary between the “conceptual” and the “numerical” parts of the proof is determined by something extra-mathematical, namely the computing power of today’s computers. As computers get even more powerful, bounds like N can be raised, and so, insofar as the results of numerical calculation are “brute facts” from the standpoint of theory, the proportion of bruteness will go up. I don’t know whether that is good or bad for mathematics, but it does seem to amount to a significant change in the goals of mathematicians.
On the other hand, bruteness does make an appearance here and there even in traditional mathematics. Consider regular polytopes. A regular polytope in Euclidean space is a figure all of whose faces (of whatever dimension) and angles are equal. In dimension two, the regular polytopes are just the regular polygons (triangle, square, pentagon, hexagon, and so forth); of these there are infinitely many. In dimension three, as Plato already knew, there are exactly five regular polytopes: the tetrahedron, cube, octahedron, dodecahedron, and icosahedron. In dimension four, there are six. In dimension five and higher, there are just three—the equivalents of the tetrahedron, cube, and octahedron. Funkiness, in other words, ends with dimension four. Even though the funkiness of dimensions three and four is entirely explicable, there remains, for me, something brute, something “lumpy”, as if the dodecahedron and the icosahedron were sports, lusus naturæ mathematicæ. On a grander scale, perhaps that is true in the world of numbers as well.

May 21, 2013 in History of Science · Philosophy of Mathematics