Pundit makes stuff up, is refuted
Stanley Fish has recently asserted that “the conclusions reached in philosophical disquisitions do not travel. They do not travel into contexts that are not explicitly philosophical […], and they do not even make their way into the non-philosophical lives of those who hold them”. This is at best a gross overgeneralization; it can be refuted by five minutes’ research online.*
No doubt lots of people will step up to defend the relevance of philosophical conclusions. I want to consider a different issue. Once upon a time when a poobah like Fish issued pronouncements like this, it would have taken time to gather the evidence to refute him. Now it takes almost no time or expense. It seems that by and large the poobahs have yet to catch on, perhaps because for poobahs research is optional. When they pontificate on the day’s events, they mostly rely on their general knowledge. This is owing no doubt to deadline pressures; but it is also characteristic of the role. A well-furnished mind has been, since the days of Cicero at least, a prerequisite of the orator; but even the most well-furnished have lacunæ and lapses, and one suspects that some of our poobahs’ furnishings are sparse.
When I lecture I sometimes find myself veering into topics I haven’t prepared and don’t know much about. I too have to fall back on my general knowledge. I used to be able to count on knowing more than my students on most of the subjects I was likely to veer into. But now they have computers and iPads. If I don’t get a date or name right they can catch it almost immediately. They sometimes do, and sometimes they tell me. I’ve learned two things: one is not to fake it, the other is to take advantage of those computers and iPads—have them do some fact-checking for me. It’s instructive for both of us.
The issue I want to raise is: what becomes of “general knowledge”, or rather the social value of having lots of it, now that anyone with a phone or tablet can simulate the possession of a well-furnished mind? Is the orator’s storehouse obsolete? And if poobah discourse, like the extemporaneous public oratory of Chautauqua days, depends for its effect partly on the impressive marshalling of general knowledge, will it now gradually fade away?
*The issue Fish raises is real enough, and has a long history: for one case, see Myles Burnyeat, “Can the Sceptic Live his Scepticism?”, in Malcolm Schofield et al. (eds.), Doubt and Dogmatism (1980) and the subsequent literature.
Odd measures #2: Entourage Depth Index
This Odd Measure was inspired by a quote from Mariah Carey:
I had my team with me but the pups had a mini entourage of their own, of course! And why wouldn’t they? It was a big shoot and even my entourage had an entourage—my stylist had an assistant, my security had extra security [for another version, see this].
Merely having an entourage, of course, is indicative of fame. I’ve never had even the smallest entourage, and I would guess that most academics are below the threshold. Susan Sontag, whom I once sat next to on a panel, had a small entourage to sweep her away to the next event. But intellectuals by and large just don’t seem to need them.
Divas, on the other hand, do. Likewise Presidents—the Secret Service is an entourage all by itself. Or Roger Ailes, head of Fox News, who in public is constantly “buffered” by an “elaborate private security detail” paid for by News Corp. I’ll bet that, like Mariah Carey’s, his security has security of its own.
A better measure of your fame and (self-)importance will take account not merely of the size of your entourage, but of its depth. On this basis we would expect the little people—those who have no entourage—to be assigned an index of 0. If you’re like Paula Abdul and all you can muster is a first-order entourage, your index ought to be 1. Mariah Carey’s index, as we’ve seen, will be at least 2.
One complication emerges as we consider the elaborate arrangements around Presidents, Queens, Miss America, Dr. Evil and the like. The collection of entourages around a person, ordered by the “belongs to the immediate entourage of” relation, will be, in favorable cases, a directed tree, that is, a directed graph with a designated node such that between that node and any other node there is exactly one path. In the graph of entourages all paths lead to the Star.
Take the tree at right (all edges are understood to be directed upwards). The longest paths contain three entourages, but there are also paths containing only two. How, then, should the ENTOURAGE DEPTH INDEX be defined?
An important person has a large entourage, of course, but also, one would think, a deep one. Largeness corresponds to breadth, to the average number of nodes at each level; overall depth to some measure where we give greater weight to nodes that are farther away from the top. (If you set the EDI simply to the length of the longest path, then you’re ignoring breadth altogether.) I’m not sure there is a universal measure: after all, in some contexts, breadth would indicate importance better than depth, and in others vice versa. Criminal organizations tend toward flatness, armies toward depth.
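The index sketched above can be made concrete with a toy formalization (mine, not a settled definition): represent the tree of entourages as nested Python dictionaries, take the EDI to be the length of the longest chain of nested entourages, and add one hypothetical depth-weighted variant in which each entourage counts for its distance from the Star, so that deep nesting outweighs mere breadth.

```python
def edi(entourages):
    """Entourage Depth Index: the longest chain of nested entourages.

    `entourages` lists a person's immediate entourages; each entry is a
    dict {"name": ..., "sub": [...]} whose "sub" holds that entourage's
    own entourages. Having no entourage at all gives an index of 0.
    """
    if not entourages:
        return 0
    return 1 + max(edi(e["sub"]) for e in entourages)


def weighted_edi(entourages, depth=1):
    """A hypothetical depth-weighted variant: each entourage counts for
    its distance from the Star, so depth outweighs mere breadth."""
    return sum(depth + weighted_edi(e["sub"], depth + 1) for e in entourages)


# Mariah Carey's arrangement, as described in the quote above:
mariah = [
    {"name": "stylist", "sub": [{"name": "stylist's assistant", "sub": []}]},
    {"name": "security", "sub": [{"name": "extra security", "sub": []}]},
]
print(edi(mariah))  # 2: her entourage had an entourage
```

On this reckoning the little people score 0, Paula Abdul’s first-order entourage scores 1, and Mariah Carey scores at least 2, as required.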
The examples mentioned above suggest another index, the SDI or Security Depth Index. But I suspect that, as the story about Roger Ailes shows, the SDI would not be a measure of importance so much as of paranoia.
Higgitt’s historical heifer
[Cross-post from New APPS.] I want to recommend and to comment on a post by Rebekah Higgitt at her new venue Teleskopos.
She starts with a wonderful quotation from Augustus de Morgan (yes, that de Morgan), on the difficulty that ordinary readers may have in understanding the “merit of any new step in the advancement of a system”. In his biography of Newton (see also Essays on Newton, p. 51), he writes:
Unless he be acquainted with the history of preceding efforts, he comes to the consideration of that merit from the wrong direction; for he reads the history from the end. He goes to the mail-coach, back from the railroad instead of forward from the old strings of pack-horses: from a macadamized road lighted with gas to the rough stones and the oil-lamps, instead of beginning with the mud and the link-boys.
We all come in at the wrong end; the difficulty is that of having a beginning, of not being there all along. The ranks of the famous, for example, appear differently to each cohort. For anyone under thirty-five, John Lennon has always been dead; he is dead, of course, but for me he is also—not just by hearsay but in experience—the twenty-something Beatle who told the rich folks in Albert Hall to rattle their jewelry, the promoter of peace, the artist felled by bullets in 1980. So too the student of physics approaches its great discoveries from the wrong end, when they are completed, established, no longer at the quickening edge of science.
As Higgitt says, the ordinary reader lacks context (other than what is part of “general knowledge”): the “small guys surrounding the big ones”, the textbooks we no longer read, the decor taken for granted as de Morgan takes for granted lamps lighted with gas (rather new in the 1840s!), as Einstein took for granted synchronized clocks (his father designed them: see Peter Galison, Einstein’s clocks). To the mathematician Hamilton in 1853 De Morgan writes:
In reading an old mathematician you will not read his riddle unless you plough with his heifer;* you must see with his light if you want to know how much he saw [quoted by Higgitt].
Higgitt goes on to quote De Morgan’s review of Whewell’s History of the inductive sciences (1847; the review was published in 1849). Whewell, says De Morgan, “appears to have considered foreign to his purpose” inquiries into the when and even the by whom of scientific discovery. But
it is of great importance in the history of philosophy to show, that the germs of brilliant discoveries have often been long in the hands of mankind, unappreciated and little thought of, till some accidental association with a fertile principle or abstract truth, developed their nature, and gave them new value. The more we apply ourselves, with antiquarian industry, to examine the history of the human mind, the more apparent it will be, that the present accumulation of science, however massive, has grown particle by particle, and has never really experienced any sudden increase.
Is this, Higgitt asks, a plea for the cultural history of science avant la lettre?
On a question of Mohan Matthen
Mohan Matthen writes at NewAPPS:
p: I am sitting here writing a blog post. It is now later than when I wrote the previous sentence.
Can I doubt the truth of what I just wrote?
Let’s say I can rationally doubt p if there is some scenario S that falsifies p, and I cannot conclusively rule S out. Certainly there are admissible scenarios that falsify p: I may be dreaming. So I can rationally doubt p.
Let’s say doubt is sceptical if it spreads to unrelated propositions. The dream scenario falsifies p, but it also falsifies everything else I seem to perceive. Let’s say a doubt is empirical if it does not spread. I may doubt that my computer is working properly. But this doubt, based on the computer’s odd performance, does not spread to the proposition that my printer is working properly. (More details here.)
I would claim that I cannot empirically doubt p. I can doubt that it is now later than when I started writing, but only by dream scenarios and other sceptical stratagems that cast doubt on all contingent propositions.
p: I am writing a blog post. My fingers are above the keyboard. Can p be doubted?
In a way, no. But an Aristotelian would hold that up and down are absolute notions: ‘up’ is away from the center of the universe, ‘down’ is toward it.
The Newtonian says: no, ‘up’ is the direction away from the center of some nearby large mass, and ‘down’ is toward that center; ‘up’ and ‘down’ are notions of purely local application (and in some reference frames, simply inapplicable); the Aristotelians’ ‘up’ and ‘down’ are “overturned” as being, though conceivable, of no application anywhere.
As for ‘above’, “x is above y” will mean that x and y lie along a line incident with the center of the nearby mass that defines ‘up’ and ‘down’, and that x is farther from that center. So understood, the relation ‘is above’ (new style) is coextensive with the relation ‘is above’ (old style). Moreover, various commonplaces about aboveness can be explained in the new system, e.g. that it takes effort to situate one thing above another, or that if y is above x then y, if left unsupported, will fall on x.
Can the Newtonian be said to doubt that her fingers are above the keyboard? Notice that in the exchange above, statements (or, for that matter, thoughts) in which the relation of aboveness figures have been given two interpretations. I want to emphasize both two and interpretation. The original thought—that my fingers are above the keyboard—is not interpreted, and not itself an interpretation of something else. It’s just there, as something I think or say (you could add: on the basis of certain experiences, but that doesn’t seem to advance the argument).
The two interpretations seem to be incompatible, or rather they seem to have incompatible consequences in a world in which there is more than one candidate for “nearby large mass”.
The Aristotelian is committed to holding that if I am sitting on the visible side of the moon with my head pointing toward the Earth, my feet are above my head (‘above’ being understood Aristotelian-wise); the Newtonian is committed to holding that my head is above my feet (‘above’ being understood Newtonian-wise); and commonsense says that if x is (properly) above y, then y is not properly above x. (I have to bring in commonsense here, because there is no incompatibility between what the Aristotelian says and what the Newtonian says except through mediation by the “naïve” notion of ‘above’.)
My attitude toward my naïve belief that my fingers are above the keyboard cannot accurately be characterized as “doubt”. As a convert to Newtonianism, I am no more inclined to doubt that claim empirically than I was before I was converted (and whether I was converted from naïveté or from Aristotelianism). Newtonianism, like Aristotelianism, gives me an interpretation under which the claim is true.
On the other hand, I do, after a fashion, rationally doubt the claim insofar as I can imagine a scenario in which it, but also every other statement of a certain type, turns out false (e.g. if there were an even larger mass than Earth nearby and in the feet-to-head direction). Indeed, Newtonian physics gives me a systematic reason to relinquish the naïve notion of aboveness altogether insofar as it, along with various other notions about bodies and space, has a fatal connection with Aristotelianism (as many 17th-century philosophers thought it did). (The “retraining of intuitions” that Eric refers to was in part a relinquishment of Aristotelian/commonsense notions, e.g. of nonrelative rest and of differences between qualitative and specific change—alteration as contrasted with corruption. Descartes is quite explicit about this in Le Monde. Boyle in his Origin of forms and qualities devotes quite a bit of effort to convincing his reader that there is no distinction to be made between alteration, or change of quality, and generation and corruption, or change in substantial form; some of that effort consists in persuading his reader to give up Aristotelian interpretations of common experiences.)
I’m not sure whether commonsense notions of ‘before’ and ‘after’ stand to Special Relativity as do commonsense notions of ‘above’ and ‘below’ to Newtonian (and post-Newtonian) physics. I suspect that in the world imagined by George Gamow, in which the speed of light is about 20 miles an hour, ‘before’ and ‘after’ would prove less useful, because simultaneity relations are not relativistically invariant. That would not lead me, exactly, to doubt that it is now later than when I wrote the previous sentence; but, as before, it would lead me to reinterpret what I take myself to mean when I make statements like that, and perhaps, in time, to relinquish ‘before’ and ‘after’. I would, for example, introduce the concept of worldline in interpreting commonsense notions of ‘before’ and ‘after’; and I would learn that those notions are applicable only along worldlines.
Jeffrey Zacks, a colleague here in Psychology, and his collaborators have been studying human perception of events for the last ten years. A recent paper, in press at the Journal of cognitive neuroscience and available at his website (pdf), argues that perceptual event boundaries occur in experience at points where prediction becomes difficult.
[…] working memory representations of the current event guide perceptual predictions about the immediate future [less than 10 sec]. These predictions are checked against what happens next in the perceptual stream; most of the time perceptual predictions about what happens next are accurate. From time to time, however, activity becomes less predictable, causing a spike in prediction errors. These spikes in prediction error are fed back to update working memory and reorient the organism to salient new features in the environment. According to this model, the increase in prediction error and consequent updating results in the subjective experience of an event boundary in perceptual experience.
The tenets of Zacks’s view are (i) that the unity of experience consists in representations actively maintained in working memory; and (ii) that present experience consists partly in anticipations of future experiences. Memory, insofar as it enters the stream of experience, would be on this account proleptic, forward-looking; mere recall has no place.
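The quoted model can be caricatured in a few lines (a toy illustration of mine, not the paper’s implementation): working memory holds a running estimate of the current event; that estimate serves as the prediction of what comes next; and when the prediction error spikes past a threshold, the estimate is reset and an event boundary is registered.

```python
def event_boundaries(stream, threshold=2.0):
    """Toy caricature of event segmentation by prediction error.

    `stream` is a sequence of numbers standing in for perceptual input.
    A running estimate (the "working memory" of the current event)
    predicts the next value; when the prediction error exceeds
    `threshold`, the estimate is reset and a boundary is recorded.
    """
    boundaries = []
    estimate = stream[0]
    for i, x in enumerate(stream[1:], start=1):
        if abs(x - estimate) > threshold:
            boundaries.append(i)                 # prediction failed: new event
            estimate = x                         # working memory reoriented
        else:
            estimate = 0.9 * estimate + 0.1 * x  # small update within the event
    return boundaries


# A stream with two abrupt regime changes yields two boundaries:
print(event_boundaries([0, 0, 0, 5, 5, 5, 0, 0]))  # [3, 6]
```

Most of the time the prediction holds and nothing is marked; only the spikes in error carve the stream into events, which is the model’s central claim.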
Aristotle says that animals don’t recollect: they don’t search their memories for information about the past (De memoria ii, 453a8, Hist. anim. 488b26; see Grote, Aristotle 476). On what grounds he said that I don’t know, but whether it was a shrewd surmise or a lucky guess he seems to have been right. Aristotle also put forward a version of what became the predominant philosophical picture of memory—that it consists in the registering of an “impression” which is subsequently to be recalled, as if the mind had a filing-card drawer or a mental museum (such as figured in Ancient and Renaissance artes memoriæ). That picture, attractive though it is, may well be fundamentally misleading. Modelling biological memory on the specifically human capacity that consists in voluntary recall of items subject to intersubjective standards of accuracy (e.g., the procedures of memorization employed by the reciters of epic poetry, to take an example Aristotle would have known) may turn out to be yet another case where intuition has led us astray.
A predominantly proleptic function for working memory, moreover, fits nicely with theories according to which perception requires activity on the part of the perceiver, so that the perception of red, for example, to use Mohan’s example (taken from Justin Broackes) is effectively the perception of a pattern of sensations that arises from the perceiver’s having regarded the red thing from several perspectives—a feat normally possible only by moving. Event perception too may be governed, if not by activity itself, then by anticipations of activity.
I stopped reading…
A result of reading
I was reading a poem of Ashbery. I stopped when I encountered the phrase ‘partial symmetry’. That phrase evoked, as an errant odor might, an image, or rather the skeletal trace of one, of my reading, long ago when I was studying music, a book on finite geometries. I recalled in particular the phrase ‘incomplete block design’, the name of a mathematical object that indeed exhibits partial symmetry. The image and phrase could have set off a long train of reminiscence and reverie, but instead I began thinking about what the poem had done to me.
Modern criticism tends to set great store by evocativeness, allusiveness, multum in parvo. A poem ought to suggest more than it says. “Silent, upon a peak in Darien”—the last line of Keats’s “On first looking into Chapman’s Homer”—is the type. A scientific report, on the other hand, ought to suggest no more than it says. It would be an aberrant report that lent itself to indefinite chains of allusion. Its effect requires no Pacific lurking offstage, no flower absent from all bouquets. The scientific flower is there in black and white or not at all.
Were I writing at length I would tease out the suggestions of ‘suggestion’. I might even venture a definition. After all, there are distinctions to be made. Scientific papers can be suggestive too. Einstein’s 1905 papers certainly were, but not in the way of Keats. Here, though, I rely on the reader’s sense of the difference.
We expect more
We—the present audience for serious philosophy—expect of philosophical writing that at its best it should place itself somewhere between poem and report. A philosophical work that suggests no more than it says tends not to be read once the dialectic has moved on. After the moving finger writes there follow many hands erasing until almost nothing is left (except for those of us who make a profession of reading old texts); what remains does so by virtue of its power of suggestion.