Author: Mark Liberman

  • Criticism as courtship

    In his latest On Language column, Ben Zimmer examines “Crash Blossoms”, and introduces the topic with a literary allusion:

    Elizabeth Barrett Browning once gave the poetry of her husband, Robert, a harsh assessment, criticizing his habit of excessively paring down his syntax with opaque results. “You sometimes make a dust, a dark dust,” she wrote him, “by sweeping away your little words.”

    When Elizabeth Barrett wrote this to Robert Browning, in July of 1845, he was not her husband. They had first met in May of 1845. He was smitten, but she was skeptical, and forbade him to speak or write of love.  His efforts to evade this ban included his poem “The Flight of the Duchess”, as Fred Manning Smith suggests (“Elizabeth Barrett and Browning’s ‘The Flight of the Duchess’”, Studies in Philology 39(1):102-117, 1942):

    A comparison of The Flight of the Duchess, a poem published by Robert Browning in 1845, with the letters of Robert Browning and Elizabeth Barrett, written during their courtship, in 1845-1846, raises several questions:  How much of his own life and the life of Miss Barrett during the year 1845 does Browning bring into The Flight of the Duchess? Does the poem give Browning’s answer to the question whether Miss Barrett should disobey her father and go to Italy? May the words of the Gypsy Queen be taken as an expression of Browning’s desire to marry Elizabeth? In several poems the poet paid tribute to his wife during their life together in Italy and in several he paid tribute to her after her death — is not this a poem written of her and to her during their courtship? Mrs. Browning’s Sonnets from the Portuguese were written during their courtship and are based on things they talk about in their letters of 1845-1846 — may we not say that The Flight of the Duchess bears a similar relation to their letters written during 1845? Scholars have noticed the similarity between the elopement of the Brownings and the flight of Caponsacchi and Pompilia in The Ring and the Book — does The Flight of the Duchess bear a like relation to the elopement, before rather than after it took place?

    Elizabeth’s “dark dust” remark was part of a letter first discussed in Edward Snyder and Frederic Palmer, “New Light on the Brownings”, The Quarterly Review 269:48-63, 1937, and described by Smith as follows (“More Light on ‘Elizabeth Barrett and Browning’s The Flight of the Duchess’”, Studies in Philology 39(4):693-695, 1942):

    A letter written by Elizabeth Barrett to Robert Browning in July, 1845, in which Miss Barrett gives a detailed criticism of “The Flight,” has recently come into the possession of Professor Frederic Palmer, one of the authors of the Quarterly Review article. This letter with thirteen letters in which Miss Barrett criticizes other poems by Browning was purchased by Professor George Herbert Palmer in 1932. The other letters have been published (See F. G. Kenyon, New Poems by Robert Browning and Elizabeth Barrett Browning), but for some unknown reason the letter criticizing “The Flight” has never been published. The letter suggests, according to the authors of the Quarterly Review article (they do not publish it because of its length), seventy-three changes in “The Flight,” more changes than are suggested in all the other thirteen letters together. Why does Miss Barrett pay so much attention to “The Flight”? In answering this question the authors of the article arrive at certain conclusions I reach in “Elizabeth Barrett and Browning’s The Flight of the Duchess.”

    I haven’t been able to find a copy of the full text of this letter, or even of the Snyder and Palmer article, so as to determine which passage(s) in The Flight of the Duchess needed dusting — neither seems to be available on line, and the Penn library’s copies of the relevant works are in remote storage.

    [Elizabeth Barrett and Robert Browning eloped to Italy in August of 1846.]

  • Egregious fabrication of quotes at the Sunday Times?

    Regular LL readers know that we’re not naive about the relationship between “news” and truth, especially when it comes to science reporting or the accuracy and context of MSM quotations and even video clips. In fact, we could fairly be accused of excessive cynicism. But this is breathtaking: “Science Reporting Gone Wild”, Neuroworld, 1/18/2010; “The British media’s ‘Blonde Moment’”, Neuroskeptic, 1/28/2010.

    Either Aaron Sell, a psychologist at UCSB, is lying about what he said to John Harlow, the West Coast Bureau Chief for the Sunday Times, or John Harlow seriously needs to be fired.

    (Well, I guess there’s also the “editorial computers hacked by a team from the Onion” theory…)

    There are other reasons that I prefer to answer journalists’ questions via email, but this is certainly a good one all by itself.

    Based on the links above, or the dozens of other examples we’ve documented over the years, it’s hard to avoid the conclusion that many if not most journalists feel free to mis-remember, select, edit, re-order, fill in, and generally simulate (not to say fabricate) quotes, to fit the story that they’ve decided to tell. When the mis-quotes are roughly congruent with at least some out-of-context piece of what the source actually said, usually nobody pays any attention, even if a recording of the misquoted passage is easily available.

    But another fact about journalists is that they sometimes — maybe often — don’t really know much about the topic of their story. This is especially likely to be a problem with science reporting, where misunderstanding may lead to airbrushed quotes that are nonsensical, or at least largely unrelated to what any sources ever actually said.

    Perhaps that’s what happened here. And then again, maybe Harlow just doesn’t care about whether or not what he writes is true, or is happy enough to write what he knows perfectly well to be false. According to the description of the sequence of events in the Neuroskeptic post, the last hypothesis is better supported:

    Harlow, whose recent output includes “Brad Pitt and Angelina Jolie no more” and that incisive piece of reportage, “Sandra Bullock overtakes Streep in dash for awards glory”, wrote to Sell saying that he was writing an article about blondes, and asking whether Sell’s data was relevant.

    Sell hadn’t considered hair color in his research, but he reanalyzed his data on Harlow’s request. He found no association between blondness and personality, which is not surprising because it’s hair we’re talking about. Harlow, apparently unhappy with this, wrote the article anyway, simply making up various claims about blondes and attributing them to Sell and his paper, backed up with some fake quotes.

    If that’s what really happened, it goes well beyond the usual ignorance, carelessness, and sensation-seeking.

    [Update — mgh reminds us of a similar event in 2006-2007, discussed here.  In that case, I concluded that “these people are not lying, exactly. They simply don’t care one way or another about what the facts are, and this shifts their work out of the category of lies and into the category for which Harry Frankfurt has suggested the technical term bullshit”. That was because the falsehoods seem mostly to have originated with others, and the journalists were mainly guilty of failing to exercise even the most elementary sort of checking.  Thus it was at least arguable that Isabel Oakeshott and Chris Goulay were bullshitters rather than liars.

    In the case of John Harlow’s Blonde Warrior Princess article, we seem to be left with only two options: either the scientist in question, for some strange reason, lied to Harlow about unpublished aspects of his research, and then decided to deny it after Harlow based an article on that conversation; or else Harlow fabricated the whole thing, because he thought the fabrication would make a better story than what the scientist actually told him.]

  • All words have 900 definitions?

    Reader RC sent in an item from the Australian Law Journal that brings together several LL topics: the relations of language to  legal interpretation, computation, and nonstandard brain states.

    Here is the seal whose inter-word dots are discussed in the quoted transcript:

    Wikipedia explains what a “McKenzie Friend” is (and gives some background on DM). Beyond that, you’re on your own.


    [From ALJ 2010]:

    An obscure directions hearing

    Most cases in the trial divisions of the New South Wales Supreme Court are the subject of directions hearings a couple of months before the date fixed for trial to ensure that pleadings and affidavit evidence are properly finalised and the matter is actually ready for hearing.

    Usually these proceed with workmanlike efficiency. However, particularly where there is a litigant in person, odd things occur. The following is the edited transcript of a directions hearing last November before McClellan CJ at CL:

    Applicant F in person

    Mr B Hodgkinson SC for the Respondent

    HIS HONOUR: Mr F?

    M: Appearances from plenipotentiary judge, DM.

    HIS HONOUR: I am sorry?

    M: I am a plenipotentiary judge. My name is DM. I am from America

    HIS HONOUR: That may be, but what is your right to appear here?

    M: Excuse me?

    HIS HONOUR: What is your right to appear here?

    M: Under knowledge of a fraud and the right to stop and correct it. In other words, I have a –

    HIS HONOUR: Unless you are a legal practitioner in this State you can’t appear in this court.

    M: I have been already certified here in the New South Wales courts on four different occasions.

    HIS HONOUR: Do you currently hold a practising certificate in New South Wales?

    M: No. I just got in from America a couple of days ago. I am making an appearance –

    HIS HONOUR: Then, Mr F, you will have to appear. Mr F, you will have to appear for yourself.

    HIS HONOUR: You will have to appear for yourself. Do you understand? The gentleman with you does not have a right of audience in a New South Wales court. Do you understand?

    APPLICANT: I do also understand that my friend is a judge who can appear in any court anywhere in the world.

    HIS HONOUR: Not in New South Wales, I am sorry.

    APPLICANT: But New South Wales is also in the same planet.

    HIS HONOUR: That’s true, but we have a statute in New South Wales which controls legal practitioners who can appear in the Supreme Court and they have to be admitted to appear. Do you understand?

    … [Later]

    APPLICANT: You see, it is my understanding that Judge DM has been appearing in a number of hearings in this country and even yesterday and technically speaking there is nothing to stop him because he is a judge of the world court and he can step in any court and rule. That’s my understanding. So I would appreciate if you can kindly consider what you said previously and allow judge DM talk on my behalf and he has very important things to say as I understand.

    HIS HONOUR: He cannot appear without a practising certificate as a lawyer.

    HIS HONOUR (to DM): I need evidence that you have a practising certificate in New South Wales. However, you are entitled, if the court grants you leave, to have a friend –

    M: McKenzie Friend, yes. I have been a McKenzie Friend, both here and in New Zealand.

    HIS HONOUR: Well, that might be right, but you can’t come here and act as a lawyer. Do you understand?

    M: No, I never have. I have always been a McKenzie Friend here and in New Zealand. I didn’t say I was a lawyer. I said I was here to assist him.

    HIS HONOUR: You gave me the understanding that you thought you had a right of appearance, or at least –

    M: A right as a McKenzie Friend because I have pertinent information that’s relevant to this court.

    HIS HONOUR: Just a minute. Mr Hodgkinson, what do you say?

    HODGKINSON: Your Honour, frankly, I am in the dark. I don’t know what the relevant information is. I haven’t heard of this gentleman before. I am not sure that I could make any sensible submission. I think the best thing, if there is an application to be made, is to allow Mr M to appear as a McKenzie Friend, then the basis for the application without amplification ought be made to the court. That would at least allow us to compute the reasoning and take an informed view.

    HIS HONOUR: What I will do, Mr Hodgkinson, is I will allow Mr M to speak this morning, but confine the leave I grant to this morning. Do you understand?

    M: Yes.

    M: The paperwork in this case goes back twelve years, as you well know, and I saw the file brought in, it’s about four inches thick. The syntax, and I am the judge in 1988 who wrote the mathematical interface on all 5,000 languages proving that language is a linear equation in algebra certifying that all words have 900 definitions through this mathematical algebraic formula and over the course of the past 21 years have developed an accuracy level in the syntaxing of language sentence structure to prove the correct sentence structure communication syntax language is required in a court system.

    Now, the seal behind you which advertises the Crown’s seal and jurisdiction of this court uses the correct syntax. That is why you have the dots. Now, the dots between the words are prepositional phrases. There’s only two places where dots as allowed as a syntax prepositional phrase to certify the value of each word and that is on money, coinage and on seals. When you created, when your Government created the seal they used the correct sentence structure, they used the correct syntax and they are advertising that you have the correct syntax and knowledge of it.

    I have looked at the paperwork for the past twelve years and both the doctor and the State in one hundred percent of every single sentence you have got in that folder is modified with adverbs and adjectives and there is not one legal sentence or a prepositional phrase to certify the value of any word so, therefore, the facts of the case have been have been muddled since this case started twelve years ago. The necessity of having the accuracy of a fact in a court, if you are not in a fact you have not committed perjury. And Bernie Madhoff, who you would know has just walked away from Wall Street with $69 billion, was prosecuted under the fictitious conveyance of language of title 18.1001.

    Now, this law, title 18.1001, is required on all 250 countries’ passports. In other words, fraudulent conveyance. The title 15 chapter 2(b) section 78FF carries a $25 million fine to modify language to extort money from a private citizen from a corporation. This gentleman represents corporation and every single document he has filed has been modified with adverbs and adjectives. So if you are going to modify a fact and change it to something that is not what the true definition of that word is you have got a babble of information in front of you. Now, I know that when we communicate, you and I – you’ve got a mess.

    [Further discussion ensued and the directions hearing was adjourned for some weeks to allow the applicant to apply to amend his Statement of Claim.]

    M: Your Honour, can I leave my book with you?

    HIS HONOUR: Yes, you may.

  • Buzzword correlations

    I haven’t had a chance to do any analysis of last night’s SOTU address, but Nate Silver has some interesting observations about the matrix of word-count comparisons to other such addresses over recent decades.
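    A matrix of that general kind is easy to compute for yourself. Here is a minimal Python sketch, comparing texts by cosine similarity over their word counts; the speech texts below are hypothetical stand-ins, and this is my own toy reconstruction rather than Silver’s actual method:

```python
from collections import Counter
from math import sqrt

def cosine_similarity(text_a, text_b):
    """Cosine similarity between the word-count vectors of two texts."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical stand-ins for actual address transcripts.
speeches = {
    "2009": "jobs jobs economy health care reform",
    "2010": "jobs economy small business energy",
    "1999": "surplus social security medicare",
}

# Pairwise similarity matrix over all speeches.
matrix = {(x, y): cosine_similarity(tx, ty)
          for x, tx in speeches.items()
          for y, ty in speeches.items()}
```

    With real transcripts substituted for the toy strings, the diagonal is 1.0 and the off-diagonal cells measure how much two addresses share vocabulary, weighted by word frequency.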

    [Update: and more here from Jamie Pennebaker.]

    [Update 2: discussion of computational models of standing ovations by Dan Katz, including a link to a NetLogo applet.]

    [Update 3: for a discussion of the actual content, see James Fallows in the Atlantic.]

    [Update 4, taking the prize — the Daily Show.]

  • Pragmatics as comedy

    The theory of Speech Acts gives us a couple of dozen descriptive categories for the things people do with words and phrases. The theory of Dialog Acts gives us a couple of dozen descriptive categories focusing specifically on the things people do to a conversation with words and phrases. Rhetorical Structure Theory (RST) and its various competitors give us a couple of dozen descriptive categories for the ways people use relations between words and phrases in framing an argument or telling a story. There are several other descriptive systems for discourse structures, such as the one used by the Penn Discourse Treebank.

    Discourse analysis using such categories, though often insightful, is rarely funny. But you can make people laugh by caricaturing a text or conversation through self-referential descriptions of discourse functions and relations, abstracted away from specific content.  I can think of two specific examples of this, though I’m sure that I’ve seen others over the years.

    What brought this to mind was Chris Clarke’s recent blog post, “This is the title of a typical incendiary blog post”, 1/24/2010. It starts this way:

    This sentence contains a provocative statement that attracts the readers’ attention, but really only has very little to do with the topic of the blog post. This sentence claims to follow logically from the first sentence, though the connection is actually rather tenuous. This sentence claims that very few people are willing to admit the obvious inference of the last two sentences, with an implication that the reader is not one of those very few people. This sentence expresses the unwillingness of the writer to be silenced despite going against the popular wisdom. This sentence is a sort of drum roll, preparing the reader for the shocking truth to be contained in the next sentence.

    This sentence contains the thesis of the blog post, a trite and obvious statement cast as a dazzling and controversial insight.

    Each sentence gives an abstract description of its function in managing the interaction between reader and writer, and often also a description of its relationship to preceding or following material. These are the same sorts of things provided by theories of dialog acts and rhetorical structures and so on, except that Clarke’s descriptions are much finer-grained, e.g. not “preparation”, but “a … drum roll preparing the reader for the shocking truth to be contained in the next sentence”; not “statement” but “a trite and obvious statement cast as a dazzling and controversial insight”.

    An older instance of the same sort of thing, long a favorite of mine, is this sketch from the Neo-Futurists’ show Too Much Light Makes the Baby Go Blind:

    (Transcript here.)

    The sketch starts like this:

    He:  Statement.
         Statement.
         Statement.
         Question?
    She: Agreement.
    He:  Reassuring statement.
         Confident statement.
         Confident statement.
         Overconfident statement.
    She: Question?
    He:  Elaborate defensive excuse.
    She: Half-hearted agreement.

    Again, part of the fun is the elaboration of dialogic categories — a bit later in the script, we get items like “Self-assured agreement as denial”,  “Extremely exaggerated elucidation”, and “Attempted condescending conclusive statement”. But in this case, the comedic effect depends crucially on what the actors add in their performance.  The script is not particularly funny to read, at least in my opinion, but the sketch is a hoot to listen to.

    In John Cleese’s doubletalk neuroscience lecture, embedded below, the discourse caricature is almost entirely embodied in the performance. It’s clear that he’s explaining something, and it all seems to fit together somehow, but I don’t think that we’re getting a clear picture of relations like exemplification, concession, generalization and so on.  Still, Cleese convinces us that there is a structure there, somehow implicit in the discourse particles, prosody, gesture, and posture:

    I suspect that linguistic theories of discourse and dialog would be better if they gave us a language for characterizing interpersonal and rhetorical functions at the level of Clarke’s caricature of a blog post, or the Neo-Futurists’ caricature of a failed flirtation. And Cleese’s performative abstraction of professorial rhetoric may expose some similar opportunities for theories of prosodic interpretation.

    [Update — in addition to the several other examples cited in the comments below, there’s also Spamalot’s “A Song Like This”.]

  • Noah’s Arch?

    Today’s Non Sequitur:


    Alas for the (otherwise clever) joke, this is not a very likely confusion for speakers of American English. We can estimate exactly how (un-) likely it is, other things equal, from this confusion matrix given in Anne Cutler et al., “Patterns of English phoneme confusions by native and non-native listeners”, Journal of the Acoustical Society of America 116(6), 2004:

    Even at 0 dB SNR, final American-English /tʃ/ was heard as /k/ only 0.4% of the time by native speakers.

    (Dutch listeners in this experiment apparently never made that particular error at all, because the relevant cell of their confusion matrix — look at the paper to find it — is blank.)
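    For concreteness, a figure like that 0.4% is just a row-normalized cell of a confusion matrix: the fraction of trials with a given stimulus on which listeners gave a particular response. A minimal sketch, with made-up counts (not Cutler et al.’s data):

```python
import numpy as np

# Hypothetical response counts, for illustration only.
# Rows: stimulus phoneme; columns: response phoneme.
phonemes = ["tʃ", "k", "t"]
counts = np.array([
    [240,   1,   9],   # stimulus /tʃ/
    [  5, 230,  15],   # stimulus /k/
    [  8,  12, 230],   # stimulus /t/
])

# Row-normalize: each cell becomes P(response | stimulus).
rates = counts / counts.sum(axis=1, keepdims=True)

# e.g. rate of hearing /tʃ/ as /k/:
chk = rates[phonemes.index("tʃ"), phonemes.index("k")]
```

    In this toy table, 1 of 250 /tʃ/ trials drew a /k/ response, so the corresponding cell comes out at 0.004, i.e. 0.4%.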

    Of course, the original message would presumably have been in proto-Afroasiatic, or Sumerian, or something, where by the laws of chance, the two words were probably not even as close as ark and arch are.

  • The East Asian Heartland and its Bronze Age Connections

    That’s the title of Victor Mair’s talk tomorrow [Wednesday 1/27] afternoon, 5:00-6:30, in the Rainey Auditorium at the University of Pennsylvania Museum of Archaeology and Anthropology.  So if you’re in the Philadelphia area, and you’re a fan of Victor’s LL posts, or of his work with the Xinjiang mummies, or of his many books, or you’re just interested in Bronze Age Asia, come to 3260 South Street at 5:00 for a treat.

  • Magnetic fields

    Several readers have written to suggest LL coverage of the latest viral site, “Sleep Talkin’ Man”. So if you’re one of the half-dozen netizens who haven’t yet browsed this compendium of oneirophonic entertainment, by all means do so now.

    I haven’t written about this because I don’t have much to say, except that it’s interesting how interested people are in such things. In some Elysian bistro, André Breton and Philippe Soupault are doubtless kicking themselves for being born too early to publish in the t-shirt and coffee-mug market:

    En « logicien passionné de l’irrationnel », Breton est alerté par les phrases involontaires qui se forment dans le demi-sommeil ; tout illogiques, gratuites, absurdes même qu’elles soient, elles n’en constituent pas moins des « éléments poétiques de premier ordre » …

    [As a “logician with a passion for the irrational”, Breton was alert to the involuntary phrases that form in half-sleep; however illogical, gratuitous, even absurd they may be, they nonetheless constitute “poetic elements of the first order” …]

  • Modal deafness

    The business about musical modality and emotion reminds me of an amazing unpublished experimental result.  At least, it’s amazing if it’s true; and I think it probably is.


    Thirty years ago or so, microcomputers were just being invented, and most of them didn’t have audio I/O, and none of them had much in the way of software for creation and manipulation of sounds.  So researchers interested in audio analysis or synthesis wrote their own programs — usually in Fortran or assembly language — on suitably-equipped “minicomputers”, which despite the prefix mini- were rather large and expensive devices.  Both the hardware and the programming skills were hard to come by, and so when I was first at Bell Labs, the word got around that I was willing and able to help people make stimuli for acoustic perception experiments.

    At one point, a grad student in psychology at Yale got the idea to see whether the phenomenon of “categorical perception” applied to tones in the context of musical chords. His basic idea was to create a continuum of stimuli from (say) a major triad to a minor triad, with the middle note moving in steps of (say) a tenth of a semitone from a minor third to a major third relative to the root, and then to compare discrimination and classification accuracy along this continuum. He asked for help, and so I made him a suitable set of stimuli (using Max Mathews’ MUSIC program on Peter Denes’s DDP-224), and sent him happily back to New Haven.
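    What took a MUSIC program on a DDP-224 then is a few lines of code now. Here is a rough sketch of such a continuum using plain sine-wave tones; the sample rate, duration, and 220 Hz root are illustrative choices of mine, not a reconstruction of the original stimuli:

```python
import numpy as np

SR = 16000  # sample rate in Hz (an assumption, for illustration)

def tone(freq, dur=0.5):
    """A sine tone of the given frequency and duration."""
    t = np.arange(int(SR * dur)) / SR
    return np.sin(2 * np.pi * freq * t)

def triad(root_hz, third_semitones, dur=0.5):
    """Root-position triad: root, a variable third, and a perfect fifth."""
    freqs = [root_hz,
             root_hz * 2 ** (third_semitones / 12),  # the moving middle note
             root_hz * 2 ** (7 / 12)]                # perfect fifth
    return sum(tone(f, dur) for f in freqs) / 3

# Continuum from a minor third (3 semitones above the root) to a major
# third (4 semitones), in steps of a tenth of a semitone: 11 stimuli.
continuum = [triad(220.0, 3 + k / 10) for k in range(11)]
```

    The endpoints of `continuum` are the minor and major triads; the interior stimuli are the ambiguous steps that a categorical-perception design needs.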

    But a couple of weeks later, he was back with bad news.  To screen his subjects, he’d run them first in a simple ABX discrimination task on the two end-points of the continuum. These subjects were the usual undergrad psychology students, further selected as having normal hearing and being especially interested in music. However, most of them did surprisingly badly, on a task that should have been trivial, and about a third of them performed at chance levels. So he figured that we must have screwed up the stimuli.

    But analysis on the computer showed that the chords had the pitches that they were supposed to have.  So we figured maybe it was some problem with the digital instrument I used — not enough higher harmonics, chords too long or too short, excessively regular phase relationships, something. We tried lots of alternatives, including chords played on a regular acoustic piano. Unfortunately, the result didn’t change. It seems that this is a surprisingly hard task, at least for some people.

    Just to clarify what’s going on here, in the simplest case you’re asked to tell whether two triads in root position — two major, two minor, or one of each in either order — are the same or different.  You don’t need to know the terminology, you don’t need to identify which chords belong to which categories, you just need to tell whether two short, adjacent sounds are the same or different. In a simple same-different presentation, we’re talking about things like this:

    versus this:

    I was frankly incredulous.

    How could someone who enjoys listening to music not be able to hear that? It’s a sort of masking effect, apparently, because the same subjects performed perfectly if the middle tones were presented independent of the other two notes in the triad:

    A post-doc then in residence at Bell Labs was equally incredulous when I told her about this result. But when she listened to the stimuli, she turned out to be one of those who couldn’t reliably hear the difference. She insisted that the stimuli must be faulty, despite all the checks and re-checks.

    So we found a piano in a decent state of tune, and ran a quick test using pairs of chords played live. Same result — it was trivial for her to discriminate two piano notes in isolation a semi-tone apart, but when the same notes were played as the middle note of a triad in root position, her discrimination performance was at or near chance.

    She was embarrassed and annoyed, and we agreed that this was all bizarre and weird.  She had taken piano lessons as a child, she could sing well, she had a subscription to the symphony, she enjoyed listening to recorded music… And never mind all the happy-sad stuff, the different chords in question often play completely different roles in the syntax of tonal harmony.  Was it really true that she couldn’t perceive the tonal structures of the music that she loved to listen to? Was it all just percussion to her?

    Neither of us could believe that, and so I tried something else. I embedded the same (major or minor) triads in a stereotypical I-IV-V-I cadential sequence, where the (flatted or unflatted) third is the leading tone of the V chord:

    Suddenly, the difference was crystal clear to her.  Never mind discrimination, she could unerringly identify the valid cadences. This is a classic example of “release from masking” — the difference between the two notes is trivially perceived in isolation; then the difference is masked when the same two notes are embedded in a chord; and then the difference is trivially perceived again when three additional chords are added in sequence.

    I made up some new stimuli and showed them to the Yale grad student. He was interested, but (as I recall) his advisor thought this was a distraction — at best an echo of gestalt principles that were rather out of fashion in those days — and so I don’t think any of this stuff was ever published.

    But one of the things that I learned by checking the references in the Bowling et al. paper is that a similar result had been published decades before I was born, in Christian Paul Heinlein, “The affective characteristics of the major and minor modes in music”, Comparative Psychology 8(2):101-142, 1928. Heinlein performed

    … a group of preliminary tests designed for the purpose of ascertaining degree of difficulty in discrimination between major and minor chords when presented in (a) the tonic open position with the root repeated in the soprano, and (b) in the tonic open position with root doubled in the bass and the fundamental triad Third introduced in the soprano. […]

    In each of these forms, two pair-types are included. The first type represents a pair in which the chords are identical in their tonal structure: the second type represents a pair in which the chords differ from each other through the alteration of the triad Third by a semi-tone ascent or descent. Such alteration transforms a major chord into a minor chord, or a minor chord into a major chord, either chord being founded upon the same fundamental as root.

    Note that both of these chord-structures ought to be perceptually easier than the triads that we used, because in the first type, the third is further in pitch from the other notes in the chords,

    and in the second type, the third also appears as the highest note (“soprano”) in the chord:

    Still, according to Heinlein,

    The results, although evidencing marked individual differences in response, emphasize one fact; namely, that both trained and untrained subjects find it easier to discriminate between two chords in the inverted position [with the third as the highest note] than in the tonic position [where the third is an inner voice].

    And the overall level of performance in “major-minor modal discrimination” was consistent with the results from the Yale undergrads. Here’s the relevant table of results from Heinlein’s paper, with the percentage scores for “major-minor modal discrimination” in column (D):

    Note that 7 out of his 30 subjects performed at or below 60% on this task, and that several of the musically-trained subjects performed in the 60-80% range.

  • Tone collections and affective reactions

    A few days ago, I pointed to a recent paper arguing that “major and minor tone collections elicit different affective reactions because their spectra are similar to the spectra of voiced speech uttered in different emotional states” (Daniel L. Bowling et al., “Major and minor music compared to excited and subdued speech”, Journal of the Acoustical Society of America, 127(1): 491–503, January 2010).

    The argument in this paper has a nice rhetorical shape: the authors use a new form of quantitative analysis to explain the psycho-physiological substrate of a generally-accepted cultural association.  But in this case, both sides of the explanation strike me as having some very odd properties. In this post, I’ll try to explain what struck me as strange in their characterization of the cultural association between “tone collections” and “affective reactions”. At some point in the future, I’ll return to their quantitative analysis of music and speech.
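    The general shape of such a comparison, matching the magnitude spectrum of one kind of signal against another, can be sketched as follows. To be clear, this is my own toy version of a spectral comparison, not Bowling et al.’s actual analysis pipeline:

```python
import numpy as np

def magnitude_spectrum(signal):
    """Normalized FFT magnitude spectrum of a 1-D signal."""
    spec = np.abs(np.fft.rfft(signal))
    total = spec.sum()
    return spec / total if total else spec

def spectral_similarity(a, b):
    """Correlation between the magnitude spectra of two equal-length signals."""
    return float(np.corrcoef(magnitude_spectrum(a),
                             magnitude_spectrum(b))[0, 1])
```

    Feeding in, say, a synthesized tone collection and a recorded vowel of the same length would give one crude number per pairing; the paper’s claim is, in effect, that numbers of this general sort pattern together for major scales and excited speech, and for minor scales and subdued speech.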

    They express the scale-affect association this way:

    Other things being equal (e.g., intensity, tempo, and rhythm), music using the intervals of the major scale tends to be perceived as relatively excited, happy, bright, or martial, whereas music using minor scale intervals tends to be perceived as more subdued, sad, dark, or wistful.

    And they cite authorities from Zarlino 1571 to Burkholder et al. 2005 in support of this view.  But their emotional terminology involves a wide range of different psychological dimensions, which they collapse, without discussion, into just one.

    Consider their first two oppositions: excited vs. subdued and happy vs. sad. One of these is a dimension of arousal, while the other is a dimension of emotional valence or polarity — and both in music and in life, these two dimensions are independent if not orthogonal.

    Someone who’s happy can be in a subdued state of calm relaxation, rather than being excited and ebullient; and someone who’s unhappy can be in an excited state of panic, fear, grief, or rage, as opposed to being subdued and depressed.

    And in the kinds of music under discussion, melody and harmony are used congruently with other musical dimensions to achieve the composer’s affective goals.  One of the least “subdued” and “wistful” pieces of music ever written is this one:

    Mozart here sets to music a hymn about the day of judgment, the day of wrath, the day the earth will be burnt to ashes. The music’s relatively “excited” and even “martial” character surely does depend on “other things” like “intensity, tempo, and rhythm” — but the fact that it’s in D minor doesn’t make it even a little bit more “subdued” or “wistful” than it would have been if Mozart had written it in D major instead.

    It’s fair to say that the mood of this music is “dark” — but that’s a matter of emotional valence or polarity, not physiological arousal.

    A bit later in the same work comes some music that is genuinely subdued and even a little wistful:

    And again, the fact that it’s in B-flat major doesn’t make it any more “excited” or “martial” than it would have been if it were in a minor key.

    I don’t know any systematic survey, but it seems to me that this is typical. To the extent that “major and minor tone collections elicit different affective reactions”, it’s not in terms of arousal-dimensions like excited/subdued, but rather in terms of psychological dimensions like those sometimes called “valence” or “polarity”. And crucially, these are independent of level of arousal.

    As a result, it seems odd to me that Bowling et al. chose to look at the differences between “excited” and “subdued” speech as the putative source of the affective associations of major and minor scales.

Most descriptions of emotional states — whether by psychologists, drama coaches, or novelists — involve a much richer ontology of affect. For example, Rainer Banse & Klaus Scherer (“Acoustic profiles in vocal emotion expression“, Journal of Personality and Social Psychology 70:614-636, 1996) distinguish 14 emotional categories: anxiety, boredom, cold anger, contempt, despair, disgust, elation, happiness, hot anger, interest, panic, pride, sadness, and shame.

    I mention this paper because it’s in Bowling et al.’s bibliography — and it was also the model that I used in 2001 to design a corpus of acted emotional speech (published as Emotional Prosody Speech and Transcripts). We also used a range of dominant-to-submissive attitudes, crossed with a range of degrees of vocal effort created by distance to the interlocutor.   I’ll be curious to see how Bowling et al.’s quantitative measures fare when applied to material like this.

    [Update — for another example, in response to Brett’s comment below, listen to the start of the second and third movements of Brahms’ piano trio in B:

    Or the largo from L’inverno vs. the presto from L’estate in Vivaldi’s Four Seasons:

The point is not that minor-key pieces are always vigorous and major-key pieces always languorous, just that there’s no reliable association in the other direction.]

  • Look it up

    We’ve been reprinting or linking to Rob Balder’s PartiallyClips comics for more than six years. (I believe that this was our first link, and this was our second one.) Now Rob has handed the strip over to Tim Crist, who started right out with a lexicographical theme:

    (As always, click on the image for a larger version.)

    The OED does have the citation 1881 PALGRAVE Vis. Eng. 159 Rival intolerants each ‘gainst other flamed.

  • Ask Language Log: “On point”

    From reader JHG:

    Is it just my perception, or is the phrase “on-point” in the midst of a meteoric rise in usage and a de facto expansion in meaning?  I have heard it used repeatedly as a general term of approval or commendation rather than to mean only “germane.” We may not have the next “cool” on our hands, but I think there’s a trend here. Any way to validate one listener’s perceptions with some research?

    This is not something that I’ve noticed, but JHG might be right, and we can test the “meteoric rise in usage” hypothesis with simple text searches.  This is a crude measure, since it doesn’t distinguish among the many uses of the word string “on point” — but there’s no reason to expect “meteoric” changes in the frequency of the ballet sense “on the tips of the toes”, or the military sense “posted at the head of an advancing column”, or the more complex derivations of the string, like “… SLR lenses are usually superior to those found on point-and-shoot and hybrid models…”.
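The crude measure can be sketched in a few lines of code. This is an illustrative assumption, not the interface actually used for the corpus queries below: the tokenizer, the sample text, and the normalization to a per-million-word rate are all stand-ins.

```python
import re

# A minimal sketch of the crude measure: count occurrences of a word
# string and normalize to a rate per million words. The tokenizer and
# the sample text below are illustrative assumptions.
def rate_per_million(text, phrase="on point"):
    words = re.findall(r"[A-Za-z']+", text.lower())
    hits = len(re.findall(r"\b" + re.escape(phrase) + r"\b", text.lower()))
    return hits / len(words) * 1_000_000

sample = "The brief was on point. " * 10 + "Other text here. " * 90
print(rate_per_million(sample))  # 10 hits in 320 words
```

As noted, a count like this lumps every sense of the string together; separating the “germane” sense from the ballet and military senses would require disambiguation, whether by hand or by classifier.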

    Let’s start with the five-year periods available by searching the COCA corpus:

    This does suggest a recent increase in frequency — but not exactly a “meteoric” one, since 0.89 uses per million words is still not all that common.  A decade-wise check of the Time Magazine corpus shows a much less coherent picture:

    It’s true that the 2000s are up over the 1990s, but the 1970s, 1950s and 1920s showed higher rates as well, and none of the rates are very high in absolute terms.

    Frequency counts in the Google News archive — which (I believe) conflates usage frequency with time-variation in archive size — confusingly show a peak in 2004:

And a year-by-year comparison of counts in three specific newspaper archives (New York Times, Guardian, Los Angeles Times) over the past decade looks pretty random, or at least pretty hard to square with an overall “meteoric rise” in the frequency of one of the senses of this phrase:

Year  NYT  Guardian  LAT
2009   62         9   50
2008   66        20   58
2007   46        11   49
2006   44        16   68
2005   36         9   42
2004   29         9   73
2003   28        15   46
2002   36         5   58
2001   38         9   96
2000   41         6   97
1999   40         6   80

Finally, a Blogpulse search over the past six months (the longest period available) shows an apparent decline in the percentage of blog posts using this phrase:

    All of this tends to invalidate JHG’s perception of a trend, at least one that would increase the overall frequency of the phrase “on point”.

    But there is one thing that may lead to an explanation of such an intuition.  I was surprised to find that the OED’s relevant entry for on point (glossed “relevant, apposite, accurate; ‘spot on’; (also) direct, focused”) has citations only back to 1993:

1993 National (Ottawa, Ont.) Nov.-Dec. 23/1 They should be on the lookout for seminars and publications on point, and make as many contacts in the industry as they can. 1994 Vibe Nov. 26/2 Much props to Kenji Jasper for knowing what real hip hop is all about. His review of the Boogiemonsters album..was right on point.

    I’m sure that the usage is older than that — and a few minutes search in the NYT archive uncovered (for example) David Margolick, “Patient’s Lawsuit Says Saving Life Ruined It“, 3/18/1990:

    “There is no case directly on point, but the case law suggests that if you save someone’s life you cannot be held liable,” said Deborah R. Lydon of Dinsmore & Shohl of Cincinnati, which represents the hospital.

    But still, it seems quite possible that this usage was rare outside of legal contexts until the mid-1990s or so, and has recently increased in relative frequency.  Thus in 1990, this was one of eleven uses of “on point” in the NYT archive (9%). In 2009, 17 out of 61 instances involved this sense (28%).  And if you don’t read the dance reviews, you’d see a 17-fold increase.

    The absolute frequency is still pretty low, but this is enough of a change to trigger the version of the frequency illusion where we perceive a major effect — something that happens “all the time”, in general or in the usage of a particular group — even when the actual frequencies involved are small, here less than one per million words.

    As for semantic bleaching to a “general term of approval or commendation”, I didn’t see any clear examples of this. But without mind-reading, it’s hard to be sure what someone meant in any individual case. Even if someone said or wrote something like “That carrot cake was really on point” (not that I saw anything like that), they might just mean that it “really hit the spot”, or was “really what was called for in that context”, and used on point as a quirky way to express the idea.

    [Update — OK, I reckoned that if anyone would push a metaphorical extension of rhetorical relevance, it would be fashion writers — and a quick NYT archive search combining “on point” with various fashion-related words turned up Cathy Horyn, “A Daring Stand at Rochas, Rare as a Paris Snowfall“, NYT, 3/3/2005:

    Against a digitalized backboard, and with digital logo belts, Karl Lagerfeld sent out a collection that was briskly on point. The key message was the coat, in wool, shearling and broadtail, with a high funnel collar and ties that wound twice at the waist and gave a different perspective to volume, and a certain toughness.

    As usual in fashion writing, the semantics are confusing.]

    [Note that given a copy of the New York Times Annotated Corpus, or some other large corpus with time stamps on documents spanning the past few decades, you could use automated sense disambiguation — or just plain old scholarly scrutiny — to track usage and meaning shifts of this general kind. There are many examples where we know roughly what happened when, but it would be nice to have some cases with a much finer-grained analysis.]

  • Manfred Schroeder

Manfred Schroeder died on Dec. 28, 2009, as I just learned.  He was a physicist specializing in acoustics, who worked at Bell Labs from 1954 to 1969, and then split his time between Göttingen and Bell Labs.  He carried forward the tradition of Harvey Fletcher, an accomplished physicist whose most important work was in the psychology of hearing. As Manfred jokingly pointed out to me when we first met in 1975, this is also the tradition of Gleb Vikentyevich Nerzhin, the mathematician in Solzhenitsyn’s The First Circle who seals his fate by choosing to work on psycho-acoustics rather than cryptography.

    Although you probably don’t know Manfred Schroeder’s name, his work on psycho-acoustics led to several innovations that have almost certainly affected your life. First, in the 1970s, he developed with Bishnu Atal and Joseph Hall the idea of perceptual coding, described as follows in the abstract for their joint paper “Optimizing digital speech coders by exploiting masking properties of the human ear”, Journal of the Acoustical Society of America, 66(6): 1647-1652, 1979:

    In any speech coding system that adds noise to the speech signal, the primary goal should not be to reduce the noise power as much as possible, but to make the noise inaudible or to minimize its subjective loudness. “Hiding” the noise under the signal spectrum is feasible because of human auditory masking: sounds whose spectrum falls near the masking threshold of another sound are either completely masked by the other sound or reduced in loudness. In speech coding applications, the “other sound” is, of course, the speech signal itself. In this paper we report new results of masking and loudness reduction of noise and describe the design principles of speech coding systems exploiting auditory masking.

    This simple and elegant idea is the basic design principle behind MP3 and AAC coding.  Manfred was involved in developing both of these, but in any case, the foundational idea of perceptual coding is largely due to him.

    The second innovation (“Code-Excited Linear Prediction”, or CELP) is a bit harder to understand. Many modern methods of digital acoustic analysis represent the sound spectrum in terms of “linear prediction”, where each successive output sample is modeled as a linear combination of the N previous output samples plus an error term.  (This is equivalent to modeling the signal in the frequency domain in terms of N/2 resonances or “poles”.) The basic idea of this sort of analysis was developed by Norbert Wiener in his work during WWII on radar-controlled anti-aircraft guns, and eventually published in a declassified form as Extrapolation, Interpolation and Smoothing of Stationary Time Series with Engineering Applications (1949).
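As a hedged illustration of that definition (a toy sketch, not Wiener’s derivation or any Bell Labs implementation), the predictor coefficients can be estimated by least squares over the lagged samples. The AR(2) signal below is an assumption chosen so that the model fits exactly and the error term nearly vanishes:

```python
import numpy as np

# Order-N linear prediction by least squares: each sample x[t] is
# modeled as a linear combination of the N previous samples plus
# an error term.
def lpc(x, order):
    # Design matrix of lagged samples: row t is [x[t-1], ..., x[t-order]].
    A = np.array([x[t - order:t][::-1] for t in range(order, len(x))])
    b = x[order:]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    return a

# A signal that is exactly autoregressive of order 2:
#   x[t] = 1.5*x[t-1] - 0.9*x[t-2]
x = np.zeros(200)
x[0], x[1] = 1.0, 0.5
for t in range(2, 200):
    x[t] = 1.5 * x[t-1] - 0.9 * x[t-2]

a = lpc(x, 2)  # recovers the coefficients (1.5, -0.9)
resid = x[2:] - np.array([a @ x[t-2:t][::-1] for t in range(2, 200)])
```

On real speech the residual doesn’t vanish, of course; coding that residual parsimoniously is exactly the problem the rest of this section is about.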

    Manfred also played a role in the early application of these ideas to speech analysis and synthesis. According to Bishnu Atal, “The History of Linear Prediction“, IEEE Signal Processing Magazine, 2006:

    … in 1966, I was one day in Manfred R. Schroeder’s office at Bell Labs when John Pierce brought a tape showing a new speech time compression system. Schroeder was not impressed. After listening to the tape, he said that there had to be a better way of compressing speech. Manfred mentioned the work in image coding by Chape Cutler at Bell Labs based on differential pulse code modulation (DPCM) technique, which was a simplified version of predictive coding. Our discussions that afternoon kept me thinking. Since my recently started Ph.D. thesis work focused on automatic speaker recognition, I hesitated to start a side project on speech compression at that time. Also, I had doubts whether I could add anything useful to this crowded field of research. However, Manfred’s remarks at our meeting made a deep impression.

    But this is not yet the second invention that I mentioned — LPC was developed more or less simultaneously in Japan by Itakura and Saito, and LPC itself would not have had such a significant impact on your life without another development, which didn’t come along for another two decades.

    The error term in linear prediction is also sometimes called an “innovation” term, since it represents the aspect of the signal not predicted by the model.  In signal coding applications, you can think of the innovation or error term as a “source” signal exciting an auto-regressive “filter”. If the full innovation sequence is transmitted, the signal is reconstructed perfectly, but on the other hand, no compression is achieved; so the trick is to code the innovation sequence as parsimoniously as possible.  As LPC (“linear predictive coding”) for speech was originally developed in the 1960s, the source signal in voiced speech is modeled as a quasi-periodic impulse train, representing the frequency and amplitude of the glottal source, and in unvoiced speech, the source is modeled as amplitude-modulated white noise. The good news is that this parametric source can be transmitted with very few bits; the bad news is that the result doesn’t sound very good.

    In the early 1980s, Manfred Schroeder and Bishnu Atal developed a different idea, described in their paper “Code-excited linear prediction (CELP): High-quality speech at very low bit rates“, ICASSP 1985.

    We describe in this paper a code-excited linear predictive coder in which the optimum innovation sequence is selected from a code book of stored sequences to optimize a given fidelity criterion.

The “fidelity criterion”, needless to say, is based on a perceptual distortion measure — and it turns out that random code-books do a pretty good job. CELP is now the most widely used form of speech coding, and in particular is the basis of all (?) digital cellular telephony.
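A toy version of that codebook search might look like the sketch below. To be clear about what’s simplified: it uses plain squared error rather than the perceptually weighted criterion, omits the gain term and adaptive codebook of a real CELP coder, and uses a fixed synthesis filter; only the core idea (exhaustive search of a random excitation codebook through an all-pole filter) is retained.

```python
import numpy as np

rng = np.random.default_rng(0)

# All-pole "synthesis filter": y[t] = e[t] + sum_k a[k] * y[t-k].
def synthesize(excitation, a):
    y = np.zeros(len(excitation))
    for t in range(len(excitation)):
        y[t] = excitation[t]
        for k, ak in enumerate(a, start=1):
            if t - k >= 0:
                y[t] += ak * y[t - k]
    return y

a = [1.5, -0.9]                           # fixed predictor coefficients
codebook = rng.standard_normal((64, 40))  # 64 random excitation vectors
target = synthesize(codebook[17], a)      # a frame generated by entry 17

# Exhaustive search: pick the entry whose synthesized output best
# matches the target frame (squared error stands in for the
# perceptual criterion).
errors = [np.sum((synthesize(c, a) - target) ** 2) for c in codebook]
best = int(np.argmin(errors))             # finds entry 17
```

Only the index of the winning entry (plus a gain) needs to be transmitted, which is where the very low bit rates come from.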

    If you read Manfred’s home page, which is still available at Göttingen, you’ll see that he made many other contributions, in areas from concert-hall acoustics to computer graphics.  Among his publications, my favorite is his book Number Theory in Science and Communication, and I think it might have been his favorite as well.  Certainly I never saw him as happy and excited as when he explained to me about his idea for quadratic-residue diffusors to solve the acoustic problem caused by modern concert-hall design, where relatively low height compared to width causes undesirable median-plane sound reflections.

    (I should mention here another small-world connection — Joe Hall, co-author of the original paper on perceptual coding, is Barbara Partee’s cousin.)

  • Mining a year of speech

    John Coleman was on the BBC Digital Planet program a couple of weeks ago, discussing a recently-awarded grant from the (British/American/Canadian) “Digging into Data” challenge.  The proposal was submitted under the title “Mining a Year of Speech”, and also involves the British Library Sound Archive, and some researchers at Penn, including Jiahong Yuan, Chris Cieri, and me.  An Oxford University press release is here.

    Last week, John was in Philadelphia, discussing plans for who’ll do what when.  On the U.K. side, the primary goal is to index the audio of the spoken part of the British National Corpus. On the U.S. side, we’ll be indexing a variety of other spoken materials, and working with our British partners on issues of pronunciation modeling across dialects, integration of diverse metadata from different sources, and approaches to web-based search and retrieval for various types of researchers.

    One of the things that I learned during John’s visit is that during his time at Bell Labs, before he took the job at Oxford, he occupied the office that I had used during my last few years there. And as it happens, one of the other awards in the Digging into Data challenge was to a group involving Mats Rooth at Cornell — and Mats, I believe, occupied the same office during the interval between John’s time there and mine.

    For an example of what can be done with this sort of text/audio alignment, take a look at the presentation on the oyez.org website of U.S. Supreme Court oral arguments (e.g. this one).  The techniques we’ll be using on the Digging into Data project were developed (mainly by Jiahong Yuan) for the SCOTUS application, under an NSF grant that just ended this past year.

  • Tonal relationships and emotional effects

    I’m a bit pressed for time this morning, so discuss among yourselves: Daniel L. Bowling et al., “Major and minor music compared to excited and subdued speech“, Journal of the Acoustical Society of America, 127(1): 491–503, January 2010.  The abstract:

    The affective impact of music arises from a variety of factors, including intensity, tempo, rhythm, and tonal relationships. The emotional coloring evoked by intensity, tempo, and rhythm appears to arise from association with the characteristics of human behavior in the corresponding condition; however, how and why particular tonal relationships in music convey distinct emotional effects are not clear. The hypothesis examined here is that major and minor tone collections elicit different affective reactions because their spectra are similar to the spectra of voiced speech uttered in different emotional states. To evaluate this possibility the spectra of the intervals that distinguish major and minor music were compared to the spectra of voiced segments in excited and subdued speech using fundamental frequency and frequency ratios as measures. Consistent with the hypothesis, the spectra of major intervals are more similar to spectra found in excited speech, whereas the spectra of particular minor intervals are more similar to the spectra of subdued speech. These results suggest that the characteristic affective impact of major and minor tone collections arises from associations routinely made between particular musical intervals and voiced speech.

    As always, comments are likely to be more interesting if you read the paper and figure out what they did before expressing an opinion. You might find it interesting to compare and contrast this work.

    [I’ll explain and discuss what they did in another post, when I have a spare 45 minutes or so to explain it.]

  • Ludicrous, even derogatory?

Here’s a case where English has it relatively easy. There’s been plenty of fuss over whether to retain actress or to use actor for females as well as males, whether to adopt new gender-neutral terms like chair and craft in place of chairman and craftsman, and so on. But most English words for social roles and titles are already linguistically gender-neutral: president, senator, minister, dean, secretary, teacher, boss, judge, lawyer, …

    In languages like Italian and Spanish, in contrast, nearly all such words are specified for grammatical gender, and their grammatical gender is usually interpreted sexually. Furthermore, the option to create gender-neutral replacements is linguistically unavailable — the only practical alternatives are to use one gender (usually masculine) as the default for both sexes, or to coin a new word for the marked sexual category (as in English chairwoman or househusband).

    This issue is discussed at length in Miren Gutierrez and Oriana Boselli, “Rejecting the Derogatory ‘Feminine’”, IPS,  12/26/2009. And what I learned from this article is that Italian and Spanish have dealt with the issue in strikingly different ways.

    When it comes to titles of importance, in Italian, you find yourself reading about “il ministro Mara Carfagna” – even if Carfagna, the minister of equality, is a woman. In contrast, in Spanish there is no option but ‘ministra’, ending in the feminine ‘a’.

    Both options seem to make a lot of people unhappy. The article says that “In Italian, most women prefer the masculine titles, because the feminine version (when it exists) is considered ludicrous, even derogatory”. But

    Politician Luisa Capelli, from L’Italia dei valori party (The Italy of Values), thinks that “leaving behind the supposed universal neutrality of the masculine form is an essential passage so that the feminine experience gets respect.”

    “It is not true that these feminine forms (for positions of power) do not exist in Italian: there are plenty of examples from feminists, linguists and semiologists who have made a number of proposals,” says Capelli. “You can say ‘avvocata’ (lawyer) and ‘ministra’, but nobody does. Although many of us use those words, we are ignored. To change the symbolic order is hard work that requires a consensus based on the profound convictions of people.”

    The Spanish, in contrast, have freely coined many feminine forms of terms for titles and roles, though some people think there need to be more of them:

    This apparently dull issue of feminine titles jumped to the front pages recently, when Bibiana Aido, Spain’s Minister of Equality, used the word ‘miembra’ (member) in public.

    What’s the big deal? The word doesn’t exist. Yet.

“In most personal nouns,” says [José Luis] Aliaga Jiménez [professor of Linguistics of the Universidad de Zaragoza], “there is a correlation between grammatical gender and the referential meaning of ‘sex’. It is a culturally significant correlation… All nouns referred to a person end up with a gender variation, sooner or later. And it is in that context that the words ‘miembra’, ‘testiga’ (witness) emerge, since, following the common rule in Spanish, the final ‘a’ is interpreted as belonging to the feminine.”

    ‘Testigo’ and ‘miembro’ are so far exceptions to the common rule and have no official feminine variation.

And apparently there is resistance in Spanish to the use of masculine forms for traditionally female jobs like azafato (“male flight attendant”), amo de casa (“househusband”) or niñero (“male nanny”).

    In contrast to both Italian and Spanish, the trend in English-speaking countries seems to be to avoid pairs of gendered terms (e.g. chairman/chairwoman or actor/actress) in favor of a single neutral term, whether it’s a neologism (chair) or a term that previously had gendered associations (actor).  This apparently reflects the attitude attributed to Italians in the article’s opening sentence:

    In Italian, most women prefer the masculine titles, because the feminine version (when it exists) is considered ludicrous, even derogatory.

    But English ends up with a solution that’s largely gender-neutral, and this is not an option in languages like Italian and Spanish, because grammatical gender is too firmly established in noun morphology and in syntactic agreement phenomena.

    The article’s way of explaining this involves an amusing misunderstanding (or malapropism):

    Modern English lacks grammatical gender, whereas Indo-European languages, including Italian and Spanish, can distinguish between masculine and feminine.

    Perhaps the state of linguistic education in Italy and Spain is just as bad as in the U.S.

    [Hat tip: Randy McDonald]

  • That queerest of all the queer things

    Alexander Graham Bell patented the telephone in 1876.  In 1880, Mark Twain wrote a comic sketch about how strange it is to overhear one end of a telephone conversation.  A century and a quarter later, people have gotten used to the experience with landlines — or at least stopped complaining about it — but we still tend to perceive overheard cell phone conversations in public places as more distracting and annoying than real-life conversations, even when the real-life conversations are just as loud or even louder.

    Now there’s increasing experimental evidence that phone conversations are not only cognitively more troublesome than in-person conversations for outsiders, they’re more difficult for participants as well. One recent study interviewed pedestrians who had just walked along a 375-foot path across an open plaza where a clown on a unicycle was riding around. Only 2 out of 24 cell phone users reported seeing the clown. In comparison, the unicycling clown was reported by 12 out of 21 people involved in real-life conversations as they walked the same path.

    The abstract of Ira Hyman et al., “Did You See the Unicycling Clown? Inattentional Blindness while Walking and Talking on a Cell Phone“, Applied Cognitive Psychology 2009:

    We investigated the effects of divided attention during walking. Individuals were classified based on whether they were walking while talking on a cell phone, listening to an MP3 player, walking without any electronics or walking in a pair. In the first study, we found that cell phone users walked more slowly, changed directions more frequently, and were less likely to acknowledge other people than individuals in the other conditions. In the second study, we found that cell phone users were less likely to notice an unusual activity along their walking route (a unicycling clown). Cell phone usage may cause inattentional blindness even during a simple activity that should require few cognitive resources.

    Actually, the conversational pairs in their Experiment 1 seem to have taken longer to cross the square than the cell phone users, and to have stopped more often, though they did less weaving and direction changing:

    Here’s the clown from Experiment 2:

    A description of the method and the numbers of subjects in each category:

    Observations were collected of individuals walking along the same diagonal path used in Experiment 1. Observers were positioned at both ends of this path and attempted to interview all individuals who exited Red Square classifiable under any of the same four conditions. We interviewed 151 individuals (67 classified as males, 84 as females; 139 classified as college-age, 10 as older and 2 as unsure). Of these individuals, 78 were single individuals without electronics, 24 were cell phone users, 28 were music player users and 21 were part of a pair (for pairs, observers interviewed the closest individual).

    And here’s the table of results:

    The real-life conversational pairs in fact noticed the clown more often than the unoccupied single pedestrians, perhaps because they walked more slowly and stopped more often, and perhaps because if one participant noticed the clown, he or she pointed it out to the other.

    (The overall rates of clown-noticing were fairly low, apparently not because unicycling clowns are routine on the campus of Western Washington University, nor because WWU students are unusually inattentive, but rather because the clown was a bit off to the side of the diagonal path across the plaza where the experiment was conducted.)

    As the study’s authors observe, there are quite a few alternative explanations for the effect, and more than one of them may be true:

    One possible explanation for the effect of cell phone conversations is that they cause a particular drain on attentional resources and thus lead to inattentional blindness. Fougnie and Marois (2007) argued that divided attention tasks that drain central executive processing capacity are more likely to produce inattentional blindness. Similarly, Strayer and Johnston (2001) found that cell phone conversations were particularly disruptive in comparison to listening to books on tape, a radio broadcast, or shadowing using a cell phone. Something about the conversation seems to limit attentional capacity. We, like other researchers, found that having a conversation with a person next to you did not increase inattentional blindness (Crudell et al., 2005; Hunton & Rose, 2005; Strayer & Drews, 2007). Similarly, Klauer et al. (2006) reported that a passenger in the adjacent seat decreased accident rates whereas a cell phone conversation increased accident rates. Strayer and Drews (2007) suggested that in-vehicle conversations are less problematic because the driver and the passenger can more easily coordinate the conversation with the driving demands. Klauer et al. (2006) suggested that two observers increases the odds of noticing important aspects of the driving environment and our finding that pairs were more likely to see the clown is consistent with this point. Of course there are other differences between conversing with someone who is present and someone via a cell phone that may contribute to inattentional blindness. For example, the degraded sound quality of cell phone conversations may require more attentional resources to process both the content and the precise timing of turn-taking. In addition, an absent partner may cause an individual to engage visual processing to imagine the other person. This additional visual interference may increase inattentional blindness.

    [And yes, people are quite capable of extraordinary levels of inattentional blindness even when no conversations of any sort are involved.]

    [Update — I should add, since I’ve made this point in other cases, that this is a relatively small study, carried out in a specific social setting, involving a limited sample of subjects. And the fact that the results are consistent with expectations (including mine) is perhaps a reason to be more skeptical, not less. (As Dick Hamming used to say, we should always beware of finding what we’re looking for.) Still, I’ll take this study as increasing the plausibility of the view that cell phone conversation tends to soak up attentional resources in a way that face-to-face conversation doesn’t.]

  • Interpretation in the legal academy

    [This is a guest post by Neal Goldfarb.]

While the Linguistic Society of America was holding its annual meeting last weekend in Baltimore, the nation’s law professors assembled in New Orleans for the annual meeting of the Association of American Law Schools. We know that some of the linguists talked about law; did any of the law professors talk about linguistics?

    There were certainly issues at the AALS meetings about which linguists might have had interesting things to say. At the session on Law and Interpretation, the topic was “Interdisciplinary Interpretation.” According to the program, “The panel brings together experts from a range of disciplines—psychology, economics, and political science—to provide insight and discussion about what each of those fields specifically, and the interdisciplinary approach more broadly, can bring to legal interpretation.” Economics, but not linguistics?

    Meanwhile, the session on Constitutional Law dealt with the distinction between interpretation and construction, an issue that is most strongly associated these days with Larry Solum. (Solum operates Legal Theory Blog, and is not to be confused with Larry Solan.)

    Solum describes interpretation as “the activity of determining the linguistic meaning (or semantic content) of a legal text” and construction as “the activity of translating the semantic content of a legal text into legal rules.” He says, “We interpret the meaning of a text, and then we construct legal rules to help us apply the text to concrete fact situations.” (Link.) And this: “The correctness of an interpretation depends on linguistic facts (about patterns of usage) and contextual facts (about the circumstances of utterance).”

    Judging by his use of expressions like “semantic content” and “patterns of usage,” Solum seems to be aware of—and open to—ideas from linguistics. And that impression is reinforced by this passage from his book-length paper, Semantic Originalism:

    Making the semantic turn in the theory of constitutional meaning will require an excursus beyond the disciplinary boundaries of the academic study of law (as practiced in the law schools and departments of political science) and into territory within the domain of the philosophy of language and linguistics. The fundamental premise of the move beyond law is that constitutional semantics can only be sensibly understood as applied philosophy of language (or applied linguistic theory).

    Solum’s work has gotten a lot of attention from legal academics, and I think that a big reason for that is his language-centric focus. For instance, here’s Tulane law professor Stephen Griffin at Balkinization:

    Solum’s long article may prove to be a turning point, although I suspect he faces many hurdles in winning acceptance for his central contention that the foundations of originalism are firmly rooted in a semantic, factual, and non-normative account of the meaning of the Constitution.…

    In the dance of arguments on originalism, Solum is right to point out that the debate has been almost entirely normative…. Solum’s theory, in my view only hinted at in work by other scholars (and thus quite original), changes the focus to how meaning is determined as a fact.

    One aspect of Solum’s work that as far as I know hasn’t been discussed—but should be—is his assumption that there is a clear dividing line between semantics and pragmatics. Solum accepts the mainstream view that there is such a thing as a strictly linguistic meaning that is distinct from the message that the speaker intends to communicate. But that distinction has come under attack from several directions in recent years, and it would be interesting to work out how Solum’s arguments would have to be changed to accommodate a different approach to semantics.