From Science Daily:

We are constantly bombarded with linguistic input, but our brains cannot retain long strings of it. How does the brain make sense of this ongoing deluge of sound?

In a new paper in Behavioral and Brain Sciences, “The Now-or-Never Bottleneck: A Fundamental Constraint on Language,” Cornell psychology professor Morten Christiansen and Nick Chater (University of Warwick, U.K.) argue that language processing, acquisition and evolution, as well as the structure of language itself, are all profoundly shaped by what they call the “now-or-never bottleneck”: fundamental limitations on sensory and cognitive memory. They propose the brain’s language processing system overcomes this bottleneck by processing linguistic input immediately, before it is obliterated by later input and lost forever.

The brain does this, say the authors, by incrementally “chunking” linguistic material into a hierarchy of increasingly abstract representational formats: from phonemes to syllables to words, phrases, and beyond. And when we speak, the sequence of chunking operations is reversed.
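To make the idea concrete, here is a minimal sketch of that incremental “chunk-and-pass” process. Everything specific in it is invented for illustration: the paper specifies no algorithm, and the toy phoneme, syllable, and word lexicons are hypothetical. The point it shows is that each level emits a chunk the moment a pattern completes and immediately discards the raw lower-level detail, so nothing has to be held in memory for long.

```python
# A toy sketch of incremental "chunk-and-pass" processing (an assumption-laden
# illustration; Christiansen and Chater do not specify an algorithm). Units are
# grouped into a chunk as soon as a known pattern completes, and the chunk is
# passed up to the next level while the short-lived buffer is cleared.

SYLLABLE_LEXICON = {  # hypothetical phoneme-to-syllable patterns
    ("b", "a"): "ba",
    ("n", "a"): "na",
}
WORD_LEXICON = {  # hypothetical syllable-to-word patterns
    ("ba", "na", "na"): "banana",
}

def chunk_stream(units, lexicon):
    """Emit a chunk the moment the buffered units match a known pattern,
    then clear the buffer instead of retaining the raw input."""
    buffer = []
    for unit in units:
        buffer.append(unit)
        if tuple(buffer) in lexicon:
            yield lexicon[tuple(buffer)]  # pass the chunk to the next level
            buffer.clear()                # low-level detail is now discarded

phonemes = ["b", "a", "n", "a", "n", "a"]            # incoming sound stream
syllables = chunk_stream(phonemes, SYLLABLE_LEXICON)  # level 1: syllables
words = chunk_stream(syllables, WORD_LEXICON)         # level 2: words
print(list(words))  # ['banana']
```

Because the two levels are composed as lazy generators, a phoneme is recoded into a syllable, and a syllable into a word, as soon as each pattern completes, mirroring the claim that input must be recoded before later input obliterates it.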

“Our argument has fundamental theoretical implications for many disciplines, because it requires a radical shift in explanatory focus when approaching language,” says Christiansen.

The now-or-never bottleneck is not specific to language, Christiansen and Chater write, but arises from general principles of perceptuo-motor processing and memory. Studies show that much acoustic information is lost after just 50 milliseconds, with the auditory trace all but gone after 100 milliseconds. Similarly, and of relevance for sign language, we can retain visual information for only about 60 to 70 milliseconds. Unless the perceived information is processed immediately, it is lost.