February 6, 2020 | 3:30PM
Saieh Hall, Room 203
Jonathan Brennan, University of Michigan
The compositional brain: clues about computation and implementation
The hierarchical syntax of human language sets it apart from other communicative and cognitive systems, yet there is significant debate about the role this syntax plays in how the brain produces and understands language in real time. Further, evidence about "where" or "when" the brain processes syntax doesn't answer "how" neural circuits carry out these computations. I discuss recent work that aims to (partially!) address each of these questions.
I first discuss efforts to make a computationally rigorous connection between theorized models of grammar and observable neural signals. We bridge this gap using Recurrent Neural Network Grammars (RNNGs), which are generative models of (tree, string) pairs that use neural networks to drive derivational choices. Parsing with them yields a variety of incremental complexity metrics, which we evaluate against human electroencephalography signals recorded while participants simply listen to an audiobook story. Such evaluations show a reasonable fit between incremental changes in the model's state and dynamic neural activity. Further, model comparisons against ablated forms of the RNNG isolate the contribution of hierarchical composition to capturing brain dynamics.
In a second part, I discuss evidence that hierarchical composition may be carried out in part by neural oscillations operating at 1 to 4 cycles per second. These oscillations emerge for structured, but not unstructured, linguistic input, and they covary in a systematic (but complicated) way with the speed and comprehensibility of sentence stimuli.
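As a small illustration of what "oscillations at 1 to 4 cycles per second" means in signal terms, the sketch below builds a synthetic signal containing a 2 Hz component plus a faster 20 Hz component, and measures spectral power in the 1-4 Hz band with a naive DFT. The sampling rate, frequencies, and signal are invented for illustration and are not from the talk.

```python
import math
import cmath

fs = 100                              # sampling rate in Hz (illustrative)
t = [n / fs for n in range(fs * 2)]   # 2 seconds of samples

# Synthetic signal: a 2 Hz oscillation (inside the 1-4 Hz range the talk
# implicates in composition) plus a smaller 20 Hz component.
x = [math.sin(2 * math.pi * 2 * ti) + 0.5 * math.sin(2 * math.pi * 20 * ti)
     for ti in t]

def band_power(signal, fs, lo, hi):
    """Total spectral power between lo and hi Hz via a naive DFT."""
    n = len(signal)
    total = 0.0
    for k in range(n // 2):
        freq = k * fs / n
        if lo <= freq <= hi:
            coeff = sum(signal[m] * cmath.exp(-2j * math.pi * k * m / n)
                        for m in range(n))
            total += abs(coeff) ** 2
    return total

slow_power = band_power(x, fs, 1, 4)    # captures the 2 Hz component
fast_power = band_power(x, fs, 18, 22)  # captures the 20 Hz component
```

With real EEG one would use an optimized FFT and compare such band-limited power (or phase coherence) between structured and unstructured speech conditions; this toy version only shows how energy at 1-4 cycles per second is isolated.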