Sentence processing

How does the language processing system make efficient use of multiple sources of information to produce a sufficiently rich representation? What information may go underspecified? How does grammatical knowledge constrain representations considered during online sentence processing?
Correlate retrieval in sluicing: Harris 2015, 2018, work in progress
Implicit prosody in processing: Harris et al. 2016, work in progress
Russian serial verbs: work in progress
Focus-sensitive coordination: Harris 2016, Harris & Carlson 2016, 2018

For more details, please refer to this overview of my research agenda or my vita. Ongoing research is also described on the Language Processing Lab page.

Recent Publications

More Publications

(2018). Acceptability (and other) studies on the syntax/semantics interface. In Grant Goodall (Ed.), Cambridge Handbook of Experimental Syntax.

(2018). Locality and Alternatives on Demand: Resolving discourse-linked wh-phrases in sluiced structures. In Grammatical Approaches to Language Processing - Essays in Honor of Lyn Frazier.


(2018). Preference for single events guides perception in Russian: A phonemic restoration study. In Proceedings of the 54th Annual Meeting of the Chicago Linguistic Society.


(2018). Information structure preferences in focus-sensitive ellipsis: How defaults persist. Language & Speech, 61, 480-512.


(2018). Zero-Adjective contrast in much-less ellipsis: the advantage for parallel syntax. Language, Cognition, & Neuroscience, 1, 77-97.


(2016). Implicit prosody pulls its weight: Recovery from garden path sentences. In Proceedings of Speech Prosody 8, 207-211.


(2016). Keep it local (and final): Remnant preferences in 'let alone' ellipsis. The Quarterly Journal of Experimental Psychology, 69, 1278-1301.


(2016). Processing let alone coordination in silent reading. Lingua, 169, 70-94.


(2015). Structure modulates similarity-based interference in sluicing: An eye tracking study. Frontiers in Psychology, 6, e1839.


(2014). Who 'else' but Sarah? In Connectedness: Papers in Celebration of Sarah VanWagenen. UCLA Working Papers in Linguistics, 175-187.



More Talks

L2 Adaptation to Unreliable Prosody During Structural Analysis: A Visual World Study
Nov 3, 2018
Pupil dilation indexes closure mismatches between prosody and syntax
Oct 11, 2018
Correlative adverbs mark not only scope but also contrast: Corpus and eye tracking data
Mar 11, 2018
Cue reliability affects anticipatory use of prosody in processing globally ambiguous sentences
Mar 11, 2018



Courses for 2018-2019

Fall 2018: Pragmatic Theory with Jessica Rett

Course description:

Pragmatic research addresses a notoriously broad domain. In this course, we emphasize the theoretical components of pragmatics research, focusing on topics that highlight the internal structure of pragmatic mechanisms, as well as the ways in which pragmatic information is embedded within the architecture of the language faculty. We also introduce methods and ongoing developments in experimental pragmatics, an area that has become a driving force in shaping research interests in the field.

Winter 2019: Linguistic Processing

Course description:

The core areas of psycholinguistics include language acquisition, language perception, language production, language comprehension, language and the brain, and language disorders and damage. This course emphasizes depth over breadth, and so we will not delve into all of these topics. Instead, we will focus on just two areas of research: mental representations and processing of lexical units, and sentence comprehension. We start with the basics of lexical access and decision, exploring various models of the processes involved. We then move to an overview of classic models of sentence processing, which vary according to a number of related properties such as the modularity/interactionism of information channels and the serialism/parallelism of processing. Finally, we discuss several topics in current and classical language research, including filler-gap dependencies, semantic processing, and sentence production.

Winter 2019: Language Processing

Course description:

Psycholinguistics is a relatively young, but rapidly growing, discipline that addresses how language might be realized as a component within the general cognitive system, and how language is comprehended, produced, and represented in memory. It is an interdisciplinary effort, drawing on research and techniques from linguistics, psychology, neuroscience, and computer science, and utilizes a variety of methods to investigate the underlying representations and mechanisms that are involved in linguistic computations.

This course concentrates on (i) uncovering and characterizing the subsystems that account for linguistic performance, (ii) exploring how such subsystems interact, and whether they interact within a fixed order, and (iii) investigating how the major linguistic subsystems relate to more general cognitive mechanisms.

Spring 2019: Research Methods

Course description:

Linguistic research has always placed a high premium on data in various forms: native-speaker introspection, fieldwork, corpora, judgment studies, reaction time studies, eye movements, and electrophysiology, to name a few. As the empirical base of linguistics has evolved, community-wide standards for data collection and analysis have become increasingly important. This course provides a practical, hands-on introduction to research design and analysis, with an emphasis on experimental data collection, study design, and proper statistical analysis. Assuming no programming, statistics, or experimental background, the course will provide you with the necessary conceptual and practical tools for carrying out experimental research.

By the end of the course, you should be able to design an experiment that uses an appropriate method, minimizes confounds, and lends itself to appropriate statistical analysis techniques. Students will work in groups to design an experiment or corpus study on an issue relevant to their own research interests, to be presented at the end of the course.

Courses regularly taught at UCLA


  • LING 165C: Semantics I
  • LING 132: Language Processing


  • LING 207: Pragmatic Theory
  • LING 239: Research Design and Statistical Methods
  • LING 252: Topics in Semantics
    • Fall 2016: Focus in Meaning and Experimentation
  • LING 254: Topics in Linguistics
    • Winter 2015: Evaluating perspective in meaning and discourse
    • Fall 2017: Implicit prosody and sentence processing
  • LING 264: Psycholinguistics / Neurolinguistics Seminar
    • Every quarter - see schedule here


Eye tracking corpora and tools

Los Angeles Reading Corpus of Individual Differences

The Los Angeles Reading Corpus of Individual Differences (LARCID) is a corpus of natural reading and individual differences measures. The corpus is currently a feasibility pilot of eye tracking data collected from 15 readers. Five texts from public domain sources were included. In addition to the eye tracking measures, a battery of individual difference measures, along with basic demographic information, was collected in a separate session. Individual difference measures included the Rapid Automatized Naming, Reading Span, N-Back, and Raven’s Progressive Matrices tasks.

Pilot data, write up, and R-markdown files can be found on this Open Science Framework page. Comments welcome!


Robodoc

Robodoc is a Python program that automatically cleans eye tracking data of blinks and track losses. The new version improves usability and expands the command line options. Learn more about this handy code here.
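Robodoc itself is not reproduced here, but the core idea — dropping blink and track-loss samples, which typically surface as runs of zero pupil values, along with a small pad of neighboring samples — can be sketched as follows (the function name, parameter, and data layout are illustrative, not Robodoc's actual interface):

```python
def clean_blinks(samples, pad=2):
    """Drop samples whose pupil value is 0 (a blink or track loss),
    plus `pad` samples on either side of each such run.

    `samples` is a list of (timestamp, pupil_size) pairs.
    """
    bad = set()
    for i, (_, pupil) in enumerate(samples):
        if pupil == 0:
            # Mark the zero sample and its neighbors for removal.
            bad.update(range(max(0, i - pad), min(len(samples), i + pad + 1)))
    return [s for i, s in enumerate(samples) if i not in bad]
```

Padding around the zero run matters because pupil estimates immediately before and after a blink are usually distorted, not just the fully occluded samples.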

Corpus tools

Embedded appositives corpus

The Embedded Appositives Corpus is an annotated collection of 278 sentences containing appositives embedded syntactically in the complement of propositional attitude predicates and verbs of saying, drawn from 177 million words of novels, newspaper articles, and TV transcripts. It is intended to inform work on appositives, conventional implicatures, and textual entailment, and includes a JavaScript interface, an XML corpus, and a short write-up describing the data and their theoretical relevance.

NPR Corpus scraper

The NPR Corpus scraper is a collection of Python programs built to crawl NPR and download transcripts in XML format, along with links to the audio files of radio interviews, saved into a directory. It can be tweaked to crawl other news sites. Note: this tool requires a working knowledge of Python. To be posted with instructions soon!
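While the crawler itself is not posted yet, the transcript-to-XML conversion step can be sketched without any network access. The tag names and the crude regex-based paragraph extraction below are illustrative only — a real crawler would use a proper HTML parser:

```python
import re
import xml.etree.ElementTree as ET

def transcript_to_xml(html, title):
    """Convert the <p> paragraphs of a transcript page into a small XML document."""
    # Crude extraction for illustration; real pages need a real HTML parser.
    paragraphs = re.findall(r"<p>(.*?)</p>", html, flags=re.S)
    root = ET.Element("transcript", {"title": title})
    for p in paragraphs:
        ET.SubElement(root, "para").text = p.strip()
    return ET.tostring(root, encoding="unicode")
```

Keeping the download step and the XML-conversion step separate also makes it easier to re-point the crawler at other news sites, as the write-up suggests.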

Linguist List job scraper

The script downloads the Linguist List job posting archives for the specified years. After some reformatting, it removes all but tenure-track job postings and categorizes the jobs according to keywords listed in the posting. The method for categorization largely follows previous efforts; see the Language Log postings on the 2008 data, 2009 data, and 2009-2012 data.
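The categorization step amounts to matching each posting against a per-subfield keyword list. A minimal sketch of that idea (the keyword map here is a made-up example, not the list the script actually uses):

```python
def categorize(posting, keyword_map):
    """Return the set of subfields whose keywords appear in a posting."""
    text = posting.lower()
    return {field for field, words in keyword_map.items()
            if any(w in text for w in words)}

# Illustrative keywords only; the real script's lists are more extensive.
KEYWORDS = {
    "syntax": ["syntax", "syntactic"],
    "semantics": ["semantics"],
    "phonology": ["phonology", "phonetics"],
}
```

Note that a posting can land in several categories at once, which is why the function returns a set rather than a single label.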

A fully executable R Markdown tutorial is hosted on GitHub. To clone it with git into a folder called scrape, run this command from the terminal, substituting the repository's URL:

git clone <repository-url> scrape

Odds and ends


Simple to the point of trivial, this Ruby program writes results from Linger's .dat files to a single file, automatically appending the experiment name along with the number of subjects run. It is primarily for command-line phobics. If Ruby is installed on Windows, simply place the script in the same folder as your .dat files, then double click its icon to run. It also works on Mac and Linux.
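The distributed tool is a Ruby script; for readers who just want to see what it does, the same idea fits in a few lines of Python (function and data names are illustrative, and the real script reads the .dat files from disk rather than taking them as arguments):

```python
def merge_dat(results_by_subject, experiment):
    """Combine per-subject Linger result lines into one block,
    headed by the experiment name and the number of subjects run."""
    header = f"# {experiment}: {len(results_by_subject)} subjects"
    lines = [header]
    for subject in sorted(results_by_subject):
        lines.extend(results_by_subject[subject])
    return "\n".join(lines)
```

In the real tool, the merged block would be written to a single output file next to the .dat files.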