Spring CogSci Fest 2017

Tuesday, May 2nd, 4:00pm
Swift 107
Research Presentations and Reception
For all undergraduates, graduate students, and faculty.

Come find out what your peers have been up to: exciting presentations from graduate and undergraduate students in Cognitive Science!

Presenting Students:

  • Role of Child-Directed Speech Constructions in Early Language: An Analysis of the CHILDES Corpus
    • Construction Grammar, the linguistic theory that pairings of form and meaning are the structural basis of language, is currently gaining traction in the academic community. One of its main tenets is that syntactic constructions themselves provide semantic information: meanings generalized from the particular verbs with which the structures are commonly associated. To investigate this theory, this research used quantitative methods to annotate, organize, and analyze large samples of child-directed speech for such constructions. Drawing on two resources, the CHILDES corpus and the FrameNet project, over 10,000 lines of transcribed child-directed speech were analyzed and coded for target verbs appearing in ditransitive, caused-motion, and intransitive motion constructions. Results of the corpus analysis showed that the caused-motion and intransitive motion constructions did in fact correlate with a higher frequency of certain verbs, as expected under a constructionist account.
  • Quantification and Visual Event Segmentation
    • Linguistic theory tells us that if a word describes events, then a comparison with 'more' involving this word should be about number; otherwise, it should not be about number. In psychology, we think that people categorize their visual experience directly in terms of EVENTS or non-EVENTS. We thus expect that vision can directly impact how 'more' is understood. This project takes a step towards evaluating this prediction by investigating how people use subtle visual cues to segment dynamic displays, and thus under what conditions visual information is categorized in terms like 'EVENT' or 'non-EVENT'.
  • A multi-method, family study of visual processing in autism
    • An essential part of perception involves integrating local features in our environment to create a coherent whole, a process that is atypical in autism spectrum disorder (ASD) and perhaps among relatives. It is unknown how these atypicalities are linked to underlying neurobiology and whether they index genetic liability to ASD. My project uses a multi-method approach within a family-genetic design to explore the behavioral and neural basis of visual perception in individuals with ASD and their biological parents. Using eye-tracking and electrophysiological neural measures, this project characterizes visual perception and explores how it may relate to the clinical-behavioral phenotypes observed in ASD and in a subset of parents with the broad autism phenotype, i.e., subtle traits that mirror the core symptoms of ASD. This work may help unpack a still poorly understood component of the complex ASD phenotype, and serve as groundwork for larger-scale investigations to identify endophenotypes for genetic, neurobiological, and treatment studies.
  • Great Expectations: weighting expectancy when processing degraded speech
    • Full access to the acoustic complexity of speech allows for efficient and effortless processing. In day-to-day conversations, however, speech is commonly degraded by extrinsic factors such as the environment or background noise. Listeners can compensate for this degradation with semantic expectancy: the ability to predict speech from surrounding linguistic information during spoken language processing. This project examines how acoustic degradation from background noise influences how listeners use expectancy and how this processing affects speech perception. This presentation will discuss how spectral degradation modulates the use of expectations, and how a reliance on top-down cognitive processing might come with the cost of inaccurate speech recognition.
  • Neural correlates of linking communicative signals and cognition in 6-month-olds
    • At 3 and 4 months, listening to human and nonhuman primate vocalizations supports infant cognition. By 6 months, only human speech confers this advantageous effect. Backward human speech does not have this effect at any age. Here we use EEG to reveal the neural correlates of listening to these three types of sounds. Our results indicate that for 6-month-olds, primate vocalizations and backward speech elicit robust P300s in right parietal regions, while lemur vocalizations elicit enhanced gamma activation (40-60 Hz) in right frontal regions. These results suggest that there may be multiple routes by which a signal can support infant cognition.