In the Immunolingo
Environment, we are studying the adaptive immune system through the multidisciplinary lens of immunology, linguistics, machine learning, and statistics.
In collaboration with Dr. Aniello De Santo and Dr. Hossep Dolatian, we study the syntax-prosody mapping with tools from mathematical linguistics, such as logical transductions. Our work on ditransitive typology highlights the parts of the syntax-prosody interface that are so far underspecified and require further empirical study.
In collaboration with Dr. So Young Lee, we investigate how well pre-trained language models such as BERT fare at learning NPI licensing constraints. We are especially interested in whether language models exhibit the same psycholinguistic behavior as humans, such as the grammatical illusion effects found by Xiang et al. (2009), and in whether they are capable of learning cross-linguistic NPI licensing constraints.
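For a rough sense of what such a probe can look like, here is a minimal sketch (not our actual experimental pipeline; the model name, sentence pair, and target NPI are assumptions for illustration) that compares the log-probability a masked language model assigns to an NPI in a licensed versus an unlicensed context:

```python
# Minimal sketch of an NPI-licensing probe with a masked language model.
# The model name and example sentences are illustrative placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL_NAME = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME).eval()

def npi_logprob(sentence: str, npi: str = "ever") -> float:
    """Log-probability of the NPI at the [MASK] position."""
    inputs = tokenizer(sentence, return_tensors="pt")
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits
    log_probs = torch.log_softmax(logits[0, mask_pos], dim=-1)
    return log_probs[tokenizer.convert_tokens_to_ids(npi)].item()

licensed   = "No student has [MASK] been to Mars."   # licensor 'no' present
unlicensed = "The student has [MASK] been to Mars."  # no licensor
print(npi_logprob(licensed) - npi_logprob(unlicensed))
```

A clearly positive gap between the two scores would suggest the model is sensitive to the presence of a licensor; comparing such gaps across constructions and languages is the kind of measurement this line of work relies on.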
In my dissertation, I investigate the nature of NPI licensing and Negative Concord Items. My work aims to give an empirically and computationally informed characterization that can account for typological differences. The typological study examines English and Hungarian in depth, but also touches on other languages from the literature, through the lens of a quantifier-based theory as advocated by Dr. Anastasia Giannakidou. For the computational part of the work, I aim to determine the necessary computational complexity of these dependencies, in particular whether they fall within the subregular region of the Chomsky hierarchy. This computational work draws heavily on the work of Dr. Jeffrey Heinz and Dr. Thomas Graf.
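As a toy illustration of what a subregular-style analysis can look like (a simplification assumed for this sketch, not the account defended in the dissertation), the snippet below implements a tier-based strictly local check: only negation and NPI tags are projected onto a tier, and an NPI that appears before any licensor on that tier is rejected.

```python
# Toy tier-based strictly local (TSL) style check for NPI licensing.
# The tag alphabet and the constraint are illustrative simplifications.

TIER = {"neg", "npi"}  # only these tags are projected onto the tier

def tier_projection(tagged_sentence):
    """Keep just the tier symbols, preserving their order."""
    return [tag for tag in tagged_sentence if tag in TIER]

def npi_licensed(tagged_sentence):
    """Every 'npi' on the tier must be preceded by at least one 'neg'."""
    seen_neg = False
    for tag in tier_projection(tagged_sentence):
        if tag == "neg":
            seen_neg = True
        elif tag == "npi" and not seen_neg:
            return False
    return True

# "Nobody has ever left" vs. "*Somebody has ever left", as tag sequences
print(npi_licensed(["neg", "aux", "npi", "verb"]))  # True
print(npi_licensed(["det", "aux", "npi", "verb"]))  # False
```

Since the tier contains only licensors and NPIs, this constraint amounts to banning a tier-initial NPI, which keeps the toy dependency within the tier-based strictly local class.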
This is a collaboration between Johns Hopkins University, the Cooperative Robotics Lab and the Pediatric Mobility Lab & Design Studio at the University of Delaware, and Dr. Jeffrey Heinz and his students in the Linguistics Departments at the University of Delaware and Stony Brook University to develop a socially interactive robot capable of learning from its experiences and modifying its behavior to improve physical therapy outcomes for young children with motor disabilities (e.g., Down syndrome). We are currently testing different robot behaviors in a controlled physical therapy environment that includes multiple play stations monitored by video and infrared cameras. In parallel, we are evaluating and developing machine learning algorithms informed by existing work in computational linguistics and applying them to robot learning. This research is supported by National Institutes of Health grant 1R01HD087133-01.
In collaboration with Dr. Karoliina Lohiniva, we are investigating a series of questions related to multiple wh-questions and additives, comparing Finnish, Hungarian, and English. At its core, the project uses Kotek's framework of Q particles to analyze the syntax and semantics of multiple wh-questions in these languages with and without additives. The questions we seek to answer are: What does the additive particle do in Finnish and Hungarian wh-questions, syntactically, semantically, and pragmatically? What are the possible meanings and combinations of wh-, d-linked wh-, and aggressively non-d-linked wh-words in multiple wh-questions?
In collaboration with Jinwoo Jo, we are investigating the nature and typology of the productive causative morpheme in Hungarian, Japanese, and Korean. Contrary to Horvath and Siloni's (2011) split-lexicalist theory, we argue that this type of causativization is always syntactic: the causative morpheme occupies the head position of CauseP. We extend Pylkkänen's (2008) typology by proposing that Hungarian causatives select for an active VoiceP, Korean causatives for VoiceP, and Japanese causatives for TP.
This project investigates three different negative word orders in Hungarian declarative sentences: pre-verbal negation, focus negation, and verbal particle+verb negation. In the literature, pre-verbal negation and focus negation have both been analyzed as a NegP projecting over the clause. I argue that focus negation patterns with verbal particle+verb negation in that both are actually constituent negation. I base my argument on coordination facts, where focus negation behaves similarly to constituent negation in English, and on Klima's (1964) tests. I hypothesize that constituent negation must always be focused in languages that have it.
In collaboration with Amanda Payne and Jeffrey Heinz, we evaluate the computational complexity of Correspondence Theory, which explicitly recognizes the correspondence between underlying and surface elements. We have three distinct results. First, we show that the GEN function, assuming Correspondence Theory, is not definable in Monadic Second-Order (MSO) logic. On the other hand, we show that the set of output candidates for a given input is definable in MSO logic, and we suspect that it is definable in First-Order (FO) logic as well. Lastly, we find that typical underlying representation (UR) to surface representation (SR) mappings can be described directly with FO logic, without recourse to optimization.
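To illustrate the last point schematically (a toy process chosen for exposition, not one of the mappings analyzed in the paper), a rule like word-final devoicing can be written as an FO interpretation that defines surface predicates directly over the input string, with no candidate set or optimization step:

```latex
% Toy example (illustrative only): word-final devoicing as an FO interpretation.
% x \lhd y : y is the immediate successor of x; primed predicates label the SR.
\begin{align*}
  \mathrm{voiceless}'(x) &:= \mathrm{voiceless}(x) \;\lor\;
      \bigl(\mathrm{voiced}(x) \land \mathrm{obstruent}(x) \land \neg\exists y\,[x \lhd y]\bigr)\\
  \mathrm{voiced}'(x)    &:= \mathrm{voiced}(x) \land
      \bigl(\neg\mathrm{obstruent}(x) \lor \exists y\,[x \lhd y]\bigr)
\end{align*}
```

Every other property of an input position is simply copied to the surface; the primed predicates rewrite only the voicing of a word-final obstruent, which is what makes such UR-to-SR maps directly FO-describable.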
In collaboration with Justin Rill, we investigated complex sentences in Balinese. Based on evidence from adverb positioning, we concluded that what looks like simple argument raising in Balinese is in fact remnant movement.