Homophony in Cross-Linguistic Perspective: Homophones are words that share a phonological form but have unrelated meanings (like ‘night’ and ‘knight’ in English; those that also share a spelling, like (financial) ‘bank’ and (river) ‘bank’, are known specifically as homonyms). Homophones are employed widely across languages, possibly as a result of pressure to maximize communicative efficiency (Piantadosi et al., 2012), but homophones also require disambiguation and may cause difficulty in learning and processing. In this project, I examine (1) whether there is a cross-linguistic difference in the abundance of homophones resulting from differences in the phonological resources available in each language; and (2) whether such a difference leads to differences in homophone processing both in and out of context. Here are some studies that I have been working on:
- A corpus analysis of homophone abundance across seven languages (American English, Egyptian Arabic, German, Japanese, Korean, Mandarin, Spanish)
- A computational simulation (Nphon model) of lexicons to see if homophones are avoided in word creation
- An estimation of possible syllables in seven languages based on their phonological systems
- An auditory lexical decision experiment that compares homophone density effects in American English, Mandarin, and Japanese
- A cross-modal repetition priming experiment that compares homophony resolution in American English and Japanese in neutral and biasing contexts
Some preliminary results, as well as references to many great existing works on homophone effects in different languages, can be found here. A manuscript with more elaborate analyses is in preparation.
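At its core, the corpus analysis above amounts to grouping a lexicon by phonological form and counting the forms that map onto more than one lemma. A minimal sketch of that computation, assuming a toy lexicon of (lemma, phonological form) pairs (the real analyses run over full pronunciation lexicons for each of the seven languages):

```python
from collections import defaultdict

def homophone_rate(lexicon):
    """Share of lemmas whose phonological form is shared
    with at least one other lemma."""
    by_form = defaultdict(set)
    for lemma, form in lexicon:
        by_form[form].add(lemma)
    homophonous = sum(len(lemmas) for lemmas in by_form.values()
                      if len(lemmas) > 1)
    return homophonous / len({lemma for lemma, _ in lexicon})

# Toy English-like lexicon in broad IPA (illustrative only)
toy = [
    ("night", "naɪt"), ("knight", "naɪt"),
    ("sun", "sʌn"), ("son", "sʌn"),
    ("tree", "triː"), ("dog", "dɔɡ"),
]
print(homophone_rate(toy))  # 4 of 6 lemmas share a form → ≈0.67
```

Cross-linguistic comparison then reduces to running this count over comparable lexicons and deciding, per language, what counts as a distinct lemma and a distinct phonological form (e.g. whether tone or vowel length is contrastive).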
Orthographic Interference in Auditory Homophone Processing: Homophones do not always differ in their orthographic representations: some share a spelling (homographic homophones) while others do not. In visual word recognition, some studies reported very different results for these two kinds of homophones (e.g. Pexman & Lupker, 1999), while others found similar phenomena for both (e.g. Rodd et al., 2002). These conflicting results have led to a theoretical debate between orthographic competition and semantic competition as explanations of the typically reported homophone disadvantage in visual word processing. This debate is also relevant to attempts to reconcile the reported homophone advantage in Mandarin (e.g. Ziegler et al., 2000) and in Japanese (e.g. Hino et al., 2013), since both languages employ logographic writing systems in which orthographic representations are connected more closely to meanings than to sounds, as opposed to alphabetic languages.
I choose to examine these effects in auditory processing because homographic and non-homographic homophones are alike in that both can be considered mappings between one sound and multiple meanings. While orthographic representations are activated obligatorily in visual word processing, we may be able to manipulate the extent to which they are involved in auditory word processing. I am running:
- A series of auditory lexical decision experiments, each followed by a different additional task, to manipulate the involvement of orthographic representations
Word Length Expectations in CDS Segmentation: Building homophones is not the only way to ease the pressure of limited phonological resources. Languages can also build longer words to enrich the possible combinations of syllables. Lew-Williams and Saffran (2012) have shown that 10-month-old infants are sensitive to word length after brief exposure to word exemplars, and can use this knowledge, together with statistical cues, to segment continuous speech input. Explicit information about word length is only available in individually uttered words, but implicit information about word length, i.e., phonological richness, is available in all kinds of speech. I therefore want to take a step further and ask: are kids sensitive to the size of the phoneme inventory and the complexity of the phonotactics in their input, can they form expectations about word length on that basis, and can they use those expectations to segment continuous speech? This major question can be decomposed into several more specific ones:
- Is there a reliable correlation between phonological richness and word length across languages?
- When adults/kids are exposed to continuous speech, does an increase in the number of syllable types in the speech, with other factors held constant, make them more inclined to segment the speech into shorter “words”?
If the answer to the second question is positive, I will proceed to examine more specific factors that influence the richness of phonological resources:
- When adults/kids are exposed to continuous speech, does an increase in the number of phoneme types in the speech, with other factors held constant, make them more inclined to segment the speech into shorter “words”?
- When adults/kids are exposed to continuous speech, does an increase in the complexity of the phonotactics of the speech, with other factors held constant, make them more inclined to segment the speech into shorter “words”?
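The logic linking phonological richness to word length can be made concrete with a back-of-the-envelope calculation: a language with S phonotactically legal syllables needs words of at least ⌈log_S N⌉ syllables to support N distinct forms, so a richer syllable inventory permits a shorter average word. A sketch with hypothetical inventory sizes (the numbers are placeholders for illustration, not estimates for any real language):

```python
import math

def min_word_length(n_forms, n_syllables):
    """Smallest word length L (in syllables) such that
    n_syllables ** L >= n_forms."""
    return math.ceil(math.log(n_forms) / math.log(n_syllables))

# Hypothetical syllable inventories of increasing richness,
# each asked to support a 50,000-form lexicon
for s in (100, 400, 2000, 10000):
    print(s, "syllables ->", min_word_length(50_000, s), "syllables/word")
```

This is of course only a lower bound: real lexicons are far from a maximally efficient packing, which is exactly why the correlation in the first question needs to be tested empirically.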
Homophone Learning in Different Languages: Given the evidence of substantial differences in the abundance of homophones across languages (e.g. Japanese 11% vs. English 3%), we might expect this difference to appear in child-directed speech as well. Several questions can be raised based on this speculation:
- Do Japanese-learning children encounter more homophones than, say, English-learning children? (Can start from a lookup in CHILDES)
- Is there a difference in the developmental trajectory of understanding and producing homophones, or, more broadly, of accepting deviations from one-to-one mappings between forms and meanings? (Analyses of production data in CHILDES + language learning experiments)
- Are some of the cues that can help children learn homophones (e.g. Conwell, 2017; Dautriche et al., 2018) made more available or more exaggerated in languages that have more homophones?
- How do children distinguish homophony (same sound, unrelated meanings) from polysemy (same sound, related senses, here is a wonderful paper on children’s learning of polysemy: Floyd & Goldberg, 2020)?