Friday, November 30, 2007

Some questions on the ideas put forward by y'all

Hello everyone,

I was re-reading everyone's abstracts and thought that there were some interesting coincidences and intriguing mismatches.

For example, I might be misreading Susan's abstract, but she seems to predict that, with a 1-to-1 mapping from articulation to acoustics, the magnitude of the articulatory change yields basically two levels of structure: one corresponding to slower, larger movements and one to more abrupt changes. Only the latter would, on my reading of her abstract, provide acoustic cues. This matches Ying and Jeff's finding that an automatic feature extractor operating on the raw acoustic signal is good at finding manner features but not so good with place features. Conversely, motor cues are most useful for place of articulation but not so great with manner.

Susan also suggests that it is not only the level of fine acoustic cues that affects speech perception, but also the coarser level (that of the slower, larger movements). Might this be related to Jessica, Robert and Matt (JRM)'s learner of phonotactic constraints and segments? If I get their point right, both infants and adults would rely on acoustic cues in context to discover/perceive phonological categories, and this process may go from a holistic chunk (perhaps syllable-based?) to phone-sized elements. In relation to this, I was thinking that while one might expect articulatory experience to aid the formation of prosodic categories (like syllabic/word templates), it was at first surprising to me that it is also important for learning segments. Now I see that this fits really well with JRM's proposal that we may learn phonotactics and segments at the same time, don't you think?

Further, this learning strategy (of phonotactics+segments) must be flexible and still active in adulthood, given that Lisa D's adults are able to learn a minimal pair of words that relies on a non-native phonotactic contrast. Her results also suggest, though, that the presence of a minimal pair (which, through semantic cues, forces listeners to focus on the relevant acoustic cues) is a necessary condition for learning in adults. On the other hand, the presence of minimal pairs cannot be a necessary condition in toddlers (cf. Pater et al. 2004), as Ying and Jeff point out, while already in childhood semantic information is actually helpful, according to Lisa G's results. Is there a developmental trend here? Might it be driven by a difference in processing abilities? (If so, would we predict that adults in a high-processing-load task would behave like Pater's toddlers and have a *harder* time with the minimal pair than with the non-minimal pair?)

Another question that follows from Susan's hypothesis that these two levels interact in perception is how language experience affects this interaction. Grant's paper seems to suggest that this interaction is mediated by specific experience: mere experience with one of the categories (AmEng speakers must have experience with retroflex sounds, and even with the retroflex fricative in words like 'shrill') is not enough; speakers must have been exposed to the particular contrast that relies on the acoustic/higher-level features under study. This is particularly important for early category formation: Chandan's corpus analysis looks at the correlation of pitch and VOT, which could be interpreted as respective examples of the larger and finer categories proposed by Susan. If IDS takes advantage of infants' apparent predilection for pitch, will its voiced segments be marked primarily by pitch? However, given young infants' very limited articulatory abilities, we'd expect them to rely primarily on acoustic-feature-based contrasts, which (now following Ying and Jeff) would predict a primacy of VOT. A minimal sketch of the kind of pitch/VOT correlation I have in mind follows below.
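(For concreteness, here is a minimal sketch of the sort of token-level pitch/VOT correlation one might compute over a corpus of stops. The file name and column names are hypothetical placeholders, not Chandan's actual materials or method.)

```python
import csv
from scipy.stats import pearsonr

# Hypothetical input: one row per stop token, e.g.
#   token,f0_onset_hz,vot_ms
#   pa,210.5,62.0
# File and column names are illustrative only.
f0_values, vot_values = [], []
with open("ids_stop_tokens.csv", newline="") as f:
    for row in csv.DictReader(f):
        f0_values.append(float(row["f0_onset_hz"]))
        vot_values.append(float(row["vot_ms"]))

# Pearson correlation between onset pitch and VOT across tokens:
# a strong r would suggest the two voicing cues are largely redundant,
# a weak r that they carry partly independent information.
r, p = pearsonr(f0_values, vot_values)
print(f"r = {r:.3f}, p = {p:.4f}, n = {len(vot_values)}")
```

If IDS really does recruit pitch to mark voicing, one would expect such a correlation to be stronger in IDS than in ADS samples.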

1 comment:

Amanda Seidl said...

In fact, I think that Lisa D's own results may be able to answer the high vs. low load question, in that she shows a difference between high- and low-proficiency learners. The question then becomes: what differentiates these groups? Is it a load issue, or a degree of metalinguistic awareness, or some such thing?

I find the question in Grant's paper especially interesting since it brings up a question I have been thinking a lot about, namely, whether experience with allophonic variation leads to featural activation, or whether only features that are active phonemically lead to such activation. We've found that 4-month-old English babies can learn a constraint on vowel nasalization, but this is not the case with 11-month-olds, who behave like American adults but not like French-speaking or Bengali-speaking adults, who can learn such a constraint since vowel nasalization is contrastive in their languages. Thus, complementary distribution is not sufficient to activate a feature, and perhaps low-level cues (e.g., the speed of the velum, which is faster in languages with contrastive nasalization) provide the learner with more information as to what is contrastive vs. what is not in his/her language.
-Amanda