Saturday, December 8, 2007

probabilities

Hi all,
I have been rereading Matt's recent article on phonotactic learning with phonotactic probabilities and thinking about the upcoming workshop on cues in phonological learning, and I am wondering about the different weightings of cues in infants versus adults, and about the different kinds of cues we will discuss at the workshop: distributional, articulatory, acoustic, etc. In Matt's paper he found that adults were pretty insensitive to the probability variation of /s/ in onset position, and I am wondering whether we might expect different things from infants. Let me see what you all think of my predictions. While infants are clearly *very* sensitive to probabilities in the input, they also might be less sensitive to articulatory ease since they don't really articulate yet (this would not be true for toddlers, of course), so they might show a different performance pattern here than adults. This also raises the question of why adults show a lack of sensitivity (perhaps something Matt can help me with). Is it because they become less sensitive to probabilities over time, or because they show a heightened sensitivity to ease of articulation?
-Amanda

4 comments:

Matt Goldrick said...

Just so everyone else knows, the article Amanda's referring to can be found here.

I share your intuition that this is why infants and adults might look different. As to why adults look this way, in a companion article in the LabPhon 10 Proceedings Meredith and I sketch a connectionist account in which biases introduced by perception/production suppress the influence of phonotactic probability. I think this fits more into the latter type of account; they are sensitive to perceptual/articulatory complexity and this suppresses sensitivity to probabilities for certain specific structures.

Now, the discussion above (and your post) is predicated on the assumption that the effects derive from articulatory ease. But /s/-onset is in fact articulatorily complex. My suspicion is that the effects come from perceptual complexity. There are several recent papers showing effects of perceptual complexity in production: work by Lisa Davidson on the production of non-native clusters and work by Kie Zuraw on Tagalog morphophonology.

Amanda Seidl said...

Well, I guess that we have to define what we mean by articulatory and perceptual complexity. It's hard for me to think clearly because Noah is watching Stuart Little in the background, but I was thinking that complexity, both articulatory and perceptual, might be quite distinct for adults and infants. I imagine that things the infant can produce and perceive would be given lower complexity ratings (if an infant could rate such things) than things that the infant cannot. This goes for perception as well. So, if infants cannot perceive steady-state acoustic cues but are pretty good with dynamic ones, or if high-frequency sounds are hard for babies to perceive, then we are going to find a different state of affairs for them, and this can really be divorced from the phonological system and from input frequency as well.

Matt Goldrick said...

I can see two possibilities. One is that learning mechanisms are influenced by the speaker/hearer's experience with producing/perceiving sounds. This perspective would certainly make the predictions you're outlining above. The other is that, over evolution, biases that reflect articulatory/acoustic complexity have been hard-wired into acquisition processes. This wouldn't make the same predictions.

Anonymous said...

My apologies for joining this discussion quite late. Partly as a response to Amanda's original post, I would like to add that 1) defining complexity (both acoustic and articulatory) and its implications for learning requires some notion of "space" for each domain of input, and 2) we should be prepared for some surprises as we start to conceive how this can be done.

Consider the voiced/voiceless distinction in the acoustic domain. There is a practically infinite range of possible "cues" that could be useful for this task. Some are familiar to phoneticians (e.g. VOT, f0 at voice onset, initial F1, etc.); others may look completely arbitrary (e.g. zero-crossing rate, energy change within a frequency band, etc.). Although early work on speech recognition (esp. Stevens and his students) had a close connection with the more familiar perceptual cues/space, more recent work on ASR has basically demonstrated that fairly arbitrary cues, when combined through a statistical model, can reliably make distinctions between words (the phoneme-level distinction is much more difficult, though).
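To make the "combined through a statistical model" point concrete, here is a toy sketch. Everything in it is an illustrative assumption: the cue distributions are synthetic (not from any real corpus), and the combiner is an ordinary logistic regression, standing in for whatever statistical model an ASR system or a learner might actually use. The point is just that no single cue needs to be decisive; a weighted combination of partly arbitrary cues can still separate the categories.

```python
import math
import random

random.seed(0)

# Synthetic tokens: each is a vector of "arbitrary" acoustic cues.
# The distributions below are made up purely for illustration.
def sample_token(voiced):
    vot = random.gauss(0.015 if voiced else 0.060, 0.010)  # VOT (seconds)
    zcr = random.gauss(0.2 if voiced else 0.5, 0.1)        # zero-crossing rate
    band = random.gauss(-3.0 if voiced else 4.0, 2.0)      # band energy change (dB)
    return [vot, zcr, band]

train = [(sample_token(v), 1.0 if v else 0.0)
         for v in (True, False) for _ in range(200)]

# A minimal logistic-regression "combiner": no cue is treated as special;
# the model just learns a weighted combination of all of them.
w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.05

def prob_voiced(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    z = max(min(z, 30.0), -30.0)   # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(-z))

for _ in range(500):               # plain stochastic gradient descent
    for x, y in train:
        err = prob_voiced(x) - y
        for i in range(3):
            w[i] -= lr * err * x[i]
        b -= lr * err

test = [(sample_token(v), v) for v in (True, False) for _ in range(100)]
accuracy = sum((prob_voiced(x) > 0.5) == y for x, y in test) / len(test)
print(f"held-out accuracy: {accuracy:.2f}")
```

Swapping in genuinely arbitrary cues (as long as their distributions differ at all across the categories) changes nothing about the recipe, which is the point of the ASR result above.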

If we are going to talk about "cue weighting", it seems useful to list which cues are possible candidates for re-weighting by infants. Once we have some idea of the space, this may illuminate the discussion of complexity. I'm not sure how this can be done, but some manipulation of the stimuli may be necessary. Another fun project might be first constructing some ad hoc categories with arbitrary cues, and then trying to see if infants can learn them, as a sort of proof of concept.

As for articulatory complexity, I think it's fair to say we know very little about how to set up a space, or about what kids are doing. Perhaps more ultrasound data would help here.