Thursday, January 24, 2008
Presentations available for download
Please find the PowerPoint slides presented at this symposium at web.ics.purdue.edu/~acristia/LSA08.htm
Monday, January 14, 2008
First question session (after Chandan and Grant's presentations)
Mary Beckman asked Chandan whether he had considered an interaction with prosodic context, given that IDS has shorter phrases, hence more boundaries, and hence an interaction with other phenomena that makes pitch more important
Robert Daland seconds this question and points out that more frequent boundaries should raise the number of stops at phrase edges, which may change the cue weighting (e.g. because of strengthening)
Amanda Seidl asked Chandan whether speech directed to younger babies would perhaps have cleaner categories, given Sundberg's proposal of under-/overspecification in younger infant-directed speech [I add here that this proposal suggests that vowels are overspecified - that is, have more marked acoustic categories - at younger ages, while consonants are underspecified then and overspecified later; note also that there is some counterevidence to these claims - see Englund & Behne, 2006]
* A member of the audience supports the idea that the f0-voicing association is dependent on the speaker's control, given that there are languages such as Spanish where listeners have no f0 sensitivity in this context, unlike English and Japanese listeners
* Another possibility arises from thinking of English as a fortis/lenis language rather than a voicing language - note that the voiceless (marked) category was more coherent than the voiced one.
Peter Richtsmeier asks Grant whether attention to vocalic cues is a sort of 'default'. Grant points out that what is remarkable is that English listeners were ignoring the fricative noise, given that English has a similar contrast for which they do use cues in the fricative portion.
Mary Beckman points out that the Mandarin alveopalatal-retroflex contrast is the result of the phonologization of a diachronic sound change.
* Is there a block effect in the labeling task? Grant answers that English listeners are not integrating cues with this added experience, but just getting better at discriminating.
Matthew Goldrick asks whether integration of cues is the 'default' and we learn to separate cues, or whether it is the other way around [On this topic, Tomiak, Mullennix and Sawusch (1987) show that adult listeners only show the 'integration' effect when they hear the sounds as speech, but not when they hear them as non-speech - which suggests that perhaps we integrate through over-learning. Although learning to integrate is probably only applicable to cues that are not 'inherently' processed together - e.g. integration should be the default for pitch and loudness.]
Second question session (after Lisa D., Jessica+Matt+Robert, and Ying+Jeff)
Mary Beckman pointed out that vowels [or a release] are sometimes necessary to establish the place of articulation of consonants, so phonetics and phonotactics *should* be analyzed together. Jessica points out that context also introduces variation, and that sometimes the dependency is longer distance than just the neighboring segments [please confirm! I doubt my notes here!]
Jeri Jaeger (?) asks Ying and Jeff why they used a hierarchical model - she has found evidence from slips of the tongue by 18-month-olds to 5-year-olds and adults for combination of features, but not for hierarchical structure (see the toy sketch after these notes for the contrast). Ying and Jeff answer that this is just a way of analyzing the data, although it is possible that the hierarchy works prior to motor experience. Jessica points to a study by Jusczyk [Goodman and Baumann, 1998] where babies react to manner of articulation similarities, but not place of articulation similarities, between syllables in a list. It's possible that later one only retains the 'bottom' categories, and not the hierarchy itself.
* Would the contribution of semantic cues be more helpful in the case of morphophonological alternations? Lisa D. answers that meaning helped her participants slightly, but not a lot.
?? Susan - merging and resplitting <- phonotactic differences were harder
* Using TIMIT for learning features is cheating! How about the phonemic segmentation problem? Ying refers to the other part of his dissertation, where he starts from the raw acoustics.
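To make the contrast behind Jaeger's question concrete, here is a toy sketch - my own illustration, not Ying and Jeff's actual model; the phones and feature values are hypothetical - of a 'hierarchical' organization of segments, recovered by agglomerative clustering over feature vectors, versus a flat view in which each segment is just a bundle of features.

```python
# Toy illustration only (not Ying and Jeff's model): a hierarchical organization
# of segments, recovered by agglomerative clustering over hypothetical feature
# vectors, versus a flat "bundle of features" view of the same segments.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram

# Hypothetical binary features: [sonorant, voiced, labial, coronal, nasal]
phones = ["p", "b", "t", "d", "m", "n"]
features = np.array([
    [0, 0, 1, 0, 0],  # p
    [0, 1, 1, 0, 0],  # b
    [0, 0, 0, 1, 0],  # t
    [0, 1, 0, 1, 0],  # d
    [1, 1, 1, 0, 1],  # m
    [1, 1, 0, 1, 1],  # n
])

# Hierarchical view: segments nested under successively larger classes
# (e.g. nasals vs. oral stops, then places within each class).
tree = linkage(features, method="average", metric="hamming")
print(dendrogram(tree, labels=phones, no_plot=True)["ivl"])

# Flat view: each segment is just its feature bundle, with no nesting.
for phone, vec in zip(phones, features):
    print(phone, dict(zip(["son", "voi", "lab", "cor", "nas"], map(int, vec))))
```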
Final discussion
Robert Daland points out that amplitude is another cue to sentence boundaries - Amanda answers that indeed there are many others (such as initial strengthening), but the point remains that younger babies were less able to attend to single cues than older babies.
Jeri Jaeger points out that Levelt's model does not address actual production. Lisa G agrees that Levelt's model is useful, and further clarifies that she was trying to suggest a need for convergence, in terms of categories, with articulatory phonology.
Matt notes that S. Blumstein also found more transfer for non-words in L2
* what is the nature of the mechanism governing attention?
* Lisa D. is asked whether she makes a distinction between 'lexical' and 'semantic'. She answers that, for the purposes of the task, it doesn't matter: in fact, it is possible that anything that 'made them care' would have helped.
Amanda (?) asks whether it is an explicit or implicit task - and whether an active versus a passive task would have made a difference.
Mary Beckman points out that perhaps the answer is having a *symbolic* category.
Jennifer Cole notes that we have to factor in that it is language - not any arbitrary cue can be used, since some things are unlearnable [I think she may have referred to tasks where you had to learn that a given rule applied only to female speakers, but not male ones - on this note, doesn't that actually happen in speech, that an alternation occurs only in one sociolect?]
Lisa D. points out that telling participants that they are about to hear a foreign language (e.g. Russian) actually improves their performance.
[I think there was a return to the question of frication versus vocalic information]
Matt also wonders about the fact that in speech, social information is coded primarily in the vowels, leading people to attend more to them, but notes that it is also encoded in consonants.
Amanda suggests that perhaps inventory size is a reason for it.
Lisa D. confirms this with her experience with Bengali speakers who... ??
Mary Beckman wonders how we know why we attend to something - Jennifer Cole agrees: in fact, given the multidimensionality of speech, it seems conceptually very hard to explain that we are able to converge on the right cues. Jessica points out that we couldn't just throw out information that is not adjacent to the sound in question in cases of long-distance dependencies such as vowel harmony. How does one determine the limits, the distance at which one will stop paying attention?
Ken de Jong (?) points out that we have been using the term 'attention' without really defining it. Attention could mean recruitment of resources or usage of information. There are two issues attached to this: we tend to underestimate people's ability to use something, and, for development, we need to address how we learn to focus attention.
Matt asks whether it is just interference of categories that allows one to focus attention. For instance, Dell did a study where people had to learn that engma appeared in syllable-initial position, and it was really hard.
Mary Beckman relates this to babies' inability to learn that a nasalized vowel is followed by t, given that this sequence is illegal in English.
Amanda answers that perhaps it isn't, given that sometimes the nasalized vowel is the only cue to the presence of a nasal consonant, when the nasal itself is dropped.
Jessica remembers that when she heard the stimuli used in that study they sounded like vowel+nt sequences to her.
?? Ken de Jong asks Lisa G whether it is a question of salience - Lisa D adds whether it is mirroring
Lisa G answers that when a novel word was at the end of the sentence, the novel word was not more variable, but the rest of the sentence was.
Labels:
children+infants,
motor,
semantic
Monday, December 10, 2007
Are learning phonotactics and learning segments codependent?
Robert, Matt, Jessica and I have been having a discussion on this topic over email. Look out! Comments are interspersed.
Although I think the hypothesis y'all propose in the abstract is on the right track, I wasn't sure whether the following two facts were problematic:
1. Phonotactic learning studies in babies often seem to mimic complementary distribution (e.g. Amanda's coronal stops before high vowels and labials before mid vowels); yet learning relies on babies still being able to discriminate coronal and labial before both types of vowel in order to treat legal and illegal trials differently. So complementary distribution (at least with this short exposure) cannot influence babies' ability to discriminate between sounds. [ac]
mm, two thoughts on this, one slightly out on a limb and the other well out on a limb. first thought: in my opinion, there is not a good theory of the relationship between category formation and discrimination. most models are designed for one or the other task, and attempts to model both generally involve taking a model of categorization and adding on an ad hoc mechanism for predicting discrimination. there is no apparent reason why discrimination has to decline when categories form, and in some cases it appears not to. my co-authors may disagree with me on this point, but i feel like our model does not specifically address discrimination. second thought: i have not seen any compelling evidence that 9-month-olds "know" that there is such a thing as "coronal" that encompasses multiple phones. on the other hand, there is abundant evidence that they can discriminate pretty much anything that is meaningfully different. that's not really in opposition to your point, i guess. more a caveat that we should be careful what generalizations we assume the baby is assigning to her stimuli. [RD] [see the toy sketch at the end of this post - ac]
Ditto to what Robert said -- a) it's not clear that infants haven't learned about native phonetic categories prior to the loss of sensitivity to foreign contrasts. And b) it sounds like what you're saying is that infants can learn phonotactic dependencies at 4 months -- not that they know native language phonotactics at 4 months. So I'm not sure how that bears on the issue of what they have learned about the native language. Certainly being able to learn that sort of pattern indicates that they're likely doing that sort of processing in the real world. But real language almost certainly takes longer to learn than our miniature artificial ones. For example, 6-month-olds (and that's the youngest we've tested) show consonant learning in my phonetic learning studies, although there's no evidence that they know native language consonant categories yet. [JM]
It seems to me that this sort of data is also problematic for the segments-then-phonotactics view that we're criticizing in our work. Under a strong version of that hypothesis, I think we'd expect that infants would be incapable of learning anything about phonotactics until they were totally done learning about segments (either in their native language, or in the context of the experiment). I think these data are good news for us, actually. [MG]
2. Phonotactic learning occurs long before babies start converging on their language's inventory. For instance, we've had babies learn a V-C dependency at 4 and 6 months, and a #C dependency at 7 months. [Incidentally, we found that constraints on allophones are hard to learn at later ages; so the V-C dependency was on nasal vowels, and 4- and 6-month-olds could learn it, while 11-month-olds couldn't if they were learning English, but were OK with it if learning French.] [ac]
i don't think there's very clear evidence that infants haven't converged on their language's inventory by 4-6 months. the evidence we have is that by 6 months, their within-category discrimination DECREASES relative to across-category discrimination. which is certain proof that there is a category there. however, this does not mean there is no category there before: absence of evidence is not unequivocal evidence of absence. it is entirely possible that the category onset occurs before the decline in discrimination. this goes back to the point i was mentioning before -- until we have a better theory of why discrimination declines with categorization, we have to be careful about how we interpret the presence/absence of categories. there is one really interesting study by nobuo masataka that speaks to this point -- he found that japanese learning infants' vowel productions were significantly influenced by the most recent vowel their mother had produced. in particular, if mother produced /a/, the next vowel baby produces is, on average, spectrally shifted toward /a/, and the same holds true for the other point vowels. if i recall correctly, this was 4-month-olds! seen from a certain theoretical lens, this could be evidence of extremely early category formation. alternatively, it could be viewed as a means by which categories are learned. in either case, though, it is clear evidence that infants have already discovered aspects of the forward mapping from articulation to acoustics. [RD]
I haven't read that Masataka study but from your description I'm not sure why you would take it to be evidence for phonetic categories. A simpler explanation would be the latter one you mentioned -- the babies are starting to figure out how articulation relates to acoustics. [JM]
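To unpack Robert's point above about models of categorization handling discrimination only via an add-on, here is a toy sketch - entirely my own illustration with invented parameters, not the model from our abstract - of a two-category Gaussian model along a single VOT-like dimension, where discrimination is 'predicted' by an ad hoc pull of each stimulus toward its expected category mean.

```python
# Toy sketch (invented parameters, not the model from our abstract): a
# two-category Gaussian model along one VOT-like dimension, with an ad hoc
# add-on for predicting discrimination.
import numpy as np
from scipy.stats import norm

means = np.array([10.0, 60.0])   # hypothetical category means (VOT in ms)
sds = np.array([8.0, 12.0])      # hypothetical category spreads
priors = np.array([0.5, 0.5])

def posteriors(x):
    """P(category | x) under Gaussian likelihoods and the priors above."""
    like = np.array([norm.pdf(x, m, s) for m, s in zip(means, sds)]) * priors[:, None]
    return like / like.sum(axis=0)

def categorize(x):
    return posteriors(x).argmax(axis=0)

def perceive(x, warp=0.6):
    """Ad hoc discrimination mechanism: pull each stimulus toward its
    posterior-expected category mean, shrinking within-category distances
    relative to between-category ones."""
    expected_mean = (posteriors(x) * means[:, None]).sum(axis=0)
    return (1 - warp) * x + warp * expected_mean

stimuli = np.array([20.0, 30.0, 40.0, 50.0])   # a continuum in 10 ms steps
print("categories:", categorize(stimuli))
print("perceived distances between neighbors:", np.diff(perceive(stimuli)))
# Neighbors within a category come out perceptually closer than the pair that
# straddles the boundary - the declining-discrimination pattern is built into
# the add-on, not derived from categorization itself.
```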
Labels:
acoustic,
children+infants,
discussions,
phonotactics
Saturday, December 8, 2007
probabilities
Hi all,
I have been rereading Matt's recent article on phonotactic learning with phonotactic probabilities and thinking about the upcoming workshop on cues in phonological learning, and I am wondering about the different weightings of cues in infants versus adults, and about the different kinds of cues that we will discuss at the workshop: distributional, articulatory, acoustic, etc. In Matt's paper he found that adults were pretty insensitive to the probability variation of /s/ in onset position, and I am wondering whether we might not expect different things from infants. Let me see what you all think of my predictions. While infants are clearly *very* sensitive to probabilities in the input, they also might be less sensitive to articulatory ease since they don't really articulate yet (this would not be true for toddlers, of course), so they might show a different performance pattern here than adults. This also raises the question of why adults show a lack of sensitivity (perhaps something that Matt can help me with). Is it because they are less sensitive to probabilities over time, or because they show a heightened sensitivity to ease of articulation?
-Amanda
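For concreteness, here is a minimal sketch of the kind of positional phonotactic probability at issue (how often /s/ occurs in onset position); the mini-corpus is invented and is not the stimuli or counts from Matt's paper.

```python
# Toy sketch (invented forms, not the counts from Matt's paper) of positional
# phonotactic probability: how often each segment occurs as a word onset,
# relative to all onsets in a small CVC "corpus".
from collections import Counter

corpus = ["sat", "sip", "sut", "map", "nap", "tap", "pit", "kat"]  # hypothetical CVC forms

onsets = Counter(word[0] for word in corpus)
total = sum(onsets.values())

for seg, count in onsets.most_common():
    print(f"P({seg} in onset) = {count}/{total} = {count / total:.2f}")
# Stimuli for a learning study can then contrast forms whose onsets have high
# vs. low probability under counts like these.
```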
Labels:
acoustic,
adults,
children+infants,
complexity,
discussions,
motor,
phonotactics
Sunday, December 2, 2007
Some changes on the blog
I've cross-classified all the posts through labels such that we can easily access which abstracts/posts discuss which topics. I used two kinds of labels: what kinds of cues are discussed (acoustic, motor, semantic, phonotactics) and what kind of method is used (experiments with adults, children/infants; modeling).
I've also reposted my discussion post, which is outdated given that Susan Nittrouer won't be coming, but I thought it might give us a starting point.
Friday, November 30, 2007
Some questions on the ideas put forward by y'all
Hello everyone,
I was re-reading everyone's abstracts and thought that there were some interesting coincidences and intriguing mismatches.
For example, I might be reading Susan's wrong, but it seems that she predicts that, with a 1-to-1 mapping from articulation to acoustics, the magnitude of the change in the articulation predicts basically two levels of structure: one corresponding to slower and larger movements and one to more abrupt changes. Only the latter would, according to my interpretation of her abstract, provide acoustic cues. This matches Ying and Jeff's finding that an automatic feature extractor operating on the raw acoustic signal is good at finding manner features, but not so good with place features. Conversely, motor cues are most useful for place of articulation, but not so great with manner.
Susan also suggests that it is not only the level of fine acoustic cues that affects speech perception, but also the other level. May this be related to Jessica, Robert and Matt (JRM)'s learner of phonotactic constraints and segments? If I get their point right, both infants and adults would rely on acoustic cues in context to discover/perceive phonological categories, and this process may go from a holistic chunk (perhaps syllable-based?) to phone-sized elements. In relation to this, I was thinking that while one may expect that articulatory experience would aid the formation of prosodic categories (like syllabic/word templates), it was at first surprising for me that it is also important for learning of segments. Now I see that this fits really well with JRM's proposal that we may learn phonotactics and segments at the same time, don't you think?
Further, this learning strategy (of phonotactics+segments) must be flexible and still active in adulthood, given that Lisa D's adults are able to learn a minimal pair of words that relies on a non-native phonotactic contrast, but her results also suggest that the presence of a minimal pair (which, through semantic cues, forces listeners to focus on the relevant acoustic cues) is a necessary condition for learning in adults. On the other hand, the presence of minimal pairs cannot be a necessary condition in toddlers (cf. Pater et al. 2004), as Ying and Jeff point out, while already in childhood semantic information is actually helpful, according to Lisa G's results. Is there a developmental trend here? Might it be driven by a difference in processing abilities? (If so, would we predict that adults in a high processing-load task would behave like Pater's toddlers and have a *harder* time with the minimal pair than with the non-minimal pair?)
Another question that follows from Susan's hypothesis that these two levels interact in perception is how language experience affects this interaction. Grant's paper seems to suggest that this interaction is mediated by specific experience, such that sheer experience with one of the categories (AmEng speakers must have experience with retroflex sounds, and even with the retroflex fricative in words like 'shrill') is not enough, but speakers must have been exposed to the particular contrast that relies on the acoustic/higher-level features under study. This is particularly important for early category formation: Chandan's corpus analysis looks at the correlation of pitch and VOT, which could be interpreted as respective examples of the larger and finer categories proposed by Susan. If IDS takes advantage of the apparent predilection of infants for pitch, will its voiced segments be marked primarily with pitch? However, given young infants' very limited articulatory abilities, we'd expect them to rely primarily on acoustic-feature-based contrasts, which - now following Ying and Jeff - would predict a primacy of VOT.
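As a side note, the corpus measure at issue here - whether onset f0 and VOT covary across stop tokens - is simple to compute; the sketch below uses invented measurements, not Chandan's data.

```python
# A sketch with invented numbers (not Chandan's corpus data) of the measure at
# issue: whether onset f0 and VOT covary across stop tokens.
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-token measurements: VOT (ms) and f0 at vowel onset (Hz).
vot = np.array([12, 15, 10, 65, 70, 58, 14, 62, 68, 11], dtype=float)
f0_onset = np.array([190, 195, 188, 215, 222, 210, 192, 218, 220, 185], dtype=float)

r, p = pearsonr(vot, f0_onset)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
# A strong positive correlation would mean the two cues are largely redundant
# in the input; the question in the post is how IDS might shift that weighting.
```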
Labels:
acoustic,
adults,
children+infants,
discussions,
modeling,
motor,
phonotactics,
semantic
Monday, October 8, 2007
acceptance email
On behalf of the Program Committee, I am pleased to inform you that your organized session proposal, "Attention to Cues and Phonological Categorization," has been accepted for presentation at the 2008 LSA Annual Meeting in Chicago, IL, to be held 3-6 January. All sessions will be held at the Palmer House Hilton. Your session will be held on Friday, 4 January, from 9:00 AM to 12:00 PM.
To facilitate accuracy in producing the handbook, please send the Secretariat one long abstract describing the session as a whole (500 words or less) and short abstracts (100 words or less) for each participant's presentation. (Even though you may have included the final description with your previous abstract submission, please resend as a Word or simple text file; no pdf files please.) Send these via email (rlewis@lsadc.org) by 15 October 2007. The abstracts and session description must have the following additional information:
• name and affiliation of authors in the order they are to appear in the program
• title of each paper
• If any presenter is also presenting an ADS, ANS, NAAHoLS, or SSILA paper, please notify us of the day/time/title of the session in which they will present that paper.
• abstract (NOTE: If your abstract uses any special fonts, you must also send a pdf of the abstract to the Secretariat as special fonts do not transmit accurately.)
• your final equipment request, if you need something other than an LCD projector.
Authors are encouraged to provide handouts for their papers. Please bring at least 100 copies of your handout for distribution. All rooms will be equipped with a microphone, screen, and LCD projector. You must bring your own laptop and connector. VGA cables will be provided, but you are responsible for connecting to the VGA cable. We urge you to check compatibility between your laptop and the LCD projector before sessions begin (between 8:00 and 9:00 AM, and between 12:00 and 2:00 PM).
If we do not receive the session participants' meeting pre-registration form and fees by 15 November, your session will be withdrawn from the program.
You may pre-register for the meeting at https://lsadc.org/info/meet-annual.cfm#orgses. Information about the Hilton and directions for reserving a room on-line are available at http://www1.hilton.com/en_US/hi/hotel. Please note that the deadline for pre-registering and making hotel reservations is 17 December 2007. At the end of October, more detailed information about the meeting will be available on the LSA web site: http://www.lsadc.org.
We look forward to seeing you in January.
Sincerely,
Felix Oliver
Executive Director
Linguistic Society of America
(202) 835-1714