Fonhispania 2009: New Approaches to the Phonetics-Phonology Interface

Round table with Drs. Coleman, Goldstein, and Reiss.
Moderator: Dr. Joaquín Romero
Monday, March 2, 2009
Transcribed by Eugenia San Segundo and Joseph V. Casillas

01:00 - 03:27
Dr. Romero: Ok, well, welcome everybody to the last session of today. We are basically trying to (...) we are going to try to get some discussion going, a round table, a discussion table on some of the issues that have come up in the three talks that we've heard today. They've been three (...), I would say, extremely interesting talks that have given us views from very different perspectives of what Phonology, Phonetics, and the Phonetics/Phonology interface are, and I think (...) it kind of gives us a sample of some of the most interesting discussions going on in the field at the moment. So, rather than diving directly into the specifics of each of the talks, I think it might be worth, especially since we have some Master's students here who might perhaps be more interested in some of the more general aspects of the things that we've talked about here, going back to some of the basic issues that have been discussed or sort of touched upon in the three talks (...) and at the risk of either being too simplistic or not getting an answer at all, I think, perhaps, my first question would be: Does it make any sense to even talk about Phonology? Or, you know, perhaps asking what Phonology is? I know, and I think everybody can see from what we've heard today, that we have very different ideas of what Phonology is. What Phonology means for one person is probably very different from what it means for somebody else.
And so, I think it might be worth starting our round of discussion by asking our three speakers if at the expense (...) at the risk of being simplistic, if they could give us an explanation of what they think Phonology is (...), briefly, or maybe what they think it isn't, (...) that might be easier, or even what they think the scope of Phonology should be. When they talk about Phonology what do they mean? That might be a first (...) first topic and then, also, since the (...) general topic of the conference is the new approaches to the Phonetics/Phonology interface (...)
I'm actually a little puzzled because we haven't heard a lot about that, we haven't heard a lot about interface, and that will be, I guess, my second question (...) to you: to ask if you would even want to consider the notion of interface as being interesting, relevant, important... Do we need to talk about a Phonetics/Phonology interface? What is it? Because we've had three talks where the relationship between the two levels is seen in extremely different ways. In our first talk, Professor Coleman basically told us that he doesn't believe there is any Phonology, so I guess if there's no Phonology, there's probably no interface. Maybe I'm wrong to assume that. In our second talk by Professor Goldstein, (...) one of the major assumptions of Articulatory Phonology is that they're probably the same thing, that Phonetics and Phonology are probably the same thing, and in that case maybe it doesn't make much sense to talk about an interface either.
In the last talk by Professor Reiss, where he stated that Phonology needs to have (...) does not need to have any access to phonetic substance, then I guess we don't need an interface there either. So, I guess...
Dr. Goldstein: So I think we're all done. We can go home now. (They laugh)
Dr. Romero: So I said at the risk of not getting any answers, I warned you, but hopefully we'll be able to get around that and bring the discussion to, to begin with, to that sort of very basic general level of what you think Phonology is, what the scope of Phonology should be and if you think that it makes any sense at this point to talk about interface and what you think that interface might be, or might entail, and then after that we can open the floor to questions from anybody that might like to participate at any level of detail, hopefully not getting too mired in the details of specific papers but something that could be of interest to everybody, especially to our students here. So maybe John, if you might want to start...
03:27 - 05:31
05:31 - 08:09
Dr. Coleman: Yeah, sure. Well, first actually I would like to respond with a clarification. I didn't say there is no Phonology. My question this morning was: do humans (...) human beings have Phonology? You know, when they're not ordinarily talking to one another, and that's a different question from: is there Phonology? Clearly there are (...) domains of activity in which we find it useful to construct phonological representations and to use them, so in a number of (...) of the real daily jobs of Linguistics it's very useful to have phonological representations and to construct grammars, if you will, to describe that. So if you're going to do, I mean, I'm interested in Computational Phonology. If you're going to program computers to talk, or whatever, then (...), or even just to annotate data that you organized on a computer, then it can be very useful to have phonological representations. This is, this is a long way removed though from the sort of (...) the realist cognitive view of Phonology that's dominated the field for so many decades now. For all kinds of other practical purposes it's extremely useful to recognize the utility of Phonology and phonological theory and its products. We might, I think, sensibly, ask the question: do communities, do, you know, (...) where is human language in all of this? Do communities of individuals have some shared knowledge that we might want to call phonological? And what form would it take? I'm not, I'm not ruling all these things out of court. I think that, in fact, the problem is, as we've seen, that the (...) that our understanding, that the term Phonology is just overstretched and we use it to mean all of these many different things that are really, really quite different. You know, from social product to practical tool to cognitive entity, these are all quite different things and so, to give them the one label, this is one thing that I agree with you very much, Charles, you know, you said that there were at least three levels, but, you know, there are clearly many different (...), and that's what makes this type of conversation difficult in practice, because it is so easy to talk at cross purposes when we're actually thinking about quite different things. The Glossematicians, in fact, invented a whole new set of terms, I think, didn't they, talking about Phonematics for the substance aspect of Phonology, and contrasted that with Cenematics, the substance-free phonological units and algebraic representations. So, yeah, maybe we need (...) extra terms. Of course, then, proliferating terminology can be obstructive to discourse too, particularly if we don't, you know, have a shared vocabulary for doing that.
The second part of your question was about interface?
Dr. Romero: Inter...
Dr. Coleman: Interfaces. Yeah, so in the (...) so in (...) sorry, in the human mind and in the human brain, no, I don't see any need for a Phonology/Phonetics interface. I think there are many interfaces though to be characterized. I think we need to have an explanation, or at least some proposals, about the interface between our auditory, well, the outside world and our auditory perceptual representations, between those and our articulatory representations, between each of those and the various semantic representations. If we want to observe the nature of the reading and writing process then there'll be other interfaces, and these are important given that (...) well, it's been well studied that learning to write also has an impact on our (...) judgments about human language. So in experiments on phonemic awareness, for instance, that is, people's ability to decompose words into phonemes or to distinguish one word from another on a phonemic basis (...) this is an ability which is biased, or shaped in part, by alphabetic literacy. So, although linguists tend to (...) try to keep questions of reading and writing out of certain questions of core grammar and Phonology, I think, if we're looking at the psychology of language, it becomes difficult to remove the visual and written aspect of language completely, particularly if we're talking about perception.
08:09 - 09:42
09:42 - 11:45
Dr. Goldstein: Yeah, I mean, I particularly agree with John's sense that there are many interfaces we're talking about and the notion that there is Phonology and Phonetics that have to be interfaced doesn't make a lot of sense to me. So there's (...) there're several layered subsystems in the (...) in this larger system that we can call speech, or knowledge of speech or, I mean, whatever you want to call the entire (...) phonological/phonetic speech communication and our own knowledge of that system. And I think the trick is to find coherent systematic aspects at whatever level you find them. And (...) but there's a temptation always to (...) to distinguish, and I think this is largely how Phonology and Phonetics have been distinguished, I don't think that's the way Charles was meaning it, but I think a common way of saying, "Well, by Phonetics I mean what's physical and by Phonology I mean what's internal or mental." And I just want to give one example that for me was very eye-opening about how contrasting cognitive and physical is always a mistake, or I think is a mistake, not always, but I think is a mistake.
So we have, I mean, you can give physical descriptions at many levels. I mean, I think it's very misguided to say, "Well, I have this pretheoretical description of physical reality that's, say, a sequence of spectral slices," and if I describe speech in that way I may get a certain kind of invariance, but if I describe speech in terms of the gestures that produce it, the description, which is also a physical one, looks different.
And I think the best example for me that I found was (...) in the example of the phase transitions I gave with respect to in-phase versus out-of-phase coordination. So you can look at that and say, "Ok, wiggling my two fingers (...) that looks like a physical phenomenon, right?" Because, you know, my two fingers belong to the same body and they are, in fact, mechanically coupled. Ok, so there's mechanical and neural coupling between my two hands... and to say that there are modes, and there are qualitative ways in which I can do that task that are (...) that are accessible to me, seems like it could be due to the fact that there's a physical, in the everyday sense, connection between them, a linking between them, a mechanical linking. But it turns out you can do the exact same experiment, as it was originally done, with two people sitting on adjacent barstools. And so the two oscillating limbs don't belong to the same person. So there're two people sitting next to one another oscillating their legs. The results are exactly the same. It doesn't, they don't have to be part of the same physical system in that low-level, mechanical sense. So, if you turn that around, that's a case where, ok, I've got a system that is exhibiting this systematic behavior, and what is necessary in order to get that to go? Obviously, it obviously doesn't work if the two people can't see each other, right? So, you have to be able to see and you have to have the intention of synchronizing. That's true in this as well. You have to have the intention of synchronizing the fingers in one mode or the other.
11:45 - 13:30
13:30 - 15:20
So, what I learned from the case of two people sitting on barstools swinging their legs is that it's a case where the cognitive system, whatever you want to say, the visual system, is providing the kind of, in a way, lower-level link between the two actions that are being coordinated in what looks like a physical system. The cognition is providing a lower-level piece of that higher-level system that from some other point of view would look like, "oh, that's (...) that's a physical system." So, to me, the cognitive and physical stuff interpenetrate at (...) at every level, well, maybe not at every level, but at several levels, and so that's why we have to be very careful when we look at particular examples and we say that this is substance-free. That may be, and I think (...) I think it's worthwhile looking for what kind of computations might be substance-free, and again, the example I've just given you is substance-free. It doesn't matter whether they're your fingers, or your leg and someone else's leg; there's a generalization there that, in this case, looks like a physical dynamical description, but something cognitive has to also interact with it in some way to get that kind of systematic description. So that's what I mean by "there have got to be connections between coherent systems that make up our speech/phonology capacity at a number of levels", and I think we need to look at how to find those coherent subsystems, wherever they are, and how they go together, without sort of separating them in advance and thinking mental versus physical, as I think many have wanted to assume that that's what we mean by Phonetics and Phonology (...) and I don't mean you.
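(Editor's note: the phase-transition behavior Dr. Goldstein describes is standardly modeled with the Haken-Kelso-Bunz relative-phase equation. The Python sketch below is a minimal illustration under that assumption; the parameter values and step size are illustrative, not from the talk.)

```python
# Relative phase phi between two oscillating limbs (fingers, or two
# people's legs) under the Haken-Kelso-Bunz dynamics:
#   d(phi)/dt = -a*sin(phi) - 2*b*sin(2*phi)
import math

def hkb_step(phi, a, b, dt=0.01):
    """One Euler step of the HKB relative-phase equation."""
    return phi + dt * (-a * math.sin(phi) - 2.0 * b * math.sin(2.0 * phi))

def settle(phi0, a, b, steps=20000):
    """Integrate until the relative phase settles near an attractor."""
    phi = phi0
    for _ in range(steps):
        phi = hkb_step(phi, a, b)
    return phi % (2 * math.pi)

# Start near anti-phase (phi = pi), i.e. out-of-phase wiggling.
near_anti = math.pi - 0.3

# Slow movement (large b/a): anti-phase remains a stable mode.
print(round(settle(near_anti, a=1.0, b=1.0), 2))   # stays near pi (3.14)

# Fast movement (small b/a): anti-phase loses stability and the system
# switches to in-phase (phi = 0) -- the phase transition in the talk.
print(round(settle(near_anti, a=1.0, b=0.1), 2))   # drifts to 0.0
```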
Dr. Reiss: Can I first ask for clarification from Louis, or do I have to give my spiel?
Dr. Romero: If you could just give us your view quickly first and then we'll get into the interaction.
Dr. Reiss: Ok. So, what is Phonology, and the interface issue... So I guess I'll invoke Sapir again, and Hammarberg, and this idea that (...) that it's Phonology that determines what's phonetic, in the sense that there's this famous example from Sapir of a (...) if you go like (breathes into microphone), you know, if you go like that, the physical action can't be determined, whether it's a voiceless "w" or blowing out a candle, or if you hiss or make an "s", right? So that you can't call something speech, and so you can't call it Phonetics, unless it's somehow the result of an intention, of a linguistic intention. So, in that sense, the interface is maybe the intention between (...) taking a simplistic view of Phonetics as physical action, which is not going to always work, but that's one sense of (...) the interface that I would take, and also the idea of Phonology as the linguistic intention matched with speech, I guess. And then I can cheat and take this other notion of (...) interface between two fields, which comes back to our discussion just before, that one thing I skipped over was this idea that I think phonologists should be happy to get rid of things. And one way to get rid of things from the domain of Phonology is to accept the kinds of explanations that phoneticians give. So we're talking about John Ohala's explanations for recurrent sound patterns and typological generalizations, in that there's (...), I don't know if anybody here does this, but there are many phonologists who think it is a good idea to take those findings that you can reproduce in a laboratory, that have to do with how people perceive speech, or misperceive speech, and in many (...) much of the phonological literature the approach is to redundantly encode that in the Phonology. And so, I should have made this point clear, but that's something that I think I'm on the side of the phoneticians with, that if there's a good perceptual or articulatory explanation for why something doesn't occur, to take a ridiculous example: do you really need a phonological constraint against having vowels that are plus-high plus-low? Right?
You can think of other reasons why they don't occur if (...) and so (...) in the sense of interface between the fields, I feel like we can, as phonologists, we can hand off to the phoneticians and the psycholinguists some of the aspects of speech that need to be explained and say "well, that's your job", and not (...) not redundantly encode them as constraints in Phonology. So I'm completely in favor of adopting the John Ohala-type explanations for sound patterns.
Dr. Romero: And if you'd like to address the (...) or did you already?
Dr. Reiss: To ask? Oh, yeah, ok. So, to return to (...), so, I guess to really pick on this, when person A starts swinging her leg and person B starts coupling with that (...) their leg swings start coupling, the legs are not actually coupling, right? That's mediated by the nervous systems by the (...) I mean, you said this basically.
Dr. Goldstein: (not intelligible).
Dr. Reiss: Right, so person B sees person A's leg swinging and then the oscillations are somehow represented in your nervous system. I mean, and that's, you said something about this, I guess, when you were talking about reading a word several times and then pronouncing it. You get the same effect, right? So you don't actually have the phys... so this relates to the issue of whether it's a physical system or a neurological system or a cognitive system, it's a...
Dr. Goldstein: Or any information, yeah.
Dr. Reiss: But the two legs coupling are actually mediated by at least the perceptual system.
15:20 - 19:49
19:54 - 25:16
Dr. Romero: Ok, any questions or any observations about these issues from anybody?
Dr. Solé: Is there a microphone on? So, would you be happy with saying that parts of the, you know, parts of the phonological structure may emerge from (...) constraints of the physical system (...) so what is left to do in Phonology? I mean, how do you go about something if part of the structure is explained in terms of physical, auditory/acoustic constraints, things that are contrastive and perceptible, robust, good enough? So what's there left to do if there's no substance, and whatever structure there is, is partly explained by substance itself? What is there left to do?
Dr. Reiss: So, to give an example that I haven't used, if you have a theory of feature-changing rules, or deletion rules, you need some kind of feature logic. So, do you use unification-based logic or do you use some other kind of feature logic? Figuring out if you get a better system by assuming privative features, or binary features, or multi-valued features, those are all legitimate questions. My own work, I've worked on things like what kinds of notions of locality you can use. So, in terms of (...) you've got long-distance assimilations, a vowel-harmony phenomenon, you've got opaque vowels and transparent vowels. What kinds of conditions can you (...) what kinds of specifications can be used in terms of features and directionality for assimilation processes at a distance? So, how do you define locality if you want to have the idea that phonological processes happen in a local fashion? Is there syllable structure? Right? Or what model of syllable structure do we want? Is the onset branching? I mean, I'm not sure if Louis's work would ever bear on this, you know, the question of (...) some people treat /s/ plus stop clusters as complex onsets. Other people treat them as an appendix plus an onset. Is there any kind of structural argument that you could make to distinguish those things? Now, yes, it's true that we're talking about an "s" plus a voiceless stop, and in that sense we're referring, like the Glossematicians said, to the entities in terms of their somehow abstract relation to their phonetic substance, but you might end up being able to make purely structural arguments about the organization of those things. Just like you do in Syntax, where you can make arguments about syntactic constituency that are abstract with respect to the substance (...) you know, they don't relate to meaning, to things in the world, in an obvious way. So, locality issues; I gave the example of the need for using quantificational logic just to do something as simple as deciding if two segments (let's assume a feature system for now), ok, just as simple as deciding if two segments are identical or non-identical. So, there are these rules like "delete a vowel, but only between non-identical consonants under certain stress conditions". In other words, this sometimes goes by the name of anti-gemination, right? Don't delete the vowel if it's going to bring identical consonants together. In order to (...) you can't represent non-identity, right? So, delete the vowels if the consonants are non-identical. You can't do that using any kind of autosegmental representation. You need to use quantificational logic and say: is there some feature for which these two segments differ? Right? Because you have to scan through the set of features. So that (...) kind of (...) you need the power of an existential quantifier. So, for me, that's a non-substantive result about Phonology. Whether it's right or wrong, that's the way I analyze it, and what it led me to do is to say: in order to do this I need some kind of algebraic system; that's very ugly, but it seems necessary, and it's more powerful than the feature geometry model. And so it gave me a reason to reject the feature geometry model. Why have both, if I need something that's more powerful? And the feature geometry one I already didn't like because it was too (...) it supposedly kind of models the shape of the vocal tract, right? It's very substance-based, and so by (...)
the idea was: ok, I need quantifiers in the Phonology, and it leads me to get rid of the autosegmental (...), the feature geometry representation. So that's a taste of the kinds of things I look at.
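(Editor's note: a minimal sketch, in Python, of the existential-quantification computation Dr. Reiss describes. The feature names and segments are hypothetical illustrations, not a claim about any particular feature theory.)

```python
# A segment is modeled as a dict of binary features (an assumption).
def non_identical(seg1, seg2, ignore=frozenset()):
    """True iff there EXISTS some feature (outside `ignore`) on which
    the two segments differ -- the existential quantifier in the rule
    'delete a vowel only between non-identical consonants'."""
    return any(seg1[f] != seg2[f]
               for f in seg1
               if f not in ignore)

t = {"voice": False, "coronal": True, "continuant": False}
d = {"voice": True,  "coronal": True, "continuant": False}

print(non_identical(t, d))                     # True: they differ in voicing
# A language that ignores laryngeal features for the identity computation
# (the Odden cases mentioned below) just masks them out:
print(non_identical(t, d, ignore={"voice"}))   # False: identical elsewhere
```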
Dr. Solé: But, for example, I mean, if we could provide a satisfactory phonetic explanation for that, you know, we could say: well, if you drop a vowel between two homorganic consonants, then the first consonant is not going to be released, therefore, you lose part of the information, you may lose a syllable, whereas, if they are heterorganic and they are not overlapped, then there's going to be two releases and, therefore, you know, you preserve part of the structure.
Dr. Reiss: Yeah, but...
Dr. Solé: Ok, if we could provide... and show that it is less likely to drop vowels if the result is whatever, then you would just drop this because it's being explained phonetically.
Dr. Reiss: No, no, no, no. Because that might be the historical explanation for why such systems arise. It turns out there's a very wide typology. So some languages delete only between identical consonants, some delete only between non-identical ones. Some insert only between identical, some insert only between non-identical. So it's a challenge, but in a synchronic grammar, you bring two morphemes together and the speaker has to compute, and sometimes it only looks at so-called place features or manner features, it's not always the full set of features. So no matter what, you can accept a historical perceptual phonetic explanation, but in a synchronic grammar a speaker still has to evaluate if..., you know, some languages ignore voicing for the identity computation, David Odden wrote about this, and so you have to look at the set of all features except the voicing features, or laryngeal features, and decide if there's any place where they differ. So how that rule arose is one issue, but the actual application of it in a grammar has to be computed in real... by the grammar. (...) I think Juliette wants to yell at me.
25:16 - 27:14
27:15 - 28:53
Dr. Blevins: Actually I wrote a paper on this anti-gemination phenomenon as well, and if you read it carefully you'll see that the facts are not as they're described, and that McCarthy's original article on these phenomena ignored the fact that in the Semitic languages he was looking at there were clear exceptions to anti-gemination. So, again, I mean, I think it is very important to start out with a full picture of how these sound patterns are integrated into the grammar (...) ok?
Dr. Reiss: So, given what you've said, you're saying there's no evidence for any grammar at all?
Dr. Blevins: No, I'm saying it's just morphologically restricted. If you look at the cases of clear anti-gemination...
Dr. Reiss: Right, so those cases, I mean, he doesn't only give (...), so Odden wrote a response to McCarthy and added more cases, and if all of the cases of anti-gemination are false, then we have to worry about it. Right, if there's no productive, maybe there's no productive Phonology at all? But, I mean, so, what for you counts as productive Phonology?
Dr. Blevins: I'm not just saying that just talking about this one example, because I think the historical explanation is actually superior to any synchronic explanation that's been proposed...
Dr. Reiss: (...) But, I mean, take English insertion of vowels (...) before the plural marker. You know, so 'bushes' and 'cats' and 'dogs', right? That's a synchronic phenomenon. It passes the Wug test and so on, right?
Dr. Blevins: Yes.
Dr. Reiss: And so, you need some way to evaluate the features set on the suffix and the features at the end of the stem, and decide whether or not you insert the epenthetic vowel, right? That's pretty clear synchronic Phonology.
So that's an insertion case, right? For relative identity of the consonant (...) so there's insertion cases and deletion cases and there's identity and non-identity. So, I mean, it seems that English insertion is sensitive to a certain amount of identity, not full identity.
Dr. Blevins: Ok, yeah, this is a different case. It's a different question. I was just making a comment on that last one. I'm sure there are other people who want to talk. I wanted to change the subject.
Dr. Romero: Maybe we can let John make a comment first.
Dr. Coleman: Yeah, I just wanted to interpose a brief observation about this similarity/dissimilarity metric. This sounds to me like something that can very easily be computed. You know, neural nets or whatever could be used to compute the degree of dissimilarity or similarity of two bits of a signal, or some encoding of that signal in terms of features if you wish. This can be done on the basis of the substance of the input and output, of the two things that are being compared (...) you might be able to convince me that the computation itself is a substance-free computation, but be that as it may, it's easy enough to perform the computation of comparing the degree of similarity or dissimilarity, leaving out specific dimensions if you wish.
Dr. Reiss: I mean, again, I think you're heading, you're arguing against features basically, right?
Dr. Coleman: Not necessarily. I think, actually in this particular case, whether you use features or not or more concrete acoustic artic(...) the computation of similarity of two forms, however those forms are encoded, can be done fairly concretely using a variety of computations.
Intervention: But it doesn't have to be quantificational...
Dr. Coleman: It doesn't have to be quantificational, yes.
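(Editor's note: a minimal sketch of the kind of direct, non-quantificational similarity computation Dr. Coleman has in mind. The dimension names and values are hypothetical, and Euclidean distance is one arbitrary choice among the "variety of computations" he mentions; masking out loudness anticipates the exchange that follows.)

```python
# Graded dissimilarity between two encodings, with chosen dimensions
# excluded -- no existential quantifier needed, just arithmetic.
import math

def dissimilarity(x, y, exclude=()):
    """Euclidean distance over whichever dimensions are not excluded."""
    kept = [k for k in x if k not in exclude]
    return math.sqrt(sum((x[k] - y[k]) ** 2 for k in kept))

# Two tokens encoded on (hypothetical) acoustic dimensions.
token_a = {"F1": 500.0, "F2": 1500.0, "loudness": 70.0}
token_b = {"F1": 520.0, "F2": 1450.0, "loudness": 40.0}

# Leaving out loudness gives a distance driven by the formants alone.
print(round(dissimilarity(token_a, token_b, exclude=("loudness",)), 1))
```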
Dr. Reiss: Well, yeah, except that there's certain... so when people learn phonological systems or do things like this they don't, there's things they pay... it comes back to this issue of which analogies work and which don't, right? So, how loud the person was speaking to you does not affect it, right? But what the F0...you know, F1 frequency was, that seems to be relevant.
Dr. Coleman: Yes.
Dr. Reiss: And so, I would say that the dimensions along which you compute things like similarity and so on are what I call features. And we know when we're born that we don't count the... we don't take account of the loudness of the voice, how far away the person is, or what color shirt they're wearing, and that there are certain aspects of the signal...
Dr. Coleman: Well, that's an empirical question.
Dr. Reiss: It doesn't seem to be the case, right? I mean children don't learn language in a way that would suggest that that is the case. Right? The ambient air temperature doesn't seem to affect language (acquisition).
28:53 - 31:49
31:49 - 34:02
Dr. Coleman: Well, apparently people's ability to discriminate between vowels, vowel categories, does depend upon... can depend... can be biased by the presence of stuffed toys in the room.
Dr. Reiss: Stuffed toys in the room? Well, I wouldn't want to make a theory about that.
Dr. Coleman: Yeah, well there's a very good theory about that, but I think it's amazing what environmental factors people can be sensitive to and are biased by. It's not to say they have to be or they always are, but they can be.
Dr. Reiss: You don't seriously want to have stuffed toy presence as part of your theory of phonological acquisition? Or ambient air temperature or...?
Dr. Coleman: I might.
Dr. Reiss: You might?
Dr. Coleman: Yeah.
Dr. Reiss: Ok, all right.
Dr. Blevins: This is just a point of information for the students here, and that is that Jeff Mielke, who is at the University of Ottawa, is actually working on an automated similarity metric for segments, and so he is actually approaching this without the use of features, but he's comparing it to actual speaker judgements and having some success. So he's at the University of Ottawa.
Dr. Hyman: Yeah, I have a question for John and Louis. I'm going to agree with one part of what Charles said, that 'Phonology is grammar', and I don't know if you see it that way as well, but if you do, is there something that you might call Phonetics that's not grammar? And in that case, would there be a possibility that there is a difference between Phonetics and Phonology? I'm kind of surprised because, as you will see in my handout tomorrow, I've cited a number of people and what they've said about Phonetics versus Phonology. I'm very much in agreement with this notion not to prejudge what is or isn't part of grammar, but some things in Phonetics presumably are not part of grammar. So, I don't understand how there can't be an interface, or how it can all be one, and I was just wondering if you could comment on that, and on what a grammar is?
Dr. Coleman: Well, I'll quickly pass on that, because I think it sounds to me like explaining one unknown -for me personally- in terms of another. I used to think that Phonology was grammatical Phonetics, if you like, the grammar of Phonetics. Sort of a Martinet-type view... Functional Phonetics... but now, having followed the route that I've followed, and come to look at things in the way I do, I say I'm not sure that I can answer the question, because I'm not sure that I understand what grammar is, as something separate from human behavior. So I don't have any more to say on that.
Dr. Goldstein: I mean, the sound that's generated from a vocal tract when a pattern of gestures is imposed on it, I guess, is not grammar. That's Physics. The actual sound that comes out given that I perform certain actions, that's Physics, that's not grammar. But the rest, I mean, the (...) I mean, my interest really is in these representations and how we plan them and how we produce them. I don't have a sense of whether I want to call that part of grammar or not, but I don't think you can pull the nature of those, I mean, the structure of those representations, out; I think that's a different issue from whether this one is a labial or a dorsal, and whether that makes a difference in the Phonology or not. But how the information is structured at various levels, I'm not sure how you separate that from grammar in the broad sense. They're made in a way that (...) there aren't just parts of the grammar that don't care (...) won't care about what the detail of these structures is. It seems to me, my notion of grammar has got to include all of these parts that aren't pure Physics.
34:02 - 36:10
36:10 - 41:38
Dr. Reiss: (...) that sounds like Hammarberg, you know: if it's cognitive, it's Phonology.
Dr. Hyman: Certainly, I mean, in terms of (...) of course grammar has complex either modules or levels or whatever you want to say, and so at the time that some things, like you mentioned rendaku and parsing, you know, which is grammatical, you know, so one could even, OT doesn't need a universal constraint if the voicing feature is a morpheme. So, at some point you get that in the natural history of changes: at some point something that started out as "phonetic", and maybe in that blurry interface area (...) is it part of grammar or not?, does it become something more grammaticalized, morphologized and so forth, at which point it has a different character, I would assume. In that case, there would be no (...) in most people's view of what a grammar is, there would be no controversy that that's grammar. And yet other people, a lot of people, like Abby Cohn, talk about Phonetics/Phonology doublets: that nasalization of a vowel in the context of a nasal consonant may be phonetic in one language and phonological in another... kind of thing, and I guess that's something that you don't distinguish, or it would all be one? Or...? It's something that a number of us have found useful...
Dr. Reiss: I mean... Keating's discussion of that -I mean, she's a phonetician- seems so clear and useful.
Dr. Hyman: Yeah, (...) there are quite a lot of people that make that kind of distinction, so it's kind of interesting that the three of you sitting up there that we haven't got the canonical view. I thought that this was so canonical that (....) it's all over my handout tomorrow...
Dr. Reiss: What is canonical?
Dr. Hyman: The idea that... well, I can just read it to you. You know, all these terms that are used, like quantitative versus qualitative, gradient versus categorical, continuous/discrete or quantal, physical versus symbolic... You know, the terms that people have used, including some metaphors like analog/digital, semantic versus syntactic, as Pierrehumbert put it. The statement: "The relationship of Phonology to Phonetics is profoundly affected by the fact that it involves disparate representations." Pierrehumbert, 1990, page 378.
Dr. Reiss: You know, I hate to tell you but I probably agree with all of that. I probably share your canonical view.
Dr. Hyman: Yeah?
Dr. Reiss: Yeah.
Dr. Goldstein: Well, I don't. I don't think all those things go together, that's all. I mean, there are all of those things, but it's not like this is a domain that's characterized by all of those properties, this is a domain that's characterized... Those are different properties that are true of different aspects of the system and there's not just one (....), this up here it's all digital and quantificational.
Dr. Hyman: (...) modular, I mean, the thing is that even to have these different qualities, well, not necessarily modular...
Dr. Goldstein: Well, I don't know what modular means.
Dr. Hyman: I mean, here's the phonetic part over here and over here.
Dr. Goldstein: No, I don't buy that. Not for me.
Dr. Hyman: But, the relationships would have different statuses in different languages.
Dr. Goldstein: Well, absolutely. I agree that there can be different kinds of, you know, that the process of what can be called nasal assimilation or spreading could be those two different ways, but there could be more than two different ways. It's not like there's the phonetics way and the phonology way. That's the problem I have. When you actually start looking at the system, you see lots of structure. There's lots of information and there's lots of structure. The mistake, for me, is saying it's either Phonetic (...) all of those properties go together on one side, that's Phonetics, and all the properties go together on the other side, that's Phonology. There are different resources that we bring to this. I think that my interpretation of what Charles is getting at is that there is some resource that all this other stuff doesn't get at, and I think that well could be right. On the other hand, for me, the problem is that I'm not sure, because so much of the work has assumed that's the case and started out with those representations that we don't know, we don't know exactly, if we started from a different place, what kind of operations we would need. I have no trouble with needing them, I mean, God, we know how to do Math. That's so weird. I'm not sure what it is that we need.
Dr. Reiss: Even neurons firing in a discrete way. They either fire or they don't.
Dr. Goldstein: Yeah, I have no problem with that, but I just don't know what's in that part for me. If you really started by taking, in some sense, the program that you're suggesting... sort of removing from it everything that you could get elsewhere, other kinds of structures from elsewhere, what would you have left? I don't know what the answer is. I'm sure you don't know the answer either, but that's an interesting point. But again, my objection is, whatever it is, I just don't think there is going to be a Phonetics and a Phonology. There are just going to be layers of things that have structure.
Dr. Coleman: Can I come back on the grammar point again, having thought about that a little more? One of the central duties of a generative grammar, or at least until OT came along, is to distinguish between grammatical, well-formed, and ill-formed strings or structures, and (...) so in the 1960s in the phonological context this was presented by Halle in terms of the human ability to know, or to calculate, that the word 'blick' is a well-formed but non-lexical item, whereas 'bnick' is an ill-formed item, and there is a (...) there is a Gedankenexperiment that proves that human beings have the ability to discriminate between grammatical and ungrammatical phonetic things. But the experiment has been done repeatedly. I've done it. Ohala and Ohala have done the experiment. Greenberg and Jenkins did the experiment, and others besides (...) the experiment to test what ordinary people actually think when presented with these so-called ungrammatical versus well-formed but non-occurring nonsense words. How do they judge them? Do they make a sharp divide between all of the words like 'bnick', which are clearly bad and ill-formed, and all of those like 'blick', which are fine? No, they don't. The results are extremely well known. The degree of acceptability and the degree of grammaticality, disentangling those is very difficult; those words on those scales are very intertwined. There's not a sharp boundary between them at all. Human beings will accept what to a phonological theorist would seem blatantly ill-formed words. In our version of this experiment, for instance, people were far happier to accept 'mrupatien', with a 'mru' at the beginning of the word, as contrasted to 'mupatien' with a 'mu' at the beginning. They had both items in the test set, so they knew there was a difference. They, in fact, gave them different scores. 'Mrupatien' is definitely worse than 'mupatien', but 'mrupatien' is better than 'swammon', which has no phonotactic ill-formedness at all, but which just contains rare, lower-frequency components. This ability of human beings has been studied so well that we actually have a very good understanding of what it is that human subjects use when making this judgement. It's primarily the lexical neighborhood and, to some extent, some knowledge of the phonotactic probability of all of the parts of the word, including the good bits and the bad bits, and these things are all combined to enable people to make the responses that they make. But those results seem to challenge the traditional generative grammatical notion of human beings having an ability to clearly, sharply distinguish between well-formed and ill-formed items.
41:38 - 44:59
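(Editor's note: a toy illustration of the two ingredients Dr. Coleman says subjects combine in wordlikeness judgements, phonotactic probability and lexical neighborhood. The mini-lexicon, the biphone model, the test items, and the idea of reporting the two scores side by side are all illustrative assumptions, not Coleman's actual model or stimuli.)

```python
# Gradient wordlikeness from (i) biphone log-probability and
# (ii) lexical neighborhood density, over a toy lexicon.
import math

LEXICON = {"blik", "brik", "slik", "stik", "blak"}  # one segment per letter

def biphone_logprob(word, lexicon):
    """Sum of smoothed log relative frequencies of adjacent segment pairs."""
    pairs = [w[i:i + 2] for w in lexicon for i in range(len(w) - 1)]
    score = 0.0
    for i in range(len(word) - 1):
        count = pairs.count(word[i:i + 2])
        score += math.log((count + 1) / (len(pairs) + 1))  # add-one smoothing
    return score

def neighbours(word, lexicon):
    """Lexical items differing from `word` by one substituted segment."""
    return {w for w in lexicon
            if len(w) == len(word)
            and sum(a != b for a, b in zip(w, word)) == 1}

for nonword in ("mlik", "blim"):   # hypothetical test items
    print(nonword,
          round(biphone_logprob(nonword, LEXICON), 2),
          len(neighbours(nonword, LEXICON)))
```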
44:59 - 47:41
Dr. Reiss: That might just mean that those kinds of judgement tests have nothing to do with Phonology. That's another interpretation, right? That generative phonologists were wrong in claiming that that was relevant to Phonology.
Dr. Coleman: Yes, it's possible.
Dr. Reiss: But it doesn't argue against the existence of Phonology.
Dr. Coleman: (...)
Dr. Romero: That kind of ties in with one of, perhaps one of the problems that a lot of phoneticians have with standard or traditional Phonology, in the sense that it seems to be removed from any psychological reality, or from things that can be proven; that it is sometimes kind of self-serving, in the sense that it doesn't relate to anything that we can consider real, but rather is interested in its own mechanics or in its own rules, and it doesn't really have much to say about the reality of the world. And maybe that's what John was talking about.
Dr. Coleman: Yeah, it is. I mean, I would accept your point. I think that everyone has got the right to move ground a bit, and retreat a little bit in the face of a result that goes against what you would have expected when it was set up, but if Theoretical Phonology retreats so far into a sort of area of investigation that is beyond (...) completely untestable, then fine, but it's no longer, for me, it no longer has any interest, it's no longer a scientific theory of anything. It's a mode of investigation and study about something, but if it's beyond test then I'm not sure what interest it has really.
Dr. Reiss: You can't do any phonetic studies without referring to phonological and lexical categories, right? How do you choose your tokens? (...) without referring to... we're going to see what the average F2 levels are for /e/. You can't design your experiment without referring to those categories that you're suggesting don't exist. I'm just parroting Hammarberg here.
Dr. Coleman: Well, that's bringing in a new kind of (...)
Dr. Reiss: He's saying they're real. They're more real than your categories. You need them.
Dr. Coleman: Well, as I said earlier, there's a huge literature of work on how people actually categorize vowels and consonants that suggests that attaching too much reality to the notion that the different tokens of an /e/ form a category is very questionable.
Dr. Elordieta: I would like to ask Charles (...) I think it's related to the whole thing about Phonetics and Phonology. Probably I misunderstood you, but in your talk you seemed to say "let's not worry as phonologists about why [p] doesn't become [s] after [r]". Then you lose predictive power. And also, if you think about natural classes and all that, or features, right? I mean, there's a whole discussion, probably after SPE, which gives rise to feature geometry, which may be wrong, but at least there was a principled explanation, or at least an approach, as to why you cannot find that, and if you sweep that into Phonetics, or substance, let's say, it's because transducers do not do that. I would like to ask you to clarify that.
Dr. Goldstein: Why is there a problem? (...) Our theory of language now as a whole makes that prediction, but this grammatical aspect just doesn't happen to be relevant to that. Why is that a problem? We have a theory of language that predicts that [p] isn't going to become [s] after [r], but it's nothing about the structure of this component that does that.
Dr. Elordieta: It seems to me that he said...
Dr. Reiss: Ok, I think, I agree with Louis, basically, but I kind of cheated with that example. I would prefer to have a more constrained theory of rules. Let's say maybe a rule can only change one feature at a time. (...) But we know that languages have alternations between segments that, you know, if we're going to do one feature at a time, we have to have three rules to do it. You know, you have lots of languages where voiced stops alternate with voiced fricatives, but instead of [d] alternating with [z] it's an [l] or an [r] or something. If you're going to do it step by step it takes several rules. So you could do something like that, you could get some bizarre alternation between segments, but I think I would take some approach like what Louis was suggesting, that you potentially could get, you know, there are some very strange sound changes. The famous one in Indo-European is the Armenian word for two, "erku", which comes by regular sound change from "duo" in Indo-European. From "duo" to "erku", but it's millennia of sound change after sound change after sound change, and so you can get bizarre chains of events, or they're bizarre when you look at the beginning point and the end point, but there are many that are not going to occur. So, the absence of that particular alternation is just the absence of all the possible sequences of sound change that could occur and give rise to phonological rules; that one happens never to have occurred. That's all the claim is. So the theory of Phonology, it's not (...) the predictive power of Phonology, it's not within the job of Phonology to account for the absence of that. It's a fact about Historical Linguistics.
47:41 - 51:16
51:16 - 53:24
Dr. Elordieta: So, basically you can find anything in any language?
Dr. Reiss: No, you can't find them, because they might be unattestable, but that's not a phonological effect.
Dr. Elordieta: So tomorrow morning somebody can find a language in which that happens and...
Dr. Reiss: No, if we can implant into somebody's head a Phonology, we should be able to implant that, just like the Halle and Idsardi stress parameter setting that nobody could ever learn, no data would incite you to posit that, but if we could implant it, if we could program your brain, there should be nothing stopping us. It should be computable by your stress faculty. That's the idea, but you're never going to find it because no natural situation could ever lead you to posit it. So it's unattestable but it's computable. UG allows it, but the nature of the world and the way that babies are born and learn language from their environment (they are not programmed) means you're never going to find that parameter setting.
Dr. Elordieta: So, you are not worried about what is possible and what is not possible? For you that is not a valid question?
Dr. Reiss: Right, so in some sense, it sounds like I am overgenerating the class of possible languages, right? But I am doing it (...) at such an abstract level that I'm not ever going to refer to whether you have final devoicing or final voicing. I'm saying you can have feature changing rules in codas. The fact that it happens to be voicing is an accident. I've said this in conversation before, it's very convenient for me, or I would be very happy to accept that sign languages have Phonology and use the same Phonology as spoken languages. All you're doing is setting up the different input/output systems. Different transducers. I don't know if it's true or not, but obviously if you're a substance-free person you want to believe that they're the same. But you don't have voicing, right? Voicing is just a label.
Dr. Hyman: I think what would have a bigger effect on me is if you were to say that here is this great result I get by making this assumption. You know, that this stuff falls into place because certainly what I, and I think most people, get excited about is the result, and it's not factoring out and showing that any A can become B in the context of C and it doesn't matter what A, B and C are.
Dr. Reiss: Right, yes, because I was honoring my audience, who are mostly phoneticians, and trying to stress the importance of Phonetics. You're a phonologist, so when I send my results to you, to help you understand what I mean (...)
Dr. Hyman: Well, I like the long distance stuff you were alluding to, but, you know, I think that, to me, that's kind of the argument. I'm more excited about interfaces. I love the Phonology/Morphology interface, the Morphology/Syntax interface, the Phonology/Syntax interface, and I guess the big question here is whether there's something called Phonetics/Phonology interface that's somewhere on a par with those other guys, you know? and I think where we're getting disagreements on that, or whatever it is, but I think the excitement is in the interfaces for me, and that includes the substance part.
Dr. Reiss: But I mean (...) I can give you papers, I have them on my computer right now.
Dr. Hyman: You want to tell them about it?
Dr. Reiss: Well (...) I can't present a phonological analysis that shows the superiority of this approach.
Dr. Hyman: No, because in the abstract obviously the (Halle and Idsardi) makes sense, I understand the principle, and so forth, so that's all fine and great. Unlike with Louis's presentation, where I wanted to go and try to apply it to everything I knew, I don't know what to do with this. That's why I'm having this reaction, but maybe that's because it wasn't aimed at me, as you're saying.
Dr. Reiss: Right. Now you've promised to read my (...) some of my papers. Ok, very good.
Dr. Hyman: Do you have any with you?
Dr. Reiss: Yeah, I've got them all on my computer. But I mean (...) They're available online.
Dr. Hyman: Because I don't have anything to read on the plane home. That goes for any of you.
53:24 - 55:36
55:36 - 01:03:31
Dr. Romero: Juliette?
Dr. Blevins: Yeah, you, Charles, I have a question for you. You went very quickly by the role of Typology in what we're all doing, Phonetics, Phonology, whatever you want to call it. I thought you said you weren't really so interested in Typology, and I'm trying to understand how you could even come up with a feature system, which you seem to be supposing in your calculations, if you don't look at cross-linguistic sound systems?
Dr. Reiss: Well, I think, if you look at (...) I think that my basic methodology is actually the same as Syntactic Structures uses. So basically you say: start with the weakest thing you can do to generate strings, and then you say that's not good enough. Finite-state grammars aren't going to be good enough for Syntax, so we need something more powerful. I think you could say (...) start with (...) one language, and I need at least three vowel distinctions, a three-way distinction for vowels. So I need features to distinguish (...) enough vowel features to distinguish this three-vowel system. That's the minimum I need. I don't start positing 150 different vowel features. Then, as I look at more languages, I may expand that when I have evidence, when I'm forced to expand: I need more contrasts, I posit more features to get those contrasts.
Dr. Blevins: But that's Typology.
Dr. Reiss: Yeah, so it's Typology in the sense of allowing, yeah, ok, it's Typology in the sense of allowing you to expand how many primitives you need...
Dr. Blevins: But not just primitives, likewise for your locality. I mean, so, you're going to look, I don't know what particular phenomenon you're interested in, but let's say you're looking at harmony systems, in order to have a research question about locality, it seems to me that you're going to need to look at more than one language.
Dr. Reiss: Well, let's say you look at one language and it seems like all interactions involve adjacency, and then you look at a new language and it doesn't work and you have to look at the onset of the following syllable. That's just normal scientific method, right? You expand your ontological commitments when you have to. When you're forced to posit something new, yes, in that sense, it's Typology. I guess that's not (...) I mean, Typology is used in various ways, right? The OT use of Typology is to make your constraint set (...) the Factorial Typology says any re-ranking of your constraints should basically be, they say, an attested language, or a real language, and what I'm saying is that that's not enough. So Typology gives you a lower bound on what you need. So I agree with you, you need a lower bound. So I'm saying you need to allow more than what your survey of the 5,000 languages that you look at (...)
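(Editor's note: the lower-bound arithmetic behind the three-vowel example a moment ago, as a short sketch; the vowel inventories named in the comments are illustrative.)

```python
# n binary features distinguish at most 2**n segments, so attested
# contrasts give a LOWER bound on the feature inventory -- expand only
# when forced, as Dr. Reiss describes.
import math

def min_binary_features(n_contrasts):
    return math.ceil(math.log2(n_contrasts))

print(min_binary_features(3))   # 2 features suffice for, e.g., /i a u/
print(min_binary_features(5))   # a five-vowel system forces a third
```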
Dr. Blevins: And I mean, again, for the benefit of the students here, I mean, I think it's really important to stress how little we really know about the world's languages and that every day new discoveries are made about the range of variation in sound systems.
Dr. Reiss: Right, but to come back to the Syntactic Structures example, I think (...), I know John doesn't accept this, but I'll go on anyway: Chomsky's demonstration that finite-state grammars are insufficient for English syntax, and therefore insufficient for universal syntax, cannot be challenged by any amount of new data, right? Because he says English is a human language; I'm trying to model the human language faculty. No new data can affect (...) so you can't say he didn't look at enough languages. That result stands no matter what you find about any language. The minimal power that the language faculty has cannot be finite-state. So that stands even if you only look at English.
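(Editor's note: a minimal illustration of the finite-state argument being invoked. The pattern aⁿbⁿ stands in for the nested dependencies in Chomsky's demonstration; the recognizer below is a sketch, not his construction.)

```python
# Recognizing a^n b^n requires tracking n, i.e. unbounded memory (here a
# counter). A finite-state machine has only a fixed number of states, so
# no single FSA can do this for arbitrarily large n.
def recognize_anbn(s):
    depth = 0
    seen_b = False
    for ch in s:
        if ch == "a":
            if seen_b:          # an 'a' after a 'b' is ill-formed
                return False
            depth += 1
        elif ch == "b":
            seen_b = True
            depth -= 1
            if depth < 0:       # more b's than a's so far
                return False
        else:
            return False
    return depth == 0 and seen_b

print(recognize_anbn("aaabbb"))  # True
print(recognize_anbn("aabbb"))   # False
```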
Dr. Blevins: Well, we're not talking about Syntax here. I don't want to get into Syntax. I mean, I don't (...)
Dr. Reiss: But the same, I can take my quantification case. I have to say: I need at least this much power. You know, let's say, if I find one example that you're convinced is real where you do need the computation of non-identity, so let's say not McCarthy's case. If I find that one case and I say the phonological component needs the power of quantificational logic, looking at 100 new languages can't make me go back to a lower level of power, right? It's a minimum level of power that I've discovered I need, and I should only be driven to that when I'm forced to, and that's just standard scientific method, right?
Dr. Blevins: Right, but I think there's some questions in this room about whether those questions arise when we look at what we know about how people use language specifically in a domain of perception and production of speech sounds.
Dr. Reiss: Yeah, but that's a methodological question of how you start. You just assume the least powerful thing and you use Typology to increase your power. That's all.
Dr. Blevins: Yeah, so Typology is important.
Dr. Reiss: But not the way that OT people use it.
Dr. Blevins: We don't really use it. Only when it's convenient.
Dr. Reiss: Ok. All right, so we're on the same side somehow.
Dr. Romero: Any further questions or comments? We're approaching an hour, so I think if nobody has anything else to comment on we'll leave it here.
Dr. Coleman: One final comment that I think is intended to be very constructive (they laugh)
Dr. Romero: This is what I wanted to avoid.
Dr. Coleman: No, no, I think it will be. (...) Your observation, Charles, about the features of sign language (...), this is not something we can do now, but it is conceivable to me that within a few years we might well be able to map out in the brain where the features are, if you like, ok? Now, when people are processing a spoken language, well, it's something that we could conceivably start to look for and look at and maybe get somewhere (...). What is already known about (...) sign-language users' knowledge of sign language is that the mental activity recruits the same regions of the brain. That is, in fact, the portions of what is nowadays our auditory system, which I claim to be the site for lexical storage, were in other animals, and historically, originally visual: it is an old visual system.
Question: (not intelligible)
Dr. Coleman: Well, maybe it does, maybe it doesn't. Now, if we can look and see what's going on, where the features are for spoken languages, if that were a feasible thing to do and there was an answer to that, then you could presumably do the same thing for sign language. And if, in fact, you found that the same regions were employed, that the same set of features in the brain were employed, then you'd be in a very strong position. That is beginning to sound to me like an eminently testable hypothesis. So (...) this shows a direction for future work that maybe we should be working towards.
Dr. Romero: All right, well, thank you very much to the three speakers and everybody else for an exciting discussion, and we'll leave it here for today and we'll see you all tomorrow.