Sunday, 25 February 2018

Event report: "Pronunciation: The Missing Link" - Chester Uni, Feb 17, 2018 - Part 2

Hello, again! I said I would devote an entire post to my presentation at the PronSIG event in Chester (see the full report here), and I hadn't done it yet because I'm currently drowning in deadlines, and also because this presentation is part of an article I've been writing, so I didn't want to give it all away just yet. I used my own data to show the phenomena in question in the presentation, and I cannot distribute those videos or snapshots for ethical reasons (though I have permission to show them in presentations with an anonymising filter, which I did), which is another reason I won't be posting my slides publicly, though I'll add a few captures.

This time I cannot make any disclaimers regarding the representation of other people's presentations, as I'm summarising my own, but there are a few points I need to make: 1) even though this is my own work, I don't claim that my points here are "innovative" or "unique", because I'm sure there must be a lot of people around researching this. 2) And yes, this is about intonation teaching, the theoretical background we develop, how much it responds to real-life interaction, and how we can teach English intonation for interaction, and it is based on L1 English (someone might then want to record conversations using English as a Lingua Franca and do something similar across different L2/FL Englishes).

3) Another remark I always make at the beginning of my presentation is that, at least in Argentina, to the best of my knowledge, the best work on English intonation and speech genres from a SFL perspective has been (and still is) carried out at Universidad Nacional de La Pampa, in a project including researchers and EFL teachers Lucía Rivas, Miriam Germani, and the rest of the team (sorry I cannot name you all). I have collaborated briefly as part of the project, and I'm very grateful to have worked with them, as it's always great to exchange ideas with a group of like-minded academics who want to see intonation teaching theory evolve.

4) And my final disclaimer: because my PhD research is based on language in social interaction, and I listen to everyday conversation all day long, there is a lot I get to notice about "language in the wild" that I cannot do justice to when I make a presentation for ELT. My selection of points for this presentation responds to the need that English language teachers have to be presented with theory on intonation that can inform their teaching. Some of us might want to go further and pursue an MA or a PhD because we want to engage in linguistic research, but many teachers will not, and should not be expected to. So if any of my readers is a discourse or conversation analyst, yes, they will surely gasp in horror when they see some of my simplifications, but I am very respectful of the teachers I have trained and train, and because I am a teacher myself, I know we need something to hold on to, even if it is a "half truth". So this is me using my researcher role to inform my teacher trainer role, showing other language teachers how research on language in the "real" world can help them develop theory for intonation that can empower students across different speech repertoires, rather than theory based on introspection or decontextualised examples.

5) And yes, I wrote this in one sitting while I was having my Sunday breakfast because my blog is of secondary importance at the moment (Sorry! Forgive the typos, etc etc)

The framework

I use Systemic Functional Linguistics to inform my view on genre, and as we know, different generic manifestations may have different configurations, which illustrate and also construct different social purposes. Even though texts may exhibit different levels of genre hybridity and blending, there are features that make generic types recognisable, even at a glance.

So I asked the audience to take a quick look at four written texts and identify which generic types they would associate them with. They did so quickly and successfully, even without having read the texts. That already says a lot about the features that we, as members of a culture, associate with particular social purposes.

I then did something I once did in my Discourse Analysis and Phonetics II lessons: I played some snippets of spoken genres, in a distorted form, and asked the audience to identify them. That was harder, especially because of the audio distortion, but the audience guessed quite well. So then, beyond the lexico-grammar, there are features that make spoken text types recognisable as members of a group, as Tench (1996) illustrates:

Prosodic features in different speech genres (Tench, 1996)

The next step is to problematise the role of written vs spoken as a dichotomy...

...and to see these uses of language as part of a continuum, with texts illustrating greater levels of "spokenness" and "writtenness" (Eggins 2004; Mortoro, 2012 > class notes; Flowerdew & Miller, 2005). We can see spokenness and writtenness, and monologism and dialogism, as extremes on a cline, and the organisation of different speech genres (and yes, I know that there are many other ways in which we could organise those types in the continuum; this is just one approximation) as dependent on a number of features including levels of pre-planning, use of formulaic expressions, possibilities for readjustment, contingencies, and whether the goals and trajectories are clear from the start or not:

The truth is that in many ELT lessons, when we say we "do speaking", we are probably working on the monologic, more written-like types of text, and even when we "do conversation", we are mostly teaching lexico-grammatical formulas that contextualise more monologic types of texts (let's be honest: how many times in ordinary interaction do we preface our talk with "In my opinion/view..." or "Firstly...Secondly..."? In my 8 hours of conversational data, hardly ever!).

Some lit-review and data-based findings 

So the core of my presentation was the comparison of two bigger groups of "genres": narrative and expository (which, as we all know, are made up of a number of "sub-genres", each differing in their possible stages and lexicogrammatical configurations...see Eggins & Slade, 2007). I presented snippets of more monologic types of stories (as in the introduction to a TED talk) and conversational stories (from my own data), and did the same with more expository texts (more difficult to trace in ordinary interaction). I presented a few generalisations as to possible differences we may encounter in narrative and then expository texts (there is a whole body of literature on this, and yes, I went all simplistic for the purposes of the presentation, heading towards practical advice for teaching...before any conversation analyst starts rolling their eyes!), putting together my knowledge of Discourse Analysis and Conversation Analysis:

My idea was to show how bundles of features may differ in more written- and more spoken-like texts, and how we can teach them for monologue and for interaction; in particular, how we can teach our EFL learners to be co-interactants, and not monologists.

I discussed, based on data, a few generalisations that we can make in the light of David Brazil's account of intonation sequences and combinations, with rising tones (the "loop" symbol) accompanying contextualising, background information, and falls moving discourse forward (the "play" symbol). These patterns are more common in the Orientation and Complication stages of narratives (later stages are generally made of greater "play" sequences), and they were found in both monologic, and conversational stories. 

In general, the Orientation stage was found to be "neater" and more clearly patterned in monologic stories than in conversational stories, where the background actions and the contextualisation in the Orientation stage left slots for listeners to produce their "go-ahead" responses at the story opening, and then continuers and markers of affiliation (Stivers, 2008, and others), not to mention how recipient activity may "derail" the story, leading tellers to find a moment to make their way back into it.

I also discussed the role of accentuation/tonicity: Event sentences (Gussenhoven, 1984) are often used to make explicit the surprising or contingent nature of the "remarkable event" in the Complication stage (I have also found passive and causative sentences doing that in my conversational data).

I made some generalisations as to how we can teach intonation for interaction, and showed how level tones (the "pause" icon) and pauses can also be used by our students to engage their recipients in conversation and create opportunities for realistic interaction, which will always put students in a position to have to re-adjust and co-create, instead of thinking of speaking as the clash of two independent, unrelated interactional projects by two separate people (as many of the interactions we hear among students in international exams seem to be!). The chunking (tonality) of background or foreground information into blocks with suitable tones, and the use of pitch height, can also inform recipients of where their legitimate slots for incoming responses could be, and what type of sympathetic/agreeing/etc. response can be expected from them (again, CA/IL people, if you are reading this, just cover your eyes for the slide that follows!):

In the same way, I examined patterns and configurations in expository texts in both "monologic" and conversational data.

Some ideas

I then moved on to present ideas to create opportunities for students to use intonation patterns for both more "monologic" and more "dialogic" types of texts:

Creating opportunities for the use of tone in signposting & contextualising in more monologic expository text types.

A template for the planning of spoken narrative, together with associated lexico-grammatical & intonational features

Creating opportunities for co-construction in interactional "expositions" (descriptions of states of affairs, or procedures...)

Final remarks

I know this write-up may look confusing, and it can hardly give you access to what I demonstrated in my presentation, as I'm not including the data snippets that illustrate all this here. I also know that many of the things I said were not clear enough for all the members of the audience, as I am aware my enthusiasm often leads me to be overgenerous and overwhelming, and people get drowned in my enthusiastic rant, and I end up saying more than people can process in a limited amount of time (one of the things it's high time I learned to overcome!). But at least I wanted to show you a preview of how my current research on prosody in interaction may, at some point, find its way back into language teaching. I also wanted to once again uphold what my research keeps confirming: intonation choices are co-built in interaction, and they reveal, and simultaneously co-construct, context and social action. And all theorising on it, in my very humble view, should bear that in mind.


I would like once again to thank PronSIG, and Mark Hancock, for inviting me to share all this with the audience at the "Pronunciation: The Missing Link" event. I would also like to say thanks to Dr. Ogden and Dr. Szczepek-Reed for their advice during my brainstorming period for this presentation, which I think ended up being something different from what I originally envisioned. And of course, a massive thank you to the Department of Language & Linguistic Science for supporting my research in every way.

Sunday, 18 February 2018

Event report: "Pronunciation: The Missing Link" - Chester Uni, Feb 17, 2018 - Part 1

Last Saturday I left the East to cross the Pennines and after a three-hour train journey I arrived in the wonderful and picturesque city of Chester.
Lovely Chester (credit: MNC)

PronSIG were holding a new event at the University of Chester: "Pronunciation: The Missing Link". It was a small but really friendly event, and the audience was really keen, so the atmosphere was right for us pron-thusiasts to share our passion, quandaries, and ideas.
(Credit: Catarina Pontes)

As many of you may have inferred from my posts, I take this whole pronunciation teaching thing really seriously, and I have always been affected by the tension between what I know about pronunciation and intonation in the real world, what happens in the classroom, and what teachers need to know in order to make all this "real life mess" accessible to learners. I was happy to be at this event, because the talks were all about problematising many "set truths" in ELT, while still providing solutions that fit the reality of our classrooms.

I will be writing up a small report on the event, but if you want to see how it developed, you might want to check the #pronsigchester hashtag, where all the live-tweeting went on. As usual, all potential errors in understanding the claims of the presenters are my own. 

The first in line was the always great Richard Cauldwell, with "Pronunciation and Listening: The Case for Divorce". Richard reminded us of his great metaphor for the world of sounds out there: The Greenhouse, the Garden, and the Jungle. He also refreshed our memory in terms of how the Careful Speech Model, which is privileged in ELT in the teaching of pronunciation for production, does not hold for learners' perception of the "mush" of speech. This is why for the teaching of listening we need to embrace the Spontaneous Speech Model, a model that relishes the sometimes unruly (at least in terms of prescriptivist rules) nature of the "sound substance". The sound substance differs from the "sight substance" in a large number of ways, and traditionally, the teaching of listening has focused on the "logic of meaning", guiding learners to fill in the gaps based on meaning concerns, rather than on decoding the sound substance. Cauldwell invites us to put ourselves in our students' shoes and see how their hearings may actually be "reasonable hearings" in terms of the sound substance (one of the many examples was that of learners hearing "peoples" for "pupils". The speaker was producing GOOSE fronting all the way, and yes, no doubt that it could indeed, within the logic of the sound substance, have been a "reasonable hearing". The logic of meaning and grammar would not have allowed it, of course). Richard then presented a number of cases of processes of connected speech ("streamlining processes"), as always going beyond the neat rules of assimilation and elision that we see in textbooks, and introducing a nice catalogue of processes and wordshapes we find for words like "certainly", "obviously", and many others. Many of these points will be tackled in Richard's upcoming book, "A Syllabus for Listening - Decoding".
Richard presented a workshop in the afternoon, but I am afraid I was in another session, so I cannot report on it. I know there was a lot of "mouth gymnastics" involved in the production of different soundshapes...

Gemma Archer was the next presenter, and her focus was on pronunciation assessment. It was an interactive presentation, and there was reflection on a number of reasons why teachers may not do pronunciation assessment in the classroom beyond box-ticking forms in speaking exams. We were invited to analyse different types of pronunciation assessment (passages, minimal pairs...), and their strengths and weaknesses. One of the most widely criticised aspects, by far, was that many pronunciation tests are based on reading aloud, which, as we know, is an altogether different cognitive activity. We know that having a set reading test allows for the narrowing down of what we want to assess, and uniformity in the type of output we get from our learners, but it is always worth remembering that reading aloud is not speaking. A few interesting alternatives were presented: the use of Diapix, and of story boards, to elicit less controlled speech, while still making sure that some of the exponents of what we want to assess are there. It was not mentioned in the presentation, but my favourite form of less controlled pronunciation "test" or practice is role play, and for intonation, at least, Barbara Bradford's Intonation in Context is fabulous.

(I was up next, but I will be writing a separate blog post on my presentation.)

Catarina Pontes led a presentation called "Five Reasons why pronunciation must be included in your lessons". In a pronunciation event, this would sound like preaching to the converted, but as it was planned as an interactive presentation, it ended up being a very useful and engaging forum. Participants shared ideas of activities and resources they used in their classrooms, and some teachers voiced concerns connected to experiences encountered with students, such as reference accent issues, and the exposure to different accents.

Annie McDonald presented some ideas to help students decode spoken language better by listening in chunks. Annie presented a number of mondegreens and showed how they can be analysed to see what kind of processes students have engaged in to make sense of the sound substance. She moved on to discuss how the regular listening lesson primes students into making sense of texts from their content schemata but does not teach them to decode the actual stream of speech, so that next time they encounter a similar instantiation they can recognise it. Annie tried an informal experiment: she selected a few sentences that students were meant to decode word for word, and she worked out her students' percentages of success (quite low). The following lesson, students listened to the text again and were then able to decode some chunks of speech more successfully. Annie recommended the use of YouGlish and TubeQuizard to look up regular chunks of word clusters so that students can listen to the many manifestations of the same combination of words. Like Richard Cauldwell in the morning, Annie played collections of words/word clusters together, which enables students to get a taste of inter-speaker variation in the production of the same lexical sequences.

Mark Hancock presented a number of interesting activities to make the teaching of tonic stress "simple". We all know what a nightmare the system of tonicity can be, and personally I always feel soooo guilty teaching the "rules", as I know tonicity is so context-dependent and there are so many exceptions. However, Mark Hancock succeeded in presenting some small contexts for participants to decide on where to put the nuclear accent (which he called tonic stress, as in many other pieces of work). There were some interesting debates, as some participants produced alternative versions (oooh, a flashback to my Phonetics 2 lessons!), and as others were perhaps not aware of where they themselves were placing the nuclear stress (something I notice I may at times have trouble with myself when I analyse my Spanish speech). The activities presented were truly interactive and easy to apply in our lessons, and they centred on the following areas (I'm using the technical names here because I'm a phon-freak, but Mark was very careful in his simplification of these): deaccentuation of Given info, contrastive focus, intonational idioms/fixed tonicity, and stress shift. All in all, an interesting overview of tonicity with simple activities that I personally believe can help English learners become aware of tonic stress.

I once again want to thank PronSIG and Mark Hancock for having invited me to be part of the event. It's been a delight to go back to my first teaching pronunciation love, and to be around experienced teachers who have so much to share, and who also need someone to tell them that some issues are indeed difficult but that there are ways out. I'm really pleased to see how the teaching of listening is evolving, and how teachers are not being undermined or treated with contempt when it comes to how complex pronunciation can be, and how many thorny issues and sides to it there are. In my humble opinion, finding ways to simplify things for teaching should not mean making people feel dumb (as I have been made to feel in some contexts in the past), and in an event like this, it's clear that no one is treating pronunciation teaching lightly. So I am really happy to be involved in this joint quest for truth and teachability and to be able to share it with like-minded people.

My next post will be about my presentation. Personally, and given the feedback I got during lunch and some of my own hunches as I was designing it, it was a pleasant surprise to realise how my research can actually inform English language teaching practices when it comes to the teaching of intonation for conversation (and not for monologue.....brace yourselves for a rant in an upcoming post!), so who knows...once my thesis is ready in two years' time...

Tuesday, 2 January 2018

Bliss & Fear: Teaching an Intro to Phonetics seminar to L1 speakers of English

Hiya! Sorry about my blogging silence. In part it was due to my having finally embraced the fact that I am no longer a pronunciation teacher, nor a lecturer in Applied Phonetics, apart from the fact that this context I am in has humbled me in many ways, and I no longer feel entitled to an opinion on many issues. I guess I have become even more aware of all the things I don't know, and of all the stuff there is out there to learn. However, in this new world I am in, I think I can position myself as an experienced teacher in Applied Phonetics, at least as someone who has held that role for a decade in Higher Education and has learned a lot from success and failure, and at the same time, as a student (re/un)learning a lot of Phonetics, so I think that perhaps the next posts will be written in that "capacity", if you wish.

This time, I will be writing (in one sitting, as usual, so forgive the typos) a few reflections on the most exciting challenge I've faced this last term: teaching a seminar on Intro to Phonetics to three groups of students, most of whom speak English as their L1. I would like to compare this experience to my teaching experience in Buenos Aires, and share with you how I felt in this (terrifying) journey, and what I have learned.

This term at York, I have been in charge of three of the nine seminars in Intro to Phonetics and Phonology for undergrads (mostly L1 speakers of English) in their first year of their BA. We've got students from different BA programmes, including degrees in different languages, and in Language and Linguistics. In Buenos Aires, I taught Phonology (though in practice it was Phonetics AND Phonology) at a Translation programme to Spanish-speaking students with a B2 level of English (I've taught a few other courses, most of them at Teacher Training programmes but I'll be discussing Phon1 as it's the one whose content mirrors my module here at York).

At York students have a lecture every week, taught by the module convenor, and then a couple of days later they come to seminars with their homework and reading (hopefully) done. Apart from these two compulsory hours a week, students can come to backup sessions or office hours for questions or extra practice. My students in Buenos Aires at the Translation programme had three running compulsory hours a week, comprising theory and practice on articulatory phonetics, phonology, transcription, and ear-training.

My task here at York is to help students bridge the gap between theory and practice in the topics of the week, and to guide them through the procedural part of the course, particularly transcription, ear-training, and applied theory. The first term (10 weeks) is all about Phonetics, and the topics covered include: the anatomy of speech; transcription types; the description of cardinal vowels and vowels in general, and the classification of different consonant groups spanning the whole IPA chart; allophonic variation in certain contexts and across accents; an introduction to different visual representation types; a few bits on acoustics; and ways of studying phonetics instrumentally and experimentally. Students are asked to take in a full textbook (the lovely "An Introduction to English Phonetics" by Richard Ogden) in two and a half months, and to become familiar with and competent in the use of jargon in order to explain articulation. Apart from self-correcting quizzes on the class website, students were assessed during this first term through an essay, in which they had to describe the articulatory sequence (in detail) of their pronunciation (whatever their accent) of the word "pudding". When they are back from their Christmas break, they will have a test on transcription, ear-training, and theory, to bring this first part of the module to a close.

If I look back on my Phonetics and Phonology I courses in Buenos Aires (8-month-long modules),  there were quite a few coincidences in terms of content: even though over there we focused only on General British (with a few remarks on General American), my students learned the classification of vowels and consonants, explained articulatory processes, learned transcription rules and skills, did ear-training/dictation tasks, and engaged in production quite a lot, since the improvement of their pronunciation of English was, in part, the underlying goal of the module.

My Buenos Aires students were assessed in different ways, including recordings of pronunciation practice materials, phonemic dictations and transcriptions, and tests on theory, most of which were related to recognition, with some exercises devoted to explaining phenomena (the Translation courses were more limited in content than the Teacher Training programmes, where perhaps the accounting of theory was done more thoroughly).

In seminars at York, we discussed sagittal sections and animations of articulation to identify sounds in different languages, drew diagrams illustrating manner of articulation, tried a few simple transcriptions of words in different languages, attempted narrow transcriptions of different accent variations, and we also worked on the production of cardinal vowels and other sounds, to build proprioceptive awareness as a tool towards better perception as well.


A first big difference between my Buenos Aires and my York experiences was that in Buenos Aires I mostly focused on a single variety of English, whereas at York I have had to up my teaching skills to cover not only different accents of English and Englishes, but also the phonetics of sounds in different languages, especially in view of the work that, as linguistic researchers, students may have to do to describe languages and language change when doing fieldwork, for example.

Another key difference was that at York, most of my students could hear the difference between vowel contrasts of English (perhaps not so much between cardinal vowels at first, or vowel contrasts in other languages), and I hardly needed to make any point about spelling-to-sound relationships. So, for example, in terms of a FLEECE-KIT contrast, all my York students probably needed to know was the symbols used to represent what they produced or heard, whereas my Buenos Aires students needed to learn the associated spellings, and of course, be trained in perceiving the two vowels as different from the Spanish i-like target, and from each other.

Whereas in my own courses in Argentina students needed to integrate the whole awareness-perception-symbol-spelling package, at York it has mostly been a perception-to-symbol challenge, plus the building of proprioceptive awareness of what they themselves, as L1 speakers of the language, are producing. I would say my English-speaking students struggled most with transcribing what they themselves were producing. It was fun to produce speech sequences in slow motion to identify aspiration, devoicing, and anticipatory rounding, not to mention the comparison between different starting points for diphthongs, and TRAP and STRUT varieties among different students in the class (it was fascinating!). I discovered while doing this how all my teaching of L2 pronunciation had equipped me with tools that my English-speaking students could use to make sense of their own pronunciation, believe it or not!

Another fascinating and scary difference lay in our (my students' vs my) experience of English. I, of course, have the teaching expertise and the theoretical knowledge of Phonetics, but I certainly do not have the experience in accents and in everyday English that my students here at York have. At home, it was perhaps easier to be in "control" of things, since my experience of English and my knowledge and awareness of phonetics were, in general, vaster than those of my students, and we were on an equal footing as Spanish-speaking learners of English. My task here at York forced me to juggle my knowledge from years of reading and teaching with what my students thought they'd heard, and with what I think they could have meant they heard. All of this, plus my getting to grips with their own accents (a huge variety in each group), which also posed decoding challenges on my part every time a question was asked (oh, yes, I had to tune in very quickly to their accents to make sense of their questions!).

In spite of the L1-L2 differential, both my students of Phonetics in Buenos Aires and those in York had similar sets of difficulties in the process:
  • getting to know the symbols and associating them to particular sounds
  • building proprioceptive awareness of what they are producing
  • becoming familiar with jargon and using it appropriately
  • making sense of technical texts
  • writing cohesively and coherently

I know I should not make a big deal out of this, but being able to project my slides, show animations, and play IPA audio files on the spot for everyone to see/hear, as well as having the opportunity to type IPA symbols, or to show a sagittal section instead of drawing it, has made a big difference. Back in Buenos Aires, thanks to the lack of government investment in educational institutions, booking a projector was virtually impossible (two or three for the whole college!), and I would spend a lot of precious time during my sessions writing transcriptions on the board, or drawing diagrams (with the markers I'd buy with my own money, and getting a hoarse voice from the dust whenever I had to clean the chalkboard in some classrooms). Not to mention the fact that we had no internet connection in some of our colleges, so web resources had to be set as homework. Should I also mention that students would have had no access to up-to-date bibliography if it hadn't been for their lecturer's (let's call it) "good will"?
Yes, my students in Buenos Aires did cognitively "record" a lot of things faster because they were always copying from the board, and engaging in some sort of live processing of content that some of my slide-staring students at York may not be doing, but the time I now have on my hands to help students experience and see things and read up-to-date material, rather than have them copy things from the board, is something I am really grateful for.

My experience of having taught Phonetics before has been an advantage despite my linguistic "disadvantage". It has allowed me to sequence the tasks in a way that I think helped students understand the science behind the theory (I'm convinced that it's all about the way we grade content, after all), and it made it possible for me to predict and anticipate some difficulties that students were going to have (which, self-fulfilling prophecy or not, they did have). I obviously do not have control over all the aspects of the course as I did back home, as the lectures are planned and delivered by someone else, the seminar tasks are already set (I did add a lot of things of my own, as I could not help myself!), and even though I mark their exam papers, I have no role in the design of assessment. So in a way, I am delivering someone else's "vision" of how Phonetics should be taught, and even though it's been a challenging thing for me, it's also a good way to learn to see the world and the subject differently (after all, I am on the other side of the world now!).

Of course, I think my undeniable strength as a tutor is my passion. I love teaching, and I love Phonetics, so I think that I may have managed to pass that on a little, with my quirkiness and my cheerful slides, and my constant "could you say that again, please?" to my students, as I attempted to draw their attention to differences among the accents in the room, grinning with fascination as I heard them say the words.

It all goes to show that I have learned an awful lot of Phonetics from my students, to be honest, and I think that on my part (based on the good ratings they gave my teaching at the end-of-term feedback surveys), I have made Phonetics accessible and a little bit more understandable to them.

I'll be back to teaching in a couple of weeks, doing Phonology this time (I have to admit I'm not as excited as I was about Phonetics, which I like better....sorry!), and I hope I can have an even better experience helping students appreciate and understand this fantastic world. And as I do that, learn even more Phonetics from them.

Wednesday, 25 October 2017

Brief colloquium report: The Phoneme: new applications of an old concept

Today I poked my head out of my screen to take a break and attend this very interesting talk by Dom Watt, whose Advanced Phonetics student I'm lucky enough to be at the moment.
Here is the abstract:

And below is my own summary, written in one sitting (as usual!). As always, any inaccuracy or misapprehension of what was presented is entirely my fault. Hope this all makes sense to you!

The talk had the notion of the phoneme at its centre, and all the debates existing around its "existence". The first minutes of the talk were a nice overview of the "phoneme" and the related notions and ideas leading to it through time: from the contributions of the Sanskrit author Patañjali in the 2nd century, recognising abstract categories of sound that present variability at the physical level, and the first Icelandic grammarians in the 12th century, to the writings of Sapir in the 1920s and the "phoneme slices" that people claim to have in their languages.

More modern discussions of what the phoneme came to be understood as were developed by Dufriche-Desgenettes (1873), Louis Havet, and Baudouin de Courtenay (1871), with his psychophonetics and physiophonetics, and of course by Henry Sweet in the 1870s and Daniel Jones as early as 1911. In the US in the early 20th century, the notion of the phoneme came to the surface thanks to Bloomfield.

A few definitions of the phoneme were revisited by Watt, especially those by Jones (1957) and Watt (2009), and a very apt quote by Pike (1947): "Phonetics gathers raw material. Phonemics cooks it".

A very useful metaphor to discuss phonemes and allophones was recalled by Dom, that of Clark Kent and Superman as being in complementary distribution, and Superman and Spiderman for example being two different allophones of two different phonemes. (It reminded me that I used to refer to phonemes as any of us, and allophonic variants as us in our roles and attires: at school, at a party....Lately I've turned to Johnny Depp as the phoneme, and his million characters as his allophones, his "realisations" in films...)
Other interesting comparisons were introduced, such as the grapheme-allograph relations in Arabic, or even the number of ways we can represent a certain letter, say "A", which poses a very interesting question: what is the boundary beyond which a certain sound is no longer "the same"? How far can variation be stretched?

Alternative analyses of the phoneme included Trubetzkoy's (1939) phonemic oppositions grounded in phonetics, and formal notions of phonemes as bundles of features, such as those put forward by Jakobson, Fant and Halle in 1952, based on acoustic analyses of instantaneous "time slices" (somehow looking for the centre of events in the signal). Watt also mentioned a game-changer, the work of Chomsky and Halle (1968), which abandons binarity and allows for phonetic gradation with the introduction of articulatory features in their descriptions.

Watt continued the presentation by referring to the debates on the nature and existence of the phoneme that included quotes from Ladd (2013:370) and Dresher (2011:241). The work by Fowler, Shankweiler and Studdert-Kennedy (2016), who revisit a paper they themselves wrote in 1967, was given special attention, since it provides nine forms of evidence of the existence of the phoneme as an entity, including issues like phonemic awareness, adult visual word recognition, the presence of systematic phonological and morphological processes, the existence of speech errors (spoonerisms), and the fact that co-articulation, as was previously claimed, does not really eliminate the presence of a phoneme.

Of course, as Dom remarks, when we look at MRIs, spectrograms and waveforms, we may not so easily be able to see discrete units, but machines seem to be programmed to see the signal as composed of chunks. It was interesting to see a cochleagram, because as Watt pointed out, it does show perhaps more continuity than a wide-band spectrogram, for instance.

The second part of the talk discussed phonemes in phonetic work done through speech technology, for forensic and also sociophonetic purposes. It discussed some of the findings by (the absolutely brilliant!) PhD student Georgina Brown, who has adapted the ACCDIST programme by Mark Huckvale at UCL into Y-ACCDIST as part of her PhD research. One of the achievements of Y-ACCDIST is the use of the software for speaker comparison even when the data are not necessarily comparable (ACCDIST works well when all speakers have read the same text). I cannot fully do justice to this part of the talk because there are some technical bits that I am not familiar with, and I don't have a head for statistics, but I'll report on what I could follow:
Some examples of the use of the programme were presented, including the measurement of distance between possible pairs of phonemes through what is known as a Feature Selection process, in which several features are left out in order to focus on the most relevant or least redundant ones, which helps the modelling.
Comparisons across speakers were run through the programme, and Y-ACCDIST was able to assign speakers to a particular accent with almost 90% accuracy. It was interesting to hear that the programme was more accurate when particular features (and not the whole set) were compared, and also when human intervention in the filtering of features to be compared was added to the speaker accent allocation process.
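For those of you who, like me, find the statistics easier to grasp with a toy example, here is a minimal, purely illustrative sketch of the general idea (feature selection followed by classification of speakers into accent groups). It is emphatically not Y-ACCDIST itself, which works on distance tables between phoneme pairs per speaker; every name, number, and the made-up data below are my own assumptions for the sake of the sketch:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Made-up data: 60 speakers x 45 features, where each feature stands in for
# some distance between a pair of vowel categories in a speaker's profile.
X = rng.normal(size=(60, 45))
y = np.repeat([0, 1, 2], 20)   # three hypothetical accent groups
X[y == 1, :5] += 1.5           # make a handful of features informative
X[y == 2, :5] -= 1.5

# Feature selection keeps only the most discriminative columns before the
# classifier assigns each speaker to an accent group.
model = make_pipeline(SelectKBest(f_classif, k=10),
                      KNeighborsClassifier(n_neighbors=5))
scores = cross_val_score(model, X, y, cv=5)
print(f"mean accuracy: {scores.mean():.2f}")
```

With fewer, better-chosen features, a toy classifier like this one typically does better than when fed everything at once, which is (very roughly) the intuition behind the accuracy gains reported above.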
All in all, Watt concludes, the discoveries from the application of tools like Y-ACCDIST and the evidence provided in Fowler et al. suggest that it is premature to declare the demise of the phoneme.
The question period was interesting, and it included comments on issues such as the fact that many approaches to speech analysis begin from the notion of the phoneme but fail to see what happens in naturalistic speech and what participants themselves treat as relevant, and that there are considerable phenomena that cannot be explained through the notion of the phoneme. There is always a search for robustness in experimental settings that fails to see that what should be more robust is what is actually done in natural situations.

All in all, a fascinating talk, with a lot of food for thought. If you ask me, does the phoneme exist? I would say that it's like magic, you feel it's there but at times you cannot pinpoint the actual trick that makes it work.

Sunday, 8 October 2017

Brief conf report: English UK North Academic - University of Liverpool, October 7th.

Yesterday I got on the train from York to Liverpool (in what ended up being an endless 3½-hour train-train-bus journey...yes, transport may also fail in Britain!) to attend and present at the English UK North Academic conference (programme here).

It was a really friendly, welcoming environment of teachers of English working in the North of the UK, and there must have been over 100 attendees.  I would like to very briefly report on three of the talks, and then comment on my own presentation as well.

Michaela Seserman from the University of Liverpool discussed the tools she uses in her EAP courses to do pronunciation work. Michaela discussed some important questions we need to ask prior to deciding to use certain apps, and also weighed some pros and cons of each. Seserman proposed a form of integration of the in-built voice recognition systems that smartphones currently hold, the tools that Quizlet offers, and the messaging possibilities of the WeChat platform. Even though it was perhaps not very clear how pronunciation improvement actually came to happen, the idea of teacher and students exchanging audio recordings for practice and dictation via mobile messaging is a very appealing one. As Michaela pointed out, these are tasks that learners can also spontaneously decide to do outside class.

Russell Stannard, the TeacherTrainingVids guy, showed how screen capture software (he recommends SnagIt, but there are free alternatives available) can help you give better feedback on written work. So a teacher may video-capture a student's written assignment and give feedback (as we might do face-to-face) by highlighting areas of the essay, for instance, and making oral comments on it, or showing the assignment instructions on screen to point out what may not have been addressed. It reminds me of the type of recorded feedback I used to give my students, and I agree with Russell that this whole idea of personalising feedback and having a sort of "conversation" with the student and the material really does make a difference. It's a way of "being there" when you cannot "be there", while also showing students we care for them individually and that we can address each of their specific strengths and challenges, which in writing we may fail to do clearly, or which may be misinterpreted.

I particularly enjoyed the workshop on corpus linguistics by Dr. Vander Viana from the University of Stirling. Vander showed us some easily accessible corpora (sorry, readers, but I cannot ensure that these will be freely accessible to you in your context/country) and search engines that we can use to help our students test the frequency, acceptability and likelihood of their lexical choices when writing or speaking. We discussed collocations, colligation, and semantic prosody (which apparently in corpus linguistics is different from how we understand it in SFL!), and we reflected on the claim that we actually process speech in an "idiomatic" way (not idioms in the strict sense, but recurrent chunks; it was such a great intro point to my own talk later, to be honest!). Most of the cited material came from Sinclair (1991); McCarthy et al (2005); and Tognini-Bonelli (2001), and you can read Viana's work if you visit his webpage.
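As a home-made taster of the kind of check these tools make possible (the corpora Vander showed are, of course, far larger and properly balanced), here is a minimal, purely illustrative sketch using NLTK to rank recurring two-word combinations in whatever text you have to hand; the sample sentence and frequency threshold are just placeholders:

```python
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

# Any plain text will do; real corpus queries would run over millions of words.
text = ("the students made a presentation and then the students made notes "
        "and made a presentation again before they made more notes")
tokens = text.lower().split()

measures = BigramAssocMeasures()
finder = BigramCollocationFinder.from_words(tokens)
finder.apply_freq_filter(2)  # ignore combinations that occur only once
for pair, score in finder.score_ngrams(measures.likelihood_ratio)[:5]:
    print(pair, round(score, 2))
```

Scaled up to a proper corpus, this is the sort of evidence students can use to check whether a combination they have produced is actually attested and frequent.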

I was invited to make a presentation thanks to the generosity of Mark Hancock, who put my name forward (I've thanked him publicly many times, but I believe we should always be grateful to the ones who do nice things for us)...and to make it even better, he got me a PronPack t-shirt! And also thanks to Nigel Paramor.
Even though I know my stuff, it is always a bit intimidating to stand in a room full of native speakers of English who teach English and theorise about their own language. I know it is a silly fear, but I know many other non-native teachers of English will sympathise. Anyway.

My talk ("Intonation building blocks for more comprehensive speaking skills training") was based on the type of speaking tasks that I designed for my Lab 3 and Lab 4 lessons at ISP Joaquín V González and Profesorado del Consudec during the last few years. Some background: most of the work that is done during the final two Applied Phonetics modules ("Lab") in teacher training in Buenos Aires (at least) is related to the application of phonological theory to the production of different speech genres (and for this, I am grateful to Prof. Silvina Iannicelli, because I got my first lecturing post filling in for her at ENSLV SBS in 2006, and she had a course planned along a sequence of texts ranging from rehearsed to more spontaneous text type production, and that sort of sparked my interest in the prosodic configuration of speech genres). As I became a bit more experienced, one of the things that usually made me uncomfortable about the type of work we did in these courses was that most of the tasks were based on reading, and there was always an assumption that intonation patterns were easily/automatically transferrable to spoken situations of language use. (It's a bit like doing a million fill-in-the-gap past tense exercises, and then expecting students to automatically and spontaneously use the simple past in their written or spoken stories.)
Plus, at times we also forget that reading aloud is an ability in itself, and that reading aloud after imitating a recorded model of the same text is yet another type of ability that activates other skills and requirements. These are highly useful and valuable steps in the process, but they neither amount to, nor ensure, students' appropriation of intonation patterns. In my tutoring experience, I have had students producing English fall-rises in reading and Spanish rise-falls on the exact same phrase, in a similar context, when speaking.

So at some point in my tutoring/lecturing history, I decided to change that a little, and to use reading aloud as one of the steps of the process, but then also create opportunities for use in slightly more spontaneous speech tasks in a way that ensures that students need to use certain intonation patterns that have been found to have a certain regularity in specific speech genres, or in connection to certain lexico-grammatical structures.

So, back to EUKN: My talk was about speech genres, and how several speech genres have higher degrees of "writtenness" in them (Eggins, 1994), and how these have perhaps more easily predictable and stable patterns of intonation and chunking, whereas more interactive genres challenge the intonational descriptions as we know them (as in the case of "list intonation", or the intonation of questions).

I put forward the metaphor of building blocks as a means of proposing that for some speech genres, it is useful to see information units (and some lexicogrammatical collocations) as part of the same block that students can monitor as a whole as they plan their next block (rather than worrying about putting together a string of words, one after the next, when they talk).

I have followed a process that goes from the breaking of the dichotomy between spoken vs written texts into a continuum of levels of writtenness-spokenness (as SFL scholars have done for a couple of decades), and the use of a building block metaphor consisting of LEGO-type blocks that occur in more written-like spoken genres (where the blocks have a set role and position, and the final goal is clear), and TETRIS blocks that we may encounter in more interactive texts (where trajectories are built as we go along, and there are lower levels of pre-planning).

I will only be able to share a few of my slides, as I am writing an article/resource on the whole notion and application of intonation blocks (and I'm also seeking psycholinguistic and further classroom evidence), and I owe the English UK North attendees the preview of the full set of slides (which I have authorised EUKN to share).

Some comments on challenging, through corpus-study, the notions of "question intonation", and "list intonation". How intonation in real life as manifested in different speech genres does not easily exhibit the intonation patterns described in ELT textbooks.

Reflection upon the fact that we generally don't do speaking training in an integrated manner, as we may do with written genres.
Possible (though never definitive, nor exhaustive, nor always fixed, because language use.... ;) ) organisation of different speech genres along a cline.
The building block metaphor I propose to inform lexico-grammatical, sequential and intonational choices.

An example of a production task (which probably we have done in our lessons a million times!) that we can exploit to teach step-ups in pitch, and contrastive accent.
Examples of lexico-grammatical blocks in initial position that do anticipatory work. These have been found to be quite consistent in LEGO types of texts (the ordering and tone choice works differently in interactive texts)
Example of an outline for student production of short conversational stories that focus on grammatical choices and the preparatory (loop) or advancing (increment) contextualisation by rising or falling tones (respectively)

Example of ways in which we can contextualise reported speech through level tones and contrastive stress in TETRIS-like situations of language use (though also common, with direct speech, in speeches, or lectures, LEGO text types)

Examples of ways in which we can create opportunities for use of level tones in conversational lists (vs counting, or sequences of steps where lists may be found to have rising tones)

During the presentation, I briefly systematised some of these (basically, it was like teaching my whole Phonetics 2 syllabus in 50 minutes!) and presented a number of activities to illustrate how we can generate opportunities for use of these building blocks, and then, of course, it is up to every teacher to find ways of helping students monitor their spoken texts, block by block.

I am sure that the idea of working on speech chunks is not new, or revolutionary, but I wish to emphasise how intonation can be an active, essential, part of each of these blocks of processing and production, and how the notion of a block can contribute to students' awareness that linguistic structures work together, making different contributions in the contextualisation of meaning and structural organisation of speech.

(And the refs!)

All in all, this was a really enjoyable event, and very special for me, as I haven't been teaching for a year (starting this week again, yay!) and I spent this whole year trying to find an excuse to write down the principles and ideas that informed my integrative intonation teaching methods when I was lecturing in Buenos Aires. Hope they make sense to you!

(And now...back to my research. Enough productive procrastination!)

P.S.: this post somehow opened up a chest of memories for me, and I forgot to acknowledge another lecturer, Prof. Claudia Gabriele, who in her own way showed me that there are ways of "creating opportunities" for practice of intonation. I was her Lab assistant for a few years, and I was particularly inspired by her use of role plays and other speaking tasks for a more natural application of intonation patterns. Sorry about this unintentional omission in the original post.

Thursday, 17 August 2017

PTLC, part 3

Hello, again! This is my last post on the Phonetics Teaching and Learning Conference at UCL (click to read parts 1 and 2). I will not be able to report on all the content of every talk, so you'll have to excuse my selection. Any error (if any) is due to my low caffeine levels or lack of understanding of what the claims in the talks were, and I'm happy to make corrections if the authors point them out. And the proceedings will be available soon, so you will be able to read the full papers in a couple of months!

Andrej Stopar Perception of the General British vowels /ʌ/, /ɑ:/, /ɒ/, and /ɔ:/ by Slovenian speakers of English
Stopar presented a continuation of his 2015 presentation, in which Slovenian speakers of English are reported to have participated in perception experiments on different English vowel contrasts after instruction. Slovenian vowels are very similar to Spanish vowels, so I found the results quite interesting, since the vowel that appears to have given students the most trouble is the one in the LOT set. Stopar mentions a few errors that could actually be due to the fact that words and non-words were used (students choosing a THOUGHT vowel for the nonsense word "fot"). In general the perception pre-test results were quite good for one of the groups, but the biggest improvement was seen in the group whose pre-test results had been quite weak.

Kakeru Yazawa, Mariko Kondo and Paola Escudero: "Modelling Japanese speakers' perceptual learning of English /iː/ and /ɪ/ within the L2LP framework". This talk discussed Escudero's model of Second Language Linguistic Perception for Japanese students working on American English vowels /iː/ and /ɪ/, and the results were discussed in the light of Boersma's (1998) Stochastic Optimality Theory, which, unlike traditional OT, sees a gradient (and not a categorical set) of ranking values and constraints, and the perception process as a continuum that does not easily assign vowel percepts to a particular phonological category.

Yumi Ozaki, Kakeru Yazawa and Mariko Kondo: "L2 English speech rhythm of Japanese speakers: An alternative implementation of the Varco metrics". Ozaki et al propose an alternative way of implementing the Varco metrics, because Japanese constraints render the use of the regular Varco metrics for vocalic and consonantal intervals problematic. Instead of using the mean duration of intervals, they suggest an nVarco using overall segmental durations. More proficient Japanese learners were found to exhibit more variability in vocalic and consonantal duration.
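In case the Varco metrics are unfamiliar: they express the variability of interval durations as the standard deviation normalised by the mean, usually computed separately for vocalic and consonantal intervals. Here is a minimal sketch of the standard formula in Python, with made-up durations; the nVarco variant over overall segmental durations is Ozaki et al's own proposal and is not reproduced here:

```python
import statistics

def varco(durations_ms):
    """Standard Varco metric: 100 * standard deviation / mean of interval durations."""
    return 100 * statistics.stdev(durations_ms) / statistics.mean(durations_ms)

# Hypothetical interval durations (in ms) from a manually segmented utterance
vocalic_intervals = [82, 130, 65, 150, 90, 70]
consonantal_intervals = [60, 95, 55, 110, 80]

print(f"VarcoV = {varco(vocalic_intervals):.1f}")      # vocalic variability
print(f"VarcoC = {varco(consonantal_intervals):.1f}")  # consonantal variability
```

Higher values mean more durational variability, which is why more proficient learners of a stress-timed-like rhythm tend to score higher than speakers transferring a more even-timed rhythm.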

Hui-Chuan Liu:  "Identification of Mandarin high-level tone and high-falling tone by Vietnamese learners"
Liu discussed the problems that her Vietnamese learners had in the identification of two contrastive tones, high-level vs high-falling. Since Vietnamese tones have no large falling slopes, this tone brings about difficulty for those learning Mandarin. In her study, Liu found that the duration of syllables had a bearing on the type of errors made by learners, but she recommended not using length as a teaching resource because it is neither distinctive nor reliable.

Eva Estebas-Vilaplana: "How imitation can help the acquisition of L2 pronunciation"
I love Estebas-Vilaplana's presentations in general (and her book is so useful!), and even more so because she teaches at UNED, a distance learning uni in Spain. Teaching English pronunciation to Spanish speakers in distance education is a great challenge, so I was very curious about what she would be presenting this time. The experience that Estebas presented involved getting Spanish speakers to read a text in their own L2 English accent, and another version in English-accented Spanish (Spanglish or Englishñol, should I say?). Her students imitated English rhythm and reduction better in their "mock" versions than in their real L2 English.
Estebas suggests using these "mock" accents to see what kinds of features students store in connection with how they believe English sounds, and using them to improve their L2 English accents.
For some years now, we have used this strategy in Buenos Aires for /t/ and /d/, and it has worked quite well. Many of our students have access to accents of Spanish like Puerto Rican Spanish, or singers like Ricky Martin, whose /t, d/ are alveolar, and that has proven useful. But I had never considered that students could also improve on rhythm using this strategy!

Yusuke Shibata, Masaki Taniguchi and Young Shin Kim: "A brief intensive method to help Japanese learners perform English tonicity" Shibata et al presented a few methods they have used to help learners acquire narrow focus. Basically, they worked on short exchanges like the ones in textbooks where there is verbatim repetition, and they trained learners in deaccenting repeated items. They tested them before and after instruction, and their improvement was considerable. Some audience members suggested doing a longitudinal study to see how much of this "sticks" some time after instruction, since the exchanges used to test were very short, and students may have got the hang of it strategically rather than actually learning about narrow focus.

Marina Cantarutti: "Questioning the teaching of “question intonation”: the case of classroom elicitations" - My paper had two big parts: the first section consisted of reviewing the theory on 'question intonation' that you can find in the intonation textbook materials used at Teacher Training Colleges in Buenos Aires (Wells, Tench, Brazil, Baker...). Starting from their assumptions, and based on my prior experience of these materials being insufficient to account for the choices speakers make in different speech styles and genres, I did a short corpus study of Teacher Talk (one of the speech situations we train teachers on in their 3rd year), in particular of teacher elicitations in the recitation stage. By taking a conversation-analytic approach to turn-taking, sequencing, and embodied behaviour, I found the role of "terminal" tone (the last tone in the question) to be related to epistemic asymmetries and the handling of the turn-taking system, and not in any way related to the syntax of the question or to attitudinal approaches, nor to 'finding out' or 'making sure' concerns (unless we redefine these notions, something I mention in the full paper). My biggest claim is that we can no longer apply "one-size-fits-all" syntactic and attitudinal approaches to intonation, and that functional approaches work only if they properly combine top-down generic analyses with bottom-up, local, turn-by-turn analyses.

Miriam Germani and Lucía Rivas: "A genre approach to prosody: teaching intonation from a discourse perspective"
I'm a big fan of Germani & Rivas' work; we are quite like-minded in our approaches to intonation teaching, and I was very lucky to work with them on different occasions. Germani and Rivas (as I did in my own preso) discussed the shortcomings of intonation textbooks and the simplifications these make, which create two common problems (that all of us who do real discourse intonation face!): a) students taxonomise but do not do any real discourse-functional study; they merely repeat classifications; b) students fail to see the contributions that intonation makes to textual organisation and interpersonal projections of meaning. By using Systemic Functional Linguistics, and following the basis in Martin & Rose's (2008) "Genre Relations" and the descriptions of intonation in Brazil and Halliday & Greaves, among others, Germani & Rivas have helped their students make better top-down, holistic descriptions of meaning in text, and have improved their students' accounting of intonation choices.

Hajar Binasfour, Jane Setter and Erhan Aslan: "Enhancing L2 learners’ perception and production of the Arabic emphatic sounds". Binasfour et al describe how the use of Praat can help learners improve on their perception and production of Arabic emphatic sounds, by teaching them how to identify pharyngealisation in spectrograms and compare their production of sounds to that of accurate Arabic emphatics.

Pekka Lintunen, Aleksi Mäkilähde and Pauliina Peltonen: "Learner perspectives on pronunciation feedback" Lintunen et al reviewed the results of an experience of peer and teacher-led feedback on pronunciation. There were some interesting findings, including the fact that students valued peer feedback for pronunciation, and that they trusted the feedback of their non-native teachers of English. Lintunen et al also made a point of the fact that pronunciation feedback-giving is different from other feedback practices, and so it needs to be taught to teacher trainees as a special skill.

Gladys Saunders: "What does the rapid spread of /u/-fronting in American English have to do with the teaching of French phonetics?" Saunders mentions the now-established process of GOOSE-fronting and the way it can be used as a reference point for learning specific French vowel sounds.

Daniela Martino: "Sequencing and technology–aided activities in the acquisition of foreign sounds" Martino presented a sequence of presentation and practice of phonetic and phonological features, and a number of web platforms that she has used to help her trainees improve their English pronunciation. The first stage was identification, and she used Sonocent AudioNotetaker (which Cauldwell has popularised), another platform (now down, unfortunately), and TubeQuizard to create tasks and quizzes leading students to perceive processes such as weakening. Students are also invited to do transcriptions in IPA by using subtitling software. During the imitation stage, students use Soundcloud to record and upload their productions, and they use the front-facing camera of their mobiles to monitor their articulation. Martino has found the combination of these tools and this sequence to be effective for her students' progress.

Ana Cendoya: "Technology–aided pronunciation teaching in an ESP/EAP course" Cendoya described the techniques she uses to help her engineering students improve their pronunciation when making presentations. She mentioned the use of E-portfolios and strategies to build proprioception, and mentioned how students became less "resistant" to recording and feedback over time.

Hsuehchu Chen and Qianwen Han: "A corpus-based online Mandarin pronunciation learning system for Cantonese learners: development, evaluation, and implementation" Chen et al described their new Mandarin corpus system and how, by using its features and Praat, they have helped their Cantonese learners improve their pronunciation.

Shawn L. Nissen, Kate E. Lester, Laura Catharine Smith, Lisa D. Isaacson and Teresa R. Bell "Using electropalatography in second language pronunciation instruction: a preliminary examination of voiceless German fricatives" Nissen showed us the evolution of electropalatography through the ages, and displayed the newest technology, which has made access to these devices much cheaper. The experiment presented, though limited to a few participants, shows how the information derived from EPG may help learners fine-tune their sounds, especially when the differences between their production and the expected target are quite small.
(Image credits: PTLC Twitter account)

Lunch and coffee breaks were as enjoyable as the talks. PTLC seems like a small conference, but it has the right atmosphere, and UCL with all its phonetic history makes the perfect setting for this meeting. Unlike other conferences, PTLC requires in its call for papers that full papers (not abstracts) be submitted, so if you are thinking of participating in 2019, get your research going today!

Thanks to Michael, Joanna and Molly for organising, and thank you for trusting me with paper reviews as well. It's been a fabulous experience, and I really hope I can present more interesting stuff in two years' time, when my own research will (hopefully!) have yielded some results!