The Versatile ELT Blog
A space for short articles about topics of interest to language teachers.
Illustrative sentences

Language learners benefit greatly from example sentences, since they are an opportunity to learn language from language, my big thing. For this reason, I have devoted a considerable amount of my teaching, training and writing to helping students gain the maximum benefit from illustrative sentences. In the early 2000s, I attended my first Teaching and Language Corpora conference in Bertinoro, a beautiful hilltop town near Bologna, and presented my incipient formula for computationally selecting the most useful sentences from corpora to present to students. I programmed a tool that looked up the frequency of every word in a sentence and averaged the values. Sentence length was also a criterion.

As mentioned in previous posts, the great English lexicographer Patrick Hanks was my colleague at this time, and I asked him what criteria lexicographers used when selecting sentences to include in their dictionaries. He said there was no list. I worked on this further and came up with a list of ten criteria, which I discussed with Patrick, and he added one more. I gave this list to Pavel Rychlý, who was developing Sketch Engine, and his team used these criteria as a basis for their GDEX (Good Dictionary Examples) algorithm. It is now a standard part of SkELL and Sketch Engine. My criteria are listed on this 2006 webpage.

So, it's a good thing that corpora can select illustrative sentences, but can students? And should they? In short, yes and yes. But then what? How does a learner know what they can learn from an illustrative sentence, apart from its being a targeted piece of input which they might soak in, as they do from any input they are exposed to? The answer lies in knowing the properties of the target word that are necessary to shift it from passive to active use. I am a strong advocate of the Collins COBUILD Advanced Learner's Dictionary because it even presents its definitions in full sentences. Full sentence definitions are goldmines.
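The frequency-averaging formula mentioned above can be sketched in a few lines of Python. The tiny frequency table, the length band and the penalty factor here are all invented for illustration; a real implementation would draw frequencies from a corpus:

```python
# Sketch of frequency-based example-sentence scoring: average the corpus
# frequency of each word and prefer sentences of moderate length.
# The frequency table below is invented purely for illustration.
FREQ = {
    "the": 60000, "cat": 300, "sat": 150, "on": 45000,
    "mat": 80, "a": 55000, "is": 50000, "ubiquitous": 12,
}

def score(sentence, length_band=(5, 15)):
    """Higher scores suggest more learner-friendly sentences."""
    words = sentence.lower().rstrip(".!?").split()
    avg_freq = sum(FREQ.get(w, 1) for w in words) / len(words)
    # Penalise sentences outside the preferred length band.
    in_band = length_band[0] <= len(words) <= length_band[1]
    return avg_freq * (1.0 if in_band else 0.5)

# A well-formed sentence of common words outscores a short, odd one.
good = score("The cat sat on the mat.")
poor = score("Ubiquitous is a.")
```

GDEX itself combines many more criteria (rare words, pronouns, sentence-initial position, and so on), but the average-frequency-plus-length core is the same shape as this sketch.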
From the sentence definition, you can easily extract concept checking questions (CCQs). For example:

Collins: A wildcard is a symbol such as * or ? which is used in some computing commands or searches in order to represent any character or range of characters.

Collins: An aphorism is a short witty sentence which expresses a general truth or comment.
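As a concrete aside on the computing sense in the Collins wildcard definition, Python's standard fnmatch module implements exactly this kind of matching, where * stands for any range of characters and ? for a single character:

```python
from fnmatch import fnmatch

# "*" represents any range of characters, "?" a single character,
# as in the Collins definition of a wildcard.
a = fnmatch("report.txt", "*.txt")      # True: "*" covers "report"
b = fnmatch("file1.txt", "file?.txt")   # True: "?" covers "1"
c = fnmatch("file10.txt", "file?.txt")  # False: "?" matches one character only
```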
These sentences typically start with a hypernym, here symbol, which immediately limits what it is and is not. Their definitions progress with the target word's features, functions, etc. Each of these is encapsulated in a phrase or clause in the sentence definition. They are the properties of the word. The Collins then provides example sentences in which the abstract properties are made concrete. If students know what they can learn from full sentence definitions, they can see how the meanings of words manifest in authentic sentences.

I'm writing a student workbook at the moment which will probably be called Discovering Phrasal Verbs, in which students are repeatedly tasked with finding example sentences in corpora. The book explains the importance of the semantics of the phrasal verb particles (prepositions and adverbs) and the importance of the subjects and objects of the verbs. These properties are the most important contributors to the meanings of the otherwise opaque, or at best translucent, phrasal verbs. When you search corpora for a phrasal verb, the sheer volume of data can be overwhelming. Fortunately, SkELL uses GDEX, so the 40 sentences it presents are manageable. The other tool I recommend is CorpusMate because it is very fast, it enables searches with wildcards, and the cotext is colour-coded using the same colours for parts of speech as VersaText. The wildcard searches are necessary when the phrasal verb is separable, e.g. tear .* away, keep .* .* away.

AI is another source of illustrative sentences. In ChatGPT's own words, "The sentences generated by AI are original constructs, created using the language patterns learned during training." They are by definition inauthentic sentences, which means they were not motivated by any communicative impetus, hence they lack real-world contexts. These sentences often resemble those made up by textbook authors and test creators.
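The wildcard queries for separable phrasal verbs, such as tear .* away, map naturally onto regular expressions. A rough Python equivalent, with the verb forms and example sentences invented for illustration:

```python
import re

# Loose equivalent of the corpus query "tear .* away": any form of the
# verb, up to two intervening words, then the particle.
pattern = re.compile(
    r"\b(?:tear(?:s|ing)?|tore|torn)\s+(?:\w+\s+){0,2}away\b",
    re.IGNORECASE,
)

sentences = [
    "She could not tear herself away from the screen.",
    "He tore the poster away from the wall.",
    "They kept the children well away from the fire.",
]

# Only the first two contain a (possibly separated) form of "tear away".
matches = [s for s in sentences if pattern.search(s)]
```

Tools like SkELL and CorpusMate handle lemmatisation and part-of-speech tagging for you, which a simple regex cannot, but the underlying idea of allowing a gap between verb and particle is the same.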
It is reasonable to ask if the trade-off between authentic and inauthentic example sentences in terms of learnability is worth it. Do students really benefit more from authentic than inauthentic sentences? Like all good questions in ELT, the answer starts with "it depends". My "it depends" revolves around what the students are tasked with. If the textbook provides made-up example sentences without any task other than perhaps read, read aloud, translate or memorise some sentences, the students will function at the bottom of Bloom's Taxonomy. Garbage in, garbage out. But if the tasks involve higher order thinking skills, in which the students skim and scan multiple examples of authentic language in search of specific properties to which they have been alerted, they develop a better understanding of the properties of the target word, and ultimately a more sophisticated understanding of language per se emerges. Like all good citizen-scientists, students engaged in "extreme noticing" need systems to record their findings that will in turn deepen their conceptual grasp of the target language and prepare them to use it confidently.

It is well known that guided discovery is not for everyone. I was a school music teacher in my 20s and one would occasionally hear, "Never try to teach a pig to sing: it wastes your time and annoys the pig." This is yet another aphorism attributed to Mark Twain, but who knows? Guided discovery demands a strong rationale, clear instructions, the right tools and an understanding that the students are going to benefit from the multiple affordances of the tasks. It is important that students are made aware of the multiplicity of these learning experiences in the process of acquiring words and their properties. No reflection, no connection.

The sentence is a suitable unit of language in which to observe the cotext of a word, i.e., its collocations, colligations, its subjects and objects and other properties depending on the part of speech.
When you see the word in multiple sentences, as concordances provide, you can discern typical properties. This process of pattern recognition is akin to first language acquisition (FLA), but in SLA our guided discovery tasks bring it to the surface, raising the patterns to conscious awareness. Given the best scaffolding, students can learn a great deal from illustrative sentences.
Sort of knowing a word

Back in 1991, I was teaching English on a weekend residential course in old Czechoslovakia. One of the assistants was a student in the arts faculty, where many years later I would find myself head of teacher training. In chatting with this student, she said, "But you're a native speaker – of course you know every word in the English language."
Max's podcasts are mostly targeted at B1 level, and since his work is partly motivated by Krashen's comprehensible input hypothesis, I listen for gist. My Russian is well below B1, but I comprehend a lot, and could probably retell the thrust of his monologues in English. This is thanks to my study of Russian, its closeness to Czech, and its vocabulary having many English and international words, including those of Greek and Latin origin. I should mention that there are many false friends between Russian and Czech, my favourite being užasný (amazing, awesome) vs. ужасный (terrible). Each of Max's podcasts revolves around a single topic, so there is always the general context to help with the gist, but since he only speaks Russian in the podcasts, you have to infer the topic as well. There is no time during the podcast to analyse his use of words so that you might be able to use them in the cotexts that he employs. It is challenging to observe collocation, colligation and chunks on a single listening, and it is not why we listen. So, while the gym has a leg abduction machine, I would say that our brains have a language abduction machine. Abductive reasoning is a form of logical inference that seeks the simplest and most likely conclusion from a set of observations. We do a lot of abducting when our comprehensible input is only just comprehensible.
Our word knowledge typically emerges over time in both first and second language acquisition contexts. In FLA, our word knowledge mainly accrues through multiple exposures, although we do use dictionaries and chat to friends about new and surprising uses of words. We even read and watch videos about language. In SLA, our word knowledge mainly accrues through structured study, which is both motivated and reinforced by exposure as we read, write, speak and listen. The emergent stages of vocabulary competence can be described thus:
An important application of this continuum is in the revision and recycling of previously studied words. We obviously cannot learn everything there is to know about a word on its first encounter, so this helps temper our expectations. We can also structure the word knowledge that we add in successive revisions. This layering is especially valuable in creating our own vocabulary workbooks and flashcards. I am devoting some pages to flashcards, the use of AI, and this continuum in the book I am writing at the moment. It might have the bumptious title, How to Learn Vocabulary Properly. We’ll see!