The Versatile ELT Blog
A space for short articles about topics of interest to language teachers.
Illustrative sentences

Language learners benefit greatly from example sentences, since they are an opportunity to learn language from language, my big thing. For this reason, I have devoted a considerable amount of my teaching, training and writing to helping students gain the maximum benefit from illustrative sentences.

In the early 2000s, I attended my first Teaching and Language Corpora conference in Bertinoro, a beautiful hilltop town near Bologna, and presented my incipient formula for computationally selecting the most useful sentences from corpora to present to students. I programmed a tool that looked up the frequency of every word in a sentence and averaged it; sentence length was also a criterion (a minimal sketch of this kind of scoring appears at the end of this post). As mentioned in previous posts, the great English lexicographer Patrick Hanks was my colleague at this time, and I asked him what criteria his kind used when selecting sentences to include in their dictionaries. He said there was no list. I worked on this further and came up with a list of ten criteria that I discussed with Patrick, and he added one more. I gave this list to Pavel Rychlý, who was developing Sketch Engine, and his team used these criteria as a basis for their GDEX (good example sentences) algorithm. It is now a standard part of SkELL and Sketch Engine. My criteria are listed on this 2006 webpage.

So, it's a good thing that corpora can select illustrative sentences, but can students? And should they? In short, yes and yes. But then what? How does a learner know what they can learn from an illustrative sentence, apart from it being a targeted piece of input which they might soak in, as they do from any input they are exposed to? The answer lies in knowing the properties of the target word that are necessary to shift it from passive to active use.

I am a strong advocate of the Collins COBUILD Advanced Learner's Dictionary because it presents even its definitions in full sentences. Full sentence definitions are goldmines. From the sentence definition, you can easily extract concept checking questions (CCQs). For example:

Collins: A wildcard is a symbol such as * or ? which is used in some computing commands or searches in order to represent any character or range of characters.
Collins: An aphorism is a short witty sentence which expresses a general truth or comment.
These sentences typically start with a hypernym, here symbol and sentence, which immediately limits what the word is and is not. The definitions then progress through the target word's features, functions, etc. Each of these is encapsulated in a phrase or clause in the sentence definition. They are the properties of the word. The Collins then provides example sentences in which the abstract properties are made concrete. If students know what they can learn from full sentence definitions, they can see how the meanings of words manifest in authentic sentences.

I'm writing a student workbook at the moment, which will probably be called Discovering Phrasal Verbs, in which students are repeatedly tasked with finding example sentences in corpora. The book explains the importance of the semantics of the phrasal verb particles (prepositions and adverbs) and the importance of the subjects and objects of the verbs. These properties are the most important contributors to the meanings of the otherwise opaque, or at best translucent, phrasal verbs.

When you search corpora for a phrasal verb, the sheer volume of data can be overwhelming. Fortunately, SkELL uses GDEX, so the 40 sentences it presents are manageable. The other tool I recommend is CorpusMate, because it is very fast, it enables searches with wildcards, and the cotext is colour-coded using the same colours for parts of speech as VersaText. The wildcard searches are necessary when the phrasal verb is separable, e.g. tear .* away, keep .* .* away (a short sketch below illustrates this kind of wildcard matching).

AI is another source of illustrative sentences. In ChatGPT's own words, "The sentences generated by AI are original constructs, created using the language patterns learned during training." They are by definition inauthentic sentences, which means they were not motivated by any communicative impetus; hence they lack real-world contexts. These sentences often resemble those made up by textbook authors and test creators. It is reasonable to ask if the trade-off between authentic and inauthentic example sentences in terms of learnability is worth it. Do students really benefit more from authentic than from inauthentic sentences? Like all good questions in ELT, the answer starts with: it depends.

My it depends revolves around what the students are tasked with. If the textbook provides made-up example sentences without any task other than perhaps read, read aloud, translate or memorise some sentences, the students will function at the bottom of Bloom's Taxonomy. Garbage in, garbage out. But if the tasks involve higher order thinking skills, in which the students skim and scan multiple examples of authentic language in search of specific properties to which they have been alerted, they develop a better understanding of the properties of the target word, and ultimately a more sophisticated understanding of language per se emerges.

Like all good citizen-scientists, students engaged in "extreme noticing" need systems to record their findings, which will in turn deepen their conceptual grasp of the target language and prepare them to use it confidently. It is well known that guided discovery is not for everyone. I was a school music teacher in my 20s, and one would occasionally hear, Never try to teach a pig to sing: it wastes your time and annoys the pig. This is yet another aphorism attributed to Mark Twain, but who knows? Guided discovery demands a strong rationale, clear instructions, the right tools and an understanding that the students are going to benefit from the multiple affordances of the tasks.
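To show what lies behind a wildcard query such as tear .* away, here is a minimal sketch in Python. It is purely illustrative and not CorpusMate's actual code; the example sentences are invented for the purpose, and the pattern simply allows one or two words between the verb and its particle, which is what a separable phrasal verb requires.

import re

# Invented example sentences for illustration only.
sentences = [
    "She could not tear herself away from the screen.",
    "It was hard to tear the children away from the game.",
    "He decided to tear up the letter.",
]

# tear/tears/tearing/tore/torn + one or two intervening words + away,
# i.e. roughly what the wildcard query  tear .* away  asks for.
separable = re.compile(r"\b(tear(s|ing)?|tore|torn)\b(\s+\w+){1,2}\s+away\b", re.IGNORECASE)

for s in sentences:
    if separable.search(s):
        print("match:", s)

The first two sentences match; the third does not, because tear up the letter has no away. A real corpus search runs the same kind of pattern over millions of sentences, which is why a fast tool matters.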
It is important that students are made aware of the multiplicity of these learning experiences in the process of acquiring words and their properties. No reflection, no connection. The sentence is a suitable unit of language in which to observe the cotext of a word, i.e. its collocations, colligations, its subjects and objects, and other properties depending on the part of speech. When you see the word in multiple sentences, as concordances provide, you can discern typical properties. This process of pattern recognition is akin to first language acquisition (FLA), but in SLA our guided discovery tasks bring it to the surface, raising it to conscious awareness. Given the best scaffolding, students can learn a great deal from illustrative sentences.
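As promised above, here is a minimal sketch of the kind of sentence scoring my early tool performed: average word frequency plus a length criterion. It is a toy illustration under my own simplifying assumptions, not the GDEX algorithm itself, and the tiny corpus and the weighting are invented for the example.

from collections import Counter

# A tiny invented corpus; a real frequency list would come from millions of words.
corpus_text = (
    "The cat sat on the mat. The dog sat on the cat. "
    "A wildcard is a symbol used in searches."
)
tokens = corpus_text.lower().replace(".", "").split()
freq = Counter(tokens)

def score(sentence, ideal_length=10):
    # Higher average word frequency is better; distance from an ideal length is penalised.
    words = sentence.lower().rstrip(".").split()
    average_frequency = sum(freq.get(w, 0) for w in words) / len(words)
    length_penalty = abs(len(words) - ideal_length)
    return average_frequency - 0.5 * length_penalty   # the 0.5 weighting is arbitrary

candidates = [
    "The cat sat on the mat.",
    "A wildcard is a symbol used in searches.",
]
print(max(candidates, key=score))

The full list of criteria goes well beyond these two, but the principle of scoring and ranking candidate sentences is the same.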
One swallow does not a summer make
Hoey in fact studied foreign languages so that he could experience the processes of language learning and the practical applications of linguistic and pedagogical theory. When he was observing language in context, that is by reading and listening, he would notice certain collocations, but he needed proof of their typicality before he could consider them worth learning. Just because someone has combined a pair of words does not mean that this combination is a typical formulation in the language. The lexicographer Patrick Hanks (1940–2024) felt the same: "Authenticity alone is not enough. Evidence of conventionality is also needed" (2013: 5). Some years before these two Englishmen made these pronouncements, Aristotle (384–322 BC) observed that one swallow does not a summer make. Other languages have their own version of this proverb, sometimes using quite different metaphors, but all making the same point.

In order to ascertain that an observed collocation is natural, typical, characteristic or conventional, it is necessary to hunt it down, and there is no better hunting ground for linguistic features than databases containing large samples of the language, a.k.a. corpora. In the second paragraph, Hoey experienced the processes … Is experience a process a typical collocation? CorpusMate yields the data to answer this, and the same paragraph offers further collocation candidates that can be checked in the same way.
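To give a feel for what such a typicality check involves, here is a minimal sketch in Python. It is not CorpusMate's code, and the sentences are invented for illustration; it simply counts how often experience and process co-occur, allowing a word or two in between, the way a wildcard query would.

import re

# Invented sentences standing in for corpus data.
sentences = [
    "He experienced the process of language learning at first hand.",
    "She experienced a long process of recovery.",
    "They experienced the city at night.",
]

# experience/experienced/experiences/experiencing + up to two words + process(es)
candidate = re.compile(r"\bexperienc\w*\s+(\w+\s+){0,2}process", re.IGNORECASE)

hits = [s for s in sentences if candidate.search(s)]
print(f"{len(hits)} of {len(sentences)} sentences contain the candidate collocation")

Whether such a count points to a conventional pairing is then a matter of the kind of quantitative and qualitative triangulation described below.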
Here is some more data from CorpusMate. In the following example, we have a wildcard which allows for one element to appear between the two words of the collocation. Even in these first 12 of the 59 results, other patterns are evident.

The process of validating your findings through multiple sources or methods is known as triangulation, and it is an essential stage in most research. When we train students to triangulate their linguistic observations, it is quite likely that they are already familiar with this process from their other school subjects. This is not just a quantitative observation, i.e. this collocation occurs X times in the corpus. It is qualitative as well: the students observe other elements of the cotext, such as the other words and grammar structures that the collocation occurs with. They might also observe contextual features that relate to the genres and registers in which the target structure occurs. They are being trained in task-based linguistics as citizen scientists, engaging their higher order thinking skills as pattern hunters. This metacognitive training is a skill for life that will extend far beyond the life of any language course they are undertaking.

Triangulation does not apply only to collocation. Any aspect of language can be explored in this way. You may have noticed the word order in the idiom: does not a summer make. Many people have run with this curious word order and exploited it creatively. It is thus a snowclone. Here are some examples from SkELL. Respect our students' intelligence and equip them to learn language from language.

References
Crosthwaite, P. & Baisa, V. (2024) A user-friendly corpus tool for disciplinary data-driven learning: Introducing CorpusMate. International Journal of Corpus Linguistics.
Hanks, P. (2013) Lexical Analysis: Norms and Exploitations. MIT Press.
Hoey, M. (2000) A world beyond collocation: new perspectives on vocabulary teaching. In Lewis, M. (ed.) Teaching Collocation: Further Developments in the Lexical Approach. LTP.

The mighty power of the asterisk

Following on from my previous post about constellations, in this post I'm looking at the asterisk, which some of my students refer to as a star. I admit that that's a pretty tenuous connection and I apologise whole-heartedly. But the universe does get a mention here, so bear with me.

I'm drafting a vocabulary workbook at the moment, or should I say yet another vocabulary workbook. In this one, the students are often tasked with discovering how words work in grammar patterns. They mainly use CorpusMate. This is quite a new, free, open-access and superfast corpus tool that was designed by Peter Crosthwaite and programmed by Vit Baisa, who also programmed my VersaText and was instrumental in the development of SkELL. The CorpusMate corpus does not have part-of-speech tagging, which means that you can't search for a pattern such as Verb + noun + v-ing, and yet there are hundreds of verbs in English that function in this pattern, e.g. remember, picture, catch, tolerate, leave. There are not only hundreds of words; there are also hundreds of patterns that nouns, verbs and adjectives function in. My current vocabulary book revolves around the COBUILD grammar pattern reference books from the late 1990s. In fact, I wrote my masters dissertation on the grammar patterns of the verbs in the then new Academic Word List (2000) that Averil Coxhead had created for her masters dissertation. It is always interesting to see how much can be gleaned about the grammar pattern of a word without part-of-speech tags.

The asterisk is mighty. In fact, it holds the secret to the meaning of life, the universe and everything. Read on! A supercomputer called Deep Thought was asked what the meaning of life, the universe and everything was. It calculated that it was 42. See the announcement in this extract from the film. Douglas Adams, the author of The Hitchhiker's Guide to the Galaxy, the cult 1979 novel from which this comes, always claimed that he chose the number 42 randomly. But 42 is the ASCII code for the asterisk, which in computer searches means anything and everything. Did Deep Thought calculate that the meaning of life, the universe and everything is anything and everything?

This search uses two asterisks. The first has spaces before and after it, which makes the program search the corpus for all of the words in between the items to its left and right. The second asterisk is used with a dot and is attached to ing. Dot-star, as my students call it. This makes the program search for words ending with ing. While this might include words such as thing and during, the fact is that the ing word that follows remember plus another word tends to be an -ing verb. This is the reality of pattern grammar. Click this link to see the first 250 of the 589 results of this search. This data is automatically sorted so that this pattern in the use of remember can be gleaned. To make these patterns even more visible and student-friendly, clicking on the Pattern finder button generates a tidy table. Here are the top nine entries in that table. As mentioned, Verb + noun + -ing is one of hundreds of grammar patterns of words that the COBUILD team uncovered.
They were not the first to identify this or many other patterns, but they were able to demonstrate with large corpora the semantic relationships between words that function in the same pattern. This means that the words in a grammar pattern are related in meaning. Important and interesting.

Chunks

Now back to our mighty asterisk! With queries such as these, you can find the frequent chunks that a target word is used in. The concordance extract below shows some of the things that people are "in search of a" (a small sketch at the end of this post illustrates how such chunks can be counted). I'm hoping that a vocabulary workbook in which students learn about the patterns that words function in, and the words that go in these patterns, by discovering these properties of words for themselves using CorpusMate and other tools, will be a stimulating voyage of discovery. It should add a layer of systematicity to their vocabulary study, which will in turn lead to them using words with more confidence and fewer hesitations. Think fluency. They might become stars themselves!
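For readers who like to see the machinery, here is a minimal sketch of the chunk-hunting idea: counting what most often follows a phrase such as in search of a. It is a toy version under my own assumptions, with invented sentences, rather than how CorpusMate or SkELL actually work.

import re
from collections import Counter

# Invented sentences for illustration; a corpus tool works over millions of words.
sentences = [
    "They set off in search of a better life.",
    "She travelled the world in search of a cure.",
    "He wandered the streets in search of a better deal.",
    "We are all in search of a quiet place to work.",
]

following = Counter()
for s in sentences:
    for match in re.finditer(r"in search of a (\w+)", s, re.IGNORECASE):
        following[match.group(1).lower()] += 1

# The most frequent continuations surface first, much like a pattern table.
for word, count in following.most_common():
    print(count, word)

Swap the phrase for remember followed by a wildcard and an ing word and you have the germ of the pattern-grammar searches described above.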