Assessment: Show what you know

“I read through the whole textbook and found a word that was only used once. I was sure the students wouldn’t know it!” Lillian, a teacher trainer in Peru, was explaining her conversation with a teacher who had written a test for his English students.

“But,” Lillian asked, “Why would you choose to test the students on something you thought they wouldn’t know?”

The teacher did not have a good answer; it showed his lack of understanding of the principles of assessment and went against Swain’s (1984) principle of bias for best: “Do everything possible to elicit the learners’ best performance” (p. 195). The teacher thought, as too many teachers do, that the purpose of an assessment is to trick students or to cheat them out of marks. He did not understand that an assessment is meant to give students an opportunity to show what they know. Race, Brown & Smith (2005) suggest that the consequences of poor assessment are serious:

Nothing we do to, or for, our students is more important than our assessment of their work and the feedback we give. The results influence students for the rest of their lives and careers–fine if we get it right, but unthinkable if we get it wrong. (p. xi)

Every assessment needs to consider what Brown (1996) explains as construct validity, “The degree to which a test measures what it claims, or purports to be measuring” (p. 231). In the case of the teacher posing the difficult vocabulary item, the test should instead have been a chance for students to use the high-frequency vocabulary that they had studied as a key part of his class. Assessing that vocabulary would demonstrate that the teaching was effective and that the students were able to acquire the language efficiently.

If the students were not able to identify and use the target vocabulary, it would not simply be a question of whether or not they had worked hard enough. The teacher should also reflect on whether his teaching strategies were weak. These include motivating students to want to learn, understand, and use new vocabulary items, and using strategies that make vocabulary more memorable.

Imagine students had instead been directed to study 100 new and useful vocabulary items as part of the course. What would be a sensible way to assess their comprehension? Teachers often choose multiple-choice tests for the simple reason that they are easy to mark. But multiple-choice questions are not authentic language experiences. After all, how often does someone stop another on the street to ask a question, offering the correct answer alongside three or four clever distractors?

A more authentic approach would be to ask students to use each word in a sentence. However, this may also be unsatisfactory, as the sentences may do little to show the students’ comprehension. For example, knowing only that a word is a noun (e.g., car, cat, cup), a student may write, “I bought a ________ at the store.” In such cases, a teacher cannot be sure whether the student really understood the word beyond knowing it is some kind of physical object.

A better approach is to mirror how people use language in the real world. People often forget a key word and find synonyms or circumlocutions (roundabout ways of explaining things) to get their meaning across. If someone is going to work on a farm, he might want to ask for a shovel but, forgetting the word for a moment, substitute the synonym spade. Alternatively, as a circumlocution, he might say, “I need something to dig with.”

Hopefully, the 100 vocabulary words the teacher is assessing are not random, but rather part of a semantic field (see Moore, Donelson, Eggleston & Bohnemeyer, 2015; Evans, 2011). A semantic field ties together groups of words because they have something in common. For example, they might focus on a particular part of speech, like adverbs or prepositions, or be used in particular contexts, such as a courtroom, or deal with a particular subject, such as farming.

If the vocabulary is bound together in a semantic set, the principle of showing what you know suggests giving students the opportunity to use the vocabulary in context. For example, “You and your partner are planning to start an organic coffee farm. Discuss what you will need to start and write a list of the ten most important things, defining and explaining each one.”

This approach likely sounds complicated, and it is! A multiple-choice test would be far easier, but this type of assessment task accomplishes much more:

1. It is a learning task, not just an assessment task. Having students work together creates opportunities for peer teaching. Students have the chance to learn or re-learn the target vocabulary and grammatical structures.

2. Students negotiate meaning (clarify what they and others are saying) and scaffold their learning (build on each other’s ideas). In these ways, they likely expand their vocabulary beyond that which is part of the task.

3. The task allows students to use all their linguistic resources, just as they would in the real world. For example, beyond using synonyms and circumlocutions, they can also use body language, facial expressions, and even draw to make their point. Communication is the goal, not memorization.

4. There is an increase in motivation because students see that there is a real-world application for this type of task. They can imagine themselves in such a scenario or a similar one, e.g., starting a business.

A related concern is how–or whether–such an assessment should be marked.

All assessments require some kind of feedback, but that feedback can be either formative or summative. The purpose of formative assessment is to give students feedback to help them improve. Typically, this involves giving a student a test and then the answers–but not collecting marks. Instead, the students are made aware of what they do and do not know and are hopefully motivated to improve.

This shifts responsibility to the students, helping them understand where they need help and further study. Summative assessment, in contrast, is about making decisions about whether students are able to proceed to the next level. That might mean the next course, or graduation, or professional certification.

Summative assessments are essential, but if we only give them without formative opportunities for students to improve, then classrooms quickly become places where students see themselves as “good” or “bad” at learning English, rather than places where they can hope to make progress toward their language learning goals.

Returning to the question of how such less structured assessments should be marked, there are several options. The first and simplest is to ask students to reflect on their performance on such a task. Having them ask themselves questions such as “Did I demonstrate that I understood the key vocabulary?” and “Where do I need to improve?” also makes them more responsible for their own learning.

For students who need more direction and support, another option is to provide rubrics in the form of a grid that shows the teacher’s expectations in a range of areas. Besides vocabulary, these might include the use of grammar, sentence complexity, pronunciation, and other factors. With such rubrics, it is important for students to be aware of them before they begin the task so that they know where to focus their attention. To engage students to a greater degree, ask for their help in writing the rubric: “Class, for this task, what language concerns do you think are important for you to show what you know?” List the areas on the board and create a grid of what might be considered exceptional, good, and in need of more study. Creating such a rubric becomes a useful language-learning opportunity for the class.

At first, these approaches may seem to be more work for teachers. They may also seem to take up already-valuable classroom time. But being efficient is not the same as being effective. Simply “finishing” a chapter is not important if some–or even all–of the students in a class have not understood or acquired the knowledge in ways they can use in the real world. Instead, in every assessment, teachers need to ask themselves, “How can this test help students show what they know?” It is the starting point both for helping students improve and for improving one’s own teaching practice.

References

Brown, J. D. (1996). Testing in language programs. Upper Saddle River, NJ: Prentice Hall.

Evans, N. (2011). Semantic typology. In J. J. Song (Ed.), The Oxford Handbook of Linguistic Typology (pp. 504–533). Oxford: Oxford University Press.

Moore, R., Donelson, K., Eggleston, A., & Bohnemeyer, J. (2015). Semantic typology: New approaches to crosslinguistic variation in language and cognition. Linguistics Vanguard. Retrieved from https://pdfs.semanticscholar.org/5cb3/1d3385930fba90647c4267e356a24478e099.pdf

Race, P., Brown, S., & Smith, B. (2005). 500 Tips on assessment. London: Routledge.

Swain, M. (1984). Teaching and testing communicatively. TESL Talk, 15(1–2), 7–18.