Students are what Dunning and Kruger would call “amateurs.” In other words, students lack the very expertise required to know their own academic shortcomings and how to overcome them. Students require scaffolding from experts to counteract this effect, but most experts are amateurs at scaffolding. This blog post explores that problem and offers some practical responses.
I have some bad news for you: Amateurs are bad at knowing whether they’re good at something. In fact, they tend to think they’re far better at things than they are. And if a student is taking a class, that student is likely an amateur in that area. This means most students, on their own, cannot accurately gauge how effective their learning is. This phenomenon is commonly referred to as the Dunning-Kruger effect and is outlined in a short, accessible, research-based TED-Ed video facilitated by Dunning himself (Dunning et al., 2017). Furthermore, many common study techniques do not improve memory retention or skill development, but rather give students a false sense of confidence, which can compound the Dunning-Kruger effect (Brown et al., 2014).
If students can’t tell whether they’re learning, or even whether they’re good at something, that creates a serious problem, does it not? Students can study as much as they want and still fail the test. Students can remain oblivious to their skill levels until it’s too late to make meaningful adjustments. Students can enter the field thinking they’re far more capable than they are.
As I see it, the definition of the Dunning-Kruger effect can be reworded as: Amateurs have a low capacity for self-assessment. That does not mean that they cannot self-assess, but rather that they need some scaffolding or guidance in order to self-assess effectively. Students benefit heavily from specific, bridge-forming, expert feedback. Most modern educators provide at least some of this scaffolding. When you teach a student to differentiate between grass varieties, you also teach them how bad they used to be — and potentially still are — at differentiating between grass varieties. I, for example, am firmly under the impression that I know what grass is, but put a grass varieties test in front of me and I’d fail it. That’s because I don’t know what I don’t know, which is that there are thousands of grass varieties, accompanied by thousands of ways to differentiate between them. Right? I genuinely don’t know about grass.
But what I’m getting at is that student self-awareness development and educator feedback, AKA formative feedback, are significant aspects of effective learning. And this effectiveness usually increases the earlier it starts in the learning process. A course should not only end with a reality check, but begin with one. Although they use the term “retrieval testing” instead of “formative assessment,” Brown et al. (2014) have my back on this, as do Volante and Beckett (2011). Ochuot and Modiba (2018) go so far as to argue “that formative assessment should transcend grading [summative assessment]” (p. 1). They also argue that formative assessment helps not only individual students, but also the entire learning environment of the classroom, including the instructor herself. If done properly. And that’s the trick. How exactly do you get an amateur to see what you, an expert, see? How do you get students to understand and accept their amateurism and engage in the energy- and time-consuming tasks that will gradually move them toward mastery?
Those questions are far too complex for me to answer entirely in a blog post, but I will offer what I have readily available: some recommendations drawn from an amalgamation of my experience as an educator and the above-cited research. Since spotting the nuances would require far more detail than I’m providing, I won’t bother separating ideas by source; it’s safe to say the pedagogical community agrees with the following, presented in point form for fun and accessibility:
Make classrooms and other learning spaces safe spaces to fail in
- If you fail or make a mistake, don’t get embarrassed or sweep it under the rug. Use it as a learning opportunity. Model how an expert fixes mistakes or even use the mistake as a discussion-starter. (E.g., “Oh, whoa. Can anybody spot the mistake I made 5 minutes ago?” or “OK, that was bad. Anybody care to explain why you should never do that in front of a client?”)
- Get in the habit of positively reinforcing attempts the way you already reinforce success. The goal is for students to know whether they’ve done something correctly but feel rewarded for their efforts, regardless of success. Effective learning involves more failure than success, so it’s counterproductive to punish all failure.
Be transparent about learning objectives
- Students can self-assess better if they understand what the end-goal is.
- Do a treasure-hunt style activity at the beginning of the semester using the learning objectives or syllabus.
- At the beginning of the semester, show a product or skill students should be able to replicate by the end of the semester. It may even be useful to have students attempt the task right away, even if only hypothetically or in part, so they experience how large their gap is.
- Do a learning activity with a major assignment rubric. Students could use the rubric to peer-assess or even help make the rubric based on the assignment requirements and curriculum outcomes.
- Involve students in final exam/assessment creation. Students could submit potential test questions (works well for essay and written response questions) or just be coached on creating a practice test based on course material. Alternatively, they could be involved in some major decision about how the final assessment is created.
Include and scaffold reflection and peer- and self-assessment activities
- For written assignments, students might read and provide feedback on their own or one another’s drafts.
- Spend some time educating students on how to provide and receive constructive criticism. Berger’s (2012) video can be a good source for this.
- Ochuot and Modiba (2018) found that scaffolding, as a type of formative feedback, is most effective when focused on misconceptions and reasons. Especially during formative feedback, try to avoid merely indicating whether something is acceptable or not. For example, Volante and Beckett (2011) suggest giving students feedback without grades. In an assignment where a grade is required, the instructor could initially give only written feedback and require students to send a “For next time, I know to…” statement or a completed rubric before viewing their grades.
Offer copious feedback and opportunities to practice
- Try a flipped classroom, where students attempt the material on their own time and use class time to ask questions and seek out feedback from the instructor.
- Create a study guide or, better yet, coach students into creating their own study guide(s).
- Provide practice exercises, guide students to create and share their own practice exercises, and/or provide links to resources where practice activities can be reliably acquired.
In my research for this post, Dunning et al. (2017) helped me realize the cognitive state students are in as amateurs in their fields. Brown et al. (2014) helped me realize how ineffective most traditional Western-style learning activities are, further augmenting my belief that amateurs are poor self-assessors. Volante and Beckett (2011) gave me a sense of how teachers feel about and tend to implement formative assessments. Finally, Ochuot and Modiba (2018) were the most helpful in qualifying useful attitudes and strategies for formative assessment. This is all recommended reading if these topics interest you.
To avoid stress, try not to think of formative assessment as an extra thing to do. Think of it as a different way to approach your next course-design task. As you create a feedback bridge between you and your students, you may find that existing walls and frustrations become less … wall-like and frustrating.
Berger, R. (2012). Austin’s butterfly: Building excellence in student work. EL Education. Retrieved November 17, 2020, from https://modelsofexcellence.eleducation.org/resources/austins-butterfly
Brown, P., Roediger, H. III, & McDaniel, M. (2014). Make it stick: The science of successful learning. The Belknap Press of Harvard University Press.
Dunning, D. (Educator), Wednesday Studio (Director), Anderson, A. (Narrator), & Drew, T. (Music). (2017). Why incompetent people think they’re amazing [Video]. TED-Ed. https://www.ted.com/talks/david_dunning_why_incompetent_people_think_they_re_amazing?language=en
Ochuot, H. A., & Modiba, M. (2018). Formative assessment as critical pedagogy: A case of business studies. Interchange, 49(4), 477–497. http://dx.doi.org.lc.idm.oclc.org/10.1007/s10780-018-9341-6
Volante, L., & Beckett, D. (2011). Formative assessment and the contemporary classroom: Synergies and tensions between research and practice. Canadian Journal of Education, 34(2), 239–255. https://lc.idm.oclc.org/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=ofm&AN=525894401&site=ehost-live&scope=site