This is the first post in a three-part series about education. The idea is to discuss some current views and, basically, get you to reflect. Don’t forget to read the apologue and the epilogue.
Many teachers in Brazil are going back to class this week, and a major concern they share with school managers, policymakers, parents, and students themselves can probably be captured in the following question:
How much have students learned in the pandemic?
Perhaps an even more accurate question would be: how much have they NOT learned? Whichever one you prefer, before you can answer it, you must assume at least two things:
- There are certain outcomes students must achieve in their learning experience;
- These outcomes can be measured/ascertained based on certain criteria.
Regardless of what you, dear teacher, might feel, you need to be able to check whether students were successful in getting from the place of “I don’t know this” to “I know this”. You need evidence and that usually means grades (at least in our educational system). There’s a problem, though. Can these numbers (grades) really tell us if our learners were able to accomplish the learning outcomes of their course? Can numerical grades truly measure learning? If not, how can students show their learning to us? In other words:
How can students make their learning visible?
If we think about it for a couple of minutes, we might come to the conclusion that, again, we need to find clues that help us see what our learners are doing and producing. We might even consider that process quite similar to the scientific method. We start with a question – Are students in fact learning? – then move on to a hypothesis – I believe this particular student is learning because of their grades – then devise an experiment – the test grades probably aren’t enough, so I’ll ask them to work on a project – and then we collect data, analyze it, and come to a few conclusions (based on logical thinking and the specialized literature, and ideally checked by peers).
All of that might feel quite daunting, and you may be very uncomfortable with the idea of acting like a scientist. You might even think this is irrelevant and that you can rely on test grades to assess your students’ learning. What if I told you that much of what we have “learned” is forgotten by the end of the day, the week, or the month? What if you gave your students the exact same test a few days later? Do you think they’d get the same results? If things are so easily forgotten, does it mean students have actually learned them?
Let’s do our own experiment. Go back to your time as a student – perhaps even a language student. Try to remember a test you took and got a very good grade on. Reflecting on that experience now, can you tell me with confidence that you still remember the concepts you “learned” then? If you took the same test today, would you be able to ace it? I remember getting very good grades in high school. I was even selected, along with some classmates, to represent my school in a physics Olympiad because of my grades. The thing is: I’ve forgotten most of it, and I’m pretty sure I’d fail pretty much any physics exam today.
Assessment of Performance, Assessment of Learning, Assessment for Learning
Naturally, you might say (and I wouldn’t argue) that I haven’t practiced my physics problem-solving skills for decades and that’s why I can’t retrieve most of it. You may even tell me that it’s different with language learning, particularly when you keep using the language long after your classes have finished. You’d probably be right, after all:
Practice doesn’t make perfect but it makes more permanent
I’m not here to dispute that. What I want to draw your attention to is the fact that, in many cases, we’re not really assessing learning; rather, we’re assessing performance. Learning and performance are two separate things. You may have learned something well and it may last for many years (yes, it’s also possible to unlearn things), and yet you might fail a performance test. The opposite is also true. How many times have we gotten lucky and done well on a test simply by guessing the answers (something that can happen a lot with multiple-choice tests)?
The Bjorks (2011), Robert and Elizabeth, a couple who happen to share a passion for and expertise in cognitive psychology, tell us that performance has to do with what can be observed and measured at the time of taking a test. Learning is what sticks with us. It changes our knowledge in a more permanent way, and it can’t always be captured by standard testing.
If regular high-stakes tests (summative assessment) normally assess performance, what can we use instead? If standardized testing won’t do, how about a more personalized type of exam?
Think about two test models:
- A gap-fill question with multiple choices – options chosen in advance by the teacher
- An open question – What have you learned in our classes? – personalized, no prompts, no exclusion criteria
Sure, grading test 1 would require a lot less work. But what if, instead of giving many multiple-choice tests, we gave fewer of them and used more open-ended tasks with our students? I suppose their answers would then be “better evidence” of their learning, wouldn’t you agree? We can call this type, although still summative, assessment of learning. It does look at students’ work in retrospect, like assessment of performance, but it allows them to create their own script to some extent.
Another great way to think about assessment is to keep track of students’ learning curves and shift the focus to the process instead of the product. That is the concept of formative assessment. It doesn’t rely on a single event in which students receive a numerical grade that determines how much they “know” or “don’t know”. Formative assessment is interested in how learners make progress toward the expected learning outcomes (and possibly beyond), and that is the foundation of assessment for learning (AFL). In that sense, we can look at five important characteristics as discussed by Cambridge Assessment International Education:
1. Questioning enables a student, with the help of their teacher, to find out what level they are at.
2. The teacher provides feedback to each student about how to improve their learning.
3. Students understand what successful work looks like for each task they are doing.
4. Students become more independent in their learning, taking part in peer assessment and self-assessment.
5. Summative assessments (e.g. the student’s exam or portfolio submission) are also used formatively to help them improve.
The Cambridge Assessment International Education report goes on to mention that:
AFL helps in making understanding and knowledge, as John Hattie describes it, ‘more visible’. AFL helps learners understand what excellence looks like and how they can develop their own work to reach that level.
Despite the controversies about Hattie’s statistical methods in synthesizing more than 1,000 meta-analyses, his focus on academic achievement to the exclusion of other important variables, and the way he chose the studies (click here for a summarized critique of his work), we can say that his work has certainly stirred things up over the last decade by claiming that certain things make learning more visible. What are they, and how can we use them?
Making Thinking and Learning Visible
Before we discuss some insights and practical ideas based on the work of John Hattie, let me share a recent experience with you. I was invited by Gallery Teachers to deliver another masterclass (you can find the first one here), and the topic they suggested was making thinking visible. I embraced it and thought of connecting it to making learning visible. I must say I was quite happy with the result (which you can find here), especially because I had a wonderful panelist, the amazing Neil Harris, who not only helped me think of quite relevant questions but also delivered a brilliant masterclass on assessment of, for, and as learning (which you can find here). If you’re happy with only the Q&A, you can find it below.
Now let’s get down to business, shall we? If we assume that learning requires memory and attention and that deep learning takes place upon reflection, we can suggest that thinking precedes learning in many classroom contexts (sure, there are other types of learning, but let’s focus on this one). So we need to understand how to “see” our students’ thinking to make sure they’re on the right track toward their learning outcomes.
Harvard’s Project Zero offers us incredible insights into how we can see our students’ thinking through a series of questions grouped under what they call thinking routines. If you visit their website, you’re bound to find lots of different routines and resources to help you make your students’ thinking more visible. I’ll focus on only three here and give you some practical examples:
- SEE-THINK-WONDER
- CLAIM-SUPPORT-QUESTION
- I used to think… Now I think
The first routine can be used to introduce a new topic. Learners might look at a prompt (an image, a video, a short paragraph, a word, a diagram) and start brainstorming things like:
I think it’s a… I believe we can use it for… I think it has to do with… I suppose it’s connected to…
Then they start reflecting on the things they cannot immediately see but would like to know:
I wonder if it can… I wonder where it can be used… I wonder how it can be used in a sentence…
The second routine may help you revise or practice a topic with your students. Think about a lesson in which you’d like to ask them about the past perfect tense. You might ask them to claim something about it, like The past perfect tense is used for a situation that happened in the past. You can then ask them to support that claim by providing an example. They might say something like I had studied for our test. Then you could question their example by pointing out that you can’t really see the difference between that and I studied for our test. You’d be encouraging them to think deeper and refine their answer. They might (and probably should) be the ones who question their own claims from time to time. That state of inquiry could lead them to self-directed study based on their curiosity and willingness to learn more about a particular subject.
Routine 3 is about contrasting what you thought you knew with what you believe you know now. That’s a great routine for you to reflect on how deep your learning is. You might want to use it with your students to revise materials, to encourage them to use new chunks, to help them think of errors they made in the past and use the correct forms, and to get them to self-assess.
How does all of that connect with Making Learning Visible? If we look at John Hattie’s list of things that impact learning based on the effect sizes of over 1,000 meta-analyses (remember the claims about the lack of scientific rigor in his analysis), we’ll see teacher efficacy, student expectations, response to intervention, student efficacy, teacher clarity, and feedback (to mention only a few).
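In case the term is unfamiliar, an “effect size” in this context is essentially a standardized difference between group averages (Cohen’s d is the most common form), roughly:

d = (average of the group that received the intervention − average of the comparison group) ÷ pooled standard deviation

Hattie’s much-quoted “hinge point” is d = 0.40, the value above which he considers an influence worth our attention.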
To give you more to reflect on now that you’re preparing for your school’s next term, we can focus on feedback (also supported by the work of Yeager and Dweck, 2020) and student efficacy (discussed in the vast literature left by the late Albert Bandura, 1984). A few simple strategies to make sure you help your students learn more, based on everything discussed above, are:
- Work with portfolios and e-portfolios. That way you’ll be able to follow your students’ work throughout the semester (you can use Jamboard, Padlet, Flipgrid, or Canva);
- In remote classes, make sure your students have a “virtual space” to work in so that you can see them doing the things you asked (it can also be Jamboard or Padlet – Google Slides work too);
- Help them set their own deadlines and reflect on their work frequently. This allows you to guide them and give lots of feedback;
- Feedback is the key really. Be specific. Tell them what was just right, what could’ve been better, what was not good, and how they can make it better;
- Give students the opportunity to choose the layout/format of their work. As you’ll see below, not everything needs to be written;
- Spend some time working on study skills, goal setting, project management, metacognition and any tool that might help them develop their self-efficacy;
- Use low-stakes tests (pop quizzes on Kahoot for instance) to help them remember and reflect on the things they’re learning;
- Include peer assessment as much as you can. A fresh look from their classmates can provide excellent insights.
Here’s my take on it: having worked with students at different levels, I believe AFL does make learning more visible. I can give a few examples from my own groups. I’m a guest lecturer in Language and Cognition at PUC-PR, and my students don’t take any tests. They do have to share an e-portfolio and work on a final project for my subject. Their mission is to design a product based on the discussions we had in class (referencing the authors and texts we worked with).
One of my groups decided to create a podcast on managing emotions!

Another group made an amazing infographic about emotional intelligence.

One of my students built her dream school based on the principles we discussed on The Sims! How incredible is that?

Conclusion
Learning is a complex phenomenon that cannot be easily measured, especially when we use conventional methods that basically turn everything students produce into numbers. However, we must be able to synthesize what students can do now compared to when they started their course. I truly believe we can shift things if we start thinking about assessment for learning. That means paying a lot more attention to each learner’s individual path rather than to a snapshot of their learning experience captured on a test. If we do not obsess over a single format or a one-size-fits-all approach, we might get impressive work from our students (perhaps a podcast, an infographic, or a 3D model of a whole school!).
We can certainly benefit from Project Zero’s Thinking Routines and Hattie’s Visible Learning (not without criticism). They offer some insightful and practical ideas about what makes learning happen and how we can “see” it happening before our eyes.
Remember that we need to be more empirical, and that means looking for evidence that our students are actually learning something. If we manage to do that, I believe assessment can become a more functional aspect of learning: it will no longer simply get students ready to perform well on the day they take the test and then be forgotten. Learning is not about receiving your test results, with barely any feedback on them, a few times during the semester and being done with it. Assessment needs to be part of the foundation of learning in an ever-adjusting process of trying things out, getting feedback, trying again, keeping a record, and making slow but consistent improvements over time. Then what we learn might stick with us for the rest of our lives.
References
Bandura, A. (1984). Recycling misconceptions of perceived self-efficacy. Cognitive Therapy and Research, 8(3), 231-255.
Bjork, E. L., & Bjork, R. A. (2011). Making things hard on yourself, but in a good way: Creating desirable difficulties to enhance learning. In Psychology and the Real World: Essays Illustrating Fundamental Contributions to Society (pp. 56-64).
Hattie, J. (2012). Visible learning for teachers: Maximizing impact on learning. Routledge.
Yeager, D. S., & Dweck, C. S. (2020). What can be learned from growth mindset controversies? American Psychologist, 75(9), 1269.