Peer review – your humanitarian training needs this!

Peer review is a cheat code for online courses.

 

There are courses that I run without it, but it’s always a design element that I consider.

 

That’s because well-designed peer review in a course gets you scale. And the humanitarian sector needs scale. There are too many people working with no training, in jobs that people’s lives depend on. That’s not something we get to put in the “too hard” bucket.

 

Peer review allows you to work with a really big group of people – and for them to get meaningful feedback.

 

I’ve taken some online courses with just videos. I learned some things, I think.

 

But without guidance, it’s hard to learn. Without feedback that tells you whether what you’re doing is good or not, it’s hard to focus on what’s important. You can fool yourself about how good a job you’re doing.

 

Humans are really good at learning. But we’re also really good at fooling ourselves.

 

Without feedback, it’s easy to believe you’re doing fine, when you’re missing something huge.

 

So you need feedback. But one person can only give feedback to so many people. And even that is often cursory. You get a “well done” or a “try harder to do [X]”.

 

To scale up, you’ve got a few options, none of them very satisfactory.

 

You can bring in a group of people (experts) to provide the feedback. But experts are hard to find! They’re not waiting around. Even if you can get them, it costs money and the course quickly becomes uneconomic to run.

 

You can limit the size of the group on the course. But then fewer people can learn – and it’s often uneconomic to run. There’s a place for it, but you’re not going to make a big change in the way an organisation – or the sector – works if you train 15 people. You can help them. You can help their teams. But the impact will not spread – certainly not at the pace you’d like it to.

 

Or you can give up on the personalized feedback.

 

Lots of online courses just don’t try to give feedback. You just watch the videos and move on with life.

 

To be fair, most don’t even try to create practice exercises – so there’s nothing to give feedback on! Practice is essential for learning, though.

 

There are ways to give feedback without personalizing it, though. Model answers are a good one. Worked examples (which could come before a learner practices) are another. Scenarios with decision options are good too – if they’re well-written, which is pretty rare.

 

And that’s the other way people try to go for scale – point-and-click e-learning.

 

Great e-learning is like a snow leopard. There’s some out there, but you never encounter it in the wild.

 

It’s a way to get scale, but it basically gives up on meaningful feedback. Sometimes it just displays information – in effect, a more annoying version of a YouTube video or a PDF guideline. Other times, there are a few simplistic quizzes about the content you read on the previous slide. Your ability to answer those questions is not meaningful feedback.

 

But with well-constructed peer review, you can provide good quality feedback at scale.

 

This means finding what the experts look for in the kind of work your learners are doing. What are they looking for when they evaluate whether a piece of work is good? Are they looking for clarity or comprehensiveness? Use of visuals or writing style?

 

Then, with those criteria built into a rating scale, learners can start to rate each other’s work. And that rating is like the feedback of an expert. It’s certainly about the most critical points of the work – not just the impression it created. And it can be more detailed, because each learner is only giving feedback on the work of a few colleagues – not a whole group.
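
If it helps to picture it, here’s a minimal sketch in Python of how that kind of rating scale can turn a handful of peer reviews into structured feedback. The criteria, scores, and comments are invented for illustration – they’re not from any real course – but the idea is the same: each reviewer scores every criterion and adds a short comment, and the per-criterion averages show the learner where the strengths and gaps are.

```python
# A minimal sketch with invented criteria, scores, and comments - not data
# from a real course. It shows how a rating scale turns a few peer reviews
# into structured, criterion-level feedback for one learner.
from statistics import mean

# Criteria drawn from what experts look for in this kind of work
CRITERIA = ["clarity", "comprehensiveness", "use_of_visuals", "writing_style"]

# Each peer scores every criterion from 1 (poor) to 5 (excellent) and adds
# a short comment. Each learner only reviews a few colleagues, so the
# comments can be specific rather than cursory.
peer_reviews = [
    {"clarity": 4, "comprehensiveness": 3, "use_of_visuals": 5, "writing_style": 4,
     "comment": "Clear structure; the risk section needs more detail."},
    {"clarity": 5, "comprehensiveness": 3, "use_of_visuals": 4, "writing_style": 4,
     "comment": "Easy to follow, but the budget assumptions are missing."},
    {"clarity": 4, "comprehensiveness": 2, "use_of_visuals": 4, "writing_style": 5,
     "comment": "Well written; coverage of the logistics plan is thin."},
]

# Average each criterion across reviewers to give the learner a profile of
# strengths and gaps, rather than a single overall impression.
profile = {c: round(mean(review[c] for review in peer_reviews), 1) for c in CRITERIA}
print(profile)
# {'clarity': 4.3, 'comprehensiveness': 2.7, 'use_of_visuals': 4.3, 'writing_style': 4.3}
```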

 

What we find on most of our courses is that peer reviewers are a little bit stricter than the experts.

 

Experts are a little more lenient and give the benefit of the doubt on quality. I guess they’ve seen so much very poor work in their professional lives that they can be forgiving of something that’s good enough to get near the borderline.

 

Not everyone does a great job with peer review. I’ve been going through the reviews from a recent course, and the variety is enormous. We set up the peer review so that a bare “Good Job!” isn’t allowed as a review, but some people really don’t write much, and others write lots.

 

I’m sure some of the difference is practical and pragmatic.

 

Some don’t have much time. Some write faster. Some read slower, so they have less time to write the reviews. There are all sorts of reasons.

 

One of the factors seems to be understanding what helps. In the courses, there’s a trade-off between spending time helping participants give good feedback, and helping them do the practice tasks. I’m not sure I get it right, and I am increasingly OK with accepting that a course can’t do everything. Perhaps we could spend more time on the feedback, though I’d struggle to know what to drop.

 

There’s also the question of belief in the value of peer feedback.

 

Some people remain convinced that expert or instructor feedback is what really helps. That’s even though the people giving them feedback on the course are among those who would actually look at and use their work in the real world.

 

Why would I (the instructor/facilitator) be able to say what’s confusing or helpful better than a typical reader?

 

But if you’re not sold on the peer feedback process, I’m sure it reduces the motivation to throw yourself into it.

 

More is not always better when it comes to feedback. It’s not better feedback because your peer wrote more words. But being specific does help. Giving examples does help. Covering a few more aspects of a criterion does help.

 

Despite all the differences, we see a huge amount of consistency in the reviews.

 

It’s very rare that one reviewer thinks something is great and another thinks it’s terrible. And the scale ratings we use are pretty consistent and make it easy to eliminate extremes. We can tell the process “works” when it comes to getting an overview of the course projects. And it works in terms of learning too. But the differences are fascinating.
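
As an aside on eliminating extremes: this isn’t a description of our exact process, but a common way to damp a single unusually harsh or generous rating is to drop the highest and lowest score for a criterion before averaging, or simply to take the median. A small hypothetical sketch, with invented numbers:

```python
# A hypothetical sketch (invented numbers) of two robust ways to handle an
# outlier rating: a trimmed mean that drops the single highest and lowest
# score, and the plain median.
from statistics import mean, median

def trimmed_mean(scores):
    """Mean after discarding the single highest and lowest score."""
    if len(scores) <= 2:
        return mean(scores)
    return mean(sorted(scores)[1:-1])

clarity_scores = [4, 4, 5, 2, 4]     # one reviewer was unusually harsh
print(mean(clarity_scores))          # -> 3.8, pulled down by the outlier
print(trimmed_mean(clarity_scores))  # -> 4, extremes removed
print(median(clarity_scores))        # -> 4, also robust to the single low score
```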

 

With peer review you get consistent information on the quality of the work – across a big group.

 

We’ve run courses with over two hundred people on them. And we were very happy to see that the people who got great ratings really had done great work – and the people who got low ratings had done poor work.

 

I’m not trying to judge people. But they need to know the strengths of their work in order to grow. And peer review means we can create courses that do that – at a scale that shifts the dial.

 
