8 Comments
Aug 4, 2022 · Liked by Dr Philippa Hardman

Great post again and thanks for sharing.

We passed it round the Keypath office and some questions came back, like: what happened to the graded assessment/knowledge check? What about low-level Bloom's outcomes like "recite" or "classify"?

Is the assessment now the feedback, or is there an important distinction we missed?

Are low-level Bloom's knowledge checks built into the activities, or are they part of the feedback? And are we seeing the model too linearly in trying to split them apart, when in fact activities and feedback are more blurred?

Or did we miss the point :P

Thanks again Dr! DanC

Author · Aug 4, 2022 · edited Aug 4, 2022

All great questions, thanks Dan + team!

Some immediate thoughts:

Totally agree on the importance of laying down & embedding foundational knowledge. Research suggests that lower-level Bloom's skills (e.g. remember, understand) are better achieved through strategies like varied repetition & interleaving, which require repeated & intentional recall for application, rather than through immediate, abstract knowledge checks, e.g. quizzes.

In practice this means:

1. Key concepts / skills are highlighted - sometimes literally! - in both the prompt content & the feedback.

2. Key concepts / skills are returned to throughout the experience, and learners are required to recall and apply them, ideally with increasing complexity and reduced scaffolding over time (see the sketch after this list).
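To make point 2 concrete, here's a minimal sketch of an expanding spaced-retrieval schedule in Python. The doubling intervals, function name and defaults are all illustrative assumptions, not something prescribed in the post; any schedule that forces repeated, spaced recall fits the strategy.

```python
from datetime import date, timedelta

def retrieval_schedule(start: date, reviews: int = 5, first_gap_days: int = 1) -> list[date]:
    """Return review dates with expanding gaps (1, 2, 4, 8... days).

    Illustrative only: the doubling rule stands in for any expanding
    schedule that spaces recall out over time.
    """
    dates = []
    gap = first_gap_days
    current = start
    for _ in range(reviews):
        current += timedelta(days=gap)
        dates.append(current)
        gap *= 2  # widen the interval after each recall opportunity
    return dates

# e.g. plan five recall activities for a concept introduced today
for review_date in retrieval_schedule(date.today()):
    print(review_date.isoformat())
```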

In terms of assessment: research suggests that the more authentic the assessment, the better. So, again, rather than an abstract quiz, a better way to approach assessment is to require learners to produce or create something (Bloom's higher-order tasks), which in turn requires them to demonstrate an understanding of core, foundational concepts.

In practice this means: swapping out a quiz about Python for a project - e.g. a coding exercise - which requires learners to understand (and demonstrate an understanding of) what Python is, how it works, etc. A sketch of what that might look like is below.
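Purely as a hypothetical illustration (the brief, names and scope are invented, not from the post), a project-style checkpoint might look like this: rather than asking learners to define functions, dicts and loops, the brief makes them use those concepts to build something small.

```python
# Hypothetical project brief: "Build a mini expense tracker."
# Completing it requires learners to *apply* core Python concepts
# (functions, dictionaries, loops, sorting) rather than recite them.

def add_expense(expenses: dict[str, float], category: str, amount: float) -> None:
    """Record an expense, accumulating a running total per category."""
    expenses[category] = expenses.get(category, 0.0) + amount

def summarise(expenses: dict[str, float]) -> str:
    """Return a summary of spending, largest categories first."""
    ranked = sorted(expenses.items(), key=lambda item: item[1], reverse=True)
    return "\n".join(f"{category}: {total:.2f}" for category, total in ranked)

expenses: dict[str, float] = {}
add_expense(expenses, "travel", 42.50)
add_expense(expenses, "food", 12.00)
add_expense(expenses, "travel", 10.00)
print(summarise(expenses))
# travel: 52.50
# food: 12.00
```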

As for measuring understanding: this can be tricky, especially at scale. However, there's lots of interesting research to suggest that strategies like self-assessment (e.g. comparison against an example of "great" work) and peer-to-peer comparative assessment are more effective than a score generated by a quiz.

I'd love to hear what you and the team make of this! :-)

Aug 5, 2022 · Liked by Dr Philippa Hardman

Love this exchange. Is there a distinction to be made regarding the purpose of a given learning intervention - say, between purely recalling information (for compliance, or knowing how to measure a drug dose) and building a complex skill or unconscious reflex (in medicine, first aid, for example), where authentic assessment would arguably make an incredible difference?

Author

Great question, Natalia! The evidence shows that for information to be transferred meaningfully into long-term memory & to change behaviour, we have to do more than just test our ability to recall it in isolation.

The effective "learning transfer" of any new knowledge requires the use of strategies like varied repetition, spaced retrieval and application.

Without this, the learner might be able to recall in the immediate term (e.g. do a quiz and "pass"), but the experience will not have any medium- or long-term effect on knowledge or skill/behaviour.

One interesting thing to consider is what authentic practice & assessment might look like. It's often associated with complex technologies - e.g. immersive virtual realities - but we don't need technology to deliver authenticity. E.g. strategies like "cognitive annotation" (aka walkthroughs), paired with the strategies mentioned above, can be authentic *enough* to enable learning transfer & to impact knowledge and skill/behaviour.

:-)

Aug 3, 2022 · Liked by Dr Philippa Hardman

Fantastic post as always - but I'm missing one key takeaway, and one reason why a framework like this isn't currently 'leading' the market (volume-wise). You need to provide:

- additional functionality / links out to platforms that allow you to practice, vs. just playing back a video + MCQs - of course it's doable, but it takes more time, expertise and possibly cost

- if feedback is delivered via screenshare + walkthrough, it means it's delivered by a person. If it's through social learning, you 'save' on instructor costs but need to provide, manage and moderate the platform. If it's an instructor, you need to pay that person.

So would the revised courses not be both:

- more expensive

- less scalable?

Is this the right approach? Yes. We very much believe in (a version of) this approach at my company, so I'm playing a bit of devil's advocate here! Am I surprised that this isn't the standard design? Not really, partly for those reasons (instructor / reviewer cost + expertise and confidence using different platforms).

Author

Lots of great points here, Natalia. A few things which come to mind in response:

- Yep, you're right: most platforms are built for content + quiz, which is a blocker. There's definitely room in the market for a platform that makes hands-on practice easier (watch this space)!

- That said, there are ways to deliver *more* hands-on & contextualised learning using the platforms that already exist, e.g. by hacking assignments functionality.

- In terms of production costs: given that this approach typically sees a ~70% reduction in bespoke content creation, it is arguably more - not less - scalable than existing approaches.

- On scalability of assessment & feedback: there are a number of smart options here which enable us to scale feedback for challenge-based learning in a way that is richer and quicker to produce than writing MCQs + answers, e.g. recorded walkthrough feedback.

Keen to hear what you think! :-)

Aug 3, 2022 · Liked by Dr Philippa Hardman

Great comeback! I would be interested in chatting more about 'hacking assignments functionality' - I agree, sometimes it's about looking at what you have differently, rather than reusing it in the same way as always.

Interesting point about cost - that probably also depends on the difference in rates, if any, between content development and delivery costs. Arguably, even if you reduce content by 70%, if your instructor costs are high it may not even out, or may even cost more (a toy calculation of that trade-off is sketched below). BUT - this is if you only take into account pure cost, and not the possible ROI in improved learner outcomes, client retention, brand value etc. The open question here is 'packaging' those improvements in a way that speaks to clients, etc. Though I imagine this is out of scope of the course ;)
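As a back-of-envelope illustration of that trade-off (every figure here is invented purely for illustration; none comes from the thread):

```python
# Toy cost comparison - all numbers are made up for illustration only.
content_hours_traditional = 100   # bespoke content + MCQ writing
content_hours_revised = 30        # ~70% less bespoke content, per the reply above
instructor_hours_revised = 40     # walkthrough feedback, moderation, etc.

content_rate = 60                 # hypothetical hourly rates
instructor_rate = 120

traditional_cost = content_hours_traditional * content_rate
revised_cost = (content_hours_revised * content_rate
                + instructor_hours_revised * instructor_rate)

print(f"traditional: {traditional_cost}")  # 6000
print(f"revised:     {revised_cost}")      # 6600

# With these made-up rates the revised course costs *more* up front,
# which is the point above: a 70% content saving can be offset by
# instructor costs, before counting ROI from improved outcomes.
```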

On scalability - you've got my attention now. It is certainly on my mind a lot, recently.

Author

Yeah, I think instructor input is always expensive and that we need to be innovative and creative to mitigate this in a way that isn't just online quizzes.

Case study: I designed a MOOC for Oxford Uni which 1,000+ students engaged with and completed. This was partly because we:

- Brought the prof to life through pre-recorded video feedback + pre-written & scheduled announcements to drive motivation

- Delivered a single end-of-course live Q&A, with questions collected throughout the MOOC

- Had a start and end date, and enabled students to progress through the experience together and peer review one another's work

Beyond this, there are also some interesting blockchain-based technologies which might enable us to verify skills at scale - I'm researching this right now, so watch this space.
