My new editorial, Asking a More Productive Question about AI and Assessment, was published today. "The primary problem AI causes for educators is that its existence changes evidence [of student learning] that was previously persuasive into evidence that is no longer persuasive. Or, to return to the murder mystery metaphor, the very fact that AI exists 'contaminates the crime scene.' It spoils the usefulness of the assessments educators have come to rely on over decades of carefully negotiated trade-offs. One day, your catalog of tried-and-true assessments provides you with persuasive evidence of student learning. Then you wake up the next day, and suddenly it doesn't." https://rdcu.be/ewn3a #AI #learning #assessment
Or, it points out that the assessments that have been "trusted" are paper assessments that are convenient to create and grade in an industrialized fashion. In engineering, that churns out students with 4.0 GPAs who are great at putting math to paper but have never soldered a part onto a circuit board. AI vs. academia is ... in some ways ... hyena vs. vulture in my mind ... fighting over the carcasses of dead creativity in the name of homogenization.
Love this analogy. We need to rethink the learner capabilities we really care about in a world *where AI exists* and figure out how to measure that new thing.
Thanks for sharing your expertise here! What resonated with me most is your call, which I have embraced in my own writing program administrative work and teaching: to "ponder with fresh eyes...what evidence of learning would I now find persuasive?" In writing classrooms, we can measure rhetorical behaviors that indicate student learning (from our disciplinary perspective). With every new technology in the past several decades, writing studies professors have adapted and pivoted in our assessments by being able to "be persuaded of student learning" by a process model. I love how you give these practices that phrase!
For well-defined domains of knowledge it's OK, but for ill-defined domains of knowledge, AI will pose problems for learning design and also assessment!
Right. It's muddied the relationship between the signifier (the exam, paper, etc.) and the signified (the learning process behind the product). At the very least, we need to move forward knowing that we can't treat the signifier-signified relationship with much confidence.
And that's the good news.
We’re forming AI 2030 working groups right now to work with global cross-sector HR, legal, tech, education, government, and corporate experts to curate research and develop new tools for the future of various types of assessment and evaluation. Ready to forward ethical and responsible AI? DM me. 😊
Exactly! Assessment of all types will need further investigation. How will colleges assess admissions applications, departments, promotion and tenure, and HR performance reviews? Get wisdom; it is the principal thing.
Traditional assessment designs are not workable. There is a serious need to rethink what we are achieving.
The article poses the right question: "Given that AI exists in the world, and that students are likely to use it (whether accidentally or on purpose), what evidence of learning would I now find persuasive?" Hopefully it inspires more educators to share examples of how they are redesigning learning experiences and assessments.