Investigating Performance: Design and Outcomes With Xapi
By Sean Putman and Janet Laane Effron
About this ebook
Learning & Development professionals have the potential to support meaningful, actionable assessment of course design and learner achievement through data collection and analysis. Now it’s time to roll up the sleeves and get to work. This book gives a practical overview of the tools, techniques, and strategies needed to get started.
Introduction
Time to Get Real
You develop a course, sometimes with intense care and attention to detail, sometimes quickly with a suddenly imposed deadline looming over your head. In the end, though, how do you know if the course did its job? If you did your job? And, more to the point, did the end users get what they needed to do well at their jobs?
You take a course; you know how to play the game. (Standardized testing taught you well.) You can scan for the right information, give the correct responses, make a good guess as far as expectations. You know which videos you can safely ignore, using their playing time to answer a few emails. And all will be forgotten in a few hours or days or weeks, except maybe a few vague concepts, a random cartoon, or a point of cognitive dissonance.
You go to conferences and read blog posts where everyone is talking about real learning, contextual learning: the ideal scenarios that are rooted in the realities of how people learn rather than the limitations of your LMS. Heck, you might even be speaking at the conferences and writing the blog posts. You want learning to be like that; you want to do great stuff.
But you can’t – at least, not using the measuring stick you currently have. It’s not that assessment is a bad thing, but when we are working at scale, the measures we have don’t really tell us what we need to know.
The Lie That Is Test Scores
I am not a number, I am a free man!
Suppose you just got a passing grade on your last elearning test, whether on software training or safety training. What does your test score really tell you? Something about knowledge? Or just memory, or pattern recognition?
If I’m solving a problem or answering a question simply by recognizing the form and recalling the rubrics, that doesn’t say much about whether I actually understand what I am doing; it speaks to knowledge more than understanding. (See, for example, Oppenheimer on fluency and priming.)
If we look at learning as a continuum, from Recognition to Recall to Re-creation (or Application, or Creation), the kind of learning that ties to performance generally falls in the last category, but the kind of testing available in most elearning is designed primarily to assess the first two. For learning professionals, this is both limiting and frustrating: limiting, because it forces us to design to a specific form of assessment, and frustrating, because we know we could give our learners so much more, something so much better, but we need to be able to quantify and validate what we do. We don’t usually have the luxury of one-on-one coaching or individual assessment of transfer to performance. And whether test results are poor or excellent, we can never be sure if the root cause is the learner, the course, or some proportion of both.
Then there’s also the painful reality that testing, as a form of assessment, comes too late in the process. The course has been designed, built, and launched; end users have worked their way through it, and it’s only at the end of this enormous investment of time and resources that we attempt to determine if the course did its job.
The Need for Better
We know that all the talk at conferences about the need for real learning, for meaningful, contextual learning, isn’t just aspirational; it’s not a luxury, it’s a necessity. Necessary for the organization to have the knowledge, skills, and insight to meet its goals. Necessary for employees to succeed and advance in their work.
If we want to be effective in what we deliver, it’s also necessary to know what aspects of our courses are working long before we’re seeing test results. We have all the tools we need to iterate our design and resources except the critical tool: ongoing feedback from end users.
It’s become increasingly possible to do better. Barriers to information and data access have dropped dramatically over the past decade, and with a few tools and a good game plan, we’re well positioned to use that information to understand how our end users see our courses and how they are learning, really, in both formal and informal ways.
The Opportunity
With the launch of the Experience API (xAPI) we started looking more closely at how to use data not just to evaluate learners, but to evaluate and support every aspect of learning within organizations – from course development, to understanding learning channels outside of courses, to understanding the performance impacts of learning experiences.
When we use data analysis to understand what’s working and what’s not working in organizational learning, here’s the key: Focus more on what the data tells us about how to improve what we’re doing, and less on what it tells us about what end users are doing.
In the chapters ahead, we’ll be looking at data around all aspects of learning, from course development, to feedback loops, to performance assessment, as well as big-picture questions about approaches that meet organizational needs.
But what about Big Data? Fortunately for learning professionals who are adding data analysis to their professional toolkit, most of what we deal with isn’t on the scale of Big Data. We are concerned with intelligent data – that is, the right data to answer our questions, inform decisions, and drive improvements. This means we can get started with analysis using tools that are already familiar.
Opportunity for Design Improvements
We’ll start with a look at how data collection and analysis can be used to improve not just instructional design but also the user interface design for the course. Data collection and data awareness start in the design phase, during prototyping. We will also talk about assessing what user data should be collected in the context of the course, to ascertain trends about which elements of the course are delivering value, causing confusion, excluding certain user groups, or being ignored by them. We can also investigate which outside activities we want to include in our data collection.
Opportunity for Meaningful Measures
The overriding question that gets asked about most courses is, Did it work?
The answer to this question is rarely found in test scores. Through tools such as xAPI, we can dig deeper, investigating what course elements are effective, not just in test results, but in real world results.
We’ll be discussing measures of learner performance within the course content and, equally important, performance of the course content to meet learner needs. There’s the opportunity to drill deeper and analyze the value of specific course content for different user groups, as well as see where employees are learning outside of the course and how those other experiences and resources have been beneficial. And of course we will look at how to use data to measure learning where it really matters: performance in context.
Analytics
The data we collect is the starting point, and we’ll always approach design and data collection with analysis in mind. The focus in analysis is multifaceted; it means looking beyond analyzing user performance to analyzing course performance in both content and design. We can frame analysis, like data collection, to consider both leading and trailing measures of performance, supporting learners better and efficiently iterating and improving courses based on feedback loops.
The tools are in place to support meaningful, actionable assessment of course design and learner achievement. Now it’s time to roll up our sleeves and put those tools to work.
Chapter 1
Defining Statement Properties
Before getting into the basics of data manipulation, it is important to understand how data is going to be collected. In this chapter, we are going to talk about the Experience API (xAPI). The Experience API allows activities to be tracked as learners perform them, and this chapter will introduce and define the objects and properties that go into collecting xAPI data. If you are a non-developer (which you might be if you’re reading this book), the xAPI specification may seem completely overwhelming. After all, it is intended for developers who are creating software, scripts, or apps that will be generating data. Fortunately, many software vendors have created applications that will generate xAPI data. For example, most of the popular rapid development tools for elearning will generate output in an xAPI format. There are also quite a few open source libraries available for use in custom applications. Once integrated into the application, they can do the heavy lifting of making xAPI statements.
What we want to help instructional designers (IDs) and developers understand is, what is available in the output of xAPI? What data is available for analytics when using xAPI? In this chapter we will help unpack the xAPI specification into terms that make sense for IDs.
Basic Term Definitions
First, let’s learn some xAPI vocabulary, which will help us understand the statement generation process and all the pieces in the ecosystem that generate and store xAPI statements. Some terms will be defined as we go, but a few basic terms are:
Activity – An action that is being tracked in conjunction with a verb. In terms of a statement, it is something that somebody did. In the simple statement Sean wrote a book, the book is the activity with regard to the xAPI statement. Activities can consist of almost anything, real or virtual.
Activity Statement – In its simplest form, the activity statement is made up of an actor, a verb, and an object that are tied back to an activity. The actor states who performed the action. The verb states the action performed. The object is what the action was performed on. The statement can then reference an activity that provides a broader context for the statement. In xAPI, a statement is the method for collecting and storing data as a chunk of JSON.
Application Programming Interface (API) – A defined way for a piece of software to communicate with other software. When an API communicates, the communication is referred to as a call. Just as you might make a phone call to communicate information, an API makes a call to communicate data.
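To make the idea of a call concrete, here is a minimal sketch of how a learning record provider might call an LRS. The /statements resource and the X-Experience-API-Version header come from the xAPI specification; the endpoint URL, credentials, and all identifiers are hypothetical placeholders, and the request is only constructed, not sent.

```python
# Sketch of an API call to an LRS (request is built, never sent).
# Endpoint, credentials, mbox, and activity id are hypothetical.
import base64
import json
import urllib.request

LRS_ENDPOINT = "https://lrs.example.com/xapi"  # hypothetical LRS

statement = {
    "actor": {"name": "Sean Putman", "mbox": "mailto:sean@example.com"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/attempted",
             "display": {"en-US": "attempted"}},
    "object": {"id": "http://example.com/activities/xapi-statements",
               "objectType": "Activity"},
}

# xAPI uses HTTP: the statement travels as JSON in a POST request.
credentials = base64.b64encode(b"user:password").decode("ascii")
request = urllib.request.Request(
    url=LRS_ENDPOINT + "/statements",
    data=json.dumps(statement).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "X-Experience-API-Version": "1.0.3",
        "Authorization": "Basic " + credentials,
    },
    method="POST",
)

print(request.full_url)      # https://lrs.example.com/xapi/statements
print(request.get_method())  # POST
```

In practice a rapid development tool or an open source xAPI library makes this call for you; the point is simply that a statement is data handed from one piece of software to another.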
Learning Record Store (LRS) – The LRS is a system that uses APIs to move and store statements. An LRS must be present for xAPI to function. An LRS is not required by the specification to have any data visualization. It is meant to store statements and related data as will be shown throughout this and subsequent chapters.
Learning Record Provider (LRP) – The learning record provider is the origin of the statement that communicates with the LRS. It might be similar to a SCORM package that contains all the learning assets. It can also be a separate entity, such as a software package, that is distinct from the activity.
JavaScript Object Notation (JSON) – JSON is a syntax for storing and exchanging data. It is the data format in which the statements are written. The simple format makes it easy for the data to be passed from LRS to LRS. Below is an example of JSON for a simple statement:
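A statement such as "Sean Putman attempted xAPI Statements" can be sketched as a Python dictionary and serialized to JSON. The verb IRI below is the standard ADL "attempted" verb; the actor mbox and activity id are hypothetical placeholders:

```python
# Sketch of an xAPI statement for "Sean Putman attempted xAPI Statements".
# The mbox address and activity id are hypothetical placeholders.
import json

statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Sean Putman",
        "mbox": "mailto:sean@example.com",  # hypothetical address
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/attempted",
        "display": {"en-US": "attempted"},
    },
    "object": {
        "objectType": "Activity",
        "id": "http://example.com/activities/xapi-statements",  # hypothetical IRI
        "definition": {"name": {"en-US": "xAPI Statements"}},
    },
}

# json.dumps produces the JSON text that would be stored by an LRS.
print(json.dumps(statement, indent=2))
```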
Example statement: This statement describes the activity ‘Sean Putman attempted xAPI Statements.’
Example statement JSON data
Usually an LRS will not show the statement as raw JSON in the default view; it will show it in a human-readable form similar to the sentence above. The JSON is stored in the database and can be queried for data visualization or other tools.
There are many other terms to become familiar with as the chapter moves along. Understanding the terms above is a good jump start into knowing what goes into a statement and how the properties, or different parts of a statement, can work together. More terms will be defined in context throughout the chapter.
Statement Basics
What goes into a basic statement? Some of the required parts of a statement will be added by the software (e.g. timestamps). From the point of view of an instructional designer there are three main items to consider when making a statement. (To make a statement even better, other properties can be considered, but these will be discussed later.) The main three are actor, verb, and object – in other words, I did something.
Each statement has three requirements:
Each property (actor, verb, object) cannot be used more than once.
There must be an actor, a verb, and an object.
The statement can list these properties (actor, verb, object) in any order.
A fourth property that is recommended is an ID, or identifier, for the statement. The ID property should be set by the learning record provider (software or library), but if for some reason it is not, then the LRS will add one when the statement is stored. In other words, if the LRP does not provide a unique way to identify a statement, the LRS must do this before it can accept the statement because otherwise there will be no way to find the statement again. Most rapid development tools give a unique ID to statements generated from their published outputs. Custom solutions (HTML-based, or Apps) will need to be programmed to create the unique ID. It is recommended that the originating program (the LRP) create the unique ID, especially if other programs will need to use the statement. When the LRS creates an ID, it is