
Investigating Performance: Design and Outcomes with xAPI
Ebook · 223 pages · 2 hours


About this ebook

Through conversations with Learning & Development professionals about the need for better data, it’s become clear that good design is not only about designing for data, but also designing from data. Investigating Performance takes a comprehensive look at designing to capture the data needed to understand both course and user performance via data collection and analytics. The authors present tools and techniques for taking advantage of the data collected to improve learning approaches throughout the design cycle. Investigating Performance begins with a non-developer-level overview of the xAPI spec to establish a baseline understanding of xAPI and its usage, then moves on to an exploration of data types, strategy, and the basics of data analysis.

Learning & Development professionals have the potential to support meaningful, actionable assessment of course design and learner achievement through data collection and analysis. Now it’s time to roll up the sleeves and get to work. This book gives a practical overview of the tools, techniques, and strategies needed to get started.
Language: English
Publisher: BookBaby
Release date: Mar 27, 2017
ISBN: 9781483598314

    Book preview

    Investigating Performance - Sean Putman

    Introduction

    Time to Get Real

    You develop a course, sometimes with intense care and attention to detail, sometimes quickly with a suddenly imposed deadline looming over your head. In the end, though, how do you know if the course did its job? If you did your job? And, more to the point, did the end users get what they needed to do well at their jobs?

    You take a course; you know how to play the game. (Standardized testing taught you well.) You can scan for the right information, give the correct responses, make a good guess as far as expectations. You know which videos you can safely ignore, using their playing time to answer a few emails. And all will be forgotten in a few hours or days or weeks, except maybe a few vague concepts, a random cartoon, or a point of cognitive dissonance.

    You go to conferences and read blog posts where everyone is talking about real learning, contextual learning: the ideal scenarios that are rooted in the realities of how people learn rather than the limitations of your LMS. Heck, you might even be speaking at the conferences and writing the blog posts. You want learning to be like that; you want to do great stuff.

    But you can’t – at least, not using the measuring stick you currently have. It’s not that assessment is a bad thing, but when we are working at scale, the measures we have don’t really tell us what we need to know.

    The Lie That Is Test Scores

    I am not a number, I am a free man!


    Suppose you just got a passing grade on your last elearning test, on software training or safety training. What does your test score really tell you? Something about knowledge? Or just memory, or pattern recognition?

    If I’m solving a problem or answering a question simply by recognizing the form and recalling the rubrics, that doesn’t say much about whether I actually understand what I am doing; it speaks to knowledge more than understanding. (See, for example, Oppenheimer on fluency and priming.)

    If we look at learning as a continuum, from Recognition to Recall to Re-creation (or Application, or Creation), the kind of learning that ties to performance is generally in the last category, but the kind of testing available in most elearning is designed primarily to assess the first two, which, for learning professionals, is both limiting and frustrating. Limiting, because it forces us to design to a specific form of assessment, and frustrating, because we know we could give our learners so much more, something so much better, but we need to be able to quantify and validate what we do. We don’t usually have the luxury of one-on-one coaching or individual assessment of transfer to performance. And we can never be sure, if test results are poor, or if they are excellent, whether the root cause is the learner, the course, or some proportion of both.

    Then there’s also the painful reality that testing, as a form of assessment, comes too late in the process. The course has been designed, built, and launched; end users have worked their way through it, and it’s only at the end of this enormous investment of time and resources that we attempt to determine if the course did its job.

    The Need for Better

    We know that all the talk at conferences about the need for real learning, for meaningful, contextual learning, isn’t just aspirational. It’s not a luxury; it’s necessary. Necessary for the organization to have the knowledge, skills, and insight to meet its goals. Necessary for employees to succeed and advance in their work.

    If we want to be effective in what we deliver, it’s also necessary to know what aspects of our courses are working long before we’re seeing test results. We have all the tools we need to iterate our design and resources except the critical tool: ongoing feedback from end users.

    It’s become increasingly possible to do better. Barriers to information and data access have dropped dramatically over the past decade, and with a few tools and a good game plan, we’re well positioned to use that information to understand how our end users see our courses and how they are learning, really, in both formal and informal ways.

    The Opportunity

    With the launch of the Experience API (xAPI) we started looking more closely at how to use data not just to evaluate learners, but to evaluate and support every aspect of learning within organizations – from course development, to understanding learning channels outside of courses, to understanding the performance impacts of learning experiences.

    When we use data analysis to understand what’s working and what’s not working in organizational learning, here’s the key: Focus more on what the data tells us about how to improve what we’re doing, and less on what it tells us about what end users are doing.

    In the chapters ahead, we’ll be looking at data around all aspects of learning, from course development, to feedback loops, to performance assessment, as well as big-picture questions about approaches that meet organizational needs.

    But what about Big Data? Fortunately for learning professionals who are adding data analysis to their professional toolkit, most of what we deal with isn’t on the scale of Big Data. We are concerned with intelligent data – that is, the right data to answer our questions, inform decisions, and drive improvements. This means we can get started with analysis using tools that are already familiar.

    Opportunity for Design Improvements

    We’ll start with a look at how data collection and analysis can be used to improve not just instructional design but also the user interface design for the course. Data collection and data awareness start in the design phase, during prototyping. We will also talk about assessing what user data should be collected in the context of the course, to ascertain trends about which elements of the course are delivering value, which are causing confusion, and which are excluding or being ignored by different user groups. We can also investigate which outside activities we want to include in our data collection.

    Opportunity for Meaningful Measures

    The overriding question that gets asked about most courses is, Did it work? The answer to this question is rarely found in test scores. Through tools such as xAPI, we can dig deeper, investigating what course elements are effective, not just in test results, but in real world results.

    We’ll be discussing measures of learner performance within the course content and, equally important, performance of the course content to meet learner needs. There’s the opportunity to drill deeper and analyze the value of specific course content for different user groups, as well as see where employees are learning outside of the course and how those other experiences and resources have been beneficial. And of course we will look at how to use data to measure learning where it really matters: performance in context.

    Analytics

    The data we collect is the starting point, and we’ll always approach design and data collection with analysis in mind. The focus in analysis is multifaceted; it means looking beyond analyzing user performance to analyzing course performance in both content and design. We can frame analysis, like data collection, to consider both leading and trailing measures of performance, supporting learners better and efficiently iterating and improving courses based on feedback loops.

    The tools are in place to support meaningful, actionable assessment of course design and learner achievement. Now it’s time to roll up our sleeves and put those tools to work.

    Chapter 1

    Defining Statement Properties

    Before getting into the basics of data manipulation, it is important to understand how data is going to be collected. In this chapter, we are going to talk about the Experience API (xAPI). The Experience API allows activities to be tracked as learners perform them, and this chapter will introduce and define the objects and properties that go into collecting xAPI data. If you are a non-developer (which you might be if you’re reading this book), the xAPI specification may seem completely overwhelming. After all, it is intended for developers who are creating software, scripts, or apps that will be generating data. Fortunately, many software vendors have created applications that will generate xAPI data. For example, most of the popular rapid development tools for elearning will generate output in an xAPI format. There are also quite a few open source libraries available for use in custom applications. Once integrated into an application, they can do the heavy lifting of making xAPI statements.

    What we want to help instructional designers (IDs) and developers understand is, what is available in the output of xAPI? What data is available for analytics when using xAPI? In this chapter we will help unpack the xAPI specification into terms that make sense for IDs.

    Basic Term Definitions

    First, let’s learn some xAPI vocabulary, which will help us understand the statement generation process and all the pieces in the ecosystem that generate and store xAPI statements. Some terms will be defined as we go, but a few basic terms are:

    Activity – An action that is being tracked in conjunction with a verb. In terms of a statement, it is something that somebody did. If we look at a simple statement, Sean wrote a book, book is the activity in the xAPI statement. Activities can consist of almost anything, real or virtual.

    Activity Statement – In its simplest form, the activity statement is made up of an actor, a verb, and an object that are tied back to an activity. The actor states who performed the action. The verb states the action performed. The object is what the action was performed on. The statement can then reference an activity that provides a broader context for the statement. In xAPI, a statement is the method for collecting and storing data as a chunk of JSON.

    Application Programming Interface (API) – A defined way for a piece of software to communicate with other software. When an API communicates, the communication is referred to as a call. Just as you might make a phone call to communicate information, an API makes a call to communicate data.

    Learning Record Store (LRS) – The LRS is a system that uses APIs to move and store statements. An LRS must be present for xAPI to function. An LRS is not required by the specification to have any data visualization. It is meant to store statements and related data as will be shown throughout this and subsequent chapters.

    Learning Record Provider (LRP) – The learning record provider is the origin of the statement that communicates with the LRS. It might be similar to a SCORM package that has all the learning assets within. It can also be a separate entity, such as a software package, that is separate from the activity.

    JavaScript Object Notation (JSON) – JSON is a syntax for storing and exchanging data. It is the data format in which the statements are written. The simple format makes it easy for the data to be passed from LRS to LRS. Below is an example of JSON for a simple statement:


    Example statement: This statement describes the activity ‘Sean Putman attempted xAPI Statements.’ [Figure: example statement JSON data]
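    The JSON figure is not reproduced in this excerpt. Below is a minimal sketch of what such a statement could look like, built as a JavaScript object so it can be serialized to the JSON an LRS stores; the mbox address and activity ID are hypothetical placeholders, not values from the book:

```javascript
// A minimal xAPI-style statement for "Sean Putman attempted xAPI Statements".
// The mbox address and activity ID below are hypothetical placeholders.
const statement = {
  actor: {
    objectType: "Agent",
    name: "Sean Putman",
    mbox: "mailto:sean@example.com"
  },
  verb: {
    id: "http://adlnet.gov/expapi/verbs/attempted",
    display: { "en-US": "attempted" }
  },
  object: {
    objectType: "Activity",
    id: "http://example.com/activities/xapi-statements",
    definition: { name: { "en-US": "xAPI Statements" } }
  }
};

// Serializing the object produces the chunk of JSON the LRP sends to the LRS.
console.log(JSON.stringify(statement, null, 2));
```

    Serializing this object with JSON.stringify produces the chunk of JSON that travels from the learning record provider to the LRS.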

    Usually an LRS will not show the statement in JSON in the default view; it will show it in a human-readable form similar to how it’s written above the JSON example. The JSON is stored in the database and can be queried for data visualization or other tools.

    There are many other terms to become familiar with as the chapter moves along. Understanding the terms above is a good jump start into knowing what goes into a statement and how the properties, or different parts of a statement, can work together. More terms will be defined in context throughout the chapter.

    Statement Basics

    What goes into a basic statement? Some of the required parts of a statement will be added by the software (e.g. timestamps). From the point of view of an instructional designer there are three main items to consider when making a statement. (To make a statement even better, other properties can be considered, but these will be discussed later.) The main three are: actor, verb and object, or I did something. Each statement has three requirements:

    Each property (actor, verb, object) cannot be used more than once.

    There must be an actor, a verb, and an object.

    The statement can list these properties (actor, verb, object) in any order.

    A fourth property that is recommended is an ID, or identifier, for the statement. The ID property should be set by the learning record provider (software or library), but if for some reason it is not, then the LRS will add one when the statement is stored. In other words, if the LRP does not provide a unique way to identify a statement, the LRS must do this before it can accept the statement because otherwise there will be no way to find the statement again. Most rapid development tools give a unique ID to statements generated from their published outputs. Custom solutions (HTML-based, or Apps) will need to be programmed to create the unique ID. It is recommended that the originating program (the LRP) create the unique ID, especially if other programs will need to use the statement. When the LRS creates an ID, it is
