Assessment FOR learning in games and assessment OF learning in games are of paramount importance, as the following two quotes illustrate.

Jim Gee (2009):
"If you are testing outside the game, you had better have a good reason for doing it. The very act of completing a game should serve as an assessment of whatever the intervention was designed to teach or measure."

Corti (2011):
"Serious games will only grow as an industry if the learning experience is definable, quantifiable and measurable."

Corti and Gee seem to embrace a rather old-fashioned but well-substantiated view, voiced by several instructional designers, that acts as an overarching guideline for artifacts whose main goal is somehow related to learning: assessments need to be aligned with learning objectives (Gagné, Briggs, & Wager, 1988; Smith & Ragan, 1999). Traditionally, a rough distinction has been made between summative assessment and formative assessment. The main purpose of assessment OF learning (i.e., summative assessment) is to gather evidence of learning. The main purpose of assessment FOR learning (i.e., formative assessment) is to gather evidence for interventions that improve learning. The two types of assessment differ in purpose but may use similar methods. Following Gee's recommendation would mean striving for games that guarantee such an array of interventions that the final intervention enables the learner to complete the game and thereby also achieve its learning goal (= assessment OF learning). As we can all imagine, this automatically means that all previous interventions (and there are a lot of them, as games often require many actions, and thus interventions, before they can be completed) should scaffold the learner towards this successful completion. In other words, successful games need to incorporate assessment FOR learning (see also Wouters & van Oostendorp, 2012).

However, such embedded assessment, or so-called stealth assessment (a term coined by Valerie J. Shute), poses a real challenge for the design and development of a game, as we know that providing scaffolds to gamers for the purpose of learning can ruin their game experience. A delicate balance between learning and gaming is needed to maintain flow while interacting with the game content. The content itself, which should be meaningful, adds a third dimension to this balancing act in the design and development of serious games.
This all boils down to a functional requirement: embedded assessment in serious games should be valid and reliable, and it should not interrupt play. I would like to call this "seamless assessment", although users might need to be informed beforehand that their actions and feelings will be used to improve their learning and game experience.

The key question is therefore: “How can we achieve seamless assessment in serious games?”

In general, this question can be tackled from two sides: with the top-down approach offered by Evidence-Centred Design for assessment (ECD), or with the mainly bottom-up approach offered by Learning Analytics (LA).

ECD is a conceptual framework that can be used to develop assessment models, which in turn support the design of valid assessments. ECD fits well with the assessment of learning in digital games (Shute & Ke, 2012). The ECD framework comprises three main theoretical models: the competency, evidence, and task models (see Mislevy & Haertel, 2006; Mislevy, Steinberg, & Almond, 2003). The competency model consists of student-related variables (e.g., knowledge, skills, and other attributes) about which one wants to make claims. The evidence model shows how, and to what degree, specific observations and artifacts can be used as evidence to inform inferences about the levels or states of the competency-model variables. The task model specifies the activities or conditions under which data are collected.
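To make these three models a bit more tangible, here is a minimal sketch of how they might be represented in code. Everything in it (the competency variables, the observable names, the belief-update rule) is an illustrative assumption of mine, not part of the ECD literature, which would typically use proper statistical machinery such as Bayesian networks:

```python
from dataclasses import dataclass, field

# Competency model: the student-related variables we want to make claims about.
# Beliefs are plain probabilities here; a real system would typically use a
# Bayesian network or similar inference machinery.
@dataclass
class CompetencyModel:
    beliefs: dict = field(default_factory=lambda: {
        "spatial_reasoning": 0.5,
        "resource_planning": 0.5,
    })

# Evidence model: how a single in-game observable bears on a competency claim.
@dataclass
class EvidenceRule:
    observable: str   # e.g. "bridge_built_first_try" (made-up observable)
    competency: str   # the competency variable this observable informs
    weight: float     # how strongly the observation shifts the belief

# Task model: the game situation under which the data are collected.
@dataclass
class Task:
    name: str
    elicits: list     # observables this task can produce

def update(model, rule, observed):
    """Nudge the belief up or down; a crude stand-in for Bayesian inference."""
    delta = rule.weight if observed else -rule.weight
    p = model.beliefs[rule.competency]
    model.beliefs[rule.competency] = min(1.0, max(0.0, p + delta))

task = Task("build_bridge", elicits=["bridge_built_first_try"])
rule = EvidenceRule("bridge_built_first_try", "spatial_reasoning", 0.1)
model = CompetencyModel()
update(model, rule, observed=True)
print(model.beliefs)  # the spatial_reasoning belief rises to about 0.6
```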

Learning analytics (LA) is "the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs" (definition from the Society for Learning Analytics Research). Methods and techniques used in LA include social network analysis (SNA), information retrieval technologies (educational data mining, machine learning, classical statistical analysis techniques), and natural language processing (NLP). Its roots lie in management information systems and business intelligence, but LA is a rapidly emerging research field in the landscape of Technology-Enhanced Learning (TEL). Learning analytics is expected to provide new insights into educational practices and to offer ways to improve teaching, learning, and decision-making (Siemens & Gasevic, 2012). So LA is meant to support learners in becoming better learners, teachers in becoming better teachers, and educational institutions in improving their business. LA and its methods and techniques seem to offer a promising approach for tackling questions where ECD might fall short, such as: "How can we validly and reliably identify the growth of individuals in group-based actions and collaborative products?" Here, the focus should not only be on the learners, but also on their tools and contexts. Indeed, the WHAT, HOW and CONTEXT of assessment all matter.
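As a small illustration of the bottom-up flavour of LA, the sketch below computes a simple SNA-style indicator (with how many distinct peers a learner interacts per time window) from a hypothetical game event log. The log schema and the metric are my own assumptions, chosen only to show the idea of mining growth signals from group-based actions:

```python
from collections import defaultdict

# Hypothetical event log from a collaborative game session:
# (timestamp_in_seconds, actor, action, target). The schema is made up
# purely for illustration.
events = [
    (10, "ann", "message", "bob"),
    (12, "bob", "share_resource", "ann"),
    (15, "ann", "message", "cem"),
    (90, "cem", "share_resource", "bob"),
    (95, "ann", "message", "bob"),
]

def interaction_degree(events, t_start, t_end):
    """A simple SNA-style metric: with how many distinct peers did each
    learner interact inside the given time window?"""
    peers = defaultdict(set)
    for t, actor, _, target in events:
        if t_start <= t < t_end:
            peers[actor].add(target)
            peers[target].add(actor)
    return {who: len(p) for who, p in peers.items()}

# Comparing windows hints at whether a learner's participation is growing.
print(interaction_degree(events, 0, 60))    # early in the session
print(interaction_degree(events, 60, 120))  # later in the session
```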

New devices and tools on the market offer new interaction modalities and thereby opportunities to develop innovative solutions for continuous user monitoring and seamless assessment. Still, considerable challenges for seamless assessment in serious games remain. For example, how can we prevent the issue of 'never the twain shall meet' when combining a predominantly bottom-up approach (LA) with a top-down approach (ECD)? To me, this seems more complex than building the Channel Tunnel (Chunnel, Tunnel sous la Manche) from both sides of the Channel at once.


In addition:
How can we address issues with big data (e.g., latency) when real-time data gathering and highly frequent, almost real-time, interventions are needed to preserve both learning and playing? (A toy sketch of such a latency-aware loop follows after these questions.)
Finally, how should we deal with people who are not "into games"?
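To illustrate the latency question raised above, here is a toy sketch of a latency-aware assessment loop: events stream in from the game, and an intervention is delivered seamlessly only if it can be decided within a fixed latency budget, otherwise it is deferred to later feedback. The queue-based setup and the 100 ms budget are assumptions for illustration only:

```python
import queue
import threading
import time

# Game events arrive on a queue; the assessment side must decide on an
# intervention within a latency budget or defer it so play is not disturbed.
# All names and numbers are illustrative.
LATENCY_BUDGET = 0.1  # seconds

events = queue.Queue()

def game_simulator():
    """Stand-in for the running game, emitting a few timestamped events."""
    for i in range(5):
        events.put({"t": time.time(), "action": f"move_{i}"})
        time.sleep(0.03)

def assessment_loop(n_events):
    for _ in range(n_events):
        ev = events.get()
        lag = time.time() - ev["t"]
        if lag <= LATENCY_BUDGET:
            print(f"{ev['action']}: intervene now (lag {lag * 1000:.1f} ms)")
        else:
            # Too late for a seamless intervention: store for later feedback.
            print(f"{ev['action']}: deferred (lag {lag * 1000:.1f} ms)")

threading.Thread(target=game_simulator).start()
assessment_loop(5)
```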

I would like to receive your opinions on this, as our current society, with its predominant paradigm of non-formal learning, urges us to seize and broaden the opportunities of seamless assessment in training and learning for 21st-century skills using serious games. In this, I would like to underscore that I see an important role for trainers/teachers exploiting seamless-assessment data from serious games in their feedback to guide learners' continuous development. This echoes a view on feedback mentioned by Ferrell (2012): "Effective dialogue should be adaptive (contingent on student needs); discursive (rich in two-way exchange); interactive (linked to actions related to the task goal); and reflective (encourage students and tutors to reflect on goal-action-feedback cycle)." (p. 11).

Indeed, seamless-assessment data might play a decisive role in reaching this ideal of more effective and efficient learner development in formal and non-formal learning contexts. So we had better follow up on Gee's and Corti's recommendations to boost the potential of serious games in such contexts.

References

Corti (2011). Proof of learning: Assessment in serious games. Last accessed: 4 October 2012,
http://www.gamasutra.com/view/feature/2433/proof_of_learning_assessment_in_.php

Ferrell, G. (2012). A view of the Assessment and Feedback landscape: baseline analysis of policy and practice from the JISC Assessment & Feedback programme. Last accessed: 18 October 2012, http://www.jisc.ac.uk/media/documents/programmes/elearning/Assessment/JISCAFBaselineReportMay2012.pdf

Gagné, R.M., Briggs, L.J., & Wager, W.W. (1988). Principles of instructional design (3rd ed.). New York: Holt, Rinehart, and Winston.

Gee, J.P. (2009). Discussant for the session Peering behind the digital curtain: Using situated data for assessment in collaborative virtual environments and games. AERA, San Diego, CA.

Mislevy, R.J., & Haertel, G.D. (2006). Implications of evidence-centered design for educational testing. Educational Measurement: Issues and Practice, 25(4), 6-20.

Mislevy, R.J., Steinberg, L.S., & Almond, R.G. (2003). On the structure of educational assessments. Measurement: Interdisciplinary Research and Perspectives, 1, 3-62.

Shute, V.J., & Ke, F. (2012). Games, learning, and assessment. In D. Ifenthaler, D. Eseryel, & X. Ge (Eds.), Assessment in game-based learning: Foundations, innovations, and perspectives (pp. 43-58). New York: Springer.

Siemens, G., & Gasevic, D. (2012). Guest editorial: Learning and knowledge analytics. Educational Technology & Society, 15(3).

Smith, P., & Ragan, T. (1999). Instructional design. Hoboken, NJ: Wiley.

Society for Learning Analytics Research. Last accessed: 15 October 2012, http://www.solaresearch.org/mission/about

Wouters, P., & van Oostendorp, H. (2012). A meta-analytic review of the role of instructional support in game-based learning. Computers & Education, 60, 412-425.
