Wednesday, July 8 • 2:30pm - 3:30pm
Research Methods


The Use of Cognitive Clinical Interviews to Explore Learning From Video Game Play
Nathan Holbert, Rosemary Russ, Pryce Davis

As research on the learning that results when children play video games becomes more popular, questions arise about which methodological and analytical tools are most appropriate for accessing and documenting this learning. Thus far, researchers have mostly adopted pre/post assessments, ethnography, and learning analytics. In this paper we (re)introduce cognitive clinical interviews as a methodology particularly suited to answering many of the most pressing questions about games and learning. To that end, we describe four challenges of studying learning in video games with pre/post assessments that we claim can be addressed by the addition of clinical interviews. We then consider how clinical interviews can help explain and describe patterns detected from ethnographic observations and detailed game play logs.

100 Games in 5 Years
James Cox

For the past two and a half years, I’ve been working through a challenge, a goal that will carry on until June 1st, 2017: making and releasing 100 games in 5 years. This piece covers how the challenge came to be, how I’m doing at the halfway mark, and what there is to learn from this method of game development.

Situating Big Data Across Heterogeneous Data Sets of Game Data Exhaust, Class Assessment Measures, and Student Talk
Constance Steinkuehler, Matthew Berland, Kurt Squire, Craig G. Anderson, John Binzak,
Lauren Wielgus, David Azari, Jennifer Dalsen, Pasqueline Scaico

One of the defining questions for education over the next decade is, “How do we shift education from a data-poor to a data-rich activity?” (T. Kalil, White House Office of Science and Technology Policy, personal communication, September 1, 2013). Over the previous decade, we have seen a rise in shared national and state standards and frameworks that articulate what we, as a country, believe young people and adults should be able to think, know, and do in order to be scientifically literate, but we are only now beginning to see a concomitant rise in large-scale, data-rich strategies for assessing such knowledge, skills, and dispositions. “Big data” techniques (the capture, curation, storage, and analysis of massive, complex data sets spanning large numbers of individuals in aggregate) have made significant progress in areas such as content knowledge, inquiry practice, and, to a lesser extent, interest, but significant work remains in areas such as identity, participation, and epistemology – domains historically studied through discourse analysis and other forms of qualitative, conversation-focused methods. The majority of data streams used in big data analyses are “data exhaust” from technologies considered largely in isolation: computer program reports, intelligent tutoring user outputs, clickstream data sets generated from tablet devices, user progressions harvested from educational games, and the like. We know, however, that such technologies are sociotechnical artifacts (Bijker, 1995) whose potential for learning, like that of any instructional tool, is highly influenced by their context of use. Whether it’s a textbook, a calculator, or a high-end 3-D graphical data display, a tool is only as good as the activities and practices in which it is embedded.
Thus, if we want to catalyze progress toward more expanded frameworks for learning goals that include tricky variables such as identity and dispositions, then we must include not only the data streams from technology and tool use but also the talk and interaction data that surround them. And we would be wise to build on the last several decades of discourse and content analytic techniques used routinely in more qualitatively oriented research.

This project seeks to marry theories of situated cognition to the big data movement by connecting clickstream data from technologies in isolation to key forms of multimodal data available from their contexts of use. Using a data corpus gathered from a five-day game-based implementation of the STEM game Virulent (targeting cellular biology) during an event called Game-A-Palooza, we are combining multiple analytic strategies commonly considered incommensurate: educational data mining, learning analytics, qualitative coding, quantification of qualitative coding, discourse analysis, natural language processing, and standard classroom assessments such as pre-/posttest measures and attitudinal surveys. Data include clickstream telemetry data, individual and group discourse, individual and curricular artifacts, classroom assessments, and online forum postings. In this presentation, we review the project goals and preliminary findings from the study, highlighting not just the progress we’ve made but also the significant challenges to this work. We discuss the benefits and drawbacks of analysis across heterogeneous data sets and our current attempts to better situate telemetric analyses and thereby provide a more complete model for big data analysis, one that includes both talk and play data equally or, where that is not possible, identifies its limitations so that future “data rich” research on learning might be better informed by the limitations of technology-rich but talk-poor data sets.


Craig G. Anderson

Grad student, University of Wisconsin - Madison

Matthew Berland

Madison, WI, United States, University of Wisconsin - Madison

John Binzak

Research Project Assistant, Games+Learning+Society

James Cox

Digital Wizard, Seemingly Pointless
Making 100 games in 5 years. Graduate student in USC’s Interactive Media and Games Design program.

Pryce Davis

Evanston, Illinois, United States, Northwestern University

Wednesday July 8, 2015 2:30pm - 3:30pm
Old Madison
