Photo by Rikomatic
The key takeaway for me is the point about the need for new metrics to measure learning from games: the old metrics aren't working. This is the same conversation that is happening around web metrics versus blog metrics. If you use web site metrics to evaluate the success of your blog, they don't work, for a lot of reasons. There was some pushback from someone in the audience: "How can we measure virtual worlds when we really haven't yet figured out what works and what doesn't?" A panelist acknowledged the tension between the need to look at results, the need for experimentation, and the pressures of timing.
Each funder gave an overview of their program.
MacArthur Foundation
- Talked about the openness to games and the need for research
- Mentioned recent studies about taking laptops out of the classroom; there is a problem with using old metrics, and it is important to look for new metrics and to rethink learning environments in general.
- Working on a series of research studies on the benefits of games
- New learning environment puts games at the center. It is a new form of teaching, not just skill-based learning. What does the next generation of learning environments look like?
- How do those learning environments impact institutions? What do libraries look like in the future?
- The power of games has caused us to rethink many things.
- How do we begin to have conversations to understand it?
- A new set of research methods is needed for this. How do we think about virtual worlds, and how can they help us study human behavior?
National Science Foundation
- The program started two years ago. The core thrusts: high performance computing, data visualization, virtual organizations, and learning and workforce development.
- A solicitation is out for programs at the K-12 level to train students to use cyber tools for science.
- Looking at how to focus less on training and more on learning.
- Has funded infrastructure well, but content less so. How do we find the right funding mechanism to facilitate collaborative teams?
- Interested in games as an object of study. What are the implications and ripple effects?
- We need metrics! We need not only to apply metrics but to rethink them.
Microsoft Corporation
- Interested in games for learning and social change
- Partners in Learning program -- gave stats. Emphasis is on digital literacy. Program is world-wide.
- Funded a project to work with the state of Michigan, where a requirement for high school graduation is that all students have an online learning experience.
- Grantees: with Global Kids, they found the right group to fund; they have been a leader. TakingITglobal is also a grantee. Grants are about taking it to scale.
- Scaling is an important point
- Issues: What is our assurance of the quality of the game? What are the metrics? What is the meaning making? How do we translate what gamers do into the language of "normal traditional education" and out-of-school-time learning?
Some questions:
What have you seen in terms of partnerships for people seeking funds for games?
There is more open dialog, and interesting projects that are not the historical type. We're seeing more inquiries from small nonprofit and academic research entities, and fewer traditional applications.
There are two areas - one where applicants collaborate and one where we the funders collaborate. "We're just as guilty of being in a silo and mired in our work." Make it a priority for organizations to work together, share lessons learned, and commit to common metrics. It is also no small feat to get a traditional foundation board to agree to fund game initiatives. Raising up kids' voices is essential to this conversation - making them public so we as adults can understand what is going on.
On the topics of metrics, we don't have a name for virtual worlds yet. How do we get to metrics when we don't know what we're talking about?
Government funders have a congressional mandate for evaluation of anything that is funded and to use quantitative methods and research analysis. Thus, metrics are very important!
This is a time pressure question! We've got this moment in time with the interest, but figuring out the assessment takes longer. There are different ways of approaching games and different outcomes. For our part, we're less interested in games for skill development - "drill and skill." We're interested in assessing learning attitudes from games: How do you approach problem solving and feedback, and how does the learner's attitude change? How do you measure the change? Is there an existing framework to help us understand this? There won't be one system of assessment, and there are different types of dispositions.
It is also important to consider the different contexts for the evaluation. It will be different for universities than for smaller nonprofits. They have observed that many nonprofits don't do any evaluation because it seems ominous and hard to understand. They recommend that organizations get something on paper; this holds them accountable.
Social change is a difficult thing to assess. How do you quantify it? Where is the funding for an activist who wants to make a game for change?
Some funding mandates do not allow the funding of activists. Assessing social change is the same conversation as assessing learning. "If you want to change the world, it's about a learning moment that helps you change your perspective."
Evonne/In Kenzo also posted some notes here.
See Rik Riel's notes here.
The problem, as I see it, with assessment and metrics is that some things are easy to measure and some hard. Naturally you measure what you can, and this is not a bad thing in itself; you can learn and improve by looking at this data.
The problem then comes when programs are rewarded or funded on the basis of measurable outcomes; this produces a bias towards teaching only the outcomes that are easily measurable. These are the lower-level outcomes on Bloom's Taxonomy, the drill and practice stuff, not higher-order skills.
Posted by: Tony Forster | June 14, 2007 at 05:25 PM