Exhibition evaluation is a highly practical activity that sits at the interface of different competencies, and it can be useful in multiple ways. But why is it important to evaluate exhibitions? How do we account for different stakeholders? How has the process changed with Covid? In this workshop, speakers will introduce the interest groups who have a say in the evaluation process, including exhibition developers, researchers/evaluators, funders/sponsors, and marketers. This will be followed by a role play in which participants form smaller teams to work on an evaluation challenge. Participants will be encouraged to take on the points of view of different groups, acknowledging the diversity of opinions, exploring synergies and building towards a consensus, before sharing reflections with the rest of the group.
Curator of interactive experiences
Geneva 23
Switzerland
Head of Development
Bergen
Norway
Nils Petter will take on the role of an exhibition fundraiser, revealing how evaluation can help to finance and quality-assure projects. Combining the Logic Model with Design Thinking provides a quality assurance model, showing how a solid theoretical framework together with protocol-based evaluation can help to make high-quality products and persuade sponsors to contribute.
Senior Research Fellow
IOE (UCL's Faculty of Education & Society)
London
United Kingdom
Jen will encourage participants to consider the wider context of evaluation, including Covid but particularly relevant research. In this role, she will highlight questions that can help get more out of an evaluation. She will take a somewhat provocative stance, reminding participants that a single, narrowly focused evaluation is of limited use to the wider field and supporting them in taking a more critical view of evaluation.
Assistant Professor of Informal Science Education
Leiden University
Leiden
Netherlands
In this workshop, Anne will be the voice of evaluation researchers, who often act as mediators between exhibition developers and visitors. In this case, evaluation ensures that the communication approach accounts for audiences' expectations and preconceptions, and that it serves the ultimate purpose of passing on the key messages. Evaluation researchers also face the challenges of collecting data in a "live" situation. How can you design evaluation studies that measure what you want to measure without burdening visitors too much or influencing their behaviour or opinions?