Two industry standards in need of overhauling.

My mind is trained to seize on brand-new, innovative conferences, so when I began reading humorist Baratunde Thurston’s “One More Thing” column in the September issue of Fast Company — in which he introduces “the one conference that will fix all the rest” — I thought for a half-second that his “PanelCon 2014” might be an actual event.
Once Thurston gets into promoting the content of this conference, “dedicated to the science and art of the conference panel,” it becomes obvious that he’s poking fun at this education-program staple — and that he has either served on or sat through too many hastily assembled, little-thought-out panels. Here’s his pitch for a PanelCon “session” on the typical Q&A segment: “If you’ve planned your panel correctly, there should be no need to wade into the mass of flesh known as the audience for their inevitably asinine opinions. Discover tricks to feign concern for the thoughts of the unlit and unamplified while maximizing the social media lift they can provide to your greatness.”
Thurston’s dim (and droll) view of one common meeting feature made me think of another: post-conference evaluations. That’s because the results of our most recent survey on attendee and exhibitor ROI reveal that the way most meeting professionals measure the value participants get out of their overall experience also needs fixing. Eighty percent of our respondents send out post-event evaluations, but only a minority go beyond collecting the usual satisfaction ratings.
I shared these results with Marion Mayer, a senior manager at Munich-based FairControl, which evaluates the value of live events, to get her perspective. She wrote back that post-event evaluations have a number of shortcomings. What “is often ignored,” she pointed out, “is that some questions cannot be answered if you only survey the actual attendees — as they have obviously made their decision to attend before the event.” It would be insightful if you complemented an attendee survey with a non-attendee survey, she said, so “you also learn more about the reasons why you lose a specific target group (non-attendees) and learn more about their expectations.”
And you “cannot ask an exhibitor directly after an event what his business outcome eventually is,” she said, because “new business develops over time.”
For educational events, Mayer said, “it definitely makes sense to measure the effect of learning/new knowledge on business performance,” which cannot be captured with a post-evaluation alone. “A systematic approach is needed,” she said, “that defines learning outcomes, places mind-triggers [to help] people to recap what they have learned, and in the end measures pre-defined outcomes of the education, looking at the performance on the job.”
Both panels and post-event evaluations used to be no-brainers. But they require rethinking if they are to yield more value for participants and planners alike.