Why I Have My Eye on AI
For my trends outlook, I was going to focus on how face-to-face experiences will become even more critical and take on different forms in the coming year, but I can do a much better job of that once I hear from the experts. So stay tuned: Joseph Pine and James Gilmore, who have just published a 20th-anniversary edition of their seminal The Experience Economy, have promised to share with us in an upcoming issue what attendees now expect in terms of their event experience, and how that has evolved over two decades.
One thing we know for sure has changed is registrants’ expectations for a personalized experience — from marketing tailored to their own needs, designed to compel them to attend; to, once on site, the customized-to-their-interests messages they receive on their event app and the answers a chatbot gives to their specific questions; to post-event follow-up based on their individual participant journey.
All of this is made possible by AI — more precisely, by machine learning, which analyzes data for trends and patterns and applies what it finds to power these functions. So I’ll be following AI this year, and how, as we become more familiar with its abilities and shortcomings, that familiarity will in turn shape how we choose to use it in the business events industry.
AI is all around us, but we don’t understand it, says Janelle Shane, author of You Look Like a Thing and I Love You. Shane, who has a Ph.D. in electrical engineering and a master’s in physics, writes about what’s humorous and unsettling about AI in her AI Weirdness blog, which is built on five principles:
- The danger of AI is not that it’s too smart but that it’s not smart enough.
- AI has the approximate brainpower of a worm.
- AI does not really understand the problem you want it to solve.
- But: AI will do exactly what you tell it to. Or at least it will try its best.
- AI will take the path of least resistance.
Regarding No. 5: Shane points out that AIs are really good at picking up on shortcuts that reinforce racial and gender bias. “If it’s really hard to solve a problem properly,” Shane says, “they end up leaning even more heavily on the bias that they see in their training data, a phenomenon known as ‘bias amplification.’ So AI is great at uncovering simple, shortcut solutions, but it doesn’t know when a simple solution is a bad one.”
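For readers who want to see the mechanics, the shortcut-learning behavior Shane describes can be sketched in a few lines of Python. This is a toy illustration of the general phenomenon, not code from Shane's work: the data, feature names, and numbers below are all invented. A biased "shortcut" attribute agrees with the label 70 percent of the time; a simple learner that picks whichever single feature best predicts the training labels ends up relying on that shortcut 100 percent of the time — a partial correlation in the data becomes total reliance in the model.

```python
import random

random.seed(0)

# Toy data (illustrative only): the label correlates with a biased
# "shortcut" attribute 70% of the time, while the genuinely relevant
# feature is a weaker (60%) signal.
def make_example():
    label = random.random() < 0.5
    shortcut = label if random.random() < 0.7 else not label  # biased proxy
    relevant = label if random.random() < 0.6 else not label  # weak true signal
    return {"shortcut": shortcut, "relevant": relevant}, label

train = [make_example() for _ in range(1000)]

# A "one-rule" learner: pick the single feature that best matches the
# training labels. It takes the path of least resistance.
def fit(data):
    best_feat, best_acc = None, 0.0
    for feat in ("shortcut", "relevant"):
        acc = sum(x[feat] == y for x, y in data) / len(data)
        if acc > best_acc:
            best_feat, best_acc = feat, acc
    return best_feat

model_feature = fit(train)
print(model_feature)  # the learner leans entirely on the biased shortcut
```

The model never "knows" the shortcut is a bad solution; it only knows the shortcut scored best on the data it was shown, which is exactly the failure mode Shane's fifth principle warns about.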
Which brings me back to my emphasis above on “how we choose to use” AI. What we need to do, Shane says, is to take a hybrid approach — one that combines the strengths of AI and humans. — Michelle Russell