At 9 a.m. Jan. 8, Nina Schick and Mo Gawdat will present the Main Stage session, “The Future of Generative AI: Exploring the Benefits, Risks & How It Will Change the Human Experience.” Schick will be on stage in person and Gawdat, a bestselling author and former CBO of Google X, will be broadcast from a remote location.
Schick is an AI authority, entrepreneur, author, and speaker. For the first two decades of her career, she worked in geopolitics amid “seismic” events, she told podcaster Sam Harris in an episode of his “Making Sense” podcast. “The kind of persistent feature in my geopolitical career,” Schick said, “was that technology was emerging as this macro geopolitical force and that it wasn’t just shaping geopolitics on a very lofty and high level, but that it was also shaping the individual experience of almost every single person alive. … I just became increasingly interested in technology as this kind of shaping force for society and because I had been working on information warfare, disinformation in actual wars, and information integrity.”
At the end of 2017, Schick told Harris, she started seeing the emergence of deepfakes “and I immediately sensed that this would be something that is really important because it was the first time we were seeing AI create new data.” Her book, Deepfakes: The Coming Infocalypse, was published in 2020, well before OpenAI launched ChatGPT.
“It’s really only in the last year and a half that the capability of some of these foundational [large language] models and what they can do for data generation in all digital medium — whether it’s text, video, audio, every kind of form of information — and the implications they have on the future of human creative and intelligent work” has come to the fore, Schick said. “What is becoming clear now about generative AI — and we’re only at the very beginning of the journey — is that we’re at a tipping point for human society.”
You can almost see the world as pre-ChatGPT and post-ChatGPT, she said. This is the “calm before the storm. This is probably the last moment before we really start seeing AI being integrated into almost every type of knowledge work.”
Convene asked Schick via email in December to look at gen AI through the lens of the business events industry. Here is what she had to say.
Our audience of business events professionals is making use of generative AI to streamline everyday tasks, such as categorizing and personalizing registration lists and creating session descriptions. How can they — and knowledge workers in general — reconcile the more mundanely useful aspects of gen AI with the profound risks the technology poses for our society overall?
In the context of mundane tasks, like managing registration lists, drafting event copy, preparing marketing collateral and content — generative AI offers efficiency. However, the societal risks, such as misinformation, require careful management. Businesses must stay informed about AI advancements and establish ethical guidelines to balance utility with responsible use. Embracing AI’s potential while being cognizant of its societal impact is key.
In recent news, publishers have been outed for publishing gen-AI-created content under the bylines of made-up staff writers. Tech publisher Futurism reported in December that Sports Illustrated, working with vendor AdVon Commerce, created fake staff-writer profiles and then used gen AI to create and publish buying guides, monetized by affiliate links, under the synthetic authors’ bylines (as also covered in AdWeek). Unfortunately, we can’t always count on tech watchdogs to point out synthetic content.
Given that you predict that 90 percent of online content will be AI-generated by 2025, how can we catch up with this exponential growth? How likely is it that Captcha-like authentication for gen AI — one that indicates a “lineage trail,” as you say — will become an integral part of the digital information ecosystem? Who will oversee and enforce such a gargantuan effort?
The rapid increase in AI-generated content challenges our ability to discern authenticity. Implementing media provenance solutions for content verification is vital but challenging. It requires a collaborative effort between tech developers, regulatory bodies, and users to establish standards and oversight mechanisms for digital authenticity.
Getting back to the business events industry, what should we be concerned about in terms of this technology having a negative impact on our business in the hands of bad actors? For example, we already have a history of fake scientific and academic conferences that lure registrants to pay to attend — and have their research published at — non-existent events. It doesn’t take much imagination to see how gen AI would make these schemes, with made-up speakers and sessions, more sophisticated and more difficult to distinguish from authentic in-person and digital events. What are your thoughts about how we can prepare for and respond to those threats, and what else do you foresee as causes for concern?
The business events industry must be vigilant against sophisticated AI-generated scams, like fake conferences. Staying updated with AI trends, educating stakeholders about potential threats, and investing in verification tools are crucial. Preparation and response strategies should focus on distinguishing authentic events from fraudulent ones.
What role can face-to-face events play — in terms of knowledge-sharing — in a world where it will become increasingly challenging to differentiate between synthetic and authentic content online?
In-person events hold significant value for authentic knowledge-sharing, especially as distinguishing between synthetic and real online content becomes harder. These events offer irreplaceable opportunities for genuine human interaction, networking, and trust-building, which are challenging to replicate digitally.
Are you more excited or concerned about our prospects for the future? What excites you most? What worries you most? As a professional studying this explosive technology and as a parent?
The dual nature of AI’s potential is both exciting and concerning. The prospect of transformative solutions in various fields is thrilling. However, the rapid evolution and potential misuse of AI, especially in shaping public opinion and creating false narratives, are worrying. As a parent and professional, the focus is on guiding AI’s growth responsibly for a safe future. Most of all, we have to remember that we have agency. AI is not an autonomous force: We control how it is developed.
Michelle Russell is editor in chief of Convene.