PCMA’s Catalyst community offers members a platform to ask each other questions, share ideas, or, as the website says, “communicate and collaborate.” Here’s a sampling from a recent Catalyst discussion.
“Do you see AI as a threat to what you do for your organizations or clients?” Annette Suriani, business events strategist for AMS Meetings Solutions, asked her peers on the PCMA Catalyst forum. “Yesterday, [a conversation I took part in] addressed ways to stay relevant if a client thinks that AI could do some of the work we are contracted for. But this goes beyond just independent [meeting planners]. What are your thoughts?”
I don’t see a threat but appreciate technology as an “assist” to streamline some work activities. We are early in broad adoption, so there is a lot of learning and opportunity to shape future use.
— Tonisha Landry, User Experience Owner, Catalyst Inc.
I don’t see AI as a threat, but there needs to be regulation on how it is being used and what it can do to help, [since] AI is continuing to grow. It is just another tool to speed up some of the process. Like any data, if you put in garbage, garbage will come out. When you use AI, there is no emotion or humanity. … Like all technology, it needs to be controlled by humans.
— Sandy Yi-Davis, MBA, CMP, DES, Founder and Head of Event Design for Strategic Meeting International and Director of Operations for Seafood Nutrition Partnership
I use ChatGPT quite a bit when I’m writing software (as a hobbyist; it’s not my day job) and need to troubleshoot. It’s definitely not ready for prime time as far as job stealing goes. I have seen people use it to generate text — I would not publish or distribute text generated by an LLM (large language model, the type of AI that ChatGPT is) without close review by a human.
You might have seen stories about how ChatGPT “hallucinates.” As an example, I asked it where I could find more information on a subject we were working on, and it provided links to specific software projects — but when I clicked on them, nothing was there. The projects didn’t exist. That’s because LLMs are sort of a supercharged autocomplete. They are trained on an immense amount of text, so they can generate things like book titles with authors: The authors will be real, and the titles will sound like something that author wrote, but the books don’t exist.
But if we think of LLMs as a tool to make individuals more powerful and efficient, then there are lots of opportunities. For example, it’s great at generating the form of something, the general shape. So, use an LLM to get the starting text you need — something like a cover letter, a session description, a registration form — and then you can go over and fine-tune it. Fix things here and there and punch up the tone to give it a little more personality.
I’m very excited about AI. As the technology progresses, we’re going to see methods to make it more self-sufficient — breaking tasks up into chunks and giving the software access to other resources like a search engine, a camera, etc. so that, with supervision, you could assign it more complex tasks to complete.
Like all technology, I think the real issue is who will control it, rather than the tech itself. I would like to see AI in the hands of average people, and controlled by average people, rather than a service with one or two monopolistic gatekeepers who decide how much it costs and who can do what.
— Greg Kamprath, CMP, Director of Business Development, Dyventive
Where are we going so fast? That is my question. If we are not solving problems with AI, what are we doing? Being distracted? I am a little tired of reading about it, quite frankly. I’m not afraid to use it and integrate it as I see fit.
— Yolanda Gonzalez, National Sales Director, Discover Puerto Rico
Read previous Catalyst discussions in our Catalyst Questions Archive.