Artificial intelligence (AI) – the phrase used to describe the intelligence of machines rather than humans – is nothing new. The field was launched, in fact, in 1956, and it has been part of our modern consciousness through online recommendations (“if you liked this, you might like this …”), online chess and other human vs. computer games, and voice-operated systems (Siri and Alexa). These uses felt convenient and pretty neat … until 2022, when Midjourney (July) and Stable Diffusion (August) were released for art and imagery, and OpenAI released ChatGPT (November) for written content. Suddenly, news sources and the proverbial water cooler were filled with angst that jobs would be lost, creativity would be stunted, and machines would take over.

The impacts of AI are changing quickly and constantly, and while GRAPHEK has not adopted its use, we’re watching it closely. After all, our industry is directly affected by any tool that claims to create art and imagery.

Our professional network could probably argue the pros and cons of using AI for days: Maybe it allows a user to reach an end result faster, at a lower cost. On the flip side, our clients trust us to come up with unique, legally sound, concept-driven design solutions.

In between those two decision points, however, there is a lot to consider.

Our Art Director and Brand Strategist Christina Davies is staunchly opposed to using AI to create any kind of art, pointing out, “My concern stems from where AI gets its data, and who didn’t consent to having their creative content taken. It’s largely using data from sources without permission, compensation, or even notification of what’s being done with that work, and that hurts people, whether it’s someone who created a font, designed a backdrop, took a photo, or designed a magazine cover.”

With that in mind, what happens if a designer uses AI to create a logo for a client, and it pulls copyrighted images to help create a “new” logo? Will a potential lawsuit outweigh any cost savings resulting from using AI?

“I have concerns for any client who seeks to adopt AI for creative purposes due to the legal complications it could create for them down the line,” Davies added, “and I’m concerned about our industry. It will impact associations, businesses, and the economy; just because technology is available doesn’t mean it’s a good idea, especially when it has no guardrails.”

In chatting with Jeff De Cagna, executive advisor for Foresight First LLC and a self-described contrarian thinker on the future of associations, we discovered a similar viewpoint.

“We’ve become comfortable with offloading work to technology,” he said, “and now that we have technology that can be impressively fluent, there’s a huge risk that we’ll surrender human agency in favor of greater efficiency. I find that troubling, primarily because human contribution and empathy are core to creating. Incremental acceptance of this shift will occur if associations don’t consciously decide what is appropriate.”

To help, De Cagna encourages association leadership to make purposeful, informed decisions before accepting the orthodoxy that any new, inexpensive technology must be tried. Consider, for example:

  • AI technologies are owned by a relatively small number of companies, and our use of them further concentrates power and resources in their hands while taking away individuals’ and organizations’ agency. How can you counteract that?
  • Has your organization’s Board established ethical parameters and boundaries surrounding AI?
  • Who will be responsible for making sure your association has clear guidance and policies for AI?
  • How will your stakeholders – including members and staff – be informed of and prepared to adopt your policies and practices?
  • How will you educate your stakeholders and help them build human skills such as communication, collaboration, creativity, and innovation?
  • Have you considered the legal ramifications of using AI, particularly as it relates to bias, the use of copyrighted content, misinformation in generative AI outputs, and privacy concerns?
  • Have you identified your association’s “high stakes” areas, such as testing and credentialing? Will you allow the integration of AI into such high stakes work?

“When I think of AI, I see a boulder rolling down a hill,” Davies said. “At the same time, there are people hiking up the hill. Others are throwing up their hands saying, ‘There’s nothing we can do!’ But I think it’s on us to find a way. It may be hard, but if it saves the hikers or even an entire village at the bottom, we have to do it. This is a human-made problem, and we have the ability to control it.”

“There seems to be an overwhelming desire to use generative AI,” agreed De Cagna. “But we have a choice, and we don’t have to use it. If someone decides to use it, they need to be prepared to clearly and transparently explain why – and be prepared to think through the question of how to handle using others’ work without their knowledge or authorization. This risk affects everyone who’s created anything, whether it’s words, art, video, or audio. Personally, I believe that it is our humanity, rather than technology, that makes creation truly meaningful.”