Visionary advancements in AI are arriving at warp speed and are poised to transform professional environments as we’ve known them. (Heck, this is already happening.)
But what about the ethics entwined with it all? Late last year, the U.S. Copyright Office backtracked on a decision to grant protections to an AI-generated comic book. ChatGPT is being tapped to write entire expositional pieces. And deepfakes add layer upon layer of concern.
To discuss how to square Generative AI with ethics, Bärbel Wetenkamp held a session today at the new Generative AI Expo (part of ITEXPO, held at the Broward County Convention Center in Fort Lauderdale, FL). Wetenkamp is a professor, a seasoned speaker on AI, and the CEO of The Swiss Quality Consulting. She has trained businesspeople in executive cultures and is thrilled with the evolution of AI, but she’s also aware of the concerns and questions it raises. (Questions as simple as “Is this a good thing?” and “To what extent can it be used negatively?”) Depending on whom you speak with, people tend to be either very thrilled or straight-up spooked; sometimes, a bit of both.
But to Wetenkamp, this is just another societal change we must train ourselves to handle, not a boogeyman prowling in the digital shadows.
“Big changes can instill fear,” Wetenkamp acknowledged, “but fear is no answer.”
Wetenkamp juxtaposed AI like ChatGPT with the early 1900s shift from horses and buggies to automobiles. When the latter began to outnumber the former, what step was taken?
The installation of traffic lights.
By the same token, AI needs its own figurative stop signs, guardrails, and so on.
“Traditional vehicles do not cause accidents on their own,” Wetenkamp said. “Drivers do. So it’s not that AI is self-generating what people label as harmful; it is trained and fed prompts by human beings. And, as history has shown, human beings are prone to errors and misuses. But this boost in Generative AI can have massively positive impacts on how we work, market, learn, teach, and so forth.”
Speaking of work, Wetenkamp recognized that individuals are afraid that their jobs may be in jeopardy. “There may be slivers of truth to this in many years,” she said, “but not for now, and not without hope. The nature of jobs will change. New jobs will pop up. We cannot simply forget that AI exists. We need to be resilient.”
Throughout her presentation, Wetenkamp emphasized the need for transparency around AI and her three main ethical considerations: deepfakes, malicious use cases, and copyright issues. She also highlighted specific safeguards, the value of raising AI awareness, and even an outline for an “AI Bill of Rights” to help regulate the technology.
“Truthfully,” Wetenkamp said, “we must mitigate risks, like in all things in life, while enjoying the real positives. People know not to drink and drive. People know to wear seatbelts. People know to separate recycled materials from waste as much as they can. People know to wash their hands; more so since COVID. We learn and adapt. The same can be said for our approaches to AI.”
It was Antoine de Saint-Exupéry who wrote: “‘Men have forgotten this truth,’ said the fox. ‘But you must not forget it. You become responsible, forever, for what you have tamed.’”
This is the quote Wetenkamp closed with. “We are responsible,” she reiterated.
“And in the face of world-changers, like cars that pass us by, we cannot remain on our horses.”
Edited by Alex Passett