Future of Work News

Expert Strategies for Controlling GenAI and 'Taming the Knowledge Sprawl'

I cannot say this enough, readers: Generative AI Expo (part of the #TECHSUPERSHOW experience that took place last week in Fort Lauderdale, Florida) was a jam-packed event, even more so than last year. The energy in the Broward County Convention Center was palpable; long story short, it was a reminder of how great it can feel to be in tech right now.

Speaking of last year's event: back then, I covered a session led by Eric Riz, CEO of VERIFIED and a bona fide pro when it comes to artificial intelligence.

So, here’s a detailed rundown of Riz’s session this year, which was titled “Taming the Knowledge Sprawl: Strategies for Controlling Generative AI’s Expanding Knowledge”:

“First, the elephant in the room,” Riz began. “Let’s face it: last year’s session, in hindsight, was more of an intro to ChatGPT, as it was ‘newer’ and ‘shinier’ at the time. But nowadays, even if it can still feel overwhelming to properly prompt GenAI for the right response, folks are growing more accustomed to it.”

Riz proceeded to break down what we have in front of us right now; in his words, “what's behind the knowledge sprawl.”

AI knowledge sprawl, in layman’s terms, stems from the idea of “I know what I want from the AI. It gives it to me. I then call it my own and use it or send it elsewhere.” This sprawl evolves into, for example, an employee receiving an email, saving some of its contents (which themselves could've been generated by AI), then sending this now-repurposed content to their boss. When the boss receives it, they modify and share it in turn, and so on.

That, in essence, is "the sprawl."
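That chain of handoffs is essentially a provenance problem: each version of the content points back to the version it came from. As a thought experiment (this is my own minimal sketch, with hypothetical names, not anything Riz presented), the sprawl could be tracked like so:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ContentRecord:
    """One hop in the life of a piece of content: who handled it, and how."""
    text: str
    origin: str                       # e.g. "human", "genai", "mixed"
    handler: str                      # who produced or modified this version
    parent: Optional["ContentRecord"] = None

    def derive(self, new_text: str, handler: str, origin: str = "mixed") -> "ContentRecord":
        """Create a modified copy that still points back to its source."""
        return ContentRecord(text=new_text, origin=origin, handler=handler, parent=self)

    def lineage(self) -> List[str]:
        """Walk back to the original source, listing every handler in order."""
        chain: List[str] = []
        node: Optional["ContentRecord"] = self
        while node is not None:
            chain.append(f"{node.handler} ({node.origin})")
            node = node.parent
        return list(reversed(chain))

# The email example from the session: AI-generated draft -> employee -> boss.
draft = ContentRecord(text="Q3 summary ...", origin="genai", handler="assistant")
employee_copy = draft.derive("Edited Q3 summary ...", handler="employee")
boss_copy = employee_copy.derive("Final Q3 summary ...", handler="boss")
print(boss_copy.lineage())  # ['assistant (genai)', 'employee (mixed)', 'boss (mixed)']
```

The point of a structure like this isn't the code itself; it's that once content is three handoffs removed from a model, nobody downstream can tell without an explicit lineage like the one above.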

Now, how do we tame this sprawl? How do we control AI content in 2024 better than we did in 2023?

Right out of the gate, we think of robust governance strategies, right?

Well, as you’d expect, that’s about a million times easier to say than it is to make actionable. The World Economic Forum, for instance, launched the AI Governance Alliance. The EU at large is cracking down, in its own ways. Even OpenAI’s Sam Altman recently called for a “proposed global regulatory body.” (That one’s a bit ironic, as Riz noted, but he reserved that subtopic as a conversation for another day – next year’s expo, perhaps.)

Back to taming the seemingly never-ending sprawl: Riz said that “Information has to be genuine, be it from a human or from AI. We need more reliable management strategies. We need to be better.”

We can achieve that, according to him, by exploring the following:

  • Interdisciplinary Collaboration – i.e., fostering collaborative networks across disciplines to integrate diverse perspectives on the evolution of AI, thereby reducing redundancy in research efforts and expediting the creation of guardrails. Creating genuinely collaborative interdisciplinary frameworks “keeps AI ethical,” Riz said. “We need to continue establishing community-led initiatives to curate and maintain approaches to AI, and we’ve got to make sure that they remain relevant and accessible.”
  • Practice Standardization – As Riz described, “actually having conversations with business partners and vendors and so on to introduce real awareness of issues related to AI-produced content and across-the-board practices that should be put in place.” Developing and adopting industry-wide standards for documentation, reporting, and the sharing of AI research is vital.
  • Education and Training – This loops into the points above, but it’s still worth reiterating. “Equipping the AI workforce with the skills necessary to navigate and contribute to the vast knowledge landscape effectively facilitates the creation of more smartly managed structures," Riz explained.
  • Data and Code Repositories – “We should promote the use of centralized repositories for datasets and code,” Riz added, “with strong metadata standards to ensure advanced usability and discovery. A lot of people still haven’t realized that this is just as meaningful and important as the data itself that we’re generating.”
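To make that last point concrete: a "strong metadata standard" in practice means every dataset or code artifact in the central repository carries a validated record about itself. Here's a minimal sketch (the field names are hypothetical, loosely modeled on common dataset-card conventions, not a standard Riz cited):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass(frozen=True)
class DatasetMetadata:
    """A minimal metadata record for an entry in a central data/code repository."""
    name: str
    version: str
    creator: str
    license: str
    description: str
    ai_generated: bool                  # was any of this content model-produced?
    source_model: Optional[str] = None  # which model, if ai_generated is True

    def validate(self) -> List[str]:
        """Return a list of problems; an empty list means the record is usable."""
        problems = []
        if not self.description.strip():
            problems.append("description is empty")
        if self.ai_generated and not self.source_model:
            problems.append("ai_generated content must name its source model")
        return problems

record = DatasetMetadata(
    name="support-emails",
    version="1.2.0",
    creator="data-team",
    license="internal",
    description="Customer support emails, partially AI-summarized.",
    ai_generated=True,
)
print(record.validate())  # ['ai_generated content must name its source model']
```

Notice that the validation rule ties directly back to the sprawl: AI-generated content that doesn't name its source can't be traced later, which is exactly the failure mode the session warned about.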

Overall, the plan Riz presented got as technical as it needed to (while also focusing on inclusivity, equity and societal impact when it comes to the creation of new AI models and the sharing of the sprawling knowledge roots therein).

“This is transformative,” Riz concluded. “A model’s knowledge expanding beyond its training data can result in dangerous inaccuracies. Controlling this sprawl is a must for researchers and developers in this space, and we should encourage them to take action.”




Edited by Alex Passett

