Future of Work News

Expert Strategies for Controlling GenAI and 'Taming the Knowledge Sprawl'

I can't say this enough, readers: Generative AI Expo (part of the #TECHSUPERSHOW experience that took place last week in Fort Lauderdale, Florida) was a jam-packed event; even more so than last year, to boot. The energy in the Broward County Convention Center was palpable — long story short, this was a reminder of how great it can feel to be in tech right now.

Speaking of last year’s event, by the way, I covered a session led by business owner Eric Riz, CEO of VERIFIED and a bona fide pro when it comes to approaching artificial intelligence.

So, here’s a detailed rundown of Riz’s session this year, which was titled “Taming the Knowledge Sprawl: Strategies for Controlling Generative AI’s Expanding Knowledge”:

“First, the elephant in the room,” Riz began. “Let’s face it; last year’s session, in hindsight, was more of an intro to ChatGPT, as it was ‘newer’ and 'shinier' at the time. But nowadays, even if it can still feel overwhelming to properly prompt GenAI for the right response, folks are growing more accustomed to it.”

Riz proceeded to break down what we have in front of us right now; in his words, “what's behind the knowledge sprawl.”

AI knowledge sprawl, in layman’s terms, stems from the idea of “I know what I want from the AI. It gives it to me. I then call it my own and use it or send it elsewhere.” This sprawl evolves into, for example, an employee receiving an email, saving some of its contents (which themselves could've been generated by AI), then sending this now-repurposed content to their boss. When the boss receives it, they modify and share it again, and so on.

That, in essence, is "the sprawl."
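Riz's email example is, at bottom, a provenance problem: once AI-generated text is copied, edited, and forwarded, nobody downstream knows where it originated. As a minimal sketch of one way to keep that trail attached to the content — my own illustration, not something Riz presented; the `ContentRecord` class and its field names are hypothetical:

```python
import hashlib
from dataclasses import dataclass, field


@dataclass
class ContentRecord:
    """A piece of content plus the trail of hands it has passed through."""
    text: str
    origin: str                      # e.g. "human" or "genai:model-name"
    trail: list = field(default_factory=list)

    def fingerprint(self) -> str:
        """Hash the current text so later edits are detectable."""
        return hashlib.sha256(self.text.encode()).hexdigest()[:12]

    def reshare(self, editor: str, new_text: str) -> "ContentRecord":
        """Record who modified the content before passing it along."""
        entry = (editor, self.fingerprint())
        return ContentRecord(new_text, self.origin, self.trail + [entry])


# The email scenario from the session: AI drafts text, an employee
# repurposes it, the boss edits it and forwards it again.
draft = ContentRecord("Q3 numbers look strong.", origin="genai:assistant")
employee_copy = draft.reshare("employee", "Q3 numbers look strong overall.")
boss_copy = employee_copy.reshare("boss", "Q3 looks strong; share widely.")

print(boss_copy.origin)                          # the AI origin survives every hop
print([who for who, _ in boss_copy.trail])
```

The point of the sketch is only that the origin label and edit history ride along with the content instead of being lost at the first copy-paste.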

Now, how do we tame this sprawl? How do we control AI content in 2024 better than we did in 2023?

Right out of the gate, we think of robust governance strategies, right?

Well, as you’d expect, that’s about a million times easier to say than it is to make actionable. The World Economic Forum, for instance, launched the AI Governance Alliance. The EU at large is cracking down, in its own ways. Even OpenAI’s Sam Altman recently proposed a “global regulatory body.” (That one’s a bit ironic, as Riz noted, but he reserved that subtopic as a conversation for another day – next year’s expo, perhaps.)

Back to taming the seemingly never-ending sprawl: Riz said that “Information has to be genuine, be it from a human or from AI. We need more reliable management strategies. We need to be better.”

We can achieve that, according to him, via exploring the following:

  • Interdisciplinary Collaboration – i.e., fostering collaborative networks across disciplines to integrate diverse perspectives on the evolution of AI, and reducing redundancy in research efforts to expedite the creation of guardrails. Creating genuinely collaborative interdisciplinary frameworks “keeps AI ethical,” Riz said. “We need to continue establishing community-led initiatives to curate and maintain approaches to AI, and we’ve got to make sure that they remain relevant and accessible.”
  • Practice Standardization – As Riz described, “actually having conversations with business partners and vendors and so on to introduce real awareness of issues related to AI-produced content and across-the-board practices that should be put in place.” Developing and adopting industry-wide standards for documentation, reporting, and the sharing of AI research is vital.
  • Education and Training – This loops into the points above, but it’s still worth reiterating. “Equipping the AI workforce with the skills necessary to navigate and contribute to the vast knowledge landscape effectively facilitates the creation of more smartly managed structures," Riz explained.
  • Data and Code Repositories – “We should promote the use of centralized repositories for datasets and code,” Riz added, “with strong metadata standards to ensure advanced usability and discovery. A lot of people still haven’t realized that this is just as meaningful and important as the data itself that we’re generating.”
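On that last point, “strong metadata standards” can be as simple as requiring a small, machine-readable card alongside every dataset checked into a shared repository. Here's a hedged sketch of what such a card might contain — the `DatasetMetadata` fields and the `register` helper are my own illustration, not a standard Riz cited:

```python
import hashlib
import json
from dataclasses import dataclass, asdict


@dataclass
class DatasetMetadata:
    """Minimal metadata card accompanying a dataset in a shared repository."""
    name: str
    creator: str
    license: str
    description: str
    checksum: str    # lets downstream users verify they got the same bytes


def register(name: str, creator: str, license: str,
             description: str, data: bytes) -> str:
    """Build a metadata card for the dataset and return it as JSON."""
    meta = DatasetMetadata(
        name=name,
        creator=creator,
        license=license,
        description=description,
        checksum=hashlib.sha256(data).hexdigest(),
    )
    return json.dumps(asdict(meta), indent=2)


card = register(
    "support-tickets-2024",                       # hypothetical dataset
    "data-team@example.com",
    "CC-BY-4.0",
    "Anonymized customer support tickets for model evaluation.",
    b"ticket_id,text\n1,hello\n",
)
print(card)
```

Even a card this small makes the dataset discoverable and verifiable, which is the “just as meaningful and important as the data itself” idea in Riz's quote.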

Overall, the plan Riz presented got as technical as it needed to (while also focusing on inclusivity, equity and societal impact when it comes to the creation of new AI models and the sharing of the sprawling knowledge roots therein).

“This is transformative,” Riz concluded. “A model’s knowledge expanding beyond its training data can result in dangerous inaccuracies. Controlling this sprawl is a must for researchers and developers in this space, and we should encourage them to take action.”




Edited by Alex Passett