
Expert Strategies for Controlling GenAI and 'Taming the Knowledge Sprawl'

I can't stress this enough, readers: Generative AI Expo (part of the #TECHSUPERSHOW experience that took place last week in Fort Lauderdale, Florida) was a jam-packed event, even more so than last year. The energy in the Broward County Convention Center was palpable; long story short, this was a reminder of how great it can feel to be in tech right now.

Speaking of last year’s event, by the way, I covered a session led by business owner Eric Riz, CEO of VERIFIED and a bona fide pro when it comes to approaching artificial intelligence.

So, here’s a detailed rundown of Riz’s session this year, which was titled “Taming the Knowledge Sprawl: Strategies for Controlling Generative AI’s Expanding Knowledge”:

“First, the elephant in the room,” Riz began. “Let’s face it; last year’s session, in hindsight, was more of an intro to ChatGPT, as it was ‘newer’ and ‘shinier’ at the time. But nowadays, even if it can still feel overwhelming to properly prompt GenAI for the right response, folks are growing more accustomed to it.”

Riz proceeded to break down what we have in front of us right now; in his words, “what's behind the knowledge sprawl.”

AI knowledge sprawl, in layman’s terms, stems from the idea of “I know what I want from the AI. It gives it to me. I then call it my own and use it or send it elsewhere.” This sprawl evolves into, for example, an employee receiving an email, saving some of its contents (which themselves could've been generated by AI), then sending this now-repurposed content to their boss. When the boss receives it, they modify and share it, and so on.
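One way to picture the sprawl Riz describes is as a chain of handoffs in which the AI origin of a piece of content gets lost. The sketch below is purely illustrative (the `Content` class and actor names are hypothetical, not anything Riz presented); it shows how keeping an append-only provenance trail preserves the AI origin across each reuse:

```python
from dataclasses import dataclass, field

@dataclass
class Content:
    """A piece of content plus the chain of hands it has passed through."""
    text: str
    provenance: list = field(default_factory=list)

    def handoff(self, actor: str, note: str) -> "Content":
        # Each reuse appends a record instead of overwriting history,
        # so the AI origin stays visible no matter how often the
        # content is repurposed downstream.
        return Content(self.text, self.provenance + [f"{actor}: {note}"])

# An AI drafts a paragraph; an employee reuses it; a boss edits and shares it.
draft = Content("Q3 summary...", ["ai-model: generated"])
emailed = draft.handoff("employee", "pasted into email")
shared = emailed.handoff("boss", "edited and forwarded")
print(shared.provenance)
```

Three hops later, the chain still shows the content began life as AI output; the sprawl problem is precisely that, in practice, this trail is almost never kept.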

That, in essence, is "the sprawl."

Now, how do we tame this sprawl? How do we control AI content in 2024 better than we did in 2023?

Right out of the gate, we think of robust governance strategies, right?

Well, as you’d expect, that’s about a million times easier to say than it is to make actionable. The World Economic Forum, for instance, launched the AI Governance Alliance. The EU at large is cracking down, in its own ways. Even OpenAI’s Sam Altman recently called for a global regulatory body. (That one’s a bit ironic, as Riz noted, but he reserved that subtopic as a conversation for another day – next year’s expo, perhaps.)

Back to taming the seemingly never-ending sprawl: Riz said that “Information has to be genuine, be it from a human or from AI. We need more reliable management strategies. We need to be better.”

We can achieve that, according to him, via exploring the following:

  • Interdisciplinary Collaboration – i.e., fostering collaborative networks across disciplines to integrate diverse perspectives on AI's evolution, and reducing redundancies in research efforts to expedite the creation of guardrails. Creating genuinely collaborative interdisciplinary frameworks “keeps AI ethical,” Riz said. “We need to continue establishing community-led initiatives to curate and maintain approaches to AI, and we’ve got to make sure that they remain relevant and accessible.”
  • Practice Standardization – As Riz described, “actually having conversations with business partners and vendors and so on to introduce real awareness of issues related to AI-produced content and across-the-board practices that should be put in place.” Developing and adopting industry-wide standards for documentation, reporting, and the sharing of AI research is vital.
  • Education and Training – This loops into the points above, but it’s still worth reiterating. “Equipping the AI workforce with the skills necessary to navigate and contribute to the vast knowledge landscape effectively facilitates the creation of more smartly managed structures," Riz explained.
  • Data and Code Repositories – “We should promote the use of centralized repositories for datasets and code,” Riz added, “with strong metadata standards to ensure advanced usability and discovery. A lot of people still haven’t realized that this is just as meaningful and important as the data itself that we’re generating.”

Overall, the plan Riz presented got as technical as it needed to (while also focusing on inclusivity, equity and societal impact when it comes to the creation of new AI models and the sharing of the sprawling knowledge roots therein).

“This is transformative,” Riz concluded. “A model’s knowledge expanding beyond its training data can result in dangerous inaccuracies. Controlling this sprawl is a must for researchers and developers in this space, and we should encourage them to take action.”




Edited by Alex Passett

Future of Work Contributor
