Future of Work News

The AI Regulatory Gap: Business Leaders Must Take the Helm


AI is everywhere. You can see it on websites you visit. You can see it in smart cars. You can even see it within entertainment and social media apps. One reason AI is so pervasive is the absence of comprehensive AI-specific regulation, which has empowered AI providers to introduce a wide array of products and services, each varying in terms of reliability and ethical considerations.

Governing bodies worldwide are grappling with the complexities of crafting effective AI regulations that strike the right balance between encouraging innovation and safeguarding against potential harms. This leaves AI providers with a considerable degree of autonomy in shaping the AI landscape, which could lead to the proliferation of technologies with varying ethical standards and reliability levels.

Because governing bodies are moving at a snail's pace, organizations need to establish their own robust principles and guidelines for responsible and ethical AI implementation. According to a Conversica survey, 86% of organizations already adopting AI agree on the critical importance of clearly established guidelines for the responsible use of AI, compared with 73% of all respondents.

These organizations know that relying solely on external regulations may not adequately address the unique risks and ethical considerations associated with their specific AI applications. By formulating their own ethical frameworks and responsible AI practices, organizations can not only mitigate potential legal and reputational risks but also foster trust among customers and stakeholders.

However, despite being more likely to recognize the importance of these policies, one in five business leaders at companies that use AI report having limited or no knowledge of their companies' policies concerning critical AI issues, including security, transparency, accuracy and ethics, according to the survey.

“From an enterprise perspective, these figures are concerning, especially considering the vast array of AI products and services expected to become available in the coming years and the potentially significant impact they will have on the future of business,” said Jim Kaskade, CEO of Conversica. “This could represent a problematic trend for companies that haven’t started planning to enforce responsible and ethical use of AI.”

Data security is a multifaceted challenge that encompasses protecting sensitive information from breaches, ensuring data privacy compliance, and maintaining the quality and accuracy of data used in AI models. Companies tend to struggle to allocate the necessary resources for comprehensive data security measures, including robust encryption, access controls, and cybersecurity expertise.

Ethical alignment, meanwhile, ensures that AI solutions are developed and deployed in a manner consistent with a company's values, mission and social responsibilities. Organizations need to partner with AI providers that share their commitment to ethical practices. This alignment extends to issues such as fairness, bias mitigation, accountability and responsible AI governance.

Finding providers that meet these criteria is challenging, as it requires comprehensive due diligence, scrutiny of providers' practices and a willingness to forgo partnerships that don't align with a company's ethical standards.

Similarly, transparency in AI is crucial not only for ethical reasons but also for building trust among stakeholders. Achieving transparency involves elucidating how AI models make decisions, disclosing biases and providing explanations for AI-driven outcomes, which are complex and resource-intensive endeavors.

“The main elements business leaders should be looking for are the safe, brand-protective, and compliant use of AI that protects their end users,” said Kaskade.

Transparent AI practices, vigilant monitoring systems and human involvement, as Kaskade states, help reduce the risks associated with AI adoption and protect the interests of brands, employees and end users.

Edited by Alex Passett

Future of Work Contributor

