
The present day is rife with advancements in artificial intelligence (AI), a tool with what often feels like galactic-level promise. That tool, however, has also been the subject of considerable apprehension since ChatGPT's debut late last year. More and more, we rely on generative AI (specifically OpenAI's ChatGPT) to automate mundanity, assist with critical decision-making and reshape what can be done across industries, and how.
And yet, that apprehension is hard for many to shake. ChatGPT is still revered as the greatest invention since the iPhone (perhaps even since sliced bread), but a persistent undercurrent of mistrust has seeped in at a societal, if not global, level.
According to the team at Malwarebytes, that initial ChatGPT luster has indeed begun to fade, and deeper reservations about its capabilities have come to the fore. Malwarebytes recently conducted a consumer pulse survey about ChatGPT and, based on its respondents' answers, optimism may be, in Malwarebytes' own words, "in startlingly short supply."
While ChatGPT is still regarded by many as a useful tool both personally and professionally, Malwarebytes' survey responses indicated that:
- 81% remain concerned about security risks
- 51% question whether AI tools can improve internet safety
- 52% want ChatGPT development paused so regulations can catch up
These levels of distrust in both ChatGPT's accuracy and its long-term impacts on cyber safety aren't the kind to be swept under the digital rug. While the results are not an outright condemnation of AI, it's clear that more assurances are needed when only 10% of respondents wholeheartedly agree with the statement "I trust the information produced by ChatGPT."
As Malwarebytes described, it's great that ChatGPT can create entirely new computer programs, replace search engines, pen new punk rock songs, you name it. It has since been integrated into what feels like every conceivable tech product. But when tech-savvy respondents now cite its "hallucinations" and other shortcomings not as navigable blunders but as deep red flags, greater visibility into how ChatGPT works (and potentially stricter shared safety protocols) has become crucial.
That said, Malwarebytes recognizes that such a "continuum of serious viewpoints that range extensively" doesn't lend itself to easy decisions. There are areas of gray; more work remains to be done.
As Malwarebytes’ Cybersecurity Evangelist Mark Stockley puts it:
"An AI revolution has been gathering pace for a very, very long time, and many specific, narrow applications have been enormously successful without stirring this breed of mistrust. At Malwarebytes, ML and AI have been used for years to improve efficiencies of our own. However, current public sentiment on ChatGPT is a different beast and the uncertainty around how it will change our lives is compounded in mystery that begs for more sizeable, trust-restoring explanations.”
The remainder of Malwarebytes’ latest reflections on ChatGPT can be read here.
Edited by Greg Tavarez