Artificial intelligence (AI) has made major strides over the past decade and has become increasingly integrated into everyday technology. As AI becomes a pervasive part of human activities, debate has grown over the safe and ethical use of deep learning, as well as over how to build the next generation of AI by incorporating uniquely human characteristics.
The future of AI was the topic of a recent end-of-year online debate hosted by Montreal.AI, a research company dedicated to developing and commercializing AI on a widespread basis. The discussion brought together scientists from a variety of backgrounds.
Deep learning was one of the key topics discussed. Gary Marcus, a cognitive scientist, raised concerns about its excessive data requirements, its limited ability to transfer knowledge to other domains, its opacity, and its overall lack of knowledge representation and reasoning. Marcus advocates a hybrid approach to AI that combines learning algorithms with rules-based software.
“One of the key questions is to identify the building blocks of AI and how to make AI more trustworthy, explainable, and interpretable,” said Luis Lamb, a computer scientist. “We use logic and knowledge representation to represent the reasoning process that [it] is integrated with machine learning systems so that we can also effectively reform neural learning using deep learning machinery.”
Reinforcement learning is another key topic within AI. It refers to a paradigm in which agents are given the basic rules of an environment and then left to discover behaviors that maximize their reward. The practice ties in with computational theory, which defines what goal an information processing system pursues as well as why it pursues that goal.
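The idea above can be sketched in a few lines of code. The following is a minimal, hypothetical illustration (not from the debate itself): a tabular Q-learning agent in a toy four-state corridor, where the only "rule" the agent is given is how actions move it, and it must discover on its own that heading toward the goal state maximizes reward. The environment, state count, and learning parameters are all assumptions made for the example.

```python
import random

# Hypothetical toy environment: a 4-state corridor. The agent starts at
# state 0 and earns a reward of 1 only upon reaching state 3 (the goal).
N_STATES = 4
ACTIONS = [-1, +1]  # move left or move right

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    nxt = max(0, min(N_STATES - 1, state + action))
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

def train(episodes=200, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action index]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Explore with probability epsilon, otherwise exploit.
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = 0 if q[state][0] > q[state][1] else 1
            nxt, reward, done = step(state, ACTIONS[a])
            # Q-learning update: nudge Q toward reward plus the
            # discounted value of the best next action.
            q[state][a] += alpha * (reward + gamma * max(q[nxt]) - q[state][a])
            state = nxt
    return q

q = train()
policy = ["left" if q[s][0] > q[s][1] else "right" for s in range(N_STATES - 1)]
print(policy)  # the learned greedy policy heads toward the rewarding state
```

Nothing here tells the agent that "right" is good; that preference emerges purely from trial, error, and the reward signal, which is the core of the paradigm.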
“In neuroscience, we are missing a high-level understanding of the goal and the purposes of the overall mind," said Richard Sutton, a computer scientist. "It is also true in artificial intelligence — perhaps more surprisingly in AI. Reinforcement learning is the first computational theory of intelligence.”
Many debate participants agreed that integrating world knowledge and common sense into AI was a sensible approach. According to Judea Pearl, a computer scientist and winner of the Turing Award, AI systems need both elements to make the most efficient use of the data fed to them.
“I believe we should build systems which have a combination of knowledge of the world together with data,” said Pearl. He said that knowledge doesn't just emerge from data; it arises from the innate structures of our brains interacting with the world. “That kind of structure must be implemented externally to the data. Even if we succeed by some miracle to learn that structure from data, we still need to have it in the form that is communicable with human beings.”
To find out more about how AI is changing and evolving to better meet the needs of humans and the industries it serves, TMC is hosting its Future of Work Expo from June 22-25 at the Miami Beach Convention Center. The show will examine how AI and deep learning are changing the future of work and reshaping the entire technology landscape.