Everywhere you look these days, voice is becoming the dominant interface. Whether in the car, at home, in the office, or anywhere else, voice is connecting us with the technologies we use every day.
Juniper estimates that more than 3.25 billion voice-enabled devices are in circulation today, driving voice commerce to more than $80 billion by 2023. Companies across industries – both incumbents and startups – are recognizing the opportunity and are developing voice-enabled applications to create better, more efficient processes for businesses and consumers alike. According to PwC and CB Insights, venture capital funding of AI companies reached a record $9.3 billion, underscoring the momentum the speech-enablement and AI markets are gaining.
Of course, much of the hype has been driven, first, by Apple’s Siri, followed by Amazon’s Alexa, Microsoft’s Cortana, Google Assistant, and now Samsung’s Bixby. These are only the tip of the iceberg, but they have helped give validity to the AI market and created the demand for massive AI innovation.
Much of the growth can be attributed to advances in accuracy: if voice recognition engines didn't work reliably, they wouldn't be useful. Today, most major platforms exceed 90% recognition accuracy, making them far more enticing to users. In fact, 32% of American adults say they use voice search for the fun of it. Among teenagers – who will soon be entering the workforce – that figure jumps to 51%, and 55% of them already use voice search daily.
What does this mean? comScore predicts that voice will account for half of all searches by next year, and Gartner goes even further, forecasting that 30% of all searches will be conducted without a screen at all.
For businesses looking to engage their customers in the most efficient and desirable way, that means they had better get their AI development hats on quickly.
To help developers get there, Adam Cheyer, co-founder of both Siri and Bixby, will be in Los Angeles tomorrow, June 15, to talk not only about Samsung's Bixby Developer program, but also about how AI can be combined with existing APIs and services to build rich conversational experiences for users.
It's all about the experience. At the Bixby Developer Session, Cheyer will give attendees a firsthand, immersive, hands-on look at how new voice-interaction capabilities can create new levels of interactive engagement for the more than 500 million users Bixby reaches.
Details of tomorrow’s event:
Date: Saturday, June 15, 2019
Location: Cross Campus, 929 Colorado Avenue, Santa Monica, California 90401
If you're on the East Coast, a Bixby Developer Session will be held in New York next weekend, on Saturday, June 22.
The Future of Work Expo will take place February 12-14, 2020, in Ft. Lauderdale, Florida, featuring three days of discussion about how AI, chatbots, and automation are enabling businesses to reinvent themselves and become more agile and customer-centric.
Future of Work Contributor