How to design a scalable natural language intent recognition framework for human–robot interaction
Feeling heard matters
April 15, 2020
Carolina Belmonte, Practice Architect, Data Analytics and Insights
Virtual assistants have become omnipresent, extending into our personal and professional lives, and the number of voice assistants in use is forecast to jump threefold to 8 billion by 2023. But while technology has made major inroads in enabling assistants to handle simple questions and commands with one-to-one responses (such as "what time is it?" or "set a timer for 2 minutes"), most of us can recall a frustrating experience when a chatbot or smart speaker didn't reply with a relevant answer.
So, how can we enable virtual assistants to understand us better? Can we design more robust interactions that cater to the nuances of human communication? In this video, Carolina Belmonte, a practice architect for TEKsystems Global Services Data Analytics and Insights, explores natural language intent recognition and how developers can build a framework to improve human–robot interaction.
Watch to learn:
The six technical approaches to building conversational AI
Rule-based AI
Retrieval-based AI
Generative AI
Ensemble AI
Grounded learning AI
Interactive AI
Nuances in human–human conversation vs. human–machine conversation
What an intent recognition framework is, and why it matters (see the sketch after this list)
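The video itself walks through each of these approaches; as a rough illustration of the simplest one, here is a minimal rule-based intent recognizer in Python. The intent names, patterns and the recognize_intent function are hypothetical examples made up for this page, not code from the presentation.

import re

# Illustrative only: a toy rule-based intent recognizer, the simplest of the
# six approaches listed above. Each intent maps to regex patterns that signal it.
INTENT_PATTERNS = {
    "get_time": [r"\bwhat time is it\b", r"\bcurrent time\b"],
    "set_timer": [r"\bset a timer for (\d+) (seconds?|minutes?|hours?)\b"],
}

def recognize_intent(utterance: str):
    """Return (intent, match) for the first pattern that fires, else (None, None)."""
    text = utterance.lower()
    for intent, patterns in INTENT_PATTERNS.items():
        for pattern in patterns:
            match = re.search(pattern, text)
            if match:
                return intent, match
    return None, None

if __name__ == "__main__":
    intent, match = recognize_intent("Set a timer for 2 minutes, please")
    print(intent)          # set_timer
    print(match.groups())  # ('2', 'minutes')

A sketch like this also hints at why the other approaches exist: a paraphrase such as "could you time two minutes for me?" matches no pattern, which is the brittleness that retrieval-based, generative and ensemble techniques aim to overcome.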
Carolina Belmonte is a practice architect of data analytics and insights for TEKsystems Global Services. She leads AI implementations of virtual agents, chatbots and assistants for our clients, and is heavily involved in designing TEKsystems' proprietary chatbot, sAIge. Carolina has a bachelor's degree in brain and cognitive sciences from the Massachusetts Institute of Technology (MIT). This recording is from her presentation at DeveloperWeek 2020 in San Francisco.