Amazon on Wednesday announced Alexa Conversations, a deep learning-based approach for developers to create natural voice experiences on Alexa. The toolset, currently in preview, lets developers build natural skill dialogs with fewer lines of code and less training data, Amazon said.
To use Conversations, developers provide API access to their skills’ functionality, a sample of dialogs annotated with the prompts that Alexa will say to the customer, and the actions they expect the customer to take. Alexa Conversations’ AI finishes the job, using the input data to generate dialog flows and variations.
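The article describes three developer-supplied inputs: API access, annotated sample dialogs, and expected customer actions. A minimal sketch of what one annotated sample might look like is below; the structure and every field name in it are hypothetical illustrations, not Amazon’s actual Alexa Conversations schema.

```python
# Illustrative sketch only: this structure and all field names
# ("BookFlight", "alexa_prompt", etc.) are hypothetical, not
# Amazon's actual Alexa Conversations schema. It mirrors the
# three inputs the article describes.

sample_dialog = {
    # API the skill exposes; the service would call it to
    # fulfill the request (hypothetical name and arguments)
    "api": {
        "name": "BookFlight",
        "arguments": ["origin", "destination", "date"],
    },
    # Annotated turns: the prompt Alexa speaks and the action
    # the developer expects the customer to take next
    "turns": [
        {"alexa_prompt": "Where would you like to fly?",
         "expected_customer_action": "provide_destination"},
        {"alexa_prompt": "What day are you leaving?",
         "expected_customer_action": "provide_date"},
    ],
}

print(len(sample_dialog["turns"]))  # number of annotated turns
```

From a handful of samples like this, the deep-learning model would generate the many dialog-flow variations that developers previously had to hand-code.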
“It’s way easier to build a complex voice experience with Alexa Conversations due to its underlying deep-learning-based dialog modeling,” said Rohit Prasad, Alexa vice president and head scientist, at Amazon’s re:MARS conference in Las Vegas Wednesday.
Prasad also unveiled a new ML-based concept that will let developers create Alexa skills that complete multiple tasks in a single conversation. Currently, Alexa can only handle one-off requests, forcing users of the digital assistant to frame more complex questions separately. This yet-to-be-released multi-skill experience relies on a set of AI modules working together to generate responses to customers’ questions and requests within a single conversation.
Prasad said this approach differs from other dialog systems in that it models the entire system end-to-end: the system takes spoken text as its input and delivers actions as its output. The action prediction is based on a machine-learned conversational…