Amazon AWS Machine Learning Summit keynote kicks off with ‘few-shot learning’


Amazon’s AWS cloud computing service kicked off its Machine Learning Summit on Wednesday morning in a virtual broadcast.

The morning’s keynote was led by Swami Sivasubramanian, AWS’s vice president of AI and machine learning; Yoelle Maarek, vice president of research for Amazon Alexa; and Bratin Saha, vice president of machine learning at AWS; with a special guest appearance by Ashok Srivastava, chief data officer at Intuit.

Sivasubramanian led off by calling machine learning “one of the most transformative” technologies in a generation. He cited a statistic that more than 100 machine learning papers are published each day. “Machine learning is going mainstream,” he said, noting that more than 100,000 customers use AWS for machine learning, including pharma giant Roche and The New York Times.

Sivasubramanian offered examples of “working backward from the customer” in the development of ML. The first was how a system can “learn with less data.” Accessing and annotating data is “too tedious” as ML becomes mainstream, he said. He cited the example of the NFL wanting to manage its library of video assets from football games. Another example was an 80-year-old pizza maker that wanted to ensure every pizza has the same amount of cheese to maintain quality; the company used AWS to build an imaging system for pizza inspection. The solution was what’s called “few-shot learning,” where a machine learning model is supplied with only a limited number of examples.
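To illustrate the general idea (this is a generic sketch, not AWS’s implementation), one common few-shot approach compares a new image’s embedding against the average embedding of a handful of labeled examples per class:

```python
# Minimal sketch of few-shot classification by nearest prototype.
# The embeddings here are random stand-ins for features from a pretrained
# vision model; this illustrates the concept, not AWS's implementation.
import numpy as np

def nearest_prototype(support_embeddings, support_labels, query_embedding):
    """Classify a query by its distance to the mean embedding of each class."""
    prototypes = {}
    for label in set(support_labels):
        members = [e for e, l in zip(support_embeddings, support_labels) if l == label]
        prototypes[label] = np.mean(members, axis=0)
    # Pick the class whose prototype is closest to the query.
    return min(prototypes, key=lambda label: np.linalg.norm(query_embedding - prototypes[label]))

# A handful of labeled examples per class ("ok" vs. "defect") is the whole training set.
rng = np.random.default_rng(0)
support = [rng.normal(0, 1, 16) for _ in range(10)]
labels = ["ok"] * 5 + ["defect"] * 5
print(nearest_prototype(support, labels, rng.normal(0, 1, 16)))
```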

Few-shot learning is used for custom data labeling in the Amazon Rekognition product, he said. The NFL, for example, assigns custom labels to things such as players and jerseys in video. Amazon’s service for defect inspection, Amazon Lookout for Vision, also uses the technique.
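For readers curious what calling Rekognition Custom Labels looks like in practice, here is a minimal sketch using the boto3 SDK; it assumes a custom-labels model has already been trained and started, and the project ARN, bucket, and file name are placeholders:

```python
# Sketch of calling Amazon Rekognition Custom Labels from Python (boto3).
# The ProjectVersionArn, bucket, and key below are placeholders; a custom
# model must already be trained and running for this call to succeed.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

response = rekognition.detect_custom_labels(
    ProjectVersionArn="arn:aws:rekognition:us-east-1:123456789012:project/example/version/1",  # placeholder
    Image={"S3Object": {"Bucket": "example-bucket", "Name": "frames/play-001.jpg"}},
    MinConfidence=80.0,
)

for label in response["CustomLabels"]:
    print(label["Name"], round(label["Confidence"], 1))
```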

Sivasubramanian cited the desire to replicate a real factory setting. So, the team that developed Lookout for Vision built a replica of a factory to try the few-shot approach in the real world. 

Sivasubramanian’s next example was understanding “irregular text” with machine learning, where, for example, text is blurred. Recognition accuracy drops sharply on such text, he noted. That matters for real-world cases such as doctors’ handwritten notes.

The traditional language-model approach of guessing from the first few letters runs into problems when there is little context. So, the AWS team invented something called SCATTER, the Selective Context Attentional Scene Text Recognizer. It passes the image through additional processing stages with a decoder that can draw on contextual or visual information alone. SCATTER led to a 3.7% improvement in text recognition, a big gain, he said, and is now used in AWS’s automatic text extraction service.
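The text extraction service referenced here is presumably Amazon Textract, AWS’s managed OCR offering. As a minimal sketch of what using it looks like from Python (bucket and document names are placeholders):

```python
# Sketch of extracting text from a scanned document with Amazon Textract (boto3).
# Bucket and document names are placeholders.
import boto3

textract = boto3.client("textract", region_name="us-east-1")

response = textract.detect_document_text(
    Document={"S3Object": {"Bucket": "example-bucket", "Name": "notes/handwritten-page.png"}}
)

# LINE blocks carry the recognized text plus a confidence score.
for block in response["Blocks"]:
    if block["BlockType"] == "LINE":
        print(f'{block["Confidence"]:.1f}%  {block["Text"]}')
```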

Sivasubramanian then brought up Maarek to talk about “giving Alexa a sense of humor.” Maarek referenced Alan Turing’s 1950 paper, “Computing Machinery and Intelligence,” in which the mathematician argued against common presumptions about what computers could do.

“Think of debuggers,” said Maarek, an example of how a computer “thinks about its own thought,” something people assumed computers would never do but Turing argued they could. “Already Turing was looking at having a sense of humor being a really hard challenge.”

“We want to look backward, whether customers are funny, and how should the machine respond to it,” explained Maarek. That led to the challenge of “detecting humor when customers are the ones being funny.” To train the system, Amazon looked at humorous customer comments on Amazon. “We actually discovered tons of funny questions,” she said, such as one customer asking whether the Nintendo Switch video game machine could “hack into the Matrix.” Maarek proceeded to explain the joke…

Another example was customer sarcasm, she said: will a luxury drink cooler “make me fly”? Said Maarek, “Sarcasm: funny.” Another type of humor is “the superiority theory of humor,” such as asking whether the Amazon Echo Show will cook breakfast. Someone asked about the Hutzler Banana Slicer, “will it bend the other way?” Another example: “If a unicorn farts in the woods and no one is around, does it make a sound?” (pertaining to canned Unicorn Meat).

Maarek said the team built a deep learning model, employing notions about humor such as subjectivity, and using embeddings. “We took into account domain bias, to make sure we didn’t over-fit our model.” As a result, the team was able to present a paper demonstrating a high degree of accuracy in humor detection at last year’s SIGIR conference.

Then the team moved on to detecting humor in spoken requests to Alexa. Would customers appreciate, she asked, Alexa understanding the humor? Or did customers want to feel superior? Maarek cited humorous user utterances toward Alexa, such as “Alexa, can you burp?” “You will see a ton of toilet humor,” she said; “it’s part of a very important area of humor, relief humor.” Another: “Alexa, what is your blood type?” Some examples, she noted, are not so much funny as playful, an instance of both personification and superiority by humans. “We defined playfulness,” she said: “the customer doesn’t expect Alexa to take this request literally,” and Alexa should not add anything to the user’s shopping list.

To understand all the forms of humor, Maarek said, she had to go back to the writings of Aristotle, Kant, Schopenhauer and others. Surveying those forms helped frame the question of what users would enjoy from Alexa: would they enjoy it if Alexa understood their humor?

The team started with “personification,” where people relate to Alexa as a personality, as a conjecture. They recruited a hundred students for a blind question-asking exercise, talking to an entity they didn’t know was Alexa (it was labeled “Shirley,” a play on the movie Airplane!). The students’ questions were examined with a custom version of Google’s BERT transformer neural net, which employed sentiment analysis among other techniques. “We got a pretty good model,” she said, “to detect these funny personification utterances on the fly.” The team also went to a speed-dating site to scope out the questions people ask when trying to be funny. That led to a survey of the personification questions people ask, such as “Do you think as good as a woman?” The result was that the human questioners enjoyed it when Alexa responded. “We really want to have fun not at Alexa but with Alexa,” was the conclusion of the research.
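As a rough sketch of what such a classifier could look like, built on the public Hugging Face transformers library rather than Amazon’s internal tooling, and with an invented toy training set, one could fine-tune BERT to flag playful or personification utterances:

```python
# Sketch of fine-tuning a BERT classifier to flag playful / personification
# utterances, in the spirit of the approach described above. Uses the public
# Hugging Face transformers library (not Amazon's internal tooling); the tiny
# training set is invented for illustration.
import torch
from torch.optim import AdamW
from transformers import AutoTokenizer, AutoModelForSequenceClassification

texts = ["Alexa, can you burp?", "Alexa, what is your blood type?",
         "Alexa, set a timer for ten minutes.", "Alexa, add milk to my shopping list."]
labels = torch.tensor([1, 1, 0, 0])  # 1 = playful, 0 = literal request

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
optimizer = AdamW(model.parameters(), lr=2e-5)

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
model.train()
for _ in range(3):  # a few passes over the toy batch
    outputs = model(**batch, labels=labels)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Score a new utterance: probability that it should not be taken literally.
model.eval()
with torch.no_grad():
    logits = model(**tokenizer("Alexa, will you marry me?", return_tensors="pt")).logits
print(torch.softmax(logits, dim=-1)[0, 1].item())
```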
