Warner Music and Rothco launch Saylists on Apple Music to aid speech therapy
Warner Music and Accenture Interactive’s Rothco have teamed up to launch Saylists, a playlist of songs that have been curated specifically to aid speech therapy.
One of the most successful therapeutic methods for overcoming speech sound disorders is the repetition of difficult syllables, words and phrases.
Apple Music used an algorithm developed by Rothco to analyse the lyrics of the 70 million tracks in its catalogue, identifying songs with the strongest patterns of repetition for training specific speech sounds. The results were narrowed down to 173 songs, forming an initial 10 Saylists.
Each Saylist has been categorised according to one of the most commonly challenging speech sounds in the English language: ‘CH’, ‘D’, ‘F’, ‘G’, ‘K’, ‘L’, ‘R’, ‘S’, ‘Z’ and ‘T’.
The custom-built Saylists feature tracks such as Don’t Start Now by Dua Lipa for the ‘D’ playlist; Right Here, Right Now by Fatboy Slim for the ‘R’ playlist; and Good as Hell by Lizzo for the ‘G’ playlist.
“Our goal was to help redefine the long and often painstaking journey that young people with atypical speech can experience. Several members of our team who worked on ‘Saylists’ themselves grew up with SSDs, so it’s a personal project as well,” Rothco chief creative officer Alan Kelly said.
“We recognised that there is one place where many people enjoy the rhythmic repetition of words and sounds — in music. It was crucial that we could analyse as many songs as possible to present children with something engaging. Pairing this with Warner’s curation meant we could be certain the songs in the ‘Saylists’ will appeal to many different young people.”
Saylists is now available exclusively to Apple Music subscribers globally.