Episode Archives

Chatbot solving hard problems for students in Australia

The University of Adelaide built a chatbot to solve the dreaded problem of thousands and thousands of students receiving their Admission Rank on the same day. The chatbot was built using only Oracle Autonomous Mobile Cloud Enterprise, a sheet with the formulas for calculating the admission rank, and a list of high schools whose students are eligible for bonus points schemes. It's a very precise task that the bot performed very well, saving university staff from thousands of calls from students all wanting to know their bonus points on the same day. The chatbot was deployed to Messenger, because that's where students are. "We capitalized on the engagement we were already having across Facebook." That, in turn, encouraged students to share their scores on the Facebook platform.

The prospect management team was shocked at how many students embraced the chatbot in its first year. “We would have been happy with 200,” Cherry says. “Instead, the chatbot held 2,100 unique conversations with students.”

A very illustrative example of conversational design done right: built for a specific task, with the users and their context first, resulting in a win for students and university staff alike.

The future is without doubt conversational and has a voice.

Parenting with Alexa and smart assistants

Voice assistants have been on parents' minds almost since the beginning: from a kid who prayed to Alexa, to parents wanting the assistant to require a "thank you," prompting a response from Amazon. The latest update to Alexa addresses tough questions, saving parents some difficult conversations with their kids (this is, of course, an overstatement). The update comes with a new mode called FreeTime, in which Alexa knows to answer certain questions, like, say, "Where do babies come from?", differently. In instances like that, Alexa can now simply respond, "Ask a grown-up."

Amazon even worked with child psychologists on some of the new answers that Alexa can give. According to the company, Alexa is constantly using questions and requests to ensure that she’s “always getting smarter.”

“Alexa isn’t intended to be a replacement parent or caregiver,” the company said in a statement. “So we believe it’s important we treat these answers with empathy and point the child to a trusted adult when applicable.”

Going one step further, parents can also set times during which Alexa will tell kids, "Sorry, I can't play right now, try again later," should they attempt to use the tool when they should be sleeping or doing homework. The FreeTime update will even encourage kids to say please and thank you before and after asking for things.

When asked, "Alexa, what happens when you die?" she replies: "Sorry, I'm not sure."

FreeTime can be turned on using the Alexa app and is available on the Echo, Echo Plus, and Echo Dot devices.

Alexa, saving parents since 2018. 

Mari

I’m Mari, your host for VoiceFirst Weekly daily briefing, you can find me on Twitter as @voicefirstlabs and on Instagram as @voicefirstweekly. Subscribe, like, share, engage. This is what we are here for! You have a great day and I’ll talk to you tomorrow.

Skill connections in Alexa

Amazon announced yesterday that developers will be able to start a request in one skill, then have it fulfilled in another. The feature is officially called Skill Connections and is in preview. In the announcement blog post, Amazon outlined the connections available at launch: printing with HP, restaurant reservations with OpenTable, and taxi bookings with Uber. It's a way to pass information between skills to simplify customer tasks. For now those three are the only connections users can invoke.
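To make the hand-off concrete, here is a rough sketch of the kind of directive a skill might include in its response to delegate a task to another skill. This is based on the preview announcement only; the connection URI, payload fields, and values shown are illustrative assumptions, not confirmed API details.

```typescript
// Sketch of a Skill Connections hand-off: skill A asks Alexa to route a
// task (here, printing a PDF) to a provider skill, then resume afterwards.
interface StartConnectionDirective {
  type: "Connections.StartConnection";
  uri: string; // identifies the task to hand off (hypothetical URI below)
  input: Record<string, unknown>; // task payload for the fulfilling skill
  token: string; // echoed back so the requesting skill can resume its flow
}

const printDirective: StartConnectionDirective = {
  type: "Connections.StartConnection",
  uri: "connection://AMAZON.PrintPDF/1", // assumed task identifier
  input: {
    title: "Concert ticket",
    url: "https://example.com/ticket.pdf", // hypothetical document URL
  },
  token: "print-ticket-request",
};

// A skill would attach this directive to its response; when the provider
// skill finishes, control returns to the requester with a result payload.
console.log(printDirective.uri);
```

The key design point is that the user never has to explicitly open the second skill: the platform routes the task and brings the session back, which is what makes the game-to-sponsor-and-back scenario below plausible.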

Skill Connections is launching as a developer preview. It has been highly anticipated, and I'm sure it will be welcomed by developers and users alike.

What is the biggest takeaway from this feature? For me, it's marketing. Once skills can invoke other skills, imagine the ramifications: you could pass directly from a game to sponsored skill content and then back to your game again, giving both brands and developers a medium for awareness and monetization. Of all the announcements the Alexa team has made recently, this one is set to have the biggest impact on skill discoverability. Keep a close eye on this one and on the first applications built with it.

I’m Mari, your host for VoiceFirst Weekly daily briefing, you can find me on Twitter as @voicefirstlabs and on Instagram as @voicefirstweekly. Subscribe, like, share, engage. This is what we are here for! You have a great day and I’ll talk to you tomorrow.

Omega voice assistant and Surface headphones

A Will.i.am-backed startup presented Omega, its smart assistant, at the recently concluded Dreamforce conference. Omega is set to compete with Alexa and Google Assistant, although it is heavily geared toward music fans: in the presentation, the assistant was shown playing tracks on request, then providing information about the artist and any upcoming gigs or live performances. It seems like Spotify for smart assistants. Given this music focus, we can safely predict that Omega is likely to turn up built into Bluetooth wireless headphones. Speaking of which, Microsoft unveiled several Surface devices, among them the Surface Headphones, with Cortana (obviously) and automatic pause and play. The headphones are set to be available later this year. The Omega assistant and Microsoft's own Surface Headphones are telltale signs of companies capitalizing on voice technology, or at least fighting for a place in consumers' hearts.

You see, my friends, this voice space that might have been called a fad is nothing of the sort. Look at the different angles from which companies are entering the space and draw your own conclusions.

I’m Mari, your host for VoiceFirst Weekly daily briefing, you can find me on Twitter as @voicefirstlabs and on Instagram as @voicefirstweekly. You have a great day and I’ll talk to you tomorrow.

Reboot the web with voice

Last week or earlier this week, the creator of the World Wide Web, Sir Tim Berners-Lee, presented a project he has been working on in stealth mode: Solid, an open source project with, in its author's words, "the goal to restore the power and agency of individuals on the web," based on Linked Data and web annotation ontology. Anyways, why am I talking about this? In the Fast Company interview, Berners-Lee referred to a voice assistant built on the same principles: free, open source, and decentralized. Its code name is Charlie, and its focus is on how people will own their own data. He goes on to explain how Charlie could be more useful in areas like health records, children's school events, or financial records. I talked here in an earlier episode about smart assistants that are alternatives to the popular ones backed by the big companies, among them Snips, a smart assistant whose differentiator is privacy built in.

Whether Tim Berners-Lee's offering of a decentralized web will catch on, time will tell. But developers will tell, too. Tim said that "developers have always had a certain amount of revolutionary spirit," and this can be the opportunity for developers to start something new and participate in a bigger mission, with its appeal to freedom and to taking control of the web back from corporations.

There are several aspects of the announcement worth noting, most of them outside the scope of this podcast. For the topic we care about, it's really telling that when Tim decided to create a new platform for the web, voice was in the strategy from the start.

The intent is world domination

Said the WWW creator in an interview with Fast Company.

It seems that I keep missing Mondays these past two weekends! I have been traveling, and it's harder to find a quiet place to record when you are traveling. As always, share these episodes and have a nice day. I'll talk to you all tomorrow!