Monthly Archives: October 2018

Oracle voice assistant and voice in the enterprise

Oracle recently announced the launch of the Oracle Digital Assistant for companies, an AI assistant built to help employees handle things like enterprise resource planning (ERP), customer relationship management (CRM), or human resources (HR) needs in a conversational setting.

After the news, Brian Roemmele tweeted that Oracle joining the voice wagon completed the list of big companies he predicted would be in voice by this year. The companies on the list are:

  • Apple
  • Microsoft
  • Google
  • IBM
  • Oracle
  • Salesforce
  • Samsung
  • Sony
  • Facebook

And the prediction said:

The first wave of Voice First devices will likely come from these companies with consumer grade and enterprise grade systems and devices.

And indeed we have come full circle. Brian’s article was from 2016, only two years ago. It has been said that voice is the fastest-growing technology ever, and I would say it is also the fastest to enter the enterprise world. Almost all the companies on the list, with the notable exceptions of Facebook and Samsung, offer enterprise solutions with conversational interfaces.

You might tell me it’s not widely adopted yet. True, but the offerings are starting to pop up, and it’s clear that every big company is betting on voice, with some extending the bet to the enterprise. I believe voice technologies, and all the other computing developments we are going to see derived from them, are the next frontier in our interaction with computers. And I will continue reporting on it and bringing you my thoughts.

This is the VoiceFirst Weekly daily briefing. You can find me on Twitter as @voicefirstlabs and on Instagram as @voicefirstweekly. You have a great day and I’ll talk to you tomorrow.

Read along with Google Home Mini and Disney’s Little Golden Books

I had to come back with news related to Disney. I had to. You all know by now that I consulted as a software engineer with Disney while in South America, and it was a tremendous experience. Just yesterday, Google announced story time experiences: interactive stories in partnership with Disney. TechCrunch’s article presents the fundamentals:

  • The new story time experiences will work with a selection of Little Golden Books.
  • The titles currently available are Moana, Toy Story 3, Coco, Jack Jack Attack, along with classics like Peter Pan, Cinderella, Alice in Wonderland, The Three Little Pigs, Mickey Mouse and his Spaceship, and Mickey’s Christmas Carol.
  • The stories will be available this week alongside Google Home Mini devices in stores like Walmart, Target and Barnes & Noble.
  • To get started, you say, “Hey Google, let’s read along with Disney.”

Is there a better storytelling company than Disney? Disney is eating the world, and not only the children’s world. What better way to incentivize users to buy Google Home and Home Mini devices than relying on a known emotional connection to Disney stories that transcends several generations, just ahead of the holidays?

Two things I want to touch on:

I have talked here about how complex the current state of voice tech and smart assistants is, for a number of reasons. One is that at any time, any of the companies with smart assistants (Google, Amazon, Apple, et al.) can release something that takes away what startups are doing; in this case, it’s similar to what Tellables and Novel Effect are offering today. Obviously it’s not exactly the same, but you get the gist: they can come and eat you for breakfast pretty quickly. Not exactly encouraging, let’s say.

The second point is how the experience is more “alive”: it has the Little Golden Books along with the experience on the Home and Home Mini. That can make it feel like it’s not about the Assistant at all, but about the story and the parents reading to kids. A very good marketing campaign.

Will these points drive sales for Google Home and Home Mini devices this holiday season? We are so close, let’s see. I’m excited about the announcement.

I was a guest on This Week in Voice last week, in episode 6 of their third season. I had a great time with Bradley, and with Kane and Dustin from VUX.World.

In related news, VoiceFirst Weekly will be present at the next Alexa Conference, to be held in Chattanooga, Tennessee, on January 15-17, 2019. I hope to meet you there and chat about voice technology.

I just wanted to say how much I appreciate when you guys mention that you listened to an episode or that something resonated with you. It’s my oxygen to keep this going. This is the VoiceFirst Weekly flash briefing and my name is Mari. Have the best Tuesday you can have today, and I’ll talk to you tomorrow about Oracle joining the voice wagon.

Real-time translation is coming to all Assistant-optimized headphones and phones

AndroidCentral is reporting that the translation support page for Google Pixel has been updated to reflect:

Google Translate is available on all Assistant-optimized headphones and Android phones.
The feature was previously available on the Pixel Buds only with Pixel phones:

With the assistance of your Google Pixel Buds, you can easily converse with someone who doesn’t speak your language.

With the update, all you need is an Assistant-enabled headphone and phone. You can now say “Help me interpret Japanese” (or any other supported language), then hear translations and respond to them on your headphones while holding your phone out to the person you’re talking to. That person will hear your translations from the phone’s speaker and respond through the phone’s microphone.

Real-time translation is available in 40 languages according to the Google Pixel Buds support page, but only 27 languages are listed under “Talk” for speech translation and bilingual conversation translation on Google Translate.

With real-time translation available on phones and headphones, Google Assistant-enabled devices gain a real differentiating point over competitors like Amazon Alexa, Siri, and Cortana.

Voice across industries featured in today’s newsletter issue

Every Thursday at 9:50 PT we send out the weekly issue of the ultimate newsletter in voice technology. We will soon add an audio version of it as well.

AppSheet launched SmartAssistant, an automatic conversational UI for Apps

AppSheet, a service for creating mobile apps with almost no coding, announced last week a new feature that allows creators to add a user interface that acts like a digital business assistant, enabling voice recognition and natural language processing for any app built on the AppSheet platform. It has been compared to having Siri in your app: once users have it enabled, they can simply type or use voice commands to access data immediately.

“Smart Assistant delivers a conversational experience to any app built on our platform. With it, users can directly access information using simple phrases rather than learning or navigating the app interface. We believe this kind of seamless interface will increase user adoption rates as users no longer have to adapt to technology—technology adapts to the user.”
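AppSheet hasn’t published the internals of Smart Assistant, but the shape of a conversational layer over app data is easy to picture. Here is a minimal TypeScript sketch of the idea, with invented phrases and records; this is not AppSheet’s API, just an illustration of mapping an utterance to a data query instead of a tap-through navigation:

```typescript
// Toy sketch of a conversational layer over app data. The phrases and
// records below are invented; this is not AppSheet's implementation.
interface Order { id: number; region: string; total: number; }

const orders: Order[] = [
  { id: 1, region: 'West', total: 120 },
  { id: 2, region: 'East', total: 80 },
  { id: 3, region: 'West', total: 200 },
];

// Map a natural-language request to a query over the app's data.
function answer(utterance: string): string {
  const text = utterance.toLowerCase();
  if (text.includes('how many orders')) {
    return `You have ${orders.length} orders.`;
  }
  const regionMatch = text.match(/total for (\w+)/);
  if (regionMatch) {
    const region = regionMatch[1];
    const total = orders
      .filter((o) => o.region.toLowerCase() === region)
      .reduce((sum, o) => sum + o.total, 0);
    return `The total for ${region} is $${total}.`;
  }
  return "Sorry, I didn't get that.";
}

console.log(answer('How many orders do I have?'));  // "You have 3 orders."
console.log(answer('What is the total for west?')); // "The total for west is $320."
```

The real product presumably uses full natural language processing rather than keyword matching, but the shape is the same: the user states a goal, and the system does the navigating.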

I think I have talked about this before on the show: the convenience of voice is so high because it is the first technology that truly promises to shift the load from the user to the machines. Pretty exciting. From this announcement I want to point out something else: platforms like AppSheet providing voice activation to apps up front will be a cornerstone for adoption and engagement. The danger, then, is in doing it right; according to Lars Knoll, CTO of the Qt Company, doing voice integration wrong is worse than not doing voice at all.

Before I wrap up this episode, I want to also highlight the dichotomy voice might seem to have: it’s technical and creative. But once the technicalities are at hand, as AppSheet is providing, all you have left is the creativity and the user experience. It reminds me of Dave Isbitski at the Voice Summit keynote: the biggest challenge voice has is designing human conversations.

I’m Mari, your host for the VoiceFirst Weekly daily briefing. You can find me on Twitter as @voicefirstlabs and on Instagram as @voicefirstweekly. You have a great day and I’ll talk to you tomorrow.


What you didn’t hear on the Portal announcement

As you probably know by now, Facebook announced Portal yesterday, a smart display for video calls with Alexa integrated.

There are tons of articles on the web related to the announcement, summing up three fundamentals: the privacy scandals the company has been surrounded by this year, why choosing Alexa was a smart decision at this point, and the focus on visuals as opposed to another smart speaker. I’ll leave links in this episode’s notes. Go and check them out if you haven’t.

You might think: what is she going to say that hasn’t been said in the other stories? Hear me out, I have something to add to the conversation, pun intended.

Another bullet directed at Snapchat

I think this is also another shot at Snapchat. The company has been relentless in its competition with Snapchat (Instagram Stories, Facebook Stories, and now Instagram’s name tag), and this is no exception. The first feature listed on the Portal landing page is Smart Camera and Sound, followed by Private by Design. Sounds familiar from Snapchat’s camera-first mantra, right? Let’s continue.

With music, animation and augmented reality effects, Portal lets you get into stories like never before.

Portal is adding AR effects to calls, aimed at a more leisure-focused model that Facebook knows very well, which can also give it a differentiating point among the competitors. Targeting an audience that’s already used to lenses and uses them on a daily basis, combined with voice activation provided by the very familiar (homie) Alexa, is the recipe for snapping (again, pun intended) those young users into the giant blue platform.

One detail that also deserves attention:

For added security, Smart Camera and Smart Sound use AI technology that runs locally on Portal, not on Facebook servers. Portal’s camera doesn’t use facial recognition and doesn’t identify who you are.

I talked in a previous episode about the compression algorithms the Amazon Alexa team is working on and the effects they might have for voice-activated Internet of Things devices, and this quote proves that Facebook is very aware of users’ reactions to the privacy scandals and is reassuring them: it runs locally and does not call Facebook servers. Maybe Facebook is the first company to provide a truly offline smart assistant experience. Wait, it has Alexa built in; never mind.

Payments through AI messaging service + social network = better smart display?

As I always say, whether this strategy is going to play out or not, only time will tell. The truth is, Facebook wants in on the smart speaker race and in the home. And they want it now, so much that they couldn’t wait any longer. I would expect the patent Facebook filed in 2016 for conversational payments (Processing payment transactions using artificial intelligence messaging services), plus all the social network expertise, to come together with Portal and Portal+ for a product that might change the course of the smart display race.

Let me know what you think in the comments of this post, on Twitter replying to @voicefirstlabs, or on Instagram @voicefirstweekly. My name is Mari, I’m your host. Stop scrolling down now and go share this, like and engage. You know why? Because you love it and I love it. Best of Tuesdays to all of you. I’ll be back tomorrow!

Recap of important recent Google Assistant updates

This week Google Assistant had several updates important enough to make it into my episode notes. Let’s start reviewing them.

Monetization

First, monetization: just a week after Amazon announced the consumables API for Alexa, Google released the ability to sell digital goods, a way for Action creators to offer digital subscriptions and goods to consumers. The novelty is in the digital part, as the API already supported physical products.
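For developers, one way to trigger the new purchase flow is through Google’s actions-on-google client library. Here’s a minimal sketch, assuming a digital SKU already configured in the Play Console; the intent names, SKU id, and package name are made up, and the exact payload fields may differ from this outline:

```typescript
import { dialogflow, CompletePurchase } from 'actions-on-google';

const app = dialogflow();

// Hand the transaction off to the Assistant's purchase flow.
// The SKU id and package name are hypothetical.
app.intent('Buy Premium Hints', (conv) => {
  conv.ask(new CompletePurchase({
    skuId: {
      skuType: 'SKU_TYPE_IN_APP',
      id: 'premium_hints',
      packageName: 'com.example.trivia',
    },
  }));
});

// Dialogflow routes the outcome back via the
// actions_intent_COMPLETE_PURCHASE event.
app.intent('Purchase Result', (conv, params, purchase: any) => {
  if (purchase && purchase.purchaseStatus === 'PURCHASE_STATUS_OK') {
    conv.ask('You got it! Enjoy your hints.');
  } else {
    conv.ask("The purchase didn't go through, but you can keep playing.");
  }
});
```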

As a follow-up, the Google team also introduced a new sign-in service for the Assistant that allows users to log in and link their accounts. According to TechCrunch, Starbucks has already integrated this feature into its Assistant experience to give users access to their rewards account, and adding the new Sign-In for the Assistant has almost doubled its conversion rate.
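The sign-in flow itself is only a couple of handlers in the same library. A minimal sketch, with illustrative intent names and a placeholder client ID:

```typescript
import { dialogflow, SignIn } from 'actions-on-google';

// The client ID comes from your Actions console project; the library
// uses it to verify the identity token returned after sign-in.
const app = dialogflow({
  clientId: 'your-client-id.apps.googleusercontent.com',
});

// Ask the user to link their Google account before a personalized task.
app.intent('Check Rewards', (conv) => {
  conv.ask(new SignIn('To look up your rewards balance'));
});

// Dialogflow routes the result here via the actions_intent_SIGN_IN event.
app.intent('Sign In Result', (conv, params, signin: any) => {
  if (signin.status === 'OK') {
    // Profile data such as the email address is available once verified.
    conv.ask(`Thanks! Your account ${conv.user.email} is linked.`);
  } else {
    conv.ask('No problem, I can still help without your account.');
  }
});
```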

Book rides with Lyft or Uber

Starting this past Thursday, Google Assistant users are able to book rides through Uber or Lyft. Siri and Alexa have let you book rides on Uber for some time now, and Alexa supports Lyft as well. Considering that Google has the maps advantage, or, actually, the app-ecosystem advantage, it was about time.

Voice Access publicly available, an important step for accessibility

The next update is Voice Access, made publicly available this week. The app is tied directly to the Google Assistant and is designed to make it easier for users to handle a wider range of tasks via voice commands instead of manual actions. It accomplishes that by essentially translating a button push, page scroll, or item selection into a voice command that Google Assistant can easily follow.

The Google Assistant mobile app gets more visual, aligning with smart displays

Google Assistant also got more visual last week, with a redesign of the Assistant experience on phones that brings more and larger visuals. The update aligns with what users were already seeing on smart displays.

In addition to the visual redesign of the Assistant, Google also announced a number of new features for developers. Unsurprisingly, one part of this announcement focuses on allowing developers to build their own visual Assistant experiences. Google calls these “rich responses” and provides developers with a set of pre-made visual components that they can easily use to extend their Assistant Actions.
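To make that concrete, here’s roughly what using one of those pre-made components looks like with the actions-on-google library; the card content, image URL, and suggestion chips are invented:

```typescript
import {
  dialogflow,
  BasicCard,
  Button,
  Image,
  Suggestions,
} from 'actions-on-google';

const app = dialogflow();

// A rich response pairs spoken text with visual components that render
// on phones and smart displays.
app.intent('Show Update', (conv) => {
  conv.ask('Here is the latest Assistant update.');
  conv.ask(new BasicCard({
    title: 'Google Assistant gets more visual',
    image: new Image({
      url: 'https://example.com/assistant-redesign.png',
      alt: 'Assistant redesign screenshot',
    }),
    buttons: new Button({
      title: 'Read the story',
      url: 'https://example.com/story',
    }),
  }));
  conv.ask(new Suggestions(['Monetization', 'Rides', 'Voice Access']));
});
```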

Acquisitions

Finally, Google acquired Onward, an “AI-powered chatbot startup that creates a range of automated customer service and sales tools for business usage.”

That’s all for today. I missed some, but I think these are the most important ones; they show how quickly Google is moving head-on in this space. Exciting times to be alive. I hope the best day for you all.

I’m Mari, your host for the VoiceFirst Weekly daily briefing. You can find me on Twitter as @voicefirstlabs and on Instagram as @voicefirstweekly. You have a great day and I’ll talk to you tomorrow.

Chatbot solving hard problems for students in Australia

The University of Adelaide built a chatbot to solve the dreaded problem of thousands and thousands of students receiving their Admission Rank on the same day. The chatbot was built using only Oracle Autonomous Mobile Cloud Enterprise, a sheet with the formulas for calculating the admission rank, and a list of high schools whose students are eligible for bonus-point schemes. It’s a very precise task that the bot performed very well, saving the university staff from thousands of same-day calls from students wanting to know their bonus points. The chatbot was deployed to Messenger, because that’s where students are: “We capitalized on the engagement we were already having across Facebook.” That, in turn, amplified students sharing their scores on the Facebook platform.
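To make the “very precise task” concrete: at its core the bot is doing a deterministic lookup and calculation, something like the toy sketch below. The school names, the five-point bonus, and the cap are invented; the real formulas live in the university’s sheet.

```typescript
// Toy version of the chatbot's core calculation; the bonus scheme and
// numbers are illustrative, not the University of Adelaide's real rules.
const bonusEligibleSchools = new Set([
  'Example High School',
  'Sample Secondary College',
]);

function adjustedAdmissionRank(rawRank: number, school: string): number {
  const bonus = bonusEligibleSchools.has(school) ? 5 : 0;
  // ATAR-style ranks are capped at 99.95.
  return Math.min(rawRank + bonus, 99.95);
}

console.log(adjustedAdmissionRank(88.2, 'Example High School')); // 93.2
console.log(adjustedAdmissionRank(88.2, 'Another School'));      // 88.2
```

Wrapping a calculation like this in a chatbot means each student gets an instant, personalized answer instead of waiting in a phone queue.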

The prospect management team was shocked at how many students embraced the chatbot in its first year. “We would have been happy with 200,” Cherry says. “Instead, the chatbot held 2,100 unique conversations with students.”

A very illustrative example of conversational design done right: for a specific task, with the users and context first. The result was a win for students and university staff alike.

The future is without doubt conversational and has a voice.

Parenting with Alexa and smart assistants

Voice assistants have been at the forefront of parenting almost since the beginning. From a kid who prayed to Alexa, to parents wanting the assistant to require a thank you, prompting a response from Amazon. The latest update to Alexa addresses tough questions, saving parents difficult conversations with their kids (this is, of course, an overstatement). A recent update to the virtual assistant comes with a new mode called FreeTime in which Alexa knows to answer certain questions — like, say, “Where do babies come from?” — differently. In instances like that, Alexa can now simply respond “ask a grown-up.”

Amazon even worked with child psychologists on some of the new answers that Alexa can give. According to the company, Alexa is constantly using questions and requests to ensure that she’s “always getting smarter.”

“Alexa isn’t intended to be a replacement parent or caregiver,” the company said in a statement. “So we believe it’s important we treat these answers with empathy and point the child to a trusted adult when applicable.”

Going one step further, parents can also set times during which Alexa will tell kids “Sorry, I can’t play right now, try again later,” should they attempt to use the tool when they should be sleeping or doing homework. The FreeTime update will even encourage kids to say please and thank you before and after asking for things.

When asked “Alexa, what happens when you die?”, she replies: “Sorry, I’m not sure.”

FreeTime can be turned on using the Alexa app and is available on the Echo, Echo Plus, and Echo Dot devices.

Alexa, saving parents since 2018. 


I’m Mari, your host for VoiceFirst Weekly daily briefing, you can find me on Twitter as @voicefirstlabs and on Instagram as @voicefirstweekly. Subscribe, like, share, engage. This is what we are here for! You have a great day and I’ll talk to you tomorrow.

Skill Connections in Alexa

Amazon announced yesterday that developers will be able to start a request in one skill, then have it fulfilled in another. The feature is officially called Skill Connections and is in preview. As part of the blog post announcement, Amazon outlined examples of available connections, like printing with HP, scheduling a restaurant reservation with OpenTable, and scheduling a taxi reservation with Uber. It’s a way to pass information between skills to simplify customer tasks. For now, users can only use printing with HP and reservations with Uber and OpenTable.
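The announcement is light on code, but based on the preview documentation the hand-off happens through a directive that the source skill returns. Here’s a rough sketch using the ASK SDK for Node.js; the payload fields and token are illustrative, and since the feature is in preview the directive may not be in the SDK’s typings yet, hence the cast:

```typescript
import { HandlerInput, RequestHandler } from 'ask-sdk-core';
import { Response } from 'ask-sdk-model';

// Hands a print task off to a skill that can fulfill it (e.g. printing
// with HP). The directive shape follows the Skill Connections preview;
// the payload values are invented for illustration.
const PrintRecipeHandler: RequestHandler = {
  canHandle(handlerInput: HandlerInput): boolean {
    const request = handlerInput.requestEnvelope.request;
    return request.type === 'IntentRequest'
      && request.intent.name === 'PrintRecipeIntent';
  },
  handle(handlerInput: HandlerInput): Response {
    return handlerInput.responseBuilder
      .addDirective({
        type: 'Connections.SendRequest',
        name: 'Print',
        payload: {
          '@type': 'PrintPDFRequest',
          '@version': '1',
          title: 'Chocolate cake recipe',
          url: 'https://example.com/recipe.pdf',
        },
        token: 'print-job-1',
      } as any) // preview directive, not yet in ask-sdk-model typings
      .getResponse();
  },
};
```

When the target skill finishes, the session resumes in the source skill with the result, which is what makes the game-to-sponsor-and-back scenario I describe below plausible.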

Skill Connections is launching as a developer preview. It has been highly anticipated, and I’m sure it will be highly regarded by developers and users.

What is the biggest takeaway from this feature? For me, it’s marketing. Once more skills can invoke others, imagine the ramifications: you could pass directly from a game to sponsored skill content and then back to your game again, giving both brands and developers a medium for awareness and monetization. Of all the announcements the Alexa team has made recently, this is set to have the biggest impact on skill discoverability. Keep a close eye on this one and on the first applications built with it.

I’m Mari, your host for VoiceFirst Weekly daily briefing, you can find me on Twitter as @voicefirstlabs and on Instagram as @voicefirstweekly. Subscribe, like, share, engage. This is what we are here for! You have a great day and I’ll talk to you tomorrow.
