
Real-time translation is coming to all Assistant-optimized headphones and phones

Android Central is reporting that the translate support page for Google Pixel has been updated to reflect:

Google Translate is available on all Assistant-optimized headphones and Android phones.
The feature was previously available only on Pixel Buds paired with Pixel phones:

With the assistance of your Google Pixel Buds, you can easily converse with someone who doesn’t speak your language.

With the update, any Assistant-enabled headphones and phone will do. You can now say “Help me interpret Japanese” (or any other supported language), hear the translations in your headphones, and respond through them while holding your phone out to the person you’re talking to. That person hears your translated speech from the phone’s speaker and replies through the phone’s microphone.

The Pixel Buds support page lists real-time translation in 40 languages, but only 27 languages are listed under “Talk” for speech translation and bilingual conversation on Google Translate.

By introducing real-time translation on phones and headphones, Google Assistant-enabled devices gain a real point of differentiation from competitors like Amazon Alexa, Siri, and Cortana.

Voice across industries featured in today’s newsletter issue

Every Thursday at 9:50 PT we send out the weekly issue of the ultimate newsletter in voice technology. We will soon add an audio version of it as well.

AppSheet launched SmartAssistant, an automatic conversational UI for Apps

AppSheet, a service for creating mobile apps almost without coding, announced last week a new feature that allows creators to add a user interface that acts like a digital business assistant, bringing voice recognition and natural language processing to any app built on the AppSheet platform. It’s been compared to having Siri in your app: once it’s enabled, users can simply type or speak commands to access data immediately.

Smart Assistant delivers a conversational experience to any app built on our platform. With it, users can directly access information using simple phrases rather than learning or navigating the app interface. We believe this kind of seamless interface will increase user adoption rates as users no longer have to adapt to technology—technology adapts to the user.
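
To make that concrete, here is a minimal, purely hypothetical sketch of the idea (not AppSheet’s actual implementation): a typed or spoken phrase is matched to an intent, and the intent is turned into a filter over the app’s data. All names below are invented for illustration.

```typescript
// Hypothetical sketch of a conversational layer over app data.
// None of these names come from AppSheet; they only illustrate the concept.

interface Order {
  id: string;
  customer: string;
  status: "open" | "shipped" | "delivered";
}

const orders: Order[] = [
  { id: "1001", customer: "Acme", status: "open" },
  { id: "1002", customer: "Globex", status: "shipped" },
];

// Very naive "NLP": a real assistant would use intent models, not a regex.
function answer(phrase: string): Order[] {
  const match = phrase.toLowerCase().match(/show (open|shipped|delivered) orders/);
  return match ? orders.filter((o) => o.status === match[1]) : [];
}

// "Show open orders" -> [{ id: "1001", customer: "Acme", status: "open" }]
console.log(answer("Show open orders"));
```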

I think I have talked about this before on the show: the convenience of voice is so high because it is the first technology that truly promises to shift the load from the user to the machine. Pretty exciting. From this announcement I want to point out something else: platforms like AppSheet providing voice activation for apps up front will be a cornerstone for adoption and engagement. The danger, then, is in doing it right: according to Lars Knoll, CTO of the Qt Company, doing voice integration wrong is worse than not doing voice at all.

Before I wrap up this episode, I also want to highlight the apparent dichotomy in voice: it’s both technical and creative. But once the technicalities are handled, as AppSheet is doing, all that’s left is the creativity and the user experience. It reminds me of Dave Isbitski at the VOICE Summit keynote: the biggest challenge voice has is designing human conversations.

I’m Mari, your host for the VoiceFirst Weekly daily briefing. You can find me on Twitter as @voicefirstlabs and on Instagram as @voicefirstweekly. Have a great day, and I’ll talk to you tomorrow.

Links to the coverage of the Portal announcement:

What you didn’t hear on the Portal announcement

As you probably know by now, Facebook announced Portal yesterday, a smart display for video calls with Alexa built in.

There are tons of articles on the web about the announcement, and they sum up three fundamentals: the privacy scandals that have surrounded the company this year, why choosing Alexa was a smart decision at this point, and the focus on a visual device as opposed to yet another smart speaker. I’ll leave links in this episode’s notes. Go check them out if you haven’t.

You might think: what is she going to say that hasn’t already been said in the other stories? Hear me out, I have something to add to the conversation, pun intended.

Another bullet directed at Snapchat

I think this is also another shot at Snapchat. Facebook has been relentless in its competition with Snapchat (Instagram Stories, Facebook Stories, and now Instagram Nametag) and this is no exception. The first feature listed on the Portal landing page is “Smart Camera and Sound,” followed by “Private by design.” Sounds familiar given Snapchat’s camera-first mantra, right? Let’s continue.

With music, animation and augmented reality effects, Portal lets you get into stories like never before.

Portal is adding AR effects to calls, aimed at a more leisure-focused model that Facebook knows very well, which can also give it a point of differentiation among competitors. Targeting an audience that’s already used to lenses, that uses them on a daily basis, combined with voice activation provided by the very familiar (homie) Alexa, is the recipe for snapping (again, pun intended) those young users into the giant blue platform.

One detail that also deserves attention:

For added security, Smart Camera and Smart Sound use AI technology that runs locally on Portal, not on Facebook servers. Portal’s camera doesn’t use facial recognition and doesn’t identify who you are.

I’ve talked on the show about the compression algorithms the Amazon Alexa team is working on and the effects they might have for voice-activated Internet of Things devices, and this quote shows that Facebook is very aware of users’ reactions to the privacy scandals, of what users think, and is reassuring them: it runs locally and does not call Facebook servers. Maybe Facebook is the first company to provide a truly offline smart assistant experience. Wait, it has Alexa built in; never mind.

Payments through AI messaging service + social network = better smart display?

As I always say, whether this strategy is going to play out or not, only time will tell. The truth is, Facebook wants in on the smart speaker race and in the home. And they want it now, so much that they couldn’t wait any longer. I would expect the patent Facebook filed in 2016 for conversational payments (“Processing payment transactions using artificial intelligence messaging services”), plus all the social network expertise, to come together with Portal and Portal+ for a product that might change the course of the smart display race.

Let me know what you think in the comments on this post, on Twitter by replying to @voicefirstlabs, or on Instagram @voicefirstweekly. My name is Mari, I’m your host. Stop scrolling now and go share this, like it, and engage. You know why? Because you love it and I love it. Best of Tuesdays to all of you. I’ll be back tomorrow!

Recap of important Google Assistant recent updates

This week Google Assistant had several updates important enough to make it into my episode notes. Let’s review them.

Monetization

First, monetization: just a week after Amazon announced the consumables API for Alexa, Google released the ability to sell digital goods, a way for Action creators to offer digital subscriptions and goods to consumers. The novelty is in the digital part, as the API already supported physical products.
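
To give a feel for the developer side, here is a rough sketch of offering a digital good from an Action using the actions-on-google Node.js client’s CompletePurchase helper. Treat the parameter names, SKU id, and package name as assumptions based on my reading of the announcement; check the official docs before relying on them.

```typescript
// Rough sketch of selling a digital good from an Action, assuming the
// actions-on-google Node.js client's CompletePurchase helper.
// The SKU id and package name below are made up for illustration.
import { dialogflow, CompletePurchase } from "actions-on-google";

const app = dialogflow();

app.intent("Buy Premium", (conv) => {
  conv.ask(new CompletePurchase({
    skuId: {
      skuType: "SKU_TYPE_IN_APP",          // in-app item; subscriptions use a different type
      id: "premium_pack",                  // hypothetical SKU defined in the Play Console
      packageName: "com.example.voiceapp", // hypothetical Android package name
    },
  }));
});

export { app };
```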

As a follow-up, the Google team also introduced a new sign-in service for the Assistant that lets users log in and link their accounts. According to TechCrunch, Starbucks has already integrated this feature into its Assistant experience to give users access to their rewards accounts, and adding the new Sign-In for the Assistant has almost doubled its conversion rate.
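
For context, here is roughly what that account linking looks like in a fulfillment, sketched with the actions-on-google Node.js client’s SignIn helper; the intent names and client ID are placeholders, not Starbucks’ actual integration.

```typescript
// Sketch of Google Sign-In for the Assistant using the actions-on-google
// Node.js client. Intent names and the client ID are placeholders.
import { dialogflow, SignIn } from "actions-on-google";

const app = dialogflow({ clientId: "YOUR_GOOGLE_CLIENT_ID" });

// Ask the user to link their account before showing personalized data.
app.intent("Show My Rewards", (conv) => {
  conv.ask(new SignIn("To look up your rewards"));
});

// Intent bound to the actions_intent_SIGN_IN event, fired after the prompt.
app.intent("Get Sign In", (conv, params, signin: any) => {
  if (signin.status === "OK") {
    conv.close(`Thanks for signing in, ${conv.user.email}. Here are your rewards.`);
  } else {
    conv.close("No problem, you can link your account later.");
  }
});

export { app };
```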

Book rides with Lyft or Uber

Starting this past Thursday, Google Assistant users can book rides through Uber or Lyft. Siri and Alexa have let you book Uber rides for some time now, and Alexa also supports Lyft. Considering that Google has the maps advantage, or, actually, the whole ecosystem-of-apps advantage, it was about time.

Voice Access publicly available, an important step for accessibility

The next update is Voice Access, made publicly available this week. The app is tied directly to Google Assistant and is designed to make it easier for users to handle a wider range of tasks via voice commands instead of manual actions. It accomplishes that by essentially translating a button push, page scroll, or item selection into a voice command that Google Assistant can easily follow.

The Google Assistant mobile app gets more visual, aligning with smart displays

Google Assistant also got more visual last week, with a redesign of the Assistant experience on phones that brings more and larger visuals. The update aligns the phone experience with what users were already seeing on smart displays.

In addition to the visual redesign of the Assistant, Google also today announced a number of new features for developers. Unsurprisingly, one part of this announcement focuses on allowing developers to build their own visual Assistant experiences. Google calls these “rich responses” and provides developers with a set of pre-made visual components that they can easily use to extend their Assistant actions.
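
As an illustration, a simple rich response built from those pre-made components might look like this with the actions-on-google Node.js client; the card contents and URLs are invented for the example.

```typescript
// Sketch of a visual "rich response" built from the pre-made components in
// the actions-on-google Node.js client. Card contents and URLs are made up.
import { dialogflow, SimpleResponse, BasicCard, Image, Button } from "actions-on-google";

const app = dialogflow();

app.intent("Daily Briefing", (conv) => {
  // A spoken response first, so voice-only surfaces still get an answer.
  conv.ask(new SimpleResponse({
    speech: "Here is today's voice tech briefing.",
    text: "Here is today's briefing.",
  }));
  // The visual card shown on phones and smart displays.
  conv.ask(new BasicCard({
    title: "VoiceFirst Weekly",
    text: "Google Assistant gets richer visual responses.",
    image: new Image({
      url: "https://example.com/briefing.png",
      alt: "Briefing cover image",
    }),
    buttons: new Button({
      title: "Read more",
      url: "https://voicefirstweekly.com",
    }),
  }));
});

export { app };
```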

Acquisitions

Finally, Google acquired Onward, an AI-powered chatbot startup that creates a range of automated customer service and sales tools for business use.

That’s all for today. I missed some, but I think these are the most important ones, and they show how quickly Google is moving in this space, head-on. Exciting times to be alive. Hoping the best day for you all.

I’m Mari, your host for the VoiceFirst Weekly daily briefing. You can find me on Twitter as @voicefirstlabs and on Instagram as @voicefirstweekly. Have a great day, and I’ll talk to you tomorrow.