Google announced that users can now query the Assistant in Spanish. It was already available in the Assistant on Android phones, and now Google Home and Google Home Mini also support Spanish in both Mexico and Spain. We asked it "¿Cuál es el clima?" ("What's the weather?") and it replied with the weather information in English, presumably because our default language is English. Google also introduced Continued Conversation, which lets you ask more than one question without repeating the invocation word.
And just two days ago Google Duplex, the famous demo presented at Google I/O, opened for testing. The company gave several small groups of journalists a chance to try Duplex, in which the assistant identified itself and said the call was going to be recorded. Google listened to the main criticism: the concern that the system could deceive the person on the line by not disclosing that it isn't human. The company's efforts in AI and audio are playing a major role in its strategy, and Duplex could have a big impact as an assistant. We'll wait for more testers to give a more thorough review. Thanks for listening. Find these show notes at voicefirstweekly.com/flashbriefing.
The city of Ozark, Missouri has its own voice on Alexa.
I'm going to play near-future oracle here. It seems to me that, in time, it makes a lot of sense for every city to have a voice assistant in every main attraction area, where visitors and locals can ask questions about the city and the different services linked to the government. It would be really useful for tourists finding their way around the city and asking for the schedules of theaters, city hall, and museums. Other use cases might be filing complaints, requesting a permit, checking permit status, or asking when a service will launch or when there are new job openings in the city government. Lots of possible uses and applications. I'm sure we'll be walking through a city one day and get to a point where there's an Echo you can talk to. That's why the Ozark, Missouri skill is a breakthrough move in VoiceFirst: it's the first city in the US to have one, and I'm expecting a lot more to come.
Thanks for listening. Before wrapping up today's episode, I want to invite you to subscribe to our weekly newsletter: every Thursday morning we deliver our digest of what we think is most relevant in voice first. You can subscribe at voicefirstweekly.com or at digest.voicefirstweekly.com. Also, this show can now be heard on Google Play, iTunes, Castbox, and TuneIn. And one last piece of news, I promise: we just launched the VoiceFirst Weekly Google Action. Go to the Assistant app on your Android phone or your Home Mini and say "Ok Google, talk to VoiceFirst Weekly" to listen to the current day's flash briefing. Alright! Have a productive, joyful day and we'll talk tomorrow.
Someone asked this week in the voice entrepreneur group:
We need category headings for job types & Skills applicable to the world of Voice Development/businesses needs. Think: Roles, Skills, Technologies.
Voice is so new as a platform that we haven't yet defined a common vocabulary for applicants and jobs. What's so interesting and attractive to me about voice is not only the technical side of it, but the psychology and understanding of human communication that is so core to the whole technology.
The abilities required to build successful, engaging voice apps with millions of users will be very different from those needed for a mobile application. Empathy will play a big role, and so will creatives and writers. At a basic level of skill development, you'll need an understanding of, and some hands-on experience with, Node.js/JavaScript/Python, JSON, and APIs, plus a basic understanding of databases. On the business side, there is a lot of room for creative types: voice actors, writers, designers. Skills: empathy, design for voice interactions, and understanding how designing conversational interfaces differs from designing websites and mobile apps. Voice is the interface to humans, and you'll need to understand them.
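To make the JSON-and-APIs point concrete, here is a toy sketch of the request/response shape voice platforms exchange with a skill's backend. The intent name "WeatherIntent", the handler map, and the slot names are made up for illustration; a real Alexa skill would use the official ask-sdk package rather than hand-rolling this.

```javascript
// Hypothetical intent handlers keyed by intent name. A real skill
// registers these through the platform's SDK instead.
const handlers = {
  WeatherIntent: (slots) =>
    `The weather in ${slots.city || 'your city'} is sunny.`,
};

function handleRequest(request) {
  // Voice platforms deliver user utterances as structured JSON:
  // an intent name plus slot values extracted from the speech.
  const { intent, slots = {} } = request;
  const handler = handlers[intent];
  const speech = handler
    ? handler(slots)
    : "Sorry, I didn't understand that.";
  // Responses are JSON too: text for the platform to speak back.
  return { outputSpeech: { type: 'PlainText', text: speech } };
}
```

The takeaway is that the "hard" engineering surface is small; the design work is in deciding what the intents, slots, and spoken responses should be.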
It's about time Google launched a podcast app for Android. The company is betting hard on AI and audio-focused content, and this shows it. We already submitted our podcast, and the process seems fairly straightforward: besides the podcast RSS URL, you are required to verify ownership of the email address associated with the RSS feed. After that, the approval process begins. We'll report back when we hear from them.
Continuing the trend of naming services as literally as possible, the app is called Google Podcasts, and it will recommend shows that users may like based on their listening profile. Google is predicting a lot of growth in podcast listeners. The coolest part, for us anyway, is that the app comes integrated with Google Assistant, meaning you can search for and play podcasts wherever you have Assistant enabled. The company will sync your place in a podcast across all Google products, so if you listen to half a podcast on your way home from work, you can resume it on your Google Home once you're back at the house. Google is playing Apple with this move to integrate across its products, which seems smart. And Apple is playing no one in this movement. To be fair, Apple has had a podcast app for ages and played audio long before Google, which is exactly why its silence and relative lack of action is telling.
According to an article by The Verge, in the coming months, Google plans to add a suite of features to Podcasts that are powered by artificial intelligence. One feature will add closed captions to your podcast, so you can read along as you listen. Eventually, you’ll be able to read real-time live transcriptions in the language of your choice, letting you “listen” to a podcast even if you don’t speak the same tongue as the host.
If I had one complaint, it's that they have no plan to release it on iOS, which, with them becoming Apple, makes sense.
Thanks for listening. Leave a comment on Twitter @voicefirstlabs or Instagram @voicefirstweekly and let us know what you think about the format of the flash briefing or the content, or drop a question you would like us to discuss. Till next time!
In voice, as in life, there are no second chances to make a first impression, particularly at this stage in a technology that has not yet captured every user's attention. To increase engagement with your voice app, start with the welcoming experience.
Voice adds a new dimension to how customers interact with your service, brand, or content. A customer’s first interaction with your skill will leave a lasting impression, which is why it’s important to ensure your welcome experience is positive and memorable. Also, providing a guided experience is vital for both new and repeat customers. A welcome prompt (something as simple as “Welcome back to MySkill”) reinforces to the customer that they are in a skill experience, and that they correctly invoked the desired skill. Customers may not realize they invoked a skill and a generic question like “What would you like to do?” could cause confusion.
In addition to helping new customers have a positive experience, the welcome message is your first opportunity to establish your brand identity on Alexa and create a memorable first impression.
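The welcome guidance above can be sketched as a launch handler that greets new and returning customers differently. "MySkill" comes from the example prompt in the text; the isReturning flag is an assumption standing in for whatever persistence layer a real skill would use to remember a user.

```javascript
// A minimal sketch of a skill's launch (welcome) prompt.
// isReturning would come from stored user attributes in a real skill.
function welcomePrompt(isReturning) {
  if (isReturning) {
    // Reinforce that the customer invoked the right skill,
    // and offer a path back into the experience.
    return 'Welcome back to MySkill. Would you like to pick up where you left off?';
  }
  // First-time users get a short orientation instead of a generic
  // "What would you like to do?", which could cause confusion.
  return 'Welcome to MySkill. You can ask for today\'s briefing, or say help.';
}
```

The design choice worth noting: both branches name the skill explicitly, so the customer always knows they invoked the right experience.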
In a separate article by The Verge, "How to get voice-enabled apps right in the age of smart everything," there's an interesting observation: "It doesn't make sense to only study what people want from voice interactions using traditional UX research methods like lab-based usability studies, and yet that's what most brands are doing."
Voice is the platform for context. To better define what your users want or expect from your app, you need to observe them in their environment and in the context they are in. Forget content: for a voice-first world, context is king, and your welcoming experience is the door to that kingdom.
Thank you for listening and until next time. Find us @voicefirstlabs on Twitter, @voicefirstweekly on Instagram.
Adapted from bits of:
Your car will talk way before it flies. We were promised flying cars; instead we got talking, connected cars. And that might be a good thing: we are not ready for flying cars.
The late-2017 Smart Audio Report by NPR and Edison Research says that 64% of smart speaker owners are interested in having smart speaker technology in their car.
Mercedes-Benz recently announced voice control for its A-Class, B-Class, CLA, and GLA models. The app is available in 10 languages and can be used from your phone connected to the car. Featuring weather reports, navigation, and music playback, Mercedes built its own assistant, a decision that separates it from other manufacturers that have chosen Google's or Amazon's assistants.
Lexus also announced that the seventh-generation 2019 ES will come with Apple CarPlay and Amazon Alexa compatibility.
Tesla, which will have voice-controlled commands in its Model 3 according to a tweet by Elon Musk, is another car company announcing features with voice tech. Connected cars have arrived. We are not sure about all the implications they'll have, but this is clearly one application of voice that's hard to argue against. After all, where is it more convenient to use voice than when you are driving or your hands are busy? Until we get self-driving and flying cars, voice tech is here to command them.
Thank you for listening and until next time. Find us @voicefirstlabs on Twitter, @voicefirstweekly on Instagram, and our website is voicefirstweekly.com/flashbriefing. That's voicefirstweekly.com/flashbriefing.
In one of last week's flash briefings we talked about how businesses should pay attention to voice search. The rising trend of voice search poses a new challenge for marketers and search engine optimization practitioners: optimizing for voice search.
For industrial marketers, the advent of voice search could change how their potential customers look for parts and services. Imagine a procurement manager who holds up his phone and shouts out a keyword like this (formatted the way a search engine would process it): "Ok Google, search for aluminum suppliers." Or maybe it would be "Alexa, show me CPVC pipe manufacturers."
Here are some real keywords that illustrate how people in the manufacturing and industrial world are voice searching. In these examples, they are attempting to find solutions for sound- and noise-related problems.
Here are two about finding companies that provide cranes and related equipment:
Some of the queries are structured exactly like standard typed searches, while others are more conversational. Manufacturing companies will need to account for both styles to fully succeed in optimizing their content for search engines. This means you have to pay close attention to what users are searching for now, so you can optimize your content for it.
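One reason spoken and typed queries differ is that a voice query arrives wrapped in invocation phrases and conversational filler. A toy sketch of stripping those prefixes to recover the underlying keywords, using the example utterances above; the prefix patterns here are illustrative, not what any search engine actually does internally:

```javascript
// Illustrative invocation prefixes heard in voice queries.
// Real assistants strip these before the query reaches a search engine.
const INVOCATIONS = [
  /^ok google,?\s+(search for\s+)?/i,
  /^alexa,?\s+(show me\s+)?/i,
];

function normalizeVoiceQuery(utterance) {
  let query = utterance.trim();
  for (const pattern of INVOCATIONS) {
    query = query.replace(pattern, '');
  }
  // Lowercase so the remaining keywords match typed-search analysis.
  return query.toLowerCase();
}
```

Running both example utterances through this yields the same bare keywords a typed search would produce, which is why keyword research done on typed queries still matters, while the conversational remainder ("show me", question phrasing) is the part voice-specific optimization has to add.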
Sources referenced: https://blog.thomasnet.com/voice-search-optimization-industrial-marketing?hs_amp=true&__twitter_impression=true
You've decided to enter the voice tech and smart speaker market. You want to be ahead of the game, and you took the leap to create Alexa skills and Google Actions. Now what?
As a brand, you should be considering your strategy for voice platforms and how consumers can reach you in those channels. Begin by defining a persona for your brand: how will it sound, what language will it use, will it be relaxed or formal? Even "good morning" versus "hello" makes a difference. Then, in which channels should you launch first? Should it be smart speakers, and if so, which first: Amazon Alexa, Google, others? Voice actors or the platform's voice? Chatbots, or all at once? And when you finally deploy your conversational solution in front of users, two more questions arise. One is measuring how users are interacting with your chatbot, skill, or action: you should look into this as soon as possible to see how users are reacting to your conversational design, whether they are getting frustrated, and how you can improve the interaction. The second, and most important, is what role the data collected from users' conversational interactions with your service will have in your decision-making process. Part of the answer depends on the goals and key opportunities you have set, and part you'll need to discover along the way, but you have to be willing to. This is the time for experimentation, to figure out what the best return on voice technology is for your company.
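The measurement step described above can be sketched very simply: count which intents users hit, and watch the fallback rate as a proxy for frustration. The in-memory store and the "AMAZON.FallbackIntent" key (Alexa's built-in catch-all intent) are used for illustration; a real deployment would ship these events to an analytics service.

```javascript
// Toy in-memory tally of intent invocations.
const counts = {};

function logInteraction(event) {
  // Tally each intent by name; unknown events get their own bucket.
  const key = event.intent || 'UNKNOWN';
  counts[key] = (counts[key] || 0) + 1;
}

function fallbackRate() {
  // A rising fallback rate suggests your conversational design
  // is confusing users and needs rework.
  const total = Object.values(counts).reduce((a, b) => a + b, 0);
  return total === 0 ? 0 : (counts['AMAZON.FallbackIntent'] || 0) / total;
}
```

Even this crude metric answers the first question in the text (are users getting frustrated?) and produces the raw data for the second (what role it plays in your decisions).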
Thanks for listening!
Did you know that Alexa can now run on the Apple watch?
Thanks to Voice in a Can, a standalone Apple Watch app, which means you don't need to tether to an iPhone to use Alexa.
Apple has restricted the Apple Watch so that only Siri, and no other digital assistant, is easily accessible, but Voice in a Can uses a watch complication to make it easy to launch the app from the watch face. As this is a third-party Alexa app, it also means you won't be able to use it to make calls, play music, or do Echo announcements. However, it fully supports Alexa's smart home features: you'll be able to trigger your lights or other devices, like an Alexa-enabled coffee machine, while you're out of the house. This is the best alternative out there for Alexa on your wrist, since Amazon still does not support the Apple Watch. You can buy the app for $1.99 on the App Store.
Despite security and privacy concerns, there's no doubt about the convenience of using voice for everyday money management.
Big banks and financial companies have started to offer banking through virtual assistants, Amazon’s Alexa, Apple’s Siri, and Google’s Assistant, in a way that will allow customers to check their balances, pay bills and, in the near future, send money just with their voice. Regional banking giant U.S. Bank is the first bank to be on all three services, Alexa, Siri and Assistant.
Bank of America has Erica, a voice-activated virtual financial assistant, which just recently surpassed one million users according to a press release on BusinessWire.
Other financial companies have set up virtual assistant features. Credit card companies Capital One and American Express both have Alexa skills that allow customers to check their balances and pay bills. There are other smaller banks and credit unions that have set up Google Assistant or Alexa as well. Conversation.one, a service for building conversational interfaces, reached an agreement with fintech giant Finastra to integrate their platforms. This allows Conversation.one to extend its voice solution to Finastra's 9,000+ financial institutions. It also means a lot more banks and credit unions will be able to offer omnichannel customer experiences without having to write a single line of code.
The biggest challenge today for voice and finance is privacy. I think convenience will beat privacy, and eventually users will get used to the risks or we'll come up with a suitable solution. Until then, have you used any of these services? Let us know what you think @voicefirstweekly on Instagram.
Image from unsplash.com