Episode Archives
Flash briefing 51 – Can Alexa help your guests at your wedding?
Why ask a friend to guide your guests to their tables when Alexa can do it for you?
Guests at a recent Donegal wedding were treated to a taste of things to come in contemporary nuptials when Amazon’s Alexa debuted her wedding-planning skills for a couple at Harvey’s Point, courtesy of one of their friends, who is studying computer science at Queen’s University. The skill allowed Alexa to give guests their table number when they told her their name.
She even added a little sass to her responses.
The creator noted that it’s not a skill available in the Alexa skills store, as it’s heavily customized for each event. It’s an interesting use case for a voice application: anything repetitive can, and will, be answered by a smart assistant in the future.
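A table-lookup skill like this is simple at its core. The sketch below is a guess at the general shape, not the student’s actual code: the intent name `TableNumberIntent`, the slot name `guestName`, and the seating data are all hypothetical, and the ASK SDK plumbing that would normally wrap this handler is omitted to keep the example self-contained.

```python
# Hypothetical per-event seating chart; hard-coded because the skill
# is customized for each wedding rather than published to the store.
SEATING = {
    "anna murphy": 4,
    "brian doyle": 7,
    "ciara walsh": 4,
}

def table_for(name: str):
    """Look up a guest's table number, case-insensitively."""
    return SEATING.get(name.strip().lower())

def handle_table_number_intent(slots: dict) -> str:
    """Build Alexa's spoken reply from the resolved 'guestName' slot."""
    name = slots.get("guestName", "")
    table = table_for(name)
    if table is None:
        return f"Sorry, I couldn't find {name} on the guest list."
    # A little sass, as reported from the actual wedding.
    return f"{name}, you're at table {table}. Try to behave yourself."
```

The per-event customization the creator mentions would mostly live in that `SEATING` dictionary, which explains why a generic store listing wouldn’t make sense.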
Commentary on a news story that appeared here.
Thank you for listening, have a great day, and we’ll talk tomorrow.
Flash briefing 50 – Snapchat launches speech recognition lenses
We have several big company stories closing out the week. Apple hit a $1 trillion valuation; whoa, this was expected to happen eventually, but it’s big news. WhatsApp released their business API, following reported tensions between WhatsApp’s founders and Facebook’s top executives over platform monetization. But what I want to concentrate on today is Snapchat.
Snapchat launched new lenses two days ago that respond to voice commands. Lenses are animations on top of your photos and videos in the Snapchat mobile app. While the company has offered lenses that involve audio before, this is the first time it has created lenses that actually recognize words, then use that understanding of what was said as a marker that kicks off the lens animation.

This is probably the biggest news out there this week, besides Apple’s valuation. Snapchat is entering the voice-first world quietly but surely, and they have a strong young audience leaning in to camera-first and now voice-first. I will be even more excited if they say that we can build applications, or ‘lensations’, or however they decide to name them. What we hear about most in the voice technology world is voice assistants, with stats about adoption and competition between manufacturers. However, there are a lot of applications of voice technology that are not necessarily assistant-driven, and Snapchat is the first company showing the way on voice-first outside of the assistant. This is also a release that can help make voice commands and interactions more widespread and accepted. I think this is a trend to keep paying attention to.

Have a nice weekend; we have an episode coming out tomorrow. Before wrapping up this episode: we sent this week’s newsletter yesterday. If you haven’t, subscribe at voicefirstweekly.com. We have also made these episodes available as Alexa flash briefings in India, on Spotify, and on Google Podcasts. You can find us everywhere!
Happy Friday and we’ll talk tomorrow
Flash briefing 49 – Voice, language and user interfaces
The duality between voice and visual interfaces basically comes down to the age-old question of visual communication versus audio communication. “Language need not have started in a spoken modality; sign language may have been the original language. The presence of speech supports the presence of language, but not vice versa.”
This shift is leading companies to ask whether they should continue to invest in the visual interface, or whether budgets should shift to voice.
How does designing UX and UI for sound/voice change our role and the tools we use?
How prepared for the shift is our industry?
What effect will chatting to a machine have on language? We’re all well aware of the effects texting and instant messaging have had! LOL
If this trend is anything to go by, will we need to develop a sound version of icons and emojis? An audio version of shorthand?
Still, with this increased utilisation of the senses, how long will it be before the other two, taste and smell, get in on the action?
Are we moving towards a world that is augmenting our senses?
Brain-Computer Interfaces Are Already Here
And now Mr Musk has entered the fray, by funding ‘medical research’ startup Neuralink, which is developing neural laces.
Commentary on a voice vs human interfaces article.
Flash briefing 48 – Amazon Alexa introduced customer contact access for skills
Amazon announced yesterday that you can now request customer permission to access customer contact information using the Customer Profile API. Once a customer consents, you can access certain contact information in their customer profile and use it to deliver a more personalized experience and provide additional information to your customers. For instance, you could use the name to request a service and then send a confirmation to the user. You can also use text or email to provide additional information that’s not easily shared through voice. With contact information, skill builders can now request:
- Full Name
- Given Name (First Name)
- Email Address
- Mobile Number
Requirements to use contact information
To use the contact information, skills must meet certain requirements: they must have a privacy policy, they cannot be child-directed, and they must not use the customer information to link accounts in the background. Amazon is showing its concern for privacy and user information; restricting skills from using contact information to identify the user in real life protects the user’s data.
Prior to this announcement, Alexa skills could only get customer contact information through account linking, which introduced a lot of friction and was generally avoided. The fact that skills can now access contact information directly can enhance the skill experience and drive new use cases and engagement. You can now send follow-up content or reminders through email and text.
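In practice, the Customer Profile API is a set of REST endpoints your skill’s backend calls with the per-request API access token. The sketch below, using the third-party `requests` library, shows the general shape; the `api_endpoint` and `api_access_token` values come from the skill request envelope (`context.System`), and a 403 response here is treated as the customer not having granted the permission, which is an assumption about how you would want to handle that case.

```python
# The four contact fields the announcement lists, mapped to their
# Customer Profile API setting paths.
PROFILE_PATHS = {
    "fullName": "/v2/accounts/~current/settings/Profile.name",
    "givenName": "/v2/accounts/~current/settings/Profile.givenName",
    "email": "/v2/accounts/~current/settings/Profile.email",
    "mobileNumber": "/v2/accounts/~current/settings/Profile.mobileNumber",
}

def profile_url(api_endpoint: str, field: str) -> str:
    """Build the full URL for one profile field."""
    return api_endpoint.rstrip("/") + PROFILE_PATHS[field]

def get_profile_field(api_endpoint: str, api_access_token: str, field: str):
    """Fetch one consented profile field, or None if permission is missing."""
    import requests  # third-party; imported here so the rest works without it

    resp = requests.get(
        profile_url(api_endpoint, field),
        headers={"Authorization": f"Bearer {api_access_token}"},
    )
    if resp.status_code == 403:
        return None  # customer has not granted this permission
    resp.raise_for_status()
    return resp.json()
```

If `get_profile_field` returns `None`, the usual pattern is to respond with a permissions consent card so the customer can grant access in the Alexa app.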
Google has had the email feature for a while, and Amazon is catching up on the smart assistant feature set.
Thank you for listening!