
Recap of recent important Google Assistant updates

This week Google Assistant had several updates important enough to make it into my episode notes. Let’s start reviewing them.

Monetization

First, monetization: just a week after Amazon announced the consumables API for Alexa, Google released the ability to sell digital goods, a way for Action creators to offer digital subscriptions and goods to consumers. The novelty is in the digital part, as the API already supported physical products.

As a follow-up, the Google team also introduced a new sign-in service for the Assistant that lets users log in and link their accounts. According to TechCrunch, Starbucks has already integrated this feature into its Assistant experience to give users access to their rewards account, and adding the new Sign-In for the Assistant has almost doubled its conversion rate.

Book rides with Lyft or Uber

Starting this past Thursday, Google Assistant users can book rides through Uber or Lyft. Siri and Alexa have let you book Uber rides for some time now, and Alexa supports Lyft as well. Considering that Google has the maps advantage, or, really, the ecosystem-of-apps advantage, it was about time.

Voice Access publicly available, an important step for accessibility

The next update is Voice Access, made publicly available this week. The app is tied directly to the Google Assistant and is designed to make it easier for users to handle a wider range of tasks via voice commands instead of manual actions. It accomplishes that by essentially translating a button push, page scroll or item selection into a voice command that Google Assistant can easily follow.

The Google Assistant mobile app gets more visual, aligning with smart displays

Google Assistant also got more visual last week, with a redesign of the Assistant experience on phones that brings more and larger visuals. The update aligns the phone experience with what users were already seeing on smart displays.

In addition to the visual redesign of the Assistant, Google also announced a number of new features for developers. Unsurprisingly, one part of this announcement focuses on allowing developers to build their own visual Assistant experiences. Google calls these “rich responses” and provides developers with a set of pre-made visual components that they can easily use to extend their Assistant actions.
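To make that concrete, here is a rough sketch of the kind of webhook payload a developer could return to add a card to an Action, assuming the Dialogflow webhook format for Actions on Google; the card content, URLs and field values are illustrative only.

```python
# Rough sketch of a rich response in a Dialogflow webhook reply for
# Actions on Google; the text, card content and image URL are made up.
webhook_response = {
    "payload": {
        "google": {
            "expectUserResponse": True,
            "richResponse": {
                "items": [
                    # Spoken/text answer shown and read by the Assistant.
                    {"simpleResponse": {"textToSpeech": "Here is today's episode."}},
                    # Pre-made visual component: a basic card with an image.
                    {"basicCard": {
                        "title": "VoiceFirst Weekly",
                        "formattedText": "A daily briefing on the voice space.",
                        "image": {
                            "url": "https://example.com/cover.png",
                            "accessibilityText": "Show artwork",
                        },
                    }},
                ]
            },
        }
    }
}
```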

Acquisitions

Finally, Google acquired Onward, an “AI-powered chatbot startup that creates a range of automated customer service and sales tools for business usage.”

That’s all for today. I missed some, but I think these are the most important ones, and they show how quickly Google is moving in the space head on. Exciting times to be alive. I hope you all have the best day.

I’m Mari, your host for the VoiceFirst Weekly daily briefing. You can find me on Twitter as @voicefirstlabs and on Instagram as @voicefirstweekly. You have a great day and I’ll talk to you tomorrow.

Chatbot solving hard problems for students in Australia

The University of Adelaide built a chatbot to solve the dreaded problem of thousands and thousands of students receiving their Admission Rank on the same day. The chatbot was built using only Oracle Autonomous Mobile Cloud Enterprise, a sheet with the formulas for calculating the admission rank, and a list of high schools whose students are eligible for bonus-points schemes. It’s a very precise task that the bot performed very well, and it saved the university staff from thousands of calls from students wanting to know their bonus points on the same day. The chatbot was deployed to Messenger, because that’s where students are: “We capitalized on the engagement we were already having across Facebook.” That, in turn, encouraged students to share their scores on the Facebook platform.
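To give you an idea of how precise (and how simple) the task really is, here is a hypothetical Python sketch of the kind of lookup-and-adjust calculation the chatbot automates; the school names, point values and cap are made up, not the university’s actual formulas.

```python
# Hypothetical reconstruction of the bonus-points lookup the bot performs;
# the schools, point values and the rank cap below are invented examples.
BONUS_SCHEMES = {
    "Example High School": 5,        # eligible for a 5-point scheme
    "Another Secondary College": 3,  # eligible for a 3-point scheme
}

MAX_RANK = 99.95  # selection ranks are typically capped


def adjusted_rank(raw_rank: float, school: str) -> float:
    """Apply whatever bonus-point scheme the student's school qualifies for."""
    bonus = BONUS_SCHEMES.get(school, 0)
    return min(raw_rank + bonus, MAX_RANK)


print(adjusted_rank(88.40, "Example High School"))       # 93.40
print(adjusted_rank(88.40, "School with no scheme"))      # 88.40
```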

The prospect management team was shocked at how many students embraced the chatbot in its first year. “We would have been happy with 200,” Cherry says. “Instead, the chatbot held 2,100 unique conversations with students.”

A very illustrative example of conversational done right: built for a specific task, with the users and context first, resulting in a win for students and university staff alike.

The future is without doubt conversational and has a voice.

Parenting with Alexa and smart assistants

Voice assistants have been on parents’ minds almost since the beginning: from a kid who prayed to Alexa, to parents wanting the assistant to require a thank you, prompting a response from Amazon. The latest update to Alexa addresses tough questions, saving parents difficult conversations with their kids (this is, of course, an overstatement). A recent update to the virtual assistant comes with a new mode called FreeTime in which Alexa knows to answer certain questions — like, say, ‘Where do babies come from?’ — differently. In instances like that, Alexa can now simply respond ‘ask a grown-up.’

Amazon even worked with child psychologists on some of the new answers that Alexa can give. According to the company, Alexa is constantly using questions and requests to ensure that she’s “always getting smarter.”

“Alexa isn’t intended to be a replacement parent or caregiver,” the company said in a statement. “So we believe it’s important we treat these answers with empathy and point the child to a trusted adult when applicable.”

Going one step further, parents can also set times during which Alexa will tell kids “Sorry, I can’t play right now, try again later,” should they attempt to use the assistant when they should be sleeping or doing homework. The FreeTime update will even encourage kids to say please and thank you before and after asking for things.

When asked “Alexa, what happens when you die?”, she replies with: “Sorry, I’m not sure.”

FreeTime can be turned on using the Alexa app and is available on the Echo, Echo Plus, and Echo Dot devices.

Alexa, saving parents since 2018. 


I’m Mari, your host for VoiceFirst Weekly daily briefing, you can find me on Twitter as @voicefirstlabs and on Instagram as @voicefirstweekly. Subscribe, like, share, engage. This is what we are here for! You have a great day and I’ll talk to you tomorrow.

Skill connections in Alexa

Amazon announced yesterday that developers will be able to start a request in one skill and have it fulfilled in another. The feature is officially called Skill Connections and is in preview. As part of the announcement blog post, Amazon outlined examples of the connections available, like printing with HP, scheduling a restaurant reservation with OpenTable, and scheduling a taxi with Uber. It’s a way to pass information between skills to simplify customer tasks. For now, users can only use printing with HP and reservations with Uber and OpenTable.
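To make the idea concrete, here is an illustrative sketch of the kind of directive a requester skill might return from its webhook to hand a task off to another skill; the directive type, connection URI and payload fields are my assumptions for illustration, not necessarily the preview’s exact contract.

```python
# Illustrative shape of a skill-to-skill connection: the requesting skill
# returns a directive asking another skill (here, a printing provider) to
# fulfill the task. Directive name, URI and payload fields are assumptions.
print_request = {
    "type": "Connections.StartConnection",
    "uri": "connection://AMAZON.PrintPDF/1",   # assumed task identifier
    "input": {
        "title": "Apple pie recipe",
        "url": "https://example.com/recipes/apple-pie.pdf",
    },
    "token": "user-session-1234",              # echoed back when the task completes
}

skill_response = {
    "version": "1.0",
    "response": {
        "directives": [print_request],
        "shouldEndSession": True,
    },
}
```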

Skill Connections is launching as a developer preview. This was highly anticipated, and I’m sure it will be well regarded by developers and users.

What is the biggest takeaway from this feature? For me, it’s marketing. Once more skills can invoke others, imagine the ramifications: you could pass directly from a game to sponsored skill content and then back to your game again, giving both brands and developers a medium for awareness and monetization. Of all the announcements the Alexa team has made recently, this one is set to have the biggest impact on skill discoverability. Keep a close eye on this one and on the first applications built with it.

I’m Mari, your host for VoiceFirst Weekly daily briefing, you can find me on Twitter as @voicefirstlabs and on Instagram as @voicefirstweekly. Subscribe, like, share, engage. This is what we are here for! You have a great day and I’ll talk to you tomorrow.

Omega voice assistant and Surface headphones

A Will.i.am-backed startup presented Omega, its smart assistant, at the recently concluded Dreamforce conference. Omega is set to compete with Alexa and Google Assistant, although it is heavily geared towards music fans: in the presentation the assistant was shown playing tracks on request, then providing information about the artist playing and any upcoming gigs or live performances. It seems like Spotify for smart assistants. Given this music focus, we can safely predict that Omega is likely to show up built into Bluetooth wireless headphones. Speaking of which, Microsoft unveiled several Surface devices, among them the Surface Headphones, with Cortana (obviously) and automatic pause and play. The headphones are set to be available later this year. The Omega smart assistant and Microsoft’s own Surface Headphones are telltale signs of companies capitalizing on voice technology, or at least fighting for a place in consumers’ hearts.

You see, my friends, this voice space might have been called a fad, but it is nothing of the sort. Look at the different angles from which players are coming into the space and draw your own conclusions.

I’m Mari, your host for the VoiceFirst Weekly daily briefing. You can find me on Twitter as @voicefirstlabs and on Instagram as @voicefirstweekly. You have a great day and I’ll talk to you tomorrow.

Reboot the web with voice

Last week or earlier this week, the creator of the World Wide Web, Sir Tim Berners-Lee, presented a project he has been working on in stealth mode: Solid, an open source project with, in its author’s words, “the goal to restore the power and agency of individuals on the web”, based on Linked Data and the web annotation ontology. Why am I talking about this? In the Fast Company interview, Berners-Lee referred to a voice assistant built on the same principles: free, open source and decentralized. Its code name is Charlie, and the focus is on people owning their own data. He goes on to explain how Charlie could be more useful in areas like health records, children’s school events or financial records. I talked in a previous episode about smart assistants that are alternatives to the most popular ones backed by the big companies, among them Snips, a smart assistant whose differentiator is privacy built in.

Whether Tim Berners-Lee’s offering of a decentralized web will catch on, time will tell. But developers will also tell. Berners-Lee said that “developers have always had a certain amount of revolutionary spirit”, and this could be the opportunity for developers to start something new and participate in a bigger mission, with an appeal to freedom and to taking control of the web back from corporations.

There are several aspects of the announcement worth noting; most of them are outside the scope of this podcast. For the topic we care about, it’s really telling that when Berners-Lee decided to create a new platform for the web, voice was in the strategy from the start.

The intent is world domination

Said the WWW creator in an interview with Fast Company.

It seems I keep missing Mondays these past two weekends! I have been traveling, and it’s harder to find a quiet place to record when you are traveling. As always, share these episodes and have a nice day. I’ll talk to you all tomorrow!

Amazon Alexa and sports stats

Amazon’s Alexa is getting smarter about sports. Just in time for the NFL season, Amazon has been stuffing Alexa full of sports knowledge. It can tell you the odds of the next NFL game and give you an update on your favorite teams. In the near future, Alexa will be able to give fantasy football fans updates on their players and alert users when their teams are about to take the field.

Sports-related questions have become some of the most popular ones to ask Alexa in recent years, Jason Semine, principal product manager for sports information on Amazon’s Alexa team, said in an email.

Business Insider reports that the Alexa team has also been working to ensure that the intelligent assistant is able to respond to questions about sports events as they happen and to understand the context of particular inquiries.

“Our long-term goal is for Alexa to understand and be able to answer all questions, in all forms, from anywhere in the world.”

As you see, Amazon wants Alexa to be everywhere and to know everything. This is what having big goals means.

In recent weeks, Amazon has added a slew of new sports-related features to Alexa. Among them:

  • Answers to an assortment of trivia-related questions relating to sports history, records and statistics.
  • Updates on the latest injuries and transactions involving individual players or teams.
  • Predictions on who will win upcoming games, including the latest betting line.

But more importantly, Amazon will be more and more present in your home, inserting into your life the need to ask its smart assistant all types of questions, with an emphasis on sports, which unites (or divides) us but certainly always requires some data in the conversation. Pretty smart, if you ask me.

Happy Sunday, and I’ll talk to you tomorrow.

Cortana Skills Kit for Enterprises

The same week Salesforce announced its Einstein voice assistant ahead of Dreamforce, its annual gathering, Microsoft launched the Cortana Skills Kit for Enterprise at Ignite, its own annual gathering, to help businesses create custom voice apps for their employees and users. This was expected, given the company’s strong stance on the enterprise. I have mentioned before that Microsoft’s strategy in voice tech at this time seems to be more focused on providing AI tools to developers, and, again, you know what I’m going to say: developers are the currency of today’s platforms.
The Cortana Skills Kit is currently available by invitation only. Invitations for companies and developers will be made available in the future.
According to a programme manager on Cortana’s team, the platform is powered by the Azure Bot Service and leverages Language Understanding from Azure Cognitive Services, allowing developers to create company-specific skills for Cortana using known and trusted tools.
As a proof of concept, IT developers at Microsoft used the enterprise platform to create an IT help desk skill that enables Cortana to file tickets for employees who are having computer problems and connect them to someone who can help.
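As a sketch of how the intent-detection piece of such a skill backend could work, here is a minimal example that resolves an employee utterance to an intent with Language Understanding (LUIS); the region, app ID, key and intent name are placeholders, not Microsoft’s actual help desk implementation.

```python
# Minimal sketch: resolve an employee utterance to a LUIS intent and branch
# on it. The endpoint region, app ID, key and intent names are placeholders.
import requests

LUIS_ENDPOINT = "https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/<app-id>"
LUIS_KEY = "<subscription-key>"


def detect_intent(utterance: str) -> str:
    """Ask LUIS for the top-scoring intent of an utterance."""
    resp = requests.get(
        LUIS_ENDPOINT,
        params={"q": utterance, "subscription-key": LUIS_KEY},
    )
    resp.raise_for_status()
    return resp.json()["topScoringIntent"]["intent"]


if detect_intent("my laptop won't boot") == "FileHelpDeskTicket":
    print("Filing a ticket and connecting you to someone who can help...")
```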

How voice is changing software development

I’m very passionate about the process of software development and how it affects the product cycle. I read a blog post about people with physical disabilities using voice tools to program, so I was excited when I found this article about the influence of voice technology on software development. Let’s look at some of the ways software development is being affected by voice.

  • Development agencies are seeing clients request more and more voice capabilities, as well as old applications being upgraded with voice-activated functionality. But at the same time, a bad interaction is worse than none at all. Lars Knoll, CTO at The Qt Company, has said that “A badly done voice integration is probably leading to a worse user experience than not having one at all.”
  • The second is taking into account the differences between GUI design and voice interface design. Voice-oriented developers will increasingly need to understand the basics of pattern recognition and machine learning. That being said, the underlying development principles remain pretty much the same, as does the need to know popular programming languages. In a GUI, the user’s eyes and mouse movements have been trained over years of behavior. As a result, voice development is as much a product design challenge as an engineering problem.
  • The third way voice is changing development is the fragmentation of voice platforms. You’ll need to make choices about developing independently for each platform (and setting your priorities) or taking more of a one-size-fits-all approach. I talked about this in the Voice Space Fragmentation episode (https://voicefirstweekly.com/flashbriefing/69/).
  • B2B companies can’t ignore voice work much longer. With Salesforce’s announcement of the Einstein voice assistant for its platform, plus the Microsoft Cortana Skills Kit, plus voice assistants becoming more pervasive in users’ homes, cars and mobile devices, there is only so long before users start asking: can I do it with my voice? Or why can’t I do it with my voice?

There is also an increase in development tools for coding by voice. In combination with advances in speech-to-text recognition, we are seeing a new wave of tools where voice commands act as actions and speech-to-text handles the actual code input. Nature has an interesting article I recommend you read in full, Speaking in code. It outlines how programmers working on the Genome Aggregation Database (gnomAD) at MIT, which is used to explore genomic data, are using voice coding to build web applications. “These applications share data from some of the largest sequencing studies in the world.”

Coding by voice command requires two kinds of software: a speech-recognition engine and a platform for voice coding. Dragon, from Nuance, a speech-recognition software developer in Burlington, Massachusetts, is an advanced engine widely used for programming by voice, with Windows and Mac versions available. There are also VoiceCode and Caster, the latter free and open source. These tools are not new; there’s a video from PyCon 2013 demonstrating voice-driven coding. However, the learning curve of voice coding is reportedly steep: you have to learn all the commands, which turn out not to be that natural, and if you have any throat problems it can become a challenge as well.
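To give you a feel for what those commands look like, here is a minimal sketch of the kind of command grammar that platforms like Caster build on top of the Dragonfly Python library, which in turn talks to an engine such as Dragon; the spoken phrases and the text they emit are illustrative, not any tool’s built-in command set.

```python
# Minimal sketch of a voice-coding grammar using the Dragonfly library.
# Each mapping pairs a spoken phrase with the keystrokes/text it emits;
# the phrases below are illustrative examples, not Caster's real commands.
from dragonfly import Grammar, MappingRule, Dictation, Text, Key


class PythonEditingRule(MappingRule):
    mapping = {
        # saying "deff function <something>" types "def <something>():" and a newline
        "deff function <name>": Text("def %(name)s():") + Key("enter"),
        # saying "assign <something>" types "<something> = "
        "assign <name>": Text("%(name)s = "),
        # a fixed phrase that types an import line
        "import requests": Text("import requests") + Key("enter"),
    }
    extras = [Dictation("name")]  # free-form dictation captured as <name>


grammar = Grammar("python editing")
grammar.add_rule(PythonEditingRule())
grammar.load()  # becomes active once the speech engine (e.g. Dragon) is running
```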

On the bright side, users report thinking things through very carefully before dictating any code, and we can be sure that is a big benefit.

Voice technologies will soon be part of the development process and not only a result of it.

Is Podcast media dying?

Happy Thursday! Today is newsletter day! Every Thursday at 9:50 Pacific Time, our weekly issue is sent to your inbox. It’s filled with my commentary on the best of the week and things I find interesting in conversational interfaces, voice technologies, and voice strategy and branding, even when they didn’t make mainstream media. If you haven’t subscribed yet, do so at voicefirstweekly.com.

Alright, moving on to today’s business.

BuzzFeed is cutting its podcast unit to focus on shows. The news traveled fast, with chatter on Twitter about the layoffs the same day last week, according to an article by The Wall Street Journal. BuzzFeed’s decision to cut its original podcasting staff comes in conjunction with a similar decision made the previous week by the audio company Panoply. Audible Originals, the podcasting unit run by Amazon.com Inc.’s Audible audiobooks division, also laid off several employees earlier this year.

Apparently BuzzFeed is having problems keeping up with investors’ expectations amid the advertising turmoil the industry is suffering through right now. Vice President of News and Programming Shani Hilton said in a memo to staff, also reported by The Wall Street Journal:

We’ve decided to move to a production model that is more like our TV projects — that is, treating shows as individual projects, with teams brought on as needed.

These developments come as Audible and BuzzFeed reshape their original programming strategies, opting for more short-form programming or more Netflix-style shows.

In contrast, the NYT Voyages issue will feature stories told through audio that correspond with full images without captions in the magazine:

For the first time, the Times has produced a bonus crossword puzzle in which more than half the clues will be audio clues.

Note that the company is experimenting with audio, not podcasts per se.

The founders of Gimlet Media, the narrative podcasting company, said in an episode of the Recode podcast that the company is expanding beyond original audio shows into two newer businesses: film and TV adaptations of its podcasts, such as a TV series reworking of the show Homecoming, now starring Julia Roberts, that will hit Amazon this fall; and branded podcasts, which are completely underwritten by a single sponsor.

Does this mean that podcast media is dying in favor of short-style stories or shows? What does this tell us about the state of podcast monetization at scale? Today I’ll leave you with more questions than answers, partly because then I can come back and do another episode with the answers!

The truth is that the podcast industry is shifting, and we need to watch how that plays out as content is created for smart assistant platforms.

Here are more resources to complement this subject:

You have a great day and we will talk tomorrow!
