I met Heidi at the Voice Summit in Newark; she is the CEO of AskMarvee, an Alexa skill for elder care. Heidi gave one of the best talks I attended at the conference. This weekend she tweeted that, while listening to a podcast with Brian Roemmele about Siri Shortcuts as jobs to be done, she mapped that idea to cares to be given. And it got me thinking: is it possible that all this voice is about caring? Because our voices are instruments of care. You feel cared for when your mom or a longtime friend talks to you, when you hear their voices, because you have their voices fixed in your brain. I think this is a curious message for brands out there: do you want your brand to trigger the emotional response that a friend triggers in you when you are in need of emotional support?
I don’t mean that the brand should be your dearest friend; I mean the trigger you want your brand to be for your users. You want to be the go-to for your users when they think of toilet paper, or whatever your product or service is. Think about this: what are the cares to be given by your company? And for voice applications tailored to patient care, elder care, and children’s development and learning, what are the cares to be given? Check out today’s episode of Alexa in Canada, where I talked to Teri Fisher about the future of voice technologies.
Thanks for listening, and we’ll talk tomorrow.
Hello there! I got asked several times at Voice Summit how we put our flash briefing on Google Assistant. It’s not as straightforward as on Alexa, and it requires some coding, but it can be done! So today’s episode is going to be a little meta: we are going to talk about the tools and processes we follow to produce VoiceFirst Weekly content. We basically have two mediums: writing and audio. We don’t do a lot of video for now, except what we make from the episodes’ content; that might change in the future.

It all started with a newsletter. The newsletter is a curation of what we consider worth sharing in voice technologies every week. It’s content curation. The amazing Nersa has set up an automated process that sends every tweet with the hashtags we are interested in to a spreadsheet. This file gets pretty big pretty fast, but most of it is retweets and quoted tweets. Then I come in, read the most relevant ones, read the articles, and select what makes the newsletter that week. Nersa reviews them, and so on.

For our audio content the strategy is a little different, as these shows are produced daily. We try not to say the same things in the podcast as in the newsletter, and most of the time we succeed. The format of the show is short episodes of about 2–3 minutes, always under 5, as I suspect you know by now. Once a week we gather the themes we want to feature that week, and based on those I write the commentaries. Say we decide to talk about retail and voice commerce, plus six more themes: we write an outline of what we want and then go and refine it. The other thing we do is gather questions people ask as we read Twitter or Instagram, and decide whether we want to give our two cents on them. We write out every podcast episode no matter how short; that way the transcript is already there, and we can make videos, tweets, and more content from it.
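For the technically curious, here is a tiny sketch of the kind of pruning step that turns a big tweet dump into a reviewable shortlist. This is illustrative only, not our actual pipeline, and the dictionary fields are made up for the example:

```python
# Illustrative sketch (hypothetical field names, not a real export format):
# prune a collected tweet dump down to original, on-topic tweets
# before manual review.

def prune_tweets(tweets, hashtags):
    """Keep only original tweets that mention one of our hashtags."""
    wanted = {h.lower() for h in hashtags}
    kept = []
    for t in tweets:
        # Most of the dump is retweets and quoted tweets; skip them.
        if t.get("is_retweet") or t.get("is_quote"):
            continue
        text = t.get("text", "").lower()
        if any(h in text for h in wanted):
            kept.append(t)
    return kept
```

From there, the human part of the process takes over: reading the articles behind the surviving tweets and picking what goes in the newsletter.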
We mainly publish everything on our website and on social networks.
That’s basically our process. I’m sure every one of you has a different process for coming up with content; share it with us if you like, or let us know if this was useful or if we can help you in any way with yours.
Thank you for listening!
Hello happy Sunday,
What a week this was for the voice fam. Voice Summit wrapped up on Thursday and it was an amazing experience: the talks, as well as meeting the leading voices in voice, pun intended, and putting faces to the Twitter handles and tweets. Let me tell you, it’s an amazing family. I feel so grateful to have met them. As happens when you have a lot of enlightening conversations, I’m back full of ideas, new projects, and collaborations. Stay tuned for great things to come! The most immediate one: I talked to Teri Fisher, host of the Alexa in Canada podcast. He did a Voice Summit special episode where each person he interviewed gave a one-minute view on the future of voice applications. He just announced on Twitter that the episode should be out soon. I also talked with the VUX.world podcast, so stay tuned for that as well.
I also met some of the folks from the Voice Entrepreneur community, and we had so much fun along with the learning. Can’t wait to be with these guys again! Sending you all lots of love.
Thank you for listening
There is going to be a voice app for everything. Alexa has a wine skill that helps you pair wines with food. Just say:
“Alexa, ask wine pairing which wine goes best with lamb”
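Under the hood, a skill like that boils down to a backend that maps the food you mention to a suggestion and wraps it in the Alexa custom-skill response envelope. Here is a minimal sketch; the pairing table and function name are invented for illustration, while the JSON shape follows the standard Alexa response format:

```python
# Hypothetical wine-pairing lookup wrapped in the Alexa custom-skill
# response envelope (version, outputSpeech, shouldEndSession).

PAIRINGS = {
    "lamb": "a Cabernet Sauvignon",
    "salmon": "a Pinot Noir",
}

def wine_pairing_response(food):
    """Build the JSON a skill backend would return for a pairing request."""
    suggestion = PAIRINGS.get(food.lower(), "a versatile sparkling wine")
    speech = f"With {food}, try {suggestion}."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```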
Today’s newsletter is dedicated to voice in healthcare. This week we learned that Rockpointe is launching a medical education program on Amazon’s Alexa for doctors, which they say is a first for the voice technology.
Thomas Sullivan, president of Rockpointe, said offering a continuing medical education program on Alexa provides more flexibility and easier access for doctors.
Sullivan recognized that doctors already spend a significant amount of time on their computers and said the program on Alexa could be an alternative. Recent studies have found doctors spend about half of their working time on computers. While many CME programs are online, typically webinars and online courses, Alexa may be a welcome alternative to more hours in front of the computer.
“It’s natural this would be a great tool for education,” Sullivan said. “It’s a good step for physicians and to start to think how do we use [this new technology] in our field. We have to look at all these new technologies as yet another tool.”
It won’t be long before every industry starts working on its voice integration. We talk here constantly about the need to start experimenting in voice and to have a voice strategy.
Catch our newsletter today; if you haven’t subscribed, go to voicefirstweekly.com to get the latest issue.
Rate us on iTunes, or wherever you listen to this podcast, or just hit reply and give us a shout-out. Thank you for listening!
My family has a history with blindness, or near blindness. My uncle has been fighting a disease that he knows is going to leave him blind, soon. One of my best friends is blind. Of all our technological advances to date, none have been particularly helpful for the blind or physically impaired. Navigating the internet today is a daunting task for my uncle or my friend. Booking flights, a relatively easy task for you or me, is for them a long process of calls with operators. Now imagine how voice applications can change that.
It may be that interacting with Alexa, a machine pretending to be human, is a daunting situation, especially for someone who missed the evolution of personal computing. But every other supposedly obvious technical interface has proved to require some prior knowledge or familiarity. People had to be trained to operate a mouse, for example; direct control of a cursor was awkward until it became habitual. The touch screen built on the mouse, replacing the pointer with the finger. Its accompanying gestures, flicking through a feed, pinch-zooming a map, or swiping right on a love interest, have come to feel like second nature. But none of them are actually natural.
Voice assistants appear to bypass that legacy, offering hands-free operation and new accessibility for those with limited mobility or dexterity. Yet they still require expertise. The way most of us talk to these devices has been shaped by our interaction with web and mobile search, making it query-like. For a person who didn’t live through that at all, it’s a foreign language.
Computers and mobile phones are so ubiquitous now that a life without them is a little more painful and, certainly for professional development, hindering. Smart assistants might seem unnecessary to some, a glorified Q&A speaker to others, but for those who do not have easy access to texting or web browsing, they are not only an answering tool but a facilitator. They allow them to communicate in a modern way and connect with people. To live fully means more than sensing with the eyes and ears; it also means engaging with the technologies of the moment, and seeing the world through the triumphs and failures they uniquely offer.
This episode was inspired by a story that appeared in The Atlantic, about a son recounting his father’s interactions with Alexa; you can find it in this episode’s notes at voicefirstweekly.com/flashbriefing.
Thank you for listening!
In May at VoiceCon, after the closing cocktail reception, the comment I heard the most was that this wasn’t totally new; most of what people heard that day they were already aware of, as we have been following the technology for a while now. Every now and then, on Twitter and in Facebook groups about voice applications, the same comment pops up: what’s the next big thing, is it voice in real estate, medicine, travel? People ask for examples of really successful skills or actions; investors ask for voice apps with engagement to put their money on.
There’s always new stuff out there, and most of it’s not very good. Rather than looking for the next musing, it’s probably better to be thorough about what we know is true and make sure we do that well. Our brains have a weakness for novelty. We thrive when we look at something new, curious, extraordinary. But we cannot have amusement all the time. We need to do the ordinary in order to get to the extraordinary.

This time in voice is a time of creation. Engagement is still low for the vast majority of voice skills and actions; the weather might be the most used voice application so far. That doesn’t mean you shouldn’t be looking for the next thing and staying aware of new services, but this is the time to build your dreams in voice. I have been saying this repeatedly, and it will become my motto for the rest of the year: this is the time to experiment. Go and experiment; hire that agency and build the conversational presence for your company or service. The voice strategy you plan today is going to give its results, true results, over a span of at least two years.

For instance, according to several informal polls and surveys on Twitter, people do not listen to Alexa flash briefings that often. Does that mean you shouldn’t have a flash briefing? No. We certainly have one; when you have content you can repurpose it to voice and be present, but be cognizant that the audience just isn’t there yet, and publish the same content on other audio and podcast platforms too. The bottom line: instead of focusing on the next big thing in voice, go and build your thing, play and understand where the audience is, and where it is likely going to be in the next 12 months.
Thank you for listening
Amazon and Microsoft call their assistants’ apps “skills,” and there’s a reason for it: the assistant is supposed to learn from the skills, to be enhanced by them. Can smart speakers enhance learning and education? At Arizona State University (ASU) this fall, 1,600 Amazon Echo Dots were distributed among engineering students in the school’s Tooker House, a residence hall on campus. Students at ASU are primarily using the Echo Dots as a tool for campus connection.
“Alexa, what’s happening on campus this week?” “Alexa, where is the science building?” “Alexa, when is my next Calculus exam?”
Students at ASU also found a practical application for the devices in the residence halls, proposing an Alexa skill that could report how many washing machines are available in the laundry rooms.
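The core of a skill like the one those students proposed is simple: count the free machines and phrase the answer for speech. Here is a hypothetical sketch; the data shape and function names are invented for illustration:

```python
# Hypothetical core logic for a laundry-availability skill:
# count free washers in a room and phrase the result for speech.
# The machine-status data shape here is made up for the example.

def free_washers(machines, room):
    """machines: list of {"room": str, "busy": bool} dicts."""
    return sum(1 for m in machines if m["room"] == room and not m["busy"])

def laundry_speech(machines, room):
    """Turn the count into a spoken sentence, handling singular/plural."""
    n = free_washers(machines, room)
    noun = "washing machine is" if n == 1 else "washing machines are"
    return f"{n} {noun} available in {room}."
```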
Educators from higher ed powerhouses like Arizona State University to small charter schools like New Mexico’s Taos Academy are experimenting with Amazon Echo, Google Home or Microsoft Invoke and discovering new ways this technology can create a more efficient and creative learning environment.
The devices are being used to help students with and without disabilities gain a new sense of digital fluency, find library materials more quickly, and even promote events on college campuses for greater social connection.
These applications are very practical, but the next challenge for voice in edtech is finding a way to integrate Alexa, Google Assistant, and other smart assistants more thoroughly into schools’ curricula.
Some professors are already experimenting with machine learning as a new type of virtual “assistant.” In early 2016, Ashok Goel, a Georgia Tech computer science professor, deployed IBM’s Watson platform to answer routine questions posed by students. Goel revealed the assistant’s “identity” at the end of the semester, and students were surprised to learn that a machine had been answering their questions.
Using machine learning for routine tasks — so instructors can spend more time on complex teaching activities — is one of the primary applications that educational technologists foresee in higher education.
How education can be transformed by intelligent assistants providing context-aware, personalized learning remains to be seen, and it’s the dream of every educator. Imagine the possibilities for learning when each student has a teaching assistant tailored to their learning needs. This is the new education revolution that Kevin Robinson and my mom dream of.
Thanks for listening!
Alexa, Google Assistant, Bixby, and Cortana all have something in common: each was created and backed by a big company. But they are not the only assistants on the market.
As conversational computing becomes more widely available, assistants that offer a different sort of vision of the future are emerging, offering counter-narratives and different technological approaches.
That’s why startups like Snips, which is bringing its own smart speaker to market, center their attention on one primary differentiator: privacy. As its CEO put it: “My goal is to destroy Alexa by providing a platform that is the exact opposite, that people are going to enjoy building on top of. If you really want to distinguish yourself from Alexa or Google, you have to do something that’s radically different. My job is not to catch up to Alexa. My job is to offer something that’s so different that people effectively don’t even compare. That’s my objective.”
Mycroft.ai is another intelligent personal assistant and knowledge navigator, built for Linux-based operating systems and using a natural-language user interface. Mycroft is free and open source, and is said to be the world’s first fully open-source AI voice assistant.
These startups are tapping into the surveillance fears people have about all the information hosted in the cloud, which can be a target for hacking. Regardless of whether smart speaker conspiracy theories are true, they’re having an effect: according to several studies, outlined in the show notes, one of the main reasons people give for not owning a smart assistant is privacy concerns. Also among the assistant alternatives is SoundHound, which raised $100 million in May. In the car, where Google is pushing Assistant in Android Auto and Amazon is installing Alexa in cars from Toyota and Ford, SoundHound is pitching its assistant and Hound platform as an alternative that lets brands deploy conversational AI without the need to say “Alexa” or “OK Google.” SoundHound does not present itself as a better choice based on privacy; rather, just like the more recognizable AI assistants, alternatives like SoundHound and Snips are competing to make their assistants available in a broad range of devices found in the home or workplace.
We cannot tell whether Snips or Mycroft will survive or be eaten by the big players in the conversational AI space, but the alternative is certainly appealing. We’ll be here to continue bringing you the latest on it. My opinion is that convenience will win, always.
Thank you for listening.
Some content was taken from this VentureBeat piece.
Really just writing to wish you a happy Sunday and a productive week. We are on the road today to the Voice Summit in Newark, very excited; I’m anticipating an awesome conference.