
Summaries of every one of our flash briefings

It’s time small businesses had big-business capability: the AI for small business

The voice community is amazing to watch as it evolves. I have been very fortunate to meet lots of people who are making an impact on voice technology and conversational interfaces. This episode is special because it features one of those startups impacting voice tech every day, and it is the first time the VoiceFirst Weekly show welcomes a guest. We are thrilled about how it turned out.

In this episode, I talk to Brendan Roberts, CEO of Aider, the AI assistant for small businesses. Aider is launching now in Australia and New Zealand, with plans for the US in 2019. Aider will help you answer questions like: What’s my top-selling product? What’s my revenue today? Who is meant to be working tomorrow, and what’s the weather going to be like? All this from your phone, taking your business context into account.

I met Brendan at the Voice Summit back in July after arranging a meetup of folks at the conference from the Voice Entrepreneur Community on Facebook. I got to see Aider firsthand and was really impressed by its capabilities. Aider integrates with several SaaS apps that small business users might be familiar with for sales, accounting, and client management, providing insights and learning from the user’s actions. What I found really impressive for an app of this type was its ability to carry a conversation across channels, whether voice or messaging.

Without further ado, please enjoy my talk with Brendan.

You can contact Brendan on Twitter or LinkedIn. You can also try Aider and sign up for Aider beta access.

Voice activated smart glasses available exclusively in stores

A showcase for smart glasses opened yesterday, marking November 12 as the day (per the company) that the world’s first smart glasses store opened. North is the Canadian company that develops futuristic HCI products. The company has raised over $135 million from investors, including the Amazon Alexa Fund.

Focals are the smart glasses the company is presenting, available exclusively in its stores in Brooklyn, New York, and Toronto since yesterday. The custom-made glasses feature a display that only the wearer can see. I’m not exactly clear on the technology behind this, as it isn’t explained on their webpage; I’m very curious and excited to learn more about it.

Focals include visual summaries, smart text and emoji, and voice-to-text. They also come with a navigation feature offering search, turn-by-turn directions, and the ability to hail an Uber.

The display is controlled with the Loop, a small finger ring with a tiny four-way joystick that’s included in the purchase, along with a case that doubles as a battery charger. The glasses sync with the user’s Android or Apple iOS device via Bluetooth.

Alexa anywhere

Focals come with Alexa built in. According to the showcase page, you can ask Alexa to play music, hear the news, see the weather, control your smart home, and more. I’m guessing you can do anything Alexa normally allows you to do.

The glasses also come with a function to pause it all when you don’t need them:

Technology that’s there when you need it, gone when you don’t – hidden by design.

Form plus function

The glasses come in stylish designs, a la Warby Parker, keeping the technology invisible until you need it. The store also sells the experience of the shopping process: you have to be custom fitted for Focals, because it’s crucial to understand how the technology looks and feels. As Adam Ketcheson, Chief Marketing Officer of North, told The Bridge: “It’s incredibly important for people to get a hands-on experience, especially at our price point. The entire retail model is so people can immersively understand what it is and get the right fit.”

Focals will be offered in a variety of styles at $999.

Smart glasses have been emerging and dying for a while now: Google Glass and Intel’s Vaunt shut down in 2015 and 2018, respectively.

What makes Focals different? The focus on design and style, rather than the geeky look of Google Glass, might be a compelling point. Focals are voice activated, but their first selling point is that the technology is there only when needed, otherwise looking like regular glasses. They are not advertised as a geeky technology gadget, but as a helpful companion.

As often happens with technology advances, timing might turn out differently for North’s glasses.

I’m waiting until my next trip to NYC to visit the store and try Focals. Let me know what you think on Twitter @voicefirstlabs or Instagram at voicefirstweekly. I’m Mari, this is the VoiceFirst Weekly flash briefing, have a great day, and I’ll talk to you all tomorrow. We have a special episode tomorrow with the first human guest on the show. Don’t miss it. See ya.

How receptive are smart speaker owners to advertising?

Survata’s September survey of 2,000 smart speaker owners in the US came with one surprising finding: Apple HomePod owners are more likely to be receptive to audio ads than anyone else.

According to the Survata data, as reported by Business Insider:

  • 35% of HomePod owners would be interested in hearing about sponsored products or services on their speaker.
  • Only 22% of Google Home owners said the same.
  • And just 17% of Amazon Echo and Echo Dot owners are receptive to ads on their speaker.

This shows a large chunk of people still don’t want to hear ads on their smart speakers, suggesting it would be an unpopular move if anyone introduced sponsored content any time soon. It’s also unlikely Apple would venture into sponsored-content territory, given it has shied away from targeting ads at users.

Survata market research president Dyna Boen explained the anomaly:

While adoption of Apple HomePod has thus far lagged behind Amazon Echo and Google Home, and thus makes up a smaller percentage of the sample, we still are seeing that these users are saying sponsored content ‘very positively impacts their smart speaker experience’ at a statistically significant level.

More on Bixby and SDC 2018

As they say, sometimes it’s better to wait to report on some news. I feel Wednesday’s episode on Bixby could have waited until yesterday, when I went to the Samsung Developer Conference and got more context and details. If you didn’t listen to that episode for some reason, here’s the summary: Samsung opened Bixby to developers, we were part of the developer beta program, and VoiceFirst Weekly now has a capsule. The words “game changing” were said. Perhaps I didn’t completely understand my own words on Wednesday. When I was at SDC yesterday, I realized Bixby is definitely and completely going to change the voice game. You might ask, didn’t you say you had already developed capsules for Bixby? Yes, we absolutely did. They are on the Bixby showcase page. The thing is, as I said, Samsung might be rushing a little to enter the race. As such, some things I saw firsthand at SDC were not promoted in the documentation; maybe they wanted to unveil them during SDC. The truth is, I saw a camera putting makeup on my face, then showing me a list of the same lipsticks or mascaras, and then a list of places where I could buy them, right there. I saw Bixby recognize a bottle of wine and show reviews and prices, and read signs and translate them right there from the camera.

This was all part of Bixby Vision, a Samsung Galaxy S8+ app (some features are S9+ only) powered by image recognition. I have read reviews saying Bixby is sometimes not as accurate as other smart assistants in speech recognition, but all these features, combined with the ability to learn, are a powerful point in Bixby’s favor. All of that is now open for developers to build on.

Among the other announcements at SDC were the marketplace for Bixby capsules coming in 2019 and the expansion to five new languages in the coming months. I think Samsung might have figured out multimodal commerce right there in front of everyone, without AR or VR, just the camera. Certainly Bixby is here to change the game, and we cannot ignore all the phones and appliances Samsung makes. They even own HARMAN, the market leader in connected-car solutions.

Bixby is coming to everything.

I’m Mari, this is the VoiceFirst Weekly flash briefing. You can find me on Twitter as @voicefirstlabs and on Instagram as @voicefirstweekly. Happy Friday, and I’ll talk to you tomorrow!

Here is a video of my interaction with Bixby Vision:


Samsung opens Bixby to third party developers

Samsung announced yesterday at its Developer Conference in San Francisco that the Bixby platform is now open to developers. The Bixby Developer Studio had until now been in private beta. Nersa, my cofounder at VoiceFirst Labs, and I were lucky enough to be in the beta program and contest for the creation of the first capsules (Bixby voice apps). I heard the name might change, and I’m happy about it; “capsule” is definitely not a good name for a voice app.

We developed two capsules with the intention of understanding the platform: one for number facts and the other for getting episodes of this show. You heard correctly: the VoiceFirst Weekly flash briefing is already available on Bixby, yay!

Experience creating for Bixby

The developer experience still feels a little raw; the platform and its documentation clearly need polishing. I feel they rushed to open up the platform, with the clock ticking.

The capsules were created in a weekend or less, after watching some of the provided videos and then following the documentation, which means it’s relatively straightforward to start building for the platform. The documentation is geared towards developers, but we found it pretty useful.

The good, the bad and the ugly

The good part about the platform is its ability to remember an answer, or similar answers, by instruction. That’s a pretty sweet deal: in short, you don’t need to enumerate all the utterances for an intent. It learns. I really liked that. The way you build a capsule is also a different way to develop voice apps compared to Alexa or Google Assistant. The IDE was decent enough; it felt smooth.
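To give a flavor of that different development style, here is a rough, hypothetical sketch of a capsule action model in Bixby’s declarative modeling language. The names (`FindNumberFact`, `Number`, `NumberFact`) are illustrative, not taken from our actual capsule; the point is that you declare what the action needs and produces, and the platform plans the dialog instead of matching hand-enumerated utterances:

```
// Hypothetical Bixby action model for a number-facts capsule (.bxb file).
// The names here are made up for illustration.
action (FindNumberFact) {
  type (Search)
  collect {
    // Bixby plans how to collect this input (prompting if missing)
    // rather than relying on an exhaustive utterance list.
    input (number) {
      type (Number)
      min (Required) max (One)
    }
  }
  output (NumberFact)
}
```

Training examples you provide are generalized by the platform, which is what makes it feel like the capsule “learns” new phrasings on its own.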

The bad is the maturity of the platform. It’s definitely not at the level of the likes of Amazon or Google.

The ugly: as far as we could see, and we tried, the platform is focused on visual interfaces and does not play audio. As we were trying to get the show’s episodes played directly by Bixby (my expectation was that it would be similar to the cards in Google Assistant), we quickly hit a wall. I’m sure they’re going to correct that, but at this point it already feels a little outdated.

Summary

The Bixby platform and Developer Studio might be a game changer in the smart assistant race. The Bixby team has a different, novel idea of how an assistant should behave, and I expect the competition to only be good for the voice ecosystem overall.

If this catches on, Samsung will have the “phone advantage”, and in their case it’s not only phones but all kinds of appliances: the possibility of instantly having their platform on all these devices, without having to convince users to buy a smart speaker. They did release the Galaxy Home a couple of months ago, though, and for sure the whole Bixby ecosystem will work there as well. All in all, exciting times ahead.

This is the VoiceFirst Weekly flash briefing. My name is Mari; as always, you can find me on Twitter as @voicefirstlabs or Instagram as @voicefirstweekly. Have a great day, and I’ll talk to you all tomorrow.

P.S. We will be at the Developer Conference today during the announcement of the capsule contest winners. Expect live updates on Twitter.