I’m very passionate about the process of software development and how it affects the product cycle. I read a blog post about people with physical disabilities using voice tools to program, so I was excited when I found this article about the influence of voice technology on software development. Let’s look at some of the ways software development is being affected by voice.
- Development agencies are seeing clients request more and more voice capabilities, as well as old applications being upgraded with voice-activated functionality. But at the same time, a bad interaction is worse than none at all: Lars Knoll, CTO at The Qt Company, has said that “A badly done voice integration is probably leading to a worse user experience than not having one at all.”
- The second is taking into account the differences between GUI design and voice interface design. Voice-oriented developers will increasingly need to understand the basics of pattern recognition and machine learning. That being said, the underlying development principles remain pretty much the same, as does the value of knowing popular programming languages. In a GUI, the user’s eyes and mouse movements have been trained by years of learned behavior; voice interfaces have no such established conventions yet. As a result, voice development is as much a product design challenge as an engineering problem.
- The third way voice is affecting software development is the fragmentation of voice platforms. You’ll need to decide whether to develop independently for each platform (and set your priorities accordingly) or to take more of a one-size-fits-all approach; the sketch after this list shows what the latter can look like. I talked about this in the Voice Space Fragmentation episode (https://voicefirstweekly.com/flashbriefing/69/).
- B2B companies can’t ignore voice work much longer. With Salesforce’s announcement of the Einstein voice assistant for its platform, plus the Microsoft Cortana Skills Kit, plus voice assistants becoming more pervasive in users’ homes, cars, and mobile devices, there is only so long before users start asking: can I do this with my voice? Or why can’t I do it with my voice?
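To make the one-size-fits-all route concrete, here is a minimal Python sketch of the adapter approach: each platform’s webhook payload is normalized into one internal request type, so the skill logic is written once. The field paths are simplified from the Alexa and Dialogflow request formats, and the `OrderStatus` intent and its slot are made-up examples.

```python
from dataclasses import dataclass, field

@dataclass
class VoiceRequest:
    """Platform-neutral shape for an incoming voice request."""
    intent: str
    slots: dict = field(default_factory=dict)

def from_alexa(payload: dict) -> VoiceRequest:
    # Alexa's IntentRequest carries the intent name and slot values
    # under request.intent; paths simplified here.
    intent = payload["request"]["intent"]
    slots = {name: slot.get("value")
             for name, slot in intent.get("slots", {}).items()}
    return VoiceRequest(intent=intent["name"], slots=slots)

def from_dialogflow(payload: dict) -> VoiceRequest:
    # Dialogflow v2 webhooks put the matched intent and parameters
    # under queryResult; paths simplified here.
    result = payload["queryResult"]
    return VoiceRequest(intent=result["intent"]["displayName"],
                        slots=dict(result.get("parameters", {})))

def handle(request: VoiceRequest) -> str:
    # The one place business logic lives, whatever the platform.
    if request.intent == "OrderStatus":
        return f"Checking on order {request.slots.get('order_id')}."
    return "Sorry, I didn't catch that."
```

The trade-off, of course, is that a shared abstraction flattens platform-specific features, which is exactly the choice the fragmentation point is about.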
There is also an increase in the development tools available for coding by voice. Combined with the advances in speech-to-text recognition, we are seeing a new wave of tools where voice commands act as editor actions and speech-to-text handles the actual code input. Nature has an interesting article I recommend you read in full, Speaking in code: it outlines how programmers from the Genome Aggregation Database (gnomAD) at MIT, which is used to explore genomic data, are using voice coding to build web applications. “These applications share data from some of the largest sequencing studies in the world.”
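As a toy illustration of that command-plus-dictation split, here is a short Python loop built on the SpeechRecognition package: phrases starting with “insert” are passed through as literal code text, while everything else is looked up in a small command table. The command names and snippets are my own invention, not any real tool’s vocabulary.

```python
# Toy voice-coding loop using the SpeechRecognition package
# (pip install SpeechRecognition pyaudio).
import speech_recognition as sr

# Invented command table: spoken phrase -> code snippet to emit.
COMMANDS = {
    "new function": "def function_name():\n    pass\n",
    "for loop": "for item in items:\n    pass\n",
}

recognizer = sr.Recognizer()

def listen_once() -> str:
    """Capture one phrase from the microphone and return it as text."""
    with sr.Microphone() as source:
        audio = recognizer.listen(source)
    try:
        return recognizer.recognize_google(audio).lower()
    except sr.UnknownValueError:
        return ""  # engine couldn't make sense of the audio

def to_code(phrase: str) -> str:
    """Map a recognized phrase to the code text it should produce."""
    if phrase.startswith("insert "):
        return phrase[len("insert "):]  # dictation becomes literal input
    return COMMANDS.get(phrase, "")

if __name__ == "__main__":
    while True:
        print(to_code(listen_once()), end="", flush=True)
```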
Coding by voice command requires two kinds of software: a speech-recognition engine and a platform for voice coding. Dragon, from Nuance, a speech-recognition software developer in Burlington, Massachusetts, is an advanced engine widely used for programming by voice, with Windows and Mac versions available. On the platform side there are VoiceCode and Caster, the latter free and open source. These tools are not new: there’s a video from PyCon 2013 demonstrating coding by voice commands. However, the learning curve of voice coding is reportedly steep; you have to learn all the commands, which turn out not to be that natural. And if you have any throat problems, it can become a challenge as well.
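For a flavor of what the platform layer looks like, here is a minimal grammar written with Dragonfly, the open-source Python framework that Caster builds on. A speech engine back end (such as Dragon) must be running underneath, and the spoken forms below are illustrative, not Caster’s actual command set.

```python
# Minimal Dragonfly grammar sketch; the spoken forms are invented.
from dragonfly import Grammar, MappingRule, Dictation, Key, Text

class CodingRule(MappingRule):
    mapping = {
        # Saying "define function hello" types "def hello():" and moves
        # to the body. Real tools also reformat multi-word dictation
        # into snake_case or camelCase identifiers.
        "define function <name>": Text("def %(name)s():") + Key("enter"),
        # "print statement" types "print()" with the cursor inside.
        "print statement": Text("print()") + Key("left"),
        "save file": Key("c-s"),  # Ctrl+S
    }
    extras = [Dictation("name")]

grammar = Grammar("voice coding sketch")
grammar.add_rule(CodingRule())
grammar.load()  # grammar stays active until grammar.unload()
```

The steep learning curve the article mentions comes from exactly this: every mapping like the ones above is a phrase you have to memorize.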
On the bright side, users report thinking their code through much more carefully before dictating it, and that is a clear benefit.
Voice technologies are soon to be part of the development process, not only a result of it.