The duality between voice and visual interfaces comes down to the age-old question of visual communication versus audio communication. “Language need not have started in a spoken modality; sign language may have been the original language. The presence of speech supports the presence of language, but not vice versa.”
This shift is leading companies to ask whether they should continue to invest in the visual interface, or whether budgets should shift to voice.
How does designing UX and UI for sound/voice change our role and the tools we use?
How prepared for the shift is our industry?
What effect will chatting to a machine have on language? We’re all well aware of the effects texting and instant messaging have had! LOL
If this trend is anything to go by, will we need to develop a sound version of icons and emojis? An audio version of shorthand?
Still, with this increased utilisation of the senses, how long will it be before the remaining two, taste and smell, get in on the action?
Are we moving towards a world that augments our senses?
Brain-Computer Interfaces Are Already Here
And now Mr Musk has entered the fray by funding ‘medical research’ startup Neuralink, which is developing neural laces.