Voice Activated UIs

Products like the Amazon Echo and Google Home are primarily designed to let people access information and engage with services without a screen, but these same devices also present a great opportunity to control existing UIs (as is already being done with voice on TV platforms).

As an industry, we need to explore how to leverage these devices (perhaps as a new kind of peripheral?) for the benefit of our users. There are many situations where voice works better than keyboard, mouse, or touch.

In this session, we'll cover examples of using a Google Home device to drive a user interface built with Angular. With the ability to manipulate a webpage by voice alone, new types of interactions become available. Some difficult tasks become easy, while others become harder.
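One common way to wire this up (a sketch, not necessarily the approach shown in the talk) is to have the Google Home's fulfillment backend forward parsed voice intents to the web app, which then maps each intent to a UI action. The names below (`VoiceIntent`, `VoiceUiDispatcher`, the example actions) are all illustrative assumptions:

```typescript
// Hypothetical sketch: a voice assistant's fulfillment backend sends a parsed
// intent to the web app (e.g., over a WebSocket), and the app dispatches it
// to a registered UI handler. All names here are illustrative.

type VoiceIntent = { action: string; params: Record<string, string> };
type UiHandler = (params: Record<string, string>) => string;

class VoiceUiDispatcher {
  private handlers = new Map<string, UiHandler>();

  // Register a UI action that a voice intent can trigger.
  register(action: string, handler: UiHandler): void {
    this.handlers.set(action, handler);
  }

  // Look up and run the handler for an incoming intent.
  dispatch(intent: VoiceIntent): string {
    const handler = this.handlers.get(intent.action);
    if (!handler) return `No UI action for "${intent.action}"`;
    return handler(intent.params);
  }
}

// Example wiring for commands like "go to the dashboard" or "filter by region".
const dispatcher = new VoiceUiDispatcher();
dispatcher.register("navigate", (p) => `Navigating to ${p.page}`);
dispatcher.register("filter", (p) => `Filtering by ${p.field} = ${p.value}`);

console.log(
  dispatcher.dispatch({ action: "navigate", params: { page: "dashboard" } })
);
```

In an Angular app, the handlers would typically call into a shared service (router navigation, state updates) rather than return strings; the string return here just keeps the sketch self-contained.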

Track Details

  • Day: March 9, 2017 (11:00am - 11:50am)

  • Track: Three

  • Level: Advanced

Speaker: Jeremy Wilken


Jeremy Wilken is a software engineer at Teradata, where he develops mobile apps with Ionic, crafts AngularJS user interfaces for data systems, and builds web-service layers with Node.js. He is also the author of Ionic in Action. When he isn't coding, you can find him brewing beer.
