Lisa Falkson
CloudCar, USA
Scientific Tracks Abstracts: Adv Robot Autom
Current car infotainment systems are notoriously out-of-date and do not meet their users' needs. Because the hardware and software in these systems are fixed, users turn to their mobile phones for the latest content in navigation, media and communication. Recent data (from Distraction.gov) show that since 2010, at any given moment, approximately 660,000 drivers are using cell phones or electronic devices while driving. By improving in-car infotainment systems, we can draw user attention back to the larger built-in display and encourage users to adopt a hands-free, voice-driven interface. In this presentation, we will discuss the challenges of designing a multimodal interface for speech systems in the automobile. NHTSA guidelines require a glance time of 2 seconds or less (12 seconds total per task), which is a challenging requirement. However, interaction with the screen can be minimized by ensuring that speech input and TTS output are the primary modes of interaction. When users do interact with the touch screen, the fonts and touch targets should be large, and the screens should be free of confusing graphics. In short, both modes of interaction (speech and touch) should be used to their best advantage in order to create the most usable interface.
Lisa Falkson is Senior VUI/UX Designer at CloudCar, designing the next generation of voice user interfaces for connected cars. Previously, she worked on Amazon’s first speech-enabled products: Fire TV, Fire Phone and Echo. She has over 15 years of industry experience, specializing in design of natural speech and multimodal interfaces. She has an MS in Electrical Engineering from UCLA, and a BS in Electrical Engineering from Stanford University.
Email: lisa@cloudcar.com