Screenless Typing: Exploring the Auditory Keyboard

Research partially supported through a Google Faculty Research Award

Principal Investigator: Dr. Davide Bolchini

Current Team: Reeti Mathur | Aishwarya Sheth | Parimal Vyas | Mikaylah Gross

Reeti Mathur (left) describing the study details to a blind or visually impaired participant (right) wearing the Myo band

Challenge

While conducting research with blind or visually impaired (BVI) people, we observed that typing messages on a mobile QWERTY keyboard can be difficult and time consuming, especially when multitasking on the go, with a cane in one hand and other belongings in the other.


Approach

In this study, we created and prototyped an auditory-based concept that enables BVI users to type words or messages without touching a screen. Characters are spoken aloud to the user, who performs a simple hand gesture to select each letter. We call this concept a screenless, auditory keyboard.
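
As a rough illustration of how characters could be spoken aloud, the sketch below uses Android's standard TextToSpeech API; the class name and wiring are illustrative assumptions, not the project's actual implementation.

```kotlin
import android.content.Context
import android.speech.tts.TextToSpeech
import java.util.Locale

// Minimal sketch: speaks candidate letters aloud so a BVI user can select
// them by gesture instead of looking at a screen. Only TextToSpeech is the
// real Android API; the surrounding class is hypothetical.
class LetterAnnouncer(context: Context) {

    private lateinit var tts: TextToSpeech

    init {
        tts = TextToSpeech(context) { status ->
            if (status == TextToSpeech.SUCCESS) tts.setLanguage(Locale.US)
        }
    }

    // Speak a single character, flushing anything still queued so the audio
    // keeps pace with the keyflow loop.
    fun announce(letter: Char) {
        tts.speak(letter.toString(), TextToSpeech.QUEUE_FLUSH, null, "letter")
    }

    fun shutdown() = tts.shutdown()
}
```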


Technology

Platform: Android

Wearable device: Myo Band

Connectivity: Bluetooth
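
A minimal sketch of how these pieces connect, assuming the (now discontinued) Thalmic Labs Myo Android SDK; the pose-to-action mapping shown is an illustrative assumption, not the study's actual mapping.

```kotlin
import android.app.Activity
import android.os.Bundle
import com.thalmic.myo.AbstractDeviceListener
import com.thalmic.myo.Hub
import com.thalmic.myo.Myo
import com.thalmic.myo.Pose

// Sketch only: pairs with a Myo band over Bluetooth and routes its poses
// to keyflow actions. The mapping below is a made-up example.
class KeyflowActivity : Activity() {

    private val listener = object : AbstractDeviceListener() {
        override fun onPose(myo: Myo, timestamp: Long, pose: Pose) {
            val handled = when (pose) {
                Pose.FIST -> { selectCurrentLetter(); true }
                Pose.WAVE_OUT -> { skipChunkForward(); true }
                Pose.WAVE_IN -> { stepBackOneLetter(); true }
                Pose.FINGERS_SPREAD -> { deleteLastCharacter(); true }
                else -> false // REST, DOUBLE_TAP, UNKNOWN: ignored here
            }
            // Haptic cue on the band confirms a recognized gesture.
            if (handled) myo.vibrate(Myo.VibrationType.SHORT)
        }
    }

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        val hub = Hub.getInstance()
        if (!hub.init(this)) {
            finish() // Bluetooth LE is unavailable on this device.
            return
        }
        hub.addListener(listener)
        hub.attachToAdjacentMyo() // Pair with the nearest band over Bluetooth.
    }

    override fun onDestroy() {
        Hub.getInstance().removeListener(listener)
        Hub.getInstance().shutdown()
        super.onDestroy()
    }

    // Hooks into the keyflow model; bodies omitted in this sketch.
    private fun selectCurrentLetter() { }
    private fun skipChunkForward() { }
    private fun stepBackOneLetter() { }
    private fun deleteLastCharacter() { }
}
```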


Concept

  • The keyflow loops continuously through the letters in A-Z order
  • The 26 letters are divided into groups of five, called chunks
  • Chunking lets users reach letters later in the sequence faster, by skipping over whole chunks
  • The user performs simple combinations of gestures to select a character, skip a chunk forward, move backwards letter by letter, delete characters, and hear the letters or words framed so far (see the sketch after this list)
  • Auditory and haptic cues (vibrations of the band) provide feedback to the user
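
The keyflow state fits in a few lines; the sketch below is a plain Kotlin model assuming chunks of five letters, with names and wrap-around details that are illustrative rather than the study's exact design.

```kotlin
// Plain-Kotlin sketch of the keyflow state, assuming chunks of five letters.
class Keyflow {
    private val letters = ('A'..'Z').toList()
    private val chunkSize = 5
    private var index = 0                // Letter the loop is currently on.
    private val framed = StringBuilder() // Word framed so far.

    val current: Char get() = letters[index]

    // Advance one letter, wrapping from Z back to A (the continuous loop).
    fun tick() {
        index = (index + 1) % letters.size
    }

    // Jump to the start of the next chunk to reach later letters faster.
    fun skipChunk() {
        index = (index / chunkSize + 1) * chunkSize
        if (index >= letters.size) index = 0
    }

    // Move one letter backwards.
    fun stepBack() {
        index = (index - 1 + letters.size) % letters.size
    }

    // Select the letter currently being spoken.
    fun select() {
        framed.append(current)
    }

    // Delete the most recently framed character.
    fun deleteLast() {
        if (framed.isNotEmpty()) framed.deleteCharAt(framed.length - 1)
    }

    fun framedWord(): String = framed.toString()
}
```

On each timer tick the app would call tick() and announce current over text-to-speech, while recognized gestures call skipChunk(), stepBack(), select(), or deleteLast().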


My Role

I joined the team when the application was partly prototyped and the script for the usability test interview sessions was being created. My ongoing contributions to this research are as follows:

  • Helped finalize the script for the structured interview sessions used to carry out the usability tests of the screenless typing concept.
  • Researched other wearable technologies, such as smart rings, including Magic Ring [1], LightRing [2], Nenya [3], TRing [4], eRing [5], and iRing [6].
  • Wrote the first draft of the Related Work section for this study, covering these prospective smart rings.
  • Conducted 20 interview sessions with participants to test the keyflow concept.
  • Designed and created animated videos in Adobe After Effects showing how the keyflow is used to type simple words.
  • Currently analyzing the quantitative and qualitative data collected from the interview sessions to:
    • Group participants based on the number and type of words framed
    • Calculate the mean and standard deviation of the time participants took to type certain words (see the sketch after this list)
    • Understand the situations in which participants think this concept would be advantageous or disadvantageous
    • Synthesize participants' suggestions for making the auditory keyboard more inclusive
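
For the timing analysis, the mean and sample standard deviation are straightforward to compute; the sketch below uses placeholder numbers, not study data.

```kotlin
import kotlin.math.sqrt

// Sketch of the timing analysis: sample mean and standard deviation of
// per-word typing times. The numbers below are placeholders, not results.
fun mean(xs: List<Double>): Double = xs.sum() / xs.size

fun stdDev(xs: List<Double>): Double {
    val m = mean(xs)
    // Sample standard deviation: divide by n - 1.
    return sqrt(xs.sumOf { (it - m) * (it - m) } / (xs.size - 1))
}

fun main() {
    // Hypothetical seconds each participant took to type one target word.
    val times = listOf(41.2, 38.5, 52.0, 47.3, 44.8)
    println("mean = %.1f s, sd = %.1f s".format(mean(times), stdDev(times)))
}
```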


References

[1] L. Jing, Z. Cheng, Y. Zhou, J. Wang, and T. Huang, “Magic Ring: A Self-contained Gesture Input Device on Finger,” in Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia, New York, NY, USA, 2013, pp. 39:1–39:4.

[2] W. Kienzle and K. Hinckley, “LightRing: Always-available 2D Input on Any Surface,” in Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology, New York, NY, USA, 2014, pp. 157–160.

[3] D. Ashbrook, P. Baudisch, and S. White, “Nenya: Subtle and Eyes-free Mobile Input with a Magnetically-tracked Finger Ring,” in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, USA, 2011, pp. 2043–2046.

[4] S. H. Yoon, Y. Zhang, K. Huo, and K. Ramani, “TRing: Instant and Customizable Interactions with Objects Using an Embedded Magnet and a Finger-Worn Device,” in Proceedings of the 29th Annual Symposium on User Interface Software and Technology, New York, NY, USA, 2016, pp. 169–181.

[5] M. Wilhelm, D. Krakowczyk, F. Trollmann, and S. Albayrak, “eRing: Multiple Finger Gesture Recognition with One Ring Using an Electric Field,” in Proceedings of the 2nd International Workshop on Sensor-based Activity Recognition and Interaction, New York, NY, USA, 2015, pp. 7:1–7:6.

[6] M. Ogata, Y. Sugiura, H. Osawa, and M. Imai, “iRing: Intelligent Ring Using Infrared Reflection,” in Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology, New York, NY, USA, 2012, pp. 131–136.