Twiddler3: A Year Later… and a New Idea for an AI Assistant

(I wrote this in the midst of the COVID-19 pandemic. While saddened by the enormous suffering that everyone is coping with, I find solace in the unintended gifts the lockdown brings: time to bond with family and to rekindle creative pursuits that we often brush aside when we’re “too busy”.)

My Twiddler3, once a shiny new toy, had sat in my drawer untouched since last summer. After designing an efficient keyboard layout that took advantage of pressing Multiple Buttons per Row (MBR), I hit a roadblock. How so?

[Figures: Multi-Character Chord design 1 and 2]
Pressing keys simultaneously (aka “chords”) generates the multi-character output shown in blue/red.
Example: [s] + [t] => “st”. The new layout supports both row and column chords.
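
To make the mapping concrete, here is a tiny illustrative sketch of how such a chord table could be represented in code; the key sets and outputs below are made up for illustration and are not the actual layout.

```kotlin
// Hypothetical chord table: a set of simultaneously pressed keys maps to a multi-character output.
// These entries are illustrative only, not the real Twiddler3 layout.
val chordTable: Map<Set<Char>, String> = mapOf(
    setOf('s', 't') to "st",   // row chord: [s] + [t] => "st"
    setOf('t', 'h') to "th",
    setOf('i', 'n') to "in"
)

fun resolveChord(pressed: Set<Char>): String? = chordTable[pressed]

fun main() {
    println(resolveChord(setOf('s', 't')))  // st
    println(resolveChord(setOf('x', 'y')))  // null (no chord defined)
}
```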

For the next step, I wanted to build an AI Assistant using the Twiddler. It would work like Alexa (or Siri / Google Assistant), except you type commands instead of speaking, and it responds privately through your Bluetooth earbuds. It’s always ready to assist you and is arguably more versatile than Alexa in many ways.

I had the basic Twiddler <-> Android phone <-> Voice Response prototype running when I realized how much work would be needed to give the bot assistant a good range of practical skills. Say you want your bot to read you the latest articles from your digital magazine subscription. It has to:

  1. Convert your typed command into an actionable Intent
  2. Ask you for clarification or additional details if needed
  3. Get the information you need. To do so, it has to:
    • determine the relevant app or Web site to pull the info from
    • map the Intent into parameter values
    • launch the app or go to the site, passing it the appropriate parameter values
    • retrieve the content
  4. Translate the content into a coherent Natural Language utterance
  5. Speak out the utterance by calling a Text-To-Speech engine

Wow. And you need this for every task you want the bot to perform: read your email, go through your To Do list, etc. If this looks very much like creating Alexa Skills, you are absolutely correct.
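
To give a feel for how those five steps might fit together, here is a rough sketch in Kotlin. Every name in it (the toy command parser, the FakeMagazineSource, the canned clarification) is hypothetical scaffolding rather than the actual prototype; in a real Android app the final step would hand the utterance to the platform’s TextToSpeech engine.

```kotlin
// Hypothetical skeleton of the five-step pipeline described above.
// All names and logic are illustrative placeholders, not the real prototype.

data class Intent(val action: String, val params: MutableMap<String, String> = mutableMapOf())

// Step 1: turn the typed command into an actionable Intent (toy keyword matching).
fun parseCommand(command: String): Intent? = when {
    "magazine" in command -> Intent("read_magazine")
    "email" in command    -> Intent("read_email")
    else                  -> null
}

// Step 2: ask for clarification if a required detail is missing.
fun needsClarification(intent: Intent): String? =
    if (intent.action == "read_magazine" && "title" !in intent.params) "Which magazine?" else null

// Step 3: pick a source, map the Intent to parameters, and retrieve content.
interface ContentSource { fun fetch(params: Map<String, String>): List<String> }

class FakeMagazineSource : ContentSource {   // stand-in for a real app or web site
    override fun fetch(params: Map<String, String>) =
        listOf("Headline one", "Headline two")
}

// Step 4: translate the content into a coherent utterance.
fun toUtterance(items: List<String>): String =
    "Here are the latest articles: " + items.joinToString("; ")

// Step 5: speak the utterance. A real app would call Android's TextToSpeech here.
fun speak(utterance: String) = println("[TTS] $utterance")

fun main() {
    val intent = parseCommand("read my magazine") ?: return speak("Sorry, I didn't get that.")
    intent.params["title"] = "My Digital Magazine"        // pretend the user clarified
    needsClarification(intent)?.let { return speak(it) }
    val content = FakeMagazineSource().fetch(intent.params)
    speak(toUtterance(content))
}
```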

Sadly, Amazon does not support sending commands by push notification, though many have asked for it and have created limited workarounds like SilentEcho and this API. So I put this project on hold.

Fast forward to April 2020. I don’t quite recall how, but I came up with the idea of using the accessibility features built into Android (iOS has them too) to enable eyes-free control of apps on the smartphone. Ironically, the accessibility features designed for users who are blind or have dexterity impairments are what make advanced interaction possible! I started experimenting with:

  • Voice Assistant – helps blind and low-vision users navigate around their phone
  • TalkBack (screen reader) – gives you spoken feedback so you can use your device without looking at the screen
  • Switch Access – scans the items on your screen, highlighting each item in turn, until you make a selection
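
For a sense of what these services expose to developers, below is a minimal sketch (not my actual experiment) of an Android AccessibilityService that walks the on-screen node tree to collect visible text and can click an element by its label. A real service also has to be declared in the manifest and enabled by the user under Settings > Accessibility.

```kotlin
import android.accessibilityservice.AccessibilityService
import android.view.accessibility.AccessibilityEvent
import android.view.accessibility.AccessibilityNodeInfo

// Minimal sketch of an AccessibilityService for eyes-free control.
class EyesFreeService : AccessibilityService() {

    override fun onAccessibilityEvent(event: AccessibilityEvent) {
        // Called whenever something on screen changes (window content, focus, clicks, ...).
        val root = rootInActiveWindow ?: return
        // Example: collect the visible text, which could then be handed to a TTS engine.
        val texts = mutableListOf<CharSequence>()
        collectText(root, texts)
        // ... speak or log `texts` here
    }

    override fun onInterrupt() {
        // Required override; called when the system wants to interrupt feedback.
    }

    // Depth-first walk of the accessibility node tree, gathering any text it finds.
    private fun collectText(node: AccessibilityNodeInfo, out: MutableList<CharSequence>) {
        node.text?.let { out.add(it) }
        for (i in 0 until node.childCount) {
            node.getChild(i)?.let { collectText(it, out) }
        }
    }

    // Example action: click the first on-screen element whose text matches `label`.
    fun clickByLabel(label: String): Boolean {
        val root = rootInActiveWindow ?: return false
        val match = root.findAccessibilityNodeInfosByText(label).firstOrNull() ?: return false
        return match.performAction(AccessibilityNodeInfo.ACTION_CLICK)
    }
}
```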

So far I am getting encouraging results. Stay tuned for more details.
