I would like to make KStars accessible to an astronomer who has extremely limited keyboard and mouse skills due to Parkinson's. One approach would be to use scripting and workflows to do automated observing and capture for him. In addition, I was hoping there might be dictation-enabled commands that would let him navigate some of the common KStars menu functions that currently require keyboard/mouse input. KStars has many Ctrl+letter sequences that can easily be captured by the accessibility features of the Mac, for example. I can say "Hey" then "find" (Ctrl+F), and the cursor is sitting where I would like to enter "MOON", but since the KStars input field is not plain text I have not figured out how to dictate instead of typing. Does anyone have experience on Mac, Windows, or Linux with passing dictated commands directly into KStars as a substitute for the keyboard? I'm interested in participating in a comprehensive long-term voice-activated KStars strategy, as well as short-term partial functionality.
Example of a Mac voice-activated "What's Up Tonight" menu command here: t.co/lddUbfzCBm
This is quite interesting. I'm a backer of the Mycroft project, but I haven't received my unit yet. I think using it with KStars should be possible, but I'm not sure about other open-source voice recognition solutions that are cross-platform.
I had not heard of the Mycroft project and I am checking it out, thanks! I did install the Amazon Alexa US engine yesterday on the Raspberry Pi using the "Sensory" and "KITT.AI" wake-word processors, but this is not a true open-source solution and there is no rules engine I can exploit easily. I see the Mycroft AI code is to be released under GPLv3 using Ubuntu Snappy Core. Are there KStars issues in that distribution?
My short-term goal is far more pedestrian: can anyone suggest a way that I could, for example, say the word "MOON" and have it inserted into the filter field of KStars' Find Object dialog (Ctrl+F), then advance to the OK button, all without using the keyboard? I'd like to think that a "paste text" or similar workflow command would work with some dictation product on Windows, Mac, or the Raspberry Pi.
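On Linux, one stopgap worth trying (a sketch, not tested against KStars) is to have the dictation layer inject the recognized word as synthetic keystrokes with a tool such as `xdotool`. The window title "KStars" and the Ctrl+F shortcut below are assumptions to adjust for your setup:

```python
import subprocess

def find_object_keystrokes(name):
    """Build the xdotool invocations that focus KStars, open the Find
    dialog (Ctrl+F), type the object name, and press Return.

    The window title "KStars" is an assumption; check yours with
    `xdotool search --name KStars getwindowname`.
    """
    return [
        ["xdotool", "search", "--name", "KStars", "windowactivate"],
        ["xdotool", "key", "ctrl+f"],
        ["xdotool", "type", name],
        ["xdotool", "key", "Return"],
    ]

# A speech hook would call this with the dictated word, e.g.:
# for cmd in find_object_keystrokes("Moon"):
#     subprocess.run(cmd, check=True)
```

Since `xdotool` types into whatever widget has focus, this sidesteps the fact that the Find field does not accept dictation directly.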
Yes, the DBus interface exposes a good chunk of functionality -- and if something is not there, let us know, it isn't too difficult to add a DBus wrapper to any method existing in KStars.
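For reference, here is a minimal sketch of driving that DBus interface from a script. `lookTowards` is part of KStars' scripting interface, but verify the exact service and method names on your build with `qdbus org.kde.kstars`:

```python
import subprocess

def kstars_call(method, *args):
    """Build a qdbus command for a KStars DBus method.

    Service/path names follow the KStars scripting interface
    (org.kde.kstars); confirm them locally before relying on this.
    """
    return ["qdbus", "org.kde.kstars", "/KStars",
            f"org.kde.kstars.{method}", *map(str, args)]

# Centre the sky map on the Moon (requires a running KStars session):
# subprocess.run(kstars_call("lookTowards", "Moon"), check=True)
```

Because this bypasses the GUI entirely, no dialog or text field ever needs focus, which seems like a better long-term fit for voice control than simulated keystrokes.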
There is an open-source speech recognition engine called CMU Sphinx, and there's also KDE's speech recognition efforts named Simon. I don't know the state of either project, but I presume they are still not using Deep Neural Networks, and it's very unlikely that they are as good as Amazon's or Google's APIs due to the sheer amount of data they have. Yet, they might actually work pretty well for a small list of known commands, and that was the intention for Simon at least as far as I know.
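For a small fixed vocabulary like that, the recognizer only needs to emit one of a handful of keywords; a dispatch table can then map each keyword to a KStars action. A sketch, where the DBus method names are assumptions to verify against your KStars build:

```python
import subprocess

# Map a few spoken keywords to KStars DBus calls (hypothetical
# starter vocabulary; extend as needed).
VOCABULARY = {
    "moon":   ["qdbus", "org.kde.kstars", "/KStars",
               "org.kde.kstars.lookTowards", "Moon"],
    "saturn": ["qdbus", "org.kde.kstars", "/KStars",
               "org.kde.kstars.lookTowards", "Saturn"],
}

def dispatch(word):
    """Return the command for a recognized word, or None if unknown."""
    return VOCABULARY.get(word.strip().lower())

# A keyword-spotting engine (Sphinx, Simon, etc.) would feed its
# recognized words here:
# cmd = dispatch("Moon")
# if cmd:
#     subprocess.run(cmd, check=True)
```

Keeping the vocabulary small is exactly what makes the weaker open-source engines viable: they only have to distinguish a few known words rather than transcribe free speech.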