Inspire and innovate, emphasizing wearables, voice user interfaces, speech recognition and synthesis, NLU, and AIML.
I focus on embedded, mobile, and open source technologies, and help accelerate the discovery and adoption of emerging mobile technologies.
SwixML represents ideas that today are heavily reused in Google’s Android SDK. (Graphical user interfaces are described declaratively in XML documents that are parsed and rendered into UI widget hierarchies at runtime.)
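To illustrate the idea, here is a minimal sketch of how a SwixML-style UI is declared in XML and rendered at runtime. The tag and attribute names and the SwingEngine details are from memory and may differ between SwixML versions; the descriptor file name and widget ids are placeholders.

// hello.xml, a hypothetical descriptor: each tag maps to a Swing widget.
//
// <frame title="Hello SwixML" size="320,120">
//   <panel>
//     <label text="Name:"/>
//     <textfield id="name" columns="12"/>
//     <button text="Greet" action="greet"/>
//   </panel>
// </frame>

import java.awt.event.ActionEvent;
import javax.swing.AbstractAction;
import javax.swing.Action;
import javax.swing.JTextField;
import org.swixml.SwingEngine;

public class Hello {
    // Injected by the engine, matched against id="name" in the descriptor.
    public JTextField name;

    // Bound by the engine, matched against action="greet" in the descriptor.
    public Action greet = new AbstractAction("Greet") {
        @Override
        public void actionPerformed(ActionEvent e) {
            System.out.println("Hello, " + name.getText());
        }
    };

    public static void main(String[] args) throws Exception {
        // Parse the XML document and render it into a Swing widget hierarchy.
        new SwingEngine(new Hello()).render("hello.xml").setVisible(true);
    }
}

Android’s layout resources follow the same pattern: an XML description is inflated into a view hierarchy at runtime.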
But I have created so much more that I’m extremely proud of.
Take a look at Artist on Android, the Horsemen of Speech Recognition, or other apps that I have published under the Techcasita Productions brand in Google’s Play Store.
I was appointed to the advisory committee for the Mobile App Development Certificate at the University of California, Irvine, and occasionally speak at conferences and user groups on topics ranging from Embedded Technology to Declarative Programming, emphasizing UI Generation at Runtime, and of course everything related to Voice User Interfaces.
Have a look at some slides from my most recent talks.
Take a look at some high-quality short HD films that I have created over the last few months and years.
If you like, take a look at some of my photos and the stories behind them at http://ramonaphoto.com
Listen 2014 was a short one-day conference focusing on “Voice Interfaces for the Internet of Things,” organized by Wit.ai Inc., the company behind a web service that makes it easy for developers to build applications and devices you can talk to. The conference took place on November 6, 2014, at the unique Bluxome Winery in San Francisco.
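For context, here is a minimal sketch of what talking to that web service looks like. The /message endpoint and Bearer-token authentication match Wit.ai’s public HTTP API, while the token value and the example utterance are placeholders of mine.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class WitDemo {
    public static void main(String[] args) throws Exception {
        String token = "YOUR_SERVER_ACCESS_TOKEN";  // placeholder token
        String q = URLEncoder.encode("turn on the porch light", "UTF-8");
        URL url = new URL("https://api.wit.ai/message?q=" + q);

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Authorization", "Bearer " + token);

        // The response is JSON describing the recognized intent and entities,
        // e.g. an intent like "lights_on" with a location entity "porch".
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}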
Voice User Interfaces (VUIs) complement the Internet of Things (IoT), and not just for economic reasons: attaching a touch-enabled LCD display to connected devices like door locks and thermostats is not really an option.
Wit.ai’s Jen Dewalt gave the opening address, stating that VUIs need to be intuitive and effective. She recommended starting small, but said that giving the VUI a personality is absolutely OK.
Siri, Back to the Future
Adam Cheyer, co-founder of Siri and now VIV, gave the conference keynote, titled “Siri, Back to the Future”.
Over the last three years, I have seen Adam speak three or four times, but never as compelling and influential as here, maybe because he was finally given enough time. Hearing him talk about how Siri happened was absolutely inspiring. When Apple took over, Siri already had structured knowledge in 15 domains and always took context from previous dialog exchanges into consideration. Siri was developed as a “do-engine” and a “knowledge navigator,” allowing people quick and easy access to details related to travel, scheduling, weather, and other kinds of information.
“Apple bought Siri Inc. for $100 to $200 million, and Siri continued to be available in the App Store. When the iPhone 4S launched in early October 2011, it finally had Siri fully integrated into iOS. Steve Jobs died the day after the launch. Based on the dates mentioned in the Knowledge Navigator video, it takes place on September 16, 2011.”
Adam is currently building VIV, with the goal of creating a personal assistant framework that incorporates an app store for agent knowledge bases, giving third parties the ability to add domain knowledge.
Before long, we will be at the SoCal Code Camp, which takes place at the beautiful USC Viterbi School of Engineering on Saturday, November 15.
I will be talking about what I have learned so far with regard to Android Wear. The talk presents a first-hand look at the Android Wear platform, an introduction to the Android Wear APIs, and how to design effective user interfaces that work best on a wearable device. Come join us in L.A. on the USC campus and learn how to use Google’s Android Studio IDE to create apps for Android Wear devices and bring wearable experiences to your Android apps. We’ll walk step by step through designing and building a small, native, contextual app for Android Wear, of course using Java and Android Studio, the new Android development environment based on IntelliJ IDEA. A small teaser sketch follows below.
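As a teaser, here is a minimal sketch of the kind of wearable-enhanced notification the session covers, using the support library’s NotificationCompat and its WearableExtender. The icon resources, strings, and class name are placeholders of mine, to be replaced in a real project.

import android.app.PendingIntent;
import android.content.Context;
import android.content.Intent;
import android.net.Uri;
import android.support.v4.app.NotificationCompat;
import android.support.v4.app.NotificationManagerCompat;

public class WearTeaser {

    // Posts a notification that the system bridges to a paired Android Wear
    // watch, with an extra action that appears only on the wearable.
    static void notifySession(Context context) {
        // Placeholder intent: open a map to the venue.
        Intent mapIntent = new Intent(Intent.ACTION_VIEW,
                Uri.parse("geo:0,0?q=USC+Viterbi+School+of+Engineering"));
        PendingIntent mapPending =
                PendingIntent.getActivity(context, 0, mapIntent, 0);

        // Wearable-only action; R.drawable.ic_map is a placeholder resource.
        NotificationCompat.Action mapAction =
                new NotificationCompat.Action.Builder(
                        R.drawable.ic_map, "Directions", mapPending).build();

        NotificationCompat.Builder builder =
                new NotificationCompat.Builder(context)
                        .setSmallIcon(R.drawable.ic_event)  // placeholder icon
                        .setContentTitle("SoCal Code Camp")
                        .setContentText("Android Wear session, Saturday Nov 15")
                        .extend(new NotificationCompat.WearableExtender()
                                .addAction(mapAction));

        // Same call as for a phone notification; bridging to the connected
        // wearable happens automatically.
        NotificationManagerCompat.from(context).notify(1, builder.build());
    }
}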
I was able to attend this year’s SpeechTEK 2014 conference in New York City. The conference was organized in four parallel tracks; its advanced technology track was devoted to topics like virtual agents, voice biometrics, natural language understanding, and speech technologies for smart devices.
Bruce Balentine, @brucebalentine, Chief Scientist at the Enterprise Integration Group, gave the keynote on the second day of the conference, probably the most impactful and insightful talk of the whole event. One of his key points was that “we can now stop selling the future”: all the basic technologies are at our disposal, empowering us to succeed in building smart speech systems. However, he also pointed out a major shortcoming of Voice User Interfaces: micro-interactions were never established and have not been learned by users. In comparison, micro-interactions like pinch-to-zoom were successfully taught to users of touch-enabled user interfaces.