My name is Wolf Paulus; I'm a photographer, hiker, hacker, and technologist based in Ramona, California. I focus on embedded, mobile, and open-source technologies and help accelerate the discovery and adoption of emerging mobile technologies: inspiring and innovating with an emphasis on mobile and wearables, voice user interfaces, speech recognition and synthesis, and natural language understanding.
I created the Java-based open-source XUL engine SwixML, which Sun's CTO called "The strongest straightforward design of declarative UI implementations".
SwixML introduced ideas that are now heavily reused in Google's Android SDK: graphical user interfaces are described declaratively in XML documents that are parsed and rendered into UI widget hierarchies at runtime.
But I have created much more that I’m extremely proud of.
A lot of my work revolves around early technology prototyping. Still, I try to put some ideas into real-world mobile applications.
Take a look at Artist on Android, the Horsemen of Speech Recognition, or other apps that I have published under the Techcasita Productions label in Google's Play Store.
Most mobile applications consume some sort of cloud service. Speed is extremely important for voice user interfaces to work well, which means you want to do as much as possible on-device. However, speech recognition accuracy and speech synthesis quality often require a cloud-based implementation. Cloud services that I have recently implemented include speech synthesis, aggregation, AIML-based natural language understanding, and text summarization, including simple sentiment analysis.
I serve on the advisory committee for the Mobile App Development Certificate at the University of California, Irvine, and occasionally speak at conferences and user groups on topics ranging from embedded technology to declarative programming, emphasizing UI generation at runtime and, of course, everything related to voice user interfaces.
Take a look at some slides from my most recent talks.
April 11-12, Mobile Voice Conference 2016, San Jose, California
On April 11-12, I will be speaking at the Mobile Voice Conference 2016 on “Natural Language for Developers – beyond declaring User Intents”
Moderator: Alexander Rudnicky, Research Professor, School of Computer Science, Carnegie Mellon University
Natural Language for Developers – beyond declaring User Intents – Wolf Paulus, Engineer, Intuit
Driving Natural Language Interaction Using Knowledge Representation – William Meisel, President, TMA Associates
Many of the new concepts that I implement in mobile applications are communicated best through video clips or short films.
Take a look at some high-quality short HD films that I have created over the last few months and years.
"Amateur Professionalism", a concept in use since 2004, describes an emerging sociological and economic trend of people pursuing amateur activities to professional standards. That pretty much describes how I look at my photography work today.
If you like, take a look at some of my photos and the stories behind them, at http://ramonaphoto.com
Whether you are experimenting with the Amazon Echo / Alexa Skills Kit or running a so-called Skill in production, you generally have two choices:
- an AWS Lambda function (a serverless compute offering by Amazon Web Services)
- hosting the Web service yourself.
If you decide against AWS Lambda, you can build the Web service with anything that can consume and produce JSON documents.
However, Amazon provides good support libraries and sample code for Java and Node.js, making those options preferable.
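Whichever option you pick, the service's job is to accept the Skills Kit's JSON request envelope and return a JSON response. As a sketch of what a minimal response body looks like (the helper class below is illustrative and not part of Amazon's library; the field names follow the Alexa response format):

```java
/**
 * Illustrative sketch: builds the minimal JSON response envelope an Alexa
 * Skill web service must return. Not Amazon's API; for brevity it performs
 * no JSON escaping of the speech text.
 */
public class AlexaResponse {

    /** Returns a minimal Alexa response envelope with plain-text speech. */
    public static String plainText(String text, boolean endSession) {
        return "{\"version\":\"1.0\",\"response\":{"
             + "\"outputSpeech\":{\"type\":\"PlainText\",\"text\":\"" + text + "\"},"
             + "\"shouldEndSession\":" + endSession + "}}";
    }

    public static void main(String[] args) {
        System.out.println(plainText("Hello from my skill", true));
    }
}
```

In production you would of course build this with a JSON library (or Amazon's Java support library) rather than by hand, but the envelope itself is all the Skills Kit requires.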
Building a basic skill with the Alexa Skills Kit is really not all that hard and is reasonably well documented. I prefer JAX-RS and Gradle over the older servlet model and Maven; my build.gradle file therefore looks something like this (simplified):
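A minimal sketch of such a build.gradle, assuming the Jersey JAX-RS implementation and Amazon's alexa-skills-kit Java library (artifact coordinates and versions are illustrative, not from the original post):

```groovy
// Illustrative, simplified build.gradle for a JAX-RS based Alexa Skill,
// packaged as a WAR for deployment to a servlet container such as Tomcat.
apply plugin: 'war'

repositories {
    mavenCentral()
}

dependencies {
    // Jersey: JAX-RS reference implementation running inside a servlet container
    compile 'org.glassfish.jersey.containers:jersey-container-servlet:2.22.2'
    // Amazon's Java support library for the Alexa Skills Kit
    compile 'com.amazon.alexa:alexa-skills-kit:1.1.2'
}
```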
The Servlet 4.0 specification is out, and Tomcat 9.0.x will support it. At this point, however, Tomcat 8.5.x is the best Tomcat version, and it supports the Servlet 3.1 spec.
Since OS X 10.7, Java is no longer preinstalled, so let's fix that first.
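A quick way to check is from the terminal (a sketch: on a machine without Java, invoking `java` brings up OS X's prompt offering to install it; alternatively, download a JDK directly from Oracle):

```shell
# Print the installed Java version, if any; the version string
# is the first line of `java -version` output (written to stderr).
java -version 2>&1 | head -n 1
```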
It all began back in 1990, when Hugh Loebner initiated a contest designed to implement the Turing Test. The Loebner Prize is an annual competition in artificial intelligence in which judges decide which chatbot is the most human-like. The format of the competition is that of a standard Turing test: in each round, a human judge simultaneously holds textual conversations with a computer program and with a human being, via computer. Based upon the responses, the judge must decide which is which.
The most recent Loebner Prize in Artificial Intelligence competition took place last September at Bletchley Park, where the German secret codes were broken during World War II. The winner was a chatbot named Rose, created by Bruce Wilcox and developed in ChatScript. You can chat with her here, using text input, or using your voice when you open the link in the Google Chrome web browser.
"Rose is a twenty-something computer hacker, living in San Francisco. As Rosette she won the 2011 Loebner competition. As Rose she won in 2014." read more…