I am the proud owner of an Apple Thunderbolt Display (2560×1440). It has outstanding image quality and comes with a built-in microphone, speakers, and an autofocus FaceTime HD camera.
(By the way, the camera does a rather poor job with white balance and has a red tint, but it becomes usable with a tool like iGlasses 3.)
On the back, it has three USB ports, one FireWire port, one Thunderbolt port, and one Ethernet port. There is also a single, larger cable to the laptop, combining Thunderbolt and power and keeping the desk neatly organized.
For a year or so, I have had an external USB hard drive (a 2 TB Western Digital) connected to one of the display’s USB ports. I’m mainly using it for Time Machine backups and for storing application disk images etc. Since all cables disappear behind the display and don’t clutter my desk, I never thought about connecting the hard drive directly to one of the laptop’s USB ports. However, as it turns out, my nice and neat setup comes with quite a penalty.
You may have seen the impressive demo of Google’s Web Speech API during the three-hour keynote at this year’s Google I/O conference. However, experiencing an interactive, speech-enabled web search yourself can be even more enlightening.
Your computer needs to be equipped with a microphone and speakers and needs to have the latest version of Google’s Chrome browser installed, currently version 27.0.1453.93.
Experiencing Speech-enabled Web Search
Now go to https://www.google.com and click on the small microphone icon on the right side of the text input field.
The page you were looking at disappears and is replaced by a much larger red microphone and text that changes from “Speak Now” to “Listening…”
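The same speech recognition that powers this search is exposed to any web page through Chrome’s prefixed Web Speech API. Here is a minimal dictation sketch, assuming a Chrome version that provides the `webkitSpeechRecognition` constructor; in other environments it simply does nothing:

```javascript
// Minimal dictation sketch using Chrome's prefixed Web Speech API.
// Assumes a page running in Chrome; in other browsers it is a no-op.
function startDictation(onResult) {
  if (typeof webkitSpeechRecognition === 'undefined') {
    return null; // API not available in this environment
  }
  var recognition = new webkitSpeechRecognition();
  recognition.lang = 'en-US';     // recognition language
  recognition.continuous = false; // stop after a single utterance
  recognition.onresult = function (event) {
    // The first result's first alternative holds the transcript.
    onResult(event.results[0][0].transcript);
  };
  recognition.start(); // Chrome shows the "Speak Now" microphone UI
  return recognition;
}
```

Calling `startDictation(function (text) { console.log(text); })` from a page served in Chrome prints whatever you say into the microphone.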
The legends and current elites of AIML and Turing AI (not necessarily disjoint groups) met for their yearly gathering at Seed Philly, a Philadelphia tech startup incubator, located in the heart of Center City.
The Chatbots 3.3 conference was a fast-flowing event with exciting flash-talk-style presentations, each followed by a Q&A segment. The amazing speaker lineup included several Loebner Prize winners, AIML engine developers, VCs, psychiatrists, artificial-intelligence researchers, and best-selling authors.
Dr. Richard Wallace
Dr. Richard Wallace, Loebner Prize winner in 2000, 2001, and 2004 and father of AIML, talked about the new AIML 2.0 specification, which focuses on making AIML more succinct while maintaining its simplicity. He stated that it currently takes about 10,000 AIML categories to create a believable character. ALICE, for instance, has about 100,000 categories, and the PROFESSOR, a bot with one of the largest AIML knowledge bases, has about 580,000. Writing that many categories is not only very time-consuming (experienced AIML authors may be able to write one category per minute) but also requires memory capacity not available on many embedded and mobile devices. AIML 2.0 therefore tries to make AIML more efficient, allowing the creation of a believable character with as little as 1/6 of the categories needed before.
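For readers who haven’t seen AIML before, a category is simply a pattern/template pair: the pattern matches the user’s input, and the template produces the bot’s reply. An illustrative (made-up) AIML 1.x category might look like this:

```xml
<category>
  <!-- Matches the normalized user input "what is your name" -->
  <pattern>WHAT IS YOUR NAME</pattern>
  <!-- The bot's response -->
  <template>My name is ALICE.</template>
</category>
```

Multiplying a handful of lines like these by several hundred thousand makes it easy to see why AIML 2.0’s push for succinctness matters.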
Tomcat 7 is the first Apache Tomcat release to support the Servlet 3.0, JSP 2.2, and EL 2.2 specifications. Please note that Tomcat 7 requires Java 1.6 or later, so we start by installing a recent version of Oracle’s JRE.
Install Oracle JRE 7 on Debian Linux
To install Oracle’s Java Runtime with apt-get, we first need to extend the list of apt-get’s sources. Once that is done, a Java installer package will actually download and install the Java SE Runtime Environment. Here are the steps to follow:
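The steps can be sketched as the commands below. The repository and package names (the WebUpd8 PPA and `oracle-java7-installer`) are assumptions that change over time, so adapt them to your distribution:

```shell
# Sketch only: repository and package names are assumptions and may
# have changed; adapt them to your distribution.

# 1. Extend apt-get's sources with a repository providing the installer.
sudo add-apt-repository ppa:webupd8team/java

# 2. Refresh the package index so the new source is picked up.
sudo apt-get update

# 3. Install the installer package, which downloads and installs
#    the Java SE Runtime Environment from Oracle.
sudo apt-get install oracle-java7-installer

# 4. Verify the installation.
java -version
```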
When listening to the radio or a podcast while driving to work, I don’t think I imagine what the person I’m listening to looks like. Still, if I later happen to see them for the first time, in a picture or video, I often find myself surprised.
A verbally responding mobile application has many obvious advantages. For instance, users don’t have to decipher tiny fonts on small displays; in fact, they don’t have to look at the display at all. Just like colors and typography contribute considerably to the look and feel of an application, so does voice quality for a voice-enabled mobile application.
There are at least three different approaches to synthesizing text.
First, there might be a Text-To-Speech module built into the OS, or separately installed Text-To-Speech engines can plug into the OS’s Text-To-Speech module.
Secondly, instead of requiring a separate install, a synthesizer and voices can be packaged and shipped with the application.
Lastly, a web service can be used to synthesize text. The advantage of this approach would be a more predictable and consistent voice quality, comparatively independent of the hardware and operating system used on the mobile client.
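The third approach can be sketched in a few lines: the client builds a request for a TTS web service and plays back the returned audio. The endpoint, parameters, and response format below are purely hypothetical placeholders, not a real API:

```javascript
// Hypothetical TTS web-service client: the URL and its query
// parameters are illustrative assumptions, not a real service.
function synthesize(text, callback) {
  var url = 'https://tts.example.com/synthesize' +
            '?voice=en-US&text=' + encodeURIComponent(text);
  // In a browser, the returned audio stream could be played with:
  //   new Audio(url).play();
  callback(url); // hand the audio URL to the caller
}
```

Because the synthesis happens server-side, every client hears the same voice, regardless of which engines are installed on the device.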