A couple of days ago, Google introduced the Chromecast, a device that plugs into an HDMI port of a TV and is hardly bigger than a USB flash drive. The main idea is that you can select a video on your smartphone or tablet and then watch it on the TV that has the Chromecast plugged in. Not only is the $35 device the most inexpensive way to watch Netflix on your TV, it more importantly has the potential to transform the mostly solitary experience of watching YouTube videos into a social event.
For this to work, the Chromecast device and the smartphone (or tablet) need to be on the same Wi-Fi network, which also needs to provide Internet connectivity. So far, only very few applications, like YouTube and Netflix, make full use of the Chromecast device. However, an SDK is already available, and it is rumored that Vimeo and HBO will support Chromecast soon.
That’s all very cool. But in my office, instead of a TV, I have an Apple Thunderbolt Display connected to a MacBook; I have often wished for a way to easily transition from my phone or tablet to the laptop when watching a video clip on YouTube.
Since neither laptops nor desktops typically come with HDMI input ports, the Chromecast device wouldn’t help. However, a software-based Chromecast simulator could do the trick. Let’s try it …
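A natural place to start is discovery: the first-generation Chromecast announced itself on the local network via DIAL, which in turn uses SSDP (UDP multicast). The sketch below is hypothetical and based on the published DIAL specification, not on Google’s SDK; the device description URL, port, and UUID are made-up examples.

```python
# Hypothetical sketch: the discovery half of a software "Chromecast".
# It joins the SSDP multicast group and answers DIAL M-SEARCH queries
# with a unicast response pointing at a (not yet implemented) device
# description endpoint.
import socket

SSDP_ADDR, SSDP_PORT = "239.255.255.250", 1900
DIAL_ST = "urn:dial-multiscreen-org:service:dial:1"

def build_ssdp_response(location, usn="uuid:fake-chromecast"):
    """Build the unicast reply sent back to a matching M-SEARCH."""
    return ("HTTP/1.1 200 OK\r\n"
            "CACHE-CONTROL: max-age=1800\r\n"
            f"ST: {DIAL_ST}\r\n"
            f"USN: {usn}\r\n"
            f"LOCATION: {location}\r\n"
            "\r\n").encode("ascii")

def serve_discovery(location):
    """Listen for DIAL searches and answer them (runs forever)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", SSDP_PORT))
    # Join the well-known SSDP multicast group on all interfaces.
    mreq = socket.inet_aton(SSDP_ADDR) + socket.inet_aton("0.0.0.0")
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    while True:
        data, addr = sock.recvfrom(1024)
        if b"M-SEARCH" in data and DIAL_ST.encode() in data:
            sock.sendto(build_ssdp_response(location), addr)

# Example (hypothetical LOCATION, per the DIAL spec's layout):
# serve_discovery("http://192.168.0.10:8008/ssdp/device-desc.xml")
```

After discovery, a real simulator would still have to serve the device description and the DIAL `/apps/<AppName>` endpoints, which is where the YouTube app actually hands over the video.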
If you want to add non-trivial speech output to your application, no matter whether it’s a desktop, web, or mobile app, you need to find a way to convert text into speech (TTS) and eventually provide it in a sound format (like MP3) that can be played back on end-users’ devices. While some operating systems come with TTS capabilities built in, the quality of the voices may vary more than you like, and a user experience spanning multiple OSes and platforms almost always justifies, or even requires, the deployment of a TTS web service.
All this is old news of course, and companies like Nuance, iSpeech, NeoSpeech, or AT&T provide Text-To-Speech services, varying greatly in price, quality, and performance. Other TTS providers like Acapela or LumenVox lease their TTS server software, i.e., you get a performance-constrained binary that can be deployed on a Red Hat Linux server in your own server room, or for instance on Amazon Elastic Compute Cloud (Amazon EC2). The obvious advantage over the completely outsourced approach is quality of service (response time) as well as security and privacy.
Every single fire started with a spark
[from Michelle Branch’s “Spark”]
Getting started with something new sometimes requires little more than a spark, which I hope to provide by showing how to use your Mac as a Text-To-Speech server, converting text strings to MP3 voice sound files on the fly. When we are done, you will be able to request an MP3 sound file either by sending an HTTP GET request, which streams an MP3 back in return, or by sending an HTTP POST request, which returns a path to the MP3 file, ready to be downloaded once or multiple times.
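As a rough preview of the GET variant, here is a minimal sketch. It assumes the macOS `say` command for synthesis and the `lame` encoder (e.g., installed via Homebrew) for the MP3 conversion; the `/tts` idea, the `text` query parameter, and the file names are made-up for illustration, not the final design.

```python
# Hypothetical sketch: a tiny HTTP GET endpoint that turns ?text=...
# into an MP3 using the macOS `say` command and the `lame` encoder.
import os
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

def build_commands(text, voice="Alex", workdir="/tmp"):
    """Return the two commands that turn `text` into an MP3, plus its path."""
    aiff = os.path.join(workdir, "speech.aiff")
    mp3 = os.path.join(workdir, "speech.mp3")
    say = ["say", "-v", voice, "-o", aiff, text]   # AIFF from the macOS TTS
    lame = ["lame", "--quiet", aiff, mp3]          # AIFF -> MP3
    return [say, lame], mp3

class TTSHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        text = query.get("text", ["Hello"])[0]
        commands, mp3 = build_commands(text)
        for cmd in commands:
            subprocess.check_call(cmd)
        self.send_response(200)
        self.send_header("Content-Type", "audio/mpeg")
        self.end_headers()
        with open(mp3, "rb") as f:
            self.wfile.write(f.read())

# To run on a Mac (blocking):
# HTTPServer(("", 8080), TTSHandler).serve_forever()
```

A production version would at least sanitize the input, cache repeated phrases, and generate unique file names per request; the point here is only how little glue sits between the macOS voices and an HTTP endpoint.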
While listening to the Rural San Diego County CAL FIRE and USFS Live Audio Feed, I heard firefighters talking about a structure being threatened as the #ChariotFire was running up the east side of Monument Peak in the Mount Laguna area. Reason enough to do a quick search via Google Maps to find out about it.
However, the satellite images provided by Google Maps were not very telling, but an alternative service offered surprisingly better quality.
Don’t take my word for it, judge for yourself.
I am the proud owner of an Apple Thunderbolt Display (2560×1440). It has outstanding image quality and comes with a built-in microphone, speakers, and an autofocus FaceTime HD camera.
(By the way, the camera does a rather poor job with white balance and has a red tint, but becomes usable with a tool like iGlasses 3.)
On the back, it has three USB ports, one FireWire, one Thunderbolt, and one Ethernet port. There is also a single, thicker cable to the laptop, combining Thunderbolt and power and keeping the desk neatly organized.
For a year or so, I have had an external USB hard drive (a 2-terabyte Western Digital) connected to one of the display’s USB ports. I’m mainly using it for Time Machine backups and for storing application disk images etc. Since all cables disappear behind the display and don’t clutter my desk, I never thought about connecting the hard drive directly to one of the laptop’s USB ports. However, as it turns out, my nice and neat setup comes with quite a penalty.
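Before blaming the display’s built-in hub, it helps to measure. A quick way to compare the two connections is a timed sequential write to the drive, once through the display and once plugged into the laptop directly. The sketch below is a rough, write-only proxy, not a real benchmark, and the path is a hypothetical mount point.

```python
# Rough sequential-write throughput check: write `size_mb` MiB of zeros
# to a file on the target drive, fsync, and report MB/s.
import os
import time

def write_throughput(path, size_mb=64, block_size=1 << 20):
    """Write `size_mb` MiB to `path`, delete the file, return MB/s."""
    block = b"\0" * block_size
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())        # make sure the data actually hit the disk
    elapsed = max(time.time() - start, 1e-9)
    os.remove(path)
    return size_mb / elapsed

# Example, with a made-up mount point:
# print(write_throughput("/Volumes/WD-2TB/ddtest.bin"))
```

Running this once per connection path gives a simple before/after number for the penalty the display’s hub imposes.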
You may have seen the impressive demo of Google’s Web Speech API during the three-hour keynote at this year’s Google I/O conference. However, experiencing an interactive, speech-enabled web search yourself can be even more enlightening.
Your computer needs to be equipped with a microphone and speakers, and needs to have the latest version of Google’s Chrome browser installed, which is currently version 27.0.1453.93.
Experiencing Speech-enabled Web Search
Now go to https://www.google.com and click on the small microphone icon on the right side of the text input field.
The page you were looking at disappears and is replaced with a much larger red microphone and text that changes from “Speak Now” to “Listening…”.
The legends and current elites of AIML and Turing AI (not necessarily disjoint groups) met for their yearly gathering at Seed Philly, a Philadelphia tech startup incubator, located in the heart of Center City.
The Chatbots 3.3 conference was a fast-flowing event with exciting flash-talk-style presentations, each followed by a Q&A segment. The amazing speaker lineup included several Loebner Prize winners, AIML engine developers, VCs, psychiatrists, artificial-intelligence researchers, and best-selling authors.
Dr. Richard Wallace
Dr. Richard Wallace, Loebner Prize winner in 2000, 2001, and 2004 and father of AIML, talked about the new AIML 2.0 specification, which focuses on making AIML more succinct while maintaining its simplicity. He stated that it currently takes about 10,000 AIML categories to create a believable character. ALICE, for instance, has about 100,000 categories, and the PROFESSOR, a bot with one of the largest AIML knowledge bases, has about 580,000. Writing that many categories is not only very time-consuming (experienced AIML authors may be able to write one category per minute) but also requires a memory capacity not available on many embedded and mobile devices. AIML 2.0 therefore tries to make AIML more efficient, allowing the creation of a believable character with as few as one sixth of the categories needed before.
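One concrete source of those savings in AIML 2.0 is the new zero-or-more-words wildcards (`^` and `#`). As a small illustration (my example, not one from the talk), a single 2.0 category can greet a user who says “hello” anywhere in a sentence:

```xml
<category>
  <pattern>^ HELLO ^</pattern>
  <template>Hi there!</template>
</category>
```

With the classic one-or-more wildcard `*`, the same coverage required four separate patterns: `HELLO`, `HELLO *`, `* HELLO`, and `* HELLO *`. Multiplied across thousands of inputs, collapsing such pattern families is exactly how a knowledge base shrinks toward the one-sixth figure.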