by Ryan Jones
Wednesday, December 14, 2016
RoboUniverse 2016 San Diego (http://robouniverse.com/san-diego/2016/)


If you hear the name RoboUniverse, what's the first thing you think of? Probably robotics, drones, Arduino, Raspberry Pi, the newest innovations, and hands-on demos; not an exhibit hall filled almost entirely with 3D printing companies promoting their printers and servers, or companies trying to sell smartphone accessories. I won't even get started on the outrageous pricing. That's not to say the event was a complete loss; I was just hoping for a more hands-on environment.

Of the seminars I attended, the AI Deep Learning and Amazon Echo/Alexa presentations went hand in hand with one another.


AI Deep Learning (NervanaSys.com and UCSD)

The AI Deep Learning topic was covered by two different speakers. The first was YinYin from NervanaSys.com. A quote from the site: “Using deep learning as a computational paradigm, Nervana is building a full-stack solution that is optimized from algorithms down to silicon to solve machine learning problems at scale.” What I took from the presentation is that they offer access to their solution free of charge on GitHub, provided you have the personal or company funds to support an ever-growing deep learning AI. In other words, it's FREE, but as the AI grows you will need to invest in data and processing power to handle that growth. Think of it like having a new child: they grow with time and knowledge, and every day and every year they need new clothes; likewise, the AI will need upgrades over time. If you don't have the funds to build your own system, they offer a cloud solution that can grow with your development, sized to your current needs, much like the cloud services available today.

They claim that the Nervana Engine is 10 times faster than the current top GPU. It is built for scaling up deep learning, can run on either a CPU or a GPU (though they recommend running on their own custom processor), comes with a compiler (the Intel Nervana Graph Compiler), and can work with your current framework.

If you are interested in learning more about the system, you can obtain a copy on GitHub (https://github.com/NervanaSystems/ModelZoo) and go through their training program (https://www.nervanasys.com/learn/).
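The Nervana stack itself is far too big to demo here, but a plain-NumPy sketch can make the underlying point concrete: training even a tiny network is just repeated matrix math, and the cost of that math grows with the model and the data, which is exactly where specialized hardware and cloud scaling come in. Nothing below uses the neon API; the network shape, the synthetic data, and the learning rate are all made up for illustration.

```python
import numpy as np

# Toy two-layer network trained with plain gradient descent on synthetic
# data. Every size here is invented; the point is that each training step
# is dominated by matrix multiplies, and those grow with model and data.
np.random.seed(0)

n_samples, n_in, n_hidden = 1024, 64, 128
X = np.random.randn(n_samples, n_in)
true_w = np.random.randn(n_in, 1)
y = (X.dot(true_w) > 0).astype(float)         # synthetic binary labels

W1 = 0.1 * np.random.randn(n_in, n_hidden)
W2 = 0.1 * np.random.randn(n_hidden, 1)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(100):
    # Forward pass: two matrix multiplies dominate the cost.
    h = np.tanh(X.dot(W1))
    p = sigmoid(h.dot(W2))

    # Cross-entropy loss and its gradients (backward pass).
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    dlogits = (p - y) / n_samples
    dW2 = h.T.dot(dlogits)
    dh = dlogits.dot(W2.T) * (1.0 - h ** 2)   # tanh derivative
    dW1 = X.T.dot(dh)

    W1 -= lr * dW1
    W2 -= lr * dW2

print("final loss: %.3f" % loss)
```

Scale n_samples and n_hidden up by a few orders of magnitude and this loop is exactly the workload that pushes you from a laptop CPU onto GPUs, custom silicon, or a pay-as-you-grow cloud plan.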


After YinYin spoke, Gert Cauwenberghs from UCSD spoke on reverse-engineering the cognitive brain in silicon. This topic wasn't for the casual listener; it was definitely higher learning. If you are interested in deep learning AI and in comparing it to how the human brain drives the different parts of the body, then this was the lecture for you. His first point was that in current AI deep learning we are just at the tip of understanding how to create a deep learning AI. In recent news, the AI AlphaGo beat the world Go champion Lee Sedol (https://deepmind.com/research/alphago/).

From this simple introduction he revved up and began to talk about brain research through advancing innovative neurotechnologies, how nanotechnology has inspired a grand challenge for the future of computing, neuromorphic engineering, the physics of neural computation, event-coding silicon retinas and charge-threshold detection using an APS CMOS imager, reconfigurable synaptic connectivity, large-scale reconfigurable neuromorphic computing, the spiking synaptic sampling machine (S3M), silicon learning machines for adaptive learning, sub-micro power, integrated systems neuroengineering, something called a “Kerneltron” (an adiabatic support vector machine), and closing the gap.

This was a comprehensive list, and it kept me thoroughly engaged. What I effectively recall from the presentation are only a few facts. The human eye breaks incoming light into colors across different quadrants; using that understanding, the researchers created threshold mapping and red-dot mapping for deep learning AI. The human brain sends out 15 synaptic responses at a time and gets back about half. I know it doesn't seem like much, but this comprehensive session was only about 30 minutes long.
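The “event coding silicon retina” idea came up more than once, and the core of it is easy to sketch: instead of shipping full frames, a retina-style sensor only reports pixels whose brightness changes by more than a threshold. The sketch below is my own toy illustration in NumPy, not anything shown in the talk; the frames and the threshold value are made up.

```python
import numpy as np

def retina_events(prev_frame, curr_frame, threshold=0.15):
    """Toy event-coding step: report only the pixels whose log-intensity
    change exceeds a threshold, along with the sign of the change.
    Returns (rows, cols, polarities)."""
    eps = 1e-6                                   # avoid log(0)
    delta = np.log(curr_frame + eps) - np.log(prev_frame + eps)
    rows, cols = np.where(np.abs(delta) > threshold)
    polarity = np.sign(delta[rows, cols]).astype(int)   # +1 brighter, -1 darker
    return rows, cols, polarity

# Two made-up 8x8 "frames": a bright blob moves one pixel to the right.
prev_frame = np.full((8, 8), 0.2)
curr_frame = np.full((8, 8), 0.2)
prev_frame[3:5, 2:4] = 0.9
curr_frame[3:5, 3:5] = 0.9

rows, cols, polarity = retina_events(prev_frame, curr_frame)
print("%d events out of %d pixels" % (len(rows), prev_frame.size))
for r, c, p in zip(rows, cols, polarity):
    print("  pixel (%d,%d) %s" % (r, c, "brightened" if p > 0 else "dimmed"))
```

Only the handful of pixels along the blob's edges fire events; everything that didn't change stays silent, which is the whole appeal of the approach for low-power silicon.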

If you'd like to read more on these subjects, refer to the following documents available on the web:


Amazon Alexa

After the AI presentation, I attended the Amazon Alexa seminar. With AI deep learning and its possibilities on the brain, Alexa was a perfect transition. The presenter from Amazon, Paul Cutsinger, was excellent. He kept the audience engaged, even interacting with audience members, and he explained the benefits of the Amazon Alexa project in a way so clear and precise that even a fifth grader would grasp it.

If you aren't familiar with Amazon Alexa, it is a service for verbally controlling the IoT (Internet of Things) in your environment. That environment could be a room, a house, an office, etc. The device can be previewed at https://www.amazon.com/Amazon-Echo-Bluetooth-Speaker-with-WiFi-Alexa/dp/B00X4WHP5E. Before I get into the technical details: Alexa is the API (Application Programming Interface), while the Echo and Echo Dot are physical devices built by Amazon that use the Alexa API.

If you think Alexa is just another so-called smart device, you would be only half right. Alexa was built with AI deep learning in mind. It is currently not as deep as the nervanasys.com concept, but it can be programmed to respond to an array of commands or requests, and it can respond both verbally and actively. This implies that you could, technically, create a bot that responds to certain speech patterns. If I'm not making sense, think simple: I would say, "Alexa, how are you doing today?" and Alexa would respond, "I'm fine, and how are you, John?", assuming your name was John.

I'd like to speculate on how this could be useful. Imagine you work in the medical industry or a psychiatric ward and need to rehabilitate someone back into society who has trouble picking up on how to respond to the average person. Using Alexa, you could create a series of responses that someone would give in an everyday situation. Or let's bring drones into the mix: can you imagine plotting out a path for a drone and then using Alexa to say, "Start drone, fly path one"? Yes, I know that's quite broad, but it's just an idea.
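To make the "Alexa, how are you doing today?" example a bit more concrete, here is a rough sketch of the kind of handler a custom skill runs behind the scenes, written as an AWS Lambda function in Python. The intent name "HowAreYouIntent" and the reply text are my own inventions for illustration, and the JSON shown is only the core of the request/response format, so treat it as a sketch rather than Amazon's reference code.

```python
import json

def build_speech_response(text, end_session=True):
    """Wrap plain text in the basic Alexa response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end_session,
        },
    }

def lambda_handler(event, context):
    """Entry point Alexa calls for every request sent to the skill."""
    request = event.get("request", {})

    if request.get("type") == "LaunchRequest":
        # "Alexa, open <skill name>"
        return build_speech_response("Hi, ask me how I'm doing.", end_session=False)

    if request.get("type") == "IntentRequest":
        intent_name = request.get("intent", {}).get("name")
        if intent_name == "HowAreYouIntent":          # hypothetical intent
            return build_speech_response("I'm fine, and how are you, John?")

    return build_speech_response("Sorry, I didn't catch that.")

# Quick local test with a made-up IntentRequest payload.
if __name__ == "__main__":
    fake_event = {"request": {"type": "IntentRequest",
                              "intent": {"name": "HowAreYouIntent"}}}
    print(json.dumps(lambda_handler(fake_event, None), indent=2))
```

The spoken phrases that trigger an intent are defined separately in the skill's interaction model; the code above only decides what to say back once Alexa has matched the utterance.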

Paul Cutsinger used Alexa to show fun little examples, like turning on his favorite Pandora station and turning off the house lights as he walks upstairs before bed. He showed how easy it was to build on the Alexa API and even offered to show anyone how to program with it in room nine of the San Diego Convention Center, which is where I headed right after the seminar. Paul also confirmed that Alexa can run on a Raspberry Pi, so you don't have to go out and buy an Amazon Echo if you're broke.

Upon entering room nine, I found only a few people interested in learning to program Alexa. I sat down and realized that I didn't have a computer. That was a let-down, but luckily one of the other attendees, Walter, had a spare laptop that he let me use. As I began to study how the Alexa API worked, I learned that it can be programmed using Java, NodeJS, and Objective-C. The fact that it accepts Java is a godsend. The university I attend, Coleman University, currently teaches five courses on Java, and the first three can get a bit boring since you don't really get to see the fruits of your studying applied anywhere. With Alexa, we can now apply that knowledge. We can use our skills to build a portfolio, which will drive ambition and employer interest and further a student's desire to learn.

If you'd like to learn more, I recommend you sign up for a FREE developer account:
Bit.ly/alexaquickstart

Or stop by ENVI, because I'm hopeful we'll soon be getting Alexa support from Amazon.com themselves.