Wednesday was yet another interesting day at CES 2016! Toyota showed off some futuristic cars that are meant to be a start toward autonomous parking. The vehicles can perform some self-driving maneuvers like parking on their own, parallel parking, and even stopping in an emergency.
Toyota also mentioned that the cars have a form of artificial intelligence that allows them to communicate with other autonomous cars about things like accidents ahead, bad weather, etc. Crazy or what?
There were also some cool video-related things I got to see. For instance, I saw a new product that lets you view your smartphone screen in 3D without the use of glasses. I also tried some virtual reality that was a full 360 degrees; I actually had to turn my body and head completely around to view everything.
Oh yeah, and I got to ride a hoverboard! Do I look like I’m a little off balance? That’s probably because I felt like I was about to fall off. Luckily, I didn’t fall off and was able to attend a panel.
Panel – Is Typing Dead?
I attended the session “Is Typing Dead?,” part of CNET’s Next Big Thing series. Even the voice of Apple’s Siri was in attendance.
The session was all about voice and gesture recognition, and how these technologies are progressing and will be used in the future. Voice recognition is getting much better and is being integrated further into all aspects of our technology: smart homes, cars, phones, appliances, you name it. However, it isn’t all down to voice recognition; gesture and movement recognition are also advancing to the point of being able to read intent from certain gestures.
As an example, they mentioned something called “gaze technology,” which is basically eye-tracking technology. This could be used in vehicles, for instance, to detect whether a driver is drowsy based on certain eye movements.
Another very interesting technology discussed was “implicit interactions.” Basically, they said that once you have multiple devices with gesture and voice recognition capabilities, it can get confusing to know which device is being interacted with. Implicit interactions are about devices being able to recognize, based on your gestures and other interactions, which device you are talking to. In a sense, it is a level of artificial intelligence.
Similar to implicit interactions, gesture control was discussed. Part of the problem with gesture control is that there generally isn’t enough room for all your gestures on something as small as a watch or smartphone. The panel thinks some solutions will come from things like projections hovering above the device you are working on, so that you can interact with the projection to control the device.
Since artificial intelligence was discussed throughout the session, a lot of questions came from the audience at the end regarding safety, essentially returning to the questions science fiction has long posed: How safe is AI? How much of our humanity are we going to lose to emerging technologies? The panelists said that while we should be careful, they generally don’t see any problem right now, and we should continue to move forward.
It was a long day filled with tons of cool and exciting technology. Tomorrow I’ll be heading to the Venetian conference center to check out what else there is to see here at CES! Stay tuned to the Stellar Solutions Blog for day-to-day, behind-the-scenes recaps of my time here at #CES2016!