Google, Nvidia, electronic skin, and MIT’s AI tools for the blind are making news in today’s AI Minute.
Transcript
- Google recently announced its second-generation Tensor Processing Units. Faster and more capable at machine learning than the first generation, the chips are built to work with Google’s TensorFlow framework for AI workloads and will be available as a service on the Google Cloud Platform.
- In related news, Nvidia announced its new Volta V100 processor, a GPU focused on deep learning. Nvidia’s founder and chief executive officer described AI as “driving the greatest technology advances in human history.” He added, “It will automate intelligence and spur a wave of social progress unmatched since the industrial revolution.”
- Scientists at Georgia Tech have developed a material that could be used to make self-powered electronic skin or self-powered soft robots. By combining an elastomer and an ionic hydrogel into a hybrid material, the team is able to harvest energy from movement and provide tactile sensing.
- As reported by MIT News, MIT AI researchers have developed a new system to help visually impaired users better navigate their surroundings. The system employs a 3-D camera worn in a pouch hung around the neck; a processing unit running proprietary algorithms identifies surfaces and their orientations. Meanwhile, a sensor belt sends different types of tactile signals to the user. Additionally, a braille interface displays symbols describing the objects in a user’s environment.