DeepHand Uses Neural Networks To Transport Your Hands Into Virtual Reality

Posted: Jun 23 2016, 12:37am CDT | Updated: Jun 23 2016, 10:04pm CDT, in News | Latest Science News

Credit: C Design Lab / DeepHand: Robust Hand Pose Estimation by Completing a Matrix Imputed with Deep Features


Seeing your hands in virtual worlds dramatically increases immersion. New research uses neural networks and deep learning to represent your hands' movements and gestures in VR.

Researchers at Purdue University have developed a sophisticated new way of representing users' hands in virtual reality.

DeepHand uses a “convolutional neural network” that mimics the human brain and is capable of “deep learning” to understand the hand’s nearly endless complexity of joint angles and contortions.

“We figure out where your hands are and where your fingers are and all the motions of the hands and fingers in real time,” said Karthik Ramani, Purdue University’s Donald W. Feddersen Professor of Mechanical Engineering and director of the C Design Lab.

DeepHand uses a depth-sensing camera to capture the user’s hand, and specialized algorithms then interpret hand motions.

The researchers “trained” DeepHand with a database of 2.5 million hand poses and configurations. The positions of finger joints are assigned specific “feature vectors” that can be quickly retrieved.
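As a rough illustration of that idea (not the authors' code), the sketch below shows how a small convolutional network in PyTorch might map a depth-camera crop of a hand to a compact feature vector of the kind that gets matched against such a database. Every layer size, name and input resolution here is an assumption made purely for the example.

# Hypothetical sketch: a small CNN turning a depth crop of a hand into a feature vector.
import torch
import torch.nn as nn

class HandFeatureNet(nn.Module):
    def __init__(self, feature_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, feature_dim)

    def forward(self, depth_crop):           # depth_crop: (batch, 1, 96, 96)
        x = self.conv(depth_crop).flatten(1)
        return self.fc(x)                    # (batch, feature_dim) feature vector

net = HandFeatureNet()
fake_depth = torch.rand(1, 1, 96, 96)        # stand-in for a depth-camera hand crop
features = net(fake_depth)
print(features.shape)                        # torch.Size([1, 128])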

The research paper was authored by doctoral students Ayan Sinha and Chiho Choi, together with Ramani.

“We identify key angles in the hand, and we look at how these angles change, and these configurations are represented by a set of numbers,” Sinha said.

The system then selects from the database the configurations that best fit what the camera sees.

“The idea is similar to the Netflix algorithm, which is able to select recommended movies for specific customers based on a record of previous movies purchased by that customer,” Ramani said.

DeepHand selects “spatial nearest neighbors” that best fit hand positions picked up by the camera. Although training the system requires significant computing power, once it has been trained it can run on a standard computer.
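To make the nearest-neighbor step concrete, here is a minimal, hypothetical Python/NumPy sketch of looking up the stored poses whose feature vectors lie closest to the feature computed for the current frame. The random database stands in for the real 2.5-million-pose training set, and the 20-angle pose format is an assumption for illustration only.

# Illustrative sketch only: "spatial nearest neighbor" lookup over a pose database.
import numpy as np

rng = np.random.default_rng(0)
db_features = rng.random((100_000, 128))      # one feature vector per stored pose
db_angles = rng.random((100_000, 20)) * 90    # joint angles (degrees) for each pose

def nearest_poses(query, k=5):
    """Return indices of the k database poses closest to the query feature vector."""
    dists = np.linalg.norm(db_features - query, axis=1)
    return np.argsort(dists)[:k]

query = rng.random(128)                       # feature vector from the current frame
idx = nearest_poses(query)
estimated_angles = db_angles[idx].mean(axis=0)  # blend the best-matching poses
print(estimated_angles.shape)                 # (20,)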

A research paper about DeepHand will be presented during CVPR 2016, a computer vision conference in Las Vegas running from Sunday, June 26, to July 1.

More information about the DeepHand paper is available on the C Design Lab website.
