The depth camera creates a constantly updated 3-D map of the desktop, noting when objects move and when hands enter the scene. This information is then passed along to the rig’s brains, which Xiao's team programmed to distinguish between fingers and, say, a dry erase marker. This distinction is important since Desktopography works like an oversized touchscreen. Xiao designed a few new interactions, like tapping with five fingers to surface an application launcher, or lifting a hand to exit an app. But for the most part, Desktopography applications still rely on tapping, pinching, and swiping. Smartly, the researchers designed a feature that makes digital apps snap to hard edges on laptops or phones, which could allow projected interfaces to act like an augmentation of physical objects like keyboards. “We want to put the digital and physical in the same environment so we can eventually look at merging these things together in a very intelligent way,” Xiao says.
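The edge-snapping behavior described above could be sketched roughly as follows. This is an illustrative toy, not Desktopography's actual code: the names (`Rect`, `snap_to_edges`) and the 2 cm threshold are assumptions for the sake of example. The idea is simply that when a projected app's rectangle lands close to the edge of a detected physical object, it gets pulled flush against that edge.

```python
# Hypothetical sketch of snap-to-edge logic; all names and values are
# illustrative assumptions, not taken from the Desktopography system.
from dataclasses import dataclass

@dataclass
class Rect:
    x: float  # left edge (cm, in desktop coordinates)
    y: float  # top edge
    w: float  # width
    h: float  # height

def snap_to_edges(app: Rect, objects: list[Rect], threshold: float = 2.0) -> Rect:
    """Snap a projected app rect to any nearby physical-object edge."""
    x, y = app.x, app.y
    for obj in objects:
        # If the app's left edge is within `threshold` of the object's
        # right edge, pull the app flush against it.
        if abs(app.x - (obj.x + obj.w)) <= threshold:
            x = obj.x + obj.w
        # Likewise for the app's top edge and the object's bottom edge.
        if abs(app.y - (obj.y + obj.h)) <= threshold:
            y = obj.y + obj.h
    return Rect(x, y, app.w, app.h)

# Example: a projected calculator hovering 1.5 cm to the right of a
# laptop snaps flush to the laptop's right edge.
laptop = Rect(0, 0, 30, 20)
calculator = Rect(31.5, 5, 10, 10)
print(snap_to_edges(calculator, [laptop]))  # x snaps from 31.5 to 30.0
```

A real implementation would also handle the remaining edges and track objects as they move, but the core check is just this proximity test against the depth camera's object map.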