Ubicomp Lab – UW News

Affordable camera reveals hidden details invisible to the naked eye
/news/2015/10/15/affordable-camera-reveals-hidden-details-invisible-to-the-naked-eye/ | Thu, 15 Oct 2015 18:12:53 +0000
Compared to an image taken with a normal camera (left), HyperCam images (right) reveal detailed vein and skin texture patterns that are unique to each individual. Photo: ÌìÃÀÓ°ÊÓ´«Ã½

Peering into a grocery store bin, it’s hard to tell if a peach or tomato or avocado is starting to go bad underneath its skin.

But an affordable camera technology being developed by the ÌìÃÀÓ°ÊÓ´«Ã½ and Microsoft Research might enable consumers of the future to tell which piece of fruit is perfectly ripe or what’s rotting in the fridge.

The team of computer scientists and electrical engineers developed HyperCam, a lower-cost hyperspectral camera that uses both visible and invisible near-infrared light to “see” beneath surfaces and capture unseen details. This type of camera is typically used in industrial applications and can cost from several thousand to tens of thousands of dollars.

In a paper presented at the UbiComp 2015 conference, the team detailed a hardware solution that costs roughly $800, or potentially as little as $50 to add to a mobile phone camera. They also developed intelligent software that easily finds “hidden” differences between what the hyperspectral camera captures and what can be seen with the naked eye.

When HyperCam captured images of a person’s hand, for instance, they revealed detailed vein and skin texture patterns that are unique to that individual. That can aid in everything from gesture recognition to biometrics to distinguishing between two different people playing the same video game.

As a preliminary investigation of HyperCam’s utility as a biometric tool, the team ran a test with 25 different users; the system was able to differentiate between hand images of those users with 99 percent accuracy.

HyperFrames taken with HyperCam predicted the relative ripeness of 10 different fruits with 94 percent accuracy, compared with only 62 percent for a typical (RGB) camera. Photo: ÌìÃÀÓ°ÊÓ´«Ã½

In another test, the team also took hyperspectral images of 10 different fruits, from strawberries to mangoes to avocados, over the course of a week. The HyperCam images predicted the relative ripeness of the fruits with 94 percent accuracy, compared with only 62 percent for a typical camera.
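The article doesn’t say how ripeness was predicted from the images, but the general idea of summarizing each fruit’s HyperFrame by its reflectance in every band and comparing it against fruit of known ripeness can be sketched roughly as follows. The per-band averages and the nearest-neighbor comparison are illustrative assumptions, not the team’s published method.

```python
import numpy as np

def band_means(hyperframe):
    """Summarize a fruit's HyperFrame as its average reflectance in each band."""
    return hyperframe.reshape(hyperframe.shape[0], -1).mean(axis=1)

def predict_ripeness(query_frame, labeled_frames, labels):
    """1-nearest-neighbor ripeness guess from per-band average reflectance.

    labeled_frames: HyperFrames of fruit whose ripeness (labels) is known,
    e.g. days until peak ripeness. Purely illustrative features and classifier.
    """
    query = band_means(query_frame)
    distances = [np.linalg.norm(query - band_means(f)) for f in labeled_frames]
    return labels[int(np.argmin(distances))]
```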

“It’s not there yet, but the way this hardware was built you can probably imagine putting it in a mobile phone,” said Shwetak Patel, Washington Research Foundation Endowed Professor of Computer Science & Engineering and Electrical Engineering at the UW.

“With this kind of camera, you could go to the grocery store and know what produce to pick by looking underneath the skin and seeing if there’s anything wrong inside. It’s like having a food safety app in your pocket,” Patel said.

Hyperspectral imaging is used today in everything from satellite imaging and energy monitoring to infrastructure and food safety inspections, but the technology’s high cost has limited its use to industrial or commercial purposes. The UW and Microsoft Research team wanted to see if they could make a relatively simple and affordable hyperspectral camera for consumer uses.

“Existing systems are costly and hard to use, so we decided to create an inexpensive hyperspectral camera and explore these uses ourselves,” said , a Microsoft researcher who worked on the project. “After building the camera we just started pointing it at everyday objects — really anything we could find in our homes and offices — and we were amazed at all the hidden information it revealed.”

A typical camera divides visible light into three bands — red, green and blue — and generates images using different combinations of those colors. But cameras that utilize other wavelengths in the electromagnetic spectrum can reveal invisible differences.

Near-infrared cameras, for instance, can reveal whether crops are healthy or a work of art is genuine. Thermal infrared cameras can visualize where heat is escaping from leaky windows or an overloaded electrical circuit.

HyperCam is a low-cost hyperspectral camera developed by UW and Microsoft Research that reveals details that are difficult or impossible to see with the naked eye. Photo: ÌìÃÀÓ°ÊÓ´«Ã½

“When you look at a scene with a naked eye or a normal camera, you’re mostly seeing colors. You can say, ‘Oh, that’s a pair of blue pants,’” said lead author Mayank Goel, a UW computer science and engineering doctoral student and Microsoft Research graduate fellow. “With a hyperspectral camera, you’re looking at the actual material that something is made of. You can see the difference between blue denim and blue cotton.”

HyperCam, which uses the visible and near-infrared parts of the electromagnetic spectrum, illuminates a scene with 17 different wavelengths and generates an image for each.
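The article doesn’t reproduce HyperCam’s hardware details, but the capture loop that description implies (switch on one LED, grab a frame, switch it off, repeat for all 17 bands) can be sketched in a few lines of Python. The wavelength values and the set_led and capture_frame functions below are hypothetical placeholders, not HyperCam’s actual driver interface.

```python
import numpy as np

# Illustrative wavelengths only: 17 bands spanning visible and near-infrared
# light, as the article describes; the exact LED wavelengths are assumptions.
WAVELENGTHS_NM = np.linspace(450, 990, 17)

def capture_hyperframe(set_led, capture_frame):
    """Capture one 17-band "HyperFrame" by cycling through the illumination LEDs.

    set_led(index, on) and capture_frame() are hypothetical stand-ins for
    whatever LED driver and camera interface the hardware exposes.
    """
    bands = []
    for i in range(len(WAVELENGTHS_NM)):
        set_led(i, True)               # illuminate the scene at one wavelength
        bands.append(capture_frame())  # grayscale frame (H x W) under that light
        set_led(i, False)
    return np.stack(bands, axis=0)     # shape: (17, H, W)
```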

One challenge in hyperspectral imaging is sorting through the sheer volume of frames produced. The UW software analyzes the images and finds ones that are most different from what the naked eye sees, essentially zeroing in on ones that the user is likely to find most revealing.

“It mines all the different possible images and compares it to what a normal camera or the human eye will see and tries to figure out what scenes look most different,” Goel said.
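A minimal sketch of that mining step, assuming a band stack like the one in the capture sketch above: score each band image by how poorly it correlates with a grayscale version of the ordinary photo, then surface the least ordinary-looking bands first. This is an illustrative stand-in, not the team’s published algorithm.

```python
import numpy as np

def rank_revealing_bands(hyperframe, rgb_image):
    """Rank spectral band images by how different they look from a normal photo.

    hyperframe: (num_bands, H, W) array, e.g. from capture_hyperframe() above.
    rgb_image:  (H, W, 3) array from an ordinary camera of the same scene.
    Returns band indices ordered from most to least "revealing".
    """
    gray = rgb_image.astype(float).mean(axis=2).ravel()   # naked-eye reference
    gray = (gray - gray.mean()) / (gray.std() + 1e-8)

    scores = []
    for band in hyperframe:
        b = band.astype(float).ravel()
        b = (b - b.mean()) / (b.std() + 1e-8)
        corr = float(np.dot(gray, b)) / gray.size          # Pearson correlation
        scores.append(1.0 - abs(corr))                     # high = most different
    return np.argsort(scores)[::-1]
```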

One remaining challenge is that the technology doesn’t work particularly well in bright light, Goel said. Next research steps will include addressing that problem and making the camera small enough to be incorporated into mobile phones and other devices, he said.

Co-authors include Eric Whitmire, Alex Mariakakis and the late Gaetano Borriello of the UW’s Department of Computer Science & Engineering, and T. Scott Saponas, Neel Joshi, Dan Morris, Brian Guenter and Marcel Gavriliu at Microsoft Research.

The project was funded by Microsoft Research.

For more information, contact the research team at hypercam@cs.washington.edu.

New wearable technology can sense appliance use, help track carbon footprint
/news/2015/09/08/new-wearable-technology-can-sense-appliance-use-help-track-carbon-footprint/ | Tue, 08 Sep 2015 20:35:56 +0000
This MagnifiSense research prototype can sense what appliances its wearer is using, based on the electromagnetic radiation emanating from devices such as blenders, remote controls or even automobiles. Photo: ÌìÃÀÓ°ÊÓ´«Ã½

In today’s smart home, technologies can track how much energy a particular appliance like a refrigerator or television or hair dryer is gobbling up. What they don’t typically show is which person in the house actually flicked the switch.

A new wearable technology developed at the ÌìÃÀÓ°ÊÓ´«Ã½ called MagnifiSense can sense what devices and vehicles the user interacts with throughout the day, which can help track that individual’s carbon footprint, enable smart home applications or even assist with elder care.

In a paper to be presented this week at the UbiComp 2015 conference, MagnifiSense correctly classified 94 percent of users’ interactions with 12 common devices after a quick one-time calibration, including microwaves, blenders, remote controls, electric toothbrushes, laptops, light dimmers, and even cars and buses. Even without the calibration, MagnifiSense was still correct 83 percent of the time.

The sensor worn on the wrist uses unique electromagnetic radiation signatures generated by electrical components or motors in those devices to pinpoint when its wearer flicks a light switch, turns on a stove or even boards a train.
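The article doesn’t describe how those moments are detected, but a crude stand-in is to watch for the sensor signal’s short-term energy jumping well above its quiet baseline. The window length and threshold factor below are arbitrary assumptions, not values from the MagnifiSense paper.

```python
import numpy as np

def detect_switch_on(signal, fs, window_s=0.1, factor=5.0):
    """Return sample indices where the EMI signal jumps from quiet to active.

    signal: raw samples from the wrist-worn sensor; fs: sampling rate in Hz.
    The window length and threshold factor are illustrative values only.
    """
    win = max(1, int(window_s * fs))
    squared = (signal - np.mean(signal)) ** 2
    rms = np.sqrt(np.convolve(squared, np.ones(win) / win, mode="same"))
    baseline = np.median(rms)              # typical "nothing happening" level
    active = rms > factor * baseline       # True while a device is radiating
    return np.flatnonzero(np.diff(active.astype(int)) == 1)
```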

“It’s another way to log what you’re interacting with so at the end of the day or month you can see how much energy you used,” said Shwetak Patel, Washington Research Foundation Endowed Professor of Computer Science & Engineering and Electrical Engineering, who directs the UW Ubicomp Lab.

“Right now, we can know that lights are 20 percent of your energy use. With this, we divvy it up and say who consumed that energy,” said Patel.

Appliances and vehicles such as cars, buses and trains emit a unique pattern of electromagnetic radiation, based on the combination of electrical components that make them run. Photo: ÌìÃÀÓ°ÊÓ´«Ã½

In a 24-hour test in which a single user did everything from read on a laptop to cook dinner and take a bus ride, the system correctly identified 25 out of 29 interactions with various devices and vehicles.

MagnifiSense also has potential for other smart home applications, such as recognizing a user’s preference for interacting with an appliance or device. By sensing whether an adult or child is turning on a television or tablet, for instance, a system could automatically display their favorite programs or tailor the device with appropriate selections.

In assisted living settings or nursing homes, the wearable sensor could help keep track of how efficiently elderly people are going about everyday tasks such as cooking or grooming. It could also detect when a stove has been left on for a long period of time and help alert someone to that danger.

“The nice thing with MagnifiSense is that you don’t have to instrument every single appliance in your house, which gets expensive and cumbersome,” said lead author Edward Wang, a UW electrical engineering doctoral student. “It can also sense some of the blank spots that other technologies can’t, like battery-powered devices.”

The team combined three simple, off-the-shelf sensors that use inductors, or coils of wire wound around a magnetic core. Those proved to be the most accurate without being so power-hungry that wearing them would be impractical.

These sources of unique electromagnetic radiation patterns enable MagnifiSense to identify what devices its wearer is using. Photo: ÌìÃÀÓ°ÊÓ´«Ã½

The sensors also capture a broad frequency range that allows the system to differentiate between electromagnetic radiation emanating from the unique combinations of electronic components such as motors, rectifiers and modulators embedded in everyday devices.

“When a blender turns on, for instance, modulators change the current profile of the device and create something similar to a vocal cord pattern,” Wang said. “A blender ‘sings’ quite differently than a hair dryer even though to our ears they sound similar.”
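One simple way to turn that “vocal cord pattern” analogy into numbers is to take a short burst of raw sensor samples, compute its magnitude spectrum, and reduce it to a normalized, coarsely binned fingerprint. The bin count and normalization here are illustrative choices, not the paper’s exact pipeline.

```python
import numpy as np

def emi_signature(samples, n_bins=64):
    """Reduce a burst of raw sensor samples to a coarse spectral fingerprint.

    The magnitude spectrum is averaged into n_bins frequency bins and
    normalized, so devices are compared by the shape of their emissions
    rather than their absolute strength. The bin count is an assumption.
    """
    spectrum = np.abs(np.fft.rfft(samples - np.mean(samples)))
    binned = np.array([chunk.mean() for chunk in np.array_split(spectrum, n_bins)])
    return binned / (np.linalg.norm(binned) + 1e-8)
```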

The team also developed innovative signal processing and machine learning algorithms to help the system correctly match those patterns with a particular type of device.
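The article doesn’t detail those algorithms, so the matching step is sketched below with a nearest-centroid classifier operating on fingerprints like those in the previous sketch: average the training fingerprints for each device type into a template, then label new readings with the most similar template. The paper’s actual model may differ.

```python
import numpy as np

def train_centroids(signatures, device_labels):
    """Average the training fingerprints of each device type into one template."""
    centroids = {}
    for label in set(device_labels):
        group = [s for s, l in zip(signatures, device_labels) if l == label]
        centroids[label] = np.mean(group, axis=0)
    return centroids

def classify(signature, centroids):
    """Label a new fingerprint with its most similar template (cosine similarity)."""
    def similarity(template):
        denom = np.linalg.norm(signature) * np.linalg.norm(template) + 1e-8
        return float(np.dot(signature, template)) / denom
    return max(centroids, key=lambda label: similarity(centroids[label]))
```

In this framing, the one-time calibration mentioned above could amount to rebuilding the templates from a particular user’s own recordings.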

One advantage to a wearable option is that anyone concerned about privacy issues can control when they use it, researchers said, or simply take it off.

Next steps include testing MagnifiSense on a wider variety of devices and distinguishing between multiple devices operating in close proximity. In preliminary tests, for instance, MagnifiSense had the most trouble correctly classifying a handful of particular toothbrushes, shavers and cars.

The researchers also plan to work on miniaturizing their proof-of-concept device into something that could be embedded into a watch or band. Based on its investigation, the team believes that with a slight improvement to the update rate of the magnetic sensors in current smartphones and smartwatches, MagnifiSense could soon be enabled on new devices with a simple software upgrade.

“We think it could be integrated into any wrist-sized product,” said Patel. “The next steps are really to look at what other devices we can detect and work on a prototype that’s wearable.”

Co-authors include UW electrical engineering doctoral student Tien-Jui Lee, UW computer science and engineering doctoral students Alex Mariakakis and Mayank Goel, and Sidhant Gupta of Microsoft Research.

For more information, contact Wang and Patel at magnifisense@cs.washington.edu.

 
