Shwetak Patel – UW News

New circuit boards can be repeatedly recycled /news/2024/04/26/recyclable-circuit-boards-vitrimer-pcb-e-waste/ Fri, 26 Apr 2024

A small brown circuit board sits on a gray background. To its right are a small copper plate, sheets of glass fibers in a crosshatch pattern, small chunks of vitrimer plastic that's been removed from a circuit board, and a computer chip.
A team led by researchers at the University of Washington developed a new PCB that performs on par with traditional materials and can be recycled repeatedly with negligible material loss. Researchers used a solvent that transforms a type of vitrimer – a cutting-edge class of polymer – into a jelly-like substance without damage, allowing solid components to be plucked out for reuse or recycling. Here, from left to right, are a vitrimer-based circuit board, a sheet of glass fibers, vitrimer that's been swollen and removed from a board, and electrical components such as a computer chip. Photo: Mark Stone/University of Washington

A recent report found that the world generated 137 billion pounds of electronic waste in 2022, an 82% increase from 2010. Yet less than a quarter of 2022's e-waste was recycled. While many things impede a sustainable afterlife for electronics, one is that we don't have systems at scale to recycle the printed circuit boards (PCBs) found in nearly all electronic devices.

PCBs – which house and interconnect chips, transistors and other components – typically consist of layers of thin glass fiber sheets coated in hard plastic and laminated together with copper. That plastic can't easily be separated from the glass, so PCBs often pile up in landfills, where their chemicals can seep into the environment. Or they're burned to extract their electronics' valuable metals like gold and copper. This burning is wasteful and can be toxic – especially for those doing the work without proper protections.

A team led by researchers at the University of Washington developed a new PCB that performs on par with traditional materials and can be recycled repeatedly with negligible material loss. Researchers used a solvent that transforms a type of vitrimer – a cutting-edge class of sustainable polymers – into a jelly-like substance without damaging it, allowing the solid components to be plucked out for reuse or recycling.

The vitrimer jelly can then be repeatedly used to make new, high-quality PCBs, unlike conventional plastics that degrade significantly with each recycling. With these "vPCBs" (vitrimer printed circuit boards), researchers recovered 98% of the vitrimer and 100% of the glass fiber, as well as 91% of the solvent used for recycling.

The researchers published their findings April 26 in Nature Sustainability.

In a 30ml glass beaker filled with clear liquid, tweezers remove a piece of vitrimer plastic. A square sheet of glass fibers sits in the background, leaning against the side of the beaker
Tweezers remove a piece of vitrimer from the solvent. A sheet of glass fibers sits in the background. Photo: Mark Stone/University of Washington

"PCBs make up a pretty large fraction of the mass and volume of electronic waste," said co-senior author Vikram Iyer, a UW assistant professor in the Paul G. Allen School of Computer Science & Engineering. "They're constructed to be fireproof and chemical-proof, which is great in terms of making them very robust. But that also makes them basically impossible to recycle. Here, we created a new material formulation that has electrical properties comparable to conventional PCBs, as well as a process to recycle them repeatedly."

Vitrimers are a class of polymers first developed in 2015. When exposed to certain conditions, such as heat above a specific temperature, their molecules can rearrange and form new bonds. This makes them both "healable" (a bent PCB could be straightened, for instance) and highly recyclable.

"On a molecular level, polymers are kind of like spaghetti noodles, which wrap and get compacted," said co-senior author Aniruddh Vashisth, a UW assistant professor in the mechanical engineering department. "But vitrimers are distinct because the molecules that make up each noodle can unlink and relink. It's almost like each piece of spaghetti is made of small Legos."

The team's process to create the vPCB deviated only slightly from those used for conventional PCBs. Conventionally, semi-cured PCB layers are held in cool, dry conditions, where they have a limited shelf life, before they're laminated in a heat press. Because vitrimers can form new bonds, the researchers laminated fully cured vPCB layers. The researchers found that to recycle the vPCBs, they could immerse the material in an organic solvent with a relatively low boiling point. This swelled the vPCB's plastic without damaging the glass sheets and electronic components, letting the researchers extract these for reuse.

A man in a white lab coat and white thermal gloves works at a heat press in a laboratory.
Here, Agni K. Biswal, a UW postdoctoral scholar in mechanical engineering, uses a heat press to laminate a circuit board together. Photo: Mark Stone/University of Washington

This process allows for several paths to more sustainable, circular PCB lifecycles. Damaged circuit boards, such as those with cracks or warping, can in some cases be repaired. If they aren't repaired, they can be separated from their electronic components. Those components can then be recycled or reused, while the vitrimer and glass fibers can be recycled into new vPCBs.

The team tested its vPCB for strength and electrical properties and found that it performed comparably to the most common PCB material (FR-4). Vashisth and co-author Bichlien Nguyen, a principal researcher at Microsoft Research and an affiliate assistant professor in the Allen School, are now using artificial intelligence to explore new vitrimer formulations for different uses.

Producing vPCBs wouldn鈥檛 entail major changes to manufacturing processes.

"The nice thing is that a lot of industries – such as aerospace, automotive and even electronics – already have processing set up for the sorts of two-part epoxies that we use here," said lead author Zhihan Zhang, a UW doctoral student in the Allen School.

The team analyzed the environmental impact and found recycled vPCBs could entail a 48% reduction in global warming potential and an 81% reduction in carcinogenic emissions compared to traditional PCBs. While this work presents a technology solution, the team notes that a significant hurdle to recycling vPCBs at scale would be creating systems and incentives to gather e-waste so it can be recycled.

"For real implementation of these systems, there needs to be cost parity and strong governmental regulations in place," said Nguyen. "Moving forward, we need to design and optimize materials with sustainability metrics as a first principle."

Additional co-authors include Agni K. Biswal, a UW postdoctoral scholar in the mechanical engineering department; , a UW doctoral student in the mechanical engineering department; , a senior applied scientist at Microsoft Research; , a senior researcher at Microsoft Research and an affiliate researcher in the Allen School; and Shwetak Patel, a UW professor in the Allen School and the electrical and computer engineering department. This research was funded by the Microsoft Climate Research Initiative, an Amazon Research Award and the Google Research Scholar Program. Zhang was supported by the UW Clean Energy Institute Graduate Fellowship.

For more information, contact vpcb@cs.washington.edu.

UW-developed smart earrings can monitor a person's temperature /news/2024/02/07/smart-earrings-can-monitor-temperature/ Wed, 07 Feb 2024

The temperature sensing earring is shown attached to a person鈥檚 ear. The portion touching the earlobe has a gemstone on it. Dangling a few centimeters below it is a small circular circuit board.
University of Washington researchers introduced the Thermal Earring, a wireless wearable that continuously monitors a user's earlobe temperature. Photo: Raymond Smith/University of Washington

Smart accessories are increasingly common. Rings and watches track vitals, while Ray-Bans have gone smart. Wearable tech has even broached high fashion. Yet certain accessories have yet to get the smart touch.

University of Washington researchers introduced the Thermal Earring, a wireless wearable that continuously monitors a user's earlobe temperature. In a study of six users, the earring outperformed a smartwatch at sensing skin temperature during periods of rest. It also showed promise for monitoring signs of stress, eating, exercise and ovulation.

The smart earring prototype is about the size and weight of a small paperclip and has a 28-day battery life. A magnetic clip attaches one temperature sensor to a wearer鈥檚 ear, while another sensor dangles about an inch below it for estimating room temperature. The earring can be personalized with fashion designs made of resin (in the shape of a flower, for example) or with a gemstone, without negatively affecting its accuracy.
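
The dual-sensor arrangement suggests a simple way to disentangle skin temperature from room-temperature drift. Below is an illustrative sketch of that idea, not the Thermal Earring's actual calibration: the compensation factor `k` is an invented constant.

```python
def compensate_skin_temp(lobe_c, ambient_c, k=0.1):
    """Toy ambient compensation for a two-sensor wearable.

    lobe_c: readings (deg C) from the sensor clipped to the earlobe.
    ambient_c: simultaneous readings from the dangling ambient sensor.
    k: hypothetical leakage factor -- the fraction of the skin-to-air
       temperature difference assumed lost to the surroundings.
    """
    return [t + k * (t - a) for t, a in zip(lobe_c, ambient_c)]

# A 33 C lobe reading in a 23 C room is nudged up toward true skin temperature:
print(compensate_skin_temp([33.0], [23.0]))  # → [34.0]
```

The point of the dangling sensor is exactly this: without a concurrent ambient reading, there is no way to tell a cold room from a cool earlobe.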

Researchers published their findings Jan. 12 in Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies. The device is not currently commercially available.

"I wear a smartwatch to track my personal health, but I've found that a lot of people think smartwatches are unfashionable or bulky and uncomfortable," said co-lead author Qiuyue Xue, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. "I also like to wear earrings, so we started thinking about what unique things we can get from the earlobe. We found that sensing the skin temperature on the lobe, instead of a hand or wrist, was much more accurate. It also gave us the option to have part of the sensor dangle to separate ambient room temperature from skin temperature."

The temperature sensing earring lies on its side on a gray surface. It has a small circuit board with a magnet attached to it, connected to a slightly larger circuit board.
The smart earring prototype shown here is about the size and weight of a small paperclip and has a 28-day battery life. Photo: Raymond Smith/University of Washington

Creating a wearable small enough to pass as an earring, yet robust enough that users would have to charge it only every few days, presented an engineering challenge.

"It's a tricky balance," said co-lead author Yujia Liu, who was a UW master's student in the electrical and computer engineering department when doing the research and is now at the University of California San Diego. "Typically, if you want power to last longer, you should have a bigger battery. But then you sacrifice size. Making it wireless also demands more energy."

The team made the earring's power consumption as efficient as possible while also making space for a Bluetooth chip, a battery, two temperature sensors and an antenna. Instead of pairing with a device, which uses more power, the earring uses Bluetooth advertising mode – the transmissions a device broadcasts to show it can be paired. After reading and sending the temperature, it goes into deep sleep to save power.


Because continuous earlobe temperature has not been studied widely, the team also explored potential applications to guide future research. In five patients with fevers, the average earlobe temperature was 10.62 degrees Fahrenheit (5.92 degrees Celsius) higher than in 20 healthy participants, suggesting the earring's potential for continuous fever monitoring.

"In medicine we often monitor fevers to assess response to therapy – to see, for instance, if an antibiotic is working on an infection," said co-author Dr. Mastafa Springston, a clinical instructor in the Department of Emergency Medicine in the UW School of Medicine. "Longer-term monitoring is a way to increase the sensitivity of capturing fevers, since they can rise and fall throughout the day."

While core body temperature generally stays relatively constant outside of fever, earlobe temperature varies more, presenting several novel uses for the Thermal Earring. In small proof-of-concept tests, the earring detected temperature variations correlated with eating, exercising and experiencing stress. When tested on six users at rest, the earring's reading varied by 0.58 F (0.32 C) on average, placing it within the range of 0.28 C to 0.56 C necessary for ovulation and period tracking; a smartwatch varied by 0.72 C.

The temperature sensing earring is shown attached to a person鈥檚 ear. The portion touching the earlobe has a gemstone on it. Dangling a few centimeters below it is a pink flower made of resin.
The smart earring can be personalized with fashion designs made of resin – such as the flower shown here – or with a gemstone, without negatively affecting its accuracy. Photo: Raymond Smith/University of Washington

"Current wearables like Apple Watch and Fitbit have temperature sensors, but they provide only an average temperature for the day, and their temperature readings from wrists and hands are too noisy to track ovulation," Xue said. "So we wanted to explore unique applications for the earring, especially applications that might be attractive to women and anyone who cares about fashion."

While researchers found several promising potential applications for the Thermal Earring, their findings were preliminary, since the focus was on the range of potential uses. They need more data to train their models for each use case, and more thorough testing is needed before the device can be used by the public. For future iterations of the device, Xue is working to integrate heart rate and activity monitoring. She's also interested in potentially powering the device with solar energy or kinetic energy from the earring's swaying.

"Eventually, I want to develop a jewelry set for health monitoring," Xue said. "The earrings would sense activity and health metrics such as temperature and heart rate, while a necklace might serve as an electrocardiogram monitor for more effective heart health data."

Joseph Breda, a doctoral student in the Allen School, was a co-author on the paper. Vikram Iyer, a professor in the Allen School, and Shwetak Patel, a professor in the Allen School and the electrical and computer engineering department, were co-senior authors. This research was funded by the Washington Research Foundation.

For more information, contact Xue at qxue2@cs.washington.edu and Liu at yul276@ucsd.edu.

For questions specifically for Dr. Mastafa Springston, please contact Susan Gregg at sghanson@uw.edu.

An app can transform smartphones into thermometers that accurately detect fevers /news/2023/06/21/an-app-can-transform-smartphones-into-thermometers-that-accurately-detect-fevers/ Wed, 21 Jun 2023

A researcher holds a phone to a patient's forehead.
A team led by researchers at the University of Washington has created an app – FeverPhone – that transforms smartphones into thermometers without adding new hardware. To take someone's temperature, the screen of a smartphone is held to a patient's forehead. Shown here is lead author Joseph Breda (left), a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering, measuring Richard Li's temperature. Photo: Dennis Wise/University of Washington

If you've ever thought you might be running a temperature yet couldn't find a thermometer, you aren't alone. A fever is a hallmark of COVID-19 and an early sign of many other viral infections. For quick diagnoses and to prevent viral spread, a temperature check can be crucial. Yet accurate at-home thermometers aren't commonplace, despite their usefulness.

There are a few potential reasons for that. The devices can range from $15 to $300, and many people need them only a few times a year. In times of sudden demand 鈥 such as the early days of the COVID-19 pandemic 鈥 thermometers can sell out. Many people, particularly those in under-resourced areas, can end up without a vital medical device when they need it most.

To address this issue, a team led by researchers at the University of Washington has created an app called FeverPhone, which transforms smartphones into thermometers without adding new hardware. Instead, it uses the phone's touchscreen and repurposes the existing battery temperature sensors to gather data that a machine learning model uses to estimate people's core body temperatures. When the researchers tested FeverPhone on 37 patients in an emergency department, the app estimated core body temperatures with accuracy comparable to some consumer thermometers. The team published its findings March 28 in Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies.

"In undergrad, I was doing research in a lab where we wanted to show that you could use the temperature sensor in a smartphone to measure air temperature," said lead author Joseph Breda, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. "When I came to the UW, my adviser and I wondered how we could apply a similar technique for health. We decided to measure fever in an accessible way. The primary concern with temperature isn't that it's a difficult signal to measure; it's just that people don't have thermometers."

A researcher holds a phone that says 98.7 degrees.
Lead author Joseph Breda. Photo: Dennis Wise/University of Washington

The app is the first to use existing phone sensors to estimate whether people have fevers. It needs more training data to be widely used, Breda said, but for doctors, the potential of such technology is exciting.

"People come to the ER all the time saying, 'I think I was running a fever.' And that's very different from saying 'I was running a fever,'" said Dr. Mastafa Springston, a co-author on the study and a UW clinical instructor in the Department of Emergency Medicine in the UW School of Medicine. "In a wave of influenza, for instance, people running to the ER can take five days, or even a week sometimes. So if people were to share fever results with public health agencies through the app, similar to how we signed up for COVID exposure warnings, this earlier sign could help us intervene much sooner."

Clinical-grade thermometers use tiny sensors known as thermistors to estimate body temperature. Off-the-shelf smartphones also happen to contain thermistors; they're mostly used to monitor the temperature of the battery. But the UW researchers realized they could use these sensors to track heat transfer between a person and a phone. The phone touchscreen could sense skin-to-phone contact, and the thermistors could gauge the air temperature and the rise in heat when the phone touched a body.
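
The physics behind this is a warming curve: once the screen touches skin, the battery thermistor relaxes from ambient toward skin temperature, and a warmer body produces a faster initial rise. The toy simulation below illustrates that with a Newton's-law-style model; the time constant `tau_s` is an invented value, not one from the study.

```python
def thermistor_trace(ambient_c, skin_c, tau_s=60.0, dt=1.0, steps=90):
    """Simulate a phone thermistor warming from ambient toward skin
    temperature while the screen is held to the forehead.

    tau_s: assumed thermal time constant in seconds (illustrative).
    dt, steps: one reading per second for ~90 seconds of contact.
    """
    temps = [ambient_c]
    for _ in range(steps):
        # Each step, the sensor closes a fixed fraction of the remaining gap.
        temps.append(temps[-1] + (dt / tau_s) * (skin_c - temps[-1]))
    return temps
```

Note that a feverish forehead (39 C) warms the sensor faster than a normal one (37 C) from the very first second; that rise rate is the kind of signal FeverPhone's model consumes.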

To test this idea, the team started by gathering data in a lab. To simulate a warm forehead, the researchers heated a plastic bag of water with a sous-vide machine and pressed phone screens against the bag. To account for variations in circumstances, such as different people using different phones, the researchers tested three phone models. They also added accessories such as a screen protector and a case and changed the pressure on the phone.

The researchers used the data from different test cases to train a machine learning model that used the complex interactions to estimate body temperature. Since the sensors are supposed to gauge the phone鈥檚 battery heat, the app tracks how quickly the phone heats up and then uses the touchscreen data to account for how much of that comes from a person touching it. As they added more test cases, the researchers were able to calibrate the model to account for the variations in things such as phone accessories.
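
In spirit, the calibration maps simple features of that warming curve to a core-temperature estimate. The stand-in below swaps the paper's machine learning model for plain least squares, trained on made-up numbers, purely to show the shape of the pipeline (all values are synthetic, not from the study).

```python
import numpy as np

# Synthetic training rows: (initial heating rate in deg C/min, ambient deg C),
# with the oral thermometer reading as the target. Invented for illustration.
X = np.array([[0.8, 22.0], [1.0, 22.0], [1.3, 23.0], [1.6, 21.0]])
y = np.array([36.6, 37.0, 37.8, 38.6])

A = np.hstack([X, np.ones((len(X), 1))])      # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # fit rate/ambient/offset weights

def estimate_core_temp(rate, ambient):
    """Estimate body temperature from curve features using the fitted weights."""
    return float(coef[0] * rate + coef[1] * ambient + coef[2])
```

Adding more test cases (phone models, cases, screen protectors) corresponds to adding rows with those conditions encoded as extra feature columns, which is how the real model was calibrated against such variations.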

Then the team was ready to test the app on people. The researchers took FeverPhone to the UW School of Medicine's Emergency Department for a clinical trial, where they compared its temperature estimates against an oral thermometer's readings. They recruited 37 participants, 16 of whom had at least a mild fever.

To use FeverPhone, the participants held the phones like point-and-shoot cameras – with forefingers and thumbs touching the corner edges to reduce heat from the hands being sensed (some had the researcher hold the phone for them). Then participants pressed the touchscreen against their foreheads for about 90 seconds, which the researchers found to be the ideal time to sense body heat transferring to the phone.

Overall, FeverPhone estimated patient core body temperatures with an average error of about 0.41 degrees Fahrenheit (0.23 degrees Celsius), which is in the clinically acceptable range of 0.5 C.


The researchers have highlighted a few areas for further investigation. The study didn鈥檛 include participants with severe fevers above 101.5 F (38.6 C), because these temperatures are easy to diagnose and because sweaty skin tends to confound other skin-contact thermometers, according to the team. Also, FeverPhone was tested on only three phone models. Training it to run on other smartphones, as well as devices such as smartwatches, would increase its potential for public health applications, the team said.

"We started with smartphones since they're ubiquitous and easy to get data from," Breda said. "I am already working on seeing if we can get a similar signal with a smartwatch. What's nice, because watches are much smaller, is their temperature will change more quickly. So you could imagine having a user put a Fitbit to their forehead and measure in 10 seconds whether they have a fever or not."

Shwetak Patel, a UW professor in the Allen School and the electrical and computer engineering department, was a senior author on the paper, and Alex Mariakakis, an assistant professor in the University of Toronto's computer science department, was a co-author. This research was supported by the University of Washington Gift Fund.

 

For more information, contact Breda at joebreda@cs.washington.edu. He’ll be traveling for research starting June 23; his availability for interviews will be limited after that.

For questions specifically for Dr. Mastafa Springston, please contact Susan Gregg at sghanson@uw.edu.

A smartphone’s camera and flash could help people measure blood oxygen levels at home /news/2022/09/19/smartphone-camera-flash-could-help-people-measure-blood-oxygen-levels-home/ Mon, 19 Sep 2022 12:14:26 +0000 /news/?p=79438
This technique involves having participants place their finger over the camera and flash of a smartphone, which uses a deep-learning algorithm to decipher the blood oxygen levels from the blood flow patterns in the resulting video. Photo: Dennis Wise/University of Washington

First, pause and take a deep breath.

When we breathe in, our lungs fill with oxygen, which is distributed to our red blood cells for transportation throughout our bodies. Our bodies need a lot of oxygen to function, and healthy people have at least 95% oxygen saturation all the time.

Conditions like asthma or COVID-19 make it harder for bodies to absorb oxygen from the lungs. This leads to oxygen saturation percentages that drop to 90% or below, an indication that medical attention is needed.

In a clinic, doctors monitor oxygen saturation using pulse oximeters – those clips you put over your fingertip or ear. But monitoring oxygen saturation at home multiple times a day could help patients keep tabs on symptoms of conditions such as COVID-19, for example.

In a proof-of-principle study, University of Washington and University of California San Diego researchers have shown that smartphones are capable of detecting blood oxygen saturation levels down to 70%. This is the lowest value that pulse oximeters should be able to measure, as recommended by the U.S. Food and Drug Administration.

The technique involves participants placing their finger over the camera and flash of a smartphone, which uses a deep-learning algorithm to decipher the blood oxygen levels. When the team delivered a controlled mixture of nitrogen and oxygen to six subjects to artificially bring their blood oxygen levels down, the smartphone correctly predicted whether the subject had low blood oxygen levels 80% of the time.
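
That 80% figure is a threshold-agreement score: whether the predicted SpO2 lands on the same side of a clinically low cutoff (90%) as the ground truth. A small sketch of computing such an agreement rate, with invented readings:

```python
import numpy as np

def threshold_agreement(true_spo2, pred_spo2, threshold=90.0):
    """Fraction of readings where the prediction and the ground truth
    agree on whether oxygen saturation is below the clinical threshold."""
    t = np.asarray(true_spo2) < threshold
    p = np.asarray(pred_spo2) < threshold
    return float(np.mean(t == p))

# Invented example: one of four readings is misclassified (85 true vs 91 predicted).
print(threshold_agreement([88, 95, 85, 97], [89, 96, 91, 93]))  # → 0.75
```

Framing the task this way reflects how the tool would be used: the clinically urgent question is "is this person below 90%?", not the exact percentage.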

The team published these results Sept. 19 in npj Digital Medicine.

“Other smartphone apps that do this were developed by asking people to hold their breath. But people get very uncomfortable and have to breathe after a minute or so, and that’s before their blood-oxygen levels have gone down far enough to represent the full range of clinically relevant data,” said co-lead author , a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. “With our test, we’re able to gather 15 minutes of data from each subject. Our data shows that smartphones could work well right in the critical threshold range.”

One way to measure oxygen saturation is to use pulse oximeters – those little clips you put over your fingertip (some shown here in gray and blue). Photo: Dennis Wise/University of Washington

Another benefit of measuring blood oxygen levels on a smartphone is that almost everyone has one.

“This way you could have multiple measurements with your own device at either no cost or low cost,” said co-author , professor of family medicine in the UW School of Medicine. “In an ideal world, this information could be seamlessly transmitted to a doctor’s office. This would be really beneficial for telemedicine appointments or for triage nurses to be able to quickly determine whether patients need to go to the emergency department or if they can continue to rest at home and make an appointment with their primary care provider later.”

The team recruited six participants ranging in age from 20 to 34. Three identified as female, three identified as male. One participant identified as being African American, while the rest identified as being Caucasian.

To gather data to train and test the algorithm, the researchers had each participant wear a standard pulse oximeter on one finger and then place another finger on the same hand over a smartphone's camera and flash. Each participant had this same setup on both hands simultaneously.

“The camera is recording a video: Every time your heart beats, fresh blood flows through the part illuminated by the flash,” said senior author , who started this project as a UW doctoral student studying electrical and computer engineering and is now an assistant professor at UC San Diego’s and the Department of Electrical and Computer Engineering.

“The camera records how much that blood absorbs the light from the flash in each of the three color channels it measures: red, green and blue,” said Wang, who also directs the . “Then we can feed those intensity measurements into our deep-learning model.”

Each participant breathed in a controlled mixture of oxygen and nitrogen to slowly reduce oxygen levels. The process took about 15 minutes. For all six participants, the team acquired more than 10,000 blood oxygen level readings between 61% and 100%.

The researchers used data from four of the participants to train a deep learning algorithm to pull out the blood oxygen levels. The remainder of the data was used to validate the method and then test it to see how well it performed on new subjects.

“Smartphone light can get scattered by all these other components in your finger, which means there’s a lot of noise in the data that we’re looking at,” said co-lead author , a UW alumnus who is now a doctoral student advised by Wang at UC San Diego. “Deep learning is a really helpful technique here because it can see these really complex and nuanced features and helps you find patterns that you wouldn’t otherwise be able to see.”

The team hopes to continue this research by testing the algorithm on more people.

“One of our subjects had thick calluses on their fingers, which made it harder for our algorithm to accurately determine their blood oxygen levels,” Hoffman said. “If we were to expand this study to more subjects, we would likely see more people with calluses and more people with different skin tones. Then we could potentially have an algorithm with enough complexity to be able to better model all these differences.”

But, the researchers said, this is a good first step toward developing biomedical devices that are aided by machine learning.

“It’s so important to do a study like this,” Wang said. “Traditional medical devices go through rigorous testing. But computer science research is still just starting to dig its teeth into using machine learning for biomedical device development and we’re all still learning. By forcing ourselves to be rigorous, we’re forcing ourselves to learn how to do things right.”

Additional co-authors are , a doctoral student at Southern Methodist University; , associate professor of computer science at Southern Methodist University; , who completed this research as a UW undergraduate student; and Shwetak Patel, UW professor in both the Allen School and the electrical and computer engineering department. This research was funded by the University of Washington. The researchers have applied for a patent that covers systems and methods for SpO2 classification using smartphones (application number: 17/164,745).

For more information, contact Hoffman at jasonhof@cs.washington.edu, Wang at ejaywang@eng.ucsd.edu and Viswanath at varunv9@eng.ucsd.edu. For questions specifically for Matthew Thompson, please contact Leila Gray at leilag@uw.edu.

ClearBuds: First wireless earbuds that clear up calls using deep learning /news/2022/07/11/clearbuds-first-wireless-earbuds-clear-calls-deep-learning/ Mon, 11 Jul 2022
ClearBuds use a novel microphone system and are one of the first machine-learning systems to operate in real time and run on a smartphone. Photo: Raymond Smith/University of Washington

As meetings shifted online during the COVID-19 lockdown, many people found that chattering roommates, garbage trucks and other loud sounds disrupted important conversations.

This experience inspired three University of Washington researchers, who were roommates during the pandemic, to develop better earbuds. To enhance the speaker's voice and reduce background noise, "ClearBuds" use a novel microphone system and one of the first machine-learning systems to operate in real time and run on a smartphone.

The researchers presented their findings at the ACM International Conference on Mobile Systems, Applications, and Services.

“ClearBuds differentiate themselves from other wireless earbuds in two key ways,” said co-lead author , a doctoral student in the Paul G. Allen School of Computer Science & Engineering. “First, ClearBuds use a dual microphone array. Microphones in each earbud create two synchronized audio streams that provide information and allow us to spatially separate sounds coming from different directions with higher resolution. Second, the lightweight neural network further enhances the speaker’s voice.”

While most commercial earbuds also have microphones on each earbud, only one earbud is actively sending audio to a phone at a time. With ClearBuds, each earbud sends a stream of audio to the phone. The researchers designed Bluetooth networking protocols to allow these streams to be synchronized within 70 microseconds of each other.
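
To see why tight synchronization matters, note that at a 44.1 kHz sampling rate, 70 microseconds is only about three samples. The residual offset between two streams can be measured by cross-correlation, as in the sketch below; this is an illustration of the concept, not the team's actual Bluetooth-layer protocol.

```python
import numpy as np

def estimate_lag(a, b):
    """Return how many samples stream a lags behind stream b
    (positive means a is the delayed copy of b)."""
    corr = np.correlate(a, b, mode="full")
    return int(np.argmax(corr)) - (len(b) - 1)

# Demo on seeded noise: 'late' is 'early' delayed by exactly 5 samples.
rng = np.random.default_rng(0)
s = rng.standard_normal(205)
early, late = s[5:], s[:-5]
print(estimate_lag(late, early))  # → 5
```

Spatial separation of sound sources only works if this alignment is correct: a few samples of drift between the earbuds would look like a change in a sound's direction.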

The team’s neural network algorithm runs on the phone to process the audio streams. First it suppresses any non-voice sounds. And then it isolates and enhances any noise that’s coming in at the same time from both earbuds 鈥 the speaker’s voice.

“Because the speaker鈥檚 voice is close by and approximately equidistant from the two earbuds, the neural network can be trained to focus on just their speech and eliminate background sounds, including other voices,” said co-lead author , a doctoral student in the Allen School. “This method is quite similar to how your own ears work. They use the time difference between sounds coming to your left and right ears to determine from which direction a sound came from.”

Shown here: the ClearBuds hardware (round disk) in front of the 3D-printed earbud enclosures. Photo: Raymond Smith/University of Washington

When the researchers compared ClearBuds with Apple AirPods Pro, ClearBuds performed better, achieving a higher signal-to-distortion ratio across all tests.
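
Signal-to-distortion ratio measures, in decibels, how much of a clean reference signal survives relative to the residual error in the processed output. Below is a bare-bones version of the metric; published SDR variants typically allow a gain or shift fit to the reference first, which this sketch omits.

```python
import numpy as np

def sdr_db(reference, estimate):
    """Signal-to-distortion ratio in dB. Higher is better: the energy
    of the clean reference divided by the energy of the error."""
    reference = np.asarray(reference, dtype=np.float64)
    error = reference - np.asarray(estimate, dtype=np.float64)
    return 10.0 * np.log10(np.sum(reference**2) / np.sum(error**2))

# An estimate at 90% of the reference amplitude leaves 1% of the energy
# as error, i.e. roughly 20 dB.
ref = np.ones(1000)
print(sdr_db(ref, 0.9 * ref))
```

Every 10 dB of improvement means a tenfold reduction in residual distortion energy, which is why a consistent SDR edge across tests is a meaningful result.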

“It’s extraordinary when you consider the fact that our neural network has to run in less than 20 milliseconds on an iPhone that has a fraction of the computing power compared to a large commercial graphics card, which is typically used to run neural networks,” said co-lead author , a doctoral student in the Allen School. “That鈥檚 part of the challenge we had to address in this paper: How do we take a traditional neural network and reduce its size while preserving the quality of the output?”

The team also tested ClearBuds “in the wild” by recording eight people reading in noisy environments, such as a coffee shop or on a busy street. The researchers then had 37 people rate 10- to 60-second clips of these recordings. Participants rated clips that were processed through ClearBuds’ neural network as having the best noise suppression and the best overall listening experience.

  • The hardware and software design for ClearBuds is open source.

One limitation of ClearBuds is that people have to wear both earbuds to get the noise suppression experience, the researchers said.

But the real-time communication system developed here can be useful for a variety of other applications, the team said, including smart-home speakers, robot location tracking, and search-and-rescue missions.

The team is currently working on making the neural network algorithms even more efficient so that they can run on the earbuds themselves.

Additional co-authors are , an associate professor in the Allen School; , a professor in both the Allen School and the electrical and computer engineering department; and and , both professors in the Allen School. This research was funded by the National Science Foundation and the University of Washington’s Reality Lab.

For more information, contact the team at clearbuds@cs.washington.edu.

]]>
New system that uses smartphone or computer cameras to measure pulse, respiration rate could help future personalized telehealth appointments /news/2021/04/01/smartphone-computer-cameras-measure-pulse-respiration-rate-telehealth/ Thu, 01 Apr 2021 18:01:03 +0000 /news/?p=73554
A UW-led team has developed a method that uses the camera on a person’s smartphone or computer to take their pulse and breathing rate from a real-time video of their face. Photo:

Telehealth has become a critical way for doctors to still provide health care while minimizing in-person contact during COVID-19. But with phone or Zoom appointments, it’s harder for doctors to get important vital signs from a patient, such as their pulse or respiration rate, in real time.

A University of Washington-led team has developed a method that uses the camera on a person’s smartphone or computer to take their pulse and respiration signal from a real-time video of their face. The researchers presented this system in December at the Neural Information Processing Systems conference.

Now the team is proposing a better system to measure these physiological signals. This system is less likely to be tripped up by different cameras, lighting conditions or facial features, such as skin color. The researchers will present these findings April 8 at the ACM Conference on Health, Inference, and Learning.

“Machine learning is pretty good at classifying images. If you give it a series of photos of cats and then tell it to find cats in other images, it can do it. But for machine learning to be helpful in remote health sensing, we need a system that can identify the region of interest in a video that holds the strongest source of physiological information (pulse, for example) and then measure that over time,” said lead author , a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering.

“Every person is different,” Liu said. “So this system needs to be able to quickly adapt to each person’s unique physiological signature, and separate this from other variations, such as what they look like and what environment they are in.”

Try the researchers’ demo that can detect a user’s heartbeat over time, which doctors can use to calculate heart rate.


The team’s system is privacy-preserving (it runs on the device instead of in the cloud) and uses machine learning to capture subtle changes in how light reflects off a person’s face, which are correlated with changing blood flow. It then converts these changes into both pulse and respiration rate.
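A classical baseline for this kind of camera-based pulse sensing can be sketched in a few lines: average a face region’s green-channel brightness per frame, remove slow lighting drift, and pick the dominant frequency in a plausible heart-rate band. This is a simplified illustration on synthetic data, not the team’s machine learning model:

```python
import numpy as np

fps = 30.0                     # camera frame rate
t = np.arange(0, 20, 1 / fps)  # 20 seconds of video frames

# Synthetic stand-in for the mean green-channel value of a face region:
# a tiny blood-volume oscillation (72 bpm = 1.2 Hz) buried in lighting
# drift and camera noise.
signal = (0.01 * np.sin(2 * np.pi * 1.2 * t)
          + 0.05 * t
          + 0.02 * np.random.default_rng(1).standard_normal(t.size))

# Detrend, then find the strongest frequency between 0.7 and 4 Hz
# (42-240 bpm, a plausible human heart-rate range).
detrended = signal - np.polyval(np.polyfit(t, signal, 1), t)
freqs = np.fft.rfftfreq(t.size, 1 / fps)
power = np.abs(np.fft.rfft(detrended)) ** 2
band = (freqs >= 0.7) & (freqs <= 4.0)
bpm = 60 * freqs[band][np.argmax(power[band])]
print(round(bpm))  # 72
```

The UW system replaces this fixed recipe with a learned, per-person model, which is what lets it cope with different cameras, skin tones and lighting.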

The first version of this system was trained with a dataset that contained both videos of people’s faces and “ground truth” information: each person’s pulse and respiration rate measured by standard instruments in the field. The system then used spatial and temporal information from the videos to calculate both vital signs. It outperformed similar machine learning systems on videos where subjects were moving and talking.

But while the system worked well on some datasets, it still struggled with others that contained different people, backgrounds and lighting. This is a common problem known as “overfitting,” the team said.

The researchers improved the system by having it produce a personalized machine learning model for each individual. Specifically, the system looks for the areas in a video frame that likely contain physiological features correlated with changing blood flow, across contexts such as different skin tones, lighting conditions and environments. From there, it can focus on those areas to measure the pulse and respiration rate.

While this new system outperforms its predecessor when given more challenging datasets, especially for people with darker skin tones, there’s still more work to do, the team said.

“We acknowledge that there is still a trend toward inferior performance when the subject’s skin type is darker,” Liu said. “This is in part because light reflects differently off of darker skin, resulting in a weaker signal for the camera to pick up. Our team is actively developing new methods to solve this limitation.”

The researchers are also working on a variety of collaborations with doctors to see how this system performs in the clinic.

“Any ability to sense pulse or respiration rate remotely provides new opportunities for remote patient care and telemedicine. This could include self-care, follow-up care or triage, especially when someone doesn’t have convenient access to a clinic,” said senior author , a professor in both the Allen School and the electrical and computer engineering department. “It’s exciting to see academic communities working on new algorithmic approaches to address this with devices that people have in their homes.”

This software is open source and available on GitHub.

, a doctoral student in the Allen School; , a UW graduate who now works at OctoML; , a doctoral student in the Information School; and at Microsoft Research are also co-authors on this paper. This research was funded by the Bill & Melinda Gates Foundation, Google and the University of Washington.

For more information, contact Liu at xliu0@cs.washington.edu and Patel at shwetak@cs.washington.edu.

]]>
UW researchers need your (digital) coughs /news/2020/03/31/uw-researchers-need-your-digital-coughs/ Tue, 31 Mar 2020 16:09:40 +0000 /news/?p=67191
While you’re working from home, fill out a survey that will help UW researchers develop a cough-monitoring app. Photo: University of Washington

For anyone who is looking for a way to help during the coronavirus pandemic while adhering to “stay-at-home” and social distancing orders, University of Washington researchers have a task for you.

A team from the Paul G. Allen School of Computer Science & Engineering is developing an app that will allow health organizations to monitor coughs from self-quarantined COVID-19 patients at home. Right now, the researchers need to train the app to recognize coughing sounds, so they are asking for participants who can complete a quick survey to collect coughs and other vocalizations.

The survey includes:

  • a consent form
  • a demographic/health questionnaire
  • participants submitting sounds of 20 coughs and up to 10 samples of other vocalizations, including speech, throat-clearing and laughter

“These sounds will help us train our cough detection model,” said , a doctoral student in the Allen School. “We also train the model with negative examples, such as voices, laughing and throat clearing, to help it learn to not classify them as coughs. The more examples we can give the model, the better performance it will achieve.”
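The role of negative examples can be shown with a toy classifier. The two features, their values and the nearest-centroid “model” below are invented purely for illustration and are far simpler than the team’s actual cough detector:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 2-D features (say, burst duration and spectral centroid) for
# positive examples (coughs) and negative examples (speech, laughter,
# throat-clearing). All values are synthetic.
coughs = rng.normal([0.4, 2.5], 0.1, size=(50, 2))
negatives = rng.normal([1.5, 1.0], 0.3, size=(50, 2))

# Nearest-centroid classifier: the negative examples teach the model
# what a non-cough sounds like, so near-miss sounds are rejected
# instead of being labeled as coughs.
c_pos = coughs.mean(axis=0)
c_neg = negatives.mean(axis=0)

def is_cough(x):
    return np.linalg.norm(x - c_pos) < np.linalg.norm(x - c_neg)

print(is_cough(np.array([0.45, 2.4])))  # True: cough-like sound
print(is_cough(np.array([1.4, 0.9])))   # False: laughter-like sound
```

Without the negative cluster there would be nothing to compare against, and every loud sound would look like a cough; that is exactly why the survey collects speech, laughter and throat-clearing alongside coughs.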

The team welcomes participants of all ages. For the best results, participants should do this study on a computer or laptop instead of a smartphone or tablet.

This research is in collaboration with clinicians at Seattle Children’s Hospital; , a professor of medicine at the UW School of Medicine; and , an associate professor of medicine at the UW School of Medicine. This work is funded by the Bill & Melinda Gates Foundation and the National Institutes of Health.

For more information, contact Whitehill at mattw12@uw.edu.

]]>
The one ring to track your finger’s location /news/2020/02/03/auraring-tracks-fingers-location/ Mon, 03 Feb 2020 16:18:41 +0000 /news/?p=65958
With continuous tracking, AuraRing can pick up handwriting, potentially for short responses to text messages. Photo: Dennis Wise/University of Washington

Smart technology keeps getting smaller. There are smartphones, smartwatches and now, smart rings, devices that allow someone to use simple finger gestures to control other technology.


Researchers at the University of Washington have created AuraRing, a ring and wristband combination that can detect the precise location of someone’s index finger and continuously track hand movements. The ring emits a signal that can be picked up on the wristband, which can then identify the position and orientation of the ring, and the finger it’s attached to. The research team published its findings Dec. 11 in Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies.

“We’re thinking about the next generation of computing platforms,” said co-lead author , who completed this research as a doctoral student at the Paul G. Allen School of Computer Science & Engineering. “We wanted a tool that captures the fine-grain manipulation we do with our fingers: not just a gesture or where your finger’s pointed, but something that can track your finger completely.”

AuraRing is composed of a coil of wire wrapped 800 times around a 3D-printed ring. A current running through the wire generates a magnetic field, which is picked up by three sensors on the wristband. Based on what values the sensors detect, the researchers can continuously identify the exact position of the ring in space. From there, they can determine where the user’s finger is located.
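The idea of recovering a position from three magnetic-field readings can be sketched with a point-dipole model and a brute-force search. The sensor geometry, units and grid below are made up for illustration; the actual AuraRing system uses a calibrated field model and a much faster solver:

```python
import numpy as np

def dipole_field(ring_pos, sensor_pos, moment=np.array([0.0, 0.0, 1.0])):
    # Field of a point magnetic dipole at `ring_pos`, measured at
    # `sensor_pos` (unit-free: physical constants folded into `moment`).
    r = sensor_pos - ring_pos
    d = np.linalg.norm(r)
    rhat = r / d
    return (3.0 * rhat * (moment @ rhat) - moment) / d**3

# Three wristband sensor positions (meters, invented geometry).
sensors = np.array([[0.0, 0.0, 0.0],
                    [0.05, 0.0, 0.0],
                    [0.0, 0.05, 0.0]])

true_pos = np.array([0.02, 0.03, 0.04])  # the ring's actual position
measured = np.array([dipole_field(true_pos, s) for s in sensors])

# Coarse grid search for the position whose predicted fields best
# match the three sensor readings.
grid = np.linspace(0.0, 0.06, 31)
zs = np.linspace(0.01, 0.07, 31)
best, best_err = None, np.inf
for x in grid:
    for y in grid:
        for z in zs:
            p = np.array([x, y, z])
            pred = np.array([dipole_field(p, s) for s in sensors])
            err = np.sum((pred - measured) ** 2)
            if err < best_err:
                best, best_err = p, err

print(best)  # recovers a position close to [0.02, 0.03, 0.04]
```

Each sensor measures a 3-D field vector, so three sensors give nine constraints, comfortably enough to pin down the ring’s position and orientation in a real system.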

A close up of the ring in AuraRing
A close up of the wristband

The ring in AuraRing (left) is composed of a coil of wire wrapped 800 times around a 3D-printed ring. The AuraRing wristband (right) uses three sensors (one shown here: white box in the lower left) to pick up the magnetic field generated by the ring. Credit: Dennis Wise/University of Washington

“To have continuous tracking in other smart rings, you’d have to stream all the data using wireless communication. That part consumes a lot of power, which is why a lot of smart rings only detect gestures and send those specific commands,” said co-lead author , a doctoral student in electrical and computer engineering. “But AuraRing’s ring consumes only 2.3 milliwatts of power to produce an oscillating magnetic field that the wristband can constantly sense. In this way, there’s no need for any communication from the ring to the wristband.”

With continuous tracking, AuraRing can pick up handwriting, potentially for short responses to text messages, or allow someone to have a virtual reality avatar hand that mimics what they’re doing with their actual hand. In addition, because AuraRing uses magnetic fields, it can still track hands even when they are out of sight, such as when a user is on a crowded bus and can’t reach their phone.

AuraRing can allow someone to have a virtual reality avatar hand that mimics what they’re doing with their actual hand. Photo: Dennis Wise/University of Washington

“We can also easily detect taps, flicks or even a small pinch versus a big pinch,” Salemi Parizi said. “This gives you added interaction space. For example, if you write ‘hello,’ you could use a flick or a pinch to send that data. Or on a Mario-like game, a pinch could make the character jump, but a flick could make them super jump.”

The researchers designed AuraRing to be ready to use as soon as it comes out of the box and not be dependent on a specific user. They tested the system on 12 participants with different hand sizes. The team compared the actual location of a participant’s finger to where AuraRing said it was. Most of the time, the system’s tracked location agreed with the actual location within a few millimeters.

This ring and wristband combination could be useful for more than games and smartphones, the team said.

The team has a demo of a small controller for mobile VR devices that has the location-tracking accuracy of desktop VR devices.

“Because AuraRing continuously monitors hand movements and not just gestures, it provides a rich set of inputs that multiple industries could take advantage of,” said senior author , a professor in both the Allen School and the electrical and computer engineering department. “For example, AuraRing could detect the onset of Parkinson’s disease by tracking subtle hand tremors or help with stroke rehabilitation by providing feedback on hand movement exercises.”

The technology behind AuraRing is something that could be easily added to smartwatches and other wristband devices, according to the team.

“It’s all about super powers,” Salemi Parizi said. “You would still have all the capabilities that today’s smartwatches have to offer, but when you want the additional benefits, you just put on your ring.”

This research was funded by UW Reality Lab, Facebook, Google and Futurewei.

For more information, contact Salemi Parizi at farshid@cs.washington.edu, Whitmire at emwhit@cs.washington.edu and Patel at shwetak@cs.washington.edu.

]]>
UW virtuoso of mobile sensing technology receives ACM Prize in Computing /news/2019/04/03/uw-virtuoso-of-mobile-sensing-technology-receives-acm-prize-in-computing/ Wed, 03 Apr 2019 12:00:10 +0000 /news/?p=61488
Shwetak Patel. Photo: Mark Stone/University of Washington

 

Shwetak Patel broke new ground in IoT research and brought innovative products to market

A University of Washington professor, Shwetak Patel, is the recipient of the 2018 ACM Prize in Computing for contributions to creative and practical sensing systems for sustainability and health, the Association for Computing Machinery (ACM) announced today.

Until Patel’s work, most systems for monitoring energy and health required expensive and cumbersome specialized devices, precluding practical widespread adoption. Patel and his students found highly creative ways to leverage existing infrastructure to make affordable and accurate monitoring a practical reality. Patel quickly turned his team’s research contributions into real-world deployments, founding companies to commercialize their work.

Shwetak Patel

The ACM Prize in Computing recognizes early-to-mid-career computer scientists whose research contributions have fundamental impact and broad implications. The award carries a prize of $250,000, from an endowment provided by Infosys Ltd. Patel will formally receive the ACM Prize at ACM’s annual awards banquet on June 15, 2019 in San Francisco. This is ACM’s second most prestigious award in all of computing, after the A.M. Turing Award, often called the Nobel Prize of computing.

“Despite the fact that he is only 37, Shwetak Patel has been significantly impacting the field of ubiquitous computing for nearly two decades,” said ACM President Cherri M. Pancake. “His work has ushered in new possibilities in many applications of ubiquitous computing for sustainability and health. Advances in sensors will be central to the ongoing Internet of Things revolution, and applications which allow individuals to monitor their health with smart phones could revolutionize health care, especially in the developing world. Shwetak Patel certainly exemplifies the ACM Prize’s goal of recognizing work with ‘fundamental impact and broad implications.’”

“At the University of Washington, we measure success by impact. Shwetak’s ground-breaking work sets a high standard for what creative thinking and a pioneering spirit can deliver on the frontiers of computing. We congratulate Shwetak for this prestigious award,” said Mark Richards, provost and executive vice president for academic affairs, University of Washington.

“Infosys is proud to support the ACM Prize in Computing, which this year recognizes Shwetak Patel for his trailblazing work in ubiquitous computing,” said Salil S. Parekh, CEO of Infosys. “Beyond breaking new conceptual ground through research in many areas, Shwetak Patel is especially adept at rapidly bringing his ideas to the public via new products that are accessible and affordable. Patel’s vision for ubiquitous computing is to enhance our everyday world with sensing, data processing and computation. The way in which his digital health initiatives combine AI with sensors and mobile computing is also very exciting and will likely have a significant impact on healthcare around the world for many years to come.”

Patel’s research closed the gap between science fiction and reality in many applications in ubiquitous computing for sustainability and health.

Monitoring Energy and Water Usage in the Home

With the emergence of embedded computing systems over the past few decades, a longstanding goal has been to use embedded devices to gain a more fine-grained understanding of home water and energy usage than is available by simply reading a monthly utility bill. In industry, one proposed solution has been to develop “smart appliances” in which items such as refrigerators or televisions would be fitted with special meters so that their energy consumption could be monitored. Rather than having smart devices throughout the home, each with its own meter, Patel recognized that a home’s electrical system (and later its plumbing system) can be reconsidered as a network capable of capturing and transmitting information. Patel’s insight was that each appliance, as it uses power, generates and transmits information as “noise” (perturbations) on the circuit. Patel then developed a method to “disambiguate” (separate and catalog) in which rooms, by which appliances and at which times of day energy was being used. Patel engineered the system so that an entire home could be monitored with just one meter for electricity and one meter for plumbing. Zensi, Inc., the startup he formed to commercialize his work, was sold to Belkin, which subsequently opened a 25-person R&D lab in Seattle to conduct further sustainability research alongside Patel and his team. In developing these products, Patel also became an electrician and plumber.
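The flavor of this single-meter disaggregation can be sketched with a step-change detector on a synthetic whole-home power trace. Patel’s actual systems analyzed much richer electrical noise signatures, so treat this, with its invented wattages, purely as an illustration of the idea:

```python
import numpy as np

# Synthetic whole-home power trace (watts, one sample per second):
# 200 W baseline, a kettle (+1500 W) on from t=60 to t=240, and a
# fridge compressor (+120 W) on from t=300, plus measurement noise.
power = np.full(600, 200.0)
power[60:240] += 1500.0
power[300:] += 120.0
power += np.random.default_rng(3).normal(0, 5.0, power.size)

# Known appliance step signatures in watts (illustrative values).
signatures = {"kettle": 1500.0, "fridge compressor": 120.0}

# Detect abrupt step changes in the aggregate signal and attribute
# each one to the appliance whose signature is closest.
deltas = np.diff(power)
events = np.where(np.abs(deltas) > 50.0)[0]
for i in events:
    step = deltas[i]
    name = min(signatures, key=lambda k: abs(abs(step) - signatures[k]))
    state = "on" if step > 0 else "off"
    print(f"t={i + 1}s: {name} turned {state} ({step:+.0f} W)")
```

One meter sees everything; the per-appliance detail is recovered afterward in software, which is what made whole-home monitoring affordable.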

Low-Powered Home Sensors

Building on his work on sustainability in residential environments, Patel next developed a new approach for wireless sensor nodes in the home, which dramatically reduced the power consumption of each node while continuing to cover the whole home. Patel’s SNUPI (Sensor Nodes Utilizing Powerline Infrastructure) nodes contain an ultra-low-power transmitter that extends its range by coupling its wirelessly transmitted signal to the existing powerlines. SNUPI was a core component of Patel’s next startup, WallyHome. Through its network of low-powered sensors, WallyHome monitors temperature, humidity and any potential water leaks. For example, if an event occurs in the home, such as a dishwasher leak, the homeowner will receive an instant text on their mobile phone. While WallyHome was purchased by Sears Holding Company in 2015, Patel’s research in smart home sensing has informed much of the growing smart home industry, including companies like Google, Nest and Samsung.

Digital Health

More recently, Patel has leveraged sensors already on mobile phones (e.g., camera, microphone) for physiological sensing and the management of chronic diseases. These technologies include SpiroSmart and CoughSense, which monitor lung function; BiliCam, which detects jaundice in newborns; HemaApp, which monitors hemoglobin levels; OsteoApp, which screens for osteoporosis; and BPSense, which monitors blood pressure. Patel has been working closely with Bill Gates and the Bill & Melinda Gates Foundation to share these technologies throughout the developing world. His work using a microphone for respiratory monitoring has already been deployed in parts of India and Bangladesh, and HemaApp is being used in Peru to screen for childhood anemia. He also commercialized some of these technologies through his startup Senosis, which was recently acquired by Google.

Biographical Background

Patel is the Washington Research Foundation Entrepreneurship Endowed Professor in Computer Science and Engineering at the UW, where he directs the Ubicomp Lab, which develops innovative sensing systems for real-world applications in health, sustainability and novel interactions. He has joint appointments in the Paul G. Allen School of Computer Science & Engineering and the Department of Electrical & Computer Engineering. He is also a director at Google working on health care.

Patel earned his bachelor’s and Ph.D. degrees in computer science from the Georgia Institute of Technology. His numerous honors include a MacArthur Fellowship, a Sloan Fellowship, the Presidential Early Career Award for Scientists and Engineers (PECASE), an MIT TR-35 Award and a National Academy of Engineering Gilbreth Award. Patel is a Fellow of the ACM.

About the ACM Prize in Computing

The ACM Prize in Computing recognizes an early-to-mid-career fundamental, innovative contribution in computing that, through its depth, impact and broad implications, exemplifies the greatest achievements in the discipline. The award carries a prize of $250,000. Financial support is provided by an endowment from Infosys Ltd. The ACM Prize in Computing was previously known as the ACM-Infosys Foundation Award in the Computing Sciences from 2007 through 2015. ACM Prize recipients are invited to participate in the Heidelberg Laureate Forum, an annual networking event that brings together young researchers from around the world with recipients of the ACM A.M. Turing Award, the Abel Prize, the Fields Medal, and the Nevanlinna Prize.

 

About ACM

ACM is the world’s largest educational and scientific computing society, uniting computing educators, researchers and professionals to inspire dialogue, share resources and address the field’s challenges. ACM strengthens the computing profession’s collective voice through strong leadership, promotion of the highest standards, and recognition of technical excellence. ACM supports the professional growth of its members by providing opportunities for lifelong learning, career development and professional networking.

 

About Infosys

Infosys is a global leader in technology services and consulting. We enable clients in more than 50 countries to create and execute strategies for their digital transformation. From engineering to application development, knowledge management and business process management, we help our clients find the right problems to solve, and to solve these effectively. Our team of 199,000+ innovators across the globe is differentiated by the imagination, knowledge and experience, across industries and technologies, that we bring to every project we undertake. See how Infosys (NYSE: INFY) can help your enterprise thrive in the digital age.

This press release was adapted from an ACM press release.

###

For more information or to reach Patel, email shwetak@cs.washington.edu.

]]>
PupilScreen aims to allow parents, coaches, medics to detect concussion, brain injuries with a smartphone /news/2017/09/06/pupilscreen-aims-to-allow-parents-coaches-medics-to-detect-concussion-brain-injuries-with-a-smartphone/ Wed, 06 Sep 2017 15:48:53 +0000 /news/?p=54603
PupilScreen aims to allow anyone with a smartphone to objectively screen for concussion and other brain injuries on the spot, whether on the sidelines of a sports game or at an accident site. Photo: Dennis Wise/University of Washington

University of Washington researchers are developing the first smartphone app that is capable of objectively detecting concussion and other traumatic brain injuries in the field: on the sidelines of a sports game, on a battlefield or in the home of an elderly person prone to falls.

PupilScreen can detect changes in a pupil’s response to light using a smartphone’s video camera and deep learning tools (a type of artificial intelligence) that can quantify changes imperceptible to the human eye.

This pupillary light reflex has long been used to assess whether a patient has severe traumatic brain injury, and a growing body of research finds it can be useful in detecting milder concussions, opening up an entirely new avenue for screening.

The team of UW computer scientists, electrical engineers and medical researchers has demonstrated that PupilScreen can be used to detect instances of significant traumatic brain injury. A broader clinical study this fall will put PupilScreen in the hands of coaches, emergency medical technicians, doctors and others to gather more data on which pupillary response characteristics are most helpful in determining ambiguous cases of concussion. The researchers hope to release a commercially available version of PupilScreen within two years.

“Having an objective measure that a coach or parent or anyone on the sidelines of a game could use to screen for concussion would truly be a game-changer,” said , the Washington Research Foundation Endowed Professor of Computer Science & Engineering and of Electrical Engineering at the UW. “Right now the best screening protocols we have are still subjective, and a player who really wants to get back on the field can find ways to game the system.”

PupilScreen can currently distinguish between the pupillary light reflex of healthy people (shown above) and patients with severe traumatic brain injury. Additional studies will help determine what characteristics are most useful in detecting milder concussions. Photo: Dennis Wise/University of Washington

As described in a paper to be presented Sept. 13, PupilScreen can assess a patient’s pupillary light reflex almost as well as a pupillometer, an expensive and rarely used machine found only in hospitals. It uses the smartphone’s flash to stimulate the patient’s eyes and the video camera to record a three-second video.

The video is processed using deep learning algorithms that can determine which pixels belong to the pupil in each video frame and measure the changes in pupil size across those frames. In a small pilot study that combined 48 results from patients with traumatic brain injury and from healthy people, clinicians were able to diagnose the brain injuries with almost perfect accuracy using the app鈥檚 output alone.
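The measurement step (turning per-frame pupil pixels into a size-over-time signal) can be sketched as follows. Synthetic circular masks stand in for the deep network’s per-frame segmentation output, and the frame count, pixel scale and radii are invented for illustration:

```python
import numpy as np

def pupil_diameter(mask, mm_per_pixel=0.05):
    # Equivalent-circle diameter of a binary pupil mask:
    # area = pi * (d/2)^2, solved for d, then scaled to millimeters.
    area_px = mask.sum()
    return 2.0 * np.sqrt(area_px / np.pi) * mm_per_pixel

def synthetic_mask(radius_px, size=200):
    # Stand-in for the network's output: a filled circle of "pupil"
    # pixels centered in the frame.
    yy, xx = np.mgrid[:size, :size]
    return (xx - size // 2) ** 2 + (yy - size // 2) ** 2 <= radius_px ** 2

# Simulate the constriction after the flash: the pupil radius shrinks
# from 40 to 25 pixels over 90 frames (3 seconds at 30 fps).
radii = np.linspace(40, 25, 90)
diams = [pupil_diameter(synthetic_mask(r)) for r in radii]

constriction = (diams[0] - diams[-1]) / diams[0]
print(f"constriction: {constriction:.0%}")  # ~38%
```

Features of this curve, such as how far and how fast the pupil constricts, are the kinds of characteristics clinicians then interpret.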

In amateur sports today, even the best practices coaches or parents use when an athlete is suspected of having a concussion during a game (asking where they are, having them repeat a list of words, checking their balance, having them touch a finger to their nose) are essentially subjective assessments. By contrast, PupilScreen aims to generate objective and clinically relevant data that anyone on the sidelines could use to determine whether a player should be further assessed for concussion or other brain injury.

The U.S. Centers for Disease Control and Prevention estimates that many concussions in the U.S. from recreational sports injuries alone still go undiagnosed, putting millions of young players and adults at risk for future head injury and permanent cognitive deficits.

UW Medicine residents who collaborated with the UW UbiComp Lab on PupilScreen are Dr. Tony Law of the Department of Otolaryngology - Head and Neck Surgery (left) and Dr. Lynn McGrath of the Department of Neurological Surgery (right). Photo: Dennis Wise/University of Washington

Historically, there’s been no surefire way to diagnose concussion, even in the emergency room, said co-author Dr. Lynn McGrath, a resident physician in UW Medicine’s Department of Neurological Surgery. Doctors usually run tests to rule out worst cases like a brain bleed or skull fracture. After more serious head injuries are excluded, a diagnosis of concussion can be made.

Medical professionals have long used the pupillary light reflex, usually in the form of a penlight test where they shine a light into a patient’s eyes, to assess severe forms of brain injury. But a growing body of medical research has recently found that more subtle changes in pupil response can be useful in detecting milder concussions.

“PupilScreen aims to fill that gap by giving us the first capability to measure an objective biomarker of concussion in the field,” McGrath said. “After further testing, we think this device will empower everyone from Little League coaches to NFL doctors to emergency department physicians to rapidly detect and triage head injury.”

Researchers initially tested PupilScreen with a 3-D printed box that controls the eye’s exposure to light, but the goal is to obtain accurate results with a smartphone’s camera alone. Photo: Dennis Wise/University of Washington

While the UW team initially tested PupilScreen with a 3-D printed box to control the eye’s exposure to light, researchers are now training their machine learning neural network to produce similar results with the smartphone camera alone.

“The vision we’re shooting for is having someone simply hold the phone up and use the flash. We want every parent, coach, caregiver or EMT who is concerned about a brain injury to be able to use it on the spot without needing extra hardware,” said lead author , a doctoral student in the Paul G. Allen School of Computer Science & Engineering.

The PupilScreen research team includes Shwetak Patel (left), the Washington Research Foundation Endowed Professor of Computer Science & Engineering and of Electrical Engineering, and Alex Mariakakis (right), doctoral student in the Paul G. Allen School of Computer Science & Engineering. Photo: Dennis Wise/University of Washington

One of the challenges in developing PupilScreen involved training the machine learning tools to distinguish between the eye’s pupil and iris, which required annotating roughly 4,000 images of eyes by hand. A computer has the advantage of being able to quantify subtle changes in the pupillary light reflex that the human eye cannot perceive.

“Instead of designing an algorithm to solve the specific problem of measuring pupil response, we moved this to a machine learning approach, collecting a lot of data and writing an algorithm that allowed the computer to learn for itself,” said co-author , a UW medical student and doctoral student in physiology and biophysics.

The PupilScreen researchers are currently working to identify partners interested in conducting additional field studies of the app, which they expect to begin in October.

The project was funded by the National Science Foundation, the Washington Research Foundation and Amazon Catalyst.

Co-authors include , and of the Paul G. Allen School of Computer Science & Engineering and UW Medicine Otolaryngology - Head and Neck Surgery resident .

For more information, contact the research team at uwpupilscreen@gmail.com.

]]>