Jacob Sunshine – UW News

A contact-tracing app that helps public health agencies and doesn't compromise your privacy
/news/2020/04/22/a-contact-tracing-app-that-helps-public-health-agencies-and-doesnt-compromise-your-privacy/ (Wed, 22 Apr 2020)
Contact-tracing apps can monitor who has come in contact with whom and, when appropriate, alert a network of people if someone nearby has been diagnosed with the virus.

Update, July 9, 2020: the app mentioned below is now called .

Stay-at-home orders and social distancing have helped flatten the coronavirus curve in some areas. As parts of the world begin to open up again, communities need a reliable way to track the virus and contain its spread.

Contact-tracing apps may provide one option as part of a larger strategy. These apps monitor who has come in contact with whom and can, when appropriate, alert a network of people if someone nearby has been diagnosed with the virus. But many current contact-tracing apps have significant shortcomings, such as leaking a user's location information or taking away people's control over their own data.

Now researchers from the University of Washington and UW Medicine, along with volunteers from Microsoft, have developed a new tool: CovidSafe. This contact-tracing app, developed with input from public health officials and contact tracing teams, would alert people about potential exposure to COVID-19 without giving up anyone's privacy. The app could also help individuals who test positive prepare for a contact tracing interview with a public health official.

CovidSafe is not yet ready to be downloaded from app stores, but an Android demo version is available. Users who try the demo version, which doesn't have full functionality yet, can submit feedback to the team. The app is based on a series of privacy and security guidelines that the team outlined in a white paper posted earlier this month to the preprint site arXiv.

“Contact tracing is one of the most effective tools that public health officials have to halt a pandemic and prevent future outbreaks,” said author , a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. “Our contact-tracing app addresses underlying privacy, security and re-identification issues, rather than sweeping them under the rug. With CovidSafe, all information is stored locally on your phone unless you choose to share that you’ve tested positive. Only then is your data sent to a secure server, and the app alerts anyone who has been nearby. After these notifications are sent, all the information is deleted.”

CovidSafe takes several steps to maintain users' privacy. The app begins by assigning each user a secret code name, which remains private. Then it generates a variation of the code name that changes every 15 minutes and uses Bluetooth to broadcast that to other users nearby. CovidSafe also stores a list of the signals it hears from nearby smartphones. With the full version of CovidSafe, if a user tests positive and they choose to share that information with the app, it will alert anyone who has come in contact with them within the past 14 days (the infection window for COVID-19) without divulging who the person is or where they are.
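The rotating code-name scheme described above can be sketched in a few lines. This is a hypothetical illustration under stated assumptions (an HMAC-based derivation, hex-encoded IDs, the function names shown), not the actual CovidSafe protocol:

```python
import hashlib
import hmac
import secrets

ROTATION_SECONDS = 15 * 60  # a fresh broadcast ID every 15 minutes

def make_secret() -> bytes:
    """The user's private code name. It never leaves the phone."""
    return secrets.token_bytes(32)

def broadcast_id(secret: bytes, timestamp: int) -> str:
    """Derive the short-lived variation broadcast over Bluetooth.

    The ID is an HMAC of the current 15-minute interval, so nearby
    observers cannot link two intervals' IDs without the secret.
    """
    interval = timestamp // ROTATION_SECONDS
    digest = hmac.new(secret, str(interval).encode(), hashlib.sha256)
    return digest.hexdigest()[:16]
```

In such a scheme, a phone would broadcast `broadcast_id(secret, now)` and store the IDs it hears from nearby devices; nothing would leave the phone unless the user chose to report a positive test.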

A screenshot from the CovidSafe app.

“Conventional contact tracing already requires a person to give up some measure of personal privacy as well as the privacy of those they came into contact with,” said collaborator , an associate professor in the Allen School. “However, we can make acceptable trade-offs to enable us to use the best tools available to speed up and improve that process, all while ensuring stronger privacy guarantees at the same time.”

Because not everyone will want to use a contact-tracing app, CovidSafe aims to augment, not replace, conventional contact tracing, which public health officials conduct by interviewing patients who've tested positive about where they have been and whom they have seen. CovidSafe creates a log of users' locations over time, so it can help people in these interviews by providing details about where they've been lately.

“This is being built first and foremost with contact-tracing teams and public health officials in mind,” said collaborator Dr. , an assistant professor of anesthesiology and pain medicine at the UW School of Medicine. “They are the experts, and much of the functionality has been developed based on direct feedback from teams doing this necessary and difficult work. Combined with extremely thoughtful privacy-preserving designs, this system is built to meet the needs of a privacy-conscious public and to efficiently deliver useful information that can help public health systems and contact tracers work smarter and faster.”

See a related story.

CovidSafe has other features as well: users who have tested positive and are in isolation can track their symptoms, and a messaging system will eventually allow users to receive tailored health announcements from local public health agencies. The researchers also designed the system so that organizations can customize it for their own use.

“Ten years from now, I want to be able to look back and genuinely say, ‘I did something to help in the greatest crisis of my lifetime,'” said collaborator , one of the project volunteers who is also a computer scientist at Microsoft Research. “At this point, dozens of people have contributed hundreds of hours toward making this project happen. We have all the expertise needed to create something genuinely useful, and we are well on the way.”

Other co-authors on the white paper are , , , and in the Allen School, and and at Microsoft.

This research was funded by the Washington Research Foundation, the Office of Naval Research, the National Science Foundation, the National Institutes of Health and the Alfred P. Sloan Foundation.

For more information or to submit feedback about CovidSafe, contact the team at Covidsafe@uw.edu.

Grant numbers: N00014-18-1-2247, CCF-1637360, CCF-1740551, K23DA046686, 1914873, 1812559, CNS-1553758 and CNS-1719146

First smart speaker system that uses white noise to monitor infants' breathing
/news/2019/10/15/smart-speaker-system-white-noise-infants-breathing/ (Tue, 15 Oct 2019)
UW researchers have developed a new smart speaker skill that lets a device use white noise to both soothe sleeping babies and monitor their breathing and movement. Photo: Dennis Wise/University of Washington

Gone are the days when people used smart speakers, like Amazon Echo or Google Home, only as kitchen timers or dinner party music players. These devices have started helping people track their own health, and can even monitor for cardiac arrest.

Now researchers at the University of Washington have developed a new smart speaker skill that lets a device use white noise to both soothe sleeping babies and monitor their breathing and movement.


With this skill, called BreathJunior, the smart speaker plays white noise and records how the noise is reflected back to detect breathing motions of infants’ tiny chests. When the researchers tested BreathJunior with five babies in a local hospital’s neonatal intensive care unit, it detected respiratory rates that closely matched the rates detected by standard vital sign monitors. The team will present its findings October 22 at the conference in Los Cabos, Mexico.

“One of the biggest challenges new parents face is making sure their babies get enough sleep. They also want to monitor their children while they’re sleeping. With this in mind, we sought to develop a system that combines soothing white noise with the ability to unobtrusively measure an infant’s motion and breathing,” said co-author, an assistant professor of anesthesiology and pain medicine at the UW School of Medicine.

To make things easy for new parents, the team made a system that could run on a smart speaker that replicates the hardware in an Amazon Echo.

“Smart speakers are becoming more and more prevalent, and these devices already have the ability to play white noise,” said co-author, an associate professor in the UW’s Paul G. Allen School of Computer Science & Engineering and the director of the . “If we could use this white noise feature as a contactless way to monitor infants’ hand and leg movements, breathing and crying, then the smart speaker becomes a device that can do it all, which is really exciting.”

White noise is a random mix of many sound frequencies, which makes a seemingly random soothing sound that can help cover up other noises that might wake a sleeping baby. To use white noise as a breathing monitor, the team needed to develop a method to detect tiny changes between the white noise a smart speaker plays and the white noise that gets reflected back from the infant’s body into the speaker’s array of microphones.

“We start out by transmitting a random white noise signal. But we are generating this random signal, so we know exactly what the randomness is,” said first author, a doctoral student in the Allen School. “That signal goes out and reflects off the baby. Then the smart speaker’s microphones get a random signal back. Because we know the original signal, we can cancel out any randomness from that and then we’re left with only information about the motion from the baby.”
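The "cancel out the randomness" step can be illustrated with a toy simulation: transmit a known white-noise signal, receive a delayed echo, and cross-correlate with the known signal to recover the echo's timing (whose slow variation over time would reflect chest motion). All lengths, delays and noise levels below are made up for illustration; this is not the BreathJunior implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# The speaker transmits white noise that we generated ourselves,
# so the "random" signal is fully known to the receiver.
n = 4800
tx = rng.standard_normal(n)

# Simulate the echo off the baby's chest: attenuated, delayed,
# plus a little ambient noise.
delay = 37  # round-trip travel time in samples
rx = np.zeros(n)
rx[delay:] = 0.2 * tx[: n - delay]
rx += 0.01 * rng.standard_normal(n)

# Because tx is known exactly, cross-correlation cancels the
# randomness: the peak sits at the echo's delay, and tiny shifts
# in that peak over time are the motion signal.
corr = np.correlate(rx, tx, mode="full")
estimated_delay = int(np.argmax(corr)) - (n - 1)
```

The same correlation repeated over successive windows would turn the stream of delay estimates into a breathing waveform.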

Detecting breathing in babies has an extra wrinkle: The movement of their chests is so tiny that the smart speaker needs to know exactly where the babies are to be able to “see” them breathing.

“The breathing signal is so weak that we can’t just look for a change in the overall signal we get back,” Wang said. “We needed a way to scan the room and pinpoint where the baby is to maximize changes in the white noise signal. Our algorithm takes advantage of the fact that smart speakers have an array of microphones that can be used to focus in the direction of the infant’s chest. It starts listening for changes in a bunch of potential directions, and then continues the search toward the direction that gives the clearest signal.”
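The direction search the quote describes can be sketched as delay-and-sum beamforming over a toy two-microphone array: try candidate inter-microphone delays (each corresponding to a direction) and keep the one where the summed signals reinforce most strongly. The geometry and all numbers are assumptions for illustration, not the team's algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# One sound source recorded by two microphones; it reaches the second
# mic a few samples later, with the lag depending on direction.
n = 2000
source = rng.standard_normal(n + 16)
true_delay = 3
mic1 = source[16 : 16 + n]
mic2 = source[16 - true_delay : 16 - true_delay + n]

def steered_energy(shift: int) -> float:
    """Delay-and-sum: compensate mic2 by `shift` samples, then sum.

    When the shift matches the true delay, the signals add coherently
    and the energy of the sum is largest.
    """
    aligned = mic1[: n - 16] + mic2[shift : shift + n - 16]
    return float(np.sum(aligned ** 2))

# Scan candidate directions and keep the clearest one.
best_shift = max(range(9), key=steered_energy)
```

A real speaker has more microphones and searches in two dimensions, but the principle, steering the array until the echo is strongest, is the same.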

With this smart speaker skill, the device plays white noise and records how the noise is reflected back to detect breathing motions of infants’ tiny chests. Photo: Dennis Wise/University of Washington

BreathJunior tracks both small motions, such as the chest movement involved in breathing, and large motions, such as babies moving around in their cribs. It can also pick up the sound of a baby crying.

The team created a prototype smart speaker to test BreathJunior on an infant simulator. The researchers could set the simulator to breathe at specific rates, which allowed them to test how well BreathJunior detected a variety of respiratory rates, from a slow 20 breaths per minute up to 60 breaths per minute. The infant simulator also allowed the team to test whether BreathJunior could detect abnormal breathing patterns, such as apnea, that are common in babies who are born early and may not have fully developed respiratory centers in their brains. The system performed well on both tests.

Then the team tested how well their prototype tracked real babies’ breathing in the neonatal intensive care unit or NICU. These babies are connected to wired, hospital-grade respiratory monitors, so the team could compare their readouts to BreathJunior’s. The system was able to accurately identify respiratory rates up to 65 breaths per minute.

“Infants in the NICU are more likely to have either quite high or very slow breathing rates, which is why the NICU monitors their breathing so closely,” Sunshine said. “BreathJunior holds potential for parents who want to use white noise to help their child sleep and who also want a way to monitor their child’s breathing and motion. It also has appeal as a tool for monitoring breathing in the subset of infants in whom home respiratory monitoring is clinically indicated, as well as in hospital environments where doctors want to use unwired respiratory monitoring.

“However, it is very important to note that the American Academy of Pediatrics recommends not using a monitor that markets itself as reducing the risk of sudden infant death syndrome, and this research and the team makes no such claim.”

While BreathJunior currently uses white noise to track breathing and motion, the researchers would like to expand its capabilities so that it could also use other soothing sounds like lullabies.

The team plans to commercialize this technology through a UW spinout.

“In just a few years, we have come a long way from monitoring large motions in adults to extracting the tiny motion of a newborn infant's breathing,” Gollakota said. “This has been possible because of algorithmic innovations as well as advances in smart speaker hardware. Looking ahead, one can envision transforming a smart speaker into a device that can contactlessly monitor a variety of vital signs beyond just breathing.”

This research was funded by the National Science Foundation.

###

For more information, contact whitenoise@cs.washington.edu.

Grant numbers: CNS 1812559, 1914873

‘Alexa, monitor my heart’: Researchers develop first contactless cardiac arrest AI system for smart speakers
/news/2019/06/19/first-contactless-cardiac-arrest-ai-system-for-smart-speakers/ (Wed, 19 Jun 2019)
UW researchers have developed a new tool to monitor people for cardiac arrest while they're asleep, all without touching them. The tool is essentially an app for a smart speaker or a smartphone that allows it to detect the signature sounds of cardiac arrest and call for help. Photo: Sarah McQuate/University of Washington

Almost 500,000 Americans die each year from cardiac arrest, when the heart suddenly stops beating.

People experiencing cardiac arrest will suddenly become unresponsive and either stop breathing or gasp for air, a sign known as agonal breathing. Immediate CPR can double or triple someone’s chance of survival, but that requires a bystander to be present.

Cardiac arrests often occur outside of the hospital and in the privacy of someone's home. Research suggests that one of the most common locations for an out-of-hospital cardiac arrest is a patient's bedroom, where no one is likely to be around or awake to respond and provide care.

Researchers at the University of Washington have developed a new tool to monitor people for cardiac arrest while they're asleep without touching them. A new skill for a smart speaker, like Google Home and Amazon Alexa, or a smartphone lets the device detect the gasping sound of agonal breathing and call for help. On average, the proof-of-concept tool, which was developed using real agonal breathing instances captured from 911 calls, detected agonal breathing events 97% of the time from up to 20 feet (6 meters) away. The team published its findings June 19 in a Nature journal.

“A lot of people have smart speakers in their homes, and these devices have amazing capabilities that we can take advantage of,” said co-corresponding author , an associate professor in the UW’s Paul G. Allen School of Computer Science & Engineering. “We envision a contactless system that works by continuously and passively monitoring the bedroom for an agonal breathing event, and alerts anyone nearby to come provide CPR. And then if there’s no response, the device can automatically call 911.”

The researchers envision a contactless system that works by continuously and passively monitoring the bedroom for an agonal breathing event. If it detects agonal breathing, it can call for help. Photo: Sarah McQuate/University of Washington

Agonal breathing is present for about 50% of people who experience cardiac arrests, according to 911 call data, and patients who take agonal breaths often have a better chance of surviving.

“This kind of breathing happens when a patient experiences really low oxygen levels,” said co-corresponding author , an assistant professor of anesthesiology and pain medicine at the UW School of Medicine. “It’s sort of a guttural gasping noise, and its uniqueness makes it a good audio biomarker to use to identify if someone is experiencing a cardiac arrest.”

The researchers gathered sounds of agonal breathing from real 911 calls to Seattle’s Emergency Medical Services. Because cardiac arrest patients are often unconscious, bystanders recorded the agonal breathing sounds by putting their phones up to the patient’s mouth so that the dispatcher could determine whether the patient needed immediate CPR. The team collected 162 calls between 2009 and 2017 and extracted 2.5 seconds of audio at the start of each agonal breath to come up with a total of 236 clips. The team captured the recordings on different smart devices 鈥 an Amazon Alexa, an iPhone 5s and a Samsung Galaxy S4 鈥 and used various machine learning techniques to boost the dataset to 7,316 positive clips.

“We played these examples at different distances to simulate what it would sound like if the patient was at different places in the bedroom,” said first author , a doctoral student in the Allen School. “We also added different interfering sounds such as sounds of cats and dogs, cars honking, air conditioning, things that you might normally hear in a home.”
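Augmentation of this kind, overlaying an interfering sound on a clip at a controlled signal-to-noise ratio, can be sketched as below. The helper name, the SNR convention, and the random stand-in signals are assumptions for illustration, not the team's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)

def mix_at_snr(clip: np.ndarray, interference: np.ndarray, snr_db: float) -> np.ndarray:
    """Overlay `interference` on `clip`, scaled to a target SNR in dB."""
    clip_power = np.mean(clip ** 2)
    noise_power = np.mean(interference ** 2)
    # Scale so that 10*log10(clip_power / scaled_noise_power) == snr_db.
    scale = np.sqrt(clip_power / (noise_power * 10 ** (snr_db / 10)))
    return clip + scale * interference

clip = rng.standard_normal(1000)      # stand-in for a 2.5-second audio clip
barking = rng.standard_normal(1000)   # stand-in for an interfering sound
augmented = mix_at_snr(clip, barking, snr_db=10.0)
```

Repeating this over many interference types and SNR levels is one way a few hundred clips could be expanded into thousands of training examples.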

For the negative dataset, the team used 83 hours of audio data collected during sleep studies, yielding 7,305 sound samples. These clips contained typical sounds that people make in their sleep, such as snoring or obstructive sleep apnea.

From these datasets, the team used machine learning to create a tool that could detect agonal breathing 97% of the time when the smart device was placed up to 6 meters away from a speaker generating the sounds.

Next the team tested the algorithm to make sure that it wouldn’t accidentally classify a different type of breathing, like snoring, as agonal breathing.

“We don’t want to alert either emergency services or loved ones unnecessarily, so it’s important that we reduce our false positive rate,” Chan said.

For the sleep lab data, the algorithm incorrectly categorized a breathing sound as agonal breathing 0.14% of the time. The false positive rate was about 0.22% for separate audio clips, in which volunteers had recorded themselves while sleeping in their own homes. But when the team had the tool classify something as agonal breathing only when it detected two distinct events at least 10 seconds apart, the false positive rate fell to 0% for both tests.
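The two-event rule in that last step can be sketched as a simple filter over detection timestamps. A minimal sketch, with the function name and event representation assumed:

```python
def confirmed_agonal(event_times: list[float], min_gap: float = 10.0) -> bool:
    """Return True only if two distinct detections occur at least
    `min_gap` seconds apart.

    A lone detection may be a snore or other false positive; requiring
    a second, well-separated event suppresses such one-off errors.
    """
    for i, t in enumerate(event_times):
        for later in event_times[i + 1:]:
            if later - t >= min_gap:
                return True
    return False
```

For example, detections at 5 s and 16 s would trigger an alert, while detections at 5 s and 9 s would not.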

The team envisions this algorithm could function like an app, or a skill for Alexa that runs passively on a smart speaker or smartphone while people sleep.

See related stories.

“This could run locally on the processors contained in the Alexa. It’s running in real time, so you don’t need to store anything or send anything to the cloud,” Gollakota said.

“Right now, this is a good proof of concept using the 911 calls in the Seattle metropolitan area,” he said. “But we need to get access to more 911 calls related to cardiac arrest so that we can improve the accuracy of the algorithm further and ensure that it generalizes across a larger population.”

The researchers plan to commercialize this technology through a UW spinout.

“Cardiac arrests are a very common way for people to die, and right now many of them can go unwitnessed,” Sunshine said. “Part of what makes this technology so compelling is that it could help us catch more patients in time for them to be treated.”

A professor of general internal medicine at the UW School of Medicine, who also serves as a medical director, was a co-author on this paper. This research was funded by the National Science Foundation.

###

For more information, contact the research team at cardiacalert@cs.washington.edu.

First smartphone app to detect opioid overdose and its precursors
/news/2019/01/09/smartphone-app-detects-opioid-overdose/ (Wed, 09 Jan 2019)
UW researchers have developed a cellphone app that uses sonar to monitor someone's breathing rate and sense when an opioid overdose has occurred. Photo: Mark Stone/University of Washington

At least 115 people die every day in the U.S. after overdosing on opioids.

And in 2016, illegal injectable opioids were increasingly involved in overdose-related deaths. This spike has led to a national public health crisis and epidemic.

During an overdose, a person breathes slower or stops breathing altogether. These symptoms are reversible with the drug naloxone if caught in time.

But people who use opioids by themselves have no way of asking for help in the event of an overdose.

Researchers at the University of Washington have developed a cellphone app, called Second Chance, that uses sonar to monitor someone's breathing rate and sense when an opioid overdose has occurred. The app accurately detects overdose-related symptoms about 90 percent of the time and can track someone's breathing from up to 3 feet away. The team published its results Jan. 9 in Science Translational Medicine.

“The idea is that people can use the app during opioid use so that if they overdose, the phone can potentially connect them to a friend or emergency services to provide naloxone,” said co-corresponding author , an associate professor in the UW’s Paul G. Allen School of Computer Science & Engineering. “Here we show that we have created an algorithm for a smartphone that is capable of detecting overdoses by monitoring how someone’s breathing changes before and after opioid use.”

When the app detects decreased or absent breathing, it will send an alarm asking the person to interact with it before it contacts a trusted friend or emergency services. Photo: Mark Stone/University of Washington

The Second Chance app sends inaudible sound waves from the phone to people’s chests and then monitors the way the sound waves return to the phone to look for specific breathing patterns.

“We’re looking for two main precursors to opioid overdose: when a person stops breathing, or when a person’s breathing rate is seven breaths per minute or lower,” said co-corresponding author , an assistant professor of anesthesiology and pain medicine at the UW School of Medicine. “Less than eight breaths per minute is a common cutoff point in a hospital that would trigger people to go to the bedside and make sure a patient is OK.”
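The two precursors the quote names, absent breathing and a rate of seven breaths per minute or lower, can be sketched as a check over recent breath timestamps. The function name, the apnea window, and the timestamp representation are assumptions for illustration, not the Second Chance algorithm:

```python
def overdose_precursor(breaths: list[float], now: float,
                       apnea_seconds: float = 15.0,
                       low_rate_bpm: float = 7.0) -> bool:
    """Flag possible overdose: no recent breaths, or a slow breathing rate.

    `breaths` holds the times (in seconds) of recently detected breaths.
    """
    if not breaths or now - breaths[-1] >= apnea_seconds:
        return True  # breathing appears to have stopped
    if len(breaths) >= 2:
        # Average breaths per minute over the observed window.
        rate_bpm = 60.0 * (len(breaths) - 1) / (breaths[-1] - breaths[0])
        return rate_bpm <= low_rate_bpm
    return False
```

For instance, breaths arriving every five seconds (12 breaths per minute) would not be flagged, while breaths every ten seconds (6 breaths per minute) would be.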

In addition to watching breathing, Second Chance also monitors how people move.

“People aren’t always perfectly still while they’re injecting drugs, so we want to still be able to track their breathing as they’re moving around,” said lead author , a doctoral student in the Allen School. “We can also look for characteristic motions during opioid overdose, like if someone’s head slumps or nods off.”

To be able to use real-world data to design and test the algorithm behind the app, the researchers partnered with Insite in Vancouver, Canada. Insite is the first legal supervised consumption site in North America. As part of the study, participants at Insite wore monitors on their chests that also track breathing rates.

Second Chance monitors a person's breathing rate to detect an opioid overdose or its precursors. Photo: Mark Stone/University of Washington

“We asked participants to prepare their drugs like they normally would, but then we monitored them for a minute pre-injection so the algorithm could get a baseline value for their breathing rate,” said Nandakumar. “After we got a baseline, we continued monitoring during the injection and then for five minutes afterward, because that’s the window when overdose symptoms occur.”

Of the 94 participants who tested the algorithm, 47 had a breathing rate of seven breaths per minute or slower, 49 stopped breathing for a significant period, and two people experienced an overdose event that required oxygen, ventilation and/or naloxone treatment. On average, the algorithm correctly identified breathing problems that foreshadow overdose 90 percent of the time.

The researchers also wanted to make sure the algorithm could detect actual overdose events, because these occur infrequently at Insite. The researchers worked with anesthesiology teams at UW Medical Center to “simulate” overdoses in an operating room, allowing the app to monitor people and detect when they stop breathing.

“When patients undergo anesthesia, they experience much of the same physiology that people experience when they’re having an overdose,” Sunshine said. “Nothing happens when people experience this event in the operating room because they’re receiving oxygen and they are under the care of an anesthesiology team. But this is a unique environment to capture difficult-to-reproduce data to help further refine the algorithms for what it looks like when someone has an acute overdose.”

For the simulation, the team recruited healthy participants undergoing previously scheduled elective surgeries. After providing informed consent, the patients then received standard anesthetic medications that led to 30 seconds of slower or absent breathing, and these events were captured by the device. The algorithm correctly predicted 19 out of the 20 simulated overdoses. For the one case it was wrong, the patient’s breathing rate was just above the algorithm’s threshold.

If a person fails to interact with the app, the team would like it to contact someone who can administer naloxone. Photo: Mark Stone/University of Washington

Right now, Second Chance is only monitoring the people who use it. The team would eventually like the app to interact with them.

“When the app detects decreased or absent breathing, we’d like it to send an alarm asking the person to interact with it,” Gollakota said. “Then if the person fails to interact with it, that’s when we say: ‘OK this is a stage where we need to alert someone,’ and the phone can contact someone with naloxone.”

The researchers are applying for FDA approval and plan to commercialize this technology through a UW spinout. While the app could be used for all forms of opioid use, the team cautions that so far it has only been tested on illegal injectable opioid use, because deaths from those overdoses are the most common.

“We’re experiencing an unprecedented epidemic of deaths from opioid use, and it’s unfortunate because these overdoses are completely reversible phenomena if they’re detected in time,” Sunshine said. “The goal of this project is to try to connect people who are often experiencing overdoses alone to known therapies that can save their lives. We hope that by keeping people safer, they can eventually access long-term treatment.”

See related stories.

This research was funded by the UW Alcohol and Drug Abuse Institute and the National Science Foundation.

###

 


For more information, contact the research team at secondchance@cs.washington.edu.
