Engineering – UW News (/news)

Tiny cameras in earbuds let users talk with AI about what they see
/news/2026/04/14/cameras-in-wireless-earbuds-vuebuds/ (Tue, 14 Apr 2026)

Two black earbuds: one with the casing removed exposing a computer chip and tiny camera.
UW researchers developed a system called VueBuds that uses tiny cameras in off-the-shelf wireless earbuds to allow users to talk with an AI model about the scene in front of them. Here, the altered earbuds are shown with the camera inserted. Photo: Kim et al./CHI ’26

University of Washington researchers developed the first system that incorporates tiny cameras into off-the-shelf wireless earbuds to allow users to talk with an AI model about the scene in front of them. For instance, a user might turn toward a Korean food package and say, “Hey Vue, translate this for me.” They’d then hear an AI voice reply, “The visible text translates to ‘Cold Noodles’ in English.”

The prototype system, called VueBuds, takes low-resolution, black-and-white images, which it transmits over Bluetooth to a phone or other nearby device. A small artificial intelligence model on the device then answers questions about the images within about a second. For privacy, all of the processing happens on the device, a small light turns on when the system is recording, and users can immediately delete images.

The team will present its findings April 14 at the Association for Computing Machinery Conference on Human Factors in Computing Systems (CHI) in Barcelona.

“We haven’t seen most people adopt smart glasses or VR headsets, in part because a lot of people don’t like wearing glasses, and they often come with privacy concerns, such as recording high-resolution video and processing it in the cloud,” said the senior author, a UW professor in the Paul G. Allen School of Computer Science & Engineering. “But almost everyone wears earbuds already, so we wanted to see if we could put visual intelligence into tiny, low-power earbuds, and also address privacy concerns in the process.”

Cameras use far more power than the microphones already in earbuds, so using the same sort of high-resolution cameras found in smart glasses wouldn’t work. And because Bluetooth can’t stream large amounts of data continuously, the system can’t run continuous video.

The team found that using a low-power camera, roughly the size of a grain of rice, to shoot low-resolution, black-and-white still images limited battery drain and allowed for Bluetooth transmission while preserving performance.
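A back-of-the-envelope calculation shows why still images fit over Bluetooth while continuous video would not. The frame size and link rate below are illustrative assumptions, not figures from the paper:

```python
# Rough bandwidth sketch (assumed numbers, not from the paper): a small
# 8-bit grayscale still versus 30 fps video over a modest Bluetooth link.

FRAME_W, FRAME_H = 240, 240          # assumed still-image resolution
BITS_PER_PIXEL = 8                   # 8-bit grayscale
BLE_THROUGHPUT_BPS = 0.2e6           # assumed sustained link rate, bits/s

frame_bits = FRAME_W * FRAME_H * BITS_PER_PIXEL
seconds_per_frame = frame_bits / BLE_THROUGHPUT_BPS
print(f"one still: {seconds_per_frame:.1f} s to transmit")

# Continuous 30 fps video at the same resolution would need:
video_bps = frame_bits * 30
print(f"30 fps video needs {video_bps / 1e6:.1f} Mbit/s, "
      f"{video_bps / BLE_THROUGHPUT_BPS:.0f}x the assumed link rate")
```

Under these assumptions a single uncompressed still takes a couple of seconds to send, while raw video would need tens of times more bandwidth than the link provides, which is why the system works from occasional low-resolution snapshots.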

There was also the matter of placement.

“One big question we had was: Will your face obscure the view too much? Can earbud cameras capture the user’s view of the world reliably?” said the lead author, who completed this work as a UW doctoral student in the Allen School.

The team found that angling each camera 5-10 degrees outward provides a 98-108 degree field of view. While this creates a small blind spot when objects are held closer than 20 centimeters from the user, people rarely hold things that close to examine them, making it a non-issue for typical interactions.
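The close-range blind spot falls out of simple geometry. The ear spacing and per-camera field of view below are assumptions for illustration (the per-camera value is chosen to be consistent with the reported 98-108 degree combined view at 5-10 degrees of tilt), not figures from the paper:

```python
import math

# Toy geometry sketch (assumed numbers): two cameras sit at the ears,
# a distance d apart, each rotated outward by `tilt`. A point on the
# midline at distance z is visible only while the inward half of a
# camera's field of view still reaches the midline.

def midline_blind_distance(d_m, per_camera_fov_deg, tilt_deg):
    """Distance inside which a midline point falls outside both cameras."""
    inner_half = math.radians(per_camera_fov_deg / 2 - tilt_deg)
    return d_m / (2 * math.tan(inner_half))

# Assumed: 16 cm ear spacing, 88-degree per-camera field of view.
for tilt in (5, 10):
    z = midline_blind_distance(0.16, 88, tilt)
    print(f"{tilt} deg outward tilt -> blind inside ~{100 * z:.0f} cm")
```

Geometry alone gives a blind zone of roughly 10-12 centimeters under these assumptions; the paper’s 20-centimeter figure presumably also accounts for the user’s face partially blocking each camera’s view.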

Researchers also discovered that while the vision language model was largely able to make sense of the images from each earbud, having to process images from both earbuds slowed it down. So they had the system “stitch” the two images into one, identifying overlapping imagery and combining it. This lets the system respond in one second, quick enough to feel like real time for users, rather than the two seconds it takes with separate images.
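The idea of finding overlapping imagery and merging it can be sketched in a few lines. This is a minimal toy version (the search strategy and blending are assumptions, not the authors’ code): slide the two views past each other, pick the overlap where they agree best, and average it.

```python
import numpy as np

# Minimal stitching sketch: find the horizontal offset where the right
# edge of the left image best matches the left edge of the right image,
# then concatenate, averaging the overlapping columns.

def stitch_horizontal(left, right, max_overlap):
    best_off, best_err = 1, np.inf
    for ov in range(1, max_overlap + 1):
        err = np.mean((left[:, -ov:].astype(float) - right[:, :ov]) ** 2)
        if err < best_err:
            best_err, best_off = err, ov
    ov = best_off
    blended = (left[:, -ov:].astype(float) + right[:, :ov]) / 2
    return np.hstack([left[:, :-ov].astype(float), blended, right[:, ov:]])

# Synthetic demo: a 6x10 "scene"; each "earbud" sees 7 columns, with a
# 4-column overlap in the middle.
rng = np.random.default_rng(0)
scene = rng.integers(0, 256, size=(6, 10))
left_view, right_view = scene[:, :7], scene[:, 3:]
pano = stitch_horizontal(left_view, right_view, max_overlap=6)
print(pano.shape)   # the overlap is detected and merged once
```

Because the overlap is merged rather than duplicated, the model sees one panorama instead of two partially redundant images, which is where the speedup comes from.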

The team then had 74 participants compare recorded outputs from VueBuds with outputs from Ray-Ban Meta Glasses in a series of tests. Despite VueBuds using low-resolution images with greater privacy controls and the Ray-Bans taking high-resolution images processed in the cloud, the two systems performed equivalently. Participants preferred VueBuds’ translations, while the Ray-Bans did better at counting objects.

Sixteen participants also wore VueBuds and tested the system’s ability to translate text and answer basic questions about objects. VueBuds achieved 83-84% accuracy when translating or identifying objects and 93% accuracy when identifying the author and title of a book.

This study was designed to gauge the feasibility of integrating cameras into wireless earbuds. Since the system takes only grayscale images, it can’t answer questions that involve color in the scene.

The team wants to add color to the system (color cameras require more power) and to train specialized AI models for specific use cases, such as translation.

“This study lets us glimpse what’s possible just using a general-purpose language model and our wireless earbuds with cameras,” Kim said. “But we’d like to study the system more rigorously for applications like reading a book (for people who have low vision or are blind, for instance) or translating text for travelers.”

Co-authors include a UW master’s student in the Allen School and other UW students in electrical and computer engineering.

For more information, contact vuebuds@cs.washington.edu.

At quantum testbed lab, researchers across the UW probe ‘spooky’ mysteries of quantum phenomena
/news/2026/04/13/qt3-quantum-computing-testbed-lab-dilution-fridge/ (Mon, 13 Apr 2026)

Three people stand next to a complex metal tube-shaped machine
Max Parsons (left), assistant professor of electrical and computer engineering, works with undergraduate staff members Reynel Cariaga (center) and Jesus Garcia (right) at the QT3 lab. The device in the foreground is a scanning tunneling microscope that can image individual atoms within a material by scanning an extremely fine needle, just one atom thick at the tip, across the sample. Photo: Erhong Gao/University of Washington

Even on a campus like the University of Washington’s, home to particle accelerators, wave tanks and countless other bespoke pieces of equipment, the machinery in one lab stands out. Take the dilution fridge, a large, white, cylindrical device that can cool a small chamber to one hundredth of a kelvin above absolute zero, the coldest possible temperature in the universe.

“This is the coldest fridge money can buy,” said Max Parsons, a UW assistant professor of electrical and computer engineering and the former director of the lab, which goes by the nickname QT3. “When it’s running, the chamber inside this device is about 100 times colder than outer space. At that temperature, it’s much easier to study and manipulate a material’s quantum properties.”

The lab also houses a photon qubit tabletop lab: a nondescript set of boxes, lasers and lenses that can demonstrate the “spooky” (a term scientists actually use) phenomenon known as quantum entanglement, in which two particles appear to communicate instantaneously with each other despite being physically apart.

Or there’s the lab’s latest acquisition, the scanning tunneling microscope, which can image individual atoms within a solid material, allowing researchers to study the structure of materials at the smallest scales.

An interdisciplinary group of researchers has spent three years marshaling resources and expertise to create QT3, and now the lab is opening its doors as a one-stop resource for quantum researchers and educators at the UW.

“The idea of this lab is to improve access to quantum hardware,” Parsons said. “It’s rather hard to acquire equipment like this. And there are a lot of researchers who may have good ideas that they want to test, but don’t have the resources yet for their own equipment. So we’re inviting researchers, initially from across campus, but also from other universities and from industry, to come in and test their ideas. This can be a hub for quantum experts to share their ideas and collaborate.”

The lab also boasts hardware that can demonstrate known quantum principles and techniques, making it useful for students in quantum fields. In addition to the entanglement device, Parsons’ students developed a machine that can suspend charged particles (in this case, tiny grains of pollen) in midair using electric fields. Researchers use the same technique to trap single atoms and manipulate their quantum properties, making the lab’s ion-trapping machine good practice for more complex work.

Two tiny dots hover back and forth in a tube
The QT3 facility’s ion trapping lab gives students a chance to practice techniques used in quantum computing research. Here, students have suspended two tiny grains of pollen (the red dots hovering back and forth) in midair using electric fields. Photo: Robert Thomas

Some students even work at the lab through an undergraduate staffing program, and have helped install instrumentation, write code to power equipment and build parts for custom microscopes. The program provides yet another avenue for students to get hands-on experience with unusual machinery and techniques.

“Quantum mechanics is inherently counterintuitive, and that makes it a powerful teaching tool,” Parsons said. “In the QT3 lab, students will encounter systems where their everyday intuition breaks down, and they must rely on careful reasoning and experimentation instead. They learn how to debug when results don’t match expectations, how to test simple cases and how to build understanding about hardware step by step.”

The cosmically cold dilution fridge remains something of a centerpiece, even as the lab fills up with specialized equipment. The extreme environment within the device strips heat, light and other stray energy away from materials, allowing researchers to observe the peculiar quantum properties that remain. One such property is superposition, the ability of a particle like an electron to maintain multiple mutually exclusive properties at the same time. Scientists use superposition to create a powerful, tiny piece of technology: a quantum bit, or qubit.

“Traditional computers use bits, which can only be one or zero. A qubit, on the other hand, we can make one plus zero,” Parsons said. “It’s both at the same time, and only when we measure it do we find out which one it is. We can use this unusual property to build a new class of computers that excel at tasks like communications and encryption.”
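Parsons’ “one plus zero” description corresponds to the textbook equal superposition of a qubit (standard notation, not from the article):

```latex
|\psi\rangle = \frac{1}{\sqrt{2}}\bigl(|0\rangle + |1\rangle\bigr),
\qquad
P(0) = P(1) = \Bigl|\tfrac{1}{\sqrt{2}}\Bigr|^{2} = \tfrac{1}{2}
```

The state is literally a sum of the zero and one states; measuring it yields either outcome with 50% probability, and only then does the qubit settle into a definite value.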

QT3 is part of a collaborative effort to solidify the UW as a leader in quantum research and applications. Most of the lab hardware was funded by a congressional earmark championed by Senator Maria Cantwell’s office. Departmental funding from across the College of Engineering and the College of Arts and Sciences helped rehab the lab space. The National Science Foundation provided seed funding for the instructional lab equipment.

a repeating hexagonal pattern of small golden blobs
An image captured by the QT3 lab鈥檚 scanning tunneling microscope reveals a lattice of individual atoms in a sample of silicon. Photo: Rajiv Giridharagopal

The UW has also spent the past decade investing heavily in faculty with quantum expertise.

“Very few places have expertise across the full quantum stack, from materials up to algorithms,” said a UW professor of physics and the founder of QT3. “The UW has quantum faculty in electrical and mechanical engineering, physics, computer science, materials science and chemistry. Our faculty work on superconducting qubits, spin defects, photons, trapped ions, neutral atoms and topological qubits. Our advantage is the breadth of our investment.”

The lab is now available to researchers and students across the UW, and private companies are encouraged to reach out about partnering. Parsons has already used the lab to teach a graduate-level class in electrical and computer engineering for students who included employees from Boeing, Microsoft and the quantum computing company IonQ. The lab is hiring a full-time manager to maintain the equipment and help users make the most of the facility.

“Here in academia, we can improve the building blocks for applied technologies like quantum computing, and then transfer those learnings to industry for further scaling,” Parsons said.

For more information, contact Parsons at mfpars@uw.edu.

Climate change may complicate avalanche risk across the Pacific Northwest
/news/2026/03/23/climate-change-avalanche-risk/ (Mon, 23 Mar 2026)

Snowy mountains with two signs in foreground. A yellow sign reads “AVALANCHE AREA”; a red and white sign reads “NO STOPPING OR STANDING NEXT ¾ MILE”.
Warming temperatures throughout the Pacific Northwest are likely to complicate avalanche forecasting in the coming years, according to a new UW study. Cooler inland regions such as Idaho and Western Montana may see increased risk from avalanches caused by layers of icy crusts that form when rain falls on snow and freezes. Photo: iStock

This winter was unusually warm; as a result, many snowy, alpine areas have seen bouts of winter rainfall where there would ordinarily be only snow. These unusual weather patterns have contributed to an abysmal ski season, but they can also set the stage for dangerous avalanches. At temperatures close to freezing, precipitation can fall as rain but freeze when it hits the snow, forming an icy crust. Snow that accumulates on top of that crust is unstable and prone to abrupt slides, causing avalanches that can close down a major highway in moments, endanger backcountry skiers and more.

Avalanche experts in Western Washington know how to manage the risks associated with rain-on-snow events, but many of their counterparts in colder regions like Eastern Washington, Idaho and Montana are less familiar with these dynamics. New research from the University of Washington shows that as winters in these regions warm, their snowpacks may come to resemble those of maritime areas, with more rain-on-snow events, icy crusts and more complex avalanche forecasting.

The findings were published in ARC Geophysical Research.

“This winter’s warmth is a harbinger,” said lead author Clinton Alden, a UW graduate student in civil and environmental engineering. “We know that temperatures will keep rising, and our work is a red flag for cooler regions of the greater Pacific Northwest, such as Idaho and Western Montana, that aren’t used to dealing with ice crusts and their resulting avalanche problems.”

A cross-section of a snow drift with a shovel in the foreground. A horizontal line is visible running through the drift about halfway up.
A cross-section of snowpack reveals a thin, darker ice layer running horizontally through the snow. Ice layers like this one form when rain falls onto snow and freezes, forming a crust. This creates a boundary within the snowpack that can cause snow to slip and trigger an avalanche. Photo: Clinton Alden

The study is part of a larger effort to understand the structure of snow as it accumulates, which has implications for weather and avalanche forecasting, wildlife dynamics and more.

“Snow scientists are pretty good at measuring snow depth and volume,” said the senior author, a UW professor of civil and environmental engineering. “We’re also pretty good at figuring out how much water you get if all that snow melts. But our models aren’t as good at representing snow structure, such as layers of different densities and crystal types that increase avalanche risks. And we really want to know how the structure of snow changes as the climate changes. That’s a tricky question that no one has tackled, particularly for rain-on-snow conditions.”

To dig into that question, the researchers studied how warming influences ice layer formation in seasonal snowpacks. First, they collected temperature and precipitation data captured by 53 monitoring stations across the Pacific Northwest over the past 25 years. They used a computer model to identify days when ice layers likely formed at each location. They then checked the model against real-world measurements at one of the locations, a station at Snoqualmie Pass, and found that the model matched the measurements with 74% accuracy.

Finally, they used the same model to simulate those same 25 winters at 2 C and 4 C warmer than they were, and looked for changes in the number of ice crusts across the region. By current projections, the Pacific Northwest is expected to warm by 2 C to 5 C by 2050 compared with pre-2000 temperatures.
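The study’s basic workflow can be sketched with a toy classifier. The function name, variable names and the near-freezing rule below are illustrative assumptions, not the authors’ model: flag days where precipitation falls close to 0 C (likely rain onto snow), then repeat with uniformly warmed temperatures.

```python
# Toy version of the analysis (assumed rule, not the authors' model):
# count days where precipitation likely fell as rain near freezing,
# at observed temperatures and again under uniform warming.

def ice_crust_days(daily_temp_c, daily_precip_mm, warming_c=0.0,
                   lo=-0.5, hi=2.0):
    """Count days where precipitation likely falls as rain onto snow."""
    return sum(1 for t, p in zip(daily_temp_c, daily_precip_mm)
               if p > 0 and lo <= t + warming_c <= hi)

# Toy winter at a cold inland site hovering below freezing.
temps = [-6, -4, -3, -2, -1, 0, 1, -5, -3, -2]
precip = [5, 0, 2, 3, 1, 4, 0, 6, 2, 3]

for dT in (0, 2, 4):
    print(f"+{dT} C: {ice_crust_days(temps, precip, dT)} ice-crust days")
```

Even this toy model reproduces the study’s qualitative pattern: warming pushes a cold site’s snowy days into the freezing-rain band, increasing crust days, while further warming starts converting some of those days into plain rain or slush instead.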

A map of the Pacific Northwest with red and blue triangles scattered across it. The red triangles point down and the blue triangles point up.
This map shows the change in the number of “ice crust days” across the 53 monitoring sites during the simulated winter with 2 C warming. The Cascade sites overwhelmingly saw fewer theoretical ice crust days, whereas cooler inland regions overwhelmingly saw more. Photo: Alden et al./ARC Geophysical Research

The results were split regionally by the Cascade mountains. In colder, inland parts of the Pacific Northwest, places like Eastern Washington, Idaho and Montana, higher temperatures created more rain-on-snow days and more avalanche-prone ice layers. Locations in the warmer, maritime Cascades saw the opposite effect: Higher temperatures created slush instead of ice, potentially reducing the avalanche risk associated with ice crusts.

The predicted snowpack changes may also impact wildlife behavior. Some foraging mammals, such as reindeer, dig down into the snow in search of food and may have a hard time breaking through an icy crust. Conversely, firm ice might provide a better running surface for animals fleeing predators. Specific regional effects will require additional study.

What’s clear now is that those who work or play in avalanche terrain in broad swaths of the Pacific Northwest, and even beyond, may need to adjust to a new set of risk factors.

“I get calls from avalanche forecasters in places like Colorado, Wyoming and Montana. They tell me they’re getting rain at 10,000 feet, which they’ve never seen before,” said a co-author, the avalanche forecaster supervisor at the Washington State Department of Transportation at Snoqualmie Pass, who earned his master’s in transportation and highway engineering at the UW. “They want to know when to expect the onset of avalanches and when to expect the return to stability.”

Alden hopes that this research will encourage further collaboration within the avalanche forecasting community.

“I’d love to see this shared with avalanche forecasters widely, both as a call to action and as a way to help them understand what their snowpack might look like in the future,” Alden said.

The director of geospatial science at Audubon Alaska, a former doctoral student of environmental and forest sciences at the UW, is also a co-author.

This research was funded by the NASA Interdisciplinary Research in Earth Science program and the UW Program on Climate Change鈥檚 Graubard Fellowship.

For more information, contact Alden at cdalden@uw.edu.

New marine energy tech is put to the test at Harris Hydraulics Lab
/news/2026/03/06/marine-energy-turbines-harris-hydraulics-uw-pnnl/ (Fri, 06 Mar 2026)

At the University of Washington’s Harris Hydraulics Lab, an odd scene plays out. Over and over again, researchers from the UW and the Pacific Northwest National Laboratory (PNNL) pass a small rubber model of a marine animal through a large tank filled with flowing water and fitted with a spinning turbine. On some runs, the model bonks against the turbine blades; on others, it receives a glancing blow or sails past undisturbed. When bonks or nicks occur, a small collision sensor on one of the turbine’s blades detects the impacts and plots the interactions in a computer program.
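The article doesn’t detail how the collision sensor’s data is processed, but a common approach for this kind of impact logging is simple threshold detection on the sensor signal. A hedged sketch, with invented signal values:

```python
# Hypothetical impact logger (illustrative only; not the project's actual
# processing): flag samples where the blade sensor reading jumps well
# above its baseline, and suppress the ring-down right after a hit.

def detect_impacts(samples, threshold=3.0, refractory=5):
    """Return indices of spikes, ignoring echoes within `refractory` samples."""
    hits, last = [], -10**9
    for i, a in enumerate(samples):
        if a >= threshold and i - last > refractory:
            hits.append(i)
            last = i
    return hits

# Toy signal: quiet rotation noise with two "bonks" at samples 12 and 40.
signal = [0.2] * 50
signal[12], signal[13] = 6.0, 4.5   # one impact ringing across two samples
signal[40] = 5.2
print(detect_impacts(signal))       # [12, 40]: the ring-down is suppressed
```

The refractory window is what keeps one physical bonk from being logged as several events, which matters if impact counts are the quantity being plotted.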

The researchers are repeatedly simulating something that they hope will rarely happen in the wild: a collision between marine wildlife, such as a seabird, seal, fish or whale, or submerged debris like logs, and an underwater turbine.

“We want to make sure we’re minimizing the chances of a collision in the first place,” said Aidan Hunt, a senior research engineer in mechanical engineering at the UW and a member of the Pacific Marine Energy Center (PMEC). “But if a collision were to occur, we want to be able to detect it, and potentially avoid it, in real time. The available evidence suggests that collisions are rare, but we’re taking a ‘trust-but-verify’ approach.”

Marine energy, power harvested from tides, waves and currents, has enormous potential as a clean, renewable resource. But more information is needed about how large, commercial installations of underwater turbines or power-generating buoys could affect marine wildlife, whether through increased noise in the environment, habitat change or direct interactions with equipment.

The marine collision experiments are part of the Triton Initiative, a collection of projects led by PNNL to study the environmental impact of marine energy.

The work at Harris Hydraulics follows a study by PNNL and the UW Applied Physics Lab using a four-foot-tall prototype turbine installed at the entrance to Sequim Bay. In that study, researchers trained an underwater camera on the turbine for 109 days and then catalogued every instance of an animal approaching or interacting with the turbine. The camera captured more than 1,000 instances of fish, birds and seals approaching the turbine blades. There were only four collisions, and all involved small fish.

“This study was a first step, but a promising one,” said a co-author, a research scientist at the UW Applied Physics Lab. “We didn’t see any endangered species in our study, and the risk of collision for seals and sea birds seemed to be quite low. We’re excited to get back out there with the camera and learn even more.”

The Sequim Bay experiment generated hours of valuable data, but that degree of intense monitoring may not be practical for large commercial installations in the future. Cheaper impact sensors, like the ones logging bath-toy impacts at Harris Hydraulics, could be a solution, researchers say.

The project is funded by the U.S. Department of Energy’s Hydropower & Hydrokinetics Office, through the Pacific Northwest National Laboratory’s Triton Initiative and the TEAMER program.

For more information, contact Hunt at ahunt94@uw.edu or Emma Cotter at emma.cotter@pnnl.gov.

Selective forest thinning in the eastern Cascades supports both snowpack and wildfire resilience
/news/2026/03/03/forest-thinning-snowpack-snow-drought-wildfire-resilience/ (Tue, 03 Mar 2026)

An aerial photo of a snowy forest with a mountain range in the background. In the foreground, several small figures stand next to a pickup truck.
UW researchers, including members of the RAPID facility, fly a drone along Cle Elum Ridge in the eastern Cascades. The drone was equipped with a lidar sensor that helped the team build a detailed 3D map of the study area and changes to the snowpack there. Photo: Mark Stone/University of Washington

As climate change nudges weather in the eastern Cascades in extreme and volatile directions, forest managers in the region have a lot to juggle. Hotter, drier summers are contributing to bigger and more frequent wildfires. Meanwhile, warmer winters may cause the Cascades to lose 50% of their annual snowpack over the next 70 years. Mountain snow provides the Yakima River Basin with 75% of its water supply, making the snowpack a crucial reservoir for both nature and agriculture. Less winter snow also leads to drier and more fire-prone forests in the summer.

To encourage fire resilience, forest managers use tried-and-true tools like controlled burning and the selective felling of trees to thin out the forest. Both methods remove fuel and help return forests to historical conditions, but less is known about their impact on snowpack.

To address this knowledge gap, a team of researchers at the University of Washington and The Nature Conservancy (TNC) embarked on an ambitious, multiyear study of snowpack along Cle Elum Ridge, an area of the eastern Cascades in the headwaters of the Yakima River Basin. The group experimentally thinned the forest to varying degrees in a roughly 150-acre area. Then they measured the amount and duration of snowpack during the winter of 2023 and compared it to a previous winter before the forest treatment.

The results were encouraging: Forest thinning increased snowpack by 30% on north-facing slopes and by 16% on south-facing slopes. Thinning aided snowpack the most where it created a patchwork of gaps in the forest rather than a more even density; gaps of 4-16 meters in diameter seemed to be the “sweet spot” for snow.

The research points toward more refined forest management practices that can optimize for both wildfire resilience and snowpack.

The findings were published in Frontiers in Forest and Global Change.

“At its core, this research shows that reducing wildfire risk and protecting water resources don’t have to be competing goals,” said lead author Cassie Lumbrazo, a postdoctoral researcher at the University of Alaska who completed this work as a UW doctoral student in civil and environmental engineering. “That’s genuinely good news for a place facing both growing wildfire threats and increasing water vulnerability. So much of the climate conversation focuses on loss, which makes findings like this especially meaningful.”

A photo gallery shows the fieldwork: researchers launching a drone from a snowy clearing, strapping a time-lapse camera to a tree, setting up instruments on tripods, inspecting a snow-covered instrument and measuring the depth of a snow pit with a pole.

Predicting snowpack in forested areas, especially those at higher altitudes, hinges on understanding how much snow reaches the ground and how much lands in the forest canopy. Snow on the ground is more likely to stick around through the season, whereas snow in the trees may either melt or sublimate back into water vapor. In either case, it wouldn’t add to the reservoir of water that melts in the spring and summer.

“Trees intercept snow and so can reduce snowpack, but trees also shade snow and so can retain snowpack,” said the senior author, a UW professor of civil and environmental engineering. “The dominant effect depends on winter temperatures, and the Cascade crest near Cle Elum is right on the border where the effect flips from trees decreasing snow to trees saving snow.”

Earlier work found that natural gaps in the forests of the eastern Cascades accumulated more snow. This, combined with other research, gave the team reason to hope for a positive connection between forest thinning and snowpack, though it wasn’t a sure thing. Other studies have found that open areas elsewhere in the Western U.S. saw reduced snowpack.

Thus, it was time for a direct, and complex, study of managed forests.

Researchers picked Cle Elum Ridge for the work, where TNC’s forest managers were planning thinning treatments to improve forest health and wildfire resiliency. The orientation of the ridge allowed them to compare north- and south-facing slopes; southern slopes in the region see more sunshine and less snow retention on average. From October 2021 to September 2022, the researchers worked with TNC’s forest managers and local contract loggers to remove trees on both slopes in a gradient, from no thinning to extensive. The team also set up time-lapse cameras at several strategic points to measure snow depth over time.

Then, they waited for snow to fall.

By March 2023, the area was close to its peak snowpack, and the team returned with staff and equipment from the UW RAPID facility. The RAPID crew flew a specialized drone that generated a detailed 3D map of the study area using a laser-mapping technology called lidar.

By comparing the new 3D map and time-lapse imagery to lidar data captured before the forest treatment, the team was finally ready to calculate two things: the change to the forest structure, and its effect on the snowpack.

Three photorealistic 3D renderings of trees in a snowy forest.
Lidar renderings of three different areas of the forest studied by the team. Left: a dense, untreated forest stand. Center: a medium-density thinned stand with tree clumps and gaps. Right: a dense stand with a canopy gap. Photo: Cassie Lumbrazo and Karen Dedinsky

Across the whole study area, the team found that thinning helped the forest recover 12.3 acre-feet (or about four million gallons) of water in the form of snow per 100 acres on north-facing slopes, and 5.1 acre-feet (or about 1.5 million gallons) per 100 acres on south-facing slopes.
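Those parenthetical gallon figures follow from the standard conversion of 1 acre-foot to about 325,851 gallons, as a quick unit check:

```python
# Unit check on the article's figures (1 acre-foot ~ 325,851 US gallons).
ACRE_FOOT_GALLONS = 325_851

north = 12.3 * ACRE_FOOT_GALLONS   # acre-feet recovered per 100 acres
south = 5.1 * ACRE_FOOT_GALLONS

print(f"north-facing: {north / 1e6:.1f} million gallons per 100 acres")
print(f"south-facing: {south / 1e6:.1f} million gallons per 100 acres")
```

The north-facing figure comes out to 4.0 million gallons, matching the article; the south-facing figure is closer to 1.7 million gallons, which the article rounds to about 1.5 million.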

As expected, areas where the thinning opened gaps in the canopy were most effective at restoring snow storage that had been previously lost to environmental degradation and climate change. Gaps of 4-16 meters in diameter seemed to retain the most snow, though there were few gaps larger than 16 meters to evaluate.

One surprising result: The way forest managers thin forests doesn’t reliably create gaps. Forest managers map out their reductions using the density of trunks in an area, not canopies, as their primary measurement.

“Imagine a group of 100 people all holding umbrellas in the rain,” said a co-author, the director of the UW Climate Impacts Group. “They’re standing close enough together that their umbrellas overlap, so none of the rain hits the ground. If you remove 10 of the umbrellas randomly, you’d still have plenty of coverage overall. But if you remove 10 umbrellas that are right next to one another, you create a gap in the umbrella ‘canopy,’ and you get a 10% increase in the amount of rain that hits the ground.”

That realization adds nuance to the findings. It’s likely that forest thinning can benefit both wildfire and snowpack resilience at the same time, but only if managers keep canopy gaps in mind.

“One thing we all learned was that snow people and tree people speak different languages,” Lumbrazo said. “Different experts look at totally different variables to help them decide whether or not to cut down a single tree. So an important goal is to get everyone speaking the same language. And I think this paper is one step toward better communication.”

A short documentary from 2023 highlights the team’s fieldwork.

Overall, the results suggest practical changes to forest management practices in the eastern Cascades. For example, managers might consider more tree thinning on north-facing slopes, since snowpack gains may be greater there. With further research, these learnings may also extend to other regions in the Pacific Northwest.

The work could also aid collaboration between forest managers and hydrologists at a time when the region needs all the water it can get.

“As we lose snowpack, everything becomes really squeezed,” said a co-author, a senior aquatic ecologist at TNC who earned her doctorate in aquatic and fishery sciences at the UW. “We are currently in our third consecutive year of water restrictions in the Yakima River Basin, and are staring down one of the lowest snow years on record. However, our research shows that the treatments currently used for restoring fire-resilient forests are compatible with the forest structure needed for supporting water security. And in a world where climate change is reducing water supplies and increasing wildfire severity, we are pleased to report that the same forest treatments can support both goals.”

Co-authors include a former UW graduate student of civil and environmental engineering; a former UW undergraduate student of atmospheric and climate science; a data processing specialist at the UW RAPID facility; and the director of Forest Conservation and Management at The Nature Conservancy.

This research was funded by the Washington Department of Natural Resources, The Nature Conservancy and the National Science Foundation.

For more information, contact Lundquist at jdlund@uw.edu, Dickerson-Lange at dickers@uw.edu or Howe at emily.howe@tnc.org.

DopFone app can accurately track fetal heart rate using only a smartphone
/news/2026/02/26/dopfone-fetal-heart-rate-app/ (Thu, 26 Feb 2026)
DopFone uses an off-the-shelf smartphone's existing speaker and microphone to accurately estimate fetal heart rate. The phone mimics a Doppler ultrasound, emitting a tone and listening for the subtle variations in its echo caused by fetal heartbeats. A machine learning model then estimates the heart rate. Photo: Garg et al./Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies

Heart rate is an important sign of fetal health, yet few technologies exist to easily and inexpensively track fetal heart rates outside of doctors' offices. This can create risks for pregnancies in low-resource regions where doctors are far away or inaccessible.

A team led by University of Washington researchers has created DopFone, a system that uses an off-the-shelf smartphone's existing speaker and microphone to accurately estimate fetal heart rate. The phone mimics a Doppler ultrasound, emitting a tone and listening for the subtle variations in its echo caused by fetal heartbeats. A machine learning model then estimates the heart rate. In a clinical test with 23 pregnant women, DopFone estimated heart rate with an average error of 2 beats per minute, or bpm. The accepted clinical range is within 8 bpm.

The team published its findings Dec. 2 in the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies.

"Eventually DopFone could let people test fetal heart rate regularly, rather than relying on the intermittent tests at a doctor's office, or not getting tested at all," said lead author Garg, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. "Patients might then send this data to doctors so that they can better judge patients' health when they're not in a clinic."

Traditional Doppler ultrasounds, the clinical standard for fetal heart rate monitoring, work by sending high-frequency sound into a person's body and tracking how the echo changes in frequency. They're very accurate at measuring fetal heart rate but require costly equipment and a skilled technician to operate it.

To use DopFone, a user places the phone's microphone against their abdomen for one minute. The phone emits a subaudible 18 kilohertz tone. The team chose this low frequency because — unlike a Doppler's high frequencies, above 2,000 kilohertz — it sits within the range smartphone microphones can record while still traveling well through tissue. As the tone is reflected through the user's abdomen, the fetus's heartbeat creates small shifts in the sound.

A machine learning model then estimates the heart rate using the audio and the patient's demographic information.
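The core signal idea, a steady tone whose echo is faintly modulated by a periodic heartbeat, can be sketched with a toy envelope detector. This is not DopFone's pipeline (the published system hands features to a trained model along with demographics); the sample rate, modulation depth and band limits below are invented for the demo.

```python
# Toy illustration of recovering a heart rate from an 18 kHz probe tone
# whose echo is weakly amplitude-modulated by a beating heart.
# All parameters are invented for the demo, not DopFone's values.
import numpy as np

FS = 48_000        # audio sample rate, Hz
CARRIER = 18_000   # near-ultrasonic probe tone, Hz

def estimate_bpm(audio, fs=FS):
    """Rectify-and-average envelope detector, then pick the strongest
    rhythm in a plausible fetal heart-rate band (~48-300 bpm)."""
    dec = fs // 100                              # smooth down to ~100 Hz
    n = len(audio) // dec * dec
    env = np.abs(audio[:n]).reshape(-1, dec).mean(axis=1)
    env -= env.mean()                            # drop the DC component
    spec = np.abs(np.fft.rfft(env))
    freqs = np.fft.rfftfreq(len(env), d=dec / fs)
    band = (freqs > 0.8) & (freqs < 5.0)
    return 60.0 * freqs[band][np.argmax(spec[band])]

# Synthetic one-minute recording: echo pulsing at 140 bpm (~2.33 Hz).
t = np.arange(FS * 60) / FS
audio = (1 + 0.05 * np.sin(2 * np.pi * (140 / 60) * t)) \
        * np.sin(2 * np.pi * CARRIER * t)
print(round(estimate_bpm(audio)))   # 140
```

A real recording would also contain maternal heartbeat, breathing and noise, which is one reason the actual system relies on a machine learning model rather than a single spectral peak-pick.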

The team tested DopFone in UW Medicine's maternal-fetal medicine division on 23 pregnant patients between 19 and 39 weeks of pregnancy. On average its readings were within 2.1 bpm of the medical Doppler ultrasound. Its accuracy was slightly diminished for patients with high body mass indexes, though those readings were still within normal limits. Because an irregular fetal heartbeat is often an emergency, DopFone was not tested on patients with irregularities.

Next, the team plans to gather more data outside a lab to better train the model. Eventually they want to deploy it as a publicly available app.

"This women's health space is often overlooked," Garg said. "So I want to focus on accessible alternatives that can be available to people in low-resource areas, whether that's here in the U.S. or in other countries. Because health belongs to everyone."

Co-authors include , a UW graduate student in electrical and computer engineering; and , both OB/GYNs in UW Medicine's maternal-fetal medicine division; and , a UW assistant professor in the Allen School. , a UW professor in the Allen School and in electrical and computer engineering, and of the Georgia Institute of Technology, were senior authors. This research was funded by the UW Gift Fund.

For more information, contact Garg at pgarg70@uw.edu.

]]>
In a study, AI model OpenScholar synthesizes scientific research and cites sources as accurately as human experts /news/2026/02/04/in-a-study-ai-model-openscholar-synthesizes-scientific-research-and-cites-sources-as-accurately-as-human-experts/ Wed, 04 Feb 2026 16:02:30 +0000 /news/?p=90533 A screenshot of the OpenScholar demo.
A UW and Ai2 research team built OpenScholar, an open-source AI model designed specifically to synthesize current scientific research. In tests, OpenScholar cited sources as accurately as human experts, and 16 scientists preferred its responses to those written by subject experts 51% of the time. Above is the user interface for a free online demo of the model.

Keeping up with the latest research is vital for scientists, but given how many papers are published every year, that can prove difficult. Artificial intelligence systems show promise for quickly synthesizing seas of information, but they still tend to make things up, or "hallucinate."

For instance, when a team led by researchers at the University of Washington and the Allen Institute for AI, or Ai2, studied a recent OpenAI model, they found it fabricated 78-90% of its research citations. And general-purpose AI models like ChatGPT often can't access papers that were published after their training data was collected.

So the UW and Ai2 research team built OpenScholar, an open-source AI model designed specifically to synthesize current scientific research. The team also created the first large, multi-domain benchmark for evaluating how well models can synthesize and cite scientific research. In tests, OpenScholar cited sources as accurately as human experts, and 16 scientists preferred its responses to those written by subject experts 51% of the time.

The team published its findings Feb. 4 in Nature. The project's resources are publicly available and free to use.

"After we started this work, we put the demo online and quickly, we got a lot of queries, far more than we'd expected," said senior author Hannaneh Hajishirzi, a UW associate professor in the Paul G. Allen School of Computer Science & Engineering and senior director at Ai2. "When we started looking through the responses we realized our colleagues and other scientists were actively using OpenScholar. It really speaks to the need for this sort of open-source, transparent system that can synthesize research."

Try the demo.

Researchers trained the model and then created a set of 45 million scientific papers for OpenScholar to pull from to ground its answers in established research. They coupled this with a technique called "retrieval-augmented generation," which lets the model search for new sources, incorporate them and cite them after it's been trained.

"Early on we experimented with using an AI model with Google's search data, but we found it wasn't very good on its own," said lead author Akari Asai, a research scientist at Ai2 who completed this research as a UW doctoral student in the Allen School. "It might cite some research papers that weren't the most relevant, or cite just one paper, or pull from a blog post randomly. We realized we needed to ground this in scientific papers. We then made the system flexible so that it could incorporate emerging research through search results."
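The retrieve-then-cite loop Asai describes can be caricatured with a tiny keyword ranker. This is a toy sketch (a TF-IDF scorer over an invented three-paper corpus), not OpenScholar's trained retriever over its 45-million-paper collection; all document ids and texts here are made up.

```python
# Toy sketch of the retrieval step in a retrieval-augmented pipeline:
# rank papers against a query by TF-IDF cosine similarity, then hand
# the top hits to the generator as citable context.
import math
from collections import Counter

corpus = {
    "doe2024": "fetal heart rate monitoring with smartphone doppler audio",
    "roe2023": "retrieval augmented generation for scientific literature",
    "lee2022": "nanoscale lattice toughness in architected materials",
}

def tfidf_vectors(docs):
    """Compute a sparse TF-IDF vector for each document id."""
    df = Counter(w for text in docs.values() for w in set(text.split()))
    n = len(docs)
    return {doc_id: {w: c * math.log(n / df[w])
                     for w, c in Counter(text.split()).items()}
            for doc_id, text in docs.items()}

def top_citations(query, docs, k=2):
    """Return the k document ids most similar to the query."""
    vecs = tfidf_vectors(docs)
    qv = Counter(query.split())
    def score(doc_id):
        v = vecs[doc_id]
        dot = sum(qv[w] * v.get(w, 0.0) for w in qv)
        norm = math.sqrt(sum(x * x for x in v.values())) or 1.0
        return dot / norm
    return sorted(docs, key=score, reverse=True)[:k]

print(top_citations("smartphone doppler heart rate", corpus, k=1))
```

A production system would swap the TF-IDF scores for learned embeddings and pass the top passages, with their identifiers, into the language model's prompt so each generated claim can carry a citation.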

To test their system, the team created ScholarQABench, a benchmark for testing systems on scientific search. They gathered 3,000 queries and 250 long-form answers written by experts in computer science, physics, biomedicine and neuroscience.

"AI is getting better and better at real-world tasks," Hajishirzi said. "But the big question ultimately is whether we can trust that its answers are correct."

The team compared OpenScholar against other state-of-the-art AI models, such as OpenAI's GPT-4o and two models from Meta. ScholarQABench automatically evaluated AI models' answers on metrics such as their accuracy, writing quality and relevance.

OpenScholar outperformed all the systems it was tested against. The team had 16 scientists review answers from the models and compare them with human-written responses. The scientists preferred OpenScholar's answers to human answers 51% of the time. When the team combined OpenScholar's citation methods and pipelines with GPT-4o (a much bigger model), the scientists preferred the AI-written answers to human answers 70% of the time. They picked answers from GPT-4o on its own only 32% of the time.

"Scientists see so many papers coming out every day that it's impossible to keep up," Asai said. "But the existing AI systems weren't designed for scientists' specific needs. We've already seen a lot of scientists using OpenScholar, and because it's open-source, others are building on this research and already improving on our results. We're working on a follow-up model, which builds on OpenScholar's findings and performs multi-step search and information gathering to produce more comprehensive responses."

Other co-authors include , , , all UW doctoral students in the Allen School; , a UW professor emeritus in the Allen School and general manager and chief scientist at Ai2; , a UW postdoc in the Allen School and postdoc at Ai2; , a UW professor in the Allen School; , a UW assistant professor in the Allen School; Amanpreet Singh, Joseph Chee Chang, Kyle Lo, Luca Soldaini, Sergey Feldman, Mike D'Arcy, David Wadden, Matt Latzke, Jenna Sparks and Jena D. Hwang of Ai2; Wen-tau Yih of Meta; Minyang Tian, Shengyan Liu, Hao Tong and Bohao Wu of the University of Illinois Urbana-Champaign; Pan Ji of the University of North Carolina; Yanyu Xiong of Stanford University; and Graham Neubig of Carnegie Mellon University.

For more information, contact Asai at akaria@allenai.org and Hajishirzi at hannaneh@cs.washington.edu.

]]>
Q&A: UW researchers create a smart glove with its own sense of touch /news/2026/01/27/smart-glove-electronic-touch-pressure-sensor-engineeering-soft-robotics/ Tue, 27 Jan 2026 21:19:51 +0000 /news/?p=90498 Two pieces of an electronic glove lie on a table.
Inside the OpenTouch Glove (right) is a grid of wires (left) that allows the glove to sense the location and degree of any pressure applied to it. Photo: University of Washington

Yiyue Luo's lab at the University of Washington is full of machinery that's oddly cozy. Here, soft and pliable sensors are sewn, knit and glued directly into clothing to give everyday garments new capabilities.

One of the lab's newest curiosities is a nondescript gray work glove embedded with sensors that enable it to "feel" on its own. An array of small wires hidden inside the glove reports the location and degree of pressure anywhere along its surface. When in use, the signals from the glove inform a real-time "heat map" of pressure that could one day help physical therapy patients track their progress, teach robots to grasp objects, and more.

The OpenTouch Glove project, as it's officially known, is led by UW electrical and computer engineering doctoral student Devin Murphy as part of a collaboration with researchers at MIT. UW News caught up with Murphy to learn more about the glove and its potential uses.

What inspired you to create this glove?

Devin Murphy: Our hands are arguably our greatest tools as humans. We interact with the world through our hands in so many different ways. But the nature of how we grasp and manipulate things in our environment is super nuanced and complex, and it's hard to capture. We have very mature electronics that record sight and sound — think of the cameras and microphones in your smartphone. But there aren't many electronic devices that record our other senses — like touch. That's what I've been working to remedy with the OpenTouch Glove.

How does the glove work? What are its capabilities?

DM: There are two flexible circuit boards inside each glove that form a grid of wires across the gripping surface of the glove. We can measure pressure at any point in that mesh where two wires meet. The circuit boards connect to a little box of electronics at the user's wrist, which processes the signals and sends them wirelessly to a laptop.

We can then generate a "heat map" image showing where force is being applied on the hand, where the hand is applying force to different objects and how much force the hand is applying.

This kind of data gives us extra nuance that a camera can't capture. For example, if your hand is in a bag or behind an object while it's grasping things, a camera wouldn't be able to tell what your hand is doing, whereas this glove can follow along.
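Murphy's description corresponds to a standard crossbar readout: drive one row wire at a time, sample every column wire, and treat each crossing as one pixel of the pressure map. Below is a minimal sketch of what laptop-side software might do with one scanned frame; the grid size, threshold and fake press are invented for illustration, not OpenTouch values.

```python
# Toy crossbar frame summary (invented data, not OpenTouch firmware):
# each entry of `frame` is the pressure read at one row/column crossing.
import numpy as np

def contact_summary(frame, threshold=0.05):
    """Return total force and the pressure centroid (row, col)."""
    active = np.where(frame > threshold, frame, 0.0)
    total = float(active.sum())
    if total == 0.0:
        return 0.0, None                      # no contact detected
    rows, cols = np.indices(frame.shape)
    centroid = (float((rows * active).sum() / total),
                float((cols * active).sum() / total))
    return total, centroid

# Fake 4x6 scan: a fingertip press spread over crossings (1,2) and (1,3).
frame = np.zeros((4, 6))
frame[1, 2] = 1.0
frame[1, 3] = 0.5
total, (r, c) = contact_summary(frame)
print(total, r, round(c, 2))   # 1.5 1.0 2.33
```

Tracking the centroid and total over successive frames is one simple way to turn raw scans into the grip-strength-over-time feedback described below for physical therapy.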

What are some potential applications for the glove?

DM: I'm particularly excited about how this technology might help patients recovering from an injury. Physical therapists have patients perform a variety of tasks to regain mobility in their hands — if we can measure how much force people apply during this process, we can provide them with concrete feedback. The patient and therapist can both track progress by monitoring the patient's grip strength over time.

We're also seeing lots of new companies invest in physical intelligence for robotics — basically recording how robots interact with the physical world. If we can record human hand grip signals, we might be able to teach robotic hands how to mimic human behavior.

One other interesting application is in augmented reality or virtual reality. If we replaced traditional controllers with these gloves, it could give users a more natural way to interact with virtual objects and scenery — though we'd need some additional technology for users to feel pressure when gripping virtual things.

How can other researchers access this technology?

DM: It's really important to us that the glove is accessible to other researchers and anyone else who might want to use it for their own applications. You can order all of the components of the glove directly from commercial manufacturers, and we have released all of the manufacturing files and instructions for putting the glove together yourself.

We've also shown some demos of the glove "in the wild" to showcase the different kinds of data it can collect, and we're planning to release an open-source data set collected with the glove in partnership with researchers at MIT.

I'm really excited about developing new wearable technologies that allow people to record less popular sensing modalities like touch. I want to figure out how we can capture the nuances of touch-based interactions, so that ultimately we can get better insights into our daily lives.

For more information, contact Murphy at devinmur@uw.edu.

]]>
Q&A: A UW materials lab probes the mysteries of toughness at the nano scale /news/2026/01/21/lucas-meza-nanoscale-architecture-nanomaterials-mechanical-engineering/ Wed, 21 Jan 2026 17:13:20 +0000 /news/?p=90387
A splitscreen image showing a black and white webbed material on the left and a bubbled, foamy black and white material on the right.
Researchers in the Meza Research Group at the University of Washington draw inspiration from natural structures to develop new materials. On the left is a scanning electron microscope (SEM) image of naturally occurring spider silk. On the right is an SEM image of an engineered plastic material with a similar structure. The plastic is foamed using tiny carbon dioxide bubbles to make it lighter and tougher. Photo: Haynl et al./Nature Scientific Reports (left) and Dwivedi et al./Journal of the Mechanics and Physics of Solids (right).

UPDATE (Feb. 17, 2026): This story has been updated to note Meza’s work with the NSF I-Corps program and CoMotion Innovation Gap Fund.

Biology is full of architecture. Materials like wood, crab shells and bone all contain microscopic structures such as layers, lattices, cells and interwoven fibers. Those structures give natural materials an ideal combination of lightness and toughness, and they've inspired engineers to build artificial materials with similar properties. But how those tiny architectures lead to such tough materials is something of a mystery.

In 2019, Lucas Meza, assistant professor of mechanical engineering, set up the Meza Research Group at the University of Washington to tease out the mechanical secrets of structures that are as small as 100 nanometers, which is about the size of a virus. He arrived with an ambitious plan to build a new generation of nanomaterials, but soon discovered that the field was missing a fundamental understanding of toughness at tiny scales.

"We had to go back to basics," Meza said.

In the years since, Meza and his team have flipped the script on nanomaterial toughness. They're applying what they've learned to new kinds of bespoke materials, though along the way they're still surprised by tiny structures behaving in ways they theoretically shouldn't.

Meza spoke with UW News about his strange and surprising journey into the nano realm.

What questions did you establish your lab to tackle?

Lucas Meza: Very broadly, we're trying to design better materials, but not by introducing new material chemistries. Instead, we use architecture. This is something humans have done throughout history — think of woven textiles and fabrics, or straw-reinforced mud bricks. These are "architected materials," where the structure of materials allows us to control useful properties like strength, toughness and flexibility.

The thing that I was particularly interested in was introducing architecture at the nanoscale. What if, instead of building a wall with bricks, we could use nanoplatelets? Or instead of making fabrics with yarn, we could use nanofibers? How would those properties change?

Engineers have found that nanomaterials are stronger, more flaw-resistant and more deformable. The challenge is: How do you actually do something with them? We need to build them into large-scale materials in a way that preserves their unique nanoscale properties.

What material properties are you most interested in?

LM: We're using architecture to tinker with a few interrelated properties. The first is a material's strength, which is how much stress it can take before it permanently deforms. The second is ductility, which is how much a material can stretch before it breaks. Those two features sort of combine to determine a material's toughness, which is the total amount of energy you have to put into a material to break it.

To give a couple of opposing examples: A ceramic plate is strong, meaning it can take a lot of stress, but it has very low ductility, meaning it barely deforms before breaking. So overall, it's not a very tough material. Conversely, a rubber band is not strong at all — you can bend and stretch it with very little stress. But it's extremely ductile — it can stretch to many times its original dimensions without snapping. So as a result, rubber is very tough.

Credit: University of Washington (left) and Envato (right).
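Meza's plate-versus-rubber-band contrast can be made quantitative, since toughness is the area under the stress-strain curve up to the breaking point. The two curves below are caricatures with made-up numbers, not measured data for any real ceramic or rubber.

```python
# Back-of-envelope illustration of the definitions above (made-up
# stress-strain curves): toughness = area under the curve to failure.
import numpy as np

def toughness(strain, stress):
    """Energy absorbed per unit volume (J/m^3), trapezoidal integral."""
    return float(np.sum(0.5 * (stress[1:] + stress[:-1]) * np.diff(strain)))

# "Ceramic-like": very strong and stiff, but fails at 0.2% strain.
eps_c = np.linspace(0, 0.002, 200)
sig_c = 200e9 * eps_c                    # linear elastic, 200 GPa modulus

# "Rubber-like": weak, but stretches to 5x its length before breaking.
eps_r = np.linspace(0, 5.0, 200)
sig_r = 1e6 * np.tanh(eps_r)             # soft, ~1 MPa plateau

print(f"{toughness(eps_c, sig_c):.1e}")  # 4.0e+05
print(f"{toughness(eps_r, sig_r):.1e}")  # 4.3e+06  (rubber is tougher)
```

Despite being a thousand times weaker in peak stress, the rubber-like curve absorbs roughly ten times more energy before failure, which is exactly the ceramic-versus-rubber ordering described above.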

Toughness is a particularly interesting property to study because there's no limit on how tough a material can be. There are very hard limits on how strong and how stiff a material can be, and you can use architecture to optimize them, but you can't exceed the properties of the base material. On the other hand, you can use architecture to improve the overall toughness of a material.

Nature has already created a lot of really interesting micro- and nano-structures. Every natural material has to be porous to transport nutrients, and on top of that we see things like lattices in some bone and in sea sponges; shells all have layered architectures; wood and bone are fiber composites; and all of this happens at the micro- and nanoscale.

There had to be a reason that nature was making these architectural motifs at the micro- and nanoscale, and I had a strong hunch that it had to do with toughness.

What has your lab learned about toughness at the small scale?

LM: Initially, we learned a surprising amount about what we didn't know. My thought in getting into this work was that people know enough about fracture mechanics — how things break and why — so we can just dive into making really complicated architectures and studying their toughness, like ones made by my former doctoral student. But we realized the scientific community has some big gaps in its understanding of fracture toughness. So instead, we had to go simple — basically we pulled and pushed and broke a lot of small things to understand what gives a material ductility and toughness.

We learned that all material behavior centers around something called a "plastic zone size." Basically, when you pull on a part that has a crack, a little ball of energy builds up right at the tip of that crack. That energy ball grows as you add more stress, and at a certain point it shoots through the sample and causes a break. The size of the ball at its breaking point is the material's plastic zone size, and it's different for every material.

We realized that what makes a material ductile or not is its size relative to its plastic zone. If a material is smaller than its plastic zone size, that ball of energy can't grow big enough to cause the crack to grow, so instead it spreads outward and the material bends.

The four material samples in this video are all the same size, but structural differences at the nanoscale produce different levels of ductility. In each example, the cyan color represents the sample's plastic zone size. In less ductile samples, the cyan-colored area remains small and the material snaps, whereas in more ductile samples, the cyan area spreads out and the material stretches. Credit: Dwivedi et al./Journal of the Mechanics and Physics of Solids

This is the key for how to use architecture to cheat and get more ductility out of a material. If you take a brittle material and make a nanoscale lattice or foam out of it, it becomes tougher. The new, tougher "architected material" can also have a larger plastic zone size, sometimes as much as 100 times larger, meaning it is likely to be ductile as well. This is why things like fabrics and meshes can be really hard to tear.

How are you applying what you鈥檙e learning to real-world materials?

LM: We're building lots of our material architectures painstakingly at the small scale, using shared fabrication facilities at the UW. That "bottom-up" approach — building things one nanofeature at a time — gives us lots of control over the building blocks we're playing with, but it's a real challenge to scale.

The "top-down" approach, where you let physics and kinetics just self-assemble things for you, is much easier. One example is "solid-state foaming," a technique my colleague has been working on for decades. Basically, you take a thermoplastic material — something that melts when you heat it up — throw it in a chamber with high-pressure carbon dioxide so it saturates the sample, then heat it up so that dissolved gas forms tiny bubbles in the material. With this process we have less control over the precise architecture — it's a random foam — but by controlling the amount of dissolved gas we can easily control the size of the bubbles. Those materials turned out to be super tough! My doctoral student Kush Dwivedi has shown they could even be tougher than the material they were made from. This goes against everything we knew about normal foam fracture processes.

A black and white image showing a dense, webbed material.

A plastic nanofoam material created by Kush Dwivedi, a doctoral student in Meza's lab, seen at 2,500x, 12,000x and 35,000x magnifications. Credit: Dwivedi et al./Journal of the Mechanics and Physics of Solids.

I'm currently pursuing an earlier-stage commercialization effort to use tiny foams as a filtration material for biomedical applications. We can make nanoporous filter materials — think of the reverse osmosis system that might be under your sink — but we can do it without using any of the harsh chemical processes that are currently used. We've been able to explore this avenue thanks to our participation in the NSF I-Corps program, which then enabled us to get a CoMotion Innovation Gap Fund award.

I also recently got an NSF CAREER grant to study fracture in architected materials, and we're exploring ways to make tougher sustainable and biodegradable materials. Think of the last time you used a biodegradable fork that broke off in your food. Materials like wood are actually great alternatives for this, but we're trying to figure out how to do it without cutting down a tree or harvesting bamboo.

For more information, contact Meza at lmeza@uw.edu.

]]>
Video: Drivers struggle to multitask when using dashboard touch screens, study finds /news/2025/12/16/video-drivers-struggle-to-multitask-when-using-dashboard-touch-screens-study-finds/ Tue, 16 Dec 2025 17:00:09 +0000 /news/?p=90099

Once the domain of buttons and knobs, car dashboards are increasingly home to large touch screens. While that makes following a mapping app easier, it also means drivers can't feel their way to a control; they have to look. But how does that visual component affect driving?

New research from the University of Washington and Toyota Research Institute, or TRI, explores how drivers balance driving and using touch screens while distracted. In the study, participants drove in a vehicle simulator, interacted with a touch screen and completed memory tests that mimic the mental effort demanded by traffic conditions and other distractions. The team found that when people multitasked, their driving and touch screen use both suffered. The car drifted more in the lane while people used touch screens, and their speed and accuracy with the screen declined when driving. The effects increased further when they added the memory task.

These results could help auto manufacturers design safer, more responsive touch screens and in-car interfaces.

The team presented its findings Sept. 30 at the ACM Symposium on User Interface Software and Technology in Busan, Korea.

"We all know that using a phone while driving is dangerous," said co-senior author James Fogarty, a UW professor in the Paul G. Allen School of Computer Science & Engineering. "But what about the car's touch screen? We wanted to understand that interaction so we can design interfaces specifically for drivers."

As the study's 16 participants drove the simulator, sensors tracked their gaze, finger movements, pupil diameter and electrodermal activity. The last two are common ways to measure mental effort, or "cognitive load." For instance, pupils tend to grow when people are concentrating.


While driving, participants had to touch specific targets on a 12-inch touch screen, similar to how they would interact with apps and widgets. They did this while completing three levels of an "N-back task," a memory test in which the participants hear a series of numbers, 2.5 seconds apart, and have to repeat specific digits.

The participants鈥 performance changed significantly under different conditions:

  • When interacting with the touch screen, participants drifted side to side in their lane 42% more often. Increasing cognitive load had no effect on the results.
  • Touch screen accuracy and speed decreased 58% when driving, then another 17% under high cognitive load.
  • Each glance at the touch screen was 26.3% shorter under high cognitive load.
  • A "hand-before-eye" phenomenon, in which drivers reached for a control before looking at it, increased from 63% to 71% as memory tasks were introduced.
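One way to read the touch-accuracy figures above: if the two reported declines compound multiplicatively (an assumption on my part; the study may report each against its own baseline), a driver under high cognitive load retains only about a third of their parked-car touch performance.

```python
# Hypothetical compounding of the reported declines; an assumption
# for illustration, not a calculation taken from the paper.
base = 1.00                                # touch performance while parked
while_driving = base * (1 - 0.58)          # 58% drop when driving
with_load = while_driving * (1 - 0.17)     # another 17% drop under load
print(round(while_driving, 2), round(with_load, 2))   # 0.42 0.35
```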

The team also found that increasing the size of the target areas participants were trying to touch did not improve their performance.

"If people struggle with accuracy on a screen, usually you want to make bigger buttons," said , a UW doctoral student in the Allen School. "But in this case, since people move their hand to the screen before touching, the thing that takes time is the visual search."

Based on these findings, the researchers suggest future in-car touch screen systems might use simple sensors in the car — eye tracking, or touch sensors on the steering wheel — to monitor drivers' attention and cognitive load. Using those readings, the car's system might adjust the touch screen's interface to make important controls more prominent and safer to access.

"Touch screens are widespread today in automobile dashboards, so it is vital to understand how interacting with touch screens affects drivers and driving," said co-senior author Jacob O. Wobbrock, a UW professor in the Information School. "Our research is some of the first that scientifically examines this issue, suggesting ways for making these interfaces safer and more effective."

, a UW doctoral student in the Information School, is co-lead author. Other co-authors include , , and of TRI. This research was funded in part by TRI.

For more information, contact Wobbrock at wobbrock@uw.edu and Fogarty at jfogarty@cs.washington.edu.

]]>