Technology – UW News

Tiny cameras in earbuds let users talk with AI about what they see
/news/2026/04/14/cameras-in-wireless-earbuds-vuebuds/ (published Tue, 14 Apr 2026 14:38:00 +0000)

Two black earbuds: one with the casing removed exposing a computer chip and tiny camera.
UW researchers developed a system called VueBuds that uses tiny cameras in off-the-shelf wireless earbuds to allow users to talk with an AI model about the scene in front of them. Here, the altered headphones are shown with the camera inserted. Photo: Kim et al./CHI '26

University of Washington researchers developed the first system that incorporates tiny cameras in off-the-shelf wireless earbuds to allow users to talk with an AI model about the scene in front of them. For instance, a user might turn to a Korean food package and say, "Hey Vue, translate this for me." They'd then hear an AI voice say, "The visible text translates to 'Cold Noodles' in English."

The prototype system, called VueBuds, takes low-resolution, black-and-white images, which it transmits over Bluetooth to a phone or other nearby device. A small artificial intelligence model on the device then answers questions about the images within around a second. For privacy, all of the processing happens on the device, a small light turns on when the system is recording, and users can immediately delete images.

The team will present the research April 14 at the Association for Computing Machinery Conference on Human Factors in Computing Systems in Barcelona.

"We haven't seen most people adopt smart glasses or VR headsets, in part because a lot of people don't like wearing glasses, and they often come with privacy risks, such as recording high-resolution video and processing it in the cloud," said the senior author, a UW professor in the Paul G. Allen School of Computer Science & Engineering. "But almost everyone wears earbuds already, so we wanted to see if we could put visual intelligence into tiny, low-power earbuds, and also address privacy concerns in the process."

Cameras use far more power than the microphones already in earbuds, so using the same sort of high-resolution cameras as those in smart glasses wouldn't work. Bluetooth also can't stream large amounts of data continuously, so the system can't run continuous video.

The team found that using a low-power camera, roughly the size of a grain of rice, to shoot low-resolution, black-and-white still images limited battery drain and allowed for Bluetooth transmission while preserving performance.

There was also the matter of placement.

"One big question we had was: Will your face obscure the view too much? Can earbud cameras capture the user's view of the world reliably?" said lead author Kim, who completed this work as a UW doctoral student in the Allen School.

The team found that angling each camera 5-10 degrees outward provides a 98-108 degree field of view. While this creates a small blind spot when objects are held closer than 20 centimeters from the user, people rarely hold things that close to examine them 鈥 making it a non-issue for typical interactions.

Researchers also discovered that while the vision language model was largely able to make sense of the images from each earbud, having to process images from both earbuds slowed it down. So they had the system "stitch" the two images into one, identifying overlapping imagery and combining it. This allows the system to respond in one second, quick enough to feel like real time for users, rather than the two seconds it takes with separate images.
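The stitching step can be illustrated with a toy example. The sketch below finds the column overlap at which two small grayscale "images" (lists of pixel rows) agree best and merges them. The overlap search and merge logic here are simplified assumptions for illustration, not the actual VueBuds algorithm.

```python
# Toy sketch of stitching two overlapping grayscale images by
# finding the column offset where their shared region matches best.
# NOT the VueBuds pipeline -- just the core idea.

def overlap_error(left, right, overlap):
    """Sum of absolute pixel differences in a candidate overlap region."""
    h = len(left)
    w_left = len(left[0])
    err = 0
    for r in range(h):
        for c in range(overlap):
            err += abs(left[r][w_left - overlap + c] - right[r][c])
    return err

def stitch(left, right, max_overlap):
    """Pick the overlap width with the lowest error and merge the images."""
    best = min(range(1, max_overlap + 1),
               key=lambda o: overlap_error(left, right, o))
    merged = [left[r] + right[r][best:] for r in range(len(left))]
    return merged, best

# Two 2x4 "images": the last two columns of `left` match the
# first two columns of `right`.
left  = [[10, 20, 30, 40], [50, 60, 70, 80]]
right = [[30, 40, 90, 95], [70, 80, 85, 99]]
merged, overlap = stitch(left, right, max_overlap=3)
print(overlap)    # 2
print(merged[0])  # [10, 20, 30, 40, 90, 95]
```

Merging into a single frame this way means the model runs once over one image instead of twice over two, which is the speedup the team describes.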

The team then had 74 participants compare recorded outputs from VueBuds with outputs from Ray-Ban Meta Glasses in a series of tests. Despite VueBuds using low-resolution images with greater privacy controls and the Ray-Bans taking high-resolution images processed in the cloud, the two systems performed equivalently. Participants preferred VueBuds' translations, while the Ray-Bans did better at counting objects.

Sixteen participants also wore VueBuds and tested the system's ability to translate and answer basic questions about objects. VueBuds achieved 83-84% accuracy when translating or identifying objects and 93% when identifying the author and title of a book.

This study was designed to gauge the feasibility of integrating cameras in wireless earbuds. Since the system only takes grayscale images, it can't answer questions that involve color in the scene.

The team wants to add color to the system (color cameras require more power) and to train specialized AI models for specific use cases, such as translation.

"This study lets us glimpse what's possible just using a general-purpose language model and our wireless earbuds with cameras," Kim said. "But we'd like to study the system more rigorously for applications like reading a book, for people who have low vision or are blind, for instance, or translating text for travelers."

Co-authors include a UW master's student in the Allen School, as well as several UW students in electrical and computer engineering.

For more information, contact vuebuds@cs.washington.edu.

At quantum testbed lab, researchers across the UW probe 'spooky' mysteries of quantum phenomena
/news/2026/04/13/qt3-quantum-computing-testbed-lab-dilution-fridge/ (published Mon, 13 Apr 2026 23:09:13 +0000)

Three people stand next to a complex metal tube-shaped machine
Max Parsons (left), assistant professor of electrical and computer engineering, works with undergraduate staff members Reynel Cariaga (center) and Jesus Garcia (right) at the QT3 lab. The device in the foreground is a scanning tunneling microscope that can image individual atoms within a material by scanning an extremely fine needle, just one atom thick at the tip, across the sample. Photo: Erhong Gao/University of Washington

Even on a campus like the University of Washington's, home to particle accelerators, wave tanks and countless other bespoke pieces of equipment, the machinery in the quantum testbed lab stands out. Take the dilution fridge, a large, white, cylindrical device that can cool a small chamber to one hundredth of a kelvin above absolute zero, the coldest possible temperature in the universe.

"This is the coldest fridge money can buy," said Max Parsons, a UW assistant professor of electrical and computer engineering and the former director of the lab, which goes by the nickname QT3. "When it's running, the chamber inside this device is about 100 times colder than outer space. At that temperature, it's much easier to study and manipulate a material's quantum properties."

The lab also houses a photon qubit tabletop lab: a nondescript set of boxes, lasers and lenses that can demonstrate the "spooky" phenomenon (a term scientists actually use) known as quantum entanglement, where two particles appear to communicate instantaneously with each other despite being physically apart.

Or there's the lab's latest acquisition, the scanning tunneling microscope, which can image individual atoms within a solid material, allowing researchers to study the structure of materials at the smallest scales.

An interdisciplinary group of researchers has spent three years marshalling resources and expertise to create QT3, and now the lab is opening its doors as a one-stop resource for quantum researchers and educators at the UW.

"The idea of this lab is to improve access to quantum hardware," Parsons said. "It's rather hard to acquire equipment like this. And there are a lot of researchers who may have good ideas that they want to test, but don't have the resources yet for their own equipment. So we're inviting researchers, initially from across campus, but also from other universities and from industry, to come in and test their ideas. This can be a hub for quantum experts to share their ideas and collaborate."

The lab also boasts hardware that can demonstrate known quantum principles and techniques, making it useful for students in quantum fields. In addition to the entanglement device, Parsons' students developed a machine that can suspend charged particles (in this case, tiny grains of pollen) in midair using electric fields. Researchers use the same technique to trap single atoms and manipulate their quantum properties, making the lab's ion-trapping machine good practice for more complex work.

Two tiny dots hover back and forth in a tube
The QT3 facility's ion trapping lab gives students a chance to practice techniques used in quantum computing research. Here, students have suspended two tiny grains of pollen (the red dots hovering back and forth) in midair using electric fields. Photo: Robert Thomas

Some students even work at the lab through an undergraduate staffing program and have helped install instrumentation, write code to power equipment and build parts for custom microscopes. The program provides yet another avenue for students to get hands-on experience with unusual machinery and techniques.

"Quantum mechanics is inherently counterintuitive, and that makes it a powerful teaching tool," Parsons said. "In the QT3 lab, students will encounter systems where their everyday intuition breaks down, and they must rely on careful reasoning and experimentation instead. They learn how to debug when results don't match expectations, how to test simple cases and how to build understanding about hardware step by step."

The cosmically cold dilution fridge remains something of a centerpiece, even as the lab fills up with specialized equipment. The extreme environment within the device strips heat, light and other stray energy away from materials, allowing researchers to observe the peculiar quantum properties that remain. One such property is superposition, or the ability of a particle like an electron to maintain multiple mutually exclusive properties at the same time. Scientists use superposition to create a powerful, tiny piece of technology: a quantum bit, or qubit.

"Traditional computers use bits, which can only be one or zero. A qubit, on the other hand, we can make one plus zero," Parsons said. "It's both at the same time, and only when we measure it do we find out which one it is. We can use this unusual property to build a new class of computers that excel at tasks like communications and encryption."
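Parsons' "one plus zero" description can be made concrete with a toy simulation: a qubit in an equal superposition has amplitude 1/√2 on each of |0⟩ and |1⟩, and a measurement returns each outcome with probability equal to the squared amplitude. This is a generic textbook illustration, not code from the QT3 lab.

```python
# Toy single-qubit measurement: outcomes 0 and 1 occur with
# probabilities given by the squared amplitudes (the Born rule).
import math
import random

def measure(amp0, amp1, rng):
    """Collapse the qubit: return 0 or 1 with Born-rule probabilities."""
    p0 = abs(amp0) ** 2
    return 0 if rng.random() < p0 else 1

# Equal superposition: amplitude 1/sqrt(2) on each basis state.
amp0 = amp1 = 1 / math.sqrt(2)
rng = random.Random(42)  # seeded for reproducibility
outcomes = [measure(amp0, amp1, rng) for _ in range(10_000)]
frac_ones = sum(outcomes) / len(outcomes)
print(round(frac_ones, 2))  # close to 0.5
```

Until the measurement happens, the state carries both amplitudes at once; only the measurement picks one, which is the counterintuitive behavior the lab's hardware lets students see directly.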

QT3 is part of a collaborative effort to solidify UW as a leader in quantum research and applications. Most of the lab hardware was funded by a congressional earmark championed by Senator Maria Cantwell's office. Departmental funding from across the College of Engineering and the College of Arts and Sciences helped rehab the lab space. The National Science Foundation provided seed funding for the instructional lab equipment.

a repeating hexagonal pattern of small golden blobs
An image captured by the QT3 lab鈥檚 scanning tunneling microscope reveals a lattice of individual atoms in a sample of silicon. Photo: Rajiv Giridharagopal

The UW has also spent the past decade investing heavily in faculty with quantum expertise.

"Very few places have expertise across the full quantum stack, from materials up to algorithms," said the lab's founder, a UW professor of physics. "The UW has quantum faculty in electrical and mechanical engineering, physics, computer science, materials science and chemistry. Our faculty work on superconducting qubits, spin defects, photons, trapped ions, neutral atoms and topological qubits. Our advantage is the breadth of our investment."

The lab is now available to researchers and students across the UW, and private companies are encouraged to reach out about partnering. Parsons has already used the lab to teach a graduate-level class in electrical and computer engineering for students who included employees from Boeing, Microsoft and the quantum computing company IonQ. The lab is hiring a full-time manager to maintain the equipment and help users make the most of the facility.

"Here in academia, we can improve the building blocks for applied technologies like quantum computing, and then transfer those learnings to industry for further scaling," Parsons said.

For more information, contact Parsons at mfpars@uw.edu.

New marine energy tech is put to the test at Harris Hydraulics Lab
/news/2026/03/06/marine-energy-turbines-harris-hydraulics-uw-pnnl/ (published Fri, 06 Mar 2026 17:29:14 +0000)

At the University of Washington's Harris Hydraulics Lab, an odd scene plays out. Over and over again, researchers from the UW and the Pacific Northwest National Laboratory (PNNL) pass a small rubber model of a marine animal through a large tank filled with flowing water and fitted with a spinning turbine. On some runs, the model bonks against the turbine blades; on others, it receives a glancing blow or sails past undisturbed. When bonks or nicks occur, a small collision sensor on one of the turbine's blades detects the impacts and plots the interactions in a computer program.

The researchers are repeatedly simulating something that they hope will rarely happen in the wild: a collision between an underwater turbine and marine wildlife, like a seabird, seal, fish or whale, or submerged debris like logs.

"We want to make sure we're minimizing the chances of a collision in the first place," said Aidan Hunt, a senior research engineer in mechanical engineering at the UW and member of the Pacific Marine Energy Center (PMEC). "But if a collision were to occur, we want to be able to detect it, and potentially avoid it, in real time. The available evidence suggests that collisions are rare, but we're taking a 'trust-but-verify' approach."

Marine energy, power harvested from tides, waves and currents, has enormous potential as a clean, renewable resource. But more information is needed about how large, commercial installations of underwater turbines or power-generating buoys could affect marine wildlife, whether through increased noise in the environment, habitat change or direct interactions with equipment.

The marine collision experiments are part of the Triton Initiative, a collection of projects led by PNNL to study the environmental impact of marine energy.

The work at Harris Hydraulics follows a study by PNNL and the UW Applied Physics Lab using a four-foot-tall prototype turbine installed at the entrance to Sequim Bay. In that study, researchers trained an underwater camera on the turbine for 109 days and then catalogued every instance of an animal approaching or interacting with the turbine. The camera captured more than 1,000 instances of fish, birds and seals approaching the turbine blades. There were only four collisions, and all involved small fish.

"This study was a first step, but a promising one," said one co-author, a research scientist at the UW Applied Physics Lab. "We didn't see any endangered species in our study, and the risk of collision for seals and sea birds seemed to be quite low. We're excited to get back out there with the camera and learn even more."

The Sequim Bay experiment generated hours of valuable data, but that degree of intense monitoring may not be practical for large commercial installations in the future. Cheaper impact sensors, like the ones logging bath-toy impacts at Harris Hydraulics, could be a solution, researchers say.

The project is funded by the U.S. Department of Energy's Hydropower & Hydrokinetics Office, through the Pacific Northwest National Laboratory's Triton Initiative and the TEAMER program.

For more information, contact Hunt at ahunt94@uw.edu or Emma Cotter at emma.cotter@pnnl.gov.

DopFone app can accurately track fetal heart rate using only a smartphone
/news/2026/02/26/dopfone-fetal-heart-rate-app/ (published Thu, 26 Feb 2026 16:58:23 +0000)
DopFone uses an off-the-shelf smartphone's existing speaker and microphone to accurately estimate fetal heart rate. The phone mimics a Doppler ultrasound, emitting a tone and listening for the subtle variations in its echo caused by fetal heartbeats. A machine learning model then estimates the heart rate. Photo: Garg et al./Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies

Heart rate is an important sign of fetal health, yet few technologies exist to easily and inexpensively track fetal heart rates outside of doctors' offices. This can create risks for pregnancies in low-resource regions where doctors are far away or inaccessible.

A team led by University of Washington researchers has created DopFone, a system that uses an off-the-shelf smartphone's existing speaker and microphone to accurately estimate fetal heart rate. The phone mimics a Doppler ultrasound, emitting a tone and listening for the subtle variations in its echo caused by fetal heartbeats. A machine learning model then estimates the heart rate. In a clinical test with 23 pregnant women, DopFone estimated heart rate with an average error of 2 beats per minute, or bpm. The accepted clinical range is within 8 bpm.

The team published its findings Dec. 2 in the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies.

"Eventually DopFone could let people test fetal heart rate regularly, rather than relying on the intermittent tests at a doctor's office, or not getting tested at all," said lead author Garg, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. "Patients might then send this data to doctors so that they can better judge patients' health when they're not in a clinic."

Traditional Doppler ultrasounds, the clinical standard for fetal heart rate monitoring, work by sending high-frequency sound into a person's body and tracking how the echo changes in frequency. They're very accurate at measuring fetal heart rate but require costly equipment and a skilled technician to operate it.

To use DopFone, a user places the phone's microphone against their abdomen for one minute. The phone emits a subaudible 18 kilohertz tone. The team chose this low frequency because, unlike a Doppler's high frequencies above 2,000 kilohertz, it sits within the range smartphone microphones can record while still traveling well through tissue. As the tone is reflected through the user's abdomen, the fetus's heartbeat creates small shifts in the sound.
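To get a feel for how subtle those shifts are, the standard two-way Doppler formula Δf = 2vf/c can be applied to DopFone's 18 kHz tone. The heart-wall speed and tissue sound speed below are illustrative assumptions, not figures from the DopFone paper.

```python
# Back-of-the-envelope Doppler shift for a tone reflected off a
# moving surface, using the two-way formula df = 2 * v * f / c.
# V_WALL and C_TISSUE are illustrative assumptions.

def doppler_shift(f_tx_hz, v_mps, c_mps):
    """Frequency shift of a tone reflected off a surface moving at v."""
    return 2.0 * v_mps * f_tx_hz / c_mps

F_TONE = 18_000.0   # DopFone's 18 kHz transmit tone
C_TISSUE = 1540.0   # approximate speed of sound in soft tissue (m/s)
V_WALL = 0.01       # assumed fetal heart-wall speed (m/s)

shift = doppler_shift(F_TONE, V_WALL, C_TISSUE)
print(f"{shift:.3f} Hz")  # a fraction of a hertz -- a very subtle echo change
```

A shift this small is far below what the ear could notice, which is why the system leans on a machine learning model to pull the heartbeat signal out of the recorded audio.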

A machine learning model then estimates the heart rate using the audio and the patient's demographic information.

The team tested DopFone in UW Medicine's maternal-fetal medicine division on 23 pregnant patients between 19 and 39 weeks of pregnancy. On average, its readings were within 2.1 bpm of the medical Doppler ultrasound. Its accuracy was slightly diminished for patients with high body mass indexes, though those readings were still within normal limits. Because an irregular fetal heartbeat is often an emergency, DopFone was not tested on patients with irregularities.

Next, the team plans to gather more data outside a lab to better train the model. Eventually they want to deploy it as a publicly available app.

"This women's health space is often overlooked," Garg said. "So I want to focus on accessible alternatives that can be available to people in low-resource areas, whether that's here in the U.S. or in other countries. Because health belongs to everyone."

Co-authors include a UW graduate student in electrical and computer engineering; two OB/GYNs in UW Medicine's maternal-fetal medicine division; and a UW assistant professor in the Allen School. A UW professor in the Allen School and in electrical and computer engineering and a researcher at the Georgia Institute of Technology were senior authors. This research was funded by the UW Gift Fund.

For more information, contact Garg at pgarg70@uw.edu.

Rubin Observatory launches real-time monitoring of the sky with thousands of alerts
/news/2026/02/25/rubin-observatory-real-time-alerts-dirac/ (published Wed, 25 Feb 2026 18:02:01 +0000)

A large telescope sits on a mountain top beneath a starry night sky.
The Vera C. Rubin Observatory sits on its mountain peak in Chile during observation activities in April 2025. The observatory will soon begin real-time nightly monitoring of the entire Southern Hemisphere sky. Photo: RubinObs/NOIRLab/SLAC/NSF/DOE/AURA/P. Horálek (Institute of Physics in Opava)

On Feb. 24, astronomers' computers around the world lit up with a deluge of cosmic notifications: 800,000 alerts about new asteroids in our solar system, exploding stars across the galaxy and other noteworthy changes in the night sky. The discoveries were made by the Simonyi Survey Telescope at the Vera C. Rubin Observatory in Chile and distributed globally within about two minutes.

That flurry of notifications marked the commencement of the observatory's Alert Production Pipeline, a sophisticated software system developed at the University of Washington that is eventually expected to produce up to seven million alerts per night.

鈥淩ubin’s alert system was designed to allow anyone to identify interesting astronomical events with enough notice to rapidly obtain time-critical follow-up observations,” said , a research associate professor of astronomy at the UW who leads the Alert Production Pipeline Group for the Rubin Observatory. 鈥淩ubin will survey the sky at an unprecedented scale and allow us to find the most rare and unusual objects in the universe. We can鈥檛 wait to see the exciting science that comes from these data.鈥

The beginning of scientific alerts is one of the last major milestones before Rubin Observatory launches its Legacy Survey of Space and Time (LSST) later this year. During the LSST, Rubin will scan the Southern Hemisphere sky nightly for 10 years to precisely capture every visible change. These alerts will chronicle the treasure trove of scientific discoveries that Rubin will make through its time-lapse record of the universe. In the first year of the LSST, Rubin is expected to capture images of more objects than all other optical observatories combined in human history.

The UW played a central role in the software that enabled this month's milestone. The alert pipeline was developed by a team of about two dozen researchers and software developers in the astronomy department's DiRAC Institute. The team has spent the past decade working with other data management teams around the country to figure out how to process the staggering 10 terabytes of images that Rubin produces every night, and will continue to develop and operate the alert system throughout the 10-year LSST survey.

A grid of 12 blurry grayscale celestial images.
As new images are taken, Rubin Observatory's software automatically compares each one with a template image. The template image, built by combining images Rubin has previously taken of the same area in the same filter, is subtracted from the new image, leaving only the changes. Each change triggers an alert within minutes of image capture. Photo: NSF–DOE Vera C. Rubin Observatory/NOIRLab/SLAC/AURA. Alert images with classifications provided by ALeRce and Lasair.

"Enabling real-time discovery on such a massive data stream has required years of technical innovation in image processing algorithms, databases and data orchestration. We're thrilled to continue the UW's legacy of excellence in data-driven science," Bellm said.

While the night sky seems calm and unchanging to the casual viewer, it's actually alive with motion and transformation. Each alert signals something that has changed in the sky since Rubin last looked: a new source of light, a star that brightened or dimmed, or an object that moved. With Rubin's alerts, scientists will have a greater ability to catch supernovae in their earliest moments, discover and track asteroids to assess potential threats to Earth and spot rare interstellar objects as they race through the solar system.

Scientists can use these data to better understand the nature of dark matter, dark energy and other unknown aspects of the universe.

"The discoveries reported in these alerts reflect the power of NSF-DOE Rubin Observatory as a tool for astrophysics and the importance of sustained federal support," said Kathy Turner, program manager in the High Energy Physics program in the U.S. Department of Energy's Office of Science. "Rubin Observatory's groundbreaking capabilities are revealing untold astrophysical treasures and expanding scientists' access to the ever-changing cosmos."

Every 40 seconds during nighttime observations, Rubin captures a new region of the sky. It then sends the data on a seconds-long journey from Chile to the U.S. Data Facility (USDF) at SLAC National Accelerator Laboratory in California for initial processing. Rubin's data management system automatically compares it to a template made from previous images of the same region. This comparison allows it to detect the slightest variations. With every change, such as the appearance of a new point of light, an object's movement or a change in brightness, the system generates a public alert within two minutes.
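The template-subtraction step can be sketched in a few lines: subtract the template from the new image and flag every pixel whose change exceeds a threshold. Real difference imaging involves PSF matching, calibration and careful noise modeling; this toy version shows only the core idea.

```python
# Toy difference imaging: compare a new image against a template
# and emit an "alert" for every pixel that changed significantly.

def find_alerts(new, template, threshold):
    """Return (row, col, delta) for every change larger than threshold."""
    alerts = []
    for r, (new_row, tmpl_row) in enumerate(zip(new, template)):
        for c, (n, t) in enumerate(zip(new_row, tmpl_row)):
            delta = n - t
            if abs(delta) > threshold:
                alerts.append((r, c, delta))
    return alerts

template = [[10, 10, 10],
            [10, 10, 10],
            [10, 10, 10]]
new_img  = [[10, 10, 10],
            [10, 90, 10],   # a new bright source appears
            [10, 10,  3]]   # an existing source dims
print(find_alerts(new_img, template, threshold=5))
# [(1, 1, 80), (2, 2, -7)]
```

Everything unchanged subtracts to near zero and is ignored, so the pipeline's output scales with what is new in the sky rather than with the full 10 terabytes captured each night.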

"The scale and speed of the alerts are unprecedented," said Hsin-Fang Chiang, a SLAC software developer leading operations for data processing at the USDF. "After generating hundreds of thousands of test alerts in the last few months, we are now able to say, within minutes, with each image, 'Here is everything. Go.'"

Rubin's alerts are public, meaning anyone, from professional researchers to students and citizen scientists, can access and explore them. The speed of the alerts allows scientists using other ground- and space-based telescopes around the world to coordinate follow-up observations. This collaboration will enable fast and detailed studies of unfolding phenomena.

Additionally, through collaborations with citizen-science platforms, Rubin will empower the global community to help classify cosmic events and contribute directly to discovery.

Rubin Observatory is jointly operated by NSF and SLAC.

For more information, contact Bellm at ecbellm@uw.edu.

This story was adapted from a press release.

Operations of the Vera C. Rubin Observatory are funded by the U.S. National Science Foundation and the U.S. Department of Energy鈥檚 Office of Science.

In a study, AI model OpenScholar synthesizes scientific research and cites sources as accurately as human experts
/news/2026/02/04/in-a-study-ai-model-openscholar-synthesizes-scientific-research-and-cites-sources-as-accurately-as-human-experts/ (published Wed, 04 Feb 2026 16:02:30 +0000)

A screenshot of the OpenScholar demo.
A UW and Ai2 research team built OpenScholar, an open-source AI model designed specifically to synthesize current scientific research. In tests, OpenScholar cited sources as accurately as human experts, and 16 scientists preferred its responses to those written by subject experts 51% of the time. Above is the user interface for a free online demo of the model.

Keeping up with the latest research is vital for scientists, but given how many papers are published every year, that can prove difficult. Artificial intelligence systems show promise for quickly synthesizing seas of information, but they still tend to make things up, or "hallucinate."

For instance, when a team led by researchers at the University of Washington and the Allen Institute for AI, or Ai2, studied a recent OpenAI model, they found it fabricated 78-90% of its research citations. And general-purpose AI models like ChatGPT often can't access papers that were published after their training data was collected.

So the UW and Ai2 research team built OpenScholar, an open-source AI model designed specifically to synthesize current scientific research. The team also created the first large, multi-domain benchmark for evaluating how well models can synthesize and cite scientific research. In tests, OpenScholar cited sources as accurately as human experts, and 16 scientists preferred its responses to those written by subject experts 51% of the time.

The team published its findings Feb. 4 in Nature. The project's materials are publicly available and free to use.

"After we started this work, we put the demo online and quickly, we got a lot of queries, far more than we'd expected," said senior author Hajishirzi, a UW associate professor in the Paul G. Allen School of Computer Science & Engineering and senior director at Ai2. "When we started looking through the responses, we realized our colleagues and other scientists were actively using OpenScholar. It really speaks to the need for this sort of open-source, transparent system that can synthesize research."

Try the free online demo.

Researchers trained the model and then created a set of 45 million scientific papers for OpenScholar to pull from to ground its answers in established research. They coupled this with a retrieval technique that lets the model search for new sources, incorporate them and cite them after it's been trained.
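The idea of grounding answers in retrieved papers can be sketched with a toy retriever that ranks a tiny corpus by word overlap with the query. OpenScholar's actual retriever is a trained neural system over 45 million papers; this example only illustrates the retrieve-then-ground shape of the pipeline.

```python
# Toy retrieval: score each "paper" by how many query words it
# shares, then keep the top matches to ground an answer on.
# Corpus entries are made-up titles for illustration.

def score(query, doc):
    """Count distinct words shared by the query and the document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query, corpus, k=2):
    """Return the k highest-scoring documents for the query."""
    ranked = sorted(corpus, key=lambda doc: score(query, doc), reverse=True)
    return ranked[:k]

corpus = [
    "survey of retrieval augmented generation for language models",
    "deep learning for protein structure prediction",
    "benchmarks for scientific literature synthesis",
]
top = retrieve("retrieval augmented language models", corpus)
print(top[0])  # the retrieval-augmented generation survey ranks first
```

A generator that is constrained to cite only the retrieved documents is what keeps answers tied to real papers instead of hallucinated ones, which is the failure mode the article describes in general-purpose models.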

"Early on we experimented with using an AI model with Google's search data, but we found it wasn't very good on its own," said lead author Asai, a research scientist at Ai2 who completed this research as a UW doctoral student in the Allen School. "It might cite some research papers that weren't the most relevant, or cite just one paper, or pull from a blog post randomly. We realized we needed to ground this in scientific papers. We then made the system flexible so that it could incorporate emerging research through search results."

To test their system, the team created ScholarQABench, a benchmark for evaluating systems on scientific search. They gathered 3,000 queries and 250 long-form answers written by experts in computer science, physics, biomedicine and neuroscience.

"AI is getting better and better at real-world tasks," Hajishirzi said. "But the big question ultimately is whether we can trust that its answers are correct."

The team compared OpenScholar against other state-of-the-art AI models, such as OpenAI's GPT-4o and two models from Meta. ScholarQABench automatically evaluated the AI models' answers on metrics such as their accuracy, writing quality and relevance.

OpenScholar outperformed all the systems it was tested against. The team had 16 scientists review answers from the models and compare them with human-written responses. The scientists preferred OpenScholar's answers to human answers 51% of the time, but when the team combined OpenScholar's citation methods and pipelines with GPT-4o (a much bigger model), the scientists preferred the AI-written answers to human answers 70% of the time. They picked answers from GPT-4o on its own only 32% of the time.

"Scientists see so many papers coming out every day that it's impossible to keep up," Asai said. "But the existing AI systems weren't designed for scientists' specific needs. We've already seen a lot of scientists using OpenScholar, and because it's open-source, others are building on this research and already improving on our results. We're working on a follow-up model, which builds on OpenScholar's findings and performs multi-step search and information gathering to produce more comprehensive responses."

Other co-authors include several UW doctoral students in the Allen School; a UW professor emeritus in the Allen School who is general manager and chief scientist at Ai2; a UW postdoc in the Allen School and postdoc at Ai2; a UW professor in the Allen School; and a UW assistant professor in the Allen School, as well as Amanpreet Singh, Joseph Chee Chang, Kyle Lo, Luca Soldaini, Sergey Feldman, Mike D'Arcy, David Wadden, Matt Latzke, Jenna Sparks and Jena D. Hwang of Ai2; Wen-tau Yih of Meta; Minyang Tian, Shengyan Liu, Hao Tong and Bohao Wu of the University of Illinois Urbana-Champaign; Pan Ji of the University of North Carolina; Yanyu Xiong of Stanford University; and Graham Neubig of Carnegie Mellon University.

For more information, contact Asai at akaria@allenai.org and Hajishirzi at hannaneh@cs.washington.edu.

Q&A: UW researchers create a smart glove with its own sense of touch
/news/2026/01/27/smart-glove-electronic-touch-pressure-sensor-engineeering-soft-robotics/ (published Tue, 27 Jan 2026 21:19:51 +0000)

Two pieces of an electronic glove lie on a table.
Inside the OpenTouch Glove (right) is a grid of wires (left) that allows the glove to sense the location and degree of any pressure applied to it. Photo: University of Washington

Yiyue Luo's lab at the University of Washington is full of machinery that's oddly cozy. Here, soft and pliable sensors are sewn, knit and glued directly into clothing to give everyday garments new capabilities.

One of the lab’s newest curiosities is a nondescript gray work glove embedded with sensors that enable it to “feel” on its own. An array of small wires hidden inside the glove reports the location and degree of pressure anywhere along its surface. When in use, the signals from the glove feed a real-time “heat map” of pressure that could one day help physical therapy patients track their progress, teach robots to grasp objects, and more.

The OpenTouch Glove project, as it’s officially known, is led by UW electrical and computer engineering doctoral student Devin Murphy as part of a collaboration with researchers at MIT. UW News caught up with Murphy to learn more about the glove and its potential uses.

What inspired you to create this glove?

Devin Murphy: Our hands are arguably our greatest tools as humans. We interact with the world through our hands in so many different ways. But the nature of how we grasp and manipulate things in our environment is super nuanced and complex, and it’s hard to capture. We have very mature electronics that record sight and sound — think of the cameras and microphones in your smartphone. But there aren’t many electronic devices that record our other senses — like touch. That’s what I’ve been working to remedy with the OpenTouch Glove.

How does the glove work? What are its capabilities?

DM: There are two flexible circuit boards inside each glove that form a grid of wires across the gripping surface of the glove. We can measure pressure at any point in that mesh where two wires meet. The circuit boards connect to a little box of electronics at the user鈥檚 wrist, which processes the signals and sends them wirelessly to a laptop.

We can then generate a “heat map” image showing where force is being applied on the hand, where the hand is applying force to different objects and how much force the hand is applying.
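The row-column scanning scheme Murphy describes can be sketched in a few lines. This is a toy model with invented wire counts and sensor values, not the OpenTouch Glove's actual firmware: pressure is sensed wherever a row wire crosses a column wire, and a full scan yields the 2-D map behind the heat-map image.

```python
# Toy sketch of reading a crossbar pressure grid: pressure registers
# wherever a row wire crosses a column wire. Wire counts and ADC
# values below are illustrative assumptions.

def scan_grid(read_adc, n_rows, n_cols):
    """Scan every row/column crossing and return a 2-D pressure map.

    `read_adc(r, c)` stands in for driving row r, sensing column c,
    and returning a raw value proportional to applied pressure.
    """
    return [[read_adc(r, c) for c in range(n_cols)] for r in range(n_rows)]

def peak_pressure(grid):
    """Locate the most strongly pressed crossing in a scanned map."""
    best = max((v, r, c) for r, row in enumerate(grid) for c, v in enumerate(row))
    return best[1], best[2], best[0]  # (row, col, value)

# Simulated reading: pressure concentrated at crossing (2, 3).
fake_adc = lambda r, c: 900 if (r, c) == (2, 3) else 10
grid = scan_grid(fake_adc, n_rows=8, n_cols=6)
print(peak_pressure(grid))  # (2, 3, 900)
```

Rendering `grid` as an image, frame after frame, gives exactly the kind of real-time pressure heat map the glove produces.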

This kind of data gives us extra nuance that a camera can’t capture. For example, if your hand is in a bag or behind an object while it’s grasping things, a camera wouldn’t be able to tell what your hand is doing, whereas this glove can follow along.

What are some potential applications for the glove?

DM: I’m particularly excited about how this technology might help patients recovering from an injury. Physical therapists have patients perform a variety of tasks to regain mobility in their hands — if we can measure how much force people apply during this process, we can provide them with concrete feedback. The patient and therapist can both track progress by monitoring the patient’s grip strength over time.

We’re also seeing lots of new companies invest in physical intelligence for robotics — basically recording how robots interact with the physical world. If we can record human hand grip signals, we might be able to teach robotic hands how to mimic human behavior.

One other interesting application is in augmented reality or virtual reality. If we replaced traditional controllers with these gloves, it could give users a more natural way to interact with virtual objects and scenery — though we’d need some additional technology for users to feel pressure when gripping virtual things.

How can other researchers access this technology?

DM: It’s really important to us that the glove is accessible to other researchers and anyone else who might want to use it for their own applications. You can order all of the components of the glove directly from commercial manufacturers, and we have released all of the manufacturing files and instructions for putting the glove together yourself.

We’ve also shown some demos of the glove “in the wild” to showcase the different kinds of data it can collect, and we’re planning to release an open-source data set collected with the glove in partnership with researchers at MIT.

I’m really excited about developing new wearable technologies that allow people to record less popular sensing modalities like touch. I want to figure out how we can capture the nuances of touch-based interactions, so that ultimately we can get better insights into our daily lives.

For more information, contact Murphy at devinmur@uw.edu.

UW researchers analyzed which anthologized writers and books get checked out the most from Seattle Public Library /news/2026/01/08/seattle-public-library-data-anthologized-writers/ Thu, 08 Jan 2026 17:04:04 +0000 /news/?p=90225
UW researchers analyzed the checkout data from the last 20 years of the 93 authors included in the post-1945 volume of 鈥淭he Norton Anthology of American Literature,鈥 which is assigned in U.S. English classes more than nearly any other anthology. Photo:

Seattle Public Library, or SPL, is the only U.S. library system that makes its anonymized, granular checkout data public. Want to find out how many times people borrowed the e-book version of Toni Morrison’s “Beloved” in May 2018? That data is available.

The hitch is that the library’s data set contains nearly 50 million rows, and a single title can appear under many variants. Morrison’s “Beloved,” for instance, is listed as “Beloved,” “Beloved (unabridged),” “Beloved : a novel / by Toni Morrison” and so on.

To track trends in the catalogue over the last 20 years, University of Washington researchers analyzed the checkout data of the 93 authors included in the post-1945 volume of “The Norton Anthology of American Literature.” It’s assigned in U.S. English classes more than virtually any other anthology, so it helps define what’s thought of as the contemporary American canon — the books and writers we’ve deemed culturally important.

The team found that among these vaunted writers — including Morrison, Viet Thanh Nguyen, David Foster Wallace and Joan Didion — science fiction was particularly popular. Ursula K. Le Guin and Octavia E. Butler topped the list.

The team published its findings Nov. 21 in Computational Humanities Research 2025.

Related:

  • Related coverage looks at how checkouts correspond with book sales and other library circulation

“It’s kind of mind-boggling and ironic that in this age of abundant data, we have so little data about what people are reading,” said senior author Melanie Walsh, a UW assistant professor in the Information School. “…, particularly for researchers, so I’ve been obsessed with SPL’s data for years now. But extracting insights from it is actually a really hard computational and bibliographic modeling problem.”

To organize the data, the team used computational methods, such as stripping away subtitles and standardizing punctuation. They also manually identified things like translations of a work.
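Collapsing catalog variants onto one work is, at heart, a string-normalization problem. Here is a minimal sketch using simple heuristics on the “Beloved” variants quoted earlier; the team's actual pipeline is more involved, and the specific rules below are illustrative assumptions:

```python
import re

def normalize_title(raw):
    """Collapse catalog variants of one work onto a single key.

    A rough sketch: drop trailing responsibility statements
    ("/ by Toni Morrison"), subtitles after ":", parentheticals
    like "(unabridged)", then standardize case and punctuation.
    """
    t = raw.split("/")[0]            # drop "/ by Author"
    t = t.split(":")[0]              # drop subtitle
    t = re.sub(r"\(.*?\)", "", t)    # drop "(unabridged)" etc.
    t = re.sub(r"[^\w\s]", "", t)    # strip remaining punctuation
    return " ".join(t.lower().split())

variants = [
    "Beloved",
    "Beloved (unabridged)",
    "Beloved : a novel / by Toni Morrison",
]
print({normalize_title(v) for v in variants})  # {'beloved'}
```

With a key like this, all of a work's catalog rows can be grouped and its checkouts summed.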

“We worked with the Norton anthology in part because it’s a small enough scale for us to handle,” said lead author Gupta, a UW doctoral student in the Information School. “It allows us to have a ground truth to work off of. We can still put a human eye on things.”

In all, the team looked at 1,603 works by the 93 authors, which were checked out a total of 980,620 times since 2005.

A line graph shows checkouts of Ursula K. Le Guin increasing over two decades.
This graph follows how many times Ursula K. Le Guin’s books were borrowed since 2005. Photo: Gupta et al./Computational Humanities Research 2025

The top 10 authors were:

  1. Ursula K. Le Guin
  2. Octavia E. Butler
  3. Louise Erdrich
  4. N.K. Jemisin
  5. Toni Morrison
  6. Kurt Vonnegut
  7. George Saunders
  8. Philip K. Dick
  9. Sherman Alexie
  10. James Baldwin

The top 10 books were:

  1. 鈥淧arable of the Sower鈥 by Octavia E. Butler
  2. 鈥淟incoln in the Bardo鈥 by George Saunders
  3. 鈥淭he Fifth Season鈥 by N.K. Jemisin
  4. 鈥淭he Sympathizer鈥 by Viet Thanh Nguyen
  5. 鈥淜indred鈥 by Octavia E. Butler
  6. 鈥淏eloved鈥 by Toni Morrison
  7. 鈥淭he Left Hand of Darkness鈥 by Ursula K. Le Guin
  8. 鈥淭he Absolutely True Diary of a Part-Time Indian鈥 by Sherman Alexie
  9. 鈥淭he Year of Magical Thinking鈥 by Joan Didion
  10. 鈥淭he Sentence鈥 by Louise Erdrich

Researchers noted several trends that may have driven checkouts. In general, books with genre and sci-fi elements were among the most popular.

“I found the prevalence of sci-fi books and writers really interesting,” Gupta said. “These are recent additions to the anthology, since sci-fi and genre fiction haven’t always been seen as important literature. So while it’s a bit unsurprising, it’s also striking to see that despite comprising a small portion of the anthology, these are the authors people are actually reading the most.”

News events also drove spikes in readership, such as film adaptations of James Baldwin’s “If Beale Street Could Talk” and Don DeLillo’s “White Noise,” or the deaths of authors such as Didion, Wallace, Morrison and Philip Roth.

The top book, “Parable of the Sower,” saw a huge spike in readership in 2024 — the year the futuristic novel is set, and the year SPL selected the novel for its program.

“We’ve deemed these canonical authors important enough to continue reading, to continue teaching, to continue studying and talking about, so it’s fascinating to see who we’re actually reading and when,” Walsh said. “I find it very beautiful that after years of these big debates about diversifying the canon, the works that people are turning to the most are by women and Black and Native writers, who previously were not even included in these anthologies.”

Co-authors include Daniella Maor, Karalee Harris, Emily Backstrom and Hongyuan Dong, all students at the UW. This research was supported in part by the .

For more information, contact Walsh at melwalsh@uw.edu and Gupta at ngupta1@uw.edu.

Video: Drivers struggle to multitask when using dashboard touch screens, study finds /news/2025/12/16/video-drivers-struggle-to-multitask-when-using-dashboard-touch-screens-study-finds/ Tue, 16 Dec 2025 17:00:09 +0000 /news/?p=90099

Once the domain of buttons and knobs, car dashboards are increasingly home to large touch screens. While that makes following a mapping app easier, it also means drivers can’t feel their way to a control; they have to look. But how does that visual component affect driving?

New research from the University of Washington and Toyota Research Institute, or TRI, explores how drivers juggle driving and touch screen use while distracted. In the study, participants drove in a vehicle simulator, interacted with a touch screen and completed memory tests that mimic the mental effort demanded by traffic conditions and other distractions. The team found that when people multitasked, both their driving and their touch screen use suffered. The car drifted more in the lane while people used touch screens, and their speed and accuracy with the screen declined when driving. The effects grew further when the memory task was added.

These results could help auto manufacturers design safer, more responsive touch screens and in-car interfaces.

The team presented the research Sept. 30 at the ACM Symposium on User Interface Software and Technology in Busan, Korea.

“We all know …,” said co-senior author James Fogarty, a UW professor in the Paul G. Allen School of Computer Science & Engineering. “But what about the car’s touch screen? We wanted to understand that interaction so we can design interfaces specifically for drivers.”

As the study’s 16 participants drove the simulator, sensors tracked their gaze, finger movements, pupil diameter and electrodermal activity. The last two are common ways to measure mental effort, or “cognitive load.” For instance, pupils tend to dilate when people are concentrating.


While driving, participants had to touch specific targets on a 12-inch touch screen, similar to how they would interact with apps and widgets. They did this while completing three levels of an “N-back task,” a memory test in which participants hear a series of numbers, 2.5 seconds apart, and must repeat the digit they heard a set number of items earlier.
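The N-back protocol is simple to state precisely in code. This is a toy model of the digit-repetition variant described above, not the study's actual task software; the sample digit stream is invented:

```python
def n_back_responses(digits, n):
    """For each item heard from position n onward, the correct
    response in this variant is the digit heard n items earlier.
    With n = 0 the participant simply echoes the current digit;
    larger n means holding more digits in working memory.
    """
    return [digits[i - n] for i in range(n, len(digits))]

stream = [3, 8, 5, 1, 9]
print(n_back_responses(stream, 0))  # easiest level: [3, 8, 5, 1, 9]
print(n_back_responses(stream, 2))  # harder: correct answers are [3, 8, 5]
```

Raising `n` is how the experimenters dialed up cognitive load between conditions.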

The participants鈥 performance changed significantly under different conditions:

  • When interacting with the touch screen, participants drifted side to side in their lane 42% more often. Increasing cognitive load had no effect on the results.
  • Touch screen accuracy and speed decreased 58% when driving, then another 17% under high cognitive load.
  • Each glance at the touch screen was 26.3% shorter under high cognitive load.
  • A “hand-before-eye” phenomenon, in which drivers reached for a control before looking at it, increased from 63% to 71% as memory tasks were introduced.

The team also found that increasing the size of the target areas participants were trying to touch did not improve their performance.

“If people struggle with accuracy on a screen, usually you want to make bigger buttons,” said , a UW doctoral student in the Allen School. “But in this case, since people move their hand to the screen before touching, the thing that takes time is the visual search.”

Based on these findings, the researchers suggest future in-car touch screen systems might use simple sensors in the car — eye tracking, or touch sensors on the steering wheel — to monitor drivers’ attention and cognitive load. Using those readings, the car’s system might adjust the touch screen’s interface to make important controls more prominent and safer to access.

“Touch screens are widespread today in automobile dashboards, so it is vital to understand how interacting with touch screens affects drivers and driving,” said co-senior author Jacob O. Wobbrock, a UW professor in the Information School. “Our research is some of the first that scientifically examines this issue, suggesting ways for making these interfaces safer and more effective.”

, a UW doctoral student in the Information School, is co-lead author. Other co-authors include , , and of TRI. This research was funded in part by TRI.

For more information, contact Wobbrock at wobbrock@uw.edu and Fogarty at jfogarty@cs.washington.edu.

AI can pick up cultural values by mimicking how kids learn /news/2025/12/11/ai-training-cultural-values/ Thu, 11 Dec 2025 17:04:44 +0000 /news/?p=90064 A video game shows two kitchens of different sizes.
In the Overcooked video game, players work to cook and deliver as much onion soup as possible. In the study’s version of the game, one player can give onions to help the other, who has further to travel to make the soup. The research team wanted to find out if AI systems could learn altruism by watching different cultural groups play the game. Photo:

Artificial intelligence systems absorb values from their training data. The trouble is that values differ across cultures. So an AI system trained on data from the entire internet won鈥檛 work equally well for people from different cultures.

But a new University of Washington study suggests that AI could learn cultural values by observing human behavior. Researchers had AI systems observe people from two cultural groups playing a video game. On average, participants in one group behaved more altruistically. The AI agent assigned to each group learned that group’s degree of altruism, and was able to apply that value to a novel scenario beyond the one it was trained on.

The team published its findings Dec. 9 in PLOS One.

“We shouldn’t hard-code a universal set of values into AI systems, because many cultures have their own values,” said senior author Rajesh Rao, a UW professor in the Paul G. Allen School of Computer Science & Engineering and co-director of the Center for Neurotechnology. “So we wanted to find out if an AI system can learn values the way children do, by observing people in their culture and absorbing their values.”

As inspiration, the team looked to earlier research showing that 19-month-old children raised in Latino and Asian households were more altruistic than those from other cultures.

In the AI study, the team recruited 190 adults who identified as white and 110 who identified as Latino. Each group was assigned an AI agent, a system that can function autonomously.

These agents were trained with a method called inverse reinforcement learning, or IRL. In the more common AI training method, reinforcement learning, or RL, a system is given a goal and gets rewarded based on how well it works toward that goal. In IRL, the AI system observes the behavior of a human or another AI agent, and infers the goal and underlying rewards. So a robot trained to play tennis with RL would be rewarded when it scores points, while a robot trained with IRL would watch professionals playing tennis and learn to emulate them by inferring goals such as scoring points.
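The RL/IRL contrast above can be made concrete with a toy one-step example. Everything here is invented for illustration (the action names, rewards and demonstrations are assumptions, and real IRL algorithms infer rewards far more carefully than this frequency proxy); the study's agents learned from Overcooked gameplay, not code like this:

```python
from collections import Counter

# A one-step "game" with three possible actions.
REWARDS = {"share_onion": 2, "cook": 1, "idle": 0}  # known only to the RL agent

def rl_policy(rewards):
    """RL: the goal (reward function) is given up front;
    the agent simply picks the highest-reward action."""
    return max(rewards, key=rewards.get)

def irl_infer(demonstrations):
    """IRL: the goal is hidden; infer relative rewards from how often
    a demonstrator chooses each action (a crude frequency-based
    stand-in for real inverse reinforcement learning)."""
    counts = Counter(demonstrations)
    total = sum(counts.values())
    return {action: counts[action] / total for action in counts}

# The RL agent acts from the known rewards; the IRL observer only
# watches demonstrations and infers which action is most valued.
demos = ["share_onion", "share_onion", "cook", "share_onion", "idle"]
print(rl_policy(REWARDS))  # share_onion
inferred = irl_infer(demos)
print(max(inferred, key=inferred.get))  # share_onion, inferred from behavior alone
```

The point of the contrast: the IRL observer never sees `REWARDS`, yet recovers the demonstrator's priorities from behavior, which is the mechanism the study uses to absorb a group's values.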

This IRL approach more closely aligns with how humans develop.

“Parents don’t simply train children to do a specific task over and over. Rather, they model or act in the general way they want their children to act. For example, they model sharing and caring towards others,” said co-author Andrew Meltzoff, a UW professor of psychology and co-director of the Institute for Learning & Brain Sciences (I-LABS). “Kids learn almost by osmosis how people act in a community or culture. The human values they learn are more ‘caught’ than ‘taught.’”

In the study, the AI agents were given data from the participants playing a modified version of the video game Overcooked, in which players work to cook and deliver as much onion soup as possible. Players could see into another kitchen where a second player had to walk further to accomplish the same tasks, putting them at an obvious disadvantage. Participants didn’t know that the second player was a bot programmed to ask the human players for help. Participants could choose to give away onions to help the bot, but at the personal cost of delivering less soup.

Researchers found that, overall, the people in the Latino group chose to help more than those in the white group, and the AI agents learned the altruistic values of the group they were trained on. When playing the game, the agent trained on Latino data gave away more onions than the other agent.

To see if the AI agents had learned a general set of values for altruism, the team conducted a second experiment. In a separate scenario, the agents had to decide whether to donate a portion of their money to someone in need. Again, the agents trained on Latino data from Overcooked were more altruistic.

“We think that our proof-of-concept demonstrations would scale as you increase the amount and variety of culture-specific data you feed to the AI agent. Using such an approach, an AI company could potentially fine-tune their model to learn a specific culture’s values before deploying their AI system in that culture,” Rao said.

Additional research is needed to know how this type of IRL training would perform in real-world scenarios, with more cultural groups, competing sets of values, and more complicated problems.

“Creating culturally attuned AI is an essential question for society,” Meltzoff said. “How do we create systems that can take the perspectives of others into account and become civic-minded?”

, a UW research engineer in the Allen School, and , a software engineer at Microsoft who completed this research as a UW student, were co-lead authors. Other co-authors include , a scientist at the Allen Institute who completed this research as a UW doctoral student; , an assistant professor at San Diego State University who completed this research as a postdoctoral scholar at the UW; and , a professor in the Allen School and director of the at UW.

For more information, contact Rao at rao@cs.washington.edu.
