Jon Froehlich – UW News

From accessibility upgrades to a custom cat-food bowl, this mobile 3D printer can autonomously add features to a room (Oct. 23, 2024)

Today's 3D printers make it fairly easy to conjure, say, a chess set into existence. But these printers are largely fixed in place, so if someone wants to add 3D-printed elements to a room – a footrest beneath a desk, for instance – the project gets more difficult. The space must be measured. The objects must then be scaled, printed elsewhere and fixed in the right spot. Handheld 3D printers exist, but they lack accuracy and come with a learning curve.

University of Washington researchers created MobiPrint, a mobile 3D printer that can automatically measure a room and print objects onto its floor. The team's graphical interface lets users design objects for a space that the robot has mapped out. The prototype, which the team built on a modified consumer vacuum robot, can add accessibility features, home customizations or artistic flourishes to a space.

The team presented MobiPrint on Tuesday, Oct. 15, at the ACM Symposium on User Interface Software and Technology in Pittsburgh.

"Digital fabrication, like 3D printing, is pretty mature at this point," said Daniel Campos Zamora, a doctoral student in the Paul G. Allen School of Computer Science & Engineering. "Now we're asking: How can we push it further and further into the world, and lower the barriers for people to use it? How can we change the built environment and tailor spaces for people's specific needs – for accessibility, for taste?"

The prototype system can add accessibility features, such as tactile markers for blind and low-vision people. These might provide information, such as text telling conference attendees where to go, or warn of dangers such as staircases. Or it can create a ramp to cover an uneven flooring transition. MobiPrint also allows users to create custom objects, such as small art pieces up to three inches tall.

Before printing an object, MobiPrint autonomously roams an indoor space and uses lidar to map it. The team's design tool then converts this map into an interactive canvas. The user can then select a model from the MobiPrint library – a cat food bowl, for instance – or upload a design. Next, the user picks a location on the map to print the object, working with the design interface to scale and position the job. Finally, the robot moves to the location and prints the object directly onto the floor.
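The scan-design-print loop above can be sketched in a few lines of code. Everything here is illustrative: the model names, the map representation and the robot commands are assumptions for the sketch, not the team's actual software.

```python
from dataclasses import dataclass

@dataclass
class PrintJob:
    model: str    # a design from the library, e.g. "cat_food_bowl"
    x: float      # target spot on the floor map, in meters
    y: float
    scale: float

def plan_print(mapped_floor: set[tuple[int, int]], job: PrintJob) -> list[str]:
    """Turn a user-placed job into robot steps: drive to the spot, then print.

    mapped_floor is the set of floor cells the robot covered while roaming
    with its lidar; printing is only allowed inside that mapped area.
    """
    cell = (round(job.x), round(job.y))
    if cell not in mapped_floor:
        raise ValueError("target lies outside the mapped floor area")
    return [
        f"navigate to ({job.x:.1f}, {job.y:.1f})",
        f"print {job.model} at scale {job.scale} onto the floor",
    ]
```

The key design point the article describes survives even in this toy form: the map built during roaming acts as the canvas, so job placement is validated against it before the robot ever moves.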


For printing, the current design uses a common bioplastic called PLA. The researchers are working to have MobiPrint remove objects it has printed and potentially recycle the plastic. They're also interested in exploring the possibilities of robots that print on other surfaces (such as tabletops or walls), in other environments (such as outdoors) and with other materials (such as concrete).

"I think about kids out biking or my friends and family members who are in wheelchairs getting to the end of a sidewalk without a curb," said Jon Froehlich, a professor in the Allen School. "It would be so great if in the future we could just send Daniel's robot down the street and have it build a ramp, even if it was working just for a short period of time. That just shows you how reconfigurable environments can be."

Liang He, an assistant professor at Purdue University who was a doctoral student in the Allen School while doing this research, is a co-author on this paper. This research was funded by the National Science Foundation.

For more information, contact Zamora at danielcz@cs.uw.edu and Froehlich at jonf@cs.uw.edu.

Q&A: Researchers aim to improve accessibility with augmented reality (Oct. 17, 2023)

 

Big Tech's race into augmented reality (AR) grows more competitive by the day. This month, Meta released the latest version of its headset, the Quest 3. Early next year, Apple will release its first headset, the Vision Pro. The announcements for each platform emphasize experiences that merge the virtual and physical worlds: a digital board game imposed on a coffee table, a movie screen projected above airplane seats.

Some researchers, though, are more curious about other uses for AR. The University of Washington's Makeability Lab is applying these budding technologies to assist people with disabilities. This month, researchers from the lab will introduce multiple projects that deploy AR – through headsets and phone apps – to make the world more accessible.

Researchers from the lab will present RASSAR, an app that can scan homes to highlight accessibility and safety issues, on Oct. 23 at the ASSETS conference in New York.

Shortly after, on Oct. 30, other teams in the lab will present early research at the UIST conference in San Francisco. One app helps users speak more naturally with voice assistants, and the other aims to make ball sports easier to follow for low-vision users.

UW News spoke with the three studies' lead authors, Jae Lee and Xia Su, both UW doctoral students in the Paul G. Allen School of Computer Science & Engineering, about their work and the future of AR for accessibility.

What is AR and how is it typically used right now?

Jae Lee: I think one commonly accepted answer is that you use a wearable headset or a phone to superimpose virtual objects on a physical environment. A lot of people probably know AR from "Pokémon Go," where you're superimposing these Pokémon onto the physical world. Now Apple and Meta are introducing "mixed reality," or passthrough AR, which further blends the physical and virtual worlds through cameras.

Xia Su: Something I have also been observing lately is people are trying to expand the definition beyond goggles and phone screens. There could be AR audio, which is manipulating your hearing, or devices trying to manipulate your smell or touch.

In augmented reality (AR), a headset or phone superimposes virtual objects on a physical space. In virtual reality (VR), a headset or goggles immerses the user in a virtual environment. Mixed reality (MR) blends the physical and virtual worlds.

A lot of people associate AR with virtual reality, and it gets wrapped up in discussion of the metaverse and gaming. How is it being applied for accessibility?

JL: AR as a concept has been around for several decades. But in Jon Froehlich's lab, we're combining AR with accessibility research. A headset or a phone can be capable of knowing how many people are in front of us, for example. For people who are blind or have low vision, that information could be critical to how they perceive the world.

XS: There are really two different routes for AR accessibility research. The more prevalent one is trying to make AR devices more accessible to people. The other, less common approach is asking: How can we use AR or VR as tools to improve the accessibility of the real world? That's what we're focused on.

JL: As AR glasses become less bulky and cheaper, and as AI and computer vision advance, this research will become increasingly important. But widespread AR, even for accessibility, brings up a lot of questions. How do you deal with bystander privacy? We, as a society, understand that vision technology can be beneficial to blind and low-vision people. But we also might not want to include facial recognition technology in apps for privacy reasons, even if that helps someone recognize their friends.

Let's talk about the papers you have coming out. First, can you explain your RASSAR app?

XS: It’s an app that people can use to scan their indoor spaces and help them detect possible accessibility safety issues in homes. It鈥檚 possible because some iPhones now have (light detection and ranging) scanners that tell the depth of a space, so we can reconstruct the space in 3D. We combined this with models to highlight ways to improve safety and accessibility. To use it, someone 鈥 perhaps a parent who鈥檚 childproofing a home, or a caregiver 鈥 scans a room with their smartphone and RASSAR spots accessibility problems. For example, if a desk is too high, a red button will pop up on the desk. If the user clicks the button, there will be more information about why that desk鈥檚 height is an accessibility issue and possible fixes.

JL: Ten years ago, you would have needed to go through 60 pages of PDFs to fully check a house for accessibility. We boiled that information down into an app.

And this is something that anyone will be able to download to their phones and use?

XS: That's the eventual goal. We already have a demo. This version relies on lidar, which is only on certain iPhone models right now. But if you have such a device, it's very straightforward.

JL: This is an example of the advancements in hardware and software that let us create apps quickly. Apple announced RoomPlan, which creates a 3D floor plan of a room, when it added the lidar sensor. We're using that in RASSAR to understand the general layout. Being able to build on that let us come up with a prototype very quickly.

 

So RASSAR is nearly deployable now. The other areas of research you're presenting are earlier in their development. Can you tell me about GazePointAR?

JL: It's an app that uses an AR headset to enable people to speak more naturally with voice assistants like Siri or Alexa. There are all these pronouns we use when we speak that are difficult for computers to understand without visual context. I can ask, "Where'd you buy it from?" But what is "it"? A voice assistant has no idea what I'm talking about. With GazePointAR, the goggles are looking at the environment around the user and the app is tracking the user's gaze and hand movements. The model then tries to make sense of all these inputs – the words, the hand movements, the user's gaze. Then, using a large language model, GPT, it attempts to answer the question.

How does it sense what the motions are?

JL: We're using a headset called HoloLens 2, developed by Microsoft. It has a gaze tracker that's watching your eyes and trying to guess what you're looking at. It has hand-tracking capability as well. In a paper that we submitted building on this, we noticed that we have a lot of problems here. For example, people don't just use one pronoun at a time – we use multiple. We'll say, "What's more expensive, this or this?" To answer that, we need information over time. But, again, you can run into privacy issues if you want to track someone's gaze or someone's visual field of view over time: What information are you storing and where is it being stored? As technology improves, we certainly need to watch out for these privacy concerns, especially in computer vision.

This is difficult even for humans, right? I can ask, "Can you explain that?" while pointing at several equations on a whiteboard, and you won't know which I'm referring to. What applications do you see for this?

JL: Being able to use natural language would be major. But if you expand this to accessibility, there's the potential for a blind or low-vision person to use this to describe what's around them. The question "Is anything dangerous in front of me?" is also ambiguous for a voice assistant. But with GazePointAR, ideally, the system could say, "There are possibly dangerous objects, such as knives and scissors." Or low-vision people might make out a shape, point at it, then ask the system what "it" is more specifically.

 

And finally, you're working on a system called ARTennis. What is it and what prompted this research?

JL: This is going even more into the future than GazePointAR. ARTennis is a prototype that uses an AR headset to make tennis balls more salient for low-vision players. The ball in play is marked by a red dot and has a crosshair of green arrows around it. Professor Jon Froehlich has a family member who wants to play sports with his children but doesn't have the residual vision necessary to do so. We thought if it works for tennis, it will work for a lot of other sports, since tennis has a small ball that shrinks as it gets farther away. If we can track a tennis ball in real time, we can do the same with a bigger, slower basketball.

One of the co-authors on the paper is low vision himself and plays a lot of squash, and he wanted to try this application and give us feedback. We did a lot of brainstorming sessions with him, and he tested the system. The red dot and green crosshairs are the design he came up with to improve the sense of depth perception.

What's keeping this from being something people can use right away?

JL: Well, like GazePointAR, it's relying on a HoloLens 2 headset that costs $3,500. So that's a different kind of accessibility issue. It's also running at roughly 25 frames per second, and for humans to perceive it as real time it needs to be about 30 frames per second. Sometimes we can't capture the speed of the tennis ball. We're going to expand the paper to include basketball and see whether people prefer different designs for different sports. The technology will certainly get faster. So our question is: What will the best design be for the people using it?
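Those frame-rate numbers translate directly into a per-frame time budget, which a quick back-of-the-envelope check makes concrete:

```python
def frame_budget_ms(fps: float) -> float:
    """Milliseconds available per frame at a given frame rate."""
    return 1000.0 / fps

def meets_realtime(pipeline_ms: float, target_fps: float = 30.0) -> bool:
    """True if one pass of the tracking pipeline fits in the frame budget."""
    return pipeline_ms <= frame_budget_ms(target_fps)
```

A 25 fps pipeline spends 40 ms per frame, while the roughly 33 ms budget of 30 fps leaves no room for that, so the track-and-render loop has to shed about 7 ms somewhere.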

For more information, contact Lee at jaewook4@cs.washington.edu, Su at xiasu@cs.washington.edu and Jon Froehlich at jonf@cs.washington.edu.

SoundWatch: New smartwatch app alerts d/Deaf and hard-of-hearing users to birdsong, sirens and other desired sounds (Oct. 28, 2020)
UW researchers have developed a smartwatch app for d/Deaf and hard-of-hearing people who want to be aware of nearby sounds. The smartwatch will identify sounds the user is interested in and send the user a friendly buzz along with information about them. Photo: Jain et al./ASSETS 2020

Smartwatches offer people a private method for getting notifications about their surroundings – such as a phone call, health alerts or an upcoming package delivery.

Now University of Washington researchers have developed SoundWatch, a smartwatch app for d/Deaf and hard-of-hearing people who want to be aware of nearby sounds. When the smartwatch picks up a sound the user is interested in – examples include a siren, a microwave beeping or a bird chirping – SoundWatch will identify it and send the user a friendly buzz along with information about the sound.

The team presented these findings Oct. 28 at the ACM conference on computing and accessibility.

"This technology provides people with a way to experience sounds that require an action – such as getting food from the microwave when it beeps. But these devices can also enhance people's experiences and help them feel more connected to the world," said lead author Dhruv Jain, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. "I use the watch prototype to notice birds chirping and waterfall sounds when I am hiking. It makes me feel present in nature. My hope is that other d/Deaf and hard-of-hearing people who are interested in sounds will also find SoundWatch helpful."

The team started this project by designing a system for d/Deaf and hard-of-hearing people who wanted to be able to know what was going on around their homes.

“I used to sleep through the fire alarm,” said Jain, who was born hard of hearing.

The first system, called HomeSound, uses Microsoft Surface tablets scattered throughout the home, which act like a network of interconnected displays. Each display provides a basic floor plan of the house and alerts a user to a sound and its source. The displays also show the sound's waveforms, to help users identify the sound, and store a history of all the sounds a user might have missed while they were not home.

The researchers tested HomeSound in the Seattle-area homes of six d/Deaf or hard-of-hearing participants for three weeks. Participants were instructed to go about their lives as normal and complete weekly surveys.

Based on feedback, a second prototype used machine learning to classify sounds in real time. The researchers created a dataset of more than 31 hours of 19 common home-related sounds – such as a dog barking, a cat meowing, a baby crying and a door knock.

“People mentioned being able to train their pets when they noticed dog barking sounds from another room or realizing they didn’t have to wait by the door when they were expecting someone to come over,” Jain said. “HomeSound enabled all these new types of interactions people could have in their homes. But many people wanted information throughout the day, when they were out in their cars or going for walks.”

In the second prototype of HomeSound, the tablets sent information to a smartwatch, which is how the researchers got the idea to make the standalone app. Photo: Jain et al./CHI 2020

The researchers then pivoted to a smartwatch system, which allows users to get sound alerts wherever they are, even in places they might not have their phones, such as at the gym.

Because smartwatches have limited storage and processing abilities, the team needed a system that didn't drain the watch's battery and was also fast and accurate. First the researchers compared a compressed version of the HomeSound classifier against three other available sound classifiers. The HomeSound variant was the most accurate, but also the slowest.

To speed up the system, the team has the watch send the sound to a device with more processing power – the user's phone – for classification. Having the phone classify sounds and send the results back to the watch not only saves time but also maintains the user's privacy, because sounds are transferred only between the user's own devices.

SoundWatch sends the sound to the user’s phone for classification. Photo: Jain et al./ASSETS 2020

The researchers tested the SoundWatch app in March 2020 鈥 before Washington’s stay-at-home order 鈥 with eight d/Deaf and hard-of-hearing participants in the Seattle area. Users tested the app at three different locations on or around the UW campus: in a grad student office, in a building lounge and at a bus stop.

People found the app useful for letting them know when something needed their attention – for example, that they had left the faucet running or that a car was honking. On the other hand, it sometimes misclassified sounds (labeling a car driving by as running water) or was slow to notify users (one user was surprised by a person entering the room well before the watch sent a notification about a door opening).

The team is also developing a system that uses augmented reality to provide real-time captions and other sound information through HoloLens glasses.

"We want to harness the emergence of state-of-the-art machine learning technology to make systems that enhance the lives of people in a variety of communities," said senior author Jon Froehlich, an associate professor in the Allen School.

Another current focus is developing a method to pick out specific sounds from background noise and to identify the direction a sound, like a siren, is coming from.

The SoundWatch app is available for free as an Android app. The researchers are eager to hear feedback so that they can make the app more useful.

"Disability is highly personal, and we want these devices to allow people to have deeper experiences," Jain said. "We're now looking into ways for people to personalize these systems for their own specific needs. We want people to be notified about the sounds they care about – a spouse's voice versus general speech, the back door opening versus the front door opening, and more."

Additional researchers on the HomeSound and SoundWatch projects are: , an associate professor in the UW human centered design and engineering department; and , doctoral students in the Allen School; , a high school senior at Bishop Blanchet High School; , who worked on this project as an undergraduate design major at the UW; , a doctoral student in the UW human centered design and engineering department; and , UW undergraduate students studying computer science and engineering; and , a freelance user experience designer. This research was funded by the National Science Foundation and a Google Faculty Research Award.

For more information, contact Jain at djain@cs.washington.edu and Froehlich at jonf@cs.washington.edu.

Grant number: IIS-1763199

Project Sidewalk helps users map accessibility around Seattle, other cities (April 18, 2019)

About 3.6 million adults in the United States use a wheelchair to get around, according to one national estimate.

But unless you’re one of those people, you might not know how hard it is to get around your city.

Now people can help map out accessibility here in Seattle. University of Washington researchers have led the development of Project Sidewalk, an online crowdsourcing game that lets anyone with an internet connection use Google Street View to virtually explore neighborhoods and label curb ramps, missing or rough sidewalks, obstacles and more. Project Sidewalk first launched in Washington, D.C., and it's now available in Newberg, Oregon – near Portland – and Seattle. The team will present its results from the Washington, D.C., deployment May 7 at the 2019 CHI conference in Glasgow, Scotland.

"A lot of people think this is something where you walk around your neighborhood and take pictures of accessibility problems with your smartphone," said corresponding author Jon Froehlich, an assistant professor in the Paul G. Allen School of Computer Science & Engineering. "But Project Sidewalk is not like that at all. There's no assumption that you have any physical experience with what you're reporting on. That is the key difference. Anyone can do it from anywhere, as long as they have a web browser."

To get started on Project Sidewalk, the team interviewed people with mobility impairments to learn how accessibility – or a lack of it – affects their lives. From there the researchers came up with a method to use crowdsourcing to collect street-level data about accessibility in cities.

Project Sidewalk relies on volunteers to log accessibility issues across a city. So the team used a video game model to make it more fun. Players go on missions where they audit 500 to 1,000 feet of a city at a time.

“Your first mission is a guided mission,” Froehlich said. “We have to teach you how you walk around and how to label things. But then we also need to help you understand what accessibility means: What is a curb ramp? What does it mean to have a missing curb ramp?”

Project Sidewalk uses an "onboarding" process to teach players how to manipulate the map and about common accessibility issues. Photo: University of Washington

Then players are sent out on solo missions – they're either dropped into an area of the city that doesn't already have a lot of labels, or they can choose to go to a specific part of the city. For their first few missions, players receive helpful tips about the interface and shortcuts to make their labeling faster. Project Sidewalk also displays a progress bar that shows players how far they've gone on a mission.

“We’ve found that people love seeing that progress bar,” said first author , a doctoral student in the Allen School. “They say it makes it more fun and feel more like a game.”

To learn more about the development of Project Sidewalk, visit the .

After the team launched the Washington, D.C., version of Project Sidewalk in August 2016, 797 players added 205,385 labels to the city’s streets over the 18-month deployment. Players placed labels accurately about 72% of the time and were most likely to find and label curb ramps.

“We’re still working on analyzing the data,” Saha said. “But when we look at all the labels on the map, we can immediately start to see which portions of the city might be having issues.”

Players also made a variety of common errors, such as labeling a driveway as a curb ramp or labeling surface problems on the street when the sidewalks were fine. These errors prompted the team to develop a new verification "minigame" for the Seattle and Newberg versions of Project Sidewalk, in which players verify 10 labels that someone else placed.

“I want it to be like Super Mario Brothers 2, which has these fast little minigames that pop up between levels,” Froehlich said. “It gives people time to breathe and do something different. It’s something you could do on the bus.”

The team developed a new verification "minigame," in which players verify 10 labels that someone else placed. Photo: University of Washington
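One plausible way to turn those verifications into a decision is a simple per-label majority vote; the quorum and rule below are illustrative, not the project's actual logic.

```python
from collections import Counter

def label_verdict(votes: list[bool], quorum: int = 3) -> str:
    """Decide a crowdsourced label's fate from verification votes.

    votes: True means a verifier agreed the label is correct.
    Until `quorum` votes arrive, the label stays in the review queue.
    """
    if len(votes) < quorum:
        return "needs more votes"
    tally = Counter(votes)
    return "keep" if tally[True] > tally[False] else "reject"
```

A scheme like this would let the common errors described above, such as driveways labeled as curb ramps, be filtered out by other players rather than by the original labeler.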

Because the data from Project Sidewalk is available to anyone, the researchers envision that it could serve multiple purposes, from helping government officials decide which areas to investigate first to enhancing the independence of people with mobility impairments.

Data from Seattle’s Project Sidewalk could inform other accessibility projects in the area, Froehlich said. For example, , which provides directions for pedestrians and wheelchair users looking to avoid hills, construction sites and other accessibility barriers, could use data from Project Sidewalk to be able to create better directions.


Eventually the team would like to have computers use machine learning to help people add labels on Project Sidewalk and make accessibility audits go faster. The researchers hope to use the Project Sidewalk data to train an algorithm that would teach computers how to do their own audits.

“My ambitious vision is for anyone in the world to click on their city and have our system provide a visualization and an accessibility assessment,” Froehlich said. “It shouldn’t matter if you live in Paris, France, Beijing, China or Cairo, Egypt. If Google Street View has driven there, you should be able to get a map visualizing the city’s sidewalk accessibility.”

This research received a Best Paper Award among submissions to the CHI conference. Other co-authors are and at the UW; , and at the University of Maryland; at Intelsat, who completed this research while at the University of Maryland; Ryan Holland at UCLA, who completed this research while at Montgomery Blair High School; at the University of Michigan, who completed this research while at the University of Maryland; and at Singapore Management University, who completed this research while at the University of Maryland. This research was funded by the National Science Foundation, a Singapore Ministry of Education AcRF Tier 1 Grant and a Sloan Research Fellowship.

###

For more information, contact Froehlich at jonf@cs.uw.edu.

Grant number: IIS-1302338
