Franziska Roesner – UW News

Q&A: How TikTok's 'black box' algorithm and design shape user behavior
/news/2024/04/24/tiktok-black-box-algorithm-and-design-user-behavior-recommendation/ (April 24, 2024)
Franziska Roesner, UW associate professor, set about researching both how TikTok's algorithm is personalized and how users engage with TikTok based on its recommendations.

TikTok's swift ascension to the upper echelons of social media is often attributed to its recommendation algorithm, which predicts viewer preferences so acutely that users joke it knows them better than they know themselves. The platform's success was so pronounced that it appears to have spurred other social media platforms to shift their designs: when users scroll through X or Instagram, they now see similarly recommendation-driven feeds.

Yet for all that influence, the public knows little about how TikTok's algorithm functions. So Franziska Roesner, a University of Washington associate professor in the Paul G. Allen School of Computer Science & Engineering, set about researching both how that algorithm is personalized and how TikTok users engage with the platform based on those recommendations.

Roesner and collaborators will present two papers this month that mine real-world data to help understand the “black box” of TikTok’s recommendation algorithm and its impact.

Researchers first recruited 347 TikTok users, who downloaded their data from the app and donated 9.2 million video recommendations. Using that data, the team initially looked at how TikTok personalized its recommendations. In the first 1,000 videos TikTok showed users, the team found that a third to half of the videos were shown based on TikTok's predictions of what those users like. The researchers will publish these findings May 13 in the Proceedings of the ACM Web Conference 2024.

The second paper, which the team will present May 14 at the ACM CHI Conference on Human Factors in Computing Systems in Honolulu, explored engagement trends. Researchers discovered that over the users' first 120 days, average daily time on the platform increased from about 29 minutes on the first day to 50 minutes on the last.
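The papers do not include their analysis code; as a rough illustration of how such an engagement trend could be computed from donated watch histories, here is a minimal pandas sketch. The file name and column names (user_id, watched_at, watch_seconds) are invented for the example, not the study's actual schema.

```python
import pandas as pd

# Hypothetical donated watch history: one row per video view, with a timestamp
# and a watch duration in seconds. The schema here is invented for illustration.
views = pd.read_csv("donated_watch_history.csv", parse_dates=["watched_at"])

# Index each view by the number of days since that user's first recorded view.
first_view = views.groupby("user_id")["watched_at"].transform("min")
views["day_index"] = (views["watched_at"] - first_view).dt.days

# Minutes watched per user per day, averaged across users for the first 120 days.
daily_minutes = (
    views[views["day_index"] < 120]
    .groupby(["user_id", "day_index"])["watch_seconds"].sum()
    .div(60)
    .groupby("day_index").mean()
)
print(daily_minutes.head())  # e.g., roughly 29 minutes on day 0, rising over time
```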

UW News spoke with Roesner about how TikTok recommends videos; the impact that has on users; and the ways tech companies, regulators and the public might mitigate unwanted effects.

What is it important for us to understand about how TikTok’s algorithm functions?

Franziska Roesner: TikTok users often have questions like: “Why was I shown this content? What does TikTok know about me? How is it using what it knows about me? And is it?” So we looked at what TikTok shows people and by what criteria. If we better understand how the algorithm functions, then we can ask whether we like how it works.

For example, if the algorithm is exploiting people’s weaknesses around certain types of content, if it predicts that I’m more likely to be susceptible to a certain type of misinformation, it could be pushing me down certain rabbit holes that might be dangerous to me. Maybe they mislead me, or they exacerbate mental health challenges or eating disorders. The algorithm is such a black box, to the public and to regulators. And to some extent, it probably is to TikTok itself. It’s not like someone is writing code that’s targeting a person who’s vulnerable to an eating disorder. The algorithm is just making predictions from a bunch of data. So we as researchers are interested in the features that it is using to predict, because we can’t really understand if and why a prediction is problematic without understanding those.

We also looked at how people engage with TikTok’s algorithm as we understand it. These considerations go hand in hand. As a security and privacy person, I’m always really interested in how people interact with technologies and how their designs shape what we read and believe and share. So researching the human experience helps to understand the impact of the algorithm and the platform design.

What did you learn from these studies?

FR: One thing that surprised me a little was that those of us who use TikTok — and I do use TikTok — probably spend more time on it than we wish to admit. I was also a little surprised that people watch only about 55% of videos to the end. We debated whether this was high or low. Is this part of the platform’s design, that once you’ve got whatever you wanted to get out of this video you move on? Or is it a sign that even this highly tuned recommendation algorithm is not doing that well? I don’t know which it is. But it’s useful to at least have a baseline to compare future findings against.

For more of Roesner's research on TikTok, see her work on parents posting videos of their children on the platform.

Another important takeaway was looking at what features influence what videos the algorithm shows you. How much agency is TikTok potentially taking from us? How good is it at predicting what we’re likely to want to watch? How rabbit hole-y do those things get? In the study, we labeled each video within a user’s timeline as an “exploration video” or an “exploitation video.” An exploration video is not linked to videos that the user has seen before — for instance, there are no similar hashtags or creators. The idea is that there’s some value in the algorithm showing you new stuff. Maybe there’s societal value to not putting you down a rabbit hole. There’s also probably value for TikTok, because the more you see the same stuff, the more bored you get. They want to throw some spaghetti at the wall and see what sticks.

The exploitation videos are the ones that are more like, "We know what you like, we're going to show you more videos that are related to these." In the study, we looked at what fraction of the videos are explorative versus exploitative. We found that in the first 1,000 videos users saw, TikTok exploited users' interests between 30% and 50% of the time. We then looked at how the videos differed and how TikTok treated them. For example, if you're following someone, you're significantly more likely to see videos from them. That's probably not surprising. However, based on our data, scrolling past a video quickly does not seem to have as much of an effect on what the algorithm does.
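The paper's actual exploration/exploitation criteria are more detailed than an interview can convey; as a simplified sketch of the labeling idea, the snippet below marks a video as "exploitation" if it shares a creator or any hashtag with a video the user has already been shown. The data format is an assumption for the example, not TikTok's.

```python
def label_timeline(timeline):
    """Label each video in a user's timeline as 'exploration' or 'exploitation'.

    Simplified stand-in for the paper's criteria: a video counts as exploitation
    if it shares a creator or any hashtag with a video the user has already seen.
    Each video is assumed to be a dict with a "creator" string and a "hashtags" set.
    """
    seen_creators, seen_hashtags = set(), set()
    labels = []
    for video in timeline:
        overlaps = (video["creator"] in seen_creators
                    or bool(video["hashtags"] & seen_hashtags))
        labels.append("exploitation" if overlaps else "exploration")
        seen_creators.add(video["creator"])
        seen_hashtags |= video["hashtags"]
    return labels

timeline = [
    {"creator": "a", "hashtags": {"#cats"}},
    {"creator": "b", "hashtags": {"#cats", "#funny"}},  # shares #cats -> exploitation
    {"creator": "c", "hashtags": {"#woodworking"}},     # nothing in common -> exploration
]
print(label_timeline(timeline))  # ['exploration', 'exploitation', 'exploration']
```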

We also found that people were less likely to finish videos from accounts they were following, but more likely to engage with them. We hypothesized that if someone sees a video from their friend, maybe they're not that interested and don't want to watch, but they still want to show support, so they engage.

In these papers you make several suggestions to mitigate the potential negative effects of TikTok’s design. Could you explain a few of those?

FR: We found that the data donations were not complete enough for us to be able to answer all the questions that we had. So there's some lack of transparency in the data users could download and about the algorithm overall. We've seen this in other studies. People have looked at Facebook's ad-targeting disclosures. If you ask why you're seeing an ad, the platform usually offers the broadest criteria that were included — that you're over 18 and in the United States, for instance. Yes, but you might also be seeing it because you visited that product's website yesterday, and the company isn't sharing that. I'd like to see more transparency about how people's data is used. Whether that would change what an individual would do is a different question. But I see it as the duty of the platform to help us understand that.

That also connects to regulation. Even if that information doesn’t change an individual’s behavior, it’s vital to be able to do studies that show, for example, how a vulnerable population is being disproportionately targeted with a certain type of content. That kind of targeting is not necessarily intentional, but if you don’t know that’s happening, you can’t stop it. We don’t know how these platforms are auditing internally, but there’s always a value in having external auditors with different incentives.

Before we had these platforms, we understood more about how certain content got to certain people because it came in newspapers or on billboards. Now we have a situation where everybody’s got their own little reality. So it’s hard to reason about what people are seeing and why and how that all fits together — let alone what to do about it — if we can’t even see it.

What is important for people to know about TikTok?

FR: Awareness is helpful. Remember that the platform and the algorithm kind of shape how you view the world and how you interact with the content. That's not always bad; it can be good. But the platform designs are not neutral, and they influence how long you watch and what you watch, and what you're getting angry or concerned about. Just remember that the algorithm shows you stuff in large part because it's predicting what you might want to see. And there are other things you're not seeing.

Additional co-authors on the papers included Karan Vombatkere of Boston University; Sepehr Mousavi, Olivia Nemes-Nemeth, Angelica Goetzen and Krishna P. Gummadi of Max Planck Institute for Software Systems; Oshrat Ayalon of University of Haifa and Max Planck Institute for Software Systems; Savvas Zannettou of TU Delft; and Elissa M. Redmiles of Georgetown University.

For more information, contact Roesner at franzi@cs.washington.edu.

Political ads during the 2020 presidential election cycle collected personal information and spread misleading information
/news/2021/11/08/political-ads-2020-presidential-election-collected-personal-information-spread-misleading-information/ (Nov. 8, 2021)
UW researchers found that political ads during the 2020 election season used multiple concerning tactics, including posing as a poll to collect people's personal information or having headlines that might affect web surfers' views of candidates. Photo: University of Washington

Online advertisements are frequently splashed across news websites. Clicking on these banners or links provides the news site with revenue. But these ads also often use manipulative techniques, researchers say.

University of Washington researchers were curious about what types of political ads people saw during the 2020 presidential election. The team looked at more than 1 million ads from almost 750 news sites between September 2020 and January 2021. Of those ads, almost 56,000 had political content.

Political ads used multiple tactics that concerned the researchers, including posing as a poll to collect people’s personal information or having headlines that might affect web surfers’ views of candidates.

The researchers presented these findings Nov. 3 at the ACM Internet Measurement Conference 2021.

“The election is a time when people are getting a lot of information, and our hope is that they are processing it to make informed decisions toward the democratic process. These ads make up part of the information ecosystem that is reaching people, so problematic ads could be especially dangerous during the election season,” said senior author Franziska Roesner, UW associate professor in the Paul G. Allen School of Computer Science & Engineering.

The team wondered if or how ads would take advantage of the political climate to prey on people’s emotions and get people to click.

“We were well positioned to study this phenomenon because of our previous research on misleading information and manipulative techniques in online ads,” said Tadayoshi Kohno, UW professor in the Allen School. “Six weeks leading up to the election, we said, ‘There are going to be interesting ads, and we have the infrastructure to capture them. Let’s go get them. This is a unique and historic opportunity.'”

The researchers created a list of news websites that spanned the political spectrum and then used a web crawler to visit each site every day. The crawler scrolled through the sites and took screenshots of each ad before clicking on the ad to collect the URL and the content of the landing page.

The team wanted to make sure to get a broad range of ads, because someone based at the UW might see a different set of ads than someone in a different location.

“We know that political ads are targeted by location. For example, ads for Washington candidates will only be featured to viewers browsing from the state of Washington. Or maybe a presidential campaign will have more ads featured in a swing state,” said lead author Eric Zeng, UW doctoral student in the Allen School.

“We set up our crawlers to crawl from different locations in the U.S. Because we didn’t actually have computers set up across the country, we used a VPN to make it look like our crawlers were loading the sites from those locations.”

The researchers initially set up the crawlers to search news sites as if they were based in Miami, Seattle, Salt Lake City and Raleigh, North Carolina. After the election, the team also wanted to capture any ads related to the Georgia special election and the Arizona recount, so two crawlers started searching as if they were based in Atlanta and Phoenix.
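The study's crawling infrastructure isn't included in the article; below is a minimal Playwright sketch of the approach described above: load each news site through a per-city proxy, scroll so lazy-loaded ads render, and save a screenshot of each ad creative. The proxy endpoints, site list and ad-frame selector are placeholders, not the study's actual configuration.

```python
from playwright.sync_api import sync_playwright

# Hypothetical vantage points: each city maps to a proxy endpoint that makes the
# crawler's traffic appear to originate there. These endpoints are placeholders.
VANTAGE_POINTS = {
    "seattle": "http://proxy-sea.example.com:8080",
    "miami": "http://proxy-mia.example.com:8080",
}
NEWS_SITES = ["https://example-news-site.com"]  # the real study visited almost 750 sites

def crawl(city, proxy_url):
    with sync_playwright() as p:
        browser = p.chromium.launch(proxy={"server": proxy_url})
        page = browser.new_page()
        for site in NEWS_SITES:
            page.goto(site, wait_until="networkidle")
            page.mouse.wheel(0, 5000)     # scroll so lazy-loaded ads render
            page.wait_for_timeout(3000)
            # Selector is illustrative; real ad iframes vary by ad network.
            # The study's crawler also clicked each ad to record its landing page.
            for i, ad in enumerate(page.query_selector_all("iframe[id*='google_ads']")):
                ad.screenshot(path=f"{city}_ad_{i}.png")
        browser.close()

for city, proxy_url in VANTAGE_POINTS.items():
    crawl(city, proxy_url)
```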

The team continued crawling sites throughout January 2021 to capture any ads related to the Capitol insurrection.

Some political ads posed as a poll to collect people's personal information. The examples shown included a poll asking whether Trump should concede, an ad asking people to sign a thank-you card for Dr. Fauci, an ad that says "Sign the petition that Nancy Pelosi hates," and a poll about whether illegal immigrants should get unemployment benefits. Photo: University of Washington

The researchers used natural language processing to classify ads as political or non-political. Then the team went through the political ads manually to further categorize them, such as by party affiliation, who paid for the ad or what types of tactics the ad used.
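The article doesn't specify which NLP model the team used, so the snippet below is only a minimal sketch of the general approach: train a simple supervised text classifier on a hand-labeled set of ad texts and use it to pull out likely political ads for manual coding. The example ads and labels are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, invented seed set; the actual study labeled far more ad text than this.
ads = [
    "Should the president be impeached? Vote now",
    "Sign the petition to protect your rights",
    "These walking shoes are 70% off today",
    "One weird trick to lower your car insurance",
]
labels = [1, 1, 0, 0]  # 1 = political, 0 = non-political

# Bag-of-words features plus logistic regression: a common baseline text classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(ads, labels)

new_ads = ["Which candidate do you support in 2020?", "Top 10 celebrity diets"]
print(clf.predict(new_ads))  # ads flagged as political go on for manual review
```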

“We saw these fake poll ads that were harvesting personal information, like email addresses, and trying to prey on people who wanted to be politically involved. These ads would then use that information to send spam, malware or just general email newsletters,” said co-author , UW doctoral student in the Allen School. “There were so many fake buttons in these ads, asking people to accept or decline, or vote yes or no. These things are clearly intended to lead you to give up your personal data.”

Ads that appeared to be polls were more likely to be used by conservative-leaning groups, such as conservative news outlets and nonprofit political organizations. These ads were also more likely to be featured on conservative-leaning websites.

The most popular type of political ad was click-bait news articles that often mentioned top politicians in sensationalist headlines, but the articles themselves contained little substantial information. The team observed more than 29,000 of these ads, and the crawlers often encountered the same ad multiple times. Similar to the fake poll ads, these were also more likely to appear on right-leaning sites.

“One example was a headline that said, ‘There’s something fishy in Biden’s speeches,'” said Roesner, who is also the co-director of the UW Security and Privacy Research Lab. “I worry that these articles are contributing to a set of evidence that people have amassed in their minds. People probably won’t remember later where they saw this information. They probably didn’t even click on it, but it’s still shaping their view of a candidate.”

Click-bait news articles often mentioned top politicians in sensationalist headlines, but the articles themselves contained little substantial information. The examples shown included Pence making an "eyebrow raising declaration after DC siege," "Joe Biden goes on head-turning rant, fires off at reporter," and Ted Cruz making a "head turning statement to Trump about the riot." Photo: University of Washington

The researchers were surprised and relieved, however, to find a lack of ads containing explicit misinformation about how and where to vote, or who won the election.

“To their credit, I think the ad platforms are catching some misinformation,” Zeng said. “What’s getting through are ads that are exploiting the gray areas in content and moderation policies, things that seem deceptive but stay within the letter of the law.”

The world of online ads is so complicated, the researchers said, that it’s hard to pinpoint exactly why or how certain ads appear on specific sites or are viewed by specific viewers.

 

  • This paper was one of three runners-up for the best paper award at the ACM Internet Measurement Conference.

 

“Certain ads get shown in certain places because the system decided that those would be the most lucrative ads in those spots,” Roesner said. “It’s not necessarily that someone is sitting there doing this on purpose, but the impact is still the same —  people who are the most vulnerable to certain techniques and certain content are the ones who will see it more.”

To protect computer users from problematic ads, the researchers suggest web surfers should be careful about taking content at face value, especially if it seems sensational. People can also limit how many ads they see by getting an ad blocker.

, a UW undergraduate student studying computer science, is also a co-author on this paper. This research was funded by the National Science Foundation, the , and the John S. and James L. Knight Foundation.

For more information, contact badads@cs.washington.edu.

Grant number: CNS-2041894

Three UW teams awarded NSF Convergence Accelerator grants for misinformation, ocean projects
/news/2021/10/01/three-uw-teams-awarded-nsf-convergence-accelerator-grants-for-misinformation-ocean-projects/ (Oct. 1, 2021)

Three separate University of Washington research teams have been awarded $750,000 each by the National Science Foundation to advance studies in misinformation and the ocean economy.

The teams were selected for phase 1 of the Convergence Accelerator program's 2021 cohort. The federal agency hopes to build upon basic research and discovery to accelerate solutions in two critical areas: the "Networked Blue Economy" and "Trust and Authenticity in Communications Systems."

One team, from the UW Applied Physics Laboratory, was selected for the “Networked Blue Economy” track topic, and two UW teams — one from the UW Information School and another from the APL — were selected for the “Trust and Authenticity in Communications Systems” track.

Designed to transition basic research and discovery into practice, the Convergence Accelerator uses innovation processes like human-centered design, user discovery, team science, and integration of multidisciplinary research and partnerships. The Convergence Accelerator, now in its third year, aims to solve high-risk societal challenges through use-inspired convergence research, according to NSF.

The three projects that teams from the UW will lead include:

  • The “” project, from the APL and industry partners, will produce a flexible proof-of-concept technology to help people evaluate the source of information and its reliability. Drawing on the fields of technology development, law, business, policy, curriculum development, community management, interdisciplinary research and finance, the team will develop tools and components to generate and communicate digital “trust signals” in various settings. The result will be a proof-of-concept for a verified information exchange that would support tools that users can deploy to assess the trustworthiness and authenticity of digital information. Workstreams are anticipated to include food system safety and security, bank and financial information systems, public health information systems, academic publication and supply chains. , a principal research scientist at the APL, is the lead investigator.
  • The “” project team, composed of a multidisciplinary set of researchers from the UW, the University of Texas at Austin, Washington State University, Seattle Central College and Black Brilliance Research, will plan, facilitate and assess a series of seven workshops focusing on critical reasoning skills, the psychological and emotional aspects of information, and broader sociocultural dimensions of trust in information ecosystems. The workshop series will be hosted in collaboration with a diverse group of local stakeholders in Washington state and Texas, including urban and rural libraries, news outlets, civic organizations, and underrepresented communities. , an Information School associate professor and UW co-founder, is the principal investigator on the project.
  • In the “” project, three new community-run ocean sensors will provide Indigenous coastal communities with real-time data on the changing ocean environment. The floating systems, anchored to the seafloor, will be deployed in collaboration with coastal communities in Alaska, the Pacific Northwest and the Pacific Islands. Sofar Ocean’s existing buoy systems — designed to be affordable and convenient — can measure waves, sea surface temperature, cloudiness of the water, and water depth, and come equipped with solar power, satellite communication and potential for expansion. The project housed under will be done through the UW-based as well as its counterparts in Alaska and the Pacific Islands, which have long-standing, trusted relationships with Indigenous and coastal communities. , an oceanographer at the APL and the director of NANOOS, is the lead investigator.

Additionally, Assistant Professor Amy X. Zhang and Associate Professor Franziska Roesner, both in the UW Paul G. Allen School of Computer Science & Engineering, are co-principal investigators on a team led by an international grassroots community. That team aims to develop practical interventions to help individuals and community moderators analyze information quality, including misinformation, to build trust and address vaccine hesitancy. Zhang also is on another team, based at the University of Michigan, that will help media platforms determine how to flag articles that contain misinformation.

During phase 1, each UW team will engage with the other members of their cohort in a fast-paced, nine-month hands-on journey, which includes the program’s innovation curriculum, formal pitch and phase 2 proposal evaluation. The program’s team-based approach creates a “co-opetition” environment, stimulating the sharing of innovative ideas toward solving complex challenges together, while in a competitive environment to try and progress to phase 2.

At the end of phase 1, each team participates in a formal pitch and proposal evaluation. Selected teams from phase 1 will proceed to phase 2, with potential funding up to $5 million for 24 months. Phase 2 teams will continue to apply Convergence Accelerator fundamentals to develop solution prototypes and to build a sustainability model to continue impact beyond NSF support. By the end of phase 2, teams are expected to provide high-impact solutions that address societal needs at scale.

Launched in 2019, the NSF Convergence Accelerator program builds upon basic research and discovery to accelerate solutions toward societal impact. Using convergence research fundamentals and integration of innovation processes, it brings together multiple disciplines, expertise and cross-cutting partnerships to solve national-scale societal challenges.

Soundbites: UW researchers examine deceptive ads on news websites
/news/2020/09/28/soundbites-uw-researchers-examine-deceptive-ads-on-news-websites/ (Sept. 28, 2020)

In this video:

Franziska Roesner, associate professor in the Allen School
Eric Zeng, graduate research assistant in the Allen School

Journalists: download soundbites 

With the election season ramping up, political ads are being splashed across the web. In the age of misinformation, how can news consumers tell if the ads they’re seeing are legitimate?

USA Today and other mainstream news sites might seem like they would limit access to deceptive ads. But a study by University of Washington researchers found that both mainstream and misinformation news sites displayed similar levels of problematic ads.

The team, composed of researchers in the Paul G. Allen School of Computer Science & Engineering, in mid-January collected more than 55,000 ads across more than 6,000 mainstream news sites and about 1,000 misinformation news sites (such as those on ). Then the researchers manually examined ads from 100 each of the most popular mainstream and misinformation sites to categorize them as problematic or not. The team presented these findings May 21 at the Workshop on Technology and Consumer Protection.

Read more here.

Kiyomi Taguchi ktaguchi@uw.edu / 206-685-2716

Q&A: UW researchers clicked ads on 200 news sites to track misinformation
/news/2020/09/28/uw-researchers-clicked-ads-on-200-news-sites-to-track-misinformation/ (Sept. 28, 2020)

Editor's note: All images of ads in this story are screenshots and are intended to help illustrate points in the text.

A screenshot of ads hosted by the ad platform Taboola: one ad is about where Kirkland products come from and another is about N95 masks. UW researchers found that both mainstream and misinformation news sites displayed similar levels of problematic ads.

With the election season ramping up, political ads are being splashed across the web. But in the age of misinformation, how can news consumers tell if the ads they’re seeing are legitimate?

USA Today and other mainstream news sites might seem like they would limit access to deceptive ads. But a study by University of Washington researchers found that both mainstream and misinformation news sites displayed similar levels of problematic ads.

The team, composed of researchers in the Paul G. Allen School of Computer Science & Engineering, in mid-January collected more than 55,000 ads across more than 6,000 mainstream news sites and about 1,000 misinformation news sites (such as those on ). Then the researchers manually examined ads from 100 each of the most popular mainstream and misinformation sites to categorize them as problematic or not. The team presented these findings May 21 at the Workshop on Technology and Consumer Protection.

Franziska Roesner, associate professor in the Allen School, and Eric Zeng, graduate research assistant in the Allen School, talk about deceptive ads on news sites. Soundbites available .

UW News had a conversation with the team about this research, where ads on news sites come from, and how things might change leading up to the election.

It sounds like there are two main types of ads on these sites: native and display ads. What’s the difference?

Eric Zeng, graduate research assistant in the Allen School: A “native ad” is designed to blend in with the rest of the page. So for example on a news site, a native ad would look like a headline for a news article. Or in an app like Yelp, it’d be a sponsored listing for a restaurant. Sometimes sites will try to make ads very clear by having a big button that says “ad” or “ad content.” But sometimes sites make it vague so it’s hard for people to tell.

A screenshot of three native ads: one about celebrities who refuse to admit they aren't famous anymore, one about a new cash law coming before the election and one about a drone that captured photos no one was supposed to see.

“Display ads,” also sometimes called “banner ads,” are generally on the top or the bottom of the screen, in a sidebar or within the text of a news article. They look like images.

What makes an ad “problematic”?

Franziska Roesner, associate professor in the Allen School: That’s exactly one of the questions we are trying to study. We see all sorts of techniques in the wild, such as clickbait, native ads that look like articles, gross images, polls, sensational claims and more. We’re trying to classify and measure these types of techniques and study how prevalent they are. Now we’re also studying how users react to them.

Tadayoshi Kohno, professor in the Allen School: In one sense, an ad on the web is just a paid way for me to get something in front of someone else, so they can click on it and come to my site. But advertising on the web can also be a mechanism to deliver content, as opposed to the old-fashioned definition of selling a product.

This ad says "Trump impeachment poll. Do you support Trump? Click (Yes) or (No).
A screenshot of an ad that looks like a political poll.

EZ: If you put up a billboard or poster, you have to convey the whole message there and hope it inspires people to do whatever you want. But for online ads, you just need to get people to click.

We saw ads that looked like political opinion polls, asking things like ‘Should Donald Trump be impeached?’ or ‘Which candidate do you prefer for president?’ Then if you click on it, it just takes you to an ad for some other product. Or maybe it really is a poll, but when you click on it, you have to sign up for a mailing list to submit your vote.

This medium enables different types of deceptions.

FR: Also, a billboard in the physical world is clearly an ad. We all understand that. But an ad that looks like a news headline that’s sitting among other legitimate headlines is potentially problematic. If I’m visiting The New York Times or another news outlet that I trust, and I can’t distinguish something on there as an ad, then I’m trusting that content way more than I would if I were on some random site.

Where do the ads we see on news sites come from?

EZ: News sites will embed a bit of code from an ad provider, like Google Ads, on their websites. Then when someone goes to the news site, the ad provider will look at all of the ads that advertisers have submitted, hold an auction among the advertisers to determine which ad is picked and then display the winning ad on the website.
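Real ad exchanges involve many parties and real-time bidding, but the selection step Zeng describes is often simplified as a second-price auction. Here is a minimal sketch of that idea; the advertisers and bid values are invented.

```python
def run_auction(bids):
    """Pick a winning ad with a simple second-price auction.

    `bids` maps an ad creative to its bid (dollars per impression). The highest
    bidder wins the slot but pays the second-highest bid, a common textbook
    simplification of how ad exchanges clear an impression.
    """
    ranked = sorted(bids.items(), key=lambda item: item[1], reverse=True)
    (winner, _), (_, price_paid) = ranked[0], ranked[1]
    return winner, price_paid

bids = {"shoe_ad": 0.40, "poll_ad": 0.55, "clickbait_ad": 0.52}
winner, price_paid = run_auction(bids)
print(winner, price_paid)  # poll_ad wins the impression and pays 0.52
```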

FR: The ecosystem is really complicated. Let’s say The Seattle Times were to say, ‘We don’t want these types of ads on our site.’ It’s not so simple. It’s not like The Seattle Times chose the ads we’re seeing. They work with some ad providers that work with a bunch of other companies.

So if there’s a problematic ad on The Seattle Times site, it’s coming from what ad providers are pulling in. There’s also the targeting aspect: Who is viewing the page? Someone who tends to click on a certain type of ad is probably more likely to see it. Different visitors to the same site will get different ads. So it’s not even like the editors can load the page and see what the ads on their site will look like in advance.

What made you, as security and privacy experts, decide to start studying this?

FR: There’s been a lot of work in the security community, including work that we’ve done, looking at this broader ad ecosystem, but mostly from a privacy perspective — such as looking at what data ads collect about users’ browsing behaviors — or from a security perspective — such as looking for ads that are used to spread malware.

But then we started thinking about the fact that so much content that people see online is not from the primary websites they’re browsing, but from the ads on those pages. These ads might not necessarily be outright misinformation or lead to misinformation sites, but they’re still preying on the same types of biases.

TK: When asked about bad ads, privacy researchers used to talk about mechanisms — for example, studying how an ad is pervasively tracking an individual. This paper is broadening the definition, taking a look at it from the perspective of the content of the ad, and where it takes someone if they click on it.

FR: Instead of a technical attack where your computer is vulnerable, we’re thinking about it as more like your brain is vulnerable.

What was your goal with this project?

EZ: We wanted to compare mainstream news sites versus misinformation news sites to see if the quality of the ad content on those sites was any different. We hypothesized that we’d see more problematic ads on misinformation sites. But both had roughly similar quantities of these problematic ads. It’s evidence that both these types of websites are participating in the same advertising ecosystem.

For example, we found that the advertising provider Taboola ran more of the problematic ads than any of the other ad platforms that sites use. Taboola also claims that their ads provide more revenue to websites than standard banner display ads. If these ads can get people to click, then that’s earning the websites money.

Then, because mainstream news sites are struggling, they might be turning to ad providers like Taboola because it’s the best way to sustain their business, unfortunately. And then same for misinformation sites, it’s a way to make a quick buck by tricking people into clicking on these ads.

Why have ads if they’re going to be problematic?

FR: There’s tension here — the outcome can’t be ‘ads are bad.’ They fund the economic model of the web. I think legitimate content websites are walking this weird line between the quality of ad content and the revenue that they’re making from it.

The hope is that somehow we can balance these things so we can have ads and revenue, but improve the quality of content that people are seeing online.

How do you think the upcoming election will change the types of content from what you saw in January?

A screenshot of a political banner ad on a news site: a picture of Donald Trump with the text "Radical democrats want to take away your guns! Sign the petition."

FR: We anticipate that things will get more interesting near the election, in terms of actual political ads and the mechanisms and techniques people will use. But we’re also interested in seeing if there are ads that use the political climate, such as those fake polls that aren’t legitimate ads for political candidates, as part of the technique.

EZ: We plan to continue collecting data to see what tactics these campaigns are using leading up to the election.

What, if anything, should people do as they’re seeing ads on their favorite news sites?

FR: In doing this work, I think I’ve become more aware of all the content on a page, but the ads in particular because they’re designed to draw you in. I’m practicing being more aware of my reactions to them.

TK: We’ve developed an intuition of what to be aware of when we’re crossing the street — Is there a crosswalk nearby? Has traffic in the opposite direction stopped? But I would say that in the online world, it’s sometimes hard to have that sense. Is a website intentionally trying to mislead us or is it just confusing?

We need to develop this level of street awareness, where we know that not everything out there on the web has our best interests at heart.

FR: It leads to a separate research question that we’re following up on now: How do we help people be aware of the emotional and cognitive impacts of these things? Eric, you looked at the most ads as part of this research. Do you have any advice?

EZ: Get an ad blocker.

This research was funded by The National Science Foundation.

For more information, contact Zeng at ericzeng@cs.washington.edu, Kohno at yoshi@cs.washington.edu and Roesner at franzi@cs.washington.edu.

Grant numbers: CNS-1565252, CNS-1651230

How people investigate — or don’t — fake news on Twitter and Facebook
/news/2020/03/18/how-people-investigate-fake-news-on-twitter-and-facebook/ (March 18, 2020)
UW researchers studied how people investigated potentially suspicious posts on their own Facebook and Twitter feeds.

Social media platforms, such as Facebook and Twitter, provide people with a lot of information, but it’s getting harder and harder to tell what’s real and what’s not.

Researchers at the University of Washington wanted to know how people investigated potentially suspicious posts on their own feeds. The team watched 25 participants scroll through their Facebook or Twitter feeds while, unbeknownst to them, a Google Chrome extension randomly added debunked content on top of some of the real posts. Participants had various reactions to encountering a fake post: Some outright ignored it, some took it at face value, some investigated whether it was true, and some were suspicious of it but then chose to ignore it. The findings have been accepted to the 2020 ACM CHI conference on Human Factors in Computing Systems.

“We wanted to understand what people do when they encounter fake news or misinformation in their feeds. Do they notice it? What do they do about it?” said senior author Franziska Roesner, a UW associate professor in the Paul G. Allen School of Computer Science & Engineering. “There are a lot of people who are trying to be good consumers of information and they’re struggling. If we can understand what these people are doing, we might be able to design tools that can help them.”

Participants had various reactions to encountering a fake post: Some outright ignored it, some took it at face value, some investigated whether it was true, and some were suspicious of it but then chose to ignore it. Photo: Franziska Roesner/University of Washington

Previous research on how people interact with misinformation asked participants to examine content from a researcher-created account, not from someone they chose to follow.

“That might make people automatically suspicious,” said lead author Christine Geeng, a UW doctoral student in the Allen School. “We made sure that all the posts looked like they came from people that our participants followed.”

The researchers recruited participants ages 18 to 74 from across the Seattle area, explaining that the team was interested in seeing how people use social media. Participants used Twitter or Facebook at least once a week and often used the social media platforms on a laptop.

Then the team developed a Chrome extension that would randomly add fake posts or memes that had been debunked by the fact-checking website Snopes.com on top of real posts to make it temporarily appear they were being shared by people on participants’ feeds. So instead of seeing a cousin’s post about a recent vacation, a participant would see their cousin share one of the fake stories instead.
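The study used a custom Chrome extension, whose code is not part of this article; the snippet below is only a rough Python/Selenium analogue of the same idea for a moderated lab session: overlay a debunked post on top of the first real post in the participant's own feed. The CSS selector and the fake-post markup are placeholders.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Rough analogue of the study's Chrome extension, using Selenium instead.
# The CSS selector and the fake-post markup below are placeholders.
FAKE_POST_HTML = "<div class='injected-post'>[debunked headline from Snopes.com]</div>"

driver = webdriver.Chrome()
driver.get("https://twitter.com/home")  # participant is already logged in
input("Press Enter once the participant's feed has loaded...")

posts = driver.find_elements(By.CSS_SELECTOR, "article")
if posts:
    # Swap the visible content of the first real post for the debunked post,
    # so it appears to be shared by someone the participant already follows.
    driver.execute_script(
        "arguments[0].innerHTML = arguments[1];", posts[0], FAKE_POST_HTML)
```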

An example of a fake post that a participant might see on their Facebook feed during the study. Photo: Geeng et al./2020 ACM CHI conference on Human Factors in Computing Systems

The researchers either installed the extension on the participant’s laptop or the participant logged into their accounts on the researcher’s laptop, which had the extension enabled. The team told the participants that the extension would modify their feeds — the researchers did not say how — and would track their likes and shares during the study — though, in fact, it wasn’t tracking anything. The extension was removed from participants’ laptops at the end of the study.

An example of a fake post that a participant might see on their Twitter feed during the study. Photo: Geeng et al./2020 ACM CHI conference on Human Factors in Computing Systems

“We’d have them scroll through their feeds with the extension active,” Geeng said. “I told them to think aloud about what they were doing or what they would do if they were in a situation without me in the room. So then people would talk about ‘Oh yeah, I would read this article,’ or ‘I would skip this.’ Sometimes I would ask questions like, ‘Why are you skipping this? Why would you like that?'”

Participants could not actually like or share the fake posts. On Twitter, a “retweet” would share the real content beneath the fake post. The one time a participant did retweet content under the fake post, the researchers helped them undo it after the study was over. On Facebook, the like and share buttons didn’t work at all.

An example of a fake post that a participant might see on their Facebook feed during the study. A participant mentioned skipping this post because they saw the word “Florida” and decided it didn’t pertain to them. Photo: Geeng et al./2020 ACM CHI conference on Human Factors in Computing Systems

After the participants encountered all the fake posts — nine for Facebook and seven for Twitter — the researchers stopped the study and explained what was going on.

“It wasn’t like we said, ‘Hey, there were some fake posts in there.’ We said, ‘It’s hard to spot misinformation. Here were all the fake posts you just saw. These were fake, and your friends did not really post them,'” Geeng said. “Our goal was not to trick participants or to make them feel exposed. We wanted to normalize the difficulty of determining what’s fake and what’s not.”

The researchers concluded the interview by asking participants to share what types of strategies they use to detect misinformation.

In general, the researchers found that participants ignored many posts, especially those they deemed too long, overly political or not relevant to them.

But certain types of posts made participants skeptical. For example, people noticed when a post didn’t match someone’s usual content. Sometimes participants investigated suspicious posts — by looking at who posted it, evaluating the content’s source or reading the comments below the post — and other times, people just scrolled past them.

“I am interested in the times that people are skeptical but then choose not to investigate. Do they still incorporate it into their worldviews somehow?” Roesner said. “At the time someone might say, ‘That’s an ad. I’m going to ignore it.’ But then later do they remember something about the content, and forget that it was from an ad they skipped? That’s something we’re trying to study more now.”

A Twitter post that says "Last year lettuce killed more Americans than undocumented immigrants so it's a good thing we're halting food inspections over a wall that won't work."
An example of a fake post that a participant might see on their Twitter feed during the study. Photo: Geeng et al./2020 ACM CHI conference on Human Factors in Computing Systems

While this study was small, it does provide a framework for how people react to misinformation on social media, the team said. Now researchers can use this as a starting point to seek interventions to help people resist misinformation in their feeds.

“Participants had these strong models of what their feeds and the people in their social network were normally like. They noticed when it was weird. And that surprised me a little,” Roesner said. “It’s easy to say we need to build these social media platforms so that people don’t get confused by fake posts. But I think there are opportunities for designers to incorporate people and their understanding of their own networks to design better social media platforms.”

, a UW master’s student in the Allen School, is also a co-author on this paper. This research was funded by the National Science Foundation.

For more information, contact Roesner at franzi@cs.washington.edu and Geeng at cgeeng@cs.washington.edu.

Grant number: CNS-1651230

New tools to minimize risks in shared, augmented-reality environments
/news/2019/08/20/shared-augmented-reality-environments/ (Aug. 20, 2019)

A person holds up an iPad that shows a digital world overlaid on the real world.
For now, augmented reality remains mostly a solo activity, but soon people might be using the technology in groups for collaborating on work or creative projects.

A few summers ago throngs of people began using the Pokemon Go app, the first mass-market augmented reality game, to collect virtual creatures hiding in the physical world.

For now, AR remains mostly a solo activity, but soon people might be using the technology for a variety of group activities, such as playing multi-user games or collaborating on work or creative projects. But how can developers guard against bad actors who try to hijack these experiences, and prevent privacy breaches in environments that span digital and physical space?

University of Washington security researchers have developed ShareAR, a toolkit that lets app developers build in collaborative and interactive features without sacrificing their users' privacy and security. The researchers presented the work Aug. 14 at the USENIX Security Symposium in Santa Clara, California.

“A key role for computer security and privacy research is to anticipate and address future risks in emerging technologies,” said co-author Franziska Roesner, an assistant professor in the Paul G. Allen School of Computer Science & Engineering. “It is becoming clear that multi-user AR has a lot of potential, but there has not been a systematic approach to addressing the possible security and privacy issues that will arise.”

Learn more about the and its role in the space of computer security and privacy for augmented reality.

Sharing virtual objects in AR is in some ways like sharing files on a cloud-based platform like Google Drive — but there’s a big difference.

“AR content isn’t confined to a screen like a Google Doc is. It’s embedded into the physical world you see around you,” said first author Kimberly Ruth, a UW undergraduate student in the Allen School. “That means there are security and privacy considerations that are unique to AR.”

For example, people could potentially add virtual inappropriate images to physical public parks, scrawl virtual offensive messages on places of worship or even place a virtual “kick me” sign on an unsuspecting user’s back.

“We wanted to think about how the technology should respond when a person tries to harass or spy on others, or tries to steal or vandalize other users’ AR content,” Ruth said. “But we also don’t want to shut down the positive aspects of being able to share content using AR technologies, and we don’t want to force developers to choose between functionality and security.”

To address these concerns, the team created a prototype toolkit, ShareAR, for the Microsoft HoloLens. ShareAR helps applications create, share and keep track of objects that users share with each other.

Another potential issue with multi-user AR is that developers need a way to signal the physical location of someone’s private virtual content to keep other users from accidentally standing in between that person and their work — like standing between someone and the TV. So the team developed “ghost objects” for ShareAR.

“A ghost object serves as a placeholder for another virtual object. It has the same physical location and rough 3D bulk as the object it stands in for, but it doesn’t show any of the sensitive information that the original object contains,” Ruth said. “The benefit of this approach over putting up a virtual wall is that, if I’m interacting with a virtual private messaging window, another person in the room can’t sneak up behind me and peer over my shoulder to see what I’m typing — they always see the same placeholder from any angle.”
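ShareAR itself is a HoloLens toolkit, and its actual API is not shown in this article; as a language-neutral sketch of the ghost-object idea Ruth describes, the snippet below derives a placeholder that keeps an object's pose and rough 3D bulk while stripping its sensitive content. The type and field names are invented for illustration.

```python
from dataclasses import dataclass, replace

@dataclass
class SharedARObject:
    owner: str
    position: tuple   # world-space position (x, y, z)
    bounds: tuple     # rough 3D extent (width, height, depth)
    content: str      # the sensitive payload, e.g. text in a private window

def ghost_of(obj: SharedARObject) -> SharedARObject:
    """Return a placeholder with the same pose and bulk but no sensitive content.

    Other users render the ghost so they don't stand in front of (or walk
    through) the owner's private object, yet they can never see what it
    contains from any angle.
    """
    return replace(obj, content="")

private_doc = SharedARObject("alice", (1.0, 1.5, 2.0), (0.6, 0.4, 0.01), "draft message...")
shared_view = ghost_of(private_doc)  # what everyone except alice receives
```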

The team tested ShareAR with three case study apps. Creating objects and changing permission settings within the apps were the most computationally expensive actions. But even when the researchers tried to stress the system with large numbers of users and shared objects, ShareAR took no longer than 5 milliseconds to complete a task. In most cases, it took less than 1 millisecond.

The team tested ShareAR with three case study apps: Cubist Art (top panel), which lets users create and share virtual artwork with each other; Doc Edit (bottom left panel), which lets users create virtual notes or lists they can share or keep private; and Paintball (bottom right panel), which lets users play paintball with virtual paint. In the Doc Edit app, the semi-transparent gray box in the top left corner represents a “ghost object,” or a document that another user wishes to remain private. Photo: Ruth et al./USENIX Security Symposium

Developers can download ShareAR to use for their own HoloLens apps.

“We’ll be very interested in hearing feedback from developers on what’s working well for them and what they’d like to see improved,” Ruth said. “We believe that engaging with technology builders while AR is still in development is the key to tackling these security and privacy challenges before they become widespread.”

Tadayoshi Kohno, a professor in the Allen School, is also a co-author on this paper. This research was funded by the National Science Foundation and the Washington Research Foundation.

###

For more information, contact Roesner at franzi@cs.washington.edu, Ruth at kcr32@cs.washington.edu and Kohno at yoshi@cs.washington.edu.

Grant numbers: CNS-1513584, CNS-1565252, CNS-1651230

Information School to welcome high school students March 19 for 'MisInfo Day' – from 'Calling BS' faculty duo
/news/2019/03/18/information-school-to-welcome-high-school-students-march-19-for-misinfo-day-from-calling-bs-faculty-duo/ (March 18, 2019)

What is misinformation, and how — and why — does it spread? The University of Washington is taking a leading role in helping people better navigate this era of increasing online fakery and falsehood.

On March 19, the iSchool will welcome more than 200 Seattle-area high school students for "MisInfo Day," a daylong workshop on how to navigate the misinformation landscape from Jevin West and Carl Bergstrom, the faculty duo who created the "Calling BS in the Age of Big Data" class and its companion website.

“MisInfo Day” will be from 9:30 a.m. to 2:30 p.m. in the Husky Union Building’s North Ballroom.

West is an iSchool assistant professor, Bergstrom a professor of biology. Their most recent creation is Which Face Is Real?, a website that helps users learn to tell real from fake images online.

The students — many of whom are studying government — will come from Nathan Hale, Franklin, Bellevue and Toledo high schools. Discussions will include defining misinformation and why we find it so compelling, as well as "tips and tricks" for determining if news reports and social media posts are legitimate.

The afternoon session will be an "Ask the Experts" panel, where the students will hear from professionals from the Seattle Public Library, Snopes.com and the UW about their work. The students are asked to "come with questions about misinformation, fact-checking, confirmation bias and more."

Other faculty and staff involved are:

  • , iSchool assistant professor
  • , UW librarian who manages the Information Science collection
  • , assistant professor in the Department
  • , assistant professor in the
  • Liz Crouse, one of several students involved from the iSchool’s Masters of Library Science program, who assisted West in coordinating the event and will conduct pre- and post-program surveys of students for an ongoing research project. Other MLIS students will lead breakout sessions during the event.

Bergstrom and West’s “Calling BS” work has drawn wide attention from press as well as other institutions, some of whom have already expressed interest in holding events modeled on “MisInfo Day.”

###

For more information, contact Maggie Foote, iSchool communications director, at 206-250-5992 or m2foote@uw.edu

For $1000, anyone can purchase online ads to track your location and app use
/news/2017/10/18/for-1000-anyone-can-purchase-online-ads-to-track-your-location-and-app-use/ (Oct. 18, 2017)
New University of Washington research finds that for a budget of roughly $1000, it is possible for someone to track your location and app use by purchasing and targeting mobile ads. The team aims to raise industry awareness about the potential privacy threat.

Privacy concerns have long swirled around how much information online advertising networks collect about people’s browsing, buying and social media habits — typically to sell you something.

But could someone use mobile advertising to learn where you go for coffee? Could a burglar establish a sham company and send ads to your phone to learn when you leave the house? Could a suspicious employer see if you’re using shopping apps on work time?

The answer is yes, at least in theory. New University of Washington research, to be presented in a paper Oct. 30 at an Association for Computing Machinery workshop, suggests that for roughly $1,000, someone with devious intent can purchase and target online advertising in ways that allow them to track the location of other individuals and learn what apps they are using.

“Anyone from a foreign intelligence agent to a jealous spouse can pretty easily sign up with a large internet advertising company and on a fairly modest budget use these ecosystems to track another individual’s behavior,” said lead author Paul Vines, a recent doctoral graduate in the UW’s Paul G. Allen School of Computer Science & Engineering.

The research team set out to test whether an adversary could exploit the existing online advertising infrastructure for personal surveillance and, if so, raise industry awareness about the threat.

“Because it was so easy to do what we did, we believe this is an issue that the online advertising industry needs to be thinking about,” said co-author Franziska Roesner, co-director of the UW Security and Privacy Research Lab and an assistant professor in the Allen School. “We are sharing our discoveries so that advertising networks can try to detect and mitigate these types of attacks, and so that there can be a broad public discussion about how we as a society might try to prevent them.”

This map represents an individual’s morning commute. Red dots reflect the places where the UW computer security researchers were able to track that person’s movements by serving location-based ads: at home (real location not shown), a coffee shop, bus stop and office. The team found that a target needed to stay in one location for roughly four minutes before an ad was served, which is why no red dots appear along the individual’s bus commute (dashed line) or walking route (solid line). Photo: University of Washington

The researchers discovered that an individual ad purchaser can, under certain circumstances, see when a person visits a predetermined sensitive location — a suspected rendezvous spot for an affair, the office of a company that a venture capitalist might be interested in or a hospital where someone might be receiving treatment — within 10 minutes of that person’s arrival. They were also able to track a person’s movements across the city during a morning commute by serving location-based ads to the target’s phone.

The team also discovered that individuals who purchase the ads could see what types of apps their target was using. That could potentially divulge information about the person’s interests, dating habits, religious affiliations, health conditions, political leanings and other potentially sensitive or private information.

Someone who wants to surveil a person’s movements first needs to learn the mobile advertising ID (MAID) for the target’s mobile phone. These unique identifiers that help marketers serve ads tailored to a person’s interests are sent to the advertiser and a number of other parties whenever a person clicks on a mobile ad. A person’s MAID also could be obtained by eavesdropping on an unsecured wireless network the person is using or by gaining temporary access to his or her WiFi router.

The UW team demonstrated that customers of an advertising service can purchase a number of hyperlocal ads through that service, which will only be served to a particular phone when its owner opens an app in a particular spot. By setting up a grid of these location-based ads, the adversary can track the target’s movements if he or she has opened an app and remains in a location long enough for an ad to be served — typically about four minutes, the team found.

Importantly, the target does not have to click on or engage with the ad — the purchaser can see where ads are being served and use that information to track the target through space. In the team’s experiments, they were able to pinpoint a person’s location within about 8 meters.

“To be very honest, I was shocked at how effective this was,” said co-author , an Allen School professor who has studied security vulnerabilities in products ranging from automobiles to medical devices. “We did this research to better understand the privacy risks with online advertising. There’s a fundamental tension that as advertisers become more capable of targeting and tracking people to deliver better ads, there’s also the opportunity for adversaries to begin exploiting that additional precision. It is important to understand both the benefits and risks with technologies.”

An individual could potentially disrupt the simple types of location-based attacks that the UW team demonstrated by frequently resetting the mobile advertising IDs in their phones — a feature that many smartphones now offer. Disabling location tracking within individual app settings could help, the researchers said, but advertisers still may be capable of harvesting location data in other ways.

On the industry side, mobile and online advertisers could help thwart these types of attacks by rejecting ad buys that target only a small number of devices or individuals, the researchers said. They also could develop and deploy machine learning tools to distinguish between normal advertising patterns and suspicious advertising behavior that looks more like personal surveillance.
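The article doesn't describe a concrete detection system; as a minimal sketch of the kind of screening the researchers suggest, the heuristic below flags a campaign whose impressions concentrate on only a handful of devices. The thresholds are invented, and a real system would combine signals such as geographic spread, spend patterns and machine-learned anomaly scores.

```python
from collections import Counter

def looks_like_surveillance(impression_maids, max_devices=5, min_impressions=50):
    """Flag an ad campaign whose impressions concentrate on very few devices.

    `impression_maids` lists the mobile advertising ID (MAID) for each served
    impression. Thresholds are illustrative only; a deployed detector would also
    weigh geographic spread, spend patterns and model-based anomaly scores.
    """
    device_counts = Counter(impression_maids)
    few_devices = len(device_counts) <= max_devices
    enough_volume = sum(device_counts.values()) >= min_impressions
    return few_devices and enough_volume

campaign = ["maid-42"] * 120 + ["maid-7"] * 3   # nearly every impression hits one phone
print(looks_like_surveillance(campaign))        # True -> hold the buy for manual review
```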

The UW Security and Privacy Research Lab is a leader in evaluating potential security threats in emerging technologies, including telematics in automobiles, web browsers, DNA sequencing software and augmented reality, before they can be exploited by bad actors.

Next steps for the team include working with experts at the UW’s Tech Policy Lab to explore the legal and policy questions raised by this new form of potential intelligence gathering.

The research was funded by The National Science Foundation, The Tech Policy Lab and the Short-Dooley Professorship.

For more information, contact the research team at adint@cs.washington.edu.

Grant number: NSF: CNS-1463968

UW professor Franziska Roesner named one of world’s top innovators under 35
/news/2017/08/16/uw-professor-franziska-roesner-named-one-of-worlds-top-innovators-under-35/ (Aug. 16, 2017)

MIT Technology Review has named University of Washington professor Franziska Roesner one of its top innovators under 35. Roesner is a faculty member in the Paul G. Allen School of Computer Science & Engineering and co-director of the school’s Security and Privacy Research Lab.

Roesner’s research spans a number of projects related to privacy and security in emerging technologies. Though these endeavors include diverse platforms such as augmented reality and encrypted communication, her focus is often on the user experience — specifically, identifying the security and privacy issues that may arise, and mitigating those issues to ensure that new technologies can fulfill their potential for end users.

“Some of these technologies have been envisioned for decades, but have only recently been within our grasp,” said Roesner. “So we’re now at a point where we must ask questions such as, ‘How will security or privacy issues be different with these new technological platforms? And how can we design these systems to mitigate those issues?'”

Franziska Roesner, UW professor of computer science and engineering. Photo: Dennis Wise/University of Washington

Much of Roesner’s current research focuses on computer security and privacy in augmented reality technologies. Augmented reality, or AR, is aptly named. These are technology platforms in which digital content — such as labels or virtual, 3-D objects — is overlaid on a display of the real world. One contemporary example is Pokemon Go, the popular game that displays artificial characters over real scenes through a smartphone or tablet camera.

But the potential for augmented-reality technologies is not just in games. They could soon include devices and apps that are as indispensable as email access is today on a smartphone. Roesner’s research considers security concerns in AR technologies — such as the potential risk of having cameras and sensors constantly drinking up information about a user’s environment — as well as the potential risks of expressly malicious, or simply low-quality or “buggy,” apps that could place an AR user at risk. For example, consider an AR application for a car windshield that intentionally or accidentally blocks the driver’s view of oncoming cars. In these endeavors, she has also engaged directly with companies developing AR platforms, including , a Silicon Valley AR company, and Microsoft Research.

Roesner also conducts research on how well users adopt and interact with security tools for a variety of technologies.

“Ideally, we’d like to design and build security and privacy tools that actually work for end users,” said Roesner. “But to do that, we need to engage with those users, to understand what they need, and not build technology in isolation.”

For example, in recent efforts she has worked with journalists on tools to protect anonymous or other sensitive sources. Much of this initial research involved looking thoroughly at the different approaches that journalists currently take to communicate with their sources and conduct research for a story. Understanding how journalists work, and how existing security tools may fail to serve their needs, can help build more effective security tools, like improved email encryption, Roesner said. To that end, her team has created Confidante, a new email encryption tool that emphasizes usability.

Roesner’s research has also uncovered privacy risks in current technologies. In these and other endeavors, she sees no shortage of pressing questions.

“As our technologies progress and become even more integral to our lives, the push to consider privacy and security issues will only increase,” said Roesner.

MIT Technology Review has released its list of top innovators under 35 each year since 1999. Past UW honorees include , associate professor of human centered design and engineering; , assistant professor of global health at the ; , a professor in both the Allen School and the Department of Electrical Engineering; Allen School professor , who is co-director of both the Security and Privacy Laboratory and the ; and , associate professor in the Allen School.

###

For more information, contact Roesner at franzi@cs.washington.edu.
