Center for an Informed Public – UW News

Q&A: Ryan Calo, law professor and interdisciplinary researcher, talks about his new book, "Law and Technology"
March 31, 2026
Ryan Calo, a UW professor of law, has written a new book, "Law and Technology." Calo is also a professor in the Information School and an adjunct in the Paul G. Allen School of Computer Science & Engineering. Photo: Oxford University Press

Since Ryan Calo joined the University of Washington School of Law in 2012, he has become a leading expert on the law and emerging technology.

Calo believes that few interesting questions — especially around technology — can be resolved by reference to a single discipline.

Calo is a co-founder of the UW Tech Policy Lab and the Center for an Informed Public. He is also a professor in the Information School and an adjunct in the Paul G. Allen School of Computer Science & Engineering.

Calo's newest book, "Law and Technology," published late last year, is a guide to the legal analysis of technology and its regulation. Nearly a decade ago, Calo realized that the most recent book on the topic had been published in the 1970s. He decided it was time for an updated resource reflecting today's rapidly evolving technology and the present regulatory environment.

UW News spoke with Calo about the book and the current legal and policy climate in the United States.

Ryan Calo is a professor in the UW School of Law and the Information School. He is an adjunct in the Paul G. Allen School of Computer Science & Engineering. Photo: Doug Parry/University of Washington

Who is the intended audience for “Law and Technology”?

Ryan Calo: I wrote it primarily for new entrants to the field, be they junior scholars or students. I also hoped that the themes would resonate with more senior scholars and that it would be useful outside of academia for either analysis or instruction. Because ultimately, what the book does is propose a methodology for analyzing technology from a legal perspective.

I spent a lot of time interacting with policymakers, staffers on Capitol Hill, people who work for senators and members of Congress. A legislator might come to a staffer and say, “Hey, my constituents are really worried about augmented reality or AI. They’re really worried about deep fakes.” That staff member doesn’t really have a place to start, and they end up just calling up experts, reading New York Times articles, talking to industry, but not in any kind of methodical way. This book is designed to help them figure out what’s going on.

I also hope this book will be of use to people who are in practice and want to be more methodical about analyzing a given technology.

Technology evolves fast. How should the legal system and policymakers prepare to navigate the relationship between law and emerging technologies?

RC: Many of us have an expectation that technology is just going to change. It's just going to evolve, and our job as lawyers or judges or policymakers is to kind of scramble to accommodate the resulting disruption, and perhaps try to restore the status quo. Part of what I hope to see is legal scholars and policymakers acknowledging that the disruption isn't inevitable.

We need to empower independent researchers to figure out what’s going on with new technology. Right now researchers are disempowered because they don’t have access to the relevant data and platforms. And many times when they try to get that data, they get served with a cease and desist letter.

We need to protect whistleblowers and make sure there’s adequate, truly top-notch expertise within government. If you have those things, then you’re much more likely to be able to figure out what could go wrong with these technologies without having to observe the harm unfold over a long period of time, as we have with the internet and now with AI.

You mentioned the School of Law’s leadership in tech policy. How is the UW positioned nationally in this space?

RC: We are really among the leaders in this area.

The School of Law has a lot of tech policy offerings, including a clinic. Many faculty have contributed to the scholarship over the years, and we have lots of faculty writing about law and technology.

We also have really been a model for impactful interdisciplinary collaboration. Law students can work in the clinic or the Tech Policy Lab. I'm one of the founders of the Center for an Informed Public, which bridges Human Centered Design & Engineering as well as the Information School and dozens of other departments, including psychology, education and even geography.

A third important example is the . We did a whole year of work mapping out who was doing work in the space — all the centers, all the labs, all the initiatives — all the people on the three campuses identified as working at this intersection.

We're leaders across the country at the law school in terms of our student offerings and our research, but we are also part of that interstitial glue. People think of the iSchool, which they should. They think of computer science, which they should. But they should also think about who else is at the center of this, who else is at the heart of it, and the School of Law is a big part of that.

There’s been a lot of news lately about states trying to regulate AI and the federal government pushing back. What’s your perspective?

RC: If I were trying to sabotage the innovation edge of the United States, I would do at least two things, maybe three.

First, I would divest from basic research. The United States has had an innovation edge over the rest of the world in large part because of decisions made in the 1950s and beyond to invest in basic research. I would dismantle that, and I would try to make it really hard for universities to do research, either by spending less, disrupting the relationships, or messing with overhead in ways that make research impossible.

The second thing I would do is make it really hostile for outside innovators to come in and participate in knowledge production here. I would, whether xenophobically or not, try to make it really hard for people with ideas and talent and knowledge to come to the United States to work on teams with other Americans, to stay here and teach in our schools, to found companies. The second enormous advantage the United States has had is that the country has become attractive because of its commitment to the rule of law and its robust higher education system, and that's built on its innovation and investment in research. People from all over the world come here to try to build the next Google or Amazon, or to teach in our schools and contribute to our ecosystem.

The third thing I would do in this hypothetical situation is remove non-existent hurdles to transformative technologies like AI. What do I mean? Federal leaders are currently talking about getting out of the way of AI, but there aren't really any regulations about AI. There are some state laws that have a kind of European flavor of risk management. There are specific things that states are worried about, including deep fakes and the labeling of automated social media accounts. There's almost nothing standing in the way of AI innovation in terms of regulation.

The way our system is structured, the individual states, under our concept of federalism, are supposed to be laboratories of ideas, experimenting with legislation and showing whether it works. Pretending that you're pro-innovation because you're trying to stamp out the very few regulatory hurdles that companies have to abide by, all in the name of competing with China, which has AI laws, is just senseless. We're much better off following the wisdom of the founders, who said, "Hey, if you have something new in society, let the states serve as laboratories for different laws, and we can all learn from each other about how that's going." That's classic federalism, and it used to be a pillar of conservative thinking.

The president doesn't have the power to boss the states around in terms of their legislative capacities. And Congress has taken up the question of whether to try to preempt state AI laws, and it resoundingly declined. I just want to comment that the overall strategy of the administration has been deeply anti-innovation in its impact, even though it is vociferously pro-innovation in its rhetoric.

Any final thoughts?

RC: We have an environment in the U.S. that promotes innovation, sometimes through laws, such as laws that protect intellectual property and laws that make people feel safe enough to use the products and services that companies sell to us. There's not, and never has been, a one-to-one correlation between regulation and promoting innovation. It's really important that we acknowledge, as a society and community, that sometimes laws are written in the service of innovation. What you want is a favorable regulatory environment, not a complete absence of the rule of law.

For more information, contact Calo at rcalo@uw.edu.

Q&A: How Instagram influencers profit from anti-vaccine misinformation
March 11, 2024
New research from the UW examines how three wellness Instagram influencers profited from anti-vaccine misinformation.

While Instagram might have a reputation for superficiality — a realm of exquisitely filtered images — it is growing as a news source. The platform is increasingly filled with information, some of it pernicious and distributed via influencers.

Researchers at the University of Washington studied three prominent Instagram influencers spreading anti-vaccine misinformation as a route to profit. Each account occupies what lead author Rachel Moran, a UW senior research scientist at the Center for an Informed Public and staff researcher in the Information School, calls a "slightly different corner of Instagram."

To protect the accounts’ anonymity, the team gave each a pseudonym, substituting the account’s actual name with a generic descriptor: the Wellness Homesteader (focused on things like homeschooling and farming), the Conspiratorial Fashionista (focused on fashion) and the Evangelical Mother (focused on Christianity). What unified the three U.S.-based accounts was that, amid their varied content, each dispersed overtly conspiratorial anti-vaccine messaging and used it to sell products and services they profited from either directly or indirectly.

The team recently published its findings in the International Journal of Communication.

UW News spoke with Moran about the paper, the particular methods of Instagram influencers, and the ways “misinformation is an immensely profitable endeavor.”

What made you interested in researching this?

Rachel Moran: A lot of my research at the CIP has been in the vein of health-related claims, particularly in the anti-vaccine movement. We’ve done a couple of research studies where we looked at how influencers on Instagram share information about vaccines, how they validate whether it’s true or not. And we noticed this pattern of influencers directing people to buy things. It’s something we see in our everyday lives all the time now. Everyone is selling something online. So we’re interested in what happens when people use misinformation and leverage it to make profit.

Can you describe the patterns you found in the three accounts?

RM: They were all female and kind of catering to female audiences, and they leverage gender in a really interesting way. They’re kind of homing in on mothers’ responsibilities so they can, for want of a better word, “guilt trip” people into buying specific products. They’re eschewing traditional vaccines or medicine in favor of more “natural wellness” products, for example.

We also saw the use of multilevel marketing companies. During the pandemic, the Food and Drug Administration tried to put a handle on some of these wellness-related multilevel marketing companies that were leveraging the pandemic as a way of advertising their products. The FDA came out and said, “You’re not allowed to say that your product will cure COVID,” for example. There’s a bit of a loophole, where you can sell a multilevel marketing product if you’re not employed by that company. Then, the policies aren’t really enforceable. This allows individuals with these Instagram accounts to advertise the product and make money off of it by leveraging misinformation without any consequence.

In the paper, you discuss how the ‘parasocial relationships’ that develop through these kinds of accounts can help the anti-vaccine messaging gain users’ trust. Could you talk about that?

RM: It's a through line to a lot of our work within misinformation spaces — the importance of these parasocial relationships, which are sort of one-sided relationships we build with the people that we follow online, celebrities and so on. But with Instagram, you get this look into someone's everyday life that sometimes can be very mundane, and you kind of build a rapport through that. They're showing content that feels relatable. Maybe you've bought the leggings that they've advertised, and they work well for you. You build up that incremental trust, so that if they then share something that isn't within their wheelhouse — maybe they're not medical experts, but they're sharing medical advice — you may be less likely to question it.

And it's not quite as one-sided anymore. On Instagram, we can reply to an influencer's story, and they sometimes respond, providing a little semblance of a two-way relationship. Influencers also know that parasocial relationships are really important, and that knowledge shapes the content we see from all kinds of influencers online. They know that their job is to build trust, and they can then use that trust to get people to buy things.

Could you give examples of ways you saw these influencers leveraging those sorts of relationships for profit?

RM: Often they would share throughout the day using Instagram Stories, which is this ephemeral content that disappears after 24 hours. Maybe it’s just them getting up and getting their kids ready for school, or maybe their child is sick, and they say, “Okay, I’m not going to treat it with medicine, I’m going to treat it with this essential oil.” And then they would direct their followers to the link in their bio, or to swipe up on the story. And it would take the followers to a multilevel marketing campaign, or maybe an Amazon affiliate link, where they can purchase the product. Maybe it’s very genuine, maybe they actually are using this product, and it’s a safe product. But often, it would come with some sort of anti-vaccine rhetoric — this is what they’re choosing instead of a vaccine, which contains these free radicals or metals or whatever they’re claiming.

Instagram videos and images can convey a lot more information than more text-based social media. Just as much as that visual richness is a great tool for spreading good information, it’s also a great tool for people who want to spread bad information. Because people often go to Instagram for entertainment, they’re not necessarily thinking as critically about the information that they’re seeing as they might be on a platform like X, where they anticipate encountering news. They aren’t thinking: “I have to question everything.” So they’re probably more vulnerable to misinformation.

A lot of attention has been paid to misinformation as a social and political tool. Why is it important for people to also pay attention to it as an economic phenomenon?

RM: I think it’s important because it’s an avenue that we’ve kind of forgotten about. In a way, I think we’re all attuned to the fact that scams exist online and offline. But we think about the big stories: someone losing their life savings. Yet we’re all kind of being scammed on a daily basis by being told that some products work when they don’t or, on a more dangerous level, being told to choose certain products over those backed by proven scientific medical knowledge. Looking at those economic mechanisms helps us consider why we’re so attracted to misinformation.

In terms of intervention, we need to think about media literacy — how do we give people the skills to recognize when they’re being scammed? And we need to think about what intervention looks like for these companies like Facebook and Instagram and Twitter. Or what it looks like for government. A lot of these tools are quite benign, like the fact that you can direct people off-platform to do certain things — that’s all well and good, and it affords a richer conversation online. But these are the mechanisms that get taken advantage of. So what are ways that we can potentially curtail this problem?

Anything else you want to add?

RM: One thing that I think a lot about is that you now see things on Instagram that are fairly politically extreme, but feel quite normalized, because you're not always consuming the content in a really engaged way. So with these three influencers, the amount of content that is anti-vaccine is fairly small compared to the whole gamut of what they're sharing every day. But the nature of the content is extreme. It's not hidden. It's not suggesting that you maybe should question getting a vaccine or talk to your doctor about getting a vaccine. It's often straight conspiracy theories about vaccines. It's quite jarring to see that a lot of this really hardcore anti-vaccine rhetoric comes from everyday people who get sucked in and make it their cause and share it alongside all of the other stuff that they do daily. We need to be attentive and discerning when we're scrolling through TikTok or Instagram. We're consuming so much on so many different topics so quickly that if we step back and reflect on some of the things we've seen, they can often be quite extreme and extremely misinformed.

Co-authors include , who completed this research as a UW post-doctoral scholar with the CIP and is now at AnitaB.org, and , who completed this research as a UW researcher with the CIP. This work was funded by the CIP and the John S. and James L. Knight Foundation.

For more information, contact Moran at remoran@uw.edu.

ArtSci Roundup: A Conversation with Brad Smith, UW Public Lectures: An Evening with Masha Gessen, and More
April 21, 2022

Through public events and exhibitions, connect with the UW community every week!


Katz Distinguished Lecture: Abderrahmane Sissako

April 26, 7:00 PM |

What is the place of West Africa in the world and of the world in West Africa? These are the questions that the Oscar- and Palme d'Or-nominated filmmaker Abderrahmane Sissako asks insistently in films that address the impact of World Bank and IMF policies in Mali and beyond ("Bamako," 2006), the confrontation between extremist and moderate Islam in the southern Sahara ("Timbuktu," 2014), and exile in Europe and the difficulties of returning home ("Life on Earth," 1999). In all of his films, Sissako brings a worldly sensibility to the representation of the most pressing concerns of the continent, but always with an eye for the beauty and tenderness in everyday life, no matter how difficult, and for the moral ambiguities and linguistic complexities that evade so many representations of West Africa.

Sponsored by the Simpson Center for the Humanities. Co-sponsored by the UW African Studies Program, the Black Cinema Collective, the Henry Art Gallery, and Northwest Film Forum.

Free |


A Conversation with Brad Smith

April 27, 5:00 PM | Husky Union Building

When your technology changes the world, what responsibility do you bear to address the global issues that arise? This conversation with Microsoft President & Vice Chair Brad Smith, inspired by his book "Tools and Weapons," explores issues of responsibility and risk in the technological space, especially in the spread of misinformation. Margaret O'Mara (Department of History) moderates this panel with insights from UW professors Kate Starbird (Human Centered Design & Engineering) and Jevin West (Information School) of UW's Center for an Informed Public. Sponsored by the UW Alumni Association.

Free | Register & more info

 


15th Annual Allen L. Edwards Psychology Lectures: Brain | Mind | Body: Exploring the Human Mind through Neuroimaging

April 27, 7:30 PM |

Thirty years ago, a scientific revolution allowed us to finally measure the hidden workings of the human brain in action. This lecture series, sponsored by the Department of Psychology, focuses on what we have learned from functional magnetic resonance imaging and how this technique has evolved to provide insight into the neural underpinnings of attention and reading.


UW Public Lectures: An Evening with Masha Gessen

April 28, 7:30 PM | Kane Hall 130

Join the Office of Public Lectures for an evening with National Book Award winner, bestselling author, and journalist Masha Gessen (they/them).

One of our most trenchant observers of democracy, Masha Gessen is the author of eleven books, including the National Book Award-winning "The Future Is History: How Totalitarianism Reclaimed Russia" and "The Man Without a Face: The Unlikely Rise of Vladimir Putin." A staff writer at The New Yorker, they have covered political subjects including Russia, L.G.B.T. rights, Vladimir Putin, Donald Trump, and the rise of autocracy, among others.

$5 | More info


2022 Endowed Milliman Lecture in Economics: Causality in Data Science

April 29, 6:00 PM |

The Department of Economics hosts the 2022 Milliman Endowed Lecture, presented by Guido Imbens. This biennial lecture series brings world-renowned economists to the University of Washington through the generosity of Glen and Alison Milliman.

Guido Imbens is a Professor of Economics at Stanford University, and 2021 Nobel Laureate in Economics for his “methodological contributions to the analysis of causal relationships,” along with David Card and Joshua D. Angrist. His research focuses on developing methods for drawing causal inferences in observational studies, using matching, instrumental variables, and regression discontinuity designs. He went on to teach at Harvard University, UCLA, and UC Berkeley after graduating with his Ph.D. from Brown University. In addition to his current position at Stanford University’s Graduate School of Business, he is a fellow of the American Academy of Arts and Sciences and the Econometric Society.

Free


2021-2022 WISIR Series: Contemporary Race & Politics in the United States; Race & Democracy

April 29, 11:30 AM |

The Washington Institute for the Study of Inequality and Race at the University of Washington hosts a Webinar Series on Race and Contemporary Issues in the 2021-2022 academic year. These conversations focus on salient racial issues facing the country and will include University of Washington faculty as well as faculty from other institutions to offer reflections and varying perspectives on these important topics.

This panel will be moderated by Chip Turner, Associate Professor of Political Science, University of Washington. Joining the discussion will be panelists:

  • Cristina Beltrán, Associate Professor, New York University
  • Michael Hanchard, Professor of Africana Studies, University of Pennsylvania
  • Deva Woodly, Associate Professor of Politics, The New School

Free |

Three UW teams awarded NSF Convergence Accelerator grants for misinformation, ocean projects
October 1, 2021

Three separate University of Washington research teams have been awarded $750,000 each by the National Science Foundation to advance studies in misinformation and the ocean economy.

The teams were selected for phase 1 of the Convergence Accelerator program's 2021 cohort. The federal agency hopes to build upon basic research and discovery to accelerate solutions in two critical areas: the "Networked Blue Economy" and "Trust and Authenticity in Communications Systems."

One team, from the UW Applied Physics Laboratory, was selected for the “Networked Blue Economy” track topic, and two UW teams — one from the UW Information School and another from the APL — were selected for the “Trust and Authenticity in Communications Systems” track.

Designed to transition basic research and discovery into practice, the Convergence Accelerator uses innovation processes like human-centered design, user discovery, team science, and integration of multidisciplinary research and partnerships. The Convergence Accelerator, now in its third year, aims to solve high-risk societal challenges through use-inspired convergence research, according to NSF.

The three projects that teams from the UW will lead include:

  • The “” project, from the APL and industry partners, will produce a flexible proof-of-concept technology to help people evaluate the source of information and its reliability. Drawing on the fields of technology development, law, business, policy, curriculum development, community management, interdisciplinary research and finance, the team will develop tools and components to generate and communicate digital “trust signals” in various settings. The result will be a proof-of-concept for a verified information exchange that would support tools that users can deploy to assess the trustworthiness and authenticity of digital information. Workstreams are anticipated to include food system safety and security, bank and financial information systems, public health information systems, academic publication and supply chains. , a principal research scientist at the APL, is the lead investigator.
  • The “” project team, composed of a multidisciplinary set of researchers from the UW, the University of Texas at Austin, Washington State University, Seattle Central College and Black Brilliance Research, will plan, facilitate and assess a series of seven workshops focusing on critical reasoning skills, the psychological and emotional aspects of information, and broader sociocultural dimensions of trust in information ecosystems. The workshop series will be hosted in collaboration with a diverse group of local stakeholders in Washington state and Texas, including urban and rural libraries, news outlets, civic organizations, and underrepresented communities. , an Information School associate professor and UW co-founder, is the principal investigator on the project.
  • In the "" project, three new community-run ocean sensors will provide Indigenous coastal communities with real-time data on the changing ocean environment. The floating systems, anchored to the seafloor, will be deployed in collaboration with coastal communities in Alaska, the Pacific Northwest and the Pacific Islands. Sofar Ocean's existing buoy systems — designed to be affordable and convenient — can measure waves, sea surface temperature, cloudiness of the water, and water depth, and come equipped with solar power, satellite communication and potential for expansion. The project will be carried out through the UW-based NANOOS as well as its counterparts in Alaska and the Pacific Islands, which have long-standing, trusted relationships with Indigenous and coastal communities. , an oceanographer at the APL and the director of NANOOS, is the lead investigator.

Additionally, Assistant Professor and Associate Professor , both in the UW Paul G. Allen School of Computer Science & Engineering, are co-principal investigators on a team led by the international grassroots community . That team aims to develop practical interventions to help individuals and community moderators analyze information quality, including misinformation, to build trust and address vaccine hesitancy. Zhang also is on another team, based at the University of Michigan, that will help media platforms determine how to flag articles that contain misinformation.

During phase 1, each UW team will engage with the other members of its cohort in a fast-paced, nine-month hands-on journey that includes the program's innovation curriculum, formal pitch and phase 2 proposal evaluation. The program's team-based approach creates a "co-opetition" environment, encouraging teams to share innovative ideas toward solving complex challenges together while competing to progress to phase 2.

At the end of phase 1, each team participates in a formal pitch and proposal evaluation. Selected teams from phase 1 will proceed to phase 2, with potential funding up to $5 million for 24 months. Phase 2 teams will continue to apply Convergence Accelerator fundamentals to develop solution prototypes and to build a sustainability model to continue impact beyond NSF support. By the end of phase 2, teams are expected to provide high-impact solutions that address societal needs at scale.

Launched in 2019, the NSF Convergence Accelerator program builds upon basic research and discovery to accelerate solutions toward societal impact. Using convergence research fundamentals and integration of innovation processes, it brings together multiple disciplines, expertise and cross-cutting partnerships to solve national-scale societal challenges.

Communication technology, study of collective behavior must be 'crisis discipline,' researchers argue
June 14, 2021

Our ability to confront global crises, from pandemics to climate change, depends on how we interact and share information.

Social media and other forms of communication technology restructure these interactions in ways that have consequences. Unfortunately, we have little insight into whether these changes will bring about a healthy, sustainable and equitable world. As a result, researchers now say that the study of collective behavior must rise to the level of a "crisis discipline," just as medicine, conservation and climate science have done, according to a paper published the week of June 14 in the Proceedings of the National Academy of Sciences.

"We have built and adopted technology that alters behavior at global scales without a theory of what will happen or a coherent strategy for reducing harm," said Joe Bak-Coleman, the lead author and a post-doctoral researcher at the University of Washington's Center for an Informed Public.

Social media and other technological developments have radically reshaped the way that information flows on a global scale. These platforms are driven to maximize engagement and profitability, not to ensure sustainability or accurate information — and the vulnerability of these systems to misinformation and disinformation poses a dire threat to health, peace, global climate and more.

No one, not even the platform creators themselves, has much understanding of how their design decisions impact human collective behavior, the authors argue.

"We urgently need to understand this and move forward with focus on developing social systems that promote well-being instead of creating shareholder value by commandeering our collective attention," said co-author Carl Bergstrom, a UW professor of biology and a faculty member at the Center for an Informed Public.

Collective behavior and other complex systems are fragile. “When perturbed, complex systems tend to exhibit finite resilience followed by catastrophic, sudden, and often irreversible changes,” the authors write.

While there are studies and disciplines that focus on complex systems in the natural world, “we have a far poorer understanding of the functional consequences of recent large-scale changes to human collective behavior and decision making,” the authors write.

Averting catastrophe in the medium term (e.g., coronavirus) and long term (e.g., climate change, food security) will require rapid and effective collective behavioral responses — yet it remains unknown whether human social dynamics will yield such responses.

“We have seen individual studies about how climate-change disinformation gets over-represented even in the mainstream media, and studies show that in digital media that problem only gets worse,” said co-author , an associate professor of environmental studies at New York University.

Lacking a developed framework, tech companies have also fumbled their way through the ongoing coronavirus pandemic, unable to stem the “infodemic” of misinformation that impedes public acceptance of pandemic control measures such as wearing masks, widespread testing for the virus and vaccinations.

The situation parallels challenges faced in conservation biology and climate science, where insufficiently regulated industries optimize profits while undermining the stability of ecological and Earth systems.

“If we have a decade or so to act on climate change, we have far less time to sort out our social systems,” Bak-Coleman said.

Historically, collective behavior has been understood as coordinated action that animals or people exhibit without an obvious leader. This includes how fish school to evade predators or how a crowd spontaneously breaks into applause or falls silent.

That thinking has evolved over the past decade, the authors write, from the description of a phenomenon to a contemporary view of collective behavior as a framework that reveals how interaction among individuals gives rise to collective action.

Additional co-authors on the paper include Rachel Moran at the UW; Mark Alfano at Delft University of Technology and Australian Catholic University; Wolfram Barfuss at University of Tübingen; Miguel A. Centeno, Andrew S. Gersick, Daniel I. Rubenstein and Elke U. Weber at Princeton University; Iain D. Couzin at University of Konstanz; Jonathan F. Donges at Stockholm University; Mirta Galesic and Albert B. Kao at Santa Fe Institute; Pawel Romanczuk at Humboldt-Universität zu Berlin; Kaia J. Tombak at Hunter College of the City University of New York; and Jay J. Van Bavel at New York University.

Funding came from the UW eScience Institute, the John S. and James L. Knight Foundation, the UW Center for an Informed Public, the Deutsche Forschungsgemeinschaft, the National Science Foundation, The Max Planck Society, The Baird Society, The Emmy Noether Program, The Santa Fe Institute and the U.S. Navy’s Office of Naval Research.

For more information, contact Bak-Coleman at joebak@uw.edu.

 

Q&A: It's not just social media — misinformation can spread in scientific communication too
April 21, 2021
Academia is not immune to spreading misinformation, write UW researchers Jevin West and Carl Bergstrom in a recent paper. Photo: University of Washington

When people think of misinformation, they often focus on popular and social media. But in a paper published April 12 in the Proceedings of the National Academy of Sciences, University of Washington faculty members Jevin West and Carl Bergstrom write that scientific communication — both scientific papers and news articles written about papers — also has the potential to spread misinformation.

The researchers note that this doesn't mean that science is broken. "Far from it," write Jevin West, an associate professor at the UW Information School and the inaugural director of the Center for an Informed Public, and Carl Bergstrom, a UW biology professor and a CIP faculty member. "Science is the greatest of human inventions for understanding our world, and it functions remarkably well despite these challenges. Still, scientists compete for eyeballs just as journalists do."

UW News asked West and Bergstrom to discuss misinformation in and about science. Their emailed responses are below:

UW News: Many of us are familiar with the idea of fake news or misinformation on social media. Can you explain how some of these same concepts — such as hype and hyperbole, bias, filter bubbles and echo chambers and data distortion — also pop up in science and science communication? Why does this happen?

Jevin West

Science is run by humans, and humans respond to incentives. Scientists have strong incentives to be first to a result and to have their work noticed. Attention is a scarce resource. This creates an environment where scientists, universities, funders and journalists often hype work more than the results warrant. One example is an eye-catching paper title or a headline from a science journalist: "Muons upend all of physics."

Carl Bergstrom

Researchers used to visit libraries and browse printed journals to keep up on the latest scientific research, but this is largely a thing of the past. Today most researchers access the literature through search engines, recommender systems and, to some degree, social media platforms. That creates the same kind of filter bubble problems that we see in society more broadly. Platforms optimize engagement, and the best way to engage a person is to deliver content that grabs their attention. Although the effects are less pronounced in science, it is still an issue that is not well understood and requires more attention.

 

West and Bergstrom are co-authors of "Calling Bullshit: The Art of Skepticism in a Data-Driven World," which came out in paperback this week.

 

How does a crisis like COVID-19 further fuel these issues?

The COVID-19 crisis, like any major crisis, involves high levels of uncertainty especially at first. As we tried to understand what was happening with SARS-CoV-2 early in 2020, we were looking at a virus about which we had very little prior knowledge — it had never been in humans until just a few months before. In uncertain environments, people are especially eager for answers. This creates an uncertainty vacuum into which all sorts of nonsense flows.

While scientists take their time to understand the origin of the virus, conspiracy theorists provide ready-made answers. Those with specific agendas cherry-pick from the range of research results. Scientists strive to accelerate research by sharing work prior to peer review, but reporters and others do not always treat that work with due caution. Journals try to hasten the peer review process, but sometimes this results in low quality work slipping through.

Despite all these challenges, science has come through remarkably well. Within 15 months, 10 vaccines already have been developed, with more on the way. Scientists sequenced the genome in a matter of days, worked out the structure of the virus and its proteins in exquisite detail, and are using sequence data from around the globe to track the spread and evolution of the virus and its many variants. Despite the challenges noted in our article, science remains among the greatest human inventions for understanding our world.

The term “significant” has a unique meaning to the scientific community. Can you describe that difference? How does the push for significance affect scientific results and papers?

In the science community, “significant” generally refers to statistical significance — the idea that a research result is statistically unlikely under some null hypothesis. This is a tricky concept, not only for the public, but also for scientists. Statistical significance does not necessarily mean that the effect is of a meaningfully important size. The cutoffs for deciding statistical significance differ based on the type of data and the discipline. And once a threshold level of statistical significance becomes entrenched, humans find ways to game the system to reach it — trying different methods until something works, for example. These are major topics of discussion in science today, and researchers look for better ways to report the degree of statistical support that their results carry. Again, as with the other topics discussed in this article, it doesn’t mean science is broken. It just means that science is in an ongoing process of refinement and improvement.

Can you talk about what happens when scientists find negative or non-significant results? Why could this be a problem?

Negative results tend to be boring: This drug doesn't cure a disease, this sensor does not detect its target, this chemical reaction fails to proceed, this explanation for a phenomenon is unfounded. As a result, people are less interested in reading them, journals are less interested in publishing them, and consequently scientists often cut their losses and don't bother submitting negative results for publication. But this creates problems of its own. If scientists preferentially publish positive results, the scientific record is not an unbiased picture of scientific discovery. The positive results are in journals for everyone to read, while the negative results are hidden away in file cabinets or, more recently, on file systems. Indeed, false claims can even become established as fact, as Bergstrom and colleagues showed in 2016.

Fortunately, science has recognized this problem over the last decade and has proposed some solutions. For example, some publishers encourage the publication of negative results. Some fields have adopted a system known as "registered reports," where researchers submit their experiment for peer review before the results are available, and publishers agree before the work is done to publish the results regardless of whether the results end up positive or negative.

What are some interventions that can help reduce misinformation both in science and in communications about science?

The most important intervention is teaching the public what science is and what it is not. This includes teaching about the history and philosophy of science. It requires having scientists themselves engage with the public. It involves calling out predatory journals (non-peer-reviewed journals), being cautious with preprint papers, understanding the tactics of those pushing purposeful and disingenuous doubt about science, and paying special attention to health misinformation that looks like science but is often anything but.

With more people paying attention to science and preprints right now thanks to the COVID-19 pandemic, what are some steps the general public can take when looking at preprints or news stories about science?

The rise of preprints is a good thing for science. Instead of waiting years for results, research findings can be made available immediately. During the pandemic this has been critical. But this shortened time scale comes at a cost. Preprints are not peer-reviewed. Peer review can take months and even years, and it doesn’t guarantee foolproof results. But it does a reasonably good job at filtering out the crackpot papers and those with obvious problems.

The public and journalists have to be extra careful with preprints. Some preprints during the pandemic spread across the media landscape even though they had major problems and were later debunked by more credible experts. When referencing newly deposited preprints, readers should invest more time in investigating the author, lab and institution pushing the results. When sharing results from preprints, it is important to tag the paper as non-peer-reviewed.

That said, some of the worst and most damaging papers published during the pandemic have gone through peer review, including a paper in The Lancet that led to the cancellation of clinical trials — and later turned out to be fraudulent — so we have to be careful not to let down our guard on the peer-reviewed literature, either.

For more information, contact West at jevinw@uw.edu or Bergstrom at cbergst@uw.edu.

UW Center for an Informed Public co-authors report on mis- and disinformation surrounding the 2020 U.S. election
March 2, 2021

Cover illustration from the report "The Long Fuse: Misinformation and the 2020 Election."

The Election Integrity Partnership (EIP), a nonpartisan coalition of research institutions, including the University of Washington, that identified, tracked and responded to voting-related mis- and disinformation during the 2020 U.S. elections, released its final report, "The Long Fuse: Misinformation and the 2020 Election," on Tuesday, March 2. The report is the culmination of months of collaboration among approximately 120 people working across four organizations: the Stanford Internet Observatory, the UW Center for an Informed Public, Graphika and the Atlantic Council's Digital Forensic Research Lab.

A handful of EIP researchers, including the UW's Kate Starbird, associate professor of human centered design and engineering, will discuss key findings, insights and recommendations from the final report at an event hosted by The Atlantic Council, scheduled for noon to 1:30 p.m. PST on Wednesday, March 3. The event is free and open to the public. Register online.

The EIP’s “Long Fuse” final report expands upon the coalition’s rapid-response research and policy analysis surrounding the November 2020 U.S. election and will detail how misleading narratives and false claims about voting coalesced into the metanarrative of a “stolen election,” which propelled the Jan. 6 insurrection at the U.S. Capitol.

The EIP’s final report will also include a set of policy recommendations and share insights about how the coalition of researchers carried out their work, and how this model may be expanded to combat future large scale misinformation events.

Among the key findings:

  • Misleading and false claims and narratives coalesced into the metanarrative of a “stolen election,” which later propelled the Jan. 6 insurrection
  • Narrative spread was cross-platform: Repeat spreaders leveraged the specific features of each platform for maximum amplification
  • The primary repeat spreaders of false and misleading narratives were verified, blue-check accounts belonging to partisan media outlets, social media influencers, and political figures, including President Trump and his family
  • Many platforms expanded their election-related fact-checking and moderation policies during the 2020 election cycle, but application of moderation policies was inconsistent or unclear

The 2020 federal election demonstrated that actors — both foreign and domestic — remain committed to weaponizing viral false and misleading narratives to undermine confidence in the U.S. electoral system and erode Americans’ faith in our democracy, according to the report. Mis- and disinformation were pervasive throughout the campaign, the election, and its aftermath, spreading across all social platforms, the report found. The EIP was formed out of a recognition that the vulnerabilities in the current information environment require urgent collective action.

For more information, contact CIP Communications Manager Michael Grass at megrass@uw.edu.
