The Tech Policy Press Podcast

Tech Policy Press is a nonprofit media and community venture intended to provoke new ideas, debate and discussion at the intersection of technology and democracy. You can find us at https://techpolicy.press/, where you can join the newsletter.

In their new book, Move Slow and Upgrade: The Power of Incremental Innovation, Evan Selinger, a professor in the Department of Philosophy at Rochester Institute of Technology, and Albert Fox Cahn, founder in residence of the Surveillance Technology Oversight Project (STOP), argue that society fixates on disruptive innovation at the expense of the kind of steady incrementalism that can deliver sustainable returns over longer time frames. They argue in favor of more careful deliberation and of adopting what they call the “upgrader’s mindset,” which should be applied whenever “disruptive changes would pose the greatest social risk.”
The Pentagon wants AI that can fight wars — without limits. One of the United States’ leading AI companies says there are lines it won't cross. And this week, that standoff turned into an all-out confrontation. To discuss the implications of the dispute between Anthropic and the Pentagon, including the determination that the company represents a supply chain risk, Justin Hendrix spoke to two experts:

Kat Duffy, senior fellow for digital and cyberspace policy at the Council on Foreign Relations, and
Amos Toh, senior counsel in the Liberty and National Security Program at the Brennan Center for Justice.
Concerns about synthetic media and coordinated manipulation of online platforms have moved from theoretical worry to documented reality. Researchers, regulators, and civil society organizations are working to understand how algorithmically driven content recommendation systems can be exploited — not just by ideologically motivated actors, but by ordinary users pursuing financial gain. Fundación Maldita.es is a Spanish nonprofit that has been working on information integrity and fact-checking since 2017. Its most recent investigation focuses on TikTok, and what they found raises pointed questions about the platform's creator monetization program. Researchers at Maldita documented a network of hundreds of accounts — spanning eighteen countries — that were producing AI-generated videos of protests that never happened, and doing so not out of any discernible political motive, but to accumulate followers, qualify for TikTok's revenue-sharing program, and, in some cases, sell the accounts outright. In this episode, Justin Hendrix is joined by Maldita associate director for public policy Carlos Hernández-Echevarría and public policy officer Marina Sacristán.
As AI technologies proliferate, a growing number of people are asking what it means to live in a world dominated by algorithms and automated systems—and what gets lost when those systems optimize human behavior at scale. These questions sit at the intersection of political theory, technology policy, and everyday life, and they are drawing scholars from fields well outside computer science into the conversation. José Marichal is a political scientist at California Lutheran University who has been writing and teaching about technology and politics for more than two decades. Marichal's new book, You Must Become an Algorithmic Problem: Renegotiating the Socio-Technical Contract, considers the age of recommendation systems and large language models. Drawing on political philosophy, he argues that individuals have entered into an implicit bargain with technology companies, trading unpredictability and novelty for the convenience of algorithmically curated experience. The consequences of that bargain, he contends, reach beyond personal preference and into the foundations of liberal democratic citizenship.
This week marks the second DSA and Platform Regulation conference in Amsterdam, where experts will convene to consider the Digital Services Act (DSA) two years after it entered full effect across the European Union. Over that period, the law has been tested by national elections, geopolitical tensions, high-profile enforcement actions, and the rapid rise of generative AI. It has become both a benchmark for platform accountability and a political lightning rod. Ahead of the conference, Tech Policy Press senior editor Ramsha Jahangir spoke with members of the DSA Observatory, which is organizing the conference, to take stock. What have these first years of enforcement clarified? Where does opacity remain? And what does it mean to conduct DSA research in today’s political climate? Guests include:

John Albert, associate researcher at the DSA Observatory;
Paddy Leerssen, postdoctoral researcher at the University of Amsterdam and part of the DSA Observatory; and
Magdelena Jozwiak, associate researcher at the DSA Observatory.
A wave of lawsuits in the United States is targeting tech firms for their product design decisions. Lawyer Carrie Goldberg has played a role in establishing the product liability theory that underlies them. In 2017, her firm, C.A. Goldberg, PLLC, brought a lawsuit that sought to apply product liability theory to a tech platform — Herrick v. Grindr — arguing that a dangerous app design, not just user behavior, was the source of harm. In 2022, Goldberg was appointed to the Plaintiffs’ Steering Committee in the federal social media multidistrict litigation. She’s led cases against Amazon, Meta, and Omegle, has testified before the Senate Judiciary Committee on child safety issues, and is the author of Nobody's Victim: Fighting Psychos, Stalkers, Pervs, and Trolls. Justin Hendrix spoke to her from her offices in Brooklyn about what she's learned over the last decade, and about some ongoing litigation that remains in dispute.
"Operation Metro Surge" — the massive immigration enforcement operation playing out right now in Minnesota — was billed as a targeted effort to apprehend undocumented immigrants. But what it has exposed goes far beyond immigration enforcement. It has pulled back the curtain on a sprawling surveillance apparatus that incorporates artificial intelligence, facial recognition, and other novel tools — not just to enable the raids that have turned violent and, in some cases, deadly; but also to silence dissent, to intimidate entire communities, and to discourage people from even watching what masked federal agents are doing in their own neighborhoods. To discuss these events and the prospects for reform, Justin Hendrix spoke to Irna Landrum, a senior campaigner at Kairos Fellowship and author of a recent piece on Tech Policy Press, "How ICE Uses AI to Automate Authoritarianism," and Alejandra Montoya-Boyer, vice president for the Center for Civil Rights and Technology at the Leadership Conference on Civil and Human Rights, which has called for reforms at the Department of Homeland Security and its component agencies.
In his forthcoming book, Your Data Will Be Used Against You, George Washington University Law School professor Andrew Guthrie Ferguson explores how the rise of sensor-driven technologies, social media monitoring, and artificial intelligence can be weaponized against democratic values and personal freedoms. Smart cars, smart homes, smart watches—these devices track our most private activities, and that data can be accessed by police and prosecutors looking for incriminating clues. What should legislatures, courts, and individuals do to protect civil liberties?
The killing of 37-year old nurse Alex Pretti by federal agents in Minneapolis was filmed from multiple angles by residents of the city, and local government officials have implored the public to share evidence of immigration enforcement agents committing acts of violence with investigators. But what are the challenges of using such artifacts in the pursuit of accountability? And what is there to learn from other efforts to use video, including from social media platforms, as evidence when seeking justice for crimes by state actors? Inequality.org managing editor and Tech Policy Press fellow Chris Mills Rodrigo joins Justin Hendrix to discuss these questions and more.
Today's guest is Jennifer Lind, an associate professor of government at Dartmouth, a fellow at Chatham House London, and the author of the new book Autocracy 2.0: How China’s Rise Reinvented Tyranny, just out from Cornell Press. The book introduces the concept of 'smart authoritarianism,' a strategy that seeks to preserve political dominance while minimizing the economic damage of repression. It’s a sharp and unsettling argument—and one that is worth considering as a wave of autocratization continues to sweep across the globe, increasingly enabled by new technologies.
In a forthcoming paper, George Washington University Law School scholar Spencer Overton argues that the Trump administration's AI policy is consistent with its broader efforts to advance ethnonationalism. By eliminating policies intended to ensure safeguards against algorithmic bias—and recasting work on such problems as ideological threats to innovation—Trump's policies embed exclusion into the technological infrastructure of the future. As a growing body of research suggests, when AI systems operate without regulation, they default to dominant patterns that reproduce racial inequality and suppress cultural pluralism.
A new book titled Governing Digital China offers crucial insights into China's governance ecosystem. Written by Daniela Stockmann, a professor at the Hertie School in Berlin and director of the Center for Digital Governance, and Ting Luo, an associate professor in artificial intelligence and government at the University of Birmingham, the book reveals a more complex reality than simple top-down control. The authors show how massive tech companies like Tencent and Alibaba have become essential partners to the Chinese state, blending corporate and government power. At the same time, citizens exercise bottom-up influence, shaping how both platforms and the state respond to their needs. The result is what the authors call "popular corporatism"—a form of digital authoritarianism that operates quite differently than you might expect.
2026 is poised to be another landmark year for the child online safety debate in the United States. In recent years, states have passed dozens of bills aimed at expanding protections for kids as they navigate risks on social media platforms, AI chatbots and other tools, with more likely on the way. Lawmakers in Washington, meanwhile, are considering a flurry of proposals that could set a national standard on the issue. But many of these efforts are facing legal limbo as industry and some digital rights groups allege they violate constitutional rights and trample on privacy. Tech Policy Press senior editor Cristiano Lima-Strong spoke to three experts tracking the issue to assess the current policy landscape in the United States and how it may shift in 2026, particularly as state legislators continue to take up the cause:

Amina Fazlullah is head of tech policy advocacy at Common Sense Media, a group that advocates for child online safety measures. She previously served as a tech policy fellow for Mozilla and as director of policy at the Benton Foundation.
Joel Thayer is president of the Digital Progress Institute, a think tank that advocates for age verification policies. He previously clerked for Federal Trade Commission official Maureen Ohlhausen and served as policy counsel for the tech trade group The App Association.
Kate Ruane is the director of the Free Expression Project at the Center for Democracy and Technology, a nonprofit that advocates for digital rights. She previously served as lead public policy specialist for the Wikimedia Foundation and as senior legislative counsel for the ACLU.
In what Reuters called a "mass digital undressing spree," Elon Musk is provoking outrage after his Grok chatbot responded to user prompts to remove the clothing from images of women and pose them in bikinis and to create "sexualized images of children" and post them on X. To discuss the controversy and the broader policy implications of generative AI with regard to child sexual abuse material and nonconsensual intimate imagery, Justin Hendrix spoke to Riana Pfefferkorn, a policy fellow at the Stanford Institute for Human-Centered AI and author of numerous reports and articles on these subjects, including for Tech Policy Press.
Tech Policy Press fellow Anika Collier Navaroli joined Justin Hendrix to discuss insights from her special 2025 series of podcasts, Through to Thriving. They discussed insights from her interviews over the course of the year with Ellen Pao, Jerrel Peterson, Alice Hunsberger, Vaishnavi J, Desmond Patton, Nora Benavidez, Mimi Ọnụọha, Timnit Gebru, Jasmine McNealy, Naomi Nix, and Chris Gilliard.
On Thursday, US President Donald Trump invited reporters into the Oval Office to watch him sign an executive order intended to limit state regulation of artificial intelligence. Trump said AI is a strategic priority for the United States, and that there must be a central source of approval for the companies that develop it. Today's guest is Olivier Sylvain, a professor of law at Fordham Law School and a senior policy research fellow at the Knight First Amendment Institute at Columbia University. He's the author of "Why Trump’s AI EO Will be DOA in Court," a perspective published on Tech Policy Press.
On Friday, the European Commission fined Elon Musk’s X €120 million for breaching the Digital Services Act, delivering the first-ever non-compliance decision under the European Union’s flagship tech regulation. By Saturday, Elon Musk was calling for no less than the abolition of the EU. To discuss the enforcement action, the politics surrounding it, and a variety of other issues related to digital regulation in Europe, Justin Hendrix spoke to Joris van Hoboken, a professor at the Institute for Information Law (IViR) at the University of Amsterdam, and part of the core team of the Digital Services Act (DSA) Observatory.
On this podcast, for years we’ve discussed issues such as conspiracy theories, mis- and disinformation, polarization, and the ways in which the design and incentives of today’s technology platforms exacerbate them. Today’s guest is Calum Lister Matheson, associate professor and chair of the Department of Communication at the University of Pittsburgh and a faculty member of the Pittsburgh Psychoanalytic Center. He's the author of Post-Weird: Fragmentation, Community, and the Decline of the Mainstream, a new book from Rutgers University Press that applies a different lens to the question as he searches for insights into the seemingly inexplicable behaviors of communities such as serpent handlers, pro-anorexia groups, believers in pseudoscience, and conspiracy theorists who deny the reality of gun violence in schools.
The past few years have seen a great deal of introspection about a professional field which has come to be known as 'trust and safety,' made up of the people who develop, oversee, and enforce social media policies and community guidelines. Many scholars and advocates describe it as having reached a turning point, mostly for the worse. Joining Tech Policy Press contributing editor Dean Jackson to discuss the evolution of trust and safety—not coincidentally, the title of their forthcoming article in the Emory Law Journal—are professors of law Danielle Keats Citron and Ari Ezra Waldman. Also joining the conversation is Jeff Allen, the chief research officer at the Integrity Institute, a nonprofit whose membership is composed of trust and safety industry professionals.
This week, the European Commission unveiled a sweeping plan to overhaul how the EU enforces its digital and privacy rules as part of a ‘Digital Omnibus,’ aiming to ease compliance burdens and speed up implementation of the bloc’s landmark laws. Branded as a “simplification” initiative, the omnibus proposal touches core areas of EU tech regulation — notably the AI Act and the General Data Protection Regulation (GDPR). The Commission argues that this update is necessary to ensure practical implementation of the laws, but civil society organizations see the proposed reform as the “biggest rollback of digital fundamental rights in EU history.” At the same time, leaders are talking loudly about digital sovereignty — including at last week’s summit in Berlin. But with the Omnibus appearing to weaken protections and tilt power toward large tech firms, what kind of sovereignty is actually being built? Tech Policy Press associate editor Ramsha Jahangir spoke to two experts to understand what the EU is trying to achieve:

Leevi Saari, EU Policy Fellow at the AI Now Institute, and
Julia Smakman, Senior Researcher at the Ada Lovelace Institute.
In the latest episode in her special podcast series, Through to Thriving, Tech Policy Press fellow Anika Collier Navaroli talks about protecting privacy with Chris Gilliard. Gilliard is co-director of the Critical Internet Studies Institute and the author of Luxury Surveillance, a forthcoming book from MIT Press.
To discuss the past, present and future of information integrity work, Tech Policy Press contributing editor Dean Jackson spoke to American University Center for Security, Innovation and New Technology (CSINT) nonresident fellow Adam Fivenson and assistant professor and CSINT director Samantha Bradshaw.
This episode considers whether today’s massive AI investment boom reflects real economic fundamentals or an unsustainable bubble, and how a potential crash could reshape AI policy, public sentiment, and narratives about the future that are embraced and advanced not only by Silicon Valley billionaires, but also by politicians and governments. Justin Hendrix is joined by:

Ryan Cummings, chief of staff at the Stanford Institute for Economic Policy Research and coauthor of a recent New York Times opinion on the possibility of an AI bubble;
Sarah West, co-director of the AI Now Institute and coauthor of a Wall Street Journal opinion, "You May Already Be Bailing Out the AI Business"; and
Brian Merchant, author of the newsletter Blood in the Machine, a journalist in residence at the AI Now Institute, and author of a recent piece in Wired on signals that suggest a bubble.
This episode was recorded in Barcelona at this year’s Mozilla Festival. One session at the festival focused on how to get better access to data for independent researchers to study technology platforms and products and their effects on society. It coincided with the launch of the Knight-Georgetown Institute’s report, “Better Access: Data for the Common Good,” the product of a year-long effort to create “a roadmap for expanding access to high-influence public platform data – the narrow slice of public platform data that has the greatest impact on civic life,” with input from individuals across the research community, civil society, and journalism. In a gazebo near the Mozilla Festival mainstage, Justin Hendrix hosted a podcast discussion with three people working on questions related to data access and advocating for independent technology research:

Peter Chapman, associate director of the Knight-Georgetown Institute;
Brandi Geurkink, executive director of the Coalition for Independent Tech Research and a former campaigner and fellow at Mozilla; and
LK Seiling, a researcher at the Weizenbaum Institute in Berlin and coordinator of the DSA40 Data Access Collaboratory.

Thanks to the Mozilla Foundation and to Francisco, the audio engineer on site at the festival.
For her special series of podcasts, Through to Thriving, Tech Policy Press fellow Anika Collier Navaroli spoke to artist Mimi Ọnụọha, whose work "questions and exposes the contradictory logics of technological progress." The discussion ranged across changing trends in nomenclature of data and artificial intelligence, the role of art in bearing witness to authoritarianism, the interventions and projects that Ọnụọha has created about the datafication of society, and why artists and policy practitioners should work more closely together to build a more just and equitable future.
Ryan Calo is a professor at the University of Washington School of Law with a joint appointment at the Information School and an adjunct appointment at the Paul G. Allen School of Computer Science and Engineering. He is a founding co-director of the UW Tech Policy Lab and a co-founder of the UW Center for an Informed Public. In his new book, Law and Technology: A Methodical Approach, published by Oxford University Press, Calo argues that if the purpose of technology is to expand human capabilities and affordances in the name of innovation, the purpose of law is to establish the expectations, incentives, and boundaries that guide that expansion toward human flourishing. The book "calls for a proactive legal scholarship that inventories societal values and configures technology accordingly."
Instagram has spent years making promises about how it intends to protect minors on its platform. To explore its past shortcomings—and the questions lawmakers and regulators should be asking—I spoke with two of the authors of a new report that offers a comprehensive assessment of Instagram’s record on protecting teens:

Laura Edelson, an assistant professor of computer science at Northeastern University and co-director of Cybersecurity for Democracy, and
Arturo Béjar, the former director of ‘Protect and Care’ at Facebook who has since become a whistleblower and safety advocate.

Edelson and Béjar are two of the authors of “Teen Accounts, Broken Promises: How Instagram is Failing to Protect Minors.” The report is based on a comprehensive review of teen accounts and safety tools, and includes a range of recommendations to the company and to regulators.
Mallory Knodel, executive director of the Social Web Foundation and founder of a weekly newsletter called the Internet Exchange, and Burcu Kilic, a senior fellow at Canada’s Centre for International Governance Innovation, or CIGI, are the authors of a recent post on the Internet Exchange titled “Big Tech Redefined the Open Internet to Serve Its Own Interests,” which explores how the idea of the ‘open internet’ has been hollowed out by decades of policy choices and corporate consolidation. Kilic traces the problem back to the 1990s, when the US government adopted a hands-off, industry-led approach to regulating the web, paving the way for surveillance capitalism and the dominance of Big Tech. Knodel explains how large companies have co-opted the language of openness and interoperability to defend monopolistic control. The two argue that trade policy, weak enforcement of regulations like the GDPR, and the rise of AI have deepened global dependencies on a few powerful firms, while the current AI moment risks repeating the same mistakes. They argue that pushing back requires coordinated, democratic alternatives: stronger antitrust action, public digital infrastructure, and grassroots efforts to rebuild truly open, interoperable, and civic-minded technology systems.
It’s been three years since Europe’s Digital Services Act (DSA) came into effect, a sweeping set of rules meant to hold online platforms accountable for how they moderate content and protect users. One component of the law allows users to challenge online platform content moderation decisions through independent, certified bodies rather than judicial proceedings. Under Article 21 of the DSA, these “Out-of-Court Dispute Settlement” bodies are intended to play a crucial role in resolving disputes over moderation decisions, whether it's about content takedowns, demonetization, account suspensions, or even decisions to leave flagged content online. One such out-of-court dispute settlement body is called Appeals Centre Europe. It was established last year as an independent entity with a grant from the Oversight Board Trust, which administers the Oversight Board, the content moderation 'supreme court' created and funded by Meta. Appeals Centre Europe has released a new transparency report, and the numbers are striking: of the 1,500 disputes the Centre has ruled on, over three-quarters of the platforms’ original decisions were overturned, either because they were incorrect, or because the platform didn’t provide the content for review at all. Tech Policy Press associate editor Ramsha Jahangir spoke to two experts to unpack what the early wave of disputes tells us about how the system is working, and how platforms are applying their own rules:

Thomas Hughes, CEO of Appeals Centre Europe, and
Paddy Leerssen, a postdoctoral researcher at the University of Amsterdam and part of the DSA Observatory, which monitors the implementation of the DSA.
Drawn from the biblical story in the book of Genesis, “Babel” has come to stand for the challenge of communication across linguistic, cultural, and ideological divides—the confusion and fragmentation that arise when we no longer share a common tongue or understanding. Today’s guest is John Wihbey, an associate professor of media innovation at Northeastern University and the author of a new book titled Governing Babel: The Debate Over Social Media Platforms and Free Speech—And What Comes Next, which asks how we can create the space to imagine a different information environment, one that promotes democracy and consensus rather than division and violence. The book is out October 7 from MIT Press.
Across the United States, dozens of state governments have attempted to establish their own efficiency initiatives, some molded in the image of the federal Department of Government Efficiency (DOGE). A common theme across many of these initiatives is the "stated goal of identifying and eliminating inefficiencies in state government using artificial intelligence (AI)" and promoting "expanded access to existing state data systems," according to a recent analysis by Maddy Dwyer, a policy analyst at the Center for Democracy and Technology. To learn more about what these efforts look like and to consider the broader question of AI’s use in government, Justin Hendrix spoke to Dwyer and Ben Green, an assistant professor in the University of Michigan School of Information and in the Gerald R. Ford School of Public Policy, who has written about DOGE and the use of AI in government for Tech Policy Press.
With two new bills headed to the desk of Governor Gavin Newsom (D), California could soon pass the most significant guardrails for AI companions in the nation, sparking a lobbying brawl between consumer advocates and tech industry groups. In a recent report for Tech Policy Press, associate editor Cristiano Lima-Strong detailed how groups are pouring tens if not hundreds of thousands of dollars into the lobbying fight, which has gained steam amid mounting scrutiny of the products. Tech Policy Press CEO and Editor Justin Hendrix spoke to Cristiano about the findings, and what the state's legislative battle could mean for AI regulation in the United States. This reporting was supported by a grant from the Tarbell Center for AI Journalism.
From September 21–28, New York City will host Climate Week. Leaders from business, politics, academia, and civil society will gather to share ideas and develop strategies to address the climate crisis. The tech industry intersects with climate concerns in a number of ways, not least of which is through its own growing demand for natural resources and energy, particularly to power data centers. What should a “tech agenda” for Climate Week include? What are the most important issues that need attention, and how should challenges and opportunities be framed? Last week, Tech Policy Press hosted a live recording of The Tech Policy Press Podcast to get at these questions and more. Justin Hendrix was joined by three expert guests:

Alix Dunn, founder and CEO of The Maybe;
Tamara Kneese, director of Data & Society's Climate, Technology, and Justice Program; and
Holly Alpine, co-founder of the Enabled Emissions Campaign.
Charlie Kirk, a conservative activist and co-founder of Turning Point USA, died Wednesday after he was shot at an event at Utah Valley University. Kirk’s assassination was instantly broadcast to the world from multiple perspectives on social media platforms including TikTok, Instagram, YouTube and X. But in the hours and days that have followed, the video and various derivative versions of it have proliferated alongside an increasingly divisive debate over Kirk’s legacy, the possible motives of the assassin, and the political implications. It is clear that, in some cases, the tech platforms are struggling to enforce their own content moderation rules, raising questions about their policies and investments in trust and safety, even as AI-generated material plays a more significant role in the information ecosystem. To learn more about these phenomena, Justin Hendrix spoke to Wired senior correspondent Lauren Goode, who is covering this story.
Demand for computing power is fueling a massive surge in investment in data centers worldwide. McKinsey estimates spending will hit $6.7 trillion by 2030, with more than $1 trillion expected in the U.S. alone over the next five years. As this boom accelerates, public scrutiny is intensifying. Communities across the country are raising questions about environmental impacts, energy demands, and the broader social and economic consequences of this rapid buildout. To learn more about these debates—and the efforts to shape the industry’s future—Justin Hendrix spoke with two activists: one working at the national level, and another organizing locally in their own community.

Vivek Bharathan is a member of the No Desert Data Center Coalition in Tucson, Arizona.
Steven Renderos is executive director of MediaJustice, an advocacy organization that just released a report titled The People Say No: Resisting Data Centers in the South.
For the latest episode in her series of podcast discussions, Through to Thriving, Tech Policy Press fellow Anika Collier Navaroli spoke to Vaishnavi J, founder and principal of Vyanams Strategies (VYS), a trust and safety advisory firm focusing on youth safety, and a former safety leader at Meta, Twitter, and Google. Anika and Vaishnavi discussed a range of issues on the theme of how to center the views and needs of young people in trust and safety and tech policy development. They considered the importance of protecting the human rights of children, the debates around recent age assurance and age verification regulations, the trade-offs between safety and privacy, and the implications of what Vaishnavi called an “asymmetry” of knowledge across the tech policy community.
Today’s guest is Petter Törnberg, who with Justus Uitermark is one of the authors of a new book, titled Seeing Like a Platform: An Inquiry into the Condition of Digital Modernity, that sets out to address the “entanglement of epistemology, technology, and politics in digital modernity,” and what studying that entanglement can tell us about the workings of power. The book is part of a series of research monographs that intend to encourage social scientists to embrace a “complex systems approach to studying the social world.”
Last year, Colorado signed a first-of-its-kind artificial intelligence measure into law. The Colorado AI Act would require developers of high-risk AI systems to take reasonable steps to prevent harms to consumers, such as algorithmic discrimination, including by conducting impact assessments on their tools. But last week, the state kicked off a special session where lawmakers held frenzied negotiations over whether to expand or dilute its protections. The chapter unfolded amid fierce lobbying by industry groups and consumer advocates. Ultimately, the state legislature punted on amending the law but agreed to delay its implementation from February to June of next year. The move likely tees up another round of contentious talks over one of the nation's most sprawling AI statutes. This week, Tech Policy Press associate editor Cristiano Lima-Strong spoke to two local reporters who have been closely tracking the saga for the Colorado Sun: political reporter and editor Jesse Paul and politics and policy reporter Taylor Dolven.
On this podcast, we’ve come back again and again to questions around mis- and disinformation, propaganda, rumors, and the role that digital platforms play in anti-democratic phenomena. In a new book published this summer by Oxford University Press called Connective Action and the Rise of the Far-Right: Platforms, Politics, and the Crisis of Democracy, a group of scholars from varied research traditions set out to find new ways to marry more traditional political science with computational social science approaches to understand the phenomenon of democratic backsliding and to bring some clarity to the present moment, particularly in the United States. Justin Hendrix had the chance to speak to two of the volume’s editors and two of its authors: Steven Livingston, a professor and founding director of the Institute for Data Democracy and Politics at the George Washington University; Michael Miller, managing director of the Moynihan Center at the City College of New York; Kate Starbird, a professor at the University of Washington and a co-founder of the Center for an Informed Public; and Josephine Lukito, assistant professor at the University of Texas at Austin and senior faculty research associate at the Center for Media Engagement.
In the latest installment in her series of podcasts called Through to Thriving, Tech Policy Press fellow Anika Collier Navaroli speaks with Dr. Jasmine McNealy, an attorney, critical public interest technologist, and professor in the Department of Media Production, Management, and Technology at the University of Florida, and Naomi Nix, a staff writer for The Washington Post, where she reports on technology and social media companies. They discuss how they found themselves on the path through journalism and into a focus on tech and tech policy, the distinctions between truth and facts and whether there has ever been such a thing as a singular truth, how communities of color have historically seen and filled the gaps in mainstream media coverage, the rise of news influencers, and how journalists can regain the trust of the public.
Today’s guest, journalist Rahul Bhatia, has written a book that is part journalistic account, part history, and part memoir titled The New India: The Unmaking of the World's Largest Democracy. Reviewing the book in The Guardian, Salil Tripathi writes that “Bhatia’s remarkable book is an absorbing account of India’s transformation from the world’s largest democracy to something more like the world’s most populous country that regularly holds elections.” Bhatia considers the role of technology, including taking a close look at Aadhaar—India’s national biometric identification program—in order to consider the role it plays in the modern state and what the motivations behind it reveal.
On Thursday, Reuters tech reporter Jeff Horwitz, who broke the story of the Facebook Papers back in 2021 when he was at the Wall Street Journal, published two pieces, both detailing new revelations about Meta’s approach to AI chatbots. In a Reuters special report, Horwitz tells the story of a man with a cognitive impairment who died while attempting to travel to meet a chatbot character he believed was real. And in a related article, Horwitz reports on an internal Meta policy document that appears to endorse its chatbots engaging with children “in conversations that are romantic or sensual,” as well as other concerning behaviors. Earlier today, Justin Hendrix caught up with Horwitz about the reports and what they tell us about Silicon Valley’s no-holds-barred pursuit of AI, even at the expense of the safety of vulnerable people and children.
Daniel J. Solove is the Eugene L. and Barbara A. Bernard Professor of Intellectual Property and Technology Law at the George Washington University Law School. The project of his latest book, On Privacy and Technology, is to synthesize twenty-five years of thinking about privacy into a “succinct and accessible” volume and to help the reader understand “the relationship between law, technology, and privacy” in a rapidly changing world. Justin Hendrix spoke to him about the book and how recent events in the United States relate to his areas of concern.
Through To Thriving is a special series of podcast episodes hosted by Tech Policy Press fellow Anika Collier Navaroli. With her guests, Anika is imagining futures beyond our current moment. For this episode, she spoke with Nora Benavidez, senior counsel and director of digital justice and civil rights at the nonprofit Free Press. Anika and Nora discussed the past and present state of platform accountability advocacy, the steps of building a campaign, the possibility of forming a creative agency to support advocates, and what to make of so-called “woke AI.” This episode and conversation about advocating for change is dedicated to the memory and life of our former colleague and tech accountability researcher and advocate Brandi Collins-Dexter.
On Saturday, July 26, three days after the Trump administration published its AI action plan, China’s foreign ministry released that country’s action plan for global AI governance. As the US pursues “global dominance,” China is communicating a different posture. What should we know about China’s plan, and how does it contrast with the US plan? What's at stake in the competition between the two superpowers? To answer these questions, Justin Hendrix reached out to a close observer of China's tech policy. Graham Webster is a lecturer and research scholar at Stanford University in the Program on Geopolitics, Technology, and Governance, and he is the Editor-in-Chief of the DigiChina Project, a "collaborative effort to analyze and understand Chinese technology policy developments through direct engagement with primary sources, providing analysis, context, translation, and expert opinion." Webster attended the World Artificial Intelligence Conference in Shanghai.
Yesterday, United States President Donald Trump took to the stage at the "Winning the AI Race Summit" to promote the administration's AI Action Plan. Shortly after it was published, Tech Policy Press editor Justin Hendrix sat down with Sarah Myers West, the co-director of the AI Now Institute; Maia Woluchem, the program director of the Trustworthy Infrastructures team at Data & Society; and Ryan Gerety, the director of the Athena Coalition, to discuss the plan and what it portends for the future.
This weekend, the Americans with Disabilities Act (ADA) turns 35. Signed into law on July 26, 1990, the law provides broad anti-discrimination protections for people with disabilities in the US, and has impacted how people with disabilities interact with various technologies. To discuss how the law has aged and what the fight for equity and inclusion looks like going forward, Tech Policy Press fellow Ariana Aboulafia spoke with three leaders working at the intersection of disability and technology: Maitreya Shah, tech policy director at the American Association of People with Disabilities; Blake Reid, a professor at the University of Colorado; and Cynthia Bennett, a senior research scientist at Google.
Tech Policy Press fellow Anika Collier Navaroli is the host of Through to Thriving, a special podcast series where she talks with technology policy practitioners to explore futures beyond our current moment. For this episode, Anika spoke with two experts on Trust & Safety about balance and resilience in a notoriously difficult field. Alice Hunsberger is the head of Trust & Safety at Musubi, a firm that sells AI content moderation solutions. Jerrel Peterson is the director of content policy at Spotify. Hunsberger and Peterson discussed how they broke into the field, their observations about the current state of the industry, how to better the working relationship between civil society and industry, and their advice for the next generation of practitioners.
Last week, following months of negotiation and just weeks before the first legal deadlines under the EU AI Act take effect, the European Commission published the final Code of Practice on General-Purpose AI. The Code is voluntary and intended to help companies demonstrate compliance with the AI Act. It sets out detailed expectations around transparency, copyright, and measures to mitigate systemic risks. Signatories will need to publish summaries of training data, avoid unauthorized use of copyrighted content, and establish internal frameworks to monitor risks. Companies that sign on will see a “reduced administrative burden” and greater legal clarity, the Commission said. At the same time, both European and American tech companies have raised concerns about the AI Act’s implementation timeline, with some calling to “stop the clock” on the AI Act’s rollout. To learn more, Tech Policy Press associate editor Ramsha Jahangir spoke to Luca Bertuzzi, senior AI correspondent at MLex, to unpack the final Code of Practice on GPAI, why it matters, and how it fits into the broader rollout of the AI Act.
In the United States, state legislatures are key players in shaping artificial intelligence policy, as lawmakers attempt to navigate a thicket of politics surrounding complex issues ranging from AI safety, deepfakes, and algorithmic discrimination to workplace automation and government use of AI. The decision by the US Senate to exclude a moratorium on the enforcement of state AI laws from the budget reconciliation package passed by Congress and signed by President Donald Trump over the July 4 weekend leaves the door open for more significant state-level AI policymaking. To take stock of where things stand on state AI policymaking, Tech Policy Press associate editor Cristiano Lima-Strong spoke to two experts: Scott Babwah Brennen, director of NYU’s Center on Technology Policy, and Hayley Tsukayama, associate director of legislative activism at the Electronic Frontier Foundation (EFF).
Helen Nissenbaum, a philosopher, is a professor at Cornell Tech and in the Information Science Department at Cornell University. She is director of the Digital Life Initiative at Cornell Tech, which was launched in 2017 to explore societal perspectives surrounding the development and application of digital technology. Her work on contextual integrity, trust, accountability, security, and values in technology design led her to work with collaborators on projects such as TrackMeNot, a tool to mask a user's real search history by sending search engines a cloud of ‘ghost’ queries, and AdNauseam, a browser extension that obfuscates a user’s browsing data to protect from tracking by advertising networks. Building on such projects, in 2015, she coauthored a book with Finn Brunton called Obfuscation: A User’s Guide for Privacy and Protest. The book detailed ideas on mitigating and defeating digital surveillance. With concerns about surveillance surging in a time of rising authoritarianism and the advent of powerful artificial intelligence technologies, Justin Hendrix reached out to Professor Nissenbaum to find out what she’s thinking in this moment, and how her ideas can be applied to present day phenomena.
At Tech Policy Press we’ve been tracking the emerging application of generative AI systems in content moderation. Recently, the European Center for Not-for-Profit Law (ECNL) released a comprehensive report titled Algorithmic Gatekeepers: The Human Rights Impacts of LLM Content Moderation, which looks at the opportunities and challenges of using generative AI in content moderation systems at scale. Justin Hendrix spoke to its primary author, ECNL senior legal manager Marlena Wisniak.
If you’ve been reading Tech Policy Press closely over the last three weeks, you may have come across one or more posts from a collaboration with Data & Society called “Ideologies of Control: A Series on Tech Power and Democratic Crisis.” The articles in the series examine how powerful tech billionaires and authoritarian leaders and thinkers are leveraging AI and digital infrastructure to advance anti-democratic agendas, consolidate control, and reshape society in ways that threaten privacy, labor rights, environmental sustainability, and democratic governance. For this episode, Justin Hendrix spoke to four of the authors who made contributions to the series: Jacob Metcalf, program director of the AI On the Ground Initiative at Data & Society; Tamara Kneese, program director of the Climate, Technology and Justice program at Data & Society; Reem Suleiman, outgoing US advocacy lead at the Mozilla Foundation and member of the city of Oakland's Privacy Advisory Commission; and Kevin De Liban, founder of TechTonic Justice.
For a special series of episodes dubbed Through to Thriving that will air throughout the year, Tech Policy Press fellow Anika Collier Navaroli is hosting discussions intended to help us imagine possible futures—for tech and tech policy, for democracy, and society—beyond the moment we are in. The third episode in the series features her conversation with Dr. Timnit Gebru, the founder and executive director of the Distributed Artificial Intelligence Research Institute. Last year, Dr. Gebru wrote a New York Times opinion essay that asked, “Who Is Tech Really For?” In the piece, she also asked, “what would an internet that served my elders look like?” This year, DAIR has continued to ask these questions by hosting an event and a blog called Possible Futures that imagines “what the world can look like when we design and deploy technology that centers the needs of our communities.” In one of these pieces, Dr. Gebru, along with her colleagues Asmelash Teka Hadgu and Dr. Alex Hanna, describes “An Internet for Our Elders.”
Concerns about AI chatbots delivering harmful, even profoundly dangerous advice or instructions to users are growing. There is deep concern over the effects of these interactions on children, and a growing number of stories—and lawsuits—about when things go wrong, particularly for teens. In this conversation, Justin Hendrix is joined by three legal experts who are thinking deeply about how to address questions related to chatbots, and about the need for substantially more research on human-AI interaction: Clare Huntington, Barbara Aronstein Black Professor of Law at Columbia Law School; Meetali Jain, founder and director of the Tech Justice Law Project; and Robert Mahari, associate director of Stanford's CodeX Center.
In Europe, the digital regulatory landscape is in flux. Over the past few years, the EU has positioned itself as a global leader in tech regulation, rolling out landmark laws like the AI Act. But now, as the much-anticipated AI Act approaches implementation, the path forward is looking anything but smooth. Reports suggest the European Commission is considering a delay to the AI Act’s rollout due to mounting pressure from industry, difficulties in finalizing technical standards, and geopolitical tensions—including pushback from the US government. At the same time, a broader movement for Europe to reduce its dependence on American tech is gaining momentum: What does this push for digital sovereignty actually mean? To help us unpack all of this, Tech Policy Press associate editor Ramsha Jahangir spoke to Kai Zenner, Head of Office and Digital Policy Advisor to German MEP Axel Voss, and one of the more influential voices shaping the future of EU digital policy.
For a special series of episodes dubbed Through to Thriving that will air throughout the year, Tech Policy Press fellow Anika Collier Navaroli is hosting discussions intended to help us imagine possible futures—for tech and tech policy, for democracy, and society—beyond the moment we are in. The second episode in the series features her conversation with Dr. Desmond Upton Patton, who has long studied the intersection of technology and social issues and advised companies developing technologies and policies for social media and AI. Dr. Patton is the Brian and Randi Schwartz University Professor and Penn Integrates Knowledge University Professor at the University of Pennsylvania, and he serves on the board of Tech Policy Press. Recently, Dr. Patton has been teaching a class within Annenberg and the School of Social Policy & Practice called "Journey to Joy: Designing a Happier Life." In this episode, he discusses his personal and intellectual journey, and what the concept of joy has to do with technology and how we imagine the future.
In this episode, Justin Hendrix speaks with Nerima Wako-Ojiwa, director of Siasa Place, and Odanga Madung, a tech and society researcher and journalist, about the intersection of technology, labor rights, and political power in Kenya and across Africa. The conversation explores the ongoing struggles of content moderators and AI data annotators, who face exploitative working conditions while performing essential labor for major tech companies; the failure of platforms to address harmful biases and disinformation that particularly affect African contexts; the ways in which governments increasingly use platform failures as justification for internet censorship and surveillance; and the promise of youth and labor movements that point to a more just and democratic future.
Canadian political leaders are in a precarious moment. Fresh off the resignation of former Prime Minister Justin Trudeau and ascendancy of his successor, new Prime Minister and Liberal Party leader Mark Carney, the nation faces a brewing trade war with the United States and a deteriorating relationship with its president, Donald Trump. In addition to managing those global tensions, Canadian leaders have a long to-do list on tech policy, including figuring out the nation’s approach to artificial intelligence and online harms. How will the new Carney-led government in Canada navigate those issues? Tech Policy Press associate editor Cristiano Lima-Strong spoke to three experts to get a sense: Renee Black, founder of goodbot, where she works on preventing harmful disinformation and bias and establishing frameworks that protect digital rights; Maroussia Lévesque, a doctoral candidate and lecturer at Harvard Law School, an affiliate at the Berkman Klein Center, and a senior fellow at the Center for International Governance Innovation; and Vass Bednar, a public policy entrepreneur working at the intersection of technology and public policy.
Emily M. Bender and Alex Hanna are the authors of a new book that The Guardian calls “refreshingly sarcastic” and Business Insider calls a “funny and irreverent deconstruction of AI.” They are also occasional contributors to Tech Policy Press. Justin Hendrix spoke to them about their new book, The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want, just out from HarperCollins.
Earlier this year, an entity called the Observatory on Information and Democracy released a major report titled Information Ecosystems and Troubled Democracy: A Global Synthesis of the State of Knowledge on News Media, AI and Data Governance. The report is the product of three research assessment panels comprising over 60 volunteer researchers, coordinated by six rapporteurs and led by a scientific director, that together considered over 1,600 sources on topics at the intersection of technology, media, and democracy, ranging from trust in news to how mis- and disinformation is linked to societal and political polarization. Justin Hendrix spoke to that scientific director, Robin Mansell, and one of the other individuals involved in the project as chair of its steering committee, Courtney Radsch, who is also on the board of Tech Policy Press.
In February, California Governor Gavin Newsom appointed Vera Zakem as California’s State Chief Technology Innovation Officer at the California Department of Technology. Zakem brings deep experience from national security, democracy and human rights, and technology policy. Most recently, under former President Joe Biden, she served as the Chief Digital Democracy and Rights Officer at USAID, where she led global efforts to align emerging technologies with democratic values. Zakem assumes the role as California, like many governments, is accelerating its embrace of artificial intelligence. Justin Hendrix spoke with Zakem about the promise of state-led innovation and how to avoid its perils, what responsible AI governance might mean in practice, and how California might chart a course that’s both ambitious and accountable to its citizens.
On May 29, the Center for Civil Rights and Technology at The Leadership Conference on Civil and Human Rights released its Innovation Framework, which it calls a “new guiding document for companies that invest in, create, and use artificial intelligence (AI), to ensure that their AI systems protect and promote civil rights and are fair, trusted, and safe for all of us, especially communities historically pushed to the margins.” Justin Hendrix spoke to the Center’s senior policy advisor on Civil Rights and Technology, Frank Torres, about the framework, the ideas that informed it, and the Center’s interactions with industry.
On Thursday, May 22, the United States House of Representatives narrowly advanced a budget bill that included the "Artificial Intelligence and Information Technology Modernization Initiative," which would impose a 10-year moratorium on the enforcement of state AI laws. Tech Policy Press editor Justin Hendrix and associate editor Cristiano Lima-Strong discussed the moratorium, the contours of the debate around it, and its prospects in the Senate.
In his New York Times review of the book, Columbia Law School professor and former White House official Tim Wu calls journalist Karen Hao’s new book, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, “a corrective to tech journalism that rarely leaves Silicon Valley.” Hao has appeared on this podcast before, to help us understand how the business model of social media platforms incentivizes the deterioration of information ecosystems, the series of events around OpenAI CEO Sam Altman’s abrupt firing in 2023, and the furor around the launch of DeepSeek last year. This week, Justin Hendrix spoke with Hao about the book, and what she imagines for the future.
Today’s guest is Milton L. Mueller,  a professor at the Georgia Institute of Technology in the School of Public Policy and the head of an advocacy policy analysis group called the Internet Governance Project. Mueller has long walked the halls and sat in the rooms where internet governance is discussed and debated, and has played a role in shaping global Internet policies and institutions. He’s the author of a new book called Declaring Independence in Cyberspace: Internet Self-Governance and the End of US Control of ICANN, which takes us into those rooms, telling the story of how and why the US government gave up its control of ICANN, a key internet governance institution responsible for internet names, numbers, and protocols. That history tells us a lot about where we are today when it comes to the broader geopolitics and governance of technology, and it has implications for the governance fights ahead, including over artificial intelligence.
In the wake of the most intense India-Pakistan escalation in two decades, experts are still trying to make sense of the role that the information war played in the physical one. In this episode, Tech Policy Press associate editor Ramsha Jahangir speaks to two experts from India and Pakistan who tirelessly navigated the deluge of rumor and disinformation during the crisis, and who came away with thoughts about the role of social media platforms and the incentives they create, particularly in times of conflict: Pratik Sinha, co-founder and editor at Alt News, one of India’s major fact-checking websites, and Asad Baig, founder of Media Matters for Democracy, a non-profit focused on media literacy and development in Pakistan. Sinha and Baig reflect on how the India-Pakistan conflict played out across digital platforms—and how it revealed a deeper, more dangerous dysfunction in the information ecosystem.
Last year, a United States federal judge ruled that Google is a monopolist in the market for online search. For the past three weeks, the company and the Justice Department have been in court to hash out what remedies might look like. Tech Policy Press associate editor Cristiano Lima-Strong spoke to two experts who are following the case closely: Karina Montoya, a senior reporter and analyst for the Center for Journalism and Liberty at the Open Markets Institute, and Joseph Coniglio, the director of antitrust and innovation at the Information Technology and Innovation Foundation (ITIF).
Last year, Elon Musk's xAI set up its "Colossus" supercomputer in an old Electrolux manufacturing facility in Memphis, Tennessee. Now, the residents of nearby neighborhoods are pushing for facts and fair treatment as the company looks to expand its footprint amid questions about its environmental impact. Justin Hendrix considers the state of play with Dara Kerr, a reporter for The Guardian; Amber Sherman, a Memphis activist; and artifacts from local media reporting over the past year.
Catherine Bracy is a civic technologist and community organizer whose work focuses on the intersection of technology and political and economic inequality. Justin Hendrix spoke with her about her new book, World Eaters: How Venture Capital is Cannibalizing the Economy. In it, she argues that the venture capital industry must be reformed to deliver true innovation that advances society rather than merely outsized returns for an increasingly monolithic set of investors.
From visions of AI paradise to the project to defeat death, many dangerous and unscientific ideas are driving Silicon Valley leaders. Justin Hendrix spoke to Adam Becker, a science journalist and author of MORE EVERYTHING FOREVER: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity, just out from Basic Books.
For a special series of episodes that will air throughout the year, Tech Policy Press fellow Anika Collier Navaroli is hosting a series of discussions intended to help us imagine possible futures—for tech and tech policy, for democracy, and society—beyond the moment we are in. Dubbed Through to Thriving, the first episode in the series features a discussion on how to build community and solidarity with Ellen Pao, currently the co-founder of a nonprofit called Project Include, which focuses on advancing diversity and inclusion in the tech sector. Previously, Pao was the interim CEO of Reddit and a venture capitalist.
Last month, a group of researchers published a letter “Affirming the Scientific Consensus on Bias and Discrimination in AI.” The letter, published at a time when the Trump administration is rolling back policies and threatening research aimed at protecting people from bias and discrimination in AI, carries the signatures of more than 200 experts. To learn more about their goals, Justin Hendrix spoke to three of the signatories: J. Nathan Matias, an Assistant Professor in the Department of Communication and Information Science at Cornell University; Emma Pierson, an Assistant Professor of Computer Science at the University of California, Berkeley; and Suresh Venkatasubramanian, a Professor of Computer Science and Data Science at Brown University.
On Monday, April 14, the US Federal Trade Commission (FTC) will kick off its trial against Meta. In process for years, the case is over whether Mark Zuckerberg’s company has an illegal monopoly over social media and whether it should be forced to spin off Instagram and WhatsApp. To prepare to cover the arguments, Tech Policy Press associate editor Cristiano Lima-Strong spoke to two experts to better understand the issues at play. William (Bill) Kovacic is a Professor of Law and Policy and Director of the Competition Law Center at the George Washington School of Law. From January 2006 to October 2011, he was a member of the Federal Trade Commission and chaired the agency from March 2008 to March 2009. And for nearly a decade, Professor Kovacic served as a Non-Executive Director with the United Kingdom's Competition and Markets Authority. Gene Kimmelman is a senior policy fellow at Yale’s Tobin Center for Economic Policy. He was the Justice Department’s deputy associate attorney general during the Biden administration, and he has served as chief counsel to the head of the DOJ Antitrust Division and the Senate Antitrust Subcommittee.
On April 4, The New York Times reported that the European Commission is considering fining X, formerly Twitter, as part of its ongoing DSA investigation, which began in 2023. Tech Policy Press has discussed at length the extent and quality of transparency from platforms under the DSA, but there is limited insight into how the Commission is conducting its investigations into large online platforms and search engines. In most cases, the publicly available documents on cases are just press releases, while enforcement strategies and methods are not spelled out. To delve into the challenges this lack of transparency presents and how it impacts the public's understanding of the DSA, Tech Policy Press associate editor Ramsha Jahangir spoke to two researchers: Jacob van de Kerkhof, a PhD researcher at Utrecht University whose research is focused on the DSA and freedom of expression; and Matteo Fabbri, a PhD candidate at IMT School for Advanced Studies in Lucca, Italy, and a visiting scholar at the Institute for Information Law at the University of Amsterdam, who recently published a research article titled "The Role of Requests for Information in Governing Digital Platforms Under the Digital Services Act: The Case of X."
Across the United States and in some cities abroad yesterday, protestors took to the streets to resist the policies of US President Donald Trump. Dubbed the "Hands Off" protests, over 1,400 events took place, including in New York City, where protestors called for billionaire Elon Musk to be ousted from his role in government and for an end to the Department of Government Efficiency (DOGE), which has gutted government agencies and programs and sought to install artificial intelligence systems to purportedly identify wasteful spending and reduce the federal workforce. In this conversation, Justin Hendrix is joined by four individuals who are following DOGE closely. The conversation touches on the broader context and history of attempts to use technology to streamline and improve government services, the apparent ideology behind DOGE and its conception of AI, and what the future may look like after DOGE. Guests include: Eryk Salvaggio, a visiting professor at the Rochester Institute of Technology and a fellow at Tech Policy Press; Rebecca Williams, a senior strategist in the Privacy and Data Governance Unit at ACLU; Emily Tavoulareas, who teaches and conducts research at Georgetown's McCourt School for Public Policy and is leading a project to document the founding of the US Digital Service; and Matthew Kirschenbaum, Distinguished University Professor in the Department of English at the University of Maryland.
On Tuesday, March 25th, Tech Policy Press hosted a webinar discussion to talk shop with others on the tech and democracy beat. We gathered seven colleagues from around the world to explore how tech journalists are grappling with the current political moment in the United States and beyond. In this episode, you'll hear the first session of the day, which features Tech Policy Press associate editor Ramsha Jahangir in discussion with Rina Chandran, Rest of World; Natalia Anteleva, Coda Story; Anupriya Datta, Euractiv; and Anisha Dutta, an award-winning investigative reporter. This discussion delved into the global implications of these developments and key lessons from reporting in various political contexts. Questions included: What key narratives are emerging globally from recent shifts in US policy? How is the rise of a tech oligarchy shaping technology coverage outside the US? What practical lessons can journalists learn from reporting on technology and politics in non-Western contexts?
On Tuesday, March 25th, Tech Policy Press hosted a webinar discussion to talk shop with others on the tech and democracy beat. We gathered seven colleagues from around the world to explore how tech journalists are grappling with the current political moment in the United States and beyond. In this episode, you'll hear a session from the day featuring a discussion with Michael Masnick from Techdirt, Vittoria Elliot from Wired, and Emmanuel Maiberg from 404 Media. This session explored the intersection of technology and the current political situation in the US. Key questions included: How are tech journalists addressing the current situation, and why is their perspective so crucial? What critical questions are journalists covering the intersection of tech and democracy currently asking? How does the field approach reporting on anti-democratic phenomena and the challenges journalists face in this work?
Every now and again, a story with a significant technology element really breaks through and drives the news cycle. This week, the Trump administration is reeling after The Atlantic magazine's Jeffrey Goldberg revealed that he was on the receiving end of Yemen strike plans in a Signal group chat between US Secretary of Defense Pete Hegseth and other top US national security officials. User behavior, a common failure point, appears to be to blame in this scenario. But what are the broader contours and questions that emerge from this scandal? To learn more, Justin Hendrix spoke to two guests. Ryan Goodman is the Anne and Joel Ehrenkranz Professor of Law at New York University School of Law and co-editor-in-chief of Just Security; he served as special counsel to the general counsel of the Department of Defense (2015-16). Cooper Quintin is a senior staff technologist at the Electronic Frontier Foundation (EFF); he has worked on projects including Privacy Badger, Canary Watch, and analysis of state-sponsored malware campaigns such as Dark Caracal.
Last week, President Donald Trump ordered the firing of two Democratic members of the Federal Trade Commission, an independent agency that enforces federal consumer protection and competition laws and that, under former President Joe Biden, turned up its scrutiny of the tech sector's biggest companies. The two commissioners, Alvaro Bedoya and Rebecca Kelly Slaughter, plan to challenge Trump's firing, which they said will only benefit billionaire tech moguls like Mark Zuckerberg and Jeff Bezos. Tech Policy Press Associate Editor Cristiano Lima-Strong spoke to Bedoya on Monday, March 24.
What is necessary to develop a future that is less hospitable to authoritarianism and, indeed, to fascism? How do we build collective power against authoritarian forms of corporate and state power? Is an alternative form of computing possible? Dan McQuillan is the author of Resisting AI: An Anti-fascist Approach to Artificial Intelligence, published in 2022 by Bristol University Press.
Dr. Alondra Nelson holds the Harold F. Linder Chair and leads the Science, Technology, and Social Values Lab at the Institute for Advanced Study, where she has served on the faculty since 2019. From 2021 to 2023, she was deputy assistant to President Joe Biden and acting director and principal deputy director for science and society of the White House Office of Science and Technology Policy. She was deeply involved in the Biden administration’s approach to artificial intelligence. She led the development of the White House “Blueprint for an AI Bill of Rights,” which informed President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. To say the Trump administration has taken a different approach to AI and how to think about its role in government and in society would be an understatement. President Trump rescinded President Biden’s executive order and is at work developing a new approach to AI policy. At the Paris AI Action Summit in February, Vice President JD Vance promoted a vision of American dominance and challenged other nations that would seek to regulate American AI firms. And then there is DOGE, which is at work gutting federal agencies with the stated intent of replacing key government functions with AI systems and using AI to root out supposed fraud and waste. This week, Justin Hendrix had the chance to speak with Dr. Nelson about how she’s thinking about these phenomena and the work to be done in the years ahead to secure a more just, democratic, and sustainable future.
The goal of achieving "artificial general intelligence," or AGI, is shared by many in the AI field. OpenAI’s charter defines AGI as "highly autonomous systems that outperform humans at most economically valuable work,” and last summer, the company announced its plan to achieve AGI within five years. While other experts at companies like Meta and Anthropic quibble with the term, many AI researchers recognize AGI as either an explicit or implicit goal. Google DeepMind went so far as to set out "Levels of AGI,” identifying key principles and definitions of the term. Today’s guests are among the authors of a new paper that argues the field should stop treating AGI as the north-star goal of AI research. They include: Eryk Salvaggio, a visiting professor in the Humanities Computing and Design department at the Rochester Institute of Technology and a Tech Policy Press fellow; Borhane Blili-Hamelin, an independent AI researcher and currently a data scientist at the Canadian bank TD; and Margaret Mitchell, chief ethics scientist at Hugging Face.
A year ago, Europe’s Digital Markets Act—the DMA—went into effect. The European Commission says the purpose of the regulation is to make “digital markets in the EU more contestable and fairer.” In particular, the DMA regulates gatekeepers, the large digital platforms whose position gives them greater leverage over the digital economy. One year in, how has the DMA performed? Do Europeans enjoy more choice and competition? And what are the new politics of the DMA as European regulations are contested by the Trump administration and its supporters in US industry? To answer these questions and more, Tech Policy Press contributing editor Dean Jackson spoke to a set of experts following a conference hosted by the Knight-Georgetown Institute titled “DMA and Beyond.” His guests include: Alissa Cooper, Executive Director of the Knight-Georgetown Institute (KGI); Anu Bradford, Henry L. Moses Professor of Law and International Organization at Columbia Law School; Haeyoon Kim, a Non-Resident Fellow at the Korea Economic Institute (KEI); and Gunn Jiravuttipong, a JSD Candidate and Miller Fellow at Berkeley Law School.
Could AI help design better, more democratic platforms and online environments for public discourse? What are the opportunities, challenges, and risks of deploying AI in contexts where people are engaged in political discussion? Today’s guests are among the more than two dozen authors of a new paper on AI and the future of digital public squares: Audrey Tang, Taiwan's Cyber Ambassador and former Digital Minister; Ravi Iyer, managing director of the USC Marshall School Neely Center for Ethical Leadership and Decision Making; and Beth Goldberg, head of R&D at Jigsaw and a lecturer at the Yale School of Public Policy.
On this podcast, we regularly engage with questions about redesigning social media networks to make them more democratic, pluralist, and prosocial. One hypothesis people have about how to do that is through the decentralization of platforms and the introduction of middleware—tools built to give users more control over their social media experience and, thus, more autonomy in how they engage in public discourse. In this episode, you’ll hear a discussion with one entrepreneur building middleware for Bluesky: Rudy Fraser, the founder of Blacksky Algorithms and a fellow at the Berkman Klein Center for Internet & Society at Harvard University.
Last week, Tech Policy Press joined the Latin American Center for Investigative Journalism (EL CLIP) in publishing a report and series of articles documenting how adults use public Facebook groups to identify and target accounts that appear to belong to children, with the aim of sexual exploitation. The “Innocence at Risk (Inocencia en Juego)” project, coordinated by EL CLIP with participation from Chequeado, includes a report from Lara Putnam, a professor of Latin American history and Director of the Civic Resilience Initiative of the Institute for Cyber Law, Policy, and Security at the University of Pittsburgh, and independent reports from journalists across Latin America investigating a pattern of behavior on the platform’s public groups in Colombia, Venezuela, and Argentina. They published their reports in EL CLIP, Chequeado, Crónica Uno, El Espectador, and Factchequeado. This episode features a discussion with Lara Putnam and Pablo Medina Uribe, who led the project at EL CLIP.
On January 22, President Donald Trump terminated all three Democratic members of the Privacy and Civil Liberties Oversight Board (PCLOB), an intelligence watchdog charged with monitoring the United States government's compliance with procedural safeguards on surveillance activities. The PCLOB's independence is also of concern to the European Commission, which relies on its reports in its assessment of whether US intelligence practices are aligned with EU Data Protection Framework standards. On February 24, two of the three terminated members filed suit against the government, arguing they were wrongfully terminated and must be reinstated. The outcome could determine the independence and effectiveness of the PCLOB going forward. This episode explores what's at stake in this matter, and it features three segments: excerpts from remarks by the remaining PCLOB board member, Republican Beth Williams, at the annual State of the Net conference on February 11 in Washington, DC; an interview with former board member Travis LeBlanc conducted just days before he filed suit against the government; and an interview with Greg Nojeim, Senior Counsel and Director of the Security and Surveillance Project at the Center for Democracy & Technology.
Tech Policy Press Associate Editor Ramsha Jahangir hosts a roundtable discussion on the first systemic risk assessments and independent audit reports from Very Large Online Platforms and Search Engines produced in compliance with the European Union's Digital Services Act. Ramsha is joined by: Hillary Ross, program lead at the Global Network Initiative (GNI); Magdalena Jozwiak, associate researcher at the DSA Observatory; and Svea Windwehr, the assistant director of EU policy at the Electronic Frontier Foundation (EFF).
This week, RightsCon, which bills itself as "the world’s leading summit on human rights in the digital age," descends on Taipei. To better understand the dynamics in the civil society community working on digital rights and tech policy matters in Taiwan, Justin Hendrix spoke to three experts: Liu I-Chen (劉以正), Asia Program Officer at ARTICLE 19; Kuan-Ju Chou (周冠汝), Deputy Secretary-General of the Taiwan Association for Human Rights; and Grace Huang (黃寬心), Director for Global Justice and Digital Freedom at the Judicial Reform Foundation.
At the Paris AI Action Summit on February 10-11, remarks by EU and US leaders indicated significant divergence on how to think about AI. But on balance, nations are moving decisively toward innovation and exploitation of this technology and away from containing it or restricting it. In this episode, Justin Hendrix surfaces voices from the Summit, as well as reactions and discussion on these matters at this year's State of the Net conference on February 11 in Washington, DC, including comments by Center for Democracy & Technology vice president for policy Samir Jain, Abundance Institute head of AI policy Neil Chilson, and former Biden administration assistant director for AI policy Olivia Zhu.
Over the last two decades, as Berlin reinvented itself as a "creative city," social media both mirrored and shaped shifting social landscapes—offering new possibilities while also reinforcing inequalities. How did digital media practices reshape urban life? And what can Berlin’s story tell us about the broader relationship between technology, culture, and the places we live? Today’s guest is Jordan H. Kraemer, the author of a new book that tries to answer these questions and more. It's called Mobile City: Emerging Media, Space, and Sociality in Contemporary Berlin, published by Cornell University Press.
As Donald Trump’s second presidency enters its third week, Elon Musk is center stage as the Department of Government Efficiency moves to gut federal agencies. In this episode, Justin Hendrix speaks with two experts who are following these events closely and thinking about what they tell us about the relationship between technology and power: David Kaye, a professor of law at the University of California Irvine and formerly the UN Special Rapporteur on Freedom of Expression; and Yaël Eisenstat, director of policy impact at Cybersecurity for Democracy at New York University.
Justin Hendrix speaks with Jathan Sadowski, a senior lecturer in the Faculty of Information Technology at Monash University in Melbourne, Australia; co-host of This Machine Kills, a weekly podcast on technology and political economy; and author of the new book The Mechanic and the Luddite: A Ruthless Criticism of Technology and Capitalism from the University of California Press.
If Chinese AI startup DeepSeek’s efficiency and performance achievements stand up to scrutiny, it could have big implications for the AI race. It could call into question the strategic approach that the biggest US firms appear to be taking and the wisdom of the current American policy approach to AI. To discuss these issues, Justin Hendrix spoke to Karen Hao, a reporter who covers AI. In recent years, she's reported on China and tech for the Wall Street Journal, written about AI for The Atlantic, and run a program for the Pulitzer Center to teach other journalists how to report on AI. Hao has a book about OpenAI, the AI industry, and its global impacts that will be released later this year.
From Executive Orders on AI and cryptocurrency to "ending federal censorship," President Donald Trump had a busy first week in the White House. Justin Hendrix discussed the news with Damon Beres, a senior editor at The Atlantic, where he oversees the technology section. Beres wrote a piece reflecting on Trump's inauguration titled "Billions of People in the Palm of Trump’s Hand."
This episode features two segments. First, we hear from Nikki Gladstone, director of RightsCon, the annual conference organized by Access Now on issues at the intersection of human rights and technology. And in the second, you’ll hear from Robin Berjon and Sean McDonald, two of the folks behind Free Our Feeds, a new effort to establish a public interest foundation that will work to make Bluesky’s underlying tech (the AT Protocol) resistant to billionaire capture.
Today, Friday, January 17, 2025, the US Supreme Court delivered its order upholding the constitutionality of the Protecting Americans from Foreign Adversary Controlled Applications Act, a law passed by Congress and signed by President Joe Biden in April 2024. The Court found that the Act, which effectively bans TikTok in the US unless its Chinese parent company, ByteDance, sells it, does not violate the First Amendment rights of TikTok, its users, or creators. The decision clears the way for a ban to go into effect on January 19, 2025. Late this evening, TikTok issued a statement saying that “Unless the Biden Administration immediately provides a definitive statement to satisfy the most critical service providers assuring non-enforcement, unfortunately TikTok will be forced to go dark on January 19.” The White House had previously announced it would not enforce the ban before President Biden leaves office on Monday. Unless Biden takes action, this may set President-elect Donald Trump up to somehow come to TikTok’s rescue. To learn more about the ruling and what may happen next, Justin Hendrix spoke to Kate Klonick, an associate professor of law at St. John's University and a fellow at Brookings, Harvard's Berkman Klein Center, and the Yale Information Society Project. The conversation also touches on recent moves by Meta’s founder and CEO, Mark Zuckerberg, to ingratiate himself to the incoming Trump administration.
Last fall, Cornell University PhD candidate Cristiana Firullo gave a presentation at the Trust and Safety Research Conference at Stanford University during a session on understanding algorithms and online environments. Titled "The Cursed Equilibrium of Algorithmic Traumatization," the talk focused on the work Firullo is doing with her colleagues at Cornell to try to understand why social media recommendation systems may produce harmful effects on users. Audio reporter Rebecca Rand spoke to Firullo about their hypotheses.
Even as the new year ushers in a new administration and Congress in the US at the federal level, dozens of states are kicking off new legislative sessions and are expected to pursue various tech policy goals. Justin Hendrix spoke to three experts to get a sense of the trends unfolding across the states on the regulation of AI, privacy, child online safety, and related issues: Keir Lamont, senior director at the Future of Privacy Forum (FPF) and author of The Patchwork Dispatch, a newsletter on state tech policy issues; Caitriona Fitzgerald, deputy director at the Electronic Privacy Information Center (EPIC), which runs a state privacy policy project and scores AI legislation; and Scott Babwah Brennen, director of the Center on Technology Policy at New York University and an author of a recent report on trends in state tech policy.
This week’s guest is Dr. Ruha Benjamin, Alexander Stewart 1886 Professor of African American Studies at Princeton University and Founding Director of the Ida B. Wells Just Data Lab. Benjamin was recently named a 2024 MacArthur Fellow, and she’s written and edited multiple books, including 2019’s Race After Technology and 2022’s Viral Justice. Last week she joined Justin Hendrix to discuss her latest book, Imagination: A Manifesto, published this year by WW Norton & Company.
This close to the end of 2024, it’s clear that one of the most significant tech stories of the year was the outcome of the Google search antitrust case. It will also make headlines next year and beyond as the remedies phase gets worked out in the courts. For this episode, Justin Hendrix turns the host duties over to someone who has looked closely at this issue: Alissa Cooper, the Executive Director of the Knight-Georgetown Institute (KGI). Alissa hosted a conversation with three individuals who are following the remedies phase with an expert eye: Cristina Caffarra, a competition economist, an honorary professor at University College London, and cofounder of the Competition Research Policy Network at CEPR (Centre for Economic Policy Research), London; Kate Brennan, associate director at the AI Now Institute; and David Dinielli, an attorney and a visiting clinical lecturer and senior research scholar at Yale Law School.
Kate Starbird is a professor in the Department of Human Centered Design & Engineering and director of the Emerging Capacities of Mass Participation Laboratory at the University of Washington, and co-founder of the University of Washington's Center for an Informed Public. Justin Hendrix interviewed her about her team’s ongoing efforts to study online rumors, including during the 2024 US election; the differences between the left and right media ecosystems in the US; and how she believes the research field is changing.
Mass migration presents a challenge to democracy in multiple ways. Chief among them is that anti-immigrant sentiment often plays a major role in the advance of illiberal and anti-democratic politics. We've seen this play out in the United States, where President-elect Donald Trump has promised a dramatic crackdown on immigration and the mass deportation of millions. But the scale of today's migration may be dwarfed by what's to come. How has the movement of people affected the politics driving the development of surveillance, biometrics, big data, and artificial intelligence technologies? And how do these technologies employed at borders and in governments themselves drive policy and change the way we think about the movement of people? Today's guest has spent years traveling the world to study how technology is being deployed in border regions and conflict zones, and she's written a book about it. Petra Molnar is a lawyer and an anthropologist and the author of The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence.
Robert Gorwa is the author of a new book titled The Politics of Platform Regulation: How Governments Shape Online Content Moderation, published by Oxford University Press. (The book is available open access as a free download.) It is an analysis of how and why governments around the world engage in platform regulation. The lessons he draws from case studies of key regulatory developments in Europe, the United States, New Zealand, and Australia help explain the adoption of different regulatory strategies by these governments and the underlying politics that shape their approach.
At its November 21st "Summit of the Future of the Internet," billionaire Frank McCourt's Project Liberty hosted a panel discussion featuring Congresswoman Nancy Mace, a Republican from South Carolina, on a panel with Congressman Ro Khanna, a Democrat from California, that was moderated by the media personality Charlamagne tha God. Last month, Congresswoman Mace led an effort to ban transgender women from using female bathrooms at the US Capitol in response to the election of Sarah McBride, who is set to be the first openly transgender person in Congress, representing voters in Delaware. Evan Greer, director of Fight for the Future, a tech advocacy organization, took the opportunity to confront Congresswoman Mace's bigotry during the Project Liberty conference. Justin Hendrix spoke to Evan last week about the incident, where she believes the tech accountability and digital rights movement should draw the line when it comes to engaging with far-right politicians, and how we can go about building spaces where we can imagine a different future that is truly just and liberatory.
During his recent campaign, President-elect Donald Trump made various promises consistent with the ongoing effort by Elon Musk and MAGA Republicans to target researchers and civil society groups that study issues such as propaganda and mis- and disinformation. Today's guest has looked deeply at this effort, conducting an analysis of over 1,800 pages of primary documents to identify the strategic approaches employed by these parties, including the House Judiciary Select Subcommittee on the Weaponization of the Federal Government, and the outcomes and broader democratic implications of the campaign. Philip M. Napoli is the James R. Shepley Professor of Public Policy, the Director of the DeWitt Wallace Center for Media & Democracy, and Senior Associate Dean for Faculty and Research for the Sanford School at Duke University. His findings are published in a new paper in The Information Society titled "In pursuit of ignorance: The institutional assault on disinformation and hate speech research."
Parmy Olson is a Bloomberg Opinion columnist covering technology regulation, artificial intelligence, and social media. Her new book, Supremacy: AI, ChatGPT, and the Race that Will Change the World tells a tale of rivalry and ambition as it chronicles the rush to exploit artificial intelligence. The book explores the trajectories of Sam Altman and Demis Hassabis and their roles in advancing artificial intelligence, the challenges posed by corporate power, and the extraordinary economic stakes of the current race to achieve technological supremacy.
These days, if you see someone with their head bowed, you’re much more likely observing them staring into their phone than in prayer. But from digital rituals to the promises of abundance from Silicon Valley elites, has technology become the world’s most powerful religion? What kinds of promises of salvation and abundance are its leaders making? And how can thinking about technology in this way help us generate ways to reform our approach to it, particularly if we aim to restore humanist principles? Today’s guest is Greg Epstein, who drew on lessons from his vocation as a humanist chaplain at Harvard and MIT to write a new book, just out from MIT Press, called Tech Agnostic: How Technology Became the World's Most Powerful Religion, and Why It Desperately Needs a Reformation.
Today’s guest is Boston University School of Law professor Woodrow Hartzog, who, with the George Washington University Law School's Daniel Solove, is one of the authors of a recent paper that explored the novelist Franz Kafka’s worldview as a vehicle to arrive at key insights for regulating privacy in the age of AI. The conversation explores why privacy-as-control models, which rely on individual consent and choice, fail in the digital age, especially with the advent of AI systems. Hartzog argues for a "societal structure model" of privacy protection that would impose substantive obligations on companies and set baseline protections for everyone rather than relying on individual consent. Kafka's work is a lens to examine how people often make choices against their own interests when confronted with complex technological systems, and how AI is amplifying these existing privacy and control problems.
On Tuesday, November 5th, the final ballots will be cast in the 2024 US presidential election. But the process is far from over. How prepared are social media platforms for the post-election period? What should we make of characters like Elon Musk, who is actively advancing conspiracy theories and false claims about the integrity of the election? And what can we do going forward to support election workers and administrators on the frontlines facing threats and disinformation? To help answer these questions, Justin Hendrix spoke with three experts: Katie Harbath, CEO of Anchor Change and chief global affairs officer at Duco Experts; Nicole Schneidman, technology policy strategist at Protect Democracy; and Dean Jackson, principal of Public Circle LLC and a reporting fellow at Tech Policy Press.
If you’re trying to game out the potential role of technology in the post-election period in the US, there is a significant "X" factor. When he purchased the social media platform formerly known as Twitter, “Elon Musk didn’t just get a social network—he got a political weapon.” So says today’s guest, a journalist who is one of the keenest observers of phenomena on the internet: Charlie Warzel, a staff writer at The Atlantic and the author of its newsletter Galaxy Brain. Justin Hendrix caught up with him about what to make of Musk and the broader health of the information environment.
Martin Husovec is an associate law professor at the London School of Economics and Political Science (LSE). He works on questions at the intersection of technology and digital liberties, particularly platform regulation, intellectual property and freedom of expression. He's the author of Principles of the Digital Services Act, just out from Oxford University Press. Justin Hendrix spoke to him about the rollout of the DSA, what to make of progress on trusted flaggers and out-of-court dispute resolution bodies, how transparency and reporting on things like 'systemic risk' is playing out, and whether the DSA is up to the ambitious goals policymakers set for it.
In this episode, Justin Hendrix speaks with three researchers who recently published projects looking at the intersection of generative AI with elections around the world: Samuel Woolley, Dietrich Chair of Disinformation Studies at the University of Pittsburgh and one of the authors of a set of studies titled Generative Artificial Intelligence and Elections; Lindsay Gorman, Managing Director and Senior Fellow of the Technology Program at the German Marshall Fund of the United States and an author of a report and online resource titled Spitting Images: Tracking Deepfakes and Generative AI in Elections; and Scott Babwah Brennen, Director of the NYU Center on Technology Policy and one of the authors of a deep dive into the literature on the effectiveness of AI content labels and another on the efficacy of recent US state legislation requiring labels on political ads that use generative AI.
Mariana Olaizola Rosenblat and Inga K. Trauthig, along with Sam Woolley, are the authors of a new report from the NYU Stern Center for Business and Human Rights and the Propaganda Research Lab at the Center for Media Engagement at the University of Texas at Austin titled "Covert Campaigns: Safeguarding Encrypted Messaging Platforms from Voter Manipulation." Justin Hendrix caught up with them to learn more about how political propagandists are exploiting the features of encrypted messaging platforms to manipulate voters, and what can be done about it without breaking the promise of encryption for all users.
In her new book, Fearless Speech: Breaking Free from the First Amendment, Dr. Mary Anne Franks challenges First Amendment orthodoxy and critiques “reckless speech,” which endangers vulnerable groups and protects corporate interests, in order to advance “fearless speech,” which seeks to advance equality and democracy.
A lot of folks frustrated with major social media platforms are migrating to alternatives like Mastodon and Bluesky, which operate on decentralized protocols. This summer, Erin Kissane and Darius Kazemi released a report on the governance of fediverse microblogging servers and the moderation practices of the people who run them. Justin Hendrix caught up with Erin Kissane about their findings, including the emerging forms of diplomacy between different server operators, the types of political and policy decisions moderators must make, and the need for more resources and tooling to enable better governance across the fediverse.
The results in this year’s installment of the Freedom House Freedom on the Net report generally follow the same distressing trajectory as prior reports, marking a 14th consecutive year of decline in internet freedom around the world. But in this year of elections, the Freedom House analysts also identified a set of concerning phenomena related to this most fundamental act of democracy and how governments are asserting themselves, for better or worse. Justin Hendrix spoke to report authors Allie Funk and Kian Vesteinsson about their findings.
In this episode, we're crashing a funeral... for CrowdTangle, a piece of software that allowed journalists and independent researchers to get insights into social media. Not our usual material, but this particular loss marks a huge blow in the ongoing fight for public access to data from the platforms, and underscores why we need to continue to fight for transparency. And the folks convened by the Knight-Georgetown Institute and the Coalition for Independent Technology Research refused to let it go unmarked.
Barry Lynn is the executive director of the Open Markets Institute in Washington DC and the author of this month's cover essay in Harper's titled "The Antitrust Revolution: Liberal democracy’s last stand against Big Tech." Justin Hendrix spoke to him about his essay, about the remedy framework proposed by the US Department of Justice following the ruling in the Google search antitrust trial, and about what to anticipate for the antitrust movement following the 2024 US presidential election.
Today’s guest is Sam Jeffers, cofounder and executive director of Who Targets Me. Jeffers has spent several years building a suite of capabilities to make political advertising more transparent, including tools for individuals and data and support for academics, researchers, and journalists. His organization also advocates for better policy from platforms, regulators, and governments. (You can download the Who Targets Me browser extension to contribute your data to the project.)
Last week, Wall Street Journal technology reporter Jeff Horwitz first reported on details of an unredacted version of a complaint against Snap brought by New Mexico Attorney General Raúl Torrez. Tech Policy Press editor Justin Hendrix spoke to Horwitz about its details, and questions it leaves unanswered.
One of the most significant concepts in Europe’s Digital Services Act is that of “systemic risk,” which relates to the spread of illegal content, or content that might have foreseeable negative effects on the exercise of fundamental rights or on civic discourse, electoral processes, public security and so forth. The DSA requires companies to carry out risk assessments to detail whether they are adequately addressing such risks on their platforms. What exactly amounts to systemic risk and how exactly to go about assessing it is still up in the air in these early days of the DSA’s implementation. In today’s episode, Tech Policy Press Staff Writer Gabby Miller speaks with three experts involved in conversations to try and get to best practices: Jason Pielemeier, Executive Director of the Global Network Initiative; David Sullivan, Executive Director of the Digital Trust & Safety Partnership; and Chantal Joris, Senior Legal Officer at Article 19.
Arvind Narayanan and Sayash Kapoor are the authors of AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference, published September 24 by Princeton University Press. In this conversation, Justin Hendrix focuses in particular on the book's Chapter 6, "Why Can't AI Fix Social Media?"
The Institute for Strategic Dialogue (ISD) recently assessed social media platforms’ policies, public commitments, and product interventions related to election integrity across six major issue areas: platform integrity, violent extremism and hate speech, internal and external resourcing, transparency, political advertising and state-affiliated media. Justin Hendrix spoke to two of the report's authors: ISD's Director of Technology & Society, Isabelle Frances-Wright, and its Senior US Digital Policy Manager, Ellen Jacobs. ISD's assessment included Snap, Facebook, Instagram, TikTok, YouTube, and X.
Marietje Schaake is the author of The Tech Coup: How to Save Democracy from Silicon Valley. Dr. Alondra Nelson, a Professor at the Institute for Advanced Study, who served as deputy assistant to President Joe Biden and Acting Director of the White House Office of Science and Technology Policy (OSTP), calls Schaake “a twenty-first century Tocqueville” who “looks at Silicon Valley and its impact on democratic society with an outsider’s gimlet eye.” Nobel prize winner Maria Ressa says Schaake's new book “exposes the unchecked, corrosive power that is undermining democracy, human rights, and our global order.” And author and activist Cory Doctorow says the book offers “A thorough and necessary explanation of the parade of policy failures that enshittified the internet—and a sound prescription for its disenshittification.” Justin Hendrix spoke to Schaake just before the book's publication on September 24, 2024.
Gary Marcus writes that the companies developing artificial intelligence systems want the citizens of democracies “to absorb all the negative externalities” that might arise from their products, “such as the damage to democracy from Generative AI–produced misinformation, or cybercrime and kidnapping schemes using deepfaked voice clones—without them paying a nickel.” And, he says, we need to fight back. His new book is called Taming Silicon Valley: How We Can Ensure That AI Works for Us, published by MIT Press on September 17, 2024.
In 2019, Thierry Breton, a French business executive who served as France’s Minister of Finance from 2005 to 2007, was nominated by President Emmanuel Macron to become a member of the European Commission for the Internal Market. In that role his name and face were closely associated with Europe’s push to regulate digital markets and the passage of legislation such as the Digital Services Act and the EU’s AI Act. On Monday, September 16 - in a letter that called into question EU Commission President Ursula von der Leyen’s governance - Breton resigned his post. While certain tech executives may be happy to see him go - Elon Musk posted “bon voyage” in response to the news - his departure spells change for Europe’s approach to tech going forward. To learn more, Justin Hendrix reached out to a European journalist who is covering these matters closely, and who has been kind enough to share his reporting on the EU AI Act with Tech Policy Press in the past: MLex Senior AI Correspondent Luca Bertuzzi.
At Tech Policy Press, we’re closely following the implementation of the Digital Services Act, the European Union law designed to regulate online platforms and services. One of the DSA’s key objectives is to identify and mitigate systemic risks. But how do we gauge what rises to the level of a systemic risk? How do we get the sort of information we need from platforms to identify and mitigate systemic risk, and how do we create the kinds of collaborations between regulators and the research community that are necessary to answer complex questions? Ramsha Jahangir, a reporting fellow at Tech Policy Press, recently discussed these questions with Dr. Oliver Marsh, who is head of tech research at Algorithm Watch, an NGO with offices in Berlin and Zurich that works on issues at the intersection of technology and society. Dr. Marsh has been leading research on systemic risks and the DSA’s approach, and just put out a detailed summary of his work.
Paris Marx, a Canadian tech critic, recently authored a post under the headline "Pavel Durov and Elon Musk are not free speech champions: The actions against Telegram and Twitter/X are about sovereignty, not speech." Justin Hendrix spoke to Paris about his assessment of these matters, and why those making claims in defense of free speech in the wake of Brazil’s ban on X and Telegram founder and CEO Pavel Durov’s arrest in France may in fact be undermining free expression and internet freedoms in the long run.
Today is Monday, September 9th. Today Judge Leonie Brinkema of the US District Court for the Eastern District of Virginia is presiding over the start of a trial in which the United States Department of Justice accuses Google of violating antitrust law, abusing its power in the market for online advertising. Google contests the allegations against it. To get a bit more detail on what to expect, Justin Hendrix spoke to two individuals covering the case closely who take a critical view of Google, the government’s allegations about its power in the online advertising market, and the company’s effect on journalism and the overall media and information ecosystem: Sarah Kay Wiley, director of policy at Check My Ads, which is running a comprehensive tracker on the case; and Karina Montoya, a senior reporter and policy analyst at the Center for Journalism and Liberty, a program of the Open Markets Institute, who has covered the case extensively for Tech Policy Press.
Thirty tech bills went through the law making sausage grinder in California this past session, and now Governor Gavin Newsom is about to decide the fate of 19 that passed the state legislature. The Governor now has until the end of September to sign or veto the bills, or to permit them to become law without his signature. To learn a little more about some of the key pieces of legislation and the overall atmosphere around tech regulation in California, Justin Hendrix spoke to two journalists who live and work in the state and cover these issues regularly: Jesús Alvarado, a reporting fellow at Tech Policy Press and author of a recent post on SB 1047, a key piece of the California legislation; and Khari Johnson, a technology reporter at CalMatters, a fellow in the Digital Technology for Democracy Lab at the Karsh Institute for Democracy at the University of Virginia, and the author of a recent article on the California legislation.
On August 26th, Justin Hendrix moderated a panel convened by the Social Science Research Council at its offices in Brooklyn, New York. The panel was titled “Platforms and Elections: The Global State of Play,” and it featured: Dr. Shannon McGregor, associate professor at the UNC Hussman School of Journalism and Media and a principal investigator with the Center for Information Technology in Public Life (CITAP); Dr. Jonathan Corpus Ong, professor of global digital media at the University of Massachusetts at Amherst and inaugural director of the Global Technology for Social Justice Lab; and Dr. Chris Tenove, research associate and instructor at the School of Public Policy and Global Affairs and assistant director of the Center for the Study of Democratic Institutions at the University of British Columbia. This episode features a lightly edited recording of the conversation, which touches on topics ranging from the role of civil society and independent researchers in engaging with efforts to protect the integrity of elections and mitigate the spread of misinformation to current questions about how generative AI may impact politics.
Renée DiResta, who serves on the board of Tech Policy Press and has been an occasional contributor, is the author of Invisible Rulers: The People Who Turn Lies Into Reality, published by Hachette Book Group in June. Justin Hendrix had a chance to catch up with DiResta last week to discuss some of the key ideas in the book, and how she sees them playing out in the current moment heading into the 2024 US election.
The billionaire owner of the social media platform X, Elon Musk, has been in a prolonged dispute with a Supreme Court Judge in Brazil regarding X’s content moderation practices. Earlier this year, Judge Alexandre de Moraes launched an investigation into X after Musk defied a court order to block accounts that supported former right-wing president Jair Bolsonaro and were accused of spreading misinformation and hate speech. On Friday afternoon, August 30, following a standoff over an order requiring X to appoint a new legal representative in Brazil, the Judge issued an order to suspend X in the country. Justin Hendrix spoke to three people following the situation closely in Brazil: Laís Martins, a journalist at The Intercept in Brazil; Sérgio Spagnuolo, executive director & founder of the data-driven tech news organization Nucleo Journalism; and Dr. Ivar Alberto Hartmann, an associate professor at the Insper Institute of Education and Research in Brazil.
Justin Hendrix speaks with Mark Surman, President of Mozilla, about Mozilla’s work promoting open source AI, the importance of competition in the tech sector, and the regulatory challenges facing the industry. Surman discusses Mozilla's initiatives in AI investment and development, and reflects on what the recent ruling in the Google search case might mean for the future of Mozilla and the tech economy. And, Surman shares his hopes for the future: that we can arrive at a tech economy that is not purely extractive, but rather one that respects people’s values and dignity.
On Friday, August 16, the United States Ninth Circuit Court of Appeals issued a ruling in NetChoice v. Bonta, partially upholding and partially vacating a preliminary injunction against California's Age-Appropriate Design Code Act. The court affirmed that certain provisions of the law are likely to violate the First Amendment by compelling online businesses to assess and mitigate potential harms to children, but it vacated the broader injunction, remanding the case to the district court for further consideration of other parts of the statute, including restrictions on the collection and use of children's data. In this episode, Justin Hendrix recounts the basics of the Ninth Circuit ruling. And in a second segment that was recorded just days before Friday's ruling, Tech Policy Press fellow Dean Jackson is joined by Tech Justice Law Project executive director Meetali Jain and USC Marshall School Neely Center managing director Ravi Iyer for a discussion on key questions that were before the Ninth Circuit and their implications for future efforts at tech regulation.
Raúl Torrez was sworn in as New Mexico’s 32nd Attorney General in January 2023. Last December, Attorney General Torrez filed a lawsuit against Meta for allegedly failing to protect children from sexual abuse, online solicitation, and human trafficking. The outcome of this case could have broader implications for how online platforms are regulated and held accountable for user safety in the future, including through litigation. Justin Hendrix spoke to Attorney General Torrez in advance of a panel discussion he participated in alongside the Attorney General of Virginia at the 2024 Coalition to End Exploitation Global Summit on Wednesday, August 7, 2024 in Washington DC.
In May, Justin Hendrix moderated a discussion with David Rand, who is a professor of Management Science and Brain and Cognitive Sciences at MIT, the director of the Applied Cooperation Initiative, and an affiliate of the MIT Institute of Data, Systems, and Society and the Initiative on the Digital Economy. David's work cuts across fields such as cognitive science, behavioral economics, and social psychology, and with his collaborators he's done a substantial amount of work on the psychological underpinnings of belief in misinformation and conspiracy theories. David is one of the authors, with Thomas Costello and Gordon Pennycook, of a paper published this spring titled "Durably reducing conspiracy beliefs through dialogues with AI." The paper considers the potential for people to enter into dialogues with LLMs and whether such exchanges can change the minds of conspiracy theory believers. According to the study, dialogues with GPT-4 Turbo reduced belief in various conspiracy theories, with effects lasting many months. Even more intriguingly, these dialogues seemed to have a spillover effect, reducing belief in unrelated conspiracies and influencing conspiracy-related behaviors. While these findings are certainly promising, the experiment raises a variety of questions. Some are specific to the premise of the experiment, such as how compelling and tailored does the counter-evidence need to be, and how well do the LLMs perform? What happens if and when they make mistakes or hallucinate? And some of the questions are bigger picture: are there ethical implications in using AI in this manner? Can these results be replicated and scaled in real-world applications, such as on social media platforms, and is that a good idea? Is an internet where various AI agents and systems are poking and prodding us and trying to shape or change our beliefs a good thing? This episode contains an edited recording of the discussion, which was hosted at Betaworks.
The Distributed AI Research Institute, or DAIR—which seeks to conduct community-rooted AI research that is independent from the technology industry—has launched a new project called the Data Workers' Inquiry to invite data workers to create their own research and recount their experiences. The project is supported by DAIR, the Weizenbaum Institute, and TU Berlin. For this episode, journalist and audio producer Rebecca Rand parsed some of the ideas and experiences discussed at a virtual launch event for the inquiry that took place earlier this month.
It goes without saying that privacy and the creation of laws and regulations around it are fundamental to determining how we will live and work with technology, and whether technology operates in service of democratic societies or only in service of governments and corporations. A couple of weeks ago, Justin Hendrix had a chance to speak with two leaders from the Future of Privacy Forum (FPF): Jules Polonetsky, its CEO, and Anne J. Flanagan, the head of its new Center on AI. They discussed the recent US Supreme Court decision to overturn the Chevron doctrine and its implications for privacy legislation in the United States, the fierce battle over privacy laws in the US, and potential conflicts between Europe's General Data Protection Regulation (GDPR) and the new AI Act. And, they talked about how the 15-year-old Future of Privacy Forum envisions its role in the age of artificial intelligence.
In the past week, multiple Silicon Valley billionaires announced endorsements of former President and 2024 Republican nominee Donald Trump. To dig a bit deeper into their motivations to support Trump and his new running mate, Ohio Senator and former venture capitalist J.D. Vance, Justin Hendrix invited on three sharp observers of politics and technology: Henry Farrell, a professor of international affairs and democracy at Johns Hopkins University and the recent co-author with Abraham Newman of Underground Empire: How America Weaponized the World Economy; Elizabeth Spiers, a writer and digital strategist, contributing writer for the New York Times, and co-host of the Slate Money Podcast; and Dave Karpf, an associate professor at George Washington University in the School of Media and Public Affairs.
On June 26, the US Supreme Court issued a 6-3 ruling in Murthy v Missouri, a case that considered whether the Biden administration violated the First Amendment in its efforts to address COVID-19 mis- and disinformation on social media. Tech Policy Press fellow Dean Jackson, who studied the case closely, discussed the outcome and what it means for the future with three experts: Olga Belogolova, director of the Emerging Technologies Initiative at the Johns Hopkins School of Advanced International Studies (SAIS); Mayze Teitler, a legal fellow at the Knight First Amendment Institute; and Nina Jankowicz, co-founder and CEO of the American Sunlight Project.
In this episode, David Carroll, an associate professor of media design in the MFA Design and Technology graduate program at the School of Art, Media and Technology at Parsons School of Design at The New School, speaks to Ravi Naik, legal director at AWO, a consultancy with offices in London, Brussels, and Paris that works on a range of data protection and tech policy issues. Their discussion delves into the evolution of data protection from the Cambridge Analytica scandal to current questions provoked by generative AI, with a focus on a GDPR complaint against OpenAI brought by Noyb, the non-profit founded by Austrian activist Max Schrems.
In April, Google DeepMind published a paper that boasts 57 authors, including experts from a range of disciplines in different parts of Google, including DeepMind, Jigsaw, and Google Research, as well as researchers from academic institutions such as Oxford, University College London, Delft University of Technology, University of Edinburgh, and a think tank at Georgetown, the Center for Security and Emerging Technology. The paper speculates about the ethical and societal risks posed by the types of AI assistants Google and other tech firms want to build, which the authors say are “likely to have a profound impact on our individual and collective lives.” Justin Hendrix had the chance to speak to two of the paper's authors about some of these issues: Shannon Vallor, a professor of AI and data ethics at the University of Edinburgh and director of the Center for Technomoral Futures in the Edinburgh Futures Institute; and Iason Gabriel, a research scientist at Google DeepMind in its ethics research team.
News and journalism organizations and dominant tech companies are in a years-long battle over content, clicks and revenue, and the tech companies are winning. What are policy options that encourage both the sustainability and quality of news content on popular online platforms? In this episode, Rebecca Rand explores perspectives on the subject, drawing on a conversation hosted by Justin Hendrix with experts Anya Schiffrin and Cory Doctorow at the Knight Foundation's INFORMED conference earlier this year.
In October 2023, during the third Belt and Road Forum in Beijing, China's leader Xi Jinping signaled a shift in focus from more grandiose physical infrastructure projects to 'small yet smart' initiatives. This shift underscores the need to understand China's ambitions to reshape global digital governance, moving away from an open and free internet towards a model rooted in government control and mass surveillance. The advocacy group Article 19 documents this shift in a recent report titled "The Digital Silk Road: China and the Rise of Digital Repression in the Indo-Pacific," examining China's influence on digital infrastructure and governance in Cambodia, Malaysia, Nepal, and Thailand. As the Indo-Pacific remains strategically significant for China in deploying next-generation technologies, the report argues that assessing China’s regional partnerships and their implications for digital repression is crucial for understanding its broader ambitions to reshape global digital norms. To discuss these issues in more depth, Justin Hendrix is joined by: Michael Caster, Asia Digital Program Manager at ARTICLE 19; and Catherine Tai, deputy director for the Asia and the Pacific team at the Center for International Private Enterprise (CIPE).
In this episode, we explore a topic that sits at the heart of global digital policy: the contrasting visions of internet governance championed by the United States and its Western allies versus those promoted by China and nations in its orbit. This debate is playing out across various international venues and has profound implications for the future of digital rights, privacy, and the open internet. Justin Hendrix is joined by experts at the Atlantic Council that study these issues from a variety of angles and across multiple geographies, including: Rose Jackson, the director of the Democracy + Tech Initiative within the Atlantic Council Technology Programs; Konstantinos Komaitis, a nonresident fellow with the Democracy + Tech Initiative of the Atlantic Council's Digital Forensic Research Lab; Kenton Thibaut, a senior resident China fellow at the Atlantic Council's Digital Forensic Research Lab; and Iria Puyosa, a senior research fellow at the Atlantic Council’s Digital Forensic Research Lab.
Angela Zhang is the author of High Wire: How China Regulates Big Tech and Governs Its Economy, published this year by Oxford University Press. With a career in the practice of law and in teaching it, Zhang has held roles at King’s College London and at New York University School of Law, and most recently served as Director of the Philip K. H. Wong Center for Chinese Law at the University of Hong Kong. She will join the University of Southern California as a Professor of Law in fall 2024.
A topic we return to often on this podcast is the dire need for independent technology researchers to have access to platform data. Without it, we cannot understand the extent of the harms and effects of social media on people and on society, nor the limits of those harms. This makes it difficult to respond in acute moments such as elections, and to understand issues such as the relationship between tech platforms and social cohesion, or mental health, or any number of the other issues policymakers care about. In this episode, Justin Hendrix speaks with two people on the front lines of the fight to secure access to data, including advocating for Meta to do better in light of the impending deprecation of CrowdTangle, a tool used by researchers to study Meta's products, including Facebook and Instagram. They are: Brandi Geurkink, the executive director of the Coalition for Independent Technology Research, and Claire Pershan, EU advocacy lead at the Mozilla Foundation.
Madhumita Murgia, AI editor at the Financial Times, is the author of a new book called Code Dependent: Living in the Shadow of AI. The book combines reporting and research to provide a look at the role that AI and automated decision-making is playing in reshaping our lives, our politics, and our economies across the world.
Dr. Arati Prabhakar is the Director of the White House Office of Science and Technology Policy (OSTP) and Science Advisor to President Joe Biden. This week, she hosted an event in Washington DC called "AI Aspirations: R&D for Public Missions." Speakers included executive branch officials and agency leaders, from the Secretary of Education to the Food and Drug Administration Commissioner, as well as lawmakers such as Senators Amy Klobuchar and Mark Warner, and Representative Don Beyer. Prior to the event, Justin Hendrix spoke to Dr. Prabhakar about OSTP's priorities.
What are the risks to democracy as AI is incorporated more and more into the systems and platforms we use to find and share information and engage in communication? In this episode, Justin Hendrix speaks with Elise Silva, a postdoctoral associate at the University of Pittsburgh Cyber Institute for Law, Policy, and Security, and John Wihbey, an associate professor at Northeastern University in the College of Arts, Media, and Design. Silva is the author of a recent piece in Tech Policy Press titled "AI-Powered Search and the Rise of Google’s 'Concierge Wikipedia.'” Wihbey is the author of a paper published last month titled "AI and Epistemic Risk for Democracy: A Coming Crisis of Public Knowledge?"
What role did technology play in India's elections, and what impact will the outcome have on tech policy in the country? Joining Justin Hendrix are three experts: Amber Sinha and Vandinika Shukla, both fellows at Tech Policy Press, and Prateek Waghre, the executive director at the Internet Freedom Foundation. Plus, Tech Policy Press program manager Prithvi Iyer sums up the election result.
The guests in this episode are authors of a new study titled Political Machines: Understanding the Role of AI in the US 2024 Elections and Beyond. The study is based on interviews with a variety of individuals who are currently grappling with how generative AI tools and systems will change the way they work. In a series of field interviews, the authors spoke with three vendors of political generative AI tools, a political candidate, a legal expert, a technology expert, an extremism expert, a digital organizer, a trust and safety industry professional, four Republican campaign consultants, and eight Democratic campaign consultants. Joining Justin Hendrix to discuss the results are: Dean Jackson, the principal at Public Circle LLC and a reporting fellow with Tech Policy Press; Zelly Martin, a PhD candidate at the University of Texas at Austin and a senior research fellow at the Propaganda Research Lab at the Center for Media Engagement; and Inga Trauthig, head of research at the Propaganda Research Lab at the Center for Media Engagement at the University of Texas at Austin.
This episode focuses on the role of shareholder activism in pursuing transparency and accountability from tech firms. In a week where board resolutions are up for a vote at Meta and Alphabet related to each company's development and deployment of artificial intelligence, Justin Hendrix spoke to five individuals working at the intersection of sustainable investing and tech accountability: Michael Connor, Executive Director of Open MIC; Jessica Dheere, Advocacy Director at Open MIC; Natasha Lamb, Chief Investment Officer at Arjuna Capital; Jonas Kron, Chief Advocacy Officer at Trillium Asset Management; and Christina O'Connell, Senior Manager for Shareholder Engagement and Investments at Ekō.
As we documented in Tech Policy Press, when the US Senate AI working group released its roadmap on policy on May 17th, many outside organizations were underwhelmed at best, and some were fiercely critical of the closed door process that produced it. In the days after the report was announced, a group of nonprofit and academic organizations put out what they call a "shadow report" to the US Senate AI policy roadmap. The shadow report is intended as a complement or counterpoint to the Senate working group's product. It collects a bibliography of research and proposals from civil society and academia and addresses several issues the Senators largely passed over. To learn more, Justin Hendrix spoke to some of the report's authors, including: Sarah West, co-executive director of the AI Now Institute; Nasser Eledroos, policy lead on technology at Color of Change; Paramita Shah, executive director of Just Futures Law; and Cynthia Conti-Cook, director of research and policy at the Surveillance Resistance Lab.
A conversation with Marwa Fatafta, who serves as policy and advocacy director for the nonprofit Access Now, which has worked on digital civil rights, connectivity and censorship issues for the past 15 years. Along with other groups, Access Now has engaged Meta in recent months over what it says is the “systematic censorship of Palestinian voices” amidst the Israel-Hamas war in Gaza.
On Wednesday, May 15, 2024, a bipartisan US Senate working group led by Majority Leader Sen. Chuck Schumer (D-NY) released a report titled "Driving U.S. Innovation in Artificial Intelligence: A Roadmap for Artificial Intelligence Policy in the United States Senate." Just hours after the report was released, Justin Hendrix spoke to two civil rights advocates who are working on AI policy about the good and the bad of the Senate report, and more broadly about how to set AI policy priorities that ensure a brighter future for all: Alejandra Montoya-Boyer, Senior Director at the Center for Civil Rights & Tech at the Leadership Conference on Civil and Human Rights; and Claudia Ruiz, Senior Civil Rights Policy Analyst at UnidosUS.
One tech journalist whose byline always draws me in is Chris Stokel-Walker. He writes for multiple publications including The New York Times, The Washington Post, The Economist, Wired, Fast Company, and New Scientist. Now, he’s got a new book out: How AI Ate the World: A Brief History of Artificial Intelligence - And Its Long Future. Last week, I had the chance to speak with him about it, and about how he covers technology and tech policy generally.
Last October, Dr. Jasmine McNealy, an associate professor at the University of Florida, a Senior Fellow in Tech Policy with the Mozilla Foundation, and a Faculty Associate at the Berkman Klein Center for Internet & Society at Harvard University, wrote in Tech Policy Press about the need for a policy agenda for "Rural AI." “Rural communities matter,” she wrote. “And that means they should matter when it comes to the development of policies on artificial intelligence.” The piece was a preview of sorts for a two-day workshop Dr. McNealy organized at the University of Florida in Gainesville that touched on topics ranging from connectivity to bias and discrimination in algorithmic systems to the connection between AI and natural resources. Justin Hendrix attended the workshop, and recently he checked in with Dr. McNealy and three of the other attendees he met there: Michaela Henley, program director and curriculum writer at Black Tech Futures and a senior research fellow representing Black Tech Futures at the Siegel Family Endowment; Dr. Dominique Harrison, founding principal of Equity Innovation Ventures; and Dr. Theodora Dryer, who is director of the Water Justice and Technology Studio, founder of the Critical Carbon Computing Collective, and teaches on technology and environmental justice at New York University.
The Hippocratic oath, named for a Greek physician who lived roughly 2,500 years ago and whom some call the father of modern medicine, is one of the earliest examples of an expression of professional ethics. It is a symbol of a profession that has built in a number of protections for patient interests, with ethical frameworks and requirements that seek to assure they are maintained. Today’s guest is Chinmayi Sharma, an Associate Professor at Fordham Law School. Sharma thinks there should be a similar professional ethics framework in place for the developers of AI systems, and she’s written a substantial paper on the 'why' and the 'how' of her proposal.
One topic we come back to again and again on this podcast is disinformation. In many episodes, we’ve discussed various phenomena related to this ambiguous term, and we’ve tried to use science to guide the way. But the guests in this episode suggest that in the broader political discourse, the term is more than overused. Often, they say, lawmakers and other elites who employ it cross the line into hyping the effects of disinformation, which they argue only helps propagandists and diminishes trust in society. To learn more, Justin Hendrix spoke with Gavin Wilde, Thomas Rid, and Olga Belogolova, who with Lee Foster are the authors of an essay in Foreign Affairs titled "Don’t Hype the Disinformation Threat: Downplaying the Risk Helps Foreign Propagandists, But So Does Exaggerating It."
In an introduction to a special issue of the journal First Monday on topics related to AI and power, Jenna Burrell and Jacob Metcalf argue that "what can and cannot be said inside of mainstream computer science publications appears to be constrained by the power, wealth, and ideology of a small cohort of industrialists. The result is that shaping discourse about the AI industry is itself a form of power that cannot be named inside of computer science." The papers in the journal go on to interrogate the epistemic culture of AI safety, the promise of utopia through artificial general intelligence, how to debunk robot rights, and more. To learn more about some of the ideas in the special issue, Justin Hendrix spoke to Burrell, Metcalf, and two of the other authors of papers included in it: Shazeda Ahmed and Émile P. Torres.
Last week, President Joe Biden signed into law a measure that would force the Chinese firm ByteDance to divest its ownership of TikTok, or risk the app being banned in the US. The measure also included restrictions on the sale of personal data to foreign entities. What are the implications of these moves for US and global tech policy going forward? What will the inevitable legal challenges look like? To learn more, Justin Hendrix spoke with Anupam Chander, law professor at Georgetown and a visiting scholar at the Institute for Rebooting Social Media at Harvard University; Rose Jackson, the director of the Democracy and Tech Initiative at the Atlantic Council; and Justin Sherman, CEO of Global Cyber Strategies and adjunct professor at Duke University.
The House Energy and Commerce Committee's Subcommittee on Innovation, Data, and Commerce held a hearing: “Legislative Solutions to Protect Kids Online and Ensure Americans’ Data Privacy Rights.” Between the Kids Online Safety Act (KOSA) and the American Privacy Rights Act (APRA), both of which have bipartisan and bicameral support, Congress may be closer to acting on these issues than it has been in recent memory. One of the witnesses at the hearing was David Brody, managing attorney of the Digital Justice Initiative at the Lawyers' Committee for Civil Rights Under Law. Justin Hendrix caught up with Brody the day after the hearing to discuss the challenges of advancing the American Privacy Rights Act, and why he connects fundamental data privacy rights to so many of the other issues the Lawyers' Committee cares about, including voting rights and countering disinformation that targets communities of color.
This episode features two conversations. Both relate to efforts to better understand the impact of technology on society. In the first, we’ll hear from Sayash Kapoor, a PhD candidate at the Department of Computer Science and the Center for Information Technology Policy at Princeton University, and Rishi Bommasani, the society lead at the Stanford Center for Research on Foundation Models. They are two of the authors of a recent paper titled On the Societal Impact of Open Foundation Models. And in the second, we’ll hear from Politico Chief Technology Correspondent Mark Scott about the US-EU Trade and Technology Council (TTC) meeting, and what he’s learned about the question of access to social media platform data by interviewing over 50 stakeholders, including regulators, researchers, and platform executives.
Last week, a federal judge granted a motion to dismiss and strike a lawsuit brought by X Corp, formerly known as Twitter, against a nonprofit research outfit called the Center for Countering Digital Hate (CCDH). To learn more about why the ruling matters, Justin Hendrix spoke to Alex Abdo, the litigation director at the Knight First Amendment Institute at Columbia University; Imran Ahmed, the CEO and founder of the Center for Countering Digital Hate; and Roberta Kaplan, a partner at the law firm Kaplan Hecker & Fink, which represented CCDH in this matter.
On this show, when we talk about technology and democracy, guests are often talking about the relationship between technology and existing democratic systems. Today's guest wants us to think more expansively about what doing democracy means and the role technology can play in it. Nathan Schneider, an assistant professor of media studies at the University of Colorado Boulder, is the author of Governable Spaces: Democratic Design for Online Life.
Last year, researchers at Human Rights Watch wrote about the global backlash against women’s rights. In multiple countries, they say, hard-won progress has been reversed amidst a wave of anti-feminist rhetoric and policies, and it may take decades to reverse the trajectory. It’s against that backdrop that today’s guest pursues concerns at the intersection of tech and digital rights with women’s human rights. Justin Hendrix speaks with Lucy Purdon, the founder of Courage Everywhere and author of a recent report for the Mozilla Foundation titled "Unfinished Business: Incorporating a Gender Perspective into Digital Advertising Reform in the UK and EU."
On Monday, March 18, the US Supreme Court heard oral argument in Murthy v Missouri. In this episode, Tech Policy Press reporting fellow Dean Jackson is joined by two experts, St. John's University School of Law associate professor Kate Klonick and UNC Center on Technology Policy director Matt Perault, to digest the oral argument, what it tells us about which way the Court might go, and what more should be done to create good policy on government interactions with social media platforms when it comes to content moderation and speech.
On March 18, the US Supreme Court will hear oral argument in Murthy v Missouri, a case that asks the justices to consider whether the government coerced or “significantly encouraged” social media executives to remove disfavored speech in violation of the First Amendment during the COVID-19 pandemic. Tech Policy Press reporting fellow Dean Jackson speaks to experts including the Knight First Amendment Institute at Columbia University's Mayze Teitler and Jennifer Jones, and the Tech Justice Law Project's Meetali Jain.
At INFORMED 2024, a conference hosted by the Knight Foundation in January, one panel focused on the subject of information integrity, race, and US elections. The conversation was compelling, and the panelists agreed to reprise it for this podcast. So today we're turning over the mic to Spencer Overton, a Professor of Law at the George Washington University and the director of the GW Law School's Multiracial Democracy Project. He's joined by three other experts: Brandi Collins-Dexter, a media and technology fellow at Harvard's Shorenstein Center, a fellow at the National Center on Race and Digital Justice, and the author of the recent book Black Skinhead: Reflections on Blackness and Our Political Future, who is developing a podcast of her own with MediaJustice that explores 1980s-era media, racialized conspiracism, and politics in Chicago; Dr. Danielle Brown, a social movement and media researcher who holds the 1855 Community and Urban Journalism professorship at Michigan State and is the founding director of the LIFT project, which is focused on mapping, networking, and resourcing trusted messengers to dismantle mis- and disinformation narratives that circulate in and about Black communities; and Kathryn Peters, the inaugural executive director of the University of North Carolina's Center for Information, Technology, and Public Life and co-founder of Democracy Works, where she built programs to help more Americans navigate how to vote. These days, she's working on a variety of projects to empower voters and address election mis- and disinformation.
On Monday, Feb. 26, 2024, the US Supreme Court heard oral arguments for Moody v. NetChoice, LLC and NetChoice, LLC v. Paxton. The cases are on similar but distinct state laws in Florida and Texas that would restrict social media companies’ ability to moderate content on their platforms. Justin Hendrix speaks with Tech Policy Press staff writer Gabby Miller and contributing editor Ben Lennett about key highlights from the discussion.
This week, a public consultation period ended for a new Hong Kong national security law, known as Article 23. Article 23 ostensibly targets a wide array of crimes, including treason, theft of state secrets, espionage, sabotage, sedition, and "external interference" from foreign governments. The Hong Kong legislature, dominated by pro-Beijing lawmakers, is expected to approve it, even as its critics argue that the law criminalizes basic human rights, such as the freedom of expression, signaling a further erosion of the liberties once enjoyed by the residents of Hong Kong. To learn more about what is happening in Hong Kong and what role tech firms and other outside voices could play in preserving freedoms for its people, Justin Hendrix spoke to three experts who are following developments there closely: Chung Ching Kwong, senior analyst at the Inter-Parliamentary Alliance on China; Lokman Tsui, a fellow at the Citizen Lab at the University of Toronto; and Michael Caster, the Asia Digital Program Manager with Article 19.
If you’ve been listening to this podcast for a while, you know we’ve spent countless hours together talking about the problems of mis- and disinformation, and what to do about them. And, we’ve tried to focus on the science, on empirical research that can inform efforts to design a better media and technology environment that helps rather than hurts democracy and social cohesion. Today’s guests are Jon Bateman and Dean Jackson. The two have just produced a report for the Carnegie Endowment for International Peace that looks at what is known about a variety of interventions against disinformation, and provides evidence that should guide policy in governments and at technology platforms.
A new book shipping this week from Oxford University Press, titled simply Media and January 6th, assembles a varied collection of experts who aim to shed light on the interplay between the media and the bloody coup attempt that then-President Donald Trump led in an effort to hold on to power after he lost the 2020 election to Joe Biden. The book is structured around three essential inquiries: What is our interpretation of January 6, 2021? How should research evolve after January 6, 2021? And what measures can be taken to avert a similar incident in the future? Justin Hendrix spoke to three of the book's four editors: Khadijah Costley White, Daniel Kreiss, and Shannon C. McGregor.
It's become trite to say there are a lot of elections taking place this year. But of course, technology is playing a role in them all. At Tech Policy Press, we're lucky to have a group of seven fellows this year who are based on four continents. They are paying close attention to elections in the nations they know best. To learn more about the recent election in Pakistan, its chaotic aftermath, and the unique role of technology in events there, I spoke to one of our fellows last week: Ramsha Jahangir, a Pakistani journalist currently based in the Netherlands.
Today's guests are Jonathan Stray, a senior scientist at the Center for Human Compatible AI at the University of California Berkeley, and Ravi Iyer, managing director of the Neely Center at the University of Southern California's Marshall School. Both are keenly interested in what happens when platforms optimize for variables other than engagement, and whether they can in fact optimize for prosocial outcomes. With several coauthors, they recently published a paper based in large part on discussion at an 8-hour working group session featuring representatives from seven major content-ranking platforms and former employees of another major platform, as well as university and independent researchers. The authors say "there is much unrealized potential in using non-engagement signals. These signals can improve outcomes both for platforms and for society as a whole."
In May 2022, Alvaro Bedoya was sworn in as a Commissioner of the US Federal Trade Commission following his nomination by President Joe Biden and confirmation in the Senate. In this conversation, Commissioner Bedoya discusses a recent settlement over the commercial use of facial recognition technologies and what it should signal to other businesses, voice cloning and the growing problem of impersonations utilizing AI, and how he thinks about the future.
Multiple past episodes of this podcast have focused on the topic of AI governance. But today’s guest, Blair Attard-Frost, has put forward a set of ideas they term "AI countergovernance." These are alternative mechanisms for community-led and worker-led governance that serve as means for resisting or contesting power, particularly as it manifests in AI systems and the companies and governments that advance them.
On Wednesday, January 31st, the US Senate Judiciary Committee hosted a hearing titled "Big Tech and the Online Child Sexual Exploitation Crisis." The CEOs of Meta, TikTok, X, Discord and Snap were called to the Capitol to answer questions from lawmakers on their efforts to protect children from sexual exploitation, drug trafficking, dangerous content, and other online harms. Gabby Miller reported on the hearing from New York, and Haajrah Gilani reported from Washington D.C.
Last year, the World Privacy Forum, a nonprofit research organization, conducted an international review of AI governance tools, analyzing various documents, frameworks, and technical material related to AI governance from around the world. Importantly, the review found that a significant percentage of AI governance tools include faulty AI fixes that could ultimately undermine the fairness and explainability of AI systems. Justin Hendrix talked to Kate Kaye, one of the report’s authors, about a range of issues it covers, from the involvement of large tech companies in shaping AI governance tools, to the role of organizations like the OECD in developing them, to the need to consult people and communities that are often overlooked when making decisions about how to think about AI.
In October 2022, a group of researchers published a manifesto establishing a Coalition for Independent Technology Research. “Society needs trustworthy, independent research to relieve the harms of digital technologies and advance the common good,” they wrote. “Research can help us understand ourselves more clearly, identify problems, hold power accountable, imagine the world we want, and test ideas for change. In a democracy, this knowledge comes from academics, journalists, civil society, and community scientists, among others. Because independent research on digital technologies is a powerful force for the common good, it also faces powerful opposition.” In the months since that document was published, that opposition has grown. From investigations in Congress to lawsuits aimed at specific researchers, there is a backlash particularly against those who study communications and media, especially where the subjects of that research are often those most interested in advancing false and misleading claims about issues including elections and public health. Justin Hendrix, who is a member of the coalition, caught up with Brandi Geurkink, who was hired as the coalition's first Executive Director in December 2023, to discuss its priorities.
Today’s guest is Robert Weissman, president of the nonprofit consumer advocacy organization Public Citizen. He is the author of a letter addressed to the California Attorney General that raises significant concerns about OpenAI’s 501(c)(3) nonprofit status. The letter questions whether OpenAI has deviated from its nonprofit purposes, alleging that it may be acting under the control of its for-profit subsidiary, potentially violating its nonprofit mission. The letter raises broader issues about the future of AI and how it will be governed.
Today is the three-month anniversary of the vicious Hamas attack and abduction of hostages that ignited the current war in Gaza. Just before the New Year, the Atlantic Council’s Digital Forensic Research Lab (DFRLab) published a report titled “Distortion by Design: How Social Media Platforms Shaped Our Initial Understanding of the Israel-Hamas Conflict.” This week, Justin Hendrix spoke to the report’s authors, Emerson T. Brooking, Layla Mashkoor, and Jacqueline Malaret, about their observations of the role that platforms operated by X, Meta, Telegram, and TikTok have played in shaping perceptions of the initial attack and the brutal ongoing Israeli siege of Gaza, which now continues into its fourth month. “Evident across all platforms,” they write, “is the intertwined nature of content moderation and political expression—and the critical role that social media will play in preserving the historical record.”
In a report released December 20, 2023, the Stanford Internet Observatory said it had detected more than 1,000 instances of verified child sexual abuse imagery in a significant dataset used to train generative AI systems such as Stable Diffusion 1.5. This troubling discovery builds on prior research into the “dubious curation” of large-scale datasets used to train AI systems, and raises concerns that such content may have contributed to the ability of AI image generators to produce realistic counterfeit images of child sexual exploitation, in addition to other harmful and biased material. Justin Hendrix spoke to the report’s author, Stanford Internet Observatory Chief Technologist David Thiel.
If you’ve listened to some of the dialogue in hearings on Capitol Hill about how to regulate AI, you’ve heard various folks suggest the need for a regulatory agency to govern, in particular, general purpose AI systems that can be deployed across a wide range of applications. One existing agency is often mentioned as a potential model: the Food and Drug Administration (FDA). But how would applying the FDA model work in practice? Where does the model break down when it comes to AI and related technologies, which differ in many ways from the types of things the FDA looks at day to day? To answer these questions, Justin Hendrix spoke to Merlin Stein and Connor Dunlop, the authors of a new report published by the Ada Lovelace Institute titled Safe before sale: Learnings from the FDA’s model of life sciences oversight for foundation models.
At the end of this year in which the hype around artificial intelligence seemed to increase in volume with each passing week, it’s worth stepping back and asking whether we need to slow down and put just as much effort into questions about what it is we are building and why. In today’s episode, we’re going to hear from two researchers at different points in their careers who spend their days grappling with questions about how we can develop systems, and modes of thinking about systems, that lead to more just and equitable outcomes, and that preserve our humanity and the planet. Dr. Batya Friedman is a Professor in the Information School and holds adjunct appointments in the Paul G. Allen School of Computer Science & Engineering, the School of Law, and the Department of Human Centered Design and Engineering at the University of Washington, where she co-directs the Value Sensitive Design Lab and the UW Tech Policy Lab. Dr. Aylin Caliskan is an Assistant Professor in the Information School with an adjunct appointment in the Paul G. Allen School of Computer Science & Engineering, an affiliate of the UW Tech Policy Lab, and part of the Responsible AI Systems and Experiences Center, the NLP Group, and the Value Sensitive Design Lab. She is also co-director-elect of the Tech Policy Lab, a role she will assume when Dr. Friedman retires from the university.
In both the US and Europe, policymakers are making important decisions about the governance of the bulk collection of communications and data for intelligence purposes. In the US, some of these questions are at the fore as Congress considers how to extend the Foreign Intelligence Surveillance Act's Section 702 program, which is set to expire at the start of 2024. To get a sense of how the broader policy debate around government surveillance is advancing in both the US and Europe, Justin Hendrix spoke to two experts on the subject who happened to be meeting together in Washington DC last week: Dr. Thorsten Wetzling, head of the Digital Rights, Surveillance and Democracy research unit of the Berlin think tank Stiftung Neue Verantwortung (SNV), and Greg Nojeim, Director of the Security and Surveillance Project at the Center for Democracy and Technology (CDT).
In April 2021, the European Commission introduced the first regulatory framework for AI within the EU. This Friday, after a marathon set of negotiations, EU policymakers reached a political consensus on the details of the legislation. This AI Act represents the most significant comprehensive effort in the world’s democracies to regulate a technology that promises major social and economic impact. While the AI Act will still have to go through a few final procedural steps before its enactment, the contours of it are now set. To find out more about what was decided, Justin Hendrix spoke to one journalist who reported directly on the negotiations in Brussels: Luca Bertuzzi, technology editor at EURACTIV.
For the past two years, there has been a steady stream of news out of Kenya about the relationships between major tech firms – including Meta, TikTok and OpenAI – and outsourcing firms like Sama and Majorel that have employed content moderators on their behalf. In the spring of this year, more than 150 moderators announced the formation of the African Content Moderators Union, which advocates for better pay and working conditions, and a lawsuit against Meta is working its way through Kenya’s courts. This month will see an important ruling in that case. To learn more about the situation on the ground and what it’s been like for the individuals involved in this fight while the legal progress unfolds, Justin Hendrix spoke to Njenga Kimani, a researcher at Siasa Place, a youth-led, prodemocracy NGO based in Nairobi, and three moderators who’ve worked on platforms including TikTok, Meta, and OpenAI: James Oyange Odhiambo, Sonia Kgomo, and Richard Mathenge.
To learn more about the recent leadership crisis at OpenAI and what lessons policymakers should take from it, Justin Hendrix spoke to Karen Hao, a contributing writer at The Atlantic who is currently working on a book about OpenAI. With staff writer Charlie Warzel, Hao wrote a piece for The Atlantic under the headline "Inside the Chaos at OpenAI," drawing on conversations with current and former employees of the company.
On November 15, the Open Markets Institute and the AI Now Institute hosted an event in Washington D.C. featuring discussion on how to understand the promise, threats, and practical regulatory challenges presented by artificial intelligence. Justin Hendrix moderated a discussion on harms to artists and creators, exploring questions around copyright and fair use, the ways in which AI is shaping the entire incentive structure for creative labor, and the economic impacts of the "junkification" of online content. The panelists included Liz Pelly, a freelance journalist specializing in the music industry; Ashley Irwin, President of the Society of Composers & Lyricists; and Jen Jacobsen, Executive Director of the Artist Rights Alliance.
This episode explores Broken Code: Inside Facebook and the Fight to Expose its Harmful Secrets, a new book by Wall Street Journal technology reporter Jeff Horwitz. His relentless coverage of Meta, including first reporting on the documents brought forward by whistleblower Frances Haugen in the fall of 2021, has been pivotal in shedding light on the complex interplay between social media platforms, society, and democracy. Justin Hendrix talks to him about his journey, new details revealed in the book, and the impact his reporting has had in driving platform accountability both in the United States and internationally.
Today's guest is Dr. Matthew Guariglia, a senior policy analyst for the Electronic Frontier Foundation and author of the new book, Police and the Empire City: Race and the Origins of Modern Policing in New York, just out from Duke University Press. Guariglia says we're really living in a world of police surveillance built in the early 20th century, even as police departments wield powers that only a few years ago we thought might only be in the hands of federal intelligence agencies.
Today’s guest is Wiebke Hutiri, a researcher with a particular expertise in design patterns for detecting and mitigating bias in AI systems. Her recent work has focused on voice biometrics, including work on an open source project called Fair EVA that gathers resources for researchers and developers to audit bias and discrimination in voice technology. Justin Hendrix spoke to Hutiri about voice biometrics, voice synthesis, and a range of issues and concerns these technologies present alongside their benefits.
Today’s guest is Ravi Iyer, a data scientist and moral psychologist at the Psychology of Technology Institute, which is a project of the University of Southern California Marshall School’s Neely Center for Ethical Leadership and Decision Making and the University of California-Berkeley’s Haas School of Business. He is also a former Facebook executive, and at the company he worked on a variety of civic integrity issues. The Neely Center has developed a design code that seeks to address a number of concerns about the harms of social media, including issues related to child online safety. It is endorsed by individuals and organizations ranging from academics at NYU and USC to the Tech Justice Law Project and New Public, as well as technologists that have worked at platforms such as Twitter, Facebook, and Google. Justin Hendrix spoke to Iyer about the details of the proposed code, and in particular how they relate to the debate over child online safety.
At the September G20 summit in Delhi, the government of prime minister Narendra Modi promoted the country’s digital public infrastructure (DPI) as a model for the world for how to develop digital systems that enable countries to deliver social services and provide access to infrastructure and economic opportunities to residents. Other world leaders were enthusiastic about the pitch, endorsing a common framework for DPI systems. But even as an Indian vision for DPI appears to be attractive beyond that country’s borders, what are the ideas and events that shaped India’s approach? Today's guest is Mila Samdub, a researcher at the Information Society Project at Yale Law School who recently published an essay titled “The Bangalore Ideology: How an amoral technocracy powers Modi’s India,” looking at histories of technocratic ideas in India, and how they have combined with Modi’s particular brand of populism.
A lot is written about the supply side of mis- and disinformation, including how propagandists and political leaders use messages and platforms to shape public opinion. But less is written about the demand side. When it comes to the false beliefs each of us adopts and harbors to help us understand the world and events in it, what incentives and social dimensions, both individual and communal, drive our appetite for misinformation? Today’s guest has devoted her research to this subject, and has just published a book that serves as a very accessible entry point to the latest scholarship on the question. Dannagal Young is a Professor of Communication and Political Science at the University of Delaware and the author of Wrong: How Media, Politics, and Identity Drive our Appetite for Misinformation.
There is a term you've likely heard on the Tech Policy Press podcast in the past: the Brussels Effect. The term is meant to describe the European Union’s outsized influence on global markets through its regulations. You may not know that the term was first coined by Anu Bradford, a professor at Columbia Law School. She wrote a book about it called The Brussels Effect: How the European Union Rules the World. Now, she has a new book, just out from Oxford University Press, called Digital Empires: The Global Battle to Regulate Technology. The book describes the geopolitical competition to establish digital governance models between the US, the EU, and China. Justin Hendrix had the opportunity to speak to Bradford about the book, and why she thinks the US government, by failing to regulate its tech companies, may ultimately imperil not only the US model but internet freedom more broadly.
The 13th installment of the Freedom on the Net report from Freedom House finds that "while advances in artificial intelligence offer benefits for society, they have also been used to increase the scale and efficiency of digital repression." Justin Hendrix spoke with two of the report's authors, Allie Funk and Kian Vesteinsson, about their findings, which unfortunately do not represent a change of trajectory from prior years.
While US Senators are busy holding hearings and forums and posing for pictures with the CEOs of AI companies, the European Union is just months away from passing sweeping regulation of artificial intelligence. As negotiations continue between the European Parliament, Council, and Commission, Justin Hendrix spoke to one observer who is paying close attention to every detail: the Ada Lovelace Institute's European Public Policy Lead, Connor Dunlop. Connor recently published a briefing on five areas of focus for the trilogue negotiations that recommence next week.
In Blood in the Machine: The Origins of the Rebellion Against Big Tech, Los Angeles Times technology columnist Brian Merchant has written a new history of perhaps one of the most famous movements for worker rights and power in the face of automation. The book sets the record straight on the Luddites, and unpacks what today’s workers can learn from them.
The ubiquity of cameras in our phones and our environment, coupled with massive social media networks that can share images and video in an instant, means we encounter graphic and disturbing images with great frequency. How are people processing such material? And how is it different for people working in newsrooms, social media companies, and human rights and social justice organizations? What protections might be put in place to guard against vicarious trauma and other harms, and what is the ultimate benefit of doing this work? In their new book, Graphic: Trauma and Meaning in Our Online Lives, University of California Berkeley scholars Alexa Koenig and Andrea Lampros set out to answer those questions.
In 2019, journalist Kashmir Hill had just joined The New York Times when she got a tip about the existence of a company called Clearview AI that claimed it could identify almost anyone with a photo. But the company was hard to contact, and people who knew about it didn’t want to talk. Hill resorted to old fashioned shoe-leather reporting, trying to track down the company and its executives. By January of 2020, the Times was ready to report what she had learned in a piece titled “The Secretive Company That Might End Privacy as We Know It.” Three years later, Hill has published a book that tells the story of Clearview AI, but with the benefit of a great deal more reporting and study on the social, political, and technological forces behind it. It's called Your Face Belongs to Us: A Secretive Startup's Quest to End Privacy As We Know It, just out from Penguin Random House.
Today’s episode features two segments, both of which consider the scale of technology platforms and their power over markets and people. In the first, Rebecca Rand presents a conversation with University of Technology Sydney researcher Dr. Luis Lozano-Paredes about a community of drivers in Colombia who have hacked together a way to preserve their power alongside the adoption of ride sharing apps. And in the second, Justin Hendrix speaks with Columbia University Law School Professor of Law, Science and Technology Tim Wu, who recently spent two years on the National Economic Council in the White House as Special Assistant to the President for Competition and Technology. The conversation touches on privacy legislation, ideas about competition and scale, and Wu's observations on the landmark antitrust trial between the Justice Department and Google, which wrapped up its first week of testimony on Friday. The conversation took place at the All Tech is Human Responsible Tech Summit, hosted with the Consulate General of Canada in New York, on September 14th.
This episode features two segments on the subject of disinformation. In the first, Rebecca Rand speaks with Dr. Shelby Grossman, a research scholar at the Stanford Internet Observatory, about recent research that looks at whether AI can write persuasive propaganda. In the second segment, Justin Hendrix speaks with Dr. Kirsty Park, the Policy Lead at the European Digital Media Observatory (EDMO) Ireland, and Stephan Mündges, the manager of the Institute of Journalism at TU Dortmund University and one of the coordinators of the German-Austrian Digital Media Observatory, about the report they authored that looks in detail at baseline reporting from big technology platforms that are part of the EU Code of Practice on Disinformation.
One of the problems we come back to again and again on the Tech Policy Press podcast is the problem of how to govern social media platforms. Today’s guest is Paul Gowder, Professor of Law and Associate Dean of Research and Intellectual Life at Northwestern University's Pritzker School of Law and a founding fellow of the Integrity Institute. Gowder is the author of The Networked Leviathan: For Democratic Platforms, a book that he says takes an institutional political science approach to the problem of tech platform governance, arguing “that the goals of effective governance capacity development and of global justice” can come together, and that we can build “worldwide direct democratic institutions to exercise public authority over the operations of the big platforms.”
This episode features two segments. In the first, Rebecca Rand speaks with Alina Leidinger, a researcher at the Institute for Logic, Language and Computation at the University of Amsterdam, about her research, conducted with coauthor Richard Rogers, into which stereotypes are moderated and under-moderated in search engine autocompletion. In the second segment, Justin Hendrix speaks with Associated Press investigative journalist Garance Burke about a new chapter in the AP Stylebook offering guidance on how to report on artificial intelligence.
This episode features two segments. In the first, Rebecca Rand considers the social consequences of "machine allocation behavior" with Cornell researchers Houston Claure and Malte Jung, authors of a recent paper on the topic with coauthors Seyun Kim and René Kizilcec. In the second segment, Justin Hendrix speaks with Tom Kemp, author of a new book out August 22 from Fast Company Press titled Containing Big Tech: How to Protect Our Civil Rights, Economy, and Democracy.
This week, Indian legislators approved a data protection law that will govern the processing of data in the country. The bill creates a data protection board and gives the government new powers, including the power to request information from companies and to issue orders to block content. While there is still work to do to determine how the law will be administered, it joins a range of new tech policy laws and regulations enacted against a backdrop of the increasing centralization of power in India’s government. To discuss the bill, Justin Hendrix is joined by Aditi Agrawal, an independent technology journalist based in New Delhi; Kamesh Shekar, a tech policy expert who leads the privacy and data governance vertical at The Dialogue, a think tank based in Delhi; and Prateek Waghre, the Policy Director at the Internet Freedom Foundation, a digital rights advocacy organization based in India.
Lots of voices are calling for the regulation of artificial intelligence. In the US, no federal legislation currently seems close to becoming law. But in 2023 legislative sessions in states across the country, there has been a surge in AI laws proposed and passed, and some have already taken effect. To learn more about this wave of legislation, I spoke to two people who just published a comprehensive review of AI laws in US states: Katrina Zhu, a law clerk at the Electronic Privacy Information Center (EPIC) and a law student at the UCLA School of Law, and EPIC senior counsel Ben Winters.
A unique collaboration between social scientists and Meta to conduct research on Facebook and Instagram during the height of the 2020 US election has at long last produced its first work products. The release of four peer-reviewed studies last week in Science and Nature marks the first of as many as sixteen studies that promise fresh insights into the complex dynamics of social media and public discourse. But beyond the findings of the research, the partnership between Meta and some of the most prominent researchers in the field has been held up as a model. With active discussions ongoing in multiple jurisdictions about how best to facilitate access to platform data for independent researchers, it’s worth scrutinizing the strengths and weaknesses of this partnership. And to do that, Justin Hendrix is joined by one researcher who was able to observe and evaluate nearly every detail of the process for the last three years: the project's rapporteur, Michael Wagner, who in his day job is a professor in the University of Wisconsin-Madison's School of Journalism and Mass Communication.
In today’s podcast, Justin Hendrix talks with director, writer and actor Alex Winter, whose new documentary, The YouTube Effect, is in select theaters now and will be available on streaming platforms on August 8th. The film's creators assert that "the story of YouTube is the great dilemma of our times; the technology revolution has made our lives easier and more enriched, while also presenting dangers and challenges that make the world a more perilous place."
Today’s guest on the podcast is Ifeoma Ajunwa, the AI.Humanity Professor of Law and Ethics and Director of the AI and the Law Program at Emory Law School, and author of The Quantified Worker: Law and Technology in the Modern Workplace, from Cambridge University Press. The book considers how data and artificial intelligence are changing the workplace, and whether the law is better equipped to help workers in this transition, or to provide for the interests of employers.
Artificial intelligence will likely impact every type of job. But this summer, Hollywood actors and writers have raised substantial concerns about the ways in which generative AI systems may be used to replace aspects of their human craft. The Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) and the Writers Guild of America (WGA) are currently joined in a dual strike, hoping to make progress on a range of labor grievances with the studios and streaming companies that employ them. Today’s guest is Justine Bateman, a writer, director, producer, author, and member of the Directors Guild of America (DGA), the WGA, and SAG-AFTRA. Bateman has been on both sides of the camera for much of her life, and has a particularly sharp perspective on how AI may change the entertainment industry, and why it matters to all workers that the unions are standing up on these issues now.
One of the most urgent debates in tech policy at the moment concerns encrypted communications. At issue in proposed legislation, such as the UK’s Online Safety Bill or the EARN IT Act put forward in the US Senate, is whether such laws break the privacy promise of end-to-end encryption by requiring content moderation mechanisms like client-side scanning. But to what extent are such moderation techniques legal under existing laws that limit the monitoring and interception of communications? Today’s guest is James Grimmelmann, a legal scholar with a computer science background who recently conducted a review of various moderation technologies to determine how they might hold up under US federal communication privacy regimes including the Wiretap Act, the Stored Communications Act, and the Communications Assistance for Law Enforcement Act (CALEA). The conversation touches on how technologies like server-side and client-side scanning work, the extent to which the law may fail to accommodate or even contemplate such technologies, and where the encryption debate is headed as these technologies advance.
Tomorrow's virtual worlds will be governed, at least at first, by today's legal and regulatory regimes. How will privacy law, torts, IP, or even criminal law apply in 'extended reality' (XR)? Drawing from the discussion at a conference hosted earlier this year at Stanford University called "Existing Law and Extended Reality," this episode asks what challenges will emerge from human behavior and interaction, with one another and with technology, inside XR experiences, and what choices governments and tech companies will face in addressing those challenges. This episode of The Sunday Show was produced by Tech Policy Press audio and reporting intern Rebecca Rand, and features the voices of experts such as Brittan Heller (the organizer of the Stanford conference), Mary Anne Franks, Kent Bye, Jameson Spivack, Joseph Palmer, Eugene Volokh, Amie Stepanovich, Susan Aaronson, Florence G'Sell, and Avi Bar-Zeev.
This spring, Karen Kornbluh and Adrienne Goldstein from the German Marshall Fund’s Digital Innovation and Democracy Initiative published a document they call the Civic Information Handbook, which they produced in collaboration with the University of North Carolina at Chapel Hill Center for Information, Technology, and Public Life (CITAP). Civic information, which they define as “important information needed to participate in democracy,” is too often drowned out by viral falsehoods, including conspiracy theories. The Handbook is intended as a resource to help knowledge-producing organizations in the “amplification of fact-based information.” To learn more about the handbook and the ideas on which it is based, Justin Hendrix spoke to GMF research assistant Adrienne Goldstein, as well as Kathryn Peters, executive director of UNC CITAP.
Alex Hanna, the director of research at the Distributed AI Research Institute and Emily M. Bender, a professor of linguistics at the University of Washington, are the hosts of Mystery AI Hype Theater 3000, a show that seeks to "break down the AI hype, separate fact from fiction, and science from bloviation." Justin Hendrix spoke to Alex and Emily about the show's origins, and what they hope will come of the effort to scrutinize statements about the potential of AI that are often fantastical.
Last week, Canada passed the Online News Act, legislation that requires tech platforms to remunerate Canadian news outlets, and the platforms are not happy. In response, Google announced it will remove links to Canadian news outlets from its products. Meta also said it would remove Canadian news from Facebook and Instagram. The Act itself has yet to be implemented; it must first go through a regulatory process to sort out how it will work. So, these moves by the platforms may be a tactic in the negotiation of the particulars. But the platforms also clearly want to send a message to other jurisdictions where similar legislation is under consideration. For an expert opinion on the politics surrounding Canada’s Online News Act and its broader implications, Tech Policy Press Contributing Editor Ben Lennett spoke to one person who has been following it closely from his perch in Montreal. Taylor Owen is the Beaverbrook Chair in Media, Ethics and Communications, the founding director of The Center for Media, Technology and Democracy, and an Associate Professor in the Max Bell School of Public Policy at McGill University.
Over the past few months, there have been a range of voices calling for the urgent regulation of artificial intelligence. Comparisons to the problems of nuclear proliferation abound, so perhaps it’s no surprise that some want a new international body similar to the International Atomic Energy Agency (IAEA). But when it comes to AI and global governance, there’s already a lot in play, from ethics councils to various schemes for industry governance, activity on standards, various international agreements, and legislation that will have international impact, such as the EU’s AI Act. To help get his head around the complicated, evolving ecology of global AI governance, Justin Hendrix spoke to two of the three authors of a recent paper in the Annual Review of Law and Social Science that attempts to take stock of and explore the tensions between different approaches: Michael Veale, an associate professor in the Faculty of Laws at University College London, where he works on the intersection of computer science, law, and policy; and Robert Gorwa, a postdoctoral researcher at the Berlin Social Science Center, a large publicly-funded research institute in Germany.
Earlier this month, Justin Hendrix traveled to RightsCon, the big gathering of individuals and organizations concerned with human rights and technology organized by Access Now. The sprawling event had hundreds of sessions on a wide range of themes, but one topic discussed across multiple tracks was the importance of encrypted communications, especially to groups such as political dissidents and journalists. A key panel at RightsCon featured Signal President Meredith Whittaker, who spoke out about policies proposed in legislatures around the world that threaten the promise of end-to-end encryption to preserve the privacy of messages sent between individuals and groups. Leaders of encrypted apps have pulled together of late to speak out against the proposed UK Online Safety Bill, signing letters and appearing at events. Shortly after RightsCon, Hendrix connected with Whittaker to learn more about Signal’s posture against such legislation, why she sees encrypted communications as so crucial to freedom and human rights, and how the company thinks about safety and its role in the broader digital ecosystem.
In the United States, it’s fair to say that federal, state and local governments have struggled in the era of digitalization. Decades into that era, there is still a gap between the policy outcomes we seek and what citizens often get when they engage with government agencies and services online. At its worst, this gap means people aren’t receiving critical services that sustain their lives; and at the very least, it reduces faith in government’s ability to solve problems right at the moment when it’s clear the collective challenges we face are growing. Jennifer Pahlka, who served in President Barack Obama’s administration as deputy chief technology officer and founded the nonprofit Code for America, has written a book that asks us to reexamine how government works, and how it should work, in the digital age. It's called Recoding America: Why Government Is Failing in the Digital Age and How We Can Do Better, and it's the subject of the podcast today.
Last week, a group of very important people, including the U.S. Secretaries of State and Commerce and trade representatives from President Joe Biden’s administration, met with top European Union officials in the heart of the Swedish Lapland for the fourth Ministerial meeting of the U.S.-EU Trade and Technology Council, or “TTC”. Pressing needs were tackled, new initiatives were launched, commitments were made, and cooperation was deepened on a range of tech policy issues, at least according to the press releases. To hear an unvarnished view from someone who was at the meeting about what might actually come of it all, Justin Hendrix invited on a journalist who is, in his opinion, one of the best tech policy reporters in the world: Mark Scott, Chief Technology Correspondent for Politico.
Today’s show has two segments both focused on generative AI. In the first segment, Justin Hendrix speaks with Irene Solaiman, a researcher who has put a lot of thought into evaluating the release strategies for generative AI systems. Organizations big and small have pursued different methods for release of these systems, some holding their models and details about them very close, and some pursuing a more open approach. And in the second segment, Justin Hendrix speaks with Calli Schroeder and Ben Winters at the Electronic Privacy Information Center about a new report they helped write about the harms of generative AI, and what to do about them.
Last week, the Supreme Court released decisions in Gonzalez v. Google, LLC, and Twitter, Inc. v. Taamneh. In this episode we’ll discuss what they tell us about how the Court is thinking about social media and intermediary liability, and what they might tell us about future cases the Court may hear. I’m joined by an expert who follows these issues closely, and has shared his expertise with us on this podcast before: Anupam Chander, a law professor at Georgetown University.
Today’s episode features a discussion with Nick Seaver, a professor at Tufts University and the author of Computing Taste: Algorithms and the Makers of Music Recommendation from the University of Chicago Press. Nick is an anthropologist who studies how people use technology to make sense of cultural things. His book is the product of ethnographic observation and conversations with developers working on music recommendation algorithms and other systems designed to understand and cater to user preferences. His research gives us a better understanding of the motivations of the executives and engineers designing systems to command our attention, which he considers to be “a currency, a capacity, a filter, a spotlight, and a moral responsibility.”
Justin Hendrix speaks to writer Malcolm Harris about his book, Palo Alto: A History of California, Capitalism, and the World, which considers the historical antecedents for the project of Silicon Valley.
Recently, Justin Hendrix caught up with Gus Hurwitz, a professor of law at the University of Nebraska and the director of the Nebraska Governance and Technology Center. He’s also the Director of Law and Economics Programs at the International Center for Law and Economics, a Portland-based think tank that focuses on antitrust law and economics policy issues. Hurwitz told Hendrix he’s leaving Nebraska at the end of the semester for a new position that is soon to be announced. The conversation covered a range of topics: how to think about the relationship between technology and the law, how to get engineers to engage with ethical and legal concepts, the view of the coastal tech policy discourse from Hurwitz’s vantage in the middle of the country, the role and politics of the Federal Trade Commission, and why he finds some inspiration in Frank Herbert’s Dune.
In the course of its investigation into the insurrection at the US Capitol, the House Select Committee on January 6th spoke to hundreds of witnesses, including social media executives with insight into the role that platforms played in propagating the false claims that motivated violence that day, and in connecting and facilitating the movement and organization of people who sought to overthrow the election. One of the individuals who testified to the Select Committee was a former Twitter official, Anika Collier Navaroli. Justin Hendrix had a chance to speak with Anika earlier this month, to hear how her thinking has evolved in this time under the spotlight, and what she’s hoping to do next to continue her journey as an intellectual and an activist working at the intersection of tech, media and democracy.
Tech Policy Press editor Justin Hendrix is joined by a UK lawmaker and advocate who has been influential in the global push for more protections for children online. Baroness Beeban Kidron OBE is a Crossbench member of the House of Lords and sits on the Democracy and Digital Technologies Committee, and she’s a Commissioner for UNESCO's Broadband Commission for Sustainable Development, where she is a member of the Working Group on Child Online Safety. She’s the Founder and Chair of 5Rights Foundation, which seeks to ensure children and young people are afforded the right to participate in the digital world “creatively, knowledgeably and fearlessly.” 5Rights played a key role in advancing the UK Children’s Code, as well as the California Age Appropriate Design Code Act, passed last year. Baroness Kidron discussed the broad trajectory of efforts to address online child safety, what she thinks about the legal challenge to the California law and some of the harsher provisions of child safety laws in other parts of the country, and where she believes the fight for child digital safety is headed in the future.
In this episode, Tech Policy Press board member and UCLA School of Law postdoctoral research fellow Courtney Radsch interviews Anne Marie Engtoft Larsen, Denmark’s Tech Ambassador, who represents the Danish Government to the global tech industry and in global governance forums on emerging technologies. The discussion focuses on the role of tech in society, how to regulate artificial intelligence, how to accommodate non-English and indigenous languages in a tech ecosystem focused on scale, and how to capitalize journalism in the age of social media.
This episode features two segments. We’ll hear from Ellen P. Goodman, Senior Advisor for Algorithmic Justice at the U.S. National Telecommunications and Information Administration (NTIA), which just launched an inquiry seeking comment on “what policies will help businesses, government, and the public be able to trust that Artificial Intelligence (AI) systems work as claimed – and without causing harm.” And, we’ll speak with Dr. Michal Luria, a Research Fellow at the Center for Democracy & Technology who had a column in Wired this month under the headline, Your ChatGPT Relationship Status Shouldn’t Be Complicated. She says the way people talk to each other is influenced by their social roles, but ChatGPT is blurring the lines of communication.
In this episode, Justin Hendrix is joined by a columnist and author who’s spent the last few years thinking about a past era of automation, a process that yielded him a valuable perspective when considering this moment in time. Los Angeles Times technology columnist Brian Merchant is the author of a recent column under the headline, "Afraid of AI? The startups selling it want you to be," and the forthcoming book Blood in the Machine: The Origins of the Rebellion Against Big Tech, which tells the story of the 19th century Luddite movement.
This is Part 2 of two episodes looking back on the Cambridge Analytica scandal, which arguably kicked off five years ago when the New York Times and the Guardian published articles on March 17, 2018. The Times headline was “How Trump Consultants Exploited the Data of Millions,” while the Guardian went with “Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach.” That number, and the scale of the scandal, would only grow in the weeks and months ahead. It served as a major catalyzing moment for privacy concerns in the social media age. In these two episodes we’ll look back on what has happened since, the extent to which perceptions of what happened have changed or been challenged, and what unresolved questions that emerged from the scandal mean for the future. In this second episode, we’ll hear a panel discussion hosted by the Bipartisan Policy Center that I helped moderate at the end of March. The panel featured Katie Harbath, a former Facebook executive who is now a Fellow in the Digital Democracy Project at the Bipartisan Policy Center; Alex Lundry, co-founder of Tunnl and Deep Root Analytics; and Matthew Rosenberg, a Washington-based correspondent for the New York Times and one of the individuals on the byline of that first story on Cambridge Analytica.
This is Part 1 of two episodes looking back on the Cambridge Analytica scandal, which arguably kicked off five years ago when the New York Times and the Guardian published articles on March 17, 2018. The Times headline was “How Trump Consultants Exploited the Data of Millions,” while the Guardian went with “Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach.” That number, and the scale of the scandal, would only grow in the weeks and months ahead. It served as a major catalyzing moment for privacy concerns in the social media age. In these two episodes we’ll look back on what has happened since, the extent to which perceptions of what happened have changed or been challenged, and what unresolved questions that emerged from the scandal mean for the future. In this first episode, Justin Hendrix speaks with David Carroll, a professor of media design in the MFA Design and Technology graduate program at the School of Art, Media and Technology at Parsons School of Design at The New School. Carroll legally challenged Cambridge Analytica in the UK courts to recapture his 2016 voter profile using European data protection law, events that were chronicled in the 2019 Netflix documentary The Great Hack.
Two weeks ago, Tech Policy Press editor Justin Hendrix participated in Tech and Society Week, a series of events across Georgetown’s campus hosted by Emily Tavoulareas, Managing Chair of the Georgetown Initiative on Tech & Society. One panel featured a discussion between three podcast hosts focused on tech and tech policy, including Hendrix and: Bridget Todd, director of public communications for Ultraviolet, a gender justice organization trying to build a more feminist, anti-racist internet, and the creator and host of the iHeartRadio tech and culture podcast There Are No Girls on the Internet; and Quinta Jurecic, a fellow in governance studies at the Brookings Institution, a senior editor at Lawfare, and a contributing writer at The Atlantic. Jurecic is one of an array of hosts on the Lawfare podcast, and she’s the co-host of a long running series called Arbiters of Truth that focuses on the information ecosystem.
Across the United States, there is a growing number of lawsuits that seek to hold tech firms accountable for various alleged harms. My guest today is tracking such suits closely. Gaia Bernstein is a Law Professor, Co-Director of the Institute for Privacy Protection and Co-Director of the Gibbons Institute for Law Science and Technology at the Seton Hall University School of Law. She writes, teaches, and lectures at the intersection of law, technology, health and privacy, and she is the author of a new book on the subject, just out from Cambridge University Press, titled Unwired: Gaining Control over Addictive Technologies.
Is technology ultimately neutral? Are the biases we discover in the systems we interact with today just bugs or defects that we can trust will be addressed in version 2.0 or 3.0 of the system? Or is there something inherently wrong with the tech industry’s approach to developing algorithms and software? In today’s podcast, we speak to the author of a new book that takes on this question. In More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech, data scientist and journalist Meredith Broussard considers the ways in which racism, sexism, and ableism are coded into systems, and what we must do to ensure a more inclusive future.
In this episode of the podcast, we hear three perspectives on generative AI systems and the extent to which their makers may be exposed to potential liability. I spoke to three experts, each with their own views on questions such as whether Section 230 of the Communications Decency Act, which has provided broad immunity to internet platforms that host third-party content, will apply to systems like ChatGPT. Guests, in order of appearance, include: Jess Miers, legal advocacy counsel at the Chamber of Progress, an industry coalition whose partners include Meta, Apple, Google, Amazon, and others; James Grimmelmann, a law professor at Cornell with appointments at Cornell Tech and Cornell Law School; and Hany Farid, a professor at the University of California, Berkeley with a joint appointment in the computer and information science departments.
At Columbia University, data scientist Chris Wiggins and historian Matthew Jones teach a course called Data: Past, Present and Future. Out of this collaboration has come a book, How Data Happened: A History from the Age of Reason to the Age of Algorithms, to be published on Tuesday, March 21st by W.W. Norton. It should be required reading for anyone working with data of any sort to solve problems. The book promises a sweeping history of data and its technical, political, and ethical impact on people and power.
Answers on how best to regulate technology differ depending on the values and politics of any particular jurisdiction. Yet it’s worth looking for points of consensus. In general these days, we in the United States have a lot to learn from lawmakers and regulators in Europe, who are further down the path in their regulatory experiments. In this episode, Justin Hendrix speaks with one German lawmaker, Tobias Bacherle, who was elected to the Bundestag in 2021 representing Alliance 90/The Greens. The conversation touches on issues including encryption, the Digital Services Act, the US-EU Trade and Technology Council, and the relationship between tech and the environment.
In the spring, Tech Policy Press editor Justin Hendrix teaches a course called Tech, Media and Democracy that is a partnership of faculty at NYU, Cornell Tech, CUNY’s Queens College, The New School and Columbia Journalism School. The course hosts a range of expert speakers on issues at the intersection of those topics, and graduate students in journalism, information science, computer science, media studies and design collaborate to produce prototypes and investigations of key issues. A recent guest speaker was Peter Pomerantsev, an author and researcher who is concerned with propaganda, polarization and how we come to understand the world around us. Emily Bell, director of the Tow Center at Columbia and one of the faculty on the course, led the discussion, which ranges from the information component of the war in Ukraine to the tension between democracy and authoritarianism to the role of journalism and technology in shaping public discourse.
In this episode we look at questions around ethical, legal and business risks surrounding so-called generative AI and synthetic media, and the opportunity that exists if they are employed responsibly. The first segment features Matthew Ferraro, an attorney at the firm WilmerHale who counsels clients about such risks and, with his colleagues, recently wrote a piece for Tech Policy Press on the "Ten Legal and Business Risks of Chatbots and Generative AI." And the second segment features Claire Leibowicz from the Partnership on AI and Sam Gregory from the human rights organization WITNESS, who worked together with other partners to develop a set of Responsible Practices for Synthetic Media.
How will so-called "generative AI" tools such as OpenAI's ChatGPT change our politics, and change the way we interact with our representatives in democratic government? This episode features three segments, with: Kadia Goba, a politics reporter at Semafor and author of a recent report on the AI Caucus in the U.S. House of Representatives; Micah Sifry, an expert observer of the relationship between tech and politics and the author of The Connector, a Substack newsletter on democracy, organizing, movements and tech, where he recently wrote about ChatGPT and politics; and Zach Graves, executive director of Lincoln Network, and Marci Harris, CEO and co-founder of PopVox.com, co-authors with Daniel Schuman of Demand Progress of a recent essay in Tech Policy Press on the risks and benefits of emerging AI tools in the legislative branch.
The past few years have seen a number of high profile hearings on Capitol Hill, with lawmakers expressing concern and even outrage at tech CEOs, often for failures to satisfy even their own companies’ policies. And, there have been high profile investigations by certain committees, including the investigation of competition in digital markets in the House Judiciary Committee and its Subcommittee on Antitrust, Commercial and Administrative Law. But when it comes to passing laws, Congress has made little progress in the domain of tech policy. An academic and a tech policy expert, today’s guest played an active role in the investigations and legislative proposals led by Democrats over the last few years. Anna Lenhart served as a staffer on the House Judiciary Committee Antitrust Subcommittee under then Chairman David Cicilline (D-RI), where she supported tech oversight and investigations. And, she was senior technology policy advisor to Representative Lori Trahan (D-MA), who serves on the Energy and Commerce Committee. I caught up with Anna for a kind of exit interview, as she recently left Congress to return to academia and a handful of projects focused on some of the issues she cared most about in her time on the Hill.
Amazon is one of the world’s largest and most powerful companies. Yet one of the engines of its might is largely invisible to customers- its vast network of millions of third party sellers. In today’s episode we talk with Moira Weigel, an Assistant Professor of Communications Studies at Northeastern University and the author of a recent report for Data & Society, Amazon's Trickle Down Monopoly: Third Party Sellers and the Transformation of Small Businesses. For the report, Weigel spent a good amount of time trying to understand the experience of the people operating the small businesses that power Amazon’s global expansion.
This episode features four segments that dive into Gonzalez v. Google, a case before the Supreme Court that could have major implications for platform liability for online speech. First, we get a primer on the basics of the case itself; then, three separate perspectives on it. Asking the questions is Ben Lennett, a tech policy researcher and writer focused on understanding the impact of social media and digital platforms on democracy. He has worked in various research and advocacy roles for the past decade, including serving as the Editor in Chief of Recoding.tech and as policy director for the Open Technology Institute at the New America Foundation. Ben’s first interview is with two student editors at the publication Just Security, Aaron Fisher and Justin Cole, whom Tech Policy Press worked with this week to co-publish a review of key arguments in the amicus briefs filed with the Court on the Gonzalez case. Then, we’ll hear three successive interviews, with Mary McCord, Executive Director of the Institute for Constitutional Advocacy and Protection (ICAP) and a Visiting Professor of Law at Georgetown University Law Center; Anupam Chander, a Professor of Law and Technology at Georgetown Law; and David Brody, Managing Attorney of the Digital Justice Initiative at the Lawyers’ Committee for Civil Rights Under Law.
Elon Musk, the platform’s new owner, says that Twitter is both a social media company and a "crime scene." The crime he appears most concerned about is purported censorship by the tech firms, which he says has occurred at the U.S. government’s direction. Musk, who claims he is leading a “revolution” against such practices, has given a small number of people access to internal Twitter documents- the so-called Twitter Files- including emails and internal message board communications that, in their selective release, show executives at the firm engaging with politicians and federal agencies on a range of issues, from COVID-19 to election disinformation. This week, there were two hearings in the House of Representatives on this subject, including a Committee on Oversight and Accountability hearing titled “Protecting Speech from Government Interference and Social Media Bias, Part 1: Twitter’s Role in Suppressing the Biden Laptop Story,” and a hearing of the new Select Subcommittee on the Weaponization of the Federal Government that was intended to “discuss the politicization of the FBI and DOJ and attacks on American civil liberties.” If we look past the conspiracy theories and legal gibberish, is there any there there? Should we pursue reforms and require greater transparency around the interaction between platforms and government? In this episode, we hear from three experts:
Shoshana Weissmann, Digital Director and Fellow at the R Street Institute
Darren Linvill, Associate Professor, Clemson University Media Forensics Hub
Mike Masnick, Founder of Techdirt and CEO of the Copia Institute
Today, we’re going to listen in on a panel discussion that took place at the end of last year, hosted by the Knight First Amendment Institute at Columbia University. The Institute’s Research Director, Katy Glenn Bass, hosted a conversation based on themes from scholar David G. Robinson’s first book, Voices in the Code. The book tells the story of how a group of patients, doctors, data scientists, and advocates worked together to develop a new way to match kidney donations for transplants, with the goal of making the process fair and open. It offers insights on how algorithmic systems that are heavily freighted with moral and political complexity can and should be developed with care, so that the voices of non-technical stakeholders are not excluded from the outcome, and it is a guide for policymakers concerned with questions around transparency, safety and equity in such systems. Panelists included Robinson, as well as scholars Deborah Raji and J. Nathan Matias.
Frequently on this podcast we come back to questions around information, misinformation, and disinformation. In this age of digital communications, the metaphorical flora and fauna of the information ecosystem are closely studied by scientists from a range of disciplines. We're joined in this episode by one such scientist who uses observation and ethnography as his method, bringing a particularly sharp eye to the study of propaganda, media manipulation, and how those in power— and those who seek power— use such tactics. Samuel Woolley is the author of Manufacturing Consensus: Understanding Propaganda in the Age of Automation and Anonymity, just out this week from Yale University Press. He’s also the author of The Reality Game: How the Next Wave of Technology Will Break the Truth; co-author, with Nick Monaco, of a book on Bots; and co-editor, with Dr. Philip N. Howard, of a book on Computational Propaganda.
Earlier this month, Getty Images, one of the world’s most prominent suppliers of editorial photography, stock images, and other forms of media, announced that it had commenced legal proceedings in the High Court of Justice in London against Stability AI, a British startup firm that says it builds AI solutions using "collective intelligence," claiming Stability AI infringed on Getty’s intellectual property rights by including content owned or represented by Getty Images in its training data. Getty says Stability AI unlawfully copied and processed millions of images protected by copyright and the associated metadata owned or represented by Getty Images without a license, which the company says is to the detriment of the content’s creators. The notion at the heart of Getty’s assertion- that generative AI tools like Stable Diffusion and OpenAI’s DALL-E 2 are in fact exploiting the creators of the images their models are trained on- could have significant implications for the field. Earlier this month I attended a symposium on Existing Law and Extended Reality, hosted at Stanford Law School. There, I met today’s guest, Michael Running Wolf, who brings a unique perspective to questions related to AI and ownership, as a former Amazon software engineer, a PhD student in computer science at McGill University, and as a Northern Cheyenne man intent on preserving the language and culture of native people.
In 2004, Mark Zuckerberg launched “TheFacebook” at Harvard University before rolling the social networking site out to other students at Dartmouth, Columbia, and Yale. Soon, it was available on hundreds of college and university campuses, and thereafter the rollout included high schools. Now, there are nearly 3 billion monthly active users of the site, and it is readily apparent that it has had a significant impact on society in a variety of ways. One such impact is on mental health. Researchers have found that Facebook use is associated with multiple mental health issues, ranging from anxiety, insomnia, depression and addiction to body image and eating disorders, alcohol use, and more. But while much of the evidence collected is concerning, most such studies have not identified a solid causal connection between Facebook and negative mental health, and many skeptics remain. But in today’s episode, we’re going to discuss one study that does appear to draw a causal connection between the use of Facebook and poor mental health with two of its authors: Luca Braghieri, an Assistant Professor in the department of Decision Sciences at Bocconi University in Italy; and Alexey Makarin, an Assistant Professor in the Applied Economics group at the MIT Sloan School of Management.
In the years following the 2016 U.S. presidential election, much effort has been put into understanding foreign influence campaigns, and into disrupting efforts by Russia and other countries, such as China and Iran, to interfere in U.S. elections. Political and other computational social scientists continue to chip away at questions about how much influence such campaigns have on domestic politics. One such question: how much did the Russian Internet Research Agency's (IRA) tweets, specifically, affect voting preferences and political polarization in the United States? A new paper in the journal Nature Communications provides an answer to that specific question. Titled Exposure to the Russian Internet Research Agency foreign influence campaign on Twitter in the 2016 US election and its relationship to attitudes and voting behavior, the paper matches Twitter data with survey data to study the impact of the IRA's tweets. To learn more about the paper, Justin Hendrix spoke with one of its authors, Joshua Tucker, a professor of politics at NYU, where he also serves as the director of the Jordan Center for the Advanced Study of Russia and the co-director of the NYU Center for Social Media and Politics (CSMaP). Hendrix and Tucker talked about the study, as well as what can and cannot be understood from its results about the impact of the broader IRA campaign, or indeed the broader Russian effort to interfere in the U.S. election.
To learn more about the events on January 8th, 2023, when supporters of former far-right Brazilian President Jair Bolsonaro stormed the country's capital, and the connection between U.S. and Brazilian election disinformation, Justin Hendrix spoke with a prominent Brazilian journalist who has been covering these issues for years: Patrícia Campos Mello, a reporter at large and columnist at the newspaper Folha de São Paulo. They discussed the role of social media in Brazilian politics, as well as the possibility that the attacks may spur new regulations.
Imagine a company that hides who it works with and where billions of dollars flow around the world. That earns its profits financing a global network of piracy, porn, fraud and disinformation, even doing business with figures sanctioned by the U.S. Treasury, including Russian companies that may access and store data about people browsing websites and apps in Ukraine, potentially opening a mechanism for Russian intelligence to target individuals there. A company that tells the public that it doesn’t make money from guns, but nevertheless does business with the maker of the AR-15, the weapon used in so many horrific mass killings, including the recent massacre of teachers and students in Uvalde, Texas. Is this some organized crime syndicate or shady offshore shell company? No, it’s Google, one of the biggest and most prominent technology companies on the planet. This episode features a conversation with Craig Silverman, a journalist who has spent years uncovering fraud in the opaque world of digital advertising and media manipulation. With his colleagues at ProPublica, in a recent series of articles Silverman employed a unique investigative approach to uncover exactly how Google operates in a shadowy realm of deceit and disinformation.
According to the legislation that established the January 6th Committee, the members were mandated to examine “how technology, including online platforms” such as Facebook, YouTube, Twitter and Reddit and others “may have factored into the motivation, organization, and execution” of the insurrection. When the Committee issued subpoenas to platforms a year ago, Chairman Bennie Thompson (D-MS) said, “Two key questions for the Select Committee are how the spread of misinformation and violent extremism contributed to the violent attack on our democracy, and what steps—if any—social media companies took to prevent their platforms from being breeding grounds for radicalizing people to violence.” In order to learn what came of this particular aspect of the Committee’s sprawling, 18-month investigation, in this episode I’m joined by four individuals who helped conduct it, including staffing the depositions of social media executives, message board operators, far-right online influencers, militia members, extremists and others who gave testimony to the Committee:
Meghan Conroy is the U.S. Research Fellow with the Digital Forensic Research Lab (DFRLab) and a co-founder of the Accelerationism Research Consortium (ARC), and was an Investigator with the Select Committee to Investigate the January 6th Attack on the U.S. Capitol.
Dean Jackson is Project Manager of the Influence Operations Researchers’ Guild at the Carnegie Endowment for International Peace, and was formerly an Investigative Analyst with the Select Committee.
Alex Newhouse is the Deputy Director at the Center on Terrorism, Extremism, and Counterterrorism and the Director of Technical Research at the Accelerationism Research Consortium (ARC), and served as an Investigative Analyst for the Select Committee.
Jacob Glick is Policy Counsel at Georgetown’s Institute for Constitutional Advocacy and Protection, and served as an Investigative Counsel on the Select Committee.
Avi Asher-Schapiro is a journalist covering digital rights and technology for the Thomson Reuters Foundation. For the final Tech Policy Press podcast of 2022, Justin Hendrix spoke to Asher-Schapiro about some of the most significant stories he and his colleagues covered in 2022, as well as what may make headlines in 2023 at the intersection of technology and society, delving into topics ranging from surveillance and crypto to social media and tech policy.
On Friday, Congresswoman Lori Trahan, a member of the House Energy and Commerce Committee, led a group of Democrats including Senator Ron Wyden and Representatives Katie Porter, Stephen Lynch, Susan Wild, Mondaire Jones, Kathy Castor, Adam Schiff, and Elissa Slotkin in signing letters requesting information from gaming companies about their efforts to combat hate, harassment, and extremism in online games. The letters were sent to companies including Activision Blizzard, Take-Two Interactive, Riot Games, Epic Games, Valve, Microsoft, Sony, and Roblox. The letters followed a report issued by the Anti-Defamation League (ADL) earlier this month that found that 77 percent of adults and 66 percent of teens have reported experiences of harassment while playing online games over the past year, and identified a number of other concerns about social gaming environments. Today, I’m joined by one of the authors of that report, ADL Center for Technology and Society Director of Strategy and Operations Daniel Kelley; as well as by Queen's University professor Amarnath Amarasingam, co-author of a report commissioned by the United Nations Office of Counter-Terrorism on the intersection of gaming and violent extremism that was released in October.
A little more than a year ago, in the first article announcing the release of the Facebook Files, the documents brought out of the company by whistleblower Frances Haugen, the Wall Street Journal’s Jeff Horwitz reported on Cross Check, a Facebook system that “exempted high-profile users from some or all” of the platform’s rules. The program shields millions of elites from normal content moderation enforcement. While the existence of such a program was known, its scale was, and perhaps still is, shocking. Following the Journal’s reporting and the public concern that ensued, Facebook (now Meta) President of Global Affairs Nick Clegg announced the company would request a policy advisory opinion from its independent Oversight Board. Fourteen months later, the Oversight Board has completed its review and published its opinion. To talk more about the opinion, the Cross Check system and the problem of content moderation more generally, I’m joined by one member of the Oversight Board, Nighat Dad, a lawyer from Pakistan and founder of the Digital Rights Foundation; and one outside observer who answered the board’s call for opinions about the Cross Check system, R Street Institute senior fellow and University of Pennsylvania Annenberg Public Policy Center distinguished research fellow Chris Riley.
Last week, the Chinese government under President Xi Jinping took steps to finally move away from its zero-COVID policy, following two weeks of protests in multiple cities. The unrest and anti-government sentiment was perhaps the most pronounced since the 1989 Tiananmen Square crackdown. And while these events gave Western observers an opportunity to grapple with the complexity of Chinese politics, generational and regional differences in the views of the population, and ultimately how the authoritarian government responds to public pressure, it also gave us a chance to see how the Chinese censorship and surveillance apparatus operates. This week’s Tech Policy Press podcast comes in two parts. In both, we’ll hear from reporters covering the intersection of China and technology. This is the second part, and it features a conversation with two individuals covering China for the New York Times, Paul Mozur and Muyi Xiao. In their collaborative coverage they have mixed open source visual investigations methods with traditional reporting to get a sense of the protests and the state’s response.
Last week, the Chinese government under President Xi Jinping took steps to finally move away from its zero-COVID policy, following two weeks of protests in multiple cities. The unrest and anti-government sentiment was perhaps the most pronounced since the 1989 Tiananmen Square crackdown. And while these events gave Western observers an opportunity to grapple with the complexity of Chinese politics, generational and regional differences in the views of the population, and ultimately how the authoritarian government responds to public pressure, it also gave us a chance to see how the Chinese censorship and surveillance apparatus operates. This week’s Tech Policy Press podcast comes in two parts. In both, we’ll hear from reporters covering the intersection of China and technology. This is the first part, and it features a conversation with Liza Lin, a Reporter at The Wall Street Journal. She covers Asia technology news for the Journal from Singapore. Before that she was the paper’s China correspondent, based in Shanghai. She was part of a team at the Journal named as Pulitzer finalists in the International Reporting category in 2021 for coverage of Chinese leader Xi Jinping, and with other Journal reporters she won the Gerald Loeb Award for International Reporting in 2018 for a series of stories on China's surveillance state. She’s co-author of a book on that subject titled Surveillance State: Inside China's Quest to Launch a New Era of Social Control, with Josh Chin.
On Friday, Elon Musk announced via tweet that documents related to Twitter’s decision to intervene in the propagation of an October 2020 story in the New York Post about then candidate Joe Biden’s son, Hunter Biden, would be made public. The incident caused a furor at the time, with some Republicans and supporters of former President Donald Trump insinuating that it was proof that social media firms are biased against conservative interests. Some even maintain that the actions of Twitter and Facebook with regard to this particular New York Post story may have had some impact on the outcome of the election, as far-fetched as that might be. Today, we’ll hear two voices on the disclosures. The first is David Ingram, who covers tech for NBC News and will walk us through what happened. And the second is Mike Masnick, the editor of the influential site Techdirt, who offers his first thoughts on the disclosures, and what they portend for the future of Twitter under Elon Musk.
For this episode of the Tech Policy Press podcast, I had the chance to speak to Chris Anderson, Ph.D., a professor of sociology at the University of Milan who is leading a course on tech manifestos and their evolution, inviting his students to dissect the language for what it can tell us about politics and power. Documents such as A Declaration of the Independence of Cyberspace and A Manifesto for Cyborgs have given way to more vacuous statements from billionaires, such as Mark Zuckerberg's Facebook manifesto, Building Global Community. These days a lot of Silicon Valley’s leaders don’t have much in the way of ideas, but they do have a lot of money, which they can use to push whatever agenda they may have on the rest of us. From promises of abundance delivered by artificial intelligence, to a 'global community' convened on social media platforms, to reimagined economies or even a new world order built on the blockchain, tech manifestos remain important, since they often signify that large amounts of capital are about to be deployed to try to manifest someone's new vision.
By all accounts, Elon Musk’s acquisition of Twitter is not going well. And yet many have the real sense that something important may be lost if the platform collapses, or if there is a substantial migration away from it to alternatives like Mastodon, the open source, decentralized platform that has grown from three hundred thousand monthly active users to nearly two million since Musk bought Twitter. In this episode, Tech Policy Press editor Justin Hendrix had the chance to discuss Musk’s takeover with Dr. Johnathan Flowers, and to learn more about some of the exclusionary norms he’s observed that may create obstacles for communities of color contemplating the switch to Mastodon.
Today we’re going to hear from the editor of-- and two authors included in-- a book of essays about how particular bits of software have changed the world in different ways, the just-published "You Are Not Expected to Understand This": How 26 Lines of Code Changed the World from Princeton University Press. The book is at once delightful and enlightening, revealing how technology interacts with people and society in both good and bad ways, and how important and long-lasting the effects of the decisions we take when designing software and systems can be for the world. This episode features:
Torie Bosch, the editor of Future Tense, a collaborative project of Slate magazine, New America, and Arizona State University, and the editor of the book;
Meredith Broussard, an associate professor at the Arthur L. Carter Journalism Institute of New York University and research director at the NYU Alliance for Public Interest Technology;
Charlton McIlwain, Vice Provost for Faculty Engagement and Development at New York University and Professor of Media, Culture, and Communication at NYU’s Steinhardt School of Culture, Education, and Human Development.
Media reports suggest that large swathes of employees at Twitter have resigned after the platform’s new owner, Elon Musk, issued a kind of ultimatum asking them to commit to "long hours at high intensity" to build “Twitter 2.0.” Last night, according to an internal Twitter email shared with CNN, employees who decided to stay at the company received an email that said the company's offices will be temporarily closed and badge access will be restricted through Monday. Whether the platform will remain functional with so many core engineering and other crucial teams decimated is an open question. To talk more about Twitter, Musk, and what is potentially lost, Justin Hendrix spoke to Dr. Meredith Clark, whose research focuses on the intersections of race, media, and power. She’s leading a project to archive Black Twitter, as part of a larger project to archive the Black web. And, she’s the author of a forthcoming book on Black Twitter.
According to the BBC, to date at least 348 Iranian protesters have been killed and nearly 16,000 arrested in women-led protests that erupted three months ago after the death of Mahsa Amini, a 22-year-old woman who died in custody after being detained by morality police for allegedly breaking the strict rules on the wearing of hijabs. One way the regime has responded to these antigovernment protests is to block access to the internet, independent news sites and social media and communication platforms. To talk more about how these tactics are being applied in Iran and around the world, and what policymakers in democratic countries can do to help dissidents on the ground, I spoke to two experts on digital and human rights:
Yasmin Green, CEO of Jigsaw and author of a recent piece in Wired on Iran's internet blackouts
Kian Vesteinsson, Senior Research Analyst for Technology and Democracy at Freedom House, one of the authors of the 12th annual Internet Freedom Report
Voting in the U.S. midterm elections closed on Tuesday, and as of Sunday morning, November 13, Democrats secured another majority in the Senate. But ballots are still being counted in key races that will determine which party controls the House. It is clear, however, that the margins determining leadership in both chambers will be extremely small. In order to explore how the elections may impact the legislative debate over tech policy issues, Tech Policy Press editor Justin Hendrix spoke with three experts from civil society groups that regularly engage with lawmakers to learn what scenarios and considerations are front of mind, even as we wait for the final tally:
Emma Llansó, Director of the Free Expression Project, Center for Democracy and Technology
Yosef Getachew, Director of the Media and Democracy Program, Common Cause
Matt Wood, Vice President of Policy and General Counsel, Free Press
This episode features a discussion with Brandi Collins-Dexter, the author of the new book BLACK SKINHEAD: Reflections on Blackness and Our Political Future. Brandi is both an academic and a civil rights activist in the fight for media and tech justice, and her book is a rollercoaster ride through those issues by way of culture, music and politics. Part media and cultural criticism, part memoir, and part warning, the book takes us to the fringes of Black communities and tries to make sense of our political moment.
As the U.S. midterm elections approach next week, there is a renewed focus on understanding the spending on and claims made in political advertising in digital channels, particularly on social media. But what is going on across the web, beyond the social media platforms? A recent report from the University of North Carolina at Chapel Hill Center on Technology Policy found that as a result of restrictions on political ads instituted by major platforms ahead of the 2020 elections, political advertisers are increasingly turning to political advertising on other platforms. Programmatic advertising accounts for a substantial and increasing share of political advertising, they say, and more attention needs to be paid to this complex and confusing ecosystem of companies- large and small- that serve up ads on websites, apps, streaming services, and other digitally connected devices. This episode features a discussion with the report's authors, J. Scott Babwah Brennen & Matt Perault.
Danielle Citron is the inaugural Jefferson Scholars Foundation Schenck Distinguished Professor in Law at the University of Virginia School of Law, where she teaches and writes about information privacy, free expression and civil rights. She is the vice president of the Cyber Civil Rights Initiative, a nonprofit devoted to fighting for civil rights and liberties in the digital age, and in 2019 she was named a MacArthur Fellow for her work on cyberstalking and intimate privacy. Her latest book, The Fight for Privacy: Protecting Dignity, Identity, and Love in the Digital Age, published by W.W. Norton and Penguin Vintage UK, was released this month.
In this episode of the podcast, we present two segments that explore how the combination of media, platforms, politics and people plays out in Latino communities in the U.S., particularly at crucial moments for democracy, such as at election time. The first segment is with individuals who are leading efforts to understand and confront mis- and disinformation targeting Latino communities:
Roberta Braga, Director of Counter-Disinformation Strategies at Equis
Jaime Longoria, Manager of Research and Training for the Disinfo Defense League at Media Democracy Fund.
And the second segment is a discussion with two researchers at the University of Texas at Austin who spent the summer talking specifically to Latino users of WhatsApp about how political discourse plays out in their communities on that widely used messaging app, and wrote about it for Tech Policy Press as part of a special series of essays on race, ethnicity, technology and elections:
Inga Kristina Trauthig, Ph.D., Research Manager of the Propaganda Research Lab at the Center for Media Engagement at The University of Texas at Austin
Kayo Mimizuka, Graduate Research Assistant at the Center for Media Engagement and a Ph.D. student in the School of Journalism and Media at The University of Texas at Austin.
In recent episodes of this podcast we’ve explored the policies and practices of the social media platforms with regard to elections. In this week’s episode, we’ll hear two segments on this theme. First, an interview with Daniel Kreiss, an Associate Professor in the Hussman School of Journalism and Media at the University of North Carolina at Chapel Hill and a principal researcher at the UNC Center for Information, Technology, and Public Life. With Ph.D. candidate Erik Brooks, Daniel is the author of Looking to the Midterms: The State of Platform Policies on U.S. Political Speech, a recent post at Tech Policy Press. In the second segment, we zoom out and discuss the trajectory of tech company policies on elections over the last twenty-six years with Katie Harbath and Collier Fernekes, authors of a recent report for the Bipartisan Policy Center that was based on an archive of public announcements made by the firms. Katie is a former Facebook public policy director and now leads Anchor Change, a consultancy she started after leaving the tech company. Collier is a research analyst at the Bipartisan Policy Center.
Earlier this year, an investigation published in the New Yorker by Ronan Farrow suggested that commercial spyware called Pegasus, developed by the Israeli firm NSO Group, is being used by governments in at least 45 countries around the world, including by U.S. and European intelligence and law enforcement services. The technology permits government agents to gain access to the contents of cell phones by exploiting flaws in device operating systems and software. In this episode, we hear from three individuals in Bangkok, Thailand- pro-democracy activists who have seen their community targeted with Pegasus, part of a range of activities intended to discourage dissent and limit free expression:
Yingcheep Atchanont, a program manager at iLaw
Ruchapong Chamjirachaikul, advocacy officer at iLaw
Darika Bamrungchok, a program manager at Thai Netizen
Regular users of social media platforms are well aware that they often produce toxic discourse. Scholars continue to produce results that bring clarity to the mechanisms by which digital and social media exacerbate partisan and identity-based conflict. A better understanding is crucial for keying in on what platforms should be held responsible for, devising better policy, and potentially designing solutions. A new peer-reviewed paper from Petter Törnberg, a researcher at the University of Amsterdam Institute for Social Science Research, contributes to this understanding by developing a computational model that “suggests that digital media polarize through partisan sorting, creating a maelstrom in which more and more identities, beliefs, and cultural preferences become drawn into an all-encompassing societal division.”
Last week, President Joe Biden’s White House published a 73-page document produced by the Office of Science and Technology Policy titled Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People. The White House says that “among the great challenges posed to democracy today is the use of technology, data, and automated systems in ways that threaten the rights of the American public.” The Blueprint, then, is “a guide for a society that protects all people from these threats—and uses technologies in ways that reinforce our highest values.” To discuss the blueprint and the broader context into which it was introduced, Tech Policy Press spoke to one expert who had a hand in writing it, and one external observer who follows these issues closely. Joining the discussion are Suresh Venkatasubramanian, a professor of computer science and data science and director of the Data Science Initiative at Brown University, who recently completed a 15-month appointment as an advisor to the White House Office of Science and Technology Policy; and Alex Engler, a fellow at the Brookings Institution, where he researches algorithms and policy.
Some of the most controversial debates over speech and content moderation on social media platforms are now due for consideration in the Supreme Court. Last month, Florida’s attorney general asked the Court to decide whether states have the right to regulate how social media companies moderate content on their services, after Florida and Texas passed laws that challenge practices of tech firms that lawmakers there regard as anti-democratic. And this month, the Supreme Court decided to hear two cases that will have bearing on interpretation of Section 230 of the Communications Decency Act, which generally provides platforms with immunity from legal liability for user generated content. To talk about these various developments, Justin Hendrix spoke to three people covering these issues closely. Guests include:
Brandie Nonnecke, Director of the CITRIS Policy Lab at UC Berkeley and the Director of Our Better Web
Jameel Jaffer, Director of the Knight First Amendment Institute at Columbia University
Will Oremus, a news analysis writer focused on tech and society at The Washington Post
The guests also made time to discuss Elon Musk’s on-again, off-again pursuit of Twitter, which appears to be on-again, and how his potential acquisition of the company relates to the broader debate around speech and moderation issues.
On September 21, Justin Hendrix moderated a panel discussion for the McCourt Institute at a pre-conference spotlight session on digital governance ahead of Unfinished Live, a conference on tech and society issues hosted at The Shed in New York City. The topic given by the organizers was Digital Governance and the State of Democracy: Why Does it Matter? Panelists included:
Erik Brynjolfsson, the Jerry Yang and Akiko Yamazaki Professor and Senior Fellow, Stanford Institute for Human-Centered AI (HAI) and Director of the Stanford Digital Economy Lab
Maggie Little, Director of the Ethics Lab at Georgetown University
Eli Pariser, Co-Director of New_Public, an initiative focused on developing better digital public spaces; and
Eric Salobir, the Chair of the Executive Committee, Human Technology Foundation, a research and action network placing the human being at the heart of technology development
On Monday, the U.S. Supreme Court agreed to hear two cases that concern whether tech platforms can be held liable for user generated content, as well as for content that users see because of a platform’s algorithmic systems. In deciding to hear Gonzalez et al. v. Google and Taamneh et al. v. Twitter et al., the Court will broach the question of whether Section 230 of the Communications Decency Act should be narrowed, and whether it still immunizes the owners of websites when they algorithmically “recommend” third-party content into a user’s feed. To learn more about these cases and the potential implications of the Court’s decision, Tech Policy Press spoke to an expert on tech and internet law: Anupam Chander, the Scott K. Ginsberg Professor of Law and Technology at Georgetown University.
The former President and his supporters continue to sow doubt in the outcome of the 2020 election, and in the election system more generally. Now, with the 2022 midterm elections just a month away, a number of observers are perplexed at the posture of large social media platforms, where false claims continue to fester and efforts to mitigate misinformation always seem puny compared to the scale of the problem. This week we hear from three experts who are following these issues closely:
Nora Benavidez, Senior Counsel and Director of Digital Justice and Civil Rights, Free Press
Paul Barrett, Deputy Director, Center for Business & Human Rights, NYU Stern School of Business
Mike Caulfield, Research Scientist at the Center for an Informed Public, University of Washington
In a new paper, “The uselessness of AI Ethics,” published in the online edition of the journal AI and Ethics, Luke Munn points to over 80 lists of AI ethical principles produced by governments, corporations, research groups and professional societies. In his paper, he expresses concern that most of these ethics statements deal in vague terms and lack any kind of actual enforcement. But in critiquing attempts at defining an ethical code for AI, he is not suggesting we let the technology develop in a vacuum. On the contrary, he wants us to think more deeply about the potential problems in deploying AI. In this episode of the podcast, Mark Hansen, Director of the Brown Institute for Media Innovation and a professor at Columbia Journalism School, speaks with Munn about his ideas, which are part of a growing movement that sees the problems with AI less in purely computational terms and more as a matter of social science.
As content moderation and other trust and safety issues have been, to put it mildly, at the fore of tech concerns over the last few years, it’s interesting to take a step back and look at the various conferences, professional organizations and research communities that have emerged to address this broad and challenging set of subjects. To get a sense of where trust and safety is as a field at this moment in time, Tech Policy Press spoke to three individuals involved in it, each coming from a different perspective:
Shelby Grossman, a research scholar at the Stanford Internet Observatory and a leader in the community of academic researchers studying trust and safety issues as co-editor of the recently launched Journal of Online Trust and Safety;
David Sullivan, the leader of an industry-funded consortium focused on developing best practices for the field called the Digital Trust and Safety Partnership; and
Jeff Allen, co-founder and chief research officer of an independent membership organization of trust and safety professionals, the Integrity Institute.
A series of reports published this summer by ARTICLE 19, working with UNESCO and with funding from the European Union, takes an in-depth look at how social media platforms operate in a global context, documenting a lack of understanding of cultural nuances and local languages, insufficient mechanisms for users and civil society groups to engage on moderation, a lack of transparency, and a power asymmetry that leaves local actors feeling powerless. To learn more about the project and its recommendations, in this episode we hear from four individuals involved in the drafting of the reports:
Pierre François Docquir, Head of Media Freedom, ARTICLE 19, who led the project globally;
Roberta Taveri, an ARTICLE 19 program officer who played a role in delivering the research on Bosnia and Herzegovina;
Catherine Muya from ARTICLE 19 East Africa, who focused on Kenya; and
Sherly Haristya, PhD, an independent researcher who conducted the research on Indonesia.
In this episode of the Tech Policy Press podcast, we’re going to explore how law enforcement and other government agencies in the United States acquire data drawn from commercial data brokers for investigative purposes, and the questions raised by these practices. This is an issue that is still in question in the nation’s courts and is under active discussion on Capitol Hill. For instance, this summer the House Judiciary Committee hosted a hearing it titled Digital Dragnets: Examining the Government's Access to Your Personal Data. At the hearing, expert witnesses testified that government agencies at all levels, including federal agencies such as the Department of Homeland Security (DHS), Central Intelligence Agency (CIA), Internal Revenue Service (IRS), and the Department of Defense (DOD), as well as state and local law enforcement, are collecting a massive amount of personal data on American citizens, sidestepping constitutional protections against unwarranted search and seizure provided in the Fourth Amendment. The hearing included discussion of the proposed Fourth Amendment Is Not For Sale Act, which would restrict government entities from engaging in such practices. But while the courts and Congress deliberate, government agencies are acquiring this information from software providers, including one such firm that was the subject of a recent investigative report from the Associated Press titled Tech tool offers police ‘mass surveillance on a budget.’ Today, I’m joined by the two reporters who spent months trying to understand how a little-known company in Virginia goes about acquiring commercially available data and selling it to police departments across the country: global investigative journalist Garance Burke and national investigative reporter Jason Dearen.
It is well understood that for all the shortcomings of the tech platforms’ approach to elections in this country, it’s much worse abroad, where language and cultural barriers often combine with fewer political and business incentives for firms such as Meta, Twitter, YouTube and TikTok to properly resource elections. Now, just weeks before a general election in Brazil that will decide that country’s next president, there are signs that disinformation is rife on the platforms, with many observers concerned about the potential for violence. To learn more, Justin Hendrix spoke to two experts involved in efforts to identify and mitigate disinformation in Brazil: João Brant, coordinator of desinformante, an initiative of the nonprofit Ponteio Comunicação, Information and Culture and the Instituto Cultura e Democracia in Brazil, and Flora Rebello Arduini, Campaigns Director at SumOfUs, a global activist community that seeks to curb the growing power of corporations.
A common theme on this podcast is the future, and the visions of the future that a certain set of Silicon Valley tech and venture accelerationists are working hard to advance. Today we’re going to hear from author and scholar Douglas Rushkoff about his latest book, Survival of the Richest: Escape Fantasies of the Tech Billionaires, which lampoons and deflates these characters, offering instead a humanist approach to defining the future by how we comport ourselves in the present.
This episode features a conversation with Bloomberg journalist Mark Bergen. He’s the author of Like, Comment, Subscribe: Inside YouTube’s Chaotic Rise to World Domination, from Viking. This is a business book, a history, and a contemplation of YouTube’s role in society all in one. Bergen explores how the company evolved into the massive juggernaut it is today, and along the way gives insight into concerning phenomena that we’ve discussed on this podcast in the past, such as the relationship between YouTube and violent extremism, misogyny, racism, white nationalism and a variety of other ills. The book pulls the curtain back on the internal dynamics and decisions that bring us to today. And it asks us to contemplate whether anyone, from Google’s leadership to regulators in any of the world’s governments, can truly get their heads or hands around YouTube.
The Tech Transparency Project (TTP), a research initiative of the nonprofit Campaign for Accountability, is focused on holding major tech companies to account, including Meta, the company that operates Facebook, Instagram, and WhatsApp. For instance, TTP collected what it calls Facebook’s ‘broken promises’ on issues ranging from bullying and harassment to fraud and deception to violence and incitement. A new report released this month, Facebook Profits from White Supremacist Groups, says the company is “failing to remove white supremacist groups and is often profiting from searches for them on its platform,” exposing how it “fosters and benefits from domestic extremism.” To hear more about the findings in the report, Tech Policy Press spoke to Katie Paul, TTP’s Director.
In last Sunday’s podcast, I promised an occasional series of discussions on the relationship between social media, messaging apps and election mis- and disinformation. In today’s show, I’m joined by two guests who just did a deep dive into the issue, producing a 'score card' that compares the policies and performance of the tech companies on multiple dimensions for New America’s Open Technology Institute:
Spandana (Spandi) Singh, a policy analyst at New America's Open Technology Institute, and
Quinn Anex-Ries, a PhD candidate in American Studies at USC and an intern with the Open Technology Institute this summer.
Their findings are summarized in a report, Misleading Information and the Midterms: How Platforms are Addressing Misinformation and Disinformation Ahead of the 2022 U.S. Elections.
A little more than a year ago, a coalition of multidisciplinary researchers at Stanford, MIT, Northwestern, the University of Pennsylvania and Columbia set out to crowdsource ideas to address the political divide in what was dubbed the Strengthening Democracy Challenge. “Anti-democratic attitudes and support for political violence are at alarming levels in the US," said Robb Willer, Director of the Polarization and Social Change Lab and Professor of Sociology at Stanford, at the time of the announcement. "We view this project as a chance to identify efficacious interventions, and also to deepen our understanding of the forces shaping these political sentiments.” After reviewing more than 250 submissions from researchers, activists and others, the research coalition selected 25 interventions it deemed most promising to test against one another in an "experimental tournament" utilizing a sample of 31,000 U.S. adults. To learn more about the challenge, some of the promising projects that emerged from it, and whether tech platforms may play a role in efforts to address polarization, I spoke to Willer and his colleague, Jan Gerrit Voelkel, a Ph.D. student in the Department of Sociology at Stanford University and also a member of the Polarization and Social Change Lab.
With the U.S. midterm election cycle about to kick into high gear, social media platforms are announcing updates to their civic integrity policies and approaches to countering election mis- and disinformation. In this week's podcast, we hear from election administrators themselves about the impact of election misinformation. This is the first in an occasional series Tech Policy Press will publish this fall on social media and election integrity. This episode draws audio from a panel discussion hosted by the U.S. House Committee on Oversight and Reform on August 11, 2022, that took place on the occasion of the publication of a majority staff report on the problem of election disinformation.
When most people think about the problem of mis- and disinformation, they think first of social media platforms like Facebook and Twitter. But how might the affordances of search engines, when used by ideologically motivated individuals, contribute to an unhealthy information ecosystem? Dr. Francesca Tripodi has a new book out on the subject, The Propagandists' Playbook: How Conservative Elites Manipulate Search and Threaten Democracy, which I had the chance to discuss with her this week.
In recent months, press reports have emerged about individuals in multiple countries falling victim to extortion and fraud schemes enabled by often highly rated lending apps downloaded from Google’s Play Store. Last week, Diana Baptista and Avi Asher-Schapiro, journalists at the Thomson Reuters Foundation, told the story of how a man fell prey to one of these apps operating in Mexico. In this podcast episode, Baptista describes the man's experience, the broader phenomenon and the surrounding context.
Earlier this year in California, two State Assembly members— Democrat Buffy Wicks and Republican Jordan Cunningham— introduced the California Age Appropriate Design Code Bill. The California Age Appropriate Design Code would place limitations on what companies can do with youth data, including tracking location and profiling. It puts limitations on manipulative design, and includes transparency measures so users are aware of and consent to the use of their information. The bill makes the California attorney general responsible for enforcement of the state’s rules, opening up the possibility of litigation or fines against companies that do not follow the Code. It would also require the California Privacy Protection Agency to create a Children’s Data Protection Task Force that would formulate recommendations on best practices. A coalition of civil society and tech policy groups supports the Code, including organizations such as Common Sense Media, Accountable Tech, the Electronic Privacy Information Center, the Sesame Workshop, the Consumer Federation of California, and the National Hispanic Media Coalition. Industry groups, such as TechNet and the California Chamber of Commerce, oppose the bill, and other experts have raised concerns in particular about requirements for age verification. The California State Assembly voted 72-0 to pass the bill, and it is now with the California Senate. For this podcast, Tech Policy Press spoke to three people— all college students and activists— who support it, in part due to their own experiences:
Aliza Kopans, a rising sophomore at Brown University, cofounder of Technic(ally) Politics and an intern at Accountable Tech;
Emma Lembke, a rising sophomore at Washington University in St. Louis, founder of the Log Off Movement, cofounder of Technic(ally) Politics and an intern at Accountable Tech; and
Khoa-Nathan Ngo, a rising college sophomore and a youth collaborator at GoodforMedia.
This episode features two segments. First up, an interview with Solana Larsen and Bridget Todd, two of the folks behind Mozilla’s Internet Health Report and its award-winning podcast, IRL. This year, Mozilla decided to publish its Internet Health Report as a series of podcast episodes delving into the experiences of people building AI and working on AI policy. The series digs into a range of topics, including surveillance, labor, healthcare, geospatial data, and disinformation in social media. The second segment features a discussion with William Frey, a researcher and Ph.D. candidate at Columbia University and the lead author of a new paper titled Digital White Racial Socialization: Social Media and the Case of Whiteness.
In today’s episode of the podcast, we’re going to hear from FTC Chair Lina Khan, who was appointed in June 2021, as well as FTC Commissioner Rebecca Kelly Slaughter, who was appointed to a Democratic seat on the Commission in 2018. This isn’t a typical episode: what you’ll hear is audio of a special event hosted on Tuesday, July 19 by the Economic Security Project (ESP) and the Law and Political Economy Project (LPE). These organizations brought together scholars, advocates, and government officials to discuss how new thinking and research seeks to reframe dominant economic paradigms, and why it is so important to redefine and challenge monopolies. The event, Resourcing a New Paradigm: The Future of Antimonopoly Research, was introduced by Becky Chao, Director of Antimonopoly at the Economic Security Project, and it is her voice you’ll hear first. After remarks from Chair Khan and Commissioner Slaughter, you’ll hear a panel discussion moderated by the Open Markets Institute’s Legal Director, Sandeep Vaheesan.
The full complement of speakers includes:
Lina Khan, Chair, Federal Trade Commission
Rebecca Kelly Slaughter, Commissioner, Federal Trade Commission
Elettra Bietti, Joint Postdoctoral Fellow, NYU School of Law and the Digital Life Initiative at Cornell Tech in New York
Brian Callaci, Chief Economist, Open Markets Institute
Seeta Peña Gangadharan, Associate Professor in the Department of Media and Communications, London School of Economics and Political Science
Lenore Palladino, University of Massachusetts Amherst Assistant Professor of Economics and Public Policy
Becky Chao, Director of Antimonopoly, Economic Security Project
Amy Kapczynski, Professor of Law and Faculty Director, Global Health Justice Partnership
Moderated by Sandeep Vaheesan, Legal Director, Open Markets Institute
By the end of these 90 minutes, you will be up to date on the key ideas, challenges and opportunities ahead for the intellectual project to redefine antimonopoly thinking and law to pursue not just economic but also social and racial justice.
On Wednesday, July 20, the United States House of Representatives Energy & Commerce Committee held a markup that included H.R. 8152, the "American Data Privacy and Protection Act,” which is touted as the first comprehensive national privacy legislation with bipartisan support. To discuss the bill and its prospects in detail, Tech Policy Press spoke with two experts on tech policy and civil rights issues: Nora Benavidez, Senior Counsel and Director of Digital Justice and Civil Rights at Free Press, and Justin Brookman, Director of Technology Policy for Consumer Reports.
This episode features a conversation with the author of a new book that makes a compelling argument for the substantial deprivatization of the Internet. In Internet for the People: The Fight for Our Digital Future, Ben Tarnoff says that to create a more democratic and equitable society we need to diminish the role of the market in the future of the internet, and reduce the power of the profit motive to define our online experience.
For the second year running, the Gay & Lesbian Alliance Against Defamation (GLAAD) has released a Social Media Safety Index that finds that major tech platforms are failing to keep LGBTQ users safe. The report was released at a time when the broader social and political context is growing more dangerous: in the US, more than 250 anti-LGBTQ bills have been introduced in legislatures this year, even as we see a surge of online hate speech and disinformation about the LGBTQ community, as well as physical attacks. To learn more about the challenges this community faces in holding social media platforms to account, I spoke to two people who helped author the report and devise the index: Jenni Olson, Senior Director of Social Media Safety at GLAAD, and Andrea Hackl, a research analyst at Goodwin Simon Strategic Research.
India is the world’s most populous democracy, and also one that is facing challenges. This week we focus on the Indian government’s efforts to create a bureaucratic apparatus to enforce an ever-growing number of requests for social media platforms to remove content deemed inappropriate for one reason or another. And for this week’s episode, I’m joined by the author of a recent piece on this subject, Angrej Singh, who is interning with Tech Policy Press this summer. Angrej helped to pull together the panel of experts, all based in India, that you’ll hear from today, including:
Neeti Biyani, Policy and Advocacy Manager, Internet Society
Tejasi Panjiar, Associate Policy Counsel, Internet Freedom Foundation
Apar Gupta, Executive Director, Internet Freedom Foundation
At this year’s Collision, a tech conference that took place in June in Toronto, Tech Policy Press editor Justin Hendrix had the opportunity to interview two editors about how they think about the problem of disinformation, and how they direct their publication’s coverage of it as an issue. This short podcast installment is audio of the live stage discussion with Betsy Reed, editor in chief of The Intercept, and Matt Kaminski, editor in chief of Politico. Many thanks to Stephen Twomey and the other organizers of the Collision conference for including this panel in a series of discussions on the role of the fourth estate.
One of the areas where applications of machine learning and artificial intelligence are most fraught with ethical concerns is in law enforcement and criminal justice. To learn more about the opportunities and the concerns, Tech Policy Press spoke to Renée Cummings, who joined the University of Virginia’s School of Data Science in 2020 as the School’s first Data Activist in Residence. In addition to being an AI ethicist, she is also a criminologist and criminal psychologist.
In the age of social media and disinformation, journalists, civil society groups, researchers, and media watchdogs in democracies are figuring out how to band together to create a line of defense against those who seek to sow division and doubt in advance of elections. This week, a French coalition calling itself the Online Election Integrity Watch Group published a summary report on its activities ahead of this spring’s national election there. The group includes entities such as the Alliance for Securing Democracy, Check First, GEODE, the Institute of Complex Systems, the Institute for Strategic Dialogue, Tracking Exposed, and Reset Tech, an initiative run by the Luminate foundation. To learn more about what the Watch Group learned in this election cycle, Tech Policy Press spoke to the report’s lead authors, Théophile Lenoir and Iris Boyer.
This episode focuses on how best to create mechanisms for outside scrutiny of technology platforms. The first segment is with Brandon Silverman, the founder and former CEO of CrowdTangle, an analytics toolset acquired by Facebook in 2016 that permitted academics, journalists and others to inspect how information spreads on the platform. And the second segment is a panel provided courtesy of the non-partisan policy organization the German Marshall Fund of the United States. On June 15, GMF hosted Opening the Black Box: Auditing Algorithms for Accountable Tech, featuring Anna Lenhart, Senior Technology Policy Advisor to Rep. Lori Trahan, a Democrat from Massachusetts; Deborah Raji, a fellow at the Mozilla Foundation and a PhD Candidate in Computer Science at UC Berkeley; and Mona Sloane, a sociologist affiliated with NYU and the University of Tübingen AI center. The panel was moderated by Ellen P. Goodman, a Professor at Rutgers Law School and a Visiting Senior Fellow at The German Marshall Fund of the United States.
This week, the NYU Stern Center for Business and Human Rights released a report on YouTube that Tech Policy Press Editor Justin Hendrix helped write with the Center’s Deputy Director, Paul Barrett. YouTube is generally understood to have avoided the scrutiny of journalists, researchers and lawmakers, at least relative to other social media platforms like Facebook. But there is a cost to flying under the radar. To address some of the key issues, this episode features two segments. The first is a conversation with Paul Barrett, and the second with two of the sources for the report, University of Washington associate professor and Center for an Informed Public cofounder Kate Starbird and Mnemonic Associate Director of Advocacy Dia Kayyali.
When it comes to visions of the way that technology will intersect with society in the future, Silicon Valley has a near monopoly. It’s been nearly 30 years since Richard Barbrook and Andy Cameron published the essay The Californian Ideology, which they say naturalized and gave “technological proof” to a libertarian political philosophy, thereby foreclosing on alternative futures, a “faith” that is “made possible through a nearly universal belief in technological determinism.” Now, the economic power of Silicon Valley has left its billionaire class fairly certain it is above reproach, unchecked and unchallenged, even as some of the biggest firms spawned there are locked in a staring contest with governments in Washington, Brussels and beyond. To talk more about the ways in which Silicon Valley elites have captured so much of what people define as “progress,” and about the pronouncements of some individuals who make wild promises about the abundant future that technology will supposedly deliver, Tech Policy Press spoke to Dave Karpf, who has been thinking about these issues for some time. He is the author of The MoveOn Effect: The Unexpected Transformation of American Political Advocacy, published in May 2012 by Oxford University Press, and Analytic Activism: Digital Listening and the New Political Strategy, published in December 2016 by Oxford University Press.
On the Tech Policy Press podcast we talk a lot about the intersection of technology, media and politics. We talk about the flow of information and how political elites, journalists and citizens shape it. There is substantial contrast in how the pieces fit together in China, for instance, compared to the United States. And yet, there are parallels that one might not expect. A recent documentary film explored these issues in the context of a particularly compelling moment in time: the beginning of the COVID-19 pandemic. Directed by Nanfu Wang, In the Same Breath (HBO) is a riveting account of how the pandemic unfolded, how governments tried desperately to control the message as it did, and the ways in which citizens in two very different cultures and systems reacted, even as they themselves participated in shaping the discourse on social media. This episode of the podcast features a discussion with Nanfu Wang, who directed and produced the film.
The latest reports from the Intergovernmental Panel on Climate Change (IPCC) do not mince words. They say that “climate change is causing dangerous and widespread disruption in nature and affecting the lives of billions of people.” The quality of the public discourse on climate issues plays a role. A report released by the IPCC in February says that the “[r]hetoric and misinformation on climate change and the deliberate undermining of science have contributed to misperceptions of the scientific consensus, uncertainty, disregarded risk and urgency, and dissent.” An April installment describes how “opposition from status quo interests” and “the propagation of scientifically misleading information” are “barriers” to climate action and have “negative implications for climate policy.” This week, a coalition of groups published a report titled Deny, Deceive, Delay: Documenting and Responding to Climate Disinformation at COP26 & Beyond that outlines prominent discourses that seek to pervert and prevent efforts to address climate change. The report makes recommendations for governments, social media platforms and the media on what to do to address the issue. Tech Policy Press spoke to two individuals involved in the effort to produce the report to learn more:
Jennie King, the head of civic action and education at the Institute for Strategic Dialogue (ISD)
Michael Khoo, the Climate Change Coalition co-chair at Friends of the Earth
Even as the COVID-19 pandemic continues to simmer, there is a good amount of science emerging about the relationship between the information environment and vaccine uptake. Today we’ll hear from two researchers from different disciplines about their work on social media and vaccine misinformation. First up is John Alexander Bryden, Executive Director of the Observatory on Social Media at Indiana University, with whom I discuss the results of some recent research his team conducted on the problem. And second, I speak with Kolina Koltai, who when I interviewed her at the end of April was transitioning from her position as a postdoctoral fellow at the Center for an Informed Public at the University of Washington to a role at Twitter.
When it comes to content moderation and the regulation of harmful content on social media, there are various metaphors at play for how to think about doing it. One that we’ve explored on this podcast in the past is to see it as a form of administration, or what legal scholar evelyn douek calls the “rough online analogue of offline judicial adjudication of speech rights, with legislative-style substantive rules being applied over and over again to individual pieces of content by a hierarchical bureaucracy of moderators.” But some scholars, like douek, see limitations in this way of thinking. That includes Rachel Griffin, a PhD candidate and lecturer at Sciences Po Law School who recently published a paper in the Journal of Intellectual Property, Information Technology and E-Commerce Law titled The Sanitised Platform. The paper employs thinking from feminist legal scholar Vicki Schultz about US law on sexual harassment in the workplace as a framework to critique approaches to content moderation and social media regulation.
This week the Philippine Congress declared Ferdinand Marcos Jr. the winner of the recent election, confirming that he will become the country's next president. Marcos, known by his nickname “Bongbong,” is the son of the late dictator and kleptocrat of the same name, who was president from 1965 to 1986. Marcos Sr. declared martial law in 1972, a year before his second term was to come to an end, ushering in years of brutality, oppression and poverty in the Philippines. To learn more about the role of social media in the rehabilitation of the Marcos brand and to dig a little deeper into the conditions that drive disinformation, I spoke to Dr. Jonathan Corpus Ong, an associate professor of global digital media at the University of Massachusetts Amherst, a fellow at Harvard University's Shorenstein Center, and the author of a recent piece in Time magazine, The World Should Be Worried About a Dictator’s Son's Apparent Win in the Philippines. Jonathan is also the cohost, with Kat Ventura, of a podcast on the world of troll farms and propaganda in the Philippines called Catch Me If You Can. Check it out.
For the past six years, an independent research program at New America called Ranking Digital Rights has evaluated the policies and practices of some of the world’s largest technology and telecom firms, producing a dataset that reveals their shortcomings with respect to human rights obligations. Ranking Digital Rights evaluates more than 300 aspects of each company it ranks that fall broadly into three categories: governance, freedom of expression, and privacy. Following the release of this year’s report, which we covered at Tech Policy Press, Ranking Digital Rights hosted a session on Charting the Future of Big Tech Accountability. Nathalie Maréchal, Policy Director at Ranking Digital Rights and a past guest on this podcast, moderated the panel, which included:
Sarah Couturier-Tanoh, Shareholder Association for Research and Education (SHARE)
Jesse Lehrich, Co-Founder, Accountable Tech
Chris Lewis, President and CEO, Public Knowledge
Katarzyna Szymielewicz, President, Panoptykon Foundation
Sophie Zhang, activist and Facebook whistleblower
The June cover story for Wired magazine is on a movement in tech that many see as having the potential not just to rewire the internet, but to produce a fundamentally more democratic and equitable society. The story is titled “Paradise at the Crypto Arcade: Inside the Web3 Revolution,” and I had the chance to speak to its author, Wired senior writer Gilad Edelman.
UN human rights experts who chronicled Facebook’s role in spreading hate speech in Myanmar concluded that it played a “determining role” in the genocide against the Rohingya people. Facebook’s own investigation into the situation also found fault with the company’s practices, and made various recommendations for how it should develop a human rights strategy to prevent such things from happening again. Today, we’re going to hear from a refugee from the violence, who is with other Rohingya refugees in a camp in Cox's Bazar, Bangladesh, as well as three human rights advocates. And we’ll learn about another complaint, filed by sixteen Rohingya youth with the Irish National Contact Point for the Organisation for Economic Co-operation and Development (OECD), that argues that Facebook violated the OECD Guidelines for Multinational Enterprises by allowing its platform to be used to incite violence against them and their community. The remedy sought by these refugees is for Facebook to divest from a portion of its 2017 profits and provide remediation for their community in the form of educational activities and facilities in Cox’s Bazar. Please note that the connection to Cox’s Bazar was not perfect; if you have any trouble making out a word here or there, you can refer to the transcript at the Tech Policy Press website.
Researchers Alice Marwick, Benjamin Clancy, and Katherine Furl this week released Far-Right Online Radicalization: A Review of the Literature, an analysis of "cross-disciplinary work on radicalization to better understand the present concerns around online radicalization and far-right extremist and fringe movements." In order to learn more about the issues explored in the review, I spoke to Marwick, who is an Associate Professor of Communication at the University of North Carolina at Chapel Hill and a Principal Researcher at the Center for Information, Technology, & Public Life (CITAP).
Over the past year of publishing this podcast, we’ve looked again and again at the issue of the power of tech platforms in society. Now, there is a book titled The Power of Platforms: Shaping Media and Society, by Rasmus Kleis Nielsen and Sarah Anne Ganter, just published at the end of last month by Oxford University Press. Justin Hendrix had the chance to catch up with one of the authors about what they learned in writing the book, and the complexities of the subject.
If you take the time to look at the SEC filings for Meta Platforms, Inc., the company that operates Facebook, Instagram and WhatsApp, you will find various disclosures about its ongoing legal battles. Taken together, they reveal patterns, particularly in how the company is led. To get an update on some of the key cases under consideration, from Cambridge Analytica to competition, I spoke with one particularly keen observer of Meta: Jason Kint, the CEO of Digital Content Next.
Last week, former President Barack Obama gave a keynote address at a Stanford University Cyber Policy Center symposium entitled “Challenges to Democracy in the Digital Information Realm.” This week, many of the issues Obama discussed were brought into sharp relief when it was announced that billionaire Elon Musk will acquire Twitter for the price of $44 billion. For reactions to Obama's speech, and to Musk’s antics, I spoke with:
David Kaye, Professor of Law at UC Irvine and the former United Nations Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression;
Emily Bell, Director of the Tow Center for Digital Journalism at Columbia University; and
Jameel Jaffer, Director of the Knight First Amendment Institute at Columbia University.
In the opening, you'll also hear roughly the last five minutes of Obama's speech, which will give you a sense of it.
There is a growing literature and practice around how to equitably collaborate with traditionally marginalized communities to build better technology. A pair of investigative reports into Worldcoin’s launch may well serve as the basis for an instructive case study in what not to do. The first report, by Richard Nieva and Aman Sethi at BuzzFeed News, was published April 5th. It’s titled Inside Worldcoin’s Globe-Spanning, Eyeball-Scanning, Free Crypto Giveaway: The Sam Altman–founded company Worldcoin says it aims to alleviate global poverty, but so far it has angered the very people it claims to be helping. The second report, by Eileen Guo & Adi Renaldi at MIT Technology Review, was published April 6th. It’s titled Deception, exploited workers, and cash handouts: How Worldcoin recruited its first half a million test users: The startup promises a fairly-distributed, cryptocurrency-based universal basic income. So far all it's done is build a biometric database from the bodies of the poor. Tech Policy Press had the chance to talk to both pairs of journalists separately last week.