Fixing the internet will require a cultural shift
Prioritize public interest over profit with tech innovation, social and regulatory controls, expert says
Solutions
How can we regulate the internet in a way that lets us reap the game-changing benefits and avoid the equally huge risks?
A Q&A with Francine Berman
In this series, the Gazette asks Harvard experts for concrete solutions to complex problems. Francine Berman, the Edward P. Hamilton Distinguished Professor in Computer Science at Rensselaer Polytechnic Institute, is a Faculty Associate at the Berkman Klein Center for Internet & Society. Berman’s current work focuses on the social and environmental impacts of information technology, and in particular of the Internet of Things — a deeply interconnected ecosystem of billions of everyday things linked through the web.
GAZETTE: Do you think the internet has been a force for good in the world?
BERMAN: Yes and no. What the internet and information technologies have brought us is tremendous power. Tech has become critical infrastructure for modern life. It saved our lives during the pandemic, providing the only way for many to go to school, work, or see family and friends. It also enabled election manipulation, the rapid spread of misinformation, and the growth of radicalism.
Are digital technologies good or evil? The same internet supports both Pornhub and CDC.gov, Goodreads and Parler.com. The digital world we experience is a fusion of tech innovation and social controls. Making cyberspace a force for good will require a societal shift in how we develop, use, and oversee tech: a reprioritization of the public interest over private profit.
Fundamentally, it is the public sector’s responsibility to create the social controls that promote the use of tech for good rather than for exploitation, manipulation, misinformation, and worse. Doing so is enormously complex and requires changing the broader culture of tech opportunism into a culture of tech in the public interest.
GAZETTE: How do we change the culture of tech opportunism?
BERMAN: There is no magic bullet that will create this culture change. No single law, federal agency, institutional policy, or set of practices will do it, although all are needed. It’s a long, hard slog. Moving from a culture of tech opportunism to a culture of tech in the public interest will require sustained effort on many fronts, much like the ongoing work to move from a culture of discrimination to a culture of inclusion.
That being said, we need to create the building blocks for culture change now: proactive short-term solutions, foundational long-term solutions, and serious efforts to develop strategies for challenges that we don’t yet know how to address.
In the short term, government must take the lead. There are plenty of horror stories (false arrests based on faulty facial recognition, data-brokered lists of rape victims, intruders screaming at babies through connected baby monitors), yet there is surprisingly little consensus about what digital protections (specific expectations for privacy, security, safety, and the like) U.S. citizens should have.
We need to fix that. Europe’s General Data Protection Regulation (GDPR) is based on a well-articulated set of digital rights of European Union citizens. In the U.S. we have some specific digital rights — privacy of health and financial data, privacy of children’s online data — but these rights are largely piecemeal. What are the digital privacy rights of consumers? What are the expectations for the security and safety of digital systems and devices used as critical infrastructure?
Specificity is important here because to be effective, social protections must be embedded in technical architectures. If a federal law were passed tomorrow that said that consumers must opt in to personal data collection by digital consumer services, Google and Netflix would have to change their systems (and their business models) to allow users this kind of discretion. There would be trade-offs for consumers who did not opt in: Google’s search would become more generic, and Netflix’s recommendations wouldn’t be well-tailored to their interests. But there would also be upsides: opt-in rules put consumers in the driver’s seat and give them greater control over the privacy of their information.
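To make “embedded in technical architectures” concrete, here is a minimal sketch in Python of how an opt-in rule could gate both data collection and personalization. Everything in it, the field names, the consent flag, and the generic fallback, is a hypothetical illustration, not any real company’s design.

```python
# Hypothetical sketch: an opt-in consent flag wired into the code paths
# that collect and use personal data. Names and behavior are illustrative
# assumptions, not any real service's design.
from dataclasses import dataclass, field

@dataclass
class User:
    user_id: str
    opted_in: bool = False                        # explicit consent to data collection
    history: list = field(default_factory=list)   # filled only after opt-in

def record_view(user: User, item: str) -> None:
    """Collect viewing data only if the user has opted in."""
    if user.opted_in:
        user.history.append(item)                 # without consent, nothing is stored

def recommend(user: User, catalog: list, most_popular: list) -> list:
    """Personalize only with consent; otherwise fall back to generic results."""
    if not user.opted_in or not user.history:
        # The trade-off described above: users who don't opt in
        # get generic, population-level recommendations.
        return most_popular[:3]
    # With consent, tailor results to the user's own history.
    return [c for c in catalog if any(h in c for h in user.history)][:3]

alice = User("alice")                             # has not opted in
record_view(alice, "sci-fi series")               # silently ignored: no consent
print(recommend(alice, ["sci-fi series 2"], ["hit show", "news recap"]))
# -> ['hit show', 'news recap']  (the generic fallback)
```

The point is architectural: the consent check sits in front of both collection and use, so the protection is a property of the code rather than a policy promise.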
Once a base set of digital rights for citizens is specified, a federal agency should be created with regulatory and enforcement power to protect those rights. The FDA was created to promote the safety of our food and drugs. OSHA was created to promote the safety of our workplaces. Today, there is more public scrutiny about the safety of the lettuce you buy at the grocery store than there is about the security of the software you download from the internet. Current bills in Congress that call for a Data Protection Agency, similar to the Data Protection Authorities required by the GDPR, could create needed oversight and enforcement of digital protections in cyberspace.
Additional legislation that penalizes companies, rather than consumers, for failure to protect consumer digital rights could also do more to incentivize the private sector to promote the public interest. If your credit card is stolen, the company, not the cardholder, largely pays the price. Penalizing companies with meaningful fines and holding company personnel, particularly those in the C-suite, legally accountable would provide strong incentives for companies to strengthen consumer protections. Refocusing company priorities in this way would help shift us from a culture of tech opportunism to a culture of tech in the public interest.
GAZETTE: Is specific legislation needed to solve some of today’s thorniest challenges, such as misinformation on social media and fake news?
BERMAN: It’s hard to solve problems online that you haven’t solved in the real world. Moreover, legislation isn’t useful if the solution isn’t clear. At the root of our problems with misinformation and fake news online is the tremendous challenge of automating trust, truth, and ethics.
Social media largely strips context from information, and with it many of the cues that enable us to vet what we hear. Online, we often don’t know whom we’re talking with or where they got their information, and there is a lot of piling on. In real life, we can vet information, assess credentials from context, and use conversational dynamics to evaluate what we’re hearing. Few of those cues are present on social media.
Harnessing the tremendous power of tech is hard for everyone. Social media companies are struggling with their role as platform providers (where they are not responsible for content) versus their role as content moderators (where they commit to taking down hate speech, information that incites violence, etc.). They’ve yet to develop good solutions to the content-moderation problem. Crowdsourcing (allowing the crowd to determine what is valuable), third-party vetting (employing a fact-checking service), advisory groups, and citizen-based editorial boards all face truth, trust, and scale challenges. (Twitter alone hosts 500 million tweets per day.)
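As a rough illustration of why each of those approaches strains under volume, here is a minimal sketch, assuming made-up thresholds, labels, and a stand-in fact-check list; it is not any platform’s actual pipeline.

```python
# Hypothetical sketch combining the moderation signals named above.
# Thresholds, labels, and the fact-check set are illustrative assumptions.
FACT_CHECKED_FALSE = {"miracle cure kills virus"}  # stand-in for a third-party service

def triage(post: str, crowd_flags: int, reviewers_available: bool) -> str:
    """Route one post using cheap signals first and scarce humans last."""
    if post.lower() in FACT_CHECKED_FALSE:
        return "label-false"       # third-party vetting: trusted, but covers few claims
    if crowd_flags >= 100:         # crowdsourcing: scales, but can be gamed by brigades
        return "human-review" if reviewers_available else "queue"
    # At hundreds of millions of posts per day, the default path
    # has to be fully automated.
    return "allow"

print(triage("Miracle cure kills virus", 3, True))   # -> label-false
print(triage("some viral rumor", 250, False))        # -> queue
```

Each branch encodes one of the trade-offs above: the fact-check list is accurate but tiny, the crowd threshold is cheap but manipulable, and human review is trustworthy but can never keep pace with the volume.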
The tremendous challenges of promoting the benefits and avoiding the risks of digital technologies aren’t just Silicon Valley’s problem. The solutions will need to come from sustained public-private discussions aimed at developing protective strategies for the public. This approach succeeded in setting the original digital rights agenda for Europe, ultimately leading to multiple digital rights initiatives and the GDPR. While the GDPR has been far from perfect in both conception and enforcement, it was a critical step toward a culture of technology in the public interest.
GAZETTE: What do you see as foundational longer-term solutions?
BERMAN: Today it is largely impossible to thrive in a digital world without knowledge of, and experience with, technology and its impacts on society. In effect, this knowledge has become a general education requirement for effective citizenship and leadership in the 21st century.
And it should be a general education requirement in educational institutions, especially in higher ed, which serves as a last stop before many professional careers. Currently, forward-looking universities, including Harvard, are creating courses, concentrations, minors, and majors in public interest technology, an emerging area focused on the social impacts of technology.
Education in public interest technology is more than just extra computer science courses. It involves interdisciplinary courses that focus on the broader impacts of technology — on personal freedom, on communities, on economics, etc. — with the purpose of developing the critical thinking needed to make informed choices about technology.
And students are hungry for these courses and the skills they offer. Students who have taken courses and clinics in public interest technology are better positioned to become the knowledgeable next-generation policymakers, public servants, and business professionals who will shape how tech services are designed and how products are used. With an understanding of how technology works and how it affects the common good, they can better promote a culture of tech in the public interest rather than one of tech opportunism.
This interview has been edited for clarity and length.