“Fool me once, shame on you,” the saying goes.
With the benefit of hindsight, there’s no better encapsulation of the experiment tech companies have conducted on humanity with social media. First introduced to the public more than a decade ago, products like Facebook and Twitter offered amazing possibilities to find long-lost friends and connect instantly with anyone on the globe.
Optimism about social media’s potential, however, soon collided with the horrors the technology has created: a teen mental health crisis fueled by social media addiction; democracies weakened by disinformation spreading online like wildfire; and civil conflicts around the world inflamed on social media platforms.
So if social media has caused so many problems, why hasn’t government regulated it?
For years, the leaders of Meta, Apple, Google and Microsoft have made a show of calling for internet regulation. We can see in retrospect the bad faith behind these efforts, as some of the richest companies in human history put their colossal lobbying operations to work, ensuring that social media remains a lawless frontier.
Just like the tobacco industry a generation ago, tech companies have been using every tool available to them behind the scenes to oppose new laws that would require any measure of accountability. Big Tech’s lobbying groups have poured resources into challenging California’s landmark kids online safety law in federal court, where a misinformed judge issued a poorly reasoned preliminary injunction last year.
I know the playbook these companies use quite well. Prior to my current job, I was a lobbyist for Amazon. I made the same argument: we support regulation, just not this regulation – with a similarly conspicuous absence of workable alternatives.
Having once been on the inside, I can tell you with confidence: the tech companies have fooled us all.
Now, these same tech giants are pushing their latest innovation, artificial intelligence. Just as with social media, they’re in a race to scale and irrevocably entangle societies with these new products as quickly as possible in order to achieve market dominance.
To make matters worse, AI has the potential to have far more severe consequences than social media:
- Two billion people will vote in 50 countries in elections around the world this year, and we don’t yet have an answer for the combination of disinformation, social media and generative AI content.
- The release of open-source AI models is accelerating an international arms race, while the tech is already empowering scammers and sextortionists – and might soon empower terrorists.
- Actors, writers, journalists and other creators continue to have their work plagiarized and stolen for profit by these same companies, without their consent or compensation.
This all brings us back to the second part of that famous saying: “Fool me twice, shame on me.”
Indeed, shame on all of us if we allow Meta, Amazon, Google and Microsoft to fool us again – this time with AI.
Our elected leaders at all levels of government have given these companies the benefit of the doubt for far too long. Lawmakers can no longer take the tech industry at its word when its armies of lobbyists argue that the companies police themselves and shouldn’t be held accountable – the basis for that trust is nonexistent.
The ultimate test of these companies’ good faith is whether they will accept accountability for the harms their products have created and may yet create. Rather than listen to arguments from companies and their lobbyists, lawmakers need to push ahead with regulations that make them liable for the products they create. Vermont is charting a path, as recently introduced AI liability legislation shows.
That can start here in California – where so many of these tech giants were born and still call home. Legislators must not be fooled again by their tactics, and should instead heed their constituents, who overwhelmingly worry about the dangers AI poses to democracy and want transparency and accountability from these tech behemoths.
It’s time to show Big Tech that it can no longer play us for fools.
Casey Mock is the chief policy and public affairs officer at the Center for Humane Technology, a nonprofit working to align technology with humanity’s best interests. He is also a lecturing fellow at Duke University.