The Swift Scandal👩‍🎤
The deepfake attack on Taylor Swift and the broader implications for AI ethics and legislation
This is The Startup Breakdown, the newsletter where we learn, laugh, and love startups. By joining this growing community of hundreds of future startup aficionados (think I spelled that right?), you're getting a beachside view of the ocean that is the startup and VC scene. This ain’t your grandpa’s newsletter, so prepare yourself for an inbox full of 4/20 jokes and Succession references.
If you'd like to receive these newsletters directly in your inbox once a week, hit subscribe and never miss an email!
Love what you're reading? Craving even more startup goodness, in-depth news analysis, and maybe some extra memes? Click below to upgrade to our premium edition and become the startup guru you were born to be.
Happy Tuesday, folks.
As mentioned, I am re-launching the referral program.
Interested in goodies like Startup Job Hunt Guides, free Premium subscriptions, and even a Startup Breakdown mug?
Get to sharin’
*PS, last time I did this, we had some ~dedicated~ individuals creating new email addresses to boost their referrals… So I will be double-checking to make sure they’re legit. Shouldn’t be a problem, but I felt the disclosure was necessary.
Taylor Swift Deepfakes Might Inspire Federal Action
Taylor Swift found herself trending on social media, and unfortunately, it wasn’t related to music or Travis Kelce.
The star was the victim of a deepfake attack whereby AI was used to generate pornographic photos with her face. Most of the photos have been taken down, though one tweet received 47 million views.
The outcry from the Swifties (who flooded Twitter with posts to drown out the deepfakes) was matched only by that of politicians. Though some states have criminalized AI-generated photos like these, there is no federal law against them; Rep. Joe Morelle has, however, introduced a Preventing Deepfakes of Intimate Images Act.
Pornography is the most common use case for deepfakes, but fake photos of Trump’s arrest, videos of Michael Saylor shilling crypto, and Biden’s voice being cloned to call voters highlight just how harmful the tech can be for everyone.
The UN and the WEF rank it among the biggest threats facing modern society, and as cheap, accessible AI tools make it easier than ever, the practice won’t simply fade away.
I do expect some sort of federal action, but the discourse online has been different from the sentiment coming from the top.
Many were questioning why it took a billionaire becoming the victim for action to be taken. What happened to Swift is absolutely horrendous, and I want to be very clear that the sick people behind it should be held responsible. We need to do more to ensure this sort of thing stops happening.
However, the questions are valid. Deepfake porn has been an issue for years now. Even slightly lesser-known celebrities like Scarlett Johansson have been victimized in the past. Just a couple of weeks ago, a teenage girl killed herself after AI nudes were spread around her school.
Few politicians batted an eye. That it took a victimized celebrity to spur action screams of PR-boosting legislation going into an election year.
A more cynical, yet not exactly far-fetched, take is that the government wants to capitalize on this high-profile case to infringe on our digital privacy. Considering the NSA purchases citizens’ data without a warrant, why not reinforce its surveillance authority to “prevent another Taylor Swift incident”?
Similarly, there are arguments that politicians will use this to halt AI innovation. Many officials have voiced concerns over the pace of innovation, and some are genuinely concerned about consumer harm.
More likely is that elected officials view slowing AI as fighting against the rising power of tech companies and their charismatic leaders.
However, I find this take flawed. AI is an inherently centralized technology: it requires massive capital and data, leaving very few entities capable of competing.
As evidenced by China’s own experimentation, AI can strengthen the government’s hold on power. It’s not the AI itself that our government might fear, but rather how it is wielded and whether it will topple the current hierarchy.
OpenAI has begun working with the government, and Palantir has been doing so for years. Politicians are interested in the tech for their own purposes, and they have no interest in slowing progress when doing so would harm their own interests.
So if you’re a founder with no moral objection to working with the government, you’ll have plenty of opportunities.
However, I’d be more interested in the teams building some sort of defense against the inevitable violations of privacy.
I cannot express in strong enough words how f***ed up the Swift situation is.
I support a federal ban on AI-generated pornographic images.
But call me skeptical of the motivation behind such a decision.
More than that, call me wary of the wording of the final law and how it might set up a slippery slope of digital encroachment.
AI-generated deepfake technology carries critical implications, as exemplified by the Taylor Swift incident, which could prompt federal regulation. The motivations behind such legislation are anything but altruistic, though, and its impact on privacy and AI innovation could be significant.
Cheers to another day,
Trey