As you scroll through your social media feed or let your favorite music app curate the perfect playlist, it may feel like artificial intelligence is improving your life – learning your preferences and serving your needs. But lurking behind this convenient facade is a growing concern: algorithmic harms.
These harms aren’t obvious or immediate. They’re insidious, building over time as AI systems quietly make decisions about your life without you even knowing it. The hidden power of these systems is becoming a significant threat to privacy, equality, autonomy and safety.
AI systems are embedded in nearly every facet of modern life. They suggest what shows and movies you should watch, help employers decide whom to hire, and even inform judges' sentencing decisions. But what happens when these systems, often seen as neutral, begin making decisions that put certain groups at a disadvantage or, worse, cause real-world harm?
The often-overlooked consequences of AI applications call for regulatory frameworks that can keep pace with this rapidly evolving technology. I study the intersection of law and technology, and I’ve outlined a legal framework to do just that.
Slow burns
One of the most striking aspects of algorithmic harms is that their cumulative impact often flies under the radar. These systems typically don’t directly assault your privacy or autonomy in ways you can easily perceive. They gather vast amounts of data about people — often without their knowledge — and use this data to shape decisions affecting people’s lives.
Sometimes, this results in minor inconveniences, like an advertisement that follows you across websites. But when these repeated harms go unaddressed, they can scale up, inflicting significant cumulative damage across diverse groups of people.
Consider the example of social media algorithms. They are ostensibly designed to promote beneficial social interactions. Behind that friendly facade, however, they silently track users’ clicks and compile profiles of their political beliefs, professional affiliations and personal lives. The data collected feeds systems that make consequential decisions: whether you are identified as a jaywalking pedestrian, considered for a job or flagged as being at risk of suicide.
Worse, their addictive design traps teenagers in cycles of overuse, leading to escalating mental health crises, including anxiety, depression and self-harm. By the time you grasp the full scope, it’s too late — your privacy has been breached, your opportunities shaped by biased algorithms, and the safety of the most vulnerable undermined, all without your knowledge.
This is what I call “intangible, cumulative harm”: AI systems operate in the background, but their impacts can be devastating and invisible.