Forget dystopian scenarios – AI is pervasive today, and the risks are often hidden

The turmoil at ChatGPT-maker OpenAI, sparked by the board of directors firing high-profile CEO Sam Altman on Nov. 17, 2023, has put a spotlight on artificial intelligence safety and concerns about the rapid development of artificial general intelligence, or AGI. AGI is loosely defined as human-level intelligence across a range of tasks.

The OpenAI board stated that Altman’s termination was for lack of candor, but speculation has centered on a rift between Altman and members of the board over concerns that OpenAI’s remarkable growth – products such as ChatGPT and Dall-E have acquired hundreds of millions of users worldwide – has hindered the company’s ability to focus on catastrophic risks posed by AGI.

OpenAI’s goal of developing AGI has become entwined with the idea of AI acquiring superintelligent capabilities and the need to safeguard against the technology being misused or going rogue. But for now, AGI and its attendant risks are speculative. Task-specific forms of AI, meanwhile, are very real, have become widespread and often fly under the radar.

As a researcher of information systems and responsible AI, I study how these everyday algorithms work – and how they can harm people.

AI is pervasive

AI plays a visible part in many people’s daily lives, from face recognition unlocking your phone to speech recognition powering your digital assistant. It also plays roles you might be only vaguely aware of – for example, shaping your social media and online shopping sessions, guiding your video-watching choices and matching you with a driver in a ride-sharing service.

AI also affects your life in ways that might completely escape your notice. If you’re applying for a job, many employers use AI in the hiring process. Your bosses might be using it to identify employees who are likely to quit. If you’re applying for a loan, odds are your bank is using AI to decide whether to grant it. If you’re being treated for a medical condition, your health care providers might use it to assess your medical images. And if you know someone caught up in the criminal justice system, AI could well play a role in determining the course of their life.

AI has become nearly ubiquitous in the hiring process.

Algorithmic harms

Many of the AI systems that fly under the radar have biases that can cause harm. For example, machine learning methods rely on inductive reasoning: they generalize patterns from the specific examples in their training data, so whatever biases those examples contain are carried into the model. A machine learning-based resume screening tool was found to be biased against women because the training data reflected past practices, when most resumes were submitted by men.
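To make that mechanism concrete, here is a purely illustrative sketch in Python using synthetic data and hypothetical feature names; it is not the actual screening tool described above. A simple classifier is trained on historical hiring decisions that favored one group, and even though the model never sees group membership directly, it absorbs the bias through a correlated proxy feature.

    # Illustrative only: synthetic data, hypothetical features.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5_000

    # Two equally qualified groups; "skill" is the only legitimate signal.
    group = rng.integers(0, 2, n)        # 0 or 1
    skill = rng.normal(0, 1, n)

    # Historical labels: past hiring favored group 0 regardless of skill.
    hired = (skill + 1.5 * (group == 0) + rng.normal(0, 0.5, n)) > 1.0

    # The model never sees "group" directly, only a correlated proxy
    # (e.g. a keyword that appears mostly on one group's resumes).
    proxy = group + rng.normal(0, 0.2, n)
    X = np.column_stack([skill, proxy])

    model = LogisticRegression().fit(X, hired)

    # Two candidates with identical skill, differing only in the proxy feature.
    candidates = np.array([[1.0, 0.0],   # proxy looks like group 0
                           [1.0, 1.0]])  # proxy looks like group 1
    print(model.predict_proba(candidates)[:, 1])
    # The group-1-looking candidate gets a lower score: the model has
    # generalized the historical bias, not just the skill signal.

Because the bias enters through the training labels rather than an explicit rule, simply withholding the protected attribute from the model is not enough to prevent it.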

Predictive methods used in areas ranging from health care to child welfare can exhibit biases, such as cohort bias, that lead to unequal risk assessments across different groups in society. Even when legal practices prohibit discrimination based on…
