AI-generated college admissions essays tend to sound male and privileged, study finds

In an examination of thousands of human-written college admissions essays and those generated by AI, researchers found that the AI-generated essays are most similar to essays authored by students who are males, with higher socioeconomic status and higher levels of social privilege. The AI-generated writing is also less varied than that written by humans.

“We wanted to find out what these patterns that we see in human-written essays look like in a ChatGPT world,” said AJ Alvero, assistant research professor in the Department of Information Science in the Cornell Ann S. Bowers College of Computing and Information Science. “If there is the strong connection with human writing and identity, how does that compare in AI-written essays?”

Rene Kizilcec, associate professor of information science in Cornell Bowers CIS, is a co-author of “Large Language Models, Social Demography, and Hegemony: Comparing Authorship in Human and Synthetic Text,” published Sept. 27 in the Journal of Big Data.

This research stemmed from Alvero’s dissertation work at Stanford University. Part of his research involved an analysis of approximately 800,000 college admission essays written from 2015–17 by prospective students in the University of California system.

“We consistently found that there was a strong connection between the profiles of the applicants—their test scores, their demographic information, even the high schools they were applying from—and their admissions essays,” Alvero said. “The relationship was so strong that we were consistently able to predict an applicant’s SAT score to within about 120 points.”

“The ways that we speak can encode and contain information about our past and who we are,” he said, “and it’s very similar in writing, at least with personal statements.”
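
The article doesn’t describe the model behind that prediction, but the general idea, regressing a score on essay-text features, can be sketched in a few lines. Everything below, from the tiny placeholder corpus to the TF-IDF features and ridge regression, is an illustrative assumption rather than the study’s actual pipeline.

```python
# Illustrative sketch only: regressing a test score on bag-of-words features.
# The corpus, labels, features, and model are all placeholder assumptions;
# the study's real pipeline is not specified in this article.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

essays = [
    "I spent my summers fixing bicycles at the community shop...",
    "Leading our robotics team taught me how to fail forward...",
    "Translating for my grandmother shaped how I see language...",
]
sat_scores = [1280, 1410, 1350]  # placeholder labels

# Turn each essay into word-frequency features, then fit a linear model.
X = TfidfVectorizer(max_features=5000).fit_transform(essays)
model = Ridge(alpha=1.0).fit(X, sat_scores)

# On a real corpus you would evaluate on held-out essays; the study reports
# predictions accurate to within roughly 120 SAT points.
print(model.predict(X))
```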

For this work, Alvero and the team compared the writing style of more than 150,000 college admissions essays, submitted to both the University of California system and an engineering program at an elite East Coast private university, with a set of more than 25,000 essays generated with GPT-3.5 and GPT-4 prompted to respond to the same essay questions as the human applicants.
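
To give a sense of how such a synthetic corpus is produced, the sketch below prompts an OpenAI chat model with one admissions-style question. The prompt wording, model names, and loop size are assumptions for illustration; the paper’s exact prompting setup isn’t reproduced here.

```python
# Hedged sketch of generating synthetic admissions essays with the OpenAI
# chat API. The prompt text and parameters are illustrative assumptions,
# not the study's actual protocol.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

PROMPT = (
    "Write a college admissions essay in response to this question: "
    "'Describe an example of your leadership experience in which you have "
    "positively influenced others.'"
)

def generate_essay(model: str) -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
    )
    return response.choices[0].message.content

# Build a small synthetic corpus from both model families, as in the study.
synthetic = [generate_essay(m) for m in ["gpt-3.5-turbo", "gpt-4"] for _ in range(2)]
```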

For their analysis, the researchers used Linguistic Inquiry and Word Count (LIWC), a program developed in the mid-1990s by University of Texas social psychologist James W. Pennebaker. LIWC counts the frequencies of writing features, such as punctuation and pronoun usage, and cross-references those counts with an external dictionary.
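
LIWC itself is a commercial tool, but its core mechanic, matching words against a category dictionary and reporting relative frequencies, is easy to sketch. The mini-dictionary below is invented for illustration; the real LIWC lexicon maps thousands of words to dozens of validated categories.

```python
# Minimal LIWC-style analysis: count dictionary-category matches plus a
# surface feature (punctuation), normalized by text length. The tiny lexicon
# here is an invented stand-in for the real LIWC dictionary.
import re
from collections import Counter

CATEGORY_DICT = {  # hypothetical mini-lexicon
    "i": "first_person", "me": "first_person", "my": "first_person",
    "we": "collective", "our": "collective",
    "friend": "affiliation", "team": "affiliation", "family": "affiliation",
}

def liwc_style_features(text: str) -> dict[str, float]:
    words = re.findall(r"[a-z']+", text.lower())
    n = max(len(words), 1)
    counts = Counter(CATEGORY_DICT[w] for w in words if w in CATEGORY_DICT)
    features = {cat: c / n for cat, c in counts.items()}  # relative frequencies
    features["punctuation"] = sum(ch in ".,;:!?" for ch in text) / n
    return features

print(liwc_style_features("My team and my family taught me what loyalty means."))
```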

“One of the first big-data analyses of college admissions essays was done about a decade ago by Pennebaker,” Alvero said, “and we wanted to try to build a robust understanding of these patterns across institutions, across time, and we did that through using the same method that they used.”

Alvero, Kizilcec and the team found that while LLMs’ writing styles don’t represent any particular group in social comparison analyses, they do “sound,” in terms of word selection and usage, most like male students who came from more privileged locations and backgrounds.

For example, AI was found, on average, to use longer words (six or more letters) than human writers. AI-generated writing also tended to have less variety than human writing, and it more closely resembled essays from private-school applicants than those from public-school students.
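
Both comparisons reduce to simple corpus statistics. Below is a hedged sketch, using invented sample texts, of the long-word rate and a type-token-ratio proxy for variety; the study’s actual variety measure may differ.

```python
# Sketch of the two comparisons above: rate of long words (six or more
# letters) and lexical variety, proxied here by type-token ratio. The two
# sample "corpora" are invented stand-ins, not study data.
import re

def long_word_rate(text: str) -> float:
    words = re.findall(r"[A-Za-z']+", text)
    return sum(len(w) >= 6 for w in words) / max(len(words), 1)

def type_token_ratio(texts: list[str]) -> float:
    words = [w.lower() for t in texts for w in re.findall(r"[A-Za-z']+", t)]
    return len(set(words)) / max(len(words), 1)

human = ["I grew up fixing bikes with my dad.", "Band camp changed how I listen."]
ai = ["Throughout my academic journey, I have demonstrated resilience."] * 2

for name, corpus in [("human", human), ("ai", ai)]:
    avg_long = sum(long_word_rate(t) for t in corpus) / len(corpus)
    print(f"{name}: long-word rate={avg_long:.2f}, variety={type_token_ratio(corpus):.2f}")
```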

Additionally, humans and AI tend to write about affiliations (with groups, people, organizations and friends) at similar rates—despite the AI not actually having any affiliations. As LLMs like ChatGPT become more popular and more refined, they will be used in all sorts of settings—including college admissions.

“It’s likely that students are going to be using AI to help them craft these essays—probably not asking it to just write the whole thing, but rather asking it for help and feedback,” Kizilcec said. “But even then, the suggestions that these models will make may not be well aligned with the values, the sort of linguistic style, that would be an authentic expression of those students.

“It’s important to remember that if you use an AI to help you write an essay, it’s probably going to sound less like you and more like something quite generic,” he said. “And students need to know that for the people reading these essays, it won’t be too difficult for them to figure out who has used AI extensively. The key will be to use it to help students tell their own stories and to enhance what they want to convey, not to replace their own voice.”

Alvero and Anthony Lising Antonio, associate professor of education at the Stanford University Graduate School of Education, are co-corresponding authors.

More information:
A. J. Alvero et al., Large language models, social demography, and hegemony: comparing authorship in human and synthetic text, Journal of Big Data (2024). DOI: 10.1186/s40537-024-00986-7

Provided by
Cornell University
