AI cannot automate science – a philosopher explains the uniquely human aspects of doing research


In keeping with the broader push to incorporate artificial intelligence into nearly every field, researchers and politicians are increasingly using AI models trained on scientific data to infer answers to scientific questions. But can AI ultimately replace scientists?

President Donald Trump signed an executive order on Nov. 24, 2025, announcing the Genesis Mission, an initiative to build and train a series of AI agents on federal scientific datasets “to test new hypotheses, automate research workflows, and accelerate scientific breakthroughs.”

So far, the accomplishments of these so-called AI scientists have been mixed. On the one hand, AI systems can process vast datasets and pick out subtle correlations that humans would miss. On the other hand, their lack of commonsense reasoning can result in unrealistic or irrelevant experimental recommendations.

While AI can assist with tasks that are part of the scientific process, it is still far from automating science – and may never be able to do so. As a philosopher who studies both the history and the conceptual foundations of science, I see several problems with the idea that AI systems can “do science” without humans, or even do it better than humans can.

AI models can only learn from human scientists

AI models do not learn directly from the real world: They have to be “told” what the world is like by their human designers. Without human scientists overseeing the construction of the digital “world” in which the model operates – that is, the datasets used for training and testing its algorithms – the breakthroughs that AI facilitates wouldn’t be possible.

Consider the AI model AlphaFold. Its developers were awarded the 2024 Nobel Prize in chemistry for the model’s ability to infer the structure of proteins in human cells. Because so many biological functions depend on proteins, the ability to quickly generate protein structures to test via simulations has the potential to accelerate drug design, trace how diseases develop and advance other areas of biomedical research.

As practical as it may be, however, an AI system like AlphaFold does not provide new knowledge about proteins, diseases or more effective drugs on its own. It simply makes it possible to analyze existing information more efficiently.

AlphaFold draws upon vast databases of existing protein structures.

As philosopher Emily Sullivan put it, to be successful as scientific tools, AI models must retain a strong empirical link to already established knowledge. That is, the predictions a model makes must be grounded in what researchers already know about the natural world. The strength of this link depends on how much knowledge is already available about a certain subject and on how well the model’s programmers translate highly technical scientific concepts and logical principles into code.

AlphaFold would not have been successful…
