How AI can transform layering technique for semiconductors

To make computer chips, technologists around the world rely on atomic layer deposition (ALD), which can create films as fine as one atom thick. Businesses commonly use ALD to make semiconductor devices, but it also has applications in solar cells, lithium batteries and other energy-related fields.

Today, manufacturers increasingly rely on ALD to make new types of films, but figuring out how to tweak the process for each new material takes time.

Part of the problem is that researchers primarily use trial and error to identify optimal growth conditions. But a recently published study—one of the first in this scientific field—suggests that using artificial intelligence (AI) can be more efficient.

In the ACS Applied Materials & Interfaces study, researchers at the U.S. Department of Energy’s (DOE) Argonne National Laboratory describe multiple AI-based approaches for autonomously optimizing ALD processes. Their work details the relative strengths and weaknesses of each approach, as well as insights that can be used to develop new processes more efficiently and economically.

Cutting edge, but with challenges

In ALD, two different chemical vapors, known as precursors, adhere to a surface, adding a thin layer of film in the process. This all happens inside a chemical reactor and is sequential: the first precursor is added and interacts with the surface, then any excess is removed. The second precursor is then introduced and later removed, and the cycle repeats. In microelectronics, the ALD thin film might be used to electrically insulate nearby components in nanoscale transistors.
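
In rough code terms (an illustration, not the study's actual control software), a single ALD cycle amounts to a fixed sequence of timed steps; the step durations and the pulse() stand-in below are placeholders:

# One illustrative ALD cycle as a fixed sequence of timed steps.
# The durations and the pulse() stand-in are placeholders, not details
# from the study.

def pulse(step, seconds):
    # stand-in for reactor hardware that would open a valve for `seconds`
    print(f"{step}: {seconds:.1f} s")

def ald_cycle(dose_a, purge_a, dose_b, purge_b):
    pulse("dose precursor A", dose_a)    # precursor A adsorbs on the surface
    pulse("purge", purge_a)              # excess precursor A is swept out
    pulse("dose precursor B", dose_b)    # precursor B reacts with adsorbed A
    pulse("purge", purge_b)              # by-products and excess B are removed

# Repeating the cycle builds the film roughly one atomic layer at a time.
for cycle in range(3):
    ald_cycle(dose_a=0.5, purge_a=2.0, dose_b=0.5, purge_b=2.0)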

ALD excels at growing precise, nanoscale films on complex, 3D surfaces such as the deep and narrow trenches patterned into silicon wafers to manufacture today’s computer chips. This has motivated scientists worldwide to develop new thin film ALD materials for future generations of semiconductor devices.

However, developing and optimizing these new ALD processes is challenging and labor-intensive. Researchers have to consider many different factors that can alter the process, including:

- The complex chemistries between the molecular precursors
- Reactor design, temperature and pressure
- The timing for each dose of their precursors

To address these challenges, Argonne scientists evaluated three optimization strategies: random, expert system and Bayesian optimization, with the latter two using different AI approaches.

Set it and forget it

Researchers evaluated their three strategies by comparing how they optimized the dosage and purge times of the two precursors used in ALD. Dosage time refers to the time period when a precursor is added to the reactor, while purge time refers to the time needed to remove excess precursor and gaseous chemical products.

The goal: Find the conditions that would achieve high and stable film growth in the shortest time. Scientists also judged the strategies on how quickly they converged on the ideal set of timings using simulations that represented the ALD process inside a reactor.

Linking the optimization approaches to the simulated system let the researchers measure film growth in real time after each cycle, based on the processing conditions each algorithm generated.
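
As a simplified illustration of what the algorithms are scoring (not the paper's actual merit function), a toy figure of merit might reward recipes that reach saturated, stable growth while keeping the total cycle time short; toy_simulate_growth() below is a hypothetical stand-in for the study's reactor simulation:

# Toy figure of merit for one set of ALD timings. toy_simulate_growth()
# is a hypothetical stand-in for the study's reactor simulation; the
# thresholds and growth numbers are made up for illustration.

def toy_simulate_growth(dose_a, purge_a, dose_b, purge_b):
    # growth saturates once doses and purges are long enough (illustrative rule)
    saturated = dose_a > 0.3 and dose_b > 0.3 and purge_a > 1.0 and purge_b > 1.0
    growth = 1.0 if saturated else 0.4     # angstroms per cycle, made up
    return growth, saturated

def figure_of_merit(timings):
    dose_a, purge_a, dose_b, purge_b = timings
    growth, saturated = toy_simulate_growth(dose_a, purge_a, dose_b, purge_b)
    if not saturated:
        return -1.0                        # film not growing stably: penalize
    cycle_time = dose_a + purge_a + dose_b + purge_b
    return growth / cycle_time             # high, stable growth in minimal time

print(figure_of_merit((0.5, 2.0, 0.5, 2.0)))   # 1.0 / 5.0 = 0.2

Dividing growth per cycle by total cycle time captures the stated goal: high, stable growth achieved in the shortest time.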

“All of these algorithms provide a much faster way of converging to optimum combinations because you’re not spending time putting a sample in the reactor, taking it out, doing measurements, etc., as you typically would today. Instead, you have a real-time loop that connects with the reactor,” said Argonne principal materials scientist Angel Yanguas-Gil, a co-author of the study.

This setup also made the process automatic for the two AI approaches by forming a closed-loop system.

“In a closed-loop system, the simulation performs an experiment, gets the results, and feeds it to the AI tool. The AI tool then learns from it or interprets it in some way, and then suggests the next experiment. And this all happens without human input,” said Noah Paulson, a computational scientist at Argonne and the lead author.
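
A minimal sketch of such a closed loop, under the same illustrative assumptions as the sketch above: a suggestion step proposes dose and purge timings, a simulated experiment scores them, and the result is fed back before the next suggestion. Random sampling, one of the three strategies compared in the study, stands in for the suggester here; in the AI-driven variants, an expert system or Bayesian optimizer would use the accumulated history to choose the next timings.

import random

# Closed-loop sketch: suggest timings -> run a (simulated) experiment ->
# score it -> record the result so the next suggestion can use it.
# Everything here is illustrative; in the study's AI variants an expert
# system or Bayesian optimizer replaces the random suggestion step.

BOUNDS = [(0.1, 2.0), (0.5, 5.0), (0.1, 2.0), (0.5, 5.0)]  # dose A, purge A, dose B, purge B (s)

def suggest_timings(history):
    # random strategy: ignore past results, sample uniformly within bounds
    return tuple(random.uniform(lo, hi) for lo, hi in BOUNDS)

def run_experiment(timings):
    # stand-in for the simulated reactor plus in-situ growth measurement,
    # using the same toy saturation rule and figure of merit as above
    dose_a, purge_a, dose_b, purge_b = timings
    saturated = dose_a > 0.3 and dose_b > 0.3 and purge_a > 1.0 and purge_b > 1.0
    growth = 1.0 if saturated else 0.4          # angstroms per cycle, made up
    return growth / sum(timings) if saturated else -1.0

history = []                                     # (timings, score) pairs
for _ in range(50):                              # 50 autonomous iterations
    timings = suggest_timings(history)
    score = run_experiment(timings)
    history.append((timings, score))             # feed the result back

best_timings, best_score = max(history, key=lambda h: h[1])
print("best timings (s):", [round(t, 2) for t in best_timings], "score:", round(best_score, 3))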

Despite some weaknesses, the AI approaches effectively determined the optimal dose and purge timings for different simulated ALD processes. This makes the study among the first to show that thin-film optimization in real time is possible using AI.

“This is exciting because it opens up the possibility of using these types of approaches to rapidly optimize real ALD processes, a step that could potentially save manufacturers precious time and money when developing new applications in the future,” concluded Jeff Elam, a senior chemist at Argonne and co-author.

More information:
Noah H. Paulson et al, Intelligent Agents for the Optimization of Atomic Layer Deposition, ACS Applied Materials & Interfaces (2021). DOI: 10.1021/acsami.1c00649

Provided by
Argonne National Laboratory

Citation:
How AI can transform layering technique for semiconductors (2021, June 25)
retrieved 25 June 2021
from https://techxplore.com/news/2021-06-ai-layering-technique-semiconductors.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
