Handbook of Semiconductor Manufacturing Technology 2nd Edition


Foreword to the Second Edition

In 1958, the year of the invention of the integrated circuit, the price of a single silicon transistor was about $10. Today, it is possible to buy more than 20 million transistors at that price. Not only the transistors but an equal number of passive components and a set of interconnections that permit the devices to function as a dynamic random access memory can also be bought for that price. This cost reduction is unprecedented. This progress was due to the work of tens of thousands of very capable engineers throughout the world. Every step of the fabrication process, from the preparation of pure silicon materials to the final packaging operations, has been carefully examined, reinvented, and improved. Highly automated equipment has been developed for the manufacturing processes. This book provides an overview of the current status of this work and takes a look at the expected future growth of the semiconductor industry.

These cost reductions have tremendously expanded the field of electronics. In 1958, the most common electronic products were radio and television sets, and the semiconductor market represented about $218 million. Now semiconductors are used in everything from automobiles to x-ray machines, and the worldwide market exceeds $200 billion. The success story is not yet over, as the industry is expected to achieve remarkable growth in the future.

Jack S. Kilby


Preface to the Second Edition

The primary purpose of the second edition of this handbook is to serve as a reference for practitioners and developers of semiconductor manufacturing technology. Most of the chapters deal with individual process, equipment, material, or manufacturing "control/support/infrastructure" technologies. However, we have supplemented these with a few overview chapters that provide additional background. There is significant content of graphs, tables, and formulas, which experts in each subfield might find useful. However, unlike a "mostly numbers" handbook, we have embedded such compact reference information into a basic description of current and anticipated practice in integrated circuit manufacturing. Although this handbook is not as tutorial as a typical textbook, we have attempted to make each chapter highly readable to nonspecialists in this field. We hope that the book is useful to the scientific community involved in the development and manufacturing of semiconductor products as well as to students in this field. Note that the book mainly addresses silicon-based manufacturing, although many of the topics are applicable to building devices on other semiconductor substrates. Generous reference lists and bibliographies are also included for the benefit of readers.

Making the best use of the freedom given by the editors, the authors have used a breadth of approaches and styles across the chapters. Some have provided more historical information, while others have emphasized prognostication. There are chapters that stick closely to the traditional basics and others that emphasize current and future R&D challenges in the area. This flexibility has given the authors an opportunity to express their personal perspectives and sense of excitement about what is important today and in the near future in this field. Of course, semiconductor manufacturing technology has become so specialized that many of the chapters required multidisciplinary expertise, which in turn has widened the perspectives represented in the book.

It was definitely challenging to provide up-to-date reference data in a field as broad and rapidly changing as semiconductor manufacturing technology; note, for example, the increase in page count across the National/International Technology Roadmap for Semiconductors editions published from 1992 to 2005. We are glad to publish the second edition of this book, in which we have tried to anticipate what will be most relevant to readers in 2006 and beyond. While most of the authors, who are distinguished technologists in industry and academia, have updated their chapters from the first edition, others have provided entirely new chapters on topics of current interest in semiconductor manufacturing or on its R&D horizon. The continued scaling and evolution of device construction have also raised new issues and perspectives, which have influenced the topics as well as their treatment and relationships within this book. Our authors have responded to this challenge with excellent coordination between and within chapters. The semiconductor industry remains as exciting as it was when the first edition was published, and we again appreciate this opportunity to share the technical knowledge and experience of our authors and colleagues with readers.


In addition to thanking the authors, we would also like to acknowledge the contributions of Tim Wooldridge, our assistant editor. We have all enjoyed working on this project and hope that readers, too, find the book interesting and useful.

Robert Doering and Yoshio Nishi


Editors

Robert Doering is a senior fellow and a technology strategy manager at Texas Instruments (TI). His previous positions at TI include manager of Future-Factory Strategy, director of Scaled-Technology Integration, and director of the Microelectronics Manufacturing Science and Technology (MMST) Program. The MMST Program was a 5-year R&D effort, funded by DARPA, the U.S. Air Force, and TI, which developed a wide range of new technologies for advanced semiconductor manufacturing. The major highlight of the program was the demonstration, in 1993, of a sub-3-day cycle time for manufacturing 350-nm CMOS integrated circuits, enabled principally by the development of 100% single-wafer processing.

He received a BS degree in physics from the Massachusetts Institute of Technology in 1968 and a PhD in physics from Michigan State University in 1974. He joined TI in 1980, after serving for several years on the faculty of the physics department at the University of Virginia. His research there focused on nuclear reactions and was highlighted by the discovery of the giant spin–isospin resonance in heavy nuclei in 1973 and by pioneering experiments in medium-energy heavy-ion reactions in the late 1970s. His early work at TI was on SRAM, DRAM, and NMOS/CMOS device physics and process flow design. His management responsibilities during his first 10 years at TI included advanced lithography and plasma etching as well as CMOS and DRAM technology development.

Dr. Doering is an IEEE fellow and chair of the Semiconductor Manufacturing Technical Committee of the IEEE Electron Devices Society. He also chairs the National Research Council Board of Assessment for the NIST Electronics and Electrical Engineering Laboratory. He represents TI on many industry committees, including the Technology Strategy Committee of the Semiconductor Industry Association, the board of directors of the Semiconductor Research Corporation (SRC), the Technical Program Group of the Nanoelectronics Research Corporation, and the Corporate Associates Advisory Committee of the American Institute of Physics. Dr. Doering is also one of the two U.S. representatives to the International Roadmap Committee, which governs the International Technology Roadmap for Semiconductors. He has authored or presented over 150 publications and invited papers and talks and holds 20 U.S. patents.

Yoshio Nishi has been a professor in the Department of Electrical Engineering (research) and in the Department of Materials Science and Engineering at Stanford University since May 2002. He also serves as director of the Stanford Nanofabrication Facility of the National Nanotechnology Infrastructure Network of the United States and as director of research of the Center for Integrated Systems. He received a BS degree in materials science from Waseda University and a PhD in electronics engineering from the University of Tokyo. His research at Toshiba R&D on semiconductor device physics and silicon interfaces resulted in the discovery of the ESR Pb center at the SiO2–Si interface, the first 256-bit MNOS nonvolatile RAM, an SOS 16-bit microprocessor, and the world's first 1-Mb CMOS DRAM. In 1986, he joined Hewlett-Packard as director of the Silicon Process Lab and later established the ULSI Research Lab.


Dr. Nishi joined Texas Instruments in 1995 as senior vice president and director of R&D for the semiconductor group, where he implemented a new R&D model for silicon technology development and established the Kilby Center. Since becoming a faculty member at Stanford University in May 2002, his research interests have covered nanoelectronic devices and materials, including metal-gate/high-k MOS, device layer transfer for 3D integration, nanowire devices, and resistance-change nonvolatile memory materials and devices. He has published more than 200 papers, including conference proceedings, and has coauthored or edited nine books. He holds more than 70 patents in the United States and Japan. During the period 1995–2002, he served as a board member of the SRC and International SEMATECH, on the NNI panel, on the MARCO governing council, and on other boards. Currently, he serves as an affiliated member of the Science Council of Japan. Dr. Nishi is a fellow of the IEEE and a member of the Japan Society of Applied Physics and the Electrochemical Society. His recent awards include the 1995 IEEE Jack Morton Award and the 2002 IEEE Robert Noyce Medal.


Contributors

Michael Ameen
Axcelis Technologies, Inc., Beverly, Massachusetts

Nick Atchison
Multigig, Inc., Scotts Valley, California

Sanjay Banerjee
Department of Electrical and Computer Engineering, University of Texas at Austin, Austin, Texas

Gabriel G. Barna
Process Development and Control, Silicon Technology Development, Texas Instruments, Inc., Dallas, Texas

Robert Baumann
Component Reliability Group, Silicon Technology Development, Texas Instruments, Inc., Dallas, Texas

Ivan Berry
Axcelis Technologies, Inc., Beverly, Massachusetts

Duane S. Boning
Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, Massachusetts

Louis Breaux
Silicon Technology Development, Texas Instruments, Inc., Dallas, Texas

Stephanie Watts Butler
Silicon Technology Development, Texas Instruments, Inc., Dallas, Texas

Jeff Byers
KLA-Tencor, Austin, Texas

Andreas Cangellaris
Department of Electrical and Computer Engineering, The University of Illinois, Urbana, Illinois

G. K. Celler
Soitec USA, Peabody, Massachusetts

Mei Chang
Applied Materials, Inc., Santa Clara, California

Walter Class
Axcelis Technologies, Inc., Beverly, Massachusetts

C. Rinn Cleavelin
Silicon Technology Development, Texas Instruments, Inc., Austin, Texas

Sean Collins
Silicon Technology Development, Texas Instruments, Inc., Dallas, Texas

Luigi Colombo
Silicon Technology Development, Texas Instruments, Inc., Dallas, Texas

Will Conley
Freescale Semiconductor, Inc., Austin, Texas

Sorin Cristoloveanu
Institute of Microelectronics, Electromagnetism and Photonics, Grenoble, France

Francois M. d'Heurle
IBM Thomas J. Watson Research Center, Yorktown Heights, New York

Vallabh H. Dhudshia
SafeFab Solutions, Plano, Texas

Alain C. Diebold
SEMATECH, Austin, Texas

Girish A. Dixit
Novellus Systems, Inc., San Jose, California

Simon Fang
United Microelectronics Corporation, Hsin-Chu City, Taiwan

Leonard Foster
Facilities Department, Texas Instruments, Inc., Dallas, Texas

Gene E. Fuller
Strategic Lithography Services, Punta Gorda, Florida

Glenn W. Gale
SEZ AG, Villach, Austria

César M. Garza
Freescale Semiconductor, Inc., Austin, Texas

Hans-Joachim Gossmann
Axcelis Technologies, Inc., Beverly, Massachusetts, and Advanced Micro Devices, Hopewell Junction, New York

Gautum Grover
Cabot Corporation, Aurora, Illinois

John R. Hauser
Electrical and Computer Engineering Department, North Carolina State University, Raleigh, North Carolina

Robert H. Havemann
Novellus Systems, Inc., San Jose, California

Howard Huff
SEMATECH, Austin, Texas

G. Dan Hutcheson
VLSI Research, Inc., Santa Clara, California

Frederick W. Kern, Jr.
Hitachi Global Storage Technologies, San Jose, California

Brian K. Kirkpatrick
Silicon Technology Development, Texas Instruments, Inc., Dallas, Texas

Vincent Korthuis
Silicon Technology Development, Texas Instruments, Inc., Dallas, Texas

Michael Lamson
Silicon Technology Development, Texas Instruments, Inc., Dallas, Texas

Christian Lavoie
IBM Thomas J. Watson Research Center, Yorktown Heights, New York, and Département de Génie Physique, École Polytechnique de Montréal, Montréal, Canada

Wen Lin
Consultant, Allentown, Pennsylvania

Erdogan Madenci
Department of Aerospace and Mechanical Engineering, University of Arizona, Tucson, Arizona

Andrew J. McKerrow
Silicon Technology Development, Texas Instruments, Inc., Dallas, Texas

J. W. McPherson
Silicon Technology Development, Texas Instruments, Inc., Dallas, Texas

Mohammed J. Meziani
Department of Chemistry and Laboratory for Emerging Materials and Technology, Clemson University, Clemson, South Carolina

Hiro Niimi
Silicon Technology Development, Texas Instruments, Inc., Dallas, Texas

E. T. Ogawa
Silicon Technology Development, Texas Instruments, Inc., Dallas, Texas

Sylvia Pas
Silicon Technology Development, Texas Instruments, Inc., Dallas, Texas

Pankaj Pathak
Department of Chemistry and Laboratory for Emerging Materials and Technology, Clemson University, Clemson, South Carolina

Devadas Pillai
Intel Corporation, Chandler, Arizona

Shahid Rauf
Freescale Semiconductor, Inc., Austin, Texas

Jonathan Reid
Novellus Systems, Inc., San Jose, California

Syed A. Rizvi
Nanotechnology Education and Consulting Services, San Jose, California

Ron Ross
Texas Instruments, Inc., Santa Cruz, California

Stephen M. Rossnagel
IBM Thomas J. Watson Research Center, Yorktown Heights, New York

Leonard Rubin
Axcelis Technologies, Inc., Beverly, Massachusetts

Dieter K. Schroder
Electrical Engineering Department, Arizona State University, Tempe, Arizona

Bruno W. Schueler
Revera Inc., Sunnyvale, California

Thomas E. Seidel
AIXTRON, Inc., Sunnyvale, California

Thomas Shaffner
National Institute of Standards and Technology, Gaithersburg, Maryland

Gregory B. Shinn
Silicon Technology Development, Texas Instruments, Inc., Dallas, Texas

Terry Sparks
Freescale Semiconductor, Inc., Austin, Texas

Greg S. Strossman
XPS/ESCA and TOF-SIMS Services, Evans Analytical Group, Sunnyvale, California

Ya-Ping Sun
Department of Chemistry and Laboratory for Emerging Materials and Technology, Clemson University, Clemson, South Carolina

P. J. Timans
Mattson Technology, Fremont, California

Ting Y. Tsui
Silicon Technology Development, Texas Instruments, Inc., Dallas, Texas

Brad VanEck
SEMATECH, Austin, Texas

Peter L. G. Ventzek
Freescale Semiconductor, Inc., Austin, Texas

Eric M. Vogel
Department of Electrical Engineering, University of Texas at Dallas, Dallas, Texas

Lawrence C. Wagner
Semiconductor Quality Department, Texas Instruments, Inc., Dallas, Texas

Samuel C. Wood
Responsive Learning Technologies, Los Altos, California

Li-Qun Xia
Applied Materials, Inc., Santa Clara, California

Shi-Li Zhang
School of Information and Communication Technology, Royal Institute of Technology, Stockholm, Sweden, and School of Microelectronics, Fudan University, Shanghai, China

Contents

1. Introduction to Semiconductor Devices (John R. Hauser)
2. Overview of Interconnect—Copper and Low-K Integration (Girish A. Dixit and Robert H. Havemann)
3. Silicon Materials (Wen Lin and Howard Huff)
4. SOI Materials and Devices (Sorin Cristoloveanu and George K. Celler)
5. Surface Preparation (Glenn W. Gale, Brian K. Kirkpatrick, and Frederick W. Kern, Jr.)
6. Supercritical Carbon Dioxide in Semiconductor Cleaning (Mohammed J. Meziani, Pankaj Pathak, and Ya-Ping Sun)
7. Ion Implantation (Michael Ameen, Ivan Berry, Walter Class, Hans-Joachim Gossmann, and Leonard Rubin)
8. Dopant Diffusion (Sanjay Banerjee)
9. Oxidation and Gate Dielectrics (C. Rinn Cleavelin, Luigi Colombo, Hiro Niimi, Sylvia Pas, and Eric M. Vogel)
10. Silicides (Christian Lavoie, Francois M. d'Heurle, and Shi-Li Zhang)
11. Rapid Thermal Processing (P. J. Timans)
12. Low-K Dielectrics (Ting Y. Tsui and Andrew J. McKerrow)
13. Chemical Vapor Deposition (Li-Qun Xia and Mei Chang)
14. Atomic Layer Deposition (Thomas E. Seidel)
15. Physical Vapor Deposition (Stephen M. Rossnagel)
16. Damascene Copper Electroplating (Jonathan Reid)
17. Chemical–Mechanical Polishing (Gregory B. Shinn, Vincent Korthuis, Gautum Grover, Simon Fang, and Duane S. Boning)
18. Optical Lithography (Gene E. Fuller)
19. Photoresist Materials and Processing (César M. Garza, Will Conley, and Jeff Byers)
20. Photomask Fabrication (Syed A. Rizvi and Sylvia Pas)
21. Plasma Etch (Peter L. G. Ventzek, Shahid Rauf, and Terry Sparks)
22. Equipment Reliability (Vallabh H. Dhudshia)
23. Overview of Process Control (Stephanie Watts Butler)
24. In-Line Metrology (Alain C. Diebold)
25. In-Situ Metrology (Gabriel G. Barna and Brad VanEck)
26. Yield Modeling (Ron Ross and Nick Atchison)
27. Yield Management (Louis Breaux and Sean Collins)
28. Electrical, Physical, and Chemical Characterization (Dieter K. Schroder, Bruno W. Schueler, Thomas Shaffner, and Greg S. Strossman)
29. Failure Analysis (Lawrence C. Wagner)
30. Reliability Physics and Engineering (J. W. McPherson and E. T. Ogawa)
31. Effects of Terrestrial Radiation on Integrated Circuits (Robert Baumann)
32. Integrated-Circuit Packaging (Michael Lamson, Andreas Cangellaris, and Erdogan Madenci)
33. 300 mm Wafer Fab Logistics and Automated Material Handling Systems (Leonard Foster and Devadas Pillai)
34. Factory Modeling (Samuel C. Wood)
35. Economics of Semiconductor Manufacturing (G. Dan Hutcheson)

Appendix A: Physical Constants
Appendix B: Units Conversion
Appendix C: Standards Commonly Used in Semiconductor Manufacturing
Appendix D: Acronyms
Index


1 Introduction to Semiconductor Devices

John R. Hauser
North Carolina State University

1.1 Introduction
1.2 Overview of MOS Device Characteristics
1.3 MOSFET Device Scaling
    Scaling Rules † Performance of Scaled Devices
1.4 Manufacturing Issues and Challenges
    MOSFET Gate Stack Issues † Channel Doping Issues † Source/Drain Contact Issues † Substrate and Isolation Issues † Thermal Budget Issues
1.5 Advanced MOS Device Concepts
    SOI Substrates and Devices † Multiple Gate MOS Devices † Transport Enhanced MOS Devices † MOSFETs with Other Semiconductors † Advanced Semiconductor Device Concepts
1.6 Conclusions
References

1.1 Introduction

The silicon metal oxide semiconductor field effect transistor (MOSFET) has emerged as the ubiquitous active element for silicon very large scale integration (VLSI) integrated circuits. The competitive drive for improved performance and cost reduction has resulted in the scaling of circuit elements to ever-smaller dimensions. Within the last 35 years, MOSFET dimensions have shrunk from a gate length of 5 μm in the early 1970s to 45 nm today, and are forecast to reach less than 10 nm at the end of the projected shrink path in about 2020. While this process has been driven by marketplace competition, with operating parameters determined by products, it has been enabled by manufacturing technology innovations that have not necessarily followed such a consistent path. This treatise briefly examines metal oxide semiconductor (MOS) device characteristics and elucidates important future issues that semiconductor technologists face as they attempt to continue the rate of progress to the identified terminus of the technology shrink path in about 2020.

In the early days of semiconductor device development (the 1950s), the bipolar junction transistor was the dominant semiconductor device. As large-scale integration of devices developed in the 1960s, the MOSFET became the preferred device type and has eventually grown to dominate the use of semiconductor devices in integrated circuits (ICs). This has been predominantly due to the development of complementary MOS devices (CMOS), with which digital logic circuits can be formed that exhibit extremely low power dissipation in either of the two logic states. Complementary MOS is not only a device technology, but also a logic circuit technology that dominates the IC world because of its advantage of very low power dissipation over other forms of semiconductor circuits. Thus, over time in


the general scheme of ICs, bipolar semiconductor devices have come to be used in only special applications. Because of this dominance of MOS devices in large-scale ICs, only the MOSFET will be reviewed in this discussion.

In the late 1960s, the semiconductor industry emerged from the era of wet processing, contact printing, and negative resist at approximately 10 μm minimum gate lengths to face considerable problems related to particulate reduction, dry processing, and projection printing. Positive resist overcame the resist-swelling problem and provided improved resolution, but at the cost of particle generation caused by the brittle resist and the railed wafer-handling equipment in use at that time. Whole-wafer projection printing improved resolution and yield, but required the use of larger wafers to offset the additional capital cost. Dry processing was the banner developmental thrust of the period. Plasma processing, initially conducted in a pancake reactor between opposing electrodes, dramatically increased yield, but required a trained artisan to achieve uniformity and throughput. Sputter metal deposition replaced evaporation, which had earlier produced substrate stress-voiding problems. Wafer size was increased to offset the cost of the more sophisticated and expensive process equipment. Dynamic random access memory factories were the workhorses used to develop and prove out each next technology generation.

The MOSFET began to emerge in the 1970s as the device technology for VLSI, although large-scale integration (LSI) bipolar technology persisted longer than many forecast. In the early 1980s, Japanese semiconductor manufacturers seized manufacturing technology leadership with major capital commitments, dramatically increasing manufacturing yield and factory efficiency, to capture a major share of the dynamic random access memory (DRAM) market. Quality became a major issue when it was reported that the quality levels of Japanese memories were consistently and substantially better than those of American manufacturers. As the cost of manufacturing equipment development escalated, a transition began with the emergence of dedicated manufacturing equipment companies, reducing the value of proprietary process development and enhancing the value of product definition and circuit design. Major semiconductor companies initially pressured these equipment vendors to "customize" each production tool to their proprietary specifications, inhibiting reduced costs of capital equipment. Japan became a major supplier of semiconductor manufacturing equipment, further exacerbating problems for the U.S. semiconductor industry. This scenario produced a strategic inflection point in IC production: U.S. vendors of DRAMs suffered financial losses, and many ultimately exited the mass memory market.

This situation spawned cooperation among the major U.S. semiconductor manufacturers, resulting in the establishment of the Semiconductor Research Corp. (SRC) in 1981 and of SEMATECH in 1988. Both of these pursued the concept of industrial collaboration in semiconductor research and the concept of equipment "cost of ownership," leading to significant collaboration between IC manufacturers and equipment vendors. Wafer size was again increased to reduce the cost of IC manufacturing. Meanwhile, the relentless march of the technology to smaller feature sizes continued, made possible by step-and-repeat projection printing, reduced generation of particulates, and single-wafer processing replacing many batch processing steps.

Microprocessor technology emerged alongside dynamic memory as a separately addressed technology requirement. Subsequently, microprocessor manufacturing technology led the race toward smaller device structures and higher complexity on the chip, while dynamic memory continues to advance with smaller memory cell size and ever-increasing memory capacity. Manufacturing equipment automation, cleanliness (for particle reduction), and efficient factory management through cost-of-ownership reductions have become the overriding issues to be addressed. Yield was increased in the 1980s and 1990s three- and fourfold, to levels unheard of in the early years of IC manufacturing. The collaborative work of SEMATECH and the U.S. semiconductor industry led to an initial "roadmap" of semiconductor technology in 1992 and to subsequent revisions approximately every 2 years. Significantly aided by SRC and SEMATECH, the U.S. regained technology and market leadership in the world semiconductor markets by the early 2000s.

Since the 1990s, enhanced collaboration among semiconductor companies has continued in order to recognize and address the technology problems associated with the continuing decrease in device feature size. Old issues have persisted and many new ones have arisen. Among the most important issues are: (1)


the increasing capital cost of manufacturing plants, (2) the increased difficulty of lithographic processes as feature sizes decrease, (3) the increasing on-chip cross-talk and capacitive loading as the frequency of operation is increased, (4) power dissipation problems, (5) fundamental device limits as feature sizes approach the nanometer range, and (6) the need for enhanced metrology and test equipment. In addition, system and circuit design have presented additional sets of problems, which have been separately addressed by a computer-aided design (CAD) industry that arose in a fashion similar to the manufacturing equipment industry.

Today, at device feature sizes below 50 nm, the semiconductor industry faces unprecedented problems in continuing affordable feature-size scaling along with continued improvement of the manufacturing processes. As will be subsequently discussed, fundamental device scaling issues have changed at around the 100–50 nm feature size, and many new materials and processes are being required to continue scaling to the ultimate limits imposed by semiconductor device physics. Key areas for advancement of the MOSFET include advanced gate dielectrics, gate contacts, source/drain structures and contacts to the source/drain, and the possible replacement of bulk CMOS with silicon-on-insulator (SOI) CMOS or eventually with more complex three-dimensional device structures that might perform improved circuit functions. To implement these new materials and smaller device dimensions, new manufacturing methods and equipment will be required, with tighter tolerances, lower manufacturing cost of ownership, and increased process control. Improved predictability from improved technology computer-aided design (TCAD) is necessary to avoid the expense of process development on the manufacturing line. Thus, the battle to increase device density and lower costs through smaller device dimensions continues, as semiconductor devices infuse a still rapidly growing electronics market.

1.2 Overview of MOS Device Characteristics

The MOSFET is the predominant semiconductor device in large-scale ICs. The basic physics behind the device can be understood with reference to Figure 1.1, which shows a cross-section of the basic components of an n-channel device. In the MOS device, a voltage applied to the gate (VG in Figure 1.1) controls the current flow between the drain and the source. The physical mechanism of control is the modulation of the mobile charge density of a channel of electrons near the interface between the silicon and an insulator. The gate-to-semiconductor structure acts in many ways like a capacitor, with the channel charge controlled by the gate-to-source voltage. One difference from a simple capacitor is the existence of a threshold voltage VT, which is required at the gate to establish the onset of a conductive channel. This is due to the background doping density of the substrate, which for an n-channel device as shown is of the opposite conductivity type, or p-type. The applied gate voltage must establish a depletion layer of some width, with an accompanying voltage, before the conductive channel of electrons begins to dominate the capacitor charge. A typical capacitance–voltage characteristic for an n-MOS transistor is shown in Figure 1.2 for the case of source and drain tied together. For voltages less than VT in the figure, only a depletion region (with positive charge density) exists, or an accumulation layer of holes (positive charge) exists for a large negative gate voltage (with negative gate charge). The minimum in the capacitance corresponds to a maximum value of the depletion layer width established in the semiconductor substrate. The width of this maximum depletion layer can be expressed as:

$$W_{dm} = \sqrt{\frac{4\,\varepsilon_{si}\,kT\,\ln(N_a/n_i)}{q^2 N_a}} \qquad (1.1)$$

where Na is the acceptor doping density in the semiconductor and ni is the intrinsic carrier density. This also occurs theoretically when the surface potential has achieved the value:

$$\phi_s = 2(kT/q)\ln(N_a/n_i) = 2\phi_B \qquad (1.2)$$

[Figure 1.1: Schematic of a basic n-channel MOSFET. (a) "Off" state (VG < VT): no channel exists and ID ≈ 0. (b) "On" state (VG > VT): a conductive channel exists between the n+ source and drain above the depletion region in the p-type substrate, and ID = f(VG, VD).]
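As a quick numeric check of Equation 1.1 and Equation 1.2, the short Python sketch below evaluates φB, φs, and Wdm. This illustration is not from the original text; the doping value Na = 4×10^18/cm^3 is borrowed from the C–V example later in this chapter, and the constants are standard room-temperature values.

```python
import math

q      = 1.602e-19            # electron charge, C
kT     = 0.0259 * q           # thermal energy at 300 K, J
eps_si = 11.7 * 8.854e-14     # silicon permittivity, F/cm
ni     = 1.0e10               # Si intrinsic carrier density at 300 K, cm^-3
Na     = 4.0e18               # acceptor doping density, cm^-3 (assumed)

phi_B = (kT / q) * math.log(Na / ni)    # bulk potential, V
phi_s = 2 * phi_B                       # surface potential at threshold (Eq. 1.2)

# Maximum depletion-layer width of Equation 1.1:
W_dm = math.sqrt(4 * eps_si * kT * math.log(Na / ni) / (q**2 * Na))

print(f"phi_B = {phi_B:.3f} V, phi_s = {phi_s:.3f} V")
print(f"W_dm  = {W_dm * 1e7:.1f} nm")    # roughly 18 nm at this doping
```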

The rapidly increasing capacitance curve for voltages above VT is indicative of the rapid establishment of an inversion layer above the depletion layer, which in this case is a conductive channel of electrons. The heavily doped n+ source/drain regions shown in Figure 1.1 are used to make "ohmic" contact to the conductive channel, so that a voltage difference between the source and the drain will result in current flow from the positive voltage at the drain terminal to the negative voltage at the source. In the "off" state, as illustrated in Figure 1.1a, the drain current is very small (ideally zero), and in the "on" state, as illustrated in Figure 1.1b, the drain current is a function of both VG and VD. The larger the gate voltage, the larger the density of conduction-channel electrons and the larger the device drain current. Ideally, for gate voltages significantly above the threshold voltage, the gate-to-channel structure looks like a capacitor with dielectric thickness equal to the gate oxide thickness, and the capacitance approaches a constant, as seen in Figure 1.2 for large positive gate voltages.

[Figure 1.2: Gate C–V curve for a MOSFET with drain and source at the same voltage. C/Cox falls from surface accumulation (below VFB) through the depletion region to a minimum, then rises again above VT as the inversion (conductive) channel is established.]

A typical graph of ID vs. VD for several gate voltages is shown in Figure 1.3. This is for a long-channel device, which shows little of the "short-channel" effects to be subsequently discussed. The I–V characteristic exhibits three distinct regions of operation: (1) cutoff or subthreshold, where VG < VT; (2) the triode region, where VD < VG − VT; and (3) the saturation region, where VD > VG − VT and the current is approximately independent of drain voltage, as shown in Figure 1.3 for large drain voltages. Ideally, in the subthreshold region the current would be zero, but the conductive channel does not go abruptly to zero at the threshold voltage, so an exponentially decreasing current exists in the subthreshold region due to an exponentially decreasing inversion charge density. This is best illustrated by the typical log plot of ID vs. VG at two drain voltages shown in Figure 1.4.

[Figure 1.3: Typical ID–VD characteristics for a long n-channel MOSFET (Leff = 9.97 μm, W = 10 μm) at constant gate voltages from 0.2 to 1.2 V.]


[Figure 1.4: Typical ID–VG MOSFET characteristic in the subthreshold region, plotted on a log scale for VD = 0.25 V and VD = 2.0 V; below VT the current falls approximately exponentially with decreasing VG.]

As can be seen in the figure, the current is approximately linear on the log plot over the voltage region below the threshold voltage. The standard first-order model for MOSFET terminal current is the set of equations [1]:

$$I_D = \begin{cases} I_o \exp\left[-q(V_T - V_G)/mkT\right] & \text{for } V_G < V_T \quad \text{(a) subthreshold region} \\ \mu_n C_{ox}(W/L)(V_G - V_T - V_D/2)\,V_D & \text{for } V_G > V_T \text{ and } V_D < V_G - V_T \quad \text{(b) triode region} \\ \mu_n C_{ox}(W/2L)(V_G - V_T)^2 & \text{for } V_G > V_T \text{ and } V_D > V_G - V_T \quad \text{(c) saturation region} \end{cases} \qquad (1.3)$$
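As a concrete illustration, the minimal Python sketch below implements this three-region model; it is not from the original text, and all parameter values (μn, Cox, VT, Io, m) are assumed for illustration. The parameters themselves are discussed next.

```python
import math

def mosfet_id(VG, VD, VT=0.3, W=10e-4, L=0.1e-4,   # W, L in cm (10 um, 0.1 um)
              mu_n=300.0,        # channel mobility, cm^2/V-s (assumed)
              Cox=2.0e-6,        # gate capacitance per area, F/cm^2 (assumed)
              Io=1e-7, m=1.3,    # subthreshold prefactor and ideality (assumed)
              kT_q=0.0259):      # thermal voltage at 300 K, V
    """Drain current (A) from the three-region model of Equation 1.3."""
    if VG < VT:                                        # (a) subthreshold
        return Io * math.exp(-(VT - VG) / (m * kT_q))
    if VD < VG - VT:                                   # (b) triode
        return mu_n * Cox * (W / L) * (VG - VT - VD / 2) * VD
    return mu_n * Cox * (W / (2 * L)) * (VG - VT)**2   # (c) saturation

for VG in (0.2, 0.6, 1.0):
    print(f"VG = {VG:.1f} V: ID = {mosfet_id(VG, VD=1.2):.3e} A")
```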

These are approximate equations that help to define the three major regions of operation: (a) the subthreshold region, (b) the triode region (or linear region for small values of VD), and (c) the saturation region. The equation for the subthreshold region must be matched to the conduction-region equations in a manner such that the value and first derivative of the current are continuous. The equations are used here to illustrate the major dependences on the device parameters. The device and structural dependences occur through the effective channel length L, the channel width W, the channel mobility μn, and the inversion layer capacitance Cox. An additional parameter frequently used to characterize the subthreshold region is the current slope factor:

$$S = m\,(kT/q)\ln(10) \approx (kT/q)\ln(10)\left(1 + \frac{\varepsilon_{si}\,t_{ox}}{\varepsilon_{ox}\,W_{dm}}\right) \qquad (1.4)$$

In this equation, a model for the ideality factor m is also given in terms of the oxide thickness and the maximum depletion layer width. This value is frequently compared to the ideal subthreshold slope factor of (kT/q) ln(10) = 60 mV/decade at room temperature.

For short-channel devices, additional parameters are needed to provide even a first-order description of the device current. A typical set of current characteristics for a short-channel MOSFET is shown in Figure 1.5. In this particular case, it is the characteristic of an n-channel device with an oxide thickness of approximately 1.7 nm, a channel length of approximately 97 nm, and a channel width of 10 μm. What constitutes a short-channel device is not readily defined simply in terms of the effective channel length. As devices have been scaled to ever smaller dimensions over the years, the concept of a short-channel device has been pushed to ever smaller dimensions.
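Returning to Equation 1.4, the slope factor is easy to evaluate numerically. The sketch below is illustrative, assuming a 1.5-nm oxide and reusing the roughly 18-nm maximum depletion width from the earlier example:

```python
import math

kT_q   = 0.0259     # thermal voltage at 300 K, V
eps_si = 11.7       # relative permittivity of silicon
eps_ox = 3.9        # relative permittivity of SiO2 (only the ratio matters)
t_ox   = 1.5e-7     # oxide thickness, cm (1.5 nm, assumed)
W_dm   = 1.8e-6     # maximum depletion width, cm (~18 nm, from earlier)

m = 1 + (eps_si * t_ox) / (eps_ox * W_dm)   # ideality-factor model of Eq. 1.4
S = m * kT_q * math.log(10) * 1e3           # subthreshold slope, mV/decade

print(f"m = {m:.2f}, S = {S:.1f} mV/decade (ideal: 60)")
```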


[Figure 1.5: Typical ID–VD characteristics for a short n-channel MOSFET (Leff = 0.097 μm, W = 10 μm) at constant gate voltages from 0.2 to 1.2 V; note the finite output slope at large VD.]

This scaling is discussed in a subsequent section. However, devices that can be characterized as short-channel devices exhibit certain important deviations from the ideal device equations described by the set of Equation 1.3. One important feature is the lack of a clear current saturation region for large drain voltages, as can be seen by comparing Figure 1.3 and Figure 1.5. This lack of ideal current saturation is known to be due primarily to two physical effects. One is channel-length modulation, whereby an increase in drain voltage reduces the effective channel length. As can be seen from Equation 1.3, a decrease in L will cause a resulting increase in the device current, even in the equation describing the current saturation region. The second effect is a dependence of the device threshold voltage on drain voltage, with the threshold voltage decreasing with increasing drain voltage. Again from Equation 1.3, we can see that this will result in an increasing device current. This decrease in threshold voltage is due to incomplete shielding of the channel from the drain voltage and depends strongly on three parameters: oxide thickness, maximum depletion layer thickness, and effective channel length. This is a two-dimensional potential feedback effect from the drain to the channel. It is discussed in more detail in a subsequent section, but for a typical drain current characteristic it is typically described by a so-called drain-induced barrier lowering (DIBL) factor. The channel-length modulation effect is frequently included in the basic device model by the use of a factor (1 + λVD), which multiplies Equation 1.3 in the triode and saturation regions. A first-order model for DIBL can be taken as a threshold-voltage reduction ΔVT = σVD in all regions of operation. These are the two most important modifications of the basic device equations needed for short-channel effects. Both of these modifications result in a finite slope of the I–V characteristic at large drain voltages in the classical current saturation region. Most of the finite slope seen in Figure 1.5 can be accounted for by a DIBL effect (with σ ≈ 0.22).

A key factor in the dominance of MOSFETs is the ability to produce complementary n- and p-channel transistors with similar device characteristics. The figures given so far have been for n-channel devices, in which a positive gate voltage is required to establish a conductive channel and current flows from the most positive of the source/drain contacts to the most negative. A discussion of p-channel devices would be essentially the same, except for the reversal of all operating voltages and current directions. A negative voltage at the gate first depletes an n-type substrate and then establishes a conductive channel of holes (positive charge) after the threshold voltage is exceeded. Positive current can then flow from the most positive of the source/drain contacts to the most negative, with the most negative contact identified as the drain contact. Figures showing the I–V characteristics of p-channel devices would be very similar to Figure 1.3 through Figure 1.5, with the current and voltage directions reversed.


One major difference should, however, be noted, and this relates to the mobility parameter (which would be μp in Equation 1.3), which is about a factor of 2.0–2.5 smaller for holes than for electrons. However, the current supplied by a p-MOS device can be made equal to that of an n-MOS device by increasing the (W/L) ratio of the p-MOS device to compensate for the lower hole mobility. It is frequently assumed that (W/L) for the p-MOS devices is approximately 2.5 times that of the n-MOS devices, although logic gates may require other ratios of the device dimensions in order to achieve desired switching speeds.
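Before moving on to scaling, the sketch below folds the two short-channel corrections described above, channel-length modulation via a (1 + λVD) multiplier and DIBL via a threshold reduction σVD, into the first-order model. Here mosfet_id() is the Equation 1.3 sketch given earlier, σ = 0.22 is the value quoted for Figure 1.5, and λ is an assumed illustrative value.

```python
def mosfet_id_short(VG, VD, VT0=0.3, sigma=0.22, lam=0.1, **kw):
    """Eq. 1.3 model with first-order short-channel corrections."""
    VT = VT0 - sigma * VD            # DIBL: threshold drops with drain bias
    ID = mosfet_id(VG, VD, VT=VT, **kw)
    if VG > VT:                      # CLM multiplier above threshold
        ID *= 1.0 + lam * VD
    return ID

# The finite output slope in "saturation" is now visible:
for VD in (0.8, 1.0, 1.2):
    print(f"VD = {VD:.1f} V: ID = {mosfet_id_short(1.0, VD):.4e} A")
```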

1.3 MOSFET Device Scaling

The primary driving factor behind the exponential increase in IC functionality over the past 40 years, commonly referred to as Moore's law, has been the ability to continually scale MOS devices to ever smaller dimensions. This is illustrated in Figure 1.6, which shows a series of production and prototype devices spanning the major technology nodes from 130 to 32 nm, with expected production times from 2001 to 2009 [2]. Figure 1.7 shows a cross-sectional drawing of a typical MOS device, drawn approximately to scale in both the vertical and horizontal dimensions, for devices in the range of the 130- to 32-nm nodes. Critical dimensions in the figure are the physical gate length LP, the gate oxide thickness tox, and the effective channel length Leff (the same as L in Equation 1.3). Of particular note is the very thin nature of the gate oxide when compared with other device dimensions such as the effective channel length. This structure shows a typical channel contacting structure consisting of the source/drain extension junction (depth XJ) and a deeper drain contact junction (depth XJC). Devices typically have a silicide layer extending nearly to the edge of the drain extension layer and a metal source/drain and gate contact layer, which are not shown in this simplified drawing. These layers can be seen in the actual cross-sectional views of Figure 1.6. Also not shown in Figure 1.7 are the sub-channel doping layers under the gate in the region of Leff for controlling threshold voltage, punch-through, and DIBL. These are discussed in more detail later.

[Figure 1.6: Illustration of accelerated scaling of planar transistors from the 130-nm node (70-nm gate length, production in 2001), through the 90-nm node (50-nm length, production in 2003), the 65-nm node (30-nm prototype, production in 2005), and the 45-nm node (20-nm prototype, production in 2007), to the 32-nm node (15-nm prototype, production in 2009). (From Marcyk, G., Intel Corp., ftp://download.intel.com/technology/silicon/Marcyk_tri_gate_0902.pdf)]

[Figure 1.7: MOS device cross-section drawn approximately to scale in the vertical and horizontal directions, showing the physical gate length LP, gate contact thickness tgate, gate oxide thickness tox, spacer thickness tsw, the n+ drain extension (junction depth XJ), the deeper n+ drain contact (junction depth XJC), and the substrate body.]

1.3.1 Scaling Rules

Since it was first published in 1992, the International Technology Roadmap for Semiconductors (ITRS) [3] has been the benchmark document for guiding the scaling of MOS devices. Each edition of this document has projected future scaling trends and future device and IC system performance for the next 15 years. The 2004 update [4] projects development to the year 2019, by which time there is considerable concern that the end of any practical CMOS scaling will have been reached due to fundamental material and device limits. Several of these limits and problems will be subsequently discussed. Figure 1.8 shows some important projected scaling results from both the 1997 and the 2004 ITRS documents. The figure shows several device parameters in terms of major technology generations and the year (or expected year) of introduction into production.

[Figure 1.8: Dimensions of several important MOS device parameters as projected for scaled devices, plotted against technology generation and year: 250 nm (1997), 180 nm (1999), 130 nm (2001), 90 nm (2004), 65 nm (2007), 45 nm (2010), 32 nm (2013), 22 nm (2016), and 16 nm (2019). Open data points are from the 1997 ITRS; closed data points are from the 2004 ITRS. Curves: LTN = technology node, LL = printed gate length, LP = physical gate length, XJ = junction depth at channel, tox = EOT (equivalent oxide thickness), tciv = inversion capacitance equivalent thickness (CET), and the supply voltage VDD; the "golden era of scaling" is marked for nodes above 90 nm.]


The technology generation nodes need some discussion. In the 1997 ITRS, the nodes were listed as 250, 180, 130, 100, 70, and 50 nm. In the 2004 ITRS, the last three of these nodes were changed to 90, 65, and 45 nm, and the additional nodes of 32, 22, and 16 nm were included. One of the intents of the major technology nodes is to identify device scaling that will result in an increase in device packing density by a factor of 2 (a doubling of device density each generation along Moore's curve). Ideally, then, each technology generation represents a shrink in linear device dimension by a factor of 1/√2 ≈ 0.707. With this in mind, the 90-nm dimension is considerably closer to an ideal shrink than was the original 100-nm specification. Thus, the 1997 data points are plotted using the 2004 node identifications to be consistent with the 2004 document. As is also to be noted from the expected dates of introduction, the time between the major nodes is not a constant interval. For the 250-, 180-, and 130-nm nodes, the time interval is only 2 years, while for the remaining nodes the projected time interval is 3 years. This indicates some of the expected difficulty in achieving the expected shrinks beyond the 130-nm node.

Several lengths are shown in the graph from the 2004 ITRS. First, the upper curve labeled LTN corresponds to the major technology node, which is identified with the expected half-pitch of densely packed DRAM circuits. For the critical MOS device dimensions, especially of "high performance" (HP) logic devices, other critical dimensions are defined: LL, the lithographically printed gate length, and LP, the physical gate length after etching of the gate contact material. Also shown in the figure are projections for the depth of the extension junction used to contact the conductive channel. As can be seen from the approximately straight lines on the log scale of Figure 1.8, all of these dimensions are projected to scale as some constant fraction of the technology node dimension. It should be noted that the 1997 ITRS did not distinguish between the technology node and the physical device dimension, so there was some confusion about these dimensions in the earlier ITRS documents. It should also be noted that the 2004 ITRS develops different scenarios for HP logic circuits and for low-power circuits. While the line-width parameters are common to different applications, other parameters included in Figure 1.8, such as dielectric thickness and supply voltage, are application dependent. In general, the parameters discussed in this chapter are for HP logic circuits, since these applications place the most severe constraints on device performance and will probably be the most difficult to achieve with scaled devices.

Other very key device parameters are the gate dielectric thickness and the logic-level power supply voltage. First, considering the supply voltage VDD: in the 1997 ITRS, it was projected to decrease to about 0.5 V by the 45-nm node, at the end of the 1997 projected roadmap. The 2004 ITRS provides a very different scenario, with the 45-nm node having a supply voltage of 1.0 V and the value decreasing to 0.7 V at the 16-nm node, the end of the roadmap. This is a major rethinking of the voltage limits for scaled CMOS devices. Along with this has come a major change in the projected decrease in dielectric thickness. In the 1997 ITRS, the oxide thickness was projected to reach about 0.9 nm (actually listed as <1 nm) at the 45-nm node. The 2004 ITRS actually projects even thinner tox, or equivalent oxide thickness (EOT), values than did the 1997 ITRS. In this case, the term EOT is used in recognition that the gate dielectric will most likely not be a pure oxide, but some enhanced oxide for reduced gate leakage. This is explored in more detail in a subsequent section. An additional parameter introduced in the 2004 ITRS is the "equivalent electrical thickness in inversion" (tciv), also plotted in Figure 1.8 and labeled "inversion CET" for inversion capacitance equivalent thickness (CET). It should be noted that the supply voltage and oxide thickness values used here are for so-called HP devices, such as would be used in state-of-the-art microprocessors. Other MOS devices needed for low-power applications would not be scaled as aggressively, but would be optimized for other considerations. The HP device scaling is emphasized here, as it typically represents the most severe scaling constraint in terms of material properties and device dimensions.
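As a quick arithmetic check of the ideal 1/√2 ≈ 0.707 shrink factor discussed above, the snippet below (an illustration, not from the original text) computes the node-to-node ratios for the named technology generations:

```python
import math

nodes = [250, 180, 130, 90, 65, 45, 32, 22, 16]   # nm, per the 2004 ITRS naming
ideal = 1 / math.sqrt(2)                          # linear shrink for a 2x density gain

for a, b in zip(nodes, nodes[1:]):
    print(f"{a:>3} nm -> {b:>3} nm: ratio = {b / a:.3f} (ideal {ideal:.3f})")
```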


capacitor with dielectric constant equal to that of the gate insulator and thickness equal to the gate insulator thickness, tox or EOT. For such an ideal case, the CET would equal the EOT. However, this is not quite correct, especially when the dimensions become very small. First, if the gate is polysilicon with a finite doping density, there is a finite depletion layer in the polysilicon gate. This essentially adds to the equivalent dielectric thickness of the gate-to-channel capacitor structure. The ITRS projections for the ultimately scaled devices assume that this additional thickness will be eliminated by the use of appropriate metal gates. In addition, the charge in the semiconductor is not exactly a sheet charge, but exhibits some finite thickness. Even in the classical model of the surface charge, the conductive channel has some finite width. An in-depth analysis of the conductive channel must consider quantum mechanical confinement effects of the carriers within a surface potential well, and this results in an additional thickness of the conductive channel [5–7]. The net result is that the charge centroid of the conductive channel is some distance below the dielectric–semiconductor interface. This adds an additional thickness to the oxide, which must be included in an expression describing the charge–voltage relationship for the MOS gate. This additional thickness is estimated as 0.4 nm in the ITRS tables, and this 0.4-nm value accounts for the difference between the EOT and inversion CET values of Figure 1.8.

In terms of the gate capacitance characteristic, this effect reduces the maximum capacitance values in either accumulation or inversion, as illustrated in Figure 1.9, which shows several theoretical C–V curves for a 100 nm × 100 nm capacitor on a p-type substrate with a 1-nm thick oxide. Several curves are shown for a substrate doping density of 4×10^18/cm^3 (this value is used in order to give a threshold voltage of a few tenths of a volt). Curve (a) is the ideal C–V curve for a 1-nm oxide, assuming an ideal gate contact such as a metal and ignoring quantum confinement channel effects. Curve (b) is for a metal gate, including quantum channel effects. Finally, curves (c) and (d) are for n+ polysilicon gates doped to 2×10^20/cm^3 and 1×10^20/cm^3, respectively, including quantum effects. Comparing curves (a) and (b), one sees that the major influence of the channel quantum effects is to increase the voltage for channel inversion (or threshold voltage) and to lower the peak capacitance in both accumulation and inversion. As previously discussed and reported [8,9], these changes are due to the shift of the channel charge centroid away from the surface caused by quantum confinement effects. These make the capacitor structure appear to have a larger effective thickness and lower capacitance in the inversion region, where the conductive channel is formed. This increased thickness is represented by the tciv values in Figure 1.8 and, as previously stated, is estimated in the ITRS as about 0.4 nm, independent of the physical oxide thickness. This accounts for the difference between the tox and tciv curves in Figure 1.8.
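To first order, these contributions add in series as equivalent oxide thicknesses. The sketch below is an illustration under assumed values, not from the original text; it uses the ITRS 0.4-nm estimate for the quantum charge-centroid term and an assumed equivalent thickness for polysilicon depletion:

```python
EOT       = 1.0   # nm, physical equivalent oxide thickness
t_quantum = 0.4   # nm, ITRS estimate for the inversion charge-centroid term
t_poly    = 0.3   # nm, assumed equivalent thickness of polysilicon depletion

for gate, t_gate in (("metal gate", 0.0), ("n+ poly gate", t_poly)):
    cet = EOT + t_quantum + t_gate          # series combination of thicknesses
    print(f"{gate}: inversion CET = {cet:.1f} nm "
          f"({cet / EOT:.2f}x the physical EOT)")
```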

FIGURE 1.9 Theoretical C–V curves for a thin gate dielectric, including QM effects and polysilicon depletion effects. [Plot: capacitance (pF), 0–400, vs. gate voltage (V), −3 to 2; NB = 4×10^18/cm^3, tox = 1 nm; curves: (a) ideal metal gate, no QM; (b) metal gate, QM; (c) poly, 2×10^20/cm^3, QM; (d) poly, 1×10^20/cm^3, QM.]


Curves (c) and (d) in Figure 1.9 illustrate the further reduction in inversion-layer capacitance due to the use of a polysilicon gate. In the inversion region (large positive gate voltages), the polysilicon has a finite depletion layer, which adds an additional effective thickness to that of the oxide and the quantum confinement effects. The capacitance values represent induced charge per unit voltage, so the differences between curves (b) and (c) or (d) represent reductions in transconductance that would occur due to polysilicon depletion. If the polysilicon doping density could be increased without limit, the polysilicon depletion effect could be minimized. However, from literature reports, it appears that it will be very difficult to get electronically active doping densities much above 10^20/cm^3 for n+ polysilicon and above the mid-to-upper 10^19/cm^3 range for p+ polysilicon. For these doping densities, the polysilicon depletion represents a significant reduction in gate capacitance for a thin-EOT dielectric. In terms of MOS device performance, the important gate-thickness-related parameters in Figure 1.8 are the supply voltage, which represents the maximum gate-to-source voltage, and the inversion capacitance equivalent thickness, tciv. A first order approximation for the Cox parameter in Equation 1.3 is:

\[ C_{ox} = \varepsilon_{ox}/t_{civ} = \text{capacitance per unit area} \tag{1.5} \]
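Equation 1.5 lends itself to a quick numerical check. The short Python sketch below evaluates Cox and the corresponding maximum induced channel charge; the tciv, VDD, and VT values are assumed round numbers chosen for illustration, not ITRS table entries.

```python
# Illustrative evaluation of Equation 1.5 and the maximum induced
# channel charge Cox*(VDD - VT). All parameter values are assumed.
EPS0 = 8.854e-14      # F/cm, permittivity of free space
K_OX = 3.9            # relative dielectric constant of SiO2

t_civ = 1.5e-7        # cm (1.5 nm inversion CET, assumed)
V_DD  = 1.0           # V, supply (maximum gate) voltage, assumed
V_T   = 0.2           # V, threshold voltage, assumed

C_ox = K_OX * EPS0 / t_civ          # F/cm^2, Equation 1.5
Q_ch = C_ox * (V_DD - V_T)          # C/cm^2, maximum channel charge

print(f"Cox = {C_ox:.2e} F/cm^2")   # ~2.3e-6 F/cm^2
print(f"Qch = {Q_ch:.2e} C/cm^2")   # ~1.8e-6 C/cm^2
```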

This capacitance multiplied by (V_G − V_T) then gives the maximum channel induced charge per unit area. The possibility of using advanced dielectrics, such as high-k dielectrics, in place of silicon dioxide is discussed in a subsequent section. One important conclusion that can be gleaned from Figure 1.8 is that the projected scaling relationships have changed significantly from the 1997 to the 2004 ITRS document. The 1997 projections were essentially the case of "constant field" scaling, where all device dimensions and the voltage were projected to scale with the same factor as the technology node and the physical gate length. This type of device scaling had (in 1997) been practiced for many technology generations and can be said to constitute the "golden era of scaling," as identified in the figure for technology nodes above the 90-nm node. During this golden era, it was relatively easy (in retrospect) to scale MOS devices with relatively small changes in the device structure from generation to generation, and the relative dimensions of MOS devices were close to those shown in the device cross-section of Figure 1.7. However, as Figure 1.8 shows, scaling projections beyond the 90-nm node represent a significant departure from the golden era of scaling and from constant field scaling. As subsequently discussed, this is because several fundamental semiconductor and device limits are being approached, and scaling beyond the 90-nm node becomes considerably more difficult than during the golden era. Figure 1.10 shows the projected scaling of two other MOS device dimensions, the gate contact thickness and the spacer thickness. Overall, these are seen to scale with approximately the same scaling factor as the technology node, as projected by both the 1997 and 2004 ITRS documents.

1.3.2 Performance of Scaled Devices

An obvious advantage of device scaling is the ability to pack more and more devices and electronic functionality within the same given silicon chip area. This has been the major driving force propelling the electronics industry along Moore's curve. Along with the increased packing density there is, of course, a desire to continue to improve the performance of the resulting circuits in terms of operating speed, as well as to maintain some control over the power dissipation. Thus, it is important to understand how device scaling, and in particular the ITRS scaling scenario, affects fundamental MOS device performance. For digital circuits, three of the most important device parameters are:
a. Maximum saturated drain current (Idsat) when V_G = V_D = V_DD.
b. Off-state drain leakage current (Ioff) when V_G = 0, V_D = V_DD.
c. Device capacitances.


FIGURE 1.10 Projected scaling of gate thickness (tgate) and spacer thickness (tw). [Plot: dimension (nm), 0.1–1000, vs. technology generation (nm), 250 down to 16; open data points, 1997 ITRS; closed data points, 2003 ITRS.]

In turn, the saturated drain current depends on further device parameters, such as threshold voltage and DIBL, which then become important device parameters. A frequently used first order model for saturated drain current is [1]

\[ I_{dsat} = W v_{sat} C_{ox} (V_G - V_T - V_{dsat}) \]
\[ I_{dsat} = \frac{W \mu_n C_{ox}}{L_{eff}}\,\frac{1}{1 + V_{dsat}\mu_n/(v_{sat} L_{eff})}\left( V_G - V_T - \frac{V_{dsat}}{2} \right) V_{dsat} \tag{1.6} \]

where μn is the average channel mobility and vsat is the high-field saturated drift velocity of the channel carriers (electrons or holes). The first equation comes from the saturated-velocity expression for the current near the drain, while the second comes from integrating the channel potential equation along the channel. While these are first order model equations, they include much of the major device physics of transport along the surface channel. Both equations must be satisfied at the drain saturation voltage, from which we can evaluate

\[ V_{dsat} = \frac{L_{eff} v_{sat}}{\mu_n}\left[ \sqrt{1 + \frac{2(V_G - V_T)\mu_n}{L_{eff} v_{sat}}} - 1 \right] \tag{1.7} \]

For maximum drive current, the gate voltage is set to the power supply voltage. Using this expression in either form of Equation 1.6 then gives the maximum saturated drain current. If we define the quantities

\[ Y = \frac{(V_G - V_T)\mu_n}{L_{eff} v_{sat}}, \qquad X = \frac{V_{dsat}}{V_G - V_T} \tag{1.8} \]

then the saturated drain current can be expressed as

\[ I_{dsat} = W v_{sat} C_{ox} (V_G - V_T)\, F_I(X), \quad \text{where } F_I(X) = 1 - X \text{ and } X = \frac{1}{Y}\left[ \sqrt{1 + 2Y} - 1 \right] \tag{1.9} \]
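The first order model of Equations 1.6 through 1.9 is simple enough to evaluate directly. The Python sketch below codes Equations 1.8 and 1.9; the device values in the example call are assumed, illustrative placeholders rather than ITRS entries.

```python
import math

def idsat_per_width(V_G, V_T, L_eff, mu_n, v_sat, C_ox):
    """First order saturated drain current per unit gate width.

    Implements Equations 1.8 and 1.9: Y and X per Equation 1.8,
    F_I(X) = 1 - X, and Idsat/W = v_sat*C_ox*(V_G - V_T)*F_I(X).
    Units: V, cm, cm^2/Vs, cm/s, F/cm^2 -> returns A per cm of width.
    """
    Y = (V_G - V_T) * mu_n / (L_eff * v_sat)     # Equation 1.8
    X = (math.sqrt(1.0 + 2.0 * Y) - 1.0) / Y     # Equation 1.9
    return v_sat * C_ox * (V_G - V_T) * (1.0 - X)

# Example with assumed values; mobility and v_sat follow the text.
I_W = idsat_per_width(V_G=1.0, V_T=0.2, L_eff=4.5e-6,   # 45 nm Leff
                      mu_n=280.0, v_sat=1.0e7, C_ox=2.3e-6)
print(f"Idsat/W = {I_W * 100:.0f} uA/um")   # A/cm -> uA/um; ~990
```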


An upper limit on F_I(X) is F_I(X) → 1, as carrier velocity saturation becomes the dominant factor limiting the saturation current. The surface-channel mobility μn is not exactly a constant, as assumed in deriving this set of device equations, but is known to depend on the effective surface electric field [10], which can be expressed as

\[ E_{eff} \approx \frac{C_{ox}}{2\varepsilon_{Si}}\left[ V_G + V_T + 2\left( \frac{V_{gap}}{2} - \phi_B \right) \right] \tag{1.10} \]

where φB has the usual meaning as in Equation 1.2 and V_gap is a voltage corresponding to the silicon bandgap (1.1 V). Under most conditions (V_gap/2 − φB) can be neglected and

\[ E_{eff} \approx \frac{C_{ox}}{2\varepsilon_{Si}}(V_G + V_T) = \frac{\varepsilon_{ox}}{2\varepsilon_{Si} t_{civ}}(V_G + V_T) \approx (V_G + V_T)/6 t_{civ} \tag{1.11} \]

The last expression has previously been used by Hu [11] as an approximation to the effective surface field. From these first order device current equations, there are three "field" terms of importance: (a) (V_G − V_T)/L_eff, (b) (V_G + V_T)/6t_civ, and (c) (V_G − V_T)/t_civ. The first is a lateral electric field along the channel; the second is the effective field in the semiconductor at the surface; and the last is the excess electric field across the gate dielectric, which is responsible for the channel charge. This last field is somewhat less than the oxide electric field, which can be some 20%–25% larger due to the depletion layer charge needed before the conductive channel is established. In terms of MOS device performance, a critical parameter is the effective channel length, L_eff. However, this is not a parameter specified by the ITRS projected scaling values; it depends critically on the contacting junction formation techniques and not just on the physical line width. In order to make good ohmic contact to the channel, the effective channel length must be somewhat less than the physical gate length, so that the contacting layer extends under the gate oxide for some small distance. To carry the analysis forward, some value is needed for the effective channel length; here it is assumed to be 0.7 times the physical gate length. This essentially means that the effective channel length is assumed to be one technology generation ahead of the physical gate length and to remain a fixed percentage of the physical gate length for each technology generation. Manufacturing problems associated with achieving all the ITRS parameters are covered in subsequent sections. Figure 1.11 shows plots of these three fields according to the ITRS scaling rules for each major technology node. The open data points are obtained from the 1997 roadmap values, while the closed data points are from the projected scaling in the 2004 roadmap. First, note that the three fields are approximately constant using the 1997 parameters. This is again indicative of the essentially constant field scaling during the golden era of scaling. For the 2004 scaling projections, the vertical fields, Ec and Eeff in the figure, are approximately constant for projected nodes of 65 nm and below, but are larger by about a factor of 2 than the 1997 ITRS values. This increase will have two effects: (1) a possible increase in gate leakage due to the increased gate field and (2) a decrease in average surface carrier mobility due to the increased effective surface field. The major change reflected in the 2004 projections is the greatly changed functional behavior of the lateral MOS device field, Elat in Figure 1.11. This is due to the reduced scaling of supply voltage relative to the scaling of the effective channel length. In terms of MOS device operation, this will enhance velocity saturation effects, as the point along the channel where velocity saturation occurs will move closer to the source contact with the increased average lateral field. In terms of the device current of Equation 1.9, this will push the F_I(X) factor closer to unity. In summary, for the device electric fields, the 2004 projected scaling is essentially constant field scaling for the vertical MOSFET fields together with an ever-increasing lateral average field. These field definitions are illustrated numerically in the sketch below.
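The following minimal sketch evaluates the three field definitions for one assumed set of device parameters; all values are illustrative placeholders, not roadmap entries.

```python
# The three first order field terms defined above, for one assumed
# (purely illustrative) set of device parameters.
V_DD  = 1.0         # V, supply voltage (assumed)
V_T   = 0.2         # V, threshold voltage (assumed)
L_g   = 45e-7       # cm, physical gate length (45 nm, assumed)
L_eff = 0.7 * L_g   # effective channel length, per the 0.7x assumption
t_civ = 1.5e-7      # cm, inversion CET (assumed)

E_lat = (V_DD - V_T) / L_eff         # lateral field along the channel
E_eff = (V_DD + V_T) / (6 * t_civ)   # effective surface field, Eq. 1.11
E_c   = (V_DD - V_T) / t_civ         # excess field across the dielectric

for name, E in [("Elat", E_lat), ("Eeff", E_eff), ("Ec", E_c)]:
    print(f"{name} = {E:.2e} V/cm")
```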


FIGURE 1.11 Projected scaling of various device field parameters. [Plot: field (V/cm), 10^5–10^7, vs. technology generation (nm) and year, 250 (1997) to 16 (2019); Ec = (VDD − VT)/tciv, Eeff = (VDD + VT)/6tciv, Elat = (VDD − VT)/Leff; open data points, 1997 ITRS; closed data points, 2004 ITRS; note the large increase in Elat in the 2004 ITRS relative to the golden era of scaling.]

To complete the model parameters, values of mobility and saturation velocity are needed. For the carrier saturation velocity, a value of 1×10^7 cm/s is frequently used for long-channel devices. There may be some tendency for this value to increase at small device dimensions because of velocity overshoot effects; however, for an initial calculation this is not considered. The electron mobility is strongly dependent on the effective surface electric field because of surface scattering. At a peak effective surface field of 1×10^5 V/cm, the surface mobility of electrons varies from approximately 250 to 320 cm^2/V s [10], so for the 1997 scaling an average value of 280 cm^2/V s can be used. For the 2004 scaling projections, the peak surface effective field is expected to increase to about 2×10^5 V/cm. Unfortunately, this increase will reduce the surface mobility by about a factor of 2 (in this region of field, the surface mobility is approximately inversely proportional to the surface field). In order to illustrate the separate effects of the lateral field and the reduced surface mobility, results are shown for the 2004 parameters using both 280 and 140 cm^2/V s. The consequences of these projected scaling rules can be seen with respect to the first order device parameters in Figure 1.12. For the 1997 constant field scaling, the parameters are essentially constant for each technology generation, as expected. For the 2004 scaling, the drain saturation voltage is an ever decreasing function of the technology generation, being about 0.33 times the excess voltage above threshold for the 90-nm node and decreasing to about 0.20 times the excess voltage for the 16-nm node. This is a direct consequence of the increased lateral field seen in Figure 1.11, which in turn causes an increase in the F_I device equation factor, as seen in the figure. This factor has a simple physical interpretation: it represents the fraction of the saturation velocity v_sat that the carriers have achieved when current saturation occurs. Because of the higher lateral fields, the figure shows that this factor slowly increases with the shrinking of device dimensions. The final important parameter is the device current, or rather the projected current per unit length of gate. This is shown for the first order model and the above parameters in Figure 1.13. For the 1997 constant field parameters, this value is constant at about 760 μA/μm (or mA/mm in the ITRS notation). For the 2004 projected scaling parameters, the maximum saturated drive current is projected to increase somewhat from the 90-nm node to the 45-nm node, but then stay relatively constant to the 16-nm node. The model values are close to the ITRS desired roadmap values to the 45-nm node, but then fall below the ITRS projections.


FIGURE 1.12 Projected scaling of various metal oxide semiconductor device current density factors. [Plot: device factors, 0.00–1.00, vs. technology generation (nm) and year, 250 (1997) to 16 (2019); curves: FI, Vdsat/(VDD − VT), and VT/VDD; open data points, 1997 ITRS; closed data points, 2004 ITRS; μn = 280 and 140 cm^2/V s.]

By comparing the saturated current values with the model equations, one can see that most of the projected increase in drive current comes from the increase in the Cox(V_G − V_T) product, that is, from the increase in the vertical electric field seen in the effective and oxide fields in Figure 1.11. A small amount of the increase comes from the increased lateral field, because of the upper limit of the carrier saturation velocity.

FIGURE 1.13 Projected scaling of n-metal oxide semiconductor saturated drain current at maximum gate voltage. [Plot: Idsat (μA/μm or A/m), 0–2500, vs. technology generation (nm) and year, 250 (1997) to 16 (2019); simple model predictions for μn = 280 and 140 cm^2/V s, compared with the 1997 and 2004 ITRS roadmap values; open data points, 1997 ITRS; closed data points, 2004 ITRS.]


FIGURE 1.14 Projected scaling of p-metal oxide semiconductor saturated drain current at maximum gate voltage. [Plot: Idsat (μA/μm or A/m), 0–2000, vs. technology generation (nm) and year, 250 (1997) to 16 (2019); simple model predictions for μp = 70 and 35 cm^2/V s, compared with the 1997 ITRS roadmap values and the 2004 ITRS roadmap values (50% of the n-MOS values); open data points, 1997 ITRS; closed data points, 2004 ITRS.]

A similar set of calculations can be made for holes, using surface mobilities of approximately 70 and 35 cm^2/V s for the 1997 and 2004 parameters, respectively, and using a v_sat value of 8.4×10^6 cm/s. These values give the projected p-channel current values shown in Figure 1.14. One can again see that there is a considerable increase in the projected saturation current using the 2004 scaling parameters. In this case, the increase comes both from the increased lateral field and from the projected increase in the surface charge density due to the increased surface field with the larger power supply voltage. Here the simple model predicts p-channel saturated current values slightly larger than the ITRS projected values. The relative enhancement of the p-channel current with respect to the n-channel current arises in the simple model because the ratio of saturated velocities for holes to electrons is larger than the corresponding ratio of low field mobilities. The increased lateral field, with scaling, pushes both types of devices closer to the velocity saturation limit, hence increasing the p-channel saturation current by a larger factor than the n-channel saturation current. The relative ratio of hole-to-electron current remains essentially constant for constant field scaling in both the vertical and lateral dimensions. The simple model neglects at least two important factors that must be considered in any more realistic estimate of current drive. First, the DIBL and channel-length modulation effects cause an increase in current above that predicted by such a first order model. As indicated in Figure 1.5, this can cause a relatively large percentage increase in the available drive current at the supply voltage value; a reasonable enhancement might be a 25%–30% increase in current. A second neglected factor is the effect of source and drain series resistance. These resistances reduce the current drive by effectively reducing the internally applied gate-to-source voltage. To minimize this latter effect, the source/drain contacts must be very carefully constructed, and this becomes more difficult as device dimensions shrink. Typical source/drain resistances can easily reduce the available current drive by 25%–30%. Thus, these two neglected factors tend to offset each other, one increasing current drive and the other decreasing it. For the purpose here, it is simply assumed that these effects offset each other and that the simple model gives reasonably good first order approximations to the available current drive at the projected ITRS scaling. In any case, the user should not place high confidence in the exact projected values, but only accept them to within some 25%–30% accuracy. Difficulties in achieving the scaled parameters, and possible enhancements, are discussed in a subsequent section on manufacturing issues.


Device current drive is important because it relates to other important parameters, such as switching speed (or logic gate delay) and power dissipation. In terms of logic gate delay, the important parameters can be obtained from the simple equation

\[ I_{dsat} = C_L \Delta V_L / \Delta t \tag{1.12} \]

where C_L is the gate load capacitance and ΔV_L is the change in load voltage in time Δt. From this, we can calculate a gate delay as

\[ t_d = C_L \Delta V_L / I_{dsat} \tag{1.13} \]
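For a feel for the magnitudes involved, a minimal sketch with assumed, purely illustrative values:

```python
# Gate delay from Equation 1.13; every value here is assumed.
C_L  = 1.0e-15   # F, load capacitance (assumed)
dV_L = 1.0       # V, logic voltage swing (assumed)
I_on = 1.0e-3    # A, available saturated drive current (assumed)

t_d = C_L * dV_L / I_on
print(f"td = {t_d * 1e12:.1f} ps")   # 1.0 ps
```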

This illustrates the importance of saturated drive current in achieving ever smaller gate delays, which can translate into higher system clock frequencies. While the load capacitance is a complex combination of device capacitances and wiring layout capacitances, it is generally agreed that, to first order, the load capacitance varies directly with the lithography dimension, or with the technology generation. In the golden era of scaling, the saturation current per unit gate length was approximately constant, implying that the saturation current was also varying directly with the technology generation; the logic voltage level was likewise scaling directly with the technology generation. Thus, if we let a (≈√2) be the scale factor (or perhaps the inverse scale factor) per technology generation, then for constant field scaling

\[ C_L \propto 1/a, \quad \Delta V_L \propto 1/a, \quad I_{dsat} \propto 1/a, \quad \text{and} \quad t_d \propto \frac{(1/a)(1/a)}{1/a} = 1/a \tag{1.14} \]

The last relation is the most important: constant field scaling gives a gate delay that decreases by approximately 0.707 (1/a) per technology generation. This affords a path for the system level clock frequency to increase by about 40% (by the factor a = 1.414) for each technology generation, and it was used by the industry during the golden era of scaling not only to increase packing density and functionality, but also to rapidly increase the clock frequency of microprocessors by many factors of 2. The projected scaling in the 2004 ITRS makes it considerably more difficult to continue decreasing the gate delay, because the highly desired decrease in logic voltage level has been considerably slowed. A detailed look at the projected scaling of the power supply voltage in Figure 1.8 shows that the decrease has changed to about 15% per technology generation rather than the previous 30% per generation. To a first order approximation, then,

\[ \Delta V_L \propto 1/\sqrt{a} \tag{1.15} \]

In order to maintain a gate delay that improves in the same manner as for constant field scaling, one must require a slower decrease in saturation drain current than the technology generation scaling; the required decrease in saturation current is then the same as that of the power supply voltage. If we then use the scaling factors

\[ C_L \propto 1/a, \quad \Delta V_L \propto 1/\sqrt{a}, \quad I_{dsat} \propto 1/\sqrt{a} \tag{1.16} \]
then one finds
\[ t_d \propto 1/a \quad \text{but it requires that} \quad I_{dsat}/W \propto \sqrt{a} \tag{1.17} \]

This last equation summarizes the important gate delay projections from the 2004 ITRS. The gate delay is projected to continue to decrease directly with the technology generation, but to achieve this, the saturated current per unit gate width must be forced to increase with each technology generation, as the sketch below tabulates.
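The contrast between the constant field rules of Equation 1.14 and the 2004-style rules of Equations 1.16 and 1.17 can be coded in a few lines; the a = √2 per-generation factor follows the text.

```python
import math

# Relative gate delay and required Idsat/W after n generations under
# the constant field rules (Equation 1.14) and the 2004-style rules
# (Equations 1.16 and 1.17), with a = sqrt(2) per generation.
a = math.sqrt(2.0)

def constant_field(n):
    """(td, Idsat/W) relative to generation 0, Equation 1.14."""
    return (1 / a) ** n, 1.0

def itrs_2004(n):
    """(td, Idsat/W) relative to generation 0, Equations 1.16-1.17."""
    return (1 / a) ** n, a ** (n / 2)   # Idsat/W grows as sqrt(a)/gen

for n in range(5):
    td_cf, _ = constant_field(n)
    td_it, iw_it = itrs_2004(n)
    print(f"gen {n}: td = {td_cf:.2f} (both)  "
          f"required Idsat/W (2004) = {iw_it:.2f}")
# After four generations the required Idsat/W doubles, as noted below.
```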


The 2004 ITRS clearly states that the proposed scaling scenario is based upon this important assumption. The projected linear increase in saturated current per unit width seen in Figure 1.13 and Figure 1.14 for n- and p-channel devices is a direct consequence of this assumption. Under this scenario, four technology generations require a doubling of the saturated drive current per unit length, and this is essentially the projection seen in Figure 1.13 and Figure 1.14 for the ITRS values. Another possible scenario would be to accept a device scaling that produces a constant current per unit width and accept a slower decrease in gate delay per technology generation. If one considers keeping the saturated current per unit gate length constant, then this leads to t_d ∝ 1/√a instead of the relationship of Equation 1.17. In this case, four technology generations would be required to double the clock frequency instead of the two generations with the projected ITRS scaling. Another very important scaling parameter is the power dissipation per gate and the power density, or power per unit gate area. Traditionally, power dissipation in CMOS circuits has been dominated by the dynamic switching power because of the very low gate and drain currents in the two stable logic states. Considering only this dynamic power for one gate, this can be expressed as

\[ p_g = \tfrac{1}{2} C_L V_{DD}^2 f, \qquad \frac{p_g}{A_g} = \frac{C_L V_{DD}^2 f}{2 A_g} \tag{1.18} \]

where C_L is the load capacitance that is being switched at some frequency f. The first form expresses power per gate, while the second form represents power density, or power per unit gate area A_g. While both of these are important, the power density is probably the more important factor, as present ICs are close to perceived limits in terms of handling power per unit area. For constant field scaling with the factors of Equation 1.14, this becomes

\[ \frac{p_g}{A_g} \propto \frac{(1/a)(1/a)^2 f}{(1/a)^2} = f/a \tag{1.19} \]

To keep constant power per unit gate area, one can then increase the clock frequency directly with the scaling factor per generation, that is, double the clock frequency every two technology generations. Of course, one must also consider total chip power, which for increasing chip areas would continue to increase with each generation. However, if power per unit area is the limiting factor, the decreased gate switching time can be directly employed to increase the system level clock frequency. Things have now changed, however, with the 2004 ITRS projections, due to the slower rate of decrease in supply voltage. The decrease in voltage squared will no longer offset the decreasing area; in fact, both the capacitance and the voltage decreases are needed to offset the decreasing area, and one obtains with the 2004 ITRS scaling

\[ \frac{p_g}{A_g} \propto \frac{(1/a)(1/a) f}{(1/a)^2} = f \tag{1.20} \]

The power per unit area is now directly proportional to the clock frequency, and any increase in the clock frequency will result in an increased power density! This is an important implication of the slowed voltage scaling of the 2004 ITRS: IC design and performance is no longer in the golden era of scaling that prevailed for many generations before about the 90-nm node. For system applications, one needs to take a somewhat more general interpretation of the clock frequency in the above equations. For a large system, not all gates will be operating at any given time, and only the active gates contribute to the dynamic power dissipation. Thus, we can expand the interpretation of the frequency to an equivalent clock frequency with



\[ f = f_{clk} f_{act} \tag{1.21} \]

where the two factors are the clock frequency (clk subscript) and the fraction of gates that are active (act subscript). It is then the product of the system level clock frequency and the fraction of active gates that must remain constant with the 2004 ITRS scaling. Thus, the clock frequency could still be increased by 40% per technology node, provided an additional 40% of the gates were inactive at all times, and the average power per gate would not change. These tradeoffs obviously offer many possible alternatives, such as doubling the circuit functionality while keeping the same clock frequency at a new technology node; for example, two microprocessors at the same clock frequency could be implemented, instead of increasing the clock frequency and reducing the percentage of active circuits. In addition to the dynamic switching power, static power dissipation is becoming a major design parameter for scaled circuits. The MOS off-state drain leakage current, Id,leak, consists of components due to (1) gate leakage, (2) drain-substrate leakage, and (3) drain-to-source leakage. As covered in more detail later, all of these components tend to increase exponentially with decreasing device dimensions, and controlling them in the manufacturing process is a major challenge. For proper circuit operation and power management, these parasitic and undesirable currents must be kept below some upper limit. Static off-state power dissipation can be expressed as

\[ p_{off} = V_{DD} I_{d,leak} \tag{1.22} \]
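As a rough numerical illustration of Equations 1.18 and 1.22, the sketch below computes the dynamic and static power for a single gate; all device values are assumed, illustrative placeholders.

```python
# Dynamic switching power (Equation 1.18, first form) and static
# off-state power (Equation 1.22) per gate. All values assumed.
C_L    = 1.0e-15   # F, switched load capacitance (assumed)
V_DD   = 1.0       # V, supply voltage (assumed)
f      = 2.0e9     # Hz, effective switching frequency (assumed)
I_leak = 1.0e-7    # A, off-state drain leakage per device (assumed)

p_dyn = 0.5 * C_L * V_DD**2 * f   # Equation 1.18
p_off = V_DD * I_leak             # Equation 1.22

print(f"dynamic power/gate = {p_dyn * 1e6:.2f} uW")   # 1.00 uW
print(f"static power/gate  = {p_off * 1e6:.2f} uW")   # 0.10 uW
```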

Limits for the allowable leakage current come not from fundamental device limits, but from circuit and system limits on the allowable static off-state power that can be managed in a given IC application. In general, one desires a low ratio of off-state to on-state current. Acceptable past values for MOS devices have been in the range of 1×10^-5 to 1×10^-4, and acceptable values were not even specified in the 1997 ITRS. Figure 1.15 shows the off-to-on current ratios specified in the 2004 ITRS for the 90- to 22-nm technology nodes. Three curves are shown for (1) HP, (2) low operating power (LOP), and (3) low standby power (LSP) devices. While some increase is projected in the allowable leakage for scaled devices, the projected leakage is in the range of 1×10^-4, 1×10^-5, and 1×10^-7 for the three device applications, respectively.

FIGURE 1.15 Projected ratio of allowable off-state leakage current to saturated drain current. [Plot: Id,leak/Idsat, 10^-8–10^-3, vs. technology generation (nm), 90 to 16; 2004 ITRS values for high performance, low operating power, and low standby power devices.]


Difficulties in achieving these numbers are further discussed in the manufacturing section. Static power dissipation at the system level is becoming much more of a problem with further scaling of devices and is a major consideration for highly desirable low power applications, especially LSP applications. In terms of the future scaling of off-state power per unit device area, consider the best case scenario, where the leakage current remains a constant multiple of the saturated device current. In this case, and for the 2004 ITRS scaling factors of Equation 1.16, one obtains

\[ \frac{p_{off}}{A_g} \propto \frac{(1/\sqrt{a})(1/\sqrt{a})}{(1/a)^2} = a \tag{1.23} \]

This indicates that the off-state power per unit gate area is expected to increase with future generations even with a fixed off-to-on current ratio, and will increase even faster with the increased off-state current shown in Figure 1.15. This is another indication that power management is becoming more and more of a major technology limiter. Various power-down techniques can perhaps be used to help manage this problem at the circuit and system level, as there does not appear to be any solution at the basic device level. These various scaling rules can be conveniently summarized in a table, such as Table 1.1. For the most generalized scaling, it is assumed that different factors could be used for the following parameters: (1) device width and length (a_L), (2) oxide thickness (a_ox), (3) supply voltage (a_v), (4) saturation current (a_I), (5) wiring capacitance (a_W), and (6) effective clock frequency (a_F). In the case of traditional constant field scaling, all of these factors have the same value. For the major technology nodes, the length scaling factor is approximately 1.414. The third column in the table summarizes the approximate scaling for the 2004 ITRS, where the supply voltage and oxide thickness scale at approximately half the rate of the technology generation. This table is a convenient summary of the ITRS scaling discussed previously in this section, and of some of the difficulties this presents in regard to controlling the power density and increasing the clock frequency of future ICs.

TABLE 1.1 Technology Scaling Rules for Generalized Scaling, Constant Field Scaling, and 2004 ITRS Scaling

Physical Parameter | Generalized Scaling | Constant Field Scaling (Golden Era of Scaling) | 2004 ITRS Scaling (a_L = 1.414 for Each Major Technology Node)
Device dimensions (L, W) | 1/a_L | 1/a | 1/a_L
MOS oxide capacitance thickness | 1/a_ox | 1/a | 1/√a_L
Supply voltage | 1/a_v | 1/a | 1/√a_L
Idsat | 1/a_I | 1/a | 1/√a_L
Idsat/W | a_L/a_I | 1 | √a_L
Doping density | a_L^2/a_v | a | a_L^(3/2)
Wiring length | 1/a_W | 1/a | 1/a_L
MOS device area | 1/a_L^2 | 1/a^2 | 1/a_L^2
Logic gate area | 1/a_W^2 | 1/a^2 | 1/a_L^2
Wiring capacitance | 1/a_W | 1/a | 1/a_L
Logic gate delay (t_d) | a_I/(a_W a_v) | 1/a | 1/a_L
Logic clock frequency | a_F | a | a_F
Gate power dissipation | a_F/(a_W a_v^2) | 1/a^2 | a_F/a_L^2
Gate power density | a_F a_L^2/(a_W a_v^2) | 1 | a_F
Off-state current | 1/a_I | 1/a | ≥ 1/√a_L
Off-state power density | a_L^2/(a_v a_I) | 1 | ≥ a_L


1.4 Manufacturing Issues and Challenges

The scaling projections in the previous section, and those developed by the industry in the ITRS, represent a desirable set of device goals in terms of device dimensions and performance that are needed to continue the exponential growth in IC system performance in future years. The scaling and device performance projections do not necessarily take into account physical material and device structure limits that may make it very difficult, if not impossible, to achieve these goals. Some considerations of physical limits are incorporated into the scaling projections, especially in regard to limits on gate dielectric thickness. These considerations are the fundamental reason that the projected scaling for power supply voltage and oxide thickness was fundamentally changed between the 1997 and 2004 ITRS documents: it was simply realized that achieving the oxide thicknesses projected in the 1997 document was physically impossible. Likewise, some of the projections in the 2004 ITRS for device performance and/or size scaling may prove to be physically impossible to achieve. This section presents some of the major challenges faced by the semiconductor industry in achieving the dimensional and performance scaling of CMOS devices discussed in previous sections. Very serious manufacturing problems must be solved to achieve the projected scaling to the end of conventional CMOS devices.

1.4.1 MOSFET Gate Stack Issues

The gate stack design is the key part of the MOSFET. It has been recognized for some time that continuing to scale the gate stack with the same materials, silicon dioxide and polysilicon, will not produce acceptable devices for gate oxide thicknesses approaching 1 nm and beyond. Unlike in the golden era of scaling, innovative approaches to both the gate dielectric and the contacting material are necessary. The gate stack (see Figure 1.7) is composed of (1) the oxide–silicon interface, (2) the gate insulator, and (3) the gate contact layer. For devices approaching the end of the roadmap, the gate dielectric performance at a projected thickness of less than 1 nm is critical; SiO2 cannot be utilized at these dimensions. Aside from reproducibility and manufacturability issues, tunneling currents are the major problem. Figure 1.16 shows experimental gate oxide tunneling currents for oxide thicknesses ranging from 3.5 to 1.4 nm at applied voltages where direct tunneling dominates the current [12]. While these currents are large, they must be evaluated in terms of what might be an acceptable level in a particular device application. It has been experimentally shown that good device characteristics can be achieved with very thin oxides, provided the gate length is sufficiently short [13]. A gate current of 1 A/cm^2 was suggested early on as an upper limit [12]. However, it is now generally accepted that the allowable gate current can be much larger from a device performance point of view; the ITRS document for HP devices lists acceptable gate current densities as high as 1×10^4 A/cm^2. It is important to understand how such large gate currents can be considered acceptable. If one assumes that an Ioff/Ion ratio of 10^-4, as shown in Figure 1.15 for HP devices, is acceptable and that half of the off-state leakage can be due to gate current, then we can write

\[ \frac{I_g}{A_g} = \frac{I_g}{W L_p} \le 5\times 10^{-5}\,\frac{I_d}{W L_p} \tag{1.24} \]

The worst case (lowest limit) occurs for the minimum drain current, which occurs for p-MOS devices and ranges, in terms of current per unit width, from about 500 to 1000 A/m (μA/μm) for the 90- to 16-nm nodes. Using the lower value, we can obtain

\[ \frac{I_g}{A_g} \le \frac{2.5\times 10^{-4}\,\text{A/cm}}{L_p} = \frac{2500\,\text{A/cm}^2}{L_p/\text{nm}} \tag{1.25} \]
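The leakage budget of Equations 1.24 and 1.25 is easy to reproduce numerically. The sketch below assumes, as the text does, that half of a 1×10^-4 off-to-on budget is allotted to the gate and that the worst-case p-MOS drive is 500 μA/μm; the two gate lengths used are the approximate physical gate lengths at the 90- and 16-nm nodes.

```python
def max_gate_leakage(L_p_nm, Id_per_W=500.0, ratio=0.5e-4):
    """Upper bound on Ig/Ag (A/cm^2) from Equations 1.24 and 1.25.

    L_p_nm: physical gate length in nm; Id_per_W: drive current in
    uA/um (numerically equal to A/m); ratio: allowed Ig/Id fraction.
    """
    Id_per_W_A_cm = Id_per_W * 1e-2          # A/m -> A/cm
    return ratio * Id_per_W_A_cm / (L_p_nm * 1e-7)

for L_p in (37, 6):   # approx. gate lengths at the 90- and 16-nm nodes
    print(f"Lp = {L_p} nm: Ig/Ag <= {max_gate_leakage(L_p):.0f} A/cm^2")
# -> ~68 A/cm^2 and ~417 A/cm^2, the values quoted in the text
```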


For the 90- to 16-nm technology generations, this gives values from about 68 to 417 A/cm^2. For an n-MOS gate, the limits would be about a factor of 4 larger, keeping the same ratio of off-to-on current. As noted in the previous section, the 2004 ITRS projects even larger ratios of off-to-on current for the smaller device dimensions. In any case, for the HP devices the gate current density can be quite large without degrading device performance. For LOP and LSP device applications, the current density must be considerably lower. Even though the current densities may seem high by conventional wisdom about a dielectric, the total currents can be a small fraction of the saturated drain current; limits to gate current come not from degradation in device performance, but from overall chip power considerations. It is now certain that SiO2 for the MOS gate will be used at considerably smaller device dimensions than was thought a few years ago. However, the data in Figure 1.16 illustrate the gate leakage problem with SiO2: for a given voltage, the current increases by about a factor of 10 for each decrease of about 0.2 nm in oxide thickness, due to direct tunneling through the oxide. The 2004 ITRS projects that oxynitrides, which are somewhat better than pure SiO2, will not satisfy the leakage current requirements at the 45-nm node and beyond, and that improved gate dielectric materials are needed for future technology generations. The purpose here is not to try to project exactly when alternative dielectrics will be required, but to discuss the potential advantages of improved gate dielectrics. The potential for reduced gate leakage with advanced dielectrics can be seen from the first order equations for the carrier tunneling probability T through a barrier of height Vb and thickness td [14]:

\[ \text{(a) Low fields:}\quad T \propto \exp\!\left( -\sqrt{\frac{2 m q V_b}{\hbar^2}}\; t_d \right) \]
\[ \text{(b) High fields:}\quad T \propto \exp\!\left( -\frac{4}{3}\sqrt{\frac{2 m (q V_b)^3}{\hbar^2}}\; \frac{t_d}{qV} \right) \tag{1.26} \]

where V/td is the electric field across the barrier. From this, we can identify the important material and device parameters determining the tunneling current, as given in Equation 1.27 below:

FIGURE 1.16 Measured and simulated gate currents for thin SiO2 gate stacks. [Plot: gate current density (A/cm^2), 10^-8–10^4, vs. gate voltage (V), 0–3; n-MOSFET measurements and simulations for tox from 36 to 15 Å.] (From Taur, Y., D. A. Buchanan, W. Chen, D. J. Frank, K. E. Ismail, S-H. Lo, G. A. Sai-Halasz et al., Proc. IEEE, 85, (1997): 486.)


\[ \text{(a) Low fields:}\quad \sqrt{m V_b}\; t_d, \qquad \text{(b) High fields:}\quad \sqrt{m}\, V_b^{3/2}\, t_d \tag{1.27} \]
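To attach a number to the low field case, the sketch below evaluates the exponent of Equation 1.26(a) for SiO2; the tunneling effective mass (0.4 m0) and the physical thickness are assumed values, since, as the discussion of Table 1.2 notes, reliable effective masses are scarce.

```python
import math

# Low field tunneling exponent sqrt(2*m*q*Vb/hbar^2)*td, Eq. 1.26(a),
# for SiO2. Effective mass and thickness are assumed values.
HBAR = 1.055e-34   # J s
Q    = 1.602e-19   # C
M0   = 9.109e-31   # kg

m_eff = 0.4 * M0   # tunneling effective mass (assumed)
V_b   = 3.0        # V, SiO2 barrier height (Table 1.2)
t_d   = 1.5e-9     # m, physical thickness (assumed)

kappa = math.sqrt(2.0 * m_eff * Q * V_b) / HBAR   # 1/m
print(f"exponent = {kappa * t_d:.1f}")   # ~8.4, i.e., T ~ exp(-8.4)
```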

When considering different possible gate dielectric materials, a device constraint is the need to maintain the same capacitance across the dielectric, so that the same channel charge can be induced into the semiconductor with the same gate voltage. From the capacitance equation, we can see that for the same capacitance the dielectric thickness varies directly with the dielectric constant:

\[ t_d = \frac{\varepsilon_d}{C} \propto \varepsilon_d \tag{1.28} \]

We can thus define a tunneling current "figure of merit," which captures the important material and device parameters for comparing the tunneling currents of different dielectrics:

\[ \text{(a) Low fields:}\quad \sqrt{m V_b}\; \varepsilon_d, \qquad \text{(b) High fields:}\quad \sqrt{m}\, V_b^{3/2}\, \varepsilon_d \tag{1.29} \]

Since these parameters appear as negative quantities in the exponential factor, whichever material has the largest value of this parameter will, in theory, have the lowest tunneling current. If only the dielectric constant varied between materials, one would always want a dielectric material with a very high dielectric constant. However, there is a strong tendency for materials with high dielectric constants to also have smaller bandgaps, which translate into lower barrier heights. Figure 1.17 shows some of the most important potential high-k dielectrics and their band offset values vs. dielectric constant. While the high-k materials tend to have smaller bandgaps, from the above figure of merit one can see that a doubling of the dielectric constant is more important than a reduction in the barrier height by a factor of 2; the tunneling current is very sensitive to the tunneling barrier thickness. Table 1.2 shows values of these figure of merit parameters for several materials, assuming that the effective mass is the same for all the materials. This is done because of the lack of reliable effective mass values for the various materials. Two sets of values are shown for several of the materials because of some uncertainty with regard to dielectric constant and barrier height; when two values are given, the first set is very optimistic and the second corresponds to the low range of expected values.

FIGURE 1.17 Variation of conduction band offset with Si for various high-k dielectrics. [Plot: conduction band offset (eV), 0–4, vs. dielectric constant (K), 0–40; materials shown include SiO2, Si3N4, Al2O3, HfO2, ZrO2, Ta2O5, Y2O3, and BaZrO3.]

TABLE 1.2 Tunneling Figure of Merit Parameters for Several Dielectrics

Material | ε_d | V_b (V) | Low Fields: V_b^(1/2) ε_d | High Fields: V_b^(3/2) ε_d
SiO2 | 3.9 | 3.0 | 6.75 | 20.3
Si3N4 | 7.8 | 2.0 | 11.0 | 22.1
Ta2O5 | 25.0 | 1.5 | 30.6 | 45.9
Ta2O5 | 25.0 | 2.1 | 36.2 | 76.1
TiO2 | 50.0 | 1.0 | 50.0 | 50.0
TiO2 | 30.0 | 1.0 | 30.0 | 30.0
HfO2 | 28.0 | 1.5 | 34.3 | 51.4
HfO2 | 20.0 | 1.4 | 23.7 | 33.1
ZrO2 | 24.0 | 1.4 | 28.4 | 39.7
ZrO2 | 20.0 | 1.3 | 22.8 | 29.6

In general, more improvement is theoretically possible at low fields than at high fields, and higher dielectric constant materials offer larger potential improvements even though they tend to have reduced barrier heights. The larger parameters in the table would be expected to show the lowest leakage currents, all other things being equal. It must be remembered that the values in the table do not account for any possible differences in tunneling effective mass between the different materials; in most cases, one would expect the tunneling mass to be smaller in the higher-k dielectrics, negating some of their potential advantage. The case of Si3N4 is interesting because the low field parameter projects an improvement in the tunneling current, while the high field parameter projects little improvement over SiO2. For present and future MOS devices, the low field regime is the most important, and thus one does expect an improvement with nitrides and oxynitrides; oxynitrides have been verified by many investigators to show reductions in tunneling currents [15,16]. This is the first route toward reducing leakage currents in MOS devices and has already been implemented by IC manufacturers. A high-k dielectric such as HfO2 with a dielectric constant of 28 can be made about 28/3.9 ≈ 7.2 times physically thicker than SiO2 and give the same capacitance, or the same inversion layer charge at the same gate voltage. In such a case, a 7.2-nm thick HfO2 dielectric would have an EOT of only 1.0 nm and might potentially be much easier to manufacture and control. The ITRS anticipates the use of alternative high-k dielectrics for future technology generations and expresses the dielectric thickness in terms of EOT values. In terms of the real physical thickness (td) of a dielectric layer, the EOT can be expressed as

\[ \text{EOT} = t_d\,(\varepsilon_{ox}/\varepsilon_d) \tag{1.30} \]
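Putting Equations 1.29 and 1.30 together, the sketch below recomputes the figure of merit entries for a few of the Table 1.2 materials, along with the physical thickness each needs for a 1-nm EOT; equal effective mass is assumed, as in the table.

```python
import math

# Tunneling figures of merit (Equation 1.29) and the physical
# thickness giving EOT = 1 nm (Equation 1.30). Material values
# follow Table 1.2; equal effective mass is assumed throughout.
materials = {          # name: (dielectric constant, barrier Vb in V)
    "SiO2":  (3.9, 3.0),
    "Si3N4": (7.8, 2.0),
    "HfO2":  (28.0, 1.5),
}

for name, (k, Vb) in materials.items():
    fom_low  = math.sqrt(Vb) * k      # low field figure of merit
    fom_high = Vb**1.5 * k            # high field figure of merit
    t_phys   = 1.0 * k / 3.9          # nm of dielectric per nm of EOT
    print(f"{name:6s} low = {fom_low:5.1f}  high = {fom_high:5.1f}  "
          f"thickness for 1 nm EOT = {t_phys:4.1f} nm")
```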

In experimental measurements, such as capacitance measurements, only the EOT value can be determined unless one has a separate, independent measurement of physical thickness or dielectric constant. However, for MOS device applications it is the EOT value that is important in terms of induced channel charge in inversion. In addition to an appropriate barrier height and dielectric constant, any alternative gate dielectric must form a stable compound and a stable interface with silicon, as well as with the gate contact material, at all subsequent processing temperatures. There are potential problem areas with all candidate high-k dielectrics, and the search for the most appropriate high-k gate dielectric remains an important research area. The most promising material appears to be HfO2 and its alloys with silicon, the so-called hafnium silicates. The quality of the dielectric–silicon interface is critical for achieving high channel mobility, and it is not clear that the required low surface state density, low fixed charge, and smooth interface can be achieved with any material combination other than silicon–silicon dioxide. If such an interface layer is required, it may be extremely difficult to achieve EOT below about 0.5 nm, as this may be the range of interface oxide needed for good oxide–silicon properties. Some recognition of this is inherent in the ITRS projections, as the end-of-roadmap EOT value is projected to be approximately 0.5 nm.


The need for high-k gate dielectrics has been at the forefront of gate stack research since the 1997 ITRS, and much research has been done on a wide variety of MOS devices with high-k dielectrics. Many researchers have demonstrated orders-of-magnitude improvements in gate tunneling current over pure SiO2 for EOT values in the range of 1–1.5 nm [17,18]. Researchers have also reported capacitors and transistors with EOT values approaching the 0.5-nm range. The fundamental approach for reducing tunneling currents, as expressed in the figure of merit of Equation 1.29, is now well established. However, some problems have remained prevalent in the search for the ideal high-k gate dielectric. Many of the lowest-EOT experiments have shown unacceptable reductions in surface mobility, and many samples have shown unacceptable field-dependent threshold voltage shifts. The ideal way to transition an MOS surface from silicon into a high-k dielectric has proven to be a somewhat elusive goal, but progress is continually being made, and the future of scaled MOS devices seems of necessity to include a high-k material in order to reduce the gate tunneling currents to acceptable levels. Gate oxide reliability has always been a major concern as MOSFET devices are scaled. Operating voltages must remain below some maximum value, typically set by the requirement of a 20-year lifetime for devices operated at maximum voltage. Previously reported work on SiO2 has shown that gate oxide lifetime is determined primarily by the electric field applied to the oxide. In the era of constant field scaling, the oxide field tended to stay constant as device dimensions were scaled. To the same order of approximation as involved in Equation 1.10, one can write:

\[ E_{ox} \approx \frac{V_{DD} + (V_{bi} - 2\phi_B)/2}{t_{civ}} \approx \frac{V_{DD}}{t_{civ}} \tag{1.31} \]

Since the threshold voltage tends to be about 20% of the supply voltage, the oxide equivalent dielectric field according to the ITRS projected scaling rules should be about 1.24 times the Ec curve shown in Figure 1.11. Assuming this to be the case, the oxide equivalent dielectric field should remain below about 1×10^7 V/cm. The term equivalent dielectric field is used here because the dielectric will most likely not be pure SiO2, but some more appropriate high-k material. Previous studies of SiO2 have shown that it can achieve 20-year lifetimes provided the oxide field remains below about 8×10^6 V/cm [11]. The values in Figure 1.11 prior to the 65-nm node are consistent with this as an upper limit on the oxide field. Thus, there are potential dielectric reliability problems with scaling beyond the 90-nm node, as the gate field is projected to increase. An additional, somewhat unknown factor is the reliability of dielectrics carrying the very large projected tunneling currents. Carriers transiting the oxide by pure tunneling are not expected to interact with the atoms or create defects; however, some small fraction of the carriers will interact in the dielectric and create defect centers, which contribute to reliability problems. If a dielectric is sufficiently thin, charge trapping should not be a problem. For SiO2, it has previously been observed that charge trapping effects tend to become negligible at thicknesses around 4–6 nm, because any trapped carriers can rapidly tunnel out of the oxide into the gate or substrate [19,20]. Hot carrier reliability is expected to improve greatly at voltages below approximately 3 V, since few carriers can gain sufficient energy to create interface states or shallow oxide traps. Finally, as the dielectric EOT becomes thinner, the amount of interface or dielectric charge needed to shift the device threshold voltage by a small amount, such as 1 mV, keeps increasing, so the scaled devices are much more tolerant of interface and dielectric charges. This is one of the positive aspects of scaled devices. For the high-k dielectrics expected to be used with scaled devices, reliability is still somewhat unknown. However, one trend in the right direction is the reduced magnitude of the dielectric field with increased dielectric constant. For example, an oxide field of 1×10^7 V/cm for SiO2, with a dielectric constant of 3.9, corresponds to a dielectric field of less than 2×10^6 V/cm for a high-k dielectric with a constant of 20. This provides considerable encouragement that the reliability of high-k dielectrics will not be a major problem at the expected fields. Some preliminary data on the reliability of high-k dielectrics have been encouraging, and it may in fact be possible to increase the effective dielectric field as proposed in the ITRS without degrading long-term device lifetime. As this is written, there are still considerable problems to be resolved for the gate dielectric as researchers attempt to approach EOT values in the 0.5-nm range.


In addition to the gate insulator, the gate contact is a critical part of the gate stack. This has typically been polysilicon, and more specifically both n+ and p+ polysilicon for n- and p-channel devices, respectively. In the early history of MOS devices, the replacement of metal gates by polysilicon was one of the key developments leading to rapid improvements in the yield of early ICs; the device has, however, continued to carry the name MOS. The use of polysilicon has become a major problem for continued device scaling, because the finite depletion layer associated with the gate charge in the polysilicon is causing significant drops in the current drive and transconductance of MOS devices. The polysilicon depletion effect becomes more important with scaling because the oxide field (see Figure 1.11) remains essentially constant with scaling. This means that the charge per unit area in the gate polysilicon remains constant with scaling and, in turn, the voltage drop in the polysilicon for the same gate doping density remains constant. Thus, since voltage levels decrease with scaling, the polysilicon voltage drop becomes a larger fraction of the available device voltage, and current drive is reduced with each technology generation. Maintaining a constant dielectric field in the presence of polysilicon depletion requires a further decrease in an already very thin dielectric. Simulated C–V curves for n+ polysilicon on a p-type substrate were shown in Figure 1.9; these were previously used to discuss primarily the quantum size effects in the silicon substrate that cause an effective increase in the equivalent dielectric thickness for the capacitance and for the channel inversion charge. The typical effect of a polysilicon gate, as compared with a metal gate, can be seen by comparing curves (c) and (d) with curve (b), which includes only quantum confinement effects. The capacitance values represent charge per unit voltage, so the differences between curves (c) and (d) and curve (b) represent reductions in transconductance, or channel charge, that would occur due to polysilicon depletion. If the polysilicon doping density could be increased without limit, the polysilicon depletion effect could be minimized. However, from literature reports, it appears very difficult to achieve electronically active doping densities much above 1×10^20/cm^3 for n-type polysilicon and above the mid-to-upper 10^19/cm^3 range for p-type polysilicon. As can be seen from Figure 1.9, this represents a significant degradation in current drive capability for devices beyond the 90-nm node. To overcome the polysilicon depletion effect, there are two approaches: (1) attempt to very heavily dope the polysilicon, with values near 1×10^21/cm^3 needed, or (2) replace the polysilicon with a metal gate. The most fruitful approach appears to be the metal gate, which is at this time being extensively pursued by the industry. For metal gates, two different metals with two work functions (WFs) are needed, one for n-channel and one for p-channel devices. This is required to provide appropriate metal WFs to essentially replace the WFs of n+ and p+ polysilicon; metals are needed with WFs near the conduction and valence bands of the underlying silicon. For a direct replacement of n+ and p+ polysilicon, WF values of 4.1 and 5.2 eV, respectively, are needed.
For possible metal gates, a wide variety of WF values exists among the elemental metals, ranging from below 3.0 eV for metals such as Li, Rb, Sr, Cs, Ce, Pr, and Eu, to over 5.0 eV for metals such as Ni, Se, Rh, Pd, Ir, Pt, Ru, and Au. However, most of the elemental metals have significant problems when used as gate contact materials over either SiO2 or the most promising high-k dielectrics, such as the Hf silicates. The low-WF metals tend to be too reactive, while the high-WF metals tend to have adhesion problems. The latter problem can be controlled by the use of capping layers, and bilayers of metals such as RuTa, RuMo, or TiPt have shown considerable promise for p-type MOS devices [21–23]. As a general class of materials, the metal nitrides and metal silicides are much more stable than the elemental metals, and considerable research has been done on such compounds for possible metal gates. Some of the most promising of these are TaN [24], TiN [25], TiAlN [26], TaSiN [26], and HfN [27]. Metal silicides have been extensively used as source/drain contact materials for CMOS, so the possible extension of this technology to MOS gates is an attractive approach. For such an application, the metal silicide must extend completely to the gate dielectric, and such a silicide process has been referred to as a "full silicidation" (FUSI) process [28].


The threshold voltage of an MOS device depends not only on the fundamental, or intrinsic, WF of the metal gate, but also on the presence of any charge dipole layers within the silicon/dielectric/metal gate structure. Such dipole layers are possible at either of the dielectric interfaces or at an internal interface between a high-k dielectric and an interfacial oxide or nitride layer. Such dipole charge layers are known to pin the Fermi level for most metal–semiconductor interfaces near the midgap energy. For an MOS gate contact, such dipole layers will result in some effective WF that differs from the intrinsic gate WF. Evidence for such dipole layers has also been found for many metal/high-k interfaces. Typically, such dipole layers shift the effective WF toward the midpoint of the Si bandgap and away from the band edges. Some of the most promising metal gate materials are summarized below:
A. p-MOS (desired WF ~5.2 eV): Pt (5.4 eV), Ru (5.2 eV), Ti (4.6 eV), Mo (4.7 eV), TaAlN (4.9 eV), TiN (4.8 eV), CVD TaN (5.1 eV), and B-doped FUSI NiSi (5.1 eV)
B. n-MOS (desired WF ~4.1 eV): Ta (4.2 eV), Ti (4.2 eV), TaSiN (4.2 eV), and As-doped FUSI NiSi (4.6 eV)
As of this review, the most appropriate metal gates and integration approach have not been determined. However, the future of MOS devices most likely involves a return to the dominant use of metal gates.

1.4.2 Channel Doping Issues

The proper selection of substrate doping and the engineering of the doped layers beneath the gate are key to optimum performance of the MOSFET with respect to important parameters such as threshold voltage, peak channel mobility, DIBL, subthreshold current slope, and drain-to-source punch-through voltage. Many of the issues regarding channel doping can be understood with reference to Figure 1.18, which illustrates the important charge regions under the channel. First, and perhaps most important, there are depletion regions under the gate (represented as Xdm(y)) and depletion regions surrounding the source/drain junctions (represented as Xsd and Xdd). The doping density under the channel controls the depth of all these depletion regions, with larger doping densities required to obtain thinner depletion regions as device dimensions are scaled. Of primary importance is the requirement that the junction depletion regions not overlap under the gate, or else junction punch-through will occur and large drain-to-source currents can flow. As the junction built-in voltage does not scale with device scaling, this requirement becomes more difficult to satisfy with scaling.


FIGURE 1.18 Charge sharing model for VT reduction and other short-channel effects. (From Arora, N., MOSFET Models for VLSI Circuit Simulation, Springer, New York, 1993.)


Controlling short-channel effects is very critical to the ability to properly scale MOS devices. The most important of these effects are: (1) threshold voltage reduction, (2) the DIBL effect, and (3) the subthreshold current slope parameter. While these parameters are frequently expressed as complex functions of device parameters, such as junction depths, doping, oxide thickness, etc., they are primarily functions of the two-dimensional geometry shown in Figure 1.18. The dotted lines within the depletion regions are intended to represent the regions of charge controlled by the various electrodes (gate, source, and drain).

One way to look at the drain voltage dependence of VT is that part of the underlying depletion region is controlled from the source and drain, and not from the gate electrode. The fraction of charge controlled by the gate depends on the width of the depletion regions and the junction depth relative to the gate length L. If the same relative size of the junction depths and depletion regions is maintained at each technology generation, then the relative importance of short-channel effects should remain the same. To achieve this, the depletion region widths must be scaled in the same manner as other device dimensions. Another way of viewing the drain voltage dependence of VT is that the potential under the center of the gate is controlled not only by the gate charge, but is also influenced by the source and drain depletion region charges. Adding more charge at the depletion regions will shift the potential under the gate, resulting in more channel charge for the same gate voltage.

Drain induced barrier lowering and changes in VT arise from similar physical effects. At zero drain bias, there will be some lowering of VT due to the source/drain charges. As the drain bias is increased, more charge at the drain depletion region will result in a larger effect and a further reduction in threshold voltage; hence a voltage-dependent threshold voltage, or DIBL. This again is primarily a geometrically determined parameter depending on the junction depths, oxide thickness, and depletion layer depth relative to the channel length. Low DIBL requires that the field lines from the drain terminate on charges other than those in the channel. This is determined by the degree to which the gate is electrically shielded from the drain. Maintaining a fixed relative geometry is again the key to controlling DIBL at small device dimensions. Each 10 mV of VT reduction caused by DIBL will increase the Ioff current by two to three times. However, DIBL does increase Idsat as well as the transconductance of the device. An optimum tradeoff occurs for a DIBL that is approximately 10% of VDD.

The channel doping densities and doping profile must be selected to obtain a proper device threshold voltage and to control short-channel effects. Some of the implications of this, with respect to the doping of scaled devices, are now explored and discussed. To control depletion layer punch-through, the channel must be sufficiently heavily doped under the gate region. There are several possible ways this can be achieved. First, the channel could just be uniformly doped with some large doping density in the region identified as the substrate body in Figure 1.7. However, since the channel mobility is degraded by a large bulk impurity density, there are reasons for desiring a lightly doped region just under the gate where the inversion channel exists.
This can be accomplished by the use of a retrograde channel doping such as shown in Figure 1.19, where a lightly doped region is located directly under the gate with a more heavily doped region deeper in the substrate. This is frequently referred to as a super steep retrograde doping profile. Such a profile also gives added flexibility with respect to threshold voltage control, since the threshold voltage can be controlled somewhat by adjusting the depth of the lightly doped layer. The super steep retrograde doping profile, however, does not provide sufficient flexibility to achieve all the desired goals of the channel doping for short-channel devices. For example, the doping density needed to control punch-through and limit the size of the junction depletion layers can be too large to give a desired low threshold voltage, even when the heavily doped layer is located some distance under the gate oxide. In such cases, it has been found highly desirable to employ another doping technique that allows a non-uniform lateral doping density under the gate. This is the pocket implant technique illustrated in Figure 1.20, also frequently referred to as a superhalo doping. By the use of shallow angled implants, using the gate as a mask, heavily doped pockets can be formed in the substrate adjacent to the source/drain contact implants. These heavily doped pockets can provide the heavily doped regions needed to terminate the junction depletion regions and control punch-through. However, they can leave a more lightly doped substrate under the center of the gate region, so that an acceptable threshold voltage can be achieved. For greatest flexibility, this should be combined with the super steep retrograde doping


FIGURE 1.19 Channel structure and doping for super steep retrograde channel doping.

profile. In this manner, one can somewhat separate the problem of punch-through control from that of threshold voltage control. However, this becomes more and more difficult with dimensional scaling because of the large doping densities required, and because of the small dimensions over which the doping profiles need to vary by large factors. To a first order approximation, the doping density for a given depletion width of the drain-to-substrate p–n junction, when the drain side is very heavily doped, is given by

$$N_B = \frac{2\varepsilon_s (V_j + V_{bi})}{qW^2}\,F \qquad (1.32)$$

where NB is the light-side doping density, Vj is the applied junction bias voltage (maximum of VDD), Vbi is the built-in junction potential (~0.9–1.1 V for NB > 10^17/cm^3), W is the width of the depletion region, and the factor F accounts for the curvature of the p–n junction. For a planar junction, F = 1; for a cylindrical junction, where Rj is the radius of the p–n junction (Rj ≈ 0.65XJ), F is given by

$$F = \frac{2\left[W/(W+R_j)\right]^2}{2\ln\!\left(\dfrac{W+R_j}{R_j}\right) + \left(\dfrac{R_j}{W+R_j}\right)^2 - 1} \qquad (1.33)$$

For shallow junctions, the cylindrical junction approximation is more accurate than the planar junction approximation. To prevent depletion layer punch-through, W must scale with feature size,

FIGURE 1.20 Example of heavily doped pocket implants for punch-through control.


and scaling W means that the doping density NB must scale somewhat faster than the inverse feature size, since Vj + Vbi does not scale as fast as the feature size. Figure 1.21 shows the required doping densities according to these equations as a function of technology generation from the 90- to 16-nm node. Using the ITRS projections for dimensions, the factor F is approximately constant for each technology generation, depending only on the ratio of W to Rj. Two potential limits are shown for the depletion layer: one for Leff/4 and one for Leff/2. The first gives a highly desirable limit, while the latter probably provides an upper limit on the acceptable value of the depletion layer width. For the two cases, F = 0.806 and 0.885, respectively, so the junction curvature does not give very much smaller limits than a planar junction, the values being only 12%–20% lower, as can be seen in Figure 1.21. Also shown in the figure is a "mean" curve representing the geometrical mean of the Leff/4 and Leff/2 curves for the case of a cylindrical junction. This can be taken as typical of the desired peak doping densities to control the junction depletion region depth with future scaling. While the Leff/2 value might seem too large, the bulk of the depletion region will be confined between the two deep drain contact junctions of depth XJC, as shown in Figure 1.7. The spacing between these junctions can be two or more times the effective channel length. Thus, a depletion layer extension of Leff/2 from these junctions is not unreasonable.

These required doping values are quite large, and there are significant consequences of such large doping densities, to be subsequently discussed. As a point of reference, Frank et al. have published a typical pocket implant doping profile for a 25-nm effective channel length device [29,30], and this is shown in Figure 1.22. Their peak doping density is somewhat above 1×10^19/cm^3 with a maximum depletion region depth of about Leff/2. This would correspond to the 90-nm technology node in Figure 1.21, and this one data point compares very well with the Leff/2 curves there. Large doping densities as shown in Figure 1.21 will give ever increasing drain-to-substrate leakage currents, as the densities required for punch-through control are in the degenerate range and approach those of tunnel junctions.

FIGURE 1.21 Doping densities required for punch-through control with different limits on the maximum width of the depletion layers.
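As a rough numerical check of Equations 1.32 and 1.33, the following minimal Python sketch (the silicon permittivity is standard, but the Leff, XJ, W, and bias values are illustrative assumptions only, not ITRS data) evaluates the curvature factor and the required punch-through doping density:

import math

q = 1.602e-19              # C
eps_s = 11.7 * 8.854e-14   # F/cm, silicon permittivity

def curvature_factor(W, Rj):
    """Cylindrical-junction curvature factor F of Equation 1.33 (planar: F = 1)."""
    r = Rj / (W + Rj)
    return 2.0 * (W / (W + Rj))**2 / (2.0 * math.log((W + Rj) / Rj) + r**2 - 1.0)

def punchthrough_doping(W, Vj, Vbi=1.0, Rj=None):
    """Doping density N_B (cm^-3) of Equation 1.32 for depletion width W (cm)."""
    F = 1.0 if Rj is None else curvature_factor(W, Rj)
    return 2.0 * eps_s * (Vj + Vbi) / (q * W**2) * F

# Illustrative numbers only: Leff = 25 nm, W limited to Leff/2, XJ = 14 nm.
Leff, XJ = 25e-7, 14e-7            # cm
W, Rj = Leff / 2.0, 0.65 * XJ      # Rj ~ 0.65*XJ per the text
print("F =", round(curvature_factor(W, Rj), 3))
print("N_B = %.2e cm^-3" % punchthrough_doping(W, Vj=1.0, Rj=Rj))

For these assumed numbers the sketch gives a doping density of order 1×10^19/cm^3, consistent with the magnitudes quoted above.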


FIGURE 1.22 Doping concentrations for a 25-nm gate length device. (From Frank, D. J., R. H. Dennard, E. Nowak, P. M. Solomon, Y. Taur, and H.-S. P. Wong, Proc. IEEE, 89, (2001): 259; Taur, Y., C. H. Wann, and D. J. Frank, IEDM Tech. Digest, 789, 1998.)

" pffiffiffiffiffiffiffiffiffi # pffiffiffiffiffiffiffiffiffi 3 2m q EVj 4 2m Eg3=2 pffiffiffiffiffi exp K Jt Z 3qZE 4p2 Z2 Eg

ð1:34Þ

where m is the tunneling effective mass (taken as 0.19m0), Vj is the applied junction voltage, Eg is the bandgap energy, ℏ is the reduced Planck constant, and E is the electric field, taken here for an abrupt junction as

$$E \approx \sqrt{\frac{qN_B(V_{bi}+V_j)}{2\varepsilon_s}} \qquad (1.35)$$

Calculations based upon this model are shown in Figure 1.23 for doping densities in the range of 1×10^19 to 1×10^21/cm^3, covering the range shown in Figure 1.21. Two curves are shown corresponding to the expected maximum device voltages. Shown across the top of the figure is the corresponding electric field over the range of 1–10 MV/cm. The values of current are similar to the values reported by Frank et al. [29], which include some experimental data points. If one compares the doping densities in Figure 1.21 with the tunneling currents in Figure 1.23, it can be seen that there are large potential problems with junction leakage for the smallest device dimensions. If one makes the same assumptions about allowable magnitudes of drain leakage current as for gate leakage current, and assumes a somewhat optimistic scenario in which the tunneling current can be confined to a spatial depth corresponding to the junction depth, then we can approximate

$$J_t \le \frac{2.5\times10^{-4}\ \text{A/cm}}{X_J} = \frac{2500\ \text{A/cm}^2}{(X_J/\text{nm})} \qquad (1.36)$$

This would indicate that the junction leakage current would exceed allowable limits around the 32-nm technology generation. Requirements for the control of depletion layer punch-through and tunneling currents probably limit the ability to scale bulk CMOS to dimensions somewhat less than the ultimate 2004 ITRS limits. Alternative device structures, such as fully depleted (FD) SOI or double gate (DG) structures, may ultimately be required. Such structures will be discussed in Section 1.5.2. One approach for reducing the junction electric field that might come to mind is to dope the deep contacting junction more lightly, so that the junction is not basically a one-sided junction but has the depletion region extending more into the deep contacting junction.
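The magnitude of the leakage problem can be illustrated with a minimal Python implementation of Equations 1.34 and 1.35. The constants are standard SI values; the 0.19m0 tunneling mass follows the text, while the bias and doping values are illustrative assumptions:

import math

# Physical constants (SI)
q    = 1.602e-19        # C
hbar = 1.055e-34        # J s
m0   = 9.109e-31        # kg
eps0 = 8.854e-12        # F/m

m_t   = 0.19 * m0       # tunneling effective mass (per the text)
Eg    = 1.12 * q        # silicon bandgap, J
eps_s = 11.7 * eps0     # silicon permittivity

def field_abrupt(NB_cm3, Vj, Vbi=1.0):
    """Equation 1.35: junction field (V/m) for an abrupt junction."""
    NB = NB_cm3 * 1e6   # cm^-3 -> m^-3
    return math.sqrt(q * NB * (Vbi + Vj) / (2.0 * eps_s))

def tunnel_current(NB_cm3, Vj):
    """Equation 1.34 (Moll): tunnel current density, returned in A/cm^2."""
    E = field_abrupt(NB_cm3, Vj)
    pref = math.sqrt(2*m_t) * q**3 * E * Vj / (4*math.pi**2 * hbar**2 * math.sqrt(Eg))
    expo = -4.0 * math.sqrt(2*m_t) * Eg**1.5 / (3.0 * q * hbar * E)
    return pref * math.exp(expo) / 1e4   # A/m^2 -> A/cm^2

for NB in (1e19, 1e20, 1e21):            # illustrative doping range, cm^-3
    print("NB = %.0e cm^-3: Jt = %.2e A/cm^2 at Vj = 1.0 V" % (NB, tunnel_current(NB, 1.0)))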


FIGURE 1.23 Estimated drain-to-substrate leakage current for various punch-through doping densities and peak electric fields.

However, as will be subsequently shown, the deep contacting junction must be as heavily doped as possible in order to obtain an acceptable contact resistance to an external contact. Thus, there is little that can be done in practice to reduce the peak electric field by reducing the doping density on the source/drain contact side of the junction.

In addition to controlling the depletion regions from the source/drain junctions, the substrate doping is important in setting the device threshold voltage. In a given technology, one generally desires devices with various channel lengths, from some minimum channel length to larger channel lengths. It is thus desirable to have a device threshold voltage for the minimum channel length that is not very different from that of a long-channel device. For a long-channel device, if one assumes a retrograde doping profile as shown in Figure 1.19, then first order calculations can be made from the standard semiconductor equations regarding the relationship between the threshold voltage, the width of the lightly doped layer, and the doping in the heavily doped layer. To look at important trends and magnitudes, it will be assumed that the doping density in the lightly doped layer may be neglected, that the depth of the lightly doped layer is xI, and that the depletion layer extends to a depth xB into the heavily doped layer. The model for substrate charge is then

$$\rho = \begin{cases} 0 & \text{for } 0 < x < x_I \\ -qN_B & \text{for } x_I < x < x_I + x_B \end{cases} \qquad (1.37)$$

Using this substrate charge density, one can evaluate the surface potential for the threshold-channel condition and evaluate the theoretical threshold voltage for a given bulk doping density and given depth of the lightly doped layer [32]. Inversely, one can determine the bulk doping density required for a desired threshold voltage, given the depth of the lightly doped layer and the other device parameters such as oxide thickness. One finally needs a gate WF, which can be conveniently taken to be at the band edges of the semiconductor. One complication is the need to correct the standard equations for quantum confinement effects. This can be done with various published correction models, or one can use the simplified approach of the ITRS and assume that these effects make the oxide thickness appear to be 0.4 nm thicker than the physical thickness.
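A minimal first-order sketch of this calculation is given below, assuming a band-edge (conduction band) gate work function for n-MOS, the charge-sheet approximation, the ITRS 0.4 nm quantum correction, and the step retrograde profile of Equation 1.37; the numerical inputs are illustrative, not ITRS values:

import math

q      = 1.602e-19          # C
kT_q   = 0.0259             # thermal voltage at 300 K, V
eps0   = 8.854e-14          # F/cm
eps_s  = 11.7 * eps0        # silicon permittivity
eps_ox = 3.9 * eps0         # SiO2 permittivity
ni     = 1.0e10             # intrinsic carrier density, cm^-3

def vt_retrograde(NB, xI, tox, Eg=1.12):
    """First-order n-MOS threshold voltage for the profile of Equation 1.37.

    NB: heavy-layer doping (cm^-3); xI: lightly doped layer depth (cm);
    tox: physical oxide thickness (cm). The gate WF is taken at the silicon
    conduction band edge and the oxide is thickened by 0.4 nm (QM correction).
    """
    phiF  = kT_q * math.log(NB / ni)     # bulk Fermi potential, V
    psi_s = 2.0 * phiF                   # surface potential at threshold
    # Depletion depth xB in the heavy layer from psi_s = (q*NB/eps_s)*(xB^2/2 + xI*xB)
    a  = q * NB / eps_s
    xB = -xI + math.sqrt(xI * xI + 2.0 * psi_s / a)
    Qd  = q * NB * xB                    # depletion charge, C/cm^2
    Cox = eps_ox / (tox + 0.4e-7)        # QM-corrected oxide capacitance
    Vfb = -(Eg / 2.0 + phiF)             # flat-band voltage for a band-edge gate
    return Vfb + psi_s + Qd / Cox

# Illustrative: tox = 1.2 nm, xI = 0, with two heavy-layer doping levels.
for NB in (1.0e18, 3.0e18):
    print("NB = %.0e cm^-3: VT = %.3f V" % (NB, vt_retrograde(NB, 0.0, 1.2e-7)))

For these assumed inputs the sketch returns threshold voltages of a few tenths of a volt for doping in the 1-3×10^18/cm^3 range, in line with the trends discussed next.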


FIGURE 1.24 Doping densities for long- and short-channel devices needed with the retrograde doping model to achieve the design values of threshold voltage at each technology generation.

Some of the important results of such an evaluation are shown in Figure 1.24 for the 90- to 16-nm technology generations using the dimensional parameters of the 2004 ITRS. Concentrate first on the three lower curves in the figure with solid circles as data points, which are the results of the calculation outlined above and represent long-channel threshold voltages of the values seen on the top axis of the figure. The values are seen to be in the 1×10^18 to 3×10^18/cm^3 range. The results are essentially independent of n- or p-channel type. Three curves are shown for lightly doped layer thicknesses of 0, xJ/4, and xJ/2, representing the most practical thicknesses of lightly doped surface layers. As seen by the curves, there is little difference in the required doping density for either the different technology generations or the thickness of the lightly doped layer. For the calculations shown in the curves, quantum confinement effects were estimated by the technique of increasing the oxide thickness by 0.4 nm.

Also shown in the figure are required doping densities for the device structures as determined by the NCSU cvc software, which includes a more detailed model for the quantum confinement effects and which solves the device equations more exactly. These are the square data points and should be compared with the lower xI = 0 curve. These points are just slightly higher in value than those obtained with the much simpler model and provide some additional justification for the simple model of increasing the effective oxide thickness by 0.4 nm to account for the QM effects. The NCSU cvc software cannot be used to compare the other curves, as it assumes a uniformly doped substrate. These values indicate why one cannot simply heavily dope the entire substrate to control the punch-through effect: the doping densities required for the proper threshold voltage are much lower than the densities shown in Figure 1.21 that are required for punch-through control.

As one decreases the gate length from the long- to the short-channel case, moving toward a case as shown in Figure 1.22, several physical effects occur. First, the shielding of the source/drain depletion regions from the channel will become less effective and the threshold voltage will begin to decrease. Second, as the channel length becomes very small, the doping densities from any pocket implants will


begin to overlap and cause an increase in the doping density under the center of the gate. In the minimum channel length design, as shown in Figure 1.22, one can see considerable enhancement of the doping density under the center of the gate due to the overlap of the pocket implants. This increase in effective channel doping tends to increase the threshold voltage. Thus, with a pocket implant or halo structure one has two competing effects: one tending to reduce the threshold voltage and one tending to increase it. This can greatly delay the rolloff of threshold voltage with decreasing channel length and allows much shorter channel lengths than would otherwise be possible. As a function of decreasing channel length, it is not unusual for the threshold voltage to first increase and then decrease.

The decrease in threshold voltage with channel length has been extensively studied both theoretically and experimentally. Various models predict that the decrease in threshold voltage is an exponential function of the distance of the source/drain from the center of the channel (Leff/2) and depends on a length parameter that is a function of the geometry of the channel structure [33–38]. The other expected dependency is on the charge in the depletion layer, which gives the model form

$$\Delta V_T \propto \sqrt{V_D + V_{bi}}\;\exp(-L_{eff}/2\lambda) \qquad (1.38)$$

If we assume contributions from both the source and drain then

$$\Delta V_T = V_{DB}\left(1 + \sqrt{1 + V_{DD}/V_{bi}}\,\right)\exp(-L_{eff}/2\lambda) \qquad (1.39)$$

In this, it has been assumed that the source is at zero volts and that the worst case drain condition is with the drain voltage at the supply voltage. The Vbi parameter is the equilibrium junction voltage (about 1 V), VDB is the proportionality constant, and λ is the exponential decay length. It should be noted that not all authors use the same notation for the exponential term. In some cases the 2λ term has simply been called λ [37] or Λ, and in some cases the exponential term has been expressed as −πLeff/2Λ [29], which gives a different meaning to the length scale factor. The data of Wann et al. are consistent with the numerical values

$$V_{DB} \approx V_{bi}\ (\approx 1\ \text{V}), \qquad \lambda \approx \left[t_{ox}(x_I + x_B)\,x_J\right]^{1/3} \qquad (1.40)$$

If one uses this model for the reduction in threshold voltage, it can be included in the simple model previously discussed for substrate doping, and one can evaluate the required increase in channel doping for obtaining a desired threshold voltage. The upper three curves in Figure 1.24 illustrate the results of including this model of channel threshold voltage reduction, combined with the requirement that at minimum channel length the threshold voltage equal that of the long-channel case. In other words, if the channel doping densities under the center of the channel of a minimum channel length device are as given by the upper three curves, the channel length reduction effect will be offset by the increased threshold voltage due to the increased doping. The required doping densities of the upper three curves are in the range that one might expect to achieve with the overlapping doping densities of a pocket implant.

A very accurate design of a pocket implant and the substrate doping density profile requires two-dimensional computer simulations. However, the above first order calculations illustrate that with a properly designed pocket implant, giving an increased doping density under the channel of a minimum channel length device, it may be possible to offset most of the DIBL effect and obtain the required low threshold voltages for scaled devices. If all dimensions are scaled together, the design problem is similar at each technology generation. However, the manufacturability of future scaled devices becomes more questionable, because the doping densities must increase and the dimensions over which the densities must change values become smaller.
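A minimal numeric sketch of the rolloff model of Equations 1.39 and 1.40 is given below (the geometry values are illustrative assumptions only; all lengths are in nanometers):

import math

def delta_vt(Leff, lam, VDD=1.0, Vbi=1.0, VDB=1.0):
    """Threshold-voltage reduction of Equation 1.39 (source at 0 V, drain at VDD)."""
    return VDB * (1.0 + math.sqrt(1.0 + VDD / Vbi)) * math.exp(-Leff / (2.0 * lam))

def decay_length(tox, xI, xB, xJ):
    """Geometric decay length of Equation 1.40 (all lengths in the same unit)."""
    return (tox * (xI + xB) * xJ) ** (1.0 / 3.0)

# Illustrative geometry only (nm): not ITRS values.
tox, xI, xB, xJ = 1.6, 7.0, 10.0, 14.0
lam = decay_length(tox, xI, xB, xJ)
for Leff in (90.0, 45.0, 25.0):
    print("Leff = %4.0f nm: lambda = %.1f nm, dVT = %5.1f mV"
          % (Leff, lam, 1e3 * delta_vt(Leff, lam)))

As expected from the exponential form, the sketch shows the rolloff growing rapidly as Leff approaches a few multiples of 2λ.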


FIGURE 1.25 Estimated device drain-induced barrier lowering (DIBL) for nominal short-channel devices using a retrograde doping profile to set long-channel threshold voltage.

The model calculations also provide estimates of the value of DIBL that might be expected for such scaled devices. Assuming that DIBL is just the drain voltage manifestation of Equation 1.39, one can write

$$\text{DIBL} = V_{DB}\left(\sqrt{1 + V_{DD}/V_{bi}} - 1\right)\exp(-L_{eff}/2\lambda) \qquad (1.41)$$

Plots of this are shown in Figure 1.25 for different assumptions on the width of the lightly doped surface layer. Controlling DIBL really means controlling the device's relative geometry, and if the relative geometry can be maintained as this modeling assumes, DIBL is expected to remain relatively constant and within acceptable limits (<100 mV in Figure 1.25).

A final important geometrical parameter is the subthreshold slope parameter. A model for this parameter has previously been given as Equation 1.4 and depends on the ratio of the oxide thickness (corrected for QM effects) to the substrate depletion layer depth. For the retrograde doping model and the doping parameters in the previous figures, the predicted value of S is given in Figure 1.26. A small value of S (ideal value of 60 mV/decade) results from a deep depletion region, and this is achieved for the long-channel design. The increased substrate doping required at short channel lengths degrades the slope parameter, but the resulting values are still very good. Again, if one can keep the relative geometry the same, the subthreshold parameter remains in good control, although the subthreshold slope is still expected to degrade somewhat for short-channel devices.
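For completeness, Equation 1.41 can be evaluated in the same way; the following few lines reuse the illustrative decay length from the previous sketch (again an assumption, not an ITRS-calibrated value):

import math

def dibl(Leff, lam, VDD=1.0, Vbi=1.0, VDB=1.0):
    """Drain-induced barrier lowering (V) from Equation 1.41."""
    return VDB * (math.sqrt(1.0 + VDD / Vbi) - 1.0) * math.exp(-Leff / (2.0 * lam))

for Leff in (90.0, 45.0, 25.0):   # nm, illustrative
    print("Leff = %4.0f nm: DIBL = %5.1f mV" % (Leff, 1e3 * dibl(Leff, 7.25)))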

1.4.3 Source/Drain Contact Issues

The typical drain contact structure, shown in Figure 1.27, consists of the drain contact junction of depth XJC and the drain extension of depth XJ. Although the initial purpose of the shallow drain extension was to reduce the peak electric field near the drain end of the conductive channel, and thereby reduce hot electron injection, as devices are scaled to lower voltages the reduced field is not as important for this purpose. However, the shallow extension is required to reduce short-channel effects, as discussed in the previous section, while the deeper contacting junction is necessary to accommodate the thickness of the silicide contact layer and to minimize junction leakage. XJC should not be larger than required to meet these conditions, since a larger value requires a thicker oxide spacer and a longer tab extension with


FIGURE 1.26 Estimated subthreshold slope factors for long- and short-channel devices.

increased resistance in order to minimize short-channel effects. The source/drain resistance design should minimize both parasitic effects and contact resistance [39–42]. There are three major components of source/drain resistance: (1) an accumulation layer resistance due to the gate overlap with the drain region, (2) a spreading resistance as carriers spread from the channel into the XJ junction, and (3) a bulk junction resistance due to the length of the drain extension over the XJC junction and silicide contact. The latter two components are especially significant with regard to scaling to smaller device dimensions. Referring to Figure 1.27, Rtab, the resistance of the drain extension, can be estimated as

$$R_{tab} \approx \frac{\rho\,\ell_{tab}}{W X_J} \qquad \text{or} \qquad R_{tab}W \approx \rho_{sc}\,\ell_{tab} \qquad (1.42)$$

where ρ is the resistivity of the drain extension, ℓtab is the length of the drain extension, and ρsc is the sheet resistance of the drain extension. Without employing a two-dimensional analysis, a reasonable estimate for the tab extension length is 0.5LP, or half the physical gate length. For a constant sheet resistance, the RtabW product is then expected to scale as the device dimensions. Several authors have studied the spreading resistance problem [39,40,43], and the analytical approximations from these studies can be written as

FIGURE 1.27 Illustration of the major components of source or drain resistance.


$$R_{sp} \approx \frac{2\rho\,\ln(\beta X_J/X_C)}{\pi W} \qquad (1.43)$$

where XC is the inversion/accumulation layer thickness and β is a constant that varies from 0.37 to 0.90 depending on the method of derivation. For the purpose of this discussion, an intermediate value of 0.58 [43] will be used. This resistance is to be added to the bulk extension resistance of Equation 1.42 to give the total extension resistance of:

$$R_{tab}W \approx \rho\,(\ell_{tab}/X_J)\left[1 + \frac{2X_J\ln(\beta X_J/X_C)}{\pi\,\ell_{tab}}\right] = \rho\,(\ell_{tab}/X_J)\,F_{tab} \qquad (1.44)$$

The spreading resistance increases the extension resistance by the quantity in brackets in the above equation (the factor Ftab). In order to estimate the magnitude of this term, a typical channel inversion/accumulation layer thickness must be estimated, and this depends on the surface field or the surface inversion/accumulation layer density. For a typical oxide field of approximately 1×10^7 V/cm, an inversion/accumulation layer density of around 2×10^13/cm^2 is expected. The work of Stern can then be used to estimate an average inversion/accumulation region thickness of about 2.5 nm [5,6]. The quantity in brackets, Ftab, can then be expected to range from about a factor of 2.08 at the 90-nm node to about 1.0 at the 16-nm node. Any component of the source/drain resistance must be compared with the channel resistance, Rsat, which in the on-state can be defined as:

$$R_{sat} = V_{DD}/I_{dsat} \qquad (1.45)$$

Circuit simulations show that when the external source and drain resistances are each equal to about 10% of this value, the saturated drain current will be reduced by about 8%. Thus, an acceptable value of resistance will be estimated as one where the source or drain component is less than 1/20 of this value (a 5% contribution for each of the source and drain components). Using Equation 1.9 for the saturated drain current gives

$$R_{sat}W = \frac{V_{DD}}{v_{sat}C_{ox}(V_{DD}-V_T)F_I} \approx \frac{t_{civ}}{0.85\,v_{sat}\,\varepsilon_{ox}F_I} \qquad (1.46)$$

In the second form of the equation, the approximation VT/VDD = 0.15 has been used. If the factor FI were an exact constant, then the RsatW product would be expected to scale directly with the capacitance EOT. For the device model previously discussed, it is found that FI varies from about 0.65 to 0.80 for n-channel devices over the 90- to 16-nm nodes, and from about 0.46 to 0.67 for the corresponding p-channel devices. Calculated results for RsatW are shown by the upper four curves in Figure 1.28 for the 90- to 16-nm technology nodes. Two curves are shown for both n- and p-channel devices. The open circle data points are calculated from the device model of Equation 1.9 for Idsat, while the solid data points come from using Equation 1.45 and the 2004 ITRS values of Idsat. The two approaches give similar trends and magnitudes, with the 2004 ITRS current values giving slightly larger values. Also, the values are higher for p-channel devices than for n-channel devices because of the lower saturated drain currents of p-channel devices. The general trend is for RsatW to become slightly lower at smaller device dimensions. These values and Equation 1.44 can now be used to estimate the required values of the extension junction resistivity.
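The extension resistance budget of Equations 1.44 through 1.47 can be checked with a few lines of Python; the XJ, ℓtab, ρ, and RsatW numbers below are illustrative assumptions rather than ITRS data:

import math

def f_tab(XJ, ltab, XC=2.5, beta=0.58):
    """Spreading-resistance multiplier (bracketed factor of Equation 1.44)."""
    return 1.0 + (2.0 * XJ / (math.pi * ltab)) * math.log(beta * XJ / XC)

def rtab_w(rho, XJ, ltab, XC=2.5):
    """Extension resistance-width product of Equation 1.44 (ohm cm if rho is in
    ohm cm and all lengths share the same unit)."""
    return rho * (ltab / XJ) * f_tab(XJ, ltab, XC)

# Illustrative values (lengths in nm): XJ = 25 nm, ltab = 0.5*Lp = 45 nm,
# rho = 1e-3 ohm cm; compared with a hypothetical Rsat*W = 0.05 ohm cm budget.
XJ, ltab, rho, rsat_w = 25.0, 45.0, 1.0e-3, 0.05
rw = rtab_w(rho, XJ, ltab)
print("Ftab = %.2f, Rtab*W = %.2e ohm cm, budget (Rsat*W/20) = %.2e ohm cm"
      % (f_tab(XJ, ltab), rw, rsat_w / 20.0))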


FIGURE 1.28 Estimated RsatW values and resulting contact tab sheet resistances for scaled metal oxide semiconductor devices. Estimated for approximately a 10% reduction in saturated drain current.

Using the 10% of channel resistance requirement leads to

$$\rho\,(\ell_{tab}/X_J)\,F_{tab} < \frac{1}{20}R_{sat}W \qquad (1.47)$$

$$\rho < 0.025\,R_{sat}W$$

In the final expression, ℓtab/XJ is assumed to be constant (approximately equal to 1.0) and the worst case value of Ftab (2.0) has been taken. This limit on extension junction resistivity is shown as the lower two curves in Figure 1.28 for n- and p-channel devices. Also shown in the figure are values of doping density corresponding to these resistivity values at the end points of the technology nodes. For example, for a p-channel MOSFET, the estimate is that the drain extension must be doped to at least 1.6×10^19/cm^3 for the 90-nm node and at least 6.9×10^19/cm^3 for the 16-nm node. Corresponding values for n-channel devices are 2.1×10^19 and 9.9×10^19/cm^3. If drain extensions are formed by ion implantation with approximately Gaussian profiles, then the peak doping densities would be some 2–3 times the average values. In terms of sheet resistances, the corresponding limits are 1500–2400 Ω/sq for n-channel devices and 3000–4800 Ω/sq for p-channel devices. The required doping densities for the drain extensions are not too severe. However, the technology for producing junctions as shallow as needed for scaled devices is a critical technology that must be developed.

A final important resistance contribution comes from the deep contacting junction of depth XJC and the contact resistance between the silicon and the silicide layer. The most important component of this resistance is the contact resistance between any top metal or silicide layer and the silicon. A model for this resistance is [41]

$$R_C = \frac{\rho_{sc}}{W L_c'} \qquad (1.48)$$

where ρsc is the specific contact resistance (in Ω cm^2) and Lc′ is an effective window length for uniform current flow with negligible resistance contribution from the sheet resistance of the XJC junction. The effective window length can be expressed as:

$$L_c' = L_t \tanh(L_c/L_t), \qquad L_t = \sqrt{\rho_{sc}/R_{sd}} \qquad (1.49)$$


where Lc is the physical contact length, Lt is known as the transfer length, and Rsd is the sheet resistance of the underlying silicon comprising the XJC junction. The effective window length has two limiting cases: Lc′ ≈ Lc for Lt ≫ Lc, and Lc′ ≈ Lt for Lt ≪ Lc. In all cases Lc′ ≤ Lc and therefore,

$$R_c \ge \frac{\rho_{sc}}{W L_c} \qquad (1.50)$$

A good device contact design will have Lc > Lt. However, there is little advantage in making Lc larger than about 2Lt. If Lc = Lt, the contact resistance is 30% larger than the right-hand-side value, and for Lc = 2Lt the contact resistance is only about 8% larger than the right-hand side. Thus, making Lc larger than about 2Lt does not appreciably lower the contact resistance, but adds additional junction capacitance. It will be assumed here that these simpler approximations apply and that the right-hand side of Equation 1.50 provides a good estimate of the contact resistance. This requires that the contact junction have a sheet resistance satisfying the equation:

$$R_{sd} < \frac{\rho_{sc}}{4L_c^2} \qquad (1.51)$$

With these reasonable simplifying assumptions, in order for the contact resistance to contribute less than 5% of the device resistance one must have

$$R_cW \approx \frac{\rho_{sc}}{L_c} \le \frac{1}{20}R_{sat}W \quad\Rightarrow\quad \rho_{sc} \le 0.05\,(R_{sat}W)\,L_c \qquad (1.52)$$

For the estimates of device resistance in Figure 1.28 and estimated values of the contact length, limits on the allowable contact resistance can be obtained. For this, we can estimate the contact length as approximately two times the MPU half-pitch, which is the same as the technology node. Figure 1.29 shows the resulting required contact resistance values from this estimate for technology generations from 90 to 16 nm, using the 2004 ITRS estimates of parameters and of saturation current. The values are essentially the same as those presented in the 2004 ITRS. The values are seen to be somewhat lower for n-channel devices, as expected due to the larger saturated current of an n-channel device. The values range from around 1×10^−7 down to the upper 1×10^−9 Ω cm^2 range. These are very low

FIGURE 1.29 Estimated allowable source/drain contact resistances for scaled devices.


values, and achieving such low contact resistances remains one of the major challenges for future MOS device contacting structures. To carry the analysis one step further, the contact resistance is typically assumed to be due to tunneling through the barrier between a metal (or a silicide acting as a metal) and a heavily doped semiconductor. For such a model,

" rsc f exp

4p

# pffiffiffiffiffiffiffiffiffiffi 3s m fB pffiffiffiffiffiffi h NB

ð1:53Þ

In this, the important material parameters are the metal-semiconductor barrier height φB and the semiconductor doping density near the barrier, NB. A small contact resistance requires a small barrier height and/or a large doping density. For typical silicides on silicon, the barrier height is close to half the bandgap due to pinning of the Fermi level near the center of the bandgap. With a barrier height of around 0.5 V and typical silicon parameters, the predicted contact resistance for doping densities of around 2×10^20/cm^3 is around 1×10^−7 Ω cm^2 [44]. This is consistent with the best contact resistances experimentally achieved with metal (or silicide) contacts on silicon. However, considerably lower values are needed for scaled MOS devices, with projected values more than 10 times smaller required for end-of-roadmap devices.

Considerable research work is underway on techniques for achieving the required low contact resistances. The most promising approaches involve the use of SixGe1−x (SiGe) as an interface material between the silicon contact junction and the silicide or metal contact. The primary advantage of SiGe for this application is its lower bandgap, which can result in much lower contact resistances, as predicted by Equation 1.53. Another advantage of SiGe is that its maximum doping density has been shown to be larger than that of pure Si, and large doping densities can be achieved at low temperatures [45–48]. The optimum Ge concentration for low contact resistance appears to be in the range of 20%–30%, and contact resistances as low as 1×10^−8 Ω cm^2 have been demonstrated on both n- and p-type material [45–48].

Achieving the required values of drain extension and contact resistance is one of the major challenges for future MOS transistors. One modification to the basic device structure that helps these resistance values is the raised or elevated source/drain structure shown in Figure 1.30. In this approach, the heavily doped drain contact is elevated above the original surface by using a selectively grown epitaxial layer. A silicide layer is formed on top of the epitaxial silicon layer, either by selective deposition or by conventional reacted metal. The structure has several advantages for aggressively scaled devices. First, the spacer oxide can be thinner than in conventional structures, and this can greatly reduce the extension resistance as previously discussed. Second, the n+ contact layer can be sufficiently thick to contain the silicide layer without an excessively deep contacting junction or excessive junction leakage. Third, the selective layer can potentially be heavily doped during growth to obtain a low contact resistance.

FIGURE 1.30 Illustration of the major components of an elevated source/drain contacting structure.


A final potential advantage relates to the possible mechanical stress that can be incorporated into a transistor from a SiGe contact structure; this is discussed in a subsequent section. However, a disadvantage of the elevated source/drain contact structure is that two selective epitaxy processes are required, since for CMOS both n- and p-channel devices are needed. The elevated S/D structure is expected to become essential for the ultimate scaling of MOS devices.

Also associated with the source/drain structure is a parasitic source- and drain-to-substrate capacitance. To minimize this capacitance, the length of the contacts should be as small as possible. However, the overriding factor is probably the contact resistance, so one cannot reduce the contact length and the parasitic capacitance must be accepted. One means to reduce the parasitic capacitance is the use of SOI wafers, and the advantages of such structures are explored in a subsequent section. Because the source/drain resistances do not scale with the fundamental device dimensions in the same manner as the device current, the source/drain resistances are major potential barriers to achieving the ultimate performance of scaled MOS devices. Creative structures and approaches will be required to extend scaling to the ultimate dimensions. The elevated source/drain structure with heavily doped SiGe contact layers is one such approach.
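A minimal Python sketch of the contact resistance relations of Equations 1.48, 1.49, and 1.52 is given below; the specific contact resistance, sheet resistance, contact length, and RsatW values are illustrative assumptions only:

import math

def transfer_length(rho_c, Rsd):
    """Equation 1.49: Lt = sqrt(rho_c/Rsd); rho_c in ohm cm^2, Rsd in ohm/sq -> cm."""
    return math.sqrt(rho_c / Rsd)

def contact_R_times_W(rho_c, Lc, Rsd):
    """R_c*W (ohm cm) using Lc' = Lt*tanh(Lc/Lt) from Equations 1.48 and 1.49."""
    Lt = transfer_length(rho_c, Rsd)
    return rho_c / (Lt * math.tanh(Lc / Lt))

def rho_c_limit(rsat_w, Lc):
    """Equation 1.52: maximum allowed specific contact resistance (ohm cm^2)."""
    return 0.05 * rsat_w * Lc

# Illustrative numbers only: rho_c = 1e-7 ohm cm^2, Rsd = 500 ohm/sq,
# Lc = 90 nm (~2x an assumed 45 nm half-pitch), Rsat*W = 0.05 ohm cm.
rho_c, Rsd, Lc, rsat_w = 1e-7, 500.0, 90e-7, 0.05
print("Lt = %.1f nm" % (1e7 * transfer_length(rho_c, Rsd)))
print("Rc*W = %.2e ohm cm" % contact_R_times_W(rho_c, Lc, Rsd))
print("allowed rho_c = %.1e ohm cm^2" % rho_c_limit(rsat_w, Lc))

For these assumed inputs the allowed specific contact resistance comes out in the low 10^−8 Ω cm^2 range, consistent with the values plotted in Figure 1.29.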

1.4.4 Substrate and Isolation Issues

The substrate and isolation refer to those components of the integrated circuit that provide electrical isolation between the devices and prevent undesired device interactions such as latch-up in CMOS. Isolation has conventionally been achieved by the LOCOS structure. However, because of well-known scaling problems, such as large "bird's beak" regions, new isolation techniques such as trench isolation have become essential for highly scaled CMOS. The 2004 ITRS assumes that shallow trench isolation (STI) will be the standard isolation technology for highly scaled CMOS. There do not appear to be major barriers to implementing trench isolation such as shown in Figure 1.31, although there are always technical challenges in performing fine line etches with high aspect ratios and in uniformly filling the trenches with an insulator. The ability to implement improved isolation is essential to achieving the full benefits of increased scaling in terms of increased packing density. Latch-up control is expected to become less of a problem as operating voltages decrease, and for voltages below 0.6 V it should no longer be a problem.

1.4.5 Thermal Budget Issues

The scaling of MOS devices to ever thinner device layers necessitates going to lower thermal budget processing, achieved by lower temperatures and/or shorter processing times. Some estimates of the allowed thermal budgets can be made based upon the expected device layer thicknesses. For example, the channel doping profile for a retrograde doping structure must not be deeper than the source/drain junction depth. Also, the source/drain extension junction must be very shallow to control short-channel effects. These junctions must be considerably less than the feature size. If we make the assumptions that the peak

FIGURE 1.31 Shallow trench isolation.


doping densities are approximately 100 times the background doping density and that 50% of the junction depth can arise from diffusion, one can establish a maximum allowed value of the quantity √(4Dt), which is approximately 0.23XJ, where D is the impurity diffusion coefficient and t is the time over which diffusion occurs. This is essentially the amount of allowed diffusion, which ranges from approximately 4.7 to 0.8 nm at the 90- and 16-nm nodes. D is typically modeled as a function of temperature with two parameters, D0 and E, as:

$$D = D_0\,\exp(-E/kT) \qquad (1.54)$$

Figure 1.32 shows the allowable times at different temperatures for the 90- to 16-nm technology nodes, based upon the above defined amount of junction diffusion and previously given diffusion coefficient values [49]. Curves are shown for B and As impurities, but results for P are very close to B and results for Sb are very close to As. Results are shown for temperatures of 900°C and below, as the times for 1000°C are probably too short to be practical for the device dimensions in these generations. While there may be some variation in the values depending on the amount of allowed diffusion, these values do not include any transient enhanced diffusion, which may reduce the allowed diffusion times even further. These results illustrate the need for low thermal budget processes for ultra-scaled MOS devices. For the same total process time, the temperature has to be reduced by more than 100°C in moving from the 90- to the 16-nm node. Such low thermal budgets will also push processing toward single wafer, rapid thermal processing steps and will place much more emphasis on the control of transient enhanced diffusion processes.

Limitations on the allowed processing temperature will become more of a concern as devices are scaled to ever smaller dimensions. This, coupled with the very shallow layers and the very high required doping densities, will make ion implantation followed by annealing more problematic for future devices. The times presented in Figure 1.32 do not appear to be sufficient to provide dopant activation and simultaneously suppress transient enhanced diffusion. At some point, other doping technologies will be required to achieve and maintain the required doping profiles. Possible alternative doping techniques include: (1) low temperature (800°C or less) epitaxy (or selective epitaxy), (2) diffusion from a doped glass source, (3) diffusion from a low temperature Ge/Si alloy which can subsequently be etched, (4) direct gas source diffusion in a rapid thermal processing (RTP) system, (5) gas immersion laser diffusion (GILD), and (6) planar doping layers using atomic layer epitaxy.

FIGURE 1.32 Limits to thermal budget for As and B with future device generations. Transient enhanced diffusion is not included.


Of these many possibilities, low temperature epitaxy will probably find a major role, as it will probably also be required for elevated source/drain structures for low resistance contacts, as discussed in the previous section. With low temperature epitaxy, thin heavily doped layers can be achieved that are very difficult, if not impossible, to achieve by other techniques. However, with selective epitaxy, separate hard masking is required for making both n- and p-type S/D regions. GILD with projection masking (PGILD) does not require resists or hard masking and may become an important doping technology. Regardless of the doping technique, obtaining the required shallow junctions and very heavily doped regions will be a major challenge for scaled technologies. Silicon on insulator, as discussed in the next section, may also offer some advantages over bulk MOS in this regard.
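A minimal sketch of the thermal budget estimate is given below; it applies Equation 1.54 with √(4Dt) = 0.23XJ. The D0/Ea pair used is a representative textbook value for boron in silicon (an assumption here, not the data of Ref. [49]), and the junction depths are illustrative:

import math

kB = 8.617e-5   # Boltzmann constant, eV/K

def max_time(XJ_nm, D0, Ea, T_C):
    """Allowed time (s) at T_C (deg C) such that sqrt(4*D*t) = 0.23*XJ.

    D0 (cm^2/s) and Ea (eV) parametrize Equation 1.54.
    """
    D = D0 * math.exp(-Ea / (kB * (T_C + 273.15)))
    L = 0.23 * XJ_nm * 1e-7            # allowed diffusion length, cm
    return L**2 / (4.0 * D)

# Boron in Si (representative intrinsic values, an assumption):
# D0 ~ 0.76 cm^2/s, Ea ~ 3.46 eV.
for XJ, node in ((20.0, "90 nm"), (3.5, "16 nm")):   # illustrative junction depths
    for T in (800.0, 900.0):
        print("%s node (XJ ~ %4.1f nm), %4.0f C: t <= %8.1f s"
              % (node, XJ, T, max_time(XJ, 0.76, 3.46, T)))

For these assumed inputs the allowed times drop from minutes at the 90-nm node to seconds at the 16-nm node, illustrating the trend shown in Figure 1.32.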

1.5 Advanced MOS Device Concepts

While bulk CMOS has been the standard technology for many device generations, new substrate and device concepts may be essential to achieve the ultimate scaling of MOS devices. Some of the most promising approaches being explored are briefly discussed in this section.

1.5.1 SOI Substrates and Devices

The possible switch of the substrate material to SOI at some future point may be essential for achieving the ultimate MOS device dimensions. Silicon on insulator devices have already been used in some applications for improved speed of operation. In SOI, the MOS devices are formed within a thin silicon layer formed by various techniques over an insulating layer (typically oxide or oxynitride). From the MOS device point of view, two types of situations can exist. If the semiconductor layer is sufficiently thick that the surface depletion layer under the gate terminates within the silicon layer, the device is a partially depleted (PD) device. If the silicon layer is sufficiently thin that the silicon material under the gate is fully depleted before the inversion layer forms, the device is a fully depleted (FD) device.

In PD SOI, even though the material under the gate is only partially depleted, the source/drain junctions can extend through the silicon layer, giving reduced parasitic capacitances and a higher operating speed for a given technology generation. Device isolation and latch-up control are simplified for SOI, with fewer processing steps required. This can potentially offset the higher substrate costs and potentially lower yield. From a device viewpoint, there are some drawbacks to PD SOI due to a floating body effect. To eliminate such effects, a contact is required to the body, increasing somewhat the area associated with a given transistor. However, a separate body contact can be an advantage in some logic approaches where threshold voltage switching is used with a separate body voltage.

A major advantage of an SOI substrate is that the depth of the contacting source/drain junctions can be controlled by the thickness of the silicon layer over the oxide layer. This applies to both FD and PD devices and is one way in which the shallow junctions required for deeply scaled MOSFETs can be achieved. For PD devices, this is the major advantage of an SOI substrate, as the device characteristics and scaling properties are essentially the same as for bulk MOS devices. Most of the potential advantages and limitations of FD SOI devices are very similar to those of DG MOS devices, so the discussion of the FD MOS device is combined with that of the DG device in Section 1.5.2. The discussion there will show that the feasibility of FD SOI devices is questionable at the 90-nm and below technology generations because of excessive short-channel effects. One disadvantage of the SOI wafer is its power dissipation capability, which is less than that of a conventional Si wafer because of the poorer thermal properties of an oxide layer. For HP circuits with a high percentage of active devices, this can be a major disadvantage, as power dissipation is becoming more of a limiting factor for scaling, as discussed in Section 1.3.2.

1.5.2 Multiple Gate MOS Devices

Considerable research and modeling have been performed in recent years on multiple gate MOS devices. One of the most extensively studied multiple gate devices is the DG device shown in simplified form in


FIGURE 1.33 Simplified schematic of the dual gate MOSFET structure. The wrap-around gate device also has a similar structure.

Figure 1.33. The structure consists of a thin conductive channel sandwiched between two control gates, each consisting of a dielectric and a gate contact material. Heavily doped source and drain regions extending slightly under the gates make contact to the inversion regions. The DG device is typically assumed to be sufficiently thin as to constitute a fully depleted region within the semiconductor channel. A major advantage of the DG device is that the current drive can potentially be twice as large as that of a single gate device, since an inversion layer can exist near both gates. For very thin channels, these two charge layers can merge into a single conductive layer within the channel.

The practical fabrication of such DG devices is very difficult, and no universally accepted technique has been developed for such structures. Some techniques attempt to fabricate such structures using a planar arrangement in which the channel length and width are in the plane of the silicon wafer. This has many advantages in that the W and L device ratio can be controlled by the lithography in much the same way as for conventional MOS devices. However, forming the dual gates above and below a planar thin silicon channel is very difficult, especially in obtaining exact alignment of the two gates. Such techniques typically use some form of SOI layer for the thin silicon channel. Other fabrication approaches move the plane of the conductive channel out of the wafer surface and into the vertical direction, perpendicular to the wafer surface. The so-called "finfet" is one such example, where the width dimension of the channel is taken in the vertical direction [50,51] with the channel length parallel to the wafer surface (similar to the DELTA FET [52]). The gate then typically wraps around the top of the channel. A variation of this approach is the "tri-gate" device, where the channel width is increased and the gate occurs on the top and two sides of the channel. Another approach is to place the channel in the vertical direction with current flow in the vertical direction from a bottom source contact to a top drain contact. In this approach, the gate can completely surround the channel, giving the so-called "gate-all-around" or "surrounding-gate" device. All of these multiple gate devices have somewhat similar properties and similar advantages and disadvantages. The dual gate structure will be discussed here as an example of all such multiple gate devices.

For the DG structure of Figure 1.33, the important properties are the control of threshold voltage and short-channel effects. This structure has been extensively studied theoretically [33–37,53], and the exponential decay factor for controlling short-channel effects has been shown to be

$$\lambda = \sqrt{\frac{\varepsilon_{si}}{2\varepsilon_{ox}}\left(1 + \frac{\varepsilon_{ox}t_{si}}{4\varepsilon_{si}t_{ox}}\right)t_{si}\,t_{ox}} \qquad (1.55)$$

It has been reported that short-channel effects are acceptable if Leff/2λ > 3. If we use this value in Equation 1.39 to estimate the threshold voltage reduction (with VDB ≈ Vbi ≈ VDD, so that ΔVT ≈ (1 + √2)e^−3), it is consistent with a threshold voltage reduction of about 0.12 V, where 0.1 V is sometimes taken as a limit on acceptable short-channel effects.


From the above limit, an equation for the upper limit on silicon thickness can be obtained as

$$t_{si} \le \frac{2\varepsilon_{si}t_{ox}}{\varepsilon_{ox}}\left[\sqrt{1 + \frac{1}{18}\left(\frac{\varepsilon_{ox}L_{eff}}{\varepsilon_{si}t_{ox}}\right)^2} - 1\right] \qquad (1.56)$$

For a given effective channel length and oxide thickness, this can be used to estimate the allowed thickness of the silicon layer. However, before computing such values, some thought should be given to the oxide thickness term. This equation was derived without consideration of any second order effects, such as quantum size effects, which cause the inversion channel to be located not at the oxide interface but some distance into the silicon. Also, the derivation did not consider the possible use of high-k gate dielectrics. The latter effect can be taken into account by using the EOT value, since tox always appears divided by εox. For the oxide thickness, it is probably most appropriate to replace the tox term by the inversion capacitance oxide thickness value (tciv as used here) to thereby correct for quantum size effects.

Figure 1.34 shows the predicted limits on silicon thickness for the 90- to 16-nm technology nodes. Two limit curves are shown, one using tciv and one using tox in Equation 1.56. While both show similar trends, the limit for tciv is slightly smaller than for tox, as would be expected. Regardless of which limit is used, the most striking result is the very small value of the required silicon thickness, which ranges from approximately 5–7 nm for the 90-nm node to approximately 0.4–0.7 nm for the 16-nm node. However, these values are consistent with other published results. For example, it has been published in Ref. [29] that a 5-nm Si film with a 1.5-nm oxide can be scaled to a channel length of about 20 nm; these three values are all consistent with the 65-nm node data in Figure 1.34. Other researchers have suggested that the allowed thickness of the Si layer may be somewhat larger than determined by the above analysis. For example, Doyle et al. suggest that an acceptable relationship for DG devices is tsi < 2Leff/3 [54]. This limit is about a factor of 2 larger than the values seen in Figure 1.34. Such a value would extend the range of possible DG devices, but at the expense of larger short-channel effects. Other curves shown in Figure 1.34 for reference are the effective channel length and the two possible values to use for the oxide thickness. Also shown in the figure is a horizontal line at approximately 3 nm, indicating that for Si thicknesses below this value quantum box effects must be considered. This arises because the width of the surface inversion layers is on the order of 3–4 nm in Si, with holes having a slightly broader distribution than electrons [55].
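The silicon thickness limit of Equation 1.56 can be evaluated with a short Python sketch; the Leff and tox inputs are illustrative assumptions (an SiO2 gate dielectric is assumed), and the final line simply verifies self-consistency with Equation 1.55:

import math

eps_si, eps_ox = 11.7, 3.9   # relative permittivities (Si and SiO2)

def lam_dg(tsi, tox):
    """Double-gate scale length of Equation 1.55 (lengths in nm)."""
    return math.sqrt((eps_si / (2.0 * eps_ox))
                     * (1.0 + eps_ox * tsi / (4.0 * eps_si * tox)) * tsi * tox)

def tsi_max(Leff, tox):
    """Maximum silicon thickness of Equation 1.56 (from Leff/2*lambda >= 3)."""
    a = eps_ox * Leff / (eps_si * tox)
    return (2.0 * eps_si * tox / eps_ox) * (math.sqrt(1.0 + a * a / 18.0) - 1.0)

# Illustrative: Leff = 32 nm with tox = 1.0 nm.
Leff, tox = 32.0, 1.0
t = tsi_max(Leff, tox)
print("tsi_max = %.2f nm; check Leff/(2*lambda) = %.2f" % (t, Leff / (2 * lam_dg(t, tox))))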

FIGURE 1.34 Thickness of Si for double gate (DG) MOSFET needed to control short-channel effects. (Log-scale plot of thickness in nm vs. technology generation from 90 to 16 nm. Curves: Leff, tsi using tox, tsi using tciv, and the reference values tox and tciv; a horizontal line near 3 nm marks where quantum box effects set in. Annotations note that surround gate Si thickness can be about 1.4 times thicker and single gate SOI about 2 times thinner.)


When the thickness of a semiconductor layer approaches this value, quantum size confinement effects will begin to appear. This will cause the energy level for the conduction electrons (or holes) to increase and the required device threshold voltage to increase. Because of this, it has been projected that it will not be practical to use channel thicknesses very much below approximately 5 nm [29]. This is a very serious potential problem and makes the projections for using DG devices beyond about the 45-nm node very questionable. In the previous section, the possibility of FD single gate MOS devices on SOI was discussed. In many ways, a single gate FD device on SOI looks like one half of a DG structure: if one splits the DG device along the center line parallel to the gate oxide and replaces half of the device by a thick oxide, one has essentially a SG SOI device. It might then be expected that the FD SOI single gate device would have essentially the same limits on silicon thickness as shown in Figure 1.34. However, the limits are actually more restrictive. For a FD SOI device, there is an additional feedback path from the drain to the gate through the thick oxide at the bottom of the channel. This causes the short-channel effects to be more severe than for the DG structure. Several investigators have shown that the short-channel effects for the single gate FD device occur at about twice the channel length as for the DG FD device. Thus, for single gate SOI devices the allowed silicon thickness can be taken as approximately one-half the values given in Figure 1.34. This makes the values very small and brings into question the feasibility of FD SOI devices at 90 nm and below. A major difference between bulk CMOS devices and the SOI and DG devices is the ability to compensate for some of the short-channel effects by the use of overlapping halo implants with bulk CMOS. To achieve a similar effect with DG devices would require some type of lateral variation of channel impurity density. This might be possible in vertical-channel devices where the vertical channel is etched from within a deposited semiconductor layer, but it would seem to be very difficult to achieve in most DG device structures. The discussion has so far been on the DG device, where much theoretical work has been done. For other multiple gate device structures, such as the tri-gate, FinFET, or surround gate device structure, the allowed thickness of the silicon may be slightly larger than for the DG structure. An estimate can be provided by the cylindrical FD device structure, where it has been shown that the natural scale length is about 30% smaller as compared with the DG structure. This means that the silicon thickness for a cylindrical gate device can be about a factor of 1.4 larger than for the DG structure. The short-channel properties of MOS devices are controlled primarily by the α = Leff/2λ parameter. This was discussed earlier for bulk MOS devices in connection with the threshold voltage reduction and DIBL. For FD devices, it has been shown that [35]

\[
S \approx \frac{(kT/q)\,\ln(10)}{1 - 2\exp\!\left(-L_{\mathrm{eff}}/2\lambda\right)}
\tag{1.57}
\]

This has also been shown to apply approximately to the surround gate device, which represents a limiting case of a multiple gate device. A plot of this equation is shown in Figure 1.35. For FD devices, the long-channel value of subthreshold slope approaches the ideal value of approximately 60 mV/dec. As the figure shows, the value degrades rapidly for values of Leff/2λ less than about 3.0, which is the limit used in estimating the allowed value of Si thickness in Figure 1.34. The threshold voltage reduction due to short-channel effects can be estimated with the same equations as previously used (see Equation 1.39). This gives the predicted short-channel effect shown in Figure 1.36, which shows three curves for different drain voltages. Again, the predicted threshold voltage decreases rapidly for Leff/2λ less than about 3.0. These theoretical curves are very similar to published results obtained by much more complete two-dimensional numerical simulations of FD MOS devices. Although there may be some uncertainty in the exact value of Leff/2λ at which a given threshold voltage reduction occurs, this curve should provide a reasonable first order estimate of the effect. Also shown as a vertical line is the limit used to establish an upper limit on the Si thickness of FD devices; this is seen to occur at a threshold voltage reduction of about 0.1 V, which is probably a reasonable value when the actual threshold voltage must be of the same order.
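A minimal numeric check of Equation 1.57, assuming room temperature (kT/q ≈ 25.9 mV; the helper function is ours), reproduces the behavior plotted in Figure 1.35:

```python
import math

def subthreshold_slope(leff_over_2lambda, kt_q=0.0259):
    """Subthreshold slope S from Equation 1.57, returned in mV/dec."""
    return 1000.0 * kt_q * math.log(10.0) / (1.0 - 2.0 * math.exp(-leff_over_2lambda))

for x in (2.0, 3.0, 5.0, 10.0):
    print(f"Leff/2lambda = {x:4.1f} -> S = {subthreshold_slope(x):5.1f} mV/dec")
```

The slope is within a few mV/dec of the ideal ~60 mV/dec for Leff/2λ of 5 or more, about 66 mV/dec at the limit of 3, and degrades rapidly below that.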


FIGURE 1.35 Variation of DG subthreshold slope factor with normalized effective channel length. (Subthreshold slope in mV/dec vs. Leff/2λ from 0 to 10; theory for DG and SG SOI, with values for the surround gate slightly lower; the limit used for estimating silicon thickness, Leff/2λ = 3, is marked.)

For FD devices, the long-channel threshold voltage has been estimated as [37]

\[
V_T = \begin{cases}
\dfrac{q N_A\, t_{\mathrm{si}}\, t_{\mathrm{ox}}}{\varepsilon_{\mathrm{si}}} & \text{for single gate SOI}\\[2ex]
\dfrac{q N_A\, t_{\mathrm{si}}\, t_{\mathrm{ox}}}{2\,\varepsilon_{\mathrm{si}}} & \text{for double gate}
\end{cases}
\tag{1.58}
\]

These represent simply the voltage required to fully deplete the Si layer. From this we can see that the accuracy of the threshold voltage depends on the accuracy with which the thickness of the Si layer can be controlled. For a 10% variation in threshold voltage, the Si layer thickness must be controlled to about 10%; i.e., for a 5-nm Si layer, the thickness control must be about 0.5 nm [29]. Such tight thickness control, coupled with the very thin layers themselves, will prove very difficult to achieve for future devices.
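Because Equation 1.58 is linear in tsi, the sensitivity argument can be made concrete in a few lines of Python. The doping and oxide thickness below are assumed purely for illustration, and the εsi denominator follows Equation 1.58 as written:

```python
Q = 1.602e-19                 # electron charge, C
EPS_SI = 11.7 * 8.854e-12     # permittivity of Si, F/m

def vt_fd_double_gate(n_a, t_si, t_ox):
    """Long-channel FD threshold term of Equation 1.58 (double gate case), SI units."""
    return Q * n_a * t_si * t_ox / (2.0 * EPS_SI)

n_a, t_ox = 1e24, 1.5e-9       # assumed: 1e18 cm^-3 doping, 1.5 nm oxide
for t_si in (5.0e-9, 4.5e-9):  # a 5 nm film vs. one thinned by 0.5 nm (10%)
    vt = vt_fd_double_gate(n_a, t_si, t_ox)
    print(f"t_si = {t_si * 1e9:.1f} nm -> V_T term = {vt * 1e3:.2f} mV")
```

Since this term scales directly with tsi, the 0.5 nm (10%) thickness change produces exactly a 10% shift, which is the origin of the control requirement quoted above.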

FIGURE 1.36 Variation of DG MOSFET threshold voltage with normalized effective channel length. (ΔVT in volts, from 0.1 to −0.5, vs. Leff/2λ from 0 to 10, with curves for VDD = 0.5, 1 (solid line), and 2 V; the limit used for estimating silicon thickness is marked.)


At present, the most promising techniques for producing SOI layers are SIMOX, Smart Cut, and bonded wafers. All of these have great difficulty in obtaining such thin layers with the thickness control projected as needed for future MOS devices. Some fundamental breakthroughs are needed in fabricating thin layers for FD SOI devices as well as for multiple gate devices. The most recently researched multiple gate devices, such as the FinFET or tri-gate devices, do not alleviate the requirement of ultra-thin Si layers for FD devices. For such devices, the thickness of the Si fin used in fabricating the devices must satisfy the same requirements as a DG device. This means that to control short-channel effects, the thickness of the Si layer must be considerably less than the channel length, as indicated in Figure 1.34; in fact, the channel thickness must be four to five times less than the channel length. In most of the out-of-plane multiple gate device concepts, the channel thickness must be controlled by an etching process, with the minimum width controlled by the lithographic process. This means that the channel length must be four to five times larger than the minimum lithographically defined dimension. This will greatly limit the current capability of such devices and may in fact completely offset any current enhancement advantage of such multiple gate devices. Another problem with out-of-plane MOS devices is that one does not have control of both L and W by the lithographic process; one of these is instead controlled by the thickness of some Si layer. This makes it very difficult to adjust the W/L ratios as needed to compensate for the lower mobility of holes, to compensate for the delay times of various logic gates, and to tune this ratio for various analog applications. The W/L ratio can only be adjusted in steps by using multiple devices, again offsetting some of the advantages of multiple gate devices. Another major problem with existing techniques for producing SOI material is the degradation in material quality with thin layers. Most experimental results have shown a degradation in carrier mobility when the SOI layer thickness is reduced below about 10 nm [29]. While it is uncertain whether this is just a material quality problem or a fundamental problem with increased scattering processes, it remains a serious problem with existing SOI techniques. For the reasons outlined above, multiple gate FD MOS devices face very formidable obstacles in competing with bulk MOS devices. Controlling short-channel effects requires very thin FD Si layers with respect to the channel length, and manufacturing techniques for producing such thin layers with electrical properties comparable to bulk Si are not available. Breakthroughs will be required in the manufacture of such multiple gate devices if they are to become the mainstream MOS devices. A final summary of some of the multiple gate MOS concepts is shown in Table 1.3 along with some of the potential advantages and weaknesses.

TABLE 1.3 Summary of Some Multiple Gate FET Concepts

Concept | Advantages | Particular strengths | Potential weakness
Tied gates (Ngates > 2) | Higher Id and thicker fin | Thicker Si body | Limited device width and corner effects
Side wall conduction (FinFET, tri-gate) | Higher Id, improved S, and improved SC effects | Ease of integration | Fin thickness and shape
Double gate, planar conduction | Higher Id, improved S, and improved SC effects | Bulk compatible and good Si thickness control | Limited width
Independent double gate, planar conduction | Improved SC effects | Electrically adjustable threshold voltage | Difficult integration and degraded S
Vertical conduction, wrap-around gate | Improved SC effects and 3D integration | Litho independent gate length | Single gate length process integration

Source: Selected from 2004 International Technology Roadmap for Semiconductors, http://www.itrs.net/Links/2004Update/2004Update.htm


1.5.3 Transport Enhanced MOS Devices

One technique for enhancing the current capability of MOS devices at small dimensions is the application of appropriate mechanical strain to the silicon channel region [56–60]. Mechanical strain causes important changes in the internal band structure of the Si. Figure 1.37 illustrates some of the important changes in the bands [60]. For the conduction band, strain splits the sixfold degenerate energy bands along the ⟨100⟩ crystal directions into two groups of bands. With sufficient strain, essentially all the electrons can be transferred into the lower group of bands, and if transport is along the appropriate crystal direction, the electrons will behave along the channel as if they had a smaller effective mass than in unstrained Si. For the valence band, strain again splits the degenerate heavy and light hole bands into two bands, with each band having a smaller effective mass than the heavy hole band in the unstrained Si. The most effective type of stress depends on the direction of stress relative to the MOS channel: for p-channel devices it should be compressive stress along the channel length dimension or tensile stress along the width dimension, while for n-channel devices the stress should be tensile along either channel direction. The smaller effective mass of electrons or holes resulting from the splitting of the degenerate energy bands can enhance carrier transport along the channel. A very simple illustration is for the carrier mobility, which can be expressed in the most basic form as

\[
\mu = \frac{q\,\langle t\rangle}{m^{*}}
\tag{1.59}
\]

where ⟨t⟩ is the mean time between scattering events and m* is the effective mass. A smaller effective mass in this equation will result in a larger mobility. This is somewhat oversimplified, as the mean time between scattering events may also depend on effective mass. However, for Si, the net result of appropriate mechanical stress is to give an enhanced low-field carrier mobility, and this has been experimentally verified for both electrons and holes.

FIGURE 1.37 Effects of Si strain on energy bands. (Conduction band: the sixfold degenerate Δ6 valleys of unstrained Si split under strain into Δ2 and Δ4 groups, with the transverse mass mt smaller than the longitudinal mass ml. Valence band: the heavy-hole (HH), light-hole (LH), and spin-orbit bands of bulk Si split in strained Si, with ΔE ≤ 38 meV at 10% Ge.) (From Ieong, M., B. Doris, J. Kedzierski, K. Rim, and M. Yang, Science, 306, (2004): 2057.)


The mobility enhancement factor has been found to be as large as 1.8× for electrons and 1.5× for holes at low fields [56–60]. Just as important as mobility for determining current drive is the saturated drift velocity of electrons or holes. The saturated drift velocity arises physically because of the rapid generation of optical phonons when the carrier energy in a high electric field exceeds the optical phonon energy (Eop ≈ 0.05 eV for the LO phonon). A somewhat oversimplified model for the saturated drift velocity predicts [61]

\[
v_{\mathrm{sat}} \approx \sqrt{\frac{8\,E_{\mathrm{op}}}{3\pi\, m^{*}}}
\tag{1.60}
\]

From this, one can see that a reduction in the effective mass should result in a larger saturated drift velocity. From the device model in Section 1.3.2, one can see that a larger low-field mobility and a larger saturated velocity will both contribute to a larger device current, all other parameters remaining fixed. The application of mechanical stress will likely become a standard technique to enhance device current as devices are pushed to ever smaller dimensions. Reported results indicate that the MOS drive current can be enhanced by 15%–25% with the use of mechanical stress. There are several possible ways by which mechanical stress can be obtained in MOS devices. One approach uses a stress-relaxed SiGe layer between a Si device layer and a Si substrate. When the SiGe layer is sufficiently thick, the stress will relax in the layer, producing a SiGe layer with a lattice constant larger than that of pure Si. When a sufficiently thin Si layer is then grown epitaxially on top of the SiGe layer, the resulting Si lattice will seek to match that of the SiGe layer, giving a larger lattice spacing in the plane of the film and a Si film under tension. Other possibilities for stress include deposited films of various types that are under stress at room temperature, such as nitride films. For optimum p- and n-channel devices, the p-channel device should be under compression, while the n-channel device should be under tension. One attractive means of accomplishing this has been reported (by Intel) and consists of separate approaches for the two types of transistors [62,63]. For the p-channel device, a SiGe layer is selectively grown in the source/drain regions of the transistor. Since the SiGe layer has a larger lattice constant, it tends to expand in the source/drain region, putting the Si channel under uniaxial compressive strain. For the n-channel devices, a highly tensile silicon nitride film is deposited over the top of the transistor, resulting in a uniaxial tensile strain in the n-channel device [62,63]. There are certainly questions as to how well such techniques will work across devices of multiple channel lengths, but strain optimization will probably be an important tool in the future device designer's toolbox.
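A rough numeric illustration of Equation 1.60 is given below; the two effective masses are assumed values chosen only to show the 1/√m* scaling, not data from the references:

```python
import math

Q = 1.602e-19    # electron charge, C
M0 = 9.109e-31   # free electron mass, kg

def v_sat(e_op_ev, m_eff):
    """Saturated drift velocity from Equation 1.60: sqrt(8*Eop/(3*pi*m*)), SI units."""
    return math.sqrt(8.0 * e_op_ev * Q / (3.0 * math.pi * m_eff))

E_OP = 0.05  # eV, the LO phonon energy quoted in the text
for label, m_rel in (("heavier carrier (assumed m* = 0.26 m0)", 0.26),
                     ("lighter carrier (assumed m* = 0.19 m0)", 0.19)):
    print(f"{label}: v_sat ~ {v_sat(E_OP, m_rel * M0):.2e} m/s")
```

Since vsat scales as 1/√m*, the roughly 27% lighter mass in this example gives about a 17% higher saturated velocity, illustrating why strain-induced mass lowering helps drive current even in the velocity-saturated regime.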

1.5.4 MOSFETs with Other Semiconductors

Another possible approach to enhanced transport is to move to another semiconductor, such as SiGe alloys, for the active region of the MOS devices. Other more exotic semiconductors such as InSb (with very high mobilities) may also be considered. These are certainly research issues to be pursued. However, one must not look just at carrier mobility, but at such parameters as saturated drift velocity, which is probably more important than low-field mobility. For Ge, the saturated drift velocities are known to be lower than for Si because of a lower optical phonon energy (see Equation 1.60). The use of SiGe with a small Ge concentration might give an enhancement from mobility before the loss due to lower velocity saturation occurs, but this is not obvious. For any move away from Si as the channel material, obtaining an acceptable gate dielectric will also be a formidable task. Because of the multitude of problems with any alternative semiconductor, it is not clear that such concepts for enhanced transport will be practical in the real world or for cost-competitive CMOS ICs.

1.5.5 Advanced Semiconductor Device Concepts

As Si CMOS devices are scaled to their ultimate limit in size, there is continued and growing interest in exploring new semiconductor device concepts that might have lower size limits or lower power


requirements that could continue the integration of electronic functions to even greater densities and higher performance. While there is no guarantee that such efforts will be successful, they must be pursued so that all possible avenues for extending the electronics revolution beyond the fundamental Si limits can be explored. Viewed in the broad context, MOS devices are charge-based devices: a gate voltage controls the conductive charge in the MOS channel. As such, all charge-based devices are limited by their capacitance, voltage, and current capability. This is captured succinctly in the CV/I term related to switching speed. All charge-based device concepts will have somewhat similar limitations, and it is difficult to envision other materials and physical arrangements of charge-based devices competing with Si-based MOS devices from both a performance and a cost perspective. Thus, advanced device concepts for beyond the Si MOS limits are probably best sought among concepts that are not charge based. Some of the most important avenues are

1. Molecular and biological-based devices. In these concepts, the properties of individual molecules are used for transport and charge storage. The potential advantage is the small possible size of individual molecules, each of which might constitute a single device.
2. Spintronics. These concepts seek to use the properties of spin, with its two quantized states, to represent and process information. The potential advantage is the storage and manipulation of information on a single electron.
3. Quantum interference devices. These involve concepts for performing logic operations using the wave nature of electrons and the quantum interference effects between electron waves. Again, the advantage is the potentially small size of devices.
4. Phase change devices. This broad classification involves using phase changes to store and process information, such as crystalline-amorphous, magnetic-nonmagnetic, or metal-insulator phase changes. The potential advantage is breaking the dependence on charge-based concepts.
5. Optical switches. These involve concepts to use photons for information processing and storage instead of electrons, another non-charge-based approach.

Much research is needed in any of these areas to bring forth alternative device concepts that can successfully compete with Si-based MOS devices. Some of the most promising non-MOS device concepts are summarized in Table 1.4. Some of these are still charge-based devices that have potentially smaller size

TABLE 1.4 Summary of Some New Device Concepts

Device type | 1D structures (e.g., carbon nanotube) | Resonant tunneling devices | Single electron devices | Molecular-based devices | Quantum cellular automata (QCA) | Spin-based transistors
Supported architectures | Conventional and crossbar | Conventional and cellular neural networks (CNN) | CNN | Memory based | QCA | Quantum
Cell size (pitch) | 100 nm | 100 nm | 40 nm | Unknown | 60 nm | 100 nm
Density (#/cm²) | 3×10⁹ | 3×10⁹ | 6×10¹⁰ | 1×10¹² | 3×10¹⁰ | 3×10⁹
Switch speed | Unknown | 1 THz | 1 GHz | Unknown | 30 MHz | 700 GHz
Circuit speed | 30 GHz | 30 GHz | 1 GHz | Unknown | 1 MHz | 30 GHz
Switching energy (J) | 2×10⁻¹⁸ | >2×10⁻¹⁸ | 1×10⁻¹⁸ | 1.3×10⁻¹⁶ | >4×10⁻¹⁷ | 2×10⁻¹⁸
Binary throughput (Gbit/ns/cm²) | 86 | 86 | 10 | Unknown | 0.06 | 86
Operating temperature | RT | RT | 20 K | RT | RT or cryogenic | Cryogenic

Source: Selected from 2004 International Technology Roadmap for Semiconductors, http://www.itrs.net/Common/2004Update/2004Update.htm


limits than MOS devices, and some are new non-charge-based device concepts. In all cases, any new device must provide very low power, be compatible with high levels of integration at very low cost, and provide logic-level gain in a three-terminal device. Since relatively little is known about how such devices could be integrated in large numbers, many of the entries in the table must be taken as best estimates. Only the future will tell whether any of these devices will replace MOS devices or find a role in complementing MOS devices.

1.6 Conclusions

It is obvious that Si MOS devices will be pushed to their ultimate limits of scaling. Exactly which technology generation is the ultimate limit for Si devices is somewhat debatable, and many experts in the past have predicted the premature demise of scaled Si MOS devices. History has proven that Si MOS device concepts can be pushed much further than most experts envision at any particular point in time. However, there are limits both from a physics point of view and perhaps from an economic point of view. As discussed in previous sections, the golden era of easy scaling has passed; scaling beyond about the 90-nm node becomes increasingly difficult and increasingly requires new materials and innovative concepts. Scaling may continue to the end of the roadmap (in about 2018), or it may simply become too difficult or too costly to continue that far. In any case, scaling Si MOS devices to the 16-nm node will require significant innovative concepts to achieve the predicted dimensions and device performance. To achieve the ultimate limits of semiconductor devices, the general trends in manufacturing must certainly be toward ever thinner vertical as well as horizontal dimensions. This in turn must, of necessity, push manufacturing toward lower thermal budget processing. At the same time, for economic reasons, manufacturing is pushed toward larger wafer sizes. The continued downsizing of MOS devices will require that many new materials, such as high-k gate dielectrics and metal gates, be introduced into the manufacturing process. New processes, such as selective epitaxy and selective depositions, will be required. Device structures will require the control of interface layers, such as the dielectric–silicon interface, to atomic dimensions. Such control will in turn require higher levels of automation and more in situ monitoring of the manufacturing process. Although the continued scaling of MOS devices is becoming more difficult, the IC industry appears equal to the task of pushing MOS devices to the ultimate limits at near atomic dimensions.

References

1. Arora, N. MOSFET Models for VLSI Circuit Simulation. New York: Springer, 1993.
2. Marcyk, G. "High Performance Non-Planar Tri-gate Transistor Architecture." INTEL publication, ftp://download.intel.com/technology/silicon/Marcyk_tri_gate_0902.pdf (accessed on February, 2007).
3. The National Technology Roadmap for Semiconductors, 1997 Edition. Published by the Semiconductor Industry Association, 1997.
4. International Technology Roadmap for Semiconductors, http://www.itrs.net/Links/2004Update/2004Update.htm (accessed on February, 2007).
5. Stern, F., and W. E. Howard. "Properties of Semiconductor Surface Inversion Layers in the Electric Quantum Limit." Phys. Rev. 163 (1967): 817.
6. Stern, F. "Self-Consistent Results for n-Type Si Inversion Layers." Phys. Rev. B 5 (1972): 4891.
7. Stern, F. "Quantum Properties of Surface Space-Charge Layers." CRC Crit. Rev. Solid State Sci. 4 (1974): 499.
8. Van Dort, M. J., P. H. Woerlee, and A. J. Walker. "A Simple Model for Quantization Effects in Heavily-Doped Silicon MOSFETs at Inversion Conditions." Solid State Electron. 37 (1994): 411.


9. Rios, R., and N. D. Arora. "Determination of Ultra-Thin Gate Oxide Thickness for CMOS Structures Using Quantum Effects." IEDM Tech. Digest. (1994): 613.
10. Takagi, S., A. Toriumi, M. Iwase, and H. Tango. "On the Universality of Inversion Layer Mobility in Si MOSFETs: Part I—Effects of Substrate Impurity Concentration." IEEE Trans. ED 41 (1994): 2357.
11. Hu, C. "Gate Oxide Scaling Limits and Projections." IEDM Tech. Digest. (1996): 319.
12. Taur, Y., D. A. Buchanan, W. Chen, D. J. Frank, K. E. Ismail, S.-H. Lo, G. A. Sai-Halasz, et al. "CMOS Scaling into the Nanometer Regime." Proc. IEEE 85 (1997): 486.
13. Momose, H. S., M. Ono, T. Yoshitomi, T. Ohguro, S. Nakamura, M. Saito, and H. Iwai. "Tunneling Gate Oxide Approach to Ultra-High Current Drive in Small Geometry MOSFETs." IEDM Tech. Digest. (1994): 593.
14. Kane, E. O. "Theory of Tunneling." J. Appl. Phys. 32 (1961): 83.
15. Parker, C., G. Lucovsky, and J. R. Hauser. "Ultrathin Oxide–Nitride Gate Dielectric MOSFETs." IEEE Elect. Dev. Lett. 19 (1998): 106.
16. Yang, H., and G. Lucovsky. "Integration of Ultrathin (1.6–2.0 nm) RPECVD Oxynitride Gate Dielectrics into Dual Poly-Si Gate Submicron CMOSFETs." IEDM Tech. Digest. (1999): 245.
17. Groeseneken, G., L. Pantisano, L. A. Ragnarsson, R. Degraeve, M. Houssa, T. Kauerauf, P. Roussel, S. De Gendt, and M. Heyns. "Achievements and Challenges for the Electrical Performance of MOSFETs with High-k Gate Dielectrics." Symp. Phys. Failure Anal. IC (2004): 147.
18. Gusev, E. P., D. A. Buchanan, E. Cartier, A. Kumar, D. DiMaria, S. Guha, A. Callegari, et al. "Ultrathin High-k Gate Stacks for Advanced CMOS Devices." IEDM Tech. Digest. (2001): 451.
19. Lo, G. D., D. L. Kwong, K. J. Abbott, and D. Nagarian. "Thickness Dependence of Charge-Trapping Properties in Ultrathin Thermal Oxides Prepared by Rapid Thermal Oxidation." J. Electrochem. Soc. 140 (1993): L16.
20. Toriumi, A., J. Koga, H. Satake, and A. Ohata. "Performance and Reliability Concerns of Ultra-Thin Gate Oxide MOSFETs." IEDM Tech. Digest. (1995): 847.
21. Lu, C.-H., G. M. T. Wong, M. D. Deal, W. Tsai, P. Majhi, C. O. Chui, M. R. Visokay, et al. "Characteristics and Mechanism of Tunable Work Function Gate Electrodes Using a Bilayer Metal Structure on SiO2 and HfO2." IEEE Elect. Dev. Lett. 26 (2005): 445.
22. Lee, J. H., H. Zhong, Y.-S. Suh, G. Heuss, J. Gurganus, B. Chen, and V. Misra. "Tunable Work Function Dual Metal Gate Technology for Bulk and Non-Bulk CMOS." IEDM Tech. Digest. (2002): 359.
23. Lee, J. H., Y.-S. Suh, H. Lazar, R. Jha, J. Gurganus, Y. Lin, and V. Misra. "Compatibility of Dual Metal Gate Electrodes with High-k Dielectrics for CMOS." IEDM Tech. Digest. (2003): 232.
24. Kim, Y. H., C. H. Lee, T. S. Jeon, W. P. Bai, C. H. Choi, S. J. Lee, L. Xinjian, R. Clarks, D. Roberts, and D. L. Kwong. "High Quality CVD TaN Gate Electrode for Sub-100 nm MOS Devices." IEDM Tech. Digest. (2001): 667.
25. Datta, S., G. Dewey, M. Doczy, B. S. Doyle, B. Jin, J. Kavalieros, R. Kotlyar, M. Metz, N. Zelick, and R. Chau. "High Mobility Si/SiGe Strained Channel MOS Transistors with HfO2/TiN Gate Stack." IEDM Tech. Digest. (2003): 653.
26. Suh, Y. S., G. Heuss, H. Zhong, S. N. Hong, and V. Misra. "Electrical Characteristics of TaSixNy Gate Electrodes for Dual Gate Si-CMOS Devices." Digest. 2001 Symp. VLSI Tech. (2001): 47.
27. Yu, H. Y., J. F. Kang, C. Ren, J. D. Chen, Y. T. Hou, C. Shen, M. F. Li, et al. "Robust High-Quality HfN-HfO2 Gate Stack for Advanced MOS Device Applications." IEEE Elect. Dev. Lett. 25 (2004): 70.
28. Anil, K. G., A. Veloso, S. Kubicek, T. Schram, E. Augendre, J. F. deMarneffe, K. Devriendt, et al. "Demonstration of Fully Ni-Silicided Metal Gates on HfO2 Based High-k Gate Dielectrics as a Candidate for Low Power Applications." Digest. 2004 Symp. VLSI Tech. (2004): 190.
29. Frank, D. J., R. H. Dennard, E. Nowak, P. M. Solomon, Y. Taur, and H.-S. P. Wong. "Device Scaling Limits of Si MOSFETs and Their Application Dependencies." Proc. IEEE 89 (2001): 259.
30. Taur, Y., C. H. Wann, and D. J. Frank. "25 nm CMOS Design Considerations." IEDM Tech. Digest. (1998): 789.
31. Moll, J. L. Physics of Semiconductors. New York: McGraw-Hill, 1964.


32. Brews, J. R. "Sensitivity of Subthreshold Current to Profile Variations in Long-Channel MOSFETs." IEEE Trans. ED (1996): 2164.
33. Yan, R.-H., A. Ourmazd, and K. F. Lee. "Scaling the Si MOSFET: From Bulk to SOI to Bulk." IEEE Trans. ED 39 (1992): 1704.
34. Suzuki, K., T. Tanaka, Y. Tosaka, H. Horie, and Y. Arimoto. "Scaling Theory for Double-Gate SOI MOSFETs." IEEE Trans. ED 40 (1993): 2326.
35. Tosaka, Y., K. Suzuki, and T. Sugii. "Scaling-Parameter-Dependent Model for Subthreshold Swing S in Double-Gate MOSFETs." IEEE Elect. Dev. Lett. 15 (1994): 466.
36. Wong, H.-S., D. J. Frank, Y. Taur, and J. M. C. Stork. "Design and Performance Considerations for Sub-0.1 μm Double-Gate SOI MOSFETs." IEDM Tech. Digest. (1994): 747.
37. Wann, C. H., K. Noda, T. Tanaka, M. Yoshida, and C. Hu. "A Comparative Study of Advanced MOSFET Concepts." IEEE Trans. ED 43 (1996): 1742.
38. Auth, C. P., and J. D. Plummer. "Scaling Theory for Cylindrical, Fully-Depleted, Surround-Gate MOSFETs." IEEE Elect. Dev. Lett. 18 (1997): 74.
39. Pimbley, J. M. "Two-Dimensional Current Flow in the MOSFET Source-Drain." IEEE Trans. ED ED-33 (1986): 986.
40. Ng, K. K., and W. T. Lynch. "Analysis of the Gate-Voltage-Dependent Series Resistance of MOSFETs." IEEE Trans. ED ED-33 (1986): 965.
41. Ng, K. K., and W. T. Lynch. "The Impact of Intrinsic Series Resistance on MOSFET Scaling." IEEE Trans. ED ED-34 (1987): 503.
42. Tsui, B.-Y., and M.-C. Chen. "Series Resistance of Self-Aligned Silicided Source/Drain Structure." IEEE Trans. ED 40 (1993): 197.
43. Ng, K. K., R. J. Bayruns, and S. C. Fang. "The Spreading Resistance of MOSFETs." IEEE Elect. Dev. Lett. EDL-6 (1985): 195.
44. Chang, C. Y., Y. K. Fang, and S. M. Sze. "Specific Contact Resistance of Metal-Semiconductor Barriers." Solid State Electron. 14 (1971): 54.
45. Chieh, Y.-S., J. P. Krusius, D. Green, and M. Ozturk. "Low-Resistance Bandgap-Engineered W/Si1−xGex/Si Contacts." IEEE Elect. Dev. Lett. 17 (1996): 360.
46. Gannavaram, S., N. Pesovic, and M. Ozturk. "Low Temperature (≤800°C) Recessed Junction Selective Silicon–Germanium Source/Drain Technology for Sub-70 nm CMOS." IEDM Tech. Digest. (2000): 437.
47. Ozturk, M., J. Liu, H. Mo, and N. Pesovic. "Advanced Si1−xGex Source/Drain and Contact Technologies for Sub-70 nm CMOS." IEDM Tech. Digest. (2002): 375.
48. Liu, J., and M. Ozturk. "Nickel Germanosilicide Contacts Formed on Heavily Boron Doped Si1−xGex Source/Drain Junctions for Nanoscale CMOS." IEEE Trans. ED 52 (2005): 1535.
49. Nishi, Y., and R. Doering, eds. Handbook of Semiconductor Manufacturing Technology, 19. New York: Marcel Dekker, 2000.
50. Huang, X., W.-C. Lee, C. Kuo, D. Hisamoto, L. Chang, J. Kedzierski, E. Anderson, et al. "Sub 50 nm FinFET: PMOS." IEDM Tech. Digest. (1999): 67.
51. Choi, Y.-K., N. Lindert, P. Xuan, S. Tang, D. Ha, E. Anderson, T.-J. King, J. Bokor, and C. Hu. "Sub-20 nm CMOS FinFET Technologies." IEDM Tech. Digest. (2001): 421.
52. Hisamoto, D., T. Kaga, and E. Takeda. "Impact of the Vertical SOI 'DELTA' Structure on Planar Device Technology." IEEE Trans. ED 38 (1991): 1419.
53. Wong, H.-S. P., D. J. Frank, and P. M. Solomon. "Device Design Considerations for Double-Gate, Ground-Plane, and Single-Gate Ultra-Thin SOI MOSFETs at the 25 nm Channel Length Generation." IEDM Tech. Digest. (1998): 407.
54. Doyle, B., R. Arghavani, D. Barlage, S. Datta, M. Doczy, J. Kavalieros, A. Murthy, and R. Chau. "Transistor Elements for 30 nm Physical Gate Lengths and Beyond." Intel Tech. J. 6 (2002): 42.
55. Li, Y., S.-M. Yu, C.-S. Tang, and T.-S. Chao. "Comparison of Quantum Correction Models for Ultrathin Oxide Single- and Double-Gate MOS Structures Under the Inversion Condition." Proc. 2003 IEEE Conf. Nanotech. 1 (2003): 36.
56. Welser, J., J. L. Hoyt, S. Takagi, and J. F. Gibbons. "Strain Dependence of the Performance Enhancement in Strained-Si n-MOSFETs." IEDM Tech. Digest. (1994): 373.


57. Rim, K., S. Koester, M. Hargrove, J. Chu, P. M. Mooney, J. Ott, T. Kanarsky, et al. "Strained Si NMOSFETs for High Performance CMOS Technology." Tech. Digest. 2001 VLSI Symp. (2001): 59.
58. Hoyt, J. L., H. M. Nayfeh, S. Eguchi, I. Aberg, G. Xia, T. Drake, E. A. Fitzgerald, and D. A. Antoniadis. "Strained Silicon MOSFET Technology." IEDM Tech. Digest. (2001): 23.
59. Rim, K., J. Chu, H. Chen, K. A. Jenkins, T. Kanarsky, K. Lee, A. Mocuta, et al. "Characteristics and Device Design of Sub-100 nm Strained Si N- and PMOSFETs." Tech. Digest. 2002 VLSI Symp. (2002): 98.
60. Ieong, M., B. Doris, J. Kedzierski, K. Rim, and M. Yang. "Silicon Device Scaling to the Sub-10 nm Regime." Science 306 (2004): 2057.
61. Sze, S. M. Physics of Semiconductor Devices. New York: Wiley Interscience, 1981.
62. Thompson, S. E., M. Armstrong, C. Auth, S. Cea, R. Chau, G. Glass, T. Hoffman, et al. "A Logic Nanotechnology Featuring Strained-Silicon." IEEE Elect. Dev. Lett. 25 (2004): 191.
63. Thompson, S. E., M. Armstrong, C. Auth, M. Alavi, M. Buehler, R. Chau, S. Cea, et al. "A 90-nm Logic Technology Featuring Strained-Silicon." IEEE Trans. ED 51 (2004): 1790.


2

Overview of Interconnect—Copper and Low-k Integration

Girish A. Dixit
Robert H. Havemann
Novellus Systems, Inc.

2.1 Introduction
2.2 Dual Damascene Copper Integration
2.3 Copper/Low-k Reliability
2.4 Conclusion
References

Over the past decade, integrated circuit scaling and performance needs have driven significant changes in interconnect materials and processes at each successive technology generation. Foremost among these changes has been the transition from aluminum to copper conductors [1–7]—a transition that is virtually complete for logic devices and now underway for memory devices [8,9]. The primary impetus for this ongoing transition has been a need for the improved performance afforded by copper’s lower resistivity as compared with aluminum as well as by copper’s ability to accommodate higher current densities. The need for improved performance has also driven a concomitant change in the insulator surrounding the conductor, which for logic devices has transitioned from the traditional silicon dioxide dielectric to materials with lower dielectric constant (low-k), such as F-doped oxides and C-doped oxides. The simultaneous integration of copper with low-k dielectrics presented a significant challenge to the industry, and, while the manufacturing use of copper interconnects has become pervasive, each successive technology generation offers new challenges in terms of meeting density, performance, and reliability requirements. This chapter will provide an overview of copper and low-k interconnect integration including process architectures, materials, performance, and reliability issues as well as future scaling challenges and potential technology directions.

2.1 Introduction

The Information Revolution and enabling era of silicon Ultra-Large-Scale-Integration (ULSI) have spawned an ever-increasing level of functional integration on-chip, driving a need for greater circuit density and higher performance. For classical transistor scaling, device performance improves as the gate length and the gate dielectric thickness are scaled. Only recently have new materials, such as high-k gate dielectrics and metal gates, been considered essential for continued transistor scaling. In contrast, as the chip wiring (interconnect) is scaled, performance degrades as both resistance and current density increase due to the smaller cross-sectional area of the scaled conductor. The introduction of copper metallization served as an enabler for continued interconnect scaling due to its lower resistivity


(~1.8 μΩ-cm) as compared with traditional AlCu metallization (~3.3 μΩ-cm), as well as its ability to accommodate higher current density [10,11]. An additional consequence of scaling is an increase in sidewall capacitance as conductors are placed in closer proximity to one another. While metal thickness can be reduced to mitigate the increase in sidewall capacitance, the consequences of this approach are increased resistance and current density. Alternative circuit design solutions, such as increasing conductor spacing and/or adding extra levels of interconnect with relaxed design rules, have the drawbacks of reduced density and increased cost. The introduction of low-k dielectrics provided a materials solution that mitigated sidewall capacitance [12–18] and provided more latitude in the co-optimization of process architecture and circuit design. Previous analyses [19–25] have highlighted the interconnect performance issues that are incurred as integrated circuit design rules continue to scale. As illustrated in the International Technology Roadmap for Semiconductors (ITRS) (see Figure 2.1), a chief concern is the increasing latency or resistance-capacitance (RC) delay of global wiring [26,27]. Since local and intermediate interconnects tend to scale in length, latency is dominated by global interconnects connecting large functional logic blocks, as shown in Figure 2.2. Future increases in microprocessor chip size predicted by the ITRS [27] bring heightened concern, since interconnect latency is proportional to the square of the length. While design solutions such as the use of repeaters (as shown in Figure 2.1) or reverse scaling may mitigate latency in the near term, these approaches typically result in larger chip size and/or more levels of interconnect, leading to higher product cost. For local and intermediate wiring levels, crosstalk is an additional interconnect performance issue that must be considered. Signal crosstalk is given by the ratio of line-to-line (sidewall) capacitance to total capacitance, as shown in Figure 2.3. As transistor operating voltage continues to scale downward, interconnect crosstalk and noise levels must be reduced to avoid spurious transistor turn-on. Since crosstalk is dominated by interconnect sidewall capacitance (as is overall capacitance at minimum feature size, as shown in Figure 2.3), process-related solutions, such as the use of thinner metallization

FIGURE 2.1 Relative delay for a logic gate (fan out = 4) and for local and global interconnects as a function of technology node. (Relative delay from 0.1 to 100 vs. process technology node from 250 to 35 nm, after R. Ho and M. Horowitz: local wire signal delay improves with scaling, global wire signal delay increases with scaling, and repeater insertion reduces global delay.) (From Deodhar, V. V. and Davis, J. A., Tech. Dig. Int. Symp. Circuits Syst., V-349–V-352, 2003.)


FIGURE 2.2 Example scaling of global vs. local interconnects: the length of global interconnects (wires between macros) does not scale relative to chip size, whereas the length of local interconnects (wires within a macro) is reduced by the scaling factor. (Panels compare a chip and its macro circuit with a more advanced chip and its scaled macro circuit.)

and/or low-k dielectrics, must be implemented to enable continued scaling. This is yet another reason why copper and low-k dielectrics have become an essential part of the integrated circuit (IC) scaling engine. As operating frequency continues to increase, power dissipation in the interconnect system, which is proportional to both switching frequency and capacitance, has become a significant portion of the overall power dissipated in the chip, as shown in Table 2.1 and discussed in several recent papers [28–37]. Thus, the need to limit interconnect power dissipation provides yet another impetus for capacitance reduction in addition to latency concerns. Typical high performance designs utilize a hierarchical or "reverse scaling" metallization scheme (Figure 2.4), where widely spaced "fat wires" are used on the upper global interconnect and power levels to minimize RC delay and voltage drop. Maintaining power distribution at constant voltage through equipotential wires to all Vdd bias points requires increasingly lower resistance global wires as operating voltage continues to scale and switching frequencies increase. This need has been partially addressed by the introduction of ball-grid-array packaging technology [38–40] that distributes individual power feeds across the chip.

FIGURE 2.3 Simulation of interconnect capacitance (×10⁻¹⁶ F/μm) as a function of feature size (0–3.5 μm), assuming fixed metal height and equal line/space. Line-to-line capacitance (CL–L) comes to dominate the total capacitance as feature size decreases, while line-to-ground capacitance (CL–G) falls. Interconnect crosstalk is given by the line-to-line capacitance divided by the total capacitance, crosstalk ≈ CL–L/(CL–L + CL–G).
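The crosstalk expression from Figure 2.3 can be sketched directly; the capacitance values below are assumed for illustration only, not read from the plot:

```python
def crosstalk(c_ll, c_lg):
    """Crosstalk fraction from Figure 2.3: C_LL / (C_LL + C_LG)."""
    return c_ll / (c_ll + c_lg)

# Assumed values, on the same 1e-16 F/um scale as Figure 2.3:
print(f"relaxed pitch (C_LL = 0.5, C_LG = 1.5): {crosstalk(0.5, 1.5):.0%}")
print(f"tight pitch   (C_LL = 1.5, C_LG = 0.5): {crosstalk(1.5, 0.5):.0%}")
```

As line-to-line capacitance grows to dominate at small feature sizes, the crosstalk fraction climbs from 25% to 75% in this toy example, which is why thinner metal and/or lower-k dielectrics become necessary.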


TABLE 2.1 Selected Overall Technology Characteristics Based on the 2005 International Technology Roadmap for Semiconductors

Year | 2005 | 2007 | 2010 | 2013 | 2016 | 2019
MPU (microprocessor unit) ½ pitch (nm) | 90 | 68 | 45 | 32 | 22 | 16
MPU patterned gate length (nm) | 54 | 48 | 30 | 21 | 15 | 11
DRAM ½ pitch (nm) | 80 | 65 | 45 | 32 | 22 | 16
MPU: frequency of on-chip clock for high performance (MHz) | 5204 | 9285 | 15,079 | 22,980 | 39,683 | 62,443
MPU: frequency of chip to board for high performance (MHz) | 3125 | 4883 | 9536 | 18,625 | 36,378 | 71,051
High volume MPU (cost-performance) chip size at production (mm²) | 111 | 140 | 140 | 140 | 140 | 140
High volume MPU (cost-performance) Mtransistors/cm² | 174 | 276 | 552 | 1104 | 2209 | 4417
DRAM memory chip size at production (mm²) | 88 | 110 | 93 | 93 | 93 | 93
DRAM memory chip Gbits/cm² | 1.22 | 1.94 | 4.62 | 9.23 | 18.46 | 36.93
Maximum power w/high performance heat sink (W) | 167 | 189 | 198 | 198 | 198 | 198
Maximum power, battery (hand-held) (W) | 2.8 | 3.0 | 3.0 | 3.0 | 3.0 | 3.0
Minimum logic Vdd, maximum performance (V) | 1.1 | 1.1 | 1.0 | 0.9 | 0.8 | 0.7
Minimum logic Vdd, lowest power (battery power) (V) | 0.9 | 0.8 | 0.7 | 0.60 | 0.50 | 0.5

Source: From International Technology Roadmap for Semiconductors, published by the Semiconductor Industry Association, 2005.

However, new packaging technologies will undoubtedly be required to alleviate the increasing level of power dissipation generated on-chip. Over the past decade, the aforementioned device scaling and performance needs have driven dramatic changes in interconnect materials and processes at each successive technology generation. While the motivation for moving from aluminum to copper metallization and from oxide to low-k dielectrics is clear, significant material and process innovation has been and will continue to be required to meet the interconnect goals set forth in the ITRS. A summary of key interconnect requirements from the ITRS [26] is shown in Table 2.2. The smaller feature sizes and higher aspect ratios projected for copper dual damascene structures necessitate thinner and more conformal metal barriers to prevent copper diffusion into surrounding dielectrics. While advanced physical vapor deposition (PVD) barrier/seed deposition technologies have proven to be extendable to at least the 45 nm technology node, new metal deposition techniques, such as atomic layer deposition (ALD), will ultimately be required to achieve ultra-thin barriers.

FIGURE 2.4 Example of hierarchical interconnect architecture used in 90 nm node digital signal processor (DSP) (Courtesy of Texas Instruments).


TABLE 2.2 Selected Interconnect Technology Projections from the 2005 International Technology Roadmap for Semiconductors

Year | 2005 | 2007 | 2010 | 2013 | 2016 | 2019
MPU ½ pitch (nm) | 90 | 68 | 45 | 32 | 22 | 16
MPU gate length (nm) | 32 | 25 | 18 | 13 | 9 | 6
Number of metal levels | 11 | 11 | 12 | 13 | 13 | 14
Number of optional levels – ground planes/capacitors | 4 | 4 | 4 | 4 | 4 | 4
Jmax (A/cm²) – intermediate wire (at 105°C) | 8.91×10⁵ | 2.08×10⁶ | 5.15×10⁶ | 8.08×10⁶ | 1.47×10⁷ | 2.23×10⁷
Metal 1 (Cu) wiring pitch (nm) | 180 | 136 | 90 | 64 | 44 | 32
Metal 1 A/R | 1.7 | 1.7 | 1.8 | 1.9 | 2 | 2
Metal 1 barrier/cladding thickness (nm) | 6.5 | 4.8 | 3.3 | 2.4 | 1.7 | 1.2
Cu thinning at minimum Metal 1 pitch due to erosion (nm), 10% × height, 50% areal density, 500 μm square array | 15 | 12 | 8 | 6 | 4 | 3
Metal 1 effective resistivity (assumes conformal barrier and includes electron scattering effects) (μΩ-cm) | 3.15 | 3.47 | 4.08 | 4.83 | 6.01 | 7.34
Metal 1 RC delay (ps) over 1 mm with effective resistivity | 440 | 767 | 1792 | 3451 | 8040 | 15,853
Intermediate wiring pitch (nm) | 200 | 140 | 90 | 64 | 44 | 32
Intermediate wiring dual damascene A/R (Cu wire/via) | 1.7/1.5 | 1.8/1.6 | 1.8/1.6 | 1.9/1.7 | 2.0/1.8 | 2.0/1.8
Intermediate wiring barrier/cladding thickness (nm) | 7.3 | 5.2 | 3.3 | 2.4 | 1.7 | 1.2
Cu thinning at minimum intermediate pitch due to erosion (nm), 10% × height, 50% areal density, 500 μm square array | 17 | 13 | 8 | 6 | 4 | 3
Intermediate metal effective resistivity (assumes conformal barrier and includes electron scattering effects) (μΩ-cm) | 3.07 | 3.43 | 4.08 | 4.83 | 6.01 | 7.34
Intermediate wiring RC delay (ps) over 1 mm calculated using the effective resistivity shown above | 355 | 682 | 1825 | 3504 | 8147 | 16,059
Minimum global wiring pitch (nm) | 300 | 210 | 135 | 96 | 66 | 48
Global wiring dual damascene A/R (Cu wire/via) | 2.2/2.0 | 2.3/2.1 | 2.4/2.2 | 2.5/2.3 | 2.6/2.4 | 2.8/2.5
Global wiring barrier/cladding thickness (nm) | 7.3 | 5.2 | 3.3 | 2.4 | 1.7 | 1.2
Cu thinning of global wiring due to dishing (nm), 100 μm wide feature | 24 | 19 | 14 | 10 | 8 | 6
Global metal effective resistivity (assumes conformal barrier and includes electron scattering effects) (μΩ-cm) | 2.53 | 2.73 | 3.10 | 3.52 | 4.20 | 4.93
Global wiring RC delay (ps) over 1 mm calculated using the effective resistivity shown above | 111 | 209 | 523 | 977 | 2210 | 4064
Interlevel metal insulator effective dielectric constant (k) | 3.1–3.4 | 2.7–3.0 | 2.5–2.8 | 2.1–2.4 | 1.9–2.2 | 1.6–1.9
Minimum expected bulk dielectric constant (k) | ≤2.7 | ≤2.4 | 2.2 | 2.0 | 1.8 | 1.6

Source: From International Technology Roadmap for Semiconductors, published by the Semiconductor Industry Association, 2005.
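The RC delay rows of Table 2.2 can be approximated to first order from the resistivity and geometry columns. In the sketch below, the total wire capacitance (~400 fF/mm) is an assumed illustrative value, since the roadmap's own capacitance model is not reproduced here:

```python
def wire_rc_delay_ps(rho_uohm_cm, width_nm, aspect_ratio, c_ff_per_mm, length_mm=1.0):
    """First-order distributed RC delay (0.5 * R * C) of a wire, returned in ps."""
    height_nm = aspect_ratio * width_nm
    r_ohm = (rho_uohm_cm * 1e-8) * (length_mm * 1e-3) / (width_nm * height_nm * 1e-18)
    c_farad = c_ff_per_mm * 1e-15 * length_mm
    return 0.5 * r_ohm * c_farad * 1e12

# 2005 Metal 1 column of Table 2.2: 3.15 uOhm-cm, 180 nm pitch (90 nm line), A/R 1.7
print(f"M1 RC delay over 1 mm ~ {wire_rc_delay_ps(3.15, 90.0, 1.7, 400.0):.0f} ps")
```

With these inputs, the estimate comes out near 460 ps, close to the 440 ps entry in the 2005 Metal 1 row, confirming that the table's delay numbers follow directly from resistivity, cross-section, and capacitance.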

Likewise, while improvements in copper electroplating technology have enabled extendibility through multiple generations, new chemistries and techniques must be developed to accelerate bottom-up fill and to plate on high resistivity seeds as feature size continues to decrease. Paralleling the development of advanced copper metallization techniques is an equally important effort focused on lower k dielectric materials. The integration of new low-k dielectrics brings numerous reliability concerns that include thermally or mechanically induced cracking or adhesion loss, poor mechanical strength, moisture absorption, lower dielectric breakdown voltage/time dependent dielectric breakdown (TDDB), texture effects, and poor thermal conductivity. The reduced mechanical strength of porous low-k dielectrics is of particular concern in both processing (especially during chemical–mechanical polishing (CMP)) and packaging. Thus, mechanical properties such as hardness, modulus, cohesive strength, cracking limit, and crack propagation velocity have been key metrics for ongoing materials development. For so-called "porous" ultra-low-k (ULK) materials, co-optimization of deposition chemistry with porogen removal and anneal has yielded dielectric materials that are compatible with advanced manufacturing requirements [41–48]. However, while current progress is encouraging, past history underscores the difficulty of introducing new low-k materials into production, and much work remains to be done.


In general, a key integration challenge for ULK materials and for scaling copper damascene is the extendibility of CMP techniques. In addition to accommodating more mechanically fragile and chemically hydrophobic dielectric materials, future CMP processes must also contend with tighter dishing and erosion specifications. The decrease in metal thickness dictated by scaling means that to maintain design tolerances for resistance and capacitance, tighter control of vertical dimensions is required in the damascene processes (see Table 2.2). Achieving these goals will likely require co-optimization of both electroplating and planarization techniques as well as enhanced in situ process control. While significant copper and low-k process integration challenges lie ahead, several generations of copper/low-k production provide a solid foundation for moving forward. The next section will describe some of the key process integration challenges and solutions that have contributed to the successful implementation of copper/low-k interconnects in high volume production.

2.2 Dual Damascene Copper Integration

Subtractive etch, the approach used in fabricating aluminum-based interconnects, is inapplicable to the fabrication of copper-based interconnects due to the lack of volatility of copper–halide complexes at moderate temperatures. As a result, copper interconnect fabrication requires a damascene approach, whereby the metallization is inlaid into interconnect geometries that are pattern-transferred into the dielectric of interest.

FIGURE 2.5 Comparative process flows associated with the fabrication of interconnect structures using subtractive and damascene technologies. (Subtractive etch, Al: positive pattern in photoresist, metal etch, dielectric deposition, dielectric planarization by CMP. Damascene, W/Cu: dielectric deposition over an etch stop, negative pattern, dielectric etch, metal deposition, metal CMP.)


TABLE 2.3 Key Process and Performance Differences for Subtractive vs. Damascene Interconnect Technologies

Subtractive | Dual Damascene
Interconnect resistance variance depends mostly on line width variance (driven by litho/etch bias, wet cleans) | Interconnect resistance variance depends on line width/depth variance (driven by litho/etch bias, wet cleans, etch depth uniformity)
Resistance variance mostly constant with increasing width and thickness | Resistance variance may change with increasing width and thickness
CLL variance depends on line width variance | CLL variance depends on line width/depth variance
CXO variance depends on line width/depth variance | CXO variance depends on line width/depth variance
Via depth fixed for each level of interconnect | Via depth a function of layout (width of overlying line)
Self-limiting oxidation of Al | Cu oxidation not self-limiting
Chemical–mechanical polishing (CMP) limited to single material | Chemical–mechanical polishing (CMP) of composite structure

A flow comparison of the subtractive and damascene sequences is illustrated in Figure 2.5. A dual damascene process also offers lower fabrication cost due to the limited use of chemical–mechanical planarization compared to the multiple uses of this unit process in the subtractive etch fabrication of interconnects. In addition, low via resistances are achieved through the reduction of the number of high resistivity interfaces in the interconnect structure. However, the dielectric etch and metal fill processes face higher aspect ratios due to the dual damascene structure. Table 2.3 lists key differences between the two approaches. Dual damascene copper interconnects may be fabricated using two primary schemes, a via first scheme or a trench first scheme, as outlined in Figure 2.6. Continued scaling of interconnect geometries also requires integrating low permittivity materials into the copper interconnect structure. The chemically amplified materials used in the photolithographic steps of pattern transfer exhibit increased sensitivity to the impurities in the chemical vapor deposited low permittivity dielectric materials. Interactions between impurities (N, H, and combinations of these) in the low permittivity dielectric films and the UV lithography resist lead to a loss of sensitivity of the photoactive compounds in the pattern definition layers. The interaction between the amine groups and photoactive compounds in the resist may lead to

FIGURE 2.6 Comparison of via first vs. trench first dual damascene approaches. (Legend: resist, sacrificial fill, metal, dielectric barrier, low-k dielectric.)


FIGURE 2.7 The schematic representation of resist poisoning (left), scanning electron microscopy (SEM) top down, and cross-section micrographs illustrating the pattern disruption due to undeveloped resist. (From Dixit, G., Proceedings of the International Reliability Physics Symposium, Tutorial Notes, 2004.)

undeveloped resist, thus preventing the formation of all the required features in a multilevel interconnect structure. Figure 2.7 shows the phenomenon of resist poisoning occurring in the trench pattern step of a via first dual damascene scheme [49]. Various modifications to the sequence and details of the process steps used in the fabrication flow may be employed to overcome the risk of resist poisoning. Figure 2.8 outlines the various schemes used in dual damascene fabrication [50,51]. In the self-aligned approach shown in Figure 2.8a, the via level dielectric, or interlayer dielectric (ILD), and an etch stop layer (typically silicon nitride or silicon carbide for inorganic ILDs and oxide for organic ILDs) are sequentially deposited, followed by patterning and etching of the via into the etch stop layer. The dielectric for the trench is then deposited onto the patterned etch stop layer. The trench features are delineated into this dielectric, and the trench etch is extended to complete transferring the via pattern from the etch stop layer into the interlayer dielectric. The etch stop layer defines the trench height while maintaining a vertical profile of the via sidewall. The etch stop layer is removed from the bottom of the trench during the final etch step, which simultaneously clears the dielectric barrier from the bottom of the via. The chief advantage of this buried via approach is that all patterning is done on planar surfaces; major disadvantages include the need for an etch stop layer (which increases sidewall capacitance), the need for high etch selectivity to the etch stop layer, and susceptibility to partial via definition if trench and via are misaligned. Partial vias present a potential reliability issue, and thus this integration scheme should be avoided unless ample alignment tolerance is provided in the product design. The via first approach for dual damascene has been a workhorse for the industry. In this scheme, the entire dielectric stack (including an intervening etch stop layer if desired) for a given interconnect level is deposited prior to pattern definition. The vias are then patterned and etched down to the etch stop layer, as shown in Figure 2.8b. The high aspect ratio vias are filled with a sacrificial inorganic material to protect the underlying etch stop layer during the trench etch. The sacrificial fill material also assists the trench patterning process by limiting the variation in resist thickness over the via. The etch rate of the sacrificial fill material is required to be similar to or slightly higher than the etch rate of the dielectric during the trench etch. Bottom anti-reflective coating (BARC) layers are frequently used as sacrificial fills in via first dual damascene.


FIGURE 2.8 Schematic flows of improved via first and trench first approaches to overcome challenges in the pattern transfer process for low-k porous dielectrics: (a) self-aligned approach, (b) via first approach, (c) via first tri-layer approach, (d) dual top hard mask approach. (Legend: resist, sacrificial fill, metal, dielectric barrier, low-k dielectric, dielectric hard mask, metallic hard mask.)

In the trench first approach, the via is patterned on the etched trench, which may present significant topography. In the case of misalignment, partial vias can only be avoided by extended overetch to ensure that the full ILD/inter-metal dielectric (IMD) thickness is cleared, which taxes etch selectivity to the via etch stop layer (not protected by a BARC, as in the case of the via first approach). The via first tri-layer approach (Figure 2.8c) utilizes a sacrificial dielectric film, such as undoped silicon dioxide deposited on top of the sacrificial via fill material, to improve the fidelity of the pattern transfer process. A low temperature dielectric deposition process is preferred to ensure compatibility with the sacrificial fill materials, whose glass transition temperatures reside below those of conventional dielectric deposition processes. The dielectric film serves as a barrier to prevent interaction between the resist and contaminants from the underlying low dielectric constant materials, thus enhancing robustness against resist poisoning. The low etch rate of the silicon dioxide during the low-k etch process, and the further separation between the low-k dielectric and the UV resist, offer advantages in controlling the sidewall roughness of the resultant features. Figure 2.8d depicts one of the trench first dual damascene sequences with two hard mask layers [52,53]. The sacrificial hard mask stack consists of a dielectric/metal nitride bilayer. Materials such as titanium nitride may be used as a metal hard mask. The optical transparency of this material is a key requirement to ensure alignment between the successive pattern steps, and the transparency requirements limit the useable thicknesses of these films. The process steps typically involve patterning and etching of the trench features into the metal hard mask, followed by via patterning. Following the via etch, a sacrificial fill of the via may be utilized prior to completing the trench etch with the metal nitride mask. Typical post-etch resist removal processes include an oxidizing plasma, which in turn may also cause oxidation and loss of carbon from the sidewalls of the trench features. The oxidation damage to low-k materials is undesirable as it leads to an increase in the dielectric permittivity. Due to the absence of resist following the trench etch, an oxidizing plasma clean is unnecessary and the post-etch clean may be accomplished through the use of


TABLE 2.4 Process and Integration Challenges for Different Dual Damascene Schemes

Approach: Challenges
Self-aligned: Risk of forming partial vias, worse with scaling; additional k impact due to middle etch stop layer; has not been reported in any practical use
Trench first: Difficult to clear resist puddle in trench (depth of focus); risk of misalignment leading to partial vias; alignment issue aggravated with scaling; resist poisoning
Via first: Fencing (different etch rates of dielectric/sacrificial via fill); resist poisoning and lithography rework; low-k damage in post-etch cleaning; trench etch process requires further optimization to control profiles and limit fencing
Via first tri-layer: Sacrificial dielectric compatibility with sacrificial fill layer; multiple materials in stack (etch selectivity management)
Dual top hard mask: Significant etch process challenge, may require multiple tools; top hard mask compatibility with Cu chemical–mechanical polishing (CMP)

solvents, thus eliminating the risk of oxidizing the low-k material. This approach involves a metal etch step to define the trench into the metal nitride, and this may require additional process equipment compared to the approaches without metal hard masks. Table 2.4 summarizes the challenges in the various dual damascene schemes. Integrating low permittivity materials with porosity, high carbon content, and marginal mechanical properties presents challenges in the areas of interface engineering, pattern transfer, and metallization. Figure 2.9 summarizes various issues that may be faced during the dual damascene integration of copper with low dielectric constant materials. In order to lower the effective interconnect capacitance between successive generations of integrated circuits, it is necessary to reduce the dielectric constant of the bulk dielectric as well as that of the dielectric etch stop layer. As a result, the commonly used etch stop layer, such as silicon nitride, is replaced with nitrogen doped silicon carbide. The presence of carbon in the etch stop

[Figure 2.9 annotations: top CD spacing; E-CD control due to CMP dishing and erosion; Cu filling of trench and via (barrier/seed, ECP); sidewall barrier; voids; uncontrolled etch bottom; corrosion; etch profile; via-to-Cu etch and pre-clean; contact to W plug etch and pre-clean; adhesion; fencing; post-CMP dielectric passivation; effective dielectric constant with etch; etch profiles and selectivity; metal barrier for Cu; reliability (EM, BTS).]

FIGURE 2.9 Schematic representation of issues faced in the integration of Cu/low-k dielectric interconnect structures. (From Dixit, G., Proceedings of the International Reliability Physics Symposium, Tutorial Notes, 2004.)


layer as well as in the bulk dielectric leads to a concern for interface delamination, as the carbon in both these films may segregate preferentially to the surfaces [54,55]. Innovative pre- and post-dielectric deposition treatments are utilized to denude the surfaces of excess carbon so as to promote strong interfacial adhesion between the various films in the dielectric stack. The local changes in film composition may in turn present challenges to the dielectric etch and post-etch cleaning processes, as the etch rates at the interfaces may differ from the etch rates of the bulk materials. Plasma etch of the multilayer dielectric stack challenges the etch unit processes in maintaining the etch selectivity between the different layers, while simultaneously producing acceptable profiles of the resultant via and trench features [56,57]. Intervening etch stop layers in the via/trench architecture raise the effective capacitance of the structure, and such layers are therefore undesirable. Eliminating the intermediate etch stop may result in features with non-optimal shapes, such as facets, micro-trenches, or fences, as shown in Figure 2.10. Tailoring the etch unit process to meet the varying requirements of etch rate and selectivity between the different materials, with minimal loading effects, is of primary importance in designing out the intermediate etch stop layers from the dual damascene structure. The impact of resist removal processes on the dielectric properties of low dielectric constant materials needs careful evaluation [58–61]. The chemical plasmas used for stripping the residual resist and the by-products of etch adversely affect the carbon concentration of the low-k materials. The porous structure of the low-k material is then susceptible to moisture adsorption and may exhibit a large increase in permittivity. Figure 2.11 shows the impact on the final trench structure produced after etch, ash, and solvent clean. Undesirable undercut of the sidewall is noted in the case of an unoptimized resist removal process. Figure 2.12 shows the lateral carbon concentration profile on the trench sidewall for structures exposed to different resist removal plasmas. A reduction in the carbon concentration close to the sidewall is seen for both types of processes, and the width of the zone with variable carbon content differs significantly between the two processes. These physical changes to the low-k dielectric material lead to alteration of its electrical properties, as seen in Figure 2.13, where the normalized product of resistance and capacitance of an interconnect structure is plotted for different resist removal processes. The choice of an optimized etch and resist removal sequence in the dual damascene scheme is thus of high importance in achieving effective electrical performance.
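Because the damaged layer and the remaining bulk dielectric act roughly as capacitors in series across the line-to-line gap, a first-order estimate of the capacitance penalty is straightforward. The following is a minimal sketch, not the chapter's model; the spacing, bulk k, damaged-layer k, and damage depths are illustrative assumptions:

```python
# Series-capacitor estimate of the effective k across a line-to-line gap
# with a carbon-depleted, moisture-absorbing damaged layer on each sidewall.
# All numbers are illustrative assumptions, not values from this chapter.

def effective_k(spacing_nm, damage_nm, k_bulk, k_damaged):
    """Effective dielectric constant of damaged/bulk/damaged in series."""
    s, td = spacing_nm, damage_nm
    if 2 * td >= s:
        return k_damaged  # gap fully converted to damaged material
    return s / ((s - 2 * td) / k_bulk + 2 * td / k_damaged)

# Assumed case: 100 nm spacing, k = 2.7 porous SiCOH bulk, damaged layer k = 4.0.
for td in (0, 10, 30):  # nm of damage per sidewall
    keff = effective_k(100, td, 2.7, 4.0)
    print(f"damage {td:2d} nm/side: k_eff = {keff:.2f} "
          f"(+{100 * (keff / 2.7 - 1):.0f}% line-to-line capacitance)")
```

Even ~10 nm of damage per sidewall is a several-percent capacitance penalty at 100 nm spacing, which is the qualitative trend behind the RC spread plotted in Figure 2.13.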


FIGURE 2.10 SEM cross-section images illustrating undesirable results, such as micro-trenching, faceting, and fencing encountered due to non-optimized pattern transfer. (From Dixit, G., Proceedings of the International Reliability Physics Symposium, Tutorial Notes, 2004.)


FIGURE 2.11 SEM cross-sections of trenches etched into a porous low-k dielectric (no ash, standard ash, and optimized ash). The post-etch ash induced damage to the dielectric is noted in the form of an undercut of the low-k material by the buffered hydrofluoric acid (HF) decoration and results in unwanted line width gain. (From Dixit, G., Proceedings of the International Reliability Physics Symposium, Tutorial Notes, 2004.)

[Figure 2.12 panels: oxidizing chemistry and reducing chemistry; EELS-O and EELS-C counts vs. position (nm); carbon loss zone widths of approximately 30 and 100 nm.]

FIGURE 2.12 Transmission electron microscope (TEM) electron energy loss spectroscopy (EELS) profiles of carbon and oxygen concentrations across the dielectric between trenches. Varying widths of the carbon loss zone are noted with different post-etch ash chemistries. (From Dalton, T. J., Fuller, N., Tweedie, C., Dunn, D., Labelle, C., Gates, S., Colburn, M., et al., Tech. Dig. IEEE Int. Interconnect Tech. Conf., 154–56, 2004.)


FIGURE 2.13 A plot showing a normalized RC product, measured on an interdigitated metal comb structure, as a function of different post-etch ash processes. Damage to the dielectric leads to higher capacitance and higher RC product. (From Dixit, G., Proceedings of the International Reliability Physics Symposium, Tutorial Notes, 2004.)


In order to address downstream process compatibility, dual damascene features fabricated in low-k dielectric materials may contain sacrificial layers, such as dielectric or metal hard masks, which remain in place prior to metallization and are removed during the chemical–mechanical polish of copper. These sacrificial layers increase the aspect ratios of the damascene features and place additional stress on the dielectric etch and copper fill processes. The liner/barrier processes for contact and via schemes typically include an inert-ion-based sputter pre-clean that ensures consistent ohmic contact between the under/overlying metallic layers. The sputter pre-clean redistributes the material removed from the interface of interest, and the sputtered species are deposited onto the sidewalls of the features (Figure 2.14). In the case of copper low-k interconnects, the redeposited film presents an additional challenge, as this material may then diffuse into the porous dielectric, degrading the intra-metal isolation and increasing the risk of copper contamination of the entire structure. The outdiffusion of the redeposited material into the dielectric, and the resultant free volume within the interconnect feature, also contribute to an increased risk of stress migration in interconnects. The high aspect ratios due to the presence of sacrificial layers in the dual damascene flow challenge the capability of the physical vapor deposition process to provide continuous conformal deposition. The ease of integration between the physical vapor deposited layers and the subsequent electroplating of copper has led to the need to extend the physical vapor deposition process for copper barrier and seed layers. In order to overcome the line-of-sight limitations of physical vapor deposition, innovative approaches such as resputtering of the barrier layer are utilized to improve the coverage of the barrier layers on the sidewalls of the vias and trenches. The resputter in copper barrier deposition may also be used to eliminate the inert ion sputter pre-clean process. In the barrier first approach [62], the copper barrier resputter step is tuned to achieve a controlled penetration of the via into the underlying metal. Due to the varying aspect ratios of vias in the dual damascene architecture, the bottom coverage of the barrier may vary significantly among vias within different trench geometries. As a result, in achieving a controlled recess of the via, the dielectric in certain trench geometries may be exposed during the resputter step, thereby leading

[Figure 2.14 annotations; process sequence: step 1, Ar+ pre-clean (Cu from the bottom of the via is sputtered onto the via sidewalls); step 2, barrier deposition (coverage can be compromised by Cu agglomerated on the sidewall during pre-clean); step 3, Cu seed and fill. Stress migration: under thermal stress, the Cu originally sputtered on the sidewall diffuses into the dielectric, further exposing weak points in the barrier.]

FIGURE 2.14 Schematic representations of process sequence of pre-clean in the Cu barrier/seed deposition and its impact on failure of via (TEM cross-section) subjected to thermal stress. (From Dixit, G., Proceedings of the International Reliability Physics Symposium, Tutorial Notes, 2004.)


FIGURE 2.15 SEM cross-sections illustrating the damage that may be caused to the trench dielectric due to un-optimized resputtering in the copper barrier deposition process.

to micro-trenching or trench bottom roughening, as seen in Figure 2.15. The rough trench bottom poses a reliability risk due to the possibility of micro-voids at the barrier/copper seed interface. Precise control of the barrier deposition parameters, to achieve differentiated barrier thicknesses at the bottoms of trench and via features, is necessary to eliminate damage to the dielectric and roughening during the resputtering process. The composition of the Ta(N) layer, as well as the crystallographic texture of the Ta deposited on top of the Ta(N), plays a key role in determining the continuity of the copper seed deposited on top of the barrier [63]. A continuous seed with good adhesion is necessary to ensure complete filling of the electroplated copper into the dual damascene features. As the interconnect geometries scale into the sub-100-nm regime, the thickness of the copper seed deposited in these features also needs to be scaled down to prevent pinch-off prior to the electroplating step. Electroplating on thin copper seeds with relatively high sheet resistance is challenged by the terminal effect in the plating cell, whereby the voltage drop across the wafer diameter induces a large non-uniformity in the electroplated film thickness and may also lead to voiding within the features. Continued development in barrier/seed deposition and electroplating of copper is required to extend these processes to future generations of interconnects. The resistivity of copper in sub-50 nm features is significantly higher than the resistivity in wider lines (Figure 2.16) [64,65]. The relatively high fraction of these narrow lines' volume occupied by the highly resistive Ta(N)/Ta barriers, the smaller grain size of the copper in narrow features compared to wider features, and the sidewall roughness of the damascene trenches all contribute to the increased resistivity. Thin, high efficiency barriers, such as atomic layer deposited Ta(N), and the integration of these materials with thin, low resistivity seed layers for electroplating, are necessary to ensure the extendibility of copper metallization to decanano-scale interconnect features. A number of difficulties are encountered in the chemical–mechanical planarization of copper low-k dielectrics. Carbon doped low dielectric constant materials exhibit lower hardness and modulus. While the dual damascene pattern transfer may be accomplished through the use of suitable hard mask layers, in order to realize the maximum benefit in the overall capacitance of the interconnect, the sacrificial masking layers need to be removed during the chemical–mechanical planarization processes. The exposure of the low-k materials to the polishing environment presents hurdles in maintaining acceptable dishing and erosion of the copper low-k structures during chemical–mechanical planarization.
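The terminal effect mentioned above can be put in rough numbers. Treating the seed as a resistive sheet fed from an edge contact and assuming, as a zeroth-order simplification, a uniform deposition current density, the ohmic drop from edge to center integrates to j·Rs·R²/4. The sketch below uses illustrative values (seed resistivity, wafer size, current density), not data from this chapter:

```python
# Zeroth-order "terminal effect" estimate: with edge contacts, the plating
# current must flow radially through the thin Cu seed, and the ohmic drop
# leaves the wafer center at a lower overpotential than the edge.
# Integrating dV = I(r)*Rs/(2*pi*r) dr with I(r) = j*pi*r**2 gives
# delta_V = j*Rs*R**2/4. Illustrative values only.

RHO_SEED = 1.9e-8  # ohm*m, assumed thin-film Cu resistivity (above bulk)

def center_to_edge_drop(seed_nm, wafer_radius_m, j_a_per_m2):
    rs = RHO_SEED / (seed_nm * 1e-9)  # seed sheet resistance, ohm/sq
    return j_a_per_m2 * rs * wafer_radius_m ** 2 / 4

for seed_nm in (100, 50, 20):
    dv = center_to_edge_drop(seed_nm, 0.15, 100.0)  # 300 mm wafer, 10 mA/cm^2
    print(f"{seed_nm:3d} nm seed: ~{dv * 1e3:.0f} mV edge-to-center drop")
```

Since plating overpotentials are themselves only a few hundred millivolts, drops of this magnitude explain why thinning the seed without countermeasures degrades fill uniformity.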


[Figure 2.16: resistivity (μΩ·cm), 1.5–3.5, vs. line thickness/width (nm), 0–400; curves for a non-optimized copper process and for an optimized copper fill and anneal; blanket-film reference level (large grain, no grain scattering).]

FIGURE 2.16 Measured copper resistivity as a function of line width. Resistivity increases at smaller line widths due to electron scattering from grain boundaries and sidewalls. Optimization of plating chemistry and post-plate anneal for large grain growth reduces grain boundary scattering.
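The trend in Figure 2.16 can be rationalized with the standard Fuchs–Sondheimer (surface scattering) and Mayadas–Shatzkes (grain boundary scattering) size-effect models. The sketch below combines simplified forms of the two additively, in the spirit of Matthiessen's rule; the specularity, grain-boundary reflection coefficient, aspect ratio, and the assumption that grain size is pinned to the line width are all illustrative choices, not parameters fitted to the figure:

```python
# Simplified additive (Matthiessen-style) combination of Fuchs-Sondheimer
# surface scattering and a small-alpha Mayadas-Shatzkes grain boundary term.
# Specularity p, reflection coefficient R, and grain size ~ line width are
# illustrative assumptions.

RHO_BULK = 1.72   # micro-ohm*cm, bulk Cu
MFP = 39.0        # nm, electron mean free path in Cu

def cu_resistivity(width_nm, height_nm, p=0.0, refl=0.3):
    surface = 0.375 * (1 - p) * MFP * (1 / width_nm + 1 / height_nm)
    alpha = (MFP / width_nm) * refl / (1 - refl)   # grain size ~ width
    return RHO_BULK * (1 + surface + 1.5 * alpha)  # small-alpha MS term

for w in (400, 100, 50):  # line width, nm; assume a 2:1 aspect ratio
    print(f"{w:3d} nm line: ~{cu_resistivity(w, 2 * w):.2f} micro-ohm*cm")
```

With these assumptions the model moves from near-bulk resistivity at 400 nm to roughly double the bulk value at 50 nm, the same qualitative behavior as the measured curves.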

Precise control of the shear force applied during the polishing process is necessary to achieve good process performance as well as to minimize mechanical damage to the low-k materials [66–73]. The hydrophobic nature of porous low-k materials, with their inferior wetting characteristics, requires the implementation of advanced vapor drying techniques to minimize defects such as residues of the polishing slurry and watermarks. The 90 and 65 nm generation devices have been successfully integrated with copper and low permittivity materials with bulk dielectric constants in the range of 2.9–3.1. Numerous challenges have been overcome in achieving good device yields and reliability with this first generation of porous films. Further scaling of interconnects with forthcoming device generations will require the integration of even lower permittivity materials (2.2 < k < 2.9) that enable sizeable reductions in the interconnect capacitance. The inevitable degradation of mechanical properties with lower permittivity insulators poses formidable challenges to the integration of these materials into multilevel interconnect structures. Modifications to existing dual damascene fabrication schemes and processes may offer means to overcome some of the difficulties in the pattern transfer module. While the hierarchical reverse scaling approach to interconnect architecture offers some relief in the RC delay of the intermediate and global metal levels, continued scaling of physical dimensions demands the extension of lower permittivity dielectrics into the higher levels of multilevel interconnects. Lower permittivity materials also entail higher residual tensile stresses, increasing the risk of crack initiation and propagation. Engineering the packaging related processes [74–79] to accommodate devices with multiple levels of porous materials of low mechanical strength is key to realizing the successful implementation of low permittivity materials for intermediate and global interconnections. The increasing resistivity of copper in thinner and narrower lines raises serious concern about the extendibility of copper metallization. Efforts to reduce the relatively high volume fraction of high resistivity barrier layers within the metal lines are hampered by the difficulties in reliably integrating atomic layer deposited barrier films with the copper metallization. Novel design approaches that comprehend and accommodate resistivity-dependent interconnect line widths may be necessary to overcome the projected increase of line resistance, as well as the increased variance in line resistance due to the pattern-dependent limitations of chemical–mechanical planarization. Ultimately, the extendibility of copper metallization with low permittivity insulators will depend on the economic and practical factors that dictate the new cost-sensitive era of consumer devices.
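As a rough guide to what the 2.2 < k < 2.9 target range implies for film structure, a Maxwell–Garnett effective-medium estimate for spherical air pores in a dense matrix can be inverted for the required porosity. This is a generic dielectric-mixing sketch with an assumed matrix k, not a statement about any specific process:

```python
# Maxwell-Garnett estimate of the porosity needed to reach a target k with
# spherical air pores (k = 1) in a dense matrix. Matrix k = 2.9 is assumed.

def porosity_for_k(k_target, k_matrix=2.9):
    lhs = (k_target - k_matrix) / (k_target + 2 * k_matrix)
    per_unit_phi = (1 - k_matrix) / (1 + 2 * k_matrix)
    return lhs / per_unit_phi

for kt in (2.6, 2.4, 2.2):
    print(f"k = {kt}: ~{porosity_for_k(kt) * 100:.0f}% porosity")
```

Under these assumptions, reaching k = 2.2 requires on the order of 30% porosity, which is consistent with the mechanical-property concerns raised above.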

2.3 Copper/Low-k Reliability

Copper metallization offers significant reliability improvement as compared with aluminum metallization, but also presents several new integration and reliability challenges. Since copper readily diffuses into silicon and most dielectrics, copper leads must be encapsulated with metallic (such as Ta and TaN) and dielectric (such as SiN and SiC) diffusion barriers to prevent electrical leakage between adjacent metal leads and degradation of transistor performance. Copper diffusion is greatly enhanced by the electric fields imposed between adjacent leads during device operation (~1×10^5 V/cm), so absolute barrier integrity is crucial to long-term device reliability (see Figure 2.17). Validation of barrier reliability required a new test procedure, bias temperature stress (BTS), which is now prevalently used by the industry [80–83]. The electromigration behavior of copper also differs from that of aluminum in that surface diffusion tends to dominate over grain boundary diffusion. This difference may be one reason that preliminary data show deterioration in copper reliability at very small feature sizes (i.e., a large ratio of surface area to cross-sectional area) [84]. From a resistivity standpoint, it is still important to maximize copper grain size to reduce grain boundary scattering, but as compared with aluminum, grain size plays a lesser role in determining copper's electromigration behavior. Yet another key difference is that copper, unlike aluminum, does not form a self-limiting passivating oxide that provides protection from chemical attack. Indeed, oxidized copper generally has poor adhesion to metal and dielectric diffusion barriers, which leads to severe degradation in electromigration performance (see Figure 2.18). Thus, it is essential to chemically remove residual copper oxide before in situ deposition of a hermetic dielectric diffusion barrier (see pretreatment effects in Figure 2.19). As shown in Figure 2.20, dielectric barrier hermeticity is also important from the standpoint of preventing moisture absorption in low-k dielectrics, which can degrade both the dielectric constant and electrical breakdown. In addition to electromigration effects, inherent stress gradients in copper metallization can also lead to void migration even without the assistance of an electric field. Stress gradients in copper are a consequence of damascene processing (i.e., encapsulation) and subsequent thermal cycling (see Figure 2.21), with higher stress levels generally promoting the likelihood of void formation. Pre-existing

FIGURE 2.17 Example of dielectric barrier failure leading to copper diffusion and subsequent shorting between interconnects.
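For scale, electromigration test results such as the void in Figure 2.18 are conventionally extrapolated between stress and use conditions using Black's equation, MTTF = A·j^(−n)·exp(Ea/kBT). Black's equation is standard reliability practice rather than a result of this chapter; the exponent, activation energy, and conditions below are illustrative:

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def em_acceleration(j_stress, t_stress_k, j_use, t_use_k, n=1.5, ea_ev=0.9):
    """MTTF(use) / MTTF(stress) from Black's equation (illustrative n, Ea)."""
    return ((j_stress / j_use) ** n
            * math.exp(ea_ev / K_B * (1 / t_use_k - 1 / t_stress_k)))

# Assumed stress test at 300 C and 2 MA/cm^2 vs. use at 105 C and 0.5 MA/cm^2:
print(f"acceleration factor ~ {em_acceleration(2.0, 573.0, 0.5, 378.0):.0f}x")
```

Acceleration factors of this magnitude (tens of thousands) are what make hours-long stress tests meaningful for ten-year lifetime targets.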


FIGURE 2.18 Example of void formed by copper electromigration.

[Figure 2.19: interfacial O (atoms/cm²), ranging from ~1×10¹³ to ~1×10¹⁶, compared for no CuxO reduction, permeable SiC, and hermetic SiC dielectric barrier conditions.]


FIGURE 3.7 Convection patterns in a Czochralski melt due to (a) thermal convection, (b) crystal rotation and (c) crucible rotation. (From Lin, W., Oxygen in Silicon, ed. Shimura, F., Academic Press, 1994, chap. 2. Reproduced with permission from Elsevier.)

growth, in Equation 3.5. Secondly, the rotating crystal draws a uniform flow from the central region of the melt, perpendicular to the interface over the radius, and spins it outward radially near the surface (Figure 3.7b). As a result, the convection flow induced by the crystal rotation acts to counter and reduce both the thermal convection flow and the convection flow induced by the crucible rotation (which has the same general flow pattern as thermal convection; see Figure 3.7c). The magnitude of the flow induced by the crystal rotation is characterized by the dimensionless Reynolds number NRe:

NRe = ωr²/ν (3.7)

where ω is the crystal rotation rate, r is the crystal radius, and ν is the kinematic viscosity of the melt. While forced convection, i.e., the crystal rotation, has the effect of overriding the harmful thermal convection, the net effect depends on the relative magnitude of the two components. The relative effect of the two components may be expressed by the ratio NRe²/NGr. If NRe² > NGr, the crystal rotation will effectively isolate the segregation process at the growth interface from the thermal convection in the melt [33]. Therefore, a small melt and a high crystal rotation rate will suppress the effect of thermal convection.
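As a rough check of this criterion, the sketch below evaluates NRe² against NGr for representative growth conditions. The melt properties, geometry, and temperature difference are assumed order-of-magnitude values, and NGr is taken in its standard form gβΔT·h³/ν²; none of these numbers come from this chapter:

```python
import math

# Order-of-magnitude comparison of forced vs. thermal convection in a CZ melt.
# NRe = omega*r**2/nu (Equation 3.7); NGr = g*beta*dT*h**3/nu**2 (standard
# Grashof form). Melt properties and conditions are assumed values.

NU = 3e-7      # m^2/s, kinematic viscosity of molten Si (assumed)
BETA = 1.4e-4  # 1/K, thermal expansion coefficient of the melt (assumed)
G = 9.81       # m/s^2

def n_re(rpm, radius_m):
    return (2 * math.pi * rpm / 60) * radius_m ** 2 / NU

def n_gr(delta_t_k, depth_m):
    return G * BETA * delta_t_k * depth_m ** 3 / NU ** 2

ratio = n_re(20, 0.05) ** 2 / n_gr(10, 0.10)  # 100 mm crystal at 20 rpm
print(f"NRe^2/NGr = {ratio:.1f}")  # > 1: rotation isolates the interface
```

With these assumptions the ratio is of order unity to a few, illustrating why modest increases in crystal rotation or reductions in melt depth can tip the balance toward forced convection.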


3.3.2.3.3 Macroscopic Radial Impurity Uniformity

In silicon growth from a homogeneous melt, the radial uniformity is controlled by the uniformity of the boundary layer thickness across the interface. In Equation 3.4, if the growth rate is assumed to be uniform across the interface, then the radial uniformity of the impurity is determined by the variation in boundary layer thickness. It may be shown [34], in the case of dopants in silicon, that keff is very sensitive to variations in the diffusion boundary layer thickness over the range of thicknesses encountered in silicon CZ growth. The effect is greater for n-type than for p-type dopants in silicon. The relevance of the melt convection condition to radial impurity incorporation is the following. In the absence of thermal convection, the result of crystal rotation would be a uniform diffusion boundary layer over the crystal radius at the interface. Hence, if the crystal rotation is the only source of convection, no radial segregation will result from the fluid flow effect, and impurity incorporation is uniform across the crystal radius. In real crystal growth, however, the thermal convection flows form a general pattern in the crucible, with rising streamlines along the crucible wall that fall in the center following a gradually curved path (see Figure 3.7a). This path results in a stagnation point near the center of the crystal interface. Therefore, when thermal convection is strong compared to the forced convection, the diffusion boundary layer thickness at the outer region of the interface may be reduced by the thermal convection velocity, while the effect is small near the center of the interface. Thus, thermal convection flow causes a radial variation in boundary layer thickness. If the segregation coefficient of the dopant involved, k0, is smaller than unity, one finds a significant radial variation in impurity incorporation: it is high in the center and low in the outer region of the crystal. The more the k value deviates from unity, the greater the radial variation. By the same principle, increased crucible rotation can cause variations in the boundary layer thickness and a radial gradient in incorporation. As discussed above, harmful thermal convection effects can be suppressed by increased crystal rotation and small-melt growth. Lin and co-workers [17,23] have demonstrated such effects by using a double crucible arrangement (see Figure 3.6a). The use of a smaller diameter and a low aspect ratio for the inner crucible reduced the effect of thermal convection. Improvements in radial dopant uniformity and in the random dopant concentration fluctuations characteristic of thermal convection were observed for As, P, Sb, and oxygen, whose segregation coefficients deviate significantly from unity.

3.3.2.3.4 Microscopic Inhomogeneity

In general, the microscopic inhomogeneity of impurities in CZ silicon crystals is a result of growth rate fluctuations and impurity segregation during crystal growth. The growth rate fluctuations cause variations in the impurity incorporation levels. The lattice strain associated with local impurity concentration variations gives rise to the so-called "striations," which are revealed by chemical etching or x-ray topography. Severe microscopic dopant inhomogeneity corresponds to a large variation in carrier concentration and is not desirable in silicon materials used for device fabrication, especially when such variation is comparable to the device feature size. This is an important consideration in VLSI/ULSI fabrication. Large oxygen striations can result in preferential precipitation, often observed as concentric ring patterns in etched wafers following thermal processing. In CZ silicon growth, there are several sources of microscopic growth rate variations, which are discussed below.

3.3.2.3.4.1 Non-Centrosymmetric Thermal Distribution in the Silicon Melt

In large-melt silicon growth, finite thermal asymmetry exists about the center of the melt. During crystal growth, as the crystal is rotated about the growth axis, the interface experiences slightly different temperatures at different positions in the melt. Therefore, the growth rate of a given crystal element parallel to the crystal axis fluctuates periodically, as illustrated schematically in Figure 3.8. In general, the fluctuation is most pronounced in the crystal elements furthest from the crystal center. The periodicity of the fluctuation is determined by the average growth rate f and the crystal rotation rate ω, as f/ω. The variations in the impurity incorporation level that correspond to the growth rate variations are commonly referred to as "rotational striations." When the equilibrium segregation coefficient (k0) of the element involved is less than unity, the fluctuation in the impurity incorporation level is in phase with the growth rate fluctuation. If k0 > 1, the fluctuations will be out of phase (see Figure 3.8). The relationship may be realized readily by examining Equation 3.4.

3.3.2.3.4.2 Thermal Convection-Related Temperature Fluctuations

Growth rate fluctuations caused by thermal convection related temperature variations are mostly random in nature, and the etched striations are characteristically aperiodic. When thermal convection is significant, the microstriations bear the signature of high-frequency fluctuations on the order of tens of hertz.

3.3.2.3.4.3 Automatic Diameter Control-Induced Perturbation

Growth-rate fluctuations can be further perturbed by the automatic diameter control (ADC) commonly employed in silicon crystal growth. The crystal pull-rate is slaved to optically monitored crystal diameter variations, in order to maintain a preset diameter. The pull-rate adjustments are both "instantaneous" (a few seconds) and "long term" (minutes). The long term pull-rate adjustment determines the average growth rate. The instantaneous pull-rate adjustment imposes modifications on the microscopic growth rate fluctuations resulting from thermal asymmetry and thermal convection. The net effect is to smear the periodic nature of the impurity fluctuation resulting from the melt thermal asymmetry.



FIGURE 3.8 Schematic illustration of growth rate fluctuation and its relationship with microscopic impurity fluctuations in a CZ crystal. Here, f and ω are the growth rate and crystal rotation rate, respectively. (From Lin, W. and Stavola, M., J. Electrochem. Soc., 132, 1412, 1985. Reproduced with permission from Electrochemical Society.)
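The f/ω periodicity of rotational striations is easy to put on a length scale; for example, with illustrative pull and rotation rates:

```python
# Rotational striation spacing = growth rate / rotation rate (f/omega).
# Illustrative numbers, not from a specific growth run in this chapter.
f_mm_per_min = 1.0   # ~60 mm/h pull rate
omega_rpm = 20.0     # crystal rotation rate
print(f"one striation every ~{f_mm_per_min / omega_rpm * 1000:.0f} um")
```

At these rates, one striation is laid down roughly every 50 μm of crystal length, i.e., once per crystal revolution.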

3.3.2.4 Oxygen Incorporation and Segregation in Czochralski Silicon Growth

3.3.2.4.1 Incorporation Mechanism

Unlike most intended dopants in silicon CZ crystal growth, which show "normal freezing" behavior under normal growth conditions, oxygen is an unintended dopant that enters the silicon melt continuously through dissolution of the silica crucible. The incorporation behavior of oxygen into silicon is the result of a complex interplay among crucible dissolution, surface evaporation, thermal convection, and forced convection, as shown in Figure 3.9. In a side-heated CZ hot-zone, the dissolution of the silica crucible is highest at the side wall of the crucible. The silicon melt dissolves SiO2 and absorbs its oxygen. The oxygen-rich silicon melt rises along the crucible wall, following the thermal convection flow pattern, to near the melt surface, and then moves to the melt center, where it is drawn toward the crystal for incorporation by the forced convection induced by the crystal rotation. When the oxygen-rich melt is near the surface, a large portion of the oxygen evaporates through the free surface. The oxygen concentration incorporated into the growing crystal, therefore, is proportional to the oxygen concentration in the melt adjacent to the growing crystal. During steady-state growth, there exists a dynamic equilibrium of oxygen among the four controlling factors in the system. Due to differences in the thermal characteristics of different hot-zone designs, the oxygen incorporation behavior varies from one grower design to another. The major oxygen-controlling factors are discussed in the following.

3.3.2.4.1.1 Effect of Surface Evaporation and Crucible Dissolution

In the absence of thermal and forced convection (a hypothetical case), the melt oxygen concentration is proportional to the ratio of melt-crucible contact area to the available free surface area. In such a case, the transport of oxygen would depend entirely on diffusion. This ratio constantly decreases as the aspect ratio is reduced during the growth. The variation of this ratio underlies the basic characteristic of the axial oxygen profile of CZ silicon, in which the concentration is observed to gradually decrease from the seed end toward the tang end (for example, see Figure 3.15). However, the dependence of concentration on this geometric ratio is influenced by the crucible dissolution rate and the ambient pressure. The dissolution rate



FIGURE 3.9 Schematic of a silicon Czochralski growth system showing relationship among oxygen-controlling factors. (From Benson, K. E., Lin W., and Martin, E. P., Semiconductor Silicon 1981, eds. Huff, H. R., Kregler, R. J., and Takeishi, Y., Electrochemical Society, Princeton, NJ, 1981, 33. Reproduced with permission from Electrochemical Society.)

of the silica is material density and temperature dependent. Therefore, using a crucible with an inner wall made of "porous silica," for example, would enhance the dissolution rate. Increasing the melt-crucible contact area by using a corrugated inner surface obviously serves a similar purpose. Mathematical modeling of the observed oxygen incorporation behavior based on dissolution and evaporation, as carried out by Carlberg et al. [35], is merely a first-order approximation. In actual silicon growth, oxygen incorporation and uniformity are greatly affected by thermal convection and forced convection. No modeling effort thus far accurately takes these factors into account. A study of oxygen evaporation from the melt surface was made using "shoulder" growth of a 300-mm-diameter crystal from a 350-mm-diameter crucible. In this experiment, the effect of free melt surface evaporation on the oxygen level during crystal crown growth was examined. Figure 3.10 plots the oxygen concentration variation in the grown silicon as the melt surface is covered by the growing "shoulder." The relationship between oxygen concentration and available surface is not linear. When the crystal shoulder diameter is small (<125 mm), the oxygen evaporation (and therefore the oxygen concentration of the growing crystal) is very sensitive to the diameter change. When the majority of the surface is covered with the crystal, the evaporation seems to remain nearly constant. By extrapolating the curve in Figure 3.10, the oxygen concentration corresponding to a very small and to a maximum size of crystal (when the melt is fully covered) may be obtained. The maximum oxygen evaporation accounts for about 30% of the available dissolved oxygen. This value is much lower than the generally held view that over 90% of the oxygen dissolved from the crucible evaporates from the melt surface [36]. More discussion of the oxygen incorporation behavior in 300-mm-diameter crystal growth will be given in a later section. Lin and Hill [17] studied the effect of ambient pressure on the oxygen distribution in the melt and grown crystal via crystal growth experiments. Based on the experimental evidence, physical models of oxygen distributions in large silicon melts were proposed; they are shown schematically in Figure 3.11. It is shown that thermal convection is the main oxygen transport mechanism when the melt aspect ratio is high in large-melt growth. The oxygen-rich flow conforms with the suggested thermal convection


[Figure 3.10: oxygen concentration (ppma), 14–24, vs. ratio of crystal cross-section area to crucible cross-section area (0–0.9); upper axis: crystal diameter, 50–275 mm.]

FIGURE 3.10 Oxygen concentration measured as a function of fraction of melt surface being covered by the growing crystal during crown growth. The crystal crown accounts for less than 10% of the initial charge. (From Lin, W. and Benson, K. E., Annual Review of Materials Science, 17, 273, 1987. Reproduced with permission from Annual Reviews.)

pattern shown in Figure 3.7a. Experimental results also showed that under atmospheric pressure, the oxygen distribution is not uniform near the melt center, where a stagnant region of low oxygen concentration exists. The nature of this non-uniformity is displayed in the crystals' radial oxygen profiles near the seed end, in Figure 3.12a. However, as the crystal growth progresses, the forced convection due to crystal rotation modulates the oxygen distribution and results in local mixing, and the gross non-uniformity near the stagnant region is largely diminished. On the other hand, under reduced pressure (Figure 3.12b), the oxygen distribution in the melt is more uniform near the surface of the melt, and the grown crystal possesses better radial oxygen uniformity. When thermal convection is not significant, as in the case of a low aspect ratio configuration, the melt oxygen is both diffusion rate-dependent and crucible dissolution rate-dependent. The melt tends to be uniform in oxygen, and both enhanced forced convection and ambient pressure have little effect on the oxygen uniformity.

3.3.2.4.1.2 Effect of Crystal and Crucible Rotations

For a given system (fixed starting melt geometry, hot-zone thermal distribution, etc.), the parameters that can significantly alter the oxygen incorporation are the crucible and crystal rotations and growth rate variations. The effect of crystal/crucible rotation on the fluid flow patterns was studied in the past by simulation, using a fluid at room temperature with a viscosity similar to that of the silicon melt [37]. Figure 3.13 shows the fluid flow patterns due to various combinations of crystal and crucible rotations. In real crystal growth, however, the flow patterns can be significantly altered by the presence of thermal convection. The results of the simulations nevertheless provide very useful information on the effect of the rotational parameters. Kakimoto et al. [38,39] directly observed thermal and forced convection flows of the silicon melt during Czochralski growth using x-ray radiography with solid tracers, for various crystal and crucible rotation speeds. The effect of non-axisymmetric temperature distributions on the thermal convection flows was clearly observed. The suppression of thermal convection by crystal rotation-induced forced convection was also evidenced. One way to gain information on the flow properties of a growing system is to analyze grown crystals following parametric growth experiments. One finds that forced convection is effective in controlling oxygen incorporation.



FIGURE 3.11 Schematic representations of oxygen distribution in silicon melt at high and low melt level configurations. The dots and lines represent oxygen concentration. (a) At atmospheric pressure. (b) At reduced pressure. (From Lin, W. and Hill, D. W., Silicon Processing, ASTM STP 804, 1983, 24. Reproduced with permission from ASTM.)

As discussed previously, the crystal rotation rate determines the magnitude of the upward melt flow. This flow can serve to transport oxygen from the crucible bottom to the growing interface. The net effect on the overall flow pattern and oxygen incorporation depends on its magnitude and rotational direction relative to the crucible rotation. Figure 3.14 shows an "uncommon" oxygen profile of silicon grown under high crystal rotation (30 rpm) with the crucible in iso-rotation mode (2 rpm). At about 30% of the melt solidified, the oxygen incorporation underwent a mode change and sharply increased to a very high concentration level (~25 ppma). This behavior indicates that, as the melt aspect ratio is reduced during crystal growth, at some transition point the strong forced convection takes over as the dominant transport mechanism and draws oxygen-rich melt from the crucible bottom to the growing interface. Crucible rotation develops radial pressure gradients, which enhance the thermal convection flow arising from non-vertical temperature gradients, as shown in Figure 3.7c. Therefore, fast crucible rotation helps the transport of oxygen from near the crucible wall to the growing crystal and enhances incorporation. Figure 3.15 shows the effect of crucible rotation on the incorporation level. Fast melt flow also results in a thinner melt-crucible boundary diffusion layer, a condition that will enhance crucible dissolution. Figure 3.16 shows several axial oxygen concentration profiles of silicon grown with several combinations of crystal/crucible rotation rates, under both counter- and iso-rotation conditions, in the same grower and at reduced pressure. These results show that the forced convection induced by crystal/crucible rotations has very significant effects on the melt flow pattern, even in the presence of thermal convection. The bulk of the incorporation behavior is consistent with, and can be interpreted from, the simulated flow patterns (see Figure 3.13).


[Figure 3.12: percent transmission vs. radial distance (0–50 mm) at various fractions solidified, g = 0.22–0.77, with corresponding oxygen levels of about 12–17 ppma; "near seed" profiles are included for both pressure conditions.]

FIGURE 3.12 Radial profiles of 9 μm IR transmission at various stages; g = fraction solidified. (a) At atmospheric pressure. (b) At reduced pressure (18 torr). Oxygen concentration is computed in accordance with ASTM F121-80. (From Lin, W. and Hill, D. W., Silicon Processing, ASTM STP 804, 1983, 24. Reproduced with permission from ASTM.)

3.3.2.4.1.3 Incorporation in p+ and n+ Silicon Crystal Growth

Degenerately doped n- and p-type silicon, in the concentration range of 10^18–10^19 atoms/cm³, are common substrate materials for n/n+ and p/p+ epitaxial structures for complementary metal–oxide–silicon (CMOS) ICs. Unlike in lightly doped silicon, the oxygen precipitation behavior in degenerately doped silicon differs drastically depending on the conductivity type [41]. In this resistivity range (0.005–0.02 ohm-cm), oxygen precipitation is retarded in Sb-doped silicon, while the kinetics are at their peak for p-type, boron-doped silicon. The possible sources of the differences in precipitation kinetics in p+ and n+ were investigated. The oxygen diffusion mechanism was found not to be affected by the presence of heavy Sb or light boron doping [42]. The nucleation mechanisms for oxygen precipitates have been thought to be different in p+ and n+. Experimentally, the oxygen incorporation in n+ and p+ has been found to be different from that in lightly doped silicon [43]. Figure 3.17 shows the axial oxygen distribution in 100 mm diameter p+ (0.005–0.01 ohm-cm) and n+ (0.02–0.08 ohm-cm) crystals as measured by SIMS, compared to the distribution band of p− (8–20 ohm-cm) crystals grown under the same conditions. It is seen that, on average, the p+ crystals exhibit about a 25% higher incorporation rate than the n+ crystals (with p+ oxygen contents higher than p−, while n+ contents are lower). Other studies [44,45] also show the dependency of oxygen incorporation on doping level, but to different degrees. In considering the oxygen incorporation mechanism, it would not be surprising to find that the dependency obtained varies somewhat depending on the melt size, melt aspect ratio, etc., used in the crystal growth experiments. Several possible mechanisms behind the dependency of oxygen incorporation on the dopant concentration have been suggested [46–48]. Some data indicate that the reduced oxygen incorporation into heavily Sb-doped silicon is due to Sb2O3 evaporation from the melt,


[Figure 3.13 panel labels: iso-rotation and counter-rotation cases; crucible-dominating and crystal-dominating regimes; rotation magnitudes small, medium, and large.]

FIGURE 3.13 The variation of Czochralski flow patterns with relative directions and magnitudes of crystal and crucible rotations. (From Carruthers, J. R. and Nassau, K., J. Appl. Phys., 39, 5205, 1968. Reproduced with permission from AIP.)

thus reducing the oxygen concentration in the melt [46]. Others explain the lower oxygen incorporation observed in heavily Sb-doped silicon as due to accelerated SiO evaporation from the melt, caused by the simultaneous evaporation of elemental antimony [48]. In the case of p+ crystal growth, the enhanced oxygen incorporation is speculated to be due to enhanced crucible dissolution by the heavily boron-doped silicon melt. It is pointed out here that the difference in oxygen incorporation level cannot account for the difference observed in the precipitation kinetics in p+ and n+; the difference in the nucleation mechanism probably plays a significant role.

3.3.2.4.2 Oxygen Segregation and Microscopic Inhomogeneity

Solute segregation during the solidification of a binary system is determined by the nature of its phase diagram near the solvent's melting temperature. The equilibrium segregation coefficient of the solute is a physical constant and is related to the slopes of the liquidus and solidus immediately adjacent to the melting temperature, above the primary phase of the equilibrium phase diagram, as shown in Figure 3.18. The equilibrium segregation coefficient k0 can be less than or greater than 1 and can be deduced readily once the equilibrium phase diagram near the primary phase is established. In practice, when the solidification is not under equilibrium conditions, the liquidus and solidus will shift in position and the segregation coefficient will deviate from k0. The solute segregation phenomenon can be demonstrated by a controlled directional solidification of a binary alloy, in which the solute concentration along the direction of solidification will follow Equation 3.1. If the melt is well mixed and the solidification rate is small, the k value can approach k0.
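Equation 3.1 referred to here is the standard normal-freezing (Scheil) relation, Cs(g) = k·C0·(1 − g)^(k−1). As a self-contained illustration of the axial profile it predicts, the sketch below uses an assumed dopant and k value:

```python
# Normal freezing (Scheil) profile: Cs(g) = k * C0 * (1 - g)**(k - 1).
# The dopant choice and k value below are illustrative assumptions.

def scheil_cs(g, k, c0=1.0):
    """Solid concentration at the interface after fraction g has frozen."""
    return k * c0 * (1.0 - g) ** (k - 1.0)

for g in (0.0, 0.5, 0.9):  # e.g., phosphorus in silicon, k ~ 0.35
    print(f"g = {g:.1f}: Cs/C0 = {scheil_cs(g, 0.35):.2f}")
```

For k < 1, the solid starts oxygen- or dopant-lean at the seed end (Cs = k·C0) and grows progressively richer toward the tang end, which is the "normal freezing" behavior contrasted with oxygen's dynamic incorporation below.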


[Figure 3.14: oxygen concentration (ppma), 12–26, vs. fraction solidified; curves labeled 30/2 and 15/8 (crystal/crucible rotation rates, rpm).]

FIGURE 3.14 Axial oxygen profile of silicon grown with crystal rotation of 30 rpm and crucible rotation of 2 rpm (in iso-rotation). The dashed line represents profiles due to normal growth. (From Lin, W., Oxygen in Silicon, ed. Shimura, F., Academic Press, 1994, chap. 2. Reproduced with permission from Elsevier.)

[Figure 3.15: oxygen concentration (ppma) vs. fraction solidified for crucible rotation rates of 5, 10, 15, 20, and 25 rpm, at a crystal rotation of 28 rpm.]

FIGURE 3.15 Oxygen profiles at various crucible rotation rates, with crystal rotation rate (in counter rotation) held at 28 rpm. (From Moody, J. W., Semiconductor Silicon 1986, eds. Huff, H. R., Abe T., and Kolbeson, B., Electrochemical Society, Pennington, NJ, 1986, 100. Reproduced with permission from Electrochemical Society.)


[Figure 3.16: oxygen concentration (ppma), 11–23, vs. fraction solidified; 18 kg charge, DC heating.]

FIGURE 3.16 Axial oxygen profiles of silicon crystal grown with several combinations of crystal and crucible rotation rates. (From Lin, W. and Benson, K. E., Annual Review of Materials Science, 17, 273, 1987. Reproduced with permission from Annual Reviews.)

In the growth of CZ silicon, the dopant segregation behavior is similar to that of the solute in the directional solidification of a binary alloy, and the dopant distribution along the grown crystal will follow Equation 3.1. If the dopant is non-volatile, such as boron or phosphorus, and the melt is completely mixed, the k value approaches the equilibrium value. However, in the case of oxygen, the incorporation behavior is quite different from that of a dopant element. The crucible dissolution rate, the oxygen transport mechanisms

[Figure 3.17: oxygen concentration (ppma), 5–25, vs. fraction solidified for 100 mm silicon growth with an 18 kg charge; curves: p− (IR), p+ (SIMS), n+ (SIMS).]

FIGURE 3.17 A comparison of oxygen incorporation levels of lightly doped p and heavily doped p and n (p+ and n+) silicon crystals grown under identical conditions. (From Oates, A. S. and Lin, W., J. Cryst. Growth., 89, 117, 1988. Reproduced with permission from Elsevier.)


[Figure 3.18 panels: temperature vs. concentration near the melting point for k0 < 1 and k0 > 1, showing the liquidus (L) and solidus (S) with Cs/Cl = k0.]

FIGURE 3.18 Schematic showing relationship between k0 of solute and slopes of liquidus and solidus in a binary system. (From Lin, W., Proceedings of 2nd International Symposium on Advanced Science and Technology of Silicon Materials, Kono, 1996, 288. Reproduced with permission.)

(thermal and forced convection), and the surface evaporation rate determine the melt oxygen concentration near the growing interface available for incorporation [23]. Therefore, the oxygen in the silicon melt during CZ growth is a dynamic system. Depending on the growth parameters employed, the oxygen concentration and its profile along the grown crystal can be drastically different from those expected from the "normal freezing" behavior of a dopant element. Figure 3.16 shows oxygen concentration profiles from several CZ crystals grown using different parameters but with the same grower and setup. From these profiles, it appears that there is no common "segregation" behavior for oxygen displayed by these CZ silicon crystals. In fact, these axial oxygen distributions are not the results of oxygen segregation during silicon solidification. One can certainly fit the power function of Equation 3.1 to any of the profiles in Figure 3.16 and obtain a k value. But the k value so obtained does not describe the segregation behavior of oxygen during silicon freezing; it merely represents the characteristics of the oxygen concentration change in the melt, near the growing interface, in a particular growing process. Therefore, it is inappropriate and incorrect to assign the k value so obtained as a "segregation coefficient" of oxygen. More importantly, it must be emphasized that the k value so obtained has absolutely no relationship with the equilibrium segregation coefficient of oxygen, which is a physical constant. Although oxygen segregation is not visualized readily at the macro level, it is realized at the micro level at the growing interface in CZ silicon growth. In general, the degree of impurity incorporation (i.e., the effective segregation coefficient) at the growing interface is determined by the crystal growth rate f at the interface, the boundary layer thickness δ (a function of crystal rotation), and the equilibrium segregation coefficient k0, as described by the BPS expression, Equation 3.4. In the BPS expression, k0 and the diffusion coefficient D are material constants, while f and δ are crystal growth parameters. If the crystal and crucible rotations are maintained constant, then the only parameter that can cause variation in the oxygen incorporation rate is the growth rate. An analysis of Equation 3.4 shows that the conditions listed in Table 3.1 hold. It is clear from the table that the impurity incorporation rate can fluctuate with changing growth rate unless the equilibrium segregation coefficient k0 is unity; however, k0 = 1 is not consistent with the phase rule [49]. Furthermore, the change in keff is greater when k0 deviates further from unity. In large diameter CZ silicon crystals, non-uniform oxygen


TABLE 3.1 Directions of Incorporation Rate Change in Response to Crystal Growth Rate Changes for k0 < 1 and k0 > 1

Equilibrium Segregation Coeff. k0 | Crystal Growth Rate f | Impurity Incorporation Rate keff
> 1 | increases | decreases
> 1 | decreases | increases
< 1 | increases | increases
< 1 | decreases | decreases
= 1 | any | keff = k0

Note: There is no response of the incorporation rate to a growth rate change when k0 = 1.

incorporation in the form of so-called oxygen striations is usually observed. The striations are the result of impurity segregation due to fluctuations in the microscopic growth rate. The existence of oxygen striations in CZ silicon is evidence that oxygen in silicon does segregate and that k0 for oxygen is not unity.

3.3.2.4.2.1 Equilibrium Segregation Coefficient, k0, of Oxygen

The k0 value of oxygen defines the basic segregation characteristics, and therefore the effective incorporation rate, during silicon growth. The k0 has been of interest to researchers in order to understand the microscopic oxygen incorporation and the resulting precipitation characteristics in silicon. However, the details of the phase equilibrium of the Si–O system for Si-rich alloys have not been extensively studied, and many investigations of the k0 value of oxygen have only been carried out in the last three decades. The efforts can be divided into two categories. The first is a direct method, by oxygen concentration analysis on quenched oxygen–silicon alloys, as is commonly done in phase diagram studies. The second approach is to deduce the k0 value from silicon crystal growth experiments. One major factor affecting the k0 determination is the accuracy of the oxygen concentration analysis. In this regard, infrared absorption using a single crystal is more accurate and less uncertain than oxygen analyses on quenched samples using other methods such as differential thermal analysis. Due to the variety of methods used and the accuracy of the oxygen concentration determination, the k0 values reported over the years certainly do not show a great consensus. A range of k0 values have been reported by various authors [27,50–55], ranging from greater to less than unity and including unity. Among the various k0 studies, the use of crystal growth experiments and analysis via the BPS relation has been considered a very accurate and repeatable approach. Lin and Hill [52] first applied the approach by growing a small diameter (14 cm) crystal in a large system in a controlled experiment. The oxygen incorporation level was observed to change with a change in crystal growth rate, which shows that oxygen does segregate during silicon solidification. From such an experiment, an equilibrium segregation coefficient of approximately 0.25 was deduced. Similarly, Lin and Stavola [53] studied the origin of microscopic oxygen inhomogeneity and its effect on oxygen precipitation using large-spacing oxygen striations prepared by manually controlled crystal growth. The periodic nature of the oxygen profile shown in Figure 3.19 is a result of growth rate fluctuations and oxygen segregation. With k0 < 1, the oxygen fluctuations and growth rate fluctuations are "in phase" (i.e., the maxima and minima of the oxygen fluctuations coincide with those of the microscopic growth rate fluctuations); see Table 3.1. The growth rate fluctuations are due to the asymmetrical temperature distribution in a large silicon melt. These results again show that oxygen segregates significantly, which corresponds to a non-unity equilibrium segregation coefficient. The phase analysis [53] showed that k0 < 1 for oxygen and that oxygen behaves similarly to arsenic in silicon (i.e., k0 ~ 0.3). More recently, Iino et al. [55] used a similar approach to study the k0 of oxygen by analyzing oxygen striations in comparison with those due to phosphorus. They obtained k0 values between 0.13 and 0.37 (averaging 0.21).
The range of values obtained is in good agreement with the values obtained by Lin et al. [52,53].
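The BPS analysis used in these experiments can be stated compactly. The sketch below (Python; the boundary-layer thickness delta and melt diffusivity D are assumed, order-of-magnitude values chosen only to make the growth-rate dependence visible, not figures from this chapter) evaluates the effective segregation coefficient k_eff = k0/[k0 + (1 − k0)exp(−v·delta/D)] as a function of microscopic growth rate, illustrating why a growth-rate change shifts the incorporation level whenever k0 is not unity:

```python
import math

def k_eff(k0, v, delta, D):
    """Burton-Prim-Slichter (BPS) effective segregation coefficient:
    k_eff = k0 / (k0 + (1 - k0) * exp(-v * delta / D)).
    v: microscopic growth rate (cm/s); delta: solute boundary-layer
    thickness (cm); D: solute diffusivity in the melt (cm^2/s)."""
    return k0 / (k0 + (1.0 - k0) * math.exp(-v * delta / D))

# Illustrative values (assumed, not from this chapter): k0 ~ 0.25 as
# deduced by Lin and Hill; delta and D are order-of-magnitude guesses.
k0, delta, D = 0.25, 0.02, 5e-5
for v_mm_min in (0.5, 1.0, 2.0):           # growth rates in mm/min
    v = v_mm_min * 0.1 / 60.0              # convert mm/min -> cm/s
    print(f"v = {v_mm_min:.1f} mm/min: k_eff = {k_eff(k0, v, delta, D):.3f}")
```

With k0 < 1, k_eff rises toward unity as the growth rate increases, which is the signature exploited in the striation experiments.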


FIGURE 3.19 Top: IR 9 μm profile parallel to the crystal axis showing periodic oxygen fluctuations. Bottom: micrograph of etched silicon after heat treatment showing precipitation bands corresponding to high oxygen regions of the fluctuations (top). (From Lin, W., Proceedings of 2nd International Symposium on Advanced Science and Technology of Silicon Materials, Kono, 1996, 288. Reproduced with permission.)

3.3.2.4.2.2 Microscopic Inhomogeneity and Oxygen Precipitation

Oxygen segregation produces inhomogeneity when there is a perturbation in the growth rate. Thermal treatment experiments on silicon containing microfluctuations in oxygen concentration show that the precipitation is not uniform. Figure 3.19, bottom, is a micrograph of a sample heat treated at 1050°C for 5 h and then chemically etched. It shows heavy precipitate bands corresponding to the high-concentration regions of the oxygen fluctuations. The precipitation density in the low oxygen regions appears to be insignificant. The difference in the precipitation densities in the high and the low oxygen regions, however, cannot be accounted for by the difference in their [Oi], approximately 9% (the concentration fluctuation is ~4.5% about the mean). Furthermore, the fact that the precipitation occurs at the sample surface suggests that no denudation takes place. Apparently, the oxygen outdiffusion near the sample surface is suppressed by a fast nucleation/growth mechanism in the high oxygen band regions, while the precipitation kinetics in the low oxygen regions seem retarded. These preferential precipitation bands correspond to the concentric ring patterns often observed in oxidized or heat-treated wafers following chemical etching. An example is shown in the x-ray topograph in Figure 3.20. The oxygen precipitation kinetics in CZ silicon with microfluctuations are thus not strictly proportional to oxygen concentration. It is likely that the preferential precipitation of oxygen in the high [Oi] regions reflects the large number of nuclei available in those regions. It is reasonable to postulate that the mechanism for the inhomogeneous precipitation behavior involves the preferred nucleation of the precipitates at microdefect centers introduced during crystal growth. Such defects, ranging from several hundred angstroms to tenths of a micron in size, have been related to temperature fluctuations and the remelting phenomenon at the growing interface. The grown-in microdefects in CZ silicon have been extensively studied in recent years. Basically, as in FZ silicon, two types of microdefects may be formed during crystal growth [58]: D defects are vacancy agglomerates in nature, and their formation and density are growth rate dependent. The D defects form when the growth rate is above a critical value, below which A-defect formation is favored. The A defects are clusters of silicon self-interstitials in the form of dislocation loops. In a crystal's “D defect region,” where the vacancy concentration is excessive (greater than the equilibrium value), defects such as crystal originated pits (COPs), flow pattern defects (FPDs), and laser scattering tomography (LST) defects have been observed, and they have been shown to cause degradation of MOS devices (mainly gate oxide integrity) [59].


FIGURE 3.20 X-ray topography of a heat treated CZ wafer showing concentric oxygen precipitation bands at high oxygen regions of the concentration fluctuations. (From Shimura, F., Semiconductor Silicon Technology, Academic Press, New York, 1989, 258. Reproduced with permission from Elsevier.)

Further discussion of grown-in microdefects and their relevance to crystal growth parameters appears later in this chapter. Nakajima et al. [60] studied the distribution of as-grown D defects in silicon in relation to microscopic growth rate fluctuations on large striated silicon crystals. The study revealed that LST defects occur at the maxima of the growth rate fluctuations, while FPDs occur at the minima (see Figure 3.21). This result means that the LST defects occur at the high oxygen concentration regions of the [Oi] fluctuations. The LST defects have been speculated to be oxygen precipitates in nature (they can be annihilated by a high-temperature anneal in hydrogen) [61]. It is assumed that their formation may be the result of either (a) nucleation of oxygen precipitates on vacancy clusters or (b) direct interaction of oxygen atoms with non-clustered point defects. The existence of LST defect centers (the crystal shown in Figure 3.19 was grown with a growth rate of 1.5 mm/min) suggests there are far more nuclei available in the high oxygen regions than in the low oxygen regions. When subjected to precipitation heat treatment(s), as the oxygen precipitation progresses in the bands of high nuclei density, self-interstitials are ejected and flood the neighboring low oxygen regions, where the precipitation has not started. The supersaturated self-interstitials raise the nucleation barrier in the low oxygen regions of the fluctuations and retard the precipitation process there [62]. In the current discussion, the occurrence of the microdefects in the high oxygen regions may be a major factor in the formation of the observed preferential precipitation. It would be interesting and useful to conduct a similar experiment with a crystal grown with microfluctuations in oxygen, but without D defects, by using a low growth rate. Such an experiment would clarify the role of microdefects in the observed preferential oxygen precipitation.

3.3.2.4.3 Controlled Oxygen Silicon Crystal Growth

3.3.2.4.3.1 Normal Czochralski Growth

For ease of discussion, oxygen concentrations in silicon crystals for IC applications may be conveniently classified into high, medium, and low concentration ranges. If we designate 14–17 ppma (ASTM F121-80) as the medium range, the concentrations above and below this range are referred to as high and low concentrations, respectively.


FIGURE 3.21 Schematic showing the occurrence of laser scattering tomography (LST) and flow pattern defect (FPD) defects at maxima and minima, respectively, of the growth rate fluctuations in CZ growth. (From Nakajima, K. et al., Semiconductor Silicon 1994, eds. Huff, H. R., Bergholz, W., and Sumino, K., Electrochemical Society, Pennington, NJ, 1994, 168. Reproduced with permission from Electrochemical Society.)

From the previous discussions, it is seen that forced convection is an effective tool for controlling oxygen incorporation. In order to achieve a desired oxygen level with axial uniformity in a silicon crystal, the following procedure may be carried out. Experimentally, for a given crystal growing system, one can establish oxygen incorporation profiles as a function of crystal/crucible rotation rates via studies such as that shown in Figure 3.15. Using selected rotational parameters at different stages of crystal growth, one can then develop and tailor the growth process to grow crystals of the desired oxygen concentration with substantial axial and radial uniformity; a minimal numerical sketch of this tailoring is given below. Figure 3.22 shows an example of using variable crucible rotation rates to change the oxygen incorporation level during growth, while the crystal rotation rate is maintained constant. It shows that ramping the crucible to a higher rotation rate effectively raises the oxygen level and changes the oxygen concentration profile. The incorporation level can be further enhanced when alternate ramping-up and -down of the crucible rotation between medium and high rates is employed, as shown in Figure 3.22. Presumably, this action causes local “disturbance” and thinning of the boundary layer between the crucible and the melt, and thus increases crucible dissolution. The example in Figure 3.22 demonstrates the usefulness of forced convection in the enhancement or retardation of oxygen incorporation. With the proper use of forced convection, including alternate ramping-up and -down of the crucible rotation, increased and uniform axial incorporation in the high and medium oxygen ranges can be achieved. Concentration profiles a and b of Figure 3.23 are oxygen profiles obtained using variable crucible rotation for high and medium oxygen concentration levels. In crystal growing systems where forced convection alone cannot achieve the desired level/uniformity, additional sources of oxygen may be added by increasing the melt–crucible contact surface.
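As a rough illustration of the empirical tailoring procedure just described, the following sketch inverts a hypothetical calibration of oxygen level versus crucible rotation to pick a rotation schedule that holds a target oxygen level along the crystal. All numbers here are invented for illustration; a real map would be measured for the specific puller, as in Figure 3.15:

```python
import numpy as np

# Hypothetical calibration for one puller (assumed numbers): oxygen level
# (ppma) vs. crucible rotation (rpm) at the seed end, with a roughly linear
# axial droop at fixed rotation as the melt depletes.
cal_rpm = np.array([1.0, 2.0, 5.0, 10.0, 15.0])
cal_oxy_seed = np.array([11.0, 12.5, 14.5, 16.5, 18.0])  # ppma at g = 0
droop_ppma = 3.0   # assumed drop from g = 0 to g = 0.8 at fixed rotation

def crucible_rpm(target_ppma, g):
    """Pick the crucible rotation expected to give target_ppma at fraction
    solidified g, by shifting the seed-end calibration curve down by the
    axial droop and inverting it with linear interpolation."""
    cal_oxy_g = cal_oxy_seed - droop_ppma * (g / 0.8)
    return np.interp(target_ppma, cal_oxy_g, cal_rpm)

for g in (0.0, 0.2, 0.4, 0.6, 0.8):
    print(f"g = {g:.1f}: crucible ~ {crucible_rpm(15.0, g):4.1f} rpm")
```

The resulting schedule ramps the crucible rotation up as growth proceeds, compensating the axial droop, which is the qualitative behavior exploited in Figure 3.22.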


FIGURE 3.22 Axial oxygen profile of a silicon crystal showing the effect of crucible rotation rate on oxygen incorporation level. (From Lin, W. and Benson, K. E., Annual Review of Materials Science, 17, 273, 1987. Reproduced with permission from Annual Reviews.)

Methods such as sand-blasting of the crucible surface [63] and the addition of an extra quartz rod/ring placed at strategic locations [64] have been used. Figure 3.24 shows a crucible design with its bottom surface fabricated in a corrugated configuration. The additional contact sources uniformly increase the incorporation level (curve b of Figure 3.25), as may be compared with curve c, grown with a normal crucible.

FIGURE 3.23 Uniform axial oxygen profiles of silicon crystals using variable crucible rotation rates during growth (curves a and b), as compared with that due to normal growth (curve c). (From Lin, W., Oxygen in Silicon, ed. Shimura, F., Academic Press, 1994, chap. 2. Reproduced with permission from Elsevier.)


FIGURE 3.24 Schematic of the cross-sectional view of the crucible design with corrugated crucible bottom. (From Lin, W., Oxygen in Silicon, ed. Shimura, F., Academic Press, 1994, chap. 2. Reproduced with permission from Elsevier.)

Curve a shows an oxygen profile resulting from silicon growth with extra quartz material adhered to the crucible bottom surface as an added oxygen source under normal growth conditions. Note that profiles a, b, and c were obtained with a fixed set of crystal and crucible rotation rates. When variable forced-convection parameters are applied, the concentration profiles may be tailored to improve axial uniformity, as shown in Figure 3.23. Curve a of Figure 3.23 is the result of applying variable crucible rotation to growth with additional quartz material adhered to the crucible bottom.


FIGURE 3.25 Axial oxygen profiles showing enhanced incorporation by (a) extra quartz adhered to the crucible bottom and (b) a corrugated crucible bottom. Curve (c) is due to normal growth and curve (d) is due to the use of forced convection conditions for low-oxygen incorporation. (From Lin, W., Oxygen in Silicon, ed. Shimura, F., Academic Press, 1994, chap. 2. Reproduced with permission from Elsevier.)


While thermal convection, crucible rotation, and crystal rotation all affect the oxygen incorporation level, these parameters also have a great impact on the radial oxygen gradient. The relevance of these parameters to radial impurity uniformity has been discussed in Section 3.3.2 of this chapter. In order to achieve radial oxygen uniformity, the melt flows causing radial non-uniformity in the diffusion boundary layer thickness have to be suppressed. For example, when using a high crucible rotation rate to enhance oxygen incorporation, the radial oxygen uniformity is degraded unless a high crystal rotation in the opposite direction is used. As discussed earlier, the crucible rotation creates a fluid flow in the same general direction as the thermal convection (see Figure 3.7c). Such flow causes the diffusion boundary layer to be thinner at the edge than at the center of the interface. Consequently, more oxygen is incorporated at the center than at the edge of the crystal. Increased crystal rotation suppresses the effects of the crucible rotation and thermal convection flows and results in a thinner diffusion boundary layer; thickness variations in the thinner boundary layer cause less radial gradient in oxygen incorporation. The faster crystal rotation also results in more mixing in the melt adjacent to the growing interface and tends to improve the melt concentration uniformity. It should be mentioned that there is a drastic decrease in oxygen concentration in the CZ crystal's radial oxygen profile near the periphery. This is due to oxygen out-diffusion during crystal growth following solidification.

In large melt growth, the controllability of low-level oxygen incorporation is more limited than that of the medium and high incorporation levels, owing to the domination of thermal convection as the transport mechanism for the oxygen-rich melt. This effect is enhanced for the growth of the crystal portion near the seed end. Curve d in Figure 3.25 represents a typical range of concentration profiles in the low oxygen range (11–14 ppma) that may be obtained with the experimental hot-zone used. In this range, further uniformity of the profile can be obtained by raising the incorporation level of the lower-concentration portion of the profile using crucible ramping. However, suppressing the seed-end oxygen incorporation by forced convection is usually not an efficient process. It often results in degradation of the radial oxygen gradient and other disadvantages in growth conditions and crystal properties. From the above discussion, one realizes that in normal CZ growth, small and/or low-aspect-ratio melt configurations facilitate the incorporation of low levels of oxygen owing to the reduction of thermal convection. Such a growth environment may be obtained by growing silicon from a small/shallow inner crucible in a double-crucible type set-up. Axial uniformity as well as concentration control using a double-crucible set-up has been demonstrated [65]. Although such an apparatus is more involved than normal CZ growth, as the CZ charge size continues to increase, the concept of small/shallow melt growth in a double crucible is capable of extension to continuous CZ growth, to be discussed later in this section. For silicon materials requiring low oxygen and low microdefect density, however, the most effective method is growth under an applied magnetic field.
For very low level oxygen incorporation (a few parts per million atomic), crucibles made of non-oxygen-containing materials, such as Si3N4, have been used for silicon crystal growth [66,67]. In this case, the source of oxygen is silica beads added to the silicon melt. The oxygen incorporation is controlled by the ratio of the surface area of the SiO2 beads to the free melt surface. In general, the oxygen incorporation in normal CZ silicon growth from a large melt can be controlled to within ±1.5 ppma over the normal concentration range of interest.

3.3.2.4.3.2 Magnetic Field Applied Czochralski Growth (MCZ)

From the previous discussion, it is clear that thermal convection in a large silicon melt plays a major role in determining many aspects of the crystal quality of CZ silicon. In particular, oxygen concentration as well as dopant and oxygen uniformity are of concern. The ability of a magnetic field to suppress thermal convection in electrically conducting fluids was demonstrated in the 1960s in crystal growth experiments [68]. In 1970, a transverse magnetic field was applied for the same purpose in CZ growth of indium antimonide [69]. The application of a magnetic field across an electrically conductive melt increases the effective kinematic viscosity of the melt [70] and thus suppresses the thermal convection and the related temperature fluctuations in the melt.
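The strength of this damping is commonly characterized by the Hartmann number, the ratio of Lorentz braking to viscous forces. The sketch below evaluates it for field strengths typical of MCZ, using literature-typical property values for molten silicon; those property values are assumptions, not figures from this chapter:

```python
import math

def hartmann(B, L, sigma=1.2e6, rho=2.57e3, nu=3.5e-7):
    """Hartmann number Ha = B * L * sqrt(sigma / (rho * nu)).
    B: field (T); L: characteristic melt depth (m); sigma: electrical
    conductivity (S/m); rho: density (kg/m^3); nu: kinematic viscosity
    (m^2/s). Defaults are literature-typical values for molten silicon."""
    return B * L * math.sqrt(sigma / (rho * nu))

# Characteristic melt depth ~0.2 m; field strengths typical of MCZ.
for B in (0.08, 0.25):
    print(f"B = {B:.2f} T: Ha ~ {hartmann(B, 0.2):.0f}")
```

Hartmann numbers of order 10³ indicate that the magnetic braking overwhelms viscous forces, consistent with the strong suppression of convective temperature fluctuations described above.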


Hoshi et al. [71] first reported CZ silicon crystal growth under an applied transverse magnetic field. Aside from a reduction/elimination of impurity striations, MCZ displayed the potential for growing low-oxygen, low-microdefect, and higher resistivity CZ silicon for applications in power devices and imaging devices such as CCDs. Since 1980, various magnetic field configurations [72,73] in terms of field direction (VMCZ for vertical, HMCZ for horizontal, and cusp magnetic field) and the magnet types used (normal conductive or superconductive) have been developed, and crystal growth studies have been carried out. Figure 3.26 compares the magnetic field direction/distribution of the three MCZ methods. In general, both VMCZ and HMCZ growth have been shown to reduce temperature fluctuations and the related growth rate fluctuations, resulting in reduced impurity striations. However, the magnetic field effects on impurity incorporation behavior vary widely depending upon the field direction with respect to the growth axis and the field strength. For example, increased field strength enhances oxygen concentration under a vertical field [74], whereas oxygen incorporation is retarded under an increased transverse magnetic field [75]. Therefore, a transverse magnetic field facilitates the incorporation of low (<5×10¹⁷ atoms/cm³) to medium ((5–10)×10¹⁷ atoms/cm³) oxygen concentrations, whereas a vertical field facilitates the growth of silicon with medium to high (>10¹⁸ atoms/cm³) oxygen concentrations.


FIGURE 3.26 Schematics showing arrangements for magnetic Czochralski; (a) horizontal magnetic field, (b) vertical magnetic field and (c) cusp magnetic field. The arrows indicate field directions.


In fact, extremely high oxygen concentrations, near or above the solubility limit, can be incorporated under certain growth conditions with vertical magnetic fields [74,76]. Forced convection induced by crystal and crucible rotations can also perturb the oxygen incorporation under a magnetic field. Several large-diameter MCZ crystal growth experiments from large melts have provided information on the difference in oxygen incorporation behavior between HMCZ and VMCZ. For example, Ohwa et al. [76] compared the oxygen incorporation behavior of VMCZ and HMCZ using the same puller, as shown in Figure 3.27. In the HMCZ mode, a low oxygen concentration (~4 ppma) is incorporated in a horizontal magnetic field of 0.25 Tesla (2500 G) with good axial uniformity. The crystal rotation shows no effect on the incorporation level under this condition. However, another study [75] showed that, under HMCZ, an increase in crucible rotation increases the oxygen incorporation level. As in normal CZ, high crucible rotation often results in some degradation of radial uniformity; in such an instance, the use of a higher crystal rotation rate helps to reduce the radial oxygen gradient. Figure 3.27 also shows that, in the VMCZ mode, a higher oxygen concentration may be incorporated using a magnetic field strength one-third of that for HMCZ. Unlike in HMCZ, the oxygen incorporation level in VMCZ is a strong function of crystal rotation, and the incorporation switches to a much higher level through a sharp transition during the crystal growth.


FIGURE 3.27 Oxygen profiles of silicon crystals grown under horizontal and vertical applied magnetic fields under the conditions indicated. The data show that the crystal rotation rate has a significant enhancement effect on oxygen incorporation under a vertical magnetic field. (From Ohwa, M. et al., Semiconductor Silicon 1986, eds. Huff, H. R., Abe, T., and Kolbesen, B., Electrochemical Society, Pennington, NJ, 1986, 117. Reproduced with permission from Electrochemical Society.)


Both the incorporation level and the transition point are clearly a function of crystal rotation, indicating that in the VMCZ mode the crystal-rotation-induced forced convection acts as a major transport mechanism. Figure 3.28 shows models proposed by Hoshi et al. [77] for the magnetic damping of the melt flow in VMCZ and HMCZ. In the model for VMCZ, the melt flow from the outer to the center regions of the melt (flow perpendicular to the vertical magnetic flux) is retarded. Under this condition, the forced convection induced by the crystal rotation is an effective transport for the oxygen-rich melt from the crucible bottom to the growing crystal. As the melt aspect ratio is reduced during the crystal growth, at some transition point the forced convection takes over as the dominant transport mechanism and the incorporation level is sharply increased. The occurrence of this “transition” seems to depend on the crystal rotation used: the higher the crystal rotation, the sooner the transition occurs during the growth. It is interesting to note that the observed “transition” in incorporation level for VMCZ growth is similar to that discussed earlier for normal CZ growth under high crystal rotation (30 rpm) with the crucible in iso-rotation mode (2 rpm) (see Figure 3.14). In both cases, the behavior is attributed to strong crystal-rotation-induced convection. In the case of HMCZ, the model of Hoshi et al. shows that the transverse magnetic flux damps the vertical melt flow near the wall, due to thermal convection and crucible rotation, resulting in retarded oxygen transport and, therefore, a low level of oxygen incorporation. The forced convection induced by the crystal and crucible rotations is more effective in the transverse direction, parallel to the magnetic flux. Thus, increased crucible rotation helps crucible dissolution and transport of the oxygen-rich melt along the flow path delineated in the model.

The results by Ohwa et al. discussed above and other large-diameter, large-melt VMCZ growth experiments indicate that oxygen incorporation (and other growth characteristics) in VMCZ is affected by many variables and is not easily controlled. Furthermore, impurity segregation was observed to depend on the vertical field strength [78,79]. The segregation coefficients of impurities such as phosphorus, gallium, carbon, and oxygen tend to increase (towards unity) as the magnetic field increases. However, the radial uniformity of these impurities is significantly degraded in a vertical field [74]. For example, the radial oxygen gradient in a 100-mm-diameter silicon crystal can reach 30%–50% when grown in a high vertical field. These properties are not desirable. The change in segregation and the severe degradation of the radial gradient found for VMCZ growth have not been reported for HMCZ growth.

FIGURE 3.28 The effect of vertical and horizontal magnetic fields on damping melt flow in a Czochralski crucible. (From Shimura, F., Semiconductor Silicon Technology, Academic Press, New York, 1989, 258. Reproduced with permission from Elsevier.)


From considerations of growth controllability and crystal properties, HMCZ is the preferred approach for MCZ growth. The HMCZ method can grow large diameter silicon with oxygen levels ranging from a few parts per million atomic to over 20 ppma with axial concentration uniformity.

CZ crystal growth under an applied cusp magnetic field was designed to minimize the undesirable characteristics of VMCZ discussed above [80,81]. A CZ growing system with a cusp magnetic field uses two sets of coils (often superconducting), coaxial with the crystal, which are energized in opposing directions [82], as shown in Figure 3.26c. In this arrangement, the crystal–melt interface is located in the symmetry plane between the two coils and is maintained in this position throughout the growth by adjusting the crucible height. The resulting magnetic field distribution in the growing system is represented by the dotted lines in Figure 3.26c. Essentially, the significant magnetic field components are the vertical field orthogonal to the crucible bottom, Hz, and the radial field orthogonal to the sidewalls of the crucible, Hr. The melt free surface has no orthogonal magnetic field component. As discussed earlier, the crucible erosion rate and the evaporation rate at the melt free surface are the factors determining the oxygen concentration in the bulk melt, and therefore the incorporation level. The erosion rate at the sidewall is believed to be under diffusion control. The orthogonal components of the magnetic field at the sidewall and bottom damp melt flows parallel to the crucible surface (such as thermal convection flows), resulting in a thicker diffusion boundary layer and thus a lower erosion rate. At the melt free surface, on the other hand, the boundary layer controlling the oxygen evaporation is determined by the radial melt flows, such as Marangoni flows and centrifugal flows pumped outwards from the crystal by the crystal-rotation-induced forced convection. These flows are not damped, since there are no orthogonal magnetic field components at the melt free surface. Thus, the oxygen level can be reduced by retarding the crucible erosion while leaving surface evaporation unaffected. The degree of oxygen level reduction is proportional to the applied field strength generated by the two coils. However, when the melt surface is located away from the symmetry plane of the cusp magnetic field, the melt surface is subjected to the effect of orthogonal magnetic components, resulting in reduced oxygen evaporation and an increased oxygen level [79]. In general, the main application of cusp-field CZ growth is low oxygen incorporation, in addition to the usual advantages of magnetic CZ, such as reduced temperature fluctuations in the melt. The orthogonal components acting on the melt surface and on the crucible walls can be controlled independently; this controllability of the field is the main benefit, allowing the desired incorporation level to be achieved with good axial uniformity. Crystals so grown are free of the degradation in radial properties and the abnormal dopant segregation behavior experienced in VMCZ growth.
However, it is essential to optimize the crystal and crucible rotation conditions to match the applied field strength and the fraction solidified during the growth, in order to achieve radial and axial uniformity of oxygen and dopant [83].
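The geometry of the cusp arrangement can be illustrated with the textbook on-axis field of two opposed coaxial current loops (an anti-Helmholtz pair). The coil radius, separation, and ampere-turns below are illustrative only, not taken from any cited system; the point is that the axial field vanishes at the symmetry plane, where the melt surface is kept:

```python
import math

MU0 = 4e-7 * math.pi   # vacuum permeability (T*m/A)

def loop_bz(z, z0, R, I):
    """On-axis axial field of a circular loop of radius R centered at
    height z0 carrying current I (standard magnetostatics result)."""
    return MU0 * I * R**2 / (2.0 * (R**2 + (z - z0)**2) ** 1.5)

# Two coaxial coils in opposition: radius R, half-separation a,
# ampere-turns NI (all assumed, illustrative values).
R, a, NI = 0.5, 0.3, 2.0e5
for z in (-0.3, -0.15, 0.0, 0.15, 0.3):
    bz = loop_bz(z, +a, R, +NI) + loop_bz(z, -a, R, -NI)
    print(f"z = {z:+.2f} m: Bz = {bz:+.4f} T")
```

The opposed contributions cancel exactly at z = 0, leaving only radial field components there, which is why the free melt surface in the symmetry plane sees no orthogonal field and its evaporation boundary layer is left undamped.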

3.3.3 Grown-In Microdefects

3.3.3.1 Relevance to Growth Parameters

Microdefects are formed in normal “dislocation-free” melt-grown silicon crystals by condensation or agglomeration of excess point defects at or near the growing interface. In the 1970s, the so-called A and B defects were observed in FZ crystals [84]. The A defects were identified by transmission electron microscopy (TEM) as small “extrinsic” dislocation loops, and the B defects as the embryos of the A defects [164]. These defects are the result of condensation of Si self-interstitials, and they can be electrically active when decorated, as may be revealed by EBIC (electron beam-induced current) [85]. Fast growth rates were found to eliminate the A and B defects [84]. The other type of defect, termed “D” defects in FZ crystals, was attributed to the condensation of excess vacancies. In recent years, the microdefects in as-grown CZ silicon have been studied extensively. Defects similar to the D and A defects found in FZ silicon have been observed in CZ silicon [86]. The major findings and their relevance to crystal growth are discussed in the following.


In CZ silicon grown at a “normal” growth rate, the cross-section can often be divided into two regions: the inner region is vacancy-rich and the outer concentric region is silicon self-interstitial-rich. The two regions are separated by a concentric ring border that can be delineated by populations of oxidation induced stacking faults (OSF) upon oxidation of the wafer, as shown in the schematic in Figure 3.29. Empirically, it has been found that the diameter of the “OSF ring” is reduced when the growth rate is reduced, and vice versa. The microdefects inside the OSF ring have been observed by several detection methods: crystal originated pits (COPs) revealed by repeated standard clean 1 (SC1) treatment of the wafer surface (as small scattering centers) [87] (see Figure 3.30), flow pattern defects (FPDs) delineated by Secco etching of the wafer surface without agitation [88], laser scattering tomography (LST) defects [89], or defects detected by an optical precipitate profiler (OPP). Their respective defect densities have been correlated with the crystal growth rate (Figure 3.31) and with the defect densities of gate-oxide integrity (GOI) measurements (Figure 3.32). Furthermore, from correlations such as that shown in Figure 3.33, it was speculated that the defects detected inside the OSF ring by different methods have a common origin, i.e., they are agglomerates of vacancies in nature, and they are referred to as “D” defects. Systematic studies have found that COPs, FPDs, and LSTs are the same defects revealed by different detection methods [90]. The defects in the region outside the OSF ring are referred to as “A” defects, which have been shown not to affect GOI but to degrade DRAM performance and yield. “A” defects are normally observed as “large pits” following Secco etching of the wafer surface. Because of the harmful effect of the D defects on gate oxide quality and of the A defects on DRAM, it is desirable to minimize the formation of clusters of either of these point defects and to achieve a nominally low level of each type. In the extreme case, the diameter of the OSF ring shrinks to zero and the whole wafer is free of D defects. Experimentally, the OSF-ring diameter was found to increase as the growth rate is increased. However, the dependence of the ring diameter on the pull rate behaves remarkably differently for different crystal diameters or when the surface cooling condition is changed via, for example, an added radiation shield [91]. A radiation shield isolates the crystal surface from the radiant heat of the heater and the silicon melt surface, resulting in an increase in the surface cooling rate and, therefore, an increase in the axial thermal gradient in the crystal near the liquid–solid interface.


FIGURE 3.29 Schematic showing a ring region (oxidation induced stacking faults region upon oxidation) separating A-defect-rich and D-defect-rich areas.



FIGURE 3.30 Light scattering point counts due to crystal originated pit (COP) defects as a function of the duration of standard clean 1 (SC1) solution treatment and “particle size.” (From Wagner, P. et al., Proceedings of 2nd International Symposium on Advanced Science and Technology of Silicon Materials, Kono, 1996, 101. Reproduced with permission.)

The net effect depends on the heat shield design. Crystal growth experiments with various designs of heat shields show that the OSF-ring diameter, and therefore the fraction of the silicon wafer containing D defects, depends not only on the crystal growth rate but also on the axial thermal gradient in the crystal at the growing interface. Based on these experiments, von Ammon et al. [91] found that there exists a constant critical value, V/G(r) = C_crit = 1.3×10⁻³ cm² min⁻¹ K⁻¹, which allows a calculation of the radial position r of the OSF ring if the pull rate V and the radial variation of the axial temperature gradient G(r) are known [91,92].


FIGURE 3.31 Flow pattern defect density as a function of CZ crystal growth rate. (From Yamagishi, H. et al., Semiconductor Silicon 1994, eds. Huff, H. R., Bergholz, W., and Sumino, K., Electrochemical Society, Pennington, NJ, 1994, 124. Reproduced with permission from Electrochemical Society.)


FIGURE 3.32 C-mode oxide breakdown yield as a function of flow pattern defect density. (From Yamagishi, H. et al., Semiconductor Silicon 1994, eds. Huff, H. R., Bergholz, W., and Sumino, K., Electrochemical Society, Pennington, NJ, 1994, 124. Reproduced with permission from Electrochemical Society.)


The empirical formula can be explained by Voronkov's theory [93,94], which predicted a change from vacancy-type to Si-interstitial-type defects at a critical value of V/G. Based on the empirical equation, Figure 3.34 shows the computed dependence of the OSF-ring diameter on the crystal pull rate for various crystal diameters. It shows that the larger the crystal diameter, the lower the pull rate required to completely eliminate the vacancy-type defects [95]. At 100-mm diameter, a pull rate of 0.65 mm/min is required for vacancy-defect-free growth, while a much lower pull rate, 0.35 mm/min, is required to eliminate the vacancy defects in a 300-mm-diameter crystal. A minimal numerical sketch applying this criterion across the crystal radius is given below.
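The sketch locates the OSF ring as the radius where V/G(r) crosses C_crit; the parabolic G(r) profile and its values are assumed for illustration, not taken from the cited work:

```python
import numpy as np

C_CRIT = 1.3e-3   # cm^2 min^-1 K^-1, from von Ammon et al. [91]

def osf_ring_radius(V_mm_min, G_of_r, R_cm):
    """Largest radius at which V/G(r) still exceeds C_CRIT: inside this
    ring the crystal is vacancy-rich, outside it is interstitial-rich."""
    r = np.linspace(0.0, R_cm, 2000)
    vacancy_rich = (V_mm_min / 10.0) / G_of_r(r) > C_CRIT  # mm/min -> cm/min
    return float(r[vacancy_rich].max()) if vacancy_rich.any() else 0.0

# Assumed gradient profile (illustrative only): G rises from 25 K/cm at
# the center to 45 K/cm at the edge of a 150 mm (R = 7.5 cm) crystal.
G = lambda r: 25.0 + 20.0 * (r / 7.5) ** 2
for V in (0.30, 0.40, 0.50, 0.60):   # pull rates, mm/min
    print(f"V = {V:.2f} mm/min: OSF ring at r ~ {osf_ring_radius(V, G, 7.5):.1f} cm")
```

With these assumed numbers, the ring collapses to zero below a threshold pull rate and expands to the full radius above one, reproducing the qualitative behavior of Figure 3.34.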


FIGURE 3.33 Correlation between COPs and FPDs. (From Yamagishi, H. et al., Semiconductor Silicon 1994, eds. Huff, H. R., Bergholz, W., and Sumino, K., Electrochemical Society, Pennington, NJ, 1994, 124. Reproduced with permission from Electrochemical Society.)


FIGURE 3.34 Oxidation-induced stacking fault (OSF) ring diameter as a function of crystal pull rate for different crystal diameters. (From von Ammon, W., Proceedings of 2nd International Symposium on Advanced Science and Technology of Silicon Materials, Kono, 1996, 233. Reproduced with permission.)

Too low a pull rate, however, presents a problem in maintaining a stable and steady growth rate, which is essential for dislocation-free growth.

The above discussion of the occurrence of D defects applies to lightly doped silicon crystal growth. The defect formation mechanism is apparently different for heavily boron-doped silicon. It is found that the diameter of the OSF ring [95] and the COP density [96,97] decrease dramatically with increasing boron concentration (for resistivity <20 mΩ·cm, Figure 3.35). This effect has been attributed to Fermi-level effects on boron–vacancy pairs [98]. The other probable explanation is based on modification of the generation mechanism and/or diffusivity of the intrinsic point defects in the silicon lattice strained by the high concentration of boron atoms, which are smaller than silicon.


FIGURE 3.35 Crystal originated pit density as a function of crystal resistivity, showing that the crystal is essentially COP-free when the resistivity is less than 10 mΩ·cm. (From Wagner, P. et al., Proceedings of 2nd International Symposium on Advanced Science and Technology of Silicon Materials, Kono, 1996, 101. Reproduced with permission.)


FIGURE 3.36 Cross-sectional transmission electron microscopy (TEM) observation of an oxide defect. (From Itsumi, M. et al., Proceedings of 2nd International Symposium on Advanced Science and Technology of Silicon Materials, Kono, 1996, 270. Reproduced with permission.)

3.3.3.2 Defect Structures

Although the microdefects in CZ silicon have been detected for quite some time by the various methods mentioned above, only recently have their detailed structures been revealed by TEM and meaningfully correlated with the features detected by the other methods. Modern bulk defect detection methods, such as IR laser scattering tomography, and precision thinning/etching tools, such as the focused ion beam (FIB), make it possible to isolate bulk defects in a small thin sample for study by TEM. Since the most significant impact of the microdefects in silicon is on GOI failures, the defects were first observed by TEM as voids by Parks et al. [99], as the origin of oxide defects. Octahedral voids were also observed under the dielectric breakdown sites of thermal oxide films by the Cu decoration method [100] (Figure 3.36). TEM also identified the LST defects in the silicon bulk as octahedral voids [101–103].


FIGURE 3.37 Cross-sectional TEM view of LST defects. Photo (a) is an enlarged view of (b), showing the oxide layer at the void–crystal interface. (From Itsumi, M. et al., Proceedings of 2nd International Symposium on Advanced Science and Technology of Silicon Materials, Kono, 1996, 270. Reproduced with permission.)


FIGURE 3.38 Schematic representation of an octahedral defect structure. (From Itsumi, M. et al., Proceedings of 2nd International Symposium on Advanced Science and Technology of Silicon Materials, Kono, 1996, 270. Reproduced with permission.)

In general, the TEM findings on the D defects can be summarized as voids bounded by {111} planes with the structure of two inverted pyramids (Figure 3.37 and Figure 3.38). Analyses by energy dispersive x-ray spectroscopy (EDXS) indicated no signals other than silicon. The silicon signal is weaker inside the defect than in its surroundings, evidence of a cavity. The typical size is 0.1–0.2 μm, and the voids often appear as twins or triplets, although an individual defect is usually an incomplete octahedron (the tops of some octahedra are cut off, resulting in complex polyhedra). The sidewalls of the octahedron are lined with an oxide layer; EDXS and Auger spectroscopy analyses suggested that the oxide is SiO2 and is approximately 2 nm thick (Figure 3.37b). The void structure with oxide-lined sidewalls makes this defect unique to CZ crystals, since CZ crystal growth incorporates a significant amount of interstitial oxygen. The formation mechanism of the octahedral void defects is not clear. It involves a complex interplay among several crystal growth factors. The temperature fluctuations at the growing interface, the growth rate (and the axial thermal gradient in the crystal), and the dwell time in a critical temperature range (several ranges between 900 and 1100°C have been proposed [104,105]) in the post-freezing crystal all play a role. Several models have been proposed [106].

3.4 Trends in Large Diameter Silicon Growth

3.4.1 Evolution in Crystal Diameter

As semiconductor technology continues to advance, IC production is projected to be in the sub-60 nm technology generation in 2008 [ITRS]. In parallel with the design rule decrease, increased circuit design complexity results in an increased chip size. This has been the major driving force for increased wafer diameter over the last 30 years: increasing the number of ICs per wafer in order to reduce IC manufacturing cost. Figure 3.39 shows the evolution of wafer diameter in the industry since about 1960, when the wafer diameter was about 1 in. The 200 mm generation was initiated in the late 1980s. In 1995, the development of the 300 mm wafer was initiated, targeting IC manufacture in the 0.25–0.18 μm design rule generation, although the 300 mm era only began in earnest in about 2002, at smaller design rules.

FIGURE 3.39 Evolution of silicon crystal diameter.

Concurrently, Japan launched a project to develop 450 mm wafer-era technology. In the up-scaling of the wafer diameter to the 300–450 mm range, the most significant technical challenges are in crystal growth. The growing process is far more complex than in the past. Unlike for small diameter crystals, the dislocation-free as-grown yield will dominate the manufacturing cost of 300–450-mm-diameter wafers. To economically produce large diameter silicon crystals, one needs to employ a large charge size for growing a long crystal. A charge greater than 200 kg for 300-mm-diameter crystal growth, or 450 kg for a 450 mm crystal, is necessary. At this melt size, the thermal convection is severe. The temperature fluctuations associated with the thermal convection make the initial thin-neck growth more difficult. The thermal convection also results in higher oxygen incorporation in the crystal. It is common to apply an external magnetic field, such as a cusp magnetic field, to the large melt to reduce the thermal convection effect and the melt–crucible interaction.

When growing a CZ crystal weighing 150 kg or more using thin-neck growth to achieve the initial dislocation-free seed, one must consider the risk of fracture of the thin neck when the crystal weight exceeds the fracture strength of silicon. An estimate strictly based on fracture strength in tensile mode [107] predicts that the crystal weight limit is about 200 kg when the smallest neck is ~4 mm in diameter (a neck diameter commonly targeted in necking). However, for 400–450 mm diameter growth, the crystal weight needs to be in the range of 400–500 kg to be comparable with 200 mm crystals in production economics. At this weight level, to avoid neck fracture, the “Dash neck” diameter needs to be larger than 6 mm, a diameter at which it is difficult to achieve a dislocation-free structure in the necking process; a simple scaling estimate is sketched below. One solution is to devise a “crystal suspending system” to help support the crystal weight through a “subsidiary cone” grown after the dislocation-free neck is established [108]. An example of such a proposed device is shown in Figure 3.40.
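Taking the text's estimate (about 200 kg on a ~4 mm neck) as a calibration point and assuming simple tensile loading, the supportable weight scales with the neck cross-section, so the required neck diameter scales as the square root of the crystal weight:

```python
import math

W_REF, D_REF = 200.0, 4.0   # kg and mm: the calibration pair from the text

def neck_diameter_mm(W_kg):
    """Neck diameter needed to carry weight W under simple tensile
    loading: stress = W*g / (pi*d^2/4), so d scales as sqrt(W)."""
    return D_REF * math.sqrt(W_kg / W_REF)

# Tensile strength implied by the calibration pair (for reference only).
sigma = W_REF * 9.81 / (math.pi * (D_REF * 1e-3) ** 2 / 4.0)
print(f"implied tensile strength ~ {sigma / 1e6:.0f} MPa")
for W in (400.0, 450.0, 500.0):
    print(f"W = {W:.0f} kg: neck diameter >= {neck_diameter_mm(W):.1f} mm")
```

The 400–500 kg crystals contemplated for 400–450 mm growth indeed come out at or above 6 mm, consistent with the statement above.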


FIGURE 3.40 Schematic of a suspending system for weight support of large diameter growing crystal. (From Yamagishi, H. et al., Proceedings of 2nd International Symposium on Advanced Science and Technology of Silicon Materials, Kono, 1996, 59. Reproduced with permission.)

Another crystal-weight-related problem is the “creep” phenomenon in regions of high stress concentration at the “plastic temperature” (>900°C). The intersection of the crystal neck and “crown” is such a region [110]. When the stress from the crystal weight (plus the meniscus column and surface tension in the melt) exceeds the critical resolved shear stress for slip, slip dislocations are generated and propagate down the crystal along the ⟨110⟩/{111} slip systems. Two possible consequences may result. If the slip exits the crystal, the dislocation-free growing process is not interrupted, but the crystal length above this point is not useful. If the dislocations reach the growing solid–liquid interface, the continued growth will not be dislocation-free. The latter case occurs when the crystal length L is less than R tan 54.74°, where R is the radius of the growing crystal; a small numerical illustration follows below.

Besides weight-related problems, large diameter silicon requires a growth rate reduction as well. As the crystal diameter is increased, dissipation of the massive latent heat of solidification from the freezing interface becomes more difficult, since the heat transfer paths are longer. This can be understood from the heat balance shown in Figure 3.3. In silicon crystal growth, a sufficiently high growth rate is essential to maintain steady crystal growth, in order to maintain the dislocation-free structure. One can enhance the growth rate by enhancing the heat transfer rate via increased crystal surface cooling. Radiation shields have been used to reduce the radiation effect from the melt and the heater [91]. However, by doing so, the thermal gradient is increased, resulting in more curved isotherms and interface shapes and enhancing the conditions for higher thermal stress. Stress-induced slip can then occur, causing structure loss.
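A small numerical illustration of the slip-exit geometry, assuming a ⟨100⟩ growth axis so that the {111} slip plane meets the axis at 54.74°:

```python
import math

def critical_length_mm(R_mm):
    """L = R * tan(54.74 deg): below this length, a slip generated at the
    neck/crown region can reach the growing interface instead of exiting
    at the crystal surface (assumes a <100> growth axis)."""
    return R_mm * math.tan(math.radians(54.74))

for d in (200, 300, 450):   # crystal diameter, mm
    print(f"d = {d} mm: critical length ~ {critical_length_mm(d / 2):.0f} mm")
```

For a 300 mm crystal, slip nucleated near the crown threatens the interface only during roughly the first 210 mm of body growth; beyond that, it exits at the surface.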


In severe cases, the high thermal stress can cause crystal cracking. Eventually, the growth rate issue may be a limiting factor in determining the maximum diameter for CZ silicon.

The increase in diameter has a profound effect on the crystal's cooling rate and, therefore, on microdefect formation. If the growing crystal's V/G is above the critical value, D defects are generated within part of, or the entire, radius. It is now understood that for D defects (vacancy clusters) such as COPs, FPDs, and LSTs, the defect density is a function of the dwell time in the temperature range of 900°C–1100°C [104,105]. Slow cooling reduces the GOI defect density. As the crystal diameter is increased, the crystal cooling rate decreases and the dwell time at 900°C–1100°C increases. It was estimated that for a 300 mm crystal, the dwell time in this temperature range is 50% longer than that of a 200 mm crystal, and 100% longer for the 400 mm case [95]; a rough dwell-time estimate is sketched below. Therefore, the diameter increase causes a reduction in D defect density. The D defects on the wafer surface cause oxide thinning, and their density is directly correlated with the defect density in the GOI test. Therefore, it appears that the diameter increase has a positive effect on the microdefect density. If the diameter increase requires a decrease in the growth rate, then V/G may be in the region where the entire crystal is Si-interstitial-rich. The defects are then in the form of small dislocation loops (A and B defects). From the available reports, it appears that the A and B defects have no effect on GOI but may cause junction leakage in devices such as the charge storage in DRAMs, since crystallographic defects, especially when decorated with metallic impurities, are potential recombination centers. Si-self-interstitial-rich silicon also does not favor oxygen precipitation, as the formation of SiO2 ejects excess Si atoms into the crystal, so oxygen precipitation in such silicon may be impeded. This factor, plus the fact that the large diameter crystals are low in oxygen (most are grown by MCZ), will require significant research to assure internal gettering in these materials. It appears that the large diameter crystals grown today and in the foreseeable future will contain either vacancy- or self-interstitial-type microdefects, or both, when “normal” crystal growing processes are used. However, with an understanding of the relationship between the intrinsic point defects and V/G during crystal growth, a growth process may be designed to grow defect-free silicon (“pure silicon” or “nearly perfect crystal”), which is free of COPs and dislocation loops [111]. The schematic in Figure 3.41 shows the relationship between V/G and the grown-in defect concentration. There is a region of V/G within which the concentrations of vacancies and interstitials are below the threshold of defect formation. In order for a crystal to be defect-free, a narrow range of V/G values has to be maintained across the growing interface during the whole growing process. The vertical thermal gradient can vary from the center to the edge of the growing crystal, and it is also a function of the crystal length grown (or the size of the remaining melt). Therefore, maintaining V/G at a target value presents major challenges in process design as well as in the design of the grower's hot-zone thermal characteristics. With the availability of defect-free silicon via growing process control, device makers have more options in materials selection based on factors related to cost, the nature of the processing, etc.
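A rough dwell-time estimate can be made with the common approximation that the local cooling rate is G × V (axial gradient times pull rate); the gradient values below are assumed for illustration and are chosen only to reproduce the ~50% difference quoted above:

```python
def dwell_hours(G_K_cm, V_mm_min, T_hi=1100.0, T_lo=900.0):
    """Dwell time in the (T_lo, T_hi) window using the approximation
    cooling rate ~ G * V, with G the local axial gradient (K/cm) in that
    temperature range and V the pull rate (mm/min)."""
    cooling_K_min = G_K_cm * (V_mm_min / 10.0)   # K per minute
    return (T_hi - T_lo) / cooling_K_min / 60.0

# Assumed gradients; real values depend on the hot-zone design.
for label, G in (("200 mm", 12.0), ("300 mm", 8.0)):
    print(f"{label}: dwell ~ {dwell_hours(G, 1.0):.1f} h in 900-1100 C")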
Epitaxial wafers and hydrogen (or argon) annealed wafers [113] are also widely used. The current hydrogen/argon annealing process can provide a 5–10 μm deep defect-free zone and, with designed thermal ramping during the anneal, can provide nuclei for internal gettering (IG). The epitaxial silicon layer has a higher quality than melt-grown bulk silicon. The layer is essentially free of interstitial oxygen, carbon, and microdefects, and is lower in surface particles and metal contamination than the bulk wafer. The DRAM manufacturing sector has been the largest user of polished wafers; the microdefect problems associated with bulk wafers may drive a significant fraction of DRAM manufacturers to switch to epitaxial wafers. Two possible epi structures exist: p/p+ and p−/p−. The former has also been popular with microprocessor and application-specific integrated circuit (ASIC) manufacturers, its advantages including improved gate oxide quality, internal gettering, latchup immunity, etc. The p−/p− approach is useful to “mask” the microdefect problems of polished p− wafers. Neither type of microdefect, dislocation loops nor voids, is found to extend into the epitaxial layer during epitaxial growth [114].

FIGURE 3.41 Schematic showing grown-in defects depending on the V/G ratio, where V is the growth rate and G is the thermal gradient at the solid/melt interface. (From Rozgonyi, G. A., Semiconductor Silicon 2002, eds. Huff, H. R., Fabry, L., and Kishino, S., Electrochemical Society, Pennington, NJ, 2002, 149. Reproduced with permission from Electrochemical Society.)

3.4.2 Continuous Czochralski Silicon Growth

The idea of continuous CZ growth arises from the fact that, as the silicon diameter continues to increase, the maximum crystal length of the batch CZ process is limited by the charge size. The initial motivation for continuous growth was increased crystal length and thus improved throughput and operating cost of the CZ grower. Continuous CZ growth based on a two-container system was first demonstrated by Fiegl [24]. In addition to increased crystal length, many other desirable crystal properties were also demonstrated, such as improved axial uniformity of both dopant and oxygen concentrations.


FIGURE 3.42 Schematic showing a generalized two-container arrangement for crystal growth from a constant volume melt by continuous feed. (From Lin, W. and Benson, K. E., Annual Review of Materials Science, 17, 273, 1987. Reproduced with permission from Annual Reviews.)


In particular, the arrangement affords the use of a small, shallow-melt CZ set-up in the growing container. In many ways, the two-container arrangement with crystal pulling from a constant melt volume is the same as the double-crucible operation. Figure 3.42 is a generalized two-container system reconfigured from a constant-volume double crucible, where V1 and V2 are the melt volumes in the outer and inner crucibles, respectively, of a double-crucible set-up. The reconfigured arrangement facilitates the addition of polysilicon and dopant. Only the outer crucible (the feeder) needs to be lifted or adjusted to maintain a constant volume/melt level in the growing crucible. The system offers the same advantages as the double crucible: axial uniformity in dopant and oxygen, and the benefit of a small-melt effect. Figure 3.43 shows the uniform axial oxygen profile obtained with “constant volume” growth. A uniform axial dopant profile at concentration C0 is obtained when the growing crucible is initially doped to C0/k and the feeding crucible concentration is maintained at C0; a minimal sketch of the underlying solute balance is given below. The incorporation behavior of impurities in the feed material is similar to one-pass zone leveling [22], with the “zone width” being equivalent to the melt volume in the growing crucible, V2.
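A minimal sketch of that solute balance, under the stated assumptions (constant, well-mixed melt volume V2 replenished at concentration C0): integrating V2·dC_L/dVs = C0 − k·C_L gives a crystal concentration C_s(Vs) = C0 + (k·C_L0 − C0)·exp(−k·Vs/V2), which is identically C0 when the growing crucible is pre-doped to C_L0 = C0/k:

```python
import numpy as np

def crystal_conc(C0, k, C_L0, V2, Vs):
    """Crystal dopant concentration after volume Vs has solidified from a
    constant melt volume V2 replenished at concentration C0. The solute
    balance V2*dC_L/dVs = C0 - k*C_L integrates to
        C_s = k*C_L = C0 + (k*C_L0 - C0) * exp(-k*Vs/V2)."""
    return C0 + (k * C_L0 - C0) * np.exp(-k * Vs / V2)

C0, k, V2 = 1.0, 0.35, 1.0        # normalized units; k ~ phosphorus-like
Vs = np.linspace(0.0, 5.0, 6)     # volume solidified, in units of V2
print("pre-doped to C0/k:", crystal_conc(C0, k, C0 / k, V2, Vs))  # flat at C0
print("started undoped:  ", crystal_conc(C0, k, 0.0, V2, Vs))     # relaxes to C0
```

An impurity present only in the feed relaxes toward its feed concentration over a characteristic solidified volume of V2/k, which is the one-pass zone-leveling behavior noted above.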

FIGURE 3.43 Axial oxygen, dopant, and impurity concentrations along a crystal grown from the arrangement shown in Figure 3.42. The oxygen concentration level can be adjusted by changing the melt volume. (From Lin, W. and Benson, K. E., Annual Review of Materials Science, 17, 273, 1987. Reproduced with permission from Annual Reviews.)

FIGURE 3.44 Uniform axial oxygen distributions from “constant volume” continuous-feed silicon growth. The oxygen level is shown to depend on the melt size. (From Fiegl, G., Solid State Technol., August. Reproduced with permission from Solid State Technology.)

In the continuous growth mode, V1 is the total melt volume passed from the feeding (outer) crucible to the growing crucible before V2 is consumed and reduced. The oxygen concentration level is determined by the volume and aspect ratio of the melt in the growing crucible (see Figure 3.44). While the continuous “feed and pull” system with two containers appears straightforward, many engineering challenges remain to be solved for the method to be practical. Among the problems to be solved are the liquid melt transfer and the establishment of thermal stability and radial thermal symmetry in the melt while it receives replenishment melt from an external source. In recent years, with the availability of high-purity, small-diameter silicon beads (~0.1–1 mm) from the fluidized bed process, “feed and pull” may be carried out in a double-crucible arrangement, as illustrated in Figure 3.45a and b. In the configuration of Figure 3.45a, the partition between the inner and outer crucibles can be inserted after the polysilicon charge is completely melted, before seeding begins [115]. A crystal length of 2 m grown in such a continuous “feed and pull” mode has been demonstrated. If the total melt volume is kept constant, the partition separating the two melt concentrations becomes unnecessary.

[Figure 3.45 schematics: polysilicon plus dopant fed at C0 into melts held at C0/k; panel (c) shows the circular silica baffle.]

FIGURE 3.45 Crystal growth from double-crucible arrangements with (a) constant melt level or (b) constant inner melt volume, maintained by continuous feed, and (c) crystal growth from a single container equipped with a circular silica baffle. The melt level is kept constant by continuous feed. The melt concentration is maintained at C0/k. (From Lin, W. and Benson, K. E., Annual Review of Materials Science, 17, 273, 1987. Reproduced with permission from Annual Reviews.)


The resulting "one melt" growth retains the advantages of double-crucible arrangements (Figure 3.45c). In this arrangement, the oxygen concentration level is controlled, as in the two-container case, by the melt volume and aspect ratio. However, the application of forced convection for additional control of oxygen incorporation is more restricted than in standard batch processes. Furthermore, in this design, the single crucible contains a physical barrier to prevent unmelted silicon particles from reaching the growing crystal and a baffle that reduces thermal convection. One may also view the continuous-feed mechanism as playing the role of the outer crucible, which supplies silicon with dopant concentration C0 (the intended concentration in the crystal). The small-diameter polysilicon beads add a new dimension to the development of continuous "feed-and-pull" silicon growth. Shiraishi et al. [116] used liquid feed for continuous crystal growth of large-diameter silicon (150- and 200-mm-diameter crystals). In this approach, polysilicon rods are melted into liquid silicon immediately above the growing melt, inside the growing chamber, and the liquid is continuously fed into the CZ melt. The continuous-mode growth provides flexibility in that the melt volume and aspect ratio of the growing crucible can be adjusted to set the oxygen incorporation level. This is especially important for low oxygen incorporation, which cannot be easily attained in standard CZ growth with a melt size of 80 kg or larger.

3.5 Wafer Preparation

Silicon semiconductor devices are mostly fabricated on polished wafers or epitaxial wafers. Thus, the first step in device fabrication is the preparation of mirror-polished, clean, and damage-free silicon surfaces in accordance with the specifications. As the design rule of device fabrication advances into the deep-submicron region, device processing and performance become more sensitive to the starting material's characteristics. The requirements on the geometrical tolerance of polished wafers, as well as on their bulk characteristics, are becoming more stringent. Polished wafers are prepared through a complex sequence of shaping, polishing, and cleaning steps after a single-crystal ingot is grown. Although the detailed shaping processes vary depending on the manufacturer, the processes described below are generic in nature. Newly introduced processing technologies are discussed where appropriate. Figure 3.46 is a flow chart showing a generic wafer shaping process. The single-crystal ingot is first evaluated for crystal perfection and resistivity before it is surface ground to a cylindrical shape of a precise diameter. Flat(s) or a notch with preferred crystallographic orientations are ground on the ingot surface parallel to the crystal axis. The primary flat or notch, for example, is positioned perpendicular to a ⟨110⟩ direction on a (100) wafer and is used for alignment of the wafer in device processing with automated handling equipment. The primary flat, or notch, also serves as an orientation reference for chip layout, since devices fabricated on wafers are crystallographically oriented. A secondary flat, shorter than the primary, is used to identify the wafer surface orientation and conductivity type [117].

3.5.1 Slicing

The slicing operation produces silicon slices from the ground ingot. Slicing defines the critical mechanical aspects of a wafer, such as thickness, taper, and warp. Slicing is commonly carried out by an inner diameter (ID) circular saw (Figure 3.47a) after the ingot is rigidly mounted to maintain the accurate crystallographic orientation previously determined by x-ray diffraction. The ID saw uses a thin stainless steel blade with diamond particles bonded on the inner edge. Recently, the development of multiple-wire saws has enabled silicon slicing with high throughput and superior mechanical properties, such as significantly reduced bow and warp. Figure 3.47b shows a schematic of a multiple-wire saw. In this arrangement, parallel, equally spaced, and properly tensioned stainless steel wires spun across two pulleys are part of a single stainless steel wire winding through a complex set of pulleys. Cutting of multiple slices results when the ingot is pressed against the traveling wires under injection of slurry. Although the cutting rate is much slower than that of the ordinary ID saw (the ID saw cuts at an 80–100 times higher


FIGURE 3.46 Flow chart describing generic steps involved in wafer preparation employing modern technologies: crystal growth; ingot surface grinding, orientation flattening or notching; wafer slicing by ID or multiple-wire saw; edge/notch rounding; lapping or grinding (both sides); edge polishing; donor anneal (p− only); double-side or single-side polish; backseal (poly-Si, LTO, etc.) and PACE (options); CMP (final polish); epitaxy; final cleaning and packaging.

rate), as many as 300 slices can be produced simultaneously. Besides higher throughput, the multiple-wire saw has other major advantages over the ID saw. Slicing by the multi-wire saw is actually the result of a low-speed grinding/lapping action by the slurry. Improved bow, warp, total thickness variation (TTV), and taper are much more easily obtained than with the ID saw. In addition, the slow lapping action of the moving wire results in a small kerf loss, which affords more slices per inch of ingot. The kerf loss has been shown to be very close to the diameter of the wire used. The multiple-wire saw offers material savings (it reduces kerf loss by 30% compared with the ID saw), increased productivity, and improved wafer mechanical properties. It has been used for slicing 200- and 300-mm-diameter wafers and is expected to be used for future "diameter generations."
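As a rough illustration of the kerf advantage, assume a slice thickness of about 0.8 mm and a kerf close to a typical wire diameter of roughly 0.18 mm (representative values assumed here, not specified in the text):

\[ N \approx \frac{L}{t_{\text{slice}} + t_{\text{kerf}}} \approx \frac{1000\ \text{mm}}{0.8\ \text{mm} + 0.18\ \text{mm}} \approx 1000\ \text{slices per meter of ingot}, \]

so each 0.1-mm reduction in kerf yields on the order of 10% more slices from the same ingot.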

3.5.2 Chemical Etching

Chemical etching of the slices is done to remove the mechanical damage induced during the previous shaping steps (ingot surface grinding and slicing). The etching can be carried out with either an acidic solution or a caustic etchant. The acidic system is mostly based on the HNO3–HF system (or with modifiers such as acetic acid [118]). The surface material removal is the result of a two-step reaction: the Si surface is first oxidized by HNO3 to form SiO2, which is then removed by HF. The acid etch produces a smooth and shiny surface. However, since the reaction is exothermic, temperature control is critical in order to maintain uniform etching. Caustic etching uses an alkaline solution [119], such as KOH, with certain stabilizers. The KOH etch offers a uniform etching rate, but produces a rougher surface than the acid



FIGURE 3.47 Schematics showing (a) the traditional inner diameter (ID) saw and (b) the recently developed multiple-wire saw for silicon wafer slicing.

etch, since the KOH etching rate is crystallographic-orientation dependent. Chemical etching of the slice may be repeated after subsequent mechanical operations, such as edge rounding and lapping/surface grinding, to remove mechanical damage.
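The two-step acid-etch sequence described above is often summarized by the overall reactions (a standard textbook form, not taken from this chapter):

\[ 3\,\mathrm{Si} + 4\,\mathrm{HNO_3} \longrightarrow 3\,\mathrm{SiO_2} + 4\,\mathrm{NO}\uparrow + 2\,\mathrm{H_2O}, \qquad \mathrm{SiO_2} + 6\,\mathrm{HF} \longrightarrow \mathrm{H_2SiF_6} + 2\,\mathrm{H_2O}. \]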

3.5.3 Edge Rounding

The square edge of a sliced silicon wafer is rounded by an edge grinder. Rounded-edge wafers suffer greatly reduced mechanical defects, such as edge chips and cracks induced by wafer handling. Edge chips and cracks can serve as stress raisers, which facilitate the onset of wafer breakage or plastic deformation and slip dislocations during thermal processing. In addition, the rounded edge eliminates the occurrence of an epitaxial crown (a thicker epitaxial layer at the wafer edge) during epitaxial deposition and the pile-up of photoresist at the wafer edge. The shape of the rounded edge usually follows an industry standard (i.e., a SEMI standard) in which the edge profile fits within the boundary of a standard template. However, variations from the "standard" exist. In some applications, the rounded edge is modified to be more "blunt" in order to facilitate the chemical–mechanical polishing (CMP) operation for inter-level dielectric planarization. In this case, the "blunt" edge supposedly prevents the wafer from slipping out of the template during polishing. Other applications require more "rounded" edge shapes for increased strength. Often, a compromise on the edge profile is required.

3.5.4 Lapping/Grinding

Lapping of the silicon slice surface takes place when the slice is ground between two counter-rotating cast iron plates in the presence of an abrasive slurry, usually a mixture of micron-sized alumina or silicon carbide particles suspended in a solution. The purpose of the lapping operation is to remove the nonuniform damage left by slicing and to attain a high degree of parallelism and flatness, both global and local. In fact, post-lapping slices possess the best mechanical characteristics in the entire shaping process flow. The subsequent mirror-polishing operation generally degrades the flatness attained by lapping. However, lapping with slurry also introduces fresh damage to the silicon surface, which requires subsequent chemical etching and chemical–mechanical polishing for removal. Chemical etching and CMP of the silicon surface degrade the wafer flatness. To circumvent this situation, a surface grinder (with a precision grinding wheel bonded with diamond particles) applied to both sides of the wafer is employed to achieve surface flatness with reduced surface damage. With the reduced surface damage, the need for chemical etching and CMP is also reduced, and good mechanical properties may be retained.


3.5.5 Polishing

Polishing is accomplished by a chemical–mechanical polishing process involving a polishing pad and a slurry. The polishing slurry is usually an alkaline colloidal solution containing micron-sized silica particles. While CMP is used to remove surface damage and to produce a mirror-finished surface, it also degrades the wafer flatness achieved by lapping/grinding. Therefore, it is essential to optimize the operational parameters so as to minimize the polishing time and flatness degradation. Double-side polishing (CMP on both front and back surfaces simultaneously) has been found to result in superior flatness compared with single-side polishing arrangements. The combination of surface grinding (on both sides of the wafer) and double-side CMP has been shown to result in superior total indicator reading (TIR), total thickness variation (TTV), and local flatness. Such an approach is becoming a standard manufacturing process for the preparation of large-diameter wafers (≥300 mm). Wafer preparation via mechanical methods (i.e., grinding, CMP, etc.) has its limits in the degree of flatness that it can achieve. To supplement it and to fine-tune the local topography for further improvement in local flatness, tools such as plasma-assisted chemical etching (PACE) [120] have been developed. Such a tool employs a spatially confined plasma with a scanning mechanism to allow material removal to be controlled as desired over the wafer surface. PACE utilizes low-energy neutral species (i.e., ≤1 eV) rather than the energetic ions involved in reactive ion etching. Therefore, PACE produces minimal or no subsurface damage.

3.5.6 Cleaning

Wafer surface contamination can affect electronic device performance. The contaminants can be attached to the wafer surface physically or chemically. Cleaning of the wafers is necessary at many steps in device fabrication, as well as during the wafer shaping processes. The silicon wafer must be free of contamination before it is shipped to the device fabrication line. The cleaning processes for the removal of surface contaminants during wafer shaping and polishing are discussed below. In general, the contaminants can be classified as molecular, ionic, or atomic. Typical molecular contaminants include waxes, resins, and oils used in the polishing and sawing operations, and material from the plastic containers used for slice transport and storage. Molecular contaminants are adsorbed on the wafer surface by weak electrostatic forces. They should be removed before subsequent cleaning steps involving chemical reactions. Ionic contaminants, such as Na+, Cl−, and F−, are present after wafer treatments in HF-containing or caustic solutions. They are attached to the wafer surface by chemical adsorption. The atomic contaminants of concern are transition metal atoms, such as Fe, Ni, and Cu. The transition metals and ionic species can cause degradation in device performance. Chemical cleaning is an effective method to remove contaminants from the wafer surface, and many chemical cleaning processes have been developed. The process that is widely used in the semiconductor industry is the so-called "RCA Clean" [121], which consists of two consecutive cleaning solutions: H2O–H2O2–NH4OH (Standard Clean 1, SC1) and H2O–H2O2–HCl (Standard Clean 2, SC2). The SC1 clean, with volume ratios typically 5:1:1, removes organic contaminants by both the solvating action of NH4OH and the strong oxidizing action of H2O2. The NH4OH can also form soluble complexes with some metals, such as gold, copper, nickel, and cobalt. The SC2 clean, with typical volume ratios of 6:1:1, removes transition and alkali metals from the wafer surface and prevents redeposition from the solution by forming soluble metal complexes (with Cl−). The SC1 can also remove particles physically attached to the wafer surface through the etching effect of NH4OH, which detaches the particles from the wafer surface. The repulsion between the like charges transferred from the electrolyte (NH4+ and OH−) to the wafer surface and to the detached particles, respectively, prevents the particles from redepositing on the wafer surface. A modified RCA Clean [122], which adds a brief etch in diluted HF solution after SC1, was designed to eliminate the thin oxide layer grown on the silicon surface by the SC1 process. The thin oxide resulting from SC1 was thought to hinder cleaning of the surface by SC2. Many modifications of the "RCA Clean" exist (published and unpublished). Most of the modifications are to the volume ratios. For


example, in order to reduce silicon surface roughness, there is a trend to greatly reduce the volume fraction of NH4OH in SC1 from the original formula. The effect of silicon surface roughness on gate oxide integrity has been reported to be significant when the oxide is thinner than 5 nm, although the literature is often not consistent, probably due to different process conditions. Other chemical cleaning solutions/processes, such as piranha (H2SO4–H2O2–H2O), ozonated water, etc., have also been shown to be effective. However, the "RCA Clean" and its modified versions have been the most popular cleaning processes used by semiconductor silicon manufacturers.

3.6 Epitaxial Growth
3.6.1 Silicon Epitaxial Wafer

An epitaxial silicon wafer refers to a structure where an epitaxial layer is grown on a single-crystal silicon substrate by chemical vapor deposition (CVD), normally at a high temperature. The CVD process usually involves the hydrogen reduction of high-purity silicon tetrachloride (SiCl4), trichlorosilane (SiHCl3), or dichlorosilane (SiH2Cl2) to form solid silicon. An added source gas in the reaction, such as diborane (B2H6) or arsine (AsH3), provides dopant atoms for p- or n-type electrical carriers, respectively, in the epitaxial layer. The primary purpose of the epitaxial growth is to create a layer with a different, usually lower, concentration of electrically active dopant than the substrate. Depending on the IC characteristics, the epi layer must meet a set of specifications for thickness, electrically active dopant concentration, sharpness of the epi–substrate interface, defect density, and contamination. The epitaxial layer was initially applied to bipolar devices. Typically, a lightly doped layer is grown over a substrate containing a pattern (often referred to as a sub-collector) of opposite-type impurity formed by diffusion or ion implantation. The structure allows a vertical transistor to be built with a minimum of collector resistance and readily permits junction isolation between devices (see Figure 3.48). The most common application of the epitaxial layer in CMOS device processing involves a lightly doped layer over a heavily doped substrate of the same conductivity type, i.e., either a p/p+ or n/n+ structure. The p/p+ is by far the dominant universal epitaxial structure. It has been used by CMOS-based logic, microprocessor, ASIC, and some DRAM manufacturers. Figure 3.49 is a schematic of a CMOS structure built on the p− layer of a p/p+ wafer. It is realized today that the p/p+ wafer has many advantages over bulk p− wafers. The initial motivation for using such a structure was to reduce the metal–oxide–silicon (MOS) device's leakage current. Since a lightly doped bulk substrate has a higher concentration of minority carriers, the carriers can diffuse hundreds of micrometers to space-charge layers and are collected as reverse-bias leakage currents. This minority-carrier diffusion current can dominate over the leakage current generated within the space-charge layers, especially at higher operating temperatures (≥40°C) [123]. A technique that can circumvent this problem is to form a p− layer (~10¹⁵ atoms/cm³) as an epi layer grown on a heavily doped substrate (~10¹⁹ atoms/cm³) [124,125]. The p+ substrate has few minority carriers (electrons), so minority carriers are only generated in the thin epi layer. Thus, the diffusion current

[Figure 3.48: bipolar transistor cross-section showing p isolation diffusions, base, emitter, collector, and substrate contacts, an n+ buried layer under the n epi collector, all on a p substrate.]

FIGURE 3.48 The role of the epi layer in a bipolar device.


[Figure 3.49: CMOS inverter cross-section (input, output, Vdd) on the p− epi layer of a p+ substrate, with n+ and p+ source/drain regions and an n-well.]

FIGURE 3.49 The role of the epi layer in a complementary MOS device.

from the substrate is suppressed, even though the minority-carrier diffusion lengths in the epitaxial layer are long; the minority-carrier diffusion length is small in the p+ substrate. This is especially important in preserving holding times in dynamic nodes (e.g., DRAM) [123]. Since its initial application in CMOS, many added beneficial functions have been found to be associated with the p/p+ structure. The major ones include its effects in minimizing soft errors, preventing latch-up of the device, and providing sites for gettering of harmful impurities. From the device design viewpoint, using the p/p+ structure for CMOS fabrication is the best solution for avoiding latch-up without resorting to circuit design modifications, such as adding "guard rings," etc. Latch-up is due to the turn-on of a parasitic four-terminal, n–p–n–p lateral transistor between neighboring NMOS and PMOS transistors, as shown in Figure 3.49. However, if the formation of a vertical parasitic p–n–p is favorable in the presence of the p+ substrate, the turn-on of the lateral four-terminal device may be suppressed. To a first approximation, in order to suppress the lateral p–n–p–n device, the effective epi thickness (taking into account the boron up-diffusion after device processing) must be smaller than the separation of two neighboring n–p transistors. Latch-up simulators are available to provide information for latch-up-free design layout, from which an appropriate epi thickness may be estimated. p+ substrates in p/p+ wafers have also been found to serve as excellent intrinsic gettering sites. Two effects have been observed. The first is the gettering effect due to p+ silicon's high efficiency in generating oxygen precipitate–dislocation complexes. Oxygen precipitation in p+, with resistivity around 10 mΩ cm, has fast kinetics and can result in a precipitate density more than an order of magnitude higher than in p− (10 Ω cm) under a Lo–Hi annealing condition [41]. Much of the oxygen precipitation behavior is discussed in Section 3.7. The second gettering effect by the p+ silicon is due to the "segregation" effect, which drives metallic impurities from the p− epi layer to the p+ region under a "segregation" anneal [126]. The p+, like other highly (degenerately) doped silicon regions, such as diffused phosphorus layers and source/drain regions in the CMOS structure, has long been observed to getter metal impurities, an effect perceived as due to enhanced solubility. The segregation effect for impurities such as Fe was proposed to be due to the difference in the Fermi levels of p+ and p− silicon [127]. One of the unique features associated with the p/p+ structure is the possible existence of misfit dislocations at the epi–substrate interface (see Figure 3.50). These dislocations can act as gettering sites as well. It is well known that doping with boron causes the silicon crystal lattice to contract (~0.014 Å per atom% boron added) [128]. The vast difference in the boron doping levels in p/p+ (~10¹⁹ atoms/cm³ in the substrate, while the epi layer is doped at ~10¹⁵ atoms/cm³) results in a lattice mismatch between the epitaxial layer and the substrate during CVD deposition [129]. Under this condition, the epitaxial growth is accompanied by lattice stress, tensile on the epi layer side and compressive on the substrate side.
The misfit stress increases as the layer continues to grow, until the local stress exceeds the elastic limit at the deposition temperature, at which point the stress is partially relaxed by the formation of misfit dislocations at the epi–substrate interface. The onset of misfit dislocation formation during epi deposition is a


FIGURE 3.50 Cross-sectional view of an etched p/p+ epi wafer, showing etched misfit dislocation pits at the epi–substrate interface.

function of the degree of lattice mismatch, the deposition temperature, and the deposited thickness. The unrelaxed misfit stress contributes to both wafer bow and warp. It should be noted that the misfit dislocations formed in the (100) p/p+ epi structure are contained in the (100) interface, with a Burgers vector of the form b = a/2⟨110⟩ [130]. Since the (100) plane is not a glide plane in the silicon lattice, such a dislocation is immobile; it is "locked" [131] in the interface. Furthermore, the dissociation of such a dislocation into two mobile dislocations on two inclined {111} slip planes is energetically unfavorable. However, it is possible to move misfit dislocations in silicon by a "climb" mechanism under high-stress conditions. Experimental evidence [129] shows that the movement is towards the substrate side, i.e., the side with the smaller lattice constant. The misfit dislocations so formed are stable and can contribute to the gettering effect. They present no harmful effect to the epi layer structure. Perhaps the most fundamental difference between the polished bulk wafer and the p− layer of the p/p+ wafer is that the CVD silicon is superior in quality. The epi layer is free of oxygen and carbon incorporation. The incorporation of dopant is uniform and free of local fluctuations (such as the striations in CZ materials discussed in Section 3.3.2.4.2). In general, CVD epitaxial silicon has been recognized as a superior material for gate oxide integrity (GOI) in device processing compared with bulk CZ silicon (see Figure 3.51). More significantly, in deep-submicron design-rule technologies, CVD epitaxial silicon is of vital importance and beneficial to gate oxide integrity. The CVD silicon epitaxial layer is also free of the grown-in microdefects stemming from the agglomeration of point defects encountered in melt-grown (CZ) silicon. Grown-in "D" defect (such as COP, LST, and FPD) densities in CZ silicon have been correlated with the oxide defect density. The issue of grown-in defects is discussed in Section 3.3.3. However, the defects due to agglomeration of point defects in CZ growth tend not to extend into the epitaxial layer during CVD deposition. It has been demonstrated that an epitaxial layer as thin as 0.3 μm, grown on a CZ substrate containing COPs, can mask the harmful effect of the COPs [114]. In consideration of the impact of grown-in microdefects on device yield and reliability, many device manufacturers will be prompted to switch from bulk CZ wafers to p/p+ or even p−/p− epitaxial wafers.
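The magnitude of the boron-induced misfit discussed above can be sketched from the contraction coefficient quoted in the text together with the standard silicon values a ≈ 5.431 Å and 5×10²² atoms/cm³; the numerical example is illustrative, not from the original:

\[ 10^{19}\ \mathrm{cm^{-3}} \approx \frac{10^{19}}{5\times 10^{22}} = 0.02\ \text{atom\% B}, \quad \Delta a \approx 0.014 \times 0.02 \approx 2.8\times 10^{-4}\ \text{\AA}, \quad f = \frac{\Delta a}{a} \approx 5\times 10^{-5}, \]

a small mismatch, but sufficient to nucleate interfacial misfit dislocations once the deposited layer is thick enough.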

3.6.2 Heteroepitaxy

Heteroepitaxial growth refers to epi deposition where the deposited layer and the substrate have slightly different lattice constants, due to a difference in chemical composition, although they may be of the same crystal


[Figure 3.51: defect density D0 (up to ~1.6 cm⁻²) for p−/p+ epi, p− bulk, and pre-heat-treated p− bulk material, grouped by measurement voltages of 2, 5.5, and 10 V.]

FIGURE 3.51 Defect density (D0) extracted from voltage measurements on thin oxide grown on three different silicon surfaces. (From Boyko, K. C., Feiller, R. L., and Lin, W. Unpublished.)

system. Heteroepitaxy has been employed in the fabrication of new, emerging Si materials and in advanced device processing. For example, strained silicon growth, SiGe alloy deposition for recessed S/D, and SiGe for heterojunction bipolar transistors (HBTs) via selective epitaxy are among the recent applications. In heteroepitaxy, due to the difference in crystal lattice constant across the interface, there exists a strain in the grown epitaxial layer that accommodates the lattice mismatch. The layer is pseudomorphic when the layer thickness is less than a certain critical thickness [132], above which the stress in the film is partially relaxed by the formation of dislocations. Figure 3.52 shows a lattice model for strained pseudomorphic epi layers in two different heteroepitaxy arrangements. In both cases, the epi thickness is below the critical thickness. In the situation of Figure 3.52a, the epi layer is in a state of biaxial tensile strain, and in that of Figure 3.52b, a state of biaxial compressive strain. When the pseudomorphic layer continues to grow in thickness and reaches the critical thickness [132], the epi layer partially relaxes plastically, releasing the lattice strain by generating misfit dislocations at the growing interface. This is shown in Figure 3.53. A completely relaxed heteroepitaxial layer is in an equilibrium state. As will be discussed in the following sections, both pseudomorphic and relaxed epi layers have their applications.
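For concreteness, the mismatch f that drives this strain is defined below, with a Vegard's-law estimate for SiGe on Si (the lattice constants aSi ≈ 5.431 Å and aGe ≈ 5.658 Å are standard values, not given in this chapter):

\[ f = \frac{a_{\text{epi}} - a_{\text{sub}}}{a_{\text{sub}}}, \qquad f_{\mathrm{Si}_{1-x}\mathrm{Ge}_x/\mathrm{Si}} \approx \frac{a_{\mathrm{Ge}} - a_{\mathrm{Si}}}{a_{\mathrm{Si}}}\,x \approx 0.042\,x, \]

so a Si0.8Ge0.2 layer on Si, for example, carries a mismatch of about 0.8% and is under biaxial compression, consistent with the arrangement of Figure 3.52b.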

3.6.3 Selective Epitaxial Growth

Compared with the blanket silicon epitaxy discussed above, selective epitaxial growth (SEG) has not been widely used. Silicon bipolar transistors have used selective epi other than SiGe for improved performance: a 1988 IBM paper [133] reports a selective-epitaxy-base transistor, which forms the base with boron-doped selective epi rather than boron implantation. The development of the SiGe HBT made use of SEG in bipolar and BiCMOS fabrication in the 1990s. SEG has also been used to make source and drain (S/D) elevations for CMOS transistors to address silicon real-estate limitations; the process becomes indispensable with SOI processing. More recently, SEG has been employed for growing recessed S/D with SiGe to induce uniaxial compressive strain in the PMOS channel. SEG of silicon has been studied from the early days of silicon technology [134–136]. In SEG of silicon, epitaxial deposition takes place only in "windows" of bare silicon, i.e., the single-crystal substrate of a patterned wafer. Nucleation is suppressed elsewhere, in regions covered by silicon dioxide (or silicon nitride). The major difference of the SEG process from conventional epitaxial growth is the addition of extra HCl to the SiH4–H2 or SiH2Cl2–H2 chemistries, although the by-product of the



FIGURE 3.52 Lattice model of strained pseudomorphic layers grown with (a) lattice A < lattice B, and (b) lattice C > lattice D. Layer A is in biaxial tensile strain and layer C is in biaxial compressive strain. Layers A and C are thinner than their respective "critical thickness."

chlorine-containing source gas also provides selectivity. Figure 3.54 illustrates SEG on a patterned wafer. An SEG process at reduced pressure and high temperature, along with a careful pre-deposition clean (such as a high-temperature H2 bake), can result in a grown silicon layer with minimal crystalline defects. When performing SEG on (100) substrates, {311} facet growth is commonly found at the edges of the pattern [136]. When the edges of the pattern are aligned along the ⟨100⟩ directions, the {311} facets are confined to the corners, and their size and effect are minimized.

3.6.3.1 SEG Growth of SiGe for HBT

The epitaxial growth of SiGe is the heart of the heterojunction NPN transistor technology. The modern SiGe HBT was developed in the 1990s. A SiGe HBT is similar to a conventional Si bipolar NPN transistor


FIGURE 3.53 Lattice model of heteroepitaxy after strain relaxation accompanied by generation of misfit dislocations. The relaxed epi layer is thicker than the critical thickness.



FIGURE 3.54 Schematic illustration of a typical selective epitaxial growth (SEG) process for device isolation: (a) oxide deposition; (b) window formation; (c) epi growth; and (d) n-well drive-in. (From Borland, J. O. and Drowley, C. I., Solid State Technol., 28, August, 1985, 141. Reproduced with permission from Solid State Technology.)

except for the base. SiGe, a material with a narrower bandgap than Si, is used as the base material. The Ge composition is typically graded across the base, with the Ge content decreasing from the collector side to the emitter side (peak concentration ranging from 10 to 25 atom% Ge, depending on the application). This creates an accelerating electric field (a sloped conduction band) for minority carriers moving across the base, as schematically shown in Figure 3.55. A direct result of the Ge grading in the base is higher speed, and thus a higher operating frequency, fT ~ 100 GHz. Typically, a thin SiGe layer a few tens of nanometers thick is deposited selectively on the substrate (collector) at a moderate temperature (650°C–750°C) and a reduced pressure of a few torr using SiH2Cl2, GeH4, B2H6, and HCl. The boron doping level of the layer is

[Figure 3.55: schematic comparing the energy-band diagram (Energy, eV) of a conventional BJT and a graded-base SiGe HBT, with the Ge/B content profile across the base (~12% Ge, ~10¹⁹ cm⁻³ B).]
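The accelerating field produced by the Ge grading can be estimated as the grading-induced bandgap narrowing divided by the base width; a sketch with illustrative numbers (the ~75 meV of narrowing for ~10 atom% Ge and the 50-nm base width are typical literature values, not from this chapter):

\[ \mathcal{E} \approx \frac{\Delta E_{g}(\text{grade})}{q\,W_{B}} \approx \frac{0.075\ \text{eV}}{q \times 50\ \text{nm}} \approx 15\ \text{kV/cm}, \]

large enough to make electron transport across the base drift-aided rather than purely diffusive.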

[Figure 4.9 panel annotations: implantation into the Si substrate at 500°C, anneal at Ta > 1300°C, critical dose φc ≈ 1.4×10¹⁸ cm⁻² at 200 keV.]

FIGURE 4.9 Evolution of a separation by implantation of oxygen (SIMOX) structure during oxygen implantation. (Adapted from Hemment, P. L. F., Reeson, K. J., Kilner, J.A., Chater, R. J., Marsh, C., Booker, G. R., Davis, J. R., and Celler, G. K., Nucl. Instr. Meth. Phys. Res., B21, 129, 1987.)


4.3.5.3 Implant Optimization

The temperature Ti of the Si substrates during implantation also plays a critical role in achieving good microstructure in SOI wafers. In early SIMOX wafers, 10¹⁰ cm⁻² threading dislocations were typical. A reduction to 10⁶ cm⁻² threading dislocations was achieved by increasing the implantation temperature to approximately 600°C [68]. To further reduce the defects, the dedicated oxygen implanters had to be improved and optimized for very high doses. To avoid sputtering of metals from the walls of the implantation chamber onto the wafers, the chamber interior was coated with silicon.

4.3.5.4 Low-Dose SIMOX

SIMOX technology has greatly evolved and improved since the first device demonstrations by Izumi et al. [61]. The efforts to reduce defect density and cut the cost of processing were helped by the device-scaling trend that required moving to thinner films. A thinner Si film means a lower implant energy, and a thinner BOX translates into a lower implant dose. A lower oxygen dose, in particular, helps in improving the crystalline quality of the wafers. Cutting the oxygen dose is not entirely trivial; the mechanisms that allow formation of a planar and uniform BOX are not understood well enough to make theoretical predictions of optimum implantation conditions. Experimentally, a few "sweet spots" were identified. For example, a high-quality planar BOX was obtained at a much lower dose of 4×10¹⁷ cm⁻² by modifying the implant and anneal conditions [69]. This low-dose BOX is about 100-nm thick, suitable for sub-0.25-μm CMOS devices. The feasibility of even thinner BOX films has been demonstrated with O+ implantation of just 2×10¹⁷ cm⁻² at 65 keV, followed by a 4-h anneal at 1350°C, which resulted in a 56-nm thick oxide [70].
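The quoted doses can be checked against the oxygen required for a stoichiometric buried oxide; a sketch using the standard SiO2 number density of ≈2.2×10²² molecules/cm³ (i.e., ≈4.4×10²² oxygen atoms/cm³; these values are not given in the chapter):

\[ t_{\mathrm{BOX}} \approx \frac{\Phi}{2\,n_{\mathrm{SiO_2}}}: \qquad \Phi = 4\times 10^{17}\ \mathrm{cm^{-2}} \Rightarrow t \approx 90\ \text{nm}, \qquad \Phi = 2\times 10^{17}\ \mathrm{cm^{-2}} \Rightarrow t \approx 45\ \text{nm}, \]

in reasonable agreement with the ~100-nm and 56-nm oxides cited above; the extra thickness in the latter case is plausibly supplied by internal oxidation during the long anneal.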

4.3.5.5 ITOX Process

The success of SIMOX with a thin BOX was greatly facilitated by the discovery and development of a procedure known as internal oxidation (ITOX). A synthesized BOX is prone to have some pinholes or Si pipes that electrically short the Si film to the substrate. However, when an SOI wafer is oxidized at approximately 1350°C, a small fraction of the oxygen that diffuses through the surface oxide also diffuses through the silicon film (instead of oxidizing the Si surface) and reacts with it at the Si/BOX interface [71]. This internal oxidation improves the stoichiometry of the BOX, giving it properties much closer to those of thermally grown SiO2, reduces or closes Si pipes, and, as shown schematically in Figure 4.10, slightly increases the overall thickness of the BOX. Commercially available SIMOX wafers, known by trade names like MLD (modified low dose) and Advantox, fine-tune the processing parameters and add some further improvements. For example, the SIMOX structure was modified by adding a low-dose (10¹⁵ cm⁻²) room-temperature implant after

[Figure 4.10: Si film over a 0.2-μm buried oxide (BOX) before oxidation; after oxidation, a superficial thermal oxide forms at the surface and the BOX thickens (BOX + ITOX).]
FIGURE 4.10 Schematic representation of the internal oxidation (ITOX) process.


the "standard" hot implant [72]. This additional step leads to a more planar BOX layer with fewer Si inclusions, since it amorphizes the Si just above the peak of the oxygen concentration, Rp.

4.3.5.6 Patterned Buried Oxide

SIMOX SOI substrates are normally formed by blanket oxygen implantation. But there are some applications, such as SOC and microprocessors with embedded DRAMs, where it might be useful to have BOX only in some parts of the wafer, with other areas remaining as conventional bulk Si. Attempts to implant oxygen locally through a thick patterned masking layer started already in the 1980s [73,74]. In these experiments, after the standard 2×10¹⁸ cm⁻² oxygen dose was implanted, the boundary between the SOI and bulk regions was extremely defective because of high stresses at the oxide edges. After low-dose SIMOX was developed, the density of defects at the boundary was reduced considerably, but it was still high enough to potentially cause problems with device yield and reliability. Cohen and Sadana further reduced the number of defects in the transition region between SOI and bulk [75]. They used a blanket low-dose oxygen implantation, followed by a touch-up amorphizing O+ implant at room temperature and then by patterned internal oxidation (the ITOX process). This led to a thicker BOX in regions exposed to oxidation (but also much thinner Si in the same areas). In the extreme case of a subthreshold O+ implant dose of 1.5×10¹⁷ cm⁻², the BOX was discontinuous under the oxidation mask. Ogura [76] used a different approach in which oxygen precipitated into BOX layers only in regions that were previously damaged by He+ implantation. Researchers from Shanghai have recently reported interesting results on forming a low-dose patterned BOX by carefully optimizing the dose and energy conditions, followed by a 1300°C anneal in Ar with 3% O2 [77]. Good results were obtained at 3.5×10¹⁷ cm⁻² at 100 keV and at 2×10¹⁷ cm⁻² at 50 keV.

4.3.6 Other Fabrication Methods
4.3.6.1 Overview—SOI Menagerie

Over the years, there have been many attempts to utilize localized templates and then extend epitaxial growth from these templates to other regions, but these approaches, although scientifically interesting, have not led to many practical solutions. The situation is somewhat different when the insulator is not amorphous but monocrystalline. Heteroepitaxial growth of Si on bulk crystalline sapphire became a commercial technology known as SOS, but was found to be of limited utility except for rad-hard CMOS. Several other approaches have been studied, which significantly enlarged our body of knowledge about the microstructure and morphology of thin silicon films. These methods, including those already discussed, are summarized in Table 4.1, reprinted from Celler and Cristoloveanu [3]; an interested reader can obtain the details by following the references listed in the table. This great variety of approaches did not lead to commercial applications. Many of these techniques require intimate integration of SOI fabrication with device making; in other words, the SOI is patterned and the pattern is device- or circuit-dependent. This means that the SOI fabrication needs to be completed by the same production line that makes devices. The complexity of doing both discouraged most semiconductor chipmakers. There were two interesting exceptions in the past, DI and SOS. And to complete the picture, there is also an interesting recent approach, silicon-on-nothing (SON), in which the isolation layer is formed as a part of the device fabrication process. All three methods are briefly explained below.

4.3.6.2 DI Technology

This technology was developed at TI by Bean and Runyan for bipolar and high-voltage applications [78]. The approach required one starting bulk Si wafer, patterning and deep etching, a thick oxide, and deposition of very thick polysilicon at high temperatures. The technology provided complete dielectric isolation between large regions of single-crystalline Si. It was suitable for some large devices and high voltages. The detrimental aspect was the presence of a


polycrystalline handle wafer: dimensional stability (and with it wafer flatness, bow, and warp) was inferior to that of conventional Si wafers. The LEGO process was an attempt to preserve the unique structure of DI wafers while providing a single-crystalline Si handle wafer [84]. A related development to produce power devices in very thick films is ongoing [90].

4.3.6.3 SOS

Heteroepitaxy played an important role in the early days of SOI. SOS [79] is about as old as DI, and the main motivation for its development was radiation hardness. Since the crystalline quality of as-grown epitaxial Si on Al2O3 is rather poor, post-epi steps are usually added. Commonly they include a Si implant that amorphizes the most defective Si near the alumina interface. In the subsequent solid-phase epitaxial regrowth, the template is the near-surface Si, which has better crystalline quality.

4.3.6.4 SON

The silicon-on-nothing process consists of growing, by selective epitaxy, a sacrificial SiGe layer in STI-predefined regions of a bulk-Si wafer. A silicon film of suitable thickness (20 nm or less) is then epitaxially grown. Selective etching of the SiGe layer leaves an empty space (air gap) underneath the film. The suspended Si membrane can be used to fabricate gate-all-around (GAA) transistors. Alternatively, the air gap can be filled with a dielectric in order to form a localized SOI structure, integrated in the bulk-Si wafer [89,91].

4.4 Advanced Wafer Engineering

The International Technology Roadmap for Semiconductors points out that the performance of Si devices should improve by approximately 17% per year. For many years, such improvements were achieved primarily by shrinking the device dimensions, i.e., device scaling and advances in lithography provided the needed performance. More recently, it has been realized that just shrinking the device dimensions is no longer sufficient. New or improved materials and novel configurations of existing materials have become essential. "New" materials and modifications of existing materials are now pervasive in IC manufacturing. They include Cu interconnects as a replacement for Al, low-k dielectrics to separate the layers of interconnects, and the development of high-k dielectrics to replace the traditional SiO2 gate dielectric. In addition to these device-scale improvements, modifications to the starting wafers can also greatly enhance the performance of the circuits that are built on them [92]. Below, we describe some of these wafer-scale modifications. They range from extremely simple solutions, such as rotation of the wafer crystalline plane, to much more complex ones in which charge carrier mobilities are enhanced by modifying basic properties of silicon, i.e., its lattice parameters. In almost all of these wafer-scale solutions, the ability to bond wafers, and in particular the transfer of layers from one substrate to another that is enabled by Smart Cut, plays a key role.
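The 17% per year figure compounds quickly; as simple arithmetic (not in the original):

\[ (1.17)^{n} = 2 \;\Rightarrow\; n = \frac{\ln 2}{\ln 1.17} \approx 4.4\ \text{years}, \]

i.e., a doubling of performance roughly every four to five years.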

4.4.1 Crystal Orientations
4.4.1.1 A Rotation within the (100) Plane

Silicon-on-insulator wafers produced by layer transfer add a new degree of freedom to the fabrication process, namely the possibility of choosing different crystal orientations for the thin active layer and the handle wafer below it. Traditionally, Si wafers were made with the (100) crystalline orientation of the surface and a notch (or a flat in smaller-diameter wafers) that defined the ⟨110⟩ direction in the wafer plane. The (100) surface was chosen in the early days of MOS technology as it yielded the lowest density of interface states, Dit. With the advanced surface preparation techniques available today, this selection is less significant. The notch (flat) location defines the alignment of the edges of rectangular silicon die with the {110} preferred cleavage planes of Si. This alignment was essential when wafers were divided into die by


scribing and breaking, and it is still of some importance today when wafers are cut with a saw, since it helps to maintain smooth edges of the die. Currently, when every enhancement of charge mobility is important, the traditional configuration with transistor channels in the (100) plane and the current flow in the ⟨110⟩ direction is less than ideal. The specific shapes of the conduction and valence bands in silicon lead to different optimum configurations for the mobilities of electrons and holes [93]. For NMOS, electrons have the highest mobility in the conventional (100) plane, and the ⟨100⟩ and ⟨110⟩ directions of current flow are approximately equivalent [94,95]. But for PMOS, there is about a 16% mobility advantage when the channel in the (100) wafer is aligned with the ⟨100⟩ direction [94]. This can be easily implemented in SOI by rotating the thin film by 45° around the axis normal to the wafer plane, so that the ⟨110⟩ direction in the handle coincides with the ⟨100⟩ direction of the active device film (see Figure 4.11) [95].

4.4.1.2 Other Crystalline Planes

Because of the difference in the shape of the electronic bands for electrons and holes, the crystalline plane that provides the best performance for NMOS devices, namely the (100), is far from optimum for PMOS. PMOS performance can be greatly enhanced simply by choosing wafers with the (110) surface plane. Hole mobility is then almost doubled in comparison with the (100) configuration, and this holds even for extremely thin (110)-oriented films, of the order of 6 nm [96]. Unfortunately, this comes at a significant penalty to NMOSFET performance, as shown in the data of Figure 4.12 [97]. The question then arises of the best way to independently optimize the crystalline orientations for NMOS and PMOS. One option relies on using conventional wafers and varying the device architecture [98]; another involves wafers that are engineered to provide access to both crystalline orientations [97,99]. Nonplanar transistors such as FinFETs are described in Section 4.10 of this chapter. It suffices to say here that in such structures the transistor channel plane is perpendicular to the wafer plane. Doris et al. [98] have proposed utilizing a mix of planar transistors for NMOS and FinFETs for PMOS in order to get both the (100) plane for the former and the (110) plane for the latter, assuming that the current flow direction in the PMOS is parallel to ⟨110⟩. The main problem with this scheme is that the circuit architecture becomes complex, and integration of planar and nonplanar devices on the same wafer is not trivial. The second option for simultaneous use of two different crystalline orientations moves the burden from device fabrication to the wafer level. Hybrid orientation technology (HOT) is based on forming SOI wafers in which the crystalline orientation of the Si film is different from that of the handle wafer [97,99]. It should be clear that only technologies based on wafer bonding make such structures possible. HOT SOI wafers are typically made by Smart Cut, so that the Si film is (100) and the handle is (110), or vice versa. In the first case, NMOS devices are made in SOI, while regions that will be dedicated to PMOS

FIGURE 4.11 [Schematic: the thin (100) device film is rotated 45° on the handle wafer so that its ⟨100⟩ direction lies along the handle's ⟨110⟩ direction.]
[Figure 21.28 data points: contact ≈110 nm with a 4:1 aspect ratio at the 22-nm node, plotted across the 90-, 65-, 45-, 32-, and 22-nm nodes.]

FIGURE 21.28 Illustration of the International Technology Roadmap for Semiconductors etched-contact feature-scaling roadmap, with the impact on the feature aspect ratio compared with that of the gate electrode. The dashed line represents the contact aspect ratio referenced to the gate electrode height.


process tools or new technologies for patterning, CVD, or etch. So maintaining the same, or preferably a reduced, feature aspect ratio can significantly reduce the risk and complexity of introducing the next technology node.

21.4.1.4 Imaging Resolution, Control, and Line Edge Roughness

Assuming that immersion lithography in combination with alternative patterning techniques will be able to achieve the resolution requirements, then from a more practical standpoint, LER, as defined in the final feature, may be the most critical technology challenge at the 45–32 nm technology nodes.110 "Roughness" did not improve from the 90- to the 65-nm node, and in most cases it has worsened as device features have been scaled, so its impact continues to become more critical. While a known phenomenon at 90 and 65 nm, LER effects on device performance and reliability will become critical at the 45–32 nm technology nodes. A number of investigators have modeled and verified the electrical effects of LER for various devices and have shown that it is not a significant effect compared with other process variations until 45 nm and below.110–112 For example, a line CD variation of 8 nm (LER of 4 nm on each side) at 100 nm has very little effect on transistor performance, but at 30–20 nm it becomes the dominant variable. As for the current and previous technology nodes, discussed also in Section 21.3, phenomena like templating make sequences of etches, and their interaction with photoresists in process integration, as important as single etch processes in the role roughness plays in CD control. With respect to aspect ratio scaling, studies show that LER can be a function of photoresist thickness, where thinner photoresist results in less LER, another case for the use of thin photoresist.

21.4.2 Device Roadmap Etch Challenges for 45–32 nm Technology Nodes

21.4.2.1 New Materials

Even though transistor technology is discussed in other chapters of the handbook, it is worth a short review of some critical issues that may affect etch process development. Another approach to the device performance and critical scaling requirements is the introduction of novel device structures. These structures may compensate for material and process limitations, such as patterning, while meeting the device requirements. Novel transistor structures may well dominate the scaling challenge due to the cost and timing of next-generation lithography. As with alternative masking schemes, alternative device structures may provide a more timely and lower-risk approach to meeting the device performance scaling challenges compared with the introduction of new materials into the classical planar MOSFET. Structures such as the FinFET transistor can be fabricated with existing materials: silicon and oxynitride dielectrics. Refractory metals and metal oxide dielectrics are more difficult to process, are expensive, and pose cross-contamination risks. At the 45-nm technology node, the device low-power performance requirement will exceed the likely capability of planar device scaling with thermal oxide or nitride, where the standby gate leakage may actually exceed the device on-state current (Ion). Reliability and short-channel effects may also limit the operating voltage requirements. To meet these requirements, the use of high-k gate dielectric materials will be required. These offer new challenges to etch, since they will most likely be refractory metal oxides or silicates. Since pinning effects significantly limit the polysilicon gate material, the use of a metallic gate may also be required. There may well be a metallic gate electrode etch that has to stop on a metallic oxide, and both layers may be very difficult to etch.113,114 High-k material etch process chemistries are described by Sha and Chang.115

21.4.2.2 Novel Transistor Structures

There are a number of alternative or novel transistors being investigated, including a number starting with the classical planar structure. One example is the fully silicided gate electrode (FUSI), where the gate is formed in polysilicon and then a metal is deposited over the top of the gate. The gate is then completely converted into silicide, thus eliminating the need to etch a refractory metal. Improved device performance can also be achieved by use of an ultra-thin silicon active region operating in the fully depleted MOSFET



FIGURE 21.29 Illustration of the basic silicon-on-nothing process: (a) selective SiGe/Si epitaxy and integration of the gate stack; (b) source/drain (S/D) area etch and selective removal of the SiGe layer; (c) tunnel filling with dielectric layers; and (d) selective epitaxy of the S/D areas to contact the Si channel.

mode, such as can be fabricated in thin silicon-on-insulator (SOI). This approach requires only one mid-bandgap metal gate material. A similar novel structure can also be built using a sacrificial layer that is replaced with a thinner dielectric isolation layer. The silicon-on-nothing (SON) transistor structure offers the possible advantage of using epitaxy on a bulk substrate, giving a more planar silicon integration, so most of the etch processes are consistent with planar CMOS.116 However, to form the dielectric isolation under the active region, a sacrificial layer must be removed and a dielectric deposited under the active region, as illustrated in Figure 21.29. This is accomplished by the use of a highly selective isotropic etch of a sacrificial layer of epitaxial SiGe under the thin active silicon layer. Since this is a lateral etch, it must be very isotropic and have very high selectivity to the very thin silicon layer. As similar novel structures are devised, new requirements may call for this type of isotropic removal of thin sacrificial layers.

21.4.2.3 Ultra-Thin Silicon Structures

One planar device structure that can address a number of the electrical scaling issues is the fully depleted transistor on an ultra-thin silicon layer on insulator (FDSOI).99,100 Other than the aggressive gate CD scaling, the major etch issue with this type of structure is the use of very thin active SOI. The active silicon layer under the gate structure can be on the order of 100 Å, making the etch selectivity even more critical. Since a selective epitaxy layer must be deposited to reduce the source/drain resistance, the surface must be left with almost no damage or residue, as there is very little material available from which to remove any damage or residue. Multi-gate devices have been shown to offer tremendous device improvement without depending on extreme effective oxide thickness scaling. But the fabrication of such double-gate planar structures is difficult, given the close gate alignment, contacting of the bottom gate, multiple-layer processing, etc. Several self-aligned gate structures, such as the gate-all-around, reduce some of the alignment issues but involve very complex process sequences. Multi-gate devices (Figure 21.30)117 are also interesting, since the higher gate-to-channel ratio relaxes the requirement on the oxide thickness, and the narrow channel aids speed performance. The performance of multi-gate CMOS transistors has been demonstrated by a number of investigators.118 However, most of the original schemes, when applied to traditional planar device structures, are very cumbersome to fabricate, requiring, for example, a buried gate electrode. A device that may be simpler to fabricate is a gate-all-around version of the SON transistor, since it is a planar structure and the deposition of the gate electrode can occur after the sacrificial layer is removed under the active region. This also forms a self-aligned gate electrode. To address these device issues, the vertical transistor, or FinFET, was developed, in which the active region is a very thin vertical element.100,119–121 Even though this structure involves high-aspect-ratio features and topography, it utilizes the standard-thickness SOI substrates and oxynitride gate dielectric used for 65-nm high-performance devices, and it has been shown to be extendable to possibly the 22-nm technology node.
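The selectivity implication of a ~100 Å active layer can be made concrete with a sketch; the 20-nm over-etch equivalent and the 1-nm allowable silicon loss below are assumed, illustrative numbers, not requirements stated in the text:

\[ S_{\text{required}} \gtrsim \frac{\text{over-etch (material equivalent)}}{\text{allowable Si loss}} \approx \frac{20\ \text{nm}}{1\ \text{nm}} = 20{:}1, \]

and the budget tightens further if damage removal or a pre-epitaxy clean consumes additional silicon.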


[Figure 21.30: FinFET schematic labeling the source, drain, gate, gate length LGate, fin height HFin, and fin thickness TFin.]

FIGURE 21.30 Illustration of the FinFET transistor. (Reprinted from Nowak, E. J., Aller, I., Ludwig, T., Kim, K., Joshi, R. V., Chuang, C.-T., Bernstein, K., and Puri, R., IEEE Circuits Dev., 20, 20, 2004. With permission. Copyright 2004 IEEE.)

21.4.3 Front End of Line Etch Processes

21.4.3.1 Gate Etch

The application of metallic gate electrode materials with high-k metal oxide or silicate dielectrics is very likely to be the dominant technology for classical planar complementary metal-oxide-semiconductor field-effect transistor (CMOSFET) device structures going into the 45–32 nm technology nodes (ITRS roadmap). These materials are needed to meet the device speed and dielectric leakage requirements. Most of the refractory metal, silicide, and metal oxide materials identified as possible candidates are more difficult to etch than polysilicon, since they typically have less volatile etch by-products, and hard masks are less effective at these dimensions. The use of selective epitaxy in active regions may further tighten the requirements on etch residue and mechanical damage.

21.4.3.2 Dual Metal Gate

Maximization of CMOS performance for the classical planar device structures will probably also require dual metal electrode materials with matched work functions for both n- and p-type devices (Figure 21.31). Not only does this mean additional etch processes for quite different materials but, depending on the integration, both materials may have to be etched as a stacked layer. This also places additional selectivity requirements on the oppositely doped regions. To attempt to simplify the integration, one metal gate material is deposited on the high-k dielectric and then removed where the other type of devices will be fabricated. The next gate electrode is then deposited, usually followed by polysilicon deposition. This means that the first etch must not etch or damage the high-k dielectric, since such damage will have detrimental effects on the devices made over this dielectric. The gate patterning must etch both metal layers on some of the die and only the second metal over the remaining part (see Figure 21.31). Again, this places significant selectivity challenges on the gate etch, since the area with one metal layer will be exposed to additional etch of the high-k dielectric.

21.4.3.3 High-k Dielectric Etch

Following the gate etch, and usually the spacer formation, the high-k dielectric must be removed. Unlike the classical silicon oxides, these materials do not readily etch in HF solutions.


[Figure 21.31 labels: resist/oxide/TiN/HfO2 stack; TiN removal (PMOS, NMOS); oxide (PMOS); TiN (NMOS); poly; HF clean; TaSiN; HfO2 (light contrast); oxide (NMOS); TiN (PMOS); scale bar 1 μm.]

FIGURE 21.31 Schematic of a dual-metal gate process flow with corresponding scanning electron micrographs (SEM) images of a static random access memory (SRAM) bit cell array showing the different layers for the N-channel metal-oxide semiconductor (transistor) and P-channel metal-oxide semiconductor (transistor) regions. (From Samavedam, S., La, L., Smith, J., Dakshina-Murphy, S., Luckoski, E., Schaeffer, J., Zalava, M. et al., Symposium on VLSI Technology, 24, 2002.)

The wet etch rate often depends on the amount of damage incurred during the gate etch, or on other process conditions such as thermal treatments. Similar to the metal gate materials, these films are also difficult to dry etch: they can have some morphology, and they contain a metallic component such as hafnium or zirconium that is less volatile, so the selectivity to the silicon substrate is typically not as good. Unfortunately, due to the scaling issues and the fabrication of transistors on ultra-thin silicon regions, the required selectivity is significantly higher. Considerable effort is underway to identify possible dry etch solutions but, since no totally satisfactory solution has been developed, a different approach such as atomic layer etching (ALE) may be required. However, it is difficult to focus on all the possible etch schemes since there is not a clear technology direction that will be taken by the industry. But there are some clear differences in the challenges posed by the likely alternative options, and these should be considered in the etch technology development and ultimate capabilities, or these will define the next etch technology. For example, the planar ultra-thin-body transistor structure, or SON, requires the removal of a very thin sacrificial layer under the very thin active silicon region116 (Figure 21.29). This etch by definition must be highly isotropic. Wet chemical etching has been used, but it is very limited, and the mechanical stresses on these extremely thin structures may be unacceptable except for investigation and feasibility tests. A chemical downstream etch has been shown to give a highly isotropic etch using fluorine chemistry such as CF4.122 However, this is a timed etch, since it does not have an effective endpoint capability, and there is a loss of selectivity when the SiGe is completely removed (endpoint). An alternative process uses an ICP chamber operated in a remote low-power plasma mode with no bias power.123 This has been shown to give the same or higher selectivity, with improved uniformity and the capability to monitor the plasma using optical emission. Probably the leading candidate for the next-generation novel structure is the vertical dual-gate transistor on ultra-thin silicon, or the FinFET structure (Figure 21.30). There are a number of different multi-gate, dual-gate, or tri-gate variations that have been proposed, and each offers slightly different etch challenges. The most basic structure will probably be a simple vertical fin formed in an SOI substrate with an initially classical doped polysilicon gate electrode on a silicon oxynitride dielectric. The silicon etch uses typical gate chemistries, except that it involves a very high aspect ratio, 10:1 and greater, depending on the device performance requirements. This carries many of the concerns expressed earlier about the difficulties of processing high aspect ratio features.
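Because the downstream etch is timed rather than endpointed, the process window can be bounded with simple geometry (dimensions assumed for illustration only). To clear a sacrificial layer from under a gate of width $W$, attacked laterally from both sides at rate $R$ with an overetch fraction $f_{OE}$,

$$ t \approx \frac{W/2}{R}\,(1 + f_{OE}), $$

so for $W = 50$ nm, $R = 10$ nm/min, and $f_{OE} = 0.3$, $t \approx 3.3$ min, during which all exposed materials see unselective attack.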


21.4.3.4 Spacer Etch

Another process that has been impacted is the gate spacer etch. Although very similar to previous technology nodes, this process is also affected by the feature scaling. This means that not only will the spacer be narrower with a higher aspect ratio, but the uniformity requirements will also increase in order to reduce the variation in device performance. Similar to the gate etch, it is more difficult to etch with very high selectivity so as to stop on the high-k gate dielectric, compared with stopping on silicon oxynitride films. Also, novel structures such as FUSI or FinFET may involve multi-type film stacks such as metal gates, or even more extreme feature aspect ratios. Of course, there is still the inevitable device scaling, which puts more requirements on the hard mask and photoresist trim process capabilities. As mentioned previously, a key to improved patterning process capability will be the use of multi-layer patterning, especially with an organic hard mask layer. This will probably be more compatible with the new gate materials and the lower thermal budget that may be required. Also, the multiple layers allow the opportunity to optimize feature sizing methods such as trimming.

21.4.3.5 STI or Isolation

For those devices that may still use bulk silicon, there is even more CD control required, and probably the same LER requirements as for the transistor gate. Improved transistor isolation will mean even higher aspect ratios and tighter profile control, since low-k dielectrics may be needed to improve cross-talk, similar to the metal interconnects. These typically have the same feature gap-fill issues as the current TEOS CVD films, but may be more difficult to implement for the planarization and oxide recess through the following etch and clean steps. Low-k dielectrics, even fluorinated TEOS (FTEOS), introduce new concerns about other chemical effects or "contamination" of the active region. Recently, improved device performance has been seen with the use of a stressor layer in the bottom of the STI regions. This again adds complexity and control requirements for the etch and module integration. There will likely be a number of silicon etch processes required for the various electrical isolation steps, such as for the FDSOI transistor. The trend toward more SOI structures, even ultra-thin silicon structures, actually offers fewer challenges for etch, since the etch needs only to pattern very thin silicon layers over the BOX. In fact, this is a significant factor in the selection of different device integration strategies.

21.4.3.6 Contact Feature Etch Challenges

Contact is always a very critical closed feature to fabricate, since it has the minimum sizing to allow the highest possible circuit density. It is also usually a very high aspect ratio feature since, as mentioned before, the gate electrode stack and dielectric capping layers are not scaling with the technology. Similar to the current technology nodes, the requirements for contact and interconnect become more difficult with scaling, but contact is more difficult to image with extremely low defectivity or missing patterns. A relatively thick photoresist is needed for the high aspect ratio (AR) oxide etch process, so a high photoresist exposure dose is required and this, along with high pattern density image proximity effects, results in a small process window. These factors have prompted the increased use of silicon-containing bilayer or multi-layer resist strategies at the 65 nm node, and these will probably become required at 45 nm and below. The roughness (similar to LER) that occurs during patterning is also a significant problem at contact, since this marginality in the process is indicative of a highly defective process and can also impact contact resistance and reliability. So the lithography challenge is in some ways (except for CD uniformity) more difficult than gate, since there is no post-develop "trick" that has been shown effective in reducing the printed image sizing. Some techniques, such as material "repair" or treatment, have been shown to improve the resist selectivity after develop, but these do little to reduce the sizing. There have been a number of attempts to reduce the post-develop contact sizing by the addition of a polymer (resist) that reacts with the contact sidewall, similar to the resist poisoning phenomenon. After a second develop step, a layer remains on the sidewalls of the original photoresist that effectively reduces the contact pattern CDs.124
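For a sense of scale (stack height and CD assumed for illustration, not node-specific values), the contact aspect ratio follows directly from the non-scaling stack height:

$$ AR = \frac{d_{\text{stack}}}{CD} \approx \frac{360\ \text{nm}}{60\ \text{nm}} = 6{:}1, $$

and since $d_{\text{stack}}$ stays roughly fixed while $CD$ shrinks, the ratio worsens at each node.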


21.4.4 Back End of Line Etch Processing and Ultra Low-k Dielectrics

While device scaling has meant ever-increasing device speeds, the BEOL was not a significant limitation to circuit performance until recently. Significant performance effects have been seen in 90 and 65 nm high-performance circuits due to increasing BEOL line resistivity and line-to-line capacitive coupling. At 45 nm, some have suggested that this may actually be a more significant limitation than the transistor performance. The trend in the industry to compensate for this effect has been to introduce low-k dielectric materials. But this is proving to be more difficult than expected, and the copper line resistivity has been shown to be a critical issue as well. The copper line resistivity is increasing more than predicted for a given scaling factor. There are several effects, such as barrier metal thickness, as well as significant grain boundary and surface effects at these dimensions. So the BEOL metallization may become the new limitation to continuing on the Moore's Law scaling roadmap.125 The scaling requirement for 45 nm demands smaller CDs but, for lower copper line resistivity, this means deeper metal trench structures, or higher aspect ratios. But as the line depth increases, so do the line capacitance losses, which requires the use of even lower effective-k dielectric structures or integrations, and the need to maintain this during processing. It also could result in the use of a thicker barrier to maintain step coverage, or mandate the use of a different process to improve step coverage, such as atomic layer deposition (ALD). As expected, etch can have a direct impact on these parameters.
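The trade-off can be seen in a first-order parallel-line model (a textbook approximation, not taken from Ref. 125). For a line of length $L$, width $W$, height $H$, spacing $S$, metal resistivity $\rho$, and dielectric constant $k$:

$$ R = \frac{\rho L}{W H}, \qquad C \approx \frac{k\,\varepsilon_0\, L H}{S}, \qquad RC \approx \frac{\rho\, k\, \varepsilon_0\, L^2}{W S}. $$

Making trenches deeper (larger $H$) lowers $R$ but raises the line-to-line $C$ by the same factor, which is why the net RC gain must come from lower $\rho$ or lower $k$.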

21.4.4.1 Challenge of Etching Ultra Low-k Dielectrics

The porous or porogen-containing ULK materials under investigation for the 45 nm BEOL generally etch much faster than previous inter-layer dielectrics such as TEOS or carbon-doped oxide (SiCOH). However, there are a number of issues with these materials that impact the etch. For example, the dielectric properties of these films are very sensitive to subsequent processing such as CMP, cleaning, and photoresist removal. A denser oxide cap layer is therefore used to protect the bulk film. But as the structures are etched, the exposed sidewalls are subject to damage. To prevent this sidewall damage, various modifications have been made to the processing, such as to the ashing and etching chemistries. Another approach is to minimize the exposure of the low-k material. A dual hard mask integration approach helps, since the trench pattern is etched into the top hard mask layer and the resist can be stripped, usually with a standard oxygen process, without exposing the low-k to the plasma. In this trench-first sequence, the via pattern is then applied and etched into the second hard mask layer and through the low-k dielectric, stopping on the etch stop/diffusion barrier at the bottom. The via-layer resist is removed while only the less critical via sidewalls are exposed. The trench features can now be etched through the top hard mask layer, and the etch stop layer (ESL) opened to the copper. This improves feature alignment and reduces the exposure of the metal sidewalls to oxidizing chemistries that can significantly impact the dielectric properties. In addition, considerable investigation has been done into alternative photoresist removal processing and chemistries.126,127 Most of these rely on reducing chemistries such as NH3 or N2/H2, or on less oxidizing mixtures such as CO or CO2/O2. Photoresist removal can also be done in situ, or immediately following the dielectric etch in the same chamber, but there can be significant chamber wall effects depending on the etch chemistries and reactor design. If reactive polymers are deposited on the sidewalls, these can be released during the photoresist removal step, increasing the etch or damage to the low-k material sidewalls.

21.4.4.2 New Porous ULK Dielectrics

To reduce the permittivity of the dielectric, the pore density and size must be increased, and this has an obvious impact on the mechanical properties. It also worsens the etch sidewall damage, since more of the layer becomes carbon-like or porous. To help alleviate some of the mechanical and process issues with porous dielectric films, a different low-k material is being developed in which the porosity is not introduced until after the etch and metal deposition process steps. This is referred to as a "solid first" approach.128
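The porosity/permittivity trade-off noted above can be illustrated with a simple linear mixing estimate (a crude volume-average assumption, not the characterization used in Ref. 128):

$$ k_{\text{eff}} \approx (1 - p)\,k_{\text{matrix}} + p\,k_{\text{air}}, $$

so a SiCOH-like matrix with $k_{\text{matrix}} \approx 3.0$ requires a porosity of roughly $p \approx 0.3$ to reach $k_{\text{eff}} \approx 2.4$, and the mechanical and etch-damage penalties grow with $p$.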


The as-deposited film contains clusters of a polymeric material that decompose at a lower temperature than the bulk film. So, after CMP, metallization, and dielectric deposition processing, such as the cap layer or several metal layers, the film is heated above this temperature and the porogen decomposes, leaving voids in the matrix dielectric layer. This alleviates many of the mechanical issues, since the porogen-containing film is denser and has a higher modulus. Also, the lack of pores reduces the diffusion of reactant species that can remove carbon within the layer or damage the layer during etch and resist removal. The dielectric etch performance is also improved, since the film density results in better etch profile control. This approach has a number of processing advantages, but there are still many questions concerning reliability due to the high film stress that results from the significant shrinkage of the film during the porogen removal process.

21.4.4.3 Metal Line Resistivity

A significant line resistivity effect is due to the thickness of diffusion barriers such as tantalum, tantalum nitride, or TiN.129 Even though this is primarily a deposition process limitation, the sidewall roughness and, more importantly, the profile shape following etch can be significant to the ability of a given deposition process to provide continuous, uniform barrier coverage. As mentioned before, an oxide or cap layer is often used to protect the ULK material. With the high etch rates of the low-k dielectric and possible residual chamber effects, a significant undercut or notched profile can occur during etching and resist strip. This can be very problematic for deposition, even ALD. Again, LER and sidewall texture can contribute to part of the increasing line resistivity by requiring thicker barriers to achieve the required reliability. Porosity of the ULK dielectric also contributes to this issue, and some have even proposed a separate deposition to "seal" the pores to improve barrier coverage and limit metal diffusion. Lastly, the effects of surface scattering are only made worse by rough surfaces.
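A simple cross-section argument shows why barrier thickness matters so much (geometry and numbers assumed for illustration). If a barrier of thickness $t_b$ lines the sidewalls and bottom of a trench of width $W$ and height $H$, and carries negligible current, the effective line resistivity is approximately

$$ \rho_{\text{eff}} \approx \rho_{\text{Cu}}\,\frac{W H}{(W - 2 t_b)(H - t_b)}, $$

so for $W = 70$ nm, $H = 140$ nm, and $t_b = 10$ nm the line resistance is already about 1.5 times that of a full-copper line, before grain boundary and surface scattering effects are added.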

21.4.4.4 The Ultimate Low-k: Air

As the challenges increase in finding a material that meets the dielectric requirements and that can be processed while maintaining the required effective k value, many consider a structural change to be the best approach. This is similar to the high-performance transistor, where the integration may have to resort to a novel structure instead of different materials, such as the structure illustrated in Figure 21.30. As previously mentioned, the BEOL interconnect may offer the biggest challenges to meeting the technology performance requirements at 45 nm and below. Even lower permittivity dielectric materials will probably be available, but the integration of these into a manufacturable process seems to present insurmountable difficulties. For most of the proposed air gap integrations, there is actually a significant reduction in the challenges for the etch processing, since these usually reuse previous-generation materials such as TEOS, low-k dielectrics, or sacrificial organic layers (Figure 21.32). These have been extensively utilized in previous generations, and the etch processing is well characterized and understood. In some proposals, an organic layer, such as the porogen material, can be used as the metal line layer, so the line etch is basically similar to the dry develop process used for similar multilayer resist processing. For most air gap structures there remain several issues, besides the inherent reliability of these structures, including the etch and metallization process interactions of having unlanded vias over air spaces. Although this may seem to be a significant issue, in practice these effects may not actually be as critical. First, most of them can be mitigated by layout, sizing, and the use of via etch processing that provides taper or smaller bottom CDs. In most processing, the effects of unlanded via structures have been observed, and these effects are more significant in the porous low-k materials due to the higher etch rates. In misaligned test structures, deep, very small voids are etched along the sides of metal lines, but in almost all analyses these do not cause voiding in the subsequent metallization processing, nor have via failures been reported for such features. In actual circuit layouts, there are other more significant effects of such misalignment, such as resistance or shorting, that require sufficient alignment and overlay control.


[Figure 21.32 labels, panels (a)–(d): CoWP cap, non-conformal dielectric, air gap.]

FIGURE 21.32 Schematic representation of air gap formation using a non-conformal chemical vapor deposition (CVD) technique: (a) the metal layer is completed; (b) a self-aligned cap layer is deposited on the copper lines; (c) the dielectric is etched between the lines; (d) a non-conformal CVD process forms the air gap spaces.

At these levels of control, misalignment has not been a problem for full dielectric structures. But more data are needed for the air gap structure to determine the actual mechanisms and the possible impact on reliability and yield.

21.5 Nanotechnology—22 nm and Beyond

21.5.1 Device Roadmap Challenges

In the past several years, a number of investigators have predicted the end of the silicon era as we approach the threshold of nanotechnology. Even the definition of where nanotechnology begins is a topic in itself, but most agree that the 22 nm node is well within this regime. It is very risky to attempt to project the future, and that is even more the case for semiconductor technology development trends, especially more than one generation out. If the 45–32 nm nodes are the transitional phase, with the introduction of some new materials and structures, there is little disagreement that the 22 nm node will be at the nanotechnology level and maybe at the limit of silicon technology.94,110,130,132–134 Since every physical limit of the conventional technology will be pushed, if not exceeded, new structures and materials will be needed in almost every element to be able to build such structures, but it appears that silicon will still be the fundamental base for the technology. The devices will have to be scaled truly to the atomic level, as illustrated in Figure 21.33 and Figure 21.34, and so the process technology must also start to perform at this level. A number of investigators have attempted to forecast the most likely path and, if we assume a normal 15-year intellectual property (IP) development cycle,135 these projections should start to become a bit more focused on the most likely scenarios. These forecasts usually involve the transition from classical planar CMOS on a bulk silicon substrate to planar devices on an ultra-thin silicon active region such as FDSOI. This will probably quickly migrate to, or will be introduced with, high-k dielectrics and metal gates if the manufacturing issues are resolved in the next several years. The next performance boost will probably come from similar devices but with multi-gates.136 These steps seem to be consistent with the ITRS roadmap and could be introduced later in the 45 or 32 nm nodes.1 The possible alternative to this migration is the FinFET structure described earlier (Figure 21.33). It offers the advantages of both ultra-thin silicon and multi-gate performance. And because of the simpler technology, it can provide the needed performance much sooner than the next level of scaled planar technology will likely be available.


[Figure 21.33 panels, each showing source (S), drain (D), gate (G), Tsi, Lg, and buried oxide: Double-gate FinFET (Tsi = 2/3 Lg); Omega FinFET (Tsi = Lg); Nanowire FinFET (Tsi = 2 Lg).]

FIGURE 21.33 Evolution of FinFET device to nanowire structure. (Reprinted from Yang, F.-L., Lee, D.-H., Chen, H.-Y., Chang, C.-Y., Liu, S.-D., Huang, C.-C., Chung, T.-X. et al., IEEE Symposium on VLSI Technology, 196, 2004. With permission. Copyright 2004 IEEE.)

Nanowire transistors could then be the next evolutionary extension of the current FinFET structures, and are projected to provide further enhanced device characteristics, so that extreme scaling (beyond the lithographic limit) may not be needed137,139 (Figure 21.34). Not only would this be a step function in the technology roadmap, but most investigators now see this as the logical progression to continue device scaling, with a 45 nm FinFET that can be scaled to 22 nm nanowire transistors. The introduction of the carbon-nanotube field-effect transistor (CNFET) could also use the same basic structure, with the silicon nanowire active region replaced by a carbon nanotube, allowing yet further scaling to single-electron quantum dot structures134,140–142 (Figure 21.35).

[Figure 21.34 labels, panels (a) and (b): N+ poly-Si gate over buried oxide; scale bar 10 nm; Lg = 5 nm.]

FIGURE 21.34 Transmission electron microscopy cross-sections of nanowire structure. (Reprinted from Yang, F.-L., Lee, D.-H., Chen, H.-Y., Chang, C.-Y., Liu, S.-D., Huang, C.-C., Chung, T.-X. et al., IEEE Symposium on VLSI Technology, 196, 2004. With permission. Copyright 2004 IEEE.)


[Figure 21.35 labels: poly-Si gate, source, drain, substrate, quantum dot, buried oxide; scale bars 20 nm and 100 nm.]

FIGURE 21.35 Extension of FinFET like structure to quantum dot device structure. (Reprinted from Zhuang, L., Guo, L., and Chou, S. Y., Appl. Phys. Lett., 72, 1205, 1998. With permission. Copyright 1998, American Institute of Physics.)

While many of the FEOL roadmap limits seem to have been pushed out, the critical BEOL interconnect issue is just now being addressed: increasing copper line resistivity.143 Even at the 32 nm node, the increase in copper resistivity may actually limit circuit performance. There does not appear to be any obvious solution to the resistivity issue at 22 nm, since ultra-thin barriers, high aspect ratios, and design options will need to be pushed to their extremes just to allow 32 nm to meet the technology requirements. The only possible migration path is a further reduction of the line-to-line capacitance. As previously discussed, low dielectric permittivity can significantly improve the resistance-capacitance (RC) delay losses, and air gaps can approach k values close to 1.0.133 So, as the resistivity continues to increase without much opportunity to reduce it, improving the interconnect losses at 22 nm must take advantage of air gap structures. Besides the issues previously discussed for air gap structures, the drive to do anything that will reduce resistance will likely push the line thickness up, which will further strain the patterning requirements with increasing feature aspect ratios. Extending the previous materials discussion, there is in some ways even less uncertainty about the alternative materials for the 22 nm technology than for 45–32 nm because, for now, the scaling can only be accomplished with a FinFET-type structure for the basic CMOS transistor. The materials needed to meet the device requirements will probably be developed at 45–32 nm for the FDSOI and other planar devices. For example, the FinFET devices are capable of meeting the 45 nm targets with polysilicon and silicon oxynitride, but by the 32–22 nm nodes these too will probably require high-k dielectrics and metal gate electrode materials. Since there are already feasibility demonstrations of device structures at 22 nm scaling, there is a high probability that new technologies such as ALD can be scaled to this level.

21.5.2 Imaging Limitation and Etch Interactions

Beyond extending optical immersion lithography to the 32 nm node, an alternative approach for gate fabrication may be the use of a hard mask formed by a spacer process on a sacrificial material.138 This has some definite limitations and requires double patterning—one exposure for the minimum gate CD and one for the contacts and larger features, i.e., anything greater than the minimum. As in the current lithography development thrust, LER becomes even more challenging for lithography and etch, with a ~9 nm physical gate length and a total gate roughness budget of 1 nm.1
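The sizing arithmetic of the spacer approach is straightforward (a generic property of spacer patterning, stated here for clarity, not a claim from Ref. 138): depositing a conformal film of thickness $t$ on sacrificial lines at lithographic pitch $P$ and etching it back leaves two spacers per line, so

$$ P_{\text{final}} = \frac{P}{2}, \qquad CD_{\text{gate}} \approx t, $$

i.e., the gate CD is set by a deposited film thickness rather than by the exposure, which is also why the second, non-minimum patterning step is needed for contacts and larger features.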


21.5.3 Extension of Existing (45–32 nm) Etch Technology Node

As with the 45–32 nm technologies, the critical etch challenges will probably be the feature aspect ratio and the selectivity requirements needed to meet the scaling roadmap targets. With appropriate masking techniques, resolution at 22 nm does not appear to present any significant roadblocks, though a number of challenges remain. Unfortunately, the aspect ratios become even more difficult, with a number of critical steps approaching or exceeding 4:1. Besides the contact etch aspect ratio, which will probably exceed 4:1, the BEOL copper metal resistivity is increasing as the lines become smaller, and this will drive the integration toward deeper metal trench structures as well.102 Even though 22 nm will use materials and processing characterized in previous generations, such as high-k dielectrics and metal gates, the selectivity challenges are even more difficult since the layers become extremely thin. These challenges, along with the probable use of 3D transistor structures, may require a different approach: ALE. This is analogous to ALD, which is now being developed or has already been introduced into standard CMOS technology. Atomic layer etching can be modified to provide etch conditions ranging from highly anisotropic to purely isotropic. For 3D structures such as the SON transistors previously discussed and nanowire structures, isotropic etch processes may be needed, and these will require the same precision control as any anisotropic process144 (Figure 21.36). Primarily chemically driven selective etch processes may no longer provide the capability since, in many cases, the polymer layer through which the etch chemistry reaction must proceed is thicker than the feature size. So, as with ALD, the ALE reactant species must be adsorbed on the surface, and then energy is supplied to cause these molecules to react. Thus, for each layer or cycle, the chemistry and reaction rate can be precisely controlled. Use of thermal energy, such as a heated substrate stage or blanket radiation as in a rapid thermal processor, will result in isotropic etching. The use of a directional energy source, such as ion bombardment, an electron beam, or a collimated photon beam, will result in a very anisotropic process. Since many of the materials may have similar compositions, such as a metal and its oxides, ALE does not need to have a chemical selectivity mechanism; in principle, the desired number of atoms is removed per cycle. Even in those etch processes that do not require quite this level of selectivity control, or that do not lack chemical etch selectivity, the 22 nm technology will still require significant improvements in process control. As discussed in an open panel forum at the American Vacuum Society Symposium 2003, industry process technology leaders expressed the need for a paradigm shift in equipment and process control—from one of controlling equipment parameters such as power, pressure, and temperature to one of being able to adjust ion energies, ion densities, ion species, etc. One technique that offers most of these capabilities is neutral beam etching.
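To make the cycle concept concrete, here is a minimal sketch of an ALE cycle controller in Python. Everything in it (the Chamber stand-in, the Cl2/Ar chemistry, the energies and timings, and the 0.5 Å/cycle removal) is an illustrative assumption, not a real tool's API or a published recipe:

```python
# Minimal sketch of an atomic layer etch (ALE) cycle, illustrative only.
# The Chamber class and all gases, energies, and timings are assumptions
# for this example, not any real tool's interface or recipe.

class Chamber:
    """Stand-in for a plasma chamber controller."""
    def flow(self, gas, seconds):
        print(f"flow {gas} for {seconds} s")
    def purge(self, gas, seconds):
        print(f"purge with {gas} for {seconds} s")
    def bias_pulse(self, energy_ev, seconds):
        print(f"bias pulse {energy_ev} eV for {seconds} s")

ETCH_PER_CYCLE_A = 0.5   # assumed self-limited removal per cycle, in angstroms

def ale_etch(chamber, target_depth_a):
    """Remove target_depth_a angstroms by repeating self-limiting cycles."""
    cycles = round(target_depth_a / ETCH_PER_CYCLE_A)
    for _ in range(cycles):
        chamber.flow("Cl2", seconds=2.0)     # 1. saturating adsorption step
        chamber.purge("Ar", seconds=1.0)     #    remove excess reactant
        chamber.bias_pulse(25, seconds=1.5)  # 2. low-energy activation removes
        chamber.purge("Ar", seconds=1.0)     #    only the reacted layer
    return cycles

if __name__ == "__main__":
    print(ale_etch(Chamber(), target_depth_a=30.0))  # 60 cycles for a 30 A film
```

The essential point is that depth is controlled by counting self-limited cycles rather than by timing a continuous etch, and swapping the directional bias pulse for a thermal step would make the same loop isotropic.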


FIGURE 21.36 Example of a 3D nanowire structure fabricated using isotropic etch. (Reprinted from Milanovic, V. and Doherty, L., In Proceedings of ASME International Mechanical Engineering Congress and Exposition (IMECE), 33392, 2002. With permission. Copyright 2002, ASME.)


[Figure 21.37 labels: resist, poly-Si; scale bars 50 nm and 300 nm.]

FIGURE 21.37 Polysilicon profiles using Cl2/SF6 neutral beam etch. (Reprinted from Noda, S., Nishimori, H., Ida, T., Arikado, T., Ichiki, K., Ozaki, T., and Samukawa, S., J. Vac. Sci. Technol. A, 22, 1507, 2004. With permission. Copyright 2004, American Institute of Physics.)

Similar to ion implantation, where the beam properties can be precisely determined, the large-area neutral beam provides similar control, but over the entire substrate, so that low process times can be maintained. Although neutral beam etching may see limited introduction for the 45–32 nm technologies, it is difficult to imagine that the current RIE technique will be extendable to the 22 nm level for the most critical etch applications. S. Noda et al. have reported excellent results using neutral beam processing for polysilicon gate etch145 (Figure 21.37). One early application of neutral beam processing has been the demonstration of photoresist strip on ULK porous dielectrics.146,147

21.5.4 Process and Equipment Requirements

It has been shown that, generally, new process equipment is needed about every other generation.131 This has been true for deposition, with the probable wide use of atomic-level deposition at the 45–32 nm technologies for high-k gate dielectrics and copper BEOL diffusion barriers. This equipment scenario will probably also be the case for lithography and etch at the 22 nm node. For etch, the need for increased selectivity and atomic-level control will drive the introduction of ALE. This equipment could be the same as that used for deposition today, with slight optimization for the chemistry and reaction mechanisms. In addition, current etch tools could be easily modified to include the capability to cycle gas injection and ion bombardment for an ALE process, while maintaining the flexibility to run more traditional RIE conditions. However, due to the very low etch rates, these ALE processes will probably be limited to very thin films where there is not a good chemical selectivity mechanism. For other processes, with either ever-increasing aspect ratios, such as contact etch, or thicker layers where ALE may not prove viable, the stringent process selectivity and feature definition requirements must still be achieved. These requirements will probably drive equipment designs to allow lower and selectable ion energies, and perhaps the possibility of selecting the ion species and higher directionality. An example of a critical process is the FinFET fin and gate etches, where the aspect ratio is increasing while the CD is rapidly scaling down. Also, the gate etch must be done over the topography of the fin structure, as illustrated in Figure 21.38, which serves as another example of the process complexity trade-off as a function of feature aspect ratio.133
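The throughput limitation is easy to quantify (cycle values assumed for illustration only):

$$ t_{\text{total}} = \frac{d}{d_{\text{cycle}}}\,t_{\text{cycle}} = \frac{30\ \text{nm}}{0.05\ \text{nm/cycle}} \times 5\ \text{s/cycle} = 3000\ \text{s} \approx 50\ \text{min}, $$

an acceptable time only for films a few nanometers thick.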


[Figure 21.38 labels: planar device with Tsi, Lgate, and Wgate; FinFET with Lgate, Tsi, and fin height = ½ Wgate; I ∼ Wgate/Lgate (lateral scaling); I ∼ 2 × fin height/Lgate × Nfin (vertical dimension not litho-limited).]

FIGURE 21.38 Comparison of planar device and FinFET aspect ratio and scaling requirements.

As mentioned already, neutral beam etching seems to offer this level of ion control. The basic design is simply an ICP source with an ion extraction grid or a high aspect ratio "neutralization" plate (Figure 21.39). There are a number of ICP sources that can be readily modified to add this feature.

21.5.5 Challengers to Silicon beyond 22 nm

There is already intense research into alternative materials to silicon; of these, the carbon nanotube is probably the most likely next material. It has a number of properties that exceed those of silicon, such as resistivity, strength, etc., but more important is that it appears to be compatible with silicon processing.

[Figure 21.39 labels: DC pulse modulation (on/off = 50/50–100 μs), RF 13.56 MHz, top carbon plate, quartz tube, bottom carbon plate, RF 600 kHz, neutral beam, wafer stage.]

FIGURE 21.39 Schematic diagram of an experimental neutral beam etch chamber. (Reprinted from Noda, S., Nishimori, H., Ida, T., Arikado, T., Ichiki, K., Ozaki, T., and Samukawa, S., J. Vac. Sci. Technol. A, 22, 1507, 2004. With permission. Copyright 2004, American Institute of Physics.)


For example, the nanotube could be used in the standard process as the active channel region of the transistor, where the source/drain and gate elements can be fabricated in silicon.148,149 Due to their low resistivity and mechanical stability, carbon nanotubes may also find applications in the BEOL, such as in vias, since these may facilitate columnar growth.150,151 However, there is an almost equally intense ongoing effort with silicon structures that could extend similar nano-column or nanotube structures in silicon, so only at some time in the future will there be an end to the roadmap for silicon etching.134,135,152

21.6 Modeling of Plasma Etching Processes

Plasma etching processes are physically and chemically complex phenomena, and are often difficult to characterize thoroughly by experiment. Considerable attention has therefore been paid to computational modeling of plasmas and etching processes over the last 15 years. Some aspects of plasma processes can now be reliably analyzed using commercially available software, while other areas remain topics of intense research. With the growing complexity of plasma processing applications, the introduction of new materials at an unprecedented pace, and structure dimensions approaching the nanometer scale, it is clear that modeling will play a strong role in the design of future plasma processing tools and processes. A brief review of plasma models is included in this section. More details can be obtained from the cited references. Attention here will focus only on computational models. The importance of analytical models (e.g., Refs. 2 and 153) cannot, however, be deemphasized, as they remain invaluable tools for plasma tool design and engineering analysis. Computational plasma process models can generally be sub-divided into three categories: equipment models, feature scale models, and atomistic models. These models are often inter-coupled to analyze complicated problems, but the large disparity in time and spatial scales makes simultaneous simulation of all pertinent physical and chemical phenomena very challenging. Equipment models typically address gas flow in the plasma reactor, plasma generation, chemistry within the plasma, reactor electrodynamics, plasma interactions with driving circuits, and sheath dynamics. These models address phenomena on relatively large spatial scales (cm) and moderate time scales (ns–ms). Feature scale models simulate etching and related surface phenomena within small structures. The analysis often relies on macro-variables (sticking coefficients, sputtering yields, etc.) to represent surface processes. Feature scale models address issues on small spatial scales (μm) and relatively long time scales (seconds). A newer class of models utilizes molecular dynamics (MD) or ab initio techniques to investigate etching-relevant surface processes from first principles. These models address issues on very small spatial (nm) and temporal (fs–ps) scales. Kinetic, fluid, and hybrid techniques were all explored in the early days of multi-dimensional plasma reactor modeling. Kinetic models include particle-in-cell models,154 which self-consistently track macroparticles and their interactions, and models that attempt to directly solve the Boltzmann equation155 to determine important plasma properties. As these techniques are computationally expensive, and it becomes progressively more difficult to represent the complexity of actual plasma processes, they are now primarily used for research or specialty applications. Fluid156 and hybrid157 techniques, however, have been explored in considerable detail, and several commercial software packages158,159 based on these techniques are available. In fluid and hybrid plasma models, Maxwell's equations are solved in conjunction with equations governing species mass, momentum, and energy balance to determine important plasma properties. Electrons generally drive etching-relevant plasmas and have a broad energy distribution that strongly impacts their transport properties and the plasma chemistry. Fluid and hybrid models either assume a Maxwellian electron energy distribution, solve the Boltzmann equation to determine the electron energy distribution, or use Monte Carlo techniques.
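In their simplest form (a generic drift-diffusion statement of the fluid approach, not the specific formulation of Refs. 156 and 157), the fluid equations advance each species density $n_j$ and flux, coupled to Poisson's equation for the field:

$$ \frac{\partial n_j}{\partial t} + \nabla \cdot \boldsymbol{\Gamma}_j = S_j, \qquad \boldsymbol{\Gamma}_j = \operatorname{sgn}(q_j)\,\mu_j n_j \mathbf{E} - D_j \nabla n_j, \qquad \nabla \cdot (\varepsilon \mathbf{E}) = \sum_j q_j n_j, $$

where $S_j$ collects the electron-impact and heavy-particle reaction rates, which depend on the electron energy distribution.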
Fluid plasma models have been coupled to models of external driving circuitry,160 to kinetic models that compute quantities not well captured by fluid models (e.g., ion and neutral energy and angular distributions at surfaces),161 and to surface physics models.162 It is fair to state that plasma equipment modeling is a mature area and that plasma reactor dynamics can be simulated with reasonable fidelity.


Uncertainty in the plasma chemistry5 (atomic and molecular processes, heavy-particle reactions) is often the only hurdle that hinders the use of plasma equipment models for an even broader set of applications. Plasma equipment models have been successfully applied to the modeling of capacitively coupled plasma etchers (single frequency, dual frequency,163 magnetized164), ICP sources,157 ECR,165 and helicon166 plasmas. Fair to adequate mechanisms exist for many of the commonly used plasma etching gases. Feature scale models have grown immensely in maturity in the last few years. Several techniques have been explored for feature scale modeling, and they all remain equally important for problem solving. Broadly speaking, feature scale modeling has been done using Monte Carlo methods,167 string-based methods,168 and level set methods.169 In Monte Carlo models, the surface and the material underneath are represented using macro-particles. Plasma species, whose characteristics are either assumed or determined using plasma equipment simulations, are then bombarded onto the material stack. A surface reaction mechanism is used to determine how the structure evolves in time. Monte Carlo simulators allow the representation of detailed surface processes and can easily account for sub-surface processes. However, to overcome the statistical noise in the simulations, a large number of particles often has to be used, slowing down the simulations considerably. In string-based methods, the surface of the structure is represented using a set of inter-connected strings in 2D models (or patches in 3D simulations). Using fluxes of plasma species, the impingent flux on the structure surface is determined. The fluxes on the material surface are used in conjunction with a surface mechanism to determine how the strings or patches evolve in time. String-based techniques are computationally fast, and it is relatively straightforward to implement most surface processes. Representation of sub-surface material, and of simultaneous etching and deposition, is nonetheless a non-trivial task in string-based models. Level set methods have been used for both etching and deposition modeling. The material is represented by a function, one of whose equipotential surfaces coincides with the structure surface. The simulation methodology is similar to string-based models, although the surface is evolved by solving a differential equation governing the function. The level set method is slower than the string-based technique, but the simulations are numerically more stable. Representation of sub-surface materials and simultaneous etching and deposition are, however, challenges that level set methods share with string-based models. Feature scale models have been applied to the modeling of a wide variety of plasma etching processes, including polysilicon38 and photoresist28 etching, and SiO2170 and low-k171 dielectric etching. Molecular dynamics models have in recent years started playing a major role in unearthing the fundamentals of plasma etching. In MD models, the quantum mechanical interactions between atoms (both material and plasma based) are represented using pseudo-potentials that are determined either experimentally or using ab initio quantum mechanics models. These pseudo-potentials are used in classical mechanics models to simulate the dynamics on the material surface in contact with the plasma. Molecular dynamics models have been used to understand the formation of reactive layers on a variety of films, and the role that different ions and radicals play in plasma etching or surface passivation.
These models have examined Cl2172 and fluorocarbon173 etching of Si, fluorine etching of SiO2,174 and fluorocarbon etching of SiO2.175
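Returning to the level set approach mentioned above, the evolution takes its standard form (stated generically, not as the exact formulation of Ref. 169): the surface is the zero contour of a function $\phi(\mathbf{x}, t)$ that is advanced by

$$ \frac{\partial \phi}{\partial t} + v_n\,\lvert \nabla \phi \rvert = 0, $$

where the local normal speed $v_n$ is computed from the surface reaction mechanism (negative for etching, positive for deposition).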

References

1. International Technology Roadmap for Semiconductors, Semiconductor Industry Association, 2003 Edition.
2. Lieberman, M. A., and A. J. Lichtenberg. Principles of Plasma Discharges and Materials Processing. New York: Wiley, 1994.
3. Rauf, S. J. Appl. Phys. 87 (2000): 7647.
4. Wang, S.-B., and A. E. Wendt. J. Vac. Sci. Technol. A 19 (2001): 2425.
5. Christophorou, L. G., and J. K. Olthoff. Fundamental Electron Interactions with Plasma Processing Gases. London: Kluwer, 2004.


6. Nastasi, M., J. W. Mayer, and J. K. Hirvonen. Ion Solid Interactions: Fundamentals and Applications. Cambridge, MA: Cambridge University Press, 1996.
7. Eisele, K. M. J. Electrochem. Soc. 128 (1981): 123.
8. Coburn, J. W., and H. F. Winters. J. Appl. Phys. 50 (1979): 3189.
9. Standaert, T. E. F. M., M. Schaepkens, N. R. Rueger, P. G. M. Sebel, G. S. Oehrlein, and J. M. Cook. J. Vac. Sci. Technol. A 16 (1998): 239.
10. Atkins, P. Physical Chemistry, 5th ed., 877. New York: Freeman, 1994.
11. Bestwick, T. D., G. S. Oehrlein, and D. Angell. Appl. Phys. Lett. 57 (1990): 431.
12. Sekine, M. Appl. Surf. Sci. 192 (2002): 270.
13. Kitajima, T., Y. Takeo, and T. Makabe. J. Vac. Sci. Technol. A 17 (1999): 2510.
14. Keller, J. H., J. C. Forster, and M. S. Barnes. J. Vac. Sci. Technol. A 5 (1993): 2487.
15. Asmussen, J. Jr., T. A. Grotjohn, M. Pengun, and M. A. Perrin. IEEE Trans. Plasma Sci. 25 (1997): 1196.
16. Chen, F. F., and R. W. Boswell. IEEE Trans. Plasma Sci. 25 (1997): 1245.
17. Samukawa, S., K. Sakamoto, and K. Ichiki. Jpn. J. Appl. Phys. 40 (2001): L779.
18. Hwang, G. S., and K. P. Giapis. J. Vac. Sci. Technol. B 15 (1997): 70.
19. Tonnis, E. J., D. B. Graves, V. H. Vartanian, L. Beu, T. Lii, and R. Jewett. J. Vac. Sci. Technol. A 18 (2000): 393.
20. Armacost, M., P. D. Hoh, R. Wise, W. Yan, J. J. Brown, J. H. Keller, G. A. Kaplita, et al. IBM J. Res. Dev. 43 (1999): 39.
21. Chatterjee, A., I. Ali, K. Joyner, D. Mercer, J. Kuehne, M. Mason, E. Esquivel, et al. J. Vac. Sci. Technol. B 15 (1997): 1936.
22. Yeon, C.-K., and H.-J. You. J. Vac. Sci. Technol. A 16 (1998): 1502.
23. Ullal, S., H. Singh, V. Vahedi, and E. Aydil. J. Vac. Sci. Technol. A 20 (2002): 499.
24. Nagase, M., N. Ikezawa, K. Tokashiki, M. Takimoto, and K. Kasama. Dry Process Symposium, Tokyo, Japan, Paper I-3, 2001.
25. Bell, F. H., and O. Joubert. J. Vac. Sci. Technol. B 15 (1997): 88.
26. Greer, F., D. Fraser, J. W. Coburn, and D. B. Graves. J. Appl. Phys. 94 (2003): 7453.
27. Ling, L., X. Hua, X. Li, G. S. Oehrlein, E. A. Husdon, P. Lazzeri, and M. Anderle. J. Vac. Sci. Technol. B 22 (2004): 2594.
28. Rauf, S. J. Vac. Sci. Technol. B 22 (2004): 202.
29. DeBord, J. R. D., V. Jayaraman, M. Hewson, W. Lee, S. Nair, H. Shimada, V. L. Linh, J. Robbins, and A. Sivasothy. 2004 International Conference on Microelectronics and Interfaces Proceedings, 229, 1998.
30. Kim, Y., J. Lee, H. Cho, and J. Moon. 2000 International Conference on Micro-process and Nanotechnology Proceedings, 106, 2000.
31. Sato, Y., E. Shiobara, Y. Onishi, S. Yoshikawa, Y. Nakano, S. Hayase, and Y. Hamada. J. Vac. Sci. Technol. B 20 (2002): 909.
32. Lee, G. Y., Z. G. Lu, D. M. Dobuzinsky, X. J. Ning, and G. Costrini. International Interconnect Technology Conference Proceedings, 87, 1998.
33. Rauf, S., P. J. Stout, and J. Cobb. J. Vac. Sci. Technol. B 19 (2001): 172.
34. Goldfarb, D. L., A. Mahorowala, G. M. Gallatin, K. E. Petrillo, S. Ragson, H. H. Sawin, S. D. Allen, M. C. Lawson, and R. W. Kwong. J. Vac. Sci. Technol. B 22 (2004): 647.
35. Chan, V. W. C., C. H. Hai, and P. C. H. Chan. J. Vac. Sci. Technol. B 19 (2001): 743.
36. Xu, S., T. Lill, and D. Podlesnik. J. Vac. Sci. Technol. A 19 (2001): 2893.
37. Foucher, J., G. Cunge, L. Vallier, and O. Joubert. J. Vac. Sci. Technol. B 20 (2002): 2024.
38. Tuda, M., K. Shintani, and H. Ootera. J. Vac. Sci. Technol. A 19 (2001): 711.
39. Vallier, L., J. Foucher, X. Detter, E. Pargon, O. Joubert, and G. Cunge. J. Vac. Sci. Technol. B 21 (2003): 904.
40. Xu, S., Z. Sun, A. Chen, X. Quian, and D. Podlesnik. J. Vac. Sci. Technol. A 19 (2001): 871.
41. Hung, C.-C., H. C. Lin, M.-F. Wang, T.-Y. Huang, and H.-C. Shih. Microelectron. Eng. 63 (2002): 405.
42. Tuda, M., K. Shintani, and J. Tanimura. Appl. Phys. Lett. 79 (2001): 2535.


43. Vitale, S. A., and B. A. Smith. J. Vac. Sci. Technol. B 21 (2003): 2205.
44. Bell, F. H., O. Joubert, and L. Vallier. J. Vac. Sci. Technol. B 14 (1996): 96.
45. Bell, F. H., and O. Joubert. J. Vac. Sci. Technol. B 14 (1996): 2493.
46. Bell, F. H., and O. Joubert. J. Vac. Sci. Technol. B 15 (1997): 88.
47. Xu, S., Z. Sun, X. Quian, J. Holland, and D. Podlesnik. J. Vac. Sci. Technol. A 19 (2001): 166.
48. Kraft, R., I. Gupta, and T. Kinoshita. J. Vac. Sci. Technol. B 16 (1998): 496.
49. Desvoivres, L., L. K. Vallier, and O. Joubert. J. Vac. Sci. Technol. B 18 (2000): 156.
50. Panagopoulos, T., N. Gani, M. Shen, Y. Du, and J. Holland. In Proceedings of the 2004 International Conference on Microelectronics and Interfaces, 170, 2004.
51. Shan, H., B. Y. Pu, K.-H. Ke, M. Welch, and C. Deshpandey. J. Vac. Sci. Technol. B 14 (1996): 521.
52. Detter, X., R. Palla, I. Tomas-Boutherin, E. Pargon, G. Cunge, O. Joubert, and L. Vallier. J. Vac. Sci. Technol. B 21 (2003): 2174.
53. Gambino, J. P., and E. G. Colgan. Mater. Chem. Phys. 52 (1998): 99.
54. Sekine, M., Y. Kakuhara, and T. Kikkawa. Electron. Commun. Jpn, Part 2 79 (1996): 93.
55. Schreyer, T. A., A. J. Bariya, J. P. McVittie, and K. C. Saraswat. J. Vac. Sci. Technol. A 6 (1988): 1402.
56. Ikeguchi, K., J. Zhang, H. Lee, and C.-L. Yang. In Proceedings of the 2004 International Conference on Microelectronics and Interfaces, 98, 2004.
57. Kim, J. S., W. T. Tang, W. S. Lee, B. Y. Yoo, Y. C. Shin, T. H. Kim, K. Y. Lee, Y. J. Park, and J. W. Park. J. Vac. Sci. Technol. B 17 (1999): 2559.
58. Ohiwa, T., A. Kojima, M. Sekine, I. Sakai, S. Yonemoto, and Y. Watanabe. Jpn. J. Appl. Phys. 37 (1998): 5060.
59. Matsui, M., T. Tatsumi, and M. Sekine. Plasma Sources Sci. Technol. 11 (2002): A202.
60. Zhang, Y., G. S. Oehrlein, and F. H. Bell. J. Vac. Sci. Technol. A 14 (1996): 2127.
61. Schaepkens, M., G. S. Oehrlein, C. Hedlund, L. B. Jonsson, and H.-O. Blom. J. Vac. Sci. Technol. A 16 (1998): 3281.
62. Givens, J., S. Geissler, J. Lee, O. Cain, J. Marks, P. Keswick, and C. Cunningham. J. Vac. Sci. Technol. B 12 (1994): 427.
63. Shon, J., T. Chien, H. Y. Kim, W. S. Lee, J. Kim, and D. Kiel. In Proceedings of the 2001 Dry Process Symposium, 225, Paper VI-7, 2001.
64. Kim, J., C. W. Chu, C. J. Kang, W. S. Han, and J. T. Moon. J. Vac. Sci. Technol. B 20 (2002): 2065.
65. Quiao, J., B. Jin, P. Phatak, J. Yu, and S. Geha. J. Vac. Sci. Technol. B 17 (1999): 2373.
66. Kim, J.-H., J.-S. Yu, C.-K. Ryu, S.-J. Oh, S.-B. Kim, J.-W. Kim, J.-M. Hwang, S.-Y. Lee, and I. Kouichiro. J. Vac. Sci. Technol. A 18 (2000): 1401.
67. Sun, Y.-C., and T.-Y. Huang. In Proceedings of the Dry Process Symposium, 219, Paper VI-6, 2001.
68. Ito, S., H. Namba, T. Hirata, K. Ando, S. Koyama, N. Ikezawa, T. Suzuki, T. Saitoh, and T. Horiuchi. Microelectron. Reliab. 42 (2002): 201.
69. Hook, T. B., D. Harmon, and C. Lin. Microelectron. Reliab. 41 (2001): 751.
70. Brozek, T., J. Huber, and J. Walls. Microelectron. Reliab. 40 (2000): 625.
71. Chang, K.-M., T.-H. Yeh, I.-C. Deng, and H.-C. Lin. J. Appl. Phys. 80 (1996): 3048.
72. Lin, B.-W.S.-S., C.-S. Tsai, and C.-C. Hsia. Microelectron. Reliab. 40 (2000): 2039.
73. Hashimoto, K., F. Shimpaku, A. Hasegawa, Y. Hikosaka, and M. Nakamura. Thin Solid Films 316 (1995): 1.
74. Lee, Y. J., S. W. Hwang, G. Y. Yeon, J. W. Lee, and J. Y. Lee. Thin Solid Films 341 (1999): 168.
75. Keil, D. L., B. A. Helmer, and S. Lassig. J. Vac. Sci. Technol. B 21 (2003): 1969.
76. Kropewnicki, T., K. Doan, B. Tang, and C. Björkman. J. Vac. Sci. Technol. A 19 (2001): 1384.
77. Mogab, C. J., A. C. Adams, and D. L. Flamm. J. Appl. Phys. 49 (1978): 3796.
78. Tatsumi, T., M. Matsui, M. Okigawa, and M. Sekine. J. Vac. Sci. Technol. B 18 (2000): 1897.
79. Matsui, M., T. Tastsumi, and M. Sekine. J. Vac. Sci. Technol. A 19 (2001): 1282.
80. Kurihara, K., Y. Yamaoka, K. Karahashi, and M. Sekine. J. Vac. Sci. Technol. A 22 (2004): 2311.
81. Stoffels, W. W., E. Stoffels, and K. Tachibana. J. Vac. Sci. Technol. A 16 (1998): 87.
82. Standaert, T. E. F. M., C. Hedlund, E. A. Joseph, G. S. Oehrlein, and T. J. Dalton. J. Vac. Sci. Technol. A 22 (2004): 53.
83. Doh, H.-H., J. H. Kim, S.-H. Lee, and K.-W. Whang. J. Vac. Sci. Technol. A 14 (1996): 2827.


84. Li, X., X. Hua, M. Fukusawa, and G. S. Oehrlein. J. Vac. Sci. Technol. A 21 (2003): 284.
85. Teii, K., M. Hori, M. Ito, and T. Goto. J. Vac. Sci. Technol. A 18 (2000): 1.
86. Hori, M., and T. Goto. Appl. Surf. Sci. 192 (2002): 135.
87. Li, X., X. Hua, M. Fukusawa, and G. S. Oehrlein. J. Vac. Sci. Technol. A 21 (2003): 284.
88. Tsukada, T., H. Nogami, Y. Nakagawa, E. Wani, K. Mashimo, H. Sato, and S. Samukawa. Thin Solid Films 341 (1999): 84.
89. Hua, X., C. Stolz, G. S. Oehrlein, P. Lazzeri, N. Coghe, M. Anderle, C. K. Inoki, T. S. Kuan, and P. Jiang. J. Vac. Sci. Technol. A 23 (2005): 151.
90. Posseme, N., O. Joubert, L. Vallier, and N. Rochat. J. Vac. Sci. Technol. B 22 (2004): 2772.
91. Sankaran, A., and M. J. Kushner. Appl. Phys. Lett. 82 (2003): 1824; Sankaran, A., and M. J. Kushner. J. Vac. Sci. Technol. A 22 (2004): 1260.
92. Min, J.-H., S.-W. Hwang, G.-R. Lee, and S.-H. Moon. J. Vac. Sci. Technol. B 21 (2000): 1210.
93. Kojima, A., T. Sakai, and T. Ohiwa. J. Vac. Sci. Technol. B 22 (2004): 2611.
94. Claeys, C. In Proceedings of the 17th IEEE Conference on VLSI Design (VLSID '04), 275, 2004.
95. Onai, T., S. Kimura, K. Suko, H. Miyazaki, and F. Yano. Hitachi Rev. 52 (2003): 117.
96. Ito, T. Fujitsu Sci. Tech. J. 39 (2003): 3.
97. Hiramoto, T. 2004 IEEE Conference on Integrated Circuit Design and Technology, 59, 2004.
98. Samavedam, S., L. La, J. Smith, S. Dakshina-Murphy, E. Luckoski, J. Schaeffer, M. Zalava, et al. 2002 Symposium on VLSI Technology, 24, 2002.
99. Nguyen, B.-Y., A. Thean, T. White, A. Vandooren, M. Sadaka, L. Mathew, A. Barr, et al. 2004 IEEE Conference on Integrated Circuit Design and Technology, 237, 2004.
100. Chang, L., C. Yang-kyu, D. Ha, P. Ranade, X. Shiying, J. Bokor, H. Chenming, and T. J. King. Proc. IEEE 91 (2003): 1860.
101. Chuang, C.-T., K. Bernstein, R. V. Joshi, R. Puri, K. Kim, E. J. Nowak, T. Ludwig, and I. Aller. IEEE Circuits Dev. 20, no. 1 (2004): 6.
102. Englehart, M., G. Schindler, W. Steinhögl, and G. Steinlesberger. Microelectron. Eng. 64 (2002): 3.
103. Rottstegge, J., W. Herbst, S. Hien, G. Futterer, C. Eshbaumer, C. Hohle, J. Schwider, and M. Sebald. Proc. SPIE 4690 (2004): 233.
104. Lin, B. J. Proc. SPIE 5377 (2004): 46.
105. Switkes, M., R. R. Kunz, M. Rothschild, R. F. Sinta, M. Yeung, and S.-Y. Baek. J. Vac. Sci. Technol. B 21 (2003): 2794.
106. Zandbergen, P., D. Van Steenwinckel, J. H. Lammers, H. Kwinten, and C. Juffermans. IEEE Trans. Semi. Manufact. 18 (2005): 37.
107. Tserepi, A., G. Cordoyiannis, G. P. Patsis, V. Constantoudis, E. Gogolides, E. S. Valamontes, D. Eon, et al. J. Vac. Sci. Technol. B 21 (2003): 174.
108. Yoon, J.-Y., M. Hata, J.-H. Hah, H.-W. Kim, S.-G. Woo, H.-K. Cho, and W.-S. Han. Proc. SPIE 5376 (2004): 196.
109. Foucher, J., G. Cunge, L. Vallier, and O. Joubert. J. Vac. Sci. Technol. B 20 (2002): 2024.
110. Constantoudis, V., G. Patsis, and E. Gogolides. Proc. SPIE 5038 (2003): 901.
111. Satou, I. Jpn. J. Appl. Phys. 38 (1999): 7008.
112. Yoshizama, M., S. Moriya, H. Nakano, Y. Shirai, T. Morita, T. Kitagawa, and Y. Miyamoto. Jpn. J. Appl. Phys. 43 (2004): 3739.
113. Vandooren, A., A. Barr, L. Mathew, T. R. White, S. Egley, D. Pham, M. Zavala, et al. IEEE Elect. Dev. Lett. 24 (2003): 342.
114. Shimada, H., and K. Maruyama. Jpn. J. Appl. Phys. 43 (2004): 1768.
115. Sha, L., and J. P. Chang. J. Vac. Sci. Technol. A 22 (2004): 88.
116. Monfray, S., A. Souifi, F. Boeuf, C. Ortolland, A. Poncet, L. Militaru, D. Chanemougame, and T. Skotnicki. IEEE Trans. Nanotechnol. 2 (2003): 295.
117. Nowak, E. J., I. Aller, T. Ludwig, K. Kim, R. V. Joshi, C.-T. Chuang, K. Bernstein, and R. Puri. IEEE Circuits Dev. 20 (2004): 20.
118. Schaeffer, J., C. Capasso, L. Foneca, S. Samavedam, D. Gilmer, Y. Liang, S. Kalpat, et al. 2004 IEEE Int. Elect. Dev. Mtg., 287, 2004.


119. Wong, H.-S., B. Doris, E. Gusev, M. Icong, E. Jones, J. Kedzieski, Z. Ren, K. Rim, and H. Shang. 2003 IEEE Symposium on VLSI Technology, 13, 2003.
120. Matthews, L., D. Yang, A. Thean, A. Vandooren, C. Parker, T. Stephens, R. Mora, et al. IEEE Symposium on VLSI Technology, 97, 2005.
121. Gossmann, H.-J. L., A. Agarwal, T. Parrill, L. M. Rubin, and J. M. Poate. IEEE Trans. Nanotechnol. 2 (2003): 285.
122. Borel, S., C. Arvet, J. Bildie, V. Caubet, and D. Louis. Jpn. J. Appl. Phys. 43 (2004): 3964.
123. Sparks, T., S. Rauf, L. Vallier, G. Cunge, and T. Chevolleau. International Symposium of AVS, 2004.
124. Liang, M.-C., H.-Y. Tsai, C.-C. Chung, C.-C. Hsueh, H. Chung, and C.-Y. Lu. IEEE Elect. Dev. Lett. 24 (2003): 562.
125. Case, C. Future Fab Int. 17 (2004): chap. 6.
126. Matsumoto, S., A. Ishii, K. Hashimoto, Y. Nishioka, M. Sekiguchi, S. Isono, T. Satake, et al. IEEE International Interconnect Technology Conference (IITC), 262, 2003.
127. Moore, D., R. Carter, H. Cui, P. Burke, P. McGrath, S. Q. Gu, D. Gidley, and H. Peng. J. Vac. Sci. Technol. B 23 (2005): 332.
128. Maex, K., M. R. Baklanov, D. Shamiryan, F. Iacopi, S. H. Brongersma, and Z. S. Yanovitskaya. J. Appl. Phys. 93 (2003): 8793.
129. Steinlesberger, G., M. Engelhart, G. Schindler, W. Steinhogl, A. von Glasow, and K. Mosig. Impact of Annealing on the Resistivity of Ultrafine Cu Damascene Interconnects. Mat. Tech. and Rel. for Adv. Interconnect. Symp. Proceedings, 766, 2003.
130. Connelly, D., C. Faulkner, D. E. Grupp, and J. S. Harris. IEEE Trans. Nanotechnol. 3 (2004): 98.
131. Goronkin, H., P. Von Allmen, R. Tsui, and T. Zhu. Nanostruct. Sci. Technol. 67 (1999): chap. 5.
132. De Blauwe, J. IEEE Trans. Nanotechnol. 1 (2002): 72.
133. Ma, Y., G. Tavid, and F. Cerrina. J. Vac. Sci. Technol. B 22 (2004): 3124.
134. Chan, V., C. Hai, and P. Chan. J. Vac. Sci. Technol. B 19 (2001): 743.
135. Finch, R., J. Feldman, F. Zumsteg, M. Crawford, A. Feiring, V. Petrov, F. Schadt III, and R. Wheland. Semicond. Fabtech. 14 (2001): 167.
136. Shenoy, R., and K. Saraswat. IEEE Trans. Nanotechnol. 2 (2003): 265.
137. Yang, F.-L., D.-H. Lee, H.-Y. Chen, C.-Y. Chang, S.-D. Liu, C.-C. Huang, et al. IEEE Symposium on VLSI Technology, 196, 2004.
138. Hu, S.-F., Y.-C. Wu, C.-L. Sung, C.-Y. Chang, and T.-Y. Huang. IEEE Trans. Nanotechnol. 3 (2004): 93.
139. Van den hove, L., A. Goethals, K. Ronse, M. Van Bavel, and G. Vandenberghe. IEEE Int. Elect. Dev. Mtg., 3, 2002.
140. Molas, G., B. De Salvo, G. Ghibaudo, D. Mariolle, A. Toffoli, N. Buffet, R. Puglisi, S. Lombardo, and S. Deleonibus. IEEE Trans. Nanotechnol. 3 (2004): 42.
141. Sano, N., A. Hiroki, and K. Matsuzawa. IEEE Trans. Nanotechnol. 1 (2002): 63.
142. Zhuang, L., L. Guo, and S. Y. Chou. Appl. Phys. Lett. 72 (1998): 1205.
143. Engelhardt, M., G. Schindle, W. Steinhogl, and G. Steinlesberger. Microelectron. Eng. 64 (2002): 3.
144. Milanovic, V., and L. Doherty. In Proceedings of ASME International Mechanical Engineering Congress and Exposition (IMECE), 33392, 2002.
145. Noda, S., H. Nishimori, T. Ida, T. Arikado, K. Ichiki, T. Ozaki, and S. Samukawa. J. Vac. Sci. Technol. A 22 (2004): 1507.
146. Kim, S. J., H. J. Lee, G. Y. Yeom, and J. K. Lee. Jpn. J. Appl. Phys. 43 (2004): 7261.
147. Sommervell, M., D. Fryer, B. Osburn, K. Patterson, J. Byers, and C. Wilson. J. Vac. Sci. Technol. B 18 (2000): 2551.
148. Brown, K. Future Fab Int. 17 (2004).
149. Litt, L., B. Roman, and J. Cobb. Future Fab Int. 17 (2004).
150. Naeemi, A., and J. Meindl. IEEE Elect. Dev. Lett. 26 (2005): 84.
151. Kreup, F., A. Graham, M. Liebau, G. Duesberg, R. Seidel, and E. Unger. IEEE Int. Elect. Dev. Mtg., 683, 2004.
152. Yang, J. IEEE Circuits Dev. 1 (2004): 44.
153. Lieberman, M. A. IEEE Trans. Plasma Sci. 16 (1988): 638.

DK4126—Chapter21—23/5/2007—18:31—ANBARASAN—240450—XML MODEL CRC12a – pp. 1–69

Plasma Etch

154. 155. 156. 157. 158. 159. 160. 161. 162. 163. 164. 165. 166. 167. 168. 169. 170. 171. 172. 173. 174. 175.

21-69

Birdsall, C. K. IEEE Trans. Plasma Sci. 19 (1991): 65. White, R. D., K. F. Ness, and R. E. Robson. Appl. Surf. Sci. 192 (2002): 26. Graves, D. B., and K. F. Jensen. IEEE Trans. Plasma Sci. 14 (1986): 78. Ventzek, P. L. G., R. J. Hoekstra, and M. J. Kushner. J. Vac. Sci. Technol. B 12 (1994): 461. http://www.plasmator.com http://www.cfdrc.com Rauf, S., and M. J. Kushner. J. Appl. Phys. 83 (1998): 5087. Hoekstra, R. J., and M. J. Kushner. J. Appl. Phys. 79 (1996): 2275. Zhang, D., and M. J. Kushner. J. Appl. Phys. 87 (2000): 1060. Wakayama, G., and K. Nanbu. IEEE Trans. Plasma Sci. 31 (2003): 638. Rauf, S. IEEE Trans. Plasma Sci. 31 (2003): 471. Kinder, R., and M. J. Kushner. J. Vac. Sci. Technol. A 17 (1999): 2421. Kinder, R. L., A. R. Ellingboe, and M. J. Kushner. Plasma Sources Sci. Technol. 12 (2003): 561. Hoekstra, R. J., M. J. Grapperhaus, and M. J. Kushner. J. Vac. Sci. Technol. A 15 (1997): 1913. Abdollahi-Alibeik, S., J. P. McVittie, K. C. Saraswat, V. Sukharev, and P. Schoenborn. J. Vac. Sci. Technol. A 17 (1999): 2485. Hwang, H. H., T. R. Govindan, and M. Meyyappan. J. Electrochem. Soc. 146 (1999): 1889. Zhang, D., S. Rauf, T. Sparks, and P. L. G. Ventzek. J. Vac. Sci. Technol. B 21 (2003): 828. Sankaran, A., and M. J. Kushner. J. Vac. Sci. Technol. A 22 (2004): 1242. Barone, M. E., and D. B. Graves. J. Appl. Phys. 77 (1995): 1263. Abrams, C. F., and D. B. Graves. J. Appl. Phys. 86 (1999): 5938. Ohta, H., and S. Hamaguchi. J. Vac. Sci. Technol. A 19 (2001): 2373. Smirnov, V. V., A. V. Stengach, K. G. Gaynullin, V. A. Pavlovsky, S. Rauf, P. J. Stout, and P. L. G. Ventzek. J. Appl. Phys. 97 (2005): 093302.

DK4126—Chapter21—23/5/2007—18:31—ANBARASAN—240450—XML MODEL CRC12a – pp. 1–69

22

Equipment Reliability

Vallabh H. Dhudshia
SafeFab Solutions

22.1  Introduction ........................................................................ 22-1
      Basic Definitions † Reliability of a Repairable System † Reliability Metrics
22.2  Reliability Metrics Calculations .......................................... 22-4
22.3  Applications of Reliability Metrics ...................................... 22-5
      Desired Values † Analytical/Theoretical Values † Observed Values
22.4  Confidence Limits Calculations ........................................... 22-6
      Confidence Limit Calculations for Theoretical Values † Confidence Limit Calculations for Observed Values
22.5  Precise Use of the Reliability Metrics ................................. 22-7
22.6  Maintainability Metrics ....................................................... 22-8
22.7  High-Level Equipment Performance Metrics ...................... 22-8
      Availability (Uptime) and Utilization Metrics † Overall Equipment Efficiency † Cost of Ownership † Hierarchy of Equipment Performance Metrics
22.8  An Example of Reliability and High-Level Performance Metrics Calculations ... 22-11
      Given Values † Metrics Calculations
22.9  Four Steps to Better Equipment Reliability ....................... 22-12
      Know Goals and Requirements † Design-In Reliability † Build-In Reliability † Manage Reliability Growth
22.10 Reliability Testing .............................................................. 22-20
      Types of Reliability Tests † Generic Steps for Reliability Tests † Reliability Tests throughout the Equipment Program Life Cycle Phases
22.11 Use of Equipment Reliability Discipline in Business Practices ... 22-24
22.12 SEMI E10 ........................................................................... 22-25
      Key Elements of SEMI E10 † Benefits of Using SEMI E10
References ................................................................................... 22-27

22.1 Introduction

Reliability has been widely used to measure equipment performance in military and commercial industry since the early 1940s. Since then, the importance of reliability has grown at a phenomenal rate. Reliability is now a key equipment characteristic that strongly influences equipment production efficiency and the cost of owning and operating a piece of equipment. In addition, better reliability leads to a


competitive advantage. In the semiconductor manufacturing equipment industry, reliability plays an even greater role, enabling semiconductor manufacturers to compete globally. Currently, some semiconductor manufacturers also track high-level metrics, such as overall equipment efficiency (OEE) and cost of ownership (CoO), for the equipment in their factories; reliability is a key element of such metrics. This chapter provides a basic working knowledge of the equipment reliability discipline: how to define and calculate reliability and related metrics, and how they depend on one another. Also included are their applications to semiconductor manufacturing operations and ways to improve them.

22.1.1 Basic Definitions

22.1.1.1 Equipment

Equipment is defined as a combination of components, parts, assemblies, modules, accessories, and embedded software integrated to perform the intended functions. At least one component must fail to cause the equipment to fail to perform its intended functions. Most semiconductor manufacturing equipment is repairable. A repairable system is one that, after failing to perform at least one of its intended functions, can be restored to perform all of its intended functions by any method other than replacing the entire system. A repairable system can be restored by replacing, repairing, adjusting, or cleaning the appropriate component(s), and/or by reloading embedded software. All the subject matter presented here refers to such repairable systems, whether semiconductor manufacturing equipment or other repairable systems.

22.1.1.2 Reliability

Equipment has many characteristics of interest to us, such as weight, volume, footprint, throughput rate, and price. One of the most important of these characteristics is reliability, as defined below. Reliability is a measure of the failure-free operating period of any equipment. It can be expressed in many different ways; one way is the following formal definition. Reliability is the probability of performing the intended functions continuously, without a failure (stoppage or interruption), for a specified time under the stated operational conditions. Mathematically, it is written as:

R(t) = Pr[T > t]    (22.1)

where t is the specific time of interest, T is the random time to failure, R(t) is the reliability at time t, and Pr[ ] denotes the probability of the bracketed event. Example:

R(1000 h) = Pr[T > 1000 h] = 0.95    (22.2)

In this example, 95% of the equipment units should continue operating past 1000 h. Four key points of the above formal definition require further explanation.

22.1.1.2.1 Failure
A failure is defined as any unscheduled event that changes the equipment to a condition in which it cannot perform one of its intended functions. A part failure; an out-of-adjustment or contaminated part; a software or process recipe problem; a facility or utility supply malfunction; or a human error could cause the failure.

22.1.1.2.2 Intended Functions
Every piece of equipment has its intended functions, whether they are formally documented or not. However, a given reliability level applies to the given set of functions that the equipment was designed to accomplish. If the equipment is used for functions other than its intended design, the same reliability


level may not apply to these new functions. It is the manufacturer's responsibility to see that users understand the equipment's intended functions, and vice versa.

22.1.1.2.3 Specified Time
The reliability level changes as the equipment ages, so it is necessary to include equipment age when establishing a reliability level. Without such a time element, any reliability level is ambiguous and can mislead a user about the specific reliability level.

22.1.1.2.4 Stated Operational Conditions
Factors such as operating environment, operating stress level, operating speed, operator skill level, and maintenance procedures and policies can affect the reliability of any equipment. Therefore, a given reliability level applies to a specified set of operational conditions. If the value of any factor varies from the specified operational conditions, the reliability level may differ. Example: the reliability of a blower in a card cage, operating in its ambient environment at 60% of its rated power, will be 0.85 at 2 years after installation.

22.1.2 Reliability of a Repairable System

In a repairable system, the distribution of the intervals between successive failures is of prime interest. If we assume that:
1. Each component failure is an independent renewal process, i.e., when a component fails, it is replaced by a new component and this does not affect any other components, and
2. The system is a series system with many independent components,
then, under very generic conditions, the system-level failure process is a superimposed renewal process [1]. The time between two successive failures approximately follows an exponential distribution, as shown below.

f(t) = (1/θ)e^(-t/θ) = λe^(-λt)    (22.3)

where f(t) is the probability density function (PDF) at time t; θ is the mean life; and λ is the failure rate = 1/mean life. Note that the failure rate is constant for the exponential PDF. The reliability function for the exponential PDF is given by:

R(t) = e^(-t/θ) = e^(-λt)    (22.4)

The exponential distribution is one of the simplest and most widely used PDFs in the reliability discipline. This approximation makes reliability analysis of a repairable system very easy: we need to know only one parameter of the distribution, either the mean time between failures (MTBF) or the failure rate λ, to perform system-level reliability analyses.
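Because the exponential model is fully specified by the mean life, Equation 22.4 can be evaluated directly once the MTBF is known. A minimal sketch in Python (the function name and the 500-h example value are illustrative, not from the text):

    import math

    def reliability(t_hours, mtbf_hours):
        """Exponential model of Equation 22.4: R(t) = exp(-t/theta)."""
        return math.exp(-t_hours / mtbf_hours)

    # For a hypothetical tool with MTBF (theta) = 500 h:
    print(reliability(100, 500))   # ~0.819: good odds of 100 failure-free hours
    print(reliability(1000, 500))  # ~0.135: runs far past the MTBF are unlikely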

22.1.3 Reliability Metrics

Reliability metrics are the various terms used to quantify the numerical value of reliability levels. In the semiconductor manufacturing industry, we use neither the reliability R(t) nor the failure rate λ to measure the level of equipment reliability. Instead, we use metrics based on mean life. These metrics consist of at least four words, as shown in Figure 22.1. Two of them, MEAN and BETWEEN, are mandatory; the other words name the measure of life and the event of interest. Using the algorithm given in Figure 22.1, we can define a metric appropriate for any situation: take the word MEAN, select a word for the measure of life, take the word BETWEEN, and select the desired event.


FIGURE 22.1 Reliability metrics algorithm. (Life-units: time, wafers, cycles, copies; events: failures, down events, UMs, maintenance actions.)

Examples:
1. Mean time between failures
2. Mean cycles between unscheduled maintenance
3. Mean wafers between failures.
These metrics are widely used to track the reliability of semiconductor manufacturing equipment and are recommended by Semiconductor Equipment and Materials International (SEMI), an association of semiconductor manufacturing equipment suppliers, in SEMI Specification E10-0304E for the definition and measurement of equipment reliability, availability, and maintainability (RAM) [2].

22.2 Reliability Metrics Calculations

To calculate the numerical value of an equipment reliability metric, we need two basic elements of the reliability discipline: (a) the number of events of interest that stop the equipment from performing its intended function(s) in a given life period, and (b) the amount of life-units the equipment was in operational condition during the same period. Use the following equation to calculate the desired metric.

Reliability metric = Operational life period / Number of events during the same operational life period    (22.5)

As shown in Figure 22.1, semiconductor manufacturing events are categorized as failures, down events, scheduled maintenance, or unscheduled maintenance. The operational life period may be expressed in calendar time, productive time, number of cycles, or number of wafers processed. Based on these variations, some popular reliability metrics for semiconductor manufacturing equipment are given by:

Mean (productive) time between failures (MTBFp) = Productive time / Number of failures that occur during productive time    (22.6)

Mean cycles between failures (MCBF) = Total equipment cycles / Number of failures    (22.7)


The above formulae also apply to any other measure of life by replacing "cycles" with the desired measure. For example,

Mean wafers between failures (MWBF) = Number of wafers processed / Number of failures    (22.8)

Section 22.8 contains an example of calculations for the above metrics.
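These ratio metrics are straightforward to compute from tracked counts. A minimal sketch in Python (the tracking numbers are hypothetical; they happen to match the worked example in Section 22.8):

    def mean_life_between(life_units, num_events):
        """Equation 22.5: operational life divided by events in that period."""
        return life_units / num_events

    productive_hours = 1200   # hypothetical tracking period
    wafers_processed = 48_000
    failures = 3

    print(mean_life_between(productive_hours, failures))  # MTBFp = 400.0 h
    print(mean_life_between(wafers_processed, failures))  # MWBF = 16000.0 wafers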

22.3 Applications of Reliability Metrics

Applications of reliability metrics are terms that originate from reliability-related activities and are used with the appropriate metrics. For example, a reliability goal originates from a goal-setting activity; goal is an application of any reliability metric, e.g., goal MTBF. The applications of these metrics can be divided into the following three categories, depending upon the origin of the activities they represent:
1. Desired values
2. Analytical/theoretical values
3. Observed values.

22.3.1 Desired Values

In this category, the value of a metric level originates from activities that deal with the desires of equipment manufacturers and users. For example, MTBF goals express how a manufacturer (or a user) wants the equipment to perform; goal applications therefore belong to the Desired Value category. When system-level goals or requirements are broken into department-level goals or requirements, based on some logical justification, they generate the applications known as allocation, budgeting, or apportionment, which also belong to the Desired Value category. System-level reliability goals allocated to subsystems and components, and the corresponding operating environments, are part of the respective design specifications for reliability. Therefore, design specification is also an application of the reliability metrics and falls in this category.

22.3.2 Analytical/Theoretical Values

In this category of applications, the value of a metric level originates from appropriate theoretical activities, such as modeling, part count calculation, and stress analysis. The following are two typical examples of Analytical/Theoretical Value applications:
1. Inherent reliability. The values are derived from design assessment, assuming benign environments and no error in design, manufacturing, or operation. The inherent reliability is the best achievable level.
2. Expected reliability. The values are based on theoretical calculations using the theoretical or observed reliability levels of the parts used in the equipment.


22.3.3 Observed Values

This category represents situations in which the metric level is established from actual in-house tests, field tests, or field operations of the equipment. The following are three typical examples:
1. Observed values. The values are derived from actual in-house tests, field tests, or field operations. These values are not altered or adjusted.
2. Assessed/adjusted values. When observed values are adjusted to account for non-relevant failures (such as facility problems or out-of-spec consumables), they become assessed/adjusted values.
3. Confidence limit values. These values are the observed values adjusted to account for the number of failures observed, as described in Section 22.4.
Figure 22.2 below shows the most commonly used terms for reliability metrics and their applications. Reliability metrics can be converted from one to another; for example, if we know the MTBF, we can convert it to failures per 1000 h. Applications of the metrics can neither be converted from one to another nor mixed. However, they can be compared (for example, goal value vs. observed value).

22.4 Confidence Limits Calculations

When we deal with reliability metrics, either analytical or observed, we always face the question of how much confidence we have in the result. The confidence varies with the type of theoretical calculation made (for theoretical values), or with the number of failures observed and the amount of productive time (or other measure of life) contained within the observed period (for observed values). The confidence is expressed by calculating confidence limits on the calculated or observed values. Generally we are interested in the lower confidence limit; therefore, the calculation methods shown below give lower limits only. A similar methodology is used to calculate upper confidence limits.

FIGURE 22.2 Summary of reliability metrics and their applications. (Metrics may be probabilistic, e.g., Pr[T > 1000 h] = 0.95; mean-life based, e.g., MTBF, MCBF, MWBF; normalized, e.g., F/10^6 h, UMs/10^6 cycles; or percentages, e.g., % failed, % uptime. Applications fall into desired, theoretical, and observed values.)


22.4.1 Confidence Limit Calculations for Theoretical Values

To calculate the confidence limit, we must have an underlying probability distribution for the calculated values. The underlying distribution depends upon the distributions of the MTBF or other values of the parts used in the calculations. Applying the law of large numbers, this distribution can be taken as a normal distribution with mean μ equal to the calculated value of the reliability metric under consideration and standard deviation σ. In such cases, use the formulae given in Table 22.1 to calculate the lower confidence limit. For example, if μ and σ of the calculated MTBF values are 500 and 100 h, respectively, then

80% lower confidence limit for calculated MTBF = 500 - 0.842 × 100 = 415.8 h    (22.9)

22.4.2 Confidence Limit Calculations for Observed Values

The formulae given in Section 22.2 for reliability metrics provide single-value estimates of the observed performance. The lower confidence limit for any observed reliability metric (MTBFp, MWBF, MCBF, etc.) depends upon the number of failures observed and the amount of productive time (or other measure of life) contained within the observed period. It is calculated using the following formula.

P% lower confidence limit for the observed metric = (observed value) × (appropriate factor for the confidence level)    (22.10)

For the semiconductor equipment industry, any value between 80 and 95% confidence is an accepted norm for reliability work. See Table 22.2 for the multiplier factor values at the 80, 90, and 95% confidence levels [2]. Section 22.8 contains an example of lower confidence limit calculations.
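Both styles of lower limit reduce to one line of arithmetic each. A sketch in Python (the factor table is transcribed from the 80% column of Table 22.2; the 0.842 multiplier is from Table 22.1):

    # 80% column of Table 22.2, keyed by number of observed failures.
    FACTOR_80 = {1: 0.621, 2: 0.668, 3: 0.701, 4: 0.725, 5: 0.744,
                 10: 0.799, 15: 0.828, 20: 0.846, 30: 0.870}

    def lcl_observed_80(observed_value, num_failures):
        """Equation 22.10: observed metric times the tabulated factor."""
        return observed_value * FACTOR_80[num_failures]

    def lcl_theoretical_80(mu, sigma):
        """Table 22.1 formula for the 80% limit: mu - 0.842*sigma."""
        return mu - 0.842 * sigma

    print(lcl_observed_80(400, 3))       # 280.4 h, as in Section 22.8
    print(lcl_theoretical_80(500, 100))  # 415.8 h, as in Equation 22.9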

22.5 Precise Use of the Reliability Metrics

We need the following items to fully and precisely define the reliability level of a real-life situation:
1. Appropriate application (e.g., goal)
2. Appropriate reliability metric (e.g., MTBF)
3. Appropriate numerical value (e.g., 5000)
4. Appropriate unit for the metric (e.g., hours)
5. Appropriate set of intended functions (e.g., metal etch process)
6. Age of the equipment when the metric and value apply (e.g., two months after installation)
7. Appropriate set of operational conditions (e.g., clean room environment)
8. Appropriate confidence level (e.g., 80%) for confidence limit values.

For example:
1. Goal MTBF is 5000 h, 2 months after installation, when used as a metal etcher in a clean room environment.
2. Part count calculations for poly etcher model E3000 show an inherent failure rate of 2.00 failures per 1000 h (MTBF = 500 h) under the assumptions listed (a list of all assumptions used in the calculations, such as operating conditions and part functions, is needed). Assuming σ = 100, the 80% lower confidence limit for the calculated MTBF = 415.8 h.
3. Observed MCBF = 20,000 cycles. When the facility-related failures are discounted, the adjusted MCBF = 30,000 cycles. The 80% lower confidence limit MCBF = 21,030 cycles (based on three equipment failures). The equipment is 6 months old and is used as an oxide etcher.

TABLE 22.1 Formulae for Calculating the Lower Confidence Limit of Theoretical Values

Confidence Limit                Formula Used
70% lower confidence limit     μ - 0.525σ
80% lower confidence limit     μ - 0.842σ
90% lower confidence limit     μ - 1.282σ
95% lower confidence limit     μ - 1.645σ


TABLE 22.2 Multiplier Factors for the Lower Confidence Limit Calculations

Number of Failures    80% Confidence    90% Confidence    95% Confidence
1                     0.621             0.434             0.334
2                     0.668             0.514             0.422
3                     0.701             0.564             0.477
4                     0.725             0.599             0.516
5                     0.744             0.626             0.546
10                    0.799             0.704             0.637
15                    0.828             0.745             0.685
20                    0.846             0.772             0.717
30                    0.870             0.806             0.759

22.6 Maintainability Metrics

There is no direct relationship between reliability and maintainability. Reliability deals with the operational life and failures of equipment, while maintainability deals with restoring equipment operation and the time it takes to restore it. The two disciplines are complementary, however; both support the high-level equipment performance metrics described in Section 22.7. Formally, maintainability is the probability that the equipment will be restored to a specific operational condition (able to perform all of its intended functions) within a specified period of time, when the maintenance is performed by personnel having specified skill levels and using prescribed procedures, resources, and tools. Maintenance can be either unscheduled or scheduled. One of the most popular measures of maintainability is the mean time to repair (MTTR), given by:

Mean time to repair (MTTR) = Total repair time / Number of repair events    (22.11)

Repair time includes diagnosis, corrective actions, and verification tests, but not maintenance delays.

22.7 High-Level Equipment Performance Metrics

Recently, some semiconductor manufacturing equipment users have started using high-level equipment performance metrics to make equipment purchase decisions and improve equipment performance. These high-level metrics are becoming increasingly important for competing in the global market because they satisfy customers' reliability requirements in an optimal manner. Equipment reliability is the key element of these high-level metrics; the proper level of reliability is the one that yields the optimal value of the high-level metric being considered. The three most widely used high-level equipment performance metrics in semiconductor manufacturing are:
1. Availability (uptime) and utilization
2. Overall equipment efficiency
3. Cost of ownership.


22.7.1 Availability (Uptime) and Utilization Metrics

Availability is a joint measure of reliability and maintainability. It is defined as the probability that equipment will be in a condition to perform its intended functions when required. Percentage uptime is one of the most widely used metrics for availability in semiconductor manufacturing. Since equipment down time can be attributed to the equipment itself, the equipment supplier, or the equipment user, the uptime calculations vary accordingly. The following three kinds of uptime calculation are used in semiconductor manufacturing: (a) equipment dependent, (b) supplier dependent, and (c) operational. Equipment-dependent uptime includes the effect of down time caused by scheduled and unscheduled maintenance inherent to the equipment (design), and it is given by:

Equipment-dependent uptime (%) = Equipment uptime × 100 / (Equipment uptime + DTE)    (22.12)

where DTE = equipment-dependent down time = (unscheduled repair time + unscheduled and scheduled time to change consumables and chemicals + product test time + preventive maintenance time). Equipment uptime includes productive, engineering, and standby times. It does not include non-scheduled time such as holidays, shutdowns, non-working shifts, etc. [2]. Supplier-dependent uptime includes the effects of all equipment-dependent down time (DTE) plus maintenance delays caused by the equipment supplier. It is given by:

Supplier-dependent uptime (%) = Equipment uptime × 100 / (Equipment uptime + DTE + supplier-caused maintenance delays)    (22.13)

Operational uptime includes the effects of all down time caused by scheduled and unscheduled maintenance inherent to the equipment, maintenance delays caused by the equipment supplier and user, and any other down time caused by the equipment user (such as facility-related down time). It is given by:

Operational uptime (%) = Equipment uptime × 100 / (Equipment uptime + Equipment down time)    (22.14)

The above uptime formulae do not include non-scheduled time, such as vacations, holidays, and shutdowns. Total utilization includes all time when the equipment is utilized productively. It measures overall asset efficiency and is given by:

Total utilization (%) = Productive time × 100 / Total time    (22.15)
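The three uptime variants differ only in which down-time buckets enter the denominator, so one helper covers all of them. A sketch in Python (the month of data is hypothetical; it mirrors the numbers worked in Section 22.8):

    def uptime_pct(uptime_h, downtime_h):
        """Common form of Equations 22.12 through 22.14."""
        return 100.0 * uptime_h / (uptime_h + downtime_h)

    uptime = 1202        # productive + standby hours
    dt_e = 65            # equipment-dependent down time
    supplier_delay = 5
    user_caused = 8      # e.g., facility-related down time

    print(uptime_pct(uptime, dt_e))                                 # equipment dependent, ~94.87%
    print(uptime_pct(uptime, dt_e + supplier_delay))                # supplier dependent, ~94.5%
    print(uptime_pct(uptime, dt_e + supplier_delay + user_caused))  # operational, ~93.9%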

22.7.2 Overall Equipment Efficiency

Overall equipment efficiency is the most recent high-level equipment performance metric. It was developed in Japan as an equipment effectiveness metric to measure the effectiveness of a manufacturing technique called total productive maintenance (TPM). Originally, it was called overall equipment effectiveness; the SEMI Metrics Committee changed the name to overall equipment efficiency [3]. SEMI and the American Institute of Total Productive Maintenance (AITPM) are currently the major sponsors of the OEE metric in the U.S.A. Overall equipment efficiency is an all-inclusive metric of equipment productivity, i.e., it is based on reliability (MTBF), maintainability (MTTR), utilization (availability), throughput, and yield. All of the above factors are grouped into the following three submetrics of equipment efficiency.


1. Availability
2. Performance efficiency
3. Rate of quality.
The three submetrics and OEE are mathematically related as follows:

OEE (%) = Availability × Performance efficiency × Rate of quality × 100    (22.16)

Now let us look at each OEE submetric in more detail.

22.7.2.1 Availability

We have already defined availability in Section 22.7.1. Any of the uptime metrics can be used in this equation, depending upon which OEE is being calculated; for example, equipment-dependent OEE calculations use equipment-dependent uptime, and so forth.

22.7.2.2 Performance Efficiency

The performance efficiency is based on losses incurred from idling, minor stops, and equipment speed losses. It is given by:

Performance efficiency = (Theoretical CT × actual PPH) / (Actual CT × theoretical PPH)    (22.17)

where CT is cycle time and PPH is throughput rate in parts (units) per hour.

22.7.2.3 Quality Rate

The quality rate is a measure of output quality and is given by:

Quality rate = (Total parts produced - number of rejects) / Total parts produced    (22.18)

where rejects are defined as any produced parts that do not meet the production criteria.

22.7.2.4 Simple OEE

There is a simple and quick way to calculate OEE without going into elaborate calculations of the above three sub-metrics.

Simple OEE (%) = (Number of good units produced in t calendar hours) × 100 / (t × theoretical PPH)    (22.19)

Note that this value gives only a rough estimate of the OEE; it gives no indication of where improvement activities should be directed. There are many other ways to calculate OEE, depending upon the use of the measured values. See Ref. [3] for some of the most popular ways to calculate OEE for the semiconductor industry.
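Equations 22.16 and 22.19 are simple products and ratios. A sketch in Python (the good-unit count and calendar hours are taken from the worked example in Section 22.8):

    def oee(availability, performance_eff, quality_rate):
        """Equation 22.16, with the three submetrics given as fractions."""
        return 100.0 * availability * performance_eff * quality_rate

    def simple_oee(good_units, calendar_hours, theoretical_pph):
        """Equation 22.19: good output versus theoretical capacity."""
        return 100.0 * good_units / (calendar_hours * theoretical_pph)

    # 47,895 good wafers in 1304 calendar hours on a 50-wafer/h tool:
    print(simple_oee(47_895, 1304, 50))   # ~73.46%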

22.7.3 Cost of Ownership

Availability and OEE are the most widely used high-level metrics, but they have the following shortcomings. They do not include:
1. Acquisition and operational cost
2. Effect of the production volume
3. Product scrap loss because of poor-quality output
4. Consumable cost
5. Waste disposal cost
6. Taxes, insurance, and interest expenses.


To overcome the above shortcomings, SEMATECH developed a CoO model [4], which calculates the true CoO per good unit produced in a given time period, usually a calendar year. The CoO depends upon the equipment acquisition cost, equipment reliability, equipment maintenance and operational costs, production throughput rate, throughput yield, and equipment utilization. The basic CoO is given by the following equation.

CoO per unit = (FC + OC + YLC) / (P × THP × U × Y)    (22.20)

where:
Fixed costs (FC). The FC are typically determined from items such as purchase price, taxes and duties, transportation costs, installation cost, start-up cost, and training cost. They also depend upon the allowable depreciation schedule and the length of the time period under consideration.
Operating costs (OC). The operating costs for a piece of equipment are consumables, material, maintenance and repair, parts, waste disposal, and operators for the time period under consideration.
Yield loss costs (YLC). Yield loss costs are those associated with lost production units that are directly attributable to equipment performance during the time period under consideration.
Time period (P). The time period under consideration, usually a calendar year, expressed in hours.
Throughput rate (THP). The actual average production rate of the equipment for the time period under consideration, expressed in parts per hour.
Utilization (U). As defined in Section 22.7.1, expressed as a fraction.
Throughput yield (Y). The throughput yield, also known as quality rate, is the fraction of good units produced. It is determined by

Y = (Total units produced - Defective units produced) / Total units produced    (22.21)

Table 22.3 contains an example of a simple CoO calculation. See [4] for more elaborate CoO calculations.
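Equation 22.20 is a single division once the cost buckets are summed. A hedged sketch in Python (the cost figures are hypothetical, loosely scaled to the input data of Table 22.3):

    HOURS_PER_YEAR = 8760

    def coo_per_unit(fc, oc, ylc, period_h, thp, utilization, yield_frac):
        """Equation 22.20: total cost divided by good units produced."""
        good_units = period_h * thp * utilization * yield_frac
        return (fc + oc + ylc) / good_units

    # Hypothetical year: $200k fixed, $850k operating, $250k yield loss,
    # on a 20-unit/h tool at 75% utilization and 0.98 yield.
    print(coo_per_unit(200_000, 850_000, 250_000,
                       HOURS_PER_YEAR, 20, 0.75, 0.98))  # ~10.10 $/unit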

22.7.4 Hierarchy of Equipment Performance Metrics

Figure 22.3 depicts the hierarchy of equipment performance metrics. As shown in the figure, when we add the time dimension to quality and safety, we get reliability. Reliability and maintainability jointly make up availability. When production speed efficiency and production defect rate are combined with availability, they become OEE. Acquisition and operational costs make up life cycle cost (LCC); see Ref. [1] for a detailed description of LCC. When scrap, waste, consumable, tax, and insurance costs are added to LCC and the total is normalized by the production volume, it becomes CoO.

22.8 An Example of Reliability and High-Level Performance Metrics Calculations

22.8.1 Given Values

Assume that the following values are given for the reliability and other performance metrics calculations:
Theoretical throughput = 50 wafers per hour
Productive time = 1200 h
Observed throughput = 40 wafers per hour
Number of defective wafers = 105
Number of failures = 3
Total unscheduled down time = 20 h (including supplier maintenance delay time = 5 h and repair time = 15 h)
PM time = 50 h
Facility down time = 8 h
No-operator standby time = 2 h
Non-scheduled (holiday) time = 24 h.


TABLE 22.3 A Typical Simple Cost of Ownership (CoO) Calculation

CoO Input Data
Equipment acquisition cost = $1,000,000      Equipment life = 5 years, straight-line depreciation
Throughput rate = 20 units/h                 Throughput yield = 0.98
Operation cost = $800,000/year in 2004       Part cost = $50,000/year in 2004
Labor rate = $50/h in 2004                   Preventive maintenance time = 10 h/month
Mean time between failures (MTBF) = 200 h    Mean time to repair (MTTR) = 2 h
Utilization = 75%                            Inflation rate = 4% per year

CoO Calculations
Cost Factor                          2004        2005        2006        2007        2008
Depreciation ($)                  200,000     200,000     200,000     200,000     200,000
Operational cost ($)              800,000     832,000     865,280     899,891     935,887
Repair and maintenance cost ($)    66,320      68,973      71,732      74,601      77,585
Yield loss ($)                    250,000     260,000     270,400     281,216     292,465
Total cost ($)                  1,116,320   1,160,973   1,207,412   1,255,708   1,305,937
Good units produced               128,772     128,772     128,772     128,772     128,772
CoO per unit ($)                     8.67        9.02        9.38        9.75       10.14


22.8.2 Metrics Calculations

The following equipment performance metrics are calculated based on the above values:
Observed mean productive time between failures (MTBFp) = 1200/3 = 400 h
Observed mean wafers between failures (MWBF) = (1200 × 40)/3 = 16,000 wafers
80% lower confidence limit for MTBFp = 400 × 0.701 = 280.4 h
80% lower confidence limit for MWBF = 16,000 × 0.701 = 11,216 wafers
Observed mean time to repair (MTTR) = (20 - 5)/3 = 5 h
Equipment uptime = 1200 + 2 = 1202 h
Equipment-dependent down time (DTE) = (20 - 5) + 50 = 65 h
Equipment-dependent uptime (%) = 1202 × 100/(1202 + 65) = 94.87%
Supplier-dependent uptime (%) = 1202 × 100/(1202 + 65 + 5) = 94.49%
Operational uptime (%) = 1202 × 100/(1202 + 65 + 5 + 8) = 93.90%
Total utilization (%) = 1200 × 100/(1202 + 65 + 5 + 8 + 24) = 92.02%
Simple OEE (%) = (1200 × 40 - 105) × 100/((1202 + 65 + 5 + 8 + 24) × 50) = 73.46%.
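The whole example can be checked mechanically. A short Python script (the variable names are mine; the inputs are exactly the given values above) reproduces each line:

    productive, standby = 1200, 2
    failures, wafers_per_h = 3, 40
    unsched, supplier_delay, pm = 20, 5, 50
    facility, holiday, defects = 8, 24, 105

    uptime = productive + standby                  # 1202 h
    dt_e = (unsched - supplier_delay) + pm         # 65 h
    total = uptime + dt_e + supplier_delay + facility + holiday  # 1304 h

    print(productive / failures)                   # MTBFp = 400 h
    print(productive * wafers_per_h / failures)    # MWBF = 16,000 wafers
    print((unsched - supplier_delay) / failures)   # MTTR = 5 h
    print(100 * uptime / (uptime + dt_e))          # ~94.87%
    print(100 * productive / total)                # ~92.02%
    print(100 * (productive * wafers_per_h - defects) / (total * 50))  # ~73.46%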

22.9 Four Steps to Better Equipment Reliability

The four basic steps to better equipment reliability are:
1. Know goals and requirements
2. Design-in reliability
3. Build-in reliability
4. Manage reliability growth during
   a. Reliability tests
   b. Field operations.


2008 200,000 935,887 77,585 292,465 1,305,937 128,772 10.14


FIGURE 22.3 Hierarchy of high-level equipment performance metrics. (Safety, quality, and time yield reliability (MTBF); with maintainability (MTTR and PM time), availability (uptime); with production speed efficiency and quality/defect rate, OEE; acquisition and operations costs form life cycle cost (LCC); adding scrap, waste, consumable, tax, insurance, and interest costs and normalizing by production volume gives CoO.)

All reliability improvement activities belong to one of these steps.

22.9.1 Know Goals and Requirements

If what is required is unknown, it probably will not be achieved. Therefore, the first step to better reliability is to know the reliability goals and requirements, whether you are an equipment manufacturer or a user. If you are an equipment manufacturer (supplier), you need to:
1. Understand the exact reliability requirements of your customers
2. Be aware of the reliability level of your competitors' products
3. Know what reliability level is required in the marketplace.
Considering the above inputs, set reliability goals for the equipment line at the beginning of each equipment program. If you are a customer, it is your responsibility to make sure that equipment suppliers know your exact requirements. In either case, the reliability goals or requirements must include, at minimum, the items shown in Table 22.4.


22.9.1.1 Goal Allocation

Once the system-level goals are known, the equipment manufacturer must break the equipment- and system-level goals down into "bite-size" goals for subsystems, modules, and components. This makes it relatively easy for subsystem, module, or component engineers to achieve their respective product goals. The process of breaking down the equipment- and system-level goals into the next levels of subgoals, based on some logical justification, is called apportionment, budgeting, or allocation; it is just like breaking division-level budgets down into department-level budgets. One widely used method is the Advisory Group on Reliability of Electronic Equipment (AGREE) allocation method. In this method, appropriate weight factors, based on the complexity and criticality of the components, are used in the calculations as shown below.

MTBFi = MTBFs × (W1 + W2 + ... + Wn)/Wi    (22.22)

where MTBFi is the goal MTBF of the ith component; MTBFs is the system-level MTBF goal; and Wi is the weighting factor for the ith component based on its complexity (1 = simple, 10 = most complex; sometimes the weight factors are based on the inherent failure rate).
An example of the AGREE method calculations: if a system consists of five modules, the system-level MTBF goal (MTBFs) = 500 h, and the weight factors for the modules are W1 = 6, W2 = 10, W3 = 3, W4 = 6, and W5 = 5 (so the weights sum to 30), then:
MTBF goal for Module 1 (MTBF1) = 500 × (30/6) = 2500 h
MTBF goal for Module 2 (MTBF2) = 500 × (30/10) = 1500 h
MTBF goal for Module 3 (MTBF3) = 500 × (30/3) = 5000 h
MTBF goal for Module 4 (MTBF4) = 500 × (30/6) = 2500 h
MTBF goal for Module 5 (MTBF5) = 500 × (30/5) = 3000 h.
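The allocation is a one-line formula applied per module. A minimal sketch in Python (weights taken from the example above):

    def agree_allocate(system_mtbf, weights):
        """Equation 22.22: each module's goal scales inversely with its weight."""
        total = sum(weights)
        return [system_mtbf * total / w for w in weights]

    print(agree_allocate(500, [6, 10, 3, 6, 5]))
    # [2500.0, 1500.0, 5000.0, 2500.0, 3000.0]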

TABLE 22.4 Items That Must Be Included in the Reliability Requirements/Goals

Item                                                      Example
Reliability metric and level that equipment
  should attain                                           MTBF = 700 h
Time factor, age at which equipment should
  attain the reliability level                            Four months after installation
Operational conditions:
  Temperature                                             70°F-75°F
  Humidity                                                40%-45% relative humidity
  Duty cycle                                              12 h/day
  Throughput rate                                         15 wafers/h
  Process to be used                                      High-density plasma etch
  Operator skill level                                    Grade 12 or equivalent
  PM policies to be followed                              Monthly PM policy
  Shipping mode                                           Air-cushioned truck
  Installation procedure                                  Install by a special installation team
Confidence level for attaining the reliability level      80% confidence that observed MTBF is equal to or greater than the goal value
Acceptable evidence for attaining the required level      Attaining the reliability level based on field data, four months after installation


22.9.2 Design-In Reliability

This is the most important and elaborate step toward better equipment reliability. Design-in reliability is a process in which reliability improvement goals are considered concurrently with other technical aspects in every activity of the design phase. Figure 22.4 depicts the design-in reliability process. The six major blocks of the process are:
1. Use proper parts properly
2. Use proper design techniques
3. Design to withstand the effects of external factors
4. Avoid failures through scheduled maintenance
5. Hold design reviews
6. Assess the reliability of the design using modeling techniques.

FIGURE 22.4 Process of design-in reliability. (Step 1: know reliability requirements. Steps 2-A to 2-F: use proper parts correctly (selection and specification control, derating); use proper design techniques (design simplifications, redundancy, protective techniques); design to withstand the effects of external factors (environmental, human, software); avoid failures through scheduled maintenance (periodic preventive and predictive maintenance); modeling and design assessment; design review and design verification. Iterate until the reliability goals are met, then go to the next phase.)

Reference [5] contains a detailed description of each block. The following is a brief summary.

22.9.2.1 Use Proper Parts Properly

Using the proper parts properly is the most crucial block of the design-in reliability process. It consists of the following activities:
Part selection. Before selecting any part and its supplier, determine the part type needed to perform the required functions and the environment in which it is expected to operate. The general rule for part selection is that, whenever possible, the designer should strive to use proven parts in the design and select a supplier who has historically met or exceeded the part reliability requirements.
Part specification. For each reliability-sensitive part, the procurement specification should include:
1. Details of the intended application(s)
2. Reliability requirement level for the intended application(s)
3. Part screening procedure
4. Part qualification procedure
5. Acceptable evidence for attaining the required reliability level.
Part derating. Once the part is selected, perform an analysis to compare the expected stress level for the intended applications with the part's rated (capacity) stress level. A technique known as derating is used to improve design reliability: a part is selected so that it operates at a less severe stress than it is rated as capable of withstanding. For example, if the expected power level for a device is 10 W, select parts rated for significantly more than 10 W. Use the appropriate derating factors for various electronic components given in Table 22.5.
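Applied in reverse, a derating factor sets the minimum rating to look for in a part. A hedged sketch in Python (the helper name and the 0.5 example factor are illustrative):

    def required_rating(expected_stress, derating_factor):
        """Minimum part rating so the part runs at the derated stress level."""
        return expected_stress / derating_factor

    # A 10 W expected load with a 0.5 power derating factor
    # calls for a part rated at 20 W or more.
    print(required_rating(10, 0.5))  # 20.0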

22.9.2.2 Use Proper Design Techniques

Using proper design techniques is another crucial block of the design-in reliability process. It consists of the following activities:
Design simplification. Anything that can be done to reduce the complexity of the design will, as a general rule, improve reliability. If a part is not required, eliminate it from the design. Wherever possible, reduce the number of parts by combining functions.
Redundancy. This is one of the most popular design methods for achieving the needed level of reliability. Redundancy is the provision of more than one part to accomplish a given function, so that all of the parts must fail before causing a system failure. Redundancy therefore permits a system to operate even though some parts have failed, thus increasing system reliability. For example, if we have a simple system consisting of two identical redundant (parallel) parts, the system MTBF will be 1.5 times the individual part MTBF, as derived below.
Protective techniques. These design techniques include means to prevent a failed part or malfunction from causing further damage to other parts. The following are some of the popular protective techniques used in equipment designs:
1. Fuses or circuit breakers to sense excessive current drain and cut off power to prevent further damage
2. Thermostats to sense over-temperature conditions and shut down the part or system operation until the temperature returns to normal
3. Mechanical stops to prevent mechanical parts from traveling beyond their limits
4. Pressure regulators and accumulators to prevent pressure surges
5. Self-checking circuits (and software) to sense abnormal conditions and make the adjustments/compensations necessary to restore normal conditions
6. Interlocks to prevent inadvertent operations
7. Homing sequences for computer shut-downs.
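The 1.5× figure for two parallel units follows from the exponential model of Section 22.1.2; the derivation (standard, sketched here for completeness) is:

R_sys(t) = 1 - (1 - e^(-λt))^2 = 2e^(-λt) - e^(-2λt)

and the system mean life is the integral of R_sys(t) from 0 to infinity:

MTBF_sys = 2/λ - 1/(2λ) = 1.5/λ = 1.5 × MTBF_part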


TABLE 22.5 Derating Factors for Commonly Used Components

Component                                   Stress Category               Derating Factor
Capacitors, general                         Voltage                       0.5
Capacitors, ceramic                         Voltage                       0.5 at <85°C; 0.3 at <125°C
Capacitors, supermetallized, plastic
  film, any tantalum                        Voltage; temperature          0.5 at <85°C; less than 85°C
Capacitors, glass dielectric, fixed mica    Voltage; temperature          0.5 at <85°C; less than 85°C
Connectors                                  Current; voltage              0.5; 0.5
Quartz crystals                             Temperature; power            Less than 125°C; 0.25
Diodes                                      Voltage; current              0.75; 0.5
EMI and RFI filters                         Voltage; current              0.5; 0.75
Fuses                                       Current                       0.7 at <25°C; 0.5 at >125°C
Integrated circuits (all kinds)             Voltage; current              0.7; 0.8
Resistors (all kinds)                       Power; voltage                0.75; 0.8
Thermistors                                 Power                         0.5
Relays and switches                         Power; current                0.5; 0.75 for resistive load, 0.4 for inductive load, 0.2 for motors, 0.1 for filament
Transistors                                 Breakdown voltage;            0.75; less than 105°C
                                              junction temperature
Wires and cables                            Current                       0.6

22.9.2.3 Design to Withstand the Effects of External Factors

The operating environment is neither forgiving nor understanding; it methodically surrounds and affects every part of a system. If a part cannot sustain the effects of its environment, reliability suffers. First, the equipment manufacturer must understand the operating environment and its potential effects. Then, the manufacturer must select designs and materials able to withstand these effects, or must provide methods to alter and control the environmental conditions within acceptable limits. Equipment design engineers must consider means to withstand the following external factors affecting reliability:
1. Heat generation causing high temperatures
2. Shock and vibration
3. Moisture
4. High vacuum
5. Explosion
6. Electromagnetic compatibility
7. Human use
8. Utility supply abnormalities (power, water, gases, chemicals, etc.)
9. Software design.


22.9.2.4 Avoid Failures through Scheduled Maintenance

One way to improve reliability is to minimize the number of failures that occur during operation. This can be achieved in two ways:
1. Select parts that fail less frequently
2. Replace a part before its expected failure time.
The latter method is known as scheduled maintenance (SM). This technique is used when it is not feasible to find a part that fails less frequently. If such a situation is properly comprehended during the design phase, failures can be avoided through one of the following SM techniques.
Periodic preventive maintenance. This is a fixed-period-driven maintenance procedure in which parts that are partially worn out, aged, out of adjustment, or contaminated are replaced, adjusted, or cleaned before they are expected to fail. In this way, system failures are forestalled during system operation, reducing the average failure rate.
Predictive maintenance. This is a condition-driven scheduled preventive maintenance program. Instead of relying on a fixed period of life-units to schedule maintenance activities, predictive maintenance uses direct monitoring of appropriate indicators to determine the proper time to perform the required maintenance activities.

22.9.2.5 Design Review

Design reviews are an essential element of the design-in reliability process. The main purposes of a design review are to assure that:
1. Customer requirements are satisfied
2. The design has been studied to identify possible problems
3. Alternatives have been considered before selecting a design
4. All necessary improvements are based on cost trade-off studies.

Conduct design reviews on a regular basis, from the initial design feasibility study through the pilot production phase. An effective design review team should have representation from each functional area involved in developing the equipment.

22.9.2.6 Reliability Assessment of the Design

Once the equipment design starts taking shape, it must be assessed to determine its reliability level and the system-level effect of each part's failure rate. Modeling techniques determine the analytical value of the reliability level. The failure mode and effects analysis (FMEA) technique determines the system-level effect of part failures; FMEA also identifies improvement opportunities. There are many commercially available software packages that perform modeling and FMEA.

22.9.2.7 Design Verification Tests

Once the design is firm, conduct design verification tests to determine the reliability level under the expected use conditions (see Section 22.10). If the observed reliability level is lower than expected, identify areas for improvement and implement corrective actions (design changes).

22.9.3 Build-In Reliability

Building-in reliability is a process that assures that all parts, subsystems, and modules are made and assembled according to engineering drawings and specifications, without degrading the designed-in reliability or introducing new failure modes. The important steps of this process are:
Assembly instructions. Prepare detailed instructions for each assembly step. These instructions should include the proper parts, materials, step-by-step assembly procedures, tools, limitations, inspection procedures, etc.


Training. To minimize assembly errors, it is essential that every assembly operator be trained in basic assembly methods and in all the assembly operations assigned to him or her.
Burn-in. All parts, and the system itself, should be properly and adequately burned-in, debugged, or stress-screened before shipment.
Product reliability acceptance test (PRAT). Conduct a PRAT on randomly selected units before shipping to assure the reliability level of the product line as it is shipped.
Packaging and shipping. Equipment must be packed properly for the intended shipping mode. Select a shipping mode that does not impart any undue stress on the equipment.

22.9.4 Manage Reliability Growth

Effectively managing reliability growth opportunities is a continuous improvement process. Equipment manufacturers learn from in-house reliability tests (see Section 22.10) and from actual field experience. All problems observed during reliability tests (either in-house or at the customer's site) are documented and given to a central body, such as a Failure Review Board (FRB), for further analysis and disposition. If required, corrective actions are developed and implemented. Similarly, during field operations, customers work with the equipment supplier to collect, record, and analyze equipment failures, both hardware and software. They capture predetermined types of data about all problems observed with a particular equipment line and submit the data to the supplier. The FRB, at the supplier's site, analyzes the failures. The resulting analysis identifies corrective actions that should be developed, verified, and implemented to prevent failures from recurring. A popular system named the Failure Reporting and Corrective Action System (FRACAS) is used to manage this process. As shown in Figure 22.5, FRACAS is a closed-loop feedback communication channel to report, analyze, and remove failure causes. Now let us look at three key elements of FRACAS in more detail.

22.9.4.1 Failure Data Reporting

All failures, observed either during in-house tests or at the customer's site, must be recorded so that all relevant and necessary data are captured in a systematic manner. A simple, easy-to-use form, tailored to the respective equipment line, should be used to record and report failure data (Figure 22.6 depicts a typical failure reporting form). If the data volume justifies the cost of administering FRACAS, the form can be computerized to communicate failure data; the Internet and electronic mail are the most recent ways to report failure data.

22.9.4.2 Failure Review Board

The FRB is a multifunctional, self-managed team that reviews, facilitates, and administers failure analysis. It also participates in assigning, developing, verifying, and implementing the resulting corrective actions. To do this job effectively, all the functional departments involved in the product line must participate on the FRB. FRB members must also be empowered to assume responsibility, investigate failure causes, develop corrective actions, and ensure the implementation of corrective actions.

22.9.4.3 Corrective Action

Any systematic action taken to eliminate or reduce the frequency of an equipment failure (hardware or software) is a corrective action. Such actions may include part design or material changes, part supplier changes, assembly procedure changes, maintenance procedure changes, operational changes, training changes, or software changes. All corrective action plans, and their verification and implementation, should be reviewed by the FRB on a regular basis. The FRB should also maintain a log of corrective action status, including open and closed corrective actions.


FIGURE 22.5 Failure Reporting and Corrective Action System (FRACAS) process flow. (Failure data reports from in-house tests and field operations feed failure analysis and the failure review board; corrective actions are verified and fed back for continuous improvement, yielding reliable equipment.)

22.10 Reliability Testing

No matter how many or how extensive the analyses we perform to calculate the reliability level of equipment, it is almost impossible to account for every factor that affects that level. Even with meticulous reliability modeling software, we cannot theoretically derive the exact reliability level that will be observed in a reliability test or when the equipment is installed at the customer's site. This limitation of theoretical methods makes it necessary to perform reliability tests to find the actual reliability level of the equipment configuration of interest. The tests validate theoretical calculations and provide proof of the expected performance indices. Reliability testing is also a very important activity in reliability improvement programs. Information generated during reliability tests is vital to design engineers for initial designs and subsequent redesigns or refinements, and to manufacturing engineers for fine-tuning the manufacturing process. Reliability tests also provide program managers with vital information on the technical progress and problems of an equipment line. Reliability tests can be performed at any level of integration, i.e., at the component, part, module, subsystem, or system level, and they can be performed during any equipment program life cycle phase. Among the numerous reasons to conduct reliability tests are:
1. To determine the reliability level under the expected use conditions
2. To qualify that the equipment line meets or exceeds the required reliability level
3. To ensure that the desired level is maintained throughout the equipment life cycle phases
4. To improve reliability by identifying and removing root failure causes.


FIGURE 22.6 A typical failure report form. (Fields include the service report number; customer, equipment, and module identification and serial numbers; service category, e.g., install, courtesy, telephone fix, training, warranty, update, PM, UM, development, or other; problem/symptoms/reason for service call; repair and maintenance actions; problem cause; parts replaced or cleaned; equipment status upon leaving; start and completion dates and times; and the service engineer's and customer's signatures.)


22.10.1 Types of Reliability Tests

Since reliability testing is included in all equipment life cycle phases and is conducted for numerous reasons, it follows that the testing includes many types of tests. The following reliability tests are commonly seen during a typical equipment program:
1. Burn-in tests
2. Environmental stress screening (ESS) tests
3. Design verification tests
4. Reliability development/growth tests
5. Reliability qualification tests.


Burn-in tests. These tests are conducted to screen out parts that would fail during the early-life period. They are performed at the part, subsystem, or system level. Most failures observed during these tests are due to manufacturing workmanship errors, poor-quality parts, or shipping damage. System-level burn-in tests are also known as debug tests.
Environmental stress screening tests. As the title indicates, ESS tests are conducted in an operating environment that is harsher (more stressed) than the normal environment of expected use. The main purpose of the test is to weed out parts that would not otherwise fail under the normal operating environment. This test increases confidence that all received parts are of good quality and will last longer (i.e., have better reliability).
Design verification tests. Design verification tests are conducted to ensure that the desired reliability level is achieved during the design phase under the expected use conditions. Most of these tests are run at the system level.
Reliability development/growth tests. Reliability development/growth tests are conducted to ensure that the desired reliability level is achieved during a given equipment program life cycle phase and that it improves (grows) as the program moves along the life cycle phases. Most of these tests are run at the system level.
Reliability qualification tests. These tests are conducted to qualify that the component or equipment meets or exceeds the required reliability level. They are pass-fail tests: if the demonstrated reliability level is equal to or better than the required level and confidence, the equipment (or its program) is considered to meet the requirement, passing the test and qualifying the equipment.

22.10.2 Generic Steps for Reliability Tests

The three overall steps of any typical reliability test are:
1. Test plan development
2. Test conducting
3. Test data analysis and reporting.

22.10.2.1 Test Plan Development

A well thought-out reliability test plan includes the following:
1. Test objectives
2. Hardware, software, and process to be used
3. Operational stresses and environment
4. Resources required (including consumables)
5. Sample size and test length
6. Test procedure
7. Data to be acquired
8. Data form to be used
9. Data analysis techniques
10. Data reporting and reviewing procedures
11. Pass–fail criteria, if required
12. Expected outcome for each test
13. Types of test reports
14. Schedule of key test activities.

The reliability test plan should be formally documented and approved by high-level managers.


22.10.2.1.1 Test Length
The test length depends upon the desired confidence in the test results and the expected level of reliability (MTBF). Tests need to run long enough to build confidence in the results; however, we never have enough resources or time to test for an extended period. Therefore, statisticians have developed a method to determine the minimum test length needed to make a correct decision with the required confidence in that decision. For repairable equipment, the minimum test length needed to ascertain a certain minimum MTBF level (the target MTBF) with a certain confidence is given by

Minimum test length with P% confidence = (Target MTBF) × u    (22.23)

where P% is the desired confidence level, target MTBF is the MTBF to be proved or expected, and u is the appropriate multiplier for P% confidence from Table 22.6. For example, if we need to prove a target MTBF of 100 h with 80% confidence in the decision, the minimum test length is calculated as follows. The target MTBF is 100 h, and u = 1.61 from Table 22.6 for the 80% confidence level. These give a minimum test length of 100 × 1.61 = 161 h. If we observe no failures, or one failure, in the 161-h test, we meet the target MTBF of 100 h.

22.10.2.2 Test Conducting
During this step, the reliability test is conducted according to the test plan. All deviations from the formal test plan should be recorded and approved. A formal log of test events is kept to record the key test parameters associated with each event.

22.10.2.3 Test Data Analysis and Reporting
All the data collected during the test are appropriately analyzed, and conclusions are drawn. Use the formulae given in Section 22.2, Section 22.4, Section 22.6, and Section 22.7 to determine the observed reliability level, the associated confidence limits, and other performance parameters. You can also use the SEMI E10 [2] formulae to calculate the desired metrics. The Failure Review Board and other interested groups should review the test data, results, and conclusions. To close a test project, a formal test report must be issued containing the test objectives, test procedures, findings, conclusions, and recommendations.
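Returning to the test-length calculation: the multipliers in Table 22.6 are consistent with the standard zero-failure exponential demonstration-test factor, u = -ln(1 - P/100). The following is a minimal sketch of the Equation 22.23 calculation under that assumption; the function names are illustrative, not from this chapter.

import math

def test_length_multiplier(confidence_pct):
    """Multiplier u for a P% confidence demonstration test.

    Assumes the zero-failure exponential test relation u = -ln(1 - P),
    which reproduces the Table 22.6 values (e.g., u = 1.61 at 80%).
    """
    return -math.log(1.0 - confidence_pct / 100.0)

def minimum_test_length(target_mtbf_h, confidence_pct):
    """Equation 22.23: minimum test length = (target MTBF) x u."""
    return target_mtbf_h * test_length_multiplier(confidence_pct)

# Worked example from the text: 100 h target MTBF at 80% confidence.
print(round(test_length_multiplier(80), 2))    # 1.61
print(round(minimum_test_length(100, 80)))     # 161 h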

22.10.3 Reliability Tests throughout the Equipment Program Life Cycle Phases

Reliability tests are scattered throughout the equipment program life cycle phases. They play a very important part in the reliability improvement process. Table 22.7 lists the appropriate tests for each phase.

TABLE 22.6 Minimum Test Length Multiplier u

Confidence P%    Multiplier u
10               0.11
20               0.22
50               0.69
75               1.38
80               1.61
90               2.30
95               2.99


TABLE 22.7 Reliability Tests throughout the Equipment Life Cycle Phases

Life Cycle Phase: Reliability Tests
- Concept and feasibility: No formal reliability test
- Design: Part-level reliability qualification; design verification and reliability development; accelerated test
- Prototype: Part-level reliability qualification; design verification; reliability qualification; reliability growth; accelerated test; burn-in; environmental stress screening
- Pilot production: System-level reliability qualification; accelerated test; reliability growth; burn-in; environmental stress screening
- Production: Reliability qualification; product reliability acceptance test; accelerated test; reliability growth
- Phase out: None recommended

22.11 Use of Equipment Reliability Discipline in Business Practices

To achieve equipment reliability goals/requirements, equipment manufacturers and users must include equipment reliability in their business practices as described below.

Equipment suppliers. They must use the equipment reliability discipline throughout the various equipment life cycle phases, as shown in Table 22.8. Besides these activities, equipment suppliers must insist that their component suppliers use equipment reliability in their business practices. Similarly, all dealings with their customers must include equipment reliability. For example, any reference to reliability requirements in either request for quotation (RFQ) or purchase order (PO) documents must include the items shown in Table 22.4.

TABLE 22.8 Uses of Equipment Reliability in Suppliers' Business Practices

Life Cycle Phase: Activities Using Equipment Reliability Discipline
- Concept and feasibility: Goal setting; apportionment; reliability plans; preliminary modeling
- Design phase: Design-in reliability; design assessment and modeling; design verification testing; failure mode and effect analysis (FMEA); part life tests; design review
- Prototype phase and pilot production phase: System test and reliability level assessment; design review; failure reporting, analysis, and corrective action system (FRACAS)
- Production phase: FRACAS; product reliability assurance tests (PRAT)


If a customer does not specify reliability requirements clearly, the supplier must let the customer know that specific reliability requirements are lacking in the PO and must be clarified.

Equipment users. The need for the equipment reliability discipline begins when a user initiates an equipment search. At that time, the user must know the reliability requirements of the equipment they are going to acquire. These requirements must include the items shown in Table 22.4. Whenever a user decides to evaluate equipment in their own factory, the reliability level calculations must be based on the formulae given in Section 22.2, Section 22.4, Section 22.6, and Section 22.7. All RFQs and POs must include appropriate reliability metrics. After the equipment has been purchased and installed in the factory, equipment users must accurately track the reliability performance level. They must implement continuous equipment improvement activities, such as failure reduction or OEE improvement programs. Table 22.9 summarizes the uses of equipment reliability by equipment users in their business practices.

22.12 SEMI E10

This chapter cannot end without describing SEMI E10, a Semiconductor Equipment and Materials International (SEMI) specification for the definition and measurement of equipment RAM. A task force (consisting of semiconductor manufacturing equipment suppliers and users) developed it under the SEMI Metrics Committee. The SEMI E10 Guideline was issued in 1986 and revised several times. With a revision in 1996, SEMI E10 became a SEMI Standard, and it became a SEMI Specification in 2000. See Ref. [2] for the latest revision of SEMI E10 (SEMI E10-0304E).

22.12.1 Benefits of Using SEMI E10

To create synergy between semiconductor equipment suppliers and users, they must work together for mutual gain, understand each other's RAM expectations, and speak the same language when talking about equipment RAM metrics. The SEMI E10 Specification provides this language (a common basis for communication between users and suppliers of semiconductor manufacturing equipment) by providing specifications for measuring the RAM metrics of equipment in manufacturing environments. For equipment suppliers, SEMI E10 RAM metrics are useful at each product life cycle phase, from early equipment design and development through production. From the equipment users' point of view, SEMI E10 provides an industry-wide and company-wide uniform specification to collect, analyze, track, compare (machine to machine, wafer fab to wafer fab, and industry wide), and report equipment RAM data. Accurate collection of the time allocated to each state is essential to calculate accurate RAM metrics. Automation efforts to collect the data are also based on SEMI E10 definitions and formulae. Both CoO and factory capacity analyses use SEMI E10 RAM metrics. They also provide a basis for specifying reliability performance in equipment PO agreements.

TABLE 22.9 Uses of Equipment Reliability in Equipment Users' Business Practices

Users Activity: Activities Using Equipment Reliability Discipline
- Equipment search: Reliability specifications and observed value calculations
- Equipment evaluations: Observed failure rate (or MTBF) and confidence limit calculations
- Sending RFQs: Reliability specifications and all the reliability metrics used in the RFQs
- Sending purchase orders: Reliability specifications and all the reliability metrics used in the POs
- Equipment performance tracking: Observed failure rate (or MTBF) and confidence limit calculations
- Equipment improvement programs: Observed failure rate (or MTBF) and confidence limit calculations
- Using high-level equipment performance metrics: Reliability elements of the high-level metrics


FIGURE 22.7 Categorization of total time into SEMI E10 states. (SEMI E10-0304E. Specification for Definition and Measurement of Equipment Reliability, Availability, and Maintainability (RAM), SEMI International Standard, Equipment Automation/Hardware, San Jose, CA, 2004.) The figure breaks total time down as follows:

- Total time = operations time + non-scheduled time
- Non-scheduled time: unworked shifts, holidays; installation, modification, rebuild, or upgrade; off-line training; shutdown/startup
- Operations time = uptime + downtime
- Uptime = manufacturing time + engineering time
- Engineering time: process experiments, equipment experiments, software qualifications
- Manufacturing time = productive time + standby time
- Productive time: regular production, work for third party, rework, engineering runs
- Standby time: no operator, no product, no support tool
- Downtime = scheduled downtime + unscheduled downtime
- Unscheduled downtime: maintenance delay, repair time, change of consumables/chemicals, out-of-spec input, facility related
- Scheduled downtime: maintenance delay, production test, preventive maintenance, change of consumables/chemicals, setup, facility related

The long-term benefits of SEMI E10's international acceptance and use are improved relationships between users and suppliers of semiconductor (SC) manufacturing equipment, which will stimulate a spirit of cooperation and partnership, promoting further improvements in equipment performance. These will lead to greater business success for both users and suppliers.

22.12.2 Key Elements of SEMI E10

Two key elements of SEMI E10 are (a) events, scheduled, unscheduled, or non-scheduled, that stop equipment from performing its intended functions, and (b) the arrival and departure times of those events. Events are categorized as scheduled or unscheduled, and unscheduled events are called failures. SEMI E10-0304E defines a failure event as any unscheduled downtime event that changes the equipment to a condition where it cannot perform its intended function. Any part failure, software or process recipe problem, facility or utility supply malfunction, or human error could cause the failure. SEMI E10-0304E uses the arrival and departure times of the scheduled, unscheduled, or non-scheduled events to break down total calendar time into various time blocks, as shown in Figure 22.7. These time blocks are defined as equipment states and become the basis for equipment RAM metric calculations similar to those given in Section 22.2, Section 22.4, Section 22.7, and Section 22.8 [2].
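As a rough illustration of how event arrival and departure times drive such calculations, the sketch below buckets a simplified event log into a few E10-style states and computes availability and an MTBF-type estimate. The event schema, state names, and metric definitions here are simplified assumptions for illustration; SEMI E10 itself defines the full set of states of Figure 22.7 and the exact formulae.

from collections import defaultdict

# Each event: (state, arrival_h, departure_h). State names loosely follow
# Figure 22.7; this simplified log assumes non-overlapping intervals.
events = [
    ("productive",             0.0, 140.0),
    ("unscheduled_downtime", 140.0, 145.0),  # an unscheduled event = failure
    ("productive",           145.0, 300.0),
    ("scheduled_downtime",   300.0, 310.0),  # preventive maintenance
    ("standby",              310.0, 336.0),
]

hours = defaultdict(float)
failures = 0
for state, arrival, departure in events:
    hours[state] += departure - arrival
    if state == "unscheduled_downtime":
        failures += 1  # per SEMI E10, unscheduled events are failures

# Uptime here lumps productive and standby time (engineering time would
# also count); one common estimate divides uptime by the failure count.
uptime = hours["productive"] + hours["standby"]
operations = uptime + hours["scheduled_downtime"] + hours["unscheduled_downtime"]
availability = uptime / operations
mtbf = uptime / failures if failures else float("inf")
print(f"availability = {availability:.3f}, MTBF estimate = {mtbf:.0f} h")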


References

1. Dhudshia, V. H. Hi-Tech Equipment Reliability: A Practical Guide for Engineers and the Engineering Managers. Sunnyvale, CA: Lanchester Press, 1995.
2. SEMI E10-0304E. Specification for Definition and Measurement of Equipment Reliability, Availability, and Maintainability (RAM). San Jose, CA: SEMI International Standard, Equipment Automation/Hardware, 2004.
3. SEMI E79-0299. Standard for Definition and Measurement of Equipment Productivity. San Jose, CA: SEMI International Standard, Equipment Automation/Hardware, 1999.
4. SEMATECH. Cost of Ownership Model. Austin, TX: SEMATECH, Inc., 1992 (Technology Transfer #91020473B-GEN).
5. SEMATECH. Design Practices for Higher Equipment Reliability—Guidebook. Austin, TX: SEMATECH, Inc., 1993 (Technology Transfer #93041608A-GEN).


23

Overview of Process Control

Stephanie Watts Butler
Texas Instruments, Inc.

23.1 Introduction to Control of Systematic Yield Loss
23.2 The Control-Type Categories
     Abnormality Control Methods † Compensation (Target Tracking) Control Methods † Advanced Process Control: Combination of Both
23.3 History of Process Control in Semiconductor Manufacturing
23.4 Characterization of Control Needs in Semiconductor Manufacturing
     The Four Expected Sources of Variation Requiring Compensation † Timescale of Variations † Pilots, Look-Aheads, Metrology, and Operational Practices
23.5 Basic Concepts of All Control Techniques
     Process Qualification † Process Capability Indices † Types of Errors: False Positives and False Negatives
23.6 Specific Abnormality Detection and Control Methods
     Univariate Statistical Process Control † Fault Detection by Testing That There Is a Fault † Other Abnormality Control Methods and Use of Equipment Signals † Definition of the Sensitivity vs. Robustness Challenge
23.7 Specific Compensation Control Methods
     When Are Benefits Realizable? † Controller Goals: Tracking the Target, Rejecting Disturbances, and Ignoring Noise † Feedback/Feedforward Control † Common Compensation Control Methods Used for Run-to-Run Control † Real-Time Compensation Control Methods
23.8 Monitoring the Supervisory Run-to-Run Controller and the Controller System Advanced Process Control
     The Other Type I, Type II Errors: Detection of Change in Overall System † Methods for Monitoring the Supervisory Controller
23.9 Continuous Process Improvement
     Benefits of Reducing the Effective Noise of the System † Comparing How Different Machines React
23.10 Summary
23.11 Acronyms and Glossary
References
Further Reading


23.1 Introduction to Control of Systematic Yield Loss

There are several different chapters in this handbook dedicated to control. As these chapters demonstrate, there are several types of control, each aimed at removing yield loss due to a particular source. All of the various control methods combined in their entirety can be viewed as "factory control." Other articles have focused on this synergistic concept of combining the control techniques of different areas and data sources into one holistic approach to controlling the fabrication facility [1]. Thus, this chapter will not discuss the concept of factory control further. However, readers should keep this concept in mind as they learn more of each of the control methods in each chapter.

The increasing importance of systematic yield loss has been documented by a study by Keithley Instruments: "among the findings were that wafer misprocessing now accounts for more yield and reliability problems than contamination" [2]. This misprocessing includes both unintentional and intentional sources. Unintentional misprocessing includes operator and automation errors. Intentional misprocessing is the use of a recipe as requested by the engineer, but one which, coupled with the current state of the equipment, will produce less than the desired results. This chapter will focus on methods other than those for defect and contamination control. In other words, this chapter will focus on methods for detecting and controlling misprocessing, also termed systematic yield loss. Consequently, for the remainder of the chapter, the term "process control" will be used to designate controlling the equipment and the process to ensure that desired results are achieved.

Another reason for the application of process control is the need for improved productivity. In order to meet the historical trend of a 30% per function per year reduction in cost, fab and equipment productivity must improve. In some cases, the yield is already high, but the overhead to achieve that yield is unacceptably costly. As this chapter demonstrates, application of certain types of control leads to improved productivity due to the reduction in pilot and look-ahead usage, with the yield remaining the same or even improving.

Systematic yield loss and misprocessing can be considered to come from two sources. The first source is an abnormality, i.e., unusual process behavior. The second source is expected, but undesirable, variation. Thus, control methods can be divided into three categories:

† Methods based upon detecting abnormalities and correcting them
† Methods based upon actively compensating for expected sources of variation
† Methods based upon a combination of compensation and abnormality detection

In practice, which control method is used depends upon how the engineer and organization view the situation. In other words, two people may view the same situation differently and consequently feel differently about which method is more appropriate. The purpose of this chapter is to provide readers sufficient knowledge of the different methods and their assumptions so that they can judge which method is most appropriate. In addition, this chapter provides a cursory overview of some of the emerging techniques so that readers can identify new technology they feel is necessary for the future. How to locate the additional knowledge, information, and resources required for implementation of any of the methods is also provided. Figure 23.1 explains what information is provided in this chapter and how, along with the corresponding section numbers.
Interdependencies of the various sections are shown so that readers can decide which sections they may desire to read first. The chapter begins with a more complete definition of the three control categories listed above. All control methods discussed in this chapter will be assigned to one of the three control-type categories. A history of how control developed in the semiconductor industry enables the reader to understand why certain techniques have become popular and how the different suppliers came into being. To comprehend which control techniques are appropriate, a thorough understanding of the characteristics of the control need is necessary, i.e., the ability to dissect any systematic yield loss into its control characteristics is necessary before one can judge the appropriateness of any technique.


FIGURE 23.1 Explanation of what is covered in which sections. (The figure maps the chapter's topics and their interdependencies: Section II, the control categories, including abnormality control methods, compensation control methods, and advanced process control; Section III, history, including who was involved, where, and the CIM/MES environment; Section IV, characterization of control needs in semiconductor manufacturing; Section V, process qualification, process capability indices, and error rates; Sections VI and VII, the math/specifics of the abnormality and compensation control methods; Section VIII, the math/specifics of controller monitoring; Section IX, continuous process improvement methods; and Section XI, the bibliography, references, resources, and information.)

The dissection of the control need characteristics also identifies the behaviors of the abnormal and expected variation and the situations in which they will occur. This explanation of the sources of systematic yield loss and how they relate to the different categories of control methods is the foundation for the entire chapter.

The chapter then switches to preparation for the introduction of specific technologies, with first an explanation of process qualification and process capabilities and their role in control. The chapter then becomes more technical as a review of some of the mathematics highlights the assumptions involved. Because of the importance of the concept of error rates, which are encountered explicitly in abnormality control methods and implicitly in compensation control methods, a simplified discussion of error rate causes and relationships will be given before any specific techniques are presented. This review of errors will be referred to frequently in the subsequent sections, since optimizing error rates is one of the predominant drivers when determining which control technique to use. Then specific abnormality and compensation control techniques are presented. The compensation control section also stresses the requirements for successful application of compensation control, i.e., just because the systematic loss is high does not always mean one can apply compensation control to reduce it. The mathematics of abnormality control methods is not covered extensively because there are many excellent texts and articles that cover this area. Instead, mathematical concepts that are then revisited in the section on compensation control are introduced. The chapter concludes with a short review of the final, but most important, items of any control system, i.e., monitoring the supervisory controller and continuous process improvement.

A bibliography is also included, which the reader should find invaluable. Due to the large volume of literature, it would be impossible to review each article. Instead, the bibliography allows the reader to find further information about specific techniques and applications. The bibliography includes pointers to companies and web pages as well. A glossary provided at the end of the chapter assists the reader with definitions and acronyms. It is hoped that the material presented in this chapter will provide a strong foundation to allow the reader to comprehend the other materials referred to in the bibliography.


23.2 The Control-Type Categories

As noted above, systematic yield loss and misprocessing can be considered to come from two sources. The first source is an abnormality, i.e., unusual process behavior. The second source is expected, but undesirable, variation. Thus, control methods can be divided into three categories:

† Methods based upon detecting abnormalities and correcting them
† Methods based upon actively compensating for expected sources of variation
† Methods based upon a combination of compensation and abnormality detection

Each of the above categories is described below, and the major supposition of each of the methods is elaborated. The characteristics of the abnormal and expected variation, and the situations in which they occur, are described in Section 23.4, Characterization of Control Needs in Semiconductor Manufacturing.

23.2.1 Abnormality Control Methods

Table 23.1 lists various groups of abnormality control methods. The supposition is that there is normal variation ("common cause") and abnormal variation ("special cause"). Compensation is assumed to be undesirable or impossible for normal variation, while abnormal variation must be fixed, usually by repair performed by a human. Thus, these techniques can also be considered manual control. The groups of methods listed in Table 23.1 vary in the source of data used by the method, but also in the final purpose of the method. Thus, note that some of the methods do not actually include a methodology for what action to take upon detecting an abnormality, i.e., they are really monitors only rather than controllers. The basis of all the methods is the use of a technique to detect the abnormal variation (fault) in the presence of normal variation. Within a group of methods, as well as from group to group, the biggest difference is the particular fault detection technique used. Because of the normal variation, it is difficult to develop a technique that always detects abnormal variation without also falsely identifying some normal variation as a fault. While different fault detection techniques are developed to be more mathematically appropriate to the physics of the situation, they are also developed to achieve better error rates of erroneously declaring an abnormality. Fortuitously, techniques that are more appropriate mathematically for the situation usually yield better error rates. The mathematics of the error rates will be presented later.
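To make the detection problem concrete, the following is a minimal univariate check of the kind grouped in Table 23.1: a point is flagged as a potential fault when it falls outside ±3-sigma Shewhart limits estimated from in-control baseline data. This is a generic textbook sketch with made-up numbers, not this chapter's specific recipe.

import statistics

def shewhart_limits(baseline, k=3.0):
    """Center line and k-sigma control limits from in-control baseline data."""
    center = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return center - k * sigma, center, center + k * sigma

baseline = [50.1, 49.8, 50.3, 49.9, 50.0, 50.2, 49.7, 50.1]  # qual runs
lcl, center, ucl = shewhart_limits(baseline)

# Flag any new measurement outside the limits as a potential fault.
for run, value in enumerate([50.0, 49.9, 51.9], start=1):
    if not (lcl <= value <= ucl):
        print(f"run {run}: {value} outside ({lcl:.2f}, {ucl:.2f}) -> alarm")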

TABLE 23.1 Terms and Groupings of Process Control Methods That Are Based upon Abnormality Detection and Correction (Manual Process Control)

Statistical process control (SPC)
Statistical process monitoring (SPM)
Multivariate SPC or SPM
Real-time SPC
Equipment monitoring
Excursion detection and control
Fault detection (or fault identification)
Fault isolation
Fault classification
Diagnosis
Fault prognosis

23.2.2 Compensation (Target Tracking) Control Methods

Table 23.2 lists various groups and alternative terms of compensation control methods; run-to-run control is also known by a variety of terms that are listed in Table 23.3. The supposition in compensation control is that there is expected non-random variation and random noise. Compensation is assumed desirable and possible for the expected non-random variation.


TABLE 23.2 Terms and Groupings of Process Control Methods That Are Based upon Compensation for Expected Variations (Automatic Process Control)

Model-based process control
Sensor-based process control
Engineering process control
Algorithmic SPC
Automatic process control
Automated process control
Feedback/feedforward control
Specific types of control algorithms (fuzzy logic, model predictive control, robust control, etc.)
Run-to-run control (batch to batch)
Real-time control (within a batch)

Classically, in the other industries in which compensation control methods were first developed, the compensation was determined by running calculations on computers. Thus, these techniques can also be considered automatic control. This idea of compensating for expected variation can be seen in Figure 23.2, where the goal of the controller is to reduce the variance around the target to the inherent random noise in the system. A concept that Box and Hunter have been introducing in recent presentations is that compensation control transfers the variance from the output, where it is expensive, to the input, where it can be less expensive [3]. Thus, compensation control results in cost savings. The groups of methods listed in Table 23.2 vary in the source of data used by the method, the types of algorithms, and the time scale over which the algorithms run. Within a group, the methods vary in the particular algorithm used for deciding the compensation, and different groups of methods are more appropriate for certain types of variations. The compensation control methods all decide (1) when to make an adjustment, (2) what variables to adjust, and (3) how much to adjust those variables so that the desired results will be achieved. Different methods do a better job of compensation depending upon the particular source of variation, the dynamics of that source, the random noise level, and the particular controller goals that the engineer desires. Tracking the target is typically the predominant controller goal, hence the term "target tracking" for compensation control methods. However, other goals exist in addition to target tracking [4].
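As an illustration of these three decisions, the sketch below implements one widely used run-to-run compensation scheme: an exponentially weighted moving average (EWMA) estimate of the process disturbance fed back through a simple linear process model y = b + gain × u. The model form, numbers, and drift are hypothetical, and a production controller would add the safeguards discussed in Section 23.2.3.

def ewma_run_to_run(target, gain, b0, measure_funcs, lam=0.3):
    """EWMA run-to-run control of a linear process y = b + gain*u.

    lam sets how aggressively the disturbance (intercept) estimate b_hat
    is updated from each post-run measurement.
    """
    b_hat = b0
    outputs = []
    for measure in measure_funcs:
        u = (target - b_hat) / gain                    # how much to adjust
        y = measure(u)                                 # run, then measure
        b_hat = lam * (y - gain * u) + (1 - lam) * b_hat
        outputs.append(y)
    return outputs

# Hypothetical drifting process: the true intercept ages 0.2 per run.
truth = {"b": 10.0}
def measure(u, gain=2.0):
    truth["b"] += 0.2                                  # equipment aging
    return truth["b"] + gain * u

outputs = ewma_run_to_run(target=100.0, gain=2.0, b0=10.0,
                          measure_funcs=[measure] * 10)
print([round(y, 1) for y in outputs])                  # held near the target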

23.2.3 Advanced Process Control: Combination of Both

Compensation control methods should also be aware of unexpected variation (or abnormality). This situation arises when the controller operates on a system with different variances than those for which the controller was designed. In other words, the random and/or non-random variances are different than expected. Although the controller may still drive the output to target, the overall quality of the result is suspect, since operating in this regime has never been qualified. Thus, unmeasured variables and the metrology itself may be out of specification. The desired controller behavior would be for the controller to detect the change in process behavior and generate an alarm. Note that this situation can also arise in the abnormality-based methods when the output appears to be on target but in reality the system has drifted to a state where the metrology and the system are out of specification. In this case, abnormality-based methods have no extra ability to detect the system change. Fortuitously, the compensation behavior provides another way to determine whether the system is behaving as expected.

TABLE 23.3 Alternative Terms for Run-to-Run Control

Run-to-run control
Run-by-run control
Supervisory control
Batch-to-batch control
Recipe adjustment
Recipe generation
Recipe synthesis
Recipe tweaking


FIGURE 23.2 Driving the output mean to the target and shrinking the variance to the inherent random noise. (Plotted as output vs. run number: without control, the mean is off target and the variance includes systematic variation; with control, the mean equals the target and the variance equals the inherent noise.)

In other words, the compensation-based controller can monitor not only the output changes, but also how the output changes in response to a given change in input, to determine whether the overall system has changed behavior, resulting in better detectability. This desire to monitor for abnormalities in the compensation-based controller leads to the concept of merging both methods into one. The merging of both compensation- and abnormality-based methods is called Advanced Process Control (APC) in the semiconductor industry. The supposition in APC is that there is expected non-random (systematic) variation, expected random noise, and unexpected variation (which may be non-random or random). Compensation is assumed desirable and possible for the expected non-random (systematic) variation. Unexpected variation is to be detected and an alarm generated, preventing the controller from operating under these conditions. Thus, a "fault" is a change in the variation of the system away from the expected variation. In other words, APC decides:

† When to make an adjustment
† What variables to adjust
† How much to adjust those variables
† If the system is responding and acting as expected, such as by checking that adjustments are not too frequent or too large, or that the system has not drifted too far (a sketch of such checks follows this list)
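A toy sketch of that last decision: simple guard checks on the controller's own behavior, flagging adjustments that are too frequent or too large and cumulative drift that goes too far. The thresholds here are hypothetical; in practice they would come from the qualified process window and the expected disturbance dynamics.

def check_controller_health(adjustments, max_step=1.0, max_total_drift=5.0,
                            max_adjust_fraction=0.8):
    """Return alarms if the compensation behavior looks abnormal.

    adjustments: the per-run changes the controller made to a setting.
    Threshold values are illustrative assumptions, not standard limits.
    """
    alarms = []
    if any(abs(a) > max_step for a in adjustments):
        alarms.append("single adjustment too large")
    if abs(sum(adjustments)) > max_total_drift:
        alarms.append("cumulative drift beyond qualified window")
    nonzero = sum(1 for a in adjustments if a != 0)
    if nonzero / len(adjustments) > max_adjust_fraction:
        alarms.append("adjusting more often than the dynamics warrant")
    return alarms

print(check_controller_health([0.2, 0.3, 0.1, 1.4, 0.2]))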

Because of the number of decisions made in APC, it is also defined as a documented analysis methodology for deciding how to run a single piece of equipment or a group of equipment to achieve desired results. Another view of APC is shown in Figure 23.3, in which the various components, which run at different time scales, are shown, along with the various controllers, monitoring systems, and data sources. The manufacturing enterprise system (MES) stores the product and process specifications and also tracks work in process (WIP). The recipe is the set of setpoints and parameters for all the controllers on the equipment, i.e., it tells the equipment how to operate. Each layer builds upon the previous layer by using data from the previous layer plus additional information to achieve that layer's goal of abnormality detection or compensation. Metric generation is explicitly shown in Figure 23.3 due to its importance. To emphasize that it is the controller data that are monitored to detect a system fault, the abnormality detection part of APC has also been termed "statistical process control (SPC) on the controller" or "controller SPC."


FIGURE 23.3 Components of advanced process control (APC): optimum performance by integration of each layer. (The layers, from innermost to outermost: regulatory controllers with real-time sensors and actuators; metric generation from real-time data; monitors for the regulatory controllers; the recipe; metric generation from post-run, in-line, and on-line metrology data; the supervisory (run-to-run) controller, using goals and information from the MES; a monitor for the control system; and continuous process improvement.) Real-time regulatory control includes endpointing; MES is the manufacturing enterprise system (also known as the computer integrated manufacturing system); the supervisory run-to-run controller has historically been referred to as "model-based process control"; the monitor for the regulatory controllers has historically been referred to as "fault detection and classification"; and the monitor for the supervisory controller has historically been referred to as "statistical process control on the controller."

It is also common to see APC defined as:

APC = MBPC + FDC

where MBPC is model-based process control, which is commonly used for the supervisory run-to-run controller, and FDC is fault detection and classification, which is a common generic term for monitoring the regulatory controller; it also implies fault classification and fault prognosis. A similar concept, Algorithmic Statistical Process Control, was being investigated at approximately the same time in the process industries [5,6]. The major difference is that Algorithmic Statistical Process Control focuses only on controlling the "quality variables," i.e., only the outer three layers of Figure 23.3 with ex situ data, the layers utilizing at-line and on-line data for supervisory control and monitoring of the supervisory controller. A similar philosophy of merging the best of SPC and automatic control was also being investigated by a few in the statistical and control communities [7–14]. These investigations varied in whether the focus was on real-time or run-to-run control. Generally, however, the focus was on only one of the loops (real time or run-to-run), rather than on the synergistic linkage of compensation control and abnormality detection at all levels.

23.3 History of Process Control in Semiconductor Manufacturing

While real-time compensation control techniques had existed for a few decades in the petrochemical, process, and aerospace industries, the development of control systems in the semiconductor industry initially was not based upon work in these other industries.


While some of the academic and industrial control people were involved in more than just the semiconductor industry, an explanation of the history will show that this independence was due predominantly to the systems environment: the lack of real-time process and product sensors on commercial equipment and the focus on Statistical Process Control, which evolved into run-to-run (batch-to-batch) control and real-time controller monitoring.

Initially, most semiconductor processing equipment had very little process control other than temperature controllers on furnaces. However, by the mid to late 1980s, pressure controllers, mass flow controllers, and temperature controllers were becoming quite common on most tools. Lithography exposure equipment had better controllers for dose, focus, and alignment. Feedback control of unit processes was now being recognized as necessary for consistently achieving future smaller device geometries. As part of this recognition, the Semiconductor Research Corporation (SRC) and SEMATECH began focusing funding in the area of process control. Control was considered part of Computer Integrated Manufacturing (CIM) or Factory Science, and therefore this topic was included in the CIM and Factory Sciences areas. An SRC workshop (the first CIM workshop) was held at the University of California at Berkeley in 1986. A second workshop was held the next year at the Massachusetts Institute of Technology. These workshops helped focus attention on process control. An SRC workshop on real-time tool control was held in February 1991 in Canada, co-hosted by Techware (now Brooks Automation Canada). An SRC/DARPA CIM workshop was held in August 1991. SEMATECH then began hosting Advanced Equipment and Process Control workshops from the summer of 1991; the list of workshops is included in Table 23.4. Initially, these workshops focused on real-time process control. However, due to the lack of commercial real-time process sensors, the apparently huge hurdle to be overcome by process equipment suppliers, and a very strong interest by semiconductor manufacturers, the workshops became increasingly focused on run-to-run control and regulatory controller monitoring. Semiconductor manufacturers could implement these controls without involvement of the equipment supplier, and large benefits could be achieved in both these areas.

While the workshops were driving the academic, industrial, and equipment research communities, a revolution was occurring on the factory floor. The need for increased control and the incompatibility of traditional Statistical Process Control (SPC) with many semiconductor processes were resulting in the invention of compensation control and advanced abnormality detection methods by fab process engineers [15]. So, although the typical fab process engineer was not versed in control theory, they were also working in areas where traditional control theory mostly did not exist: run-to-run (batch-to-batch) control and regulatory control monitoring (batch monitoring). In addition, statisticians heavily assisted the fab engineers, a situation unusual in the process and petrochemical industries. The result was a different theoretical background for compensation control. Finally, the strong link to the Manufacturing Execution System (MES) was not present in other industries. Thus, the unique systems environment hindered the application of traditional process industries control theory to the semiconductor industry, while simultaneously enabling the development of this new area of run-to-run control. In addition, because of the strong Statistical Quality Control fab environment, the integration of various levels of control resulted in the concept of Advanced Process Control (APC) presented in the last section.

TABLE 23.4 SEMATECH Advanced Equipment and Process Control Workshops

I     30 July to 1 August 1991, Austin, Texas
II    3–5 March 1992, Mesa, Arizona
III   October 1992, Austin, Texas
IV    19–22 April 1993, Dallas, Texas
V     October 1993, Dallas, Texas
VI    Fall 1994, San Antonio, Texas
VII   Fall 1995, New Orleans, Louisiana
VIII  Fall 1996, Santa Fe, New Mexico
IX    Fall 1997, Lake Tahoe, Nevada
X     Fall 1998, Vail, Colorado

The title of the workshop has changed through the years.


By the mid-1990s, traditional and modern control theory relevant to run-to-run control was being integrated into the fab-invented, statistically based techniques [16,17]. In addition, the regulatory control monitoring (equipment signal monitoring) theories from semiconductor manufacturing and the process industries were being cross-fertilized. Also, changes at process equipment vendors and the emergence of commercially available process sensors were resulting in a re-emergence of real-time control focus. While control theory from other industries is being exchanged, the unique systems environment has resulted in unique systems solutions for the semiconductor industry; for further explanation of the systems issues, see Ref. [18]. Because of the emphasis in the industry on run-to-run control, regulatory control monitoring, and the concept of Advanced Process Control (APC), these areas will be more heavily focused upon in this chapter. The ability to understand exactly what the control needs are, i.e., why abnormality and compensation control methods are needed, is imperative to deciding which control technique is appropriate. This understanding will also provide insight into how control evolved in the industry, since it was focused on meeting the needs of the semiconductor industry. The next section presents a dissection of the different control needs in the semiconductor industry, based upon the historical observation of which control needs are common from process to process and fab to fab.

23.4 Characterization of Control Needs in Semiconductor Manufacturing

The benefits of control have been observed by many and presented in a variety of articles, as well as at the SEMATECH workshops [19,20]. Table 23.5 provides a list of benefits that can be achieved with the right control method. The benefits can be considered direct and indirect. Direct benefits impact the cost or quality of manufacturing in an easily observed manner. Indirect benefits lead to cost or quality improvements in manufacturing, sales, or design in a less obvious manner. There are other benefits that are not listed in Table 23.5 because they are software dependent. Two such software-enabled benefits are the removal of paper logbooks from the fab and the integration of qual data and production data into one database. All the benefits listed in Table 23.5 can be lumped into three categories, as shown in Figure 23.2 and Figure 23.4.

TABLE 23.5 Detailed Benefits due to Advanced Process Control

Direct Benefits
- Reduced pilot and look-ahead usage
- Reduced number of quals and qual time
- Reduced set-up time
- Reduced cycle time (due to qual and set-up time)
- Reduced scrap (outlier reduction)
- Accommodation of inherent machine/chamber differences
- Reduced (tighter) distribution around target
- Increased equipment uptime and productivity
- Decreased equipment repair time
- Decreased post-maintenance recovery time
- Improved data-driven maintenance and qual schedules
- Improved qualification of equipment kits
- Reduced capital costs (increased life of equipment)
- Data-driven sampling plans

Indirect Benefits
- Improved productivity of operators and engineers
- Improved customer satisfaction
- Continuous improvement of equipment (Pareto of sources of variation)
- Faster learning cycle for process development (faster device ramp)
- Process and product understanding
- Accommodation of greater device diversity (increased process flexibility)
- Fab-to-fab standardization and sharing
- Increased confidence in alarms
- Increased ability to redesign products based on tighter process control


FIGURE 23.4 Outliers vs. expected typical distribution and ways in which APC results in improvements. (The figure shows a distribution whose average is off target, with outliers in the tail; the improvements are to (1) increase productivity, (2) eliminate the outliers, and (3) improve the typical distribution by driving the average to the target and reducing the variance around the target.)

The compensation controllers continuously drive the output to target, thereby improving the distribution. The abnormality-based methods detect outliers and prevent continued misprocessing. When both methods are used, as in advanced process control, the sensitivity to changes is increased, and the number of outliers is reduced further. The productivity improvement is fortuitous and is due to the automation, the applicability of the mathematics, and the heavy emphasis on decision-making in APC. As discussed in Refs. [19,20], it is the productivity improvement that is most needed. We will discuss these benefits again in Section 23.7.1.

As noted in the introduction, different methods perform better in different situations, and applying appropriate statistics is important to achieve the needed benefits [21]. Consequently, a good understanding of the situation in which the methods will be applied is required to understand which method will work satisfactorily and why the benefits listed in Table 23.5 and Figure 23.4 can be achieved. Also, some of the productivity benefits are due to the way pilots (non-sellable wafers) and metrology are used in the semiconductor industry. Throughout this section, the reader should consider whether the variation is:

† Expected random variation
† Expected non-random variation
† Non-expected variation

As mentioned in Section 23.2, compensation methods are aimed at correcting for expected non-random variation, and abnormality-based methods are aimed at detecting non-expected variation. For both types of methods, expected random variation is considered natural and should not be classified as a fault, nor should compensation be attempted.

23.4.1 The Four Expected Sources of Variation Requiring Compensation

The variation for which control compensates can be categorized into the four sources given in Table 23.6. These sources of variation require compensation, i.e., a different action must be taken, in order for the desired results to be achieved. Note what is not included in Table 23.6: an unstable process is not included, and a machine that breaks frequently is not included. Thus, while compensation is a tool that every engineer should have in their toolkit, it is not the only tool. Dynamics can be defined as the non-random behavior of the system over time, i.e., how the output would change with each run if no compensation were used.


TABLE 23.6 Four Expected Sources of Variation Requiring Compensation with Explanation and Examples

1. Within-run dynamic process or dynamic target
   / Strong function of time within a single run
2. Disturbances (a)
   / Equipment aging, such as calibrations, chamber build-up, part wear
   / Machine maintenance, such as chamber clean, kit replacement
   / Inherent chamber-to-chamber differences
   / Wafer state (called "loading" in etch)
3. Change in feedforward values (b)
   / Incoming wafer state, due to the results of earlier processes, such as thickness
   / Machine attributes, such as tube age, sputter target age
4. Change in process or product goal (e.g., set point, target)
   / Different device
   / Different step in flow (sidewall oxide etch as opposed to contact oxide etch)

Usually one does not want to compensate for ALL disturbances and feedforward changes, just the expected ones.
(a) A disturbance which can be measured and whose impact on the controlled output can be modeled can become a feedforward variable.
(b) An unmodeled or unmeasured change in a feedforward variable is a disturbance.

Disturbances themselves have a particular dynamic behavior, given in Table 23.7. Feedforwards and goal changes can be viewed as step changes, i.e., dynamics so fast as to be unmeasurable. Understanding the dynamic behavior is important because the compensation technique is dependent upon the specific dynamics of the system. In other words, what the process engineer calls the systematic variation, the control engineer calls the dynamic behavior. A technique for step dynamics may not work as well for a disturbance whose dynamics are a moderate ramp. The size of the change is also considered when designing the compensation technique. If the expected change is large, more aggressive action may be required than if small changes are encountered.

23.4.2 Time Scale of Variations

Related to the theme of dynamics is the time constant of the system, in other words, the amount of time it takes before changes are observed.

TABLE 23.7 Equivalent Dynamics and Size of Change for Disturbances

1. Equipment aging, such as calibrations, chamber build-up, part wear
   Dynamics: slow-to-moderate run-to-run dynamics
   Size: the faster the dynamics, the greater the changes over 10 lots
2. Machine maintenance, such as chamber clean, kit replacement
   Dynamics: step function
   Size: may range from small to very large, typically large
3. Inherent chamber-to-chamber differences
   Dynamics: none
   Size: may be significant, but can be driven to near zero with considerable work
4. Wafer state (called "loading" in etch)
   Dynamics: step function when switching between products, due to differences in topography and open area
   Size: if the effect exists, it can be considerable
   Dynamics: may be a step function or slow dynamics if due to previous equipment disturbances
   Size: if the effect exists, it is smaller than the effect due to different products
5. Major fault, such as a mass flow controller (MFC) failing or rapidly degrading
   Dynamics: step function or within-run dynamics (fast run-to-run dynamics)
   Size: may range from small to very large, typically large

A disturbance that can be measured and whose impact on the controlled output can be modeled can become a feedforward variable. Even if the disturbance becomes a feedforward, the dynamics still exist.


There are actually several different sources of variation within a semiconductor process system. Each of these variations and their associated time scales are given in Table 23.8. Table 23.8 assumes a single-wafer processor, i.e., that each run consists of a single wafer. Not all systems are single wafer; batch furnaces, for example, are not. However, because single-wafer systems represent the most complexity, only they will be analyzed. Note that there are continuous changes and discontinuous changes. For example, between each maintenance, there is a continuous slow drift on the order of a fraction of the maintenance cycle, i.e., the changes can only be observed over 100 or more wafers (actually over several lots). However, there are also discontinuous changes, which occur approximately every lot (24 wafers). The same continuous and discontinuous variations occur when observing what happens within a time scale of 24 wafers. Finally, changes occur within a single run itself. Figure 23.5 presents a plot of an example of within-run variation; the example is metal etching in a plasma. While there is considerable variation within a single run, the functionality is repeatable. In other words, the mean value or length of a region may increase or decrease from run to run, but the fact that one region has a higher mean value than another region will repeat from run to run. The length of a region may vary due to different incoming or targeted film thicknesses, as well as process drifts over larger time scales (e.g., within a maintenance cycle). The mean value may change due to sensor aging, process aging, or changes in incoming wafer state. The SMALLEST size of variation is typically within a lot, wafer to wafer, and within a region.

TABLE 23.8 Dynamics of Each Time Scale Variation and Their Causes (Assumes a Single Wafer Processor)

† Maintenance cycle-to-maintenance cycle (every >20,000 wafers)
  – Discontinuous change (step function)
  – Repairs, chamber cleans, preventive maintenance kit replacements
  – Attempts to return to the same ideal machine state are rarely successful
  – Maintenance can produce the largest changes
† Within a maintenance cycle (~100–1000 wafers)
  – Continuous change
  – Gradual build-up on chamber, machine wear, sensor drift
† Lot to lot (every 24 wafers)
  – Discontinuous change (step function)
  – Due to incoming wafer state
  – Due to current process (if plotting data from lots of a given type)
    † Other lots' processes run between lots of this process
  – Undocumented maintenance; changes generally occur between lots
† Within a lot [assuming one chamber (1–10 wafers)]
  – Continuous change
  – "First wafer effect" (which may last for more than one wafer)
    † Warm-up effect, de-gassing, different steady-state chamber state
  – Due to incoming wafer state from a previous process which has a first wafer effect
† Wafer to wafer (every one wafer)
  – Discontinuous change
  – Due to incoming wafer state
    † For example, left/right track effect of upstream lithography step
    † Different chambers of a cluster tool used for upstream processing (1–2–3–1–2–3...)
  – Due to current process
    † Randomness caused by process start-up, repeatability of equipment controllers
  – Undocumented maintenance; changes can also occur wafer to wafer
† Within a wafer, further broken down into (Figure 23.5):
  – Region-to-region
    † Discontinuous change
    † Due to different processing conditions (different recipe step) or due to different materials exposed
  – Within a region
    † Continuous change
    † Due to changes in materials exposed and heating effects


FIGURE 23.5 Example of within-a-wafer variation: metal etch. (The figure plots optical emission intensity vs. time in seconds for a roughly 120-s metal etch; labeled regions indicate the materials exposed, Al, TiN, TiN/Ti, and oxide overetch, and the recipe steps 1–5, with the trace beginning once the gas pressure is stable and the plasma is ignited.)

It is imperative to understand the main sources of variation for both fault detection and control. Many years of observation have shown that faults generally occur between lots; faults are next most likely to occur between wafers. One wants to catch the fault on or before the first wafer after the fault occurs. However, this can be very challenging, considering that large variations are expected to occur between lots and that the fault may appear small to the measurement technique. Thus, detection between wafers is easiest, but it can ONLY occur if measurements are made on every wafer. In-line metrology is rarely done on every wafer. Consequently, only in situ or on-line metrology (metrology mounted on the process equipment, or equipment sensors), which can easily be used for every wafer, can be used to detect faults between wafers.

An additional challenge is caused by the variety of processes run on a single tool. The difference between processes is usually much larger than the size of a fault. Thus, one might propose to analyze each process separately to increase sensitivity to the fault. However, due to all the expected variations that occur to the machine, and because a given process will not be run in huge volume, the variability across lots of a single process can be huge too. Therefore, the concept of global modeling, where a model is used to normalize all processes to each other, is used to allow increased sensitivity. This concept is also used in feedback/feedforward control and is the predominant source of productivity improvement due to the elimination of pilots.

Understanding the time scales is important for control because they are part of the dynamics analysis. As discussed above when introducing dynamics, the compensation technique is selected based upon the dynamics. Thus, if the major source of disturbance is machine variation caused by maintenance, a different control scheme will be designed than if the major source of disturbance is the first wafer effect.

23.4.3 Pilots, Look-Aheads, Metrology, and Operational Practices

Twenty to forty percent of wafers processed are pilots (non-production wafers). Another significant fraction are look-aheads, wafers from a production lot that are processed and analyzed before the rest of the production lot is processed. Pilots cannot be sold, and look-aheads have a huge impact on cycle time and throughput; thus, both are undesirable. One reason for pilots is that some current wafer metrology is contaminating, destructive, or cannot work with topography. Another usage of pilots is for chamber conditioning after maintenance. However, manufacturing flexibility and the lack of direct measurement of the equipment/chamber state are the actual reasons for the bulk of pilot and look-ahead usage.


usage. Manufacturing flexibility mandates that a single piece of processing equipment process a variety of wafers. Thus, concern arises about how one process (called "A") will impact the chamber and thereby change the result of another process (called "B"). Without any on-tool measurements that indicate the chamber state, a wafer measurement must be used to indicate the chamber state. Therefore, when switching between processes A and B, a pilot or look-ahead is run first to confirm the chamber state. If there is confidence that process B will most likely be acceptable or need only small tweaking, a look-ahead is used; if there is less confidence, a pilot is used. The same dilemma arises after maintenance: will the chamber be different and cause different process results? Again, pilots or look-aheads are used. Regardless of process interactions, because processes A and B may be very different, without advanced control, process A data were analyzed separately from process B data. Thus, if the last 10 lots run were all of process A, a pilot or look-ahead is run before process B, not because A is suspected to have cross-talk with process B, but because no data are available to predict how process B will run. If a global process model can be created to relate process A data to process B data, then a look-ahead need not be run. It is this ability to create a global model that reduces the number of pilots used in a production fab. In some instances, if processes A and B interact, a global model may be developed which predicts this interaction. Modeling and understanding the interaction is much more difficult; thus, global models of interacting processes are infrequent in the fab, while those of non-interacting processes are quite common. While advanced control can reduce the pilot usage for machines which run more than one process, only better on-the-tool metrology can reduce pilot usage after maintenance.

23.5

Basic Concepts of All Control Techniques

Before discussing specific techniques, there are several concepts that the reader must comprehend. The first is process qualification and how it pertains to control. The next is process capability indices and how they are used to rank control performance. The final concept is error rates and their usage in tuning the control algorithm.

23.5.1

Process Qualification

Because this chapter is on process control, it would be negligent not to discuss the role of process qualification. Thus, before specific control techniques are presented, a methodology for identifying the right control technique and ensuring its ability to control the process satisfactorily will be introduced. Included in an over-all semiconductor processing quality system is a methodology for qualifying a process for use in the manufacture of production material. The methodology for qualifying a process is commonly known as a qual plan. The most well-documented qual plan in the open literature is the SEMATECH Qual Plan [22,23]. Domain Solutions markets software to assist in conducting a qual plan, based on one created at IBM [24]. An important part of a qual plan is to determine that the associated metrology is capable. Gauge studies, also known as R&R studies, which signify the analysis of repeatability and reproducibility variances, are used to assess the metrology capability. Variance component studies are also done on the process to determine that it will meet specifications with the desired yield and to determine the sources of variability. This study is done with the desired control system in place; thus, evaluating the quality of the controller is part of qualifying a process. An adequate control system will guarantee that methods and practices exist which ensure the capability values will be maintained in the actual production environment. In developing the controller, the sources of variation that will be encountered must be considered in order to design an adequate controller.

23.5.2

Process Capability Indices

Process capability indices are used to assess the process's ability to achieve yield. The two most common over-all metrics to assess the process's capability are Cp and Cpk. These indices were created in the statistical quality control field. However, they are a useful metric even when evaluating compensation controllers.


Cp strictly evaluates the process’s variability compared to the specification limits on that process:

$$C_p = \frac{\mathrm{USL} - \mathrm{LSL}}{6\sigma} \tag{23.1}$$

where USL is the upper specification limit, LSL the lower specification limit, and σ the standard deviation of the process. Cpk also considers the mean of the process, i.e., how centered the process is within its specification limits:

$$C_{pk} = \min(C_{pL}, C_{pU}) \tag{23.2}$$

where

$$C_{pU} = \frac{\mathrm{USL} - \bar{X}}{3\sigma}, \qquad C_{pL} = \frac{\bar{X} - \mathrm{LSL}}{3\sigma}$$

and $\bar{X}$ is the average of the process results.

The desired values of Cp and Cpk are commonly said to be 2 for a 6σ process. Figure 23.6 shows a graphical representation of Cp and Cpk with a value of 2, along with a 1.5σ shift in the mean yielding a Cpk value of 1.5. Note that when the mean shifts and the variance does not, as shown in Figure 23.6, the Cp value remains unchanged. A Cpk of 2 means that only 2 parts per billion (ppb) are outside the specification limits, while a value of 1.5 means that 3.4 parts per million (ppm) will be out of specification. Note that a 2 ppb loss translates to a yield of 99.9999998% and a 3.4 ppm loss to a yield of 99.99966%. Because many statistical problems exist with Cp and Cpk, such as the assumption of both an upper and a lower specification limit and of only one source of random Gaussian variability, there is considerable work in the statistics field on alternative process capability indices. The greatest issue is the lack of appreciation of the large sample size required to obtain a value with good confidence, but this situation would be true for any metric involving variance.
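To make the indices concrete, the following is a minimal sketch in Python; the function name and the use of the sample standard deviation are illustrative choices, not taken from this chapter:

```python
import numpy as np

def capability_indices(samples, usl, lsl):
    """Estimate Cp (Equation 23.1) and Cpk (Equation 23.2) from process data."""
    x = np.asarray(samples, dtype=float)
    mean = x.mean()
    sigma = x.std(ddof=1)                 # sample standard deviation
    cp = (usl - lsl) / (6.0 * sigma)      # spec window vs. 6-sigma process spread
    cpu = (usl - mean) / (3.0 * sigma)    # one-sided index, upper
    cpl = (mean - lsl) / (3.0 * sigma)    # one-sided index, lower
    return cp, min(cpu, cpl)              # Cpk penalizes an off-center mean
```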

[FIGURE 23.6 Cp and Cpk process capability indices; Cpk considers the average value while Cp does not. Normal curves plotted on a ±6σ axis with specification limits and a ±1.5σ mean shift: the centered case gives Cp = 2.0 and Cpk = 2.0; the shifted cases give Cp = 2.0 and Cpk = 1.5. A Cpk value of 2 corresponds to 2 ppb outside of specification; a Cpk value of 1.5 corresponds to 3.4 ppm out of specification.]


Regardless of the statistical issues, Cp and Cpk provide engineers a target at which to aim, and they also convey the importance of centering the process and shrinking its variability. Two alternative metrics for judging a process's performance, which are independent of the specification limits, were developed in the process industries control area [4]:

$$\mathrm{MISE} = \sqrt{\frac{\sum_{i=0}^{t} \mathrm{error}_i^2}{t}} \tag{23.3}$$

$$\mathrm{MIAE} = \frac{\sum_{i=0}^{t} |\mathrm{error}_i|}{t} \tag{23.4}$$

where error_i is target − Y at time i, Y the controlled output, MISE the mean integral squared error, and MIAE the mean integrated absolute error. These two metrics are influenced both by how centered the process is and by its variability. MISE is the analog of the standard deviation, but with the average replaced by the target; in other words, MISE can be considered the standard deviation around the target. MIAE is the analog of the mean absolute deviation and, just like the mean absolute deviation, is less sensitive to outliers. Thus, process modifications improve MISE by a greater fraction than MIAE if the modifications remove outliers ("spikes"). These two metrics do not convey information on yield, but they do indicate the impact of process and equipment modifications. The goal is to continuously shrink the values of MIAE and MISE.
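A minimal NumPy sketch of the two metrics (the function name is an illustrative assumption):

```python
import numpy as np

def mise_miae(outputs, target):
    """MISE (Equation 23.3) and MIAE (Equation 23.4) of a controlled output."""
    err = target - np.asarray(outputs, dtype=float)
    mise = np.sqrt(np.mean(err ** 2))  # standard-deviation-like, but around the target
    miae = np.mean(np.abs(err))        # mean-absolute-deviation analog; outlier tolerant
    return mise, miae
```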

23.5.3

Types of Errors: False Positives and False Negatives

The following discussion is based upon linear statistics for white Gaussian random variation, but the simplified concepts are applicable to any situation. Assume that one is attempting to detect a shift in the mean only, with no change occurring in the level of variation around the mean. As will be discussed later, this is the situation usually assumed in abnormality control methods. A sample of size N will be used to detect the shift. Consequently, there are two probability distributions of concern: the original distribution and the distribution shifted by an amount Δ. The two distributions are shown in Figure 23.7. A t-test is used to determine whether a shift has occurred [25]. Table 23.9 defines the two types of errors which can occur, with the costs associated with each. As Figure 23.7 shows, the overlap of the two distributions results in the two types of error. It can be seen that as Δ decreases, β will increase if α is held constant.

[FIGURE 23.7 Type I (α) and type II (β) errors from the comparison of the mean values of two normal probability distribution functions for a sample of size N. The mean values are a distance Δ apart; the standard deviation (σ) is the same for both distributions. The overlap of the distributions defines the rejection region of H0 (power) and the acceptance region (β).]


TABLE 23.9 Types of Errors

Type I (alpha, α): false positive
  Description: detect an abnormality when it has not occurred
  Costs due to: wasted time trying to track down a fault that does not exist; unnecessary machine downtime

Type II (beta, β): false negative
  Description: not detect an abnormality when it has occurred
  Costs due to: shipment of bad product; loss of quality; loss of improvement opportunity due to the inability to see the impact of a factor

In other words, for a given sample size, smaller shifts are more difficult to detect. As the sample size N is increased, the distributions narrow, since the standard deviation of the sample mean is the standard deviation of the individuals divided by the square root of N. Thus, as the sample size increases, for a constant shift of size Δ, β will decrease if α is held constant (or vice versa). In other words, increasing the sample size makes a shift easier to detect. To summarize, Table 23.10 lists the variables which determine the percentage of type I and type II errors (α and β, respectively) for any fault detection method. Which mathematical technique is selected is based upon its appropriateness to the situation and the availability of software and computer hardware. The size of fault that needs to be detected is based upon the necessary quality that must be achieved. However, for the mathematics, it is not the absolute size of the fault that matters, but rather the size of the fault relative to the normal variation, usually represented by the standard deviation. The greater the sample size, the lower the type I and type II errors. However, larger sample sizes may take a longer time to gather, which may increase the amount of time before a change is detected. In addition, many times the sample size is determined by the physics and cost of sampling. Consequently, the remaining choice is between α and β: once a value for ONE of these variables is selected, the value of the other is also determined. As described in Table 23.9, there are costs associated with both of these errors. Thus, the trade-off between α and β is really an optimization of the economics. The values of the error types, for a given relative fault size to detect and sample size, are determined by how the mathematical technique itself is set up or tuned. In other words, each fault detection technique generally has at least one parameter associated with it; a given value for the parameter(s) determines the values of α and β for a given sample size, normal variation level, and desired fault size to detect. Many of the methods also include sample size as an explicit parameter in the equations, further highlighting the importance of the sample size. Another way error type information is calculated and explained is with the average run length (ARL). This metric is useful for cases where analytical calculation of error rates is difficult. The ARL is calculated by performing many simulations for a given fault size and standard deviation of the noise ("common cause variation"). The number of runs before a fault is detected, the run length, is recorded for each simulation, and the average of these run lengths is then calculated. The ARL for a fault size of zero provides a metric for type I error; the ARL for a given non-zero fault size provides a metric for type II error. Thus, a chart or table of ARL vs. fault size per normal standard deviation is used to compare different fault

TABLE 23.10 Variables Involved in Determining α and β Error Rates

† Mathematical technique used
† Size of fault being detected when compared with normal variation
† Type I error (α)
† Type II error (β)
† Sample size of the data


detection methods and/or different values of the method’s parameters. While rarely done, the whole distribution of run lengths could also be compared.
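To illustrate how such an ARL table is generated, the following Monte-Carlo sketch computes the ARL of a plain 3σ individuals chart for a given mean shift; the chart choice, limits, and simulation counts are assumptions made for the example:

```python
import numpy as np

def average_run_length(shift_in_sigma, limit=3.0, n_sim=5000, max_run=100000, seed=0):
    """Average run length of a 3-sigma individuals chart under a mean shift.

    shift_in_sigma = 0 gives the in-control ARL (related to type I error);
    a nonzero shift gives the out-of-control ARL (related to type II error).
    """
    rng = np.random.default_rng(seed)
    lengths = []
    for _ in range(n_sim):
        for run in range(1, max_run + 1):
            if abs(rng.normal(shift_in_sigma, 1.0)) > limit:
                lengths.append(run)   # first alarm after `run` observations
                break
        else:
            lengths.append(max_run)   # no alarm within the simulation horizon
    return float(np.mean(lengths))
```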

23.6

Specific Abnormality Detection and Control Methods

Abnormality detection methods used in run-to-run control will be covered under compensation methods. Generally, abnormality detection and control methods are part of a total quality management (TQM) program [26]. Due to the volume of existing textbooks on SPC and TQM, TQM will not be covered here, and SPC methods will only be quickly reviewed.

23.6.1

Univariate Statistical Process Control

The most common of the abnormality-detection-based methods in the semiconductor industry is statistical process control (SPC). There are several books dedicated to the application of SPC [27]. SPC has been practiced in the semiconductor industry for at least 20 years and is very widespread [28]. SPC is an entire methodology, including addressing what actions to take upon detection of an abnormality and how to inform the operators of these needed actions. In traditional SPC, the expected variation is assumed to be described by a normal (Gaussian) distribution occurring around a mean. In other words, the errors around the mean are assumed to be independent and identically distributed (IID) normal: the value of each error for every measurement comes from the same distribution, and each value is independent of the previous value, i.e., the errors are uncorrelated. This assumption for the distribution of y is represented as

$$y = \mu + \varepsilon \tag{23.5}$$

$$\varepsilon \sim \mathrm{IIDN}(0, \sigma) \tag{23.6}$$

where y is the measurement, μ the mean (average) of the distribution for y, ε the random error in measurement y, and IIDN(0, σ) the independent and identically distributed normal (Gaussian) distribution with mean 0 and standard deviation σ. Another common way of representing the distribution of y is

$$y \sim N(\mu, \sigma) \tag{23.7}$$

where N(μ, σ) is the normal (Gaussian) distribution with mean μ and standard deviation σ. In SPC, an abnormality is assumed to be a shift in the mean of this distribution (μ) or a change in the standard deviation of the normal distribution (σ). In SPC, the abnormality detection technique used is based on the statistics and charting of the data. Different types of statistics have different associated charting methods; thus, the specific fault detection techniques are usually called "XYZ charts," with XYZ denoting the specific statistic used. Note that many times there may be more than one actual chart per given technique. Different techniques exist because different physical situations require different mathematics, and because of efforts to obtain better type I and type II errors. Also, different techniques may test different hypotheses: some test whether the mean (μ) has shifted, while others test whether the standard deviation (σ) has changed. Due to the statistics, it requires a much larger sample size to detect a change in the standard deviation than a change in the mean [25]. In addition, some may argue that changes in the mean are more likely to occur, although we will address this suggestion again in the section on using equipment signals. Consequently, charts to detect changes in the mean are much more common. Incidentally, a change in the variance can also cause a test for a change in


the mean to trigger. Thus, although the fault detected is incorrect (a mean change when it is actually a variance change), an alarm is still generated indicating that something is abnormal. The most common of the SPC charts is the Shewhart chart, also known as an XBar–R (Average–Range) chart [29]. Although XBar–R charts are the most well known, the I-MR (Individuals-Moving Range) chart is often more appropriate: the across-sample and run-to-run variation must be the same for an XBar–R chart to be appropriate, and the variation within a wafer, wafer to wafer, or lot to lot is rarely the same, thus forcing the use of a single sample (an "individual"). To decrease type II error, "supplementary run rules" are used with the Shewhart chart. These rules are generally known as the Western Electric (WECO) rules, in recognition of the source of their well-known application. While WECO rules decrease type II errors, they also increase type I error. Other charts, such as CUSUM, have been developed to provide better type I and II errors for smaller relative fault sizes. These other charts also lend themselves naturally to single-sample-size applications. However, these alternative chart types still have not achieved wide application, mainly due to software limitations.
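A minimal sketch of an individuals chart with one supplementary run rule (the 4-of-5 beyond 1σ rule, one of the WECO rules); the function name and the rule subset are illustrative assumptions:

```python
import numpy as np

def individuals_chart_alarms(x, mean, sigma):
    """Return indices violating the 3-sigma limit or the 4-of-5 beyond 1-sigma rule."""
    z = (np.asarray(x, dtype=float) - mean) / sigma
    alarms = set(np.flatnonzero(np.abs(z) > 3.0))       # classic Shewhart limits
    for i in range(4, len(z)):
        window = z[i - 4:i + 1]                          # the last five points
        if (window > 1.0).sum() >= 4 or (window < -1.0).sum() >= 4:
            alarms.add(i)                                # 4-of-5 run rule, one side
    return sorted(alarms)
```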

23.6.2

Fault Detection by Testing That There Is a Fault

Note that traditional statistical process control methods look for an abnormality by testing the hypothesis (H0) that the system IS as expected. For example, to test that the current mean equals the expected mean:

$$H_0: \mu = \mu_0 \tag{23.8}$$

where μ is the current mean and μ0 the expected mean. On the other hand, one could explicitly test that the system is NOT as expected by testing for a particular fault, for example:

$$H_0: \mu = \mu_0 + \Delta \tag{23.9}$$

where Δ is the size of the fault being tested for. For Equation 23.8, only data from when the system is "good" are required to set up the fault detection method; however, to actually understand the α and β values, a fault size must be assumed. For Equation 23.9, data from when the system is "good" and from when the system is "bad," i.e., experiencing the fault, are both required. Thus, if a given fault is known to occur, the methodology corresponding to Equation 23.9 will generally yield better fault detection capability. However, if no particular fault is expected, the methodology represented by Equation 23.8 is the only approach available. Even if Equation 23.9 is used, Equation 23.8 should also be used to catch all unexpected faults. While this concept of using "bad" data to create an abnormality detection method is beginning to be presented more frequently in the literature, it is still rarely encountered in industrial practice.
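The α/β/N/Δ relationship behind these hypotheses can be made concrete with a short sketch under a normal approximation; the function name and the default α (the two-sided 3σ value) are assumptions made for illustration:

```python
import numpy as np
from scipy import stats

def type2_error(delta, sigma, n, alpha=0.0027):
    """Beta for detecting a mean shift `delta` with sample size n at significance alpha."""
    z_crit = stats.norm.ppf(1.0 - alpha / 2.0)   # acceptance limit in z units
    shift = delta / (sigma / np.sqrt(n))         # shift in std-errors of the mean
    # probability that the shifted sample mean still lands inside the limits
    return stats.norm.cdf(z_crit - shift) - stats.norm.cdf(-z_crit - shift)
```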

23.6.3

Other Abnormality Control Methods and Use of Equipment Signals

Table 23.1 includes many methods in addition to univariate SPC, such as multivariate SPC and real-time SPC. In the semiconductor industry, multivariate methods are predominantly used on equipment signals. Because equipment signals are measured during processing, the term "real time" has been associated with SPC on equipment signals. Because many of the methods used with equipment signals are broader than univariate SPC, the generic term "fault detection" is also generally used for abnormality control applied to equipment signals. The sources of most of the equipment signals are regulatory controllers on the equipment. This application of abnormality control to the regulatory controllers is the layer labeled "Monitor Regulatory Controllers" in Figure 23.3. The regulatory controllers will maintain the controlled output signals close to the setpoints specified in the recipe, even under the influence of moderate faults. However, the actuator values required to maintain the controlled outputs at their setpoints in the face of


faults are usually significantly different from those in the situation with no faults. Thus, the actual source of real-time data used in the fault detection technique is the actuator values from the regulatory controllers. The mathematics used in equipment signal monitoring ranges from very simple to quite complex; the reader is referred elsewhere for a discussion of the mathematics [1]. The bibliography at the end is also useful for identifying typical conferences, proceedings, and web sites which cover this topic more extensively. The methods are similar to those used for monitoring the run-to-run controller (covered in Section 23.8). However, due to the volume of variables and the within-run time aspect, real-time controllers are more difficult to monitor.

23.6.4

Definition of the Sensitivity vs. Robustness Challenge

In the section discussing error rates, the concepts of false positives and false negatives were introduced. Let us define sensitivity as the ability to detect errors more easily, i.e., a more sensitive test detects smaller errors. Thus, sensitivity is related to false negatives: the smaller the false negative rate, the more sensitive the technique. Let us define robustness as the ability of the method to function correctly across a wide variety of expected variation. In other words, robustness is related to the false positive rate: the lower the false positive rate, the more robust the method is considered. Based upon the discussion of error rates, it should now be obvious that sensitivity and robustness are trade-offs. This trade-off is specifically highlighted because of the difficulty of achieving a method that is both robust and sensitive. As already mentioned several times, some methods are more appropriate to a particular situation than other methods and thus will yield better error rates; consequently, they will provide better sensitivity and robustness.

23.7

Specific Compensation Control Methods

23.7.1

When Are Benefits Realizable?

To ensure successful application of a compensation control technique, three items should be considered:

1. Requirements for compensation control to be feasible
2. Definition of goals/improvements to be achieved
3. Probability of success

One must understand that certain requirements must be met for compensation control to be possible. The most basic requirement is that the system must NOT be independent and identically distributed (IID). IID was discussed in the section on univariate SPC. Note that the errors (or random error) can be IID, which is actually preferable. This requirement is the same as saying that the mean itself shifts and drifts. Table 23.11 lists the remaining requirements, which relate to the concepts of controllability, observability, and predictability.

TABLE 23.11 Requirements for Model-Based Process Control

1. Appropriate situation and predictable
   – Shifts or long-term drifts affect machine productivity
   – Typical process variations can be characterized
   – Changes occur infrequently/slowly enough to get adequate disturbance and process models
   – Currently needs periodic adjustment
2. Controllable
   – Adjustments to actuators/settings affect the outputs to be controlled
   – Adjustments to actuators/settings can be made quickly enough
   – Adjustments to actuators/settings can be made with enough precision
3. Observable
   – Data collected can be related to the output to be controlled
   – Data are/can be collected frequently enough
   – Measurement dead time is short enough
   – Good signal-to-noise ratio


In other words, one can predict future changes based upon current and past data, and one can cause alterations in the system to counteract the changes predicted to occur so that they are never observed in the output. Regardless of the need for productivity or yield improvement, if these basic requirements are not met, then compensation control is not possible. A clear understanding by everyone of which goals are to be achieved is required to ensure that the specific controller selected is likely to achieve those goals. Table 23.5 lists a variety of possible goals; implementers and their managers should discuss which goals are desired before possible algorithms are ever discussed. One goal not listed in Table 23.5 is that the particular process selected for implementing control may enable implementation at subsequent similar processes. For example, while the improvements expected for this process may be small, it allows the development of the infrastructure necessary for more complicated, high-visibility processes which are expected to realize large improvements. The probability of success relates to the basic systems environment, resource management, and change management. These topics are outside the scope of this chapter; however, their importance in instituting a new methodology which is predicated upon automation and interaction with the current CIM environment cannot be stressed enough.

23.7.2 Controller Goals: Tracking the Target, Rejecting Disturbances, and Ignoring Noise

While the final goal of implementing control is usually one of the benefits listed in Table 23.5, the controller itself usually has a particular goal, and it is fine-tuned to meet this goal. In other words, particular controller mathematics is used which optimizes a particular controller goal in order to achieve an over-all process improvement goal. At first thought, it would seem that tracking the target is the only goal of a controller. However, random noise is also present. The three classic goals of any controller are given in Table 23.12. Trade-offs are required between these goals. Controllers with fast response which aggressively attempt to remove the influence of disturbances and process dynamics are more susceptible to noise. This susceptibility is due to the amount of data required to distinguish between a real change in the mean, caused by a disturbance or process dynamics, and a data point in the tails of the normal process variation. Thus, just as for abnormality control methods, type I and II errors are also of interest in compensation control methods, although they are rarely discussed in a typical control textbook. In the compensation case, a type I error can be viewed as the controller making an unnecessary action, which increases the variance around the target. A type II error can be viewed as the controller not taking a needed action that would have reduced the variance around the target. As the discussion on type I and II errors mentioned, increasing the sample size requires a greater time for data collection. Thus, a controller with a given algorithm which reduces its type I error, without changing its type II error, will be more sluggish to respond. Just as different fault detection algorithms provide better type I and II errors for a given situational application, so do different compensation algorithms provide better target tracking, disturbance rejection, and noise insusceptibility. We will discuss this concept again in the section on SPC-based compensation control methods. In traditional petrochemical industry control, stability can be a real issue; in fact, in those industries, stability considerations may dominate the design and tuning of the controller. However, in run-to-run control, it is rarely a large concern.

TABLE 23.12 The Three Classic Goals of a Controller

1. Track the target without lag
   – The output should immediately go to the new target (set point) when it is changed
   – The output should remain on target even under the influence of process dynamics
2. Prevent disturbances from influencing the output
   – The output should remain on target even when disturbances occur
3. Ignore random noise
   – The controller should not increase product variance by responding to spurious (not real) fluctuations


Thus, this lack of focus on stability has also led to a difference in the approach to control between the semiconductor industry and the petrochemical industry.

23.7.3

Feedback/Feedforward Control

All compensation control techniques can be classified as feedback or feedforward; note that Table 23.2 lists feedback and feedforward control. Figure 23.8 illustrates the two types of controllers. Feedback uses measurements of the current process results to decide how to change the process for the next sampling period. Feedforward uses measurements of incoming materials, process, or equipment to decide how to change the current process. Thus, feedback control drives the average value to target (i.e., drives Cpk to equal Cp). Feedforward control, because it accounts for incoming variations, can improve the Cp value, i.e., it turns apparently random variation into non-random variation, which can be compensated for. Based on Table 23.6, feedback control is used to compensate for expected disturbances, while feedforward control is used to compensate for measured disturbances that have been modeled. Target (setpoint) changes are encountered in both feedback and feedforward control.

23.7.4 Common Compensation Control Methods Used for Run-to-Run Control

There are a variety of control methods in the control literature, although many are not currently deployed in the semiconductor industry. In this section, how run-to-run (batch-to-batch) control evolved in the semiconductor industry will be covered first. Control methods from the petrochemical industries will be introduced next. Then the critical issue of deadtime will be introduced. The section concludes with an example that illustrates many of the concepts introduced here.

23.7.4.1

The Creation of SPC-Based Controllers in the Semiconductor Industry

As mentioned previously, statistical process control, an abnormality detection and control method, has been practiced for many years in semiconductor manufacturing. However, in several cases, traditional SPC just would not, and could not, work [15,30]. The assumption that the expected variation can be described as IID, i.e., that the values are uncorrelated, did not hold. Because of the variety of processes run on a single piece of equipment, a given process produced a small sample size, causing statistical problems. In addition, the assumption that the machine was "out of control" when an SPC test alarmed was not correct: to many, "out of control" meant the machine needed to be repaired, and yet it was not broken. Others interpret the philosophy of using a "knob" to re-tune the process when an alarm is given as compatible with SPC concepts [31,32]. However, the alarm frequency was greater than the traditional

[FIGURE 23.8 Feedback and feedforward controls. Wafers flow from a deposition step through thickness metrology to a downstream etch step; a feedback controller uses the measurement to set the deposition settings for the next run, while a feedforward controller uses it to adjust the settings of the current etch step.]


SPC expectations, i.e., traditional SPC expects alarms to be infrequent and an exception to typical behavior. Thus, because of the frequency of the knob adjustments, ensuring consistency in how different operators used the knobs, and documenting what values they used, was extremely important. In the mid-1980s, the search began for a method to determine:

† When to re-adjust the process
† How much to re-adjust the process

In addition, the emphasis was on:

† Ease of implementation, due to the frequency of the adjustments
† Prevention of over-controlling, due to the inventors' SPC background
† Ability to use the methodology in an environment with a small number of runs per process

Thus, the SPC chart gained a new role: to determine when to adjust the process. By allowing the SPC chart to trigger when to adjust the process, over-control could be avoided. It was also believed to be easy to implement, since the fab already had SPC charts in place, and the operators and engineers were already used to taking an action when an SPC failure occurred. However, a traditional SPC chart would not be appropriate, since a controlled output would be serially correlated, which violates SPC chart assumptions. Thus, a regression control chart is used [33], on which the residuals (actual value − predicted value) are plotted. If the predictions are good, the residuals should be IIDN(0, σ), and an SPC chart is appropriate. It is assumed that a model is used for the predictions. When the model is no longer centered, the predictions will become poor, causing a fault to trigger on the chart indicating that the model needs to be tuned. Of course, in such a case the residuals may also become autocorrelated, i.e., the residuals will not be independent (the first "I" in IIDN). Due to the autocorrelation, the error rates (α and β) will not equal the values expected from an SPC chart set up on truly IIDN(0, σ) data. However, in practice, tuning based upon an SPC chart has worked as expected by the engineers, who were used to operators manually tuning based on an SPC chart of the controlled output. To handle the small run sizes, it was envisioned that models could be used to combine data from different processes into one controller. Thus, if different processes were used for different thicknesses, a model was used for thickness to combine all the data into one model:

$$\hat{y} = mx + b \tag{23.10}$$

where ŷ is the model prediction for output y (e.g., thickness); m the slope of the model (model parameter, model coefficient, gain); x the recipe setting (e.g., time); b the offset, which is the tuned model parameter; and y the output being modeled (e.g., thickness). The next question was what data to use to tune the process. An obvious answer was to use the data involved in the WECO failure [30,31]. To show how the process was tuned, b was tuned with the data in the WECO violation set by

$$b_{\text{new}} = b_{\text{old}} + \frac{\sum_{i=1}^{N_W} (y_i - \hat{y}_i)}{N_W} \tag{23.11}$$

where N_W is the number of runs involved in the WECO test failure (e.g., the 4 or 5 runs involved in a 4/5 WECO run rule failure), called the violation set. Every time the model is tuned, the history is re-set; in other words, the historical data used to calculate the next SPC test do not include any data up to and including the run on which the last SPC trigger occurred. In addition, the history is re-set after maintenance. If the process is expected to return to a baseline state, the b value is re-set to its original baseline value. However, because the process may act significantly differently after each maintenance, test runs may be used to calculate the b value:


$$b_{\text{new}} = b_{\text{old}} + \frac{\sum_{i=1}^{L} (y_i - \hat{y}_i)}{L} \tag{23.12}$$

where L is the number of test runs (usually 1) and b_old the last value of b or the baseline value of b. To calculate the setting (x) to be used in the recipe, the value of x for the next run is found by solving Equation 23.10 with the prediction set equal to the desired target value T, using the new b value from Equation 23.11 or Equation 23.12:

$$\hat{y} = T = mx + b \;\Rightarrow\; x = \frac{T - b}{m} \tag{23.13}$$

where b is the most up-to-date value of b (from Equation 23.11 or the best estimate) and T the desired target value for output y. The above methodology became known as "model-based process control" in the semiconductor industry because it deployed a process model as part of the controller. A simple graphical representation is shown in Figure 23.9. Table 23.13 lists the algorithm with some generalizations that will be made clear in later sections; representative equations to use in the algorithm are shown as sub-bullets in Table 23.13. It is the tuning of the model that provides feedback, i.e., the above algorithm is a form of feedback control. Control for setpoint changes occurs when the value of T is changed in Equation 23.13, resulting in new settings values. While Figure 23.9 shows the idealized case of a linear process, no noise, and no model mismatch, experience has proven that the method works even in the presence of these complications. Model mismatch occurs when the coefficients of the model other than the offset (i.e., the gains) are not equal to the true values of the process, or when terms, such as a quadratic term, are missing from the model. To use this method, a model needs to be fitted to the process data to obtain a value for m and an initial estimate of b in Equation 23.10. Sometimes the equations used are not of the form above, and it may appear that a model of the form of Equation 23.10 is missing; however, the same mathematics is implied, i.e., the equations used can be put in the form above. For linear single input-single output systems, Equation 23.10 seems unnecessary because ŷ = T can usually be substituted into Equation 23.11 and Equation 23.12 and an algebraic solution is possible, as shown in Equation 23.13. However, using an explicit model form is good practice because such a form is necessary for non-linear, multivariable systems, as will be discussed later.
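The core of the method fits in a few lines of code. The following sketch, with hypothetical function names, implements the model inversion of Equation 23.13 and the violation-set tuning of Equation 23.11/Equation 23.12:

```python
def solve_setting(target, m, b):
    """Invert the linear process model y-hat = m*x + b for the recipe setting (Eq. 23.13)."""
    return (target - b) / m

def tune_offset(b_old, measured, predicted):
    """Shift the model offset by the mean residual of the violation set (Eqs. 23.11/23.12)."""
    residuals = [y - y_hat for y, y_hat in zip(measured, predicted)]
    return b_old + sum(residuals) / len(residuals)
```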

[FIGURE 23.9 Graphical representation of model-based process control for the case of a low-error, linear process. Output (modeled variable) vs. input (setting on machine): the original model and original recipe miss the target; the model is adapted (tuned offset) and the recipe is re-solved so that the predicted output returns to the target value of 10.]


TABLE 23.13 Generalized Semiconductor Model-Based Run-to-Run Control Algorithm (with example equations and the relation to Figure 23.9)

1. A model is created to predict the output as a function of the input (process model)
   † Equation 23.10, Equation 23.25, others
   / In Figure 23.9, the lower line is the original (untuned) model
2. The model is solved (optimized, inverted) to determine what value of the settings is predicted to give output values equal to the target. Note: while settings are calculated each run, they are not necessarily different each time
   † Equation 23.13, Equation 23.27, others
   / In Figure 23.9, for a target value of 10, X = 1.8
3. Wafers are run with these settings values in the recipe
4. Measurements are made for the run
5. The output metric (actual value) is calculated from the measurements
   † For example, average thickness = average(measured values)
   / In Figure 23.9, the actual output is higher than the target
6. The actual output is compared to the predictions to decide whether the model should be tuned; if not, skip to step 8
   † Statistical process control chart; deadband, Equation 23.24; others
   / In Figure 23.9, the need to tune is determined
7. The model is tuned (provides feedback) according to the disturbance model
   † Equation 23.11 if production run; Equation 23.12 if qual run; Equation 23.22 or Equation 23.23 with the λ value for production or qual runs; others
   / In Figure 23.9, the new tuned model is shown (adapted model, dotted line)
8. Go to step 2
   / In Figure 23.9, the new solution results in output on target
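To see steps 2 through 8 of Table 23.13 in action, here is a toy simulation of one controlled output under a slow drift; all numbers are hypothetical, and the tuning step uses the EWMA form of Equation 23.23 referenced in step 7:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 5.0                      # model gain (assumed known and correct)
b_true, b_model = 2.0, 2.0   # true process offset and the controller's estimate
target, lam = 10.0, 0.3

for run in range(25):
    x = (target - b_model) / m                   # step 2: invert the model (Eq. 23.13)
    b_true += 0.05                               # disturbance: slow run-to-run drift
    y = m * x + b_true + rng.normal(0.0, 0.05)   # steps 3-5: run wafers and measure
    b_model += lam * (y - (m * x + b_model))     # step 7: tune the offset (Eq. 23.23)
```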

23.7.4.1.1 Process Model and Disturbance Model

Equation 23.10 through Equation 23.12 can also be reformulated to highlight two important concepts:

† The process model, which predicts how a change in the input will cause a change in the output
† The disturbance model, which predicts how the output changes over time (runs)

Using the concepts of the process model and the disturbance model, an equivalent equation can be presented:

$$\hat{y} = \text{Process model} + \text{Disturbance model} \tag{23.14}$$

Thus, by comparing Equation 23.14 with Equation 23.10, Equation 23.11, and Equation 23.12, it’s obvious that:

$$\begin{aligned} mx &= \text{Process model} \\ \text{how } b \text{ is tuned} &= \text{Disturbance model} = \text{Equation 23.11 and Equation 23.12} \end{aligned} \tag{23.15}$$

It is the formulation of Equation 23.14 that allows expansion into other non-linear and multivariable process models and more advanced disturbance models. It also explicitly highlights that, to control the process effectively, one must capture the inherent behavior of the system to input changes (the process model) AND be able to predict the way disturbances will impact the system (the disturbance model).


Thus, Equation 23.14 demonstrates how feedback control is a mechanism for compensating for the disturbance.

23.7.4.1.2 The Biggest Benefits of "Model-Based Process Control"

Several major benefits come out of the above methodology and are included in the list of Table 23.5. Models were originally used because processing was needed at more than one desired target, due to different products needing different results or due to the same machine being used for two different processes in the flow. Thus, a model-based methodology was created to allow one control strategy for both processes. Once the controller could handle two processes, it could handle infinitely many [30], providing increased process flexibility. In addition, since all the data were related through the model, each product run became the "test run" for the next product run, thereby reducing the need for look-ahead and qualification runs, which decreased pilot usage too. This ability to model more than one process is the "global modeling" concept introduced in the section on characterizing control needs. With the automation of the above methodology, engineering, technician, and operator productivity was improved. By allowing the controller to decide when qualifications were necessary based upon the data, and by using the controller to perform quals, including those for post-maintenance recovery, qual runs, qual times, and post-maintenance recovery time were all reduced. Using the controller data to determine the cause of alarms speeds the repair time. The models can be unique per machine chamber, and the controller can be used to drive chamber results to sameness, removing the effect of chamber/hardware differences. Other benefits listed in Table 23.5 will be discussed in the section on controller monitoring.

The controller described above was applied to the plasma-enhanced chemical vapor deposition (PECVD) of tetraethyl orthosilicate (TEOS) [15]. The Cpk value was increased from 2.5 to 4.5. The number of quals was reduced from one per day to one every 3 days. Because one pilot wafer was used for each qual, reducing quals also reduced the associated number of pilots. The time spent doing quals was reduced from 1 h per qual to 15 min per qual. Other examples of improvements are given in Ref. [15]. One item to note is that the SPC-based methodology appears to be so simple as to not require automation to achieve the benefits cited. However, in practice, the necessity of automating the calculations to achieve adequate quality and consistency has been demonstrated. On the other hand, it has been found that entering the new settings into the equipment does not necessarily require automation, although increased quality will be achieved with automatic recipe download to the equipment.

23.7.4.1.3 The Biggest Issues with SPC-Based Process Control

One issue with WECO-based tuning is the resulting "bang–bang" nature of the output. Bang–bang control is the simple style of control used for home thermostats: the setting can take on only two values, on or off. The result is that the temperature alternates between above and below the desired target temperature, but is rarely at the target temperature [4]. Similarly, with the WECO tuning, the output generally alternates between above and below the target. Contributing to this behavior is the small sample of data used in the tuning.
Additionally, lag can sometimes be encountered due to waiting for an SPC failure. The issue of type I and II errors needs to be addressed differently when the response action is an automated control action versus a manual control action. Table 23.14 compares the costs of type I and II errors for the case of expected variation needing compensation only, i.e., the possibility of the system encountering unexpected variation is ignored for this discussion. As can be seen from this table, the trade-off between type I and II errors should be different for automated and manual control: in the case of automated control, minimizing type II errors receives a heavier focus due to the smaller cost associated with type I errors. Obviously, this trade-off must be kept within reason, or the overall cost will not be favorable due to considerable overcontrolling. Finally, as Equation 23.14 highlights, one must have a good disturbance model to effectively compensate for the disturbances. In many cases, the SPC-based tuner


TABLE 23.14 Comparison of Error Costs for Manual vs. Automated Control for the Case of Expected Variation Needing Compensation

Type I: take control action when none is needed
  Costs in manual case: wasted time trying to track down a fault that does not exist; unnecessary machine downtime; if an action is finally taken, it will result in overcontrol (increased variation around target) and will need to be manually undone in the future
  Costs in automated case: the control action is easily taken by the system; the action will result in some increase in variance around the target; the action will need to be undone, but the control system will detect the overcontrol within the next few runs and undo the action itself

Type II: not take control action when one is needed
  Costs in manual case: loss of quality
  Costs in automated case: loss of quality

does not adequately represent the disturbance's dynamic behavior. As noted several times, when the mathematics more adequately represents reality, type I and II error rates are generally improved.

23.7.4.2

The Creation of EWMA-Based Controllers in the Semiconductor Industry

As the search continued for a method with better type I and type II trade-offs, many tried the exponentially weighted moving average (EWMA) to tune the b value of Equation 23.11 [34–39]. The EWMA has the distinction of being known as an EWMA chart to SPC experts [40], as a first-order digital filter to control practitioners [4], and as an integrated moving average time series, represented as ARIMA(0,1,1), to statisticians [41]. Its invention in three different fields attests to its natural applicability to many processes, i.e., it was expected that an EWMA would be a good representation of the disturbance dynamics. The equivalence of the first-order digital low-pass filter to an EWMA can be seen from the EWMA equation

$$Z_t = \lambda V_t + (1 - \lambda)\, Z_{t-1} \tag{23.16}$$

where Z_t is the filtered value at time t, V_t the variable to be filtered, and λ the filter factor (the inverse of the filter time constant). Note: whether the filter factor multiplies the new observation or the previously filtered value varies from article to article and from software package to software package. This discrepancy is responsible for many an engineer spending wasted hours trying to track down a math error that is in fact a misunderstanding of the filter factor usage. To use the filter for prediction, the following equivalence is used:

$$\hat{V}_{t+1} = Z_t \tag{23.17}$$

where V̂_{t+1} is the prediction of the variable V for the next run (time = t + 1), and Z_t the filtered value of variable V for the current run (time = t). Another way to represent the EWMA, one that shows more clearly how the EWMA corrects itself using a fraction of the prediction error, is to rearrange Equation 23.16 and note that Equation 23.17 says the prediction for variable V at time t is the filtered value Z at time t − 1:

$$Z_t = Z_{t-1} + \lambda (V_t - Z_{t-1}) = Z_{t-1} + \lambda e_t \tag{23.18}$$

where e_t is the error in predicting V_t, i.e., e_t = V_t − Z_{t−1}.


The EWMA is used for control by filtering the model offset b, i.e., defining

$$V_t = b = y - mx \tag{23.19}$$

$$b_{\text{new}} = Z_t \tag{23.20}$$

$$b_{\text{old}} = Z_{t-1} \tag{23.21}$$

and substituting into Equation 23.16:

$$b_{\text{new}} = \lambda\,(y - mx) + (1 - \lambda)\, b_{\text{old}} \tag{23.22}$$

or equivalently, substituting Equation 23.19 through Equation 23.21 into Equation 23.18:

$$b_{\text{new}} = b_{\text{old}} + \lambda\,(y - \hat{y}) \tag{23.23}$$

The value for λ is determined by what results in the best control for the system and its associated control issues. A different value of λ may be used for production wafers and qual runs, to mimic the concept of Equation 23.11 and Equation 23.12. In many cases, since only one qual is run, λ for quals is 1, i.e.,

$$\lambda_{\text{production}} = \lambda, \qquad \lambda_{\text{qual}} = 1$$

The EWMA-based controller has found wide applicability in the semiconductor industry [34–39]. In some cases, the tuning is done every run [34,36–39]. In other cases, WHEN to tune is still decided by an SPC chart [35,39] or by a deadband. A deadband determines when to tune by

$$\text{If } |b_{\text{new}} - b_{\text{old}}| \le db, \text{ then } b_{\text{new}} = b_{\text{old}}; \text{ otherwise, } b_{\text{new}} = \text{the calculated } b_{\text{new}} \text{ value} \tag{23.24}$$

where db is the deadband value. Deadbanding prevents unnecessary control action, similar to the SPC chart, without suffering from unnecessary lag; it is another way to achieve the type I vs. type II trade-off, and it can be considered equivalent to the trigger value in an EWMA chart. It is also used to prevent what the engineer considers undesired frequent small control actions, especially if the changes are being made manually to the recipe on the machine. The deadbanding may be implemented in other ways, such as checking whether the input (x) would change by more than a certain amount; if not, then tuning does not occur.
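A minimal sketch of one EWMA tuning step guarded by a deadband; the function name and the default λ and db values are illustrative assumptions:

```python
def ewma_tune(b_old, y, x, m, lam=0.3, db=0.0):
    """One run-to-run EWMA offset update (Eq. 23.23) guarded by a deadband (Eq. 23.24)."""
    y_hat = m * x + b_old
    b_new = b_old + lam * (y - y_hat)   # correct by a fraction of the prediction error
    if abs(b_new - b_old) <= db:        # small correction: leave the model alone
        return b_old
    return b_new
```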

23.7.4.3

Extension to Multivariable, Non-Linear, Constrained Systems with Feedforward Control

Equation 23.10 and Equation 23.13 assume linearity and a single input-single output system, as well as a lack of constraints on the inputs or outputs. The method above can still be used by changing Equation 23.10 to

$$\hat{Y} = f(X, FF, Y, \theta) + b \tag{23.25}$$

where f(·) is the generic equation, including non-linearities; X denotes more than one manipulated variable; FF the feedforward measured variables; Y more than one output variable (actual values); Ŷ the prediction for each output variable; θ all the parameters involved in the equation f; and b the current estimated value of the disturbance, i.e., b = filtered{Y − f(X, FF, Y, θ)}.


In reality, Equation 23.25 is actually several equations, one for each output. The effect of a feedforward variable is easily seen since it is included in the model. For example, an incoming thickness value could be used as a feedforward variable for a controller on final thickness using time as the manipulated variable.

$$\text{Final thickness} = \text{Initial thickness} + m \cdot \text{time} + b \tag{23.26}$$
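A sketch of how Equation 23.26 would be inverted for feedforward use; the helper name is hypothetical:

```python
def feedforward_time(target, initial_thickness, m, b):
    """Solve Eq. 23.26 for deposition time, compensating the measured incoming thickness."""
    return (target - initial_thickness - b) / m
```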

Based upon the discussion of Equation 23.14,

$$f(X) = \text{process model}, \qquad \text{HOW } b \text{ is tuned} = \text{disturbance model}.$$

The biggest change due to the non-linearities and the constraints is the need for a numerical solver to replace Equation 23.13. In such a case, there are several cost functions that can be minimized. A common cost function that allows trade-offs between changes in the manipulated variables (size of process adjustment) and reaching the target is

$$\min_{X} \left\{ \sum_{i=1}^{N} W_i \,(T_i - Y_i)^2 + \sum_{k=1}^{K} w_k \,(X_k - X_{k,t-1})^2 \right\} \tag{23.27}$$

with constraints

$$g_j(X) \ge 0, \qquad h_L(Y) \ge 0$$

Constraints are used because not all values of X are allowable. Constraints on the outputs are used because all outputs generally have upper and/or lower specifications, and some outputs may not have targets but rather just upper or lower constraints, with no loss associated with being anywhere inside the specification, only outside it. Additional procedures may be used to prevent oscillations in the solution [16]. The changes to the linear univariate algorithm can easily be understood by examining Table 23.13; the generalization of the algorithm to use alternative tuning methods, solution methods, and process models is apparent. By the inclusion of feedforward variables, feedforward control in addition to feedback control is achieved. If the model is not tuned, i.e., the desired controlled output is not measured, then only feedforward control results.
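A minimal sketch of Equation 23.27 using a general-purpose solver; the linear model, weights, and constraint below are hypothetical placeholders, whereas a production controller would use its own fitted model and limits:

```python
import numpy as np
from scipy.optimize import minimize

A = np.array([[2.0, 0.5], [0.3, 1.5]])   # hypothetical linear gains: Y = A @ X + b
b = np.array([1.0, -0.5])                # current disturbance estimates
T = np.array([10.0, 8.0])                # output targets
W = np.array([1.0, 1.0])                 # output weights (first sum of Eq. 23.27)
w = np.array([0.1, 0.1])                 # move-suppression weights (second sum)
x_prev = np.array([3.0, 4.0])            # settings used on the previous run

def cost(x):
    y = A @ x + b
    return np.sum(W * (T - y) ** 2) + np.sum(w * (x - x_prev) ** 2)

# example constraint g(X) >= 0: all settings must stay non-negative
result = minimize(cost, x_prev, constraints=[{"type": "ineq", "fun": lambda x: x}])
new_settings = result.x
```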

23.7.4.4

Transformations to Comply with Additive Disturbance Assumption

Equation 23.14 implies the assumption that the disturbance is additive to the modeled output. This assumption puts constraints on the output used for modeling; however, the assumption has been shown to be valid for a variety of systems. Thus, the output modeled may be different from the output controlled in order to meet the assumption. For example, let thickness be the desired output to control, but suppose the data show that the additive disturbance assumption really applies to rate, i.e., it is the rate that is changing. In this case, Equation 23.10 is replaced by

$$\widehat{\text{Rate}} = b \tag{23.28}$$


where Rate is the modeled, tuned output; $\widehat{\text{Rate}}$ the prediction of the rate; and b the tuned offset, tuned using SPC-based methods, EWMA, etc. Instead of solving Equation 23.10 as described in Equation 23.13, a new equation is created and solved:

$$\widehat{\text{Thickness}} = \widehat{\text{Rate}} \cdot \text{Time} \;\Rightarrow\; \text{Time} = \frac{T}{\widehat{\text{Rate}}} \tag{23.29}$$

where Thickness is the controlled output; $\widehat{\text{Thickness}}$ the prediction of the controlled output; T the target for the controlled output (thickness); Time the recipe setting used to adjust the process; and $\widehat{\text{Rate}}$ takes the latest tuned value.

23.7.4.5

Predictor Corrector Control

Sometimes, the EWMA model is not an adequate representation of the disturbance. This situation occurs when the disturbance is a near-constant run-to-run drift: the EWMA controller will then produce a constant offset. (This situation is examined further in the later section on control of metal sputter deposition.) Thus, a way to correct the prediction of the EWMA controller was needed [16,42]. The resulting disturbance model was called predictor corrector control (PCC); recalling the discussions accompanying Equation 23.16, Equation 23.17, Equation 23.19 through Equation 23.23, and Equation 23.25:

$$\begin{aligned}
S_t &= \lambda V_t + (1 - \lambda)\, Z_{t-1} \\
\hat{T}_t &= \beta T_t + (1 - \beta)\, \hat{T}_{t-1} \\
T_t &= V_t - S_{t-1} \\
Z_t &= S_t + \hat{T}_t = \text{filtered (smoothed) value} + \text{filtered trend}
\end{aligned} \tag{23.30}$$

where Z_t is the filtered value of variable V at time t; V_t the variable to be filtered (= y − mx = b in the simplest case); T_t the current trend (i.e., the difference between the current value and the previous filtered value); S_t the current smoothed value; and T̂_t the current smoothed trend. As can be seen, Equation 23.30 assumes a constant trend, and its purpose is to estimate not only the current value of the model offset but also the trend (or rate of change) in the model offset.
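A minimal sketch of one PCC (double-EWMA) update per Equation 23.30; the function name and the default λ and β values are illustrative:

```python
def pcc_update(s_prev, trend_smooth_prev, z_prev, v, lam=0.3, beta=0.2):
    """One predictor-corrector step: smooth the value, smooth the trend, sum them."""
    s = lam * v + (1.0 - lam) * z_prev                       # S_t: smoothed value
    t = v - s_prev                                           # T_t: raw trend estimate
    t_smooth = beta * t + (1.0 - beta) * trend_smooth_prev   # T-hat_t: smoothed trend
    z = s + t_smooth                                         # Z_t: value plus trend
    return s, t_smooth, z
```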

23.7.4.6

Comparison to Other Methods

Demonstrating that an EWMA controller is equivalent to a pure integral controller, which is frequently found in use as a real-time equipment controller, or that PCC is similar to a double integrator, is beyond the scope of this chapter. However, the reader should note that, to facilitate such comparisons, the different equation forms derived above (such as Equation 23.22 vs. Equation 23.23) were developed to allow easier comparison with some modern control methods. See elsewhere for a comparison to modern control techniques and the relationship to stability [16,17,42,43]. The typical semiconductor control technique may also be considered a deadbeat controller in that the setpoint is not filtered [4]; however, because run-to-run control is not truly a digital controller on a continuous process, the analogy is not exact. Some readers may find a discussion of deadbeat controllers helpful, and they are referred elsewhere [4]. Others may find reading on real-time optimization (RTO; NOT rapid thermal oxidation) in the process industries fascinating [44]: RTO is very similar to multivariable run-to-run control but is applied to a continuous process. Algorithmic SPC's compensation method [5,6], which was discussed previously, is very similar to the minimum variance control of MacGregor [45]. Minimum variance control (as well as internal model control) is equivalent to the generalized semiconductor multivariable controller presented above. However, algorithmic SPC adds an additional term in the disturbance model to account for measurement noise, which its authors call an extrinsic error. The main resulting difference in the controller design will be different values of the correlated noise parameters.

23.7.4.7 Treatment of Dead Time (Measurement Lag)

Measurement lag is the time between the run and the controller receiving the measurement. It can have very detrimental effects on control performance. There are some existing methods for dealing with lag, although they generally assume CONSTANT lag. One method is discussed in the description of the compensation method of algorithmic SPC [5], whereby the best forecast for the model error term is altered. The authors stress that this alternative equation must always be used whenever deadtime is possible, since optimal results will not be obtained otherwise. They also note the difficulty of handling varying deadtime, even when the deadtime is known. In general, while the predictions can be modified to account for deadtime, the controller performance may still be degraded beyond what is acceptable. Thus, one must consider measurement lag and, in some cases, eliminate it to achieve acceptable controller performance.

23.7.4.8 Summary Example Using Metal Sputter Deposition

To illustrate many of the concepts presented above, especially the concept of the additive process and disturbance models, metal sputter deposition will be used as an example [46]. The metal to be deposited, titanium, had thickness goals spanning 400 Å due to the wide variety of processes used. Before the controller was installed, many test runs and monitor pilots were used. The Cpk value was nearly acceptable, but the technician and processing overhead was too expensive. Note that in this example, target means the desired value for the output (thickness), not a sputter target. First, a process model needed to be created that spanned all the processes. Figure 23.10 demonstrates a model where the output was rate, rather than the output that had a goal (thickness), and the input was thickness, rather than time. The "hidden" model is that thickness = rate × time. The model appears to capture the process data elegantly, but would it allow the assumption of the additive disturbance model? Figure 23.11 shows that over a long time period the model moved up and down, i.e., the model offset varied, but not the slope. Therefore, the additive disturbance model is correct. What is not shown are the other attempts at process models. Within a short period of time, the control engineer learns that only a couple of functional forms will represent a wide variety of processes from lithography to sputter deposition. Thus, once these forms are known to the engineer, one merely attempts to match the data to one of these forms. Consequently, process model building is faster than one might expect at first glance.

FIGURE 23.10 Process model for Ti sputter deposition: deposition rate (Å/s) vs. normalized thickness.

FIGURE 23.11 Disturbance model: if the process model is in the correct space, only an offset is seen over large time periods. Deposition rate (Å/s) vs. normalized thickness for several target/collimator age combinations (mature/old, mature/new, new/new, old/mature).

Next, the disturbance model must be created. First, an EWMA was attempted; the results are shown in Figure 23.12. An offset appears to be present, which is more obvious in Figure 23.13; note how the offset is nearly constant. Based on Figure 23.12 and Figure 23.13, the PCC model was attempted next; the results are shown in Figure 23.14 and Figure 23.15. Now the offset is gone, and the error is centered around 0. The final results: test runs were eliminated, the number of monitor wafers was reduced by a factor of 3, Cpk improved by 10%, processing was simplified for operators, and sustaining effort was reduced for engineers.

FIGURE 23.12 Incorrect disturbance model (EWMA): actual vs. predicted deposition rate (Å/s) over sputter target life (kWh), with target and collimator changes marked.

FIGURE 23.13 The offset is easier to see in the actual target variable: thickness error (%) vs. run number.

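The qualitative behavior of Figure 23.12 through Figure 23.15 can be reproduced with a small simulation; in the sketch below (the drift magnitude, filter weights, and target are illustrative assumptions, not the data of Ref. [46]), the EWMA controller settles to a constant thickness offset under a linear rate drift, while the PCC trend term removes it:

```python
def steady_state_error(runs=200, drift=-0.01, lam=0.3, b=0.3, use_pcc=False):
    """Run-to-run control of thickness = rate * time with a drifting rate.
    Returns the mean thickness error over the final 50 runs."""
    T, rate_hat, trend = 400.0, 8.0, 0.0
    errors = []
    for k in range(runs):
        rate = 8.0 + drift * k                  # true, drifting rate (Å/s)
        pred = rate_hat + (trend if use_pcc else 0.0)
        time = T / pred                         # Eq. 23.29 recipe setting
        errors.append(rate * time - T)          # delivered minus target (Å)
        # Tune: EWMA level update, plus a trend update when PCC is used
        level = lam * rate + (1 - lam) * pred
        if use_pcc:
            trend = b * (rate - rate_hat) + (1 - b) * trend
        rate_hat = level
    return sum(errors[-50:]) / 50

print("EWMA:", steady_state_error(use_pcc=False))  # constant offset remains
print("PCC: ", steady_state_error(use_pcc=True))   # offset driven toward 0
```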

23.7.5 Real-Time Compensation Control Methods

Table 23.15 provides a list of new activities in real-time equipment control. This table does NOT mention pressure, mass flow controller (MFC), or radio frequency (RF) controllers on etchers. While these are extremely common controllers, significant changes have not occurred in their control capability.

FIGURE 23.14 Better disturbance model: linear drift (predictor corrector control); actual vs. predicted deposition rate (Å/s) over sputter target life (kWh), with target and collimator changes marked.

FIGURE 23.15 Easier to see the improvement in the actual target variable: thickness error (%) vs. run number.

Mass flow controllers are now becoming digital, but their basic algorithms are not changing. Changes are expected in the RF controller arena, but at present most changes are due to RF sensor installation rather than RF control. Table 23.15 also does not cover control activities in rapid thermal processing (RTP); Chapter 11 covers RTP and includes a section on RTP temperature control systems, to which the reader is referred.

23.8 Monitoring the Supervisory Run-to-Run Controller and the Controller System: Advanced Process Control

As was mentioned when APC was introduced, APC includes abnormality methods for monitoring the compensation controller. Thus, some concepts of type I and type II errors for a control system are introduced first, followed by algorithms for detecting abnormalities in the control system. These extremely simple algorithms can prevent severe yield loss problems through the increased sensitivity to faults that they provide.

TABLE 23.15 New Activities in Industry in Real-Time Control

† Uniformity control
  – Rapid thermal process
  – Lithography bake plate
† Model predictive control on furnaces
  – Cycle time and performance improvement
† Endpointing
  – Small open area
† Sensors
  – Purchased integrated from process vendor, manufactured by sensor vendor


23.8.1 The Other Type I, Type II Errors: Detection of Change in the Overall System

There is another error situation present in compensation control methods: the controller may operate on a system with different variances than those for which it was designed. In other words, the random and/or non-random variances are different than expected. Although the controller may still drive the output to target, the overall quality of the result is suspect, since operation in this regime has never been qualified; unmeasured variables and the metrology itself may be out of specification. The desired behavior would be for the controller to detect the change in behavior and generate an alarm. One can consider type I error as the controller generating an alarm when the system variance has not changed, and type II error as the controller not generating an alarm when the variation is different. Note that this situation can also arise in the abnormality based methods when the output appears to be on target but the system has in reality drifted to a state where the metrology and the system are out of specification; in this case, abnormality based methods have no extra ability to detect the system change. Fortuitously, the compensation behavior provides another way to determine if the system is behaving as expected. In other words, the compensation based controller can monitor not only the output changes, but also how the output changes in response to a given change in input, to determine if the overall system has changed behavior, resulting in better overall type I and type II error rates. This desire to monitor for abnormalities in the compensation based controller leads to the concept of merging both methods into one, which is one of the concepts of advanced process control.

23.8.2 Methods for Monitoring the Supervisory Controller

Monitoring the various aspects of the compensation controller would appear difficult. However, monitoring the feedback amount in a model based control system simplifies the abnormality detection. The simple monitoring algorithms are:

† Determine if the adjustment is unusual (single or cumulative adjustment too large, or adjustments too frequent)
† Determine whether the output responds to the adjustment as expected, including the random noise level

A product called Overseer [30] was created to assist with the automatic checking of the supervisory controller. It added some more intricate tests, such as:

† Are all equipment of a given type drifting or shifting in unison?

The main concept to be left with the reader is that no controller should ever be implemented without sufficient monitoring. The monitoring prevents the controller from driving the system past reasonable operation, while the control improves the sensitivity and robustness of the abnormality checking. A minimal sketch of the first two adjustment checks follows.
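As an illustrative sketch (the thresholds and window size are assumptions, not values from the chapter):

```python
def check_adjustments(adjustments, max_step=1.0, max_cum=3.0, window=10):
    """Flag unusual supervisory-controller adjustments: a single
    adjustment that is too large, or a cumulative adjustment over a
    recent window of runs that is too large."""
    alarms = []
    for i in range(len(adjustments)):
        if abs(adjustments[i]) > max_step:
            alarms.append((i, "single adjustment too large"))
        recent = adjustments[max(0, i - window + 1): i + 1]
        if abs(sum(recent)) > max_cum:
            alarms.append((i, "cumulative adjustment too large"))
    return alarms

print(check_adjustments([0.1, -0.2, 1.5, 0.4, 0.9, 0.8, 0.7, 0.6]))
```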

23.9 Continuous Process Improvement

The description of the various control methods will end with a discussion of the outer control layer in Figure 23.3, continuous process improvement. At all times, engineers are examining their process to determine how best to increase its productivity and yield. They employ engineering knowledge, statistics, and common sense. However, it is the data coming from the various levels of the controllers, in conjunction with data from specially designed experiments, that provides the most information about how best to improve the process or equipment.

23.9.1 Benefits of Reducing the Effective Noise of the System

As mentioned in the introduction on types of errors, the type I (α) and type II (β) errors are a function of the sample size, the particular size of change (Δ) to be detected, and the noise level of the system (σ). For a fixed sample size, both the type I and type II errors decrease as the noise (σ) decreases. Because β is the probability of not detecting a difference when there is one, decreasing the noise also increases the probability of detecting a difference. The probability of being able to detect a change is called the power and is equal to 1 − β. Having good power is necessary to evaluate process improvement changes and design new products [47]. By using global modeling and feedforward variables (including wafer order), the effective noise of the system is decreased and, consequently, the power to detect a change is increased. Thus, the procedures outlined in the section on abnormality detection control methods to enable robust fault detection are also important to continuous process improvement.
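As a worked illustration of the power calculation (normal approximation; the α, Δ, σ, and n values are examples, not from the chapter):

```python
from scipy.stats import norm

def power(delta, sigma, n, alpha=0.0027):
    """Power (1 - beta) of a two-sided test of the mean for a shift of
    delta, with per-measurement noise sigma and sample size n. The
    default alpha matches 3-sigma Shewhart limits."""
    z = norm.ppf(1 - alpha / 2)
    shift = delta * n ** 0.5 / sigma   # shift in standard-error units
    return norm.cdf(-z + shift) + norm.cdf(-z - shift)

# Halving the effective noise sharply increases the power to see delta = 1:
print(power(delta=1.0, sigma=2.0, n=5))   # ~0.03
print(power(delta=1.0, sigma=1.0, n=5))   # ~0.22
```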

23.9.2 Comparing How Different Machines React

Investigations of why different machines of the same model number respond differently to the same inputs provide useful information that can be exploited to improve the machines. Identifying a rogue tool based upon its response to input changes generally provides increased sensitivity to a machine that is likely to become a source of yield loss at some point. Thus, controller data provide a great opportunity to identify rogue machines (a minimal sketch of such a screen follows). Driving the machines to respond to inputs in a similar fashion is the greatest assurance that the outputs will be consistently similar for all equipment, a major desire of the fab's customers.
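An illustrative screen for rogue tools (the gain values and threshold are assumptions):

```python
import statistics

def flag_rogue_tools(gains_by_tool, z_thresh=3.0):
    """Flag tools whose estimated gain (d output / d input) sits far
    from the fleet median, using a robust MAD-based z-score."""
    values = list(gains_by_tool.values())
    med = statistics.median(values)
    mad = statistics.median([abs(v - med) for v in values]) or 1e-12
    return [tool for tool, g in gains_by_tool.items()
            if abs(g - med) / (1.4826 * mad) > z_thresh]

print(flag_rogue_tools({"etcher_A": 1.00, "etcher_B": 1.02,
                        "etcher_C": 0.99, "etcher_D": 1.45}))  # ['etcher_D']
```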

23.10 Summary

This chapter conveyed the depth that process control in the semiconductor industry has achieved. The reader should now be able to assess the variety of control tools available and select those most appropriate for application in their facility. As part of the references section, several pointers to web pages and further resources are provided.

23.11 Acronyms and Glossary

Abnormality control methods: Methods based upon detecting abnormalities and correcting them.
Alpha (α): Percentage of false positives (detecting an abnormality when one has not occurred).
Advanced process control (APC): A combination of abnormality and compensation control methods. Also equal to FDC plus model-based process control.
Beta (β): Percentage of false negatives (not detecting an abnormality when one has occurred).
CIM: Computer integrated manufacturing (may imply MES).
Compensation (target tracking) control methods: Methods based upon actively compensating for expected sources of variation.
Cp, Cpk: Most common process capability indices used to assess the process's ability to achieve yield. Cpk considers how centered the process is, while Cp does not.
Disturbance model: A model that predicts the way disturbances will impact the process, i.e., describes how the process ages.
Dynamics: The non-random behavior of the system over time, i.e., how the output would change with each run if no compensation were used.
Error rate: Rate at which a false positive (alpha, α) or false negative (beta, β) error occurs.
EWMA: Exponentially weighted moving average (filter, type of SPC chart, first-order digital filter).


FDC: Fault detection and control.
Feedback control: Uses measurements of the current process results to decide how to change the process for the next sampling period. Thus, feedback control drives the average value to target (i.e., drives Cpk to equal Cp). Used to compensate for expected disturbances.
Feedforward control: Uses measurements on incoming materials, process, or equipment to decide how to change the process for the current process. Feedforward control, because it accounts for incoming variations, can improve the Cp value; i.e., it turns apparently random variation into non-random variation, which can be compensated for. Used to compensate for measured disturbances that have been modeled.
Gain: The change in the output for a unit change in the input.
IIDN(μ,σ): Independently and identically distributed normal; all values belong to the same normal (Gaussian) distribution with mean μ and standard deviation σ.
Look-aheads: Wafers from a production lot processed and analyzed before the rest of the production lot is processed.
MBPC: Model-based process control.
MES: Manufacturing execution system (used for WIP management, material flow control).
Metrology: Measurement science; the act of measuring; the measurement process.
PCC: Predictor corrector controller; filter for continuous drifts.
Pilot: Non-sellable wafer; may or may not have topography. Used when processing or metrology could cause out-of-specification results, produce contamination, or create defects. Also used for machine recovery and conditioning when the equipment must be operated, since most plasma and deposition machines do not allow non-wafer processing.
Power: The probability of detecting a particular size of change (= 1 − β). Term encountered in sample size calculations in statistical tests and SPC.
Process model: A model that predicts the inherent response of the process to input changes.
Qual: Short for qualification. Also, a run, usually using pilot wafers, used to qualify a machine to run a particular process(es) after maintenance or when switching between processes.
Qual plan: A methodology to qualify a process for use in the manufacture of production material.
Random yield loss: Yield loss due to contamination and particles (as opposed to systematic yield loss).
Run: A single processing on a piece of equipment, or the processing of all the wafers in the lot, which may consist of several processings on single-wafer equipment.
SPC: Statistical process control.
SRC: Semiconductor Research Corporation (research consortium that funds universities).
Systematic yield loss: Unintentional and intentional misprocessing (as opposed to random yield loss).
Trace: Real-time signal over time. Usually implies signals from processing equipment, but the source could also be add-on sensors.
Type I error: False positive; detecting an abnormality when one has not occurred.
Type II error: False negative; not detecting an abnormality when one has occurred.
WECO: Supplementary run rules applied in conjunction with Shewhart SPC charts. These rules are generally known as the Western Electric (WECO) rules in recognition of the source of their well-known applications. They simultaneously decrease type II error and increase type I error.
WIP: Work in progress.

References

1. Butler, S. W., R. York, M. H. Bennett, and T. Winter. "Semiconductor Factory Control and Optimization." In Wiley Encyclopedia of Electrical and Electronics Engineering, Vol. 19, 59–86. New York: Wiley, 1999.
2. "Two-Year Keithley Study Eyes Process Control." WaferNews, Vol. 4.29, 3, 6. 28 July 1997.
3. Box, G., and A. Luceño. Statistical Control by Monitoring and Feedback Adjustment. Wiley Series in Probability and Statistics. New York: Wiley, 1997. See also the University of Wisconsin–Madison College of Engineering course "Feedback Adjustment for SPC: How to Maximize Process Capability Using Feedback Adjustment," by G. Box, J. Hunter, and S. Bisgaard.
4. Seborg, D. E., T. F. Edgar, and D. A. Mellichamp. Process Dynamics and Control. New York: Wiley, 1989.
5. Vander Wiel, S. A., W. T. Tucker, F. W. Faltin, and N. Doganaksoy. "Algorithmic Statistical Process Control: Concepts and an Application." Technometrics 34, no. 3 (1992): 286–97.
6. Tucker, W. T., and F. W. Faltin. "Algorithmic Statistical Process Control: An Elaboration." Technometrics 35, no. 4 (1993): 363–75.
7. MacGregor, J. F. "Interfaces between Process Control and On-Line Statistical Process Control." AIChE Comput. Syst. Technol. Div. Commun. 10, no. 2 (1987): 9–20.
8. Box, G., and T. Kramer. "Statistical Process Monitoring and Feedback Adjustment: A Discussion." Technometrics 34, no. 3 (1992): 251–67.
9. Hoerl, R. W., and A. C. Palm. "Discussion: Integrating SPC and APC." Technometrics 34, no. 3 (1992): 268–72.
10. MacGregor, J. F. "Discussion." Technometrics 34, no. 3 (1992): 273–5.
11. Tucker, W. T. "Discussion." Technometrics 34, no. 3 (1992): 275–7.
12. Vander Wiel, S. A., and S. B. Vardeman. "Discussion." Technometrics 34, no. 3 (1992): 278–81.
13. Wardrop, D. M., and C. E. Garcia. "Discussion." Technometrics 34, no. 3 (1992): 281–2.
14. Box, G., and T. Kramer. "Response." Technometrics 34, no. 3 (1992): 282–5.
15. Muthukrishnan, S., and J. Stefani. "SCFab Model-Based Process Control Methodology: Development and Deployment for Manufacturing Excellence." TI Tech. J. 13, no. 5 (1996): 9–16.
16. Butler, S. W., J. Stefani, and G. Barna. "Application of Predictor Corrector Controller to Polysilicon Gate Etching." In Proceedings of the American Control Conference, 3003. Piscataway, NJ: IEEE, 1993.
17. Butler, S. W. "Process Control in Semiconductor Manufacturing." J. Vac. Sci. Tech. B 13, no. 4 (1995): 1917–23.
18. Butler, S. W., J. Hosch, A. Diebold, and B. Van Eck. "Sensor Based Process and Tool Control." Future Fab Int. 1, no. 2 (1997): 315–21.
19. Butler, S. W., and T. F. Edgar. "Case Studies in Equipment Modeling and Control in the Microelectronics Industry." In Proceedings of the Fifth Conference on Chemical Process Control (CPC V): Assessment and New Directions for Research, AIChE Symposium Series No. 316, Vol. 93, 133–44. CACHE, American Institute of Chemical Engineers, 1997.
20. Semiconductor Industry Association (SIA). The National Technology Roadmap for Semiconductors. San Jose, CA: SIA, 1997. http://www.semichips.org
21. Kraft, C. "TI's Statistics Needs." SEMATECH Statistics Workshop, Austin, TX, November 1988.
22. Czitrom, A. V., and K. Horrell. "SEMATECH Qual Plan: A Qualification Plan for Process and Equipment Characterization." Future Fab Int. 1, no. 1 (1996): 45.
23. Clark, W., K. Horrell, T. Rogelstad, and P. Spagon. SEMATECH Qualification Plan Guidelines for Engineering. SEMATECH DOC ID No. 92061182B-GEN, 1995; Horrell, K. SEMATECH Qualification Plan Overview. SEMATECH DOC ID No. 91050538B-GEN, 1993.
24. See http://www.domainmfg.com/ for information on Starfire by Domain Manufacturing Corp. (acquired by Brooks Automation 6/99).
25. Mason, R. L., R. F. Gunst, and J. L. Hess. Statistical Design and Analysis of Experiments with Applications to Engineering and Science. 2nd ed. New York: Wiley-InterScience, 2003.
26. Naugib, H. "The Implementation of Total Quality Management (TQM) in a Semiconductor Manufacturing Operation." IEEE Trans. Semicond. Manuf. 6, no. 2 (1993): 156.
27. Grant, E. L., and R. S. Leavenworth. Statistical Quality Control. New York: McGraw-Hill, 1988.
28. Drain, D. Statistical Methods for Industrial Process Control. New York: Chapman and Hall, 1997.
29. Shewhart, W. A. Economic Control of Quality of Manufactured Product. New York: Van Nostrand, 1931.
30. Vicker, K. "Advanced Process Control in the Fab." SEMATECH Advanced Equipment and Process Control Workshop, 338–51. Lake Tahoe, CA: SEMATECH, 1997.
31. Guldi, R. L., C. D. Jenkins, G. M. Damminga, T. A. Baum, and T. A. Foster. "Process Optimization Tweaking Tool (POTT) and Its Applications in Controlling Oxidation Thickness." IEEE Trans. Semicond. Manuf. 2 (1989): 54–9.
32. Wheeler, D. J., and D. S. Chambers. Understanding Statistical Process Control. 2nd ed. Knoxville, TN: SPC Press, 1992.
33. Mandel, J. "The Regression Control Chart." J. Qual. Technol. 1, no. 1 (1969): 1–9.
34. Sachs, E., A. Hu, and A. Ingolfsson. "Run by Run Process Control: Combining SPC and Feedback Control." IEEE Trans. Semicond. Manuf. 8, no. 1 (1995): 26–43; Boning, D., W. Moyne, T. Smith, J. Moyne, and A. Hurwitz. "Practical Issues in Run by Run Process Control." In Proceedings of the IEEE/SEMI Advanced Semiconductor Manufacturing Conference, 201–8, 1995.
35. Mozumder, P. K., and G. G. Barna. "Statistical Feedback Control of a Plasma Etch Process." IEEE Trans. Semicond. Manuf. 7, no. 1 (1994): 1–11.
36. Ling, Z.-M., S. Leang, and C. J. Spanos. In-Line Supervisory Control in a Photolithographic Workcell. SRC publication C91008; also SPIE 921 (1988): 258; Bombay, B. J., and C. J. Spanos. "Application of Adaptive Equipment Models to a Photolithographic Process." SPIE Technical Symposium on Microelectronic Processing Integration, September 1991; Leang, S., S.-Y. Ma, J. Thompson, B. J. Bombay, and C. J. Spanos. "A Control System for Photolithographic Sequences." IEEE Trans. Semicond. Manuf. 9, no. 2 (1996): 191–207.
37. Toprac, A. "Run-to-Run Control of Poly-Gate Etch." SEMATECH Advanced Equipment and Process Control Workshop, 434–40. Lake Tahoe, CA: SEMATECH, 1997.
38. Gerold, D. "Run-to-Run Control Benefits to Photolithography." SEMATECH Advanced Equipment and Process Control Workshop, 104–12 (suppl.). Lake Tahoe, CA: SEMATECH, 1997.
39. Stefani, J. A. "Practical Issues in the Deployment of a Run-to-Run Control System in a Semiconductor Manufacturing Facility." The 1999 SPIE International Symposium on Microelectronic Manufacturing Technologies, Edinburgh, Scotland, May 1999.
40. Hunter, J. S. "The Exponentially Weighted Moving Average." J. Qual. Technol. 18 (1986): 203–10.
41. Box, G. E. P., G. M. Jenkins, and G. C. Reinsel. Time Series Analysis: Forecasting and Control. Englewood Cliffs, NJ: Prentice Hall, 1994.
42. Butler, S. W., and J. Stefani. "Supervisory Run-by-Run Control of Polysilicon Gate Etch Using In Situ Ellipsometry." IEEE Trans. Semicond. Manuf. 7, no. 2 (1994): 193.
43. Butler, S. W., J. Stefani, M. Sullivan, S. Maung, G. Barna, and S. Henck. "An Intelligent Model Based Control System Employing In Situ Ellipsometry." J. Vac. Sci. Tech. A 12, no. 4 (1994): 1984–91.
44. Marlin, T. E., and A. N. Hrymak. "Real-Time Operations Optimization of Continuous Processes." In Proceedings of the Fifth Conference on Chemical Process Control (CPC V): Assessment and New Directions for Research, AIChE Symposium Series No. 316, Vol. 93, 156–64. CACHE, American Institute of Chemical Engineers, 1997.
45. Harris, T. J., and J. F. MacGregor. "Design of Multivariate Linear-Quadratic Controllers Using Transfer Functions." AIChE J. 33 (1987): 1481–95.
46. Smith, T., D. Boning, J. Stefani, and S. W. Butler. "Run by Run Advanced Process Control of Metal Sputter Deposition." IEEE Trans. Semicond. Manuf. 11, no. 2 (1998): 276.
47. Bohn, R. "The Impact of Process Noise on VLSI Process Improvement." IEEE Trans. Semicond. Manuf. 8, no. 3 (1995): 228–38.


Further Reading

Conferences and Supporting Organizations

Integrated Metrology Consortia, http://www.integratedmeasurement.com
Electrochemical Society, Inc., http://www.electrochem.org
American Vacuum Society (AVS) Manufacturing Science and Technology Group (MSTG), http://www.cems.umn.edu/~weaver/mstg/mstgsubway.html
International Symposium on Semiconductor Manufacturing (ISSM), http://www.issm.com
Advanced Semiconductor Manufacturing Conference (ASMC), http://www.semi.org/Education/asmc/main.html
SPIE Microelectronic Manufacturing, http://www.spie.org/info/mm

SC Control and Control Software

University of Michigan Controls Group, http://www.engin.umich.edu/research/controls
Berkeley Computer Aided Manufacturing (BCAM), http://bcam.berkeley.edu
Maryland University, The Institute for Systems Research, http://www.isr.umd.edu
SEMATECH and MIT Run by Run Benchmarking, http://www-mtl.mit.edu/rbrBench
Triant Technologies Inc., http://www.triant.com
Semy, http://www.semyh.com
ObjectSpace, http://www.objectspace.com
Domain Manufacturing Corp. (formerly BBN Domain Corp.; acquired by Brooks Automation 6/99), Cambridge, MA, http://www.domainmfg.com
Umetrics, Winchester, MA, http://www.umetri.se (also good chemometrics links)
Brookside Software, http://www.brooksidesoftware.com
Brooks Automation, Richmond, BC, Canada, http://www.brooks.com/bac.htm
Fastech (acquired by Brooks Automation 9/98), http://www.fastech.com
Real Time Performance, Sunnyvale, CA, http://www.rp.com (no longer in business, but has code)
Adventa (ControlWORKS, ProcessWORKS, WORKS), Dallas, TX, http://www.adventaCT.com
Voyan Technology, Santa Clara, CA
PRI Automation, Inc., Billerica, MA, http://www.pria.com
Bakshi, V. Fault Detection and Classification (FDC) Software Benchmarking Results. SEMATECH Technology Report 97123433A-TR, 1998.
Bakshi, V. Fault Detection and Classification Software for Plasma Etchers: Summary of Commercial Product Information. SEMATECH Technology Report 97083337A-XFR, 1997.

Manufacturing Execution Systems (MES)/Computer Integrated Manufacturing (CIM)/Equipment Integration Automation: Software Used to Run and Track the Fab, Perform SPC, etc.

Fastech (acquired by Brooks Automation 9/98), http://www.fastech.com
Real Time Performance, Sunnyvale, CA, http://www.rp.com (no longer in business, but has code)
Consillium (acquired by Applied Materials), http://www.consilium.com
Promis (acquired by PRI Automation), http://www.promis.com


Byrd, T., and A. Maggi. "Challenges to Plug and Play CIM." Future Fab Int.: 77–81.
Greig, M., and A. Weber. "AMD & ObjectSpace, Inc." Future Fab Int.: 73–74.

Data Analysis, Data Warehousing, Data Mining, Bit Mapping, Wafer Tracking, etc.

Knight's Technology, Sunnyvale, CA, http://www.knights.com
DYM, Bedford, MA, http://www.dym.com
LPA Software, South Burlington, VT, (802) 862-2068
Quadrillion, company information, http://www.quadrillion.com/quadinfo/htm
Device Ware Corporation, http://www.dware.com
Maestro, Data Management (JJT Inc.), http://www.jjt.com/data.man.html
Sleuthworks, http://www.sleuthworks.com/doc
SAS, http://www.sas.com
NIST and SEMATECH have created an Engineering Statistics Internet (ESI) Handbook; check the SEMATECH web page.

General Semiconductor References That May Contain Control References on Occasion

SEMATECH, www.sematech.org
I300I (dedicated to 300 mm issues), www.i300i.org
National Technology Roadmap, http://www.sematech.org/public/roadmap/index.htm
Semiconductor Subway, http://www-mtl.mit.edu/semisubway.html
Semiconductor Equipment and Materials International (SEMI), http://www.semi.org
Semiconductor Research Corporation (SRC), http://www.semi.org
Semiconductor International, http://www.semiconductor-intl.com
Solid State Technology, http://www.solid-state.com
Semiconductor Online, http://www.semiconductoronline.com
Semiconductor SuperSite.Net, http://supersite.net/semin2/docs/home.htm
FabTech, www.fabtech.org
TechWeb, http://www.techweb.com
Semiconductor Process Equipment and Materials Network, http://www.smartlink.net/~bmcd/semi/cat.html
semiconductor.net – the semiconductor manufacturing industry resource for products, services and information, http://www.semiconductor.net
SemiSource, Semiconductor Resource Guide, published annually by Semiconductor International.
Solid State Technology Resource Guide, published annually by Solid State Technology.
American Vacuum Society (AVS) Buyers Guide, http://www.aip.org/avsguide

Additional Compensation Control and Controller Monitoring Articles

Kraft, C. U.S. Patent 5,528,510, Equipment Performance Apparatus and Method, issued June 19, 1996.
Harris, T. J. "Assessment of Control Loop Performance." Can. J. Chem. Eng. 67 (1989): 856–61.
Scher, G. M. "Wafer Tracking Comes of Age." Semicond. Int. (1991): 126–31.
Stefani, J. A. "Practical Issues in the Deployment of a Run-to-Run Control System in a Semiconductor Manufacturing Facility." The 1999 SPIE International Symposium on Microelectronic Manufacturing Technologies, Edinburgh, Scotland, May 1999.


24

In-Line Metrology

Alain C. Diebold
SEMATECH

24.1 Introduction
     Measurement Precision to Process Tolerance Ratio vs. Resolution † Manufacturing Sensitivity Analysis
24.2 Metrology for Lithography Processes: Critical Dimension Measurement and Overlay Control
     Critical Dimension Measurement and Calibration † Overlay Process Control
24.3 Metrology for Front End Processes
     Ellipsometric Measurement of Gate Dielectric Film Thickness † Electrical Measurement of Gate Oxide Thickness † New Methods of Measuring Gate Dielectric Thickness and Nitrogen Concentration † Doping Process Control † Metrology for Measurement of Stress Enhanced Carrier Mobility
24.4 Interconnect Process Control
     Interconnect Film Thickness † Ex-Situ Chemical Mechanical Polishing Process Control: Film Flatness and Quality
24.5 In-FAB FIB
Acknowledgments
References

Abstract

This chapter on ex-situ metrology covers the in-line and at-line measurements used to control processes in pilot line fabrication or in volume manufacturing of silicon-based integrated circuits. Metrology for front end (transistor) processes, lithography, and on-chip interconnect fabrication technologies are all described. As a basis for further discussion, measurement precision and resolution are described, and the measurement precision to process tolerance ratio is used to evaluate metrology capability for statistical process control.

24.1 Introduction

Metrology is an integral part of the development and manufacture of integrated circuits. The International Technology Roadmap for Semiconductors (ITRS) defines metrology as including both off-line materials characterization and in-line measurement technologies [1]. In-line metrology is discussed in this chapter, while other chapters are devoted to other aspects of metrology. Other chapters of interest include those covering process control, off-line materials characterization, in-situ sensor based metrology, and defect detection and control technology. In-line metrology covers measurement needs for manufacturing control of transistor and on-chip interconnect fabrication, including lithography. A high-level view of the frequency of metrology measurements during manufacture is shown in Figure 24.1.

FIGURE 24.1 Overview of the applications of metrology during integrated chip manufacturing. Measurements such as implant dose, gate oxide thickness, poly-Si control, and CD feed transistor parameters (ΔLeff, ΔVt, ΔIoff), while overlay, contact and ILD etch control, ILD and metal thickness, and defect/particle measurements feed interconnect parameters (circuit delay, resistance).

First, the concepts of measurement precision and resolution are reviewed in Section 24.1. Section 24.2 covers critical dimension (CD) and overlay. Section 24.3 discusses gate dielectric thickness measurement and doping process control. Interconnect metrology is covered in Section 24.4, and other measurement tools such as focused ion beam (FIB) and scanning electron microscopy are briefly covered in Section 24.5.

24.1.1 Measurement Precision to Process Tolerance Ratio vs. Resolution

The measurement precision to process tolerance (P/T) ratio is an accepted metric for evaluating the ability of an automated metrology tool to provide data for statistical process control (SPC) [2]. While there is no generally accepted estimator for precision [3], an ideal measure for use in process control would be a function of the total measurement system error variability, σ²M. A methodology for estimating P/T in this fashion can be found in SEMI E89, Guide for Measurement System Analysis [4]. Using this or a similar approach, the major sources of measurement variability are enumerated, and a designed experiment is employed to quantify the variance components associated with each source. σ²M is a combination of the short term and long term sources of variation. Repeatability, σ²r, represents a lower limit to σ²M. It is estimated by the sample standard deviation of repeated measurements made under identical conditions or, in the case of a designed experiment, as the square root of the mean squared error [3,4]. Reproducibility, σR, is the variation that results when measurements are made under different conditions, such as reloading the wafer on a different day [3,4]. It may include multiple sources of variability and is expressed as the square root of the sum of the variance components for all sources of measurement system variation. Note: SEMI E89 treats repeatability as part of reproducibility, making σ²M = σ²R. This approach is not universal; some treat repeatability and reproducibility as mutually exclusive, in which case σ²M = σ²R + σ²r. Precision is calculated as CσM, where C = 6 is a common choice for processes with both upper and lower limits and C = 3 for processes with one-sided limits. Contamination limits represent a process with a one-sided (upper) process limit. Process tolerance is the range of allowed values of the process variable: upper process limit minus lower process limit (UL − LL) for two-sided processes, and (UL − T) or (T − LL), where T is a target value, for one-sided processes. The measurement precision, σ, to process tolerance ratio is thus 6σ/(UL − LL), 3σ/(UL − T), or 3σ/(T − LL) [4]. Although P/T should be less than 10%, a value of 30% is often allowed. Unfortunately, the measurement precision used to determine P/T is often an approximation of the true precision. Determination of the true P/T ratio for the process range of interest requires careful implementation of the P/T methodology.
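A minimal helper for the P/T arithmetic just described (an illustrative sketch, not part of SEMI E89):

```python
def p_to_t(sigma_M, UL=None, LL=None, target=None):
    """Measurement precision to process tolerance (P/T) ratio.
    Two-sided spec: 6*sigma_M / (UL - LL); one-sided spec: 3*sigma_M
    over the distance between the single limit and the target."""
    if UL is not None and LL is not None:
        return 6.0 * sigma_M / (UL - LL)
    limit = UL if UL is not None else LL
    return 3.0 * sigma_M / abs(limit - target)

# Example: a 3.0-4.0 nm thickness spec with sigma_M = 0.01 nm gives P/T = 6%
print(p_to_t(0.01, UL=4.0, LL=3.0))
```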


This means that many of the measurement conditions should be varied to determine measurement stability before final measurement conditions are set when determining precision [4]. Varying the time between repeated measurements allows one to observe short term issues with repeatability. In addition, reference materials should have identical feature size, shape, and composition to the processed wafer being measured. Often, there are no reference materials that meet this requirement, and the P/T ratio and measurement accuracy are determined using the best available reference materials. One key example is the lack of an oxide thickness standard for sub-5 nm SiO2 and nitrided oxides. When the reference material has significantly different properties (e.g., thickness), the precision may not be representative of the precision associated with the product wafer measurement, due to non-linearity. Again, the example of CD or film thickness measurements is useful: the precision associated with measurement of a sub-2 nm gate oxide may differ from that associated with a 10 nm oxide film. If the true precision is large enough, it could mean that the metrology tool has insufficient resolution to distinguish changes over the process range of interest. One way to assure that the metrology tool has adequate resolution is to determine the true P/T capability using a series of standardized, accurate reference materials over the measurement range specified by the upper and lower limits. In Figure 24.2, we depict the difference between precision and bias. In Figure 24.3, we show how the multiple reference materials approach might work. In practice, this approach is not used. Instead, in-line metrology uses a reference wafer that is fabricated using the typical process flow. These "golden" wafers have the exact materials and dimensions of the process step of interest. Often, only one suitable reference material is used for P/T determination.

The measurement of the thickness of the transistor gate dielectric at the 45 nm technology generation is expected to be difficult in the manufacturing environment. By the 45 nm technology generation, silicon oxynitride will be replaced by a higher dielectric constant material. Dielectric thickness will be stated in terms of the thickness that the layer would have if it were silicon dioxide; this is known as the equivalent oxide thickness (EOT). For single layer films, the EOT can be calculated by multiplying the physical film thickness by the ratio of the dielectric constant of silicon dioxide (k(SiO2) = 3.9) to that of the high-k material. For high-k films with an interfacial layer of silicon dioxide of thickness t_int, the EOT = t_int + (3.9/k)t_highk. If the gate dielectric EOT is 0.7 nm thick and the process tolerance is 4% at 3σ (process variation), then P/T = 10% = 6σ/(0.056 nm), which gives a measurement variation of 3σ = 0.0028 nm. The size of an atomic step on silicon is ~0.15 nm, and atomically flat terraces on specially prepared Si (001) are about 100–200 nm wide. The width of terraces after typical processing, such as sacrificial oxidation and gate oxide growth, is unknown. Thus, some have pointed to the issue of measuring film thickness to less than an atomic width. This is not an issue, because the measurement can be considered to be the determination of the average thickness of a perfectly flat layer. This type of measurement precision requires analysis of a large area that averages over atomic steps at the interface and other atomic variations.
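The EOT arithmetic can be made concrete with a short example (the film stack values are hypothetical):

```python
def eot(t_int, t_high_k, k_high_k, k_sio2=3.9):
    """Equivalent oxide thickness of a high-k film over an SiO2
    interfacial layer: EOT = t_int + (k_sio2 / k_high_k) * t_high_k."""
    return t_int + (k_sio2 / k_high_k) * t_high_k

# Hypothetical stack: 0.4 nm interfacial SiO2 under 2.0 nm of a k = 20 dielectric
print(eot(0.4, 2.0, 20.0))  # 0.79 nm EOT
```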

FIGURE 24.2 Measurement non-linearity. Measurement non-linearities can result in changes in bias (the difference between the true and measured values) between the calibrated value and the range of interest. It is also possible for the bias to change inside the measurement range of interest.

FIGURE 24.3 Demonstration of resolution based on the precision associated with measurement of a series of reference materials over the process specification range. For this example, assume that the process tolerance (also called the process specification) runs from 3.0 to 4.0 nm. The measurement precision at 3σ (variation) is shown for reference materials inside the process range. The experimental P/T capability observed using reference materials 4, 5, and 6 indicates that a single measurement of a 3.6 nm film is different from one at 3.5 or 3.7 nm; thus this fictitious metrology tool can resolve those values. According to the precision shown for reference materials 2 and 3, the tool is not able to resolve 3.2 nm from 3.3 nm at 3σ.

Local variations in thickness of a 2 nm film will be a much larger percentage of total thickness than they would be for a 10 nm film. Therefore, reproducibility of stage location affects the precision of a thickness measurement, especially for a small sampling area around 1 mm in diameter. Some metrologists have called for smaller area (<1 mm) measurements. The need for high precision makes this very difficult, and high precision with small spot instruments (0.9 mm) is achieved by averaging over a larger area. Even electrical measurements done using capacitors average over larger areas. For contamination measurements and a P/T = 30%, σ = (UL − LL)/10. Achieving a measurement variation of (UL − LL)/10 requires that the detection limit be at or below (UL − LL)/10 [3]. Chemists have their own definitions of resolution, detection limit, and quantifiable detection limit, which must all be considered [5]. For convenience, these topics are illustrated by discussing off-line chemical analysis of trace contaminants in a liquid. Typically, the detection limits for trace amounts of contamination in a liquid vary due to changes in measurement sensitivity. Detection limits are ultimate values for optimum circumstances. When the liquid is analyzed by inductively coupled plasma mass spectrometry (ICP-MS), the detection of one element can be interfered with by the presence of another element. The quantifiable detection limit is the limit at which reliable, repeatable detection is possible [5]. Resolution requirements are not well defined, and thus experience again provides some guidance.

24.1.2 Manufacturing Sensitivity Analysis

Ultimately, the electrical performance of the integrated circuit results from the electrical properties of the transistors and the interconnect structures. The most extensive modeling of how variations in physical properties affect electrical performance has been done for transistors [6]. The effect of small changes in key physical parameters, such as gate length, gate dielectric thickness, and doping dose, on key electrical parameters, such as leakage current and threshold voltage, was modeled for a 180 nm transistor [6,7]. Although similar modeling has been done for subsequent generations of transistors, most results are not yet published. The goal was to determine the impact of an increase in the range of a variable on the range of the electrical parameters. Modeling for this purpose is known as manufacturing sensitivity analysis. This type of information is often helpful in prioritizing metrology and understanding the electrical impact of process variation. Another type of model-based sensitivity analysis has been used to relate the electrical test signature to physical defect type for interconnect structures [8,9]. This is sometimes called defect to fault mapping.


It is usually associated with off-line materials characterization and failure analysis laboratories that employ FIB systems, and thus it is not discussed at all in this chapter.
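Returning to manufacturing sensitivity analysis, a minimal sketch of how physical process ranges can be propagated into an electrical-parameter range follows (first-order, root-sum-of-squares propagation; the sensitivities and ranges are hypothetical placeholders, not values from Refs. [6,7]):

```python
def electrical_range(sensitivities, physical_ranges):
    """First-order propagation: given partial derivatives dE/dp of an
    electrical parameter E with respect to physical parameters p, and
    the 3-sigma range of each p, estimate the 3-sigma range of E."""
    return sum((s * r) ** 2
               for s, r in zip(sensitivities, physical_ranges)) ** 0.5

# Hypothetical: dVt/dL (V/nm), dVt/dtox (V/nm), dVt/ddose (V per cm^-2)
print(electrical_range([2.0e-3, 5.0e-2, 1.0e-8], [5.0, 0.1, 5.0e5]))
```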

24.2 Metrology for Lithography Processes: Critical Dimension Measurement and Overlay Control

Critical dimension and overlay measurements control two of the most important parts of manufacturing: feature delineation and the stacking of layers. In this section, CD and overlay measurement technology and application are reviewed. Detailed chapters on each CD measurement method and on overlay measurement can be found in the Handbook of Silicon Semiconductor Metrology [10]. Control of CD, including line edge roughness (LER), across the die and across the wafer starts with control of the range of CDs across the photomask. Since photomasks have glass substrates, they easily charge during scanning electron microscopy based CD measurement. Thus, CD measurement on photomasks requires special considerations, which are discussed below. CD metrology is also done on the patterned photoresist structures as well as after the features are etched. Issues associated with these measurement steps are also covered.

24.2.1 Critical Dimension Measurement and Calibration

Today, both critical dimension scanning electron microscopes (CD-SEM) and scatterometry are used during volume manufacture to control the process range for line width and contact via diameter (area). Cross-sectional SEM images and CD-SEM are used to evaluate process conditions after a change in recipe. Electrical measurements of the effective transistor gate length and metal line width are also done using test structures. Electrical measurements support tighter control of electrical transistor parameters than is possible with current physical CD-SEM measurements. Lithography process control using CD-SEM or scatterometry measurements can be done after exposure and development of the resist, thus permitting reprocessing of wafers that have CD values outside of process tolerance limits. CD is also measured after etch of the poly-silicon transistor gate or the trenches and vias used for interconnect. In this section, we will discuss CD-SEM and scatterometry tool and measurement issues, calibration, and electrical CD measurement.

24.2.1.1 CD-SEM Tools and Measurement Issues

A CD-SEM is a scanning electron microscope that has been designed for low voltage (<~1 keV) measurement of line width. Figure 24.4 illustrates the operation of a CD-SEM, and Figure 24.5 shows a typical CD-SEM design. Conventional wisdom leads one to believe that low voltage operation is the only way to prevent damage to the integrated circuit (IC) chip, and at this time commercial CD-SEMs operate at voltages between roughly 1 keV and a few hundred eV. Although CD-SEMs differ from in-line and laboratory SEMs in that very short working distances (the distance between the wafer and the SEM electron beam lens) are used to optimize rapid, low voltage imaging, new lens designs are reducing the differences between systems [11]. In order to maintain optimum and reproducible fields over the sample, sample tilt is not used in CD measurement. Recently introduced CD-SEMs have become more capable of measuring sidewall angle: by using the electron lens to tilt the electron beam, sidewall shape and angle can be measured, as discussed below, and multiple tilt angles allow determination of sidewall angle through geometric considerations. Because the beam, rather than the stage, is tilted, the rate at which the sample stage can scan the wafer increases. Precision sample stages that can locate specific wafer coordinates, combined with image pattern recognition capability, allow automated CD measurement at specific locations on selected die across the wafer. Two components of the SEM hardware are considered key to higher resolution and greater CD precision. First, the final electron beam lens has been designed so that the magnetic field extends beyond the lens and around the part of the sample being imaged; this type of lens is referred to as an external extended field lens or snorkel [12]. The second development is the advanced secondary electron detector. The extended field also collects the secondary electrons, which are then passed through a Wien filter that produces energy dispersion prior to detection [13]. This allows ultra rapid collection at 5× to 10× television (TV) scanning rates.

FIGURE 24.4 How a critical dimension scanning electron microscope (CD-SEM) works. A line scan is averaged over a length of the measured line using a specially designed SEM (field emission source, scanning coils, lens, secondary electron detector, and sample stage; a top-down image is formed). The increase in secondary electron intensity at the line edge is shown: emitted electron intensity is largest at the edge of a line and least at a flat surface, because sharp features such as the line edge have more surface area for electron emission, and more emitted electrons come from sidewalls than from flat areas.

A discussion of the origin of the CD-SEM signal and its interpretation as a line width will assist in understanding CD-SEM measurements; a recent review of SEM based CD measurement, calibration, and SEM matching is recommended [13]. A line trace of detected electron intensity vs. sample position, a SEM image of a line structure, and a SEM cross-section of a line structure are shown in Figure 24.4. Both low energy secondary electrons and backscattered (inelastically scattered) electrons can be used, separately or in combination, to produce an image. Secondary electrons are low energy electrons emitted from the valence band of the sample after excitation by the primary electron beam [14], and they are often labeled as SE-1 electrons [13,14]. The SE-1 yield is a function of the sample material, sample shape, and the energy of the primary electron beam. Sharp features and side walls emit more secondary electrons than a flat surface. The secondary electron coefficient is a measure of the number of secondary electrons per incident electron from the primary beam. For most materials, the secondary electron coefficient peaks between 1 and 2 keV electron beam energy [13]. Electrons that scatter off sample atoms and out of the sample are known as backscattered electrons [13]. Some of these electrons will have lost energy through several collisions inside the sample and are thus inelastically scattered; some authors [13,14] refer to these as SE-3 secondary electrons, while others prefer to designate them not as secondary but as backscattered electrons [13,14]. Other backscattered electrons, known as elastically scattered electrons, hit atoms close to the surface and escape before any energy is lost; these are referred to as SE-2 by some authors [13]. There will be more backscattered electrons from small sharp features and sidewalls than from flat surfaces. The emitted electron intensity will therefore be higher at the edge of the line structure, as shown in Figure 24.4. In older CD-SEMs, the detected signal is a combination of both the low energy secondary electrons (SE-1) and the backscattered electrons (SE-2 and SE-3). The way in which these contributions are mixed will determine the shape of the signal and how well the line edge algorithm is able to locate the line edge. The signal shown in Figure 24.4 demonstrates that one cannot interpret the signal intensity in terms of the sample height; the peak intensity cannot be interpreted as the line edge position [14,15]. Critical dimension measurement is done by averaging over a large number of line scans across a length of line deemed representative. (The SEM image is composed of many line scans, with the vertical or horizontal position of each scan moved by a known increment.)

FIGURE 24.5 Diagram of a CD-SEM lens (Schottky emitter, anode, gun alignment, final aperture, aperture alignment, aperture lens, stigmator, detector, and objective lens). The Applied Materials CD-SEM uses a combined electrostatic/magnetic lens and has a through-the-column emitted electron detector. (Figure courtesy of Bob Burkhardt, Applied Materials.)

This averaging removes some of the edge roughness effects and sample variation. At the time of writing this chapter, many different methods of determining edge position exist, and each supplier can choose a method that optimizes the precision of its own instrument. Some of the algorithms are shown in Figure 24.6. It has been shown that the linewidth can vary by 100 nm according to the algorithm selected [12]. Clearly, CD-SEM measurements must be calibrated by a method that is more independent of sample shape and line materials. This points to the need for a standardized algorithm so that measurement equipment can be matched, which is critical for manufacturing metrology. The above discussion also points to the need for a fundamental model that relates the true feature shape to the signal inside the CD-SEM [15]. Through Monte Carlo modeling, the secondary electron intensity of a linescan can be related to lineshape; in addition, the effect of the width of the electron beam can be removed. This modeling has also been used to improve CD precision. The use of very low voltages also changes linescan shape, as shown in Figure 24.7.
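To make the algorithm dependence concrete, here is a toy version of a threshold-crossing (linear-approximation style) edge finder (the synthetic linescan and threshold fraction are illustrative; production tools use calibrated, supplier-specific algorithms):

```python
import math

def linewidth_threshold(x, signal, frac=0.5):
    """Report the outermost positions where an averaged linescan crosses
    frac * max(signal), and their separation as the apparent linewidth.
    Changing frac (or switching to a derivative or Fermi-fit criterion)
    changes the reported width -- the effect shown in Figure 24.6."""
    level = frac * max(signal)
    above = [xi for xi, s in zip(x, signal) if s >= level]
    return above[0], above[-1], above[-1] - above[0]

# Synthetic two-peaked linescan: bright edges near x = 200 and x = 400 nm
xs = list(range(601))
scan = [math.exp(-((xi - 200) / 15.0) ** 2) + math.exp(-((xi - 400) / 15.0) ** 2)
        for xi in xs]
print(linewidth_threshold(xs, scan))
```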

FIGURE 24.6 The algorithm used for determining the line edge impacts the measured value of line width. The plotted signal is intensity counts vs. scan position (nm) from the SEM line scan. Three algorithms are shown: linear approximation, Fermi fit, and derivative midpoint. (Figure first appeared in Ref. [12]; used with the authors' permission.)

Once again, the algorithm that extracts linewidth from linescan information must be altered to allow for changes in linescan shape. Over the past several years, printed linewidth has fallen below 50 nm. These resist features often have rounded tops, and a line scan over this type of feature does not have the same shape as that of larger rectangular lines, as shown in Figure 24.8. In order to determine linewidth, new algorithms were developed. These algorithms use the same sort of edge determination approach as that used for measurement of beam width using a sharp edge feature [16].

FIGURE 24.7 Line scan shape change during ultra-low-voltage CD-SEM (intensity vs. pixel position at 100, 300, and 800 eV). Decreasing voltage significantly changes the linescan intensity vs. position; the high-intensity "lobes" normally observed at the line edge disappear at ultra-low voltage. (Figure courtesy of Neal Sullivan, Schlumberger.)

FIGURE 24.8 Impact of round-topped resist lines on the CD-SEM line scan (scale bars: 50 nm).

Line shape information is critical to further processing and device performance. Formation of transistor source/drain and low dose drain features is done by implantation with the poly-silicon gate acting as a mask; changes in gate shape can therefore alter the implanted structure, changing the transistor's electrical performance. Metal line resistivity can be affected by changes in line shape. Although modeling methods exist, the most used methods of controlling lineshape are tilt beam CD-SEM and scatterometry. In Figure 24.9, we show the determination of line shape from tilt beam CD-SEM [17]. Other approaches include the critical dimension atomic force microscope (CD-AFM) and destructive analysis using a dual column FIB. Marchman has shown the effect on CD value associated with the density of lines in the measurement area [12]. Additional charging is sometimes observed for dense lines. The effect varied with SEM supplier, and one SEM had a 20 nm change in measurement offset. The variation in offset with SEM model was attributed to different electron beam energies. The effect of line density can be minimized by selecting the appropriate electron beam voltage for each process step [12]. The effect of the electron beam on 193 nm (ArF) photoresist is well documented. Due to the large variety of photoresist and antireflective coating materials, the amount of shrinkage due to exposure to the electron beam is a function of the materials as well as the beam energy and dosage. Photoresist shrinkage can be minimized by using measurement recipes (procedures) that reduce exposure of the measured line or via to the electron beam and by optimizing the beam energy. Many different measurement schemes have been proposed for removing the effect of resist shrinkage from the data; these include measuring the feature two or more times before taking the final CD measurement. Recent reports indicate that photoresist shrinkage can be reduced or eliminated by using ultra-low voltage (~100 eV) CD-SEM [18]. Measurement and control of LER and line width roughness (LWR) are becoming more critical as the transistor gate length scales. Establishing a single definition of each of these quantities is difficult, since standard definitions are still under consideration. In that light, the definitions used in the 2003 ITRS were selected as a start. The LWR is 3σ of the CD over the spatial frequencies associated with the pitch and the drain extension [1]:

1/P ≤ spatial frequency ≤ 1/(0.5Xj)

where P is the pitch and Xj is the drain extension.
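As an illustration of this band-limited definition, the sketch below computes a 3σ LWR from a sampled linewidth trace by keeping only the spatial frequencies between 1/P and 1/(0.5Xj); the trace, sampling step, and parameter values are illustrative, not from the standard.

```python
import numpy as np

def lwr_3sigma(widths_nm, step_nm, pitch_nm, xj_nm):
    """3-sigma line width roughness over the band 1/P <= f <= 1/(0.5*Xj)."""
    w = np.asarray(widths_nm, dtype=float)
    w -= w.mean()                                  # remove the mean CD
    freqs = np.fft.rfftfreq(len(w), d=step_nm)     # cycles per nm
    spectrum = np.fft.rfft(w)
    band = (freqs >= 1.0 / pitch_nm) & (freqs <= 1.0 / (0.5 * xj_nm))
    spectrum[~band] = 0.0                          # keep in-band roughness only
    return 3.0 * np.std(np.fft.irfft(spectrum, len(w)))

# Simulated trace: 50 nm line sampled every 2 nm (scan lines < 5 nm apart)
rng = np.random.default_rng(0)
trace = 50.0 + rng.normal(0.0, 1.5, 512)
print(lwr_3sigma(trace, step_nm=2.0, pitch_nm=200.0, xj_nm=30.0))
```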



[Figure 24.9 panels: tilt-beam images at 5° and 2°, with the side wall reconstructed by stereoscopic analysis.]

FIGURE 24.9 Tilt-beam CD-SEM allows the reconstruction of lineshape. Figure courtesy Bob Burkhardt of Applied Materials.

The goal was to distinguish between line width roughness and CD changes. Although this defines the spatial frequency range used to calculate LWR, it does not define the length of line that must be used to determine these frequencies. Given the highly statistical nature of roughness, sampling line segments much longer than the pitch will be necessary while sampling scan lines less than 5 nm apart [19].

24.2.1.2 CD-SEM Focus, Calibration, Tool Matching, and Manufacturability

A unified specification for CD-SEM technology has been developed for the sub-130 nm technology nodes [20]. This reference provides a broad overview of how one can judge the manufacturing compatibility of a CD-SEM. CD-SEM measurements are calibrated using reference materials, CD-AFM, comparison to cross-sectioned samples, and correlation to electrical linewidth measurements. We describe each method below. SEM performance criteria include beam diameter, resolution, and sharpness [14]. The Fourier transform of a SEM image is a powerful means of monitoring sharpness [14]. The usefulness of this methodology has been demonstrated in a production environment [21]. CD-SEM magnification is calibrated using a reference material such as the NIST Standard Reference Material 2090 [12]. Recently, NIST and Sandia National Laboratories have developed a CD reference material for physical and electrical CD measurement [22]. The test structure is both a microelectromechanical system and an electrical test structure, and it has specially etched, sharp sidewalls so that it can serve as a reference material for CD-SEM or CD-AFM. Since SEM measurements are strongly affected by the material (photoresist vs. poly-silicon gate vs. TiN/Al/TiN vs. W/TiN), multiple reference samples provide optimum process control.



[Figure 24.10 panels: AFM line scans of an undercut line obtained with needle, cylinder, and boot probe tips.]

FIGURE 24.10 Probe tip shape greatly affects the observed atomic force microscope (AFM) image. There are samples where the cylindrical tips provide superior images. The cylindrical tip shape is expected to be the only mechanically stable tip shape for analysis of dense 100 nm features. Both etched fiber-optic tips and carbon nanotubes are being investigated. (From Dai et al., Nature, 384, 147, 1996.)

A single reference material is considered appropriate for checking SEM drift and CD-AFM precision [22]. Commercialization of this reference material was not complete when this chapter was written. Martin and Wickramasinghe pioneered the application of atomic force microscopy to lithographic metrology needs [23,24]. Griffith [25–27] and Marchman [28,29] have shown the utility of CD-AFM calibration of CD-SEM measurements. Ideally, CD-AFM measurements are independent of sample material. When the CD-AFM is itself well calibrated, it should provide the accuracy and precision required for CD-SEM calibration. An ASTM standard for characterizing probe tip shape is now available [30]. This is a critical part of maintaining measurement reproducibility. The CD-AFM measurement precision has been discussed in the literature [31].

[Figure 24.11 plot: SEM−AFM delta (nm), ranging from −10 to 20 nm, vs. process level (VNDA MOD1, VNDA MOD2, VNDB, ETCH_Ox, DUV_OX, DUV_Poly, DUV_SI, ILIN_SI, DECA) for isolated lines.]

FIGURE 24.11 The effect of materials changes on CD-SEM offset was found to be a function of CD-SEM equipment model. Each SEM uses different primary beam voltages. Taken from reference 2.3 and used with the author’s permission. Figure courtesy Hershel Marchman.



[Figure 24.12 plot: CD (nm, 130–360) vs. measurement site (1–66) for KLA8100 CD-SEM and cross-section data, isolated and dense lines, with linear calibration fits relating the cross-section CDs to the KLA8100 readings.]

FIGURE 24.12 Calibration of CD-SEM using SEM cross-sectional data. Data for both isolated and dense lines show the same offset over a broad range of linewidths. Figure courtesy Arnie Ford.

Using "boot tips" (see Figure 24.10), the precision (σ) was found to be 2 nm for the top of the line and 1.5 nm in the middle of the line [31]. Height measurement precision was 0.5 nm, and side wall angle precision was 0.21° for the left wall. In Figure 24.10, we show schematically how probe tip shape affects a line scan across an undercut line. This illustrates the relationship between the AFM image and the sample; the specific relationship will be different for each type of probe tip shape (e.g., sharp pointed tips vs. rounded boot tips). Calibration of CD-SEM requires that the CD-AFM have greater accuracy than the CD-SEM and appropriate precision. The different CD-SEM offsets associated with the different materials found at subsequent lithographic process steps are shown in Figure 24.11. Cross-sectional SEM is an effective method of calibrating CD-SEM. CD-SEM calibration is done across a range of linewidths, as shown in Figure 24.12. Calibration data for isolated and dense lines both show a constant offset from 350 to 130 nm linewidths. In an effort to provide greater statistical significance, CD-SEM algorithms now allow for determination of linewidth for several lines in the image field. As the number of lines in this field increases, the average CD in the field provides information identical to scatterometry. Projecting into the future, when it may be possible for both methods to sample the same number of lines in a measurement area, there will be one key difference between scatterometry and CD-SEM other than time per analysis area: the CD-SEM will be capable of providing the range and average of the local distribution of CD values. This development will be closely followed by the metrology community.

24.2.1.3 Scatterometry

Scatterometry refers to the use of scattered light to determine lineshape and CD. There are two main approaches to scatterometry, and Raymond's review is recommended [32]. In the single-wavelength, multi-angle method, light is scattered from a grating test structure and collected at a series of angles [33]. The intensity maxima of diffracted light occur at angles that depend on the shape and width of the lines making up the grating structure. The polarization dependence of the data is often significant. This method requires specialized equipment for data collection. In the single-angle, multi-wavelength method,



the reflected intensity versus wavelength also depends on linewidth and shape [32]. Commercial spectroscopic ellipsometers (SEs) found on in-line optical metrology systems can record the multi-wavelength, single-angle data. This method, also known as phase profilometry, is schematically shown in Figure 24.14. In both methods, the intensity of the reflected light is compared to a library of scattering patterns simulated for a range of linewidths and feature shapes. Recently, computational improvements have allowed calculation of feature shape and CD without libraries. Figures 24.13 and 24.14 show the two approaches to scatterometry. In order to provide an overview of scatterometry, it is useful to start with the scattering or diffraction of light from a regularly spaced grating structure [32]. The intensity maxima of the diffracted light occur at specific angles given by the well-known grating equation:

sin θi + sin θn = nλ/d

where θi is the angle of incidence, θn is the angular location of the nth diffraction order, λ is the wavelength of the incident light, and d is the spatial period (pitch) of the structure [32]. Very careful analysis of the diffraction pattern allows one to directly determine the lineshape and width and the thickness of patterned features and some of the layers below [32]. In order to fully interpret the scattering patterns, a series of scattering patterns is simulated for a range of feature shapes and dimensions using the Rigorous Coupled Wave Approach [32]. The single-wavelength, multi-angle approach is known as "2-Θ" scatterometry [32]. The second method of scatterometry, single-angle, multi-wavelength, has two major variations. In one variation, an SE is used to measure the usual parameters Δ (called "Del") and Ψ using a diffraction grating as a target [33]. These parameters are defined below in the section on ellipsometry. The changes in Δ and Ψ from those expected for an unpatterned film stack allow determination of the average line shape and CD over the measured area. The second variation uses a perpendicular optical path for a reflectometer equipped with polarized light [34]; it requires that the parallel- and perpendicularly polarized light be oriented relative to the lines in the diffraction grating.
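A minimal numerical sketch of the grating equation is given below: it lists the angular locations of the propagating diffraction orders for assumed, illustrative values of wavelength, pitch, and incidence angle.

```python
import numpy as np

def diffraction_orders(wavelength_nm, pitch_nm, theta_i_deg, max_order=5):
    """Solve sin(theta_i) + sin(theta_n) = n*lambda/d for each order n;
    an order propagates only when the resulting sine stays in [-1, 1]."""
    angles = {}
    for n in range(-max_order, max_order + 1):
        s = n * wavelength_nm / pitch_nm - np.sin(np.radians(theta_i_deg))
        if abs(s) <= 1.0:
            angles[n] = round(float(np.degrees(np.arcsin(s))), 2)
    return angles

# Example: 633 nm laser, 1000 nm pitch grating, 47 degree incidence
print(diffraction_orders(633.0, 1000.0, 47.0))
```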

[Figure 24.13 schematic: laser, scanner with variable θi and θd, and movable detectors recording the scattered light intensity.]

FIGURE 24.13 Diagram of single wavelength–multi angle scatterometry measurement of critical dimension (CD).



[Figure 24.14 schematic: multi-wavelength light source, mirror with θin = θout, incident polarized white light, 0th-order reflection, and a polarization-sensitive detector.]

FIGURE 24.14 Diagram of multi wavelength–single angle scatterometry measurement of CD. In-line spectroscopic ellipsometers can be used to measure CD.

As shown in Figure 24.14, the average line shape can be measured in great detail with all forms of scatterometry. Since it is an optical method, all forms of scatterometry can be done quickly, allowing more rapid throughput compared to other methods. When comparing scatterometry to CD-SEM, it is important to note that scatterometry measures the average CD while CD-SEM measures a single value from that distribution. Unpublished reports have proposed the need to average from 10 to more than 50 individual lines (measured by CD-SEM) to determine an average CD that tracks the average measured using scatterometry. There are many difficulties associated with direct comparison of CD-SEM and scatterometry, including the need to determine exactly where along the line to compare (top vs. middle vs. bottom) [35].
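The library-comparison step described above can be sketched as a least-squares lookup; real systems use much denser libraries, interpolation, or regression against a rigorous coupled-wave solver, so the toy spectra and the (CD, sidewall-angle) parameterization below are purely illustrative.

```python
import numpy as np

def best_library_match(measured, library):
    """Return the library entry whose simulated spectrum is closest to
    the measured spectrum in a least-squares sense."""
    best_params, best_err = None, float("inf")
    for params, simulated in library.items():
        err = float(np.sum((measured - simulated) ** 2))
        if err < best_err:
            best_params, best_err = params, err
    return best_params, best_err

# Toy library keyed by (CD in nm, sidewall angle in degrees)
wavelengths = np.linspace(250.0, 750.0, 101)
library = {(cd, swa): np.cos(wavelengths / cd) + 0.01 * swa
           for cd in (80, 90, 100) for swa in (86, 88, 90)}
measured = library[(90, 88)] + np.random.default_rng(1).normal(0, 1e-3, 101)
print(best_library_match(measured, library)[0])   # -> (90, 88)
```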

24.2.1.4 Application of CD Metrology

CD measurements can be applied at a number of steps during the fabrication of shallow trench isolation, the transistor gate electrode, and interconnect Damascene patterning. For example, CD can be measured in the patterned photoresist, after the resist is trimmed, and/or after the poly-silicon is etched. The use of scatterometry combined with advanced process control methods for run-to-run process control at the etch steps is reported to provide much tighter control of the electrical properties of transistors such as the saturation drive current [36]. Since scatterometry determines film thickness too, the shape and depth of shallow trench isolation can be determined before oxide fill. At this time, contact and via CD measurements are done using SEM, and CD-SEM is also used during transistor fabrication. Thus, scatterometry supplements CD-SEM by providing tighter process control at certain key steps. Another function of CD metrology is to determine process tolerance ranges after a change in process recipe. The combination of CD-SEM with cross-sectional CD measurements enables control of sidewall



profile and linewidth. Scatterometry has been applied to control of lines during focus exposure, and dual-column FIB has been applied to contact/via control. A recipe change is illustrated by an exposure vs. focus matrix for isolated and dense 250 nm lines, respectively, in Figure 24.15 [37]. The optimal process range is highlighted for isolated and dense lines. This data was taken after a double post-exposure bake (PEB) process. In Figure 24.16, the effects of different post-exposure bakes for this two-bake process are shown. CD-SEM data is correlated to cross-sectional data from the exposure matrix, and CD-SEM is then used for SPC.

[Figure 24.15 panels: focus–exposure matrices from a UVIIHS double-PEB study (125°C/80 s PEB#1 + 140°C/20 s PEB#2) for (a) 250 nm isolated lines and (b) 250 nm nested lines (nominal feature 225 nm); exposure 8.775–14.175 mJ/cm², focus −0.1 to +0.7 µm, with measured feature sizes of roughly 185–325 nm and the optimal process range highlighted.]

FIGURE 24.15 Cross-sectional CD analysis of a process recipe using (a) isolated and (b) dense lines. Figure courtesy John Petersen, based on a figure from Ref. [37].



[Figure 24.16 table: summary of the double-PEB test with UVIIHS resist; all tests used a 130°C/90 s PAB. The bias values recovered below are for nested lines; the original figure also tabulated isolated-line results.]

Test       PEB#1        PEB#2        Bias (nm)
1          125°C/80 s   140°C/20 s   44
2          140°C/10 s   125°C/80 s   0
3          125°C/80 s   150°C/20 s   64
4          150°C/10 s   125°C/80 s   32
5          125°C/40 s   140°C/20 s   39
6          140°C/10 s   125°C/40 s   15
7          125°C/40 s   150°C/20 s   47
8          150°C/10 s   125°C/40 s   49
9          125°C/60 s   145°C/15 s   38
10         145°C/15 s   125°C/60 s   45
Reference  140°C/90 s   none         83

FIGURE 24.16 Example of process set-up illustrated using a double-bake resist process. Figure courtesy John Petersen.

24.2.1.5 Electrical CD Measurement

There are two approaches to electrical measurement of line width. The effective transistor gate length can be determined from the transconductance of the transistor. Another approach is to measure the electrical properties of a test structure such as the cross-bridge resistor test structure shown in Figure 24.17. The electrical line width is determined by first measuring the sheet resistance of the van der Pauw resistor and then measuring the resistance of the bridge resistor. To be more specific, we refer to Figure 24.17 and the description of Ref. [38]. A current I is placed between pads 4 and 3, and the voltage V1+ is measured between pads 5 and 2. The polarity of the current is then reversed and the voltage V1− is measured. Then the current I is placed between pads 3 and 2, and the voltage V2+ is measured between pads 4 and 5. The polarity is again reversed and the voltage V2− is measured. The sheet resistance is calculated using [38]:

Rs = π(|V1+| + |V1−| + |V2+| + |V2−|)/(4 I ln 2)



[Figure 24.17 diagram: cross-bridge resistor test structure with pads numbered 1–6.]
FIGURE 24.17 Cross-bridge resistor electrical test structure for CD measurement.

The bridge resistance is calculated from voltage measurements between pads 5 and 6 while a current is placed between pads 1 and 3. Vb+ is the voltage for current placed between pads 3 and 1, etc. From this, the electrical linewidth can be calculated using:

Rb = (|Vb+| + |Vb−|)/(2Ib);  line width W = Rs L/Rb

L is the length between pads 5 and 6. In-line electrical CD measurements of test structures on product wafers have been used to control CD during volume manufacture. Automated, in-line electrical probers equipped with cameras and pattern recognition are combined with automated electrical measurement systems. A high correlation of CD-SEM and electrical CD measurements has been observed. In one study, correlation coefficients of at least 0.99 were found for both poly-silicon lines and aluminum lines coated with anti-reflective coating (ARC); the slopes of both correlation lines indicated that the CD values from the CD-SEM were wider than the electrical measurements [39]. The transconductance, gm, of a transistor provides another measure of electrical gate length, Leff. Transconductance is defined as the derivative of the drain current with respect to the potential difference between the gate and source, VGS, at a constant potential difference between the drain and source [40]:

gm = (dId/dVGS)|VDS

Plots of the drain current vs. VGS show two different slopes: one at low VGS, and one after the transistor reaches the saturation voltage, VSAT. Therefore, an ideal transistor will show two relationships between gm and Leff, as follows:

gm = βVDS for VDS ≤ VSAT, or gm = βVGT for VDS > VSAT

Here, β ∝ 1/Leff [40]. This type of CD measurement can be used to separate chips according to expected circuit speed due to line-width-induced gate delay.
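A minimal sketch of the cross-bridge calculation follows, using the two equations above; the forced current, voltages, and bridge length are illustrative numbers, and the pad numbering refers to Figure 24.17.

```python
import math

def sheet_resistance(v1p, v1m, v2p, v2m, current):
    """van der Pauw sheet resistance (ohms/square) from the four
    voltage readings: both current paths, both polarities."""
    return (math.pi * (abs(v1p) + abs(v1m) + abs(v2p) + abs(v2m))
            / (4.0 * current * math.log(2.0)))

def electrical_linewidth(rs, vbp, vbm, ib, bridge_length_um):
    """Bridge-resistor linewidth W = Rs * L / Rb, in the units of L."""
    rb = (abs(vbp) + abs(vbm)) / (2.0 * ib)   # bridge resistance
    return rs * bridge_length_um / rb

# Illustrative readings: 1 mA forced current, voltages in volts
rs = sheet_resistance(2.2e-3, -2.2e-3, 2.3e-3, -2.3e-3, current=1e-3)
w = electrical_linewidth(rs, 0.40, -0.40, ib=1e-3, bridge_length_um=100.0)
print(f"Rs = {rs:.2f} ohm/sq, W = {w:.3f} um")
```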

24.2.2 Overlay Process Control

Overlay is the term used to describe the registration of the patterned structure in a layer with the patterned structure in the subsequent layer [41,42]. Most overlay metrology is done using an optical system that automatically evaluates how far the center of the target pattern in the top layer is from the center of the target pattern in the layer below. Every lithographically patterned layer requires overlay control. Tight control of overlay is a critical part of IC manufacture of the three-dimensional structures



[Figure 24.18 panels: (a) box-in-box overlay structures; (b) a misaligned overlay target.]
FIGURE 24.18 Typical overlay patterns. Figure suggested by Chris Nelson.

that form the integrated circuit. Process levels with the smallest IC structures, such as the transistor gate, drive the metrology tool requirements for overlay. Optical overlay metrology requires test patterns that provide high contrast through the dielectric layer that separates the gate level from the first metal level, and subsequently each metal level from the metal level below. Overlay is used after each lithographic mask step except the first. Typical overlay patterns are shown in Figure 24.18a, and a misaligned box target is shown in Figure 24.18b. A cross-sectional view of an overlay box-in-box structure and the line structures is shown in Figure 24.19. In Figure 24.20, we show a block diagram of a lithographic stepper and the misalignment errors associated with the distance between reticle and lens, reticle flatness, and wafer rotation. Overlay must be checked inside each die (intrafield) and from die to die (interfield). Interfield comparisons are done using overlay patterns from the center of the die. In Figure 24.21, we show a wafer with a four-point check of interfield errors. The lithographic stepper or step-and-scan tool must correct for translation errors, scaling (different size translation errors across the wafer), orthogonality (centering of the overlay pattern), and wafer rotation (see Refs. [35,36]). In practice, a monitor wafer with overlay artifacts is run each day to check inter- and intra-field errors. Advanced interconnect processes will challenge overlay process control. Overlay of chemical mechanical polishing (CMP) process steps is made difficult by the fuzzy image observed for the buried box structure. Separating the overlay error due to the stepper from the error in the overlay measurement is often difficult. The origin of the fuzzy image is shown in Figure 24.22. Overlay of Damascene processes is also difficult. The overlay metrology tool consists of a specially designed optical microscope, a camera with a digital detector, and the illumination source [41,42].
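At its core, the box-in-box measurement reduces to the difference between the centers of the inner and outer boxes. The sketch below assumes the edge positions have already been extracted from the microscope image; the (left, right, top, bottom) input format is an illustrative assumption.

```python
def box_center(left, right, top, bottom):
    """Center (x, y) of a box target from its four edge positions."""
    return (left + right) / 2.0, (top + bottom) / 2.0

def overlay_error(inner_edges, outer_edges):
    """Overlay (dx, dy): inner-box center minus outer-box center.
    A perfectly registered target returns (0, 0)."""
    ix, iy = box_center(*inner_edges)
    ox, oy = box_center(*outer_edges)
    return ix - ox, iy - oy

# Inner box shifted +12 nm in x and -5 nm in y relative to the outer box
print(overlay_error(inner_edges=(-488, 512, 495, -505),
                    outer_edges=(-1000, 1000, 1000, -1000)))
```

Per-site (dx, dy) values collected across the wafer are then decomposed into the translation, scaling, orthogonality, and rotation terms that the stepper corrects.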

[Figure 24.19 diagram: cross-sectional view of box-in-box overlay structures showing the resist top box, oxide layers, the bottom box structure, and the Si wafer.]
FIGURE 24.19 Cross-sectional view of overlay box and line structures. Figure suggested by Chris Nelson.



[Figure 24.20 diagrams: three stepper schematics (reticle, lens, wafer) illustrating a reticle-to-lens distance error, a non-flat reticle, and a non-square reticle.]
FIGURE 24.20 Relationship between lithographic stepper and misalignment errors. Figure suggested by Chris Nelson.

[Figure 24.21 diagram: wafer map showing interfield overlay translation errors with scaling (i.e., different error magnitude across the wafer).]
FIGURE 24.21 Four-point check of interfield errors. Figure suggested by Chris Nelson.

[Figure 24.22 diagrams: top-down and side views of an oxide-filled and polished overlay target.]
FIGURE 24.22 Chemical mechanical polishing (CMP) issues for overlay measurement. Separation of overlay error from measurement error is difficult due to CMP-induced edge fuzziness. Figure suggested by Chris Nelson.


24.3 Metrology for Front End Processes

Transistors continue to evolve rapidly. New materials will enable current CMOS, and new transistor designs such as the FINFET are expected to extend CMOS beyond the 22 nm node. One critical challenge facing the industry is scaling the properties of transistors such as switching speed. The properties of a CMOS inverter are often used to illustrate the relationship between the transistor's saturation drive current, Idsat, and the gate delay, τ = Cload VDD/Idsat. The relationship between transistor characteristics and saturation drive current is a function of gate length. In long channel devices (CD > 100 nm), saturation drive current was directly proportional to carrier mobility and inversely proportional to gate length and dielectric thickness. For ultra-short channel devices, the saturation drive current is proportional to the saturation velocity of the carriers, the transistor width, the capacitance, and the on voltage. The electrical properties will transition between long and short channel behavior. Thus, 65–32 nm node transistors will continue to use stress to improve carrier mobility, but the impact on saturation drive current will decrease. Fabrication of transistors requires a number of different processes including formation of isolation structures such as shallow trench isolation, growth of the gate dielectric, doping, and lithographic patterning of gate electrodes. A summary of the metrology steps found in typical front end processing is illustrated in Figure 24.23. In this section, the topics of gate stack film thickness and electrical measurements and dopant dose and junction measurements are discussed. A great variety of process methods are used to induce stress in the transistor channel, and each one requires a different approach to process control. Stress measurement is also reviewed. Gate dielectric thickness is routinely measured on patterned wafers using either multiple-angle or spectroscopic ellipsometry. A recently introduced in-line x-ray photoelectron spectroscopy (XPS) based method is briefly described along with electron beam induced x-ray fluorescence (XRF). These methods have the advantage of determining nitrogen content. Electrical characterization of gate dielectrics includes non-contact corona based methods and traditional capacitance–voltage (C–V) and current–voltage (I–V) measurements. The ability of the non-contact methods to measure on patterned wafers is continuously evolving.

[Figure 24.23 diagram: front-end process flow (pattern and implant wells; shallow trench isolation; gate dielectric formation; pattern poly/metal gate and implant LDD; pattern and implant source/drain) annotated with measured quantities such as electrode material, sidewall profile, etch rate selectivity, barrier height, gate leakage, interfaces, contamination, Vfb, dopant activation, grain structure, the upper and lower interfacial layers of a high-k dielectric, the silicon interface, and metal diffusion.]

FIGURE 24.23 Typical metrology steps used during transistor fabrication. Control of shallow trench isolation, implants for well formation, gate dielectric formation, low dose drain implants, and source/drain implants all require statistically significant measurements for process control.



Dopant dose measurements often require special test wafers, while junction measurements can be done on patterned wafers in special test areas. The use of metal gate electrodes instead of poly-silicon is predicted to occur below the 65 nm technology node, and it is briefly discussed here. The dielectric thickness is determined using a model of the optical properties of the film structure, and the observed thickness value can depend on the size of the analysis area and the specifics of the interface properties used in the model. For silicon oxynitride thermal films less than 2 nm in thickness, the contribution of the interface to optical and electrical properties is significant. The impact of adsorption of ambient contamination on the dielectric surface has required desorption of these films before optical thickness measurements. Electrical capacitance-based measurement of effective dielectric thickness can be done using a fabricated capacitor structure or with equipment that forms the top of the capacitor through corona discharge or with an elastic metal tip approach.

24.3.1 Ellipsometric Measurement of Gate Dielectric Film Thickness

Ellipsometric measurement of film thickness is based on the change in polarization of light after reflection from the dielectric on the wafer surface. If the optical constants of the dielectric are known, the thickness can be determined using a model of the optical structure of the dielectric film on a silicon wafer. The theory of ellipsometric measurement is left to references such as Tompkins [43] and Jellison [44]. In order to better describe ellipsometric data, a brief summary is presented below.

24.3.1.1 The Theory of Ellipsometry

A brief description of elliptically polarized light and the phenomena of reflection and refraction facilitates the discussion of ellipsometry given below [43,44]. In Figure 24.24, a wave of light that is linearly polarized parallel to the plane of incidence is shown reflecting from the sample surface at an angle φ. This wave is designated a "p" wave. Light that is polarized perpendicular to the plane of incidence is designated an "s" wave. When "p" and "s" waves are combined slightly out of phase, the light beam is elliptically polarized. If the phase difference is 90°, the light is circularly polarized. In Figure 24.25, light is shown reflecting from an infinitely thick sample with complex index of refraction N2 = n − ik, where k is the extinction coefficient. The extinction coefficient is related to the absorption coefficient of light, α, through the following relationship:

k = λα/(4π), and I(z) = I0 e^(−αz)
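As a quick worked example, this relationship gives the optical penetration depth 1/α directly from the extinction coefficient; the silicon value used below is an approximate literature number at 632.8 nm.

```python
import math

def absorption_coefficient(k, wavelength_nm):
    """alpha = 4*pi*k/lambda, in inverse nanometers."""
    return 4.0 * math.pi * k / wavelength_nm

# Silicon at 632.8 nm with k of roughly 0.018 (approximate value)
alpha = absorption_coefficient(0.018, 632.8)
print(f"penetration depth 1/alpha = {1.0 / alpha / 1000.0:.1f} um")
```

The resulting depth of a few micrometers is why visible-wavelength ellipsometry senses well below a thin surface film on silicon.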

[Figure 24.24 diagram: electric field E of a p wave reflecting from a surface at angle φ.]
FIGURE 24.24 Reflection of "p" polarized light from a sample surface.



[Figure 24.25 diagram: reflected and refracted light at the interface between ambient N1 and a sample with complex index of refraction N2.]
FIGURE 24.25 Refracted vs. reflected light.

I(z) is the intensity of light at a depth z below the sample surface, and I0 is the initial intensity of light that enters the sample. Since ellipsometric measurement is based on the change in the elliptical polarization after reflection, we must relate the reflection coefficient to the film thickness. The Fresnel reflection coefficients for reflection from an infinitely thick sample, for polarizations parallel (p) and perpendicular (s) to the plane of incidence, are:

r12p = (N2 cos φ1 − N1 cos φ2)/(N2 cos φ1 + N1 cos φ2)

r12s = (N1 cos φ1 − N2 cos φ2)/(N1 cos φ1 + N2 cos φ2)

This leads to the complex, total reflection coefficients for a multi-layer stack:

Rp = (r12p + r23p e^(−i2β))/(1 + r12p r23p e^(−i2β))

Rs = (r12s + r23s e^(−i2β))/(1 + r12s r23s e^(−i2β))

In Figure 24.26, we show the multiple reflections described by the Fresnel coefficients used above. The phase thickness β is related to the physical thickness d by

β = 2π(d/λ)N2 cos φ2

The approximate nature of this model is illustrated by the transmission electron micrograph of a 3 nm thermal oxide film shown in Figure 24.27. The interface between the oxide and silicon substrate is not sharp.

[Figure 24.26 diagram: multiple reflections in a film (N2) between the ambient (N1) and the substrate (N3).]
FIGURE 24.26 Reflection of light from a layered sample.



[Figure 24.27 image: TEM cross-section labeled gate oxide thickness and Si/SiO2 interface control.]

FIGURE 24.27 Transmission electron micrograph of a ~3 nm silicon dioxide gate dielectric. This image illustrates the lack of a sharp interface between the oxide and substrate. Figure courtesy Bob McDonald.

A complete model for measurement of gate dielectric thickness includes the optical effect of the interfacial region. In addition, local thickness variations across the area of a sub-micron gate will be averaged in typical commercial measurement systems. Averaging over a large area results in sub-0.1 nm precision. Local thickness variations can result in threshold voltage variations. Total reflection coefficients for any model, and the optical constants of the oxide layer, can be programmed into commercial, in-line systems. The ellipsometer actually measures the parameters Δ (called Del) and Ψ. Del is the phase difference after reflection, Δ1 − Δ2. Ψ is defined by the following equation:

tan Ψ = |Rp|/|Rs|

The fundamental equation for ellipsometry is:

ρ = Rp/Rs = tan Ψ e^(iΔ)

In Figure 24.28, we show how Δ and Ψ change with thickness for a SiO2 film. After a thickness of 283.2 nm, the values of Δ and Ψ repeat.
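Combining the Fresnel coefficients, the phase thickness, and the fundamental equation, a minimal single-film model can be coded directly; the sketch below uses approximate indices for SiO2 and silicon at 632.8 nm, and sign conventions for Δ vary between instruments.

```python
import numpy as np

def psi_delta(d_nm, wavelength_nm=632.8, angle_deg=70.0,
              n1=1.0, n2=1.46, n3=3.865 - 0.020j):
    """Psi and Del (degrees) for a single film (N2) on a substrate (N3),
    using the two-interface total reflection coefficients above."""
    s1 = np.sin(np.radians(angle_deg))
    c1 = np.cos(np.radians(angle_deg))
    c2 = np.sqrt(1 - (n1 * s1 / n2) ** 2 + 0j)   # complex Snell's law
    c3 = np.sqrt(1 - (n1 * s1 / n3) ** 2 + 0j)
    rp12 = (n2 * c1 - n1 * c2) / (n2 * c1 + n1 * c2)
    rs12 = (n1 * c1 - n2 * c2) / (n1 * c1 + n2 * c2)
    rp23 = (n3 * c2 - n2 * c3) / (n3 * c2 + n2 * c3)
    rs23 = (n2 * c2 - n3 * c3) / (n2 * c2 + n3 * c3)
    phase = np.exp(-2j * 2.0 * np.pi * (d_nm / wavelength_nm) * n2 * c2)
    rp = (rp12 + rp23 * phase) / (1 + rp12 * rp23 * phase)
    rs = (rs12 + rs23 * phase) / (1 + rs12 * rs23 * phase)
    rho = rp / rs                                 # rho = tan(Psi) e^(i Del)
    return np.degrees(np.arctan(np.abs(rho))), np.degrees(np.angle(rho)) % 360

# Del moves by roughly a quarter degree as a thin oxide grows by 0.1 nm
print(psi_delta(2.0))
print(psi_delta(2.1))
```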

24.3.1.2 Multi-Wavelength and Spectroscopic Ellipsometry

Gate dielectric thickness measurement requires exceedingly good precision. This has motivated the use of optical metrology systems that have multiple optical paths. These systems include single wavelength ellipsometers (SWE) for optimum precision for thin films, spectroscopic reflectometers for thicker films, and SEs for complicated films and film stacks. Some commercial ellipsometers use one of four single wavelengths for gate dielectric control. The commercial four-wavelength systems use optics that simultaneously measure over a range of angles. Film thickness is determined in SEs by fitting Δ and Ψ data taken at many wavelengths to values calculated from an optical model of the sample.



[Figure 24.28 plot: Del vs. Psi trajectory for a film with n = 1.46 on silicon, marked from film-free through 200, 400, 600, 800 Å, etc., up to 2600 Å; Del spans 0–360° and Psi 0–90°.]

FIGURE 24.28 Trajectory of Del and Psi for silicon dioxide. This figure shows that the Psi and Del trajectory repeats after 283.2 nm, making film thickness determination ambiguous for thicker films. This is one of the reasons that multiple wavelengths are used in many film thickness tools.

24.3.1.3 Ellipsometric Systems

In Figure 24.29a–c, we show block diagrams of multiwavelength and spectroscopic ellipsometers. A rotating polarizer ellipsometer is shown in Figure 24.29a since it is typical of the design used in many commercial systems. There are a variety of system designs, some of which incorporate a focusing system that averages over a spread of angles, as shown in Figure 24.29b and c. Precision requirements for ultra-thin gate dielectric measurement may drive the use of non-focused systems. Detailed technical descriptions of commercial ellipsometers are available from the technical literature [45–47].

24.3.1.4 Optical Models for Thin Gate Films

The traditional model of an oxide layer on a silicon substrate is that of a single, flat layer that has a sharp interface with the silicon below. This layer has also been referred to as a slab. Although an interfacial layer with different physical and chemical properties is present after SiO2 or SiOxNy is grown, a comparison of the precision of optical models for SE systems has shown that including the interfacial layer usually results in a larger (worse) precision than modeling the entire film as a single layer (or slab) [48]. Attempts at modeling using an interfacial layer described by a Bruggeman effective medium approximation (BEMA) provide a better goodness-of-fit to the experimental data but a larger precision value [48]. These effects are due to too much correlation between fit variables and usually result in unrealistic values. Alternative single layer models (without an interfacial layer) are used that provide better precision values without sacrificing the goodness-of-fit. Since SWE provides improved (i.e., smaller) precision values for these thin films, the optical constant for a single SiO2 or SiOxNy layer is used. This approach assumes that the nitrogen concentration and depth profile remain uniform across the wafer and from wafer to wafer. The accuracy of oxynitride film thickness depends on the accuracy of the optical constants for that film. According to the ITRS, if the gate dielectric EOT is 1.0 nm thick and the process tolerance is 5% at 3σ (process variation), then P/T = 10% = 6σ/(0.1 nm), which gives a measurement variation of 3σ = 0.004 nm. This precision can be achieved by desorbing the surface contamination layer that adsorbs on the wafer surface prior to measurement. A number of approaches have been made commercially available, including the use of infra-red laser irradiation and thermal desorption [49]. High-k dielectric materials are being investigated as potential replacements for silicon oxynitride. The optical properties of these films are highly variable according to the method of deposition and the process



conditions. Further, there is no well-known tabulated form of the optical constants available for films such as hafnium oxide or hafnium silicate. Therefore, to accurately model these types of films, a parameterized model is needed that gives the optical properties of the film over the necessary wavelengths. To date, the Tauc–Lorentz model has proven a suitable choice. This model combines the classical Lorentz oscillator and the Tauc expression for the band gap of amorphous materials to give a parameterized function that models the imaginary part of the dielectric function [49]. However, two problems appear to still exist with this model: (1) no current manufacturing-capable SE is able to implement this type of optical model; and (2) when there is too little optical contrast between the high-k layer and the interfacial layer, the Tauc–Lorentz model is unable to distinguish the two layers and usually combines the two thicknesses into one. Better models are being developed.

24.3.1.5 Resolution for Ultra Thin SiO2 and Alternate Dielectrics

For an SWE using a laser operating at 632.8 nm, a change in thermal SiO2 thickness of 0.1 nm results in a change in Δ of ~0.25° for films that are between 0 and 10 nm in thickness. The change in Ψ at a single wavelength is not a straight-line function between 0 and 4 nm of thickness. A commercial ellipsometer is capable of measuring changes in Δ and Ψ of less than 0.01°.

[Figure 24.29a schematic: "Spectralaser" optical system for four-wavelength simultaneous multi-angle ellipsometry with UV reflectometry, including a deuterium lamp, 458 nm argon-ion laser, 633 nm helium–neon laser, 780 and 905 nm laser diodes, spatial filter/beam combiner, polarization modulator, video microscope, focusing objectives, auto-focus detector, analyzer prism, ellipsometer array detector, UV objective, grating spectrometer, and UV reflectometer detector.]

FIGURE 24.29 Block diagrams of several commercial ellipsometer designs. (a) Four wavelength, focusing, multi-angle ellipsometer system made by Rudolph Technologies, Inc. Gate oxide thickness measurements can be made using only the 632.8 nm wavelength from the HeNe laser. Figure courtesy J. Sullivan, Rudolph Technologies. (b) UV-1280SE spectroscopic ellipsometer system made by KLA-Tencor. The light source is a broad-band xenon lamp, and the ellipsometer is a rotating polarizer type. Figure provided by J.J. Estabil, KLA-Tencor. (c) Thermawave Optiprobe 5240 combined absolute ellipsometer (632.8 nm HeNe laser), spectroscopic ellipsometer (450–840 nm [W halogen lamp] and 210–450 nm [deuterium deep UV source]), beam profile reflectometer, beam profile ellipsometer, and deep UV spectrometer (190–840 nm). Figure courtesy W. Lee Smith, Thermawave.



[Figure 24.29b schematic: UV-1280SE optics with a xenon lamp, fiber-optic cable, UV filter, rotating polarizer, wafer, analyzer, prism, grating SE detector, pattern recognition camera, lamp and sample detectors, and separate DBS and UVSE beam paths.]
[Figure 24.29c schematic: Optiprobe optics with tungsten visible and deuterium DUV sources, 632.8 nm HeNe and 675 nm lasers, polarizers, analyzers, rotating wave plates, beam profile reflectometer (BPR) and beam profile ellipsometer (BPE) paths, DUVSE and spectrometer array detectors, autofocus and quad cell detectors, a 0.9 NA lens, and the wafer stage.]
FIGURE 24.29 (continued)


Several groups have discussed the sources of measurement error which affect accuracy and precision [49–53]. An error of 0.05° in incident angle near 70° results in a small error in Δ of less than 0.02° and in Ψ of ~0.1° when λ = 632.8 nm [50]. Multiple wavelengths provide a method of averaging data that helps overcome errors. The Drude approximation predicts a linear change in Δ vs. thickness for thin (<10 nm), non-absorbing (the imaginary part of the refractive index is small or zero) films of different refractive indices. Data for several films such as silicon dioxide and silicon nitride show that the parameter Δ (at λ = 632.8 nm and 70° incident angle) increases as the refractive index increases [43]. The real part of the refractive index of silicon nitride and titanium dioxide is larger than that of SiO2. Therefore, the resolution of ellipsometry to changes in a single layer film of these materials on a silicon substrate would be better than for SiO2. The refractive index of non-isotropic materials such as single crystal TiO2 must be accounted for, and an averaging procedure would provide the needed information for a film having randomly oriented grains. Since grain texture is process dependent, the optical constants of films made using new processes must be checked before using ellipsometry for process control. For thin dielectrics below 100 Å, Ψ has been shown to be almost independent of thickness. Alternatively, Δ changes almost linearly with thickness. Furthermore, Δ ~ 0 for very thin dielectrics. Therefore, the sensitivity for thin dielectrics is with Δ. The SWE that uses a rotating compensator has an intensity output proportional to sin Δ. Considering the small angle approximation (sin Δ ~ Δ), any resolution of thickness changes for thin dielectrics will be detected using an SWE with a rotating compensator.

24.3.1.6 Poly Si Thickness

The issues associated with single wavelength measurement of poly-crystalline silicon thickness have been discussed in the literature [43]. Optical wavelength ellipsometry is hampered by the fact that silicon is absorbing in this range, while silicon is transparent in the infra-red region. Recently, in-line ellipsometer systems have been equipped with infra-red wavelength capability. The biggest issue is that the refractive index changes with micro-crystallinity [43]. It is also known that the micro-crystallinity of poly-silicon or a-Si can change across a wafer. Poly-silicon and a-Si thickness measurement is routinely done with commercial systems after careful consideration of the above issues.

24.3.2 Electrical Measurement of Gate Oxide Thickness

This section briefly discusses the measurement of the effective dielectric thickness using capacitors or transistors. Several references which provide details of capacitance measurements are recommended as supplementary reading [54–60]. In addition to these reviews, theoretical corrections have been used to extend capacitance measurements to 2 nm SiO2 [60]. It is useful to discuss capacitance–voltage measurement in order to better compare optical and electrical measurement methods. The effective dielectric thickness refers to the thickness of the region that acts as a dielectric in the capacitor or transistor. The electrically measured thickness can be different from that measured optically when there is depletion of carriers in the poly-silicon above and/or in the silicon below the gate dielectric. In Figure 24.30a, a plot of capacitance vs. voltage (called a C–V curve or plot) for an ideal capacitor illustrates the response of a SiO2 film greater than 4 nm thick on a uniformly doped p-type substrate [54–59]. The assumption is that the poly-silicon gate and the uniformly doped p-type silicon both act as metal plates in a perfect capacitor when the applied voltage on the gate is negative. Band bending causes positive charge accumulation at the surface of the p-type silicon, just as one expects positive charge buildup on a metal plate capacitor, as illustrated in Figure 24.30b. The capacitance of the P-channel metal-oxide semiconductor (PMOS) structure drops as the voltage moves from negative toward zero. In a defect-free structure, the valence and conduction bands in the substrate are flat at an applied voltage equal to the flat band voltage. The flat band voltage for two "metal" plates having the same work function is zero, as discussed in the example shown in Ref. [55]. The equilibration of the Fermi levels (between the doped poly-silicon gate and the uniformly doped substrate) results in band bending at zero gate voltage and an ideal flat band voltage Vfb = (φm − Xs − Ec + EF)/q.



[Figure 24.30 panels: (a) C–V curves showing Cox with low-frequency, high-frequency, and deep-depletion branches; (b) band-bending diagram and charge density ρ for a poly-Si gate/SiO2/p-doped Si capacitor at Vg < 0, with hole (majority carrier) accumulation at the substrate surface; (c) band diagram defining qVFB, the electron affinity XS of the doped Si, φm, Eg, EC, EF, and EV, with VFB = (φm − XS − EC + EF)/q; (d) equivalent circuit with capacitances for the poly-Si gate, gate dielectric, bulk Si, and SiO2–Si interface, plus a resistance due to dielectric leakage.]

FIGURE 24.30 Capacitance–voltage measurement of oxide thickness. (a) Typical C–V data is shown. The oxide thickness is determined from the constant capacitance at negative voltage for p-type silicon substrates. (b) The band bending diagram and charge density plot for gate electrode and p-doped substrate show the charge accumulation at the p-substrate–gate oxide interface. (c) The origin of the flat band voltage. The energy band diagram for the polysilicon/gate oxide/p-doped substrate illustrates band bending at zero applied voltage. (d) The equivalent circuit diagram shows the various contributions to the capacitance of the transistor gate.

Here, φm is the work function of the metal, Xs is the electron affinity of the doped silicon substrate, Ec is the conduction band edge of the doped silicon substrate, EF is the Fermi level, and q is the electric charge. A band diagram of the poly-silicon/SiO2/p-type Si system based on Ref. [60] is shown in Figure 24.30c. The C–V data is obtained by sweeping the voltage at either low or high frequency. This frequency has a large effect on the capacitance observed at positive voltage, as shown in Figure 24.30a. Series and parallel capacitors have been used to describe the capacitance of the entire system, as shown in Figure 24.30d [59]. The capacitance of the oxide is in series with the charge in the semiconductor and in the poly-silicon gate. The charge in the semiconductor is in parallel with the interface charge, and Schroder divides the charge in the semiconductor into the hole accumulation charge Cp, the space-charge region bulk charge, and the electron inversion charge, which is not shown in Figure 24.30d. When the accumulated charge in the p-type silicon is very large, Cp is considered to be shorted and thus acts as a perfect metal plate in an ideal metal-oxide semiconductor (MOS) capacitor. For thicker oxides (>4 nm), the gate electrode is considered to be a perfect metal capacitor plate. Using these assumptions, the effective thickness can be calculated using the following relationship: Cox = εoxA/dox [55–60]. Cox is the maximum capacitance at negative applied gate voltage for PMOS, dox is the effective dielectric thickness, A is the area of the capacitor, and εox is the dielectric constant (real part of the dielectric constant). Defects in the oxide layer result


in the trapping of charge in the oxide layer [54–60]. This shifts the flat band voltage from its ideal value and allows the quality of the gate dielectric to be monitored through the value of the flat band voltage. Several studies have shown an excellent correlation between Cox and ellipsometric oxide thickness; by this we mean that a plot of the uncorrected electrical thickness vs. physical thickness is linear. The electrical and physical measurements will both have contributions from the interface between the gate oxide and silicon substrate. The dielectric constant of the interfacial region is different from bulk SiO2, and this is not corrected for in the electrical determination of thickness. For very thin oxides (<2 nm), the C–V behavior is not ideal, i.e., the capacitance is not constant in the accumulation or depletion regions of the C–V curve. The present status of the correlation between electrical and physical measurements is discussed below. Some production FABs monitor oxide thickness on both uniformly doped substrates and on PMOS and N-channel metal-oxide semiconductor (NMOS) test structures. The growth rate of SiO2 is dependent on doping type (p vs. n) and concentration, especially for lightly doped regions. Therefore, the gate oxide (here we refer to >2 nm SiO2) must be measured on test structures that have implants representative of the channel region of a transistor. This means that when tight control of oxide thickness is required, the oxide thickness in both the PMOS and the NMOS channel regions must be measured. The assumptions made in the evaluation of the perfect MOS capacitor, such as uniform doping, are then no longer valid. The interface between the silicon and the "bulk-like" dielectric contributes to the measured capacitance. When this non-bulk oxide capacitance is constant or varies linearly with oxide thickness, correlation with optical (ellipsometric) measurement is possible. Depletion of charge in the poly-silicon gate and quantum states in the bent bands at the interface of the crystalline silicon with the gate dielectric layer alter the C–V behavior described above. In Figure 24.31, we show C–V data for thermally grown SiO2 ranging in thickness from 2.5 to 1.5 nm [61]. The 2.5 nm oxide shows classical C–V behavior, while the 1.5 and 2 nm oxides show the effects of quantum behavior and poly depletion. A procedure for removing both quantum and depletion effects from C–V data has been described [61,63]; the resulting oxide thickness can be directly compared to ellipsometric measurements. When the oxide is very thin, part of the voltage applied to the poly-silicon gate drops inside the gate electrode itself.

[Figure 24.31 plot: capacitance (fF/µm², 0–18) vs. gate voltage (−3.0 to +3.0 V) for Al/TiN-gate, pure RTO oxides with TOX = 1.5, 2.0, and 2.5 nm (4×10¹³ B/cm², 1×10¹³ As/cm² implants); accumulation and inversion branches shown, with CINV = CACC for TOX = 2.5 nm.]

FIGURE 24.31 Capacitance–voltage data for thin oxides using Al/TiN gate electrodes. All data is courtesy George Brown, Texas Instruments. C–V data for 2.5, 2.0, and 1.5 nm oxides made using rapid thermal oxidation show the increased non-ideal capacitance in accumulation and depletion as the oxide becomes thinner.
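As a worked example of the Cox = εoxA/dox relation, the sketch below converts an accumulation capacitance read from the Figure 24.31 axis into an effective oxide thickness; no quantum or poly-depletion corrections are applied.

```python
EPS0 = 8.854e-12   # vacuum permittivity, F/m
K_SIO2 = 3.9       # relative dielectric constant of SiO2

def effective_oxide_thickness_nm(c_per_area_f_per_m2, k=K_SIO2):
    """t_ox = eps_ox / (C_ox / A), returned in nanometers."""
    return k * EPS0 / c_per_area_f_per_m2 * 1e9

# About 16 fF/um^2 in accumulation (cf. the 2 nm curve in Figure 24.31)
c_per_area = 16e-15 / 1e-12   # convert fF/um^2 to F/m^2
print(effective_oxide_thickness_nm(c_per_area), "nm")   # ~2.2 nm
```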


This effect is a function of oxide thickness and poly-silicon doping. Modeling of the "poly depletion effect" has shown that for a 3.5 nm oxide, the ratio of measured gate capacitance to true oxide capacitance is about 0.84 for a poly gate doping level of 2×10²⁰ cm⁻³ and ~0.75 for 5×10¹⁹ cm⁻³ [62]. For a 1.5 nm gate oxide, the capacitance ratios are 0.7 and ~0.55 for these gate doping levels [62]. Band bending at the silicon substrate–gate oxide interface allows the formation of quantum levels. When a negative voltage is applied and electrons are drawn towards the silicon substrate–gate dielectric interface, the accumulated electrons fill these quantum levels. The filled quantum levels increase the amount of band bending and cause the centroid of the accumulated charge to shift away from the interface. These effects change the capacitance significantly and must be accounted for during electrical measurement of oxide thickness. In Figure 24.32, both corrected and uncorrected theoretical C–V plots for a sub-3 nm SiO2 gate oxide are shown. Measurement of very thin SiO2 with most test equipment requires the use of high frequency (~1 MHz) and transistor structures that have a large width-to-gate-length ratio [62,63]. Direct tunneling through thin oxides distorts the C–V measurement, which can be avoided by use of high frequency (non-static) C–V. Static C–V measurements have been extended to below ~2.5 nm SiO2 by use of special leakage compensation circuitry and numerical correction [63]. C–V metrology should be used for gate thickness on uniformly doped substrates. Simulation of C–V data for channel-doped structures is not available at this time. Recently, a new method of obtaining non-contact, C–V-like data, called the "Quantox," has been introduced. This tool can provide oxide thickness, flat band voltage, and carrier lifetime data [64]. A corona charge, Q, forms the top "gate" for the capacitance measurements and provides the bias sweep. The Kelvin probe measures the surface voltage at each Q bias point, and the surface photovoltage (SPV) is the transient surface voltage measured at each Q bias point when a light is flashed to photoflatten the bands. This results in a low-frequency, non-contact Q–V–SPV curve, shown in Figure 24.33a and b. The author notes that there are differences between traditional C–V and Quantox measurements: the rate of the voltage sweep is different from traditional C–V measurements, and the amount of current tunneling during Q–V–SPV measurement is considerably less. By pulsing the corona charge to deep-deplete the silicon from inversion, non-contact measurements of carrier density and generation lifetime to a controlled depth are also provided.

[Figure 24.32 plot: simulated C–V curves (capacitance in pF vs. gate voltage, −3.5 to +3.5 V) for Tox = 2 nm, Vfb = −1 V, Nsub = 2×10¹⁷/cm³, and Npoly = 6×10¹⁹/cm³, comparing the corrected curve with curves including poly depletion (PD), quantum mechanical (QM), and combined PD&QM effects.]
FIGURE 24.32 Theoretical C–V simulation showing quantum and depletion effects.



[Figure 24.33 panels: (a) Quantox arrangement: a corona source (CO3−, H3O+) applies the corona bias Q over the oxide on p silicon, with a mechanically oscillated Kelvin probe and transient-detection electronics; sequence: 1. apply the corona bias and measure each ΔQ; 2. measure Vs (= Vox + ψ), the probe vibration driving an AC current; 3. stop vibration, flash the light, and measure the SPV; 4. repeat. (b) Comparison of an MOS metal-dot C–V measurement with the COS bias–capacitance and bias–SPV curves, with CFB and VFB marked and SPV = 0 at flat band.]

FIGURE 24.33 (a) Quantox measurement of surface photovoltage (SPV) vs. bias voltage. This figure shows the three components of a Quantox system. A corona charge is used as the gate for a corona-oxide-semiconductor (COS) capacitance measurement. The bias is swept by applying different corona charges. The Kelvin probe measures the transient SPV at each corona (Q) bias point after the bands are flattened by flashing a light. Figure courtesy Steve Weinzierl, Keithley Instruments. (b) Comparison of traditional capacitance–voltage with Quantox Q–V–SPV measurements. Figure courtesy Steve Weinzierl, Keithley Instruments.

The comparison of Quantox data (not corrected for quantum and depletion effects) with ellipsometric data shows a strong correlation. Existing data show that the repeatability is 0.03 nm (1σ). Some workers believe that Quantox measurements are not affected by hydrocarbon buildup on the oxide surface. In addition, poly depletion corrections are not required. Current–voltage measurements also show a strong correlation to oxide thickness and may have better resolution to changes in oxide thickness than ellipsometry [60,61,63,64]. Typical current–voltage data are shown for oxide thicknesses between ~3 and ~1.5 nm in Figure 24.34. Since the I–V data is a strong function of oxide thickness (10 A/cm² for a 0.04 nm decrease in oxide thickness), the measurement resolution appears better than the approximate 0.25° change in Δ per angstrom estimated for ellipsometry at 632.8 nm.


[Figure 24.34 plot: current density (A/cm², 1×10⁻⁹ to 1×10³) vs. applied voltage (0–3.5 V) for n-Si under positive bias (substrate injection); measured data and Simmons Vox model curves for nominal Tox values of 17.6, 20.8, 26.1, and 28.3 Å.]

FIGURE 24.34 Current–voltage data applied to oxide thickness measurement. At 1 V bias, the current varies over seven orders of magnitude as the oxide thickness changes from ~3 to ~1.8 nm. Simulated data is also shown. Figure and data courtesy George Brown, Texas Instruments.

On the basis of the measurement procedure reported by Brown and co-workers, the thickness associated with a measured current must be calibrated using another measurement. The I–V characteristics should depend on doping characteristics such as channel dose. Unless calibrated to C–V, the I–V thickness does not depend on knowledge of the static dielectric constant.

24.3.3 New Methods of Measuring Gate Dielectric Thickness and Nitrogen Concentration

X-ray photoelectron spectroscopy is a well-known materials characterization method that determines the elemental composition along with the chemical state of the elements [65]. The information depth for XPS is roughly the top 5 nm or less. Depth-dependent information can be determined either by ion sputtering or by measuring the XPS signal at different angles of x-ray incidence [66]. Recently, a clean-room compatible XPS system capable of rapid throughput has been introduced [67]. The first data for the precision of film thickness and nitrogen concentration are promising and worthy of mention. This method can also be applied to high-k composition and thickness metrology. The spot size of XPS is determined by the size of the area irradiated by the x-rays and the region of this area that generates electrons that enter the detector. This area is 9 mm in diameter, which is larger than that analyzed by optical methods. Electron microprobe analysis is another materials characterization method that has been adapted for clean-room based metrology. In this technique, x-rays are emitted after irradiation by an electron beam. In one commercial version, low-energy x-rays from elements such as nitrogen and oxygen are measured using wavelength dispersive spectrometers. In order to generate enough x-rays to have a favorable signal-to-noise ratio, a relatively large diameter electron beam with a relatively large total current is used [68]. Preliminary evaluation indicates that the precision for nitrogen concentration is ~1% of the dose between 0.8 and 5% nitrogen concentration, and the precision for thickness is ~1% of the thickness for


thin films [65]. The analysis area is between 10 and 100 µm in diameter, and the electron beam energy is from 0.2 to 10 keV. This method can also be applied to dopant dose metrology.

24.3.4 Doping Process Control

The predominant method of dopant processing is implantation of ionized dopants. For the purposes of this chapter, we will refer to three types of implant steps: high energy—medium dose [10¹³ P⁺/cm² at 700 keV, 10¹³ B⁺/cm² at 1.5 MeV] to form retrograde well structures and [10¹²–10¹³ B⁺ and P⁺ ions/cm² at 100–200 keV] for N and P wells; medium energy—high dose [10¹⁵ As⁺/cm² at 10–80 keV; 10¹⁵ B⁺/cm² at 0.5–40 keV; or 10¹⁵ BF₂⁺/cm² at 5–200 keV] to form the source and drain and [10¹³–10¹⁴ As⁺, P⁺, B⁺, or BF₂⁺/cm² at 0.2–100 keV] to form the lightly doped drain (also called the drain extension); and low energy—low dose [10¹¹–10¹² As⁺, P⁺, B⁺, or BF₂⁺/cm² at 0.2–100 keV] for the threshold voltage implant. Each of these types has a different process tolerance and thus requires a different precision from the metrology tool [69]. Another issue is uniformity across the wafer. Several references have discussed implant process control including SPC and uniformity [70,71]. The three most prevalent methods of dopant dose control are four point probe, optically modulated optical reflection (commercially known as the Thermawave), and secondary ion mass spectrometry (SIMS) depth profiling. Four point probe is used for high dose implants and the Thermawave for medium to low dose process control [69]. SIMS can be applied to junction and dose measurements for all doses. The precision of optical densitometry has recently been improved, and new methods have been proposed. In this section, we discuss all these methods.

24.3.4.1 Four Point Probe

The four point probe method characterizes the active dopant dose by measuring the resistivity and relating this to the active carrier concentration. Two reviews of the fundamentals of four point probe measurements are available [72,73]. The four point probe method of measuring resistivity is based on the use of a linear array of four probes in which two probes carry the current and two probes measure voltage. The resistance of a two probe configuration includes contributions from probe contact and spreading resistance [72]. The two additional probes are used to sense the voltage due to the current passing through the sample between the other probes; current through these probes is minimized, and thus the spreading resistance and contact terms are minimized. An alternative method involves analysis of the resistances and currents when different pairs of the four probes are used for injecting current and measuring potential. The effect of the spreading and contact resistance can be mathematically eliminated, providing a second approach to removing these effects. Therefore, the contact and spreading resistance terms cancel out in the four point configuration when the voltage probes are used instead of the two probe approach [72]. Typically, the probes are equally spaced and the current runs through the outer probes [72]. The probe spacing on traditional systems is between 0.5 and 1.5 mm [72], and greatly improved probe tips (larger radius probe tips for improved precision) are required for 180 nm technology generations. The measured resistivity is a function of the sample shape. The equation for the resistivity ρ is: ρ = 2πsF(V/I), where s is the probe spacing, V is the voltage at the voltage sensing probes, and I is the current through the current carrying probes. F is the correction factor that accounts for sample shape, and it is a function of thickness, lateral dimensions, and wafer edge proximity. Four point probe measurements are done using monitor wafers having special characteristics. After implantation, the monitor wafers must be annealed to restore crystallinity and activate the dopant [69,70]. These monitor wafers must have resistivity characteristics that permit measurement of the implant dose of interest. For example, it would be difficult to measure the change in resistivity of a low resistivity wafer (e.g., a p⁺ wafer) due to a low to medium dose implant. Epi wafers such as "p" epi on p⁺ substrate also create problems since the current can pass through the lower resistance substrate, which is away from the implant. Implant doses >10¹⁴ ions/cm² are easily measured on wafers having a resistivity ≥20 Ω-cm. Doses in the 10¹¹–10¹² ions/cm² range may be measured using wafers with a high resistivity when


the correct annealing and surface conditions are used. Yarling and Current suggest either "p" or "n" type wafers with a resistivity >100 Ω-cm for low dose samples [70]. Correct annealing conditions are critical. Due to the processing characteristics of different anneal/dopant combinations, the anneal must be optimized for the parameter being tested. Implant dose alone is often tested by long, high temperature furnace anneals, while implant energy is best tested with a short rapid thermal processing (RTP) anneal or, better yet, with a blocking layer (e.g., an oxide layer) that only tests the deeper part of the implant. Similarly, RTP temperature control for implant diffusion is best tested using an implant near the solid solubility. The doping level is most sensitive to the temperature under these conditions.
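
The resistivity relation above reduces, for a thin sheet that is laterally large compared to the probe spacing, to the familiar sheet resistance expression. A minimal sketch in Python, assuming the correction factor F has been taken from standard tables for the actual sample geometry:

```python
import math

def resistivity_ohm_m(v_volts, i_amps, spacing_m, F):
    """Bulk resistivity from the four point probe relation rho = 2*pi*s*F*(V/I).
    F is the geometry correction factor (thickness, lateral dimensions,
    wafer edge proximity) and must come from published tables."""
    return 2.0 * math.pi * spacing_m * F * (v_volts / i_amps)

def sheet_resistance_ohm_sq(v_volts, i_amps):
    """Thin, laterally large sheet limit: Rs = (pi/ln 2)*(V/I) ~ 4.532*(V/I)."""
    return (math.pi / math.log(2.0)) * (v_volts / i_amps)

# Illustrative numbers: 1 mA forced through the outer probes, 10 mV sensed.
rs = sheet_resistance_ohm_sq(10e-3, 1e-3)
print(f"Rs ~ {rs:.1f} ohm/square")   # ~45.3 ohm/square
```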

24.3.4.2 Optically Modulated Optical Reflection

Optically modulated optical reflection monitors thermal wave propagation in an implanted wafer. The commercial system used for this measurement is often called by the supplier's name, Thermawave. Thermal waves are produced by an argon ion laser modulated at a frequency of 0.1–10 MHz [74]. These waves heat the adjacent surface of the wafer, changing the volume of the silicon near the surface. The volume change alters the optical properties of the surface, and this is probed by measuring the reflectivity change using light from a second laser. The amount of heat transfer and the volume change per unit heat depend on the implant dose and implanted species because heat transfer is affected by the damage the implant process creates in the silicon lattice as dopant atoms are added. The sample spot size is approximately 1 µm, and measurements of doses between 10¹¹ and 10¹⁵ atoms/cm² have been reported. Optically modulated optical reflection is distinguished by its sensitivity to low dose implants. Each implant ion typically produces 100 to 1000 damage sites, so the effect from each ion is multiplied. After a certain dose is reached, there is little change in implant damage, and sensitivity to high dose is hampered.

24.3.4.3 Secondary Ion Mass Spectrometry

Secondary ion mass spectrometry can provide dopant dose, and since SIMS data are in the form of a depth profile, they naturally lend themselves to junction measurements. The first dynamic SIMS systems equipped with product wafer capable stages were delivered in 1997. Since SIMS has been described in the chapter on Materials Characterization, only the application to dopant process control is discussed here [75]. SIMS is capable of simultaneously monitoring multiple implant species in the same test structure. Test structures are typically 100 µm × 100 µm and larger and have been routinely incorporated in the scribe lines of product wafers. Test structures are also routinely placed in the die of product wafers, especially during process and pilot line development. Implant dose is measured by integrating the secondary ion signal of the dopant element from the surface to the depth of the implant. Shallow implants remain a significant challenge for SIMS characterization [75]. The secondary ion signal is a strong function of the matrix in which the implanted ion resides (oxide, interface, or bulk silicon). In addition, quantifiable secondary ion signals require that the ion sputtering process reach a steady state. When using energetic primary ion beams such as 5–10 keV O₂⁺ or 5–10 keV Cs⁺, the steady state is not reached until a considerable depth (sometimes up to 50 nm) is sputtered away. Recent studies show that for a 300 eV O₂⁺ primary beam, a steady state is reached for crystalline, oxidized crystalline, or amorphized silicon by the time 0.7 nm has been sputtered away [76,77]. This has driven the use of low energy ion beams for shallow junctions [76–78]. Stable, very low energy ion beams around 100 eV were introduced in 1997. A rule of thumb for depth profiling is that the ion beam energy should be half the implant energy. The optimum conditions for B implant measurement depend on whether or not oxygen gas is used to dose the sample surface along with a primary ion beam of O₂⁺. When possible, very low energies (down to 200 eV or less) are used for the primary beam. This can slow down the analysis procedure if the ion beam current is low, and beam energy and current should be selected according to the lowest concentration of B that one needs to determine [76–78]. The sputter yield decreases below 1 keV primary beam energy, and the detection limit is approximately 10¹⁷ boron atoms/cm³ for 100 eV O₂⁺ at normal incidence. When oxygen dosing is used,


low energy primary ions at incidence angles from 45 to 55° have shown satisfactory results. When no oxygen dosing is used, normal incidence should be used for the primary ion beam. Shallow arsenic implants can be analyzed using Cs⁺ at 60° incidence and 500 eV energy.
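
The dose extraction described above (integrating a calibrated dopant profile from the surface to the depth of the implant) can be sketched as follows; the Gaussian profile is purely illustrative, standing in for a real calibrated SIMS profile:

```python
import numpy as np

def implant_dose(depth_nm, conc_cm3):
    """Areal dose (atoms/cm^2) from a calibrated SIMS depth profile:
    trapezoidal integration of concentration (atoms/cm^3) over depth."""
    z_cm = np.asarray(depth_nm, dtype=float) * 1e-7        # nm -> cm
    c = np.asarray(conc_cm3, dtype=float)
    return float(np.sum(0.5 * (c[1:] + c[:-1]) * np.diff(z_cm)))

# Illustrative Gaussian B profile: Rp = 20 nm, straggle 8 nm, dose 1e15/cm^2.
z = np.linspace(0.0, 100.0, 501)                           # depth, nm
dose, rp, sigma = 1e15, 20.0, 8.0
c = dose / (np.sqrt(2 * np.pi) * sigma * 1e-7) * np.exp(-(z - rp) ** 2 / (2 * sigma ** 2))
print(f"recovered dose ~ {implant_dose(z, c):.3g} atoms/cm^2")  # ~1e15
```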

24.3.4.4 Junction Depth Measurement via Carrier Illumination

Carrier Illumination is a new, non-destructive method of measuring junction depth. Borden has described Carrier Illumination in depth [79]. The junction has been considered to be the depth at which the carrier concentration drops to 10¹⁸/cm³; due to the increase in channel doping, a value of 1×10¹⁹/cm³ is becoming more relevant. The method first excites excess carriers using a 2 µm spot laser. The carriers form a quasi-static distribution that piles up at the edge of the doped layer. The pile-up occurs because the direction of flow of the excess carriers changes from vertical in the doped layer to radial, horizontal flow in the substrate. The depth of this junction is determined by measuring the reflectivity of a second laser [7,80].

24.3.5 Metrology for Measurement of Stress Enhanced Carrier Mobility

Stress metrology faces the difficult challenge of controlling processes used to increase carrier mobility through stress. As discussed above, greater carrier mobility increases transistor drive current and thus transistor switching speed [81]. Strained silicon substrates are also being investigated for this purpose [82]. First, the diverse set of processes used to increase mobility is discussed, and then the metrology methods and challenges are reviewed. Stress is the force applied to a material, and it can be compressive or tensile. Stress can also be uni-axial, that is, along one crystallographic direction, or bi-axial, along two perpendicular crystallographic directions. Process induced stress is typically uni-axial, while lattice mismatch with the substrate induces a bi-axially strained silicon surface layer. Stress induced changes in physical properties are typically modeled as if the silicon were a continuous elastic material. Recent publications indicate that uni-axial stress has some significant advantages over strained silicon layers [83]. The amount of stress required for improving carrier mobility is less for uni-axial stress. In addition, the process based approach allows one to use compressive stress for p-channel devices and tensile stress for n-channel devices, while the substrate approach results in only tensile stress [83]. Process based increases in carrier mobility have been used in the manufacture of the 90 and 65 nm technology nodes. Processes induce nearly uni-axial tensile stress to increase electron mobility and uni-axial compressive stress to increase hole mobility. Intel uses tensile silicon nitride layers above the NMOS and replaces the source and drain in the PMOS with silicon germanium to compressively stress the channel [84]. Although direct measurement of the stress in the buried channel is impossible, one can surmise how process control is done. Measurement of silicon nitride thickness can provide control of a stable silicon nitride deposition process for the NMOS. It seems likely that a combination of CD and Ge concentration measurement could be used for controlling the PMOS mobility. Recent publications discuss modeling of how the Intel process induced compressive stress along the ⟨110⟩ direction of silicon increases hole mobility [85]. Texas Instruments has found that the use of recessed SiGe source and drain extensions improves PMOS drive current by 35% [86]. Fujitsu has shown that by changing process conditions, the silicon nitride layer can be used to impart both tensile and compressive stress [87]. IBM uses the stress induced by the shallow trench isolation to stress PMOS channels [88]. The amount of stress can be altered by changing the distance between the STI and the edge of the gate electrode. Future approaches to stress induced improvement of carrier mobility include the use of metal gates and strained silicon substrates [89]. Strained silicon substrates are grown on top of silicon germanium layers, where the lattice mismatch provides the stress. The strained silicon can either be left on top of the SiGe or transferred to a wafer with a surface oxide layer, producing so-called sSOI. Strain can be measured by a number of different methods such as Raman spectroscopy, x-ray diffraction, and photoreflectance. The average stress across a wafer can be measured using the change in wafer curvature. Some of these methods are briefly described below.


Raman spectroscopy measures strain using the shift in the wavelength of the silicon optical phonons. The shift can be related to the stress through the elastic equations. In the future, nano-Raman systems based on near field optical microscopes will push the spatial resolution below 200 nm. High resolution x-ray diffraction measures small changes in lattice constant (strain). Measurement of patterned wafers requires large test areas. Photoreflectance spectroscopy can measure strain in un-patterned wafers. The electronic transitions between energy levels occur at specific energies that change when the lattice structure is stretched or compressed. Bi-axial stress splits energy levels that are degenerate in un-strained silicon or germanium. In particular, the E1 transition energy at 3.392 eV is used to monitor strain. The average stress across a wafer can be calculated from the change in wafer curvature after film deposition. Die level stress can be determined from local wafer curvature using interferometry. The new Coherent Gradient System uses a referenced interferometer to measure local curvature of the wafer. Using well known algorithms, the local stress changes can be calculated from the local wafer curvature changes when a wafer is measured before and after a process step.
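
The curvature-to-stress conversion mentioned above is commonly written as the Stoney relation; the sketch below, with rough values for a (100) silicon substrate, shows the form of the calculation rather than the algorithm of any particular commercial tool:

```python
def stoney_stress_pa(t_sub_m, t_film_m, r_pre_m, r_post_m,
                     e_sub=130e9, nu_sub=0.28):
    """Average film stress (Pa) from the change in wafer curvature:
    sigma = [E_s/(1-nu_s)] * t_s^2 / (6*t_f) * (1/R_post - 1/R_pre).
    Defaults give a biaxial modulus E/(1-nu) ~ 180 GPa, roughly Si(100)."""
    biaxial_modulus = e_sub / (1.0 - nu_sub)
    curvature_change = 1.0 / r_post_m - 1.0 / r_pre_m
    return biaxial_modulus * t_sub_m**2 * curvature_change / (6.0 * t_film_m)

# Illustrative: 725 um thick wafer, 100 nm film, wafer bows from nearly
# flat (R ~ 1e9 m) to R = 150 m after deposition.
sigma = stoney_stress_pa(725e-6, 100e-9, 1e9, 150.0)
print(f"average film stress ~ {sigma / 1e9:.2f} GPa")   # ~1.05 GPa
```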

24.4 Interconnect Process Control

In this section, we refer to interconnect processes as those that begin with the contact between the transistor and the first level of on-chip interconnect metal (Metal 1) and end with the passivation layer over the final on-chip metal level. In Figure 24.35, we illustrate the typical process steps that are controlled by metrology. The process schemes for interconnect are expected to be a mixture of traditional etched metal/inter-level dielectric and Damascene (also known as inlaid metal) processing. Chemical mechanical polishing and copper will be used in both types of interconnect processes. The interconnect via and contact (also known as plug) material could be tungsten or aluminum with titanium nitride barrier layers, or copper with a barrier (possibly Ta) in the future. Routine physical metrology needs for interconnect processes include metal and dielectric film thickness, step coverage, CMP end point and flatness, and particle/defect control. Particle detection and control is covered in the chapter on Contamination Free Manufacturing. When interconnect processes are stable, film stress is not routinely monitored. The CMP processing may obviate the need for measurement of B and P in reflow glass inter-layer dielectric (ILD) films. Electrical metrology needs include testing of contact/via resistivity and metal level defects.

FIGURE 24.35 High level overview of interconnect metrology.


Process control needs also include measuring voids in copper lines and large "killer" pores in porous low k materials. Although porous low k is (at the time of writing) a research material, great progress has been made in the measurement of pore size distributions.

24.4.1 Interconnect Film Thickness

In this section, we discuss film thickness measurement for contacts (e.g., TiSi₂), barrier layers (e.g., TiN, TiW, Ti/TiN stacks, or CoSi₂), metallic anti-reflective coatings, interconnect dielectric layers, and all metal interconnect films (e.g., Al or Cu). Ideally, film thickness metrology would be done on patterned wafers and include measurement of the liner film thickness inside a via/contact structure. Most film thickness measurements require large, flat sampling areas. A number of commercially available methods have been applied to metal and barrier layer film thickness measurement, such as acoustic methods [90,91], resistivity by four point probe [92], x-ray reflectivity [93], XRF [94], and a new method known as Metal Illumination (MI) [95]. A newly introduced in-line method that is very similar to electron microprobe analysis uses a relatively large electron beam to excite XRF. This is covered in the section below on determination of metal film thickness using x-ray methods. Optical reflectivity and ellipsometry are both used for dielectric film thickness [96]. It is useful to note that ellipsometry can measure the thickness of very thin barrier layers. Off-line measurement of patterned metal thickness is possible using scanning electron microscopy and energy dispersive x-ray spectroscopy (EDS) if careful modeling of x-ray emission is done for the film stack and structure being analyzed.

24.4.1.1 Interconnect Metal Film Thickness Measurement Using X-Ray Methods

Metal and barrier layer film thickness can be measured in-line by commercially available x-ray reflectivity and XRF. Both methods can determine barrier thickness for barrier films under seed copper layers. Measurement of barrier layers under the thicker electrochemically deposited copper layers used in the fabrication of metal lines is more difficult, but reports of successful use of XRF exist. Measurement on patterned layers is possible using the electron beam excited XRF approach embodied in the Matrix 100 [97]. Although x-ray reflectivity may be applied to patterned films, its use is not widely reported at this time. Innovations in x-ray optics make it difficult to know the true limits of x-ray methods for patterned layer measurements. X-ray reflectivity is a very powerful method of characterizing film thickness. X-ray reflectivity is also sometimes referred to as grazing incidence x-ray reflectivity (GI-XRR). In GI-XRR, a well collimated, monochromatic x-ray beam is reflected off a flat sample over a range of incident angles [93]. The intensity of the specular reflection is used for film thickness determination. Non-specular x-ray scattering can also be measured, and film and interface roughness analyzed. Interference patterns in the form of intensity oscillations are formed when GI-XRR is done on single and multi-layer thin film samples. Reflectivity from a homogeneous substrate, a single layer film, and a two layer film is shown in Figure 24.36. The film thickness of a single layer film can be determined from the angular difference between the peaks of subsequent intensity oscillations. Two periods of intensity oscillation are present when analyzing a two layer film. Because the wavelength of a monochromatic x-ray is accurately known, thickness can be accurately determined. Density measurement is more difficult. It is important to note that in-line measurement of x-ray reflectivity is done using an optical path that simultaneously collects reflected x-rays at multiple angles. Specially designed optical paths and detectors allow rapid measurements, making it possible to measure multiple wafers per hour inside a clean room. In some sense, x-ray reflectivity and ellipsometry are similar in that both require development of appropriate models to interpret the data. The key to successful modeling is incorporating interfacial characteristics that allow for high precision and an appropriate goodness of fit to the data. Due to the difference in electron density, very thin (0.5 nm) barrier layers have been measured below seed copper films. Due to x-ray absorption, measurement of barrier layers under electroplated copper is usually not considered to be possible.
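
The fringe-spacing-to-thickness relation just described can be sketched as follows; the small-angle form below ignores the refraction correction near the critical angle, and the fringe spacing used is illustrative:

```python
import math

def xrr_thickness_nm(fringe_spacing_deg, wavelength_nm=0.15406):
    """Single layer thickness from the angular spacing of successive XRR
    fringe maxima, t ~ lambda/(2*delta_theta), with delta_theta in radians.
    Default wavelength is Cu K-alpha."""
    return wavelength_nm / (2.0 * math.radians(fringe_spacing_deg))

# Fringes ~0.088 degrees apart imply a ~50 nm film.
print(f"t ~ {xrr_thickness_nm(0.088):.1f} nm")
```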


[Figure 24.36 schematic and plot: an x-ray tube, monochromator, thin-film sample, and spatially resolving x-ray sensor; reflectivity (log scale, 1 down to 0.0001) vs. angle (0.0–1.2°), with the Cu fringes and Ta fringes labeled.]

FIGURE 24.36 X-ray reflectivity based film thickness measurement. In-line x-ray reflectivity throughput is increased by the use of optics that allow the entire reflectivity vs. angle data set to be obtained at one time. X-ray reflectivity data for a copper film on a tantalum barrier layer is shown.

In-line measurement of metal film thickness has been done using both energy dispersive and wavelength dispersive XRF [94]. Micro-focus tools have a spot size of 50 µm, while non-micro-focus systems typically have millimeter size spots. Film thickness can be quantified from 1 nm to 10 µm. Repeatability, which is the short term contribution to precision, is <0.5 nm for 50 nm Ti and TiN films and <1 nm for 1 µm Al films. Using modeling, film stacks can be characterized. This method applies to any barrier layer, TiN type anti-reflective coating, and both Al and Cu interconnect metal films. Typically, XRF is used to measure the thickness of thin films that are composed of an element or elements different from the silicon or silicon dioxide layer below, e.g., Al or TiN. In addition, the substrate composition must be constant. This greatly simplifies calculation of film thickness from the fluorescent intensity of the element(s) that compose the film. X-ray fluorescence is created by both the incident x-ray beam and fluorescence from surrounding material such as the substrate below the film. The thickness, t, of a film of element i with measured fluorescent intensity Iᵢ is related to the fluorescent intensity of an infinitely thick film of element i, Iᵢ(∞), the film density, ρ, and the effective mass attenuation coefficient, μ̄ᵢ, by [94]:

t = −(1/ρμ̄ᵢ) ln(1 − Iᵢ/Iᵢ(∞))

The quantity μ̄ᵢ is a function of the x-ray wavelength, the angle of incidence ψ′, and the angle of exit to the detector ψ″:

μ̄ᵢ = μᵢ,λ csc ψ′ + μᵢ,λ csc ψ″

where μᵢ,λ is the mass attenuation coefficient of element i at the wavelength of the x-rays used to excite the sample. Daily calibration is required due to the small, day to day variations in x-ray tube intensity and detector efficiency. In Figure 24.37, the x-ray optical paths for one type of XRF system are shown. The x-ray beam is used as a means of exciting XRF from the film. Since the films are poly-crystalline, x-ray diffraction is difficult to minimize unless the film is highly textured. Although the depth of penetration depends on the angle of incidence and the absorption of the material, it is of the order of microns. X-ray fluorescence can also be observed from more than a micron below the sample surface [94].
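
A direct transcription of the thickness relation above, with illustrative (not calibrated) input values:

```python
import math

def xrf_thickness_cm(i_ratio, density_g_cm3, mu_mass_cm2_g,
                     psi_in_deg, psi_out_deg):
    """Thickness from the XRF ratio I_i/I_i(inf):
    t = -ln(1 - I_i/I_i(inf)) / (rho * mu_eff), where
    mu_eff = mu*csc(psi') + mu*csc(psi'')."""
    csc = lambda a_deg: 1.0 / math.sin(math.radians(a_deg))
    mu_eff = mu_mass_cm2_g * (csc(psi_in_deg) + csc(psi_out_deg))
    return -math.log(1.0 - i_ratio) / (density_g_cm3 * mu_eff)

# Hypothetical film: 30% of the infinite-thickness intensity, density
# 4.5 g/cm^3, mass attenuation 200 cm^2/g, 45 degree in/out geometry.
t_cm = xrf_thickness_cm(0.30, 4.5, 200.0, 45.0, 45.0)
print(f"t ~ {t_cm * 1e7:.0f} nm")
```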


[Figure 24.37 schematic of the XRMF spectrometer: evacuable sample chamber with XYZ stage, side window x-ray tube source with primary filter/collimator, final collimator, shutter, detector collimator, Si(Li) crystal detector on a liquid nitrogen cold finger, preamplifier, and a color CCD camera/lens microprobe viewing system.]

FIGURE 24.37 Diagram of an x-ray fluorescence film thickness tool. The Kevex tool uses a collimated probe beam and collimators for the fluoresced x-rays. Figure courtesy David Wherry and Ed Terrell, Kevex.

Analysis of XRF is made difficult by the absorption of fluoresced x-rays and subsequent re-emission. Therefore, reference materials are critical to the success of this method. In addition, x-ray tube emission and x-ray detectors require daily recalibration.

24.4.1.2 Metal and Interconnect Dielectric Film Thickness Measurement Using Acoustic Methods

In this subsection, two non-destructive methods for measuring metal film thickness are described. Considering the original application of one acoustic method, dielectric film thickness measurement may also be possible. One of these systems is based on impulsive stimulated thermal scattering (ISTS), which is now referred to as laser induced surface acoustic waves. Commercially, the method is now known as SurfaceWave. Detailed descriptions of this method are available in the literature [90]. The acoustic wave travels parallel to the surface of the sample in ISTS, in contrast to picosecond laser ultrasonic sonar, where the acoustic wave travels downward. ISTS is a rapid (<1 s per data point), small spot (15 µm × 30 µm) film thickness method which has outstanding precision when applied to single layer films. The optical path has been designed for a long working distance and with solid state, long life lasers, making it suitable for in situ sensor applications. In ISTS, a pair of optical pulses, each having a duration of a few hundred picoseconds, are overlapped in time and space on a sample's surface. Optical interference between the crossed excitation pulses forms a spatially varying interference or "grating" pattern of alternating light (constructive interference) and dark (destructive interference) regions. Formation of the grating pattern is illustrated schematically in the inset of Figure 24.38a. The grating fringe spacing D (or wave number q) is given


FIGURE 24.38 (a) The time dependent modulation of reflectance due to the acoustic wave produced by impulsive stimulated thermal scattering (ISTS; Surface Wave) from a single copper layer. The three steps in a measurement are: 1. The probe beam strikes the surface, 2. The grating structure is formed from the overlapping parts of the excitation laser pulse and the acoustic wave is formed, 3. The acoustic wave travels away from the excitation area and the traveling surface ripples diffract the probe beam in and out of the detector. Figure courtesy Michael Gostein, Philips Advanced Metrology Systems. (b) Acoustic frequency of Cu film.


by: q = 4π sin(Θ/2)/λ = 2π/D, where Θ is the crossing angle of the two excitation pulses. The sample absorbs radiation in the light regions, resulting in mild heating and thermal expansion that launches coherent acoustic waves whose wavelength and direction match those of the interference pattern with wave vector ±q [90]. The acoustic waves generate a time-dependent "ripple" on the sample's surface. The depth of modulation oscillates at the acoustic frequency, which is determined by the sample's mechanical (e.g., elastic) and physical (e.g., thickness) properties and by the boundary conditions (e.g., adhesion) between the different layers in the sample. While the acoustic waves are propagating outward from the excitation site, heat flows from the heated grating peaks to the unheated grating nulls at a rate determined by the sample's thermal diffusivity. One advantage of this acoustic method is that by changing the angle Θ, the wave vector is changed, and properties such as sample density can be experimentally determined. Metal film thickness methods, such as XRF, Rutherford backscattering, and acoustic methods, all require knowledge of the sample density. ISTS can provide experimental density information specific to the metal film fabrication process. Changes in film density can also be monitored. The acoustic response is measured in its entirety by diffracting a probe laser pulse having a duration of several hundred microseconds off the surface ripple to form a pair of signal beams (the +1 and −1 diffracted orders). One of the signal beams is detected to generate a light-induced signal waveform [90]. This diffraction mechanism is indicated schematically in the inset of Figure 24.38a. This figure also shows time-dependent data taken from a copper film. These data were collected during a period of one or two seconds. During its first 70 ns, the signal waveform oscillates according to the copper's acoustic frequency and decays primarily due to the travel of the acoustic waves away from the excitation site into the rest of the sample. Figure 24.38b shows the power spectrum of the copper data in Figure 24.38a to be centered at about 270 MHz. To determine film thickness, the frequency is analyzed as described in the abovementioned references [90]. Figure 24.39 illustrates the accuracy of ISTS measurements, showing the results from a set of copper/oxide/silicon and tantalum/oxide/silicon samples having thicknesses ranging from 20 to 180 nm. These data show the correlation between the center-point film thickness determined using ISTS (y axis) and a conventional four point probe (x axis). Film thicknesses were calculated from the four point probe data using resistivity values that are within 2% of the bulk resistivities for both tantalum and copper. The line in each of the plots has a slope of one and is used as a guide to the eye.
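
The excitation geometry sets the acoustic wavelength directly, and the measured frequency then gives the surface wave phase velocity; the laser wavelength and crossing angle below are illustrative, not the parameters of any particular instrument:

```python
import math

def grating_period_um(crossing_angle_deg, wavelength_nm):
    """ISTS grating period D = 2*pi/q with q = 4*pi*sin(theta/2)/lambda."""
    q_per_nm = 4.0 * math.pi * math.sin(math.radians(crossing_angle_deg) / 2.0) / wavelength_nm
    return (2.0 * math.pi / q_per_nm) * 1e-3   # nm -> um

# Illustrative: 532 nm excitation crossed at 3.1 degrees -> ~9.8 um period;
# with the ~270 MHz copper response above, v = f*D gives the wave velocity.
period = grating_period_um(3.1, 532.0)
velocity = 270e6 * period * 1e-6               # m/s
print(f"D ~ {period:.1f} um, v ~ {velocity:.0f} m/s")
```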

[Figure 24.39 plot: ISTS thickness (Å) vs. 4-point probe thickness (Å), 0–2400 Å on both axes, with the measured points falling along the slope-one line.]

FIGURE 24.39 Comparison of ISTS and 4-point probe measurement of film thickness. The data were taken using the InSite 300, and the figure was provided by John Hanselman of Active Impulse Systems. This technology is now provided by Philips AMS.


For both the tantalum and copper samples, the correlation between the ISTS and four point probe thickness measurements is excellent over the entire sample set. On average, these values are within 0.6 and 2.9% of each other, respectively. Deviation between the two measurements may be due to a variety of factors, such as the film's resistivity depending on film thickness or grain structure. Such dependence seems particularly evident in the tantalum data, as the deviation of the data from the solid line systematically increases with film thickness. Impulsive stimulated thermal scattering can measure one or more layers in a multilayer structure. Two different approaches are used, based on the thicknesses of the films [90]. The first method is applicable when both layers are thick (e.g., >~1000 Å) and have different acoustic properties. Two acoustic wavelengths are recorded, one short and one long. The two wavelengths probe the sample to different depths. The short wavelength response is relatively more sensitive to the top layer than the bottom one, as compared to the long wavelength response. By analyzing both responses and fitting with a first-principles model, both layer thicknesses can be determined [90]. The second method is applicable when there is a thinner layer underneath the top layer and both layers have different thermal properties. This method is used for barrier–copper seed applications. In this case, only one acoustic wavelength is used. The frequency response of the signal waveform is sensitive roughly to the total metal mass, while the thermal decay time of the waveform is sensitive to the ratio of the two metals, due to their thermal contrast. Using this information, both metal thicknesses are determined [90]. Presently, a buried layer down to 5 nm can be measured and controlled. One advantage of ISTS is that it can measure patterned wafer samples. In the case of a sample that consists of metal lines, the acoustic wave travels away from the excitation area along the array of metal lines. The acoustic response of the metal line/insulator array must then be modeled instead of a blanket film. The barrier layer in this type of sample is on both sides of and below the metal line. Arrays of lines or vias are treated in the analysis as films with "effective" elastic constants that are a composite of those of the constituent materials, i.e., metal and dielectric. In many cases the composite elastic constants can be estimated by simple averaging formulas. More complex structures, e.g., those with a pitch similar to the acoustic wavelength, for which modeling is complicated, require a calibration to determine the effective elastic constants. In addition, the presence of vias below the metal lines complicates modeling. Due to these complications, the response vs. line thickness is usually calibrated, and routine measurement of film thickness for patterned structures is possible for metal lines, pads, and vias. CMP process control is a typical application of patterned wafer measurements [90]. Another acoustic film thickness system was commercially introduced in 1997. This system is based on picosecond ultrasonic laser sonar technology (PULSE). A detailed description of the physical processes involved in making measurements of this type is available [91]. The PULSE technique has high precision (i.e., a small numerical value), sampling speed (2 to 4 s/pt), and spatial resolution (less than a 10 µm diameter spot size), and it can be applied to single and multilayer metal deposition processes.
In a PULSE measurement of a sample consisting of a metal film deposited on a substrate (e.g., Si), an acoustic wave is first generated by the thermal expansion caused by a sub-picosecond laser light pulse absorbed at the surface of the metal. The temperature increase (typically 5°C–10°C) is a function of sample depth, which results in a depth dependent isotropic thermal stress [91] that gives rise to a sound wave propagating normal to the sample surface into the bulk. This wave is partially reflected at the interface between the film and substrate [91]. For an acoustic wave, the reflection coefficient R_A depends on the acoustic impedances Z (Z = density × sound velocity) of the film and substrate materials, and may be evaluated from the relation R_A = (Z_sub − Z_film)/(Z_sub + Z_film) [82]. When a reflected wave (or "echo") returns to the free surface after a time t, it causes a small change in the sample optical reflectivity, ΔR. This change is monitored as a function of time by a second laser probe. Based on the sound velocities of the materials making up the sample (which for most materials are known from bulk measurements) and the echo time, the film thickness d_film may be evaluated from the simple relation d_film = v_s t. As an example, Figure 24.40 shows a PULSE measurement obtained for a sample consisting of a copper film deposited on top of a thin Ta layer (less than 20 nm thick) with a substrate consisting of a thick tetraethoxysilane (TEOS) layer (about 600 nm). The sharp feature observed in a range of time less than

DK4126—Chapter24—23/5/2007—16:21—ANBARASAN—240453—XML MODEL CRC12a – pp. 1–55

24-43

∆R (t )

In-Line Metrology

0

100 200 300 400 500 600 700 800 900 1000 Time (ps)

FIGURE 24.40 Acoustic measurement of copper film thickness using picosecond ultrasonic laser sonar (PULSE) technology. The time dependent reflectivity observed during a PULSE measurement of a copper layer on 20 nm of tantalum on tetraethoxysilane (TEOS) is shown. The ultrasonic wave travels into the copper layer, partially reflecting at the Cu/Ta interface and subsequently the Ta/TEOS interface. When the ultrasonic wave returns to the surface, it alters the optical constants of the surface, and this is monitored by measuring the change in reflectivity. The transit time is directly related to the speed of sound and the layer thickness. The data were taken using a MetaPULSe picosecond laser ultrasonic sonar system, and the figure is courtesy Rob Stoner, Rudolph Technologies.

about 50 ps is associated with relaxation of the electrons in the copper film, which initially gain energy from the ultrashort light pulse. This energy is transferred to thermal phonons in the film, which raises its temperature by approximately one degree. The subsequent diffusion of heat out of the film and into the underlying TEOS occurs on a timescale of hundreds of picoseconds, and this is associated with the slowly decaying "background" signal observed in the figure. The sharp feature observed at a time of about 800 ps is the echo caused by sound which has reflected from the bottom of the copper layer and returned to the surface of the sample. To determine the film thickness from these data, only the echo component is used; the background is discarded. From the product of the one way travel time for sound through the film (406 ps) and the sound velocity (51.7 Å/ps), the film thickness is found to be 2.10 µm. The time-dependent reflectivity change ΔR(t) measured via PULSE depends on the physical mechanisms underlying the sound generation and propagation, and the sound-induced changes in optical properties. These may be simulated with very high fidelity, even for samples consisting of many layers with thicknesses ranging from less than 50 Å to several microns. Using an iterative procedure in which the thicknesses of the layers in a model of the sample are adjusted until a best fit to the measured ΔR(t) is obtained, the commercial PULSE system achieves a typical measurement precision for thickness of less than 0.1 nm (1 Å) with 90% confidence. This modeling methodology also allows other film properties such as density, adhesion, and surface roughness to be obtained along with thickness, since these parameters have a predictable influence on the amplitudes and shapes of the optically detected echoes. In Figure 24.41, we show the PULSE signal from a 200 nm thick layer of TiN on top of aluminum. The ability to measure film density has considerable practical significance, especially for WSix, TiN, WN, and other materials whose composition and structure depend on many aspects of the deposition process such as pressure, target composition, temperature, or gas mixture. PULSE has been used to distinguish between TiN films differing in density (or composition) by only a few percent. PULSE has also been used to detect silicide phase transitions for Ti and Co reacted with silicon at different temperatures, based on changes in sound velocity, density, and optical properties [91]. The commercially available MetaPULSe system uses a solid state, compact, ultrafast laser as a source for both the generating and detecting beams, dividing the single beam into two by means of a simple beam splitter. The sensitivity of the technique can be optimized to give improved throughput and sensitivity for specific materials, and a high throughput copper system is one example [91]. The MetaPULSe system is currently available in a highly automated stand alone configuration equipped with


FIGURE 24.41 Simulated time dependent reflectivity of a PULSE measurement of a TiN layer on a thick aluminum substrate. These data show the effect of having a metal substrate instead of the inter-layer dielectric substrate shown in Figure 24.40. Data taken using a MetaPULSe picosecond laser ultrasonic sonar system. Figure courtesy Rob Stoner, Rudolph Technologies.

an optical microscope, pattern recognition software, and a precision sample stage so that patterned wafers can be monitored. Because of its long working distance and high throughput, PULSE technology can also be applied to in situ measurements.
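
The basic echo-time-to-thickness conversion for the copper example above is simple enough to state directly; a minimal sketch, using the round trip echo time and the bulk sound velocity quoted in the text:

```python
def pulse_thickness_nm(echo_time_ps, sound_velocity_a_per_ps=51.7):
    """Film thickness from a single acoustic echo. The echo arrives after
    a round trip, so d = v_s * t_echo / 2; the default velocity is the
    ~51.7 A/ps copper value used in the text."""
    return sound_velocity_a_per_ps * echo_time_ps / 2.0 / 10.0   # A -> nm

# Echo at ~812 ps (one way travel time 406 ps) -> ~2.10 um of copper.
print(f"d ~ {pulse_thickness_nm(812.0):.0f} nm")
```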

24.4.1.3 Resolution and Precision of Acoustic Film Thickness Measurements

The resolution of ISTS determined film thickness is nearly independent of film thickness and composition [90]. The resolution and precision of ISTS are best known for single layer films. Precision values of less than 0.2 nm are typically observed. The precision is sub-angstrom for a wide variety of single layer films. The reason for the small value of the precision (greater measurement precision = smaller value of precision) is that the general shape of the ISTS data is a damped, high frequency oscillation. This general shape does not change with film thickness; only the frequency and damping characteristics change with thickness and composition. Since the oscillations can be averaged over many periods on every sample, precision values are always small. The precision of PULSE measurements is related to the precision with which the echo times within a sample may be determined [91]. For a hypothetical measurement in which a single echo is analyzed, the precision is equal to the product of the sound velocity in the film (typically about 50 Å/ps) and the uncertainty Δt in determining the centroid (time) of the echo. The two most important factors which influence Δt are the system noise (from mechanical, electrical, and optical fluctuations) and the linearity and precision of the mechanism used to establish the time base (i.e., the delay in time between the generating and measuring laser pulses). The latter is a mechanical delay line consisting of a retroreflector mounted on a linear stage, so that the time base linearity and precision are determined by the linear stage characteristics. In practice, a Δt of about 0.006 ps can be achieved using readily available components over a range of echo times extending from 0 to over 2 ns. This gives a precision for thickness of order 0.3–0.6 Å for measurements based on a single echo, and this is typical of observed static measurement repeatability for films thicker than a few thousand angstroms, giving a precision of order 0.1% or better. For thinner films it is generally possible to observe several echoes within the finite delay range of the measuring system, and this gives a corresponding improvement in precision by a factor which is proportional to the number of echoes observed. Thus the precision in dimensions of length remains approximately constant for thicknesses ranging from a few hundred angstroms to several microns. For very thin films the signal to noise ratio tends to degrade due to the increasing transparency of the films and the increasing overlap between successive echoes, so that measurements become impractical for films thinner than about 25 Å (depending on the material). Therefore, for films up to a few hundred angstroms, the measurement precision may vary between 0.1 and 1% depending on the specific combination of optical properties and film thickness.


For multilayer films, the measurement precision is layer specific and also depends on the proximity of each layer to the free surface as well as the ideality of the interfaces between layers. For multilayer samples with perfect interfaces between all layers, the measurement uncertainty (in dimensions of length) for any single layer increases relative to the equivalent single film case by an amount proportional to the number of films between it and the free surface. This increase in uncertainty is usually negligible since, in general, more than one echo is observed within a buried layer less than about 1000 Å thick.

24.4.1.4 Optical Measurement of Interconnect Dielectric and Metal Film Thickness

Ellipsometry can measure the thickness of very thin metal and metal-like films [98]. The depth of penetration of light is directly related to the imaginary part of the refractive index, k, of a material. The intensity of light at a depth z is given by: I = I₀e^(−4πkz/λ). One way to estimate the maximum thickness of a metal film that ellipsometry can measure, presuming that the top layer is the only one being measured, is to use the expression d ≈ λ/πk. Therefore, at λ = 632.8 nm, the following thicknesses can be measured: TiN, ~18 nm; Ti, ~17 nm; amorphous Si, ~107 nm; and Co, ~12 nm [98]. Since the requirements are for thicker films, ellipsometry is not in widespread use for control of Ti and TiN deposition. As mentioned in Section 24.3.2, SWE must be supplemented in order to measure film thickness for SiO₂ films thicker than ~283 nm. The values of Δ and Ψ repeat themselves after 283 nm, making the measurement interpretation ambiguous. This problem has been overcome by the use of multiple wavelengths and multiple angles, and spectroscopic ellipsometry makes the measurement unambiguous. The correct optical constants must be used for non-thermal SiO₂ films such as boron and phosphorus doped silica glass (BPSG) and boron and phosphorus doped TEOS. Thickness measurements for low k dielectrics also require knowledge of the correct optical constants. Manufacturing process control for low k dielectrics can be done even on some complicated film stacks [96]. Some of the proposed new dielectrics have different in-plane, n_TE, and out-of-plane, n_TM, refractive indices. These materials are characterized by a birefringence, n_TE − n_TM. Some polyimide thin films show a thickness dependence of their optical (and thermal, mechanical, and electrical) properties due to preferential orientation of the molecular chains in the plane of the film [99]. Ho and co-workers have published the dielectric constants (≤1 MHz) and the in-plane and out-of-plane refractive indices at λ = 632.8 nm [99]. As discussed in Section 24.3 of this chapter, the complex refractive index is the square root of the complex dielectric constant. At optical frequencies, the dielectric behavior is due to electric polarization, while below 10 MHz the dielectric constant has contributions from electric and lattice polarization (Figure 24.41).
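
The thickness ceiling quoted above follows directly from the exponential attenuation; in this sketch the k values are back-calculated from the limits quoted in the text rather than taken from tabulated optical constants:

```python
import math

def max_ellipsometry_thickness_nm(k, wavelength_nm=632.8):
    """Rough upper limit on metal film thickness measurable by
    ellipsometry, d ~ lambda/(pi*k), with k the imaginary part of the
    film's refractive index."""
    return wavelength_nm / (math.pi * k)

# k back-calculated from the text's quoted limits (TiN ~18 nm, a-Si ~107 nm).
for name, k in [("TiN", 11.2), ("a-Si", 1.9)]:
    print(f"{name}: d_max ~ {max_ellipsometry_thickness_nm(k):.0f} nm")
```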

24.4.1.5 Metal Line Thickness and Copper Void Detection by Metal Illumination

The thickness of patterned metal lines can be measured in-line using Metal Illumination [95,100]. Borden has described this method, which determines line resistance by measuring the thermal conductivity [95,100]. One can calibrate system response for un-patterned metal layers as well as measure the thickness of patterned metal lines. Resistance is a function of the line cross-section. The system used to measure metal line thickness, as well as voids in patterned structures, uses an 830 nm laser to inject heat into one or more metal lines [100]. The measurement is illustrated in Figure 24.42. Visualizing the measurement process is useful for understanding its application to void determination. Borden describes the process in detail in Ref. [100]. Absorption of light from the 830 nm laser heats the metal lines within the spot diameter of 2 µm. The heating laser is modulated at kHz frequency. The heat flows away from this spot along the un-patterned film or along the lines away from the heated spot. This results in a temperature distribution. Under the laser spot used to heat the metal, the temperature is inversely proportional to the line cross-sectional area A and thermal conductivity K (T ∝ 1/(AK)). Because the thermal and electrical conductivities of pure metals are related by the Wiedemann-Franz law, one can determine electrical conductivity and thus resistance by measuring the temperature of the metal lines [95]. The peak temperature varies as the resistance per unit length (T ∝ 1/(Aσ) = ρ/A), with σ the electrical conductivity and ρ the resistivity. A 980 nm laser is used to measure the reflectance of


[Figure 24.42 schematic: pump and probe lasers directed through a beam splitter and objective lens onto an array of metal lines, with a detector and vision system; the heating laser spot and the temperature probe laser spot overlap on the lines, and an inset shows temperature vs. distance for a thin line and a thick line separated by insulator.]

FIGURE 24.42 Metal illumination. Metal illumination can measure metal film and metal line thickness as well as detect voids in copper metal lines. The pump laser heats the metal film or metal lines, and the reflectivity of the probe laser light measures the resistivity of the metal. The resistivity value determines film or metal line thickness. Figure courtesy Peter Borden of Applied Materials.

copper as a function of temperature. The signal intensity oscillates at the kHz modulation frequency of the heating laser. Detection of voids in copper metal lines and contact/via structures is a difficult task. Many of the proposed methods rely on the small change in the volume of copper due to the presence of voiding. The challenge is to observe these volume changes when across wafer uniformity varies due to changes in metal line thickness. Variation in metal line thickness is a result of the different polishing rates of thick lines vs. dense lines vs. isolated lines. Metal Illumination and ISTS have been applied to detection of voids in patterned copper metal structures. In Metal Illumination, copper voids in metal lines are detected by the temperature change from the void induced reduction in the line cross-section. This effect has been experimentally verified using SEM cross-sections [100]. Voiding measurements are conducted by measuring a series of closely spaced sites (typically with a site spacing smaller than the beam diameter) along a structure such as an isolated line or a via chain. This approximates a continuous scan along the line. In Reference [100], it is reported that Metal Illumination can detect a 1% voiding volume for voids that are 15% of the line cross-section, down to 65 nm wide copper lines. At these dimensions, the void diameter is 52 nm with a density N_Void = 1.9 per micron for 90 nm wide lines, and 37 nm diameter voids occur with a density of 2.7 per micron for 65 nm wide lines [100]. This density of voids per unit length corresponds to about one void in the measurement spot size of 2 µm. Ref. [100] also reports that the issues associated with CMP induced non-uniformity can be minimized by selecting the appropriate method of scanning. Voids in contacts and vias can also be detected by the change in thermal conductivity that the void induces. When a good metal line or lines are scanned with Metal Illumination, the heat can be conducted away from the metal lines by the presence of a good via connection to the next metal level. Thus, the


temperature drops as one scans across the via. When the via has a void or is not connected to the metal line below, the temperature does not drop as much as it does for a good via [100]. In via measurements, this method has an advantage over volume-based measurements. Structures such as via chains are designed with fat links and small vias in order to minimize the effect of a variation in link resistance. The MI method passes a current of heat through the via and is thus predominantly sensitive to the via resistance change. Volume-based measurements will be equally sensitive to changes in link and via volume [100].
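
The Wiedemann-Franz relation that underlies the thermal-to-electrical conversion in Metal Illumination is easy to state numerically; a sketch using the textbook Lorenz number and a bulk copper thermal conductivity, not values from any metrology tool:

```python
def wf_resistivity_ohm_m(k_thermal_w_mk, temp_k=300.0):
    """Wiedemann-Franz estimate of electrical resistivity from thermal
    conductivity: K/sigma = L*T, so rho = L*T/K, with Lorenz number
    L = 2.44e-8 W*ohm/K^2."""
    lorenz = 2.44e-8
    return lorenz * temp_k / k_thermal_w_mk

# Bulk copper, K ~ 400 W/(m*K) at 300 K -> rho ~ 1.8e-8 ohm*m,
# close to the accepted ~1.7e-8 ohm*m.
print(f"rho ~ {wf_resistivity_ohm_m(400.0):.2e} ohm*m")
```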

24.4.1.6 Pore Size Distribution in Porous Low k

Characterization of the distribution of pore sizes is a critical part of the analysis of porous films. Above, we described the use of x-ray reflectivity for measurement of film thickness. That method uses the specularly reflected x-rays (angle in = angle out), and it is an excellent method of measuring low k film thickness even for complicated stacks. As the x-rays traverse the low k film, the pores scatter x-rays away from the specular angle. This is also called diffuse scattering of x-rays by pores. Analysis of the intensity of the non-specularly scattered x-rays vs. the angle of incidence of the x-ray beam provides information on the pore size and the distribution of pore sizes within the porous material [101]. The diffusely scattered x-rays need to be separated from the specularly reflected x-rays. There are a number of means of separating these signals. Moving the sample slightly away from the alignment used for x-ray reflectivity (XRR) allows the use of existing XRR systems. Some suppliers have developed special detectors that capture diffuse scattering over a greater range of angles. The pore size distribution and average pore size are calculated from the scattering intensity vs. angle using well established x-ray theory and a model for the pore size distribution [101].

24.4.2 Ex-Situ Chemical Mechanical Polishing Process Control–Film Flatness and Quality

The purpose of CMP is to produce a flat surface over device topology for lithographic patterning. CMP is used in both traditional metallization and Damascene (inlaid metal) processing. In traditional metallization processes, metal is deposited, patterned, etched, and overcoated with oxide, which is polished flat. In Damascene (inlaid metal) processes, oxide (or a low k dielectric) is deposited, patterned, and etched, and metal fill is deposited and then polished back to the oxide. CMP must be monitored for

[Figure 24.43 illustrations and plot: sketches of recess, erosion, and dishing, with sheet resistance (ohm/cm) for two process conditions, "A" and "B", vs. line width, line space, and bus width.]

FIGURE 24.43 Defects in inlaid metal CMP processes. Figure from Farkas, J., and M. Freedman, In Proceedings of the Fourth Workshop on Industrial Applications of Scanned Probe Microscopy, NIST, May, 1997. With permission.


[Figure 24.44 plot: step height (Å, 0 to −2500) vs. distance (μm, 0–800) across bus lines (10–40 μm) and a minimum pitch snake (0.25–0.8 μm) for polishing conditions "A", "B", and "C".]

FIGURE 24.44 Comparison of CMP process conditions using high resolution profilometry. Figure from Farkas, J., and M. Freedman, In Proceedings of the Fourth Workshop on Industrial Applications of Scanned Probe Microscopy, NIST, May, 1997. With permission.

both local flatness within a die and across the wafer. CMP can generate some radial variation in removal rate across the wafer, and both traditional and Damascene processes are adversely affected by this variation. Over etching of contact and via structures at the edge of the wafer can result when an oxide layer over metal is over thinned toward the wafer edge. In this case, optical film thickness measurement is used to monitor oxide CMP for traditional metallization processes to ensure that differences in the thickness of the oxide layer do not result in over etching during fabrication of contact/via openings. Stylus profilometry or scanned probe microscopy is used to determine film flatness after CMP. The local topography across a die is known to induce non-uniform polishing, and particles in the slurry used during CMP can either be left on the surface or produce scratches that can result in yield loss. Scratches, pits, and residual particles have been referred to as localized irregularities [102]. Dishing, erosion, and recess result in insufficient planarity [103]. In Figure 24.43, we show some of the types of polishing defects that result in insufficient planarity. Process development requires careful monitoring of test structures. In Figure 24.44, we show an example of the recess of a CMP processed test structure under three different polishing conditions [103]. The test structure has several different types of metal lines: bus lines, bond pads, and minimum feature size metal lines. The CMP of layers containing contact and via

FIGURE 24.45 Profilometry characterization of CMP of via and contact structures. Figure courtesy of Jason Schneir (KLA-Tencor).


[Figure 24.46: schematic cross-sections comparing CMP of traditional metal processes (global oxide surface over the patterned layer on the wafer) and CMP of damascene (inlaid metal), each with an expanded view of the local surface topography.]

FIGURE 24.46 Process monitoring requirements for CMP of traditional metal and damascene (inlaid metal) processes. The local topography must be monitored over the area of lithographic exposure, while the radial variation of thickness must also be monitored. Optical thickness measurements can be used to determine the local variation of oxide thickness in traditional CMP to ensure that via/contact etch processes do not "over etch".

structures must also be monitored during process development. Local scans are used to characterize the recess of tungsten plugs after CMP, as shown in Figure 24.45. Issues associated with local flatness and global over-polishing are shown in Figure 24.46 for both traditional and Damascene processes. Stylus profilometers and the newer high-resolution profilometers provide information on planarity both locally and globally. The lithographic exposure tool images the reticle mask over the distance of a die (~25 mm × 25 mm) with a limited depth of focus (<600 nm using 248 nm exposure tools for 180 nm technology generations). The KLA-Tencor HRP-200 uses a diamond probe tip with a tip radius of 0.05 μm and a scan length of up to 205 mm. A lateral resolution of 1 nm is possible when local areas are analyzed [102]. The step height repeatability is 0.8 nm for features that are 1 μm in height. Maps from wafer flatness tools (such as profilometers) may become part of the lot-by-lot database of metrology data management systems. In-line process control for CMP is done using test structures, which are processed after a change in recipe or polishing pads on the CMP tool.
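To illustrate how a recess or step height such as those in Figure 24.44 and Figure 24.45 might be extracted from a raw profilometer trace, the following minimal sketch levels the scan against the field (reference) regions and reports the median feature height; the scan data, region masks, and dimensions are hypothetical:

```python
import numpy as np

def step_height(x_um, z_nm, field_mask, feature_mask):
    """Recess/step height from a stylus profile:
    (1) remove wafer tilt with a least-squares line through the field regions,
    (2) report the median leveled height over the feature region."""
    slope, offset = np.polyfit(x_um[field_mask], z_nm[field_mask], 1)
    z_level = z_nm - (slope * x_um + offset)   # leveled profile: field ~ 0
    return np.median(z_level[feature_mask])

# Hypothetical 800-um scan across a recessed metal line (cf. Figure 24.44)
x = np.linspace(0.0, 800.0, 4001)
z = -1200.0 * ((x > 300) & (x < 500)) + 0.5 * x   # 1200 A recess plus tilt
field = (x < 250) | (x > 550)
feature = (x > 320) & (x < 480)
print("step height: %.0f A" % step_height(x, z, field, feature))  # about -1200 A
```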

24.5 In-FAB FIB

Focused ion beam (FIB) systems are now available in two versions: with dual electron beam and ion beam columns, and as ion-beam-only systems [104]. FIB can be supplemented with SIMS, and the dual-column systems can also be equipped with EDS. FIB has been used in-FAB for control of trench capacitor and contact/via structures. FIB cross-sections of these structures allow detailed imaging of critical titanium nitride

Acknowledgments

I thank and acknowledge Jimmy Price, P.Y. Hung, Ben Bunday, and Hugo Celio for their assistance with revising this chapter. I gratefully acknowledge the review and figures from Peter Borden (Carrier and Metal Illumination) and discussion and review by Michael Gostein (ISTS). I also acknowledge those that helped with the chapter that appeared in the first edition of this volume: Arnie Ford, Dan Holladay,


Hershel Marchman, Kevin Monahan, Chris Nelson, Paul Tobias, Harland Tompkins, George Collins, George Brown, Steve Weinzeirl, John Hauser, Jimmy Wortman, Mathew Banet, John Hanselman, Gary Schwartz, and Randy Goodall.

References

1. International Technology Roadmap for Semiconductors. Semiconductor Industry Association, 2003.
2. Eastman, S. A. "Evaluating Automated Wafer Measurement Instruments." SEMATECH Technology Transfer Document 94112638A-XFR, 1995. A PDF file can be downloaded from SEMATECH's public web site at http://www.sematech.org/public/docubase/abstracts/wrapper26.htm
3. Ballard, D. H., D. W. McCormack, Jr., T. L. Moore, M. Pore, J. Prins, and P. A. Tobias. "A Comparison of Gauge Study Practices." Proceedings from the 1997 Joint Statistical Meetings of the American Statistical Association, Quality and Productivity Section, 1998.
4. "SEMI E89 Guide for Measurement System Analysis." 3081 Zanker Road, San Jose, CA 95134: SEMI Standards.
5. Currie, L. A. "Limits for Qualitative Detection and Quantitative Determination." Anal. Chem. 40 (1968): 586-93.
6. Zeitzoff, P. M. "Modeling of Statistical Manufacturing Sensitivity and of Process Control and Metrology Requirements for a 0.18-μm NMOSFET." In Handbook of Silicon Semiconductor Metrology, edited by A. C. Diebold, 117-41. New York: Dekker, 2001.
7. Zeitzoff, P. M., A. F. Tasch, W. E. Moore, S. A. Khan, and D. Angelo. "Modeling of Manufacturing Sensitivity and of Statistically Based Process Control Requirements for a 0.18 μm NMOS Device." In Characterization and Metrology for ULSI Technology, edited by D. G. Seiler, A. C. Diebold, W. M. Bullis, T. J. Shaffner, R. McDonald, and E. J. Walters, 73-82. New York: AIP Press, 1998.
8. Gaitonde, D., and D. M. H. Walker. "Test Quality and Yield Analysis Using DEFAM Defect to Fault Mapper." Proceedings of the IEEE International Conference on CAD 1993, 78-83, 1993.
9. Khare, J., and W. Maly. "Rapid Failure Analysis Using Contamination-Defect-Fault (CDF) Simulation." Proceedings of the Fourth IEEE/UCS/SEMI International Symposium on Semiconductor Manufacturing 1995 (ISSM '95), IEEE Catalogue Number 95CH35841, 136, 1995.
10. Diebold, A. C., ed. Handbook of Silicon Semiconductor Metrology. New York: Marcel Dekker, 2001.
11. Joy, D. Private communication.
12. Marchman, H. M. "Scanning Electron Microscope Matching and Calibration for Critical Dimensions." Future FAB Int. 3 (1997): 345-54.
13. Goldstein, J. I., D. E. Newbury, P. Echlin, D. C. Joy, C. Lyman, P. E. Echlin, E. Lifshin, L. Sawyer, and J. Michael. "Electron Beam-Specimen Interaction." In Scanning Electron Microscopy and X-Ray Microanalysis, 3rd ed., 689. New York: Kluwer Academic/Plenum, 2003.
14. Postek, M. T., and A. E. Vladar. "Critical Dimension Metrology and the Scanning Electron Microscope." In Handbook of Silicon Semiconductor Metrology, edited by A. C. Diebold, 295-334. New York: Marcel Dekker, 2001.
15. Villarrubia, S., A. E. Vladar, and M. T. Postek. "A Simulation Study of Repeatability and Bias in the CD-SEM." In Proceedings of SPIE 5038, 138-49, 2003.
16. Mayer, J., K. Huizenga, E. Solecky, C. Archie, G. W. Banks, R. Cogley, C. Nathan, and J. Robert. "New Apparent Beam Width Artifact and Measurement Methodology for CD-SEM Resolution Monitoring." Proceedings of SPIE 5038, 699-710, 2003.
17. Bunday, B. D., M. Bishop, J. R. Swyers, and K. Lensing. "Quantitative Profile-Shape Measurement Study on a CD-SEM with Application to Etch-Bias Control and Several Different CMOS Features." In Proceedings of SPIE 5038, 383-95, 2003.
18. Sullivan, N., R. Dixson, B. Bunday, and M. Mastovich. "Electron Beam Metrology of 193 nm Resists at Ultra-low Voltage." In Proceedings of SPIE 5038, 483-92, 2003; Sullivan, N., M. Mastovich, S. Bowdoin, and R. Brandon. "CD-SEM Acquisition Effects on 193 Resist Line Slimming." Proceedings of SPIE 5038, 618-23, 2003.


19. Bunday, B. Private communication.
20. Bunday, B., et al. Unified Advanced CD-SEM Specification for sub-130 nm Technology. This document is available at www.sematech.org
21. Bunday, B., and M. Davidson. "Use of Fast Fourier Transform Methods in Maintaining Stability of Production CD-SEMs." In Proceedings of SPIE 3998, 913-22, 2000.
22. Cresswell, M. W., J. J. Sniegowski, R. N. Ghoshtagore, R. A. Allen, W. F. Guthrie, A. W. Gurnell, L. W. Lindholm, R. G. Dixson, and E. C. Teague. "Recent Developments in Electrical Linewidth and Overlay Metrology for Integrated Circuit Fabrication Processes." Jpn. J. Appl. Phys. 35 (1996): 6597-609.
23. Martin, Y., and H. K. Wickramasinghe. "Precision Micrometrology." Future FAB Int. 1 (1996): 253-60.
24. Martin, Y., and H. K. Wickramasinghe. "Method for Imaging Sidewalls by Atomic Force Microscopy." Appl. Phys. Lett. 64 (1994): 2498-500.
25. Feenstra, R. M., and J. E. Griffith. "Semiconductor Characterization with Scanning Probe Microscopies." In Semiconductor Characterization: Present Status and Future Needs, edited by W. M. Bullis, D. G. Seiler, and A. C. Diebold, 295-307. New York: AIP, 1996.
26. Griffith, J. E., and D. A. Grigg. "Dimensional Metrology with Scanning Probe Microscopes." J. Appl. Phys. 74 (1993): R83-109.
27. Griffith, J. E., H. M. Marchman, and L. C. Hopkins. "Edge Position Measurement with a Scanned Probe Microscope." J. Vac. Sci. Technol. B12 (1994): 3567-70.
28. Marchman, H. M., J. E. Griffith, J. Z. Y. Guo, and C. K. Celler. "Nanometer-Scale Dimensional Metrology for Advanced Lithography." J. Vac. Sci. Technol. B12 (1994): 3585-90.
29. Marchman, H. M., and J. E. Griffith. "Scanned Probe Microscope Dimensional Metrology." In Handbook of Silicon Semiconductor Metrology, edited by A. C. Diebold, 335-76. New York: Marcel Dekker, 2001.
30. Standard Practice for Measuring and Reporting Probe Tip Shape in Scanning Probe Microscopy, E1813-96, American Society for Testing and Materials.
31. Dixson, R., A. Guerry, M. Bennett, T. Vorburger, and M. Postek. "Toward Traceability for At-Line AFM Dimensional Metrology." Metrology, Inspection, and Process Control for Microlithography, Proceedings of SPIE 4689, 313-35, 2002.
32. Raymond, C. J. "Scatterometry for Semiconductor Metrology." In Handbook of Silicon Semiconductor Metrology, edited by A. C. Diebold, 477-513. New York: Marcel Dekker, 2001.
33. Opsal, J., H. Chu, Y. Wen, Y. C. Chang, and G. Li. "Fundamental Solutions for Real-Time Optical CD Metrology." Metrology, Inspection, and Process Control for Microlithography, Proceedings of SPIE 4689, 163-76, 2002.
34. Conrad, E. W., and D. P. Paul. Method and Apparatus for Measuring the Profile of Small Repeating Lines. U.S. Patent 5,963,329; Moharam, M. G., T. K. Gaylord, G. T. Sincerbox, H. Werlich, and B. Yung. "Diffraction Characteristics of Photoresist Surface-Relief Grating." Appl. Opt. 23 (1984): 3214-20.
35. Sendelbach, M., and C. Archie. "Scatterometry Measurement Precision and Accuracy below 70 nm." Metrology, Inspection, and Process Control for Microlithography, Proceedings of SPIE 5038, 224-38, 2003.
36. Sonderman, T., M. Miller, and C. Bode. "APC as a Competitive Manufacturing Technology: Getting It Right for 300 mm." In AEC/APC Symposium XIII, International SEMATECH, 2001.
37. Petersen, J. S., J. D. Byers, and R. A. Carpio. "The Formation of Acid Diffusion Wells in Acid Catalyzed Photoresists." Microelec. Eng. 35 (1997): 169-74.
38. Lindholm, L. L., R. A. Allen, and M. W. Cresswell. "Microelectronic Test Structures for Feature Placement and Electrical Linewidth Metrology." In Handbook of Critical Dimension Metrology and Process Control, edited by K. M. Monahan, 91-132. Bellingham: SPIE Optical Engineering Press, 1993.
39. Chain, E. E., M. D. Griswold, and B. P. Singh. "In-Line Electrical Probe for CD Metrology." Process Equipment and Materials Control in Integrated Circuit Manufacture II, SPIE Proceedings 2876, edited by A. Iturraldo and T. Lin, 135-46, 1996.


40. Lee, K., M. Shur, T. A. Fjeldly, and T. Ytterdal. "Basic MOSFET Theory." In Semiconductor Device Modeling for VLSI, 240. Englewood Cliffs: Prentice-Hall, 1993.
41. Sullivan, N. T. "Semiconductor Pattern Overlay." In Handbook of Critical Dimension Metrology and Process Control, edited by K. M. Monahan, 160-88. Bellingham: SPIE Optical Engineering Press, 1993.
42. Starikov, A. "Metrology of Image Placement." In Handbook of Silicon Semiconductor Metrology, edited by A. C. Diebold, 411-76. New York: Marcel Dekker, 2001.
43. Tompkins, H. G. A User's Guide to Ellipsometry. New York: Academic Press, 1993.
44. Jellison, G. E. "Physics of Optical Metrology of Silicon-Based Semiconductor Devices." In Handbook of Silicon Semiconductor Metrology, edited by A. C. Diebold, 723-60. New York: Dekker, 2001.
45. Opsal, J., J. Fanton, J. Chen, J. Leng, L. Wei, C. Uhrich, M. Senko, C. Zaiser, and D. E. Aspnes. Broadband Spectral Operation of a Rotating-Compensator Ellipsometer.
46. System description.
47. System description.
48. Diebold, A. D., D. Venables, Y. Chabal, D. Muller, M. Welden, and E. Garfunkel. "Characterization and Production Metrology of Thin Transistor Gate Oxide Films." Mater. Sci. Semicon. Process. 2 (1999): 103-47.
49. Hayzelden, C. "Gate Dielectric Metrology." In Handbook of Silicon Semiconductor Metrology, edited by A. C. Diebold, 17-48. New York: Dekker, 2001.
50. Chandler-Horowitz, D., and G. A. Candela. "On the Accuracy of Ellipsometric Thickness Determination for Very Thin Films." J. De Physique C10 (1983): 23-6.
51. Fang, S. J., and C. R. Helms. "Ellipsometric Model Studies of the Si Surface." In Contamination Control and Defect Reduction in Semiconductor Manufacturing III, PV 94-9, 267-76. Electrochemical Society, 1994.
52. Nguyen, N. V., D. Chandler-Horowitz, P. M. Amirtharaj, and J. G. Pellegrino. "Spectroscopic Ellipsometry Determination of the Properties of the Thin Underlying Strained Si Layer and the Roughness at Si/SiO2 Interfaces." Appl. Phys. Lett. 64 (1994): 5599.
53. Fang, S. J., W. Chen, T. Yamanaka, and C. R. Helms. "Influence of Interface Roughness on Silicon Oxide Thickness Measured by Ellipsometry." J. Electrochem. Soc. 144 (1997): L231-3.
54. Nicollian, E. H., and J. R. Brews. "Metal Oxide Silicon Capacitor at Low Frequencies." In MOS (Metal Oxide Semiconductor) Physics and Technology, 71-98. New York: Wiley, 1982.
55. Nicollian, E. H., and J. R. Brews. "Metal Oxide Silicon Capacitor at Intermediate and High Frequencies." In MOS (Metal Oxide Semiconductor) Physics and Technology, 99-175. New York: Wiley, 1982.
56. Blood, P., and J. W. Orton. "Capacitance-Voltage Profiling." The Electrical Characterization of Semiconductors: Majority Carriers and Electron States, 220-65. New York: Academic Press, 1992.
57. Emerson, N. G., and B. J. Sealy. "Capacitance-Voltage and Hall Effect Measurements." In Analysis of Microelectronic Materials and Devices, edited by M. Grasserbauer and H. W. Werner, 865-85. New York: Wiley, 1991.
58. Schroder, D. K. "Oxide and Interface Trapped Charge." Semiconductor Material and Device Characterization, 244-96. New York: Wiley, 1990.
59. Lee, K., M. Shur, T. A. Fjeldly, and T. Ytterdal. "Surface Charge and the Metal Insulator Semiconductor Capacitor." Semiconductor Device Modeling for VLSI, 196-228. Englewood Cliffs: Prentice-Hall, 1993.
60. Vogel, E. M., and V. Misra. "MOS Device Characterization." In Handbook of Silicon Semiconductor Metrology, edited by A. C. Diebold, 59-96. New York: Dekker, 2001.
61. Larson, L. "Metrology for Ion Implantation." In Handbook of Silicon Semiconductor Metrology, edited by A. C. Diebold, 49-58. New York: Dekker, 2001.
62. Hauser, J. R., and K. Ahmed. "Characterization of Ultra-Thin Oxides Using Electrical C-V and I-V Measurements." In Characterization and Metrology for ULSI Technology, 449, 235-39. New York: AIP Conference Proceedings, 1998.


63. Vogel, E. M. "Issues with Electrical Characterization of Advanced Gate Dielectrics." In Metal-Oxide-Semiconductor Devices, Proceedings WoDIM 2002. IMEP, Grenoble, France, 2002.
64. Weinzeirl, S. Private communication.
65. Briggs, D. "XPS: Basic Principles, Spectral Features, and Quantitative Analysis." In Surface Analysis by Auger and X-Ray Photoelectron Spectroscopy, edited by D. Briggs and J. T. Grant, 31-56. UK: IM Publications and Surface Spectra Limited, 2003.
66. Cumpson, P. J. "Angle Resolved X-Ray Photoelectron Spectroscopy." In Surface Analysis by Auger and X-Ray Photoelectron Spectroscopy, edited by D. Briggs and J. T. Grant, 651-76. UK: IM Publications and Surface Spectra Limited, 2003.
67. Drummond, I. W. "AES Instrumentation and Performance." In Surface Analysis by Auger and X-Ray Photoelectron Spectroscopy, edited by D. Briggs and J. T. Grant, 117-44. UK: IM Publications and Surface Spectra Limited, 2003.
68. Goldstein, J. I., D. E. Newbury, P. Echlin, D. C. Joy, A. D. Romig, Jr., C. E. Lyman, C. Fiori, and E. Lifshin. "X-Ray Spectral Measurement: WDS and EDS." In Scanning Electron Microscopy and X-Ray Microanalysis, 2nd ed., 273-340. New York: Plenum, 1992.
69. Larson, L. "Metrology for Ion Implantation." In Handbook of Silicon Semiconductor Metrology, edited by A. C. Diebold. New York: Dekker, 2001.
70. Yarling, C. B., and M. I. Current. "Ion Implantation Process Measurement, Characterization, and Control." In Ion Implantation Science and Technology, edited by J. F. Zeigler, 674-721. Poughkeepsie: Ion Implantation Technology Co., 1996.
71. Larson, L., and M. I. Current. "Doping Process Technology and Metrology." In Characterization and Metrology for ULSI Technology, edited by D. G. Seiler, A. C. Diebold, M. Bullis, R. McDonald, and T. J. Shaffner, 143-52. New York: AIP, 1998.
72. Schroder, D. K. "Resistivity." In Semiconductor Material and Device Characterization, 1-40. New York: Wiley, 1990.
73. Johnson, W. H. "Sheet Resistance Measurements of Interconnect Films." In Handbook of Silicon Semiconductor Metrology, edited by A. C. Diebold, 215-44. New York: Dekker, 2001.
74. Smith, W. L., A. Rosenwaig, and D. L. Willenbourg. Appl. Phys. Lett. 47 (1985): 584.
75. Schroeder, D. K., B. Schueler, and G. S. Strossman. "Electrical, Physical, and Chemical Characterization." Chap. 28, this volume.
76. Dowsett, M. G. "SIMS Depth Profiling of Ultra Shallow Implants and Junctions in Silicon: Present Performance and Future Potential." In Secondary Ion Mass Spectrometry SIMS XI, 259-64. New York: Wiley, 1998.
77. Dowsett, M. G., T. J. Ormsby, D. T. Elliner, and G. A. Cooke. "Establishment of Equilibrium in the Top Nanometers Using Sub-keV Beams." In Secondary Ion Mass Spectrometry SIMS XI, 371-8. New York: Wiley, 1998.
78. Dowsett, M. G., G. A. Cooke, D. T. Elliner, T. J. Ormsby, and A. Murrell. "Experimental Techniques for Ultra-Shallow Profiling Using Sub-keV Primary Ion Beams." In Secondary Ion Mass Spectrometry SIMS XI, 285-8. New York: Wiley, 1998.
79. Borden, P., L. Bechtler, K. Lingel, and R. Nijmeijer. "Carrier Illumination of Ultra-Shallow Implants." In Handbook of Silicon Semiconductor Metrology, edited by A. C. Diebold, 97-116. New York: Marcel Dekker, 2001.
80. Borden, P., P. Gillespie, A. Al-Bayati, and C. Lazik. "In-Line Implant/Anneal Module Monitoring of Ultra-Shallow Junctions." Proceedings of the 14th International Conference on Ion Implant Technology. IEEE, CD-ROM (no paper version).
81. Zeitzoff, P. M., J. A. Hutchby, and H. R. Huff. "MOSFET and Front-End Process Integration: Scaling Trends, Challenges, and Potential Solutions through the End of the Roadmap." Int. J. High Speed Electron. Syst. 12 (2002): 267-93.
82. Åberg, O., O. Olubuyide, C. Ní Chléirigh, I. Lauer, D. A. Antoniadis, J. Li, R. Hull, and J. L. Hoyt. "Electron and Hole Mobility Enhancements in Sub-10 nm-Thick Strained Silicon Directly on Insulator Fabricated by a Bond and Etch-Back Technique." Digest of 2004 Symposium on VLSI Technology, 52-3 (IEEE Catalog no. 04CH37571).


83. Thompson, S. E., G. Sun, K. Wu, J. Lim, and T. Nishida. "Key Differences for Process-Induced Uniaxial vs. Substrate-Induced Biaxial Stressed Si and Ge Channel MOSFETs." IEDM Tech. Digest (2004): 221-4.
84. Ghani, T., M. Armstrong, C. Auth, M. Bost, P. Charvat, G. Glass, T. Hoffmann, et al. IEDM Tech. Digest (2003): 978-80 (IEEE Catalog no. 03CH37457).
85. Giles, D., M. Armstrong, C. Auth, S. M. Cea, T. Ghani, T. Hoffmann, R. Kotlyar, et al. "Understanding Stress Enhanced Performance in Intel 90 nm CMOS Technology." 2004 Symposium on VLSI Technology Technical Digest of Papers, 118-9.
86. Chidambaram, P. R., B. A. Smith, L. H. Hall, H. Bu, S. Chakravarthi, Y. Kim, A. V. Samoilov, et al. "35% Drive Current Improvement from Recessed-SiGe Drain Extensions on 37 nm Gate Length PMOS." Digest of 2004 Symposium on VLSI Technology, 48-9.
87. Pidin, S., T. Mori, K. Inoue, S. Fukuta, N. Itoh, E. Mutoh, K. Ohkoshi, et al. "A Novel Strain Enhanced CMOS Architecture Using Selectively Deposited High Tensile and High Compressive Silicon Nitride Films." IEDM Tech. Digest (2004): 213-6 (IEEE Catalog no. 04CH37602).
88. Bianchi, R. A., G. Bouche, and O. Roux-dit-Buisson. "Accurate Modeling of Trench Isolation Induced Mechanical Stress Effects on MOSFET Electrical Performance." IEDM Tech. Digest (2002): 117-20 (IEEE Catalog no. 02CH37358).
89. Xiang, Q., J.-S. Goo, J. Pan, B. Yu, S. Ahmed, J. Zhang, and M.-R. Lin. "Strained Silicon NMOS with Nickel-Silicide Metal Gate." 2003 Symposium on VLSI Technology Digest of Technical Papers, 101-2.
90. Gostein, M., M. Banet, M. A. Joffe, A. A. Maznev, R. Sacco, J. A. Rogers, and K. A. Nelson. "Thin Film Metrology Using Impulsive Stimulated Thermal Scattering." In Handbook of Silicon Semiconductor Metrology, edited by A. C. Diebold, 167-96. New York: Marcel Dekker, 2001.
91. Diebold, A. C., and R. Stoner. "Metal Interconnect Process Control Using Picosecond Ultrasonics." In Handbook of Silicon Semiconductor Metrology, edited by A. C. Diebold, 197-214. New York: Marcel Dekker, 2001.
92. Johnson, W. H. "Sheet Resistance Measurements of Interconnect Films." In Handbook of Silicon Semiconductor Metrology, edited by A. C. Diebold, 215-44. New York: Marcel Dekker, 2001.
93. Deslattes, R. D., and R. J. Matyi. "Analysis of Thin Layer Structures by X-Ray Reflectometry." In Handbook of Silicon Semiconductor Metrology, edited by A. C. Diebold, 789-810. New York: Marcel Dekker, 2001.
94. Lachance, G. R., and F. Claisse. "Thin Films." In Quantitative X-Ray Fluorescence Analysis: Theory and Practice, 211-16. New York: Wiley, 1995; Jenkins, R., R. W. Gould, and D. Gedcke. Quantitative X-Ray Spectrometry. New York: Marcel Dekker, 1981.
95. Borden, P., J. Madsen, and J. P. Li. "Non-Destructive, High-Resolution Metrology of Fine Metal Arrays." Future FAB Int. 10 (2000): 261-5; Wu, C. M., M. Y. Wang, C. T. Lin, C. W. Chang, M. H. Tsai, C. H. Hsieh, S. L. Shue, et al. "Non-Destructive In-Line Cu/Low-K Measurement Using Metal Illumination Method," presented at the 2003 International Interconnect Technology Conference, Burlingame, CA, June 2-4, 2003.
96. Diebold, A. C., W. W. Chism, T. G. Dziura, and A. Kanan. "Metrology for On-Chip Interconnect Dielectrics." In Handbook of Silicon Semiconductor Metrology, edited by A. C. Diebold, 149-66. New York: Marcel Dekker, 2001.
97. Matriz 100 Reference.
98. Asinovsky, L. "Limits of Thickness Measurement." Rudolph Technologies Applications Note, August 9, 1994.
99. Kiene, M., M. Morgan, J. H. Zhao, C. Hu, T. Co, and P. S. Ho. "Characterization of Low Dielectric Constant Materials." In Handbook of Silicon Semiconductor Metrology, edited by A. C. Diebold, 245-78. New York: Marcel Dekker, 2001.
100. Borden, P. A., J. P. Li, S. Smith, A. C. Diebold, and W. Chism. "Line and Via Voiding Measurements in Damascene Copper Lines Using Metal Illumination." IEEE Trans. Semicond. Manufact. 16 (2003): 409-16.
101. Rauscher, M., et al. Phys. Rev. B 52 (1995): 16855.


102. Mathai, A., and C. Hayzelden. "High-Resolution Profilometry for CMP and Etch Metrology." In Handbook of Silicon Semiconductor Metrology, edited by A. C. Diebold, 279-94. New York: Marcel Dekker, 2001.
103. Farkas, J., and M. Freeman. "New Requirements for Planarity and Defect Metrology in Soft Metal CMP." In Proceedings of the Fourth Workshop on Industrial Applications of Scanned Probe Microscopy. NIST, May 1997.
104. Diebold, A. C., and R. McDonald. "The At-Line Characterization Laboratory of the 90s: Characterization Laboratories (FAB-LABS) Used to Ramp-Up New FABS and Maintain High Yield." Future FAB Int. 1, no. 3 (1997): 323-30.


25
In-Situ Metrology

Gabriel G. Barna, Texas Instruments, Inc.
Brad VanEck, SEMATECH

25.1 Introduction ...................................................................... 25-1
25.2 Process State Sensors ........................................................ 25-4
     Temperature † Gas Phase Reactant Concentration † RF Properties † Wall Deposition Sensor
25.3 Wafer-State Sensors ........................................................ 25-31
     Film Thickness and Uniformity † Ellipsometry † Feature Profile † Epi Resistivity and Thickness
25.4 Measurement Techniques for Potential Sensors ........... 25-47
25.5 Software for In-Situ Metrology ..................................... 25-53
     Data Collection Software † FDC Analysis Software † Model-Based Process Control Software
25.6 Use of In-Situ Metrology in SC Manufacturing .......... 25-55
     Fault Detection and Classification † Fault Interdiction † Fault Prognosis † Model-Based Process Control
References .................................................................................... 25-57

25.1 Introduction

Since the early 1960s, semiconductor manufacturing has relied on statistical process control (SPC) for maintaining processes within prescribed specification limits. This is fundamentally a passive activity based on the principle that the process parameters (the hardware settings) be held invariant over long periods of time. SPC then tracks certain unique, individual metrics of this process, typically some wafer-state parameter, and declares the process to be out-of-control when the established control limits are exceeded with a specified statistical significance. While this approach has established benefits, it suffers from (a) its myopic view of the processing domain, looking at only one or a few parameters, and (b) its delayed recognition of a problem situation, since the metrics are generated only occasionally or with a significant time delay relative to the rate at which wafers are processed. By the late 1990s, as semiconductor manufacturing continued to pursue ever-tightening specifications driven by the well-known problems associated with decreasing feature size and increasing wafer size, it became clear that both of these constraints had to be removed in order to stay competitive in the field. The specific requirements are that
† processing anomalies be determined by examining a much wider domain of parameters;
† processing anomalies be detected in shorter timeframes: within-wafer, or at least wafer-to-wafer;
† processing emphasis be focused on decreasing the variance of the wafer-state parameters instead of controlling the variance of the set points.
Advanced process control (APC) is the current paradigm that attempts to solve these three specific problems. Under this methodology, the fault detection and classification (FDC) component addresses the first two requirements, while model-based process control (MBPC) addresses the last one.
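As an illustration of the passive SPC check described above, the following minimal sketch (the baseline data and limits are hypothetical) flags a tracked metric that falls outside classic Shewhart three-sigma control limits:

```python
import numpy as np

def out_of_control(history, new_value, n_sigma=3.0):
    """Classic Shewhart test: flag a point outside mean +/- n_sigma * std
    of the established baseline."""
    mu, sigma = np.mean(history), np.std(history, ddof=1)
    return abs(new_value - mu) > n_sigma * sigma

# e.g., 50 in-spec CD readings (nm) establishing the control limits
baseline = np.random.default_rng(0).normal(100.0, 1.0, 50)
print(out_of_control(baseline, 100.8))  # False: within limits
print(out_of_control(baseline, 104.5))  # True: out-of-control signal
```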

DK4126—Chapter25—23/5/2007—19:42—ANBARASAN—240454—XML MODEL CRC12a – pp. 1–60

25-2

Handbook of Semiconductor Manufacturing Technology

In contrast to the SPC methodology, APC is a closed-loop, interactive method where the processing of every wafer is closely monitored on a time scale that is much more relevant (within-wafer or wafer-to-wafer) to the manufacturing process. When a problem is detected, the controller can determine whether to adjust the process parameters (for small deviations from the normal operating conditions) or to stop the misprocessing of subsequent wafers (for major deviations from the standard operating conditions). The APC paradigm is a major shift in operational methods and requires a complex, flexible architecture to be in place to execute the above requirements. A schematic representation of this architecture is provided in Figure 25.1. Briefly, this system starts with the processing tool and sets of in-situ sensors and ex-situ metrology tools that provide data on the performance of the tool. When the performance exceeds some pre-defined specifications, actions can be taken either to terminate the processing or to reoptimize the settings of the equipment parameters via the model tuner and the pertinent process models. The purpose of this brief APC overview is to provide the context for this chapter on in-situ metrology. In-situ sensors are becoming more widespread, as they are one of the key components of this APC paradigm. The goal of this chapter is to detail the fundamentals of in-situ process and wafer-state sensors in original equipment manufacturer (OEM) tools, and their use in APC, as this is the path that semiconductor manufacturing now has to aggressively pursue. This message is clearly articulated in the 1997 version of the National Technology Roadmap for Semiconductors (NTRS), which states: "To enable this mode of operation (APC), key sensors are required for critical equipment, process, and wafer-state parameters. It is essential that these sensors have excellent stability, reliability, reproducibility, and ease of use to provide high quality data with the statistical significance needed to support integrated manufacturing" [1]. Hence, this chapter will provide information on the in-situ sensors that are commercially available (i.e., not test-bed prototypes) and are currently, or soon will be, used in OEM tools for the measurement and control of process-state and wafer-state properties. In-situ sensors are those that monitor the process state of the tool or the state of the wafer during the processing of each wafer. For the sake of completeness, in-line sensors will also be included, as some of the sensor technologies are only applicable in this format. In-line sensors measure wafer state in some location close to the processing, such as a cool-down station in a deposition reactor or a metrology module on a lithography track system.

[Figure 25.1: closed-loop block diagram linking the equipment and its in-situ sensors (with data reduction) to ex-situ metrology, SPC methods, and diagnosis and maintenance, plus a controller containing the adaptation strategy, model tuner, process models, solution strategy (targets, feedforward), and a solver (optimizer) that returns predicted outputs and optimal settings; out-of-bounds signals trigger tuning or intervention.]

FIGURE 25.1 Architecture for advanced process control.
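The controller block of Figure 25.1 can be illustrated with one of the simplest and most widely used forms of MBPC, an exponentially weighted moving average (EWMA) run-to-run controller. The sketch below is illustrative only; the linear model, gain, target, and weight are hypothetical values, not taken from the text:

```python
class EwmaRunToRunController:
    """Minimal run-to-run controller: linear process model y = gain * u + bias,
    with the bias estimate updated by an exponentially weighted moving average."""

    def __init__(self, gain, target, bias0=0.0, weight=0.3):
        self.gain, self.target, self.bias, self.w = gain, target, bias0, weight

    def recipe(self):
        # invert the model to get the setting that should hit the target
        return (self.target - self.bias) / self.gain

    def update(self, u_used, y_measured):
        # EWMA update of the model intercept from the latest wafer's metrology
        self.bias = (self.w * (y_measured - self.gain * u_used)
                     + (1 - self.w) * self.bias)

# Example: deposition model, 20 A per unit setting, 1000 A target (hypothetical)
ctl = EwmaRunToRunController(gain=20.0, target=1000.0)
for drift in (0.0, 15.0, 30.0):          # slowly drifting chamber condition
    u = ctl.recipe()
    y = 20.0 * u + drift                 # "measured" thickness for this wafer
    ctl.update(u, y)
    print("setting %.2f -> thickness %.1f" % (u, y))
```

The EWMA weight trades responsiveness against noise sensitivity: a small weight filters metrology noise, while a large weight tracks fast drifts.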


Metrology tools that are currently used off-line, but are seen to be moving towards a simpler in-situ or in-line sensor embodiment (e.g., ellipsometer, Fourier transform infrared (FTIR) film thickness), will be included. Sensors used for tool development (e.g., intrusive RF probes, spatially resolved spectroscopy) are not included. Sensors used for gas flow and pressure control are also omitted, as these are well-established technologies, while in-situ particle sensors are covered in the chapter on contamination-free manufacturing. For each sensor described, the following information is included, based on input from the sensor manufacturer:
† the fundamental operating principle behind each sensor, with greater detail for the less-common ones;
† practical issues in the use and interfacing of these sensors.
When a number of manufacturers exist for a given sensor, references are provided to several of them, although there is no claim that this list is totally inclusive. The sensors included in this chapter are ones that provide most of the features of an ideal in-situ sensor: low cost, reliability, and ease of integration into the processing tool, with sensitivity to equipment and process variations over a broad range of processing conditions. Links to the major companies selling sensors are given in Table 25.1.

TABLE 25.1 Web Links to Major Companies Manufacturing Sensors for Semiconductor Manufacturing

Nanometrics (http://www.nanometrics.com): Scatterometer, Fourier transform infrared (FTIR)
Advanced Energy (http://www.advanced-energy.com): RF systems, sensors
Digital Instruments (http://www.di.com): AFM for chemical-mechanical polishing (CMP) (off-line)
ENI (http://www.enipower.com): RF probes
Ferran Scientific (http://www.ferran.com/main.html): Residual gas analysis (RGA)
Ircon (http://www.designinfo.com/ircon/html): Noncontact IR thermometers
KLA-Tencor (http://www.kla-tencor.com): Stress, thin film measurement, wafer inspection, metrology
Leybold Inficon, Inc. (http://www.inficon.com): Residual gas analyzer (RGA), leak detectors, full wafer interferometry, quartz crystal microbalance (QCM)
Lucas Labs (http://www.LucasLabs.com): Plasma diagnostics
Luxtron (http://www.luxtron.com): Endpoint for plasma and CMP
Ocean Optics (http://www.oceanoptics.com/homepage.asp): Spectrometer-on-a-card
Nova Measuring Instruments (http://www.nova.co.il): Spectrophotometric integrated thickness monitors for CMP
On-Line Technologies, Inc. (http://www.online-ftir.com): FTIR spectrometer for wafer state and gas analysis
Panametrics (http://www.industry.net/panametrics): Moisture, O2 analyzers
Princeton Instruments (http://www.prinst.com): Imaging, spectroscopy
Quantum Logic Corporation (http://www.quantumlogic.com): Pyrometry
Rudolph Technologies, Inc. (http://www.rudolphtech.com/home): Ellipsometer
SemiTest (http://www.semitest.com): Epi layer resistivity
Scientific Systems (http://www.scientificsystems.physics.dcu.ie): Langmuir probe, plasma impedance
SC Technology (http://www.sctec.com/sctinfo.htm): Reflectometry for coaters
Sigma Instruments (http://www.sig-inst.com): QCM
Sofie Instruments (http://www.microphotonics.com/sofie.html): Interferometer, optical emission spectroscopy (OES), ellipsometer
Sopra (http://www.sopra-sa.com): Spectroscopic ellipsometry
Spectra International (http://spectra-rga.com): RGA
Spectral Instruments (http://www.specinst.com): Spectrophotometers, CCD cameras
Therma-Wave (http://www.thermawave.com): Thin film and implant metrology
Thermionics (http://www.thermionics.com): Diffuse reflectance spectroscopy
Verity Instruments (http://www.verityinst.com): OES, spectral ellipsometer


The highest-level sorting is by the major process-state (temperature, gas phase composition, plasma properties, etc.) and wafer-state (film thickness, thickness uniformity, resist thickness and profile, etc.) sensors. The major focus is on the technology behind each sensor. Applications are described only when they are not obvious from the nature of the sensor. Any particular application example is not intended to promote that particular brand of sensor; rather, (1) it may be the only available sensor based on that technology, or (2) the specifics may be required to provide a proper explanation for the use of that type of sensor.

25.2 Process State Sensors

Sensors exist for monitoring both the process state of a particular tool and the wafer state of the processed wafer. The wafer state is, of course, the critical parameter to be controlled; hence, measurement of the appropriate wafer-state property is clearly the most effective means for monitoring and controlling a manufacturing process. However, this is not always possible due to
† lack of an appropriate sensor (technology limitation);
† lack of integration of appropriate sensors into processing tools (cost and reliability limitations).
In these cases, the alternative is to monitor the process state of the manufacturing tool. In many cases, this is an easier task achieved with less expensive sensors. Nonintrusive RF sensors can be connected to the RF input lines, or the tuner, of an RF-powered processing tool. A range of optical techniques exists that require only optical access to the processing chamber. Historically, the most predominant use of such process state sensors has been for endpoint determination. This is generally performed by the continuous measurement of an appropriate signal (e.g., intensity at a specific wavelength) during the processing of a wafer, looking for a change in the magnitude of the signal. Aside from endpoint determination, the availability of process state sensors in OEM processing tools is generally paced by integration issues (electrical, optical, cost, reliability). In addition, there is generally a lack of the necessary models that relate these process state measurements to the critical wafer-state properties. Largely because of this limitation, many of the process state sensors are typically employed for fault detection. This is the simplest use of such sensors, where the output is monitored with univariate or multivariate statistical methods to determine deviations from the "normal" processing conditions. This methodology has a significant payback to manufacturing yield through early determination of operational anomalies and hence decreased misprocessing of wafers. Using process state sensors for process control requires much more rigorous models relating the sensor signal(s) to the wafer-state parameter. The following is a description of the sensors that have been, or soon will be, integrated into OEM processing tools for use in FDC or MBPC.
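As one illustration of the multivariate monitoring mentioned above, a Hotelling T² statistic can compare a summary vector of sensor readings for each wafer against a known-good baseline. The sketch below is generic; the feature choices, data, and any alarm threshold are hypothetical:

```python
import numpy as np

def hotelling_t2(baseline, sample):
    """Hotelling T^2 distance of one sensor-summary vector from the baseline
    population (rows of `baseline` = known-good wafers; columns = features
    such as mean OES intensity, RF reflected power, chamber pressure)."""
    mu = baseline.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(baseline, rowvar=False))
    d = sample - mu
    return float(d @ cov_inv @ d)

rng = np.random.default_rng(1)
good = rng.normal([100.0, 5.0, 50.0], [1.0, 0.2, 0.5], size=(200, 3))
print(hotelling_t2(good, np.array([100.5, 5.1, 50.2])))   # small: normal wafer
print(hotelling_t2(good, np.array([103.0, 6.0, 52.0])))   # large: flag as fault
```

In practice, the alarm threshold would be set from the T² distribution of the baseline population itself.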

25.2.1 Temperature

The measurement and control of wafer temperature and its uniformity across the wafer are critical in a number of processing tools, such as rapid thermal processing (RTP), chemical vapor deposition (CVD), physical vapor deposition (PVD), and epitaxy (EPI), used for film growth and annealing. Historically, the most commonly used temperature measurement techniques are thermocouples and infrared pyrometry. Infrared pyrometry is based on analysis of the optical emission from a hot surface. It is dependent on two main variables: the field of view of the detector and the optical properties of the material, such as refractive indices and emissivity. While useful only above 450°C due to the low emissivity of semiconductors in the infrared, pyrometry has been commercialized and is widely utilized in semiconductor (SC) manufacturing tools. A newer technique is diffuse reflection spectroscopy (DRS), which provides a noncontact, in-situ optical method for determining the temperature of semiconducting

DK4126—Chapter25—23/5/2007—19:43—ANBARASAN—240454—XML MODEL CRC12a – pp. 1–60

TABLE 25.2 Pro and Con for Four Temperature Measurement Techniques

Thermocouple
  Advantages: Easy to use; low cost.
  Disadvantages: Cannot be used in hostile environments; requires mechanical contact with the sample; sensitivity depends on placement.

Pyrometer
  Advantages: All-optical, noninvasive; requires a single optical port.
  Disadvantages: Unknown or variable wafer backside emissivity; limited temperature range; sensitive to all sources of light in the environment.

Diffuse reflectance spectroscopy
  Advantages: Optical, noninvasive; directly measures substrate temperature; insensitive to background radiation; can be applied to a wide range of optical access geometries; wafer temperature mapping capability.
  Disadvantages: Requires two optical ports; relatively weak signal level.

Acoustic thermometry
  Advantages: Sample emissivity not a factor; wide temperature range.
  Disadvantages: Intrusive to the reaction chamber; physical contact required.

Source: Adapted from input by Booth, J., Thermionics Northwest, Port Townsend, WA. http://www.thermionics.com

substrates. The technique is based on the optical properties of semiconductors, specifically that the absorption coefficient rapidly increases for photon energies near the band gap of the material. Hence, a semiconducting wafer goes from being opaque to transparent in a spectral region corresponding to its band gap energy. A temperature change of the semiconductor is accompanied by a change in the band gap, which is then reflected as a shift of this absorption edge. Recently, an acoustic technology has also been developed. The advantages and disadvantages of these four techniques are presented in Table 25.2. Thermocouples are sometimes used for temperature measurement in processing tools. Since they have to be located remotely from the wafer, temperature errors of more than 100°C are possible, with no means for monitoring the temperature distribution across the wafer. Hence, this sensor is not widely used in SC manufacturing tools, so it will be omitted from this discussion.

25.2.1.1 Pyrometry

Precise wafer temperature measurement and tight temperature control during processing continue to be required because temperature is the most important process parameter for most deposition and annealing processes performed at elevated temperature [2]. As device features become smaller, tighter control of thermal conditions is required for successful device fabrication. Optical temperature measurement is historically the primary method for in-situ wafer temperature sensing. Known as pyrometry, optical fiber thermometry, or radiation thermometry, it uses the wafer's thermal emission to determine the temperature. The optical fibers (sapphire and quartz), or a lens, are mounted on an optically transparent window on the processing tool and collect the emitted light from, in most cases, the backside of the wafer. The collected light is then directed to a photodetector, where it is converted into an electrical signal.

25.2.1.1.1 Theory of Operation
All pyrometric measurements are based on the Planck equation, written in 1900, which describes a blackbody emitter. This equation basically expresses the fact that if the amount of light emitted at a given wavelength is known and measured, then the temperature can be calculated. As a consequence of this phenomenon, all pyrometers are made of the following four basic components:
† collection optics for the emitted radiation;
† light detector;
† amplifiers;
† signal processing.
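For reference, the relationship invoked here is standard radiometry rather than anything specific to a given instrument. Planck's law for blackbody spectral radiance, together with the emissivity factor that corrects for a real (gray) surface, is

\[
L_\lambda(\lambda,T) = \frac{2hc^2}{\lambda^5}\,\frac{1}{\exp\!\left(\dfrac{hc}{\lambda k_B T}\right)-1},
\qquad
L_\lambda^{\mathrm{meas}}(\lambda,T) = \varepsilon(\lambda)\,L_\lambda(\lambda,T)
\]

where \(h\) is Planck's constant, \(c\) the speed of light, \(k_B\) Boltzmann's constant, and \(\varepsilon(\lambda)\) the sample emissivity. A single-wavelength pyrometer measures \(L_\lambda^{\mathrm{meas}}\) and inverts this expression for \(T\); the result is only as good as the assumed \(\varepsilon(\lambda)\), which is why emissivity heads the list of error sources below.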


There are thousands of pyrometer designs and patents. A thorough description of the theory and the many designs, as well as the most recent changes in this field, is well summarized in recent books and publications [3-6]. The two largest problems and limitations with most pyrometric measurements are the unknown emissivity of the sample, which must be known to account for deviations from blackbody behavior, and stray background light. In addition, the measurement suffers from a number of potential errors from a variety of sources. While the errors are often small, they are interactive and vary with time. The following is a summary of these sources of error, roughly in order of importance:

1. Wafer emissivity: worst case is coated backsides with the wafer supported on pins.
2. Background light: worst case is RTP and high energy plasma reactors.
3. Wafer transmission: worst at low temperatures and longer wavelengths.
4. Calibration: it has to be done reproducibly and to a traceable standard.
5. Access to the wafer: retrofits are difficult for integrating the sensor into the chamber.

The following problems will become much more important as the previous problems are minimized by chamber and pyrometer design.
6. Pyrometer detector drift: electronics (amplifiers) and photodetectors drift over time.
7. Dirt on collection optics: deposition and outgassing coat the fiber or lens.
8. Changes in alignment: moving the sensor slightly can cause an error by looking at a different place on the wafer or by changing the effective emissivity of the environment.
9. Changes in the view angle: changes the effective emissivity and hence the measured temperature.
10. Changes in wavelength selective filters: oxidation over years will change the filters.
Careful design of the entire pyrometer environment, not just the pyrometer itself, can serve to minimize these problems. Nearly all single-wafer equipment manufacturers now have pyrometry options for their tools. Properly designed sensor systems in single-wafer tools can accurately measure wafers quickly (one-tenth of a second) from about 250 to 1200°C with resolution to 0.05°C. Continually increasing process requirements will push the pyrometric measurement limits to lower temperatures for plasma-assisted CVD, cobalt silicide RTP, and plasma etch. The need for lower temperatures will mandate more efficient fiber optics. There will be continued improvement in real-time emissivity measurement. Physical properties other than thermal radiance will also be used for measuring wafer temperatures. Repeatability will have to improve. The problem of unknown or changing emissivity is being addressed with the implementation of the ripple technique [7], which takes advantage of the modulation in background light to measure real-time wafer emissivity. This method can measure emissivity to ±0.001 at a rate of 20 times per second.
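As a closing numeric illustration of the single-wavelength calculation, the sketch below inverts Planck's law for temperature given a measured radiance and an assumed emissivity; the constants are standard, while the wavelength, temperature, and emissivity values are purely illustrative:

```python
import numpy as np

H = 6.626e-34   # Planck constant (J s)
C = 2.998e8     # speed of light (m/s)
KB = 1.381e-23  # Boltzmann constant (J/K)

def radiance(wavelength_m, temp_k, emissivity=1.0):
    """Spectral radiance of a gray body (Planck's law times emissivity)."""
    x = H * C / (wavelength_m * KB * temp_k)
    return emissivity * (2 * H * C**2 / wavelength_m**5) / np.expm1(x)

def pyrometer_temperature(l_meas, wavelength_m, emissivity):
    """Invert Planck's law at a single wavelength for temperature."""
    a = 2 * H * C**2 / wavelength_m**5
    return H * C / (wavelength_m * KB * np.log1p(a * emissivity / l_meas))

# Round trip at 900 nm: a 1000 K surface with assumed emissivity 0.7
l = radiance(900e-9, 1000.0, 0.7)
print("%.1f K" % pyrometer_temperature(l, 900e-9, 0.7))   # recovers ~1000 K
```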

25.2.1.2 Diffuse Reflectance Spectroscopy

25.2.1.2.1 Theory of Operation
Semiconductor physics provides a method for the direct measurement of substrate temperature, based on the principle that the band gap in semiconductors is temperature dependent [8]. This dependence can be described by the Varshni equation [9]

\[
E_g(T) = E_g(T{=}0) - \frac{\alpha T^2}{\beta + T} \tag{25.1}
\]

where α and β are empirically determined constants. The behavior of the band gap is reflected in the absorption properties of the material. If a wafer is illuminated with a broadband light source, photons with energy greater than the band gap energy are absorbed as they pass through the material. The wafer is transparent to lower energy (longer wavelength) photons. The transition region where the material goes from being transparent to opaque occurs over a relatively narrow energy (or wavelength) range. Thus, a plot of the light signal passing through the material as a function of wavelength or energy (its spectrum) yields a step-like absorption edge, as shown in Figure 25.2. The position of the absorption edge shifts in energy or wavelength as the substrate temperature changes; the magnitude of the shift is material dependent.
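A short sketch shows how Equation 25.1 can be inverted to convert a measured absorption-edge wavelength into a substrate temperature. The default Varshni parameters are commonly quoted literature values for GaAs, and the edge-extraction step (locating the edge in a spectrum such as Figure 25.2) is assumed to have been done already:

```python
from scipy.optimize import brentq

HC_EV_NM = 1239.84  # h*c in eV*nm

def band_gap_ev(t_kelvin, eg0=1.519, alpha=5.405e-4, beta=204.0):
    """Varshni band gap, Equation 25.1; defaults are commonly quoted GaAs values."""
    return eg0 - alpha * t_kelvin**2 / (beta + t_kelvin)

def temperature_from_edge(edge_nm, **varshni):
    """Invert Equation 25.1: find the T whose band gap matches the measured edge."""
    eg_meas = HC_EV_NM / edge_nm
    return brentq(lambda t: band_gap_ev(t, **varshni) - eg_meas, 1.0, 1500.0)

print("Eg(300 K) = %.3f eV" % band_gap_ev(300.0))             # ~1.42 eV
print("edge at 900 nm -> T = %.0f K" % temperature_from_edge(900.0))
```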


[Figure 25.2: transmission spectrum (signal, arb. units, vs. wavelength, 830-1230 nm) of a semi-insulating GaAs wafer, showing the opaque region, the absorption edge, and the transparent region.]

FIGURE 25.2 Spectrum of a 350-μm thick gallium arsenide (GaAs) wafer showing the absorption edge where the wafer goes from being opaque to transparent. (Adapted from input by Booth, J., Thermionics Northwest, Port Townsend, WA. http://www.thermionics.com)

25.2.1.2.2 Band Edge Measurement Techniques
There are several different techniques for measuring the optical absorption of semiconductors in-situ: transmission, specular reflection, and diffuse reflection. Transmission measurements observe light that passes through the substrate. This technique has the advantage of providing a high signal-to-noise ratio (SNR). The main drawbacks are that one must have optical access to both sides of the sample, and one is limited to single-point temperature monitoring. Reflection measurements from a sample can be separated into two components: specular reflection and diffuse reflection. Specularly reflected light is the familiar "angle of incidence equals angle of reflection" component. The diffusely scattered portion is light that is reflected from the sample over 2π steradians and carries with it color and texture information about the sample. The main differences between diffuse and specular reflection are:
1. Diffuse reflection can be seen over a wide range of angles, while specular reflection can only be observed from a narrow region in space.
2. Diffuse reflection is much weaker than specular reflection.
3. Both diffuse and specular reflections carry with them color and texture information. However, it is much easier to read this information from diffuse reflection because it is not buried under a large background signal level.

25.2.1.2.3 Band Edge Thermometry Limitations
There are two limitations to band-edge temperature measurement techniques: (1) free carrier absorption in the material and (2) coating of the substrates with opaque material.


In semiconductors, there are a number of different phenomena that contribute to light absorption in the material. One of these is the so-called free carrier absorption. This absorption term is related to the number of carriers excited into the conduction band from the valence band. Free carriers are thermally excited, and the free carrier absorption increases with sample temperature. This absorption occurs over a broad wavelength range, and the material becomes more opaque as the temperature rises. The substrate band edge feature decreases in intensity until, above a material-dependent threshold temperature, it can no longer be seen. In general, the smaller the band gap, the lower the temperature at which the substrate will become opaque. For silicon, the upper temperature limit for DRS temperature measurement is approximately 600°C, while for gallium arsenide the upper limit is estimated to be above 800°C. The second limitation arises when the substrate is covered with an opaque material such as a metal. In this situation, light cannot penetrate the metal layer, and consequently no band edge spectra can be recovered. Two remedies for this situation are leaving an open area on the substrate for temperature monitoring purposes, and viewing the substrate from the nonmetallized side.

25.2.1.2.4 Temperature Measurement: Practice and Applications
The initial application of band edge thermometry [10] measured the heater radiation that passed through a substrate to deduce the sample temperature. This approach suffered from a decreasing SNR as the heater temperature dropped. As in pyrometry, it was susceptible to contamination from other light sources present in the processing environment. Later on [11-13], two key innovations were introduced to the original transmission technique: (1) the use of an external modulated light source to illuminate the sample and (2) collection of the diffusely reflected source light to measure the band edge. The DRS technique is shown schematically in Figure 25.3. Part of the light is specularly reflected, while the other part is transmitted through the sample. At the back surface, some of the transmitted light is diffusely scattered back towards the front of the sample. The collection optics are placed in a nonspecular position, and the captured diffusely scattered light is then analyzed. The net result is a transmission-type measurement from only one side of the wafer. A DRS temperature monitor [14] is shown schematically in Figure 25.4. The system consists of a main DRS module housing the monochromator, power supplies, and lock-in amplifier electronics. The light source and collection optics attach to the exterior of the processing chamber. Data collection and analysis are controlled by a stand-alone computer. The system can output analog voltages for direct substrate temperature control and can communicate with other devices using an RS-232 interface. The device is capable of 1 s updates with point-to-point temperature reproducibility of ±0.2°C. The system can read


FIGURE 25.3 Schematic showing the diffuse reflection spectroscopy measurement technique. (Adapted from input by Booth, J., Thermionics Northwest, Port Townsend, WA. http://www.thermionics.com)



FIGURE 25.4 Diffuse reflection spectroscopy 1000e temperature monitoring system schematic. (Adapted from input by Booth, J., Thermionics Northwest, Port Townsend, WA. http://www.thermionics.com)

silicon, gallium arsenide, and indium phosphide substrates from below room temperature to 600°C, >800°C, and >700°C, respectively. The DRS technology has been applied to both compound semiconductor and silicon processing. In molecular beam epitaxy and its related technologies, such as chemical beam epitaxy (CBE), material is grown layer by layer by opening shutters to molecular sources. The quality of the layers depends, in part, on the temperature and temperature uniformity of the substrate. In typical growth environments, the wafer temperature is controlled by a combination of thermocouple readings and pyrometer readings. The DRS can monitor and control the temperature of a gallium arsenide (GaAs) wafer in a CBE tool to well within ±1°C [15]. Even though band edge thermometry has a fixed upper temperature limit of ~600°C for silicon, this technique is still applicable to several silicon processing steps, such as silicon etching [16], wafer cleaning, and wafer ashing.

25.2.1.3 Acoustic Wafer Temperature Sensor

Recently, a new technology has been developed for real-time wafer temperature measurement in semiconductor processing tools [17]. This product [18] is based on state-of-the-art acoustic thermometry technologies developed at Stanford University. This sensor fills the need for real-time wafer temperature measurement independent of wafer emissivity, especially in the sub-600°C process regime. It is compatible with plasma processes and mechanical or electrostatic wafer clamping arrangements.

25.2.1.3.1 Theory of Operation
The velocity of acoustic waves in silicon is a very linear function of temperature. Acoustic thermometry accurately measures the velocity of an acoustic wave on the silicon wafer to determine the temperature of the wafer. The acoustic thermometer determines the velocity by very accurately measuring the delay between two points at a known distance. In its simplest implementation, the acoustic thermometer contacts the wafer with two pins, as shown in Figure 25.5. One pin is a transmitter, and the other a receiver. Both pins have a piezoelectric transducer mounted to their lower end. Both piezoelectric transducers can turn an electrical excitation into an acoustic excitation, or vice versa. The tops of the two pins touch the wafer, to allow the transfer of acoustic energy between the wafer and the pins. The pins can be of any inert, relatively stiff material, such as quartz (fused silica) or alumina. An electrical excitation pulse excites the transmitter pin's transducer to initiate the measurement. This excites an acoustic wave that propagates up the transmitter pin. When the wave reaches the top

DK4126—Chapter25—23/5/2007—19:43—ANBARASAN—240454—XML MODEL CRC12a – pp. 1–60

25-10

Handbook of Semiconductor Manufacturing Technology


FIGURE 25.5 Geometry and electrical signal for acoustic thermometry. (Adapted from input by Sensys Instruments, Sunnyvale, CA.)

of the pin, two things happen to the acoustic energy. Most of the energy reflects back down the pin. The reflected energy gives rise to an electrical echo signal at the transmitter’s transducer. A small amount of the acoustic energy enters the silicon as an acoustic wave, which propagates out from the transmitter pin. When the acoustic wave reaches the receiver pin, a small portion of its energy excites an acoustic wave that propagates down that pin. That wave produces the electrical signal at the receiver pin’s transducer. The echo signal corresponds to propagation up a pin and down a pin. The received signal corresponds to propagation up a pin, across the wafer, and down a pin. If the pins are identical, the delay between the received and the echo signals corresponds to propagation on the wafer over the known distance between the transmitter and the receiver pins. The ratio of the distance to the delay is the velocity of the acoustic wave in the wafer, and is thus an indicator of the wafer temperature. The temperature sensor is the wafer itself. Since the acoustic thermometer measures the velocity of propagation between the two pins, it measures an average temperature of the wafer along the propagation path between the pins, and not just the temperatures at the pins. Additional pins can yield average temperatures over multiple zones on the wafer. The acoustic wave extends across the whole thickness of the wafer. Thus, the measured temperature is an average over the thickness of the wafer as well. It is not particularly sensitive to the lower surface of the wafer, even though the pins only touch the lower surface. This sensor can be designed to operate as the lift pin mechanism in single-wafer processing tools. The sensor does not influence or contaminate the wafer environment. The pin assembly has to be integrated into the OEM tool for the successful deployment of this technology.
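The delay-to-temperature conversion just described reduces to a few lines. In the sketch below, the pin spacing, reference velocity, and temperature coefficient are hypothetical placeholders for an instrument-specific calibration:

```python
def acoustic_temperature(delay_s, pin_spacing_m, v_ref_m_s, t_ref_c, dv_dt):
    """Wafer temperature from the acoustic time of flight between two pins.
    velocity = spacing / delay; temperature from a linear calibration
    v(T) = v_ref + dv_dt * (T - t_ref). All calibration numbers used here
    are hypothetical placeholders, not vendor data."""
    velocity = pin_spacing_m / delay_s
    return t_ref_c + (velocity - v_ref_m_s) / dv_dt

# e.g., 100 mm pin spacing and a measured 12.00 us delay
print("%.1f C" % acoustic_temperature(
    delay_s=12.00e-6, pin_spacing_m=0.100,
    v_ref_m_s=8433.0, t_ref_c=25.0, dv_dt=-0.5))  # dv/dT < 0: slower when hotter
```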

25.2.2 Gas Phase Reactant Concentration

Most semiconductor manufacturing processes are chemical in nature. Typically, there is a chemical reaction between a gas phase chemical species (or mixture) and the surface layer of the silicon wafer. These reactions can be kinetically controlled, hence the interest in the wafer temperature. But they can also be controlled by the composition and concentration of the gas phase reactants. These parameters therefore have to be monitored and controlled in order to provide consistent, reproducible reactions and wafer-state properties. Processing tools always control the primary components of these gas phase mixtures (flow rate of the gases, pressure). However, there is still a clear need for measuring the composition and/or the concentration of individual species to detect:


† changes in the chemistry as a function of the chemical reaction with the wafer, i.e., endpoint for etch;
† changes in the chemistry due to spurious conditions or faults (e.g., leaks, wrong gas);
† rapid changes or a slow drift in the gas phase chemical composition due to reactor chamber effects, such as cleaning, residue formation on the walls, or wear of consumable parts within the reactor.

A large group of in-situ sensors for gas phase monitoring are spectroscopic in nature, as these are truly nonintrusive sensors. They require optical access through a single window (or sometimes opposing windows), made of materials that can provide a vacuum seal and are transparent to the wavelengths being employed. The sensors analyze the composition of the gas phase via absorption or emission methods, as appropriate for a given process. The spectral range is from the UV through the IR, depending upon the nature of the information required. Mass spectroscopy, in the commonly used residual gas analysis (RGA) mode, provides another class of in-situ sensors used for monitoring gas composition. These are based on the analysis of the masses of the species (specifically, m/e) entrained in the gas flow. Sampling can be performed via a pin-hole orifice to the processing chamber, or by sampling the effluent from the reactor. A typical installation requires differential pumping of the RGA, although some of the recent systems do not have this requirement in selected low-pressure applications.

25.2.2.1 Optical Emission Spectroscopy

Optical emission spectroscopy (OES) is based on [19] monitoring the light emitted from a plasma during wafer processing and is used to gain information about the state of the tool and the process. It exploits the fact that an excited plasma emits light at discrete wavelengths which are characteristic of the chemical species present in the plasma. The intensity of the light at a particular wavelength is generally proportional to both the concentration of the associated chemical species and the degree of plasma excitation. An OES system consists of a viewport to the plasma chamber, an optical coupling system, an optical detector incorporating some means of isolating the wavelength of interest, and a computer or processor to acquire and analyze the spectral image. The viewport is either a window in the reactor or a direct optical feedthrough into the chamber. The OES requires a direct view of the portion of the plasma immediately above the wafer, but not the wafer itself, so the placement of the viewport is not too restrictive. If ultraviolet wavelengths are to be monitored, the window must be of fused silica and not ordinary glass. A number of OES sensor systems are commercially available [20,21] and most OEM plasma tools come with their own on-board OES systems. The typical configuration is shown in Figure 25.6. The optical components and the other associated concerns with OES systems are described in the next sections.

FIGURE 25.6 Optical emission sensor configuration. (Adapted from input by Whelan, M., Verity Instruments, Carrollton, TX. http://www.verityinst.com)

25.2.2.1.1 Fixed-Wavelength Systems
There are several types of optical detectors for OES. Simple systems use fixed bandpass filters for wavelength discrimination. These are stacks of dielectric films, and have a bandpass of typically 1–10 nm and a peak transmission of about 50%. The light that is passed by the filter is converted to an electrical signal either by a photodiode or by a photomultiplier tube (PMT). The advantages of these systems are low cost and high optical throughput; disadvantages are the limited spectral information and the mechanical complexity involved in changing the wavelength being monitored.

25.2.2.1.2 Monochromators and Spectrographs
More flexibility is afforded by systems which incorporate a monochromator. A monochromator consists of a narrow entrance slit, a diffraction grating, and an exit slit. Light falling on the grating is dispersed into a spectrum, the diffraction angle being dependent upon wavelength. The light is reimaged onto another slit, which provides wavelength discrimination. By turning the grating, any wavelength within the range of the instrument can be selected. Changing the width of the entrance and exit slits changes the bandpass of the system. Systems in which the wavelength can be altered automatically under computer control are often used. The throughput of a monochromator is much lower than that of a bandpass filter, hence PMTs are normally used for light detection in these systems. A variant of the monochromator is the spectrograph. It uses a fixed grating, and a solid-state detector array instead of an exit slit and PMT. The advantage of a spectrograph over a monochromator is that many wavelengths can be monitored at once. This is significant for situations where information has to be based on an entire spectral scan, not from only a single spectral peak (an example is in Section 25.2.2.1.6.2). In the typical installation, light is transferred from the viewport on the tool to the detector head via an optical coupler. Some fixed-wavelength devices mount directly onto the chamber, and no coupler is needed. In other cases, all that is required is to mount the detector against the window on the chamber. More typically, however, space or field-of-view considerations require the use of an optical coupling system. Optical fiber bundles are typically used to bring light into monochromators and spectrographs. This permits locating the detector at a convenient place away from the reactor. Attention to details such as f-number matching is required to prevent unnecessary loss of sensitivity. Suppliers of OES equipment can generally provide couplers which are matched to their equipment. In the typical installation in a semiconductor production environment, an OES system usually consists of a detector head interfaced to a PC running a Windows or NT environment. The computer contains an A/D card for digitization of the analog signal from the detector, and runs an application that performs various functions: processing, data display, endpoint detection, and communication with the process controller.

25.2.2.1.3 Charge-Coupled Device Array Spectrometers
A charge-coupled device (CCD) array spectrometer consists of a slit, dispersion grating, and a CCD array detector. Light is typically delivered to the slit by a subminiature (SMA) connected fiber optic cable between the spectrometer and a fixture on the window of the reactor. A large number of CCD array spectrometers exist in the commercial market.
They can be sorted into two main categories based on the characteristics of the CCD array and the form-factor of the final product. The first category is characterized [22] by the “spectrometer-on-a-card” format, aimed primarily at use as a portable diagnostic tool with existing PCs. At least two such systems are currently available [23,24]. This means the spectrometer, say with a 2048 pixel CCD array, and all associated electronics can fit inside the PC, plugging into an expansion slot with an external fiber optic connection to the light source. This has some obvious benefits:

† Compact size: A substantial benefit in the high-cost clean room space.
† Portability: It can be used as a roaming diagnostic tool.


† Multiple spectrometers on a single card: Simultaneous spectral analysis in multichamber tools, or spatially resolved analysis [85] in a single chamber.
† Low cost: Cost is not a barrier to the use of this tool.

While there are benefits to these systems, there are some limitations that need to be understood. Specifically, the CCD array typically used in these systems has four inherent deficiencies (relative to PMTs), which if not adequately offset, will prevent the CCD array from surpassing or even matching PMT system performance. The four inherent CCD array deficiencies (relative to PMTs) are:

1. small detector pixel height (limited aspect ratio, e.g., 1:1), which limits the sensitivity of certain CCD array devices,
2. absence of inherent CCD array device “gain” (unity gain), which further limits the sensitivity,
3. poor UV response of certain CCD array devices, and
4. limited spectral resolution, due to typical CCD array configuration into very short focal length optical spectrographs, which exhibit more limited wavelength dispersion than is sometimes the case for PMTs, and which generally also exhibit severe, uncompensated, internal spectral imaging aberrations normally inherent with very short focal length optical spectrograph designs.

Fortunately, solutions exist that provide offsetting factors for each of the above-mentioned CCD array deficiencies. A brief description of these solutions provides background for understanding the key characteristics of CCD array spectrometers. Concerning the problem of “small detector pixel height,” offsetting factors include greatly reduced CCD dark current and noise (especially with small pixel areas), availability of selected array devices having greater than 1:1 (h:w) pixel aspect ratio (e.g., 20:1), and availability of 1D (vertical), internal, secondary, light-concentrating optics with certain CCD array spectrographs. A relatively “tall” spectral image is thereby height-focused onto a much shorter array pixel, thus concentrating the light and increasing the signal, without increasing the dark current or affecting spectral resolution (image width). Concerning the problem of “absence of inherent CCD array device gain (unity gain)” relative to high gain PMTs, offsetting CCD array sensitivity factors include the natural integrating properties of array pixels, and an inherent CCD array quantum efficiency, which far exceeds that of PMTs. Collectively, these offsetting factors are so effective that CCD arrays can be rendered sufficiently sensitive to achieve a “full well” device charge count (saturation) for prominent spectral features within 400 ms (or less) exposure time, even with the dimmest of plasma etching experiments. When the light level is quite high, CCD array exposure times may typically be as low as 10 ms, or even less. The high light level allows, for example, 20 or even 40 separate (10 ms) exposures to be digitally filtered and signal averaged (co-addition) for each of 2048 array pixels. Digitally filtering and signal averaging this many exposures provides a major statistical enhancement of the SNR. In addition, data from several adjacent wavelength pixels may optionally be binned (software summation) in real time for even more SNR enhancement, in cases where spectral resolution is not critical. Concerning the problem of “poor UV response,” offsetting factors exist in the form of fluorophore coatings applied directly to the detector pixels. Satisfactory UV response is thereby achieved.
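The statistics behind co-addition and binning are easy to demonstrate; the sketch below uses synthetic data (the 2048-pixel array and 10-ms exposure count follow the text; the noise level and line shape are assumed):

```python
import numpy as np

# Co-addition sketch: averaging N uncorrelated exposures reduces the noise by
# ~sqrt(N); binning adjacent pixels trades resolution for further SNR gain.

rng = np.random.default_rng(0)
n_pixels, n_exposures = 2048, 20
pixels = np.arange(n_pixels)
true_spectrum = 1000.0 + 200.0 * np.exp(-((pixels - 700) / 5.0) ** 2)

# Twenty separate 10-ms exposures with additive readout/shot-like noise.
exposures = true_spectrum + rng.normal(0.0, 50.0, size=(n_exposures, n_pixels))

averaged = exposures.mean(axis=0)               # co-addition of 20 exposures
binned = averaged.reshape(-1, 4).sum(axis=1)    # optional 4-pixel binning

noise_single = (exposures[0] - true_spectrum).std()
noise_coadded = (averaged - true_spectrum).std()
print(f"noise: {noise_single:.1f} (single) -> {noise_coadded:.1f} (co-added)")
```

With 20 exposures, the residual noise drops by close to the expected factor of sqrt(20), about 4.5.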
The problem of “limited spectral resolution” is one of the most basic problems in using CCD array systems. At most, CCD arrays are only about 1-in. long (e.g., 21–28 mm). This means the entire spectrum must be compressed to fit the 28-mm array length, which limits the spectrographic wavelength dispersion that may be employed. There is an additional resolution and spectral range tradeoff in the choice of gratings. The total wavelength coverage interval of a CCD array is determined by array dimensions and the spectrograph focal length and grating ruling density, which together establish the wavelength dispersion. For an array of fixed dimensions, and a spectrograph of fixed focal length, coarsely ruled gratings (600 grooves/mm) provide less dispersion and hence lower resolution, but a larger total wavelength coverage interval. Finely ruled gratings (1200 or 2400 grooves/mm) provide more dispersion and higher resolution, but a smaller total wavelength coverage interval. Centering of a given wavelength range is specified by the user and is fixed at the factory by adjusting the grating angle.
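As a rough worked example of this tradeoff, the sketch below estimates the wavelength coverage of a 28-mm array from the first-order grating dispersion; the 100-mm focal length and 8° diffraction angle are assumptions for a generic short-focal-length spectrograph, not any particular instrument.

```python
import math

# Coverage ~ (d * cos(beta) / f) * array_length for a first-order grating,
# where d is the groove spacing, beta the diffraction angle, and f the focal
# length. Coarser gratings give more coverage but lower resolution.

ARRAY_LENGTH_MM = 28.0   # ~1-in. CCD array, per the text
FOCAL_LENGTH_MM = 100.0  # short focal length (assumed)
BETA_DEG = 8.0           # diffraction angle (assumed)

def coverage_nm(grooves_per_mm: float) -> float:
    d_nm = 1.0e6 / grooves_per_mm                    # groove spacing in nm
    dispersion = d_nm * math.cos(math.radians(BETA_DEG)) / FOCAL_LENGTH_MM
    return dispersion * ARRAY_LENGTH_MM             # nm across the array

for grooves in (600, 1200, 2400):
    print(f"{grooves} grooves/mm -> ~{coverage_nm(grooves):.0f} nm total coverage")
```

Under these assumptions, the 600 grooves/mm grating covers roughly four times the wavelength interval of the 2400 grooves/mm grating, at correspondingly lower resolution, matching the qualitative behavior described above.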


The second category of spectrographs is characterized by high-performance CCD arrays, with applications aimed at stand-alone use (PC, or laptop, not necessarily required) or integration into OEM processing tools. These are based [19] on research-grade CCD spectrographs that are available with performance that equals or exceeds that of PMT-based systems. For maximum sensitivity, they employ cooled, back-illuminated CCD area arrays. The CCD is operated in a line-binning mode, so that light from the entire vertical extent of the slit is collected. These devices have peak quantum efficiencies greater than 90%, and over 40% throughout the typical spectral range of interest (200–950 nm), when compared with a peak value of 20% typical of a PMT. This means that about one photoelectron is created for every two photons to reach the detector. The best such devices have readout noise of only a few electrons, so that the signal-to-noise performance approaches the theoretical limit determined by the photon shot noise. However, the traditional size, cost, and complexity of these instruments make them impractical for routine monitoring and process control. Nonetheless, many of these features are beginning to appear in instruments priced near, or even below, $10K. Spectrographs in this price range are available [20,25,26], which employ a cooled, back-illuminated CCD with a 3-mm slit height, and whose sensitivity matches or exceeds that of currently available PMT-based monochromators. If cost is the prevailing factor, lower cost can be achieved using front-illuminated CCDs. Performance suffers, since the quantum efficiency is reduced by a factor of 2, and the spectral response of these devices cuts off below 400 nm. Nonetheless, this is a cost-effective approach for less-demanding applications. The issues of size and complexity are being addressed as well. One approach is to integrate the optical head together with the data acquisition and process-control functions into a single unit [20] with a small footprint and an on-board digital signal processor (DSP) for data analysis. Such a system can be integrated with the host computer for OEM applications, or be connected to a laptop for roaming applications. Another advantage of such high-performance arrays is that they can be utilized as an imaging spectrograph, where the entrance slit is divided into multiple sections that can couple to different chambers. The resulting spectra can be read independently from the CCD. In this way, multiple spectra can be run on the same instrument.

25.2.2.1.4 Calibration, Interface, and Upkeep Issues
Implementing an OES analysis for a new process requires some expertise on the part of the process engineer. First, the spectral line or lines to be monitored must be chosen based upon a fundamental understanding of the spectral signature. Routine acquisition and signal analysis can then be performed by the sensor. The practical issue of the etching of the window or optical feed-through (or deposition on these components) has to be handled by cleaning or replacing these components. Some OEM vendors address these issues by heating, or recessing, the windows (to slow down deposition), or by installing a honeycomb-like structure over the window (to cut down the deposition or etch on the major cross-sectional area of the window).

25.2.2.1.5 Routine Application—End Point Detection
By far the most widespread use of OES sensors is for endpoint detection.
Historically, such sensors have been routinely used in plasma etch reactors for decades, since the process state (plasma) is a rich source of useful information. The fundamental principle behind endpoint detection is that as the etch proceeds from one layer (the primary layer being etched) to the underlying layer (the substrate), the gas phase composition of the plasma changes. For example, when etching a typical TiN/Al/TiN stack on an oxide substrate with a Cl-containing chemistry, there is a significant decrease in the AlCl product species with a corresponding increase in the Cl-reactant species, as the etch transitions from the bulk Al to the TiN and oxide layers. So continuous monitoring of the 261-nm Al emission line intensity will show a decrease as the Al film disappears. Traditional endpoint detection techniques rely on numerical methods, such as threshold crossing, first derivative, or other combinatorial algorithms, which are manually devised to conform to the characteristics of a family of endpoint shapes


and can be tuned to declare endpoint for a typically anticipated signal change. Besides the endpoint indication—which is by far the most commonly generated information from this data—the slope of the endpoint signal (at the endpoint) can be used as an indicator of the nonuniformity of the etch process [27]. From the sensor point of view, endpoint detection has been well established for the last 15–20 years. The majority of OEM plasma etch tools have endpoint detection hardware integrated into the tool. Historically, these were simple, inexpensive photodetectors that viewed the plasma emission through an appropriately selected optical bandpass filter and an optically transparent window in the side of the reactor. In the newer generation tools, this optical signal is obtained by the use of a short focal length grating monochromator that can be manually scanned to the correct wavelength for the specific process. These have the advantages of readily variable wavelength selection, the higher spectral resolution required to optically separate closely overlapping spectral lines, and the higher sensitivity of the PMT detector (vs. the photodiode detectors).

25.2.2.1.6 Emerging Application—Endpoint Detection for Low Exposed Areas
For etch processes where the material etched is a significant percentage of the wafer surface area (e.g., metal etch), there is a large change in the plasma chemistry when this material is etched off, hence the endpoint signal is very strong and easily detected. The latest challenge in low-exposed-area endpoint detection is for processes such as oxide etch in contact holes, where the exposed area of oxide is less than 1% of the wafer area. This drives a very small chemistry change at endpoint, which in turn generates a very small change in the large emission signal intensity. The following sections provide details on the newly emerging software methods for detecting endpoint based on single-wavelength endpoint curves (Section 25.2.2.1.6.1) or on full spectral data (Section 25.2.2.1.6.2).

25.2.2.1.6.1 Neural Network Endpoint Detection
The shape of the typical single-wavelength endpoint curve is the natural by-product of the processing tool characteristics, the product design, and the influence of the tools and processes that precede the endpoint-defined tool step. As such, the endpoint shape of a given process exhibits a statistical variation derived from the numerous preceding processes that affect the state of the wafer supplied to the tool requiring endpoint control. It then becomes the challenge of the process engineer, working in conjunction with the endpoint controller system, to effect a practical control algorithm. This algorithm has to be unique enough to recognize the endpoint, but also general enough to comprehend the pattern variability, which is a consequence of tool change (namely, window absorbance) and the product mix. This challenge can be imposing, requiring a lengthy empirical evaluation of numerous endpoint data files in an attempt to achieve the correct numerical recipe, which accurately and reliably declares endpoint for the full suite of endpoint pattern variations. One approach [19] to this problem is a neural network-based endpoint detection algorithm [28]. It utilizes a fast training neural network pattern recognition scheme to determine the endpoint signature.
Unlike traditional feed-forward neural networks, which require many pattern samples to build an effective network, the methodology employed with this approach minimizes the number of representative sample data files required for training to typically less than 10. The process engineer is not burdened with numerical recipe optimization. The following simple three-step procedure outlines the technique.

1. Acquire representative data files exhibiting a full range of endpoint patterns; new patterns can be later introduced into this data set.
2. Tag the endpoint patterns in the collected data files, i.e., identify the region in each data set that contains the endpoint.
3. Train the network—an automatic procedure completed in a few minutes.

This technology has been successfully demonstrated and used in etching oxide with as low as 0.1% open area. Ultimate limits for any specific application are tied to a number of variables. These include the type of tool, process, optical detector, and appropriate selection of emission wavelength(s) to monitor. As a caveat, it is important to note that this technique must “see” the evolution of a distinguishable pattern

DK4126—Chapter25—23/5/2007—19:43—ANBARASAN—240454—XML MODEL CRC12a – pp. 1–60

25-16

Handbook of Semiconductor Manufacturing Technology

in order to learn the shape and correctly identify its occurrence as endpoint. The shape may be subtle and complex, but it must be identifiable for successful results. Another useful feature of this type of endpoint detector is its ability to recognize complex, unusual, nonmonotonic patterns. Since this technique employs a pattern recognizer, it is not limited by complex signal variations that can prove daunting for numerical recipes.

25.2.2.1.6.2 Evolving Window Factor Analysis of Full Spectra
With only one wavelength being monitored, random plasma fluctuations or detector/amplifier noise excursions can obscure the small intensity change that would otherwise serve to characterize endpoint in oxide etching of wafers exhibiting open areas of 1% or less. An alternate solution to this problem is to use the full spectrum available from the plasma. It is clear that significantly more useful data can be had if one measures the intensities in a broad spectral range vs. at a single wavelength of a chosen species. Since the spectral data still have to be obtained at a fast-enough rate to detect the endpoint, this drives the use of CCD array detectors. With the necessary dispersion optics, these CCD arrays can simultaneously measure the spectral intensities across a broad spectral range at a resolution determined by a number of factors described in Section 25.2.2.1.3. This sensor change generates a new dimension in the endpoint data, the spectral dimension (vs. a single wavelength). This, in turn, necessitates the development of algorithms that can make good use of this additional information. In one particular case [23], an evolving window factor analysis (EWFA) algorithm is employed to obtain the endpoint information from the multivariate spectral data. An EWFA is a variant of the more classical evolving factor analysis (EFA) technique used for the analysis of ordered—in this case, by time—multivariate data. The EFA follows the singular values (factors) of a data matrix as new rows (samples) are added. In a manner similar to principal component analysis (PCA), EFA determines how many factors are included in the data matrix and then plots these against the ordered variable (time). Such algorithms are well defined and routinely available [29]. The EWFA variation is to consider a moving window of data samples (as in Figure 25.7), for computational ease. The resulting data is a time-series plot of the appearance, or disappearance, of certain factors in the data set. Looking at multiple spectral lines thus provides increased endpoint signal sensitivity, as sketched below. A typical representation of this EWFA is in Figure 25.8, which shows two of the factors. Factor 3 (value 3) shows the temporal nature of the rotating magnetic field in this processing tool. Factor 4 (value 4) shows the endpoint signals from the etching of a four-layer oxide stack; the four endpoint signals are clearly identified in spite of the other temporal variations in the process (the rotating magnetic field). Automated endpoint detection in oxide etching has been shown to work with this technique down to 0.1% open area.
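A minimal sketch of the moving-window idea, substituting a plain singular value decomposition for the commercial EWFA implementation, is given below; the window size and synthetic spectra are illustrative only.

```python
import numpy as np

# Moving-window factor analysis sketch: slide a window of consecutive OES
# scans down the (time x wavelength) matrix and track the leading singular
# value of each mean-centered window. A small chemistry change spread across
# many wavelengths shows up as a jump in the factor trace, even when no
# single emission line shows it clearly.

def ewfa_trace(spectra: np.ndarray, window: int = 10) -> np.ndarray:
    """spectra: (n_scans, n_wavelengths). Leading factor for each window."""
    trace = []
    for t in range(spectra.shape[0] - window + 1):
        block = spectra[t:t + window]
        block = block - block.mean(axis=0)   # remove the static background
        trace.append(np.linalg.svd(block, compute_uv=False)[0])
    return np.asarray(trace)

# Synthetic demo: a weak, broad feature appears at scan 60 ("endpoint").
rng = np.random.default_rng(1)
scans = rng.normal(1000.0, 5.0, size=(120, 512))
feature = 50.0 * np.exp(-((np.arange(512) - 300) / 4.0) ** 2)
scans[60:] += feature

trace = ewfa_trace(scans)
print(int(trace.argmax()))   # peaks for windows straddling scan 60
```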

FIGURE 25.7 Evolving window factor analysis data matrix. (Courtesy of Bob Fry, Cetac Technologies, Inc., Omaha, NE.)


(Plot: factor magnitude vs. time, 0–350 s from initiation of the main etch, for an oxide etch of a BPSG–TEOS test-wafer stack in an AMAT MXP+ etch chamber; endpoint signals on Value 4 are annotated at 77.5, 130, 177, and 267.5 s, with end of etch at 300 s.)

FIGURE 25.8 Evolving window factor analysis endpoint signal on “Value 4”. (Courtesy of Bob Fry, Cetac Technologies, Inc., Omaha, NE.)

25.2.2.2 Fourier Transform Infrared

Infrared spectroscopy, in the mid-IR range of 1–20 μm, can provide a wealth of information about gas properties, including species temperature, composition, and concentration. Its application to gas phase analysis in SC manufacturing tools has been more limited than the use of visible spectroscopy for analyzing the optical emission of plasma processes. But there are some useful applications based on infrared spectroscopy, hence the technology will be described.

25.2.2.2.1 Theory of Operation
Complete spectra from 1.5 to 25 μm wavelength can be obtained in fractions of a second using an FTIR spectrometer. The core of an FTIR is typically a Michelson interferometer consisting of a beam splitter and two mirrors, one of which moves [30]. As shown in Figure 25.9, incoming radiation in a parallel beam impinges on the beam splitter and is split roughly in half into beams directed at the mirrors. The reflected light recombines at the beam splitter to form the outgoing radiation. If the mirrors are equidistant from the beam splitter, then the radiation recombines constructively. If the mirror positions differ by one-fourth wavelength (a half-wavelength round-trip path difference), then the beams combine destructively. As the moving mirror travels at constant velocity, the radiation is amplitude modulated, with each wavelength being modulated at a unique frequency that is proportional to the mirror velocity and inversely proportional to the wavelength. Thus, radiation with twice the wavelength is modulated at half the frequency. The key requirements for such an FTIR spectrometer are that it be vibration immune, rugged, permanently aligned, and thermally stable. Another key issue for accurate quantitative analysis is detector linearity. Mercury cadmium telluride (MCT) detectors are high-sensitivity infrared detectors, but are notoriously nonlinear. Detector correction methods are required to linearize the response. All these requirements have been addressed, making FTIR a commercially available sensor [31] for possible use in SC manufacturing.

25.2.2.2.2 Exhaust Gas Monitoring Applications
Most of the sensors described in this chapter are used for sensing process or wafer-state properties during or after a given manufacturing process. However, FTIR lends itself well to another important aspect of SC manufacturing: fault detection based on exhaust gas monitoring from a reactor.



FIGURE 25.9 Modulation of radiation by a moving mirror interferometer. (Adapted from input by Peter Solomon, On-Line Technologies, East Hartford, CT.)

In one particular study [32] on a high-density plasma reactor, the spectrometer was positioned above the exhaust duct of the etch chamber. The IR beam was directed to a set of focusing and steering mirrors and into a multipass mirror assembly enclosed in an exhaust line tee. This tee was placed between the turbo and the mechanical pumps. The multipass cell generated 20 passes through the tee to provide a 5-m path length. The exhaust gas passed through this in-line gas cell and spectra were collected at 1 cm⁻¹ resolution. The data obtained in this study suggest that FTIR measurements can provide:

1. Exhaust gas monitoring, after the turbo-pump, providing a reproducible and rapid measurement of a rich variety of compounds produced during the wafer etch.
2. Identification of the mix of compounds, which can be used to interpret an etching sequence or the cleaning of a reactor by a reactive plasma.
3. Identification of the effects of incorrect chucking, incorrect plasma power, air leaks, and low-pressure gas feed.
4. Data for a reliable and automated fault detection and classification system.

FTIR can also be used for the analysis of the efficiency of large-scale, volatile organic compound abatement systems.
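Turning such absorbance spectra into concentrations is commonly done with the Beer–Lambert law; the sketch below assumes that approach, with a placeholder absorption cross-section (real quantification relies on calibrated reference spectra for each compound).

```python
import math

# Beer-Lambert sketch for the 5-m multipass cell described above:
# transmission T = exp(-sigma * N * L), so the base-10 absorbance is
# A = sigma * N * L / ln(10) and N = A * ln(10) / (sigma * L).

PATH_LENGTH_CM = 500.0   # 20 passes through the exhaust tee (per the study)
SIGMA_CM2 = 2.0e-19      # absorption cross-section, cm^2/molecule (placeholder)

def number_density(absorbance: float) -> float:
    """Molecules per cm^3 from a measured base-10 absorbance."""
    return absorbance * math.log(10.0) / (SIGMA_CM2 * PATH_LENGTH_CM)

print(f"{number_density(0.05):.2e} molecules/cm^3")  # ~1.15e15 for A = 0.05
```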

25.2.2.3 Mass Spectroscopy/Residual Gas Analysis

In addition to the optical methods previously described, gases can also be analyzed by mass spectroscopy of the molecular species and their fragmented parts. The in-situ mass spectrometric sensor for gas analysis is commonly known as a residual gas analyzer.

25.2.2.3.1 Conventional Residual Gas Analyzer
Quadrupole devices are by far the most widely used in semiconductor manufacturing applications, typically for the maintenance and troubleshooting of process tools [33]. Leak checking, testing for gas contamination or moisture, and periodic RGA scans for tool qualification have been the main uses of RGA. In other applications, RGAs are used to establish a correlation between wafer quality and measured contamination in the process chamber. Recently, RGAs have been used for in-situ monitoring to prevent

FIGURE 25.10 Quadrupole residual gas analyzer. (Adapted from input by Ferran, R.J. and Boumsellek, S., Ferran Scientific, Inc., San Diego, CA. http://www.ferran.com)

accidental scrap and reduce wafer-to-wafer variability. To be effective, RGAs have to be able to directly monitor both tool baseline pressures and process chemistries in a nonintrusive fashion.

25.2.2.3.1.1 Theory of Operation
Conventional RGAs operate by sampling the gases of interest through an orifice between the container for the gases (the processing chamber or exhaust duct in an SC manufacturing tool) and the residual gas analyzer (shown in Figure 25.10). In a conventional RGA, the pressure must be reduced below typical processing chamber pressures prior to ionization. This requires differential pumping and sampling of the process gases, making conventional RGAs a relatively bulky and expensive package. The following brief description of the three basic components of a quadrupole mass spectrometer analyzer—the ionizer, the mass filter, and the detector—is provided to facilitate the understanding of the sensors based on this technology.

25.2.2.3.1.2 Ionizer
Gas ionization is usually achieved using an electron-impact type process. Electrons are emitted from a hot filament (2200°C) heated by an electric current. Few metals have a low enough work function to supply currents in the milliampere range at such temperatures. Filaments are usually coated with materials with better thermo-emission properties. Typical coatings are thoria and yttria, and typical base metals are tungsten, iridium, and rhenium. Electrons are then accelerated to acquire an energy in the 30–70 eV range, which corresponds to the highest ionization cross-sections for several gases. The ionization occurs in an enclosed area called the ion source. There are many types of sources, but the major distinction is between open and closed sources. The higher the pressure in the source, the greater is the sensitivity to minor constituents. The sensitivity is the minimum detectable pressure relative to the maximum number of ions produced in the source. A closed ion source has small apertures to introduce the sample gas from the process environment, to allow the electrons to enter the source, and to extract the ions into the mass filter. With the use of an auxiliary pump, the filament, the mass filter, and the detector are kept at a much lower pressure than the source. In addition to greater sensitivity, the advantages associated with closed sources are: (1) prolonging the filament lifetime in the presence of corrosive gases and (2) enabling electron multipliers to be used as ion detectors. However, the high complexity and cost associated with the aperture’s precision alignment and the required high vacuum pump make closed-source-type instruments very expensive.

25.2.2.3.1.3 Mass Filter
The ions are extracted from the source and are focused into the entrance aperture of the mass filter with an energy, Vz. The mass filter is the cavity enclosed by the four parallel quadrupole rods arranged in a square configuration (see Figure 25.11). Typical diameter and length of the cylindrical rods are at

FIGURE 25.11 Array concept, comparing the rod radius r and inscribed radius r0 of the Micropole™ array with those of a typical quadrupole. (Adapted from input by Ferran, R.J. and Boumsellek, S., Ferran Scientific, Inc., San Diego, CA. http://www.ferran.com)

least 6 and 100 mm, respectively. The species moving through the filter are singly or multiply charged atoms or molecules. Filtering is the common term for selecting ions with a particular mass-to-charge ratio that possess a stable trajectory enabling them to reach the detector, while all other ions (with unstable trajectories) are filtered out. Filtering is accomplished by subjecting the ions to lateral forces generated by the combination of dc and RF voltages on the rods. The filtered mass and the mass resolution are given by

$$m = \frac{7\times 10^{6}\,V}{f^{2}r_{0}^{2}} \qquad (25.2)$$

$$\Delta m = \frac{4\times 10^{9}\,V_{z}}{f^{2}l^{2}} \qquad (25.3)$$

where V is the amplitude of the RF voltage, f is the RF frequency, r0 is the radius of the inscribed circle, l is the length of the mass filter, and Vz is the ion energy.

25.2.2.3.1.4 Detector
The filtered ions are accelerated at an exit aperture to reach the detector. Two detection techniques are generally used: Faraday cups and electron multipliers. Faraday cups are in the shape of cavities in which collected ions and any secondary electrons are trapped to generate a current. The current is then converted to a voltage using a sensitive electrometer circuit. The limit of detection of these devices is gated by the ability to make more sensitive electrometers. Fundamental limitations associated with Johnson noise in resistors and the noise in the semiconductor junctions determine the lowest detectable current. Alternatively, there are techniques for multiplying the current in vacuum using a continuous dynode electron multiplier. This is shaped as a curved glass tube with the inside coating made of a high-resistivity surface (PbO–Bi2O3 glass) with a high secondary electron emission coefficient. A high voltage (3 kV typically) is applied between the ends of the tube. When filtered ions strike the active surface, a shower of electrons is produced and accelerated towards the opposite wall of the surface. Each electron leads to the emission of more electrons, and the process is repeated along the length of the tube, causing an avalanche of electrons. A multiplication or gain up to 10⁷ can be achieved. However, the ability to emit electrons decreases with time. The time scale depends on the total number of electrons emitted, which in turn depends on the number of incident ions. At high pressures, large amounts of ions strike the surface, causing a high rate of depletion and hence a shorter lifetime. Another important phenomenon related to operation at high pressures is “positive feedback.” As the number of positive ions increases inside the tube, the gain can be drastically reduced since ions, accelerated in the opposite direction, interfere with the electron multiplication process. These phenomena limit the practical use of electron multipliers to the low-pressure (<10⁻⁵ Torr) range.


25.2.2.3.2 Sensor-Type RGAs
25.2.2.3.2.1 Component Choices
A recent key development in RGA technology is the evolution of the sensor-type RGA. These have miniaturized quadrupoles, which allow mass-filter operation at nearly three orders of magnitude higher pressure, hence not requiring differential pumping of the sensor for many applications. The basis of these new systems is the substantially shortened quadrupoles, which provide a short path for ions to travel to the detector, hence minimizing the effect of collisional losses at the higher pressures. Their small size allows them to be mounted at several strategic and/or convenient locations without sacrificing any footprint. This represents a major breakthrough with regards to sensor size, cost, and ease-of-use. A number of sensor manufacturers provide such sensors [34,35]. But any miniaturization attempt has to be carried out without sacrificing the mass spectrometry performance of conventional RGAs in terms of sensitivity, mass resolution, and mass range. The optimal use of these sensors requires an understanding of the interactions between the pressure range, the detection technique, and the required sensitivity. At low pressures (below 10⁻⁵ Torr), which conveniently coincide with the optimum operating pressure of the high-gain electron multipliers, high sensitivity to low partial pressure contaminants is readily achieved. This provides capability for sensitive determination of background moisture levels and low-level leaks in a vacuum system. But with the smaller sensors currently available, the shorter path length of the ions allows RGA mass filters to operate at pressures in the milliTorr range, which also enables the direct monitoring of many semiconductor manufacturing processes. However, these pressures are too high for efficient operation of the electron multiplier detector. So one solution is to return to the use of a pressure throttling device (orifice) and a high vacuum pump. Aside from the cost and size penalties of this approach, there are more serious considerations that have to do with the retained gases, or lack thereof, on the analyzer chamber walls. This leads to measurements which do not necessarily reflect the gas composition of the process chamber, but reflect more the state of the analyzer. This is very noticeable when the pressure is very low in the analyzer chamber. The lower the pressure in the analyzer chamber, the lower the required pressure of the high vacuum pump, since species not pumped will provide a background measurement which must be taken into account and may be variable with time and temperature. Another solution is to use a Faraday cup detector at these milliTorr pressures, but this sacrifices sensitivity due to the lower gain inherent in these detectors. The sensitivity is further decreased by the geometrical aspects of these miniature sensors, since the reduction of the rod size, and hence of the diameter of the inscribed circle between the four rods, results in a smaller acceptance area for the ionized species to reach the detector. A recent solution to this sensitivity issue at higher pressures has been the development of an array detector. In this configuration, an array of miniature quadrupoles compensates for the loss of sensitivity. The mass resolution and the mass range are maintained by increasing the RF frequency, as seen in Equation 25.2 and Equation 25.3.
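The scaling implied by these equations can be checked numerically. In the sketch below, the unit conventions (m in amu, V in volts, f in hertz, r0 and l in meters) are our assumption; the point is only the ratios, each comparison holding the other parameters fixed.

```python
# Numeric illustration of Equation 25.2 and Equation 25.3.

def filtered_mass(V, f, r0):
    return 7e6 * V / (f ** 2 * r0 ** 2)      # Equation 25.2

def mass_resolution(Vz, f, l):
    return 4e9 * Vz / (f ** 2 * l ** 2)      # Equation 25.3

# Shrinking r0 four-fold at four-fold higher RF frequency filters the same mass:
m_conventional = filtered_mass(V=500.0, f=1.0e6, r0=4.0e-3)
m_miniature = filtered_mass(V=500.0, f=4.0e6, r0=1.0e-3)
print(m_conventional, m_miniature)           # identical

# Likewise, a ten-fold frequency increase offsets ten-fold shorter rods in Dm:
dm_conventional = mass_resolution(Vz=5.0, f=1.0e6, l=100.0e-3)
dm_miniature = mass_resolution(Vz=5.0, f=10.0e6, l=10.0e-3)
print(dm_conventional, dm_miniature)         # identical
```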
While the array concept was introduced several decades ago, volume production has only recently been enabled by the new fabrication technologies used to handle microparts. This sensor [36] comprises a 4×4 array of identical cylindrical rods (1-mm diameter) arranged in a grid-like pattern, where the cavities between the rods form a 3×3 array of miniature quadrupole mass spectrometers. The length of the rods is only 10 mm, which enables the operation of the sensor at higher pressures (10 mTorr). It occupies less than 4 cm³ total volume. The manufacturing method uses glass-to-metal technology to seal the rods and electrical pins. This technology provides lower manufacturing cost and physically identical sensors that are simple to calibrate. The replacement cost of these sensors is low enough to consider them to be consumables.

25.2.2.3.2.2 Calibration and Lifetime
The RGA sensor calibration is performed against capacitance manometers and needs no field calibration other than for fault detection. The data are displayed in Torr or other acceptable pressure units and can be directly compared with the process pressure gauge. There are recommended practices published by the


American Vacuum Society (AVS) for calibrating the low-pressure devices, but the miniature high-pressure RGAs were developed subsequent to the practices being established. At the high operating temperatures, RGA filaments react strongly with the ambient gases. This interaction leads to different failure mechanisms depending on the pressure and the chemical nature of these gases. Tungsten and rhenium filaments are volatile in oxidizing atmospheres (such as oxygen), while thoria- and yttria-coated iridium filaments are volatile in reducing atmospheres (such as hydrogen). Corrosive gases, such as chlorine and fluorine, reduce filament lifetime drastically. In fact, lifetime is inversely proportional to the pressure in the range of 10⁻⁵ to 10⁻² Torr. This corrosion-limited lifetime favors systems that have readily and inexpensively replaceable detectors.

25.2.2.3.2.3 Sensor Interface to OEM Tools
Since both the process tools and the RGA sensors are vacuum devices, the pressure connections to the vacuum should be made with good vacuum practice in mind. Lower pressure operation increasingly requires the use of large diameter, short lines made with materials and processes which provide low retentivity. Wherever possible, the source of the sensor should be in good pneumatic communication with the gases to be measured. Care should be taken to avoid condensation in the RGA of species from the process. This may require that the sensor and its mounting be heated externally. Temperatures as high as 150°C are common in these cases. Higher temperatures may require separating portions of the system from the sensor, which may deteriorate the system performance and add to the cost. Quadrupole devices operate internally at very high RF voltages and therefore may radiate into the process chamber or to the ambient outside the chamber. Good grounding practices, such as those used with process plasma RF generators, are important. Compared to other instruments on the process tool, RGAs generate large amounts of 3D data (mass, pressure, time) in a very short time. Invariably, the data from these devices are transmitted on command by a serial link. RS232, RS485, and proprietary protocols are all in use. Since multiple devices are commonly bused together, data systems such as a “Sensor Bus” will normally be used. Efficient use of the data by the tool controller represents a major challenge on the way to a full integration of these sensors into OEM tools.

25.2.2.4 Acoustic Composition Measurement

Although generally not well recognized, the composition of a known binary mixture of two gases can be determined by measuring the speed of sound in the mixture [37]. Very high sensitivity (~1 ppm) is available when a high molecular weight precursor is diluted in a low molecular weight carrier gas. Acoustic gas composition measurement is inherently stable, and consequently accuracy is maintained over the long term. There are no components that wear, and the energy levels imparted to the gases are very low and do not induce any unintended reactions. These features make this technique ideal for many metal organic chemical vapor deposition (MOCVD) and CVD processes. This technique is not readily applicable if the individual gas species are unknown or if more than two species are present, because many combinations of different gas species may be blended to produce the same speed of sound. This lack of uniqueness does not pose a problem for blending gases, since a cascaded arrangement of sensors and controllers may be used to add one gas at a time. Each successive transducer will use the information for the blend from the previous instrument as the thermodynamic constants of one of its component gases. The most obvious application is to determine the gas composition flowing through the tubes that supply mixed gases to a reactor. However, there may be additional value to sampling gases from within the reactor chamber or its outlet. This dual transducer arrangement can provide information on the efficiency, stability, and “health” of the ongoing process.

25.2.2.4.1 Theory of Operation
The speed of sound, C, in a pure gas is related to the gas’s fundamental thermodynamic properties as follows [38]


$$C = \sqrt{\gamma R T / M} \qquad (25.4)$$

where γ is the specific heat ratio, Cp/Cv, R is the universal gas constant, T is the Kelvin temperature, and M is the molecular weight. The same equation form holds precisely for a mixture of gases when appropriate values for γ and M are calculated based on the relative abundance of the individual species. Likewise, it is only an algebraic exercise to solve the resulting equation for the relative concentration of a mixture when the speed of sound is known or measured [39]; a numeric sketch of this calculation appears below.

25.2.2.4.2 Sensor Configurations
Building a composition-measuring instrument using this fundamental thermal physics has been accomplished in two distinct ways. The first implementation measures the transit time for an ultrasonic (~15 kHz) pulse through the gas [40]. This time-of-flight implementation only requires a high resolution timer to measure the time between when a sound pulse is generated and its arrival at a receiver a distance, L, away. The second implementation measures the resonant frequency of a small chamber filled with the target gas mixture [39], as in Figure 25.12. All wetted components of this chamber are fabricated from high-purity, electro-polished stainless steel and Inconel. A precisely controlled frequency generator is used to stimulate the gas at one end of the chamber, and the intensity of the transmitted sound is measured at the opposite end. Algorithms are designed to maintain the applied frequency at the gas chamber’s resonance frequency. The chamber is precisely temperature controlled (to less than ±0.03°C) and is carefully shaped to resonate in the fundamental mode at a low audio frequency (0.3–4.6 kHz). Low frequency operation allows the use of metal diaphragms, which avoids process gas contact with the acoustic generators and receivers. Low frequency sound is also more efficiently propagated through the chamber than ultrasonic frequencies, resulting in useful operation at pressures as low as 70 Torr. Since the chamber’s length is fixed and its temperature is carefully controlled, the speed of sound, C, is simply related to the resonant frequency, F, as C = 2FL, where L is the effective distance between the sending and the receiving elements. It is possible to resolve a gas-filled chamber’s resonant frequency, and therefore the speed of sound of the gas, to less than 1 part in 50,000 using the resonant technique. Even though the frequency generation method employed can generate frequencies only 0.1 Hz apart, even greater resolution may be achieved by measuring the amplitude at several frequencies around resonance and curve fitting the instrument’s response to the theoretical shape of the resonance peak.
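A sketch of the composition algebra follows, assuming ideal-gas heat capacities (Cv = Cp − R) and textbook gas constants; the measured speed of sound and the 300 K temperature are illustrative.

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

# Solve Equation 25.4 for the composition of a known binary mixture:
# mole-fraction-weighted M and Cp give gamma = Cp/(Cp - R) for ideal gases,
# and the mole fraction is found by bisection so that the predicted speed
# of sound matches the measured one.

def speed_of_sound(x, gas1, gas2, T):
    """x = mole fraction of gas1; each gas is (M kg/mol, Cp J/(mol K))."""
    M = x * gas1[0] + (1.0 - x) * gas2[0]
    Cp = x * gas1[1] + (1.0 - x) * gas2[1]
    gamma = Cp / (Cp - R)                   # ideal-gas Cv = Cp - R
    return math.sqrt(gamma * R * T / M)     # Equation 25.4

def composition(c_measured, gas1, gas2, T=300.0, tol=1e-9):
    """Mole fraction of gas1 that reproduces the measured speed of sound."""
    lo, hi = 0.0, 1.0
    rising = speed_of_sound(1.0, gas1, gas2, T) > speed_of_sound(0.0, gas1, gas2, T)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if (speed_of_sound(mid, gas1, gas2, T) < c_measured) == rising:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

N2 = (28.0e-3, 29.1)     # molar mass (kg/mol), Cp (textbook values)
AR = (39.95e-3, 20.8)
print(f"x(N2) = {composition(335.0, N2, AR):.4f}")   # N2 diluted in Ar
```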

FIGURE 25.12 Cross-section of a transducer for a low frequency resonance type acoustic gas analyzer. (Adapted from input by Gogol, C.A., Leybold Inficon, East Syracuse, NY. http://www.inficon.com)


FIGURE 25.13 Typical installation of an acoustic composition measurement system. (Adapted from input by Gogol, C.A., Leybold Inficon, East Syracuse, NY. http://www.inficon.com)

There is little effect on the gas supply dynamics, as either implementation adds little additional volume (<25 cm³) to the reactor’s delivery system. A typical installation of an acoustic composition measuring and control system [41] is shown in Figure 25.13. It consists of two major components. First, a transducer is directly inserted into the reactor’s supply line. Second, an electronic control console is used for controlling the sensor’s temperature, determining the speed of sound, computing the composition, generating feedback control voltages for mass flow controllers, and analog and digital transmission of relevant process control data. The gas delivery tube is cut and reconnected so the gas mixture passes through the transducer. The transducer’s temperature may be set to match the transport needs of the materials or match heated delivery tubes. This installation demonstrates simultaneous control of both the bubbler flow and the dilution flow, thus maintaining constant composition at constant total flow.

25.2.2.4.3 Sensor Sensitivity, Stability, and Calibration
Figure 25.14 demonstrates how the speed of sound varies with relative composition for some common gas pairs. The steep slope at low concentrations of a high molecular weight gas in a light carrier allows concentrations as small as 1 ppm to be measured. The sensitivity of these techniques is strongly influenced by the difference in mass between the species and the particular range of compositions. The technique is most sensitive for low concentrations (less than 5 mol%) of a high molecular weight species in a light gas. Even when the molecular weight differences are small, e.g., O3 in O2 or N2 in Ar, it is generally easy to discern compositions differing by 0.1% or less for all concentrations. Acoustic analysis is stable and highly reproducible over long periods of time. Reproducibility can be further improved with daily calibration. Calibration is simple if the installation of the sensor permits pure carrier gas to flow through the sensor. Calibration is the renormalization of the instrument’s


(Plot: resonant frequency, roughly 0–5000 Hz, vs. mole fraction from 0 to 1 for O3 in O2, N2 in Ar, TMIn in H2, and H2 in WF6.)
FIGURE 25.14 Graph of frequency vs. mole fraction for various binary gas mixtures. (Adapted from input by C.A. Gogol, Leybold Inficon, East Syracuse, NY. http://www.inficon.com)

effective path length, L. This is easily accomplished at the point of installation by measuring a known pure gas, generally the carrier gas. The calibration process is brief, only requiring the sensor and its supply lines to be sufficiently flushed in order to dilute any remaining precursor so that it causes no measurable effect on the speed of sound. This condition is readily observable, because the speed of sound will asymptotically stabilize once the other gases are sufficiently flushed. Lower temperatures and higher pressures are more easily measured with acoustic techniques, as acoustic transmission is dependent on media density. At low gas density, it is difficult to transmit sufficient sound intensity to overcome the parasitic sounds transmitted through the sensor structure itself, or to distinguish the signal from the other sounds in the environment. These problems have presently limited the successful operation of this technology to situations where the supply gas already exceeds, or can be compressed to, pressures over 70 Torr.

25.2.2.4.4 Sensor Integration
Integration of this sensor with the reactor may be either analog or digitally based. These sensors’ electronics implementation is dominantly digital, but the composition signal is also available on a precision analog output, which allows the concentration to be interpreted by precise analog readout. The preferred interfacing is digital, as the transducer’s inherent precision and dynamic range exceed that normally carried as an analog process control signal. A digital interface allows a graphic display to be used to advantage to uncover process flaws, such as improper reactor gas line switching, and to readily view the concentration instabilities that indicate source depletion. It is also straightforward to save a detailed record of each process run when utilizing a digital interface. The future direction for the development of this instrumentation is to enhance operation from the present operating limit of 65°C and allow its use at temperatures in excess of 120°C. Another productive path will be to learn how to apply these simple and robust transducers in networked arrangements. It is felt that they will have measurement capabilities beyond the present 2-gas limit and may reliably provide information on reactor efficiency and perhaps infer wafer properties such as the rate of film growth.
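Returning to the calibration step described above: the renormalization of L is a single line of arithmetic. In this sketch both numbers are illustrative, and the sound speed of the pure carrier is an assumed textbook value.

```python
# Renormalize the effective path length L from the resonant frequency
# measured with pure carrier gas, using C = 2 F L from the text.

C_CARRIER = 1316.0     # speed of sound in pure H2 near room temperature
                       # (m/s, approximate textbook value -- an assumption)
F_MEASURED = 4385.0    # resonant frequency observed with pure carrier (Hz)

L_effective = C_CARRIER / (2.0 * F_MEASURED)
print(f"effective path length: {100.0 * L_effective:.2f} cm")   # ~15.01 cm
```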

25.2.3 RF Properties

A significant number of etch and deposition tools are RF powered. For these tools, the process state includes the characteristics of the RF conditions, such as the delivered power or the current, voltage, and phase information—at the fundamental frequency and the higher harmonics. The major benefit of RF sensors is that they are readily available, relatively inexpensive, and readily couple into the RF circuit. Their major drawback is that the accuracy of the measured parameters is a complex function of the placement and impedance matching of the sensor in the RF path. Specifically, any RF sensor of fixed impedance, placed between the matching network and the dynamically changing impedance of the plasma chamber will generate measurements of questionable accuracy. Calibrating the sensor for variable load impedances can mitigate this issue. But viewed pragmatically, this complexity is the reason why commercial plasma tools are not yet controlled by postmatch RF sensors. If accuracy is not the primary requirement, these sensors can generate useful data for endpoint determination or fault detection. 25.2.3.1

25.2.3.1 Sensor Technologies

An RF sensor [42] is a device that produces output signal(s) that bear a definite and defined relationship to the electrical energy present in, or passing through, the sensor. To allow the placement of sensing elements into controlled and reproducible electromagnetic field conditions, RF sensors are typically designed and built around transmission line structures. Minimal system disruption by the RF sensor is important to guarantee that the load seen by the source network is the same with or without the RF sensor; in other words, the measurement device should not significantly change the load it is trying to measure. A typical RF sensor introduces the following insertion disruptions:

1. Capacitance to ground,
2. Series inductance.

A small capacitance to ground is essential in preventing the sensor from increasing the reactance of the load as seen by the source network. The series inductance is generally designed in combination with the capacitance to ground to produce the characteristic operating impedance of the sensor (usually 50 Ω for RF applications). A small series resistance gives the sensor a low insertion loss (i.e., little power dissipated in the sensor instead of the load), and is crucial to maintaining a high Q (quality factor) of the load network and allowing high system efficiencies. The two major types of RF sensors are directional couplers and voltage/current (VI) sensors.

The directional coupler is the most common type of RF sensor. It is generally used to measure the forward and reverse power at the generator output by directionally sensing the RF components at the sensor. These values are generally accurate, as the 50 Ω sensor is connected to the stable 50 Ω input of the matching network. Commercially available directional couplers are typically rated in terms of the forward and reverse power the sensing element can withstand, and usually have a specified coupling coefficient (e.g., −30 dB) over a specified frequency bandwidth and a characteristic impedance (typically 50 Ω). Directional couplers are available from a number of vendors [43,44].

VI sensors are the second most common type of RF sensor. Their operation relies upon the electrical principles of capacitance and mutual inductance. A capacitor is formed when two conductors are placed in parallel to one another and separated by a certain distance. If the output of the capacitor is connected to a suitable shaping network, the voltage drop across the created capacitor can be controlled to produce an output signal with repeatable attenuation. The ideal capacitive structure, when built into a transmission line, is as wide and short as possible. This allows for maximum voltage coupling (by maximizing the capacitance) and minimum current coupling (by minimizing the mutual inductance) into the sensor device network. A mutual inductor is easily formed when two conductors are placed in parallel with each other; an ac current traveling in one conductor will produce an ac current in the other conductor traveling 180° out of phase with it.


The ideal inductive structure, when built into a transmission line, is as long and thin as possible. This allows for maximum current coupling (by maximizing the mutual inductance) and minimum voltage coupling (by minimizing the capacitance) into the sensor device network. It is important to be aware of possible current contamination in the voltage sensor and voltage contamination in the current sensor; if this occurs, the dynamic impedance range and maximum sensor accuracy will be sacrificed. One important thing to recognize when using voltage and current sensors is that each independent sensor must look at the same position on the coaxial line. Also consider that the sensor and the associated cables and electronics have been specifically calibrated as a unit; hence, no component can be arbitrarily changed without recalibration. If these rules are not followed, the standing wave ratio seen by each sensor will be different, introducing errors in the produced signals. Voltage and current sensors are typically rated in terms of how many amperes or volts the sensor can tolerate. This rating is similar to the transformer rating specified in terms of VA (volts×amperes). The VI sensors are also commercially available [45–48].
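The balance between the two insertion elements sets the sensor's characteristic impedance: for a lossless lumped model, Z0 = sqrt(L/C). A minimal sketch of this design check follows; the component values in the example are illustrative assumptions, not vendor data.

```python
import math

def characteristic_impedance(L_series, C_shunt):
    """Lossless lumped model of the sensor's insertion elements:
    Z0 = sqrt(L/C). For a 50-ohm design, the series inductance and
    the shunt capacitance to ground are chosen so this ratio stays
    at 50 ohms across the operating bandwidth."""
    return math.sqrt(L_series / C_shunt)

# Illustrative: 25 nH of series inductance balanced against 10 pF
# of capacitance to ground
print(characteristic_impedance(25e-9, 10e-12))  # -> 50.0 ohms
```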

25.2.3.2 Measurement Technologies

A measurement technology is necessary to process the outputs of the RF sensor. In the past, measurement techniques were typically based on analog signal processing. Since the advent of the digital signal processor (DSP), more and more measurement techniques have migrated to the digital domain. For any type of measurement technique to perform well, it must have the following minimum characteristics:

† reproducible results—stable vs. time and environmental conditions;
† wide frequency range;
† wide sensitivity range;
† impedance-independent accuracy;
† ±180° phase measurement capability;
† flexible calibration and calculation algorithms.

Having a measurement technique with reproducible results is a must for any sensor system. Day-to-day reproducibility allows for maximum reliability of the sensor, while unit-to-unit reproducibility allows data interpretation to be consistent for each unit purchased. Excellent unit-to-unit reproducibility is absolutely necessary if a sensor system is to be used in a manufacturing environment. Inherent in reproducibility is low drift: low drift over time in a sensor system's readings is necessary for day-to-day and measurement-to-measurement reproducibility. Also, because of the large temperature ranges produced by many of the new plasma processes, low temperature drift is necessary to maintain maximum accuracy. Many single-frequency sensor systems are available on the market today, but a sensor system with a measurement technology that performs over a wide frequency range allows the user to look at harmonics (for single-frequency processes) and mixing products (for multiple-frequency processes) without incurring additional cost. Hence, a sensor system with a wide frequency range has the lowest cost of ownership. If the sensor is used over a wide frequency range, a wide sensitivity range is also required: the magnitudes of the signals at the fundamental vs. the upper harmonics can differ significantly, demanding a large dynamic range in the sensor sensitivity. Some sensor systems have accuracy specifications that depend upon the impedance of the load; for maximum reproducible accuracy, a sensor system whose measurement technology offers impedance-independent accuracy must be employed. The most important values to be measured are the fundamental electrical parameters |V|, |I|, and ∠Z (the phase angle of the load, i.e., the phase angle between the voltage and the current). These three parameters are the building blocks of all other electrical parameters (such as power, impedance, reflection coefficient, etc.). Some sensor system vendors specify their accuracy in terms of nonelemental parameters; in this case, a little algebra is necessary to transform the specifications to the elemental parameters.


Passive loads (formed with capacitors, inductors, and resistors) can only produce impedance phase angles in the ±90° range, while active loads can produce any phase angle over the ±180° range. Due to the complicated physical processes that govern electron and ion transport in a plasma, the electrical impedance produced by the plasma is active. Hence, to allow proper measurement of a plasma load, the sensor system must be capable of determining phase angles in the ±180° range—all possible phase angles. Another consideration for a sensor system is its upgrade path. Typical analog techniques process the sensor signals with dedicated circuitry; because any technology improvement then requires a circuit redesign, analog processing does not allow a low-cost upgrade path. Hence, the lowest cost of ownership in a sensor design is achieved with a digital technique that allows signal processing upgrades through new versions of software.

25.2.3.3 Signal Processing

Once the sensor signal is obtained (see Section 25.2.3.2), it has to be processed to derive the parameters of interest. In some cases, signal processing requires the down-conversion of the RF signals to a lower frequency that is more easily digitized. Once in the digital domain, DSP algorithms provide a very efficient and flexible way to process these sensor signals. In contrast to available analog signal processing methods, digital signal processing is done entirely in software, not hardware; hence, the flexibility of calculation and calibration algorithms is very high. Any improvements to sensor or calculation technology can be implemented in software—drastically reducing the design cycle for improvements in the signal processing technology. Another important advantage of having a DSP-based embedded system in the design is completely self-contained operation: additional hardware is not necessary to support operation of the unit because all calibration information can be stored in DSP nonvolatile memory. In addition, the DSP can allow for user-selectable high-speed filtering of data. An RF sensor system should be able to extract the following data at the frequency of interest:

|V|     Root mean square (RMS) voltage    V
|I|     RMS current                       A
|Z|     Impedance magnitude of load       Ω
θ       Phase angle of load               degrees or radians
P_D     Delivered (load) power            W
P_F     Forward power                     W
P_R     Reverse power                     W
P_RE    Reactive (imaginary) power        VAR
Γ       Reflection coefficient            dimensionless

Due to the mathematical relationships among the above nine parameters, the RF sensor system need only measure three of the nine directly to properly calculate the remaining six. The accuracy with which each of these three fundamental parameters is measured determines the accuracy to which the other six parameters can be calculated, and hence the overall sensor system quality. A broadband RF sensor system will also allow the user to extract data at harmonics, to more thoroughly characterize the behavior of the RF plasma and RF system.
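For instance, from measured |V|, |I|, and the V–I phase angle on a 50 Ω system, the remaining quantities follow from elementary circuit algebra. A minimal sketch (function and variable names are ours, not from any vendor's software; it assumes |Γ| < 1, i.e., a net-absorbing load):

```python
import cmath
import math

def rf_parameters(V_rms, I_rms, theta_deg, Z0=50.0):
    """Derive the remaining six RF quantities from the three directly
    measured ones: |V|, |I|, and the V-I phase angle."""
    theta = math.radians(theta_deg)
    Z_mag = V_rms / I_rms                      # impedance magnitude
    Z = cmath.rect(Z_mag, theta)               # complex load impedance
    P_D = V_rms * I_rms * math.cos(theta)      # delivered (real) power, W
    P_RE = V_rms * I_rms * math.sin(theta)     # reactive power, VAR
    gamma = (Z - Z0) / (Z + Z0)                # complex reflection coeff.
    P_F = P_D / (1.0 - abs(gamma) ** 2)        # forward power, W
    P_R = P_F * abs(gamma) ** 2                # reverse power, W
    return dict(Z_mag=Z_mag, P_D=P_D, P_RE=P_RE,
                gamma=abs(gamma), P_F=P_F, P_R=P_R)

# Example: 200 V RMS, 5 A RMS, 35 degrees on a 50-ohm line
# print(rf_parameters(200.0, 5.0, 35.0))
```

Note that P_F − P_R = P_D by construction, which is a convenient internal consistency check on any measured data set.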

25.2.3.4 Sensor Installation and Use

The two typical installations of an RF sensor system are shown in Figure 25.15 and Figure 25.16. As shown, the RF sensor can be mounted either before or after the matching network. One thing to realize is that any such 50 Ω sensor will perturb the V, I, and phase values that existed in a non-50 Ω path without the sensor in place: impedance mismatch between the sensor and the point where it is inserted will generate RF reflections, thereby influencing the RF environment. This does not negate the sensors' utility, but one needs to remember that the measurement itself changes the RF environment. The second issue is that, whether the sensor is located pre- or post-match, it reads the instantaneous V, I, and phase values at that point along the transmission path.


[Figure 25.15 block diagram: RF generator → RF sensor → RF matching network → Plasma chamber]

FIGURE 25.15 RF sensor mounting, pre-match. (Adapted from input by Kevin S. Gerrish, ENI Technology, Inc., Rochester, NY.)

These values are indicative of the standing wave characteristics at that point in the transmission path. However, these values will be influenced by the plasma properties, which is the primary reason for the use of these sensors for endpoint or fault detection. The changing impedance of the plasma creates changes in the standing wave characteristics along the transmission path, most dramatically between the tuner and the plasma. Hence these sensors, located either pre- or post-match, will see changes in the plasma. One benefit of locating sensors pre-match is the relative ease of mounting the sensor with standard coaxial couplings, assuming that a useful signal can be obtained in this location. The analysis and interpretation of the available sensor data require one to comprehend that the sensor measures the instantaneous V, I, and phase values at one specific location in the RF transmission path. What happens at another location (namely the plasma) can be inferred by correlation (i.e., a change in the standard measured values) or by means of a full RF-circuit model. Such models are generally very difficult to generate; hence, the majority of RF sensor data analysis is performed by the simpler correlative method.
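The position dependence is ordinary transmission-line behavior: the impedance (and hence the V, I, and phase) that a sensor reads is transformed by the electrical length of cable between it and the load. A small illustrative sketch of the lossless-line transformation (the load value, cable length, and velocity factor below are assumptions for illustration only):

```python
import math

def z_at_distance(Z_load, d, f, Z0=50.0, vf=0.66):
    """Impedance seen looking toward the load from a point d meters
    back along a lossless line with characteristic impedance Z0 and
    velocity factor vf."""
    beta = 2.0 * math.pi * f / (3.0e8 * vf)   # phase constant, rad/m
    t = 1j * math.tan(beta * d)
    return Z0 * (Z_load + Z0 * t) / (Z0 + Z_load * t)

# A plasma-like load of (12 + 85j) ohms at 13.56 MHz looks completely
# different half a meter farther up the cable:
# print(z_at_distance(complex(12, 85), 0.5, 13.56e6))
```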

25.2.3.5 Applications of an RF Sensor System

In spite of the previously described limitations, RF sensors can be gainfully utilized in a number of applications. In some cases, they are relatively "easy" and inexpensive add-on sensors, and they have shown benefits in applications where accuracy is not a key parameter (as long as sensor reproducibility persists). The following sections describe examples of these applications.

25.2.3.5.1 Etching Endpoint Detection, Deposition Thickness

For this application, the RF sensor system can be mounted before or after the RF matching network. Even in a pre-match location, an RF sensor system with enough sensitivity can detect the small variation in plasma impedance that marks an etching endpoint or accompanies a deposition process. Using VI sensors in a pre-match configuration on a plasma etcher, endpoint signals have been seen in the higher harmonics [85]. For an oxide deposition, a capacitance change will also be seen by the RF sensor; the value of the observed capacitance can be correlated to the thickness of the deposited film. Similar information can be obtained from sensors placed post-match.

25.2.3.5.2 Harmonic Signature Analysis

Due to the nonlinear characteristics of the plasma, the pure sine wave from the RF generator will be turned into a harmonic-rich waveform by the plasma. The number of RF harmonics present, as well as the characteristics of each, will depend on the plasma chamber geometry, the type of energy source for the plasma chamber, and the type of process being run.
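As developed in the paragraphs that follow, these harmonics can be condensed into a process "fingerprint" and compared run to run. A minimal sketch of such a comparison, using a normalized Euclidean distance as a simple stand-in for the multivariate numerical techniques mentioned below (the names and the threshold value are illustrative assumptions):

```python
import numpy as np

def fingerprint(harmonic_amps):
    """Normalize a vector of harmonic magnitudes (fundamental through
    the highest captured harmonic) so the comparison is insensitive
    to overall power level."""
    v = np.asarray(harmonic_amps, dtype=float)
    return v / np.linalg.norm(v)

def drift_score(master, current):
    """Distance between the 'known good' master fingerprint and the
    current run; flag a process shift when the score exceeds an
    empirically chosen threshold."""
    return float(np.linalg.norm(fingerprint(master) - fingerprint(current)))

# master = np.mean([fingerprint(run) for run in good_runs], axis=0)
# if drift_score(master, this_run) > 0.05:  # threshold is process-specific
#     flag_chamber_for_inspection()
```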

[Figure 25.16 block diagram: RF generator → RF matching network → RF sensor → Plasma chamber]

FIGURE 25.16 RF sensor mounting, post-match. (Adapted from input by Kevin S. Gerrish, ENI Technology, Inc., Rochester, NY.)


Proper use of this technique would create a "harmonic fingerprint" of the process when it is running well. Future process fingerprints would then be compared to the master fingerprint at regular intervals, probably by a multivariate numerical technique. Any significant change between the two would indicate a process shift, allowing the chamber to be taken off-line before a complete lot of wafers is destroyed. If enough data are taken, a database could be created allowing proper interpretation of the bad harmonic fingerprint. Examples of anomalies expected to be found by this technique are a broken wafer, a misplaced wafer, and a dirty chamber (i.e., chamber cleaning required). It is wise not to limit the harmonic range: experiments have indicated [49] that higher harmonics (10th through 20th) can contain stronger process correlation than lower harmonics, as well as different information. In a particular RF investigation of chamber B on a cluster-tool polysilicon etcher, it was found that the seventh harmonic, at 94.92 MHz, gave a good etch endpoint trace. Looking at the higher harmonics, the 13th harmonic, at 176.28 MHz, showed a strong reaction to chamber D's RF power cycle, indicating that the 13th harmonic should not be used to characterize chamber B. At the 16th harmonic, 216.96 MHz, an endpoint signal was found with much better endpoint (EPT) characteristics than the seventh harmonic. No other harmonics, up to the 20th, produced any usable EPT information.

25.2.3.5.3 Measurement of Power Delivered to Plasma

The more difficult application is the accurate measurement and control of power delivered to the plasma. A typical RF system will regulate the output power of the RF generator to very high accuracy. Unfortunately, every RF delivery system has losses. In most cases, the losses change as a function of generator power due to plasma impedance changes. Also, the losses in an RF delivery system may increase as the system ages (e.g., wear of the mechanical components in the tuner). This means that the actual power delivered to the plasma is always less than the output power of the generator, and may change from wafer to wafer and lot to lot. A properly designed and calibrated RF sensor connected between the matching network and the plasma chamber allows measurement of the power delivered to the plasma. With a valid measurement of the true delivered RF power, the RF sensor system can be used to compensate for the losses described above and decrease wafer-to-wafer processing variations. This approach could also provide a better tuning algorithm that uses the impedance information from the RF sensor to correctly set the capacitor values of the RF matching network, although additional feedback will be required to prevent excessive reflected power from damaging the generator. All of this leads toward the implementation of a feedback loop from the post-match sensor to the RF generator, with the goal of delivering more consistent and accurate RF power to the plasma in commercial plasma tools.

25.2.4 Wall Deposition Sensor

Process state sensors have classically focused on determining the species concentrations, pressure, gas flow, and RF power characteristics in the processing environment during the processing of each wafer. In the recent quest for more control over the reproducibility of processing tools, attention has also been focused on the deposits generated on the internal surfaces of processing tools. Such deposits are formed in both deposition and etch tools, typically at a greater rate in deposition systems. These deposits have several detrimental effects:

† They provide a slowly changing chamber-wall state, which can influence the process chemistry.
† At some point, the deposit can start to flake off the walls and potentially contaminate the wafer.

The last point drives the mechanical and plasma cleaning of tools, in the hope of preventing this source of particle contamination. Without an appropriate sensor, the frequency of these cleaning cycles is based on empirical rules, with an incorrect guess risking either wafer contamination or expensive, unnecessary downtime for tool cleaning.


[Figure 25.17 schematic: a piezoelectric transducer, driven by a voltage pulse, attached to the chamber outer wall; the excited acoustic wave and its returning echo traverse the wall toward the inner-wall deposits facing the process environment]

FIGURE 25.17 Cross-sectional view of a chamber with the piezoelectric transducer attached to the outer wall.

25.2.4.1 Theory of Operation

A sensor has recently been developed [17] for real-time, noninvasive monitoring of chamber wall deposits in etch and deposition process tools. This sensor [50] can be used in an R&D or production environment for optimizing clean cycles and reducing particle contamination. It operates on the principle of acoustic reflectometry (Figure 25.17). A piezoelectric transducer is attached to the outer wall surface of a process chamber. A short electric pulse applied to the electrodes of the piezoelectric transducer excites an acoustic wave that propagates from the outer wall of the chamber toward the inner wall. If the inner wall is bare (with no deposits), the acoustic wave is reflected as an echo from the boundary between the inner wall and the process environment. This echo propagates to the outer wall, where the transducer converts it into a detectable electrical signal. The transit time of the acoustic wave's round trip is the fundamental measurement. When a film is deposited on the inner wall, any change in the thickness of the deposit causes a proportional change in the transit time. A change in temperature also changes the transit time, but in a very predictable manner. Hence the sensor acoustically monitors, in real time, changes in the average temperature along the cross-section of the chamber wall, and a real-time temperature compensation is applied to the measurement described above. The sensor system consists of a personal computer that houses the electronics, and the transducer module. The transducer module, a cylinder 2 in. in diameter and 2 in. in height, is attached to the outer wall (or window) at the chosen location on the process chamber. The primary use of this sensor is for determining and optimizing the chemistry, frequency, and duration of clean cycles for etch and deposition tools.
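The conversion from echo delay to deposit thickness reduces to a temperature-compensated difference against a clean-wall baseline. A minimal sketch under the linear-compensation assumption just described; every name here, and the form of the temperature correction, is a hypothetical illustration rather than the commercial algorithm:

```python
def deposit_thickness(t_rt, t_ref, T, T_ref, v_film, k_temp):
    """Infer inner-wall deposit thickness from the round-trip echo time.

    t_rt    measured round-trip transit time, s
    t_ref   transit time for the bare, clean wall at T_ref, s
    T       current mean wall temperature, K
    k_temp  empirically calibrated transit-time shift per kelvin, s/K
    v_film  speed of sound in the deposited film, m/s

    The deposit is traversed twice per round trip, hence the factor 0.5.
    """
    dt = t_rt - t_ref - k_temp * (T - T_ref)  # temperature-compensated shift
    return 0.5 * v_film * dt
```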

25.3 Wafer-State Sensors

As stated previously, process state sensors have predominantly been used for endpoint determination and fault detection, and in some recent cases for dynamic process control. But clearly, wafer-state sensors provide more direct information for all these tasks. Such wafer-state sensors are slowly being integrated into processing tools, paced by issues of customer pull, sensor reliability, and cost of integration.


The following is a description of the wafer-state sensors that have overcome, or are currently overcoming, these barriers and are being integrated into OEM tools.

25.3.1 Film Thickness and Uniformity

The thickness of optically transparent thin films (silicon, dielectrics, resists) on a reflective substrate is measured via analysis of the interaction of electromagnetic radiation with such a film, or film stack. These methods rely on single-wavelength (laser) or spectral (white light) sources, impinging on the sample at normal incidence (interferometry and reflectometry) or at some angle off-normal (reflectometry, ellipsometry). The wavelength range extends from the UV through the IR. The interaction of the light with the material can be detected through a polarization change (ellipsometry), a change in phase (interferometry), or a change in reflected amplitude (reflectometry). Optical models are used to extract the different physical parameters of the films (e.g., thickness) from the known optical indices of the individual layers. These techniques are well-established standard methods for off-line film thickness measurement, and hence will only be briefly described; the emphasis will be on the deployment of these techniques as sensors in OEM tools.

25.3.1.1 Optical Sensors

The spectral reflectivity of transparent thin films on reflective substrate materials is modulated by optical interference. The effect of the interference on the measured spectrum is a function of the film and substrate refractive indices. If the dispersion components of the refractive indices are known over the wavelength range, the thickness of the surface film can be found using a Fourier transform technique. For thin layers (<100 nm), the method of spectral fitting is very effective. Once the film thickness has been found, a theoretical reflectance spectrum can be determined and superimposed on the measured spectrum. This ensures a very high level of reliability for the film thickness measurement.

25.3.1.1.1 Reflectometry Technique

25.3.1.1.1.1 Theory of Operation
The thickness of films on a silicon wafer is measured by means of spectrophotometry, utilizing the theory of interference in thin films [51]. The basic procedure is to measure the spectral reflectance of the desired sample. The spectral data are then interpreted to determine the thickness of the top layer of the measured stack. The actual reflectance R_act(λ) is measured and fitted to R_theor(λ) to find the thickness (d) of the last layer; R_theor(λ) is calculated according to the specific optical model of the measured stack. A "goodness-of-fit" parameter measures the difference between the measured and theoretical results and is used as a criterion of correct interpretation. Figure 25.18 shows a graph of R_theor(λ) for a layer of 10,000 Å SiO2 on a Si substrate. The fitting algorithm used for data processing has to treat several issues, such as spectral calibration, noise filtering, recognition of characteristic points (minima, maxima, etc.), calculation of a first-order approximation for the thickness, and the final fine fitting.

25.3.1.1.1.2 Optical Overview
The optical path of one specific reflectometer [52] is shown in Figure 25.19. In this case, the specular reflection is monitored at normal incidence to the wafer surface, and the radiation source is in the visible range.


[Figure 25.18 plot: reflectance (0–0.5) vs. wavelength (400–800 nm)]

FIGURE 25.18 Reflectance of SiO2 on Si in water. (Adapted from input by Ran Kipper, Nova Measuring Instruments Ltd., Weizman Scientific Park, Rehovoth, Israel.)
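To make the fitting step concrete, the sketch below computes the normal-incidence reflectance of a single transparent film on a thick substrate and grid-searches for the thickness that minimizes the residual, which doubles as the goodness-of-fit. Using a fixed, real refractive index for the silicon substrate (roughly 3.9 in the visible) and ignoring dispersion are simplifications, so this is illustrative only, not the commercial algorithm:

```python
import numpy as np

def film_reflectance(d, wl, n_film, n_sub, n0=1.0):
    """Normal-incidence reflectance of one transparent film of
    thickness d (same length unit as wl) on a thick substrate."""
    r01 = (n0 - n_film) / (n0 + n_film)        # ambient/film interface
    r12 = (n_film - n_sub) / (n_film + n_sub)  # film/substrate interface
    beta = 2.0 * np.pi * n_film * d / wl       # one-way phase thickness
    phase = np.exp(-2j * beta)
    r = (r01 + r12 * phase) / (1.0 + r01 * r12 * phase)
    return np.abs(r) ** 2

def fit_thickness(wl, R_meas, n_film, n_sub, d_grid):
    """Grid-search the thickness whose theoretical spectrum best fits
    the measured one; the residual serves as the goodness-of-fit."""
    errs = [np.sum((film_reflectance(d, wl, n_film, n_sub) - R_meas) ** 2)
            for d in d_grid]
    best = int(np.argmin(errs))
    return d_grid[best], errs[best]

# wl = np.linspace(400, 800, 200)                       # nm
# d, gof = fit_thickness(wl, R_meas, n_film=1.46,       # SiO2
#                        n_sub=3.9,                     # Si (approx.)
#                        d_grid=np.arange(900, 1100, 0.5))
```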

[Figure 25.19 schematic, numbered components: 1. objective lens; 2. tube lens; 3. beam splitter; 4. pin-hole mirror; 5. relay lens; 6. relay lens; 7. spectrophotometer; 8. CCD camera; 9. condenser; 10. optical fiber; 11. lamp; wafer below the objective]

FIGURE 25.19 Optical path of light beam in NovaScan 210. (Adapted from input by Ran Kipper, Nova Measuring Instruments Ltd., Weizman Scientific Park, Rehovoth, Israel.)


Briefly, the light emitted from the lamp (11) travels through an optical fiber (10) to a condenser lens (9). The light beam then reaches a beam splitter (3) where it is split: half of the light passes through the beam splitter, while the other half is reflected downward and focused by a tube lens (2) and an objective lens (1) onto the target (wafer). After being reflected by the target, the light beam travels back through the objective lens (1) and tube lens (2), and through the beam splitter (3), until it reaches a "pin-hole" mirror (4). From there, the light is sent in two directions:

a. A portion of the light (the image of the wafer surface) is reflected from the "pin-hole" mirror (4) and focused by a relay lens (5) onto a CCD camera (8), where it is processed and sent to the monitor for viewing by the operator.
b. The light that passes through the "pin-hole" is focused by a relay lens (6), then reflected by a flat mirror toward the spectrophotometer (7), which measures the spectrum of the desired point. This information is then digitized and processed by the computer for the computation of film thickness.

The above spectrophotometer also includes an auto-focusing sensor for dynamic focusing on the wafer surface during the movement of the optical head over the wafer.

25.3.1.1.1.3 System Integration, In-Line Measurement
While this chapter is focused on in-situ metrology, there are some well-established in-line measurement techniques that are worth including, as they provide routine and useful information for APC (specifically, wafer-to-wafer control). Two embodiments of in-line reflectometry for film thickness measurement will be described in this section: one for use in chemical–mechanical polishing (CMP) and the other for epi film growth.

Reflectometry is used to monitor and control film thickness in CMP operations. When CMP is used to planarize and remove part of a blanket film, as in oxide CMP, there is no detectable endpoint since no new films are exposed. The only way to monitor and control such a process is with a sensor that measures the thickness of the film. This is a very difficult task for a slurry-covered wafer that is in motion, hence the measurement is performed in-line in the rinse station of the CMP tool. A commercially available reflectometry-based sensor [52] is currently being used for CMP tool monitoring. Its primary benefits are as follows:

† It provides thickness measurement data for every product wafer, as required for rapid feedback control of the CMP process.
† Measurements are performed in parallel with the processing of the next wafer, and hence do not affect system throughput unless a very large number of measurements are required.
† In-water measurement capability obviates the need to clean and dry wafers before measurement.
† The additional clean-room space and labor required for off-line measurements are eliminated.

Only one component, the measurement unit, has to be integrated into the polisher. The compact size of this unit, with a footprint only ~40% larger than the wafer, enables easy integration into the process equipment. Two such implementations in commercial CMP tools are represented in Figure 25.20 and Figure 25.21. Two different delivery-system principles are applied for the integration of the measurement system into OEM tools. In one case (Figure 25.20), the wafer handler transfers wafers down from the wafer loading station to the water tub of the measuring unit and back. In the other configuration (Figure 25.21), the measurement unit replaces the unload water track of the polisher: it receives the wafer, performs the measurement, and delivers the wafer to the unload cassette. In both cases, the wafer is wet during the measurement. A second commercially available implementation of reflectometry (in this case using an IR source and non-normal incidence) is the FTIR measurement of epi thickness.
The in-line measurement of epi thickness has been achieved by integrating a compact FTIR spectrometer [53] with an Applied Materials Epi Centura cluster tool, as shown in Figure 25.22. The cool-down chamber top plate is modified to install a CaF2 IR-transparent window, and the FTIR and transfer optics are bolted to the top plate. The IR beam from the FTIR is focused to a 5-mm spot on the wafer surface, and the specular reflection is collected and focused onto a thermoelectrically cooled MCT detector. Reflectance spectra can be collected in less than 1 s. Reference spectra are obtained using a bare silicon wafer surface mounted within the cool-down chamber.


[Figure 25.20 layout: polisher with control panel, monitor and keyboard, E-stop, left and right polish sections, wafer handler, NovaScan measuring unit, and service door]

FIGURE 25.20 NovaScan system integrated in Strasbaugh Model 6DS-SP Planarizer. (Adapted from input by Ran Kipper, Nova Measuring Instruments Ltd., Weizman Scientific Park, Rehovoth, Israel.)

Epi thickness measurements are made after processing, while the wafers are temporarily parked in the cluster tool's cool-down chamber, without interrupting or delaying the wafer flow. A simulated reflectance spectrum is computed from parametric models for the doping profile, the dielectric functions (DFs) of the epi film and the substrate, and a multilayer reflectance model. The models for the wavelength-dependent complex DFs include dispersion and absorption due to free carriers, phonons, impurities, and interband transitions. The models are tailored to the unique optical and electronic properties of each material. The reflectance model computes the infrared reflectance of films with multilayered and graded compositional profiles using a transfer matrix formalism [53,54]. The model parameters are iteratively adjusted to fit the measured spectrum.
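Short of the full transfer-matrix fit described above, a useful back-of-the-envelope check converts the spacing of adjacent interference fringes in the mid-IR reflectance spectrum into an epi thickness. The sketch below assumes a lightly doped film over a heavily doped (reflecting) substrate and a constant refractive index, so it is a sanity check rather than the method this section describes:

```python
import math

def epi_thickness_from_fringes(delta_nu, n_epi=3.42, theta_deg=0.0):
    """Epi thickness (cm) from the period of adjacent interference
    fringes in the reflectance spectrum.

    delta_nu   fringe period in wavenumbers, cm^-1
    n_epi      refractive index of silicon in the mid-IR (~3.42)
    theta_deg  angle of incidence, degrees
    """
    s = math.sin(math.radians(theta_deg))
    return 1.0 / (2.0 * delta_nu * math.sqrt(n_epi ** 2 - s ** 2))

# A 487 cm^-1 fringe period at near-normal incidence corresponds to
# roughly a 3-um epi layer:
# print(epi_thickness_from_fringes(487.0) * 1e4, "um")
```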

FIGURE 25.21 NovaScan in IPEC 372M and 472 Polisher. (Adapted from input by Ran Kipper, Nova Measuring Instruments Ltd., Weizman Scientific Park, Rehovoth, Israel.)


[Figure 25.22 schematic: FTIR source and detector mounted above a calcium fluoride window in the cool-down chamber top plate, with a reference path, viewing the wafer]

FIGURE 25.22 Configuration of On-Line Technologies, Inc. FTIR on Applied Materials’ Centura 5200. (From NovaScan, Nova Measuring Instruments Ltd., Weizman Scientific Park, Rehovoth, Israel.)

Gauge tests demonstrate the relative accuracy of this first-principles analysis of epi layer thickness to be in the range of 0.5–2 nm (5–20 Å). Comparison to destructive secondary ion mass spectroscopy (SIMS) and spreading resistance (SRP) measurements shows the absolute accuracy to be within the accuracy of these standard measurements.

25.3.1.1.2 Interferometric Technique

25.3.1.1.2.1 Theory of Operation
Interferometry is a well-established technique for the optical measurement of thin, optically transparent films. Some of the light impinging on such a thin film reflects from the top of the film and some from the bottom. The light reflected from the bottom travels farther, and the difference in path length results in a difference in phase. After reflection, the light following the two paths recombines and interferes, with the resulting light intensity a periodic function of the film thickness. The change in film thickness corresponding to one interferometric cycle is λ/(2n cos θ), where λ is the observation wavelength, n is the index of refraction of the film, and θ is the angle of refraction within the film.

25.3.1.1.2.2 Full Wafer Imaging Sensor [55]
The full wafer imaging (FWI) sensor is a novel sensor developed in the early 1990s [57] based on this interferometric technique. It uses an imaging detector to make spatially resolved measurements of the light reflected from the wafer surface during etching or deposition processes. This sensor takes advantage of the fact that the reflectivity of a thin film on the wafer surface is generally a function of the thickness of the film. By quantifying the changes in reflectivity as the film thickness changes, the FWI sensor determines spatially resolved etching or deposition rate, rate uniformity, spatially resolved endpoint, endpoint uniformity, and selectivity. These measurements are performed on every wafer, providing both real-time endpoint and run-by-run data for process monitoring and control. The operation of this particular sensor relies on a number of optical phenomena:

† Optical emission. Optical emission from the plasma is the preferred light source for FWI sensors, because it is simpler than using an external light source and it allows direct detection of optical emission endpoint. If plasma light is not available, an external light source can be added.


A narrow bandpass filter is used to select the measurement wavelength. Different wavelengths are best suited to different process conditions, the most important characteristics being the intensity of the plasma optical emission as a function of wavelength, the film thickness, and the film's index of refraction. In general, a shorter wavelength gives better rate resolution, but cannot be used in certain situations; e.g., a 0.3-µm thick layer of amorphous silicon is typically opaque in the blue, but transparent in the red.

† Interferometry for transparent thin films. In practice, during an etching or deposition process, the intensity of light reflected from the wafer surface varies periodically in time. The interferometric signal is nearly periodic in time in most processes because the process rate is nearly constant, even though the signal is strictly periodic in film thickness rather than in time.

† Interferometry for trench etching. In trench etching, the interference is between light reflected from the top of the substrate or mask and light reflected from the bottom of the trench. A coherent light source, e.g., a laser, must be used because the interference is between two spatially distinct positions. Etching rate is calculated using the same types of techniques discussed above for thin-film interferometry. Endpoint time is predicted by dividing the desired trench depth by the measured etching rate.

† Reflectometry for nontransparent films. Light impinging on a nontransparent film reflects only from the top of the film, so there is no interference. However, the reflectivity of the nontransparent film being etched is different from the reflectivity of the underlying material; thus, the intensity of reflected light changes at endpoint. This method is typically applied to endpoint detection in metal etching.

From a system viewpoint, the FWI sensor requires a high data acquisition rate and uses computationally intensive analyses. The typical configuration therefore consists of a high-end PC, advanced software, and one or more independent CCD-based sensor heads interfaced to the computer via the peripheral component interconnect (PCI) bus. Each sensor head records images of a wafer during processing, with each of the few hundred thousand pixels of the CCD acting as an independent detector. The full images provide visual information about the wafer and the process, while the signals from thousands of detectors provide quantitative determination of endpoint, etching or deposition rate, and uniformity. The simultaneous use of thousands of independent detectors greatly enhances accuracy and reliability through the use of statistical methods. The FWI sensor can be connected to a sensor bus by adding a card to the PC; connecting the sensor head directly to a sensor bus is not practical, due to the high data rate and large amount of computation. Figure 25.23 shows a schematic diagram of the FWI sensor head installation. The sensor head is mounted directly onto a semiconductor etching or deposition tool, on a window that provides a view of the wafer during processing. A top-down view is not necessary, but mounting the sensor nearly parallel to the wafer surface is undesirable, because it greatly reduces spatial resolution, one of the technique's principal benefits. For both interferometry and reflectometry, spatially resolved results are determined by applying the same calculation method to hundreds or thousands of locations distributed across the wafer surface.
These results are used to generate full-wafer maps and/or to calculate statistics for the entire wafer, such as average and uniformity. Several methods can be used to find the etching or deposition rate from the periodic interference signal. The simplest way is to count peaks, but this is accurate only if there are a large number of interferometric cycles, which is not common in most semiconductor processes. For example, a 0.3-µm thick layer of polysilicon contains only 3.8 interferometric cycles. The accuracy of simple peak counting is one-half of a cycle, which is only 13% in this example. The accuracy can be improved somewhat by interpolating between peaks, but remains fairly low; in addition, false peaks caused by noise in the signal often plague peak-counting methods. A more accurate way to determine rate is to multiply the change in film thickness per interferometric cycle by the frequency (number of cycles per second).


[Figure 25.23 schematic: sensor head mounted on a window of the etching or deposition tool, viewing the wafer through the plasma; installation is possible at a variety of angles]

FIGURE 25.23 Full wafer imaging (FWI) sensor head mounting. (Adapted from input by Conner, W.T., Leybold Inficon, Inc., East Syracuse, NY. http://www.inficon.com)

The simplest way to find the desired frequency is to use a fast Fourier transform (FFT) to convert from the time domain to the frequency domain; a local maximum in the signal vs. frequency then specifies the frequency to be used in the rate calculation. Accuracy can be further increased by starting with an FFT to provide an initial guess of the frequency and then fitting the signal vs. time to an empirical function that models the physical signal. This combined method is more accurate than the FFT alone if there are few cycles, if the interferometric signal is not a pure sine wave, or if the SNR is low—all of which occur commonly in semiconductor processing. In either method, a frequency window can be used to designate which maximum in the FFT is used to calculate rate. This is a useful way to measure selectivity in etching processes where two materials, e.g., the mask and the film of interest, are etching simultaneously. For transparent thin films, endpoint can be detected or predicted. The detection method relies on the fact that the periodic modulation of reflected light intensity ceases at endpoint; endpoint is found by detecting the deviation of the observed signal from an interferometric model. The prediction method uses the measured rate and the desired thickness change to predict the endpoint time. Prediction is the only available endpoint method for deposition processes; it is also necessary in those etching processes where the film is not completely removed. The FWI technique has been used on devices with feature sizes down to 0.1 µm, aspect ratios up to 50:1, percent open area as low as 5%, film thicknesses greater than 2 µm, and substrate sizes larger than 300 mm. Endpoint, etching or deposition rate, and uniformity can be monitored for a variety of transparent thin films, including polysilicon, amorphous silicon, silicon dioxide, nitride, and photoresist. For nontransparent materials, such as aluminum, tungsten silicide, chrome, and tantalum, rate cannot be measured directly, but spatially resolved etching endpoint, and thus endpoint uniformity, have been determined. Examples of the use of FWI are shown in the following figures. Figure 25.24 is an example of the signals from three different CCD pixels recorded during a polysilicon gate etching process. Each pixel imaged a small, distinct region; an image of the wafer is included to indicate the position of these pixels. Two of the pixels are on the wafer and display a periodic signal due to the change in thickness of the thin film. The pixel off the wafer shows the optical emission signal, which rises at endpoint.
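The rate calculation described above reduces to finding the dominant frequency of the intensity trace and scaling it by the thickness change per cycle, λ/(2n cos θ). A minimal sketch of the plain-FFT variant, assuming uniform sampling, a single dominant film, and illustrative optical constants:

```python
import numpy as np

def etch_rate(signal, dt, wavelength, n_film, theta_deg=0.0):
    """Etch (or deposition) rate from a periodic interference signal.

    The dominant frequency of the reflected-intensity trace is found
    with an FFT and multiplied by the thickness change per cycle,
    wavelength / (2 n cos(theta)), where theta is the angle of
    refraction within the film.
    """
    sig = np.asarray(signal, float) - np.mean(signal)   # remove DC level
    spectrum = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), dt)
    f_peak = freqs[1:][np.argmax(spectrum[1:])]         # skip the DC bin
    theta = np.radians(theta_deg)
    d_per_cycle = wavelength / (2.0 * n_film * np.cos(theta))
    return f_peak * d_per_cycle   # thickness units of `wavelength` per second

# Illustrative: a 254-nm emission line on polysilicon (n ~ 3.9),
# sampled once per second:
# rate_nm_per_s = etch_rate(trace, dt=1.0, wavelength=254.0, n_film=3.9)
```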


[Figure 25.24 plot: signal vs. time for three positions, with a wafer image locating the pixels; the off-wafer optical emission trace rises at endpoint]

FIGURE 25.24 Signal from three positions: two on the wafer and one off the wafer. (Adapted from input by Conner, W. T., Leybold Inficon, Inc., East Syracuse, NY. http://www.inficon.com)

Analysis of the periodic signal is used to determine rate and/or endpoint, while analysis of the optical emission signal is used to independently detect the average endpoint time for the entire wafer. Figure 25.25 is an example of a full-wafer etching rate uniformity surface plot. The plot was generated from rate calculations at 4000 locations on a rectangular grid covering the wafer. Two trends are evident. First, the etching rate at the center of the wafer is lower than at the edge. Second, variations within each die are visible as a regular array of peaks and valleys in the etching rate surface plot. The deepest of these valleys go all the way to zero and correspond to areas of pure photoresist mask, which did not etch appreciably in this high-selectivity process. Figure 25.26 is an example where an FWI sensor was used to automatically monitor every product wafer. Results for each wafer were determined and displayed while the next wafer was being loaded into the processing chamber. The figure shows the etching rate and uniformity for four consecutive product wafer lots.

[Figure 25.26 plot: etching rate (Å/min, 0–4500, left axis) and uniformity (% 1-σ, 0–10, right axis) vs. wafer number (0–100) for four lots; each lot begins with warm-up wafers (no film) and one blanket wafer, followed by product wafers]

FIGURE 25.25 Full wafer etching rate map. Average = 2765 Å/min, uniformity = 3.9% 1-σ. (Adapted from input by Conner, W. T., Leybold Inficon, Inc., East Syracuse, NY. http://www.inficon.com)


FIGURE 25.26 Rate and uniformity for four product wafer lots. (Adapted from input by Conner, W. T., Leybold Inficon, Inc., East Syracuse, NY. http://www.inficon.com)

The process was stable (no large fluctuations in rate or uniformity), but not very uniform (7%, 1-σ). Furthermore, pattern-dependent etching is clearly evident. At the beginning of each lot, several bare-silicon warm-up wafers and one blanket (unpatterned) wafer were run; then the patterned product wafers were run. The blanket wafers etched about 10% slower, and much more uniformly, than the product wafers. The difference between the blanket and product wafers demonstrates the need to use real product wafers to monitor a process. Sensor calibration has been achieved by comparison between FWI sensors and ex-situ film thickness metrology instruments. The agreement is generally good, even though the two systems do not measure exactly the same thing: the FWI measures dynamic changes in film thickness, while the ex-situ instruments measure static film thickness. It is typical to take the thickness-before minus thickness-after measured ex-situ and divide this by the total processing time to get the ex-situ rate and uniformity values that are compared with the rate and uniformity measured in-situ by the FWI. Integration with the processing tool is required to obtain the benefits provided by an FWI sensor. There are two main technical issues. First, a window that provides a view of the wafer during processing is required; between wet cleans, this window must remain transparent enough that the wafer stays visible. Second, communication between the tool's software and the FWI sensor's software is useful to identify the process and wafer/lot and to synchronize data acquisition. Both of these technical needs must be met whether the FWI runs on a separate computer or on a subsystem of the tool controller. The FWI sensor provides different benefits to different users. In R&D, it provides immediate feedback and detailed information that speeds up process or equipment development and process transfer from tool to tool. In integrated circuit (IC) production, every product wafer can be monitored by the FWI sensor, so that each wafer serves as a test wafer for the next. This means that fewer test, monitor, qualification, and pilot wafers are required—a significant savings in a high-volume fab. Also, fewer wafers are destroyed before faults or excursions are detected, and data are provided for SPC of the process.

25.3.1.2 Current Sensor for Film Removal in CMP

There are two distinct measurement and control problems in CMP. The first is the measurement of the thickness of a layer during the blanket thinning of that layer by CMP. Since there is no endpoint to this process, the only way to control it is via pre- and post-measurement of the specific layer thickness.


This has already been discussed in Section 25.3.1.1.1.3. The second is the determination of the endpoint when polishing a metal layer on an oxide substrate. This is an easier problem, and it has been solved by monitoring the current to the motor that rotates the wafer carrier. Most systems rotate the carrier at constant RPM, which requires the current supplied to the motor to increase or decrease depending on the amount of drag. Fortunately, this drag changes fairly reproducibly as the system polishes through the metal and reaches the oxide interface. A variety of other factors also influence the total motor current, hence there is considerable noise in this signal; with proper signal conditioning, however, the endpoint can be detected. In one such endpoint system [58], the motor current signal is amplified and normalized, and high-frequency components are removed by digital filtering. Proprietary software is then used to call endpoint from the resulting trace. Other film stacks, such as poly/oxide, poly/nitride, or oxide/nitride, may find this technique useful as well, but this will need to be tested and will likely require different signal conditioning and endpoint software algorithms. The motor current signal also appears to be useful for diagnosing other tool parameters. Correlation between deviations from "known good" data traces and specific tool problems may allow the user to diagnose tool failures and to signal preventive maintenance. This sensor typically comes integrated into the tool by the OEM supplier.
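The published descriptions of such systems are proprietary, but the general recipe (smooth the trace, then look for a statistically significant change in its slope at the metal/oxide transition) can be sketched as follows. The window sizes and threshold are illustrative assumptions, not the vendor's algorithm:

```python
import numpy as np

def detect_endpoint(current, window=25, k_sigma=4.0):
    """Call CMP endpoint from the carrier-motor current trace.

    The trace is smoothed with a moving average, and endpoint is
    declared when the sample-to-sample slope departs from its recent
    baseline by more than k_sigma standard deviations (the drag change
    at the metal/oxide interface). Returns a sample index or None.
    """
    c = np.convolve(np.asarray(current, float),
                    np.ones(window) / window, mode="valid")  # low-pass
    slope = np.diff(c)
    for i in range(2 * window, len(slope)):
        base = slope[i - 2 * window:i - window]   # trailing baseline
        mu, sigma = base.mean(), base.std() + 1e-12
        if abs(slope[i] - mu) > k_sigma * sigma:
            return i + window                     # compensate filter delay
    return None
```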

25.3.1.3 Photo-Acoustic Metrology

Section 25.3.1.1 described optical methods for the measurement of optically transparent films. There is also a need for a simple measurement of metal film thickness. This section [59] describes an impulsive stimulated thermal scattering (ISTS) method for noncontact measurement of metal film thickness in semiconductor manufacturing and process control. The method, based on an all-optical photoacoustic technique, determines the thickness and uniformity of exposed or buried metal films in multilayer stacks with repeatability at the angstrom level. It can also be used to monitor CMP processes and to profile thin metal films near the edge of a wafer. The method is being investigated for use in monitoring both the concentration and the depth of implanted ions, including low-energy, low-dose boron implants in silicon wafers. While this technology is currently implemented in an off-line tool, it has the potential to be developed into an in-situ sensor for measuring properties of both metal films and ion-implanted wafers.

25.3.1.3.1 The Photoacoustic Measurement Technique

The photoacoustic measurement method used in this tool [60] is illustrated schematically in the inset to Figure 25.27.

[Figure 25.27 plot: signal amplitude vs. time (−10 to 90 ns), with insets (1) crossed excitation pulses launching the surface ripple and (2) probe-beam diffraction from the ripple]

FIGURE 25.27 Signal waveform measured from a 1-µm aluminum film. (Adapted from input by Hanselman, J., Active Impulse Systems, Natick, MA.)


Two excitation laser pulses, each with a duration of about 500 ps, are overlapped at the sample to form an optical interference pattern containing alternating "light" (constructive interference) and "dark" (destructive interference) regions. Optical absorption of radiation in the light regions leads to sudden heating and thermal expansion (Box 1 in Figure 25.27). This launches acoustic waves whose wavelength and orientation match those of the interference pattern, resulting in a time-dependent surface "ripple" that oscillates at the acoustic wave frequency [61]. A probe laser beam irradiates the surface ripple and is diffracted to form a signal beam that is modulated by the oscillating ripple (Box 2 in Figure 25.27). (The displacement of the surface is grossly exaggerated for purposes of illustration.) The signal beam is then detected and digitized in real time, resulting in a signal waveform such as the one in Figure 25.27. With this method, data are measured in real time with very high SNRs: the data shown were collected from a 1-µm aluminum film in about 1 s. The acoustic wave that is excited and monitored in these measurements is a waveguide or "drumhead" mode, whose velocity is a sensitive function of the film thickness. The film thickness is calculated from the measured acoustic frequency, the spatial period of the interference pattern (i.e., the acoustic wavelength), and the mechanical properties (i.e., density and sound velocity) of the sample. The thickness determined in this manner correlates directly with traditional techniques, such as four-point probe measurement and scanning electron microscopy (SEM) thickness determination. Moreover, the acoustic wavelength excited in the film can be changed rapidly in an automated fashion, and data collected at several different acoustic wavelengths can be used to determine sample properties in addition to film thickness; in particular, thermal diffusivities and the viscoelastic properties of the sample can be measured. A modified form of the optical technique used to determine film thickness can be used to monitor the concentration of ions implanted in semiconducting materials. In this case, the waveform of the diffracted signal depends on the concentration and energy of the implanted ions; ion concentration and depth can be separately determined from parameters of the measured signal.

25.3.1.3.2 Hardware Configuration

The photoacoustic hardware is a small-scale optical system housed in a casting measuring approximately 50×50×10 cm. The optical system uses two solid-state lasers: a Nd:YAG microchip laser generates the 500-ps excitation pulses, and a diode probe laser generates the probe beam that measures the surface ripple. A compact optical system delivers these beams to the sample with a working distance of 80 mm. The spot size for the measurement is 25×100 µm. For each laser pulse, the optical signal is converted by a fast photodetector into an electrical waveform that is digitized by a high-speed A/D converter. The digitized signal is further processed by a computer to extract the acoustic frequency and other waveform parameters. A thickness algorithm calculates the film thickness from the measured acoustic frequency, the selected acoustic wavelength, and the mechanical properties of the sample.
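That last step amounts to inverting a dispersion relation: the measured frequency times the chosen acoustic wavelength gives a phase velocity, which a precomputed velocity-vs-thickness curve maps back to a thickness. A minimal sketch, in which the dispersion model itself is supplied by the caller and monotonicity over the grid is an assumption:

```python
import numpy as np

def thickness_from_frequency(f_meas, acoustic_wavelength,
                             thickness_grid, velocity_model):
    """Film thickness from the measured acoustic frequency.

    The waveguide-mode phase velocity is v = f * wavelength; thickness
    is read off a precomputed dispersion curve velocity_model(d),
    which encodes the film/substrate densities and sound velocities.
    """
    v_meas = f_meas * acoustic_wavelength
    v_curve = np.array([velocity_model(d) for d in thickness_grid])
    order = np.argsort(v_curve)            # np.interp needs ascending x
    return float(np.interp(v_meas, v_curve[order],
                           np.asarray(thickness_grid)[order]))
```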
Integrated metrology requires in-situ or in-line monitors that can attach directly to cluster tools and monitor the film properties of a wafer in, or emerging from, the process chamber. This photoacoustic measurement technology fulfills many of the requirements for such a sensor. As described above, it is compact and fast, has no moving parts, and has the long working distance necessary for optical measurement through a viewing port. While this technology currently exists as an off-line metrology tool, it is readily adaptable as an in-line sensor; with appropriate optical access to the processing chamber, it is possible to evolve this methodology into an in-situ sensor for measuring properties of both metal films and ion-implanted wafers.

25.3.1.3.3 Applications

The principal application of this technology is the measurement of metal film thickness in single and multilayer structures. Figure 25.28 shows 49-point contour maps of a 5000 Å tungsten film deposited directly on silicon; the map on the left was measured nondestructively with the InSite 300 in about 1 min, while the map on the right was measured destructively with a four-point probe in about 4 min. The contours of the maps are nearly identical, both showing thickness variations of about 500 Å across the surface of the film.


[Figure 25.28: two 49-point thickness contour maps (distance along X- and Y-axes, −50 to 50 mm; contour levels 4500–5100 Å), left: InSite 300, right: four-point probe]

FIGURE 25.28 Comparison of 49-point contour maps of a tungsten film measured nondestructively using the Insite 300 (left) and destructively using a four-point probe (right). (Adapted from input by Hanselman, J., Active Impulse Systems, Natick, MA.)

This tool can also measure the thickness of one or more layers in a multilayer structure, such as a 1000 Å TiW film buried beneath a 2000 Å aluminum film. In this case, the system is "tuned" to explicitly measure the relatively dense buried film (TiW has a density of about 13,000 kg/m3, compared with 2700 kg/m3 for aluminum). This tuning is done by first initiating a low-frequency acoustic wave that is sensitive to changes in the TiW thickness, but relatively insensitive to changes in the aluminum thickness. These data are processed to generate the TiW contour map. The system then initiates a relatively high-frequency acoustic wave that is sensitive to the combined thickness changes of the TiW/aluminum structure; a contour map of the outer aluminum film can be generated from this combined data. Full characterization of the uniformity of deposited metal films requires measurement out to the edge of the film. This is particularly important for monitoring sputtering and CVD tools, which are often configured to deposit a film blanketing the entire wafer except in an "edge-exclusion zone" along the outer perimeter. For films deposited by a process with such an edge-exclusion zone, the thickness drops from its nominal value to zero within a few hundred microns of the wafer edge. It is important to verify that edge specifications are met, as devices bordering the edge-exclusion zone can represent close to 10% of the total number of devices on a 200-mm wafer, or 7% on a 300-mm wafer. Besides the contact issue, probe techniques are limited in this regard by their probe spacing and by electrical issues near the edge of the wafer. The small spot size used in this methodology makes it possible to profile this narrow edge-exclusion zone. This technique has also been applied to the measurement of the thickness of an Al film during CMP. In one particular application, 49-point contour maps were generated from data measured prior to polishing and following 30-s intervals of polishing. Prior to polishing, the film had no distinct pattern. The CMP process imparted a "bull's-eye" contour to the film that is evident after about 60 s and becomes more pronounced as the polishing continues. The data also indicate that the average removal rate is not constant, varying from ca. 60 Å/s during the first 30 s to ca. 120 Å/s in the final 30-s interval. Measurements at the center and edge of a wafer can be performed in a few seconds, making this approach attractive for real-time monitoring of CMP removal rates. A variation of ISTS, called impulsive stimulated scattering (ISS), has been used to measure both the concentration and the energy of ions implanted in silicon wafers. ISS is an optical technique that initiates and detects both electronic and acoustic responses in the implanted semiconductor lattice.

DK4126—Chapter25—23/5/2007—19:44—ANBARASAN—240454—XML MODEL CRC12a – pp. 1–60

25-44

Handbook of Semiconductor Manufacturing Technology

implanted ions, and a separate parameter that varies with the depth of the implanted ions. Although results at this early phase are preliminary, ISS has effectively measured arsenic, phosphorous, and boron ions implanted at energies ranging from 3 keV to 3 MeV, and concentrations ranging from 1!1011 to 1!1016 cmK2. The method appears to be particularly effective for measuring shallow-junction boron implants made at low energies and/or low concentrations. The ISS measures samples rapidly (less than 2 s) and remotely (12 cm working distance), making in-situ measurements a real possibility.

25.3.2 Feature Profile

Measurement for the control of lithography has classically relied on off-line metrology techniques, such as SEM, and more recently on atomic force microscopy (AFM). SEMs are not applicable to in-situ measurements. AFM, because of its very small field of view and slow scan rates, is also unlikely to become even an in-line sensor for routine feature size measurements. Scatterometry is the only current technology capable of evolving into an in-line sensor for feature size measurements.

25.3.2.1 Scatterometer

25.3.2.1.1 Theory of Operation

Scatterometry, as applied in the semiconductor industry [62], is a nondestructive optical technique used to estimate wafer-state parameters, such as critical dimension, film thicknesses, and profile. The original work evolved from R&D at the University of New Mexico [63,64], and provided estimates of wafer-state information by analysis of light scattered, or diffracted, from a periodic sample such as resist lines in a grating. This light pattern, often referred to as a "signature," can be used to identify the shape and spatial features of the scattering structure. For periodic patterns, the scattered light consists of distinct diffraction orders at angular locations specified by the grating equation:

sin θi + sin θn = nλ/d        (25.5)

where θi is the angle of incidence, θn is the angular location of the nth diffraction order, λ is the wavelength of the incident light, and d is the spatial period (pitch) of the structure. The fraction of incident power diffracted into any order is very sensitive to the shape and dimensional parameters of the diffracting structure, and thus may be used to characterize that structure [65]. In addition to the period of the structure, which can be determined quite easily, the thickness of the photoresist, the width of the resist line, and the thicknesses of several underlying film layers can also be measured by analyzing the scatter pattern. In commercial scatterometers, the signature is generated by a variety of methods, such as varying the angle, wavelength, or polarization of the incident light [66,67].

Scatterometric analysis is best described in two steps. First, in what is referred to as the forward problem, the diffracted light "fingerprint" or "signature" from a periodic grating is measured using a scatterometer. As mentioned, this light can be either a single-wavelength laser beam that is varied over multiple angles, or a multiwavelength source at a fixed angle. Using the grating equation, the detector is able to track any diffraction order as the angle of incidence (or wavelength) is varied. Thus, the intensity of a particular diffraction order is measured as a function of incident angle (this is known as a scatter signature). Figure 25.29 illustrates this technique and shows the resulting trace of the zeroth-order intensity vs. the incident angle [68]. In the second step, known as the inverse problem, the scatter signature is used to determine the shape of the lines of the periodic structure that diffracts the incident light. To solve this problem, the grating shape is parameterized [69], and a parameter space is defined by allowing each grating parameter to vary over a certain range. A diffraction model, most commonly rigorous coupled-wave theory [70], is used to generate a library of scatter signatures for all combinations of parameters, and an analysis algorithm is used to compare experimental and theoretical data. The parameters of the theoretical signature that match most closely with the experimental signature are taken to be the parameters of the unknown sample.
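To make the forward problem concrete, Equation 25.5 can be solved directly for the angular locations of the propagating diffraction orders. The following sketch is illustrative only; the function name and the example wavelength, pitch, and incidence angle are assumed values, not taken from the text:

    import math

    def diffraction_angles(theta_i_deg, wavelength_nm, pitch_nm, max_order=5):
        """Solve the grating equation sin(theta_i) + sin(theta_n) = n*lambda/d
        for the angular locations of the propagating diffraction orders."""
        theta_i = math.radians(theta_i_deg)
        angles = {}
        for n in range(-max_order, max_order + 1):
            s = n * wavelength_nm / pitch_nm - math.sin(theta_i)
            if abs(s) <= 1.0:  # an order propagates only if |sin(theta_n)| <= 1
                angles[n] = math.degrees(math.asin(s))
        return angles

    # Example: 633-nm laser incident at 30 degrees on a 1-um-pitch grating
    print(diffraction_angles(30.0, 633.0, 1000.0))

Orders for which the computed sine exceeds unity are evanescent and do not appear in the far field, which is why the sketch filters them out.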

FIGURE 25.29 Diffracted orders (−3 through +1) for a grating scatterometer, with the zeroth-order intensity traced as a function of the angle of incidence θi of the laser beam. (From Bushman, S., and S. Farrer, Process, Equipment, and Materials Control in Integrated Circuit Manufacturing III, Proceedings of SPIE, Vol. 3213, 79–90, 1997.)

One algorithm that can be used to select the closest match between theoretical and measured traces is based on minimizing a cost function such as the mean squared error, which is given by

MSE = [(1/N) Σ_{i=0..N} (xi − x̂i)²] / [(1/N) Σ_{i=0..N} xi²]        (25.6)

where N is the number of angle measurements, xi is the measured reference trace, and x̂i is the candidate trace from the theoretical library. It should be noted that because the technique relies on a theoretical model, calibration is not necessary. As an alternative to the library search method to determine the signature that minimizes the cost function, "real-time" regression can be used to estimate each parameter, although this technique is limited by the computing power available for the analysis [71]. Figure 25.30 depicts an example of an experimental signature in comparison with theoretical data, and illustrates the sensitivity of the technique for linewidth measurements. In the figure, the two theoretical scatter signatures correspond to two linewidths that differ by 10 nm. The difference between the two signatures is quite noticeable. The experimental data for this sample (a 1-μm pitch photoresist grating with nominal 0.5-μm lines) matches most closely with the 564-nm linewidth. Thus, the signatures provide a useful means for characterizing the diffracting features.
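The library-search step is straightforward to express in code. The sketch below is illustrative; it assumes a hypothetical library dictionary mapping parameter tuples to precomputed theoretical signatures, and applies the normalized MSE of Equation 25.6 to find the closest match:

    import numpy as np

    def normalized_mse(measured, candidate):
        """Normalized mean squared error between a measured scatter signature
        and a candidate signature from the theoretical library (Eq. 25.6)."""
        measured = np.asarray(measured, dtype=float)
        candidate = np.asarray(candidate, dtype=float)
        return np.mean((measured - candidate) ** 2) / np.mean(measured ** 2)

    def best_library_match(measured, library):
        """Return the parameter set whose theoretical signature minimizes the MSE.
        `library` maps a parameter tuple, e.g. (linewidth_nm, sidewall_deg),
        to its precomputed signature (intensity vs. incident angle)."""
        return min(library, key=lambda params: normalized_mse(measured, library[params]))

A regression-based solver would instead adjust the parameters continuously to minimize the same cost function, trading library storage for on-line computation.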

FIGURE 25.30 Scatter signatures (normalized diffracted power vs. incident angle, 0–60°) for two theoretical linewidths (564 and 574 nm) differing by 10 nm, together with experimental data. (Adapted from input by Christopher Raymond, Accent Optical Technologies, Bend, OR, http://www.accentopto.com)

25.3.2.1.2 Applications

As a metrology technique, scatterometry provides a number of advantages over the more traditional techniques of CD-SEM and AFM for characterizing feature profiles. Since scatterometry is an optical technique, it has a clear advantage in measurement time, as there is no additional overhead for placing the sample in a vacuum system (as with CD-SEM) or for mechanically sampling the surface (as with AFM). Also, scatterometers have the ability to estimate sidewall angle information, maintain relative immunity to resist shrinkage, and provide the capability to measure notched or footed profiles. However, scatterometry does require knowledge of the optical properties of additional films in order to ensure accurate models of grating and line shape parameters [72,73].

Since the introduction of commercial scatterometry in 1995, the technique has been applied to a number of semiconductor industry applications [74]. Initial applications focused on line/space gratings of simple resist profiles approximated as trapezoids on silicon substrates [75], and further developments of this application have led to lithography process monitoring [76]. Figure 25.31 provides a cross-sectional representation of the more complex features that are common in resist, etch, spacer, or contact gratings. To accommodate these profile shapes, multiple trapezoids, each with its own width, depth, and sidewall angle and possibly with rounded corners, can be used. The analysis of a resist grating on the polysilicon gate film stack (Figure 25.31a) or the silicon trench isolation film stack (Figure 25.31b) is more complex than the analysis of a resist grating on silicon because of the additional underlying films that now have to be optically characterized. Post-etch, the polysilicon line and the shallow trench isolation (STI) trench are not generally modeled by single trapezoids, but require multiple trapezoids (possibly with rounded corners) to capture the profile of interest (Figure 25.31c and Figure 25.31d) [7,11,14,77]. With improvements in computing capability, more complex scatterometry applications can now be performed. Recent applications include the 2D offset spacer grating, where a conformal film surrounds the standard polysilicon gate [78], as shown in Figure 25.31e. Even greater complexity is found in contact and metal etched arrays, which are 3D in nature, as shown in the top-down view in Figure 25.31f [79–81]; these require more sophisticated models that account for the additional dimensionality. Additional complexity can be added in each of these applications, if more accuracy is required, by considering additional parameterization of the profile or by including additional film layers, as necessary. For some applications, scatterometry is one of the few methods available to estimate complex profile and film information for multilayer stacks, such as back-end trench layers [6] or memory structures [82]. As scatterometry has become more accepted in the semiconductor industry, further development has been invested in migrating scatterometry to an in-situ metrology. Current commercial examples include the addition of a scatterometer head on the litho track and into the robot handler for plasma etch [83].

FIGURE 25.31 Graphical depiction of common scatterometry feature profiles: (a) resist grating on a polysilicon gate film stack; (b) resist grating on a silicon trench isolation film stack; (c) etched polysilicon line; (d) shallow trench isolation trench; (e) offset spacer stack surrounding a polysilicon gate; (f) top-down view of a 3D contact/metal array feature.

25.3.2.1.3 Summary/Conclusions

In summary, scatterometry is an optical technique where raw signature data collected from the sample under test are compared with a signature estimated from a detailed physical model that includes all film and grating information. One method to solve the detailed physical model for the signature is to use a technique called rigorous coupled-wave theory [3]. As this is a computationally expensive algorithm, the common practice is to generate a discrete library of all feasible variations (pitch, CD, sidewall angle, film optical properties, etc.) for a given structure. This library of signatures is then compared with the physical signature, and the parameter set of the closest-fit match (in a least-squares sense) is returned as the estimate for the physical structure. Regression-based solutions are also available to improve the resolution of this technique. Numerous applications have been developed using scatterometry in the semiconductor process flow, in particular in the areas of lithography and etch process control. Recent advances in computing speed have improved the feasibility of using the technique for more complex 3D structures.

25.4 Measurement Techniques for Potential Sensors

A number of metrology tools exist based on sensing methods that are currently implemented on large, complex hardware. These methods are currently used for ex-situ measurements. However, adaptations of these techniques can be implemented as in-situ tools in OEM processing tools. This evolution will be paced by our ability to generate less expensive and more compact versions of these tools, and by the need to implement such methods as in-situ sensors for fault detection or MBPC. Since some of these methods are likely to find their way into OEM processing tools within the next 3–5 years, a brief description of these methodologies is warranted.

25.4.1 Ellipsometry

Ellipsometry, in its single-wavelength, dual-wavelength, or spectral embodiments, is a well-established technique for the measurement of film thickness. The fundamentals of ellipsometry are described in the chapter on in-line metrology, so the following emphasizes the in-situ aspects of this sensor.

25.4.1.1 Theory of Operation

Ellipsometry is the science of the measurement of the state of polarization of light [19]. The polarization of light, in the usual conventions, corresponds to the spatial orientation of the E-field part of the electromagnetic wave. Since light is a transverse wave, the polarization is 2D; there are two independent orientations of the polarization in a vector-space sense. The linear bases are usually referred to as the P and S components. For light that reflects from a surface, or is transmitted through an object, the P polarization lies in the plane of reflection (or transmission), and the S polarization is perpendicular to the plane of reflection. In addition to specifying the amount of P-type and S-type components, the phase difference between them is also important. The phase lag is a measure of the difference in the time origin of the two (P and S) electric field vibrations. For the case where there is a nonzero phase lag, the E-field vector traces out a curve in space. The projection of this curve onto a plane that is normal to the direction of propagation of the light is generally an ellipse; thus the origin of the name "ellipsometry." A special case of the ellipse is a circle. There are two unique circular polarizations, referred to as left-handed and right-handed depending on whether the P-polarization component leads or lags the S-polarization component. At times it is convenient to use the circular basis set in place of the linear basis set. The full state of polarization of light requires the specification of a coordinate system and four numbers, which include the amount of unpolarized light. Unpolarized light implies that the polarization is randomly and rapidly changing in time. This naturally occurs for a light source that consists of a large number of independent, random emitters.

The reflectivity of light from a surface depends on the state of incident polarization. For example, for many materials there is a specific angle at which the P-polarization has no reflection, whereas the S-polarization does. The angle at which this occurs is the Brewster angle of the material; it is a function of the index of refraction of the material. Using fully polarized light, the index of refraction of a bulk material may be readily measured by finding the Brewster angle. Figure 25.32 illustrates the difference in reflection for P and S polarizations as a function of incidence angle for 0.5-μm (green) light. The Brewster angle, at about 76°, is the point where the P-polarization reflectivity goes to zero.

FIGURE 25.32 Reflectivity vs. angle of incidence for P and S polarizations reflecting from silicon at 0.5-μm wavelength. (Adapted from input by Whelan, M., Verity Instruments, Carrollton, TX. http://www.verityinst.com)
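The behavior in Figure 25.32 follows directly from the Fresnel equations. The sketch below reproduces the P and S reflectivity curves for a bare substrate, assuming (for illustration only) a purely real refractive index of about 4.1 for silicon near 500 nm; the actual index is wavelength dependent and slightly complex:

    import numpy as np

    def fresnel_reflectivity(n, theta_i_deg):
        """Intensity reflectivities R_p and R_s for light incident from air
        (n = 1) onto a bulk material of refractive index n (may be complex)."""
        theta_i = np.radians(theta_i_deg)
        cos_i = np.cos(theta_i)
        # Snell's law: sin(theta_t) = sin(theta_i) / n
        cos_t = np.sqrt(1.0 - (np.sin(theta_i) / n) ** 2 + 0j)
        r_s = (cos_i - n * cos_t) / (cos_i + n * cos_t)
        r_p = (n * cos_i - cos_t) / (n * cos_i + cos_t)
        return np.abs(r_p) ** 2, np.abs(r_s) ** 2

    n_si = 4.1                               # assumed real index of Si near 500 nm
    angles = np.linspace(0.0, 89.0, 90)
    Rp, Rs = fresnel_reflectivity(n_si, angles)
    print("Brewster angle ~", np.degrees(np.arctan(n_si)), "deg")

For a real index, R_p falls to zero exactly at arctan(n) ≈ 76°, matching the Brewster angle quoted in the text.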

FIGURE 25.33 Reflectivity of the P and S polarizations from a SiO2 layer on Si, as a function of wavelength (0.25–1.09 μm). (Adapted from input by Whelan, M., Verity Instruments, Carrollton, TX. http://www.verityinst.com)

At normal incidence and grazing incidence, there is generally much less information about the surface that can be derived from measurements of the change in polarization. By taking ratios, all the information about the change in the state of polarization may be specified by two numbers. These are known as the ellipsometric parameters ψ and Δ. Psi (ψ) is usually given as the angle whose tangent is the ratio of the magnitudes of the P and S components of the reflected light. Delta (Δ) is the relative phase between the P and S components. By measuring ψ and Δ as a function of incident angle, or as a function of wavelength at a fixed incident angle, much information may be determined about the reflecting surface, including the effect of thin films (thickness and composition). Figure 25.33 shows the reflectivity of the P and S polarizations from a sample consisting of a silicon dioxide layer (220-nm thick) on a silicon substrate, as a function of wavelength. This example illustrates the type of data from which the thin-film thickness and material composition are inferred by regressing on the ellipsometric equations that are defined by this system.
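As an illustration of how ψ and Δ arise for a film stack such as the one in Figure 25.33, the sketch below evaluates the standard single-film (Airy) reflection coefficients for an ambient/film/substrate system and forms the ratio ρ = r_p/r_s = tan(ψ)e^{iΔ}. The indices used (1.46 for SiO2, 4.1 for Si) are nominal assumed values, not data from the text:

    import numpy as np

    def film_psi_delta(n_film, d_nm, n_sub, wavelength_nm, theta_i_deg):
        """Ellipsometric psi and delta (degrees) for an ambient/film/substrate
        stack, from the single-film (Airy) reflection coefficients."""
        th0 = np.radians(theta_i_deg)
        c0 = np.cos(th0)  # ambient index taken as 1
        c1 = np.sqrt(1 - (np.sin(th0) / n_film) ** 2 + 0j)
        c2 = np.sqrt(1 - (np.sin(th0) / n_sub) ** 2 + 0j)

        def r(na, ca, nb, cb, pol):
            # Fresnel amplitude coefficient for the interface a -> b
            if pol == "s":
                return (na * ca - nb * cb) / (na * ca + nb * cb)
            return (nb * ca - na * cb) / (nb * ca + na * cb)

        beta = 2 * np.pi * n_film * d_nm * c1 / wavelength_nm  # film phase thickness
        coeffs = {}
        for pol in ("p", "s"):
            r01 = r(1.0, c0, n_film, c1, pol)
            r12 = r(n_film, c1, n_sub, c2, pol)
            coeffs[pol] = (r01 + r12 * np.exp(-2j * beta)) / \
                          (1 + r01 * r12 * np.exp(-2j * beta))
        rho = coeffs["p"] / coeffs["s"]        # rho = tan(psi) * exp(i * delta)
        return np.degrees(np.arctan(np.abs(rho))), np.degrees(np.angle(rho))

    # 220-nm SiO2 (n ~ 1.46) on Si (n ~ 4.1) at 70-degree incidence, 500-nm light
    print(film_psi_delta(1.46, 220.0, 4.1, 500.0, 70.0))

A spectral ellipsometer in effect inverts this calculation: it regresses the film thickness and indices until the modeled ψ and Δ match the measured values across the spectrum.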

25.4.1.2 System Integration

When the wavelength of the incident light varies (using a white light source) for a fixed incident angle, the term spectral ellipsometry is used. For a fixed wavelength (laser source) with variable incident angle, the term variable angle ellipsometry is used. Instruments that vary both angle and wavelength are variable angle spectral ellipsometers. Figure 25.34 is a schematic representation of a spectral ellipsometer. The white light source is a broadband emitter such as a xenon arc discharge. The fixed polarizer passes a specific linear polarization component. The polarization modulator is a device that changes the polarization in a known manner, such as a photo-elastic polarization modulator. In some instruments, the function of these two devices is replaced with a polarization element that is mechanically rotated. The analyzer selects out a specific state of polarization of the reflected light. Since the state of the incident polarization is well defined, the effects of reflection from the sample can be determined. The spectrograph resolves the reflected white light into a number of spectral components. The use of spectral ellipsometers for in-situ monitoring and control has been limited by the cost of these units. An additional constraint has been the complexity of integrating these optics (two opposing windows are required) into standard OEM processing tools. The cost issue is slowly improving through the development of lower-cost ellipsometers, and when warranted, the optical complexity can be overcome [84]. As processing complexity and the inherent cost of misprocessing 200–450-mm wafers continue to increase, spectral ellipsometers will likely find their way into OEM tools for in-situ monitoring and control of thin-film growth and composition in real time.

FIGURE 25.34 Schematic representation of a spectral ellipsometer: light source, fixed polarizer, polarization modulator, sample, analyzer polarizer, and spectrograph. (Adapted from input by Whelan, M., Verity Instruments, Carrollton, TX. http://www.verityinst.com)

25.4.2 Epi Resistivity and Thickness

In the process of depositing an epi layer of silicon, the resistivity and thickness of the epi layer are the two parameters of greatest interest [85]. Traditional methods of monitoring epi layer resistivity measure either the average resistivity of the layer, as is the case with a four-point probe, or the resistivity as a function of depth into the epi layer, as is the case with a Hg probe or CV Schottky diode. These traditional methods are all destructive, as probe marks, contamination due to Hg, and metal dots all contaminate the wafer. A new technique has recently been developed [86] that performs a nondestructive measurement of the epi layer resistivity and profile. The technique used is conceptually quite similar to Hg probe or CV Schottky measurements. While the technique was introduced as a stand-alone metrology tool, an in-line version incorporated into the cooling station of an epi reactor is an obvious extension of the technology.

25.4.2.1 Theory of Operation

Both the CV Schottky diode and Hg probe techniques place an electrode on the surface of the semiconductor and then determine the depletion width by measuring the capacitance across it. They vary the depletion width by varying the voltage on the electrode and measure the capacitance of the depletion width at each electrode voltage. Similarly, this technique positions an electrode near the semiconductor surface, although in this case it does not touch the wafer. It then measures the depletion width for each of multiple voltages on the electrode. The technique used to position the electrode near the semiconductor surface, without touching it, is similar to the air-bearing effect used in computer hard disk drives, and is shown in Figure 25.35.

FIGURE 25.35 Air-bearing mechanism: pressurized air passes through a ceramic air filter and a bellows onto the wafer; the escaping air provides a restoring force F = kx. (Adapted from input by Charlie Kohn, SemiTest, Inc., Billerica, MA.)

A disk whose bottom surface is made of porous, inert material is pushed toward the wafer surface by air pressure above the porous surface. As air escapes through the porous surface, a cushion of air forms on the wafer, and the air cushion acts like a spring and prevents the porous surface from touching the wafer. The porosity and air pressure are designed such that the disk floats approximately 2 μm above the wafer surface. A stainless steel bellows acts to constrain the pressurized air and to raise the porous disk when the air pressure is reduced. Note that if the air pressure fails, the disk moves up rather than falling down and damaging the wafer. Similarly, an electrical failure would not damage the wafer surface. The mechanism is simple, as no electrical or computer feedback of any kind is required. It is analogous to suspending an object between two springs of different spring constants. The porous disk has a hole in the center, and a sensor element is mounted in the hole to prevent the pressurized air from escaping. The sensor consists of an electrode that is 1 mm in diameter. The electrode is made of a material that is electrically conductive and optically transparent. The electrode protrudes from the bottom of the porous disk, such that during the measurement it is located about one-half micron above the wafer surface. A block diagram of the measurement system is shown in Figure 25.36.

As with Hg probe and CV Schottky measurements, the depletion width is determined by measuring the capacitance of the depletion layer. The system actually measures the capacitance from the wafer chuck to the electrode, which is the series combination of three capacitances: the capacitance from the wafer chuck to the wafer, in series with the capacitance of the depletion layer, and in series with the capacitance of the air gap. The capacitance of the wafer chuck to the wafer can be ignored, as the area of the wafer is so much larger than the area of the electrode. Even with a 6-in. wafer, the ratio of the areas is more than 22,000, and although the effective separations of the capacitor plates may be unequal, it is reasonable to consider the wafer chuck-to-wafer capacitance as a short circuit. The capacitance of the air gap cannot be treated so easily, but because there is some electrode voltage at which the semiconductor surface is in accumulation and the capacitance of the depletion width is infinite, the capacitance of the air gap can be measured. Assuming that the actual air gap does not vary with changing electrode voltage, the capacitance of the air gap is the measured capacitance at its maximum value.

FIGURE 25.36 Block diagram of the complete system: the electrode and guard ride on the air bearing above the silicon wafer on its vacuum chuck, driven and sensed at 1 MHz, with programmable power supplies, A/D and D/A converters, and a processor. (Adapted from input by Charlie Kohn, SemiTest, Inc., Billerica, MA.)

Subtracting the capacitance of the air gap from the measured capacitance provides the capacitance of the depletion width. If the air bearing does not have infinite stiffness and the air gap changes as a result of the varying electrostatic attractive force created during the measurement, then it is possible to model the behavior and calculate the air-gap capacitance at any electrode voltage. At every step in electrode voltage, the capacitance is measured and the charge on the electrode is calculated as the integral of C dV. The relevant equations necessary to compute the profile of Nsc as a function of depth, W, are as follows:

W = εsε0A(1/Ctotal − 1/Cair)

dQ = Cmeas dV

Nsc(W) = dQ/(qA dW)        (25.7)

where A is the area of the electrode, ε refers to the dielectric constant (εs for silicon, ε0 for free space), and q is the elementary charge. Unlike in traditional Hg probe or CV Schottky measurements, the electrode voltage in this system varies rapidly. A full sweep from accumulation to deep depletion is done in about 10 ms, and data from multiple sweeps are averaged in order to reduce the effect of noise. The fast sweep period also serves to reduce inaccuracies due to interface states and carrier generation. The system displays either plots of resistivity vs. depth or Nsc vs. depth. Resistivity is obtained by converting as per the American Society for Testing and Materials (ASTM) standard. A typical profile produced by the system is shown in Figure 25.37. Repeatability and reproducibility are quite reasonable compared with other techniques. Resistivity of a single wafer measured at 8-h intervals over a 3-day period showed a measurement error of 0.75% (1σ).
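A minimal sketch of the profile computation in Equation 25.7 is shown below. It assumes the input is a measured capacitance sweep (in farads) versus electrode voltage, takes the air-gap capacitance as the maximum measured capacitance (the accumulation condition described above), and returns W in meters and Nsc in m⁻³ (divide by 10⁶ for cm⁻³); the function name and constants are illustrative:

    import numpy as np

    Q_E = 1.602e-19      # elementary charge (C)
    EPS0 = 8.854e-12     # permittivity of free space (F/m)
    EPS_SI = 11.7        # relative dielectric constant of silicon

    def doping_profile(v, c_total, area_m2):
        """Apply Eq. 25.7 to a measured C-V sweep: subtract the air-gap
        capacitance, convert capacitance to depletion width W, and compute
        Nsc(W) = dQ / (q * A * dW) with dQ = C_meas dV."""
        v = np.asarray(v, dtype=float)
        c_total = np.asarray(c_total, dtype=float)
        c_air = c_total.max()                 # gap capacitance = C in accumulation
        w = EPS_SI * EPS0 * area_m2 * (1.0 / c_total - 1.0 / c_air)
        dq = c_total * np.gradient(v)         # dQ = C_meas dV, per voltage step
        n_sc = dq / (Q_E * area_m2 * np.gradient(w))
        return w, n_sc

In practice the averaged data from many 10-ms sweeps would be fed to such a routine, which is what allows the plotted resistivity or Nsc vs. depth profiles to be produced in near real time.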

25.4.2.2 Calibration and Performance Range

The system is calibrated by presenting it with one or more wafers of known resistivity. The system then creates a piecewise-linear calibration curve so that there is complete agreement at the calibration points; between calibration points the system interpolates to obtain good calibration. The more calibration points there are, the better the performance across the entire range of calibrated values. A doping concentration profile can be generated within the depletion depth. Figure 25.38 shows the maximum epi layer depth (for p-type silicon) which the system can deplete.

FIGURE 25.37 Epimet measurement profile: doping concentration Nsc (cm⁻³, logarithmic scale) vs. depth (μm). (Adapted from input by Charlie Kohn, SemiTest, Inc., Billerica, MA.)

FIGURE 25.38 Epimet operating range, p-type epi: depletion depth (μm) vs. resistivity (ohm-cm), with curves for a 3-V Schottky, a 10-V Schottky, and the Epimet maximum depth. (Adapted from input by Charlie Kohn, SemiTest, Inc., Billerica, MA.)

As with a mercury probe or CV Schottky diode, the maximum depletion is a function of resistivity for a given applied voltage. For reference, Figure 25.38 also shows the maximum depth that can be measured by a mercury probe using 3 and 10 V bias voltages. Development is underway to increase the maximum layer depth that can be measured, i.e., to move the line "up" on the graph.

25.5 Software for In-Situ Metrology

In-situ sensors in OEM tools are an absolute prerequisite for providing the benefits of the APC paradigm to SC manufacturing operations. But the sensors themselves are just one part of the system required to generate the necessary information for APC. Extensive software is required to turn the sensor data into useful information for APC decisions: software for data collection, for the data analysis that performs FDC, and for the controllers that perform MBPC.

25.5.1 Data Collection Software

The two major sources of data used in APC applications are signals from the processing tool and from add-on sensors connected to the tool. The former is generally collected through the Semiconductor Equipment Communications Standard (SECS) interface available on the tool. The SECS protocol enables the user to configure bi-directional communications between tools and data collection systems. This standard is a means for independent manufacturers to produce equipment and/or hosts which can be connected without requiring specific knowledge of each other. There are two components to SECS. The SEMI Equipment Communications Standard E4 (SECS-I) defines the physical communication interface for the exchange of messages between semiconductor processing equipment (manufacturing, metrology, assembly, and packaging) and a host, which is a computer or network of computers. This standard describes the physical connector, signal levels, data rate, and logical protocols required to exchange messages between the host and the equipment over a serial point-to-point data path. This standard does not define the data contained within a message. The second component is the software standard, the SEMI Equipment Communications Standard E5 (SECS-II), a message content standard that determines the meaning of the messages. While SECS-I has solved the hardware interface issues, SECS-II implementation contains enough tool-specific latitude to make interfacing to individual tools a time-consuming task. The generic model for communications and control of manufacturing equipment (GEM) standard, built upon the SEMI E5 (SECS-II) standard, specifies the behavior of a generic equipment model which contains a minimal set of basic functions that any type of semiconductor manufacturing equipment should support. The SECS-II standard provides the definition of messages and related data items exchanged between host and equipment. The GEM standard defines which SECS-II messages should be used, in what situations, and what the resulting activity should be. Brookside software [87] is a commonly used package for obtaining machine data from the SECS port on plasma etchers.

Add-on sensors are most often sophisticated enough (e.g., OES, RGA) that they are run by a dedicated PC. This PC also collects the raw sensor data and transfers it to the analysis software package.

25.5.2 FDC Analysis Software

Once the machine and sensor data are properly formatted and available on a routine basis, FDC is achieved by the univariate or multivariate analysis of individual or multiple signals, respectively. For plasma etching, univariate analysis has long been performed on the endpoint signal [88,89], primarily because of its simplicity and effectiveness in detecting processing faults. The basis of these analyses is to examine the shape of a single signal, and to use some algorithm to detect a significant variation between the signal from the currently analyzed wafer and an accepted "reference trace" from a good wafer. In the mid-1990s, multivariate FDC became feasible, enabled by a number of software vendors [90–96] with capabilities for analyzing machine and sensor data. The capabilities of these vendors to provide multivariate FDC were evaluated in a SEMATECH FDC benchmarking study [97]. These analyses determine the correlation between the various time-series signals coming from the tool and associated sensors. Models are generated from a number of "good wafers" that represent the normal variability of the individual signals. The major issues are the choice of the signals, the choice of the "good wafers," and the methods used for the analysis of this large volume of time-series data. An important feature of any such multivariate FDC technique is that the method be robust to the long-term steady drift in the sensor signals (drift that is not associated with processing faults), while staying sensitive to small but significant signal variations at any particular time [98]. The availability of pertinent information from the factory computer-integrated manufacturing (CIM) system (e.g., lot number, wafer number, log point) is a critical component for both univariate and multivariate FDC, as these form the basis for sorting the machine and sensor signals into the necessary groups for analysis. This analysis can be performed on a local computer adjacent to the tool or on the factory network. A major driver for determining where to run the analysis is the network latency period; real-time analyses are generally performed on a local computer, while wafer-to-wafer or lot-to-lot analyses can be performed on the CIM network.
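A univariate reference-trace comparison of the kind described above can be sketched in a few lines. This is an illustrative example, not any particular vendor's algorithm; it builds a per-sample "normal" band from good-wafer traces and flags excursions beyond a z-score limit:

    import numpy as np

    def fdc_check(trace, reference_traces, limit=3.0):
        """Univariate FDC sketch: compare a new time-series signal against
        reference traces from known-good wafers.  The per-sample mean and
        standard deviation of the good traces define the 'normal' band; the
        wafer is flagged if any point exceeds `limit` standard deviations."""
        refs = np.asarray(reference_traces, dtype=float)  # shape: (wafers, samples)
        mu = refs.mean(axis=0)
        sigma = refs.std(axis=0) + 1e-12                  # avoid divide-by-zero
        z = (np.asarray(trace, dtype=float) - mu) / sigma
        faulty = bool(np.abs(z).max() > limit)
        return faulty, int(np.abs(z).argmax())            # flag and worst sample

A multivariate method replaces the per-signal z-score with a statistic (such as Hotelling's T²) computed over the correlated set of signals, which is what makes it robust to drift that moves all signals together.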

25.5.3 Model-Based Process Control Software

MBPC is based on models of the wafer-state properties as a function of the input parameters. With such models, and routinely available feed-forward or feedback data from the factory CIM system, MBPC can be performed to keep the output parameters of interest under tight control. The numerical basis for this can be quite straightforward, and is now enabled by commercial software that performs these calculations [99,100]. Full automation of such methods, where the MBPC software obtains the necessary information from factory automation and downloads the new recipe to the tool, is a complex and generally lengthy integration task. Such integration is generally reserved for situations where the benefits of MBPC have been proven on a local, off-line basis.

25.6 Use of In-Situ Metrology in SC Manufacturing

Semiconductor manufacturing has historically relied on SPC for maintaining processes within prescribed specification limits. This passive activity, where action is taken only after a process variation limit has been exceeded, is no longer adequate as device geometries shrink into the 0.25-μm range and beyond. Active control of the process is required to keep the wafer properties within the ever-decreasing specification limits. In order to achieve this tighter control, a new paradigm called APC is emerging. All APC activities are predicated on the timely observability of the process. This is the major driving force for the implementation of in-situ sensors in OEM processing tools. These sensors determine the process, wafer, or machine states during the sequential processing of wafers and hence provide the necessary information for APC. The two major components of APC are FDC and MBPC. The operational benefits of APC, which are the main drivers for this operating paradigm, are:

1. Fault detection, classification: detect and analyze faults for enhanced yield and faster tool repair.
2. Fault interdiction: eliminate the continuing misprocessing of wafers.
3. Fault prognosis: convert from scheduled to preventive maintenance to reduce future misprocessing, resulting in higher equipment availability.
4. MBPC: increase yield by keeping the process on target; reduce pilot wafer usage.

25.6.1 Fault Detection and Classification

FDC determines anomalous processing conditions by univariate or multivariate (single or multiple signal) analysis methods. These conditions can be intermittent in nature, or can result from a gradual drift in the processing tool. The first task [101] of FDC is fault detection, i.e., determining that during the processing of a particular wafer the sensor signatures indicate a "non-normal" state. This requires a model to be generated that represents the normal process states (sensor signals) of the tool. This is a bigger problem than might first be envisioned, since the "normal" state is not a stationary point, but a slowly changing trajectory through time. So a major requirement of these FDC methods is that they be robust against the normal drifts in the system for extended time periods, yet stay sensitive to small excursions at any point in time. The primary reason for this is that the use of such methodology in a manufacturing operation requires minimal model "upkeep" and adequate sensitivity to errors with a minimal number of false alarms. Data from successive wafers are then analyzed against this model, and the appropriate statistics indicate whether the wafer was processed in the "normal" way or whether some aspect of the process was anomalous.

Once a fault is detected, the next task is fault classification. Depending on the FDC algorithm, the model can be used in a "reverse" sense to establish what signal generated the fault. Even if not very specific (e.g., the fault was due to a pressure error), such information is very valuable in narrowing the focus of R&M by isolating the variables causing the fault. Note that to this point, the only indication generated by the FDC system is that the conditions during the processing of a given wafer were anomalous relative to the "normal" state. This does not necessarily indicate that the wafer has been misprocessed (although there is good reason to suspect that). Even without this final link between the process anomalies and the resulting wafer conditions, FDC can be quite valuable, particularly to equipment engineers, who are tasked with keeping tools operating in a consistent manner. For them, knowing that a tool is behaving in an anomalous fashion is key, and their task is much simplified if the FDC method points them to the source of the problem. However, the FDC method is only truly complete when the correlation is established between the various types of faults and wafer parametrics (e.g., yield, defects). In many cases this is a difficult task, if for no other reason than the time interval between a specific process (especially towards the beginning of the process flow) and the parametric testing that occurs towards the end of the flow. These data are also generally in different databases, making correlation difficult. The SC manufacturers are currently instituting "data warehousing," which will facilitate the correlation between machine state and wafer properties. Once this task is completed, the FDC system becomes a very effective early-warning system for faults, and can have substantial financial impact in routine manufacturing.

25.6.2 Fault Interdiction

The simplest, most practical, and most beneficial use of in-situ metrology and fault detection analysis in SC manufacturing is for interdiction to the tool controller in the case of anomalous processing conditions. All processing tools monitor and control their setpoint values and shut themselves off if these are not attained. However, they can be totally insensitive to problems caused by uncontrolled variables (e.g., wafers placed into an etcher with no resist, or leaks in the gas lines). But problems from such sources can, in many cases, be detected from the analysis of a single sensor signal. The most notable example is the plasma emission signal in any plasma process. There are very few problems that do not have a noticeable effect on some portion of this signal. Hence, any method that analyzes the form of this signal can readily detect anomalies in the process. This analysis can be done in real time (during the processing of the wafer) or after the process has been completed. A statistically significant deviation from the expected signal can be used to shut off the reactor either in real time or before the next wafer is loaded into the process chamber. This ensures that no more wafers will be exposed to the anomalous processing conditions. In other tools, the detection of subtle faults might require the analysis of multiple signals with a multivariate analysis algorithm. With the approach of 12-in. wafers, preventing a full boat of wafers from misprocessing can easily represent a $250K saving.

25.6.3 Fault Prognosis

In addition to handling intermittent problems by fault interdiction, there is a need to handle problems due to slow changes in the manufacturing tool. Significant operational benefits can be had if certain faults can be predicted before they occur, and are prevented by scheduled maintenance. This has several benefits: (1) the maintenance can be done when the tool is idle (increasing tool availability) and (2) the gradually developed problem can be fixed before it contributes to wafer misprocessing (increasing yield). Typical of this kind of problem is the wear of electrodes in plasma systems, the degradation of pump capacity, etc. If the appropriate metric that is a good indicator of this problem is obtained from the tool controller or an add-on sensor, the value of this metric can be tracked. When it approaches limits that have previously been identified to represent an unacceptable situation, the tool can be scheduled for R&M. A good example is the tracking of a throttle valve signal in any vacuum processor. As the pump degrades, the throttle valve will gradually open more and more to maintain a given pressure at a given flow (i.e., for a given recipe). As this signal gets close to the “100% open” level, it is clear that the pump has to be serviced.
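The throttle-valve example lends itself to a simple trend-based prognosis. The sketch below uses hypothetical names and invented data; it fits a linear drift to the valve position logged at a fixed recipe step on successive runs and extrapolates when the limit will be crossed:

    import numpy as np

    def predict_service_run(run_numbers, valve_percent_open, limit=90.0):
        """Fault prognosis sketch: fit a linear trend to the throttle valve
        position recorded at a fixed recipe step on successive runs, and
        extrapolate the run number at which it will cross the service limit."""
        slope, intercept = np.polyfit(run_numbers, valve_percent_open, 1)
        if slope <= 0:
            return None                      # no upward drift: nothing to schedule
        return (limit - intercept) / slope   # run number at which limit is reached

    # e.g., valve opening drifting upward as the pump degrades
    runs = np.arange(10)
    valve = 60 + 0.8 * runs + np.random.normal(0, 0.5, 10)
    print(predict_service_run(runs, valve))

The predicted crossing point lets the R&M activity be scheduled during planned idle time, which is precisely the availability and yield benefit described above.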

25.6.4 Model-Based Process Control

The goal of MBPC is to keep the output of any process close to a target value by manipulating the control "knobs" of the process. This is achieved by obtaining data from wafer-state sensors, where possible. If these are unavailable, "virtual sensors" can be generated from process-state measurements using models that relate the process-state characteristics to the wafer state of interest. This, of course, is a significant complication, to be attacked only when necessary. Once the real, or imputed, wafer state is routinely obtained, the sequential data are analyzed by SPC methods as suggested in Figure 25.1. When a statistically significant deviation is reached, process models are tuned and a new set of process conditions is calculated. Process models are typically simple linear models (thickness = rate × time + c) or more complex polynomial models that represent the relationship between the input/output parameters. The tuner is a methodology for updating these models. In this way, the variance is transferred from the very costly process output to the less-expensive process input. This topic is addressed much more thoroughly in Chapter 23.
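As a concrete illustration of the linear-model case, the sketch below implements one run-to-run iteration with an exponentially weighted moving average (EWMA) tuner for the rate term. The function name, the EWMA weight, and the example numbers are assumptions for illustration, not a specific commercial controller:

    def r2r_deposition_control(target, time_s, measured_thickness,
                               rate_est, offset=0.0, lam=0.3):
        """One iteration of an EWMA run-to-run controller for the linear model
        thickness = rate * time + c: update the rate estimate from the latest
        measurement, then solve the model for the next run's deposition time."""
        observed_rate = (measured_thickness - offset) / time_s
        rate_est = lam * observed_rate + (1.0 - lam) * rate_est  # model tuner
        next_time = (target - offset) / rate_est                 # invert the model
        return next_time, rate_est

    # e.g., target 5000 A; last run: 100 s produced 5150 A, prior rate 50 A/s
    print(r2r_deposition_control(5000.0, 100.0, 5150.0, 50.0))

The EWMA weight controls how much of each run's deviation is attributed to a real rate change rather than noise, which is how the controller stays stable while still tracking drift.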


References

1. National Technology Roadmap for Semiconductors, 183, 1997.
2. Adapted from input by C. Schietinger, Luxtron Corporation, Santa Clara, CA.
3. Roozeboom, F. Advances in Rapid Thermal and Integrated Processing, NATO Series. Dordrecht, The Netherlands: Kluwer Academic Publishers, 1996.
4. Schietinger, C., and B. Adams. "A Review of Wafer Temperature Measurement Using Optical Fibers and Ripple Pyrometry." RTP '97, New Orleans, LA, September 3–5, 1997.
5. Schietinger, C., and E. Jensen. "Wafer Temperature Measurements: Status Utilizing Optical Fibers." Mater. Res. Soc. Symp. Proc. 429 (1996): 289–90.
6. DeWitt, D. P., and G. D. Nutter. Theory and Practice of Radiation Thermometry. New York: Wiley, 1988.
7. Schietinger, C., B. Adams, and C. Yarling. "Ripple Technique: A Novel Non-Contact Wafer Emissivity and Temperature Method for RTP." Mater. Res. Soc. Symp. Proc. 224 (1991): 23–31.
8. Adapted from input by Jim Booth, Thermionics Northwest, Port Townsend, WA, http://www.thermionics.com
9. Varshni, Y. P. Physica 34 (1967): 149.
10. Hellman, E. S., and J. S. Harris Jr. J. Cryst. Growth 81 (1986): 38.
11. Weilmeier, M. K., K. M. Colbow, T. Tiedje, T. van Burren, and L. Xu. Can. J. Phys. 69 (1991): 422.
12. Johnson, S. R., C. Lavoie, T. Tiedje, and J. A. MacKenzie. J. Vac. Sci. Technol. B11 (1993): 1007.
13. Johnson, S. R., C. Lavoie, E. Nodwell, M. K. Nissen, T. Tiedje, and J. A. MacKenzie. J. Vac. Sci. Technol. B12 (1994): 1225.
14. DRS 1000e Temperature Monitor, Thermionics Northwest, Inc., Port Townsend, WA, http://www.thermionics.com
15. Wang, Z., S. L. Kwan, T. P. Pearsall, J. L. Booth, B. T. Beard, and S. R. Johnson. J. Vac. Sci. Technol. B15 (1997): 116.
16. Booth, J. L., B. T. Beard, J. E. Stevens, M. G. Blain, and T. L. Meisenheimer. J. Vac. Sci. Technol. A14 (1996): 2356.
17. Adapted from input by Sensys Instruments, Sunnyvale, CA.
18. WTS 100 Wafer Temperature Sensor, Sensys Instruments, Sunnyvale, CA.
19. Adapted from input by Mike Whelan, Verity Instruments, Carrollton, TX, http://www.verityinst.com
20. Model SD1024 Smart Detector Spectrograph, Verity Instruments, Carrollton, TX, http://www.verityinst.com
21. Model 1015 DS, Luxtron Corporation, Santa Clara, CA, http://www.luxtron.com
22. Courtesy of Bob Fry, Cetac Technologies, Inc., Omaha, NE.
23. EP-2000 Spectrometer, Cetac Technologies, Inc., Omaha, NE.
24. PC1000 Miniature Fiber Optic Spectrometer, Ocean Optics, Inc., Dunedin, FL, http://www.oceanoptics.com/homepage.asp
25. Princeton Instruments, Inc., Trenton, NJ, http://www.prinst.com
26. Spectral Instruments, Tucson, Arizona, http://www.specinst.com
27. Mozumder, P. K., and G. G. Barna. "Statistical Feedback Control of a Plasma Etch Process." IEEE Trans. Semicond. Manuf. 7, no. 1 (1994): 1.
28. Neural Endpointer, Verity Instruments, Carrollton, TX, http://www.verityinst.com
29. Wise, B. M., and N. Gallagher. "PLS_Toolbox." Eigenvector Technologies, Manson, WA.
30. Adapted from input by Peter Solomon, On-Line Technologies, East Hartford, CT.
31. On-Line 2100 FT-IR spectrometer, On-Line Technologies, East Hartford, CT.
32. Solomon, P. R., P. A. Rosenthal, C. M. Nelson, M. L. Spartz, J. Mott, R. Mundt, and A. Perry. "A Fault Detection System Employing FT-IR Exhaust Gas Monitoring." Presented at SEMATECH Advanced Process and Equipment Control Program, Proceedings (Supplement), 290–294, Lake Tahoe, Nevada, September 20–24, 1997.
33. Adapted from input by R. J. Ferran and S. Boumsellek, Ferran Scientific, Inc., San Diego, CA, http://www.ferran.com


34. Transpector XPR, Leybold Inficon, East Syracuse, NY, http://www.inficon.com
35. Micropole Sensor System, Ferran Scientific, San Diego, CA, http://www.ferran.com/main.html
36. Ferran, R. J., and S. Boumsellek. "High-Pressure Effects in Miniature Arrays of Quadrupole Analyzers for Residual Gas Analysis from 10⁻⁹ to 10⁻² Torr." J. Vac. Sci. Technol. A14 (1996): 1258.
37. Adapted from input by C. A. Gogol, Leybold Inficon, East Syracuse, NY, http://www.inficon.com
38. Tallman, C. A. "Acoustic Gas Analyzer." ISA Trans. 17, no. 1 (1977): 97–104.
39. Wajid, A., C. Gogol, C. Hurd, M. Hetzel, A. Spina, R. Lum, M. McDonald, and R. J. Capic. "A High-Speed High-Sensitivity Acoustic Cell for In-Line Continuous Monitoring of MOCVD Precursor Gasses." J. Cryst. Growth 170 (1997): 237–41.
40. Stagg, J. P. "Reagent Concentration Measurements in Metal Organic Vapour Phase Epitaxy (MOVPE) Using an Ultrasonic Cell." Chemtronics 3 (1988): 44–9.
41. Leybold Inficon's "Composer," Acoustic Gas Composition Controller.
42. Adapted from input by Kevin S. Gerrish, ENI Technology, Inc., Rochester, NY.
43. Bird Electronic Corporation, Solon, OH.
44. Applied Energy Systems, Inc., Malvern, PA, http://www.aenergysys.com
45. ENI Technology, Inc., Rochester, NY.
46. Fourth State Technology, Inc., Austin, TX.
47. Comdel, Inc., Gloucester, MA.
48. Advanced Energy Industries, Inc., Fort Collins, CO.
49. Buck, D. Texas Instruments, personal communication.
50. CSS 100 Chamber-Wall State Sensor, Sensys Instruments, Sunnyvale, CA.
51. Adapted from input by Ran Kipper, Nova Measuring Instruments Ltd., Weizman Scientific Park, Rehovoth, Israel.
52. NovaScan, Nova Measuring Instruments Ltd., Weizman Scientific Park, Rehovoth, Israel.
53. Model 2100 Process FT-IR, On-Line Technologies, East Hartford, CT.
54. Buffeteau, T., and B. Desbat. Appl. Spectrosc. 43, no. 6 (1989): 1027–32.
55. Abeles, F. Advanced Optical Techniques. Vol. 143. North-Holland, Amsterdam, 1967, chap. 5.
56. Adapted from input by William T. Conner, Leybold Inficon, Inc., East Syracuse, NY, http://www.inficon.com
57. Dalton, T., W. T. Conner, and H. Sawin. JECS 141 (1994): 1893.
58. Model 2450 Endpoint Controller, Luxtron Corporation, Santa Clara, CA, http://www.luxtron.com/index.html
59. Adapted from input by John Hanselman, Active Impulse Systems, Natick, MA.
60. InSite 300, Active Impulse Systems, Natick, MA.
61. Rogers, J. A., M. Fuchs, M. J. Banet, J. B. Hanselman, R. Logan, and K. A. Nelson. Appl. Phys. Lett. 71, no. 2 (1997).
62. Adapted from input by Christopher Raymond, Accent Optical Technologies, Bend, OR, http://www.accentopto.com
63. McNeil, J. R., S. Naqvi, S. Gaspar, K. Hickman, K. Bishop, L. Milner, R. Krukar, and G. Petersen. "Scatterometry Applied to Microelectronic Processing." Microlithogr. World 1, no. 15 (1992): 16–22.
64. Raymond, C. "Scatterometry for Semiconductor Metrology." In Handbook of Silicon Semiconductor Metrology, edited by A. C. Diebold, 485–95. New York: Marcel Dekker, 2001.
65. Raymond, C. J., S. S. H. Naqvi, and J. R. McNeil. "Resist and Etched Line Profile Characterization Using Scatterometry." In Metrology, Inspection, and Process Control for Microlithography XI, edited by S. K. Jones, Proceedings of SPIE, Vol. 3050, 476–86, July 1997.
66. Jekauc, I., J. Moffitt, S. Shakya, E. Donohue, P. Dasari, C. J. Raymond, and M. Littau. "Metal Etcher Qualification Using Angular Scatterometry." In Metrology, Inspection, and Process Control for Microlithography XIX, edited by R. M. Silver, Proceedings of SPIE, Vol. 5752, July 2005.
67. Ukraintsev, V. A., M. Kulkarni, C. Baum, and K. Kirmse. "Spectral Scatterometry for 2D Trench Metrology of Low-k Dual-Damascene Interconnect." In Metrology, Inspection, and Process Control for Microlithography XVI, edited by D. J. C. Herr, Proceedings of SPIE, Vol. 4689, 189–95, July 2002.


68. Bushman, S., and S. Farrer. "Scatterometry Measurements for Process Monitoring of Polysilicon Gate Etch." In Process, Equipment, and Materials Control in Integrated Circuit Manufacturing III, edited by A. Ghanbari and A. J. Toprac, Proceedings of SPIE, Vol. 3213, 79–90, 1997.
69. Naqvi, S. S. H., R. H. Krukar, J. R. McNeil, J. E. Franke, T. M. Niemczyk, D. M. Haaland, R. A. Gottscho, and A. Kornblit. "Etch Depth Estimation of Large-Period Silicon Gratings with Multivariate Calibration of Rigorously Simulated Diffraction Profiles." J. Opt. Soc. Am. A 11, no. 9 (1994): 2485–93.
70. Naqvi, S. S. H., J. R. McNeil, R. H. Krukar, and K. P. Bishop. "Scatterometry and Simulation of Diffraction-Based Metrology." Microlithogr. World 2, no. 3 (1993).
71. Leray, P., S. Cheng, S. Kremer, M. Ercken, and I. Pollentier. "Optimization of Scatterometry Parameters for Shallow Trench Isolation (STI) Monitor." In Metrology, Inspection, and Process Control for Microlithography XVIII, edited by R. M. Silver, Proceedings of SPIE, Vol. 5375, 2004.
72. Sendelbach, M., and C. Archie. "Scatterometry Measurement Precision and Accuracy below 70 nm." In Metrology, Inspection, and Process Control for Microlithography XVII, edited by D. J. Herr, Proceedings of SPIE, Vol. 5038, 2003.
73. Smith, B., S. Bushman, and C. Baum. "Scatterometer Sensitivity for Statistical Process Control: Importance of Modeling for In-Direct Measurements." In Characterization and Metrology for ULSI Technology 2005, edited by D. G. Seiler et al., AIP Vol. 788, 437–41, 2005.
74. Adapted from input by Scott Bushman, Texas Instruments.
75. Baum, C., and S. Farrer. "Resist Line Width and Profile Measurement Using Scatterometry." SEMATECH AEC-APC Conference, Vail, CO, 1999.
76. Kostoulas, Y., C. J. Raymond, and M. Littau. "Scatterometry for Lithography Process Control and Characterization in IC Manufacturing." Mater. Res. Soc. Symp. Proc. 692 (2002).
77. Baum, C. C., R. Soper, S. Farrer, and J. Shohet. "Scatterometry for Post-etch Polysilicon Gate Metrology." In Metrology, Inspection, and Process Control for Microlithography XIII, edited by B. Singh, Proceedings of SPIE, Vol. 3677, 148–58, 1999.
78. Chen, R. C.-J., F.-C. Chen, Y.-Y. Luo, B.-C. Perng, Y.-H. Chiu, and H.-J. Tao. "Application of Spectroscopic Ellipsometry Based Scatterometry for Ultra Thin Spacer Structure." In Metrology, Inspection, and Process Control for Microlithography XVIII, edited by R. M. Silver, Proceedings of SPIE, Vol. 5375, 2004.
79. Raymond, C. J., M. Littau, B. Youn, C.-J. Sohn, J. A. Kim, and Y. S. Kang. "Applications of Angular Scatterometry for the Measurement of Multiply-Periodic Features." In Metrology, Inspection, and Process Control for Microlithography XVII, edited by D. J. Herr, Proceedings of SPIE, Vol. 5038, 2003.
80. Quintanilha, R., P. Thony, D. Henry, and J. Hazart. "3D-Features Analysis Using Spectroscopic Scatterometry." In Metrology, Inspection, and Process Control for Microlithography XVIII, edited by R. M. Silver, Proceedings of SPIE, Vol. 5375, 2004.
81. Reinig, P., R. Dost, M. Mort, T. Hingst, U. Mantz, J. Moffitt, S. Shakya, C. J. Raymond, and M. Littau. "Metrology of Deep Trench Etched Memory Structures Using 3D Scatterometry." In Metrology, Inspection, and Process Control for Microlithography XIX, edited by R. M. Silver, Proceedings of SPIE, Vol. 5752, 2005.
82. Rathsack, B. M., S. G. Bushman, F. G. Celii, S. F. Ayres, and R. Kris. "Inline Sidewall Angle Monitoring of Memory Capacitor Profiles." In Metrology, Inspection, and Process Control for Microlithography XIX, edited by R. M. Silver, Proceedings of SPIE, Vol. 5752, 2005.
83. Huang, H. T., and F. L. Terry Jr. "Spectroscopic Ellipsometry and Reflectometry from Gratings (Scatterometry) for Critical Dimension Measurement and In Situ, Real-Time Process Monitoring." Thin Solid Films 455–456 (2004): 828–36.
84. Barna, G., L. M. Loewenstein, K. J. Brankner, S. W. Butler, P. K. Mozumder, J. A. Stefani, S. A. Henck, et al. "Sensor Integration into Plasma Etch Reactors of a Developmental Pilot Line." J. Vac. Sci. Technol. B12, no. 4 (1994): 2860.
85. Adapted from input by Charlie Kohn, SemiTest, Inc., Billerica, MA.
86. Epimet, SemiTest, Inc., Billerica, MA.
87. Brookside Software, http://www.brooksidesoftware.com


88. Barna, G. G. "Automatic Problem Detection and Documentation in a Plasma Etch Reactor." IEEE Trans. Semicond. Manuf. 5, no. 1 (1992): 56.
89. BBN Domain Corporation, Cambridge, MA.
90. Perception Technologies, Albany, CA.
91. Triant Technologies, Nanaimo, BC, Canada V9S 1G5.
92. Umetrics, Winchester, MA.
93. Brookside Software, San Mateo, CA.
94. Brooks Automation, Richmond, BC, Canada V7A 4V4.
95. Pattern Associates, Inc., Evanston, IL.
96. Real Time Performance, Sunnyvale, CA.
97. SEMATECH Fault Detection and Classification Workshop, February 18–19, 1997.
98. Barna, G. G. "Procedures for Implementing Sensor-Based Fault Detection and Classification (FDC) for Advanced Process Control (APC)." SEMATECH Technical Transfer Document #97013235AXFR, October 10, 1997.
99. MiTex Solutions, Inc., Canton, MI.
100. ProcessWORKS, a member of the WORKS family of products being commercialized by Texas Instruments, Dallas, TX.
101. Barna, G. G. "APC in the Semiconductor Industry, History and Near Term Prognosis." In SEMI/IEEE ASMC 96 Workshop, 364–9. Cambridge, MA, November 14, 1996.


26

Yield Modeling

Ron Ross
Texas Instruments, Inc.

Nick Atchison
Multigig, Inc.

26.1 Introduction ...................................................................... 26-1
26.2 Cluster Analysis ................................................................. 26-2
26.3 Yield Models ...................................................................... 26-4
Random-Defect Yield Models † General Yield Model
26.4 Yield Limits ....................................................................... 26-7
Random Defect Yield Limits † Systematic Yield Limits—Method 1 † Systematic Yield Limits—Method 2 † Test Yield Limits
26.5 Summary ......................................................................... 26-19
References ...................................................................................... 26-19

26.1 Introduction

Yield modeling has been used for many years in the semiconductor industry. Yield models are now used not only for yield analysis, but also as the basis for automated yield analysis programs. Historically, the term “yield model” has referred to the mathematical representation of the effect of randomly distributed “defects” on the percentage of the integrated circuits (or die) on a wafer that are “good.” Good means that they pass all parametric and functional tests that are specified for the product. The mathematical representations are typically derived from statistical distribution functions, such as the Poisson distribution or the Bose–Einstein statistics. Certain assumptions are then made about the variations in the spatial distributions of the “killing” defects on the wafers and mathematical formulas are derived from the results. References will be provided to yield analysis patents that use the yield models presented in this chapter. The use of yield models that are based on statistical distributions is often successful for accurately calculating yields for products, which have their yield limited only by random defects. However, a complete and much more useful yield model will also account for “systematic” yield losses. Systematic yield losses can result from process, design or test problems. Die that fail for systematic problems are not randomly distributed over the wafer area, but are often confined to given regions, such as the outer periphery or the center. Systematic failures do not depend on die area, as do failures that are due to random defects. The total yield for a given product can be expressed as the product of the systematic yield and the random yield:

$$Y = Y_s \times Y_r$$

Typically, the second term in this product is the only one that is "modeled" in the statistical model equation used to calculate the yield limits due to various types of random defects that arise from different manufacturing process steps or process equipment. Thus, the statistical modeling "partitions" the random yield limits (or, conversely, yield losses) into components that are due to different types of defects.


The term Ys is left simply as a single factor and not partitioned. Ys is often "estimated," or it can be calculated by performing "cluster analysis," which will be discussed in this chapter. A complete yield model partitions the term Ys into its sub-components to create a pareto of systematic losses so yield improvement projects can be prioritized. Two methods for doing this will be discussed in this chapter. A complete yield model should, therefore, have the following characteristics:

1. It must account for all sources of yield loss, both random and systematic.
2. The total modeled or calculated yield should agree well with the actual yield.
3. It should ideally give insight into possible causes of the yield loss.
4. It should be able to partition and quantify yield losses resulting from design, process, test, and random defects.
5. It should provide the basic methods for automated yield analysis tools.

Yield modeling is worthwhile and advantageous because:

1. It makes possible the use of existing process and test data to quantify and paretoize all sources of yield loss.
2. It can substantially improve the yield learning rate for new products.
3. It makes accurate yield forecasting possible, which aids in planning.
4. It results in logical prioritizing of resources to work on yield enhancement projects with the highest payback.
5. It helps to set product specifications that match process capability.
6. It can provide the primary algorithms needed to create automated yield analysis programs. (See the yield analysis patents in the References section.)

This chapter will first cover cluster analysis for calculating Ys and Yr. A brief review of the well-known random defect yield models will be given. The use of one of these models (the negative binomial model) for calculating individual defect yield limits will be explained. Two methods for calculating systematic yield limits will be discussed. The calculation of test yield limits will also be briefly presented.

26.2 Cluster Analysis

Cluster analysis or "window analysis," introduced by Seeds [1,2], was used extensively by Stapper at IBM [3]. This analysis is performed using actual wafer probe bin maps for finished wafers. The die are partitioned into groups or "blocks" of 1, 2, 3, 4 (2×2), 6 (3×2), 9 (3×3), etc. A simple example with groupings of 1×1, 1×2, 2×2, and 2×3 is shown in Figure 26.1. The percent yield is then calculated for each grouping, with the stipulation that the block is only considered to be a yielding block if all die within the block passed wafer probe testing (e.g., in the 2×2 block, all four dice must have yielded at wafer probe to count the block as good). For example, if there are 600 possible candidates on the wafer, and 480 tested good, the yield of the 1×1 block is simply 480/600 = 80%. For the 1×2 blocks, the total possible candidates would be 300, and if 216 of these had both die pass wafer probe, the yield is 216/300 = 72%. The 2×2 blocks, for example, have 150 total candidates, and if 90 of the blocks contain all four dice that tested good, the yield is 90/150 = 60%. This simulates what the yield would be for similar products of the same technology with 2×, 3×, 4×, 6×, and 9× the die area of the actual product being analyzed. Because of statistical variations in yield across the wafer surface and from wafer to wafer, the above block yield calculations are performed on a relatively large number (~100–500 if possible) of wafers and the yields for each block size are averaged. When the averages have been computed, the values are plotted on a graph of yield vs. block size, as shown in Figure 26.2.
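As an illustration of the block-yield computation just described, the short Python sketch below tallies window yields over a binary pass/fail bin map. The wafer map here is randomly generated for demonstration; it is not data from this chapter.

```python
import numpy as np

def block_yield(bin_map, rows, cols):
    """Yield of rows x cols blocks: a block is good only if every die in it
    passed wafer probe. Partial blocks at the map edge are ignored."""
    n_rows, n_cols = bin_map.shape
    good = total = 0
    for r in range(0, n_rows - rows + 1, rows):
        for c in range(0, n_cols - cols + 1, cols):
            total += 1
            good += int(bin_map[r:r + rows, c:c + cols].all())
    return good / total

# Hypothetical 24 x 25 bin map with ~80% of dice passing (1 = pass).
rng = np.random.default_rng(0)
wafer = (rng.random((24, 25)) < 0.8).astype(int)

for rows, cols in [(1, 1), (1, 2), (2, 2), (2, 3), (3, 3)]:
    print(f"{rows}x{cols} block yield: {block_yield(wafer, rows, cols):.3f}")
```

In practice these block yields would be averaged over many wafers before plotting against block size, as the text describes.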

FIGURE 26.1 Cluster analysis groupings. (Block yields in the example: 1×1 = 86/96 = 0.896; 1×2 = 39/48 = 0.812; 2×2 = 13/22 = 0.591; 2×3 = 8/16 = 0.500.)

FIGURE 26.2 Cluster analysis graph (probe yield vs. die area, in number of dice per cluster).

A “best-fit” curve fitting routine is then applied to the points on the graph, using one of the yield model equations described later. For illustration, the negative binomial model will be used:

$$Y = Y_s \left(1 + \frac{AD}{\alpha}\right)^{-\alpha} \qquad (26.1)$$

where Ys (the systematic yield limit), D (the density of random fatal defects, in units of defects per die), and α (the cluster factor) are the parameters whose values are optimized for the best fit of the equation to the block yield data points. The analysis thus provides "best" values of Ys, D, and α for a statistically significant sample of wafers for a given product. Here Ys corresponds to the intercept of the best-fit curve with the y-axis, and α is a measure of the tendency of the defects to "cluster," or to depart from total randomness. D is the average number of fatal or killing defects per die. To calculate the killing defect density in terms of defects per unit area, D is divided by the die area of the product in question. The random defect yield limit (Yr) can then be calculated either:

1. By using the equation

$$Y_r = \left(1 + \frac{AD_0}{\alpha}\right)^{-\alpha} \qquad (26.2)$$

where D0 is the defect density in terms of defects per unit area, or

2. By using

$$Y_r = \frac{Y}{Y_s} \qquad (26.3)$$

where Y is the average yield of the 1×1 block for the wafers used in the analysis. If a statistically appropriate sample size was used for the analysis and if the negative binomial model provides a good fit, the two methods should agree very closely. The use of cluster analysis provides two major benefits. The first is that it quantifies the systematic and random yield limits, so priority can be placed on the further analysis and yield improvement efforts that give the greatest payback. The second is that it provides an excellent cross-check for the results of the yield modeling, which partitions the random and systematic yield limits into their subcomponents. After the partitioning is completed, if all sources of yield loss are accounted for, the product of all random defect yield limits should equal the Yr obtained from the cluster analysis. Likewise, the product of all independent systematic yield limits should equal the Ys from the cluster analysis.
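The fit of Equation 26.1 to the averaged block yields can be done with any least-squares routine. A minimal sketch, assuming SciPy is available and using made-up block-yield points, follows; A is counted here in dice per block, so D comes out in defects per die.

```python
import numpy as np
from scipy.optimize import curve_fit

def neg_binomial(n_dice, ys, d, alpha):
    # Equation 26.1 with A expressed as the number of dice in the block.
    return ys * (1.0 + n_dice * d / alpha) ** (-alpha)

# Hypothetical averaged block yields vs. block size (number of dice).
block_size = np.array([1.0, 2.0, 4.0, 6.0, 9.0])
avg_yield = np.array([0.80, 0.71, 0.59, 0.51, 0.42])

(ys, d, alpha), _ = curve_fit(neg_binomial, block_size, avg_yield,
                              p0=[0.95, 0.2, 2.0],
                              bounds=([0.0, 0.0, 0.01], [1.0, 10.0, 1000.0]))
yr = avg_yield[0] / ys          # cross-check via Equation 26.3
print(f"Ys = {ys:.3f}, D = {d:.3f} defects/die, alpha = {alpha:.2f}, Yr = {yr:.3f}")
```

The fitted Ys is the y-axis intercept discussed above, and D divided by the die area gives the defect density used in Equation 26.2.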

26.3 Yield Models

26.3.1 Random-Defect Yield Models

If it is assumed that a wafer has a given number of fatal defects that are spread randomly over the wafer area, then the average number per chip would be A × D0, where A is the chip area and D0 is the total number divided by the total wafer area. If the defects are completely random in their spatial distribution, the probability of finding a given number, k, of defects on any single die is given by the Poisson distribution:

$$P(k) = \frac{\lambda^k e^{-\lambda}}{k!} \qquad (26.4)$$

where λ = A × D0. The yield is then defined as the probability of a die having zero defects (k = 0), so:

$$Y_r = P(0) = e^{-\lambda} = e^{-AD_0} \qquad (26.5)$$

This is the Poisson yield model. In most cases, this model is found to predict yields that are lower than actual yields for a given product with a specified D0. This is because the defect density varies by region of the wafer and from wafer to wafer. This results in a higher probability of a given die having multiple killing defects than would be predicted by the random model, thus leaving other dice without any killing defects which, in turn, means higher yields than predicted. To take into account the variation in the defect density, the yield model is modified to include a defect density distribution:

$$Y_r = \int_0^{\infty} F(D)\, e^{-AD}\, dD \qquad (26.6)$$

Several of the defect yield models used by wafer manufacturers result from solving this equation with various assumptions for the form of F(D). Of course, if F(D) is a delta function at D0, the Poisson model results. If a Gaussian distribution is approximated by a triangular distribution as shown in Figure 26.3, the Murphy Model is obtained:

$$Y_r = \left(\frac{1 - e^{-AD_0}}{AD_0}\right)^2 \qquad (26.7)$$

An exponential distribution:

$$F(D) = \frac{1}{D_0}\, e^{-D/D_0} \qquad (26.8)$$

FIGURE 26.3 Triangular defect-density distribution (frequency f(D) vs. defect density D).

results in the Seeds model:

$$Y_r = e^{-\sqrt{AD_0}} \qquad (26.9)$$

Another common model is the Bose–Einstein model, which attempts to take into account the number of mask steps and assumes an average defect density of D0 per mask level. This model takes the following form:

$$Y_r = \frac{1}{\left(1 + AD_0\right)^n} \qquad (26.10)$$

where n is the number of mask levels. The main weakness with this model is that it assumes the same defect density for all levels. This is usually not the case. For example, the killing defect density is usually much higher for metal layers than for front-end layers. The Price model is a special case of the Bose–Einstein model which sets n = 1. The last model that will be considered is the negative binomial model. This has been discussed extensively by Stapper, formerly of IBM [4]. This can be derived by setting F(D) in Equation 26.6 to the gamma distribution. The resulting random yield model is:

$$Y_r = \left(1 + \frac{\lambda}{\alpha}\right)^{-\alpha} \qquad (26.11)$$

where α is the cluster factor. This factor has physical significance and is related to the amount of clustering or non-randomness of the killing defects. A small value of α indicates a high degree of clustering and a larger value indicates a lower degree of clustering. A small value of α means that the probability of having multiple killing defects on the same die or on adjacent dice is greater than for a large α. Small α also means that the variation in D0 is greater across the wafer area. As the value of α nears infinity, it can be shown that Equation 26.11 reduces to Equation 26.5. The determination of the value of α was explained in the previous section. A comparison of the defect-yield limit predictions for the various models presented here is shown in Figure 26.4.

FIGURE 26.4 Random defect-limited yield vs. die size (cm²) for the Poisson, Murphy, Seeds, Bose–Einstein (n = 10), and negative binomial models.

FIGURE 26.5 Negative binomial model—alpha comparison: defect yield limit vs. die area (cm²) for α = 1, 2, 4, 10, 100, and 1000.

It is seen that the Seeds model is the most pessimistic and the Bose–Einstein model is the most optimistic. For these curves, a D0 of 0.25/cm² has been assumed, and for the Bose–Einstein model, n = 8 was assumed (so D0 = 0.25/8 = 0.03125/cm²/layer). Figure 26.5 depicts Yr curves for the negative binomial model for various values of α, again for D0 = 0.25/cm². It is seen from this graph that higher values of α result in a more pessimistic yield prediction.
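The model curves in Figure 26.4 and Figure 26.5 are straightforward to regenerate numerically. The sketch below simply encodes Equations 26.5, 26.7, 26.9, 26.10, and 26.11 with the same assumed D0 = 0.25/cm²; the particular α used for the negative binomial model here is illustrative.

```python
import numpy as np

D0 = 0.25  # assumed defect density, defects/cm^2

def poisson(a):                      # Equation 26.5
    return np.exp(-a * D0)

def murphy(a):                       # Equation 26.7
    return ((1.0 - np.exp(-a * D0)) / (a * D0)) ** 2

def seeds(a):                        # Equation 26.9
    return np.exp(-np.sqrt(a * D0))

def bose_einstein(a, n=8):           # Equation 26.10 with D0/n per mask level
    return 1.0 / (1.0 + a * D0 / n) ** n

def neg_binomial(a, alpha=2.0):      # Equation 26.11 with lambda = A * D0
    return (1.0 + a * D0 / alpha) ** (-alpha)

for area in (0.5, 1.0, 2.0):         # die size in cm^2
    print(area, poisson(area), murphy(area), seeds(area),
          bose_einstein(area), neg_binomial(area))
```

Running this confirms that the Seeds model gives the lowest yields over this die-size range, as the text notes; exactly where the Bose–Einstein and negative binomial curves fall depends on the assumed n and α.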

26.3.2 General Yield Model

A general yield model can be expressed as a product of all independent yield limits:

$$Y = \prod_i Y_{ri} \prod_j Y_{Pj} \prod_k Y_{Dk} \prod_l Y_{Tl} \qquad (26.12)$$

where
Yri = the random defect yield limit for defect type i;
YPj = the systematic process yield limit for process parameter j;
YDk = the systematic design yield limit for design sensitivity k;
YTl = the systematic or random yield limit for test problem l.

It is important to note that these yield limits must be independent of each other. This means that the various yield detractors do not affect each other or correlate with each other. If they do, double counting of yield problems ensues, resulting in total yield calculations that are lower than the actual yield. If some of the yield limits are interdependent, all but one of each interdependent group must be eliminated, or they must be proportioned according to the degree of correlation [5]. In subsequent sections, it will be explained how to calculate each of the various types of yield limits.

26.4 Yield Limits

26.4.1 Random Defect Yield Limits

The most accurate method for calculating yield limits resulting from random defects is to implement a thorough in-line inspection program using such tools as the KLA23xx.

There are two other requirements for obtaining accurate results:

1. Defect wafer map to wafer probe map overlay capability must exist, so it can be accurately determined upon which die each defect falls.
2. It is highly desirable to have a computer program that calculates the "kill ratio" or "killer probability" for each type of defect at each critical layer.

It is also important to have a consistent defect classification scheme, preferably automatic defect classification (ADC). This method works best for products in full production because fairly large numbers of wafers must be inspected (≥100 wafers) to obtain accurate killer probabilities. In order to get accurate and consistent results for these calculations, a large enough sample of the defects detected by the inspection equipment must be reviewed or classified. This sample size must be adjusted to take into account such considerations as inspection capacity and average numbers of defects on typical wafers. If the number of defects at a particular layer is small (less than ~100), then it is a good practice to review all of the defects. If the number is large, then at least 50–100 defects per wafer, per level, should be classified. It is very important that a random sample of the defects be classified, and not just the "large" ones or the "interesting" ones. The killer probability is easily calculated as:

$$P_{ki} = 1 - \frac{Y_d}{Y_c} \qquad (26.13)$$

where Yd is the probe yield of all of the die that had defects of type i at a particular layer, and Yc is the probe yield of all of the die that did not have defects of type i. Assume, for the sake of illustration, that 100 wafers were inspected at a given layer (e.g., poly), and that a total of 1000 dice were found with defects of a particular type (e.g., notching). Assume also that 9000 dice do not have defects of this type. Also, assume that, of the 1000 dice with defects, 300 of them were later tested to be good at wafer probe, and of the 9000 without defects, 6300 of them were later tested to be good at wafer probe. Now, the yield of the dice with defects (Yd) would be 300/1000 = 30%, and the yield of the dice without defects (Yc) would be 6300/9000 = 70%. The killer probability would then be:

$$P_{ki} = 1 - \frac{0.3}{0.7} = 0.571 \qquad (26.14)$$

After the killer probabilities have been calculated for all types of defects, the yield limits for each type of defect can be calculated by using the following equation:

$$Y_i = \left(1 + \frac{P_{ki} \times A \times D_i}{\alpha}\right)^{-\alpha} \qquad (26.15)$$

where A is the die area and Di is the defect density for defect type i. Di is calculated from:

$$D_i = \frac{N_i}{A_T}$$

where Ni is the total number of dice with defects of type i and AT is the total area inspected. There will, of course, be one of these yield limit equations for each different type of classified defect at each inspected layer. It is important to note that Ni is the "normalized" number of defects of type i. This means that, if fewer than 100% of the defects are classified (or reviewed), the total number of defects of type i is projected onto the total population by the equation:

$$N_i = \frac{N_{iR}}{N_R}\, N \qquad (26.16)$$

where NiR is the number of defects of type i in the reviewed sample; NR is the total number of defects reviewed (of all types); and N is the total number of defects detected (of all types). It must also be noted that Equation 26.15 works only for unclustered (or random) defects.
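Putting Equations 26.13 through 26.16 together, a per-defect-type yield limit calculation can be sketched as below. The review-sample numbers and inspected area are hypothetical; only the 300/1000 and 6300/9000 yields come from the worked example in the text.

```python
def kill_ratio(y_defect, y_clean):
    # Equation 26.13: Pki = 1 - Yd / Yc
    return 1.0 - y_defect / y_clean

def defect_yield_limit(p_kill, die_area, defect_density, alpha):
    # Equation 26.15 (negative binomial form)
    return (1.0 + p_kill * die_area * defect_density / alpha) ** (-alpha)

p_ki = kill_ratio(300 / 1000, 6300 / 9000)   # = 0.571, as in Equation 26.14

# Equation 26.16: project the reviewed sample onto all detected defects.
n_i = (120 / 400) * 2000      # hypothetical: 120 of 400 reviewed were type i,
                              # 2000 defects detected in total -> Ni = 600
d_i = n_i / 15000.0           # hypothetical total inspected area AT = 15,000 cm^2

print(f"Pki = {p_ki:.3f}")
print(f"Yi  = {defect_yield_limit(p_ki, 1.0, d_i, alpha=2.0):.4f}")  # 1 cm^2 die
```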

26.4.2 Systematic Yield Limits—Method 1

The first method for calculating individual systematic yield limits is called Limited Yield Analysis. This method only gives accurate results for products in high-volume production. This is because there are many variables that affect yield, and this method uses wafer averages and determines the average effect of electrical parameters (e.g., threshold voltages or transistor gains) on the yield. The actual number of wafers required to calculate a yield limit in the range of 0.97 (3% loss) and with a standard deviation of about 8% in the yield of the product has been found empirically to be about 2000 wafers. For higher yield limits (less loss than 3%), or for a higher standard deviation, the number must be even greater. The method works for either process or design-related systematic problems. The yield limits are calculated from the formula:

$$Y_{Dk} = \int_{P_{k\mathrm{MIN}}}^{P_{k\mathrm{MAX}}} F(P_k)\, Y(P_k)\, dP_k \qquad (26.17)$$

where
YDk = the design yield limit due to electrical parameter k;
Pk = the values of electrical parameter k;
F(Pk) = the normalized distribution function of parameter k; and
Y(Pk) = the normalized probe yield as a function of parameter k.

Equation 26.17 also works for YPk (process systematic yield limits). A detailed explanation of how this integral is evaluated is in order. Using specific examples is the best way to do this. Figure 26.6 shows a histogram (or frequency distribution) for a particular electrical parameter (poly sheet resistance in this example) for a product manufactured using a BiCMOS process. The specification limits for this parameter are 800–1200 ohms/square. The data results from measuring poly sheet on five sites per wafer on a large number of wafers. The histogram represents the average values of the five sites for each wafer. To use this distribution in Equation 26.17, it must be normalized so that the total area of the histogram is 1.0. This is done by simply dividing the number of wafers in each range (or bar) by the total number of wafers in the sample. The key element in the analysis is now the “grouping” of the wafers into three groups with equal numbers of wafers in each group. The first group consists of all wafers with average poly sheet in the lower third of the distribution. The second group consists of wafers with medium values of poly sheet. The third group includes all wafers with high values of poly sheet. Again, the groups are not formed by equal ranges of poly sheet, but by equal numbers of wafers in each group. After the wafers are grouped in this manner, the average of the electrical parameter (poly sheet) is computed for each group. Also, the average of the wafer probe yield is computed for each group. This results in three points that can then be plotted as shown in Figure 26.7.
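The grouping-and-integration procedure can be prototyped in a few lines. The sketch below is a rough, hypothetical rendering of the steps described in this section (tercile grouping, parabola fit, normalization, and numerical evaluation of Equation 26.17), with simulated wafer data standing in for real measurements.

```python
import numpy as np

def limited_yield(param, good_dice):
    """Estimate the Equation 26.17 yield limit from per-wafer averages.

    param: per-wafer averages of one electrical parameter.
    good_dice: per-wafer good-die counts at wafer probe.
    """
    order = np.argsort(param)
    groups = np.array_split(order, 3)            # equal numbers of wafers
    x = np.array([param[g].mean() for g in groups])
    y = np.array([good_dice[g].mean() for g in groups])
    y = y / y.max()                              # normalize best group to 1.0
    coeffs = np.polyfit(x, y, 2)                 # parabola through three points

    # Numerically integrate F(Pk) * Y(Pk) over the observed distribution.
    hist, edges = np.histogram(param, bins=20, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    y_curve = np.clip(np.polyval(coeffs, centers), 0.0, 1.0)
    return np.sum(hist * y_curve * np.diff(edges))

# Hypothetical data: 2000 wafers of poly sheet resistance and good-die counts.
rng = np.random.default_rng(1)
poly = rng.normal(1000.0, 40.0, 2000)
dpw = 400.0 - 0.02 * (poly - 1000.0) ** 2 + rng.normal(0.0, 10.0, 2000)
print(f"yield limit ~ {limited_yield(poly, dpw):.3f}")
```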

FIGURE 26.6 Poly sheet resistance frequency distribution (frequency vs. poly sheet rho, ohms/square).

A function (in this case, a parabola) can then be fitted to these three points. The function must then be normalized so that the highest yield of the three points is equal to 1.0. The normalized values for the other two points are then computed by dividing the average number of good dice for the two groups by the average number of good dice for the highest yielding group. For example, if the highest average yield is 400 DPW, and the other two averages are 360 and 370 DPW, the corresponding normalized values

FIGURE 26.7 Mean product yield vs. poly sheet resistance (ohms/square).

would be 360/400 = 0.90 and 370/400 = 0.925. After the curve has been normalized in this manner, the resulting function becomes Y(Pk) in Equation 26.17. Equation 26.17 can then be evaluated (numerically, because of the irregularity of the frequency distribution) to compute the corresponding yield limit. In this specific example, the yield limit was calculated to be 0.936, which translates into a yield loss of 6.4% because of the probe yield sensitivity to poly sheet.

It is important to understand why the electrical test data and the yield data are grouped and averaged as explained above. There are hundreds of variables that might affect wafer probe yields, including both random and systematic variables. As long as these variables are independent of the variable in question (poly sheet in the example), there is no reason to expect that the other variables should group themselves in any particular way within the three groups partitioned according to the value of poly sheet. In other words, there is no reason to expect that wafers with low poly sheet should have more defects of a particular type than wafers with high poly sheet, for example. Likewise, there should be no reason to expect that wafers with low poly sheet would have a different value of contact resistance compared to wafers with high poly sheet. Therefore, if the sample size is large enough, all of the other independent variables should average out in a similar manner for each of the three groups of wafers, leaving only the effect on yield of the variable being analyzed. Essentially, the averaging is a method for looking at the variation in yield due to one parameter, where all other effects, being equal for the three groups, have been "removed" from the analysis. It is simply cleaner than trying to look at and interpret a scatter plot of the raw data.

If two electrical parameters correlate with each other (e.g., NMOS effective channel length (Leff) and NMOS drive current), then when the wafers are grouped according to ranges of one variable, the other parameter will also show a systematic grouping. For example, the wafers with low Leff would also tend to have higher drive current. If the limited yield analysis shows that one parameter affects yield, the other correlating parameter will also appear to affect yield. If the correlation is very good, the yield limits should be very nearly equal for the two parameters. In generating the yield limit pareto, only one of the two yield limits would be used. The one to be used must be chosen by engineering judgement, based on knowledge of the product in question.

Regarding the use of just three groupings of the data, two justifications are in order. Three points are the minimum required for determining the shape of the curve, whether it is linear or has a peak and drops off on both sides (as in Figure 26.7). Normally, a more complex relationship (e.g., two peaks) would not be expected. The second reason for using three points is that this maximizes the number of wafers in each group (compared to using more than three points), so the averaging described above is most effective for detecting significant yield differences among the groups. In other words, a 3% yield difference, for example, would be more significant with three groups than with more groups because more data points are included in each group.

In general, if the yield varies with the value of an electrical parameter, such as transistor parameters or resistance or capacitor values, while these parameters are within their specification limits, a design issue is indicated. If the yield varies with electrical test values of leakage current between metal or poly lines, or current leakage between nodes of transistors (e.g., emitter-to-base leakage), a process problem is indicated.

26.4.3 Systematic Yield Limits—Method 2

In this section, a powerful method for determining systematic yield limits, called product sensitivity analysis (PSA), is presented. It has been described in detail by Ross and Atchison [6] and has three advantages over the previous method. The first is that significantly fewer wafers are required for the analysis, because the analysis uses site data (by x–y location on the wafers) as opposed to wafer averages. The second advantage is that the analysis can be done earlier in the development cycle, after only ~100 wafers have been processed and tested. This makes it possible to detect systematic process and design problems early and fix them before the product is shipped to customers. The third advantage is that PSA gives insight into possible causes of the yield loss. This will be explained in more detail as the analysis is described.

Product sensitivity analysis determines how parameters measured at wafer probe (e.g., Icc currents, offset voltages, cutoff frequencies, etc.) vary with electrical parameters (e.g., Vtn, Leff, Hfe, poly sheet, etc.). This method does not work for functional pass/fail tests performed at wafer probe. In preparation for the analysis, a wafer-probe test-yield loss pareto is generated as shown in the example of Figure 26.8. The tests for which actual parametric values are measured and which cause the greatest yield loss are then chosen for the analysis. The analysis can then be performed for these parametric tests vs. all electrical parameters. The example used here will be for the wafer probe test highest on the pareto (test 2304). Figure 26.9 shows the actual distribution of this parameter. It can be seen that the distribution is far off-center between the specification limits. It is required that all parameters measured at wafer probe be recorded and stored by x–y coordinates on the wafer. Also, electrical parameters must be measured either in scribe-line test modules or in drop-in test modules at several sites on the wafers (preferably at least five per wafer). The analysis then proceeds as follows for each wafer probe parameter. The wafer probe and electrical parameters are screened, and data points outside the "normal" or "typical" distributions are excluded. This may be done, for example, using:

$$\mathrm{USL} = Q_3 + \frac{3}{2} \times \mathrm{IQR} \qquad (26.18)$$

where IQR = Q3 − Q1, LSL is the low screen limit, and USL is the high screen limit. Next, the values of the probe parameter for the product dice immediately surrounding each electrical test site are averaged and paired with the corresponding values for all electrical parameters of the sites. This results in a table similar to the abbreviated hypothetical example shown in Table 26.1. The first column identifies lot, wafer number, and site. The next column is the average value for the wafer probe parameter for dice surrounding each electrical test site. The following columns contain the values for the electrical parameters.
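The screening step of Equation 26.18 translates directly into code. The sketch below assumes a symmetric lower screen limit, which the chapter implies but does not write out.

```python
import numpy as np

def screen_limits(values):
    # Equation 26.18: USL = Q3 + (3/2) * IQR; the LSL mirror is an assumption.
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    return q1 - 1.5 * iqr, q3 + 1.5 * iqr

data = np.array([46.1, 47.0, 47.3, 46.8, 55.2, 47.1, 39.9, 46.5])
lsl, usl = screen_limits(data)
kept = data[(data >= lsl) & (data <= usl)]   # outliers 55.2 and 39.9 are dropped
print(lsl, usl, kept)
```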

FIGURE 26.8 Wafer-probe test-yield loss pareto analysis. (Top contributors in this example: 2304_ICOS2_1_SP_ at approximately 47% of the loss, 3412_DNL_1FFHTO2 at ~19%, 3411_DNL_1FDHTO2 at ~14%, 1902_OFFSET_FREQ at ~9%, and 2308_PWM_H_CDLY_ at ~2%.)

FIGURE 26.9 Frequency distribution for test 2304 (highest contributor to yield loss in the Figure 26.8 pareto). N = 43,731; median = 47.01 µA; mean = 47.17 µA; standard deviation = 2.09 µA.

For each electrical parameter, the data are then grouped into three ranges, each with equal numbers of sites regardless of which wafer or lot they come from, according to the value of the electrical parameter. For the three groups, the average of the electrical parameter is computed, as is the average of the wafer probe parameter. This gives three points to plot on a graph, just as in the case of the limited yield analysis. A best-fit line is then calculated and plotted on the graph. The upper and lower specification limits for both the electrical parameter and the wafer probe parameter must also be imported into the computer program. These are plotted as vertical lines for the electrical parameter (independent variable) and horizontal lines for the wafer probe parameter (dependent variable). The standard deviation (s) is computed for the wafer probe parameter for each of the three groups. The ±3s bars are plotted for each of the three points on the graph, and the ±3s lines are plotted above and below the best-fit line. These are forced to be parallel to the best-fit line. The result is a graph like the example shown in Figure 26.10.
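Computationally the three-point construction is light. The sketch below, using the first 20 rows of Table 26.1 as sample data, computes the tercile means, the best-fit line, and a pooled s for the ±3s band; it is an illustration, not the production algorithm.

```python
import numpy as np

def psa_three_point(elec, probe):
    """Tercile-group an electrical parameter, then fit probe mean vs. group mean."""
    order = np.argsort(elec)
    groups = np.array_split(order, 3)              # equal numbers of sites
    x = np.array([elec[g].mean() for g in groups])
    y = np.array([probe[g].mean() for g in groups])
    slope, intercept = np.polyfit(x, y, 1)
    sigma = np.sqrt(np.mean([probe[g].var(ddof=1) for g in groups]))
    return x, y, slope, intercept, sigma

poly_sheet = np.array([194, 201, 198, 202, 203, 196, 201, 199, 201, 203,
                       197, 200, 197, 203, 205, 190, 192, 190, 191, 193], float)
probe_2034 = np.array([78, 74, 73, 73, 73, 74, 72, 71, 73, 72,
                       73, 75, 74, 71, 70, 80, 79, 81, 80, 77], float)

x, y, m, b, s = psa_three_point(poly_sheet, probe_2034)
print(f"group means: {x.round(1)} -> {y.round(2)}")
print(f"fit: probe = {m:.3f} * poly + {b:.1f}, 3s band = +/-{3 * s:.2f}")
```

The fitted slope here is negative, consistent with the −0.25218 slope reported in Figure 26.10.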

TABLE 26.1 Parametric and Product Dice Data Table

Lot_Wafer_Site   Mean of Probe Test 2034   Poly Sheet   NMOS Leff   NPN Beta
L107_1_1         78                        194          0.65        85
L107_1_2         74                        201          0.68        92
L107_1_3         73                        198          0.64        83
L107_1_4         73                        202          0.63        88
L107_1_5         73                        203          0.64        86
L107_2_1         74                        196          0.66        88
L107_2_2         72                        201          0.68        98
L107_2_3         71                        199          0.63        85
L107_2_4         73                        201          0.64        91
L107_2_5         72                        203          0.65        93
L107_3_1         73                        197          0.63        82
L107_3_2         75                        200          0.67        90
L107_3_3         74                        197          0.62        84
L107_3_4         71                        203          0.65        85
L107_3_5         70                        205          0.63        84
L108_1_1         80                        190          0.61        88
L108_1_2         79                        192          0.65        96
L108_1_3         81                        190          0.62        85
L108_1_4         80                        191          0.63        86
L108_1_5         77                        193          0.62        87
L108_2_1         78                        194          0.63        86
L108_2_2         72                        199          0.67        93
L108_2_3         80                        192          0.64        85
L108_2_4         73                        197          0.64        89
L108_2_5         73                        198          0.62        90

This is for probe test 2304, whose distribution is shown in Figure 26.9. Note that the frequency distributions for the electrical parameter are also plotted on the graph. If the slope of the best-fit line is large relative to the wafer probe parameter spec limits, a sensitivity is indicated. Of course, a statistical test (such as a t-test) should be performed to confirm that the differences in the wafer probe parameter among the three groups are statistically significant at the 95% level. Also, the three averages for the wafer probe parameter should monotonically increase or decrease with increasing value of the electrical parameter. If these conditions are met, the sensitivity is of interest and should be further studied. If the +3s line intersects the upper spec limit of the wafer probe parameter while the electrical parameter is still within its spec limits, yield loss can result for sites with electrical parameter values between this intersection and the specification limit. Yield loss can similarly occur if the −3s line intersects the lower specification limit of the wafer probe parameter at a point between the electrical specification limits. In Figure 26.10, yield loss would start to occur near sites with values of poly2 sheet resistance (P2MSRES) below about 188 ohms/square. This value is well within the specification limits of 160–240 ohms/square. For electrical parameters such as Vtn, Hfe, poly sheet, etc., where specification limits were set in advance and agreed upon by the designers, if the lines intersect as described in Figure 26.10, any yield loss is due to design sensitivities. Of course, if the electrical parameter is tightly controlled in a narrow range where the ±3s lines are within the probe specification limits, no yield loss would occur even though there is a design sensitivity. The method for calculating yield limits is illustrated in Figure 26.11. Here, the best-fit line intersects the upper spec limit of the wafer probe parameter within the actual distribution range of the electrical parameter. A vertical line is extended to the horizontal axis. This line cuts off a certain fraction of the distribution. The yield loss is then simply the ratio of the area of the distribution to the left of the line (in this example) to the total area of the distribution.

FIGURE 26.10 Best-fit line (and ±3s lines) for the "3-point analysis" of test 2034: MUL_2304_ICO vs. P2MSRES, with the fitted line −0.25218 × X + 94.874519 and the electrical and probe specification limits overlaid.

The yield limit is then:

$$Y_{Dk} = 1 - \frac{A(F_{Xl})}{A(F_{Xt})}$$

where YDk is the design yield limit due to electrical parameter k; A(FXl) is the area under the distribution to the left of the vertical line; and A(FXt) is the total area under the distribution. This analysis is repeated for all electrical parameters and for each of the wafer probe parameters high on the yield-loss pareto. All of the graphs with no intersection of the ±3s lines with the wafer probe specification limits can then be eliminated. Also, graphs that do show potential yield loss because

FIGURE 26.11 Product sensitivity analysis graph: probe test 2034 vs. poly2 sheet resistance—yield limit calculation. (The poly 2 distribution is overlaid; the fraction of the distribution cut off by the intersection of the fit line with the probe spec limit is the yield loss.)

of sensitivities to electrical parameters that correlate with other electrical parameters are screened, as described for limited yield analysis. This leaves only one yield limit per sensitivity that is independent of all others. The systematic yield limits are then summarized in a pareto.
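The area-ratio yield limit reduces to a cumulative fraction of the parameter distribution past the intersection point. A hypothetical sketch follows; the slope and intercept are taken from the Figure 26.10 fit, while the distribution and probe limit are simulated.

```python
import numpy as np

def psa_yield_limit(elec_values, slope, intercept, probe_usl):
    """Yield limit from the fraction of sites past the point where the
    best-fit line crosses the probe parameter's upper spec limit."""
    x_cut = (probe_usl - intercept) / slope
    lost = np.mean(elec_values < x_cut) if slope < 0 else np.mean(elec_values > x_cut)
    return 1.0 - lost

rng = np.random.default_rng(2)
p2_sheet = rng.normal(198.0, 5.0, 10_000)    # simulated P2MSRES site values
print(psa_yield_limit(p2_sheet, slope=-0.25218, intercept=94.8745, probe_usl=48.0))
```

With these inputs the cutoff lands near 186 ohms/square, in the neighborhood of the ~188 ohms/square loss onset quoted in the text.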

26.4.4 Test Yield Limits

Test yield loss can result from such problems as mis-calibration of test hardware, poor contact of probe tips, electrical noise, etc. The calculation of two types of yield limits will be described here. The first type is typically systematic in nature and results from differences between various testers, probe cards, interface boards, etc. A “set-up” is defined as a unique combination of tester, probe card, interface board, and any other component that might be changed periodically from one tester to another. The number of wafers that should be run using each set-up in order to obtain statistically valid results depends on the variability in yield and the magnitude of the yield difference to be detected. One formula that can be used if the yield distribution is reasonably close to normal is:

$$N = \left(Z_a + Z_b\right)^2 \frac{s^2}{d^2}$$

where
Za = the Z factor for the probability α of making a type α error;
Zb = the Z factor for the probability β of making a type β error;
s = the standard deviation in the yield; and
d = the yield difference to be detected.

The z factors can be looked up in a standard table for the normal distribution. For example, if it is desired to have less than 10% probability of making an error of either type (projecting a yield difference when there is none, or projecting no yield difference when there is one), z is read from the table as 1.64. This assumes that the yield difference can be either positive or negative. If the s for the yield is 8% and it is desired to detect a yield difference of 1%, N can be calculated as:

$$N = \left(1.64 + 1.64\right)^2 \frac{0.08^2}{0.01^2} = 689 \ \text{wafers}$$
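The same sample-size arithmetic in code; the z value of 1.64 matches the table lookup in the text, while SciPy's exact quantile gives a slightly larger N.

```python
from scipy.stats import norm

def wafers_needed(z_a, z_b, sigma, delta):
    # N = (Za + Zb)^2 * s^2 / d^2
    return (z_a + z_b) ** 2 * sigma ** 2 / delta ** 2

z_table = 1.64                 # value read from a normal table in the text
z_exact = norm.ppf(0.95)       # ~1.645
print(wafers_needed(z_table, z_table, sigma=0.08, delta=0.01))  # ~689 wafers
print(wafers_needed(z_exact, z_exact, sigma=0.08, delta=0.01))  # ~693 wafers
```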

As an example, it is assumed that three different set-ups have been used to test 3000 wafers with the following results:

Set-Up   N      % Yield
1        1200   87
2        800    85
3        1000   88

It is assumed that set-up “3” gives the optimum yield for this product. The test set-up yield limit is then:

$$Y_{TS} = \frac{1200 \times 0.87 + 800 \times 0.85 + 1000 \times 0.88}{3000 \times 0.88} = 0.986$$
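In code, the set-up comparison is a weighted average against the best set-up; this short function reproduces the 0.986 result above.

```python
def setup_yield_limit(wafer_counts, yields):
    # Y_TS = sum(Ni * Yi) / (N * Y_OPT)
    total = sum(wafer_counts)
    y_opt = max(yields)
    return sum(n * y for n, y in zip(wafer_counts, yields)) / (total * y_opt)

print(setup_yield_limit([1200, 800, 1000], [0.87, 0.85, 0.88]))  # ~0.986
```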

The general formula can be expressed as:

$$Y_{TS} = \frac{\sum_i N_i \times Y_i}{N \times Y_{\mathrm{OPT}}}$$

where Ni is the number of wafers tested with set-up i; Yi is the mean yield of wafers tested with set-up i; N is the total number of wafers tested; and YOPT is the optimum yield (or mean yield of the best set-up). The number of wafers required for this type of yield calculation can be reduced substantially by splitting lots at wafer probe and testing the splits on different set-ups. With an added test of statistical significance (e.g., the F test), a yield difference could then be detected with only one lot. Another type of yield loss can happen because of some product dice having one or more parameters measured at wafer probe that are close to the upper or lower specification limits. In this case, electrical noise or slight variations in tester voltage or current input levels can cause the dice to fail when they are really good. If the dice are tested repeatedly, they sometimes pass and sometimes fail. These dice are sometimes called "twinkling die." If correlation wafers are tested repeatedly to ensure proper test set-ups, the probe results can be used to calculate yield limits due to tester non-repeatability or twinkling dice. The formula is:

$$Y_{TT} = 1 - \frac{\sum_i n_i T_i}{N_G \times N_t}$$

where
ni = the number of twinkling dice that failed i times;
Ti = the number of times these dice failed;
NG = the gross number of dice on the (correlation) wafer; and
Nt = the number of times the wafer was tested.

For a simple example, it is assumed that NG = 600 and that the correlation wafer was tested four times (Nt = 4). Furthermore, the numbers of dice that failed various numbers of times are assumed to be:

Ti   ni
1    27
2    9
3    1

therefore,

$$Y_{TT} = 1 - \frac{1 \times 3 + 9 \times 2 + 27 \times 1}{600 \times 4} = 0.980$$
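The twinkling-die bookkeeping is equally compact; this sketch reproduces the 0.980 result from the example above.

```python
def twinkling_yield_limit(fail_counts, gross_dice, n_tests):
    # Y_TT = 1 - sum(ni * Ti) / (NG * Nt); fail_counts maps Ti -> ni.
    lost = sum(t_i * n_i for t_i, n_i in fail_counts.items())
    return 1.0 - lost / (gross_dice * n_tests)

print(twinkling_yield_limit({1: 27, 2: 9, 3: 1}, gross_dice=600, n_tests=4))  # 0.98
```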

The calculation of a few types of yield limits has not been described in detail in this chapter. For example, special yield test structures that are run early in the technology development cycle can be used to determine defect densities at various layers. This information can be used, along with critical area analysis, as soon as the first product layout is complete, to calculate defect yield limits for the new product. Also, yield limits can be calculated for clustered defects, but these must be treated separately from the defects that are random [7].

TABLE 26.2 BCA IYM Activities Summary Sheet

                          Actual Q1CY97  PRML March  PRML April  PRML May  Active Projects
Design/product yield
  Yb (Beta)               0.994          0.995       0.992       0.998     DOE, characterization and spec adjustment
  Y1 (Leff)               0.994          0.992       0.996       0.998     DOE, characterization and spec adjustment
  Yr (P1000/250)          0.991          0.991       0.997       0.999     DOE, characterization and spec adjustment
  Design/prod limiter     0.979          0.978       0.985       0.995
Test limited yield
  Yt (test)               0.982          0.986       0.986       0.986     Probe card/setup improvements/OLI/test robustness
System process limiter
  Yi (IEBS)               0.995          0.995       0.992       0.996     SRAM strip back analysis
  Ym1 (Metal 1)           0.993          0.985       0.998       0.997
  Ym2 (Metal 2)           0.996          0.997       0.994       0.996
  Ym (Metal)              0.989          0.982       0.992       0.993
  Yv (Via)                0.997          1           1           0.997
  Yc (Contact)            0.997          1           0.991       0.999
  System procs limiter    0.978          0.977       0.975       0.985
Random defect limiter
  M2 bridge particle      0.986          0.985       0.984       0.985     IDIRT2 defect reduction team
  M2 bridge no part       0.996          0.996       0.995       0.996
  M2 extra pattern        0.999          0.999       1           1
  Metal 2 total           0.98           0.98        0.979       0.981
  M1 bridge particle      0.985          0.984       0.984       0.9812    Metal 1 defect reduction team
  M1 bridge no part       0.994          0.9948      0.9924      0.9904
  M1 extra pattern        0.995          0.996       0.999       1
  Metal 1 total           0.973          0.975       0.976       0.972
  Metal 0                 0.994          0.996       0.997       0.992
  Scratches               0.974          0.973       0.978       0.98      Mechanical damage and scratch reduction
  Residual metal          0.999          0.999       1           1
  Poly bridging           0.997          0.998       0.997       0.997
  Missing Co/Via          0.999          0.999       0.999       1
  Random defect limiter   0.919          0.922       0.927       0.924
Calculated yield          0.863          0.869       0.878       0.893
Actual yield              0.861          0.889       0.87        0.889

26.5 Summary

A complete mathematical yield model has been discussed which accounts for yield limits resulting from design, process, test and random defect problems. The yield model is expressed as the product of all of the independent yield limits. Table 26.2 gives an actual example of all of the calculated yield limits for a product that was in production several years ago. The grand limit is shown and compared with the actual yield for a given month. It can be seen that the numbers agree very well. This means that nearly all of the sources of yield loss were accounted for. Equations and methods have been described for calculating yield limits for most of the types of yield loss that occur in the four main categories. The effectiveness of the methodology has been shown by the example in Table 26.2.

References

1. Seeds, R. B. "Yield, Economic, and Logistic Models for Complex Digital Arrays." IEEE International Convention Record Part 6 (1967): 61–6.
2. Seeds, R. B. "Yield and Cost Analysis of Bipolar LSI." Presented at the 1967 International Electron Device Meeting Keynote Session, October 1967.
3. Stapper, C. H. "Large-Error Fault Clusters and Fault Tolerance in VLSI Circuits: A Review." IBM Journal of Research and Development 33, no. 2 (1989): 162–73.
4. Stapper, C. H. "Fact and Fiction in Yield Modeling." Microelectronics Journal 20, no. 1–2 (1989): 129–46.
5. Ross, R., and N. Atchison. "A Useful Method for Calculation of Parametric Yield Limits." Texas Instruments Technical Journal 15, no. 4 (1998): 74–7.
6. Ross, R., and N. Atchison. "A New Method for Analyzing Probe Yield Sensitivities to IC Design." Texas Instruments Technical Journal 15, no. 4 (1998): 78–82.
7. Ross, R., and N. Atchison. "The Calculation of Wafer Probe Yield Limits from In-Line Defect Monitor Data." Texas Instruments Technical Journal 15, no. 4 (1998): 83–7.
8. Atchison, N., and R. Ross. U.S. Patent No. 6,393,602, granted May 21, 2002.
9. Atchison, N., and R. Ross. U.S. Patent No. 6,324,481, granted November 27, 2001.
10. Atchison, N., and R. Ross. U.S. Patent No. 6,210,983, granted April 3, 2001.
11. Atchison, N. Yield Analysis Web Page, holomirage.com.

27

Yield Management

Louis Breaux
Sean Collins
Texas Instruments, Inc.

27.1 Introduction ...................................................................... 27-1
27.2 Sources and Types of Random Defects ........................... 27-2
     Diffusion and Implant † Surface Preparation † Photolithography † Etch Deposition and Oxide or W Chemical–Mechanical Polishing (CMP) † Cu CMP and Damascene Multi-Level Metalization † Wafer Edge Engineering
27.3 Yield Management Methodology .................................... 27-8
     Management Priority and Motivation † Process and Equipment Control † Product-Based Defect Detection and Analysis (Line Monitor) † Yield Impact Prediction/Verification † Root Cause Isolation
27.4 Summary ......................................................................... 27-28
References ..................................................................................... 27-28

27.1 Introduction

The remarkable progress in integrated circuit manufacturing can, in no small part, be attributed to the evolution and advancement in the area of yield enhancement and yield management in the past decades [11]. This progress has been recognized not just within the wafer manufacturing community but also by the wafer packaging community [12]. Considering that imitation is the greatest compliment, the fact that the yield management lessons and methodologies from the manufacturing "front end" are being adapted and applied to "back end" processes such as wafer bump and packaging is indeed an acknowledgement of the tremendous successes achieved [12]. Contamination-free manufacturing (CFM) originated as a term in the late 1980s. It was intended to describe the practice of semiconductor manufacturing under "ultraclean" conditions resulting in "perfect" yields. However, it is self-evident that perfect yields and ultraclean processes are goals that one strives for, but never achieves in totality under the constraints of time and money. Hence, the discussion in this chapter is primarily directed towards the methodology of "yield management" and the CFM practices that are relevant in achieving the highest yields possible in the shortest time. Probe yield can be defined as:

$$Y = Y_s Y_r \qquad (27.1)$$

In this definition, (100 − Ys) is the percentage of yield lost to systematic issues, which are not randomly distributed and tend to impact all or most die on a wafer. Process/design marginalities, parametric test conditions, or reticle defects are common causes of systematic yield loss. They are typically encountered in the early phase of new device qualification and, once addressed, tend not to recur. The percentage of yield lost to random defects on the wafer surface (100 − Yr) is mostly due to contamination.

TABLE 27.1 Chip Size Trends

Year of First Product Shipment                           2003   2005   2007   2010   2013   2016   2018
Technology Generations: Min Dimensions
  DRAM half-pitch (nm)                                   100    80     65     45     32     22     18
  MPU/ASIC M1 half-pitch (nm)                            120    95     76     54     38     27     21
  MPU printed gate length (nm)                           65     45     35     25     18     13     10
Functions/Chip
  DRAM bits/chip-generation                              1G     1G     2G     4G     8G     32G    32G
  DRAM Gbit/cm² at production                            0.77   1.31   2.22   5.19   10.37  24.89  39.51
  MPU functions per chip at introduction (Mtransistors)  180    285    453    1546   3092   6184   9816
  MPU functions per chip at production (Mtransistors)    153    243    386    773    1546   3092   4908
  High-performance MPU functions per chip (Mtransistors) 439    697    1106   2212   4424   8848   14,045
Chip Size (mm²)
  DRAM at production                                     139    82     97     83     83     138    87
  DRAM at introduction                                   485    568    662    563    560    464    292
  MPU at production                                      140    140    140    140    140    140    140
  MPU at introduction                                    280    280    280    280    280    280    280
  High-performance MPU at production                     310    310    310    310    310    310    310
  Lithographic field size (mm²)                          22×32 = 704 in all years
Maximum Substrate Diameter (mm)
  Bulk or epitaxial or SOI(a) wafer                      300    300    300    300    450    450    450

Source: The International Technology Roadmap for Semiconductors, International SEMATECH, Austin, TX, 2003 Edition.
(a) SOI = silicon on insulator.

$$Y_r = e^{-AD} \qquad (27.2)$$

where A is the area of the die and D is the defect density. Chapter 29 describes various methods used to calculate Ys and Yr for a given process flow and device. In modern factories using sub-0.25 µm design rules, 40%–50% of the total yield loss in the first year of production can be attributed to random defects. Prior chapters have dealt with process and design requirements for eliminating systematic yield losses. This chapter is primarily focused on the management of these random defect sources. The International Technology Roadmap for Semiconductors (ITRS) shows the overall industry trends in design rule and chip size (Table 27.1). These trends have a direct impact on yield, as seen in Table 27.2 [1]. Shrinking design rules imply smaller killing defects, larger chip sizes cause lower yields for comparable defect densities, and more process steps mean more sources of contamination/defects. The ITRS roadmap (Table 27.2) estimates the significant and necessary reductions in unit process defect densities (microprocessor unit (MPU) random particles per wafer pass) required to achieve 75% MPU yield in the first production year of a 22 nm design rule microprocessor (when compared to a 90 nm design rule microprocessor at 75% yield).

27.2 Sources and Types of Random Defects

In the cast of bad actors, particles invariably play the lead role. Ionic, metallic, and organic impurities, as well as trace amounts of moisture or oxygen, also commonly cause random defects. Contamination can result from virtually all aspects of semiconductor manufacturing, including processes, equipment, raw materials (chemicals, gases, wafers), fluid storage and delivery systems, wafer transport and storage (cassettes, stockers, and wafer boxes), the cleanroom, and people. Equipment, processes, and process materials contribute a majority of contamination to the wafer surface. Contamination from the cleanroom, people, and wafer transport/storage represents a very small portion of yield loss (<10%).

TABLE 27.2 Yield Model and Defect Budget Technology Requirements

Year of first product shipment                        2003   2005   2007   2010   2013   2016   2018
DRAM half-pitch (nm)                                  100    80     65     45     32     22     18
MPU/ASIC M1 half-pitch (nm)                           120    95     76     54     38     27     21
MPU printed gate length (nm)                          65     45     35     25     18     13     10
Critical defect size (nm)                             54     40     33     23     16     11     9
DRAM random defect D0 at production chip size
  and 89.5% yield (faults/m²)                         2216   3751   3190   3722   3722   2233   3545
MPU random defect D0 at production chip size
  and 83% yield (faults/m²)                           1395   1395   1395   1395   1395   1395   1395
Electrical D0 (faults/m²) at critical defect size
  or greater, 75% yield                               2210   2210   2210   2210   2210   2210   2210
Chip size (mm²)                                       140    140    140    140    140    140    140
# Mask levels - MPU                                   29     33     33     35     35     39     39
Random faults/mask                                    48     42     42     40     40     36     36

MPU random particles per wafer pass (PWP) budget (defects/m²) for generic tool type, scaled to 54 nm critical defect size or greater:
CMP clean                                             397    195    129    58     29     12     8
CMP insulator                                         961    472    312    141    71     30     20
CMP metal                                             1086   534    352    159    81     34     23
Coat/develop/bake                                     174    85     56     25     13     5      4
CVD insulator                                         854    420    277    125    63     27     18
CVD oxide mask                                        1124   552    364    165    83     35     24
Dielectric track                                      273    134    89     40     20     9      6
Furnace CVD                                           487    239    158    71     36     15     10
Furnace fast ramp                                     441    217    143    65     33     14     9
Furnace oxide/anneal                                  285    140    92     42     21     9      6
Implant high current                                  381    187    124    56     28     12     8
Implant low/medium current                            348    171    113    51     26     11     7
Lithography cell                                      294    145    95     43     22     9      6
Lithography stepper                                   279    137    91     41     21     9      6
Measure CD                                            332    163    108    49     25     10     7
Measure film                                          285    140    92     42     21     9      6
Measure overlay                                       264    130    86     39     20     8      6
Metal CVD                                             519    255    168    76     38     16     11
Metal electroplate                                    268    132    87     39     20     8      6
Metal etch                                            1153   566    374    169    85     36     24
Metal PVD                                             591    291    192    87     44     19     12
Plasma etch                                           1049   515    340    154    78     33     22
Plasma strip                                          485    238    157    71     36     15     10
RTP CVD                                               317    156    103    46     23     10     7
RTP oxide/anneal                                      208    102    67     30     15     7      4
Vapor phase clean                                     729    358    236    107    54     23     15
Wafer handling                                        33     16     11     5      2      1      1
Wet bench                                             474    233    154    70     35     15     10

(In the printed table, shading indicates whether manufacturable solutions exist, are being pursued, or are not known.)

Source: The International Technology Roadmap for Semiconductors, International SEMATECH, Austin, TX, 2003 Edition.

Particles originate from:

† sputtering/etching of electrode and chamber materials,
† flaking of deposits from chamber walls and wafer holders (chucks, clamps, pins, etc.),
† reaction of chemical species with moisture or oxygen leaks, generating solid-phase by-products, e.g., gas phase nucleation in tetraethyl orthosilicate Si(OC2H5)4 (TEOS) and W-chemical vapor deposition (CVD) processes,
† malfunction of filters, purifiers, and other fluid delivery system components,
† abrasion of wafer handlers due to misalignment,
† condensation due to poorly optimized process pump/vent schemes,
† re-deposition of by-products during wet processing due to non-optimized wet processes,
† stress cracking of deposited films on the wafer surface,
† gas and liquid chemicals.

Process and equipment control are essential for high yields. Typical contamination and defect sources for different processes are briefly described below.

27.2.1 Diffusion and Implant

Besides device design and process integration, trace ion/metal contamination control represents the biggest yield challenge. These impurities are found in silicon starting materials, inert and process gases, liquids, deionized (DI) water, and ion bombardment debris. Crystalline oxide precipitates, stacking faults, and other silicon structural defects can also impact yield and device performance. During gate oxidation, silicidation, and annealing operations, moisture and oxygen entering the reaction chamber (from incoming wafers, system leaks, or reactant gases) can change interface properties, oxide thickness, or sheet resistance. Very small particles (<0.1 µm) have the potential of causing early gate oxide breakdown in gate oxides with thickness less than 100 Å. Particles are also a major contamination source in polysilicon, nitride, and WSi2 low-pressure chemical vapor deposition (LPCVD) processes due to wall/clamp deposition and subsequent flaking of deposited films. Newer materials for oxides, advanced deposition methodologies, and newer precursor chemicals (typically more complex molecules) for film growth introduce more sources of particle generation or defectivity, and require attention at process introduction to develop reduced-particle or particle-free processes.

27.2.2 Surface Preparation

Most surface preparation processes use acids, bases, solvents, and DI water. In recent years, chemical quality (with the exception of some solvents) has improved to the point where ion and metal levels are in the low part-per-billion to part-per-trillion range. However, alkali and alkaline earth metals like Na, Ca, and Mg, and transition metals like Cu, Fe, and Ni, can cause gate oxide degradation, junction leakage, and device reliability problems even at these low levels. The purpose of surface preparation is to remove surface contaminants or residuals from prior processing, and the efficiency of the surface preparation process can determine the resultant level of defects. The improper or inefficient removal of metallic contaminants can result in undesirable enhanced growth or nucleation sites for later film growth, which become defect sites. Such contamination is typically not visible to inline inspection prior to the film deposition. Surface preparation can itself cause visible defectivity: precipitates or other residue can remain on the wafers afterwards because of inefficient rinsing or drying, or because of chemical incompatibility with substrate materials.

27.2.3 Photolithography

Airborne amine contamination is a known cause of chemically amplified deep ultraviolet (DUV) photoresist profile degradation. Chemical filters and non-chemically amplified DUV resists are now available to minimize the impact of ambient amines.


Airborne hydrocarbon contamination also deposits on stepper lenses, causing lens clouding. Particles in photopolymers are particularly difficult to filter, and backside wafer particles and film residues introduced from coater tracks and other equipment cause stepper defocus. Improved stepper chuck design to minimize wafer contact appears to be the best means of minimizing this problem; an overall wafer backside cleanliness program across the entire fab also helps. Wafer backside inspection tools now exist which can provide insight into the level of backside contamination experienced by the fab. Besides wafer level contamination, critical dimension (CD) control has a major impact on device speed and yield. Most of the sources of CD loss are from photolithography equipment and related processes. Critical levels such as gate definition, moat patterning, and metal 1 push the limits of DUV lithography and etch processes. Depth of focus can be very small, resulting in scumming, pattern erosion, and microbridging. For sub-0.25 μm design rules, photolithography-related process defects represent a major component of yield loss. Designers use selective size adjusts, phase shifters, optical proximity corrections, and serifs to make it easier to print critical geometries. These additional features on masks can make it harder for mask makers to build defect-free masks. Mask inspection, though typically handled within the lithography groups, is becoming a larger focus of the in-line defect inspection groups due to the complicated nature of the newer generation masks. Mask inspection tools present a dilemma in that they report large numbers of defects that may ultimately not be printable. Increasingly, mask qualification requires high-sensitivity in-line inspection of strategically printed wafers to determine the level of printable defectivity and the marginality of the photo process with respect to the design elements of the device. Furthermore, these masks and mask materials are sensitive to degradation from continued use and exposure to the DUV light sources, and must be periodically checked for evidence of this degradation. In order to push optical lithography to the 65 nm design rule node and potentially beyond, new and innovative resist and resist "assist" products are being introduced, which creates the need for process defect characterization and optimization. Defect characterization on resist levels has typically been problematic due to the anti-reflective nature of the films. Also, inspection techniques with DUV illumination or electron-beam inspection (EBI) that can provide the resolution needed for these design rules may tend to have a deleterious effect on the resist films themselves. Line edge roughness (LER) has also become a critical issue at critical layers for the 90 nm and below design rules, as device performance and device metrology can be affected.

27.2.4 Etch, Deposition, and Oxide or W Chemical–Mechanical Polishing (CMP)

Particles, etch residues, and incomplete etches are the biggest yield challenges for interconnect processes. Metal (Al, Ti, TiN) and dielectric deposition and etch processes generate more killer particles (including a high percentage of large particles greater than 1X design rule) than all other processes combined (with regard to Al-based backend process technologies). Many of the sources of these defects are summarized at the beginning of Section 27.2. Scratches and associated scratch debris from chemical–mechanical polishing (CMP) also cause yield loss (see Figure 27.1). Cu deposition, done by electrochemical (liquid immersion) means rather than physical vapor deposition (PVD), CVD, or sputtering, has unique defects as a result when compared to the other metals. Filling of high-aspect-ratio features is a significant source of concern, but voids are generally not detectable by optical-based inspection techniques. Voids of various kinds (buried and internal to the film, large and visible, or coalescence of small voids in later processing) are a major concern for Cu deposition and subsequent processing, since smaller voids will tend to coalesce into larger voids with later processing (annealing, stress, etc.). Often the as-deposited Cu film will not exhibit large enough voiding to produce an electrical signal, but with temperature and stress the voids can coalesce to result in electrical fails. Every process and piece of equipment should be monitored and controlled such that contaminants are not formed, or are detected and eliminated before they impact the wafer. However, short product lifecycles and the pressures of introducing new products to market on time have resulted in major design rule shrinks and process changes every 6–12 months.


FIGURE 27.1 Random defects seen from etch, deposition, and chemical–mechanical polishing (CMP) processes in ≤0.5 μm logic fabs. (Panels: via 2 barrier deposition, oxide hole to metal lines; tungsten 2 CMP, microscratch under laser illumination; metal 1 etch, pattern bridging; metal 1 etch, peanut defect.)

The manufacturing environment is becoming one of constant process/equipment change. In this environment, total elimination of random defects is unlikely. Transfer of well-characterized processes from development to manufacturing, and a system to detect and solve yield problems in manufacturing, are essential for rapid yield ramps.

27.2.5 Cu CMP and Damascene Multi-Level Metallization

As mentioned previously, the introduction of Cu dual- or single-damascene processing for metal interconnect levels has resulted in new and significant defect mechanisms. These processes have also presented significant challenges to yield management groups in terms of strategy and detection of these new defect issues. As interconnect capacitance becomes a more significant factor in overall device speed, the lower K-value dielectric materials introduced for inter- and intra-level oxides to counter this effect cause integration and defect problems. Some of the new defects introduced by this processing technology include the following (see also Figure 27.2):
† pits or surface voids in the Cu material,
† delamination or "blistering" between films in the dielectric and Cu stack due to stress and adhesion,
† surface residues post Cu CMP cleaning, which may become sites for initiating delamination or may block subsequent via connection to the Cu layer,
† Cu hillocks, which are spikes of Cu that rise out of the Cu film after polish and as a result of subsequent heat treatment,
† Cu corrosion, where exposure to photon energy results in Cu filaments or Cu extrusions from the trench, resulting in shorts,
† incomplete Cu polish resulting in electrical shorts, and
† resist poisoning from low-K dielectric stack material effluents that interact with DUV resists.


FIGURE 27.2 Examples of typical defects found in the Cu backend with damascene and low-K dielectric process technology. (Labels: ILD particles; pattern/blocked etch; resist poisoning; voids/pitting; incomplete Cu polish; blisters/peeling; corrosion; ILD defect/ripout; via punchthrough; carbon residue; missing Cu.)

27.2.6 Wafer Edge Engineering

A critical factor in yield improvement has been the proper engineering of the wafer edge region. Film peeling and delamination can be exacerbated at the edge of the wafer (typically the outer 1–3 mm) where the edge bead and exclusion area occur. This can be due to improper matching of the films applied to this region (e.g., allowing films that normally do not adhere to be in contact), which occurs due to mismatch of the exclusion zones between photo, etch, and film deposition tools. Some film depositions that do not have edge exclusion can leave residues at the bevel of the wafer that later peel or become loosened and re-deposit on the active part of the wafer. It is well known that the scribe area can also be a source of defects if the scribe process leaves residue; that residue can be moved across the wafer by subsequent processing (typically wet processing). Another edge-related issue is the impact of resist remaining in the exclusion region at the edge due to improper removal as part of the lithography process. This remaining resist can cause the build-up of films or material that may peel from those localized areas. Finally, many sites print reticle patterns outside of the "yieldable" area of the wafer in order to accommodate issues with CMP and other processing. This results in partially printed die in areas where film and other process uniformity is outside the expected process window. As a result, some of these patterns may be marginal and can break off during later processing and be re-deposited on the inner portions of the wafer, causing fails. See Figure 27.3 for examples of wafer edge regions that can cause defectivity further in due to peeling and delamination. It is important that yield management strategies comprehend the region of the wafer outside of the "yieldable" region, which can become a source of significant yield-limiting defectivity.


FIGURE 27.3 Wafer edge defectivity examples. (Panels: dielectric film blistering/peeling at the wafer edge (~1 mm); resist extension into the exclusion region at the edge; remaining film at the wafer edge due to resist extension creating a step.)

27.3 Yield Management Methodology

The function of a yield management program is to improve yields by predicting, detecting, reducing, and preventing yield losses in the shortest possible time. The intensity of these roles differs depending on the maturity of a manufacturing process. Figure 27.4 shows the stages of a yield ramp. Stage 1 refers to development and early productization, Stage 2 to final productization, Stage 3 to technology transfer and startup in a manufacturing fab, Stage 4 to volume ramp in manufacturing, and Stage 5 to achieving yield entitlement. During new process development and early productization (Stage 1), the role of a yield management program is to develop and implement an inspection/analysis plan to support process, material and equipment characterization, and diagnose systematic process integration issues. Defect inspection and analysis requirements are defined, and new equipment is evaluated and selected. During this stage, predictive yield models are developed, fab yield entitlement is estimated and unit operation defect reduction targets are set. As previously discussed, a technology node shrink requires baseline defect reduction to achieve similar yields. Hence, a baseline equipment defect reduction program should be initiated in the development/productization fab as well as at the manufacturing site that will receive the new process. Since yields are likely to be low at this stage, metrics of success are improvements in equipment defectivity and reduction of integration-related yield loss.
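As an illustration of the predictive yield modeling done at this stage, the sketch below evaluates the widely used negative binomial and Poisson die-yield models. This is a minimal sketch: the defect density, die area, and clustering parameter values are hypothetical, and a real fab model would be fitted to measured yield data.

```python
import math

def negative_binomial_yield(defect_density, die_area, alpha=2.0):
    """Negative binomial yield model: Y = (1 + A*D0/alpha)^(-alpha).

    D0 is the killer defect density (defects/cm^2), A is the die area
    (cm^2), and alpha is the defect clustering parameter. As alpha grows
    large, this reduces to the Poisson model below (no clustering).
    """
    return (1.0 + die_area * defect_density / alpha) ** (-alpha)

def poisson_yield(defect_density, die_area):
    """Poisson yield model: Y = exp(-A*D0), assuming unclustered defects."""
    return math.exp(-die_area * defect_density)

# Hypothetical 1 cm^2 die at D0 = 0.5 killer defects/cm^2:
print(f"negative binomial: {negative_binomial_yield(0.5, 1.0):.1%}")  # ~64.0%
print(f"Poisson:           {poisson_yield(0.5, 1.0):.1%}")            # ~60.7%
```

Comparing such models against measured yields at several die areas is one way to estimate both the killer defect density and the degree of defect clustering.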


FIGURE 27.4 Stages of yield ramp for new technologies. (Yield vs. time across five stages: early development, final development, manufacturing startup, volume ramp, and high-volume manufacturing. Drivers include small defect focus, multiple learning loops, excursion detection, the number of engineers focused on yield, and a clear signal-to-noise ratio.)

As productization progresses to Stage 2, process recipes become firmer, systematic yield losses become well understood (if not solved!), and the focus shifts to demonstration of manufacturability. For yield management, the emphasis shifts from systematic to random defect reduction. A large number of inspection steps at high defect detection sensitivity are introduced into the process flow to characterize defects from each process segment. A defect pareto is established and baseline improvement is addressed through focused teams. A strategy to deal with excursion control is also implemented. Defects are thoroughly characterized using inspection and analytical instruments. A database of observed defects, root causes, and problem solutions is developed and documented. The yield model developed in Stage 1 is verified and refined with probe data. Stage 2 is also the time when engineers from the receiving fab come to the development fab to be trained on new tools and methods. Metrics for Stage 2 include electrical yield, yield model accuracy, process/equipment defect levels, and project schedule. Stage 3 moves from development to the manufacturing site (or, if the sites are the same, it transfers ownership from the development team to the manufacturing team). Various companies have developed methodologies (such as "copy exactly") to achieve the same or higher yields in production immediately after technology transfer. Copying equipment and processes facilitates duplication of productization yields in manufacturing. It is also important to copy the same inspection equipment set in manufacturing; this allows for transfer of inspection recipes, defect baselines, and yield models with minimum modifications. Initially, the same number of inspection steps and the same sampling plan should be implemented in manufacturing. Once a baseline pareto is established and critical processes are fingerprinted, some of these inspections can be dropped and the sampling plan reduced. The key metrics for this stage are yield parity with the mother fab (or better), a clear in-line defect pareto, and achievement of baseline defect reduction goals. In Stage 4, the number of process tools and the number of lots in line increase dramatically. The objective is to ramp volume and improve on the yields demonstrated in Stage 3, while at the same time qualifying new equipment and processes.


If the process is transferred from development to manufacturing at high yields, the primary objective may be to maintain these yields in the midst of all the equipment additions. A strong, well-trained yield management team with established methods is the key to success. This team must set up and lead crossfunctional yield improvement actions to tackle problems that invariably arise. Major yield gains are often achieved by solving a few yield problems at the top of the defect pareto. In Stage 5, a fab has completed volume ramp and the yield curve is starting to plateau. Additional yield gains require defect reduction at many points in the process flow. There are no silver bullets or low-hanging fruit. The entire fab needs to be involved in baseline improvement to see 0.5%–1.0% yield gains. Management commitment to yield improvement is probably the biggest success factor. Excursions have to be detected early so as to minimize the number of impacted lots. Small yield variances and tails in the yield distribution have to be analyzed carefully to squeeze out additional yield improvements. A re-examination of a fab's yield entitlement is warranted at this stage, along with a cost-benefit analysis of the gap between fab yields and entitlement. Additional yield gains may require new equipment or major device redesigns. At each of the above stages, the following elements form the basis of a good yield management program:
† management priority and motivation,
† process and equipment control,
† product-based defect detection and analysis,
† yield impact prediction,
† root cause isolation,
† implementation of a solution, and
† verification of the solution.

There are important differences between process and equipment control and the line monitoring methodologies that are part of product-based defect detection. Principally, process and equipment control methodologies are carried out using unpatterned wafers and the appropriate inspection systems for that purpose, whereas line monitor methodologies are performed on patterned product wafers at pre-planned inspection points in the process flow on the corresponding appropriate inspection tools. Both strategies are needed for a successful yield management methodology. Equipment control methods focus on the defects produced by unit processes such as etch, photo, CMP, etc. While unit process defectivity is a critical concern, there are significant defect issues that result from unit process interactions, usually called integration defects. The line monitoring methods are designed to detect these integration defects, including defects related to topography and patterning issues. Hybrid methods exist that use similar inspections to cover both strategies. In particular, some sites utilize product-based tool monitoring: lots are sampled for inspection both before and after a unit process, and the added defectivity is considered the unit process defectivity. This method, however, suffers from several issues of its own:
1. replacing lower cost inspection (unpatterned wafers) with higher cost inspection (the typical inspection tools used for this purpose would be of medium cost comparatively),
2. issues with sensitivity before and after some processes (e.g., a film deposition may highlight previously undetected defects, causing errant high adder counts),
3. logistical issues with sampling lots appropriately to cover the entire process tool base for a particular unit process, and
4. increasing the cycle time of production lots by implementation of more rigorous and more frequent in-line inspections.
One of the advantages of the hybrid tool monitoring method, particularly attractive to 300 mm wafer fabs, is that the use of pilot wafers and their associated costs can be dramatically reduced.


Another important advantage of the hybrid method is the ability to correlate tool defectivity with defectivity on product. Whether the increased costs associated with the added logistics, cycle time, and inspections offset the savings in wafer costs must be evaluated for each fab. Furthermore, the ability to successfully identify defects of interest and pinpoint their source should be a consideration. A sketch of the adder computation at the core of this method follows.
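The sketch below is a minimal, hypothetical illustration of how added defectivity ("adders") might be computed from pre- and post-process defect maps; the coordinate matching tolerance and defect coordinates are assumptions, and production systems rely on the inspection tools' own map-to-map alignment and matching.

```python
import numpy as np

def count_adders(pre_xy, post_xy, match_tol_um=50.0):
    """Count defects added by a unit process in product-based tool monitoring.

    A post-process defect is an "adder" if no pre-process defect lies
    within the coordinate matching tolerance (stage accuracy) of it.
    """
    if len(pre_xy) == 0:
        return len(post_xy)
    pre = np.asarray(pre_xy)
    adders = 0
    for p in np.asarray(post_xy):
        dists = np.hypot(pre[:, 0] - p[0], pre[:, 1] - p[1])
        if dists.min() > match_tol_um:
            adders += 1
    return adders

# Hypothetical defect coordinates (um) before and after an etch step:
pre = [(1000.0, 2000.0), (52000.0, 7000.0)]
post = [(1010.0, 1995.0), (52010.0, 7005.0), (90000.0, 90000.0)]
print(count_adders(pre, post))  # -> 1 adder
```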

27.3.1 Management Priority and Motivation

Every fab has multiple priorities, such as cost, cycle time, yield, and wafer output. The rate of yield improvement is closely tied to the priority assigned by management to yield. Progressive fabs set clear goals, maintain visible indices to track yield, and offer incentives for achieving or exceeding goals. Employees at all levels are encouraged and motivated to improve process control and reduce defects. The importance of management support at all levels in prioritizing yield improvement efforts cannot be emphasized enough. Yield improvement is a multi-disciplinary effort requiring the cooperation of all groups to achieve lasting success. Most managers would agree that yield improvement is an important and vital need, but many do not translate that into actual goals, since such efforts are not perceived to be under their direct control and appear ill-defined with regard to their typical daily activities. A culture where yield improvement activities are emphasized and rewarded must be developed and continually encouraged, since the work is often in conflict with other goals such as improved lot cycle time.

27.3.2 Process and Equipment Control

The mechanisms of particle formation in process equipment are different in each of the cases discussed in Section 27.2, and a fundamental understanding of the physics and chemistry of many of these mechanisms is lacking. Some of these sources, such as condensation or gas phase nucleation, can be systematically eliminated through process recipe optimization and elimination of system leaks. Others, such as flaking, can be minimized through regular chamber cleaning, use of parts with high surface area, or use of special procedures like "PVD pasting": during Ti and TiN sputter deposition, chamber walls are "pasted" with Ti after each wafer is processed to "glue" any flakes to the chamber wall. Regular preventive maintenance can also go a long way toward minimizing particle events. In many instances the only recourse is to clean wafers after a particularly dirty process; for example, brush or megasonic scrubbers are commonly used to clean product wafers after sputter deposition or chemical mechanical planarization. Regardless of the origin, it is important to know the quality of process/equipment on a regular basis. Factories monitor particle levels of all equipment using unpatterned pilot wafers once per shift or once per day. Wafers are scanned through an unpatterned wafer inspection tool before and after a pilot run to get the number density of particles added by the process tool. To obtain good readings, it is important to process pilot wafers under conditions similar to those used for product wafers. Far too often, factories monitor equipment without running the process, i.e., they do not pass process gases through the chamber or do not turn the radio frequency (RF) power on. Under such conditions, these measurements are poor, or completely irrelevant, indicators of the health of the process or equipment. Pilot wafer-based equipment monitoring and control is the most common approach to determining whether a piece of equipment is in a "ready" state for production. Pilot wafers are used to qualify a new process, to bring a tool to production readiness after a maintenance operation, or for routine checkups. Particle measurements are plotted on control charts and standard statistical process control (SPC) rules are applied to determine the next step. Since equipment can be shut down based on pilot wafer readings, it is important to establish the credibility of these measurements by periodically correlating them to product wafer measurements using patterned wafer inspection equipment. With higher throughput patterned wafer inspection equipment, there is a trend to do away with unpatterned wafer inspections altogether and replace them with product inspections. This reduces pilot wafer costs and the equipment down time associated with monitoring. It also eliminates the need to correlate pilot and product measurements.
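The control charting mentioned above can be illustrated with a minimal c-chart sketch, assuming Poisson-distributed pilot-wafer particle counts. The baseline data and the simple 3-sigma rule are illustrative; fabs typically apply a fuller set of SPC rules.

```python
import numpy as np

def c_chart_limits(baseline_counts):
    """Center line and 3-sigma limits for a c-chart of particle counts.

    For Poisson-distributed counts the variance equals the mean, so the
    control limits are c_bar +/- 3*sqrt(c_bar).
    """
    c_bar = np.mean(baseline_counts)
    ucl = c_bar + 3.0 * np.sqrt(c_bar)
    lcl = max(0.0, c_bar - 3.0 * np.sqrt(c_bar))
    return c_bar, lcl, ucl

# Hypothetical pilot-wafer adder counts from a qualified tool:
baseline = [4, 2, 5, 3, 4, 6, 2, 3]
c_bar, lcl, ucl = c_chart_limits(baseline)

new_reading = 19
if new_reading > ucl:
    print(f"Out of control: {new_reading} > UCL {ucl:.1f}; hold tool for requal")
```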


Product-based inspection is also needed when monitoring processes that are prone to an increasing frequency of wafer-to-wafer defect excursions as a tool-related failure mechanism. In this case, monitoring the defectivity of one or two wafers on a daily basis does not give a true picture of the health of the tool. Figure 27.5 shows defect counts of 12 unpatterned tool monitor wafers run sequentially in a process tool in poor condition, as seen in the defect excursion rate. Single wafer dielectric deposition tools with in situ cleans linked to wafer count or deposition thickness are a prime example of such a tool. In this case, unpatterned tool monitoring capability is only needed for diagnostics and post-maintenance qualification before a return to service. Even so, and because patterned wafer inspection machines are still 4–6 times more expensive and 3–10 times slower, it is unlikely that unpatterned wafer-based equipment monitoring will disappear in the near future. Particle wafer monitors are limited in their usefulness because they only provide a snapshot of process quality a few times a day. Ideally, we would like sensors that monitor the process in real time. Such in situ process sensors (ISPMs) have been demonstrated to work in only a limited number of process and equipment combinations.

27.3.2.1 In Situ Contamination Monitoring

Real time contamination monitoring of process equipment is desirable because it reduces the time to detect an excursion or baseline shift. Such sensors have been used for many years to monitor particles, moisture, and oxygen in bulk gases; particles and total organic carbon (TOC) in DI-water; and particles in liquid chemical distribution systems. Sensors for use on process equipment have had limited success. In situ particle sensors have been successfully applied to monitor some etch, CVD, diffusion, ion implant, and wet cleaning equipment. These sensors are installed in the exhaust line, downstream from the process chamber. Residual gas analyzers have been used for process diagnostics on a few processes, but their application has been limited by their size and the expertise required to interpret complex spectral information. When they work, ISPMs provide rapid information about contaminants generated during actual process conditions. Most in situ particle monitors detect particles using scattered light. Figure 27.6 is a schematic of one such sensor. A laser and a set of optics create an intense, focused beam of light projected through an area through which particles travel. Most in situ sensors use laser diodes as the optical source because they are small, powerful, reliable, and relatively inexpensive [8]. As particles traverse the beam, light scatters in short pulses to one or more detectors. The detectors, typically photodiodes, convert the light pulses to electrical signals that are counted over time. The small, modular design allows flexibility in locating the sensor in the exhaust line near the process chamber of the tool.
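The pulse counting performed by such a sensor can be sketched as threshold-crossing detection on the digitized detector signal. This is a minimal, hypothetical simplification: the trace, threshold, and single-sample pulses below stand in for real pulse-height analysis electronics.

```python
import numpy as np

def count_particle_pulses(signal, threshold):
    """Count discrete scattering pulses in a photodetector trace.

    A particle crossing the beam produces a short pulse of scattered
    light; each upward crossing of the threshold counts as one event.
    """
    above = signal > threshold
    rising_edges = np.flatnonzero(~above[:-1] & above[1:])
    return len(rising_edges)

# Hypothetical digitized detector trace (arbitrary units):
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 0.05, 10_000)   # background/electronic noise
trace[[1200, 4700, 8100]] += 1.0        # three scattering pulses
print(count_particle_pulses(trace, threshold=0.5))  # -> 3
```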

FIGURE 27.5 Defect counts on 12 unpatterned tool monitors run sequentially in a process prone to excursions. (Per-wafer counts: 657, 137, 26, 29, 10, 2, 3, 3, 12, 6, 1, 17.)


FIGURE 27.6 Schematic of an in situ particle sensor. (A laser and optics form a focused beam; a particle crossing the beam scatters light to a detector.)

Since ISPMs are located near the process chamber, where they are exposed to vibration, corrosive chemicals, electrical noise, and optical interference, the sensors must be robust. The application requires a high signal-to-noise ratio, and the sensor often uses optical filters to reduce interference. An example of a successful particle ISPM application is on AMAT P5000 tungsten CVD systems (see Figure 27.7). Menon and Grobelny [9] reported good correlation of ISPM data to in-line wafer defect data and metal short fails, documented particle "signatures" for excursions, and verified that the NF3 clean step is the main source of particles in the process. The ISPM data in Figure 27.7a indicated a shift in the particle performance of the W-CVD chamber nearly 3 weeks before the pilot wafer particle check failed (Figure 27.7b). Note in Figure 27.7b that the particle levels on the pilot wafer check are not statistically different before and after the chamber clean. The pilot wafer particle check did not indicate a particle problem with the W-CVD chamber until there was a catastrophic flaking of particles (the chamber was then cleaned). In Figure 27.7a, the ISPM data show a distinctive increase in particles well before the chamber was cleaned. A major deficiency of current ISPM sensors is that they do not "plug and play"; each equipment application requires customization. Because they are installed downstream of the process chamber, they detect particles only if pressure and fluid transport streamlines are conducive to moving particles from the chamber to the detection point. In low pressure processes like sputter deposition, gravitational forces are much larger than drag forces, causing particles to deposit in the chamber rather than being transported through the exhaust line. Takahashi and Daugherty [10] have published an extensive review of this technology, its advantages, and its limitations. In situ process sensors, however, address only a portion of the in-line defectivity that affects yield: that related to fall-on particle contamination. Defects due to process integration, for instance, would not be captured by this technique. In the past few years, ISPMs have been integrated into the semiconductor process equipment where they are applicable, and are now part of the standard control system. Finally, one of the drawbacks of ISPM monitoring is the initial lack of ability to correlate readings to yield-limiting defectivity. Some processes have multiple stages that produce particle-type effluents but do not cause particulate contamination on the wafer surface, at least not in a way that causes yield loss. The ability to directly demonstrate correlation of a detected defect problem to yield loss is one of the significant tools in yield management; by establishing such causation, the yield engineer is able to more effectively direct management and engineering resources toward eliminating the defect problem. As defect monitoring methods move away from patterned wafer analysis, such causation becomes more difficult to ascertain. In the case of ISPMs, there is a need to focus on those stages in the process which ultimately produce the yield-impacting defectivity. Typically, such correlations are determined by studying the relationship between the ISPM readings from various process stages and the impact on wafers in the system at the time. Therefore, new processes will need such characterization in order to successfully implement ISPM monitoring.
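Establishing the credibility of an ISPM installation ultimately comes down to a correlation study like the one reported above. A minimal sketch, with hypothetical paired data, is shown below.

```python
import numpy as np

def ispm_wafer_correlation(ispm_counts, wafer_adders):
    """Pearson correlation between per-run ISPM counts and wafer adders.

    A strong positive correlation supports using the ISPM reading as a
    proxy for yield-relevant defectivity from that process stage.
    """
    return np.corrcoef(ispm_counts, wafer_adders)[0, 1]

# Hypothetical paired data: exhaust-line ISPM counts vs. wafer adders
ispm = [120, 85, 300, 90, 240, 110]
adders = [14, 9, 41, 11, 30, 12]
print(f"r = {ispm_wafer_correlation(ispm, adders):.2f}")
```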


FIGURE 27.7 Results from an in situ process sensor (ISPM) application for tungsten chemical vapor deposition (CVD) in an Applied Materials P-5000 chamber: (a) in situ counts from the deposition chamber (expanded scale 5000+) from 12/6/95 to 1/1/96; (b) pilot wafer particle counts (expanded scale 900) from 11/27/95 to 1/16/96, with averages of 3.9 before and 2.8 after the chamber clean. (From Menon, V. and Grobelny, M., AVS Symposium, San Jose, CA, 1997.)

27.3.2.2 Unpatterned Wafer Monitoring

Since ISPMs address a limited number of applications on process tools, there is still a heavy reliance on particle measurements on test wafers to qualify, if not monitor, process equipment for defectivity. The majority of these measurements are made using unpatterned pilot or test wafers (unless the site has opted for a tool monitor method on product wafers). A variety of inspection systems designed to inspect wafers with blanket films or no films, and with no pattern, are available. These typically use a laser-based scattering inspection methodology. Due to the lack of pattern "interference" with the inspection, these systems can achieve relatively high sensitivity to particles, scratches, pits, etch blocks, other defects, and even the surface roughness of the films themselves. The inspections are typically fast: a 200 mm wafer can be inspected in about 1 min or less. Most patterned wafer inspection tools can also perform unpatterned wafer inspection, but typically at a reduced throughput.


Since unpatterned wafer inspection tools cost much less than patterned wafer tools, using the latter for this purpose is not considered appropriate unless the type of illumination and detection available on the patterned wafer tool provides an added benefit. Unpatterned wafer inspection tools are typically sensitive to the film or film stack on the wafer, and thus need to be calibrated or optimized to obtain maximum sensitivity. A unique aspect of these systems is that they can be calibrated to known-size standards traceable to the National Institute of Standards and Technology (NIST). These standards are typically polystyrene latex (PSL) spheres that can be obtained in several calibrated sizes. The ability to detect these spheres of differing size bins on particular film stacks gives a measure of sensitivity that is traceable to NIST. However, the defects encountered in semiconductor manufacturing are mostly not spherical like the PSL spheres and are of a wide variety of materials that differ from PSL. Because the illumination light interacts differently with the actual defects of interest, the sensitivity to these defects may be significantly different from that achieved with PSL spheres. In addition, the NIST-standard PSL sphere sizes are currently larger than the size range of defects of interest, and calibration to these sizes may not be particularly relevant to existing semiconductor manufacturing needs. Achieving sensitivity to defects of interest (90 nm and below design rules) is more often accomplished by optimizing the signal-to-noise ratio using the variety of on-board analysis capabilities than by the PSL sphere technique, since the latter requires special equipment and techniques to properly produce calibration wafers. It is important that scanning electron microscopy (SEM) review of the defects detected on the unpatterned tool monitor be performed at initial monitor setup, and periodically thereafter, to aid in troubleshooting tool defect excursions. The SEM review at monitor setup is used to build an understanding of the defect types present on the monitor. This is important for several reasons. One is to make sure that the monitoring is detecting the defects of interest (DOI), that is, the defects which have an impact on yield and product reliability as determined through the defectivity seen in-line and from end-of-line failure analysis (FA). This method also increases the fundamental understanding of process-related defectivity, allowing a clearer differentiation from integration-related defect mechanisms. Setting up tool defect monitors based solely on indicated defect size relative to PSLs often results in monitoring nuisance defects rather than the defects of interest, or in missing a defect type entirely because it is perceived as the noise floor of the wafer.
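The PSL size calibration described above can be sketched as a log-log power-law fit of scattering signal vs. sphere diameter. The bin sizes and signal values below are hypothetical, and the d^6 exponent applies only in the small-particle (Rayleigh) scattering limit.

```python
import numpy as np

def fit_psl_calibration(psl_sizes_um, signals):
    """Fit a log-log power-law calibration: signal = a * d^b.

    In the Rayleigh regime (d much smaller than the wavelength) the
    exponent b approaches 6, which is why sensitivity falls off steeply
    below the smallest calibrated PSL bin.
    """
    b, log_a = np.polyfit(np.log(psl_sizes_um), np.log(signals), 1)
    return np.exp(log_a), b

def psl_equivalent_size(signal, a, b):
    """Convert a measured defect signal to a PSL-equivalent diameter (um)."""
    return (signal / a) ** (1.0 / b)

# Hypothetical PSL calibration bins (um) and measured scatter signals:
a, b = fit_psl_calibration([0.10, 0.125, 0.155, 0.20], [1.0, 3.8, 13.9, 64.0])
print(f"exponent b = {b:.1f}")
print(f"30-count defect ~ {psl_equivalent_size(30, a, b):.2f} um PSL-equivalent")
```

As the surrounding text notes, a real defect of the same scattered intensity may be far from spherical, so the PSL-equivalent size is a convention rather than a physical diameter.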

27.3.2.3 Wafer Backside Inspection

As lithographic depth-of-field process windows decrease with shrinking design rules, the impact of wafer backside contamination on frontside defectivity increases [13–15]. Particles, residues, or pits and damage on the backside of the wafer can be correlated to lithographic "hot" spots or out-of-focus spots. Backside defectivity can also result in localized defectivity in etch or other processes if there is a lack of contact between the substrate and the tool chuck, which causes improper heat distribution or, if severe enough, arcing. Finally, backside particles and residues can be passed on to the wafer frontside during subsequent processing. Tools specifically designed to inspect the wafer backside have been available for several years to address this need; before that, techniques using the unpatterned wafer tool sets were in place, usually requiring the sacrifice of the frontside of the wafer. Some manufacturing facilities include wafer backside measurements to qualify and monitor tools or processes that are historically likely to cause wafer backside contamination. An example of wafer backside inspection capturing a defect that results in frontside defectivity is shown in Figure 27.8. Higher backside sensitivity and less noise can be expected with 300 mm wafers when compared to 200 mm or smaller wafer sizes, particularly at the early stages of the process, since 300 mm wafers are polished on the backside. As the wafers proceed through the process, though, processing and handling add significant backside defectivity that is not necessarily relevant to frontside defectivity and, ultimately, yield. Analysis of backside wafer maps is largely based upon "signatures," that is, spatially distributed patterns of defects in recognizable shapes, because of the high defect counts typically encountered.


FIGURE 27.8 Example of a hotspot induced by backside defects (backside inspection result map vs. frontside inspection result map).

Applications and data analysis of wafer backside inspection defectivity and patterns are still at a relatively early stage of development. These analyses are aimed at reducing the high defectivity counts and patterns so as to identify the backside defects and defect signatures of interest.
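A minimal sketch of signature-oriented analysis of a backside map is shown below: defects are binned into a coarse grid and only anomalously dense cells are flagged. The grid size, threshold, and simulated handler arc are hypothetical assumptions, not a production signature-recognition algorithm.

```python
import numpy as np

def signature_cells(xy_mm, wafer_radius_mm=150.0, grid=20, min_count=15):
    """Flag grid cells on a backside defect map with anomalously high density.

    Backside maps often carry thousands of defects; binning into a coarse
    grid and thresholding highlights spatial "signatures" (e.g., chuck or
    handler contact patterns) rather than individual defects.
    """
    bins = np.linspace(-wafer_radius_mm, wafer_radius_mm, grid + 1)
    hist, _, _ = np.histogram2d(xy_mm[:, 0], xy_mm[:, 1], bins=(bins, bins))
    return np.argwhere(hist >= min_count)  # (row, col) indices of hot cells

# Hypothetical map: uniform background plus a dense arc from a handler
rng = np.random.default_rng(1)
background = rng.uniform(-140, 140, size=(2000, 2))
theta = rng.uniform(0, np.pi / 2, 500)
arc = np.column_stack([120 * np.cos(theta), 120 * np.sin(theta)])
print(len(signature_cells(np.vstack([background, arc]))))  # cells along the arc
```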

27.3.3 Product-Based Defect Detection and Analysis (Line Monitor)

We have always relied on electrical test, with associated electrical and physical analyses, as a definitive measure of yield loss. However, the cycle time to detect a defect problem this way can be 30–60 days (see Figure 27.9); for advanced flows with up to 11 layers of metal, it can exceed 60 days. In the last 15 years, in-line defect detection equipment has matured to the point where it is deployed at critical points in the process flow to characterize process defectivity (ILM in Figure 27.9). This has reduced the time to detect random defects to a few days or even hours.

FIGURE 27.9 Relative timescales to detect defect problems in a factory: in situ sensors respond in hours, station monitors and in-line monitors (ILM) in days, and electrical test (E-test) in weeks to months, with the number of lots at risk growing accordingly. Adding station monitors enhances yield learning and reduces the number of lots at risk per learning cycle. (From Menon, V. and Grobelny, M., AVS Symposium, San Jose, CA, 1997.)


FIGURE 27.10 Typical tool set and inspection locations for yield management: monitor and product inspections at deposition, photo, etch, and clean/test steps, plus ISPMs, feed defect review (ADC, SEM/EDX analysis), the yield management system, and the factory operating system. (Thicker lines indicate data flow; thinner lines indicate wafer movement.)

A typical tool set deployment for defect detection and analysis is shown in Figure 27.10 and consists of inspection, review, and analysis tools, and a yield management system. Patterned wafer inspection equipment is used after deposition, etch, or photolithography operations, while unpatterned wafer inspection equipment is used for equipment monitoring. These inspection tools provide the size, number density, and coordinate location of detected defects. In addition, some current inspection tools can classify defects based on defect features, known as automatic defect classification (ADC), at the cost of some added time to train the inspection tool. The inspection tools may also provide optical images of selected defects. Optical and SEM review equipment are used to view and classify defects; they can augment the inspection tool capability or be used independently if the inspection tool classification capability is too limited for the defects in question. Defect review and classification provide visual confirmation and additional information (shape, texture, color, location above or below a process film, etc.). A yield management software/hardware system collects and analyzes data from these inspection and review tools, and correlates them to electrical test, parametric test, and factory computer integrated manufacturing (CIM) data (such as equipment ID, process recipe, etc.). One should also not underestimate the need for defect images in this analysis. The objectives of an optimal inspection plan are to provide a baseline defect density profile of the process and to detect defect excursions immediately after they occur. A sound sampling strategy will minimize the detection delay after a defect excursion, thus minimizing material at risk. Obviously, one needs to accomplish this at a reasonable cost of inspection; a simple planning sketch follows this paragraph. Figure 27.11 depicts questions that need to be answered before an in-line defect detection plan can be put in place [2]. Where should one inspect? What percentage of lots should one inspect? How many wafers per lot? What percentage of the wafer area? At what defect size sensitivity? Typically, 2–3 product wafers from 50 to 100% of all lots are inspected on patterned wafer inspection tools. The same wafers are inspected at different points in the line, allowing for calculation of the cumulative number of defects in the line as well as partitioning of defects from different process segments. A 0.25 μm logic or memory process flow may have 25–40 inspection points depending on the number of deposition, CMP, etch, and photolithography steps.
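The material-at-risk consequence of a sampling plan can be estimated with a simple calculation like the sketch below. The geometric 1/f term and the parameter values are illustrative planning assumptions, not a substitute for the formal sampling optimization discussed in reference [2].

```python
def expected_lots_at_risk(lot_sampling_fraction, lots_per_day,
                          detect_delay_days=0.0):
    """Rough expected number of lots processed before an excursion is caught.

    With a fraction f of lots inspected, on average about 1/f lots pass
    before the first inspected lot after the excursion, plus any lots
    started during the inspection/review delay. A simplified estimate.
    """
    return 1.0 / lot_sampling_fraction + detect_delay_days * lots_per_day

# Inspecting 50% of lots at 20 lots/day with a half-day review delay:
print(expected_lots_at_risk(0.5, lots_per_day=20, detect_delay_days=0.5))  # 12
```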


FIGURE 27.11 Factors to be considered while developing an in-line defect sampling plan: inspection tools, inspection steps, percentage of lots, wafers per lot, percentage of wafer area, and sensitivity (e.g., pixel sizes from >0.39 μm at isolation, >0.20 μm at poly/contact, to >0.12 μm at metal 1). (From Akella, R., Jang, W., Kuo, W.W., Nurani, R.K., and Wang, E.H., KLA Yield Management Seminar, Santa Clara, CA, 1996.)

For 65 nm logic process technology flows, the number of inspection points scales roughly with the increase in the number of metal layers, typically from four to seven or more. The inspection sensitivity is process dependent, and the user has to balance the need to detect as many true defects as possible against the need to keep wafer throughput high and false alarms low. Figure 27.12 is an example of an in-line defect pareto. It shows the number of defects detected at each inspection point after any previously detected defects have been subtracted; essentially, it provides a "loop"-level defect density. The risk of not detecting a defect excursion increases as the sampling rate drops. In studies by Intel [3] and Advanced Micro Devices (AMD) [4], lot-to-lot variability was found to be much higher than wafer-to-wafer variability. Hence it is better to inspect more lots than more wafers per lot.
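The lot-to-lot vs. wafer-to-wafer comparison behind that conclusion can be sketched as a simple variance decomposition; the counts below are hypothetical.

```python
import numpy as np

def variance_components(defects_by_lot):
    """Split defect-count variance into lot-to-lot and within-lot parts.

    If lot-to-lot variance dominates (as the Intel and AMD studies
    report), sampling more lots beats sampling more wafers per lot.
    """
    lot_means = np.array([np.mean(lot) for lot in defects_by_lot])
    within = np.mean([np.var(lot, ddof=1) for lot in defects_by_lot])
    between = np.var(lot_means, ddof=1)
    return between, within

# Hypothetical counts for 4 lots x 3 wafers:
lots = [[12, 14, 13], [45, 48, 44], [9, 11, 10], [30, 28, 29]]
between, within = variance_components(lots)
print(f"lot-to-lot {between:.0f} vs. wafer-to-wafer {within:.1f}")
```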

FIGURE 27.12 Example of an in-line defect pareto (defect counts at the gate etch, pad 3, metal 1 etch, metal etch, and via etch inspection points).


FIGURE 27.13 Manually classified pareto for a single inspection point (metal 1 etch). (Classes include nuisance, defective metal lines, small/medium embedded particles, deformed pattern, surface fall-on, STI defects, resist peeling, blocked etch, residue, and broken or missing pattern.)

The goal of defect review and classification is to provide additional information on specific defects that allows for rapid root cause identification. This represents the best way to segment loop-level defect density to individual process steps within the loop. Each patterned wafer inspection will have a classified defect pareto. Different companies use different approaches to defect review and classification using optical microscopes and SEMs. Some review all defects detected in-line, while others review only "out of control" wafers or a certain percentage of defects on each wafer. Manual review and classification is done by trained operators who refer to a database of defect pictures and a classification "naming" code. For example, large defects will be separated from small defects, particles will be separated from etch residue, or defects lying on top of the pattern will be separated from those under the pattern. Figure 27.13 is a typical pareto of classified defects. As mentioned previously, ADC algorithms have been incorporated into patterned wafer inspection tools so that classification can be done automatically after each inspection. This has resulted in a big improvement in the accuracy and precision of classification over that of human operators. Figure 27.14 shows the performance of an ADC system built into a patterned wafer inspection tool. Accuracy of classification is determined relative to an "expert" human. Automatic defect classification accuracy is reported to be over 75%; in contrast, the accuracy of human operators across multiple shifts is typically below 50%. Another advantage of ADC is the reduction in time to results. When classification is done manually on a separate optical review station, additional queue time is introduced between inspection and classification. In the study by Bennett et al. [5], ADC was over three times faster than manual classification (Figure 27.15). This technology is fast gaining acceptance as a valuable tool for yield management. Original ADC algorithms required the collection of a sample of defect images after the completion of the initial inspection, due to the need for higher resolution imaging than typically used for the inspection itself. Unfortunately, this leads to a trade-off with throughput and inspection tool capacity, as it adds time to the overall inspection. Newer algorithms allow use of the information available from the defect during the inspection itself, which allows for classification of all defects without significant added overhead in the inspection process.
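The accuracy metric from Figure 27.14 can be computed as in the sketch below, treating the expert review labels as ground truth; the defect labels themselves are hypothetical.

```python
def adc_accuracy(adc_labels, expert_labels):
    """Accuracy of ADC relative to an expert human, per Figure 27.14:
    number of defects correctly classified by ADC divided by the number
    correctly classified by the expert (taken here as ground truth)."""
    assert len(adc_labels) == len(expert_labels)
    correct = sum(a == e for a, e in zip(adc_labels, expert_labels))
    return correct / len(expert_labels)

# Hypothetical labels for ten reviewed defects:
expert = ["particle", "residue", "particle", "scratch", "particle",
          "residue", "bridge", "particle", "scratch", "residue"]
adc    = ["particle", "residue", "particle", "particle", "particle",
          "residue", "bridge", "residue", "scratch", "residue"]
print(f"ADC accuracy = {adc_accuracy(adc, expert):.0%}")  # -> 80%
```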


FIGURE 27.14 Typical accuracy from an automated defect classification system applied to a 0.25 μm process flow: 77–93% accuracy over 10–19 defect classes at four inspection points (TRET, PAD 3, gate, and M1ET), where accuracy = number of defects correctly classified by ADC / number of defects correctly classified by expert humans. (From Bennett, M., Garvin, J., Hightower, J., Moalem, Y., and Reddy, M., KLA Yield Management Seminar, San Francisco, CA, 1997.)

FIGURE 27.15 Comparison of the time required for manual vs. automatic defect classification (ADC) on a KLA 2132 inspection: manual off-line classification took 212 min total (60 min defect detection, 120 min queue, 30 min classification, 2 min analysis) vs. 68 min with ADC (60 min detection, 6 min classification, 2 min analysis). Sample plan: two 200 mm wafers, 100% coverage, 0.39 μm sensitivity, 50 defects classified per wafer, random mode. (From Bennett, M., Garvin, J., Hightower, J., Moalem, Y., and Reddy, M., KLA Yield Management Seminar, San Francisco, CA, 1997.)


Since this classification is typically performed at lower imaging resolution, the ability to categorize into detailed bins is limited. However, since categorical information on all defects can be obtained during the scan, different applications become possible, such as nuisance filtering and directed post-inspection review sampling. With nuisance filtering, a nuisance defect type that may be hard to remove by other filtering methods can be classified and ignored in the final defect reporting. Directed post-inspection review sampling means that the post-inspection review sample (as typically used for optical or SEM review) can be weighted toward defects of particular "coarse" defect type bins identified from the initial inspection, to provide finer detail and to highlight defects that have a stronger potential to be yield problems. In the past, sampling was typically done using algorithms that select defects at random, or based on defect size distributions; now similar algorithms can be applied to a wider range of attributes. The primary challenges of defect inspection for current process technology are, in general, similar in nature to those of prior recent generations: (1) the ability to detect smaller defects; (2) faster inspections with adequate sensitivity; (3) the ability to detect defects of interest (not just smaller defects, but low contrast or low profile defects); (4) the ability to screen out noise sources; and (5) the ability to quickly and effectively detect defects in high aspect ratio inspection (HARI) structures [23,24]. Some of the challenges in defect inspection and analysis are in the areas of small defect detection (<0.1 μm), fast detection of residues and partial etches at the bottoms of contacts or isolation trenches, and the ability to separate signal from noise (due to surface roughness, metal grains, high reflectivity films, 3D topography, etc.). A variety of inspection systems and techniques are available for the yield engineer to apply to a given problem to detect DOI. A yield management strategy includes the application of the appropriate inspection technology to a given process level to ensure adequate ability to capture all relevant defects while minimizing the detection of nuisance events. The following sections review the automated inspection technologies available to date for application to the semiconductor process. The review should be considered as generally applicable; detailed information on operation and features should be requested from the system manufacturer.

27.3.3.1 Brightfield Inspection

In the early days of semiconductor manufacturing, manual optical microscope inspections were a primary means for detection of manufacturing defects in-line. Microscopes typically have two modes of operation: (1) brightfield, which uses the primary reflected light from the illuminator to view the wafer surface, and (2) darkfield, which places a filter to block the primary reflected light and instead views only the backscattered (off-normal) light. The two primary inspection system types in use today use the same basic principles for automated inspection. Due to the need for high resolution on 90, 65 nm, and smaller design rule features, brightfield systems have had to migrate to UV and DUV illumination. However, caution may need to be exercised with regard to the types of films being exposed to DUV illumination in particular, since film damage may occur. In addition to the reduction in illumination source wavelength, there is a need for higher magnification for detection of smaller defects. Magnification is typically referred to in terms of "pixel size", since the image is digitized and the pixel size corresponds to one side of the square region of the wafer that maps into a single pixel of the detector. The brightfield systems in use today generally collect digitized images of similar sites in adjacent die (or, analogously, similar sites in adjacent cells in a repeating memory or array mode). A particular pixel is assigned a grayscale value depending on the amount of light reflected from that region. Simply speaking, these images are compared by image subtraction, producing a difference image. In the ideal difference image the background is dark, while an area affected by a defect in one of the images (usually three images or locations are compared) appears bright due to the pixel grayscale difference. In addition to the different illumination sources (white light, UV, and DUV) and pixel sizes (down to 0.12 μm), the current brightfield platforms offer a variety of other options for inspection, including a darkfield-like transformation of the digital image and a variety of digital "filters" and screening methods.
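A minimal sketch of the die-to-die comparison just described follows. The threshold, image sizes, and synthetic images are hypothetical, and real systems add sub-pixel alignment, gain correction, and more elaborate detection logic.

```python
import numpy as np

def die_to_die_defects(test, ref_left, ref_right, threshold=40):
    """Die-to-die comparison by image subtraction.

    A pixel is flagged only if its grayscale differs from BOTH
    neighboring reference dies, so a defect in one reference die is not
    charged to the test die (the usual reason three images are compared).
    """
    # Cast to a signed type so the uint8 subtraction cannot wrap around.
    d1 = np.abs(test.astype(np.int16) - ref_left.astype(np.int16))
    d2 = np.abs(test.astype(np.int16) - ref_right.astype(np.int16))
    return (d1 > threshold) & (d2 > threshold)

# Hypothetical 8-bit die images with one bright defect in the test die:
rng = np.random.default_rng(2)
ref_l = rng.integers(90, 110, size=(64, 64), dtype=np.uint8)
ref_r = rng.integers(90, 110, size=(64, 64), dtype=np.uint8)
test = ref_l.copy()
test[30, 30] = 255
print(np.argwhere(die_to_die_defects(test, ref_l, ref_r)))  # -> [[30 30]]
```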


Best-known methods for brightfield inspection application include post-etch and post-photolithography processes, which should be considered an initial guideline for the best application of this inspection technology.

27.3.3.2 Darkfield Inspection

Darkfield inspection technology typically refers to systems based on laser illumination of the substrate surface. This illumination can be applied at an angle to the substrate surface or at normal incidence. Due to the nature of laser illumination, the light is monochromatic and of high intensity. Detection of defect-scattered light occurs at angles off the 0th-order reflected path; these systems therefore collect scattered light, which can come from both the pattern on the wafer and any defects. The high intensity is needed for sensitivity, as the scattered light from a defect or particle may be substantially weaker than the 0th-order reflected beam; however, too high a laser intensity or power could potentially damage the film. Detection and sensitivity are largely related to the ability to screen out the scattering due to pattern and noise and to highlight the scattering signal due to defects. Fortunately, pattern features tend to scatter in certain directions (depending on the angle of illumination, the wavelength, and other factors), which allows good sensitivity to defect scattering in other directions. Special spatial filtering can also be applied to array-type structures, since array structures scatter in well-defined patterns based on the array periodicity; this special filtering can allow even higher sensitivity in the arrays. The basic principle is that light scattered from a periodic structure is diffracted into well-defined spatial nodes due to constructive and destructive interference effects. The size, number, and shape of the nodes depend on the periodicity of the array and the angle of the incident light. This diffraction pattern can be considered the Fourier transform of the array. By creating a spatial filter to block these nodes, one can filter out all of the light scattered by the pattern in these structures; the remaining detected scattered light is due to defects, with a much higher signal-to-noise capability than can be obtained from non-array-type pattern. Depending on the illumination angle, which varies from normal incidence to high angles on these types of tools, the sensitivity to different defect types can vary. Systems that employ off-normal angles of incident light tend to have higher sensitivity to defects that either extend out of the current process layer of the wafer (e.g., particles) or extend below the surface of the current process level (e.g., pits or holes). Systems that use normal incidence illumination tend to have higher sensitivity to planar defects such as pattern issues or blocked etch. More advanced systems attempt to provide a combination of both normal and angled illumination to cover a broader range of defect types. In the past, darkfield tools were primarily utilized to inspect post-film deposition or post-CMP processes, since the substrate at those layers is relatively flat and uniform and the primary defects of interest are particles, pits, and scratches. With the advances in darkfield technology in the past few years, the applications of this technology have expanded beyond these levels to include some of the more traditional levels associated with brightfield tools. For a particular application, it is recommended that a verification and comparison be made to determine the trade-offs in defect capture with each inspection technology.
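The array spatial filtering described above can be sketched as a Fourier-domain filter: the periodic pattern concentrates into a few diffraction nodes that can be zeroed out, leaving defect scatter. The synthetic image and the median-based cutoff below are hypothetical stand-ins for the optical filter actually placed in the tool's Fourier plane.

```python
import numpy as np

def fourier_filtered_scatter(image):
    """Remove a periodic array pattern by blocking its diffraction nodes.

    The 2D FFT of a periodic structure concentrates into a few strong
    nodes; zeroing magnitudes far above the median leaves only the
    scatter from non-periodic features (the defects).
    """
    F = np.fft.fft2(image)
    mag = np.abs(F)
    cutoff = 10.0 * np.median(mag)  # nodes are orders of magnitude stronger
    F[mag > cutoff] = 0.0
    return np.abs(np.fft.ifft2(F))

# Hypothetical array image: a periodic cell pattern plus one point defect
x = np.arange(256)
pattern = np.outer(np.sin(2 * np.pi * x / 8), np.sin(2 * np.pi * x / 8))
image = pattern.copy()
image[128, 128] += 5.0
residual = fourier_filtered_scatter(image)
print(np.unravel_index(np.argmax(residual), residual.shape))  # -> (128, 128)
```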

27.3.3.3 Electron Beam Inspection

The continued reduction in feature size and the need to quickly determine the extent of "non-visual" or electrical defects in-line have led, in the past several years, to the development of inspection systems based on EBI. Such technology has developed successfully beyond an engineering analysis tool to the point of allowing cost-effective line monitoring [16,20]. This technology not only provides higher sensitivity, due to smaller pixels and higher resolution imaging, but also enables one of its most important uses: voltage contrast (VC) imaging, especially for multi-level metal structures. In fact, as brightfield inspection capabilities have improved with smaller pixels and UV and DUV illumination, the primary application of EBI for line monitoring has been to multi-level metal and contact/via levels to detect electrical defectivity using the VC phenomenon.


phenomenon. In addition, the increased usage of damascene Cu process technology, where voids, under-etched vias, and other electrical defect mechanisms are increasingly hidden from optical inspection methods, has created a gap in the in-line characterization of defectivity that EBI is well suited to fill [21]. As a result, this technology has seen extensive use for post-Cu CMP (Metal 1 and Metal 2) inspection as well as post-W CMP (contact) inspection. Examples of defects detected by EBI using VC (both bright and dark) are shown in Figure 27.16, which provides both the EBI image and the determination of the cause by focused ion beam (FIB) analysis. From a detection and inspection standpoint, EBI is essentially a brightfield-analogous inspection technology that uses electron beam imaging instead of optical imaging, with all the associated issues (for example, charging potential). Analogous to the "reflected light" of brightfield inspection are the emitted and detected secondary electrons. Considerations for proper detection optimization in this case are the beam landing energy, the beam current, and the imposed field conditions, which create either a retarding or an extracting condition for the emitted electrons. Materials issues play a large role in EBI applications. Insulating materials can lead to charging, which inhibits effective inspection and must be countered. Some materials may interact with a given beam energy, which could itself result in defectivity; proper consideration should be given when using this technology on materials such as resist or low-k dielectric layers. Electron beam inspection has advantages over traditional electrical probe techniques: it can resolve electrical issues below the typical resolution of probe, and it provides accurate defect locations, whereas large-area probe structures require follow-on physical analysis to determine the nature and location of the fail and depend to some degree on engineering judgment as to the cause. Electron beam inspection has been demonstrated to detect defects causing open vias in-line that did not produce sufficient signal at in-line electrical probe but did result in yield loss at functional test [16]. High-aspect-ratio inspection (HARI) structures provide another opportunity for the unique application of EBI, since residues or under-etch conditions at the bottom of such structures may respond to e-beam methods while being

Metal 2 Cu CMP – Bright VC defect / Contact short

Metal 2 Cu CMP – Dark VC defect / Via 1 open

Metal 2 trench etch– Bright VC defect / Missing metal 1

Metal 2 Cu CMP – Dark VC defect / Contact open

FIGURE 27.16 Examples of voltage contrast (VC) defects from electron beam inspection (EBI) and the corresponding results of the focused ion beam (FIB) cross-section.


beyond the detection capability of optical methods. Examples of such structures include contacts, vias, and deep trenches for memory. Other applications of EBI include inspection of contacts and vias after etch, inspection of gates after gate etch to highlight electrically shorted gates [17], inspection after silicidation to find silicide shorts or other silicide-related defectivity, and monitoring and reduction of process-induced substrate dislocation defectivity [18]. Because EBI is heavily used to detect underlying or non-visual electrical defects, this technology has increased the need for, and use of, FIB defect analysis. FIB analysis allows one to cross-section defective areas to provide a visual analysis of the cause of the electrical fault. In conjunction with EBI, this analytical technique can provide quick feedback on electrical defects that would normally not be detected until final probe [19].

27.3.3.4 Macro Inspection

As the industry moves to more and more automation of the overall process, post-process verification requires consistency and quantitative analysis, and there is a need to remove the human element from these metrology steps. As a result, there is a market for a lower sensitivity inspection tool that can effectively replace the historic human-operated microscope evaluations for after-develop (lithography) inspection, among other applications. A macro inspection tool, as it is commonly referred to, operates by taking a low-resolution image of the entire wafer and comparing that image to a stored image of an "ideal" wafer. A typical resolution capability of such tools is 30–50 µm. Some of the applications and capabilities of macro inspection tools are the detection of large defect events, verification of proper reticle (in some cases), checking for out-of-focus conditions from lithography (such as hot spots and reticle field tilt), scratch detection, and back-end-of-line applications, such as post-polyimide or other final layers, to detect large defect issues. Since these tools can typically inspect a wafer very quickly, they can potentially be applied at almost any process step to provide a continuous check for process stability and variation. Also, because many current yield issues affecting fabs are events that occur randomly at low levels, whether on a few wafers in a lot or on a percentage of lots, the ability to monitor a large number of wafers and lots is attractive, as long as the system is capable of detecting the problems. Because such inspection systems can be built with a minimal overall footprint, they can be incorporated into the process tools themselves or acquired as stand-alone systems, the former being a preferable configuration for unit process monitoring and the latter allowing more general accessibility. These systems come close to the ideal of a real-time process defectivity monitor on product wafers, albeit without a high level of sensitivity to many defect issues. A drawback is the need to create inspection recipe setups for every device and each process level, in the case of a lithography tool, for instance.

27.3.3.5 Wafer Edge Inspection

Relatively new to the market are tools to inspect the region of the wafer edge outside of the printed area. To obtain high, competitive yields in today's industry, the yield of the die at the edge of the wafer is strategically important, and these tools have been developed to meet that need and the need for consistent, reliable data. Often, the defectivity that results in loss of edge die, or even inner die, is directly attributable to defects originating in the edge bead exclusion region or even on the bevel edge of the wafer. Most work to reduce this defectivity has been done either by drawing a correlation to defectivity seen further into the wafer (and thus using previously existing inspection tools) or by manually investigating the wafer edge with a microscope or SEM. Now there are tools that perform automated inspection of the wafer edge and bevel and provide meaningful data on the defectivity seen. Typical defects of interest are peeling or delaminating films, residual films, chips, cracks, and particles. Like the macro inspection tool, wafer edge inspection tools attempt to replace a typically manual application with automation that can provide better quantitative results where needed. It can also


be applied to a large number of lots and wafers without redirecting manpower resources to this effort whenever evaluation of the wafer edge is needed.

27.3.4 Yield Impact Prediction/Verification

Only a percentage of the defects observed in line cause device failures, because many defects land on non-critical areas of the chip, are smaller than the minimum geometry, or are of a material that does not cause electrical shorting. The "kill ratio" for a process level is defined as the ratio of the number of killer defects to the number of observed defects at that level. The kill ratio has to be calculated to estimate the yield loss associated with any product inspection. To obtain kill ratios, electrical test wafer maps are overlaid on in-line defect maps; a one-to-one correlation of a defect location on both maps indicates a killer defect. This procedure is easier in concept than in practice: sometimes multiple defects are observed at a fail location, while at other times no physical defect is observed. The correlation of in-line to end-of-line probe data is easier with memory chips (or memory areas of a chip), where detailed bit maps can be generated. For more complex logic chips, this correlation is usually done with bin-level test data or bit map data from the cache memory area. These days, yield management software systems can generate these correlations automatically. In the absence of this correlation, defect classification information from optical or SEM review is the next best way to obtain the kill ratio experimentally.

Critical area estimators are also available to model the percentage of a chip area that is sensitive to defects of different sizes. The most common model uses "dot throwers," in which imaginary defects are dropped randomly over the area of the chip design database. The model then runs these defect distributions through a "shapes-checker" to look for certain fail criteria, such as shorts, blocks, partial opens, etc. These models provide probability-of-failure curves as a function of defect size for different device levels. Figure 27.17 is an example of a family of these curves for a 0.5 µm device [6]. In-line inspection data provide a plot of defect count vs. defect size; the product of the probability curves with the in-line defect count vs. size plots for the same device levels gives the number of "kills" at each defect size.
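As a minimal sketch of the map-overlay step, the following snippet (with hypothetical die coordinates) counts the in-line defects that coincide with electrically failing die:

```python
# Die (x, y) locations with in-line defects, and die failing at electrical test
defects = {(3, 5), (7, 2), (10, 10), (4, 4), (8, 9)}
fails = {(3, 5), (8, 9), (1, 1)}

killers = defects & fails                  # defects sitting on failing die
kill_ratio = len(killers) / len(defects)
print(f"kill ratio = {kill_ratio:.2f}")    # 2 of 5 -> 0.40
```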

Martin [7] defined a term, the "defect density learning rate (LR)," to quantify the rate of yield improvement and the excursion level in a factory:

$$\mathrm{LR} = \left[1 - \left(D_n/D_o\right)^{1/n}\right] \times 100\% \qquad (27.3)$$

where n is the time period, Dn the new defect density, and Do the old defect density.
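For instance, with the hypothetical values below, a fab that halves its defect density over a year is learning at roughly 5.6% per month:

```python
Do, Dn, n = 1.0, 0.5, 12                   # old/new defect density (d/cm^2), months
LR = (1 - (Dn / Do) ** (1.0 / n)) * 100    # Equation 27.3, % per month
print(f"LR = {LR:.1f}%/month")             # -> LR = 5.6%/month
```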

FIGURE 27.17 TLM device 0.5 µm critical area curves: probability of fail vs. defect size for the poly, contact, Met1, Via1, Met2, Via2, and Met3 levels. (From Winter, T., Proceedings of SPIE, Vol. 3215, Austin, TX, 1997, 62.)


[Figure 27.18 comprises two panels for a 0.5 µm DRAM fab: (a) the defect density trend (d/cm², from the Murphy yield model, Y = [(1 − e^(−AD))/(AD)]², where A is the die size, D the defect density, and Y the fractional yield) and (b) the monthly and cumulative learning rate (%/month) from Equation 27.3, both plotted from April 1993 through April 1994.]

FIGURE 27.18 Measuring a fab’s yield methodology and results. (From Martin, R., KLA Yield Management Seminar, Geneva, Switzerland, 1996.)

Figure 27.18 shows an example of LR for a 0.5 µm dynamic random access memory (DRAM) fab [7]. Figure 27.18a shows the defect density over a period of time, while Figure 27.18b is a calculation of LR. In this case, the fab had a cumulative LR of 5% and four excursions in 15 months, or an excursion rate of 27%. The LR is expected to be high during the early portion of new product introduction, but it tapers off as the fab approaches yield entitlement. The defect density at yield entitlement is the "best realistic" defect density capability of a fab for a device and depends on design rule, layout efficiency, die per wafer, and equipment set capability. This entitlement defect density is determined empirically from past experience, best wafer/lot yield analysis, and benchmarking against other devices or companies. Martin [7] has published empirical curves of entitlement defect density vs. polysilicon design rule for different generations of equipment capability (see Figure 27.19).
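The Murphy yield model quoted in Figure 27.18 converts a defect density into a fractional yield; a quick sketch, for an assumed die area of 1 cm², illustrates the sensitivity:

```python
import math

def murphy_yield(A, D):
    """Murphy model: Y = [(1 - exp(-A*D)) / (A*D)]^2, A in cm^2, D in d/cm^2."""
    return 1.0 if A * D == 0 else ((1 - math.exp(-A * D)) / (A * D)) ** 2

for D in (0.1, 0.5, 1.0, 2.0):
    print(f"D = {D:.1f} d/cm^2 -> Y = {murphy_yield(1.0, D):.2f}")
# D = 0.1 -> Y = 0.91; D = 1.0 -> Y = 0.40
```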

[Figure 27.19 plots process entitlement defect density (d/cm²) vs. process design rule (0.35–1.25 µm) for 0.65, 0.50, and 0.35 µm equipment sets.]

FIGURE 27.19 Empirical curves of process entitlement defect density vs. process design rule. (From Martin, R., KLA Yield Management Seminar, Geneva, Switzerland, 1996.)

DK4126—Chapter27—23/5/2007—19:53—CRCPAG—240456—XML MODEL CRC12a – pp. 1–29

Yield Management

27-27

In-line to end-of-line defect correlators, critical area extraction, spatial signature analysis routines, and defect partitioning software are elements of an overall fab yield management software system. Such a system must not only automatically identify the process steps at which critical defects occur, but also predict the yield impact and couple to defect knowledge databases or decision trees to direct operators to specific corrective actions. Some of these capabilities are currently available; others are imminent given improved software and networking systems.

27.3.5 Root Cause Isolation

The purpose of root cause isolation is to determine the sources of problematic defects so that the efforts of the process engineers may be focused on the specific areas of the process that must be fixed. Root cause isolation is made up of two parts: (1) prioritization of the defect types and (2) isolation of the defect source. Prioritization requires consistent defect detection and classification, either manual or automatic, to understand the levels of specific defect types. Defect paretos can be created to prioritize defects by level and type; from the paretos, the process level and defect type(s) may be chosen for the isolation effort. The isolation of defect sources can utilize defect information from many sources. Equipment monitoring charts, in situ equipment sensor data, and equipment commonality analysis may lead to the errant equipment and process. Sometimes, a library (or database) of commonly observed defect images and energy dispersive x-ray (EDX) spectra can be very useful in identifying defects. For example, Figure 27.20 shows the SEM image and EDX spectrum of a defect caused by an etcher O-ring. However, further detailed analyses using SEM, EDX, FIB, and other analytical tools may be required to track the problem to a component within a specific piece of equipment.

Another tool for root cause isolation is Wafer Positional Analysis [22]. For this analysis to be effective, the wafer fab must track the wafer positions in the cassette throughout the process flow and randomly sort the wafers by cassette position at a variety of pre-determined points in the flow. From this database, a number of metrics, such as yield, can be plotted by wafer position to determine whether a pattern (e.g., an odd–even yield pattern) is visible for a section of the process. This can focus the root cause efforts on a particular section of the process flow, with the resolution limited by the number of random sorting steps.
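A minimal sketch of such a positional analysis, with hypothetical slot-level yields, groups a metric by cassette slot parity to expose an odd–even signature:

```python
from statistics import mean

# Hypothetical cassette slot -> lot-averaged die yield
yields = {1: 0.91, 2: 0.78, 3: 0.93, 4: 0.80,
          5: 0.92, 6: 0.79, 7: 0.90, 8: 0.81}

odd = [y for slot, y in yields.items() if slot % 2 == 1]
even = [y for slot, y in yields.items() if slot % 2 == 0]
print(f"odd slots:  {mean(odd):.3f}")    # 0.915
print(f"even slots: {mean(even):.3f}")   # 0.795 -> flags an odd-even pattern
```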

Often, defect isolation requires scanning and reviewing wafers at different steps in a process sector to understand the sources of the defects. This is done using the following process:

[Figure 27.20 panels: SEM image of the defect; EDX spectrum of the defect (counts per second vs. energy in keV) showing Si, Al, N, Mg, and Ti peaks.]

FIGURE 27.20 Scanning electron microscopy (SEM) image and energy dispersive x-ray (EDX) spectrum of an oxide etcher o-ring-induced wafer defect.


1. Scanning and reviewing wafers at a step prior to the first possible step of the defect source. This captures the defects incoming to the isolation process.
2. Scanning wafers and reviewing the adders at each possible point of the defect source. Scan and review the same wafers inspected at the previous steps.
3. Scanning wafers and reviewing all defects (adders and defects common to the previous steps) at the last step in the isolation procedure.

Create a pareto of all defect types. For the chosen defect types, extract the point in the inspection sequence where the defects first appeared; this should point to the source of the defect. Another iteration of this procedure over a narrower range of the process may be necessary to trace the defects to specific tools at specific steps.
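The first-appearance bookkeeping in the steps above is simple to automate; the sketch below (hypothetical step names and counts) attributes each defect type to the earliest inspection at which it shows up:

```python
# Ordered inspections: the pre-step first, then each candidate source step
step_defects = {
    "pre-etch":   {"particle": 3},
    "etch":       {"particle": 4, "residue": 12},
    "post-clean": {"particle": 5, "residue": 12, "scratch": 2},
}

first_seen = {}
for step, counts in step_defects.items():      # dicts preserve insertion order
    for dtype in counts:
        first_seen.setdefault(dtype, step)     # record only the first appearance
print(first_seen)  # {'particle': 'pre-etch', 'residue': 'etch', 'scratch': 'post-clean'}
```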

This phase of yield enhancement usually takes the longest time, since it requires slow and systematic "forensic" analyses. Having identified the problem, implementation of a solution can be fairly quick; the fix may be a design change, a simple component change, a chamber clean, or a process recipe modification. Once a solution is found, several tests need to be run to verify the fix, and final verification is usually based on in-line defect levels and electrical test results.

27.4 Summary

Rapid yield ramps will continue to require continuous improvement in defect levels from processes and equipment along with a sound yield management methodology. In this chapter, yield management methods were reviewed along with details about defect sources, process/equipment control and the application of defect detection and analysis tools, yield models and software for yield enhancement. Overall strategies for yield management during various stages of new process development were also covered.

References

1. The International Technology Roadmap for Semiconductors. Austin, TX: International SEMATECH, 2003 Edition. (See also http://public.itrs.net/, accessed on 9 February 2007.)
2. Akella, R., W. Jang, W. W. Kuo, R. K. Nurani, and E. H. Wang. "Defect Sampling Strategies for Yield Management." In KLA Yield Management Seminar, Santa Clara, CA, 1996.
3. Williams, R., and R. Nurani. "Sampling Plan Optimization for a Multi-product Scenario in Semiconductor Manufacturing." In KLA-Tencor Yield Management Seminar, SEMICON/West, San Francisco, CA, 1997.
4. McIntyre, M., R. Nurani, R. Akella, and A. Strojwas. "Key Considerations in Sampling Methodologies and Yield Prediction." In KLA Yield Management Seminar, Makuhari, Japan, 1996.
5. Bennett, M., J. Garvin, J. Hightower, Y. Moalem, and M. Reddy. "The Matching of Multiple IMPACT Systems in Production." In KLA Yield Management Seminar, San Francisco, CA, 1997.
6. Winter, T. "Electrical Defect Density Modeling for Different Technology Nodes, Process Complexity and Critical Areas." In Proceedings of SPIE, Vol. 3215, 62, Austin, TX, 1997.
7. Martin, R. "Benchmarking Fab Yield Opportunities." In KLA Yield Management Seminar, Geneva, Switzerland, 1996.
8. Borden, P. Microcontamination 9, no. 1 (1991): 43.
9. Menon, V., and M. Grobelny. "Recent Experiences with In Situ Contamination Monitoring and Control." In AVS Symposium, San Jose, CA, 1997.
10. Takahashi, K., and J. Daugherty. "Current Capabilities and Limitations of In Situ Particle Monitors in Silicon Processing Equipment." J. Vac. Sci. Technol., A 14, no. 6 (1996): 2983–93.
11. Guldi, R. "In-Line Defect Reduction From a Historical Perspective and Its Implications for Future Integrated Circuit Manufacturing." IEEE Trans. Semicond. Manuf. 17, no. 4 (2004): 629–40.
12. Nelson, D., and G. Stark. "How Automated Visual Inspection and CD Metrology Will Impact Wafer-Level Packaging." Chip Scale Rev. July (2002): 63–9.
13. Cheema, L. A., L. Olmer, and O. D. Patterson. "Wafer Back Side Inspection Applications for Yield Protection and Enhancement." In Proceedings of the 2002 IEEE/SEMI Advanced Semiconductor Manufacturing Conference, 30 April–5 May, 64–71, Boston, MA, 2002.
14. Lederer, K., M. Scholze, U. Strohbach, A. Wocko, T. Reuter, and A. Schoenauer. "Wafer Backside Inspection Applications in Lithography." In Proceedings of the 2003 IEEE/SEMI Advanced Semiconductor Manufacturing Conference, 31 March–1 April, 1–8, Munich, Germany, 2003.
15. Cheema, L. A., L. Olmer, O. D. Patterson, S. Lopez, and M. Burns. "Yield Enhancement from Wafer Backside Inspection." Solid State Technol. 46, no. 9 (2003): 57–60.
16. Soucek, M., J. Anderson, H. Chahal, D. W. Price, K. Boahen, and L. Breaux. "Electrical Line Monitoring in a 300 mm Copper Fab." Semicond. Int. 26, no. 8 (2003): 80–90.
17. Patterson, O. D., B. Crevasee, K. Harris, B. B. Patel, and G. Cochran. "Reducing Gate-Level Electrical Defectivity Rapidly Using Voltage-Contrast Test Structures." Micro Mag. 21, no. 8 (2003): 45–55.
18. Baltzinger, J.-L., S. Desmercieres, S. Lasserre, P. Champonnois, and M. Mercier. "E-Beam Inspection of Dislocations: Product Monitoring and Process Change Validation." In Proceedings of the 2004 IEEE/SEMI Advanced Semiconductor Manufacturing Conference, 4–6 May, 359–66, Boston, MA, 2004.
19. Rathert, J., and J. Teshima. "Dual Approach to Understanding Failure." Eur. Semicond. 24, no. 6 (2002): 41–4.
20. Ache, A., and K. Wu. "Production Implementation of State-of-the-Art Electron Beam Inspection." In Proceedings of the IEEE/SEMI Advanced Semiconductor Manufacturing Conference, 4–6 May, 344–7, Boston, MA, 2004.
21. Henry, T., D. W. Price, and R. Fiordalice. "E-Beam Inspection: Best Practices for Copper Logic and Foundry Fabs." In Proceedings of the 2003 IEEE International Symposium on Semiconductor Manufacturing, 30 September–2 October, 396–9, San Jose, CA, 2003.
22. Kittler, R., M. McIntyre, C. Bode, T. Sonderman, S. Reeves, and S. Zika. "Achieving Rapid Yield Improvement." Semicond. Int. 27, no. 8 (2004): 53–60.
23. Jarvis, R., and M. Retersdorf. "Can Technology Keep Pace with High Aspect Ratio Inspection?" Solid State Technol. 46, no. 11 (2003): 49–50; see also 52.
24. Goel, H., and D. Dance. "Yield Enhancement Challenges for 90 nm and Beyond." In Proceedings of the 2003 IEEE/SEMI Advanced Semiconductor Manufacturing Conference, 31 March–1 April, 262–5, Munich, Germany, 2003.


28 Electrical, Physical, and Chemical Characterization

Dieter K. Schroder, Arizona State University
Bruno W. Schueler, Revera Inc.
Thomas Shaffner, National Institute of Standards and Technology
Greg S. Strossman, Evans Analytical Group LLC

28.1 Introduction
28.2 Electrical Characterization: Resistivity and Carrier Concentration † MOSFET Device Characterization † Oxide Charges, Interface States, and Oxide Integrity † Defects and Carrier Lifetimes † Charge-Based Measurements † Probe Measurements
28.3 Physical and Chemical Characterization: High Spatial Resolution Imaging † Dopants and Impurities † Surface and Thin Film Composition and Chemistry † Stress and Physical Defects
References

28.1 Introduction

Most of the characterization techniques described in this chapter are typically applied outside the semiconductor cleanroom environment, at on- or off-site analytical laboratories. These include a variety of electrical measurements, as well as some of the more widely used physical, chemical, and high-resolution imaging procedures, which are applied routinely to manufacturing in a problem-solving mode. They are typically not routine enough for real-time applications inside the wafer fab, mostly because of technique complexity and difficulty in interpreting the data. However, some newer instruments have been adapted for use in fabs, usually in a near-line configuration as opposed to an in-line one. Measurements required at the front end, such as critical dimension (CD) and overlay, film thickness, and wafer contamination, can be considered in-line, where immediate feedback for process refinement is required. Reviews of these tools can be found in Chapter 24 and Chapter 25 of this handbook. Likewise, the diagnostics of field returns in a failure analysis mode involve yet another class of analysis and tools, which is discussed in Chapter 29. The collection of characterization instruments for near-line or analytical laboratory applications continues to grow in number and specialization, and it is therefore necessary to limit the scope of this chapter to only those most frequently used. A number of good books and encyclopedias are available for further study [1–4]. Also, characterization symposium proceedings are routinely published by the Materials Research Society, the American Vacuum Society, the Electrochemical Society, the American Physical Society, the American Chemical Society, and the Society for Photo-Optical Instrumentation Engineers. Other international conferences specialize in ion mass spectrometry [5], electron microscopy 28-1


[6], x-ray diffraction (XRD) [7], scanning probe microscopy (SPM) [8], and characterization overviews aligned to the National Technology Roadmap for Semiconductors [9]. The format adopted in this chapter is designed to help the reader quickly extract the basic purpose and concepts fundamental to the operation of each technique. This is followed by a brief description of the most prominent strengths as well as shortcomings. The respective headings, Purpose, Method, Strengths, and Weaknesses apply throughout the chapter. Key references are provided to guide readers who are interested in a more detailed study.

28.2 Electrical Characterization

28.2.1 Resistivity and Carrier Concentration

28.2.1.1 Four-Point Probe

Purpose. The four-point probe technique is the most common method for measuring the semiconductor resistivity and sheet resistance [3]. It is an absolute measurement without recourse to calibrated standards and is most commonly used to generate sheet resistance contour maps. Method. Two-point probe methods would appear to be easier to implement, because only two probes need to be manipulated, but the interpretation of the measured data is more difficult. For the two-point probe or two-contact arrangement of Figure 28.1a, each contact serves as a current and as a voltage probe. The total resistance RT is given by

$$R_T = V/I = 2R_W + 2R_C + R_{DUT} \qquad (28.1)$$

where RW is the wire and probe resistance, RC the contact resistance, and RDUT the resistance of the device under test. Clearly, it is impossible to determine RDUT with this measurement arrangement. Although the current path in the four-contact arrangement in Figure 28.1b is identical to that in Figure 28.1a, the voltage is now measured with two additional contacts. Although the voltage path contains RW and RC as well, the current flowing through the voltage path is very low due to the high input impedance of the voltmeter (around 10¹² Ω or higher), making the voltage drops across RW and RC negligibly small; the measured voltage is essentially the voltage drop across the DUT. Using four, rather than two, probes, we have eliminated parasitic voltage drops. Such four-contact measurements are usually referred to as Kelvin measurements, after Lord Kelvin. Typical probe radii are 30–500 µm, and probe spacing ranges from 0.5 to 1.5 mm. Smaller probe spacing allows measurements closer to wafer edges, an important consideration during wafer mapping. Probes to measure metal films should not be mixed with probes to measure semiconductors. For some applications, e.g., magnetic tunnel junctions, polymer films, and semiconductor defects, microscopic

FIGURE 28.1 Two-terminal and four-terminal resistance measurement arrangements.


four-point probes with 1.5 µm probe spacing have been used [10]. A four-point probe consisting of independently driven actuators for use in a scanning electron microscope (SEM), with probe spacing from 18 to 500 µm, has been developed [11]. For an arbitrarily shaped sample, the resistivity is given by

$$\rho = 2\pi s F (V/I)\ [\Omega\ \mathrm{cm}] \qquad (28.2)$$

where s is the probe spacing and F is a correction factor that depends on the sample geometry. F corrects for probe location near sample edges, for sample thickness, sample diameter, probe placement, and sample temperature. For collinear or in-line probes with equal probe spacing, the wafer thickness correction factor F is [12]

$$F = \frac{t/s}{2\,\ln\left\{\sinh(t/s)/\sinh(t/2s)\right\}} \qquad (28.3)$$

for a non-conducting bottom wafer surface boundary, where t is the wafer or layer thickness. For thin, uniformly doped samples, t ≤ s/2, the resistivity ρ and the sheet resistance Rsh are given as

$$\rho = \frac{\pi}{\ln 2}\, t\, \frac{V}{I} = 4.532\, t\, \frac{V}{I}\ [\Omega\ \mathrm{cm}];\qquad R_{sh} = \frac{\rho}{t} = \frac{\pi}{\ln 2}\, \frac{V}{I} = 4.532\, \frac{V}{I}\ [\Omega/\mathrm{square}] \qquad (28.4)$$
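As a quick numeric check of Equation 28.4, with hypothetical readings on a thin implanted layer:

```python
V, I = 10e-3, 1e-3      # sensed 10 mV at 1 mA forced
t = 2e-4                # layer thickness, cm (2 um), with t <= s/2

R_sh = 4.532 * V / I    # ohms/square
rho = R_sh * t          # ohm cm
print(f"R_sh = {R_sh:.1f} ohm/sq, rho = {rho:.1e} ohm cm")   # 45.3 ohm/sq, 9.1e-03 ohm cm
```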

The sheet resistance for non-uniform samples of thickness t is

$$R_{sh} = \frac{1}{\int_0^t [1/\rho(x)]\,dx} = \frac{1}{\int_0^t \sigma(x)\,dx} = \frac{1}{q \int_0^t [n(x)\mu_n(x) + p(x)\mu_p(x)]\,dx} \qquad (28.5)$$

where σ(x) is the conductivity; 1/Rsh is proportional to the integrated conductivity and hence to the implant dose. A sheet resistance measurement integrates the entire doping density profile into one simple measurement. An example of contour maps is shown in Figure 28.2. The key to high-precision four-point probe measurements is the use of two measurement configurations at each probe location, known as the "dual configuration" or "configuration switched" method [13]. The first configuration is usually with current into probe 1 and out of probe 4 and voltage sensed across probes 2 and 3. The second measurement is made with current driven through probes 1

FIGURE 28.2 Four-point probe sheet resistance contour maps: (a) boron, 10¹⁵ cm⁻², 40 keV, Rsh,av = 98.5 Ω per square; (b) arsenic, 10¹⁵ cm⁻², 80 keV, Rsh,av = 98.7 Ω per square; 1% intervals, 200 mm diameter Si wafers. Data courtesy of Marylou Meloni, Varian Ion Implant Systems.


and 3 and voltage measured across probes 2 and 4. The advantages are that the probes no longer need to be in a high-symmetry orientation (being perpendicular or parallel to the wafer radius of a circular wafer or to the length or width of a rectangular sample), the lateral dimensions of the specimen do not have to be known since the geometric correction factor results directly from the two measurements, and the two measurements self-correct for the actual probe spacing. Although the collinear probe configuration is the most common four-point probe arrangement, other arrangements are also possible. Arrangement of the points in a square has the advantage of a smaller area. The square arrangement is more commonly used, not as an array of four mechanical probes, but rather as contacts to square semiconductor samples. The theoretical foundation of measurements on irregularly shaped samples is based on conformal mapping developed by van der Pauw [14], provided the contacts are at the circumference of the sample, the contacts are sufficiently small, the sample is uniformly thick, and the surface of the sample is singly connected, i.e., the sample does not contain any isolated holes. For the flat sample of a conducting material of arbitrary shape, with contacts 1, 2, 3, and 4 along the periphery, as shown in Figure 28.3, the resistance R12,34 is defined as

$$R_{12,34} = \frac{V_{34}}{I_{12}} \qquad (28.6)$$

where the current I12 enters the sample through contact 1 and leaves through contact 2, and V34 = V3 − V4 is the voltage difference between contacts 3 and 4. R23,41 is defined similarly. The resistivity is given by

$$\rho = \frac{\pi t}{\ln(2)}\, \frac{(R_{12,34} + R_{23,41})}{2}\, F \qquad (28.7)$$

where F is a function only of the ratio Rr = R12,34/R23,41, satisfying the relation

$$\frac{R_r - 1}{R_r + 1} = \frac{F}{\ln(2)}\, \operatorname{arccosh}\!\left(\frac{\exp[\ln(2)/F]}{2}\right) \qquad (28.8)$$

For symmetrical samples such as the circle or square in Figure 28.4, Rr = 1 and F = 1. The van der Pauw equations are based on the assumption of negligibly small contacts located on the sample periphery. Real contacts have finite dimensions and may not be exactly on the periphery of the sample. Corner contacts introduce less error than contacts placed in the center of the sample sides. However, if the contact length is less than about 10% of the side length, the correction is negligible for either contact placement [15].
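Equation 28.8 has no closed-form solution for F, but it is easily solved numerically. A minimal sketch (assuming SciPy is available for the root finder):

```python
import math
from scipy.optimize import brentq

def vdp_F(Rr):
    """Correction factor F from Equation 28.8 for resistance ratio Rr >= 1."""
    if abs(Rr - 1.0) < 1e-12:
        return 1.0          # symmetric sample
    g = lambda F: (Rr - 1) / (Rr + 1) - (F / math.log(2)) * math.acosh(0.5 * math.exp(math.log(2) / F))
    return brentq(g, 0.05, 1.0)

def vdp_resistivity(R12_34, R23_41, t):
    """Resistivity (ohm cm) from Equation 28.7; t is the sample thickness in cm."""
    Rr = max(R12_34, R23_41) / min(R12_34, R23_41)
    return (math.pi * t / math.log(2)) * 0.5 * (R12_34 + R23_41) * vdp_F(Rr)

print(vdp_resistivity(10.0, 10.0, 0.05))   # ~2.27 ohm cm for a symmetric sample
```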

FIGURE 28.3 Arbitrarily shaped sample with four contacts.


FIGURE 28.4 Typical symmetrical circular and square sample geometries.

Geometries other than those in Figure 28.4a and b are also used. One of these is the Greek cross in Figure 28.4c. Using photolithographic techniques, it is possible to make such structures very small and place many of them on a wafer for uniformity characterization. The sheet resistance of the shaded area is determined in such measurements. For structures with L = W, the contacts should be placed so that d ≤ L/6 from the edge of the cross, where d is the distance of the contact from the edge [16]. Surface leakage can introduce errors if L is too large [17]. A variety of cross-sheet resistor structures have been investigated, and their performance compares well with conventional bridge-type structures [18]. The measured voltages in cross and van der Pauw structures are lower than those in conventional bridge structures. The cross and the bridge structures are combined in the cross-bridge structure in Figure 28.5, allowing the sheet resistance and the linewidth to be determined [19]. Such measurements have shown high levels of repeatability. For a linewidth of 1 µm, the repeatability has been demonstrated to be on the order of 1 nm [20]. Precisions of 0.005 µm and lines as narrow as 0.1 µm have been measured. The sheet resistance is given by

$$R_{sh} = \frac{\pi}{\ln 2}\, \frac{V_{34}}{I_{12}} \qquad (28.9)$$

where V34 = V3 − V4 and I12 is the current flowing into contact I1 and out of contact I2. The right part of Figure 28.5 is a bridge resistor to determine the linewidth W. The voltage along the bridge resistor is

$$V_{45} = \frac{R_{sh}\, L\, I_{26}}{W} \qquad (28.10)$$

where V45 = V4 − V5 and I26 is the current flowing from contact 2 to contact 6. From Equation 28.10, the linewidth is

$$W = \frac{R_{sh}\, L\, I_{26}}{V_{45}} \qquad (28.11)$$

FIGURE 28.5 A cross-bridge sheet resistance and linewidth test structure.


with Rsh determined from the cross structure and Equation 28.9. A key assumption in this measurement is that the sheet resistance is identical for the entire test structure. Since the bridge structure in Figure 28.5 is suitable for resistance measurements, it can be used to characterize "dishing" during chemical–mechanical polishing of semiconductor wafers, where soft metal lines tend to polish thinner in the central portion than at the edges, leading to non-uniform thickness. This is particularly important for soft metals such as copper. With the resistance inversely proportional to metal thickness, resistance measurements can be used to determine the amount of dishing [21]. An assumption in Equation 28.11 is that the sheet resistance in the bridge portion of the test structure is the same as that in the cross portion, i.e., in both cross-hatched areas. If that is not true, W will be in error [22]. What exactly is L? Is it the center-to-center spacing as illustrated in Figure 28.5? That depends on the exact layout of the structure. With arms 4 and 5 extending only below the measured line, as in Figure 28.5, L is approximately as shown. For symmetrical structures, i.e., arms 4 and 5 extending above as well as below the line, an effective length is Leff ≈ L − W1, where W1 is the arm width. For long structures, i.e., L ≈ 20W, this correction is negligible, but for short lines it must be considered, because the contact arms distort the current path. Other considerations are t ≤ W, W ≤ 0.005L, d ≥ 2t, t ≤ 0.03s, and s ≤ d [23].

Strengths and Weaknesses. The method's strength lies in its established use and the fact that it is an absolute measurement without recourse to calibrated standards. It has been used for many years in the semiconductor industry and is well understood. With the advent of wafer mapping, the four-point probe has become a very powerful process-monitoring tool. The equipment is commercially available. The weakness of the four-point probe technique is the surface damage it produces and the metal it deposits on the sample. The damage is not very severe, but it is sufficient to preclude measurements on product wafers. The probe also samples a relatively large volume of the wafer, preventing high-resolution measurements.
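Pulling Equation 28.9 through Equation 28.11 together, a short sketch with hypothetical cross-bridge readings:

```python
import math

# Hypothetical forced currents and sensed voltages
I12, V34 = 1e-3, 2.2e-3        # cross portion
I26, V45 = 1e-3, 0.110         # bridge portion
L = 100e-4                     # designed bridge length, cm (100 um)

R_sh = (math.pi / math.log(2)) * V34 / I12     # Equation 28.9
W = R_sh * L * I26 / V45                       # Equation 28.11
print(f"R_sh = {R_sh:.2f} ohm/sq, W = {W * 1e4:.2f} um")   # ~9.97 ohm/sq, ~9.07 um
```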

28.2.1.2 Modulated Photoreflectance

Purpose. Modulated photoreflectance generates dose contour maps of ion-implanted samples without the need for implant activation, because the contour map is measured immediately after implantation. Method. It measures the modulation of the optical reflectance of a sample in response to waves that are generated when a semiconductor sample is subjected to periodic heat stimuli. In the modulated photoreflectance or thermal wave method, an Ar⁺ ion laser beam is modulated at a frequency of 0.1–10 MHz. A periodic temperature variation is established in the semiconductor in response to this periodic heat stimulus, with an amplitude around 10°C in silicon. The thermal wave diffusion length at a 1 MHz modulation frequency is 2–3 µm [24]. The small temperature variations cause small volume changes of the wafer near the surface. These changes include both thermoelastic and optical effects, and they are detected with a second laser, the probe beam, by measuring a reflectivity change. The apparatus and a contour map are illustrated in Figure 28.6. Both pump and probe laser beams are focused to approximately 1 µm diameter spots, giving the technique a spatial resolution of around 1 µm and allowing measurements on patterned wafers. Modulated photoreflectance is a comparative technique: to convert from thermal wave signal to implant dose requires calibrated standards with known implant doses. The ability to determine ion implant densities by thermal waves depends on the conversion of the single-crystal substrate to a partially disordered layer by the implant process. The thermal-wave-induced thermoelastic and optical effects change in proportion to the number of implanted ions. Modulated photoreflectance implant monitoring is subject to post-implant damage relaxation. The technique is contactless and non-destructive and has been used to measure implant doses from 10¹¹ to 10¹⁵ cm⁻² [25]. Measurements can be made on bare and on oxidized wafers. The ability to characterize oxidized samples has the advantage of allowing measurements of implants through an oxide without removing it. The technique can discriminate between implant species, since the lattice damage increases with implant atom size and the thermal wave signal depends on the lattice damage. It has been used for ion implantation monitoring, wafer polish damage, and reactive and plasma etch damage studies.
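Since the technique is comparative, readings in thermal-wave (TW) units must be mapped to dose through measured standards. A minimal sketch, with hypothetical calibration points, interpolates on log–log axes:

```python
import numpy as np

# Hypothetical calibration standards: implant dose (ions/cm^2) vs. TW units
cal_dose = np.array([1e11, 1e12, 1e13, 1e14, 1e15])
cal_tw = np.array([120.0, 260.0, 480.0, 700.0, 950.0])

def tw_to_dose(tw):
    """Estimate implant dose from a TW reading by log-log interpolation."""
    return 10 ** np.interp(np.log10(tw), np.log10(cal_tw), np.log10(cal_dose))

print(f"{tw_to_dose(600.0):.2e} ions/cm^2")
```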


FIGURE 28.6 (a) Schematic diagram of the modulated photoreflectance apparatus and modulated photoreflectance contour maps; (b) boron, 5×10¹² cm⁻², 30 keV, 600 TW units; 0.5% intervals. 200 mm diameter Si wafers. Data courtesy of Marylou Meloni, Varian Ion Implant Systems.

Strengths and Weaknesses. The major strength of modulated photoreflectance is the ability to measure the implant dose immediately after implantation, without wafer annealing. This is a significant time saver. It has the ability to detect low-dose implants and to display the information as contour maps. The equipment is commercially available. Its main weakness is the qualitative nature of the measurement: the thermal wave signal is proportional to the damage in the sample, but the precise dose is difficult to determine quantitatively.

28.2.1.3 Capacitance–Voltage Profiling

Purpose. The capacitance–voltage (C–V) technique is used to determine the doping density profile of lightly and moderately doped regions. Method. In the C–V technique, the width of a reverse-biased space-charge region (scr) of a semiconductor junction device [Schottky diode, pn junction, metal-oxide semiconductor (MOS) device] is changed with an applied dc voltage [26]. The capacitance is determined by superimposing a small-amplitude ac voltage on the dc voltage. The ac voltage typically varies at frequencies of 10 kHz to 1 MHz with amplitude of 10–20 mV, but other frequencies and other voltages can be used. The capacitance of a reverse-biased junction is

$$C = \frac{K_s \varepsilon_0 A}{W} \qquad (28.12)$$

where Ks is the semiconductor dielectric constant, ε0 the permittivity of free space, A the area, and W the scr width. The doping density is related to the capacitance C and dC/dV through the relation

$$N_A(W) = -\frac{C^3}{q K_s \varepsilon_0 A^2\, \mathrm{d}C/\mathrm{d}V} = \frac{2}{q K_s \varepsilon_0 A^2\, \mathrm{d}(1/C^2)/\mathrm{d}V} \qquad (28.13)$$

The region that is profiled is the edge of the reverse-biased scr width, W, given by

$$W = \frac{K_s \varepsilon_0 A}{C} \qquad (28.14)$$


The doping density is obtained from a C–V curve by taking the slopes dC/dV or d(1/C²)/dV. For a Schottky barrier diode, there is no ambiguity in the scr width, since it can only spread into the substrate; the scr spreading into the metal is totally negligible. The doping profile equations are equally applicable to asymmetrical p⁺n and n⁺p junctions. Metal-oxide semiconductor capacitors (MOS-C) and MOS field effect transistors (MOSFETs) can also be profiled [27]. However, care must be taken to eliminate minority carriers. For MOSFETs, minority carriers are eliminated by reverse biasing the source–drain junctions. For a MOS-C, the device must remain in deep depletion during the measurement to eliminate the minority carrier contribution, ensured with a rapidly varying dc ramp voltage or a pulsed gate voltage. The capacitance is measured immediately after the pulse, before minority carriers can be generated. Equation 28.13 applies to MOS-Cs when both interface states and minority carriers can be neglected, but the scr width expression must be modified to

$$W = K_s \varepsilon_0 A \left(\frac{1}{C} - \frac{1}{C_{ox}}\right) \qquad (28.15)$$
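A minimal sketch of the profiling arithmetic in Equation 28.13 and Equation 28.14 for a Schottky diode, using synthetic C–V data (the numbers are illustrative only; V here is the reverse-bias magnitude, so the sign convention of Equation 28.13 is absorbed):

```python
import numpy as np

q, Ks, eps0 = 1.602e-19, 11.7, 8.854e-14    # C; Si dielectric constant; F/cm
A = 7.85e-3                                 # diode area, cm^2 (1 mm diameter)

VR = np.linspace(0.0, 5.0, 26)              # reverse-bias magnitude, V
C = 1e-10 / np.sqrt(1.0 + VR)               # synthetic C-V of a uniform profile, F

dinvC2_dV = np.gradient(1.0 / C**2, VR)         # d(1/C^2)/dV
N = 2.0 / (q * Ks * eps0 * A**2 * dinvC2_dV)    # Equation 28.13, cm^-3
W = Ks * eps0 * A / C                           # Equation 28.14, cm

print(f"N ~ {N[10]:.1e} cm^-3 at W = {W[10] * 1e4:.2f} um")   # ~2e15 cm^-3
```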

The spatial resolution of the measured profile is limited by the Debye length, because the capacitance is determined by the movement of majority carriers, and the majority carrier distribution cannot follow abrupt spatial changes in the dopant density profile. Detailed calculations show that if a doping density step occurs within one Debye length, the majority carrier density differs appreciably from the doping density profile [28]. The profile limits are determined by Debye length, LD, considerations near the surface and by breakdown scr width limits deeper within the sample, as illustrated in Figure 28.7. A contactless capacitance and doping profiling version uses a contact held in close proximity to the semiconductor wafer. The sensor electrode, 1 mm in diameter and coated with a high-dielectric-strength thin film, is surrounded by an independently biased guard electrode. The sensor electrode is held above the wafer by a porous ceramic air bearing, which provides a very stable distance from the wafer as long as the load on the air bearing does not change, as shown in Figure 28.8. The controlled load is provided by pressurizing a bellows. As air escapes through the porous surface, a cushion of air forms on the wafer, which acts like a spring and prevents the porous surface from touching the wafer. The porosity and air pressure are designed such that the disk floats approximately 0.5 µm above the wafer surface. A stainless steel bellows acts to constrain the pressurized air and to raise the porous disk when the air pressure is


FIGURE 28.7 Spatial profiling limits. The “3LD” line is the lower limit for metal-oxide semiconductor capacitor profiling, and the “WBD” line is the upper profile limit governed by bulk breakdown.


FIGURE 28.8 Contactless doping profiling arrangement. Pressurized air maintains the electrode at approximately 0.5 µm above the sample surface.

reduced. If the air pressure fails, the disk moves up rather than falling down and damaging the wafer [29]. To prepare the wafer, it is placed in a low-concentration ozone environment at a temperature of about 450°C; this reduces the surface charge on the wafer (especially critical for n-Si), makes it more uniform, reduces the surface generation velocity, and allows deeper depletion [30]. A recent comparison of epitaxial resistivity profiles measured by the contactless method and by Hg-probe C–V showed very favorable agreement [31]. The capacitance of the air gap is measured by biasing the semiconductor surface in accumulation. Light is used to collapse any possible scr due to surface charge while the sensor is lowered and while the air gap modulation due to electrostatic attraction is determined, to eliminate any series space-charge capacitance. Assuming that the air gap does not vary with changing electrode voltage, the capacitance of the air gap is the measured capacitance at its maximum value. The doping density profile is determined from Equation 28.13 and Equation 28.15, with Cox in Equation 28.15 replaced by Cair.

28.2.1.4 Lateral Doping Profiling

The two main techniques that have emerged for lateral doping density profiling are scanning capacitance microscopy (SCM) and scanning spreading resistance microscopy (SSRM) [32]. The SCM has received much attention as a lateral profiling tool [33]. A small-area capacitive probe measures the capacitance of a metal/semiconductor or MOS contact. The SCM combines atomic force microscopy (AFM) with highly sensitive capacitance measurements and is able to measure the local C–V characteristics between the SCM tip and a semiconductor with nanometer resolution. The metallized AFM tip is used for imaging the wafer topography in conventional contact mode and also serves as an electrode for simultaneously measuring the MOS capacitance. The SCM images of actively biased cross-sectional MOSFETs and of operating pn junctions allow visualization of the operation of semiconductor devices. The semiconductor device is usually cleaved or polished so that the device cross-section is exposed, as shown in Figure 28.9, although the sample top, without cleaving, can also be measured. An oxide is deposited on the cross-sectional area, and the probe is scanned across the area in contact mode, measuring the capacitance variations in the nanometric probe/oxide/silicon MOS-C by applying a high-frequency (hf) ac voltage between the probe and the semiconductor. For constant electrical bias, the scr in the MOS-C is wider for lower doping densities. Dedicated simulation models are necessary to obtain a realistic conversion curve that relates the local SCM signal to the local carrier density. A schematic of the measurement in Figure 28.10 shows the conducting AFM tip on the oxidized sample, the C–V, and dC/dV curves. The voltage is applied to the substrate in this case; in some cases, it is applied to the tip. The shape of the dC/dV curve identifies the doping type. The SCM is sensitive to carrier densities from 10¹⁵ to 10²⁰ cm⁻³, with a lateral resolution of 20–150 nm, depending on tip geometry and dopant density. Extraction of absolute dopant densities requires reverse simulation incorporating tip geometry


FIGURE 28.9 Scanning capacitance schematic.

and sample oxide thickness. Example SCM maps are given in Figure 28.11, showing the formation of a channel in a MOSFET with increasing gate voltage [34]. Two standard SCM methods have been developed for two-dimensional dopant profiling: in the ΔC mode, a constant-amplitude ac bias voltage is applied between the tip and sample, and in the ΔV mode, a feedback loop adjusts the applied ac bias voltage to keep the change in capacitance, ΔC, constant as the tip is moved from one region to another [35]. In the former, the ac bias voltage produces a corresponding change in capacitance measured by a lock-in amplifier. As the tip moves from a region of high dopant density to a more lightly doped region, the lock-in amplifier output increases owing to the larger C–V curve slope in the lightly doped region. In the latter, a feedback loop adjusts the applied ac bias voltage to keep ΔC constant as the tip is moved from one region to another. In this case, the magnitude of the required ac bias voltage is measured to determine the dopant density. The advantage of the ΔC mode is simplicity. The disadvantage of this system is that a large ac bias voltage (several volts ac) is needed to measure a finite SCM signal at high doping densities. When this same


FIGURE 28.10 (a) Schematic of the atomic force microscopy/scanning capacitance microscopy (AFM/SCM) design, (b) C–V curve of an n-type substrate with bias applied to the substrate, and (c) dC/dV curve. The sign identifies the dopant type.


FIGURE 28.11 Sequence of SCM images of an Si p-channel MOSFET with VD = −0.1 V, VS = VB = 0 V, and VG = (a) 0, (b) −1.05, and (c) −1.75 V. The progression of the SCM images shows the formation of a conducting channel between the source and the drain. The schematic drawing in (a) shows the approximate locations of the polysilicon gate, titanium nitride spacers, and the titanium silicide contacts. Images were acquired with Vac = 2.0 V peak to peak and Vdc = 0 applied to the SCM tip. (After Nakakura, C.Y., Tangyunyong, P., Hetherington, D.L., and Shaneyfelt, M.R., Rev. Sci. Instrum., 74, 127–133, 2003.)

voltage is applied to lightly doped silicon, it creates a larger depletion volume, reducing the spatial resolution and making accurate modeling more difficult. The advantage of the ΔV method is that the physical geometry of the depletion problem remains relatively constant as the tip is scanned from a lightly to a heavily doped region. The disadvantage is that an additional feedback loop is required. For reproducible measurements, samples must be prepared carefully. Factors that influence the repeatability and reproducibility of SCM measurements include sample-related problems (mobile and fixed oxide charges, interface states, non-uniform oxide thickness, surface humidity and contamination, sample aging, and water-related oxide traps), tip-related problems (increase of the tip radius, fracture of the tip apex, mechanical wear-out of the metal coating, and contaminants on the tip picked up from the sample), and problems related to the electrical operating conditions (amplitude of the ac probing signal in the capacitance sensor, scanning rate, compensation of the stray capacitance, electric-field-induced oxide growth, and dc tip bias voltage). The SSRM, based on the AFM, uses a small conductive tip to measure the local spreading resistance [36]. The resistance is measured between a sharp conductive tip and a large back surface contact. A precisely controlled force is used while the tip is stepped across the sample. The SSRM sensitivity and dynamic range are similar to conventional spreading resistance. The small contact size and small stepping distance allow measurements on the device cross-section with no probe conditioning. The high spatial


resolution allows direct two-dimensional nanometer spreading resistance profiling (nano-SRP) measurements, without the need for special test structures. Spatial resolution of 3 nm has been demonstrated [37]. For one- or two-dimensional carrier density profile measurements, the sample is cleaved to obtain a cross-section. The cleavage plane is polished using abrasive paper of decreasing grit size and finally colloidal silica to obtain a flat silicon surface. After polishing, the sample is cleaned to eliminate contaminants and finally rinsed in deionized water. The sole limitation is the requirement that the structure be sufficiently wide for the profile in the direction perpendicular to the cross-section of the sample to be uniform. The AFM equipment is standard, commercially available equipment. A conductive cantilever with a highly doped ion-implanted diamond tip can be used as a resistance probe. Diamond protects the tip from deformation due to the rather high loads (~50–100 µN) required to penetrate the native oxide layer and make good electrical contact. Coating the tip with a thin tungsten layer improves the conductivity. Like conventional SRP, nano-SRP needs a calibration curve to convert the measured resistances into carrier densities. The resistance is measured at a bias of ~5 mV, as in conventional SRP. Scanning the tip over the cross-section of the sample provides a two-dimensional map of the local spreading resistance, with a spatial resolution set by the tip radius of typically 10–15 nm. A straight conversion of spreading resistance to local resistivity is made. As in conventional spreading resistance measurements, a proper model must be used to interpret the experimental data. It is frequently assumed that the contact between the probe and the sample is ohmic. However, it has been shown that the contact is not ohmic [38]: the I–V curves vary from an ohmic-like shape in heavily doped areas to a rectifying shape in lightly doped areas, and surface states induced by the sample preparation influence the I–V curves. The presence of surface states due to sample polishing reduces the current, particularly pronounced in lightly doped areas.

Strengths and Weaknesses. The method's strength lies in its ability to give the carrier density profile with little data processing; a simple differentiation of the C–V data suffices. It is an ideal method for moderately doped materials and is non-destructive when a mercury probe is used. It is well established, with commercial equipment available. The major weakness of the differential capacitance profiling method is its limited profile depth, limited at the surface by the Debye length and in depth by junction breakdown. The latter limitation is particularly serious for heavily doped regions. Further limitations are due to the Debye limit for abrupt profiles, which applies to all carrier profiling techniques.

28.2.2 MOSFET Device Characterization

28.2.2.1 Threshold Voltage

Purpose. The measurement of threshold voltage, VT, is required for process control and for channel length/width and series resistance determination. The threshold voltage for n-channel devices, accounting for short- and narrow-channel effects, and ion implantation, is given by

$$V_T = V_{FB} + 2\phi_F + \left(1 - \frac{a}{L}\right)\frac{\sqrt{2 q K_s \varepsilon_0 N_A (2\phi_F - V_{BS})}}{C_{ox}} + \frac{q D_i}{C_{ox}} + \frac{b}{W} \qquad (28.16)$$

where L is the gate length, W the gate width, VFB the flatband voltage, a the short-channel parameter, b the narrow-channel parameter, VBS the substrate–source voltage, and Di the acceptor implant dose. Method. One of the most common threshold voltage measurement technique is the linear extrapolation method with the drain current measured as a function of gate voltage at a low drain voltage of typically 50–100 mV to ensure operation in the linear MOSFET region [39]. The drain current vs. gate voltage curve is extrapolated to IDZ0 and the threshold voltage is determined from the extrapolated gate voltage VGi by

$$V_T = V_{Gi} - V_D/2 \qquad (28.17)$$

Equation 28.17 is strictly only valid for negligible series resistance. Fortunately, series resistance is usually negligible at the low drain currents where threshold voltage measurements are made, but it can be appreciable in lightly doped drain (LDD) devices. The ID–VG curve is non-linear at gate voltages below VT due to subthreshold current, and above VT due to series resistance and mobility degradation effects. Common practice is to find the point of maximum slope on the ID–VG curve from the maximum in the transconductance, gm = ΔID/ΔVG, fit a straight line to the ID–VG curve at that point, and extrapolate to ID = 0, as illustrated in Figure 28.12a, giving VT = 0.9 V. Failure to correct for series resistance and mobility degradation leads to an underestimate of VT [40]. It is obvious from Figure 28.12a that the drain current at the threshold voltage is higher than zero. This is utilized in the constant drain current method, where the gate voltage at a specified threshold drain current, IT, is taken to be the threshold voltage. This technique lends itself readily to threshold voltage mapping. In order to make IT independent of device geometry, IT = ID/(Weff/Leff) is sometimes specified at a current around 10–50 nA, but other values have been used [41]. In Figure 28.12b, the threshold voltages for ID = 1 μA and ID = 10 μA are shown. These two VTs clearly differ from each other and from the linear extrapolated value. Nevertheless, the method has found wide application, provided a consistent drain current is chosen. In the subthreshold method, the drain current is measured as a function of gate voltage below threshold and plotted as log(ID) vs. VG. The subthreshold current depends linearly on gate voltage in such a semilog plot. The gate voltage at which the plot departs from linearity is sometimes taken as the threshold voltage. However, for the data of Figure 28.12b, this point yields a threshold voltage of VT = 0.87 V, somewhat lower than that determined by the linear extrapolation method (VT = 0.9 V). The drain current ratio method was developed to avoid the dependence of the extracted VT on mobility degradation and parasitic series resistance [42]. The drain current is

$$I_D = \frac{W_{eff}\,\mu_{eff}\,C_{ox}\,(V_{GS} - V_T)\,V_{DS}}{(L - \Delta L) + W_{eff}\,\mu_{eff}\,C_{ox}\,(V_{GS} - V_T)\,R_{SD}} \qquad (28.18)$$

Using

$$\mu_{eff} = \frac{\mu_0}{1 + \theta\,(V_{GS} - V_T)} \qquad (28.19)$$


where μ0 is the low-field mobility and θ the mobility degradation factor, allows Equation 28.18 to be written as


FIGURE 28.12 Threshold voltage determination by (a) the linear extrapolation technique and (b) the threshold drain current and subthreshold techniques; tox = 17 nm, W/L = 20 μm/0.8 μm. Data courtesy of M. Stuhl, Medtronic Corp.


$$I_D = \frac{W C_{ox}\,\mu_0}{L}\,\frac{(V_{GS} - V_T)\,V_{DS}}{1 + \theta_{eff}\,(V_{GS} - V_T)} \qquad (28.20)$$

where

$$\theta_{eff} = \theta + (W/L)\,\mu_0\,C_{ox}\,R_{SD} \qquad (28.21)$$

The transconductance is given by

$$g_m = \frac{\partial I_D}{\partial V_{GS}} = \frac{W C_{ox}\,\mu_0}{L\,[1 + \theta_{eff}\,(V_{GS} - V_T)]^2}\,V_{DS} \qquad (28.22a)$$

The $I_D/g_m^{1/2}$ ratio

$$\frac{I_D}{\sqrt{g_m}} = \sqrt{\frac{W C_{ox}\,\mu_0\,V_{DS}}{L}}\,(V_{GS} - V_T) \qquad (28.22b)$$

is a linear function of gate voltage, whose intercept on the gate voltage axis is the threshold voltage. This method is valid provided the gate voltage is confined to small variations near VT and the assumptions VDS/2 ≪ (VGS − VT) and ∂RSD/∂VGS ≈ 0 are satisfied. The low-field mobility μ0 can be determined from the slope of the ID/gm^(1/2) vs. VGS − VT plot, and the mobility degradation factor is

$$\theta_{eff} = \frac{I_D - g_m\,(V_{GS} - V_T)}{g_m\,(V_{GS} - V_T)^2} \qquad (28.23)$$

from which θ can be determined, provided RSD is known. A good overview of threshold voltage measurement techniques is given in Reference [43]. A comparison of several methods carried out as a function of channel length [44] showed that the threshold voltage can vary widely depending on how it is measured. In all threshold voltage measurements, it is important to state the sample measurement temperature, since VT depends on temperature. A typical VT temperature coefficient is −2 mV °C⁻¹, but it can be higher [45]. Strengths and Weaknesses. The strength of the linear extrapolation technique is its common and accepted usage. Its weaknesses are the necessity of differentiating the ID–VG curve and of fitting a line, although these steps are automated today. The strength of the threshold drain current method is its simplicity. Its weakness is the choice of the threshold current; different values of IT result in different values of VT.
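As a concrete illustration of the linear extrapolation method, the following minimal sketch locates the maximum-gm point numerically, extrapolates the tangent to ID = 0, and applies the VD/2 correction of Equation 28.17. The ID–VG data below are synthetic and fabricated for illustration only:

```python
import numpy as np

def vt_linear_extrapolation(vg, id_, vd=0.1):
    """Linear extrapolation method: locate the maximum-transconductance
    point, fit the tangent there, extrapolate to ID = 0 to obtain VGi,
    and apply the VD/2 correction of Equation 28.17."""
    vg, id_ = np.asarray(vg, float), np.asarray(id_, float)
    gm = np.gradient(id_, vg)            # numerical gm = dID/dVG
    i = np.argmax(gm)                    # point of maximum slope
    vgi = vg[i] - id_[i] / gm[i]         # tangent intercept with ID = 0
    return vgi - vd / 2.0                # Equation 28.17

# Synthetic ID-VG curve (VT = 0.9 V, mild mobility degradation)
vg = np.linspace(0.0, 2.0, 201)
vov = np.clip(vg - 0.9, 0.0, None)
id_ = 1e-4 * vov / (1.0 + 0.5 * vov)
print(f"VT = {vt_linear_extrapolation(vg, id_):.2f} V")
```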

28.2.2.2 Effective Channel Length and Source–Drain Resistance

Purpose. The purpose is to determine the MOSFET effective channel length or width and the source/drain series resistance. Method. The MOSFET current–voltage equation, valid for low drain voltage, is

$$I_D = \frac{W_{eff}\,\mu_{eff}\,C_{ox}\,(V_{GS} - V_T)\,V_{DS}}{(L - \Delta L) + W_{eff}\,\mu_{eff}\,C_{ox}\,(V_{GS} - V_T)\,R_{SD}} \qquad (28.24)$$

where Weff = W − ΔW, Leff = L − ΔL, VT is the threshold voltage, W the gate width, L the gate length, Cox the oxide capacitance per unit area, μeff the effective mobility, and RSD the sum of the source and drain resistances. W and L usually refer to the mask dimensions. Equation 28.24 is the basis for most techniques to determine RSD, μeff, Leff, and Weff. The techniques usually require at least two devices of different channel lengths [46].


In one commonly used method, Rm = VD/ID is plotted vs. L as a function of gate voltage for devices with differing L [47]. The lines intersect at one point, giving both RSD and ΔL on the Rm and L axes, respectively. If the Rm vs. L lines fail to intersect at a common point, one can use linear regression to extract both RSD and ΔL [48]; a sketch of this extraction is given below. A variation of this method allows ΔL, RSD, μ0, and θ to be extracted, with μ0 the low-field mobility and θ the mobility degradation factor [49]. First, Rm is plotted against 1/(VG − VT), giving slope m = (L − ΔL)/(Weff μ0 Cox) and intercept Rmi = RSD + θ(L − ΔL)/(Weff μ0 Cox) = RSD + θm. Then plotting m vs. L, with slope 1/(Weff μ0 Cox) and intercept ΔL on the L axis, allows μ0 and ΔL to be determined. Lastly, Rmi is plotted against m, giving θ from the slope and RSD from the Rmi axis intercept. A technique valid for any mobility variation with gate voltage and any RSD is the "shift and ratio" method [50]. It uses one large device and several small devices (varying channel lengths, constant channel width). The slope S = dRm/dVG is plotted vs. VG for the large and one small device. One curve is shifted horizontally by a varying amount δ, and the ratio r = S(VG)/S(VG − δ) between the two devices is computed as a function of VG. When S is shifted by a voltage equal to the threshold voltage difference between the two devices, r is nearly constant, which is the key to this measurement. The method has been successfully used for MOSFETs with channel lengths below 0.2 μm. The best range for VG is from slightly above VT to about 1 V above VT. For LDD devices, one should use low gate overdrives to ensure high S, allowing dRSD/dVG to be neglected. Once ΔL is found, RSD can be determined. Strengths and Weaknesses. The strength of the "Rm vs. L" method is its simplicity. However, the Rm–L lines may not intersect at a single point, pointing to its weakness, especially for LDD devices.
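A minimal sketch of the linear-regression variant mentioned above: each gate voltage gives one Rm vs. L line, and the common intersection of the fitted lines yields ΔL and RSD. The device data below are synthetic:

```python
import numpy as np

def extract_rsd_dl(lengths_um, rm_ohm):
    """Fit Rm = a + b*L for each gate voltage (one row of rm_ohm per VG,
    one column per device length) and average the pairwise line
    intersections, which occur at L = dL, Rm = RSD."""
    L = np.asarray(lengths_um, dtype=float)
    fits = [np.polyfit(L, row, 1) for row in np.asarray(rm_ohm, dtype=float)]
    xs, ys = [], []
    for i in range(len(fits)):
        for j in range(i + 1, len(fits)):
            (b1, a1), (b2, a2) = fits[i], fits[j]
            x = (a2 - a1) / (b1 - b2)    # solve b1*x + a1 = b2*x + a2
            xs.append(x)
            ys.append(b1 * x + a1)
    return np.mean(xs), np.mean(ys)      # dL (um), RSD (ohm)

# Synthetic example with dL = 0.1 um and RSD = 50 ohm
L = np.array([0.5, 1.0, 2.0, 5.0])
rm = [50.0 + (L - 0.1) * k for k in (400.0, 250.0, 150.0)]  # three VG values
dl, rsd = extract_rsd_dl(L, rm)
print(f"dL = {dl:.2f} um, RSD = {rsd:.1f} ohm")
```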

28.2.2.3 Hot Carriers

Purpose. Hot carrier measurements determine the susceptibility of devices to hot carrier (electron and hole) degradation. Hot carriers are of concern in integrated circuits because, when electrons and/or holes gain energy in an electric field, they can be injected into the oxide to become oxide-trapped charge, they can drift through the oxide, they can create interface-trapped charge, and they can generate photons [51].

Method. The term hot carriers is somewhat misleading. The carriers are energetic. The carrier temperature T and energy E are related through the expression E = kT. At room temperature, E ≈ 25 meV for T = 300 K. When carriers gain energy by being accelerated in an electric field, their energy E increases; for example, T = 1.2×10⁴ K for E = 1 eV. Hence, the name hot carriers means energetic carriers, not that the entire device is hot. One method to determine hot carrier degradation in n-channel devices is to bias the device at maximum substrate current. The substrate current dependence on gate voltage is shown in Figure 28.13a. The substrate current depends on the channel lateral electric field. At low VG, with the device in saturation, the lateral electric field increases with increasing gate voltage until VG ≈ VD/3 to VD/2; Isub increases to a maximum at that gate voltage for n-channel devices. For higher gate voltages, the device enters its linear region and the lateral electric field decreases, as does the substrate current. To characterize the device susceptibility to hot carriers, the MOSFET is biased at maximum substrate current Isub,max for a certain time, and one device parameter, e.g., saturation drain current, threshold voltage, mobility, transconductance, or interface trap density, is measured [52]. The transconductance is often used. This process is repeated until the measured parameter has changed by some amount (typically 10%–20%), as shown in Figure 28.13b for IDsat. This time is the lifetime. Next, the substrate current is changed by choosing different gate/drain voltages, and the process is repeated and plotted as lifetime vs. Isub, as shown in Figure 28.13c. The data points, measured over a restricted range, are extrapolated to the chip life, typically 10 years, giving the maximum Isub that should not be exceeded during device operation. The chief degradation mechanism for n-channel MOSFETs is believed to be interface trap generation, and the substrate current is a good monitor for such damage. There are, of course, other measurements that could be used, such as interface trap measurements. Isub is commonly used because it is simple to measure. The main degradation mechanism for p-channel devices is believed to be trapped electrons near the gate/drain interface, and it manifests itself as a maximum in the gate current.



FIGURE 28.13 (a) Substrate current, (b) drain current degradation, and (c) lifetime plots for hot carrier degradation. Substrate current plots courtesy of L. Liu, Motorola.

Hence, instead of Isub for n-channel devices, in p-channel devices IG is monitored during degradation measurements [53]. Hot carrier damage can be reduced by reducing the electric field at the drain, for example, by forming LDDs and by using deuterium instead of hydrogen during the post-metallization anneal at temperatures around 400°C–450°C, since the Si–D bond is stronger than the Si–H bond [54]. To characterize plasma-induced damage, a common test structure is the antenna structure with a large conducting area, consisting of polysilicon or metal layers, attached to a MOSFET or MOS-C gate [55]. Frequently, the antenna resides on a thicker oxide than the MOSFET gate oxide. The ratio between antenna area and gate oxide area has typical values of 500–5000. The antenna test structure is placed into a plasma environment; charge builds up on the antenna and channels gate current through the MOSFET gate oxide, where it generates damage that is subsequently detected by measuring the transconductance, drain current, threshold voltage, etc. The highest VT sensitivity exists for gate oxides 4–5 nm or thicker; below 4 nm, the gate leakage current is a more suitable measure. Another test structure, the charge monitor, is based on an electrically erasable programmable read-only memory structure, consisting of a MOSFET with a floating gate inserted between the substrate and the control gate. The control gate is a large-area collecting electrode [56]. The device is exposed to the plasma; charge builds up and develops a control gate voltage, part of which is capacitively coupled to the floating gate. For sufficiently high floating gate voltage, charge is injected from the substrate and is trapped on the floating gate, changing the device threshold voltage. The threshold voltage is subsequently measured and converted to charge for a contour map of the plasma charge distribution. The potential sensors are implemented in pairs, where one sensor measures negative and the other positive potentials.


Strengths and Weaknesses. The strength of the Isub measurement is its simplicity. The weakness is that measurements are made under accelerated stress conditions, i.e., with Isub higher than normal. The resulting data are subsequently extrapolated to normal operating voltages, but the degradation mechanisms active at high stress may not be the same as those active under normal operating conditions.
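The extrapolation step can be sketched as follows, assuming the commonly used power-law model, lifetime ∝ (Isub/W)^−m, for the stress data; both the model choice and the numbers below are illustrative assumptions, not measured values:

```python
import numpy as np

def max_isub_for_life(isub_ua_per_um, lifetime_h, target_h=10 * 365 * 24):
    """Fit accelerated-stress data to lifetime = C * (Isub/W)**(-m)
    (a straight line on a log-log plot) and extrapolate to the target
    chip life (default 10 years) to obtain the maximum allowable
    substrate current per unit channel width."""
    x = np.log10(np.asarray(isub_ua_per_um, dtype=float))
    y = np.log10(np.asarray(lifetime_h, dtype=float))
    m, c = np.polyfit(x, y, 1)                       # log tau = m*log Isub + c
    return 10.0 ** ((np.log10(target_h) - c) / m)    # invert at target life

# Hypothetical stress data: Isub/W (uA/um) vs. measured lifetime (h)
isub = [100, 200, 400, 800]
tau = [4.0e4, 5.0e3, 6.3e2, 7.9e1]
print(f"Isub,max = {max_isub_for_life(isub, tau):.0f} uA/um for 10-year life")
```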

28.2.3 Oxide Charges, Interface States, and Oxide Integrity

28.2.3.1 Mobile Charge, Interface States

Purpose. The purpose of these measurements is the determination of mobile charge in insulators and of interface-trapped charge at the SiO2/Si interface.

Method. Mobile charge in SiO2 is due primarily to the ionic impurities Na+, Li+, K+, and perhaps H+. Sodium is usually the dominant contaminant, but potassium may be introduced during chemical–mechanical polishing, and lithium may originate from some pump oils. There are two chief methods: bias-temperature stress (BTS) and triangular voltage sweep (TVS). In the BTS technique, the MOS-C is heated to about 150°C–200°C for 5–10 min with a gate bias that produces an oxide electric field of around 10⁶ V/cm. The device is then cooled to room temperature under bias, and a C–V curve is measured. The procedure is then repeated with the opposite bias polarity. The mobile charge Qm is determined from the flatband voltage shift, ΔVFB, according to the equation

$$Q_m = -C_{ox}\,\Delta V_{FB} \qquad (28.25)$$

The reproducibility of such measurements becomes questionable as mobile charge densities approach 10⁹ cm⁻². For example, the flatband voltage shift in a 10 nm thick oxide due to a 10⁹ cm⁻² mobile ion density is 0.5 mV, which is difficult to measure. In the TVS method, the MOS-C is held at an elevated, constant temperature of 200°C–300°C and both low-frequency (lf) and high-frequency (hf) C–V curves are measured, as shown in Figure 28.14 [57]. Mobile ions are detected as the difference between the two curves [58]. The quasi-static method is the most common interface-trapped charge measurement method for the MOS-C. One measures the lf and hf C–V curves at room temperature. Interface states are assumed to respond to the lf, but not to the hf, measurement. In terms of the measured Clf and Chf, the interface trap density Dit is


FIGURE 28.14 High-frequency and low-frequency capacitance–gate voltage curves measured at T = 250°C. The mobile charge density is determined from the area between the two curves. (After Stauffer, L., Wiley, T., Tiwald, T., Hance, R., Rai-Choudhury, P., and Schroder, D.K., Solid-State Technol., 38, S3–S8, 1995.)


$$D_{it} = \frac{C_{ox}}{q^2}\left(\frac{C_{lf}/C_{ox}}{1 - C_{lf}/C_{ox}} - \frac{C_{hf}/C_{ox}}{1 - C_{hf}/C_{ox}}\right) \qquad (28.26)$$

Equation 28.26 gives Dit from the onset of inversion to a surface potential toward the majority carrier band edge where the ac measurement frequency equals the inverse of the interface trap emission time constant. Typically, this corresponds to an energy of about 0.2 eV from the majority carrier band edge. The lower limit of Dit determined with the quasi-static technique lies around 10¹⁰ cm⁻² eV⁻¹. Why is Cit = q²Dit used here when most texts use Cit = qDit? Cit = qDit is quoted in well-respected texts [59], but if we substitute units, something is not right. With Dit in cm⁻² eV⁻¹ (the usual units) and q in C, the units for Cit are

$$\frac{\mathrm{Coul}}{\mathrm{cm^2\,eV}} = \frac{\mathrm{Coul}}{\mathrm{cm^2\,Coul\cdot Volt}} = \frac{\mathrm{F}}{\mathrm{cm^2\,Coul}}$$

using eV = Coul·Volt and Volt = Coul/F. This suggests that the correct definition should be Cit = q²Dit. We must keep in mind, however, that in the expression E(eV) = qV, q = 1, not 1.6×10⁻¹⁹! Hence, Cit = q²Dit = 1×1.6×10⁻¹⁹ Dit. If Dit is given in cm⁻² J⁻¹, then Cit = (1.6×10⁻¹⁹)² Dit. This was first pointed out to me by Kwok Ng and can also be found in his book [60]. Interface states in MOSFETs, with gate areas too small for capacitance measurements, are most commonly determined with the charge pumping technique. The MOSFET source and drain are tied together and slightly reverse biased. A time-varying gate voltage (square, triangular, trapezoidal, sinusoidal, or tri-level) of sufficient amplitude to drive the surface under the gate into inversion and accumulation is applied. The charge pumping current, measured at the substrate, at the source/drain tied together, or at the source and drain separately, is given by

$$I_{cp} = qAfD_{it}\,\Delta E \qquad (28.27)$$

where A is the gate area and f the gate voltage frequency. The basic charge pumping technique gives an average value of Dit over the energy interval ΔE. The energy distribution of the interface traps can be obtained with the tri-level charge pumping technique [61]. Strengths and Weaknesses. The strength of BTS is the simplicity of requiring only hf C–V measurements; its weaknesses are the inability to distinguish between different mobile species and the time-consuming measurements requiring two heating/cooling cycles. The strength of TVS is the ability to measure more than one species, and to measure mobile ions in interlevel dielectrics, with only one heating step. Its weakness is the necessity of measuring both hf and lf C–V curves. The strength of the quasi-static method is the simplicity of measuring the capacitances of an MOS-C. Its main weaknesses are its sensitivity limit in the low 10¹⁰ cm⁻² eV⁻¹ range and the fact that a rather large-area capacitor must be used. The strength of charge pumping lies in the ability to characterize MOSFETs.
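As an illustration of Equation 28.26, a minimal sketch converting measured low- and high-frequency capacitances into Dit; per the units discussion above, a single factor of 1.6×10⁻¹⁹ enters the q² term when energies are expressed in eV. The capacitance values below are illustrative assumptions:

```python
import numpy as np

Q = 1.602e-19  # elementary charge (C)

def dit_quasistatic(c_lf, c_hf, c_ox):
    """Interface trap density (cm^-2 eV^-1) from lf and hf capacitances
    per cm^2 via Equation 28.26.  Only one factor of 1.6e-19 appears
    because the second q in q^2 equals 1 when energy is in eV."""
    c_lf, c_hf = np.asarray(c_lf, float), np.asarray(c_hf, float)
    term_lf = (c_lf / c_ox) / (1.0 - c_lf / c_ox)
    term_hf = (c_hf / c_ox) / (1.0 - c_hf / c_ox)
    return (c_ox / Q) * (term_lf - term_hf)

# Example: 10 nm oxide, Cox = 3.45e-7 F/cm^2, illustrative Clf and Chf
c_ox = 3.45e-7
print(f"Dit = {dit_quasistatic(0.60 * c_ox, 0.55 * c_ox, c_ox):.2e} cm^-2 eV^-1")
```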

28.2.3.2 Oxide Integrity

Purpose. Oxide integrity measurements are made to determine the breakdown voltage, breakdown electric field, breakdown time, or breakdown charge, all of which are indicative of oxide quality.

Method. Oxide integrity is determined by time-zero and time-dependent measurements. The time-zero method is an IG–VG measurement of an MOS device to oxide breakdown. Time-dependent measurements consist of constant or stepped gate current with the gate voltage being monitored, or of constant or stepped gate voltage with the gate current monitored as a function of time. When the oxide is driven into breakdown, one defines a charge-to-breakdown QBD as

$$Q_{BD} = \int_0^{t_{BD}} J_G\,dt \qquad (28.28)$$


FIGURE 28.15 Cumulative failure vs. oxide electric field; tox = 30 nm. (Originally published in Philips J. Res., 40, 1985, Philips Research.)

where tBD is the time to breakdown and JG the gate current density. QBD is the charge density flowing through the oxide necessary to break it down. In the stepped current technique, the current is applied for a certain time, e.g., 10 s; it is then increased by a factor of 10 for the same time, and so on, until the oxide breaks down [62]. Since the current in this method starts at a low value, e.g., 10⁻⁵ A/cm², it is a more sensitive technique for bringing out B-mode failures, i.e., oxide failures in the intermediate gate oxide electric field range of approximately 3–8 MV/cm. Oxide breakdown data are usually presented as the number of failures or cumulative failures as a function of oxide electric field or charge-to-breakdown. The statistics of oxide breakdown are described by extreme value distributions [63]. The assumptions underlying the use of extreme value distribution functions are that (1) a breakdown may take place at any spot out of a large number of spots, (2) the spot with the lowest dielectric strength gives rise to the breakdown event, and (3) the probability of breakdown at a given spot is independent of the occurrence of breakdown at other spots. For a device with area A and defect density D, the cumulative failure F is

$$F = 1 - \exp(-AD) \quad\text{or}\quad -\ln(1 - F) = AD \qquad (28.29)$$

as illustrated in Figure 28.15. Such a plot is known as a Weibull plot [64]. Choosing a particular electric field Eox (in Figure 28.15 it is 10 MV cm⁻¹) gives a value of −ln(1−F) equal to AD. Hence, this point gives the defect density, provided the area is known; e.g., −ln(1−F) = 0.05 gives D = 5 cm⁻² for A = 0.01 cm². Strengths and Weaknesses. The strength of constant or stepped current oxide integrity measurements lies in their simplicity and the ease of QBD extraction. A weakness of typical oxide integrity measurements is the problem all accelerated stress measurements face, namely, whether the failure mechanism under accelerated stress is the same as that under normal operating conditions. However, since failure under normal conditions takes many years, one is forced into accelerated stress measurements.
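Both quantities reduce to a few lines of computation. The following sketch integrates an assumed stepped-current stress waveform for QBD (Equation 28.28) and evaluates the defect density example from the text (Equation 28.29); the waveform is an illustrative assumption:

```python
import numpy as np

def charge_to_breakdown(t_s, jg_a_cm2):
    """QBD (C/cm^2) by trapezoidal integration of the gate current
    density over time up to breakdown (Equation 28.28)."""
    t, j = np.asarray(t_s, float), np.asarray(jg_a_cm2, float)
    return np.sum(0.5 * (j[1:] + j[:-1]) * np.diff(t))

def defect_density(fail_fraction, area_cm2):
    """Defect density D from one point of the Weibull plot,
    -ln(1 - F) = A*D (Equation 28.29)."""
    return -np.log(1.0 - fail_fraction) / area_cm2

# Assumed stepped-current stress: 10 s per step, current x10 per step,
# starting at 1e-5 A/cm^2
t = np.linspace(0.0, 40.0, 401)
jg = 10.0 ** (-5 + np.minimum(t // 10, 3))
print(f"QBD = {charge_to_breakdown(t, jg):.2e} C/cm^2")
print(f"D = {defect_density(0.05, 0.01):.1f} cm^-2")  # example from the text
```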

28.2.4 Defects and Carrier Lifetimes

28.2.4.1 Deep-Level Transient Spectroscopy

Purpose. Deep-level transient spectroscopy (DLTS) is used to determine the energy levels, densities, and capture cross-sections of deep-level impurities.

Method. For DLTS measurements, the device must be a junction device, which is repetitively pulsed into reverse bias while the capacitance, current, or charge is measured as a function of time.


The capacitance is most commonly measured. If the C–t curve from a transient capacitance experiment is processed so that a selected decay rate produces a maximum output, then a signal whose decay time changes monotonically with time reaches a peak when the decay rate passes through the rate window of a boxcar averager or the frequency of a lock-in amplifier. When observing a repetitive C–t transient through such a rate window while varying the decay time constant by varying the sample temperature, a peak appears in the output plot [65]. The C–t transient follows the exponential time dependence

$$C(t) = C_0\left[1 - \frac{N_T}{2N_D}\exp\left(-\frac{t}{\tau_e}\right)\right] \qquad (28.30)$$

where C0 is the steady-state capacitance, NT the deep-level impurity density, ND the doping density, and τe the emission time constant given by

$$\tau_e = \frac{\exp\left((E_c - E_T)/kT\right)}{\gamma_n\,\sigma_n\,T^2} \qquad (28.31)$$

where ET is the energy level, γn a constant, and σn the majority carrier capture cross-section. The C–t waveform is sampled at times t = t1 and t = t2, and the capacitance at t2 is subtracted from the capacitance at t1, i.e., δC = C(t1) − C(t2). Such a difference signal is a standard output feature of a double boxcar instrument. The temperature is slowly scanned while the device is repetitively pulsed between zero and reverse bias. A difference signal is generated when the time constant is on the order of the gate separation t2 − t1, and the capacitance difference passes through a maximum as a function of temperature. This is the DLTS peak. A plot of τe,max vs. 1/T yields ET and σn, while the magnitude of the δC–T peaks yields NT. τe,max is obtained from the sampling times t2 and t1 as

$$\tau_{e,max} = \frac{t_2 - t_1}{\ln(t_2/t_1)} \qquad (28.32)$$
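A minimal sketch of the rate-window analysis: Equation 28.32 gives the emission time constant selected by each (t1, t2) pair, and an Arrhenius fit of ln(τe·T²) vs. 1/kT per Equation 28.31 yields Ec − ET and σn. The γn value for electrons in Si used below (~1.07×10²¹ cm⁻² s⁻¹ K⁻²) is an assumed constant, not taken from the text:

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant (eV/K)

def tau_e_max(t1, t2):
    """Emission time constant selected by a double-boxcar rate window
    with sampling times t1 < t2 (Equation 28.32)."""
    return (t2 - t1) / np.log(t2 / t1)

def arrhenius_fit(T_peak, t1_list, t2_list, gamma_n=1.07e21):
    """Ec - ET (eV) and sigma_n (cm^2) from DLTS peak temperatures
    measured at several rate windows, using Equation 28.31 rewritten as
    ln(tau*T^2) = (Ec - ET)/kT - ln(gamma_n*sigma_n)."""
    taus = np.array([tau_e_max(a, b) for a, b in zip(t1_list, t2_list)])
    x = 1.0 / (K_B * np.asarray(T_peak, float))    # 1/kT (eV^-1)
    y = np.log(taus * np.asarray(T_peak, float) ** 2)
    slope, intercept = np.polyfit(x, y, 1)
    return slope, np.exp(-intercept) / gamma_n     # (Ec - ET), sigma_n

print(f"rate-window tau = {tau_e_max(1e-3, 1e-2):.2e} s")
```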


Well-maintained DLTS systems can detect δCmax/C0 ≈ 10⁻⁵ to 10⁻⁴, allowing impurity densities on the order of 10⁻⁵ to 10⁻⁴ ND to be determined. For substrate doping densities of 10¹⁵ cm⁻³, one can determine impurity densities down to around 10¹¹ cm⁻³. The equipment is commercially available. Energy levels of some of the more important deep-level impurities in silicon are shown in Figure 28.16. Strengths and Weaknesses. The major strengths of DLTS are the spectroscopic nature of the measurement, allowing species identification, and its high sensitivity. Its major weakness is the need


FIGURE 28.16 Energy levels of some impurities in silicon. Energies are indicated for each level. Numbers above Ei are given with respect to Ec, those below Ei with respect to Ev.


for sophisticated measurement equipment, including sample cooling. It is also not always possible to assign a unique impurity to a given DLTS spectrum, making impurity identification difficult at times.

28.2.4.2 Recombination Lifetime

Purpose. Recombination lifetimes are most commonly measured to determine the impurity density or cleanliness of semiconductors. There is no basic lower limit to the impurity density that can be detected through lifetime measurements.

Method. There are many techniques to determine the recombination lifetime. The two most common methods are photoconductance decay (PCD) and surface photovoltage (SPV). During PCD, electron–hole pairs (ehps) are created by optical excitation, and their decay is monitored as a function of time following the cessation of the excitation. This measurement can be contactless by monitoring the reflectance of a microwave signal incident on the sample. The excess carrier density decay for low-level injection is given by Δn(t) = Δn(0)exp(−t/τr,eff), where the effective recombination lifetime, τr,eff, is a combination of the bulk lifetime, τB, and the surface lifetime, τS, given by

$$\frac{1}{\tau_{r,eff}} = \frac{1}{\tau_B} + \frac{1}{\tau_S};\qquad \tau_B = \frac{1}{\sigma v_{th} N_T},\qquad \tau_S = \frac{t}{2\sigma v_{th} N_{it}} \qquad (28.33)$$

where σ is the capture cross-section, vth the thermal velocity, NT the impurity density, and Nit the interface trap density. The surface lifetime is a function of the surface recombination velocity sr and the sample thickness t; for reasonably low sr, we can write τS = t/2sr. When τB dominates, a measure of τr,eff is a measure of NT. Hence, a simple recombination lifetime measurement yields information about the level of bulk contamination. For high bulk lifetime material, one can use τr,eff measurements to determine the state of the surface, because τr,eff ≈ τS [66]. The SPV technique yields the minority carrier diffusion length, L, related to τr,eff by L = (D·τr,eff)^1/2, where D is the diffusion constant. An example of using minority carrier diffusion length measurements to determine Fe in Si is shown in Figure 28.17. Iron in p-type Si has the unique property of being in one of two states. When a Fe-contaminated, B-doped Si wafer has been at room temperature for a few hours, the iron forms pairs with boron. Upon heating at around 200°C for a few minutes or illuminating the device, the Fe–B pairs dissociate into interstitial iron, Fei, and substitutional boron. The recombination


FIGURE 28.17 Effective minority carrier diffusion length vs. iron density in p-type Si. (Data from Schroder, D.K., Semiconductor Material and Device Characterization, Wiley, New York, 1998.)


properties of Fei differ from those of Fe–B. By measuring the diffusion length of a Fe-contaminated sample before (Li) and after (Lf) Fe–B pair dissociation, NFe is [67]

$$N_{Fe} = 1.05\times 10^{16}\left(\frac{1}{L_f^2} - \frac{1}{L_i^2}\right)\ \mathrm{(cm^{-3})} \qquad (28.34)$$

with diffusion lengths in units of microns. Strengths and Weaknesses. The strengths of both techniques are the contactless nature of the measurements and the absence of sample preparation. The weaknesses are the effect of the sample on the measurement: what is measured is an effective lifetime or diffusion length, which is not always equal to the true value because of sample geometry or surface recombination.
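Equation 28.34 reduces to a one-line computation; the diffusion lengths below are illustrative values, not measured data:

```python
def iron_density(L_initial_um, L_final_um):
    """Iron density (cm^-3) in B-doped Si from SPV diffusion lengths
    measured before (Li) and after (Lf) Fe-B pair dissociation,
    Equation 28.34; diffusion lengths in microns."""
    return 1.05e16 * (1.0 / L_final_um**2 - 1.0 / L_initial_um**2)

# Example: Li = 200 um before and Lf = 50 um after dissociation
print(f"NFe = {iron_density(200.0, 50.0):.2e} cm^-3")
```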

28.2.4.3 Generation Lifetime

Purpose. The generation lifetime is measured to characterize selected regions of a device for contamination. It is especially useful for characterizing thin layers, e.g., epitaxial layers, denuded zones, and silicon on insulator (SOI).

Method. The generation lifetime τg is related to the recombination lifetime τr by [68]

$$\tau_g = \tau_r \exp\left(|E_T - E_i|/kT\right) \qquad (28.35)$$

Generally, τg ≈ (50–100)τr [41]. The pulsed MOS-C lifetime measuring technique is popular because it is simple and the ubiquitous MOS-C is found on many test structures. The MOS-C is pulsed into deep depletion, and the C–t curve is measured. The capacitance relaxes from deep depletion to equilibrium by thermal ehp generation. The effective generation lifetime τg,eff is shown in Figure 28.18 as a function of iron concentration, NFe; τg,eff depends on the surface generation velocity sg. NFe can be determined from the generation lifetime through the relation [69]

$$N_{Fe} = 8\times 10^{8}\left(\frac{1}{\tau_{g,f}} - \frac{1}{\tau_{g,i}}\right)\ \mathrm{(cm^{-3})} \qquad (28.36)$$

with τg in seconds; τg,f and τg,i are the final and initial generation lifetimes, i.e., after and before Fe–B pair annihilation.


FIGURE 28.18 Effective generation lifetime vs. iron density in p-type Si.


The MOS-C pulsed C–t technique has found wide acceptance because it is easily implemented with commercial equipment. The measured C–t transient times are usually quite long, with times of tens of seconds to minutes being common. The relaxation time tf is related to τg by

$$t_f \approx 10\,\frac{N_A}{n_i}\,\tau_g \qquad (28.37)$$

This equation brings out a very important feature of the pulsed MOS-C technique, which is the magnification factor NA/ni built into the measurement. Values of τg range over many orders of magnitude, but representative values for silicon devices are 10⁻⁴ to 10⁻³ s. Equation 28.37 then predicts actual C–t transient times of 10–1000 s. These long times point out the great virtue of this measurement technique: to measure lifetimes in the microsecond range, it is only necessary to measure capacitance relaxation times on the order of seconds. The long measurement times are also a disadvantage because wafer mapping takes a long time, but the times can be reduced by optical excitation [70]. Various pulsed or swept MOS C–t techniques have been proposed [71]. A more recent implementation replaces the MOS gate with corona charge, making it a contactless method [72]. It is also possible to determine the generation lifetime from diode leakage current measurements. A particularly good test structure for this purpose is the gate-controlled diode [73]. Strengths and Weaknesses. The strength of generation lifetime measurements is the ease of the measurement and the confinement of the sampled depth to the space-charge region (scr) width, easily controlled with the bias voltage. Hence, this technique is well suited for characterizing epitaxial layers, denuded zones, and SOI layers. Its major weaknesses are the need for sample preparation, i.e., the sample needs to be oxidized, and the long measurement times.
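As a worked example of the magnification factor, the following sketch inverts the approximate relation of Equation 28.37 for an assumed doping density:

```python
def tau_g_from_relaxation(tf_s, Na_cm3=1e15, ni_cm3=1e10):
    """Estimate the generation lifetime from the measured pulsed MOS-C
    relaxation time via the approximate magnification relation
    tf ~ 10*(NA/ni)*tau_g (Equation 28.37)."""
    return tf_s * ni_cm3 / (10.0 * Na_cm3)

# A 100 s C-t relaxation on an assumed 1e15 cm^-3 wafer implies
# tau_g ~ 1e-4 s, consistent with the representative values above
print(f"tau_g = {tau_g_from_relaxation(100.0):.1e} s")
```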

28.2.5 Charge-Based Measurements

Purpose. Charge-based measurements are rapid "contactless" measurements requiring deposited charge, rather than metal or polysilicon gates. As such, they rapidly provide oxide charge, thickness, and breakdown. They are also used to determine recombination and generation lifetimes.

Method. In charge-based measurements, evaporated, sputtered, or deposited gates are replaced by corona charge, and such measurements are used during the development of integrated circuits and for manufacturing control. To be effective, such test structures should provide rapid feedback to the pilot or manufacturing line. Charge, in these measurements, is used in two basic ways: as the "gate" in MOS-type measurements, where the charge replaces the metal or polysilicon gate, and as a surface-modifying method, where the charge controls the surface potential. IBM developed corona charge for semiconductor characterization during 1983–1992 [74]. However, due to a lack of commercial instruments, the technique was initially only sparingly used. Later, it was developed into commercial products. During charge-based measurements, charge is deposited on the wafer, as shown in Figure 28.19, and the semiconductor response is measured with a Kelvin probe, first proposed by Kelvin in 1881 [75]. Kronik and Shapira give an excellent explanation of such probes and their applications [76]. Deposited charge was first used in the characterization of oxide leakage current and mobile charge drift [77]. Ions are deposited on a surface at atmospheric pressure through an electric field applied to a source of ions. The corona source consists of a wire, a series of wires, a single point, or multiple points located a few millimeters to centimeters above the sample surface. A potential of 5–10 kV of either polarity is applied to the corona source. For a negative source potential, positive ions bombard the source while free electrons are rapidly captured by ambient molecules to form negative ions. Typically, a few seconds are required to charge an insulating surface to a saturation potential. A surface voltage is generated by the deposited charge and is most commonly detected with a non-contacting Kelvin probe. This is a small plate, 2–4 mm in diameter, held typically 0.1–1 mm above the sample and vibrated vertically, changing the capacitance between probe and sample at frequencies of typically 500–600 Hz [78].



FIGURE 28.19 Schematic illustration of point and wire electrode corona charging methods. The deposited charge is precisely measured with the op-amp charge meter.

The surface voltage dependence on surface charge lends itself to measurements of charge in the insulator on a semiconductor wafer or charge on the wafer, i.e., oxide charge, interface-trapped charge, plasma damage charge, or other charges. One determines the surface potential of an oxidized wafer by measuring the surface voltage with and without intense light and depositing corona charge until the surface potential becomes zero. The deposited corona charge is then equal in magnitude but opposite in sign to the original oxide charge [79]. The accuracy and precision of this charge-based measurement is identical for thin and thick oxides. Charge-based oxide charge measurements have an advantage over voltage-based measurements. For example, to determine the oxide charge of an MOS device, one can measure the charge or the voltage. The relationship between the oxide voltage uncertainty ΔVox and the oxide charge uncertainty ΔQox is

$$\Delta Q_{ox} = C_{ox}\,\Delta V_{ox} = \frac{K_{ox}\varepsilon_0}{t_{ox}}\,\Delta V_{ox} \qquad (28.38)$$

Suppose the oxide charge is determined from a voltage measurement with an uncertainty of ΔVox = 1 mV. ΔQox then varies from 2.2×10¹⁰ to 2.2×10¹¹ cm⁻² for oxide thicknesses from 10 to 1 nm; in voltage-based measurements, there is a large uncertainty in oxide charge. For charge-based measurements, the charge uncertainty is independent of oxide thickness and is on the order of ΔQox/q = 10⁹ cm⁻². To determine the oxide thickness, a corona charge density Q is deposited on the oxidized wafer and the surface voltages are measured in the dark and under intense light [80], giving the surface voltage Vs, which is plotted vs. deposited charge density, as shown in Figure 28.20 [81]. In accumulation or inversion, the curves are linear and the oxide thickness is

FIGURE 28.20 Surface voltage vs. surface charge density for two oxide thicknesses (tox = 5 nm and 10 nm).

$$C_{ox} = \frac{dQ}{dV_S};\qquad t_{ox} = \frac{K_{ox}\varepsilon_0}{C_{ox}} = K_{ox}\varepsilon_0\,\frac{dV_S}{dQ} \qquad (28.39)$$
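A minimal sketch applying Equation 28.39: fit the linear accumulation (or inversion) branch of the Vs–Q data and scale the slope by Kox·ε0. The synthetic data below correspond to a 10 nm oxide:

```python
import numpy as np

EPS0 = 8.854e-14   # permittivity of free space (F/cm)
K_OX = 3.9         # SiO2 relative permittivity

def tox_from_vs_q(q_c_cm2, vs_v):
    """Oxide thickness (nm) from the slope dVs/dQ of a linear branch of
    the surface-voltage vs. deposited-charge curve (Equation 28.39)."""
    slope = np.polyfit(np.asarray(q_c_cm2, float),
                       np.asarray(vs_v, float), 1)[0]   # dVs/dQ
    return K_OX * EPS0 * slope * 1e7                    # cm -> nm

# Synthetic accumulation branch for tox = 10 nm (Cox = 3.45e-7 F/cm^2)
q = np.linspace(-1e-6, -5e-7, 6)
vs = q / 3.45e-7
print(f"tox = {tox_from_vs_q(q, vs):.1f} nm")
```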

This method is not subject to the polysilicon gate depletion effects of MOS-C measurements [82]. It is also not affected by probe punchthrough and is relatively insensitive to oxide pinhole leakage currents. Interface traps distort the Vs–Q curve, and the interface trap density is determined from that distortion. For oxide leakage current measurements, corona charge is deposited on an oxidized wafer and the Kelvin probe voltage is measured as a function of time. If the charge leaks through the oxide, the voltage decreases with time. The device is biased into accumulation, and the oxide leakage current is related to the voltage through the relationship [83]

$$I_{leak} = C_{ox}\,\frac{dV_P(t)}{dt}\,,\qquad V_P(t) = V_P(0) - \frac{I_{leak}}{C_{ox}}\,t \qquad (28.40)$$

When the device is biased into accumulation, charge builds up on the oxide. However, when the charge density is too high, it leaks through the oxide by Fowler–Nordheim or direct tunneling, and the surface voltage becomes clamped. The deposited charge density is related to the oxide electric field Eox through the relationship

$$Q = K_{ox}\varepsilon_0\,E_{ox} = 3.45\times 10^{-13}\,E_{ox} \qquad (28.41)$$

for SiO2 (Q in C/cm² for Eox in V/cm). Strengths and Weaknesses. The strength of corona-charge-based systems is the contactless nature of the measurements, allowing some semiconductor processes to be monitored without having to fabricate test structures, as well as the variety of semiconductor parameters that can be determined. A weakness is the specialized nature of the equipment, which is not as routinely available as current–voltage or capacitance–voltage systems.

28.2.6 Probe Measurements

Purpose. Probe measurements are usually made to obtain microscopic information on a number of material parameters. In the extreme, one can obtain lateral resolution on the order of 0.1 nm and vertical resolution of 0.01 nm.


Method. Scanning probe microscopy (SPM) refers to techniques in which a sharp tip is scanned across a sample surface at very small distances to obtain two- or three-dimensional images of the surface at nanometer or better lateral and/or vertical resolution [84]. A chief advantage of probe-based measurements is their high resolution; other than transmission electron microscopy (TEM), SPM is the only technique for imaging at atomic resolution. A myriad of SPM instruments has been developed over the past decade, and one can sense current, voltage, resistance, force, temperature, magnetic field, work function, and so on with these instruments at high resolution [85]. Scanning tunneling microscopy (STM), shown schematically in Figure 28.21 [86], uses a very sharp metallic probe scanned across the sample at distances of about 1 nm, with a bias voltage, lower than the work function of the tip or the sample, applied between the tip and the sample. Experimental evidence suggests that "mini tips" of <10 nm radii form at the end of the probes [87]. Piezoelectric elements provide the scanning mechanism. Early implementations used a three-arm tripod arrangement, which is subject to low resonance frequencies, and were later changed to a tubular implementation containing four symmetric electrodes. Applying equal but opposite voltages to opposing electrodes causes the tube to bend due to contraction and expansion; the inner wall is contacted by a single electrode supplying the actuation voltage for vertical movement [88]. With the probe tip very close to the sample surface, a tunnel current of typically 1 nA flows across the gap. The current is given by [89]

$$J = \frac{C_1 V}{d}\exp\left(-2d\sqrt{\frac{8\pi^2 m\,\Phi_B}{h^2}}\right) = \frac{C_1 V}{d}\exp\left(-1.025\,d\sqrt{\Phi_B}\right) \qquad (28.42)$$

for d in Å and ΦB in eV, where C1 is a constant, V the voltage, d the gap spacing between the tip and sample, and ΦB an effective work function defined by ΦB = (ΦB1 + ΦB2)/2, with ΦB1 and ΦB2 the work functions of the tip and sample, respectively. For a typical work function of ΦB ≈ 4 eV, a gap spacing change from 10 to 11 Å changes the current density by about a factor of 8.
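The exponential sensitivity quoted above follows directly from Equation 28.42; a short sketch (the constant prefactor C1 cancels in the ratio):

```python
import numpy as np

def tunnel_current_ratio(d1_angstrom, d2_angstrom, phi_eV=4.0):
    """Ratio of STM tunnel current densities at two gap spacings (in
    angstroms) from Equation 28.42, J ~ (1/d)*exp(-1.025*d*sqrt(phi))."""
    j = lambda d: (1.0 / d) * np.exp(-1.025 * d * np.sqrt(phi_eV))
    return j(d1_angstrom) / j(d2_angstrom)

# A 1 A gap increase (10 -> 11 A) cuts the current by roughly a factor of 8
print(f"J(10 A)/J(11 A) = {tunnel_current_ratio(10.0, 11.0):.1f}")
```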

FIGURE 28.21 Schematic illustration of a scanning tunneling microscope.


There are two modes of operation. In the first, the gap spacing is held constant as the probe is scanned in the x and y directions, through a feedback circuit holding the current constant. The voltage on the piezoelectric transducer is then proportional to the vertical displacement, giving a contour plot. In the second mode, the probe is scanned across the sample with varying gap and current, and the current is used to determine the wafer flatness. Holding the probe above a given location on the sample and varying the probe voltage gives the tunneling spectroscopy current, allowing the band gap and the density of states to be probed. By using the STM in its spectroscopic mode, the instrument probes the electronic states of a surface located within a few electron volts on either side of the Fermi energy.

The AFM was introduced in 1986 to examine the surface of insulating samples [90]. The AFM operates by measuring the force between a probe and the sample and has evolved into a mature instrument providing new insights into the fields of surface science, electrochemistry, biology, and technology [91]. This force depends on the nature of the sample, the distance between the probe and the sample, the probe geometry, and sample surface contamination. The AFM principle is illustrated in Figure 28.22. The instrument consists of a cantilever with a sharp tip mounted on its end. The cantilever is usually formed from silicon, silicon oxide, or silicon nitride and is typically 100 μm long, 20 μm wide, and 0.1 μm thick. The vertical sensitivity depends on the cantilever length. For topographic imaging, the tip is brought into continuous or intermittent contact with the sample and scanned across the sample surface. Depending on the design, piezoelectric scanners translate either the sample under the cantilever or the cantilever over the sample; moving the sample is simpler because the optical detection system need not move. The motion of the cantilever can be sensed by one of several methods [92]. A common technique is to sense the light reflected from the cantilever onto a two-segment or four-segment, position-sensitive photodiode, as shown in Figure 28.22 [93]. The cantilever motion causes the reflected light to impinge on different segments of the photodiode. Vertical motion is detected by z = (A + C) − (B + D) and horizontal motion by x = (A + B) − (C + D). Holding the signal constant, equivalent to constant cantilever deflection, by varying the sample height through a feedback arrangement gives the sample height variation. For a beam cantilever, the resonance frequency is given by

$$f_0 = \frac{1}{2\pi}\sqrt{\frac{k}{m}} \qquad (28.43)$$

FIGURE 28.22 Schematic illustration of an atomic force microscope.


where k is the spring constant and m the mass of the cantilever. Typical resonance frequencies lie in the 50–500 kHz range.

In the contact mode, the probe tip is dragged across the surface, and the resulting image is a topographical map of the sample surface. While this technique has been very successful for many samples, it has some drawbacks. The dragging motion of the probe tip, combined with adhesive forces between the tip and the surface, can damage both the sample and the probe and create artifacts in the data. Under ambient air conditions, most surfaces are covered by a layer of condensed water vapor and other contaminants. When the scanning tip touches this layer, capillary action causes a meniscus to form, and surface tension pulls the cantilever into the layer. Trapped electrostatic charge on the tip and sample contributes additional adhesive forces. These downward forces increase the overall force on the sample and, when combined with lateral shear forces caused by the scanning motion, can distort measurement data and damage the sample.

In the non-contact mode, the instrument senses van der Waals attractive forces between the surface and the probe tip held above the sample surface. Unfortunately, these forces are substantially weaker than the contact mode forces, so weak in fact that the tip must be given a small oscillation and ac detection methods must be used to detect the small forces between the tip and sample. The attractive forces also extend only a short distance from the surface, where the adsorbed gas layer may occupy a large fraction of their useful range. Hence, even when the sample–tip separation is successfully maintained, non-contact mode provides lower resolution than either contact or tapping mode.

Tapping mode imaging overcomes the limitations of the conventional scanning modes by alternately placing the tip in contact with the surface to provide high resolution and then lifting the tip off the surface to avoid dragging it across the surface [94]. It is implemented in ambient air by oscillating the cantilever assembly at or near the cantilever's resonant frequency with a piezoelectric crystal. The piezo motion causes the cantilever to oscillate when the tip does not contact the surface. The oscillating tip is then moved toward the surface until it begins to lightly touch, or "tap," the surface. During scanning, the vertically oscillating tip alternately contacts the surface and lifts off, generally at a frequency of 50–500 kHz. As the oscillating cantilever contacts the surface intermittently, energy loss caused by the tip contacting the surface reduces the oscillation amplitude, which is then used to identify and measure surface features. When the tip passes over a bump on the surface, the oscillation amplitude decreases; conversely, when the tip passes over a depression, the cantilever has more room to oscillate and the amplitude increases, approaching the maximum free-air amplitude. The oscillation amplitude of the tip is measured, and the feedback loop adjusts the tip–sample separation to maintain a constant amplitude and force on the sample.
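For orientation, Equation 28.43 with the cantilever dimensions quoted above gives frequencies in the stated range. The sketch below treats the cantilever as a point mass equal to the full beam mass; a rigorous beam treatment would use an effective mass of roughly a quarter of the beam mass, so this is an order-of-magnitude estimate under an assumed spring constant of 1 N/m:

```python
import numpy as np

RHO_SI = 2330.0   # silicon density (kg/m^3)

def cantilever_f0(k_n_per_m, length_um, width_um, thick_um):
    """Resonance frequency from Equation 28.43, f0 = (1/2pi)*sqrt(k/m),
    approximating m by the full beam mass (point-mass simplification)."""
    m_kg = RHO_SI * (length_um * width_um * thick_um) * 1e-18  # um^3 -> m^3
    return np.sqrt(k_n_per_m / m_kg) / (2.0 * np.pi)

# Assumed k = 1 N/m with the typical 100 x 20 x 0.1 um dimensions
print(f"f0 = {cantilever_f0(1.0, 100.0, 20.0, 0.1) / 1e3:.0f} kHz")
```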

FIGURE 28.23 Non-contact AFM image of metal lines showing the grains and grain boundaries. 10 μm × 10 μm scan area. Courtesy of Veeco Corp.



Tapping mode imaging works well for soft, adhesive, or fragile samples, allowing high-resolution topographic imaging of sample surfaces that are easily damaged or otherwise difficult to image by other AFM techniques. It overcomes problems associated with friction, adhesion, electrostatic forces, and other difficulties that can plague conventional AFM scanning methods. An AFM image is shown in Figure 28.23.

In scanning Kelvin probe microscopy (SKPM), the probe, typically held 30–50 nm above the sample, is scanned across the surface and the potential is measured. Frequently, this measurement is combined with AFM measurements: during the first AFM scan, the sample topography is measured, and during the second scan, in the SKPM mode, the surface potential is determined [95]. The conducting probe and conducting substrate can be treated as a capacitor, with the gap spacing being the spacing between the probe and the sample surface. The dc and ac voltages are applied to the tip (sometimes the voltage is applied to the sample with the tip held at ground potential). This leads to an oscillating electrostatic force between the tip and sample, from which the surface potential can be determined. An advantage of force over current measurements is that the current is proportional to the probe size while the force is independent of it. The frequency is chosen equal or close to the cantilever resonance frequency, typically several hundred kilohertz. An ac voltage of constant amplitude together with a dc voltage is applied. A lock-in technique allows extraction of the first harmonic signal in the form of the first harmonic tip deflection, proportional to the force component Fω. Using a feedback loop, the oscillation amplitude is minimized by adjusting Vdc.


FIGURE 28.24 Polycrystalline ZnO: (a) AFM surface topograph; (b) surface potential map with secondary phases and a 100 mV potential difference; (c) positive and (d) negative voltage applied to the sample, showing a 0.3 V potential drop at the grain boundary. The direction of the potential drop inverts with bias. (After Bonnell, D.A. and Kalinin, S., Proceedings of the International Meeting on Polycrystalline Semiconductors, ed. Bonnaud, O., Mohammed-Brahim, T., Strunk, H.P., and Werner, J.H., 33–47, Scitech Publ., Uettikon am See, Switzerland, 2001.)


The detection technique is the AFM method, with the feedback voltage Vdc being a measure of the surface potential. The null technique renders the measurement independent of dC/dz and of variations in the sensitivity of the system to applied forces. The SKPM has also been combined with optical excitation, similar to SPV measurements [96]. The AFM and SKPM plots in Figure 28.24 are an effective illustration of surface potentials [97]. The AFM topograph (Figure 28.24a) exhibits no differences associated with multiple phases or grain boundaries in this ZnO sample. In the surface potential map with no external perturbation (Figure 28.24b), a depression of approximately 60 mV is observed due to the difference in work functions of the ZnO surface and the pyrochlore phase. The surface potential maps with the sample under applied lateral bias show a potential drop at the grain boundaries in Figure 28.24c and d. Strengths and Weaknesses. The strength of probe microscopy lies in the variety of possible measurements (topography, electric field, temperature, magnetic field, etc.) and their high resolution, down to the atomic scale. Weaknesses include the measurement time and the fragility of the probes, although recent equipment has become automated and is more rugged than early versions.

28.3 Physical and Chemical Characterization

28.3.1 High Spatial Resolution Imaging

28.3.1.1 Scanning Electron Microscopy

Purpose. The SEM is a versatile instrument for imaging the microstructure of solid surfaces with subnanometer spatial resolution [4,98–103]. It is well suited for routine inspections of the intricate details of an integrated circuit in an analytical laboratory or a cleanroom environment. When combined with sectioning and chemical etching techniques, it can be used to delineate stacked metal and oxide films and a variety of crystal defects [104–108]. It is normally configured with an x-ray detector capable of identifying elements of the periodic table in metal films, particles, and generic contamination. Commercial instruments offer 300 mm wafer handling and coordinate-driven stages for automated defect review [109].

Method. The basic components are an electron gun, a lens system, scanning coils, an electron collector, and a screen for viewing the image, as shown in Figure 28.25, along with a typical SEM image. Microscopes typically operate in the 1–30 kV range (beam voltage), although there is occasion to use voltages near 1 kV for insulating samples [110]. The condenser and objective lenses focus beam electrons into a small spot, the diameter of which determines the spatial resolution of the microscope, which is near 1.0 nm. Scanning coils deflect the spot in a television-like raster over the surface of the sample. Modern instruments are equipped with field emission electron guns [111] of high brightness, which enhance imaging quality and throughput. The column and sample chamber are evacuated to 10⁻⁴ Pa (8×10⁻⁷ Torr) or better. Image contrast arises from secondary, backscattered, and specimen electrons that are generated as incident electrons penetrate into the sample. The volume of excitation depends on the scattering pattern of the electrons; Figure 28.26 shows a schematic of the excitation volume and the various secondary signals produced. The penetration depth (R0) is given approximately by

$$R_0 = \frac{0.0552\,V_0^{1.67}}{\rho} \qquad (28.44)$$

where V0 is the beam voltage and ρ the density of the sample [112]. The low-energy (0–50 eV) secondary electrons (SEs) that are ejected from the near-surface region provide the best lateral resolution. These electrons are also the most sensitive to roughness, work function, and charge buildup on the sample. Elastically scattered beam electrons contribute to shadowing effects. All are normally collected by a detector of the Everhart–Thornley type [113] positioned near the sample, or in some designs just above the objective lens.
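A small sketch of Equation 28.44, assuming the customary units for this form of the range equation (V0 in kV, ρ in g/cm³, R0 in μm); the unit convention is inferred from the magnitude of typical ranges, not stated in the text:

```python
def penetration_depth_um(v0_kv, density_g_cm3):
    """Electron penetration depth from Equation 28.44,
    R0 = 0.0552 * V0**1.67 / rho.  Assumed units: beam voltage in kV,
    density in g/cm^3, depth returned in micrometers."""
    return 0.0552 * v0_kv ** 1.67 / density_g_cm3

# Silicon (2.33 g/cm^3) at a 20 kV beam voltage -> a few micrometers
print(f"R0 = {penetration_depth_um(20.0, 2.33):.1f} um")
```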


FIGURE 28.25 Basic configuration of a scanning electron microscope (a). Micrograph of bond pad with detached bond wire (b).

Better imaging resolution can usually be obtained from the upper detector, known as the immersion lens detector, because this detector is effectively shielded from the backscattered electron signal arising from greater depth and, hence, greater lateral dispersion.

Strengths and Weaknesses. The SEM is easy to use, and techniques of sample preparation are straightforward. It is applicable to almost all facets of integrated circuit manufacturing and process development. It also provides a platform for generating and detecting signals other than SE emission, which are generated by the scanning electron beam. This leads to a variety of analytical applications, some of which are listed in Table 28.1. The most widely used of these is x-ray spectroscopy, which can be acquired from select points on the sample [114–116]. X-ray detectors are of the energy dispersive (EDS) [117,118] or wavelength dispersive (WDS) [98,116] type, which are optimized for rapid analysis (several minutes) or high spectral resolution (<10 eV), respectively. High energy resolution is useful in quantitative measurements where spectral peak overlaps need to be avoided, as in the determination of phosphorus concentration in a phosphosilicate glass film.


FIGURE 28.26 Diagram showing a cross-section of the excitation volume induced by electron bombardment in a solid. More energetic secondary emissions (backscatter electrons, x-ray emission, fluorescence) can be detected from deeper within the material due to longer mean free paths. At greater depth, the scattering has dispersed the primary beam to a greater degree, compromising the lateral resolutions associated with these signals.


TABLE 28.1 Useful Signals Generated by the Scanning Electron Beam

Technique/Reference | Acronym | Signal Detected | Application Example
X-ray spectroscopy [114–116] | EDS, WDS | 0.3–25 keV x-rays | Elemental composition
Auger spectroscopy [119–122] | AES | Auger electrons | Elemental composition
Elemental contrast [98,99] | Z contrast | Backscattered electrons | Heavy metal silicides
Voltage contrast [123–125] | VC | Secondary electrons | Line voltage activity
Electron-beam-induced current [126,127] | EBIC | Electron–hole recombination | pn junction imaging
Cathodoluminescence [128–133] | CL | Visible light | Photonics devices
Kikuchi patterns [134,135] | Kikuchi | Diffracted electrons | Polysilicon and Al grain orientations
Scanning electron acoustic microscopy [136] | SEAM | Electron-induced acoustic waves | Subsurface cracks and voids

X-ray detectors based on Si and Ge crystals are available, the current Ge detectors offering improved spectral resolution. Light element performance can be enhanced by the use of ultrathin windows or windowless detectors. Energy dispersive systems that normally require a liquid nitrogen ambient are available with self-contained Peltier cooling, with some compromise in energy resolution [137]. New microcalorimeter designs promise excellent spectral resolution (approaching 1 eV) and short (several minutes) analysis time [138–144].

It is difficult to use SEM imaging in applications requiring high accuracy, such as CD measurements. This is because extraction of the edge coordinate from an irregular intensity profile is blurred by electron emission enhancement near the edge of a feature. The emission enhancement (edge bright-up), seen at abrupt changes in sample topography, can be reduced by low-voltage operation. The width of lithography features can thus be measured with a high level of precision; the accuracy is usually limited by the calibration methods employed. Automated linewidth measurement systems enhance the reproducibility of such measurements.

The electron beam in an SEM delivers charge to the surface and subsurface region of the sample. In metals and most semiconductors, excess electrons are conducted to ground, but in oxides, nitrides, and packaging materials, charge buildup occurs, giving rise to unpredictable contrasts that complicate image formation and interpretation. Most of this trouble is related to the electric field external to the sample that grows as a consequence of the excess charge [110]. Charging artifacts are controlled by applying a 3 nm metal coating (such as Au, AuPd, Pt, or Cr) over the sample by sputter deposition or thermal evaporation. When continuous and grounded, this coating effectively eliminates the external field. It is sometimes possible to use a backscattered electron (BSE) detector, which is insensitive to low-energy charging effects, but this does not change beam-to-sample interactions. Another approach is to reduce the energy of the primary electron beam, which establishes a charge balance in the absence of a conducting path to ground [110]. Charge balancing can also be achieved in a low-vacuum SEM, in which the sample chamber vacuum can be degraded to a few Torr. In this mode, the column vacuum is protected from the sample chamber vacuum by a differential pumping aperture. Ionization of the gas molecules in the sample chamber automatically charge compensates insulating samples without the need for coating the sample with a conductor or using a low beam voltage. Beam scattering by the chamber gas degrades the instrument resolution, and the standard Everhart–Thornley detector cannot be used in this mode.

28.3.1.2 Transmission Electron Microscopy

Purpose. The TEM provides ultrahigh-resolution images of material defects and ultra-large-scale integration (ULSI) device geometry and structure [4,145–153]. With crystalline and polycrystalline materials, it achieves atomic resolution. It is virtually the only tool capable of imaging point defects


related to thermal, mechanical, and implant processing, and is used routinely to measure grain size distributions in polycrystalline materials, as well as the thickness of gate oxides and capacitor composite dielectrics with high accuracy. It is assuming an increasingly important role in semiconductor process development.

Method. Operation of the TEM depends on the sample being very thin. The incident electron beam of the microscope must pass entirely through the sample, which needs to be prepared as a free-standing film only tens to hundreds of nanometers thick. Partially for this reason, the operating beam voltage of the TEM is about an order of magnitude higher than that of the SEM, and normally falls within the 100–400 kV range. The typical configuration of a TEM is shown in Figure 28.27. The electron gun is similar to that in an SEM, in which a tungsten filament, LaB6 cathode, or field emission tip can be used. A condenser lens system shapes the electron beam as it emerges from the gun so that it floods the sample over a broad area. Alternatively, field emission sources are capable of delivering 0.1–1 nA into a probe about 2 nm in diameter. Currents in this range are necessary when performing analytical measurements, such as energy dispersive x-ray spectroscopy (EDS) and electron energy loss spectroscopy (EELS) analyses [103,146,150,151,154]. When one or more of these methods is applied in a materials or device analysis, the practice is referred to as analytical electron microscopy. The sample is supported on a small copper grid about 3 mm in diameter, which is inserted into a holder positioned at the center of the column.

FIGURE 28.27 Schematic diagram of a transmission electron microscopy (TEM) column (electron gun, condenser lenses and aperture, sample, objective lens and aperture, selected area aperture, intermediate lenses, and projector lens) displaying the path of the electrons through the column to the fluorescent screen or CCD. The lower magnification image (upper right) is from a memory array cut along the bit line direction of the array. The high magnification view (lower right) shows the lattice image in the silicon substrate and polysilicon gate from one of the transistors in the array. The lattice image allows for a direct calibration measurement of the gate oxide thickness using the silicon lattice. Data courtesy of Amer Corp.


Most holders provide capability for tilt (±30°) and translation (±2 mm) to position a select spot under the beam. Tilt angles up to ±60° are routinely available, usually with some loss of resolution because the pole piece gap of the objective lens has to be widened to accommodate the increased tilt angle. An interlock chamber serves to introduce the sample without disturbing the vacuum within the column, which near the stage is in the 10⁻⁵ Pa (10⁻⁷ Torr) range.

The objective lens is of short focal length, and produces below it an image of the sample enlarged several hundred-fold or more. This intermediate image is further enlarged by a series of intermediate and projector lenses to give a final image, which can be viewed on a fluorescent screen or projected onto a charge-coupled imaging device. In the latter case, the image is directly acquired in digital form and can be processed and archived in a computer database. The objective aperture (50–200 μm in diameter) is positioned in the back focal plane of the objective lens. It can be moved about during the operation of the microscope, and positioned to block select segments of the electron beam as it spreads out from the underside of the sample. A selected area aperture in the first image plane of the system can be used to select a region from which an electron diffraction pattern can be formed; alternatively, the incident beam can be made small in diameter and parallel to the optic axis to define a small diffracting region. Elemental and microstructural information can thus be obtained from regions smaller than 20 nm. Overall, the lenses, apertures, and gun configuration provide working magnifications ranging from a few thousand to several million for viewing the sample. Most semiconductor device analysis can be comfortably carried out at 2000–200,000×. Atomic resolution imaging is more difficult and requires 500,000× or more. In this mode, the technique is called high-resolution transmission electron microscopy.

The basics of electron scattering in the TEM are similar to those for the SEM with two important exceptions. First, the sample is very thin, so the scattering volume is quite small. Second, the higher energy electrons used in the TEM are scattered through angles that are small compared with the SEM, and are almost always deflected in the forward direction. The angle of scattering (α) determines which electrons are able to pass through the objective aperture, and which are not. Generally, α for electrons that contribute to the image is less than 0.5°, depending on the size of the aperture selected. Electrons deflected by this amount or less contribute to brightness on the screen, while those deflected more are blocked by the aperture, and result in gray or dark regions. If the aperture allows the direct beam to pass, a bright field image is formed. If the aperture is positioned to pass one or more diffracted beams and blocks the direct beam, the microscope is operating in the dark field mode. The magnitude of α is determined by the density (ρ) and thickness (t) of the sample. Thickness reduces the intensity of the beam (I₀) according to the expression

$$I = I_0\, e^{-t/L} \qquad (28.45)$$

where L is the mean free path between scattering events. In the special case when t = L, an incident electron on the average will encounter only one scattering event. L has been determined experimentally, with Si being near 0.12 μm, and elements like Cr, Ge, Pd, and Pt about an order of magnitude smaller [155]. The thickness of most cross-sections prepared for TEM is of the order of L; for atomic resolution imaging, it must be in the tens of nanometer range to give high-quality images.

Many TEM instruments are fitted with scan coils so that the illumination can be rastered over the sample. The pre-specimen lenses in these instruments can be configured to provide a small focused spot, <0.5 nm for field emission sources. The transmitted beam falls on a detector and is used to modulate the brightness of a display CRT scanning in synchrony with the beam. The resulting image is a scanning transmission image and the technique is referred to as scanning TEM. Image contrast is formed via a post-specimen aperture as in conventional TEM.

Strengths and Weaknesses. The distinctive advantage of the TEM is exceptionally high spatial resolution, even atomic resolution, which can be achieved in nearly routine applications. An example is shown in Figure 28.27.
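Equation 28.45 is often applied in reverse, to judge whether a prepared foil is thin enough; the following is a minimal sketch using the Si mean free path quoted above.

```python
import math

def transmitted_fraction(thickness_um: float, mfp_um: float) -> float:
    """Fraction of beam intensity transmitted, I/I0 = exp(-t/L) (Equation 28.45)."""
    return math.exp(-thickness_um / mfp_um)

def thickness_from_ratio(i_over_i0: float, mfp_um: float) -> float:
    """Invert Equation 28.45 to estimate foil thickness from a measured I/I0."""
    return -mfp_um * math.log(i_over_i0)

# Example: Si cross-section with L ~ 0.12 um (the value quoted in the text)
print(transmitted_fraction(0.12, 0.12))   # ~0.37 when t = L
print(thickness_from_ratio(0.5, 0.12))    # ~0.083 um for 50% transmission
```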


TABLE 28.2 Application of TEM Sample Preparation Procedures

Technique | Semiconductors | Metals | Organics
Dimple polish | F | F | R
Chemical thinning | F | F | R
Ion milling | F | O | R
Focused ion beam | F | F | O
Tripod select area | F | R | R
Jet polishing | O | F | R
Cleaving | O | O | R
Replica casting | R | F | F
Ultra-microtomy | R | R | F

F, frequently used; O, occasionally used; R, rarely used.

Capabilities can be expanded beyond direct imaging by collecting signals other than electron scattering, which are simultaneously generated within the sample. These include EDS, EELS, electron diffraction, and electron holography.

The challenge with TEM is sample preparation. Techniques typically involve a combination of mechanical polishing, chemical etching, and ion beam milling. Most reflect the experience of the individual operator who learns through trial and error which procedure works best for the material or device at hand. Some generic approaches developed over the years include those listed in Table 28.2. References for all of these are available in a series of volumes on TEM sample preparation published by the Materials Research Society [156–158].

28.3.1.3 Focused Ion Beam Sample Preparation

Purpose. The focused ion beam (FIB) tool uses a narrow pencil of Ga⁺ ions to selectively carve from the bulk sample a cross-section suitable for optical microscopy, SEM or TEM imaging, and microanalysis. It is capable of cutting through metal and polysilicon interconnects as well as oxide and nitride layers neatly and precisely, with little or no damage to adjacent structures. Via holes can be created to expose underlying metal landing pads for mechanical probe contact. It is possible to routinely expose a microdefect buried deep within select transistors in a multi-megabit array or prepare a smooth surface for polycrystalline grain size imaging enhanced by ion channeling. Modern instruments accommodate production size wafers and are cleanroom compatible.

Method. The FIB instruments use a finely focused probe of Ga⁺ ions to etch any small region identified at the surface of an integrated circuit [4,159–162]. Typically, a high-current broad beam (0.5 μm diameter) is applied for an initial rough cut, followed by a tighter focus (100–10 nm), low-current probe for final polishing. Ion-induced electron images resemble SEM images, with the exception that channeling effects are pronounced in crystalline and polycrystalline materials [163].

TABLE 28.3 Comparison between Scanning Electron Microscope (SEM) and Focused Ion Beam (FIB) Microprobe Columns

 | SEM | FIB
Beam | Electrons | Ga⁺ ions
Source | Tungsten, LaB6, or field emission tip | Field emission liquid Ga⁺ tip
Beam energy | 100 V–30 kV | 5–50 kV
Beam current | >30 nA | 1–10 nA
Minimum spot | <1.5 nm | <10 nm
Lens design | Magnetic | Electrostatic
Surface effects | Secondary and backscattered electrons | Sample sputtering and secondary electron generation
Etch rate | None | ≈2 μm³/s at 10 nA


Ga⁺ ions are extracted from a liquid droplet in the ion gun by an intense electric field, similar to the way electrons are drawn from a field emission tip in an SEM. The tip of the liquid is only 100 nm across, so it is possible to demagnify this into a probe less than 10 nm in diameter at the sample's surface. The basic components in the FIB include lenses, defining apertures, and scanning coils to raster the probe across the sample. A comparison between the SEM and the FIB is given in Table 28.3. The larger spot of the Ga⁺ probe results in lower resolution than that possible in the SEM, but for most applications it is adequate to locate the region of interest for milling. Nevertheless, most modern FIB vendors offer an independent SEM column as an accessory. These dual beam instruments are more costly, but provide the best possible imaging resolution, even in situ while milling takes place. Figure 28.28 shows a schematic of a dual beam FIB along with an SEM image of a cross-section produced by the instrument. Cross-section analysis of such high aspect ratio vias would be difficult and time consuming using standard SEM sample preparation methods; however, the FIB can accomplish tasks of this scope within a couple of hours or less.


FIGURE 28.28 Diagram of a dual beam focused ion beam (FIB) instrument. The images show low and high magnification views of a FIB cross-section through high aspect ratio vias. Voids in the metal fill are evident.


TABLE 28.4 Chemistries for Assisted FIB Etching

Etch Gas | Al | W | Si | SiO2
Cl2 | X | | X |
Br2 | X | | X |
ICl | X | | X |
ClF | X | X | X |
XeF2 | | X | X | X
WF6 | | X | | X

It is possible to enhance the etch rate of select materials by bleeding a small quantity of gas through a small jet near the sample while milling is in progress. Some of the gases used for microcircuit applications are shown in Table 28.4.

Strengths and Weaknesses. Milling with the focused Ga⁺ ion beam provides a unique capability for preparing a cross-section at any predetermined spot on an integrated circuit with <10 nm polishing precision. Single-bit sectioning in a dynamic random access memory array, for example, is extremely difficult by any other approach. It is also possible to deposit metals such as W, Pt, or Au within the same FIB machine with the aid of the Ga⁺ beam [164,165]. This can be done with submicron resolution to connect two lines together, or to provide bond pads for offline mechanical probing of small or buried structures. The instrument serves a dual role in this regard. It can be used to prepare select cross-sections by precision milling, as well as rewire intricate circuits for electrical probing or SEM voltage contrast imaging.

Most of the Ga⁺ ions scatter into the environment, but a significant number are implanted into the sample, which amorphizes the outer 30 nm of crystalline samples. Also, Ga⁺ injection is known to alter the electrical properties of single transistors [166]. For production wafers, Ga⁺ contamination is of concern, particularly if the wafers are to be inserted back into the process line. The basic chemistry of the deposition process is complex and not very well understood. Deposited films typically contain impurities like carbon from the vacuum, which can result in a hard, brittle alloy with high electrical resistivity [167]. New gas precursors for depositing oxides and thin dielectrics are being evaluated [168].
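The etch rate entry of Table 28.3 (about 2 μm³/s at 10 nA) supports a rough mill-time estimate of the kind behind the "couple of hours" figure above; the following sketch uses illustrative numbers, and the linear scaling of removal rate with beam current is an assumption for the example.

```python
def mill_time_minutes(volume_um3: float, beam_na: float,
                      rate_um3_per_s_at_10na: float = 2.0) -> float:
    """Estimate FIB mill time from the Table 28.3 etch rate.

    Assumes the volumetric removal rate scales linearly with beam current,
    which is a simplification adopted for this example.
    """
    rate = rate_um3_per_s_at_10na * (beam_na / 10.0)
    return volume_um3 / rate / 60.0

# Example: a 20 x 10 x 5 um cross-section trench (1000 um^3) milled at 5 nA
print(f"{mill_time_minutes(20 * 10 * 5, 5.0):.0f} min")  # ~17 min of milling
```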

28.3.1.4 Scanning Probe Microscopy

Purpose. The SPM is able to provide images of single atoms at the surface of a sample and can also be used as an ultra-sensitive profilometer for measuring roughness. The latter is the most frequently used for semiconductor process and development applications. While the conventional stylus profilometer may sense vertical relief differences as small as 10 nm, the scanning probe easily extends into the 0.01 nm regime when required. Hence, it is a unique tool for assessing roughness related to silicon wafer polishing, implant damage, plasma etching, chemical cleanups, and a variety of other process operations. Method. The SPMs function by positioning a very sharp needle within fractions of a nanometer of the surface of a sample using piezoelectric micro-manipulators. The two most common forms of SPM (STM and AFM) were described in detail in Section 28.2.6, earlier in this chapter. Similarly, SCM and SSRM were described in Section 28.2.1.4. There are many other variations to the SPM theme, all of which ultimately depend on the design and function of the tip itself. For example, a tiny thermocouple fabricated just at the apex of the probe is the basis for the scanning thermal microscope, while a tapered quartz fiber channels visible light through a 20 nm aperture in the near-field scanning optical microscope. This list can be readily expanded to include a dozen or more other techniques, all of which are covered in a growing number of articles, books, conferences, and specialized symposia [8,169–175]. Some of them are listed in Table 28.5.


TABLE 28.5 Variations of Scanning Probe Techniques

Acronym | Technique | Reference
AFM | Atomic force microscopy | [173]
BEEM | Ballistic electron emission microscopy | [176]
CFM | Chemical force microscopy | [177]
IFM | Interfacial force microscope | [178]
MFM | Magnetic force microscopy | [179]
MRFM | Magnetic resonance force microscopy | [180,181]
MSMS | Micromagnetic scanning microprobe system | [182]
Nano-NMR | Nanometer nuclear magnetic resonance | [183]
Nano-Field | Nanometer electric field gradient | [184]
Nano-SRP | Nanometer spreading resistance profiling | [185–187]
NSOM | Near-field scanning optical microscopy | [188–190]
SCM | Scanning capacitance microscopy | [191–193]
SCPM | Scanning chemical potential microscopy | [194]
SEcM | Scanning electrochemical microscopy | [195]
SICM | Scanning ion-conductance microscopy | [194]
SKPM | Scanning Kelvin probe microscopy | [196]
SThM | Scanning thermal microscopy | [172]
STOS | Scanning tunneling optical spectroscopy | [197]
STM | Scanning tunneling microscopy | [173]

Strengths and Weaknesses. The AFM is sensitive enough to trace out the contours of single-surface atoms on the sample in a matter of minutes, but as with the STM, it is more frequently applied in industrial laboratories for surface roughness mapping (Figure 28.29). In this mode, the X and Y scan ranges are typically limited to 100–200 μm or less, while the vertical (Z) axis retains sensitivity to subnanometer changes over the scanned area. Instruments suitable for the cleanroom environment accommodate 200 mm wafers, and feature automated cassette loading and inspection.

The application of STM to practical problems is limited to conducting samples, which are capable of carrying electrons away from the tip and surface. Even with sub-nanoampere tunneling currents, charge buildup in thin surface oxides or contamination films is enough to mask or destroy the signal. Materials like Si, Al, Ti, and W readily oxidize and are frequently difficult to image reproducibly. Models that explore the parameters governing the tip and sample confirm that the situation is quite complex [198–200]. This is particularly true when operating in an atomic resolution mode. Practical knowledge about tips is based on experience with their fabrication and use. A tungsten tip prepared by electrochemical etching with a perfectly smooth end of small radius does not always provide atomic resolution at first.
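Returning to the roughness-mapping mode mentioned above: the RMS roughness (Rq) values quoted in Figure 28.29 follow from the height map after background removal. This is a minimal sketch assuming a simple least-squares plane fit as the background; real instruments may apply higher-order flattening.

```python
import numpy as np

def rms_roughness(height_nm: np.ndarray) -> float:
    """RMS roughness Rq of a 2-D AFM height map (nm), after plane removal."""
    ny, nx = height_nm.shape
    x, y = np.meshgrid(np.arange(nx), np.arange(ny))
    # Least-squares fit of a plane z = a*x + b*y + c, then subtract it
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(x.size)])
    coeffs, *_ = np.linalg.lstsq(A, height_nm.ravel(), rcond=None)
    residual = height_nm.ravel() - A @ coeffs
    return float(np.sqrt(np.mean(residual**2)))

# Example with synthetic data (256 x 256 pixels, ~2.85 nm Gaussian roughness)
rng = np.random.default_rng(0)
print(f"Rq = {rms_roughness(rng.normal(0.0, 2.85, (256, 256))):.2f} nm")
```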

FIGURE 28.29 AFM scans (10 μm × 10 μm) comparing the surfaces of stainless steel tubing before (right; RMS Rq = 24.830 nm) and after (left; RMS Rq = 2.851 nm) electropolishing.


Tips that do often give unpredictable and non-reproducible tunneling current curves. Removal of oxides by annealing or exposure to high field strength before and during operation is usually beneficial. Tips fabricated from Pt–Ir wire may work well for tunneling spectroscopy, even when extremely dull at the end. Serious CD measurements are limited by the shape of the tip itself. Attempts to reconstruct the tip silhouette during analysis are being made, with the intent of removing the influence of shape by mathematical deconvolution [201,202].

28.3.2 Dopants and Impurities

28.3.2.1 Secondary Ion Mass Spectrometry

Purpose. Secondary ion mass spectrometry (SIMS) offers a unique combination of small analytical spot, high detection sensitivity, and measurement precision, enabling the technique to monitor dopants and impurities within a patterned contact or junction. At high material removal (sputter) rates, SIMS is especially well suited for characterization of dopant distributions prepared by diffusion or ion implantation, with typical detection limits at or below 10¹⁵ atoms per cubic centimeter. The technique is capable of reaching parts per million atomic (ppma) sensitivity within a contact or junction only 1–10 μm across. SIMS has been the workhorse of the industry for verifying implanter performance, implanter dose matching, and anneal processes, because it can readily determine the dose and shape of the dopant profile, as well as reveal metal impurities. Given appropriate and carefully controlled analysis protocols, dose measurement precisions well below 1% can be routinely achieved. Using low sputter rates and an appropriate spectrometer configuration, SIMS can also provide high-sensitivity surface metal and organic contamination analysis in 50×50 μm² areas at sub-nanometer analysis depths. Numerous books, conferences, and workshops specializing in SIMS address the wide range of applications and technique improvements, as well as its limitations [4,203–207,218–220].

Method. All SIMS tools employ a focused primary ion beam of appropriate kinetic energy to strike the analysis site, causing the emission of charged and (mostly) neutral secondary atoms and molecules from the bombarded surface. The charged particles (secondary ions) of selected polarity are injected into the analyzer for mass separation. The emission of charged and neutral particles due to the primary ion impact obviously involves an erosion of the analysis site. For constant primary ion energy, ion species, and impact angle on the sample surface, the erosion rate is proportional to the rate of primary ions striking the surface during analysis (i.e., the primary ion current). The use of high primary ion current densities, while monitoring appropriate secondary ion masses, allows for fast surface erosion rates and is ideally suited for profiling beneath the surface of the sample. At the other extreme, highly surface-sensitive analysis is usually performed using very low sputter rates of a few nanometers per hour, allowing a survey of inorganic, metal, or organic contaminants from the top few monolayers of a single analysis site. Although SIMS can in principle be performed with almost any energetic ionized element of the periodic table, only a few are employed in routine measurements. For depth profiling applications, the most commonly used primary ion species are O₂⁺ and Cs⁺ at impact energies from 250 eV to 15 keV, depending on the analysis task. The sensitivity of a SIMS measurement is strongly affected by the ion bombardment conditions and the composition of the matrix. In general, non-reactive ion beams such as Ar⁺, Xe⁺, and Ga⁺ produce secondary ion yields from matrices which are orders of magnitude lower than those measured with reactive beams such as O₂⁺ or Cs⁺ [207–209]. For this reason, SIMS instruments generally have two separate primary ion columns to produce sputter beams: O₂⁺ and Cs⁺ for the analysis of elements that are likely to form positive and negative secondary ions, respectively (electropositive and electronegative elements).
Positive secondary ion analysis using O₂⁺, Ga⁺, or other inert primary ion beams in conjunction with O₂ gas flooding is also often employed to enhance secondary ion formation and to reduce or eliminate surface roughening in ultra-shallow depth profile applications. As a general rule, the analysis of shallow structures requires the use of low-energy primary ion beams to minimize ion-beam-induced mixing of the sample and to improve depth resolution. It was recognized early on that high primary ion bombardment energies skew the shape of implant profiles more than lower bombardment energies [210–212]. In addition, it was demonstrated [213] that the primary ion


bombardment results in essentially isotropic mixing of surface and subsurface layers. The expected broadening of structures toward and away from the surface was calculated to be on the order of the primary ion range [214], in reasonable agreement with measured decay lengths [215]. For example, a 500 eV boron implant into silicon has a projected range of around 3.6 nm and a range straggle of 1.9 nm [216]. Ultra-shallow depth profile analyses of such implants are therefore generally performed using ≤500 eV O₂⁺ primary ion beams.

In general, secondary ions of at least the dopant and a signal corresponding to the substrate (matrix signal) are measured as a function of time during the depth profile. The ratio of the measured dopant signal strength to the strength of the matrix signal is then converted to a dopant concentration by applying a normalization factor called a "relative sensitivity factor" (RSF). Similarly, the profile shape of the dopant is derived by additionally applying a calibrated/measured sputter rate to the measured data curves. Sufficient mass separation of the secondary ion signals is required so that the ion species monitored during a depth profile are in fact due to the dopant. The ability to separate two atoms or molecules of the same charge at mass m, which differ in mass by the amount Δm, is described by the mass resolution as

$$\text{mass resolution} = \frac{m}{\Delta m} \qquad (28.46)$$

A higher mass resolution value of a spectrometer implies better ability to unambiguously measure and identify the element or molecule of interest. For example, the analysis of ¹¹B in silicon requires only a mass resolution of 11 because the closest interfering signal is due to ¹²C. Higher mass resolution is required for separating interferences between species like ³¹P⁺ and (³⁰SiH)⁺ or (²⁸Si₂)⁺ and ⁵⁶Fe⁺, which occur in the SIMS analysis of silicon.

The most common mass spectrometer types employed in depth profile analysis are magnetic sector instruments of the Nier–Johnson geometry, as well as quadrupole mass analyzers. Both instrument types employ continuous primary ion beams and can detect only one secondary ion species at a given instant during the depth profile. In practice, at least two secondary ion species are monitored during a depth profile by periodically switching the analyzer to detect the secondary ion species of interest. Because all other signals of interest go undetected while one mass is monitored, the ultimate detection sensitivity will decrease with the number of ion species monitored during the profile. Magnetic-sector-based SIMS spectrometers can provide mass resolution in excess of 40,000, whereas quadrupole SIMS spectrometers operate at m/Δm of about 350. For applications requiring high mass resolution (e.g., P in Si), magnetic sector SIMS is obviously the instrument of choice. In situations where low mass resolution is acceptable (e.g., ultra-shallow B in Si), quadrupole SIMS instruments can often match or outperform their magnetic sector counterparts, in part because of the relative ease of utilizing sub-kilo-electronvolt primary ion beams at various impact angles for ultra-shallow depth profile analysis, and also for their simplicity of operation. Only the most recently developed magnetic sector instruments are able to perform depth profile analysis with sub-kilo-electronvolt primary ion beams. The range of depth profile analyses may be anywhere between a few nanometers and several microns.

Although originally focused toward surface analysis of the topmost monolayer(s), time-of-flight (ToF) SIMS spectrometers have lately been employed in depth profiling applications. Time-of-flight SIMS employs a pulsed primary ion beam of nanosecond duration to strike the surface, generate secondary ions, and transport them through an electrostatic analyzer to the detector. Because all secondary ions basically travel along the same path through the analyzer and mass separation is solely due to flight time differences from the sample to the detector, a ToF analyzer is capable of detecting any secondary species of given polarity over a mass range from 1 Da to some 10 kDa, with m/Δm up to 15,000. As a consequence, ToF-SIMS has unsurpassed overall (parallel) detection sensitivity per surface layer. The material removal rate using a pulsed primary ion beam alone is so slow (some nanometers per hour) that depth profile analysis requires the additional alternating use of a sputter ion beam (tens of milliseconds in duration) to advance in depth. None of the material removed by the sputter beam can be mass analyzed. A ToF-SIMS depth profile is in essence a sequence of complete surface analyses at increasing depth from the surface, i.e., the depth profiles of all detectable ion species are collected in a single analysis.
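The interference examples above are easy to quantify with Equation 28.46; this is a minimal numeric sketch using rounded isotope masses, which are assumptions adopted for the example rather than values from the text.

```python
def mass_resolution_required(m1_amu: float, m2_amu: float) -> float:
    """m / delta-m needed to separate two species of nearly equal mass (Equation 28.46)."""
    return min(m1_amu, m2_amu) / abs(m1_amu - m2_amu)

# 31P+ (~30.9738 amu) vs. (30SiH)+ (~29.9738 + 1.0078 amu): assumed isotope masses
print(f"{mass_resolution_required(30.9738, 30.9816):.0f}")  # roughly 4000
```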


Depending on the mode of operation, the depth range of analyses covered with ToF-SIMS extends from the surface down to (reasonably) some 100 nm. For details on mass spectrometer types, ion beam scanning and crater edge rejection methods, etc., the reader is referred elsewhere [207,217].

Secondary ion mass spectrometry analysis performed at "high" sample erosion rate is often referred to as dynamic SIMS, whereas the other extreme, where erosion rates of a few nanometers per hour are employed, is often called static SIMS and was originally applied to organic surface analysis. Today's SIMS depth profiling applications cover bulk dopant analysis, high- and ultralow-energy ion implant characterization, gate dielectric characterization, and surface metal contamination analysis. The terminology has therefore become somewhat arbitrary, especially when applied to distinguish dedicated depth profiling instruments from ToF-SIMS analyzers. In practice, the purpose of the SIMS analysis will first dictate the primary ion bombardment conditions (erosion rate, ion species, energy, impact angle, and beam size) and then which type of SIMS instrument is best suited to provide the primary ion beam conditions and the required mass resolution, detection sensitivity, data collection rates, precision, etc. For example, the quantification of specific bulk dopants or high- and medium-energy ion implants requires large analysis depth and thus the high erosion rates provided by high-energy ion beams, which are well accommodated by dedicated magnetic sector (or quadrupole)-based depth profiling spectrometers to achieve ppma detection limits.
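The RSF quantification and sputter-rate depth calibration described earlier reduce to a few lines of arithmetic; the following is a minimal sketch in which the RSF, sputter rate, and count arrays are illustrative placeholders, not measured values.

```python
import numpy as np

def quantify_profile(dopant_counts, matrix_counts, rsf_atoms_cm3,
                     sputter_nm_s, dt_s):
    """Convert raw SIMS count rates to concentration vs. depth, then integrate dose.

    concentration = RSF * (dopant signal / matrix signal);
    depth is assigned from the calibrated sputter rate and elapsed time.
    """
    c = rsf_atoms_cm3 * np.asarray(dopant_counts) / np.asarray(matrix_counts)
    depth_cm = np.arange(len(c)) * sputter_nm_s * dt_s * 1e-7  # nm -> cm
    dose = np.trapz(c, depth_cm)  # areal dose in atoms/cm^2
    return depth_cm, c, dose

# Illustrative values only: RSF = 1e22 atoms/cm^3, 0.5 nm/s, 1 s per cycle
_, _, dose = quantify_profile([120, 300, 150, 40], [1e5] * 4, 1e22, 0.5, 1.0)
print(f"dose ~ {dose:.1e} atoms/cm^2")
```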

FIGURE 28.30 Depth profiles of B implants into Si, processed by three different spike anneals (1, 2, and 3) to investigate transient enhanced diffusion. The depth profiles of process 1 were taken before (1a) and after (1b) the depth profiles of wafers 2 and 3, demonstrating <1% reproducibility/stability of the measurements of a quadrupole secondary ion mass spectroscopy (SIMS) instrument.

FIGURE 28.31 Depth profile of a 3.7 nm thick oxynitride film, obtained on a quadrupole SIMS instrument, using 500 eV Cs⁺ primary ion bombardment. The nitrogen dose of the film is 3.7×10¹⁴ atoms/cm². Nitrogen is primarily located at the Si substrate interface, as expected from the film processing conditions.

On the other hand, the characterization of ultra-shallow implants requires the use of low-energy (500 eV–2 keV) primary ion beams, readily available in modern quadrupole SIMS instruments and only recently realized in the newest magnetic-sector-type spectrometers. The primary applications are low-energy implant dose matching, characterization of anneal processes (Figure 28.30), as well as characterization of ultrathin films such as gate oxides (Figure 28.31). Ultra-shallow depth profiling is also possible with ToF-SIMS instruments (but will not improve the depth resolution) if they are equipped with suitable dedicated sputter guns. Finally, the characterization of surface metal contamination caused by the ion implanter, or qualification of cleaning processes, requires high detection sensitivity (ppm or better) as well as high mass resolution, which can be readily achieved with magnetic sector or ToF-SIMS instruments. The different analytical and instrumentation approaches offer flexibility for achieving the best performance possible in the analysis of submicron circuit junctions, oxide composite films, and surface organic contamination. A comparison of SIMS techniques is given in Table 28.6.

Strengths and Weaknesses. SIMS is one of the most destructive surface analytical techniques, but it is also one of the most sensitive and reproducible techniques available. This applies to most elements in the periodic table, including H, C, N, and O, although these are also common contaminants found in the vacuum environment. The SIMS can reproducibly detect sub-percent dose and in-depth distribution variations in low- and high-energy ion implants, as well as ultrathin films. Analysis of small areas is available with all instrument types discussed, as long as sufficiently well-focused and intense primary ion beams can be produced (microprobe). Most commonly employed magnetic-sector-type spectrometers are designed to provide micron-scale spatial resolution with either tightly focused or relatively large primary ion beams, utilizing the direct imaging capabilities of the spectrometer (microscope).


TABLE 28.6 Application Areas of Different Secondary Ion Mass Spectroscopy (SIMS) Techniques Based on Different Instrument Design Configurations

Feature/Application | Magnetic Sector SIMS | Quadrupole SIMS | ToF-SIMS
Main features:
m/Δm | High | ≈350 | High
Mass detection | Single/few masses | Single/several masses | Full range (~10 kDa)
Primary beams | O₂⁺, Cs⁺ (inert) | O₂⁺, Cs⁺ (inert) | Ga⁺ (O₂⁺, Cs⁺, inert)
Optimized for | Depth profiling | Depth profiling | Surface analysis
Application:
Bulk dopant | Yes | Some | N/A
Deep implants | Yes | Most | N/A
Shallow implants | Only latest tools | Yes (modern tools) | Possible (multiple guns)
Thin films | Only latest tools | Yes (modern tools) | Possible
Surface metals | Yes | No | Yes
Organics | No (elements only) | Marginal | Yes
Small area | Several microns | Several microns | 100 nm (Ga⁺)
Insulators | Positive ions difficult | Yes | Yes

Most ToF-SIMS instruments are equipped with Ga⁺ liquid metal ion guns, which can provide 100 nm spatial resolution. Depth profile analysis of deep and ultrathin structures is mostly performed using either magnetic sector or quadrupole SIMS instruments. Surface sensitivity to organic and inorganic contaminants is the real strength of ToF-SIMS, because it automatically provides a survey of practically all contaminants in one analysis. When equipped with additional appropriate sputter guns, these instruments can also be used for depth profile analysis of thin films or shallow implants.

The interaction of the primary ion beam with the surface and the secondary ion formation is rather complicated and generally requires thorough investigation of potential analytical artifacts, such as secondary ion yield changes near the surface and deeper interfaces, sputter rate variations, surface roughening, ion-beam-induced dopant diffusion, etc. [203,204,218–224]. Because of potential sputter artifacts and order-of-magnitude spreads in secondary ion production for different elements in various material matrices, it is necessary to use a set of carefully prepared reference standards to calibrate the measurement. Ideally, these standards should be as close to the unknown sample as possible, both in dopant or impurity level and in matrix composition, to carry out quantitative analysis. In the optimum case, precisions near 0.5% can be realized.

28.3.2.2 Optical Spectroscopy

Purpose. The primary optical techniques for high-sensitivity detection of dopants and impurities in semiconductor solids are Fourier transform infrared (FTIR) and photoluminescence (PL) spectroscopies. These reach parts per billion atomic and lower when performed at temperatures near 4–15 K. The FTIR works best for bulk silicon, which is transparent in the infrared, and is frequently applied even at room temperature for the quantification of oxygen and carbon in Czochralski (CZ) silicon crystals and wafers. The PL at visible light frequencies penetrates only about 0.5–5 μm into silicon and hence is suitable for the analysis of epitaxial films. In some applications, low-temperature PL reaches sub-parts per trillion atomic detectability. A comparison of the methods is given in Table 28.7.

Methods. Infrared spectroscopy is based on the fact that molecules have discrete absorption frequencies associated with their vibrational and rotational motions. When a sample is placed in infrared light, it will selectively absorb at the resonant frequencies of the dopant or impurity species and pass the remainder. The absorption is associated with a change in the dipole moment of the molecule. The transmittance (T_S, measured as a function of wave number) for a normal incidence infrared probe on a double-side-polished silicon slab of thickness d is given by


TABLE 28.7 Comparison of Optical Spectroscopy Techniques for Silicon Analysis

Technique/Reference | Sample Interaction | Application
Fourier transform infrared (FTIR), room temperature [225–228] | Infrared absorption | Oxygen in Czochralski silicon quantified at ppma level
FTIR, low temperature [229–231] | Infrared absorption | Parts per billion atomic sensitivity to dopants and impurities in Si
Photoluminescence (PL), room temperature [229,232] | Light emission | Spatial signal mapping of compound semiconductors
PL, low temperature [229,233–238] | Light emission | Sub-ppb sensitivity to dopants and electrically active impurities

$$T_S = \frac{(1-R)^2 \exp(-\alpha d)}{1 - R^2 \exp(-2\alpha d)} \qquad (28.47)$$

where R is the internal surface reflectivity and α the absorption coefficient. For determination of interstitial oxygen in CZ silicon, for example, the 1107 cm⁻¹ band related to stretching modes of Si–O bonds is measured [225,226,231]. Infrared transmission (T_F) through a float zone slab of the same thickness d (as before), combined in ratio with Equation 28.47, yields

$$T = \frac{T_S}{T_F} \cong \exp[-(\alpha_S - \alpha_F)d] = \exp(-\alpha_0 d) \qquad (28.48)$$

This is a form of the Lambert–Beer law, which relates transmitted intensity to the concentration of an absorbing species. The simplicity facilitates determination of α₀, which is then converted to ppma or atoms per cubic centimeter through a constant of calibration. Calibration factors at room temperature for oxygen in silicon are available through ASTM, JEIDA, DIN, and others [225,239–243].

The FTIR is a form of infrared spectroscopy in which the absorption spectrum is acquired using an interferometer instead of a diffraction grating, so that a large acceptance aperture can be realized for increased sensitivity. The absorption spectrum is obtained by taking the Fourier transform of the measured interferogram, usually near liquid helium temperature for best performance. Spectra of dopants in silicon are typically recorded within the 200–400 cm⁻¹ range.

The PL technique floods the sample with visible light (from an Ar 514.5 nm laser, for example) to create a population of electronic excited states, and relaxation through radiative emission then follows. Electrically active impurity and defect centers can be identified with ultra-sensitivity by analyzing this emission, which is also in the visible light range. Trace levels of P, B, and As can be quantitatively measured at 10⁹–10¹⁵ atoms per cubic centimeter levels in silicon. Like FTIR, modern PL spectrometers use an interferometer for Fourier transform photoluminescence (FTPL), which has the advantage of large aperture and increased signal intensity.

Strengths and Weaknesses. Low-temperature implementation of FTIR and FTPL is an inconvenience, but it has a sizeable sensitivity advantage for the dopant and impurity applications listed in Table 28.8.
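Equations 28.47 and 28.48 are applied in practice as a simple ratio calculation; this is a minimal sketch in which the calibration factor is a placeholder (the actual values come from the ASTM/JEIDA/DIN standards cited above).

```python
import math

def oxygen_ppma(t_sample: float, t_floatzone: float, thickness_cm: float,
                cal_ppma_per_cm1: float) -> float:
    """Interstitial-oxygen estimate from the transmittance ratio of Equation 28.48.

    alpha0 = -ln(T_S / T_F) / d, converted to ppma by a calibration factor.
    cal_ppma_per_cm1 is a placeholder; use the published ASTM/JEIDA/DIN values.
    """
    alpha0 = -math.log(t_sample / t_floatzone) / thickness_cm
    return alpha0 * cal_ppma_per_cm1

# Illustrative numbers only: 675-um-thick wafer, transmittance ratio 0.8,
# assumed factor of 3 ppma per cm^-1
print(f"{oxygen_ppma(0.40, 0.50, 0.0675, 3.0):.1f} ppma")
```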

TABLE 28.8 Strengths and Weaknesses of Low Temperature FTIR and Fourier Transform Photoluminescence (FTPL)

Measurement | FTIR | FTPL
Substitutional carbon | Yes | No
Interstitial oxygen | Yes | No
Dopants >10¹⁵ atoms per cubic centimeter | Yes | No
Good sample throughput | Yes | No
Microspot low-temperature analysis | No | No
Dopants <10¹² atoms per cubic centimeter | No | Yes
Surface and epi analysis | No | Yes
Dislocations | No | Yes
Other deep-level centers | No | Yes


The FTIR is more suitable for bulk silicon analysis, while FTPL is more surface sensitive (0.5–5.0 μm penetration, depending on wavelength). In principle, both are non-destructive, although wafer stages for low-temperature operation are not available, so wafer samples must be broken into smaller pieces for analysis. Microspot FTIR or FTPL instruments that operate at low temperature are not yet commercially available for trace element analysis.

28.3.2.3 Radiochemical Methods for Trace Elements

Purpose. The radiochemical techniques most commonly applied to semiconductor materials are listed in Table 28.9. Of these, neutron activation analysis (NAA) is the most sensitive for trace element analysis. It is predominantly applied to bulk silicon (CZ and float zone) and can provide ppb or better sensitivities for many impurities of importance to semiconductor manufacturers. The other activation methods are more suited for profiling thin film structures.

Method. In NAA, the sample is exposed to a flux of thermal neutrons in a nuclear reactor. Radioactive isotopes are formed from stable elements in the matrix so that trace impurities can be identified by detecting gamma rays emitted from the products of (n, γ) nuclear reactions. This is normally performed for daughters of the irradiated elements that have half-lives of 24 h or more, which permits offline detection using doped Si or Ge crystal γ-detectors. For elements with daughter half-lives less than this (²H, ¹¹B, ¹⁵N, ¹⁹F, and others), the experiment is performed at the reactor site while the sample is being neutron irradiated, and is called prompt gamma activation analysis. In charged particle activation analysis, the sample is activated with an ion beam of suitable type and energy. Artificial radioisotopes are created by nuclear reactions on matrix atoms as well as the impurities of interest. Helium is a common projectile that offers flexibility for light elements including C, N, O, F, and Be [256]. Quantitative analysis is based on the identification of the created radioisotopes, most often using high-resolution γ-ray spectroscopy. Helium irradiation of oxygen in silicon, for example [¹⁶O(³He,p)¹⁸F], produces activated ¹⁸F, which is a metastable positron emitter that leads to 511 keV γ-rays characteristic of positron–electron annihilation. In nuclear reaction analysis, γ-rays generated directly from the parent species are detected. The γ-energy identifies the species, while intensity relates to concentration. Hydrogen can be quantified in thin films using a ¹⁵N projectile, for example. The threshold, or resonance, of the ¹H(¹⁵N,α,γ)¹²C reaction is very sharp, and occurs only at the depth in the sample where the ¹⁵N projectile has slowed in the solid to exactly 6.385 (±0.005) MeV. Hence, it is possible to determine a depth profile of ¹H by ramping the projectile energy [252]. Neutron depth profiling likewise relies on the absorption of a particle (an α-particle in this case) to generate depth profiles of B, N, and Li [253].

Strengths and Weaknesses. Activation techniques are inherently isotope specific and quantitative once simple geometrical calibrations of the equipment are made. They are well suited for absolute isotope quantification and generation of reference standards for other techniques. As the measurements are dependent on nuclear properties, they are completely decoupled from the chemical bonding environment.
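The ¹⁵N resonance method just described amounts to a simple energy-to-depth conversion; this is a minimal sketch in which the stopping power is a placeholder, since the real value depends on the film being profiled.

```python
def resonance_depth_nm(beam_mev: float, resonance_mev: float = 6.385,
                       stopping_kev_per_nm: float = 1.5) -> float:
    """Depth probed by a 15N beam: where the ions slow to the 6.385 MeV resonance.

    stopping_kev_per_nm is an assumed, material-dependent value (placeholder).
    """
    if beam_mev < resonance_mev:
        return 0.0  # below the resonance energy, only the surface can react
    return (beam_mev - resonance_mev) * 1000.0 / stopping_kev_per_nm

# Ramping the beam energy maps out the hydrogen profile point by point
for e in (6.4, 6.5, 6.7):
    print(f"{e} MeV -> {resonance_depth_nm(e):.0f} nm")
```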

TABLE 28.9 Radiochemical Techniques Used in Semiconductor Applications

Technique/Reference | Distinguishing Feature | Sensitivity | Application
Neutron activation analysis (NAA) [244–246] | Bulk analysis | ppta | Silicon impurities
Prompt gamma activation analysis (PGAA) [247,248] | NAA with short half-lives | ppma | Light elements
Charged particle activation analysis (CPAA) [249,250] | Particle-induced nuclear reactions | ppma | Oxygen in silicon
Nuclear reaction analysis (NRA) [251,252] | Particle-induced gamma-rays | 0.1% | H profiles in films
Neutron depth profiling (NDP) [253–255] | α-Particle path length absorption | 0.1% | B, Li, N profiles


Once activated, there is no chance of cross contamination with mobile ions, like Na or K. These techniques offer exceptional sensitivity for dopants and impurities and are well suited to isotope doping experiments; however, they cannot be applied with equal success to all elements of the periodic table. A reactor or accelerator facility is required, so the methods are not necessarily routinely available or low in cost.

28.3.3 Surface and Thin Film Composition and Chemistry

28.3.3.1 Rutherford Backscattering Spectrometry and Related Techniques

Purpose. Rutherford backscattering spectrometry (RBS) is a technique based in classical physics involving scattering of ionized particles by nuclei in the sample being analyzed. Common uses of RBS include quantitative depth profiling, areal concentration measurements (atoms per square centimeter), and crystal quality and impurity lattice site analysis. Its primary application is quantitative depth profiling to determine elemental compositions of thin films and multilayer structures. It is well suited for the analysis of complex silicides and multilayer metallizations, and provides data accurate to 5% or better without the use of standards. The related techniques outlined in Table 28.10 extend the methodology to ultrathin films, physical defects, impurity analysis, and hydrogen quantification.

Method. In conventional RBS, high-energy (1–3 MeV) mono-energetic ions of mass m, atomic number Z₁, and energy E₀ are directed at the surface of a sample using a particle accelerator. ⁴He ions (alpha particles) are typically used. A small fraction of these ions pass close enough to atomic nuclei in the material so that Coulombic forces between the two nuclei cause the lighter ion to be scattered. A schematic of this process is shown in Figure 28.32. While the nuclei do not actually collide, the process can be modeled as an elastic collision using classical physics. The energy of the ion (E) after the encounter is related to the mass of the target nucleus (M) through the kinematic factor, which expresses conservation of energy and momentum

$$K = \frac{E}{E_0} = \left(\frac{\sqrt{M^2 - m^2\sin^2\Theta}\, + m\cos\Theta}{M + m}\right)^{2} \qquad (28.49)$$

where E₀ is the energy of the projectile before scattering, E the energy of the projectile after scattering, m the mass of the projectile, M the mass of the target nucleus, and Θ the angle of scattering in the laboratory system. Sensitivity is near 0.01% (5×10¹⁸ atoms per cubic centimeter), and is related to the differential scattering cross-section (dσ/dΩ), defined by

$$\frac{d\sigma}{d\Omega} = \left(\frac{Z_1 Z_2 e^2}{4E}\right)^{2} \frac{4}{\sin^4\Theta}\; \frac{\left[\sqrt{1-(m/M)^2\sin^2\Theta} + \cos\Theta\right]^{2}}{\sqrt{1-(m/M)^2\sin^2\Theta}} \qquad (28.50)$$

where Z₁ is the atomic number of the projectile, Z₂ the atomic number of the target atom, and e the electronic charge.

TABLE 28.10 Backscattering Techniques Used in Semiconductor Applications

Technique/Reference | Distinguishing Feature | Application
Rutherford backscattering spectrometry (RBS) [257–259] | >MeV particle scattering | ~1 μm film composition profiles
Heavy ion backscattering (HIBS) [259,260] | Heavy incident particle | Metal impurities at high sensitivity
Medium-energy ion scattering (MEIS) [258,261] | <MeV particle scattering | ~10 nm film composition profiles
Elastic recoil detection analysis (ERDA) [262,263] | Forward knock-on scattering | Hydrogen analysis in thin films
Particle-induced x-ray emission (PIXE) [264,265] | Ion-induced x-ray detection | Elemental analysis
Ion channeling [257,266] | Beam aligned to lattice | Surface lattice damage

FIGURE 28.32 Illustration of the Rutherford backscattering spectrometry (RBS) process showing both normal (~160°) and grazing (~100°) angle detector configurations. The plots of backscatter yield versus channel number (2.27 MeV He at 160°; 2.275 MeV He at 92°) show how the grazing angle detector enhances surface sensitivity in the analysis of a 5 nm Cr silicide film on Si. Note how the Si signal from the CrSi₂ film is only evident using the grazing angle detector (lower plot).


The kinematic energy loss provides the mass of the target nucleus. Additionally, energy loss occurs as the ion slows down in the solid (dE/dx). This allows calculation of the depth of a feature beneath the surface of the sample. The scattering geometry also affects depth resolution, and it is possible to improve depth resolution for thin films using a grazing angle detector (Figure 28.32). In all cases, the mass and depth scales must be deconvolved to extract the complete profile. An example of RBS data from a single thin film on a substrate is shown in Figure 28.32.

In addition to elemental compositional information, RBS can also be used to study the structure of single-crystal samples. When a sample is channeled, the rows of atoms in the lattice are aligned parallel to the incident He ion beam. The bombarding He will backscatter from the first few monolayers of material at the same rate as a non-aligned sample, but backscattering from buried atoms in the lattice will be drastically reduced, since these atoms are shielded from the incident He atoms by the atoms in the surface layers. For example, the backscattering signal from a single-crystal Si sample which is in channeling alignment along the ⟨100⟩ axis will be approximately 3% of the backscattering signal from a non-aligned crystal, or amorphous or polycrystalline Si. By measuring the reduction in backscattering when a sample is channeled, it is possible to quantitatively measure and profile the crystal perfection of a sample [267]. Channeling can also be used for background reduction to help improve the RBS sensitivity for light elements. For example, it is difficult to accurately measure N concentrations in TiN films deposited on Si substrates due to the overlapped signal from the Si substrate. By channeling the substrate, the substrate signal is reduced, thus improving the sensitivity for the N peak, which is superimposed on the Si signal. Since TiN layers are typically polycrystalline, the channeling does not affect the backscattering signals from the Ti or N [267].

Some of the related techniques listed in Table 28.10 are fundamentally the same as RBS. For example, heavy ion backscattering (HIBS) simply uses heavier ions for the primary beam. Collision cross-sections are higher for heavier primary ions, resulting in improved sensitivities. The HIBS is particularly useful for detecting trace levels of heavy metals in light element matrices. If the incident beam has a higher mass than the matrix, then the matrix elements will be forward scattered and will not contribute to the signal, allowing interference-free spectra. Medium-energy ion scattering (MEIS) utilizes a primary ion beam in the range of 50–300 keV, which is optimum for providing classical Rutherford scattering coupled with very good surface specificity, approximately 2–10 nm. The MEIS therefore is a powerful tool for characterizing ultrathin films, shallow ion implants, trace element analysis, and studies of thin film crystallinity via ion channeling. Elastic recoil detection (ERD) analysis takes advantage of the fact that elements in a material lighter than the primary ions will be forward scattered upon collision with the primaries. In conventional RBS, which uses ⁴He ions, a special case of ERD known as hydrogen forward scattering is often employed to quantify the hydrogen content of thin films.
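Equation 28.49 determines where each element's edge appears in spectra like those of Figure 28.32; this is a minimal sketch of the kinematic factor for ⁴He projectiles, with isotope masses rounded (assumed values for the example).

```python
import math

def kinematic_factor(m_amu: float, M_amu: float, theta_deg: float) -> float:
    """Kinematic factor K = E/E0 for elastic backscattering (Equation 28.49)."""
    th = math.radians(theta_deg)
    root = math.sqrt(M_amu**2 - (m_amu * math.sin(th))**2)
    return ((root + m_amu * math.cos(th)) / (M_amu + m_amu)) ** 2

# 2.27 MeV 4He at a 160-degree detector, Si vs. Cr targets (cf. Figure 28.32);
# heavier nuclei return the He ion with more energy (larger K)
for name, M in (("Si", 27.98), ("Cr", 51.94)):
    print(f"{name}: K = {kinematic_factor(4.0026, M, 160.0):.3f}")
```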
Particle-induced x-ray emission (PIXE) is an elemental analysis technique that detects x-rays that are induced by the collision of the primary particles with the atoms in the sample. The interaction causes the removal of core electrons, leading to the emission of x-rays with specific energies when outer shell electrons drop to fill the core shell vacancies. The x-ray energies are independent of the excitation process and are element specific. Since these x-rays are produced constantly during an RBS analysis, PIXE requires only that an RBS instrument be fitted with a suitable x-ray detector. As an accessory on RBS instruments, PIXE is useful for heavy element identification when the elements of interest have only small differences in RBS energies but distinct differences in PIXE spectra. There are also dedicated PIXE instruments; however, these typically use H⁺ bombardment instead of the He⁺ used in RBS.

Strengths and Weaknesses. Since the kinematic factor and energy loss (dE/dx) curves (also known as "stopping power") are known prior to the analysis, backscatter data provide mass and depth information to about 5% accuracy or better without the use of standards. Because of the standardless quantitative analysis capability, RBS is often used to standardize results from other techniques that rely on sensitivity

DK4126—Chapter28—23/5/2007—16:27—ANBARASAN—240457—XML MODEL CRC12a – pp. 1–76.

Electrical, Physical, and Chemical Characterization

28-49

factors that can vary with the sample matrix, such as Auger. The technique works best for heavy elements in a light matrix, but can be extended to thin oxynitride thin films, for example, with the medium-energy modification. Backscattering requires an accelerator capable of producing 1–3 MeV energy H or He ion beams within a 1 mm spot on the sample. Hence, RBS is not a small spot technique, but with specialized high-energy ion optics, it is possible to reduce the spot size to 2 mm [268]. 28.3.3.2

28.3.3.2 X-Ray Fluorescence and Total Reflection X-Ray Fluorescence

Purpose. X-ray fluorescence (XRF) is ideal for rapid qualitative and quantitative analysis of atomic constituents. It is a particularly useful tool for the initial analysis of an unknown contamination, in that it accommodates solid or liquid samples, metals and insulators, and production-size wafers. With proper calibration, it can be used to monitor the thickness of metal films between 20 nm and several microns thick, with precisions as high as 0.01% [269–271]. The XRF is basically a bulk evaluation method, with a sampling depth in the 10 μm range, determined by penetration of the incident x-ray radiation and the escape depth of the characteristic fluorescence. Detection limits for XRF are on the order of 10 ppm.

Total reflection XRF (TXRF) is a special case of XRF for which dedicated instruments are available. In TXRF, the x-rays impinge on the sample at a grazing angle, less than the critical angle (φc) for total reflection (in the mrad range). In this manner, the excitation depth is limited to a few nanometers, giving rise to a strong fluorescence signal from the near surface of the sample. The TXRF requires the target sample to have a smooth surface and is commonly used in the semiconductor industry for measuring transition metal contamination on wafer surfaces, either as single-point measurements or as complete maps of entire wafer surfaces [272–274]. For semiconductor applications, TXRF instruments, which can be fully automated, typically have at least the sample introduction area housed in a cleanroom environment to avoid contaminating sample surfaces with airborne particulate matter prior to analysis.

Method. In XRF, incident x-rays are used to eject inner shell electrons from atoms of the sample via the photoelectric effect. The atom then relaxes through the emission of an x-ray with energy characteristic of the parent atom and intensity proportional to the amount of the element present. Conventional instruments use a stationary x-ray target, typically made of elemental Cr, Cu, Ag, Mo, W, or Au (with characteristic energies of 5.411, 8.041, 22.104, 17.443, 9.60, or 9.711 keV, respectively). The spectrum of x-rays that irradiates the sample includes the broad band Bremsstrahlung background. X-ray detectors may be of the EDS or wavelength dispersive spectroscopy (WDS) type. Calibration for quantitative measurement of film thickness requires carefully prepared reference samples of identical composition.

A schematic showing the configuration of a TXRF instrument is shown in Figure 28.33. The TXRF requires a specular surface, since the primary angle of incidence is extremely small (<φc), and is ideal for highly polished samples like silicon wafers. The critical angle (in mrad) is given by

φc = 3.72 × 10⁻¹¹ √ne / E    (28.51)

where φc is the critical angle, ne the electron density (in cm⁻³), and E the x-ray energy in keV [272]. For a typical W Lβ source (9.66 keV) on a silicon wafer, φc is about 3 mrad (approximately 0.17°). When the incident x-ray beam hits a smooth surface at angles less than φc, total external reflection occurs. Under these conditions, where the incident beam is 100% reflected, an evanescent wave is formed at the reflecting surface. The penetration depth of this wave is defined as the depth where the intensity decays to 1/e (37%) of its initial value, generally limited to a few nanometers. It is this evanescent wave that generates the fluorescent signal within the solid, hence resulting in a strongly surface-sensitive measurement. The sensitivity of TXRF is in the 10⁹–10¹² atoms per square centimeter range, although improvements in instrument configuration and specialized synchrotron applications are pressing this detection limit range several orders of magnitude lower. A technique that can be used to maximize TXRF detection sensitivities is vapor phase decomposition (VPD), which concentrates the impurities by chemical means before the analysis



FIGURE 28.33 The upper panel shows the configuration for a total reflection x-ray fluorescence (TXRF) measurement. The lower panel shows a typical TXRF spectrum from a Si wafer analysis.

is performed. In this procedure, a wafer is exposed to a dilute HF vapor, which slowly etches any silicon oxide on the surface of the wafer, subject to reactions of the type

6HF + SiO₂ → H₂SiF₆ + 2H₂O (residue)    (28.52)

In effect, this integrates over the etched depth by removing silicon matrix atoms and leaving the impurities behind. Variations of the VPD technique include fluid drop scanning, in which a small droplet of HF is moved systematically over the surface of the wafer by a custom-designed holder that keeps it from slipping away. The drop, containing the collected residue, is then left to dry on the wafer for TXRF analysis.

Strengths and Weaknesses. Conventional XRF provides rapid identification for samples of unknown composition or origin, in either solid or liquid form. Elemental qualitative composition to 5–10 wt% accuracy can be determined in a matter of 15–20 min. Accuracies approaching 0.01 wt% are possible, provided suitable reference samples of similar composition and uniformity are available. First-principles determination of wt% based on raw x-ray intensities is more difficult with XRF, because the interplay between the x-ray energy, matrix absorption, and secondary fluorescence introduces significant complications [275].

Total reflection XRF has become an industry standard for quantifying trace metal impurities on semiconductor wafers (Figure 28.33). Fully automated instruments are available, such that wafer boxes can be introduced directly into the instrument, requiring no human contact with individual wafers. Outside of fabs, TXRF instruments can have sample introduction and removal stations housed in cleanroom environments to avoid particulate contamination of the wafer surfaces. Its strengths include simultaneous multi-element analysis with very low detection limits and simple quantification by internal standardization over a wide dynamic range. In addition, it is a non-destructive measurement technique [274].


Generally, TXRF is not suitable for the detection of low-Z elements (Z < 14), attributable to problems associated with fluorescence excitation, energy-dispersive detection, and quantitative analysis [274]. The TXRF is also sensitive to surface roughness, which increases the background signal and degrades the detection limits. If the roughness is great enough, the conditions necessary for total reflection will not be met and the result will be a glancing angle XRF measurement instead of a TXRF measurement, invalidating the TXRF quantification procedures. Since it is difficult to focus x-rays, small spot XRF tools are not readily available for high spatial resolution. Most commercial instruments provide a 100–500 μm spot, with 30 μm reported in the extreme [276]. Recent synchrotron experiments approach 0.5 μm [277], and capillary optics may offer additional improvement [278]. In commercial TXRF instruments, the spot size is determined by the EDS detector area (about 1 cm²). The beam is intentionally made wide enough to excite the entire area under the detector in order to achieve the required detection limits (Figure 28.33).
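As a quick check of Equation 28.51, the sketch below computes the critical angle for a silicon surface; the electron density is derived from bulk Si properties, which are assumed standard values rather than numbers given in this chapter.

    import math

    N_A = 6.022e23                 # Avogadro's number, 1/mol
    rho, Z, A = 2.33, 14, 28.09    # Si density (g/cm^3), atomic number, atomic mass

    n_e = rho * N_A * Z / A        # electron density, ~7.0e23 cm^-3 for Si

    def critical_angle_mrad(n_e, energy_keV):
        """Total-reflection critical angle in mrad, per Equation 28.51."""
        return 3.72e-11 * math.sqrt(n_e) / energy_keV

    # ~3.2 mrad (~0.18 deg) for the 9.66 keV W L-beta line, matching the text
    print(critical_angle_mrad(n_e, 9.66))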

28.3.3.3 X-Ray Photoelectron Spectroscopy

Purpose. X-ray photoelectron spectroscopy (XPS), also known as electron spectroscopy for chemical analysis (ESCA), is an analytical technique that can provide information about the surface and near-surface region of materials and, for very thin layers, interface chemistry. With sputter depth profiling, XPS can provide composition (and sometimes chemistry) as a function of depth into the sample. The XPS is sensitive to the outermost atoms or molecules of a sample (typically 0–10 nm) with sub-monolayer detectability. All elements with the exception of H and He can be detected and quantified. Since atoms such as C, N, O, and F are the major constituents of most surface contamination, including airborne molecular contamination, surface corrosion or oxidation, and residues from cleaning or process steps, XPS is a valuable chemical characterization tool applied to all stages of semiconductor manufacturing [279].

X-ray photoelectron spectroscopy is also very useful for determining the composition and chemistry of deposited or grown thin films. For films on the order of 10 nm thick or less, an XPS surface analysis can provide a "bulk" characterization of the film. In recent years, XPS has been routinely used for characterizing ultrathin materials such as barrier metals and gate dielectric materials. For instance, XPS can measure thickness and nitrogen dose in ultrathin silicon oxynitride films with extremely high precision [280]. Thicker films may require sputter depth profiling to provide film composition and chemistry. Sputter depth profiling is accomplished by alternating ion etch steps and data acquisition steps within the sputter crater to determine elemental concentrations as a function of depth. The sputtering conditions can be varied to provide etch step sizes of <0.5 nm to >1 μm per step. The XPS depth profiling is suitable for bulk thin film (single or multilayer) analysis as well as for investigating buried interface chemistry.

Method. In XPS, the sample surface is bombarded with x-rays, typically Al Kα (1486.6 eV) or Mg Kα (1253.6 eV), causing the ejection of core level electrons, a process known as the photoelectric effect. Ejected photoelectrons have discrete energies relating to the specific parent atoms, given by the expression

KE = hν − (BE + φ)    (28.53)

where KE is the kinetic energy of the photoelectron (in eV), h Planck's constant, ν the frequency of the incident x-ray, BE the binding energy of the electron in the atom (in eV), and φ the spectrometer work function (≈3–4 eV). Photoelectrons are generated within the x-ray penetration depth (typically many microns), but since the photoelectron energies are low (less than 1486 eV for Al Kα), only the photoelectrons within the top three photoelectron escape depths are detected. This is the origin of the surface selectivity of XPS [281]. Escape depths are on the order of 1.5–3.5 nm, which leads to an analysis depth of approximately 5–10 nm. Typically, 95% of the signal originates from within this depth.

Quantitative analysis with XPS is accomplished by determining the atom fractions of each constituent and normalizing to 100% of the detected elements. The atom fraction of constituent element Cx is


represented by the following equation:

Cx = (Ix/Sx) / Σi (Ii/Si)    (28.54)

where I is the intensity of the photoelectron peak (measured as the peak area) and S the atomic sensitivity factor. While atomic sensitivity factors may be somewhat matrix dependent, primarily due to differences in the photoelectron mean free paths and, to a lesser extent, the photoelectron cross-sections in different materials, the ratios of these values are nearly constant. The result is nearly matrix-free quantification. Therefore, relative sensitivity factors (RSFs) for a given spectrometer can be used to provide atomic percentages of the detected elements with absolute accuracies on the order of 10% or better if the peaks have sufficient signal-to-noise ratios.

A unique property of XPS rests on the fact that the core level electrons nearest to the valence shells often exhibit shifts in binding energy due to the specific chemical environment of the atom. For example, consider the binding energy of Si in various chemical forms: elemental Si and some silicides have photoelectron binding energies of approximately 99 eV, Si in silicon carbide (SiC) a binding energy of about 100.6 eV, in silicon nitride (Si3N4) a binding energy of approximately 101.8 eV, and in SiO2 a binding energy of approximately 103.3–103.5 eV. Figure 28.34 shows a typical nitrogen spectrum from an ultrathin silicon oxynitride gate dielectric film where multiple bonding states for nitrogen are observed. Binding energies for elements in various chemical forms have been tabulated [282].
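A minimal sketch of the normalization in Equation 28.54 is given below; the peak areas and sensitivity factors are illustrative placeholders, not values from any particular spectrometer.

    # Atomic fractions from XPS peak areas divided by sensitivity factors,
    # normalized to 100% of the detected elements (Equation 28.54).
    peaks = {                      # element line: (peak area, sensitivity factor)
        "Si 2p": (12000.0, 0.28),  # placeholder values for illustration
        "O 1s":  (52000.0, 0.71),
        "N 1s":  (3500.0,  0.48),
    }

    total = sum(area / s for area, s in peaks.values())
    for line, (area, s) in peaks.items():
        print(f"{line}: {100.0 * (area / s) / total:.1f} at.%")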

[Figure 28.34 plots the N 1s region with four fitted peaks: N1 at 396.69 eV, N2 at 398.19 eV, N3 at 400.18 eV, and N4 at 402.88 eV, i.e., shifts of 0.00, 1.50, 3.48, and 6.18 eV relative to N1.]

FIGURE 28.34 X-ray photoelectron spectroscopy high-resolution spectrum of nitrogen from an ultrathin silicon oxynitride gate dielectric film. Several chemical forms of N are identified, including Si3N4 (pure silicon nitride, peak N1) and three SiOxNy peaks. Peaks N2 and N3 result from bonding arrangements where one or more oxygen atoms replace nitrogen in the Si3N4 tetrahedral coordination. The highest binding energy peak (N4) represents the N–O bond from a nitroso or bridging bond. In this example where the peaks partially overlap, non-linear least squares (NLLS) curve fitting was applied in order to determine the relative percentages of each species within the analytical volume.


Strengths and Weaknesses. XPS is a very good survey technique for characterizing surfaces or residues in cases where the composition is unknown. The ability to provide chemical bonding information coupled with quantitative analysis also makes the technique useful for characterizing chemical changes at surfaces due to material processing. Furthermore, the sampling depth makes it feasible to obtain bulk characterization of very thin films on the order of 10 nm or less. Perhaps one of the most useful attributes of XPS is in analyzing insulating materials. Because XPS probes the surface with electrically neutral photons and the interaction only ejects photoelectrons (resulting in a positively charged surface), it is simply a matter of replacing the ejected electrons to return the surface to charge neutrality. This is commonly done by "flooding" the area with low-energy thermal electrons that are then electrostatically attracted to the positively charged areas, thereby restoring charge neutrality.

The detection limits for XPS are on average in the part-per-thousand range under typical analysis conditions. Therefore, the technique is not generally suitable for trace elemental analysis. The sensitivities for some, usually heavier, elements may be nearly an order of magnitude better than the average, while the very lightest elements (Be, Li) have sensitivities closer to 1 atomic percent. Sputter depth profiling can produce artifacts depending on the material being sputtered. Artifacts include incorrect stoichiometries due to preferential sputtering of different components within the material, or degradation of the material under the energetic ion bombardment such that chemical state identifications are not reliable. Recent advances in ion sources have made strides in mitigating the latter problem [283]. Scanning XPS for the purpose of imaging is not easy to achieve, since it is extremely difficult to steer or raster the x-ray beam itself. However, it is possible to scan the XPS analyzer spot (within a large x-ray irradiation area) to achieve XPS elemental maps. Another approach is to use a scanned electron beam source over the x-ray anode surface to effectively produce a scanned x-ray beam. These methods, however, only provide spatial resolution on the order of a few microns or so.

28.3.3.4 Auger Electron Spectroscopy

Purpose. Auger electron spectroscopy (AES) has many similarities to XPS, including a comparable range of detection limits and sampling depth. Like XPS, AES can detect all elements except for H and He and therefore serves as a useful survey measurement for surface characterization. Sputter depth profiling provides a means for bulk characterization of thin films or film stacks as well as for measuring interface composition. The depth of information for AES extends from the surface to a maximum of approximately 10 nm, depending on the energy of the measured Auger electrons. However, the majority of Auger electrons for many common elements occur at relatively low kinetic energies, which results in an average sampling depth for AES of 3–5 nm (shallower than the average XPS sampling depth) [284].

Auger electron spectroscopy uses a focused electron beam for excitation, which extends the analysis to localized spots as small as 15 nm across. The small spot capability makes AES suitable for analyses such as characterization of via opening cleanups, small particles, or general surface contamination confined to areas too small for XPS analysis. Auger depth profiles are common in the semiconductor industry because of the technique's ability to obtain profiles from small areas. The AES is often used in conjunction with FIB to obtain elemental compositions of buried defects or layers. The FIB is used to prepare cross-sections, which are then analyzed by AES. There are AES instruments available that integrate a FIB column for this purpose.

Method. In AES, the sample is bombarded with a focused electron beam, causing the ejection of an inner shell electron, similar to XPS. The Auger process, first described by P. Auger [285], is an electronic rearrangement that serves to relax the atom from the excited state brought on by the inner shell vacancy. It is a two-electron process, first involving an electron from a higher energy shell filling the inner shell vacancy, followed by a second electron from the higher energy shell leaving the atom for energy conservation. It is this latter Auger electron that provides the basis for AES. Electrons resulting from the Auger process have discrete energies relating to the atomic number of the parent atom. As with photoelectrons, the Auger electrons are low-kinetic-energy electrons, and the associated mean free paths ensure that only the Auger electrons from near the surface escape without energy loss, hence providing the surface sensitivity of the technique.
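The two-electron picture leads to a crude first-order estimate of an Auger line's kinetic energy, E(ABC) ≈ E_A − E_B − E_C. The sketch below applies it to the Si KLL transition using approximate tabulated binding energies; these are assumed literature values, and the estimate ignores relaxation effects, so measured peaks sit a few tens of eV away.

    def auger_energy(e_a, e_b, e_c):
        """First-order estimate (eV) of the ABC Auger kinetic energy:
        the A-level vacancy is filled from B, ejecting the C electron."""
        return e_a - e_b - e_c

    # Approximate Si core-level binding energies (eV): K ~1839, L2,3 ~99.8
    print(auger_energy(1839.0, 99.8, 99.8))  # ~1639 eV; measured Si KLL ~1616 eV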


Quantitative analysis of Auger data is very similar to that for XPS. Most algorithms for concentration estimates involve the incorporation of intensity ratio measurements into equations of the type

Pi = (Ii/Si) / Σj (Ij/Sj)    (28.55)

which expresses the atomic percent (Pi) of element i as a function of the total Auger current Ii, sensitivity factor Si, and the corresponding ratios for all of the other elements detected [119,120,286]. Because the electron beam probe–sample interaction also produces secondary and BSEs that contribute to a generally increasing background with increasing energy, the Auger electron peaks situated on top of this background can be rather small and, historically, were more difficult to detect. For this reason, peak-to-peak values from differentiated Auger spectra (d[E·N(E)]/dE) are usually used in place of Ii. However, this practice has changed somewhat in recent years. Quantitative algorithms now rely on the measurement of peak areas taken directly from peaks in the E·N(E) spectrum [287]. Sensitivity factors have been adapted to achieve the desired balance between quantitative accuracy and simplicity [288]. The most convenient and widely applied method is to use published values for pure element standards. Errors in this approach can be quite large (20%–50%), but they are often not cause for concern in production applications where gross problems can be resolved with semiquantitative estimates. In cases of heavily contaminated surfaces, errors in excess of 80% have been willingly tolerated [289]. However, this method of using pure element sensitivity factors can give 2%–5% reproducibility in sequential measurements.

Strengths and Weaknesses. The similarities between AES and XPS mean that the techniques share some of the same strengths and weaknesses. The AES is a good survey technique, rendering itself useful for characterizing unknown contaminants or materials. The AES is not considered a trace analysis technique. The small spot capability makes Auger the best technique for determining compositions of submicron diameter particles, as well as for surface or depth profile analysis in small areas. The imaging capabilities of modern AES instruments are superior, and the speed at which a typical AES survey spectrum can be acquired is on par with that of most SEM/energy dispersive x-ray systems. Coupling AES with FIB cross-sectioning makes the technique a powerful tool for characterizing buried defects or layers (Figure 28.35). In practice, modern AES instruments benefit from the use of a rastered electron beam to provide real-time SE imaging of the analysis areas/defects (referred to as scanning Auger microscopy or SAM). Therefore, in addition to Auger survey spectra, SE images, BSE images, Auger elemental maps, and Auger elemental line scans are typical supporting data available in a successful AES analysis.

Because the excitation source is electron bombardment, AES is very susceptible to charging and is generally not suitable for insulating materials. However, thin insulating layers on a conductive substrate can often be accommodated if the primary electron current penetrates into the underlying conductive layer. The mechanics of depth profiling by AES is identical to that of XPS. Depth profiles are typically prepared by alternating ion etching steps with the data acquisition steps. As with XPS, artifacts are introduced if the exposed surface becomes rough or damaged [218,290,291].

28.3.4 Stress and Physical Defects

28.3.4.1 X-Ray Diffraction

Purpose. XRD is able to unambiguously identify the composition of crystalline and polycrystalline semiconductor powders, thin films, and substrates. It also provides unique information about the strain, grain size, crystalline phases, preferred orientation, and defect structure of polysilicon, Al, Cu, Au, and other metal layers.

Method. Diffraction techniques require that the sample contain single-crystal or polycrystalline components. On the nanometer scale, these are made up of parallel planes that are spaced at regular intervals, given by the cell constants a, b, and c. For the familiar cubic crystal, these form an orthogonal coordinate system.



FIGURE 28.35 The FIB cross-section of a buried defect on a wafer surface after WSi deposition. The Auger electron spectroscopy analysis from the surface (point 3), the center of the cross-sectioned defect (point 1), and the sectioned surface below the defect (point 2) indicates that the defect is carbonaceous.


TABLE 28.11 Diffraction Techniques Used for Semiconductor Applications

Kinematic theory:
- Powder diffraction*: lattice parameter for identification [293,294]
- Laue diffraction*: crystal orientation [293,295]
- Seeman–Bohlin diffraction*: crystalline phases in thin films [293,294]
- Pole figures*: crystallographic texture and orientation [135,296]
- High-precision cell constants*: quantitative strain analysis [297,298]
- Bragg spacing comparator: oxygen effect on Si lattice parameter [298,299]

Dynamical theory:
- Anomalous x-ray transmission: gauge of crystal perfection [300,301]
- Diffuse x-ray scattering: bulk precipitate size and number density [299,302]
- X-ray reflectivity*: nanometer roughness and defects [303,304]

*More frequently used.

An arbitrary set of planes can be described by the Miller indices (h, k, l), which define where the plane intersects the coordinate axes in integer multiples of a, b, and c, respectively. The distance (d) between (hkl) planes in a cubic crystal (a = b = c) is given by

dhkl = a / √(h² + k² + l²)    (28.56)

The condition for constructive interference of x-rays of wavelength λ from planes of spacing dhkl is given by Bragg's law,

nλ = 2dhkl sin θhkl    (28.57)

where θhkl is the angle between the atomic planes and the incident (and diffracted) beam, and n is a positive integer (1, 2, 3, ...) denoting the order of the reflection. The diffraction techniques listed first in Table 28.11 depend on this equation, which is referred to as the kinematic description. Those listed last are more appropriately described by the dynamical theory, in which the interactions of x-ray wavelets at all points throughout the irradiated volume are taken into account. The examples in Figure 28.36 illustrate a practical case of using both XRD and x-ray reflection for characterizing a high-k dielectric film. Procedures and tools for evaluation of reference x-ray powder patterns rely on the powder diffraction file (PDF) published by the Joint Committee on Powder Diffraction Standards (JCPDS) of the International Centre for Diffraction Data [292,293]. There are in excess of 30,000 reference patterns relevant to semiconductor applications, available on CD-ROM or as database upgrades for microcomputer search and retrieval.

In residual stress measurements by diffraction, the strain in the crystal lattice is measured, and the residual stress is then calculated assuming linear elastic distortion of the crystal lattice. Although the term stress measurement has come into common use, stress is an extrinsic property that is not directly measurable. High-precision measurements of changes in cell constants, based on Bragg's law, can be performed with a four-circle diffractometer to determine the strain. Values of Δd/d as small as 10⁻⁶ are possible, and in special experiments (Bragg spacing comparator) this can be extended to 10⁻⁹. Also, the topography methods covered in the following sections can be applied to measure strain induced by a thin film deposited uniformly over a wafer, as can x-ray linewidth broadening.

Strengths and Weaknesses. XRD techniques are considered to be non-destructive and suitable for contactless process monitoring, although few such applications are found today in the wafer fab environment. They are generally fast, requiring 10–30 min acquisition time for a good spectrum or lattice parameter measurement. Extensive diffraction tables are available on CD-ROM (JCPDS) for rapid search



FIGURE 28.36 X-ray reflectance analysis pinpointed the thickness of a HfSiON thin film at 25.02 Å (a). By measuring the crystalline and amorphous peaks from the same film, an x-ray diffraction (XRD) analysis determined that the crystalline fraction was 51.7% (b). Data courtesy of Bede X-ray Metrology. Sample courtesy of Manuel Quevedo, SEMATECH.

and retrieval. X-ray generators of the rotating anode type provide high power (10–18 kW) relative to fixed-target sources (4–5 kW), although the latter are typically more reliable and less costly. Small spot analysis is limited with XRD techniques in general, except in special experiments involving an intense synchrotron source or advanced Fresnel zone plate lenses [305]. With the Gandolfi camera, it is possible to measure patterns from a single particle as small as 30 μm in diameter with a laboratory setup, but the technique may require hours of acquisition time.
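As a worked illustration of Equations 28.56 and 28.57, the sketch below computes the (111) spacing of Si and the corresponding Bragg angle; the lattice constant and Cu Kα wavelength are standard handbook values assumed here, not data from this section.

    import math

    def d_cubic(a, h, k, l):
        """Interplanar spacing of (hkl) planes in a cubic lattice (Eq. 28.56)."""
        return a / math.sqrt(h**2 + k**2 + l**2)

    def bragg_theta_deg(lam, d, n=1):
        """Bragg angle in degrees from n*lambda = 2*d*sin(theta) (Eq. 28.57)."""
        return math.degrees(math.asin(n * lam / (2.0 * d)))

    a_si = 5.431        # Si lattice constant, angstroms (assumed standard value)
    lam = 1.5406        # Cu K-alpha1 wavelength, angstroms
    d111 = d_cubic(a_si, 1, 1, 1)            # ~3.14 angstroms
    print(d111, bragg_theta_deg(lam, d111))  # theta ~14.2 deg, i.e., 2-theta ~28.4 deg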

28.3.4.2 X-Ray Topography/X-Ray Reflectance

Purpose. X-ray topography refers to a detailed description and mapping of physical features of a crystalline solid, such as a silicon wafer, either throughout the bulk or in the near-surface region, depending on the camera used. The images formed reveal surface relief, wafer warpage, small changes in crystallographic orientation, strains associated with epi-films and lattice defects, oxygen precipitates, and thermal process deformations.


TABLE 28.12 Topography Techniques Applied to Semiconductor Materials and Processing

Surface:
- Berg–Barrett topography: wafer surface defect image [301]
- Double-crystal topography: high-strain resolution images [297,306,307]
- Triple-crystal topography: high-strain resolution images [297,299,307,308]

Bulk:
- Scanning Lang topography: wafer volume defect image; wafer warpage [301,309]
- Section topography: wafer cross-section defect image [299,310,311]
- Moiré Lang topography: superimposed SIMOX (separation by implantation of oxygen) lattice rotation [312]

Method. In x-ray topography, the intensity of a selected Bragg diffraction spot is measured as a function of position across a crystalline solid, such as a silicon wafer. Deviations in the intensity relate to small changes in the variables of the Bragg equation (Equation 28.57) brought about by defects in the crystal lattice, or by inclusions such as dopants, precipitates, and impurities in the material. Kinematic theory (the Bragg equation) accounts for intensities from mosaic structures such as imperfect silicon, polysilicon, or aluminum metallization, which consist of many small, slightly misaligned grains. The dynamical theory of diffraction treats the stronger wave interactions present in nearly perfect single crystals, including silicon wafers, for which multiple diffraction and extinction effects occur.

A variety of topographic cameras are available for forming images of the surface or bulk of wafers, as listed in Table 28.12. The surface methods may flood the wafer with Cu Kα (8.041 keV) x-rays at grazing incidence (Berg–Barrett), for example, or position it at a steeper angle for more penetration into the solid. This may be as little as 0.1–1.0 μm, or as much as 20–150 μm, depending on the geometry. Incident x-rays of higher energy, such as Mo Kα (17.443 keV), penetrate entirely through the wafer and are recorded on the backside by a sheet of photographic film. These are the transmission methods, which probe the bulk using scanning (Lang) or stationary (section topography) cameras.

The lateral resolution in topography is not limited by the wavelength of the x-rays, but by the grain size of the recording medium, which for the best nuclear emulsion films is about 1 μm. Although this is clearly inadequate for direct imaging of point defects, dislocations, and atom aggregates, the strain fields associated with these generally extend over distances of several microns or more. This is illustrated in Figure 28.37, where transmission mode (bulk analysis) and reflection mode (surface) scans are used to map defects both in and on Si wafers. The superior strain sensitivity of double- and triple-crystal configurations derives from the low divergence of x-rays presented to the sample by a high-quality reference crystal. The expression relating diffracted intensity contrast (ΔI/I) at a Bragg angle Θ to defect-induced lattice distortions is given by

ΔI/I ∝ (Δd/d) tan Θ + Δα + Δβ    (28.58)

The sample descriptives are Δd/d (the relative change in lattice spacing) and the component of local lattice rotation, Δα. It is evident that any offset (Δβ) caused by divergence in the incident probe must be small relative to these for a successful measurement to take place. With oxygen defects in silicon, for example, both Δd/d and Δα assume a range of values between ±5 × 10⁻⁶ for as-grown wafers (no precipitation annealing). This is beyond the reach of Lang topography (10⁻⁴), but is detectable with the double-crystal methods (10⁻⁶).

Strengths and Weaknesses. As with the diffraction techniques, x-ray topography is considered to be non-destructive, although prolonged exposure of wafers with integrated circuit patterns could in principle damage sensitive junctions and thin oxides. A high-power (10–18 kW) rotating anode x-ray generator is required for the double- and triple-crystal cameras to compensate for the large intensity loss from the reference diffracting crystals, which are used to reduce divergence in the incident beam. It is also of advantage for the other methods of Table 28.12, rendering acquisition times of 45–100 min feasible in most cases.
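To put these numbers in perspective, differentiating Bragg's law (Equation 28.57) at fixed wavelength gives the angular shift produced by a lattice-spacing change, which is the first term of Equation 28.58:

Δθ = −(Δd/d) tan θ

Assuming, for illustration, the Si 004 reflection with Cu Kα radiation (θ ≈ 34.6°; these values are not taken from this section), a strain of Δd/d = 5 × 10⁻⁶ gives |Δθ| ≈ 5 × 10⁻⁶ × 0.69 ≈ 3.4 × 10⁻⁶ rad, or about 0.7 arcsec. A shift this small is buried in the ~10⁻⁴ sensitivity of Lang topography but is resolvable with the low-divergence double-crystal arrangements.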

FIGURE 28.37 XRD imaging (i.e., x-ray topography) was used in a transmission mode to provide detailed images of thermal slip dislocations in a wafer (above left). The transmission mode measurement allows these slip bands to be detected at an early stage from within the bulk of the wafer. Data courtesy of Bede X-ray Metrology. XRD imaging can also be utilized in a reflection mode to image surface defects, such as the pin mark with surrounding threading dislocations (above right). The scan (at right) was obtained at a resolution of 5 μm. Data courtesy of Bede X-ray Metrology. Sample courtesy of Dr. Frans Voogt, Philips Semiconductors, the Netherlands.

28.3.4.3 X-Ray Rocking Curves and Pole Figures

Purpose. Rocking curves provide quantification of the information available in topography images. They provide a measure of surface polish quality, strain relating to implant and thermal processing, and orientation anomalies, as well as the lattice match and interface integrity of heterostructure stacks based on silicon. Large collections of rocking curves (pole figure projections) are useful for determining preferred grain orientation in silicide and thin metal films, which relates to electromigration, voiding, and metal interconnect reliability.

Method. X-ray rocking curves exploit the detailed information available in a single Bragg diffraction peak acquired from a highly crystalline solid, such as a silicon wafer or epitaxial film [297,301]. The rocking curve itself is measured by recording the intensity profile of the reflection as the sample is continuously turned through a small angle (5–10 s of arc in most cases) in and out of the diffracting condition. Rocking curves are related to x-ray topography images and can be acquired using the same double- and triple-crystal cameras. However, they are generally more quantitative than topography, in that numbers can be extracted based on the width, height, and skirts of the intensity profile. If the sample is a polycrystalline film or substrate, it is no longer suitable for rocking curve quantification. In principle, however, it is possible to acquire individual rocking curves from each grain, provided the x-ray source is focused small enough to sample one grain at a time. Such a measurement would indicate the crystal quality of each grain, as well as its orientation with respect to the incident probe.

The collection of many such orientations is the basis of a pole figure, in which grain tilts are plotted in a spherical polar projection. In practice, such small x-ray probes are not available in the analytical laboratory, although focused electron beams have been used in a similar way to map grain orientations within patterned metallizations [135]. Generally, broader beams are applied to large aggregates of grains embodied within a continuous film of polycrystalline silicon or aluminum, for example. In this case, the entire sample is rotated through small angles beneath the incident x-rays, using a specially constructed pole figure camera. As each grain comes into the diffracting condition for a given reflection, one or more points of intensity are recorded on the projection at positions commensurate with its tilt away from the surface normal. Pole figures provide an average measure of preferred grain orientation.

Strengths and Weaknesses. As with the topographies, a high-intensity rotating anode x-ray generator (10–18 kW) is preferable for rocking curve acquisitions to compensate for reference crystal intensity loss, although a weaker fixed-target source (4–5 kW) can be used when high sample throughput is not as important. Detailed analysis of the fine structure in rocking curve profiles is based on theoretical models, which rely on iterative numerical computation for convergence. However, these are able to provide unique strain profiles relating to implant, anneal, and etch processes. X-ray techniques require special attention to safety procedures, which can be managed effectively.

28.3.4.4 Raman Spectroscopy

Purpose. Raman spectroscopy is an analytical technique that utilizes the scattering of laser light from materials in order to measure the vibrational frequencies of chemical bonds. While Raman has many capabilities for characterizing solids, liquids, and, with proper instrumentation, gases, a unique ability of the technique is measuring stress or strain in materials or thin films. The magnitude and the nature of the stress/strain (tensile or compressive) can be evaluated by comparing the observed frequencies of Raman bands from strained materials with those of strain-free references (Figure 28.38). The peak shifting is approximately linearly proportional to the magnitude of the strain. Some examples of materials where strain has been measured by Raman spectroscopy include [313]

1. semiconductor heteroepitaxial layers and strained superlattices,
2. heterostructures consisting of elements with different thermal expansion coefficients, such as SOI structures, and
3. semiconductor surfaces prepared by polishing, ion implantation, ion etching, etc.


FIGURE 28.38 Raman spectrum obtained from a 0.1–0.2 μm thick Si epilayer grown on a SiGe substrate containing 30% Ge. The Raman spectrum has spectral contributions from both the epilayer and the substrate. The Si–Si epilayer phonon vibration appears as a shoulder at 510.9 cm⁻¹ on the stronger Si–Si substrate vibration at 499.9 cm⁻¹. An overlaid spectrum from a stress-free Si sample shows that the Si epilayer vibration is red shifted by ~9.5 cm⁻¹ from that of pure Si, demonstrating a tensile strain in the Si epilayer. The cause of the strain is the difference in lattice parameters between Ge and Si.

Method. The Raman effect results from the interaction of the vibrational motions of molecules with electromagnetic radiation. In Raman spectroscopy, a sample is irradiated by an intense laser beam in the UV–visible range of frequency ν0. The light scattering process involves both Rayleigh and Raman scattering. Rayleigh scattering accounts for >99.9% of the scattered light and has the same frequency as the incident beam (ν0). The significantly weaker Raman scattered light has frequency ν0 ± νm, where νm is a vibrational frequency of a molecule in the sample. The two components of Raman scattering are known as the Stokes and anti-Stokes lines, the Stokes lines being the stronger of the two. By measuring the shifts in frequency of the Stokes (and anti-Stokes) lines from the incident beam frequency, a vibrational spectrum characteristic of the molecular bonds making up the irradiated sample can be constructed [313,314].

Commercial Raman microscopes can achieve lateral resolutions of approximately 1 μm. Raman microscopes can also be utilized in confocal mode in order to limit the depth of analysis or to examine buried layers in a material, assuming that the visible light laser can penetrate to the buried layer in question. In confocal microscopy, the Raman scattering signal originating from above and below the focal plane will not reach the detector due to a pinhole aperture in front of the detector that blocks these out-of-focus light rays. Only the Raman signal from within the plane of focus (~2 μm depth resolution) will reach the detector [313].

Strengths and Weaknesses. In general, Raman spectroscopy combines all of the advantages associated with FTIR for chemical analysis with a significantly better lateral resolution. The ability to focus the visible light laser to a spot size of approximately 1 μm makes the technique ideal for mapping strain,


or other differences manifested in vibrational spectra, over surfaces. For stress/strain measurements, the technique is non-destructive and does not require sample preparation. However, not all molecular vibrations are Raman active: a vibration must cause a change in the polarizability of the molecule in order to produce a Raman signal. In addition, fluorescent materials are difficult or impossible to analyze, because the weak Raman signal can be inundated by the laser-induced fluorescence background. Since a powerful laser source is necessary in order to detect the weak Raman scattering bands, localized heating and/or photodecomposition can occur over the course of an analysis. Raman has limited depth resolution, typically about 2 μm in confocal mode; however, for some strongly absorbing materials, the depth resolution can be smaller, in the submicron to nanometer range.
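A minimal sketch of the linear shift-to-strain conversion described above is given below; the stress-free reference position and the strain-shift coefficient b are assumed literature-range values for the Si–Si mode (b is often quoted near −780 cm⁻¹ per unit strain), not calibrations from this chapter, and a real measurement would calibrate b for the specific material and scattering geometry.

    SI_REF_CM1 = 520.7   # stress-free Si Si-Si phonon position, cm^-1 (assumed)
    B_CM1 = -780.0       # strain-shift coefficient, cm^-1 per unit strain (assumed)

    def strain_from_shift(peak_cm1):
        """In-plane strain estimated from the measured Si-Si peak position;
        a red shift (lower wavenumber) maps to tensile (positive) strain."""
        return (peak_cm1 - SI_REF_CM1) / B_CM1

    eps = strain_from_shift(510.9)   # epilayer shoulder of Figure 28.38
    print(f"strain ~ {eps:+.2%}")    # ~ +1.3%, consistent with Si on 30% SiGe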

References

1. Metals Handbook, Ninth Edition: Volume 10, Materials Characterization. Metals Park, OH: American Society for Metals, 1986.
2. Encyclopedia of Materials Characterization. Boston: Butterworth-Heinemann, 1992.
3. Schroder, D. K. Semiconductor Material and Device Characterization. 3rd ed. New York: Wiley, 2006.
4. Runyan, W. R., and T. J. Shaffner. Semiconductor Measurements and Instrumentation. New York: McGraw-Hill, 1998.
5. Secondary Ion Mass Spectrometry: SIMS IX. New York: Wiley, 1994.
6. Electron Microscopy Society of America. EMSA, 1998.
7. 47th Annual Denver X-Ray Conference. Newtown Square, PA: International Centre for Diffraction Data, 1998.
8. Diebold, A., K. Shih, R. Colton, and J. Dagata, eds. Industrial Applications of Scanned Probe Microscopy. Gaithersburg, MD: NIST, 1994.
9. Seiler, D. G. International Conference on Characterization and Metrology for ULSI Technology. Woodbury, NY: American Institute of Physics, 1998.
10. Worledge, D. C. "Reduction of Positional Errors in a Four-Point Probe Resistance Measurement." Appl. Phys. Lett. 84 (2004): 1695–7.
11. Ishikawa, M., M. Yoshimura, and K. Ueda. "Development of Four-Probe Microscopy for Electric Conductivity Measurement." Jpn J. Appl. Phys. 44 (2005): 1502–3.
12. Albers, J., and H. L. Berkowitz. "An Alternative Approach to the Calculation of Four-Probe Resistances on Nonuniform Structures." J. Electrochem. Soc. 132 (1985): 2453–6; Weller, R. A. "An Algorithm for Computing Linear Four-Point Probe Thickness Correction Factors." Rev. Sci. Instrum. 72 (2001): 3580–6.
13. Perloff, D. S., J. N. Gan, and F. E. Wahl. "Dose Accuracy and Doping Uniformity of Ion Implantation Equipment." Solid State Technol. 24 (1981): 112–20; "ASTM Standard F1529-94: Standard Method for Sheet Resistance Uniformity by In-Line Four-Point Probe with the Dual-Configuration Procedure." 1996 Annual Book of ASTM Standards. West Conshohocken, PA: American Society for Testing and Materials, 1996.
14. van der Pauw, L. J. "A Method of Measuring Specific Resistivity and Hall Effect of Discs of Arbitrary Shape." Philips Res. Rep. 13 (1958): 1–9; van der Pauw, L. J. "A Method of Measuring the Resistivity and Hall Coefficient on Lamellae of Arbitrary Shape." Philips Tech. Rev. 20 (1958): 220–4.
15. Versnel, W. "Analysis of Symmetrical van der Pauw Structures with Finite Contacts." Solid-State Electron. 21 (1978): 1261–8; Chwang, R., B. J. Smith, and C. R. Crowell. "Contact Size Effects on the van der Pauw Method for Resistivity and Hall Coefficient Measurement." Solid-State Electron. 17 (1974): 1217–27.
16. Sun, Y., J. Shi, and Q. Meng. "Measurement of Sheet Resistance of Cross Microareas Using a Modified van der Pauw Method." Semicond. Sci. Technol. 11 (1996): 805–11.
17. Buehler, M. G., and W. R. Thurber. "An Experimental Study of Various Cross Sheet Resistor Test Structures." J. Electrochem. Soc. 125 (1978): 645–50.


18. Buehler, M. G., S. D. Grant, and W. R. Thurber. "Bridge and van der Pauw Sheet Resistors for Characterizing the Line Width of Conducting Layers." J. Electrochem. Soc. 125 (1978): 650–4.
19. Buehler, M. G., and C. W. Hershey. "The Split-Cross-Bridge Resistor for Measuring the Sheet Resistance, Linewidth, and Line Spacing of Conducting Layers." IEEE Trans. Elect. Dev. ED-33 (1986): 1572–9; "ASTM Standard F1261M-95: Standard Test Method for Determining the Average Electrical Width of a Straight, Thin-Film Metal Line." 1996 Annual Book of ASTM Standards. West Conshohocken, PA: American Society for Testing and Materials, 1996.
20. Cresswell, M. W., J. J. Sniegowski, R. N. Goshtagore, R. A. Allen, W. F. Guthrie, and L. W. Linholm. "Electrical Linewidth Test Structures Fabricated in Mono-Crystalline Films for Reference-Material Applications." In Proceedings of the International Conference on Microelectronic Test Structures, 16–24. Monterey, CA, 1997.
21. Chang, R., Y. Cao, and C. J. Spanos. "Modeling the Electrical Effects of Metal Dishing due to CMP for On-Chip Interconnect Optimization." IEEE Trans. Elect. Dev. 51 (2004): 1577–83.
22. Allen, R. A., M. W. Cresswell, and L. M. Buck. "A New Test Structure for the Electrical Measurement of the Width of Short Features with Arbitrarily Wide Voltage Taps." IEEE Elect. Dev. Lett. 13 (1992): 322–4.
23. Storms, G., S. Cheng, and I. Pollentier. "Electrical Linewidth Metrology for Sub-65 nm Applications." Proc. SPIE 5375 (2004): 614–28.
24. Rosencwaig, A. "Thermal-Wave Imaging." Science 218 (1982): 223–8.
25. Smith, W. L., A. Rosencwaig, and D. L. Willenborg. "Ion Implant Monitoring with Thermal Wave Technology." Appl. Phys. Lett. 47 (1985): 584–6; Smith, W. L., A. Rosencwaig, and D. L. Willenborg. "Ion Implant Monitoring with Thermal Wave Technology." Solid-State Technol. 29 (1986): 85–92.
26. Schroder, D. K. Semiconductor Material and Device Characterization. 3rd ed. New York: Wiley, 2006.
27. van Gelder, W., and E. H. Nicollian. "Silicon Impurity Distribution as Revealed by Pulsed MOS C–V Measurements." J. Electrochem. Soc. 118 (1971): 138–41.
28. Johnson, W. C., and P. T. Panousis. "The Influence of Debye Length on the C–V Measurement of Doping Profiles." IEEE Trans. Elect. Dev. ED-18 (1971): 965–73.
29. Barna, G. G., B. Van Eck, and J. W. Hosch. "In Situ Metrology." In Handbook of Silicon Semiconductor Technology, edited by A. C. Diebold. New York: Dekker, 2001.
30. Rommel, M. Semitest Inc., private correspondence.
31. Woolford, K., L. Newfield, and C. Panczyk. "Monitoring Epitaxial Resistivity Profiles without Wafer Damage." Micro, July/August 2002 (www.micromagazine.com).
32. Huang, Y., and C. C. Williams. "Capacitance–Voltage Measurement and Modeling on a Nanometer Scale by Scanning C–V Microscopy." J. Vac. Sci. Technol. B12 (1994): 369–72.
33. Neubauer, G., A. Erickson, C. C. Williams, J. J. Kopanski, M. Rodgers, and D. Adderton. "Two-Dimensional Scanning Capacitance Microscopy Measurements of Cross-Sectioned Very Large Scale Integration Test Structures." J. Vac. Sci. Technol. B14 (1996): 426–32; McMurray, J. S., J. Kim, and C. C. Williams. "Quantitative Measurement of Two-Dimensional Dopant Profile by Cross-sectional Scanning Capacitance Microscopy." J. Vac. Sci. Technol. B15 (1997): 1011–4.
34. Nakakura, C. Y., P. Tangyunyong, D. L. Hetherington, and M. R. Shaneyfelt. "Method for the Study of Semiconductor Device Operation Using Scanning Capacitance Microscopy." Rev. Sci. Instrum. 74 (2003): 127–33.
35. Williams, C. C. "Two-Dimensional Dopant Profiling by Scanning Capacitance Microscopy." Annu. Rev. Mater. Sci. 29 (1999): 471–504.
36. Vandervorst, W., P. Eyben, S. Callewaert, T. Hantschel, N. Duhayon, M. Xu, T. Trenkler, and T. Clarysse. "Towards Routine, Quantitative Two-Dimensional Carrier Profiling with Scanning Spreading Resistance Microscopy." In Characterization and Metrology for ULSI Technology, edited by D. G. Seiler, A. C. Diebold, T. J. Shaffner, R. McDonald, W. M. Bullis, P. J. Smith, and E. M. Secula, Vol. 550, 613–9. American Institute of Physics, 2000.
37. Eyben, P., N. Duhayon, D. Alvarez, and W. Vandervorst. "Assessing the Resolution Limits of Scanning Spreading Resistance Microscopy and Scanning Capacitance Microscopy." In Characterization and Metrology for VLSI Technology: 2003 International Conference, edited by D. G. Seiler, A. C. Diebold, T. J. Shaffner, R. McDonald, S. Zollner, R. P. Khosla, and E. M. Secula, Vol. 683, 678–84. American Institute of Physics, 2003.
38. Eyben, P., S. Denis, T. Clarysse, and W. Vandervorst. "Progress towards a Physical Contact Model for Scanning Spreading Resistance Microscopy." Mater. Sci. Eng. B102 (2003): 132–7.
39. "ASTM Standard F617M-95: Standard Method for Measuring MOSFET Linear Threshold Voltage." 1996 Annual Book of ASTM Standards. Conshohocken, PA: American Society for Testing and Materials, 1996.
40. Wong, H. S., M. H. White, T. J. Krutsick, and R. V. Booth. "Modeling of Transconductance Degradation and Extraction of Threshold Voltage in Thin Oxide MOSFETs." Solid-State Electron. 30 (1987): 953–68.
41. Jain, S. "Measurement of Threshold Voltage and Channel Length of Submicron MOSFETs." Proc. IEE 135, Pt. I (1988): 162–4.
42. Ghibaudo, G. "New Method for the Extraction of MOSFET Parameters." Electron. Lett. 24 (1988): 543–5.
43. Ortiz-Conde, A., F. J. Garcia Sanchez, J. J. Liou, A. Cerdeira, M. Estrada, and Y. Yue. "A Review of Recent MOSFET Threshold Voltage Extraction Methods." Microelectron. Reliab. 42 (2002): 583–96.
44. Terada, K., and K.-I. Nishiyama. "Comparison of MOSFET-Threshold-Voltage Extraction Methods." Solid-State Electron. 45 (2001): 35–40.
45. Klaassen, F. M., and W. Hes. "On the Temperature Coefficient of the MOSFET Threshold Voltage." Solid-State Electron. 29 (1986): 787–9.
46. Ng, K. K., and J. R. Brews. "Measuring the Effective Channel Length of MOSFETs." IEEE Circ. Dev. 6 (1990): 33–8; McAndrew, C. C., and P. A. Layman. "MOSFET Effective Channel Length, Threshold Voltage, and Series Resistance Determination by Robust Optimization." IEEE Trans. Elect. Dev. 39 (1992): 2298–311.
47. Terada, K., and H. Muta. "A New Method to Determine Effective MOSFET Channel Length." Jpn J. Appl. Phys. 18 (1979): 953–9; Chern, J. G. J., P. Chang, R. F. Motta, and N. Godinho. "A New Method to Determine MOSFET Channel Length." IEEE Elect. Dev. Lett. EDL-1 (1980): 170–3.
48. Laux, S. E. "Accuracy of an Effective Channel Length/External Resistance Extraction Algorithm for MOSFET's." IEEE Trans. Elect. Dev. ED-31 (1984): 1245–51.
49. De La Moneda, F. H., H. N. Kotecha, and M. Shatzkes. "Measurement of MOSFET Constants." IEEE Elect. Dev. Lett. EDL-3 (1982): 10–2.
50. Taur, Y., D. S. Zicherman, D. R. Lombardi, P. R. Restle, C. H. Hsu, H. I. Hanafi, M. R. Wordeman, B. Davari, and G. G. Shahidi. "A New 'Shift and Ratio' Method for MOSFET Channel-Length Extraction." IEEE Elect. Dev. Lett. 13 (1992): 267–9.
51. Takeda, E., C. Y. Yang, and A. Miura-Hamada. Hot Carrier Effects in MOS Devices. San Diego, CA: Academic Press, 1995; Acovic, A., G. La Rosa, and Y. C. Sun. "A Review of Hot-Carrier Degradation Mechanisms in MOSFETs." Microelectron. Reliab. 36 (1996): 845–69.
52. Chang, W. H., B. Davari, M. R. Wordeman, Y. Taur, C. C. H. Hsu, and M. D. Rodriguez. "A High-Performance 0.25 μm CMOS Technology." IEEE Trans. Elect. Dev. 39 (1992): 959–66.
53. Yue, J. T. "Reliability." In ULSI Technology, edited by C. Y. Chang and S. M. Sze. New York: McGraw-Hill, 1996.
54. Li, E., E. Rosenbaum, J. Tao, and P. Fang. "Projecting Lifetime of Deep Submicron MOSFETs." IEEE Trans. Elect. Dev. 48 (2001): 671–8; Cheng, K., and J. W. Lyding. "An Analytical Model to Project MOS Transistor Lifetime Improvement by Deuterium Passivation of Interface Traps." IEEE Elect. Dev. Lett. 24 (2003): 655–7.
55. Shin, H. C., and C. M. Hu. "Dependence of Plasma-Induced Oxide Charging Current on Al Antenna Geometry." IEEE Elect. Dev. Lett. 13 (1992): 600–2; Eriguchi, K., Y. Uraoka, H. Nakagawa, T. Tamaki, M. Kubota, and N. Nomura. "Quantitative Evaluation of Gate Oxide Damage during Plasma Processing Using Antenna Structure Capacitors." Jpn J. Appl. Phys. 33 (1994): 83–7.


56. Shideler, J., S. Reno, R. Bammi, C. Messick, A. Cowley, and W. Lukas. "A New Technique for Solving Wafer Charging Problems." Semicond. Int. 18 (1995): 153–8; Lukaszek, W. "Understanding and Controlling Wafer Charging Damage." Solid State Technol. 41 (1998): 101–12.
57. Kuhn, M., and D. J. Silversmith. "Ionic Contamination and Transport of Mobile Ions in MOS Structures." J. Electrochem. Soc. 118 (1971): 966–70; Hillen, M. W., and J. F. Verwey. "Mobile Ions in SiO2 Layers on Si." In Instabilities in Silicon Devices: Silicon Passivation and Related Instabilities, edited by G. Barbottin and A. Vapaille, 403–39. Amsterdam: Elsevier, 1971.
58. Stauffer, L., T. Wiley, T. Tiwald, R. Hance, P. Rai-Choudhury, and D. K. Schroder. "Mobile Ion Monitoring by Triangular Voltage Sweep." Solid-State Technol. 38 (1995): S3–S8.
59. Nicollian, E. H., and J. R. Brews. MOS Physics and Technology. New York: Wiley, 1982.
60. Ng, K. K. Complete Guide to Semiconductor Devices. 2nd ed., 183. New York: Wiley-InterScience, 2002.
61. Saks, N. S., and M. G. Ancona. "Determination of Interface Trap Capture Cross Sections Using Three-Level Charge Pumping." IEEE Elect. Dev. Lett. 11 (1990): 339–41; Siergiej, R. R., M. H. White, and N. S. Saks. "Theory and Measurement of Quantization Effects on Si–SiO2 Interface Trap Modeling." Solid-State Electron. 35 (1992): 843–54.
62. Yoneda, K., K. Okuma, K. Hagiwara, and Y. Todokoro. "The Reliability Evaluation of Thin Silicon Dioxide Using the Stepped Current TDDB Technique." J. Electrochem. Soc. 142 (1995): 596–600.
63. Wolters, D. R., and J. R. Verwey. "Breakdown and Wear-Out Phenomena in SiO2 Films." In Instabilities in Silicon Devices, edited by B. Barbottin and A. Vapaille, 315–62. Amsterdam: North-Holland, 1986; Wolters, D. R. "Breakdown and Wearout Phenomena in SiO2." In Insulating Films on Semiconductors, edited by M. Schulz and G. Pensl, 180–94. Berlin: Springer, 1986.
64. Wolters, D. R., and J. J. van der Schoot. "Dielectric Breakdown in MOS Devices." Philips J. Res. 40 (1985): 115–92.
65. Lang, D. V. "Deep-Level Transient Spectroscopy: A New Method to Characterize Traps in Semiconductors." J. Appl. Phys. 45 (1974): 3023–32; Lang, D. V. "Fast Capacitance Transient Apparatus: Application to ZnO and O Centers in GaP p–n Junctions." J. Appl. Phys. 45 (1974): 3014–22; ASTM Standard F 978-90; Miller, G. L., D. V. Lang, and L. C. Kimerling. "Capacitance Transient Spectroscopy." In Annual Review of Materials Science, Vol. 7, edited by R. A. Huggins, R. H. Bube, and R. W. Roberts, 377–448. Palo Alto, CA: Annual Reviews, 1974.
66. M'saad, H., J. Michel, J. J. Lappe, and L. C. Kimerling. "Electronic Passivation of Silicon Surfaces by Halogens." J. Electron. Mat. 23 (1994): 487–91.
67. Zoth, G., and W. Bergholz. "A Fast, Preparation-Free Method to Detect Iron in Silicon." J. Appl. Phys. 67 (1990): 6764–71.
68. Schroder, D. K. "The Concept of Generation and Recombination Lifetimes in Semiconductors." IEEE Trans. Elect. Dev. ED-29 (1982): 1336–8.
69. Obermeier, G., and D. Huber. "Iron Detection in Polished and Epitaxial Wafers Using Generation Lifetime Measurements." J. Appl. Phys. 81 (1997): 7345–9.
70. Lee, S. Y., and D. K. Schroder. "Measurement Time Reduction for Generation Lifetimes." IEEE Trans. Elect. Dev. 46 (1999): 1016–21.
71. Schroder, D. K. Semiconductor Material and Device Characterization. 3rd ed. New York: Wiley, 2006.
72. Schroder, D. K., M. S. Fung, R. L. Verkuil, S. Pandey, W. H. Howland, and M. Kleefstra. "Corona-Oxide-Semiconductor Generation Lifetime Characterization." Solid-State Electron. 42 (1998): 505–12.
73. Grove, A. S., and D. J. Fitzgerald. "Surface Effects on p–n Junctions: Characteristics of Surface Space-Charge Regions under Non-Equilibrium Conditions." Solid-State Electron. 9 (1966): 783–806; Fitzgerald, D. J., and A. S. Grove. "Surface Recombination in Semiconductors." Surf. Sci. 9 (1968): 347–69.
74. Fung, M. S., and R. L. Verkuil. "Contactless Measurement of Silicon Generation Leakage and Crystal Defects by a Corona-Pulsed Deep-Depletion Potential Transient Technique." Extended Abstracts, Chicago, IL: The Electrochemical Society Meet, 1988; Verkuil, R. L., and M. S. Fung. "Contactless Silicon Doping Measurements by Means of a Corona-Oxide-Semiconductor (COS) Technique." Extended Abstracts, Chicago, IL: The Electrochemical Society Meet, 1988; Fung, M. S., and R. L. Verkuil. "Process Learning by Nondestructive Lifetime Testing." In Semiconductor Silicon 1990, edited by H. R. Huff, K. G. Barraclough, and J. I. Chikawa, 924–50. Pennington, NJ: The Electrochemical Society, 1990; Verkuil, R. L., and M. S. Fung. "A Contactless Alternative to MOS Charge Measurements by Means of a Corona-Oxide-Semiconductor (COS) Technique." Extended Abstracts, Chicago, IL: The Electrochemical Society Meet, 1988.
75. Kelvin, L. "On a Method of Measuring Contact Electricity." Nature (1881); Kelvin, L. "Contact Electricity of Metals." Philos. Mag. 46 (1898): 82–121.
76. Kronik, L., and Y. Shapira. "Surface Photovoltage Phenomena: Theory, Experiment, and Applications." Surf. Sci. Rep. 37 (1999): 1–206.
77. Williams, R., and M. H. Woods. "High Electric Fields in Silicon Dioxide Produced by Corona Charging." J. Appl. Phys. 44 (1973): 1026–8; Weinberg, Z. A. "Tunneling of Electrons from Si into Thermally Grown SiO2." Solid-State Electron. 20 (1977): 11–18; Woods, M. H., and R. Williams. "Injection and Removal of Ionic Charge at Room Temperature through the Interface of Air with SiO2." J. Appl. Phys. 44 (1973): 5506–10.
78. Schroder, D. K. "Surface Voltage and Surface Photovoltage: History, Theory and Applications." Meas. Sci. Technol. 12 (2001): R16–R31; Schroder, D. K. "Contactless Surface Charge Semiconductor Characterization." Mater. Sci. Eng. B91–92 (2002): 196–210.
79. Weinzierl, S. R., and T. G. Miller. "Non-Contact Corona-Based Process Control Measurements: Where We've Been and Where We're Headed." In Analytical and Diagnostic Techniques for Semiconductor Materials, Devices, and Processes, edited by B. O. Kolbesen, C. Claeys, P. Stallhofer, F. Tardif, J. Benton, T. Shaffner, D. Schroder, P. Kishino, and P. Rai-Choudhury, 342–50. Pennington, NJ: The Electrochemical Society, 1999 (ECS 99-16).
80. Roy, P. K., C. Chacon, Y. Ma, I. C. Kizilyalli, G. S. Horner, R. L. Verkuil, and T. G. Miller. "Non-Contact Characterization of Ultrathin Dielectrics for the Gigabit Era." In Diagnostic Techniques for Semiconductor Materials and Devices, edited by P. Rai-Choudhury, J. L. Benton, D. K. Schroder, and T. J. Shaffner, 280–94. Pennington, NJ: The Electrochemical Society, 1997 (PV97-12).
81. Miller, T. G. "A New Approach for Measuring Oxide Thickness." Semicond. Int. 18 (1995): 147–8.
82. Lo, S. H., D. A. Buchanan, and Y. Taur. "Modeling and Characterization of Quantization, Polysilicon Depletion, and Direct Tunneling Effects in MOSFETs with Ultrathin Oxides." IBM J. Res. Dev. 43 (1999): 327–37.
83. Weinberg, Z. A., W. C. Johnson, and M. A. Lampert. "High-Field Transport in SiO2 on Silicon Induced by Corona Charging of the Unmetallized Surface." J. Appl. Phys. 47 (1976): 248–55.
84. Bonnell, D. A. Scanning Probe Microscopy and Spectroscopy. 2nd ed. New York: Wiley-VCH, 2001.
85. Shaffner, T. J. "Characterization Challenges for the ULSI Era." In Diagnostic Techniques for Semiconductor Materials and Devices, edited by P. Rai-Choudhury, J. L. Benton, D. K. Schroder, and T. J. Shaffner, 1–15. Pennington, NJ: The Electrochemical Society, 1997.
86. Hamers, R. J., and D. F. Padowitz. "Methods of Tunneling Spectroscopy with the STM." In Scanning Probe Microscopy and Spectroscopy, 2nd ed., edited by D. Bonnell. New York: Wiley-VCH, 2001 (chap. 4).
87. Smith, R. L., and G. S. Rohrer. "The Preparation of Tip and Sample Surfaces for Scanning Probe Experiments." In Scanning Probe Microscopy and Spectroscopy, 2nd ed., edited by D.
Bonnell, New York: Wiley-VCH, 2001 (chap. 6). Meyer, E., H. J. Hug, and R. Bennewitz. Scanning Probe Microscopy. Berlin: Springer, 2004. Simmons, J. “Generalized Formula for the Electric Tunnel Effect between Similar Electrodes Separated by a Thin Insulating Film.” J. Appl. Phys. 34 (1963): 1793–803. Binnig, G., C. F. Quate, and C. H. Gerber. “Atomic Force Microscope.” Phys. Rev. Lett. 56 (1986): 930–3. Quate, C. F. “The AFM as a Tool for Surface Imaging.” Surf. Sci. 299–300 (1994): 980–95. Sarid, D. Scanning Force Microscopy with Applications to Electric, Magnetic, and Atomic Forces. Revised Edition New York: Oxford University Press, 1994. Meyer, G., and N. M. Amer. “Novel Optical Approach to Atomic Force Microscopy.” Appl. Phys. Lett. 53 (1988): 045–1047.

DK4126—Chapter28—23/5/2007—16:28—ANBARASAN—240457—XML MODEL CRC12a – pp. 1–76.

Electrical, Physical, and Chemical Characterization

28-67

94. Zhong, Q., D. Inniss, K. Kjoller, and V. B. Elings. “Fractured Polymer/Silica Fiber Surface Studied by Tapping Mode Atomic Force Microscopy.” Surf. Sci. Lett. 290 (1993): L668–L92. 95. Nonnenmacher, M., M. P. Boyle, and H. K. Wickramasinghe. “Kelvin Probe Microscopy.” Appl. Phys. Lett. 58 (1991): 2921–3. 96. Weaver, J. M. R., and H. K. Wickramasinghe. “Semiconductor Characterization by Scanning Force Microscope Surface Photovoltage Microscopy.” J. Vac. Sci. Technol. B9 (1991): 1562–5. 97. Bonnell, D. A., and S. Kalinin. “Local Potential at Atomically Abrupt Oxide Grain Boundaries by Scanning Probe Microscopy.” In Proceedings of the International Meeting on Polycrystalline Semiconductors, edited by O. Bonnaud, T. Mohammed-Brahim, H. P. Strunk, and J. H. Werner, Solid State Phenomena, 33–47. Switzerland: Scitech Publ. Uettikon am See, 2001. 98. Goldstein, J. I., D. E. Newbury, P. Echlin, D. C. Joy, C. Fiori, and E. Lifshin. Scanning Electron Microscopy and X-Ray Microanalysis. New York: Plenum Press, 1981. 99. Verhoeven, J. D. “Scanning Electron Microscopy.” In Metals Handbook, Ninth Edition: Volume 10 Materials Characterization, edited by R. E. Whan, K. Mills, J. R. Davis, J. D. Destefani, D. A. Dieterich, G. M. Crankovic, H. J. Frissell, D. M. Jenkins, W. H. Cubberly, and R. L. Stedfeld, 490–515. Metals Park, OH: American Society for Metals, 1986. 100. Wells, O. C., A. Boyde, E. Lifshin, and A. Rezanowich. Scanning Electron Microscopy. New York: McGraw-Hill, 1974. 101. Murr, L. E. Scanning Electron Microscopy. New York: Marcel Dekker, Inc., 1982. 102. Goldstein, J. I., and H. Yakowitz. Practical Scanning Electron Microscopy. New York: Plenum Press, 1975. 103. Joy, D. C., A. D. Romig Jr.., and J. I. Goldstein. Principles of Analytical Electron Microscopy. New York: Plenum Press, 1986. 104. Belcher, R. W., G. P. Hart, and W. R. Wade. “Preparation of Semiconductor Devices for Scanning Electron Microscopy and Quantification Using Etchback Methods.” Scanning Electron Microsc. II (1984): 613–24. 105. Koellen, D. S., D. I. Saxon, and K. E. Wendel. “Cross-Sectional Analysis of Silicon Metal Oxide Semiconductor Devices Using the Scanning Electron Microscope.” Scanning Electron Microsc. I (1985): 43–53. 106. Mills, T. “Precision VLSI Cross-Sectioning and Staining.” Proc. IEEE: Rel. Phys. (1983): 324–31. 107. Angelides, P. G. “Precision Cross Sectional Analysis of LSI and VLSI Devices.” Proc. IEEE: Rel. Phys. (1981): 134–8. 108. Hammond, B. R., and T. R. Vogel. “Non-Encapsulated Microsectioning as a Construction and Failure Analysis Technique.” Proc. IEEE: Rel. Phys. (1982): 221–3. 109. Gill, M., and E. Woster. “The In-Fab SEM/EDX Integration Challenge.” Semicond. Int. 16 (1993): 78–82. 110. Shaffner, T. J., and J. W. S. Hearle. “Recent Advances in Understanding Specimen Charging.” Scanning Electron Microsc. I (1976): 61–82. 111. Kuroda, K., S. Hosoki, and T. Komoda. “Observation of Tungsten Field Emitter Tips with an Ultra-High Resoltuion Field Emission Scanning Electron Microscope.” Scanning Microsc. 1 (1987): 911–7. 112. Kanaya, K., and S. Okayama. “Penetration and Energy-Loss Theory of Electrons in Solid Targets.” J. Phys. D: Appl. Phys. 5 (1972): 43–58. 113. Everhart, T. E., and R. F. M. Thornley. “Wide-Band Detector for Micro-Microampere Low-Energy Electron Currents.” J. Sci. Instrum. 37 (1960): 246–8. 114. McKinley, T. D., K. F. J. Heinrich, and D. B. Wittry. The Electron Microprobe. New York: Wiley, 1966. 115. Castaing, R. 
“Electron Probe Microanalysis.” In Advances in Electronics and Electron Physics, edited by L. Marton, and C. Marton, 317–86. New York: Academic Press, 1960. 116. Reed, S. J. B. Electron Microprobe Analysis. New York: Cambridge University Press, 1993. 117. Fitzgerald, R., K. Keil, and K. F. J. Heinrich. “Solid-State Energy-Dispersion Spectrometer for Electron-Microprobe X-Ray Analysis.” Science 159 (1968): 528–30.

DK4126—Chapter28—23/5/2007—16:28—ANBARASAN—240457—XML MODEL CRC12a – pp. 1–76.

28-68

Handbook of Semiconductor Manufacturing Technology

118. Titchmarsh, J. M. “Energy Dispersive X-Ray Analysis (EDX) in the TEM/STEM.” In Quantitative Microbeam Analysis, edited by A. G. Fitzgerald, B. E. Storey, and D. Fabian, 275–301. Bristol, UK: Institute of Physics, 1995. 119. Powell, C. J., and M. P. Seah. “Precision, Accuracy, and Uncertainty in Quantitative Surface Analyses by Auger-Electron Spectroscopy and X-Ray Photoelectron Spectroscopy.” J. Vac. Sci. Technol. A 8 (1990): 735–63. 120. Seah, M. P., and G. C. Smith. “Quantitative AES and XPS: Calibration of Electron Spectrometers for True Spectral Measurements—VAMAS Round Robins and Parameters for Reference Spectral Data Banks.” Vacuum 41 (1990): 1601–4. 121. Mroczkowski, S., and D. Lichtman. “Calculated Auger Yields and Sensitivity Factors for KLL-NOO Transitions with 1–10 kV Primary Beams.” J. Vac. Sci. Technol. A 3 (1985): 1860–5. 122. Shaffner, T. J. “Surface Characterization for VLSI.” In Materials and Process Characterization, edited by N. G. Einspruch, and G. B. Larrabee, 497–527. New York: Academic Press, 1983. 123. Aton, T. J., K. A. Joyner, C. H. Blanton, A. T. Appel, M. G. Harward, M. H. Bennett-Lilley, and S. S. Mahant-Shetti. “Using Scanning Electron Beams for Testing Microstructure Isolation and Continuity.” Proc. IEEE: Rel. Phys. (1991): 239–44. 124. Menzel, E., and R. Buchanan. “Electron Beam Probing of Integrated Circuits.” Solid-State Technol. 28 (1985): 63–70. 125. Todokoro, H., and S. Yoneda. “Electron Beam Tester with 10 ps Time Resolution.” Proc. Int. Test Conf. (1986): 600–6. 126. Gonzales, A. J. “On the Electron Beam Induced Current Analysis of Semiconductor Devices.” Scanning Electron Microsc. IV (1974): 941–8. 127. Bresse, J. F. “Electron Beam Induced Current in Silicon Planar p–n Junctions: Physical Model of Carrier Generation, Determination of Some Physical Parameters in Silicon.” Scanning Electron Microsc. I (1972): 105–12. 128. Holt, D. B., and F. M. Saba. “The Cathodoluminescence Mode of the Scanning Electron Microscope: A Powerful Microcharacterization Technique.” Scanning Electron Microsc. III (1985): 1023–45. 129. Yacobi, B. G., and D. B. Holt. “Cathodoluminescence Scanning Electron Microscopy of Semiconductors.” J. Appl. Phys. 59 (1986): R1–R24. 130. Roedel, R. J., S. Myhajlenko, J. L. Edwards, and K. Rowley. “Cathodoluminescence Characterization of Semiconductor Materials.” Proc. Electrochem. Soc. 88–20 (1988): 185–96. 131. Pfefferkorn, G., W. Brocker, and M. Hastenrath. “The Cathodoluminescence Method in the Scanning Electron Microscope.” Scanning Electron Microsc. I (1980): 251–8. 132. Holt, D. B., and S. Datta. “The Cathodoluminescent Mode as an Analytical Technique: Its Development and Prospects.” Scanning Electron Microsc. I (1980): 259–78. 133. Heard, P. “Cathodoluminescence—Interesting Phenomenon or Useful Technique?” Microsc. Anal. (1996): 25–7. 134. Link Analytical, AN10000 X-Ray Microanalysis System Bucks, U.K. 1994. 135. Dingley, D. J., and K. Baba-Kishi. “Use of Electron Back Scatter Diffraction Patterns for Determination of Crystal Symmetry Elements.” Scanning Electron Microsc. II (1986): 383–91. 136. Moore, T. M., S. Matteson, W. M. Duncan, and R. J. Matyi. “Microstructural Characterization of GaAs Substrates.” Mat. Res. Soc. Symp. Proc. 69 (1986): 379–84. 137. Amptek, I. X-Ray detector: XR-100T Bedford, MA. 1995. 138. Silver, E., M. LeGros, N. Madden, J. Beeman, and E. Haller. “High-Resolution, Broad-Band Microcalorimeters for X-Ray Microanalysis.” X-Ray Spectrom. 25 (1996): 115–22. 139. Wollman, D. A., K. D. Irwin, G. C. Hilton, L. 
L. Dulcie, D. E. Newbury, and J. M. Martinis. “HighResolution, Energy-Dispersive Microcalorimeter Spectrometer for X-Ray Microanalysis.” J. Microsc. 188 (1997): 196–223. 140. Silver, E., and M. LeGros. “The Application of a High Resolution, Broad Band Microcalorimeter to the SEM-Based Microanalysis Problem.” X-Ray Spectrom. (1995): 1–13. 141. Robinson, M. “A Microcalorimeter for High Resolution, Broad Band X-Ray Microanalysis 94-01. ” 1–24. Livermore, CA: Lawrence Livermore National Laboratory, 1994.

DK4126—Chapter28—23/5/2007—16:28—ANBARASAN—240457—XML MODEL CRC12a – pp. 1–76.

Electrical, Physical, and Chemical Characterization

28-69

142. McCammon, D., W. Cui, M. Juda, J. Morgenthaler, J. Zhang, R. L. Kelley, S. S. Holt, G. M. Madejski, S. H. Moseley, and A. E. Szymkowiak. “Thermal Calorimeters for High Resolution X-Ray Spectroscopy.” Nucl. Instrum. Methods A 326 (1993): 157–65. 143. LeGros, M., E. Silver, D. Schneider, J. McDonald, S. Bardin, R. Schuch, N. Madden, and J. Beeman. “The First High Resolution, Broad Band X-Ray Spectroscopy of Ion-Surface Interactions Using a Microcalorimeter.” Nucl. Instrum. Methods A 357 (1996): 110–4. 144. Lesyna, L., D. D. Marzio, S. Gottesman, and M. Kesselman. “Advanced X-Ray Detectors for the Analysis of Materials.” J. Low Temp. Phys. 93 (1993): 779–84. 145. Williams, D. B., and C. B. Carter. Transmission Electron Microscopy. New York: Plenum Press, 1996. 146. Reimer, L. Transmission Electron Microscopy. New York: Springer, 1989. 147. Williams, D. B. Practical Analytical Electron Microscopy in Materials Science. Mahwah, NJ: Philips Electronic Instruments, Inc., 1984. 148. Marcus, R. B., and T. T. Sheng. Transmission Electron Microscopy of Silicon VLSI Circuits and Structures. New York: Wiley, 1983. 149. Murr, L. E. Electron and Ion Microscopy and Microanalysis. New York: Marcel Dekker, Inc., 1982. 150. Buseck, P., J. Cowley, and L. Eyring, eds. High-Resolution Transmission Electron Microscopy and Associated Techniques. Oxford, U.K.: Oxford University Press, 1988. 151. Cullis, A. G., ed. Microscopy of Semiconducting Materials 1991. Bristol, U.K.: IOP Publishing, Ltd, 1991. 152. Materials Problem Solving with the Transmission Electron Microscope. Pittsburgh, PA: Materials Research Society, 1986. 153. High-Resolution Transmission Electron Microscopy. Oxford, U.K.: Oxford University Press, 1988. 154. Egerton, R. F. Electron Energy Loss Spectroscopy in the Electron Microscope. New York: Plenum Press, 1986. 155. Hall, C. E. Introduction to Electron Microscopy. New York: McGraw Hill, 1966. 156. Specimen Preparation for Transmission Electron Microscopy of Materials IV. Warrendale, PA: Materials Research Society, 1997. 157. Specimen Preparation for Transmission Electron Microscopy of Materials. Pittsburgh, PA: Materials Research Society, 1988. 158. Specimen Preparation for Transmission Electron Microscopy of Materials II. Pittsburgh, PA: Materials Research Society, 1990. 159. Giannuzzi, L. A., J. L. Drown, S. R. Brown, R. B. Irwin, and F. A. Stevie. “Focused Ion Beam Milling and Micromanipulation Lift-Out for Site Specific Cross-Section TEM Specimen Preparation.” In Specimen Preparation for Transmission Electron Microscopy of Materials IV, edited by R. M. Anderson, and S. D. Walck, 19–27. Warrendale, PA: Materials Research Society, 1997. 160. Su, DH-I., H. T. Shishido, F. Tsai, L. Liang, and F. C. Mercado. “A Detailed Procedure for Reliable Preparation of TEM Samples Using FIB Milling.” In Specimen Preparation for Transmission Electron Microscopy of Materials IV, edited by R. M. Anderson, and S. D. Walck, 105–16. Warrendale, PA: Materials Research Society, 1997. 161. Shaapur, F., T. Stark, T. Woodward, and R. J. Graham. “Evaluation of a New Strategy for Transverse TEM Specimen Preparation by Focused-Ion-Beam Thinning.” In Specimen Preparation for Transmission Electron Microscopy of Materials IV, edited by R. M. Anderson, and S. D. Walck, 173–80. Warrendale, PA: Materials Research Society, 1997. 162. Tsujimoto, K., S. Tsuji, H. Takatsuji, K. Kuroda, H. Saka, and N. Miura. 
“Cross-Sectional TEM Sample Preparation Method Using FIB Etching for Thin-Film Transistor.” In Specimen Preparation for Transmission Electron Microscopy of Materials IV, edited by R. M. Anderson, and S. D. Walck, 207–15. Warrendale, PA: Materials Research Society, 1997. 163. Pramanik, D., and J. Glanville. “Aluminum Film Analysis with the Focused Ion Beam Microscope.” Solid-State Technol. 55 (1990): 77–80. 164. Gamo, K., N. Takakura, N. Samoto, R. Shimizu, and S. Namba. “Ion Beam Assisted Deposition of Metal Organic Films Using Focused Ion Beams.” Jpn J. Appl. Phys. 23 (1984): L293–L5. 165. Shedd, G. M., H. Lezec, A. D. Dubner, and J. Melngailis. “Focused Ion Beam Induced Deposition of Gold.” Appl. Phys. Lett. 49 (1986): 1584–6.

DK4126—Chapter28—23/5/2007—16:28—ANBARASAN—240457—XML MODEL CRC12a – pp. 1–76.

28-70

Handbook of Semiconductor Manufacturing Technology

166. Mashiko, Y., H. Morimoto, H. Koyama, S. Kawazu, T. Kaito, and T. Adachi. “A New VLSI Diagnosis Technique: Focused Ion Beam Assisted Multi-Level Circuit Probing.” Proc. IEEE: Rel. Phys. (1987): 111–7. 167. Matusiewicz, G. R., S. J. Kirch, V. J. Seeley, and P. G. Blauner. “The Role of Focused Ion Beams in Physical Failure Analysis.” Proc. IEEE: Rel. Phys. (1991): 167–70. 168. Komano, H., Y. Ogawa, and T. Takigawa. “Silicon Oxide Film Formation by Focused Ion Beam (FIB)-Assisted Deposition.” Jpn J. Appl. Phys. 28 (1989): 2372–5. 169. Binnig, G., and H. Rohrer. “The Scanning Tunneling Microscope.” Sci. Am. 253 (1985): 50–6. 170. Binnig, G., and H. Rohrer. “Scanning Tunneling Microscopy—From Birth to Adolescence.” Rev. Mod. Phys. 59 (1987): 615–25. 171. Binnig, G., H. Rohrer, C. Gerber, and E. Weibel. “Surface Studies by Scanning Tunneling Microscopy.” Phys. Rev. Lett. 49 (1982): 57–61. 172. Wickramasinghe, H. K. “Scanned-Probe Microscopes.” Sci. Am. October (1989): 98–105. 173. Chen, C. J. Introduction to Scanning Tunneling Microscopy. Oxford, U.K.: Oxford University Press, 1993. 174. Marrian, C. R. K., ed. Technology of Proximal Probe Lithography. Bellingham, WA: SPIE Optical Engineering Press, 1993. 175. STM ’90, The Fifth International Conference on Scanning Tunneling Microscopy/Spectroscopy and NANO I, The First International Conference on Nanometer Scale Science and Technology. American Vacuum Society, 1990. 176. Bell, L. D., and W. J. Kaiser. “Imaging Subsurface Interfaces by Ballistic-Electron-Emission Microscopy.” In Diagnostic Techniques for Semiconductor Materials and Devices, edited by T. J. Shaffner, and D. K. Schroder, 97–108. Pennington, NJ: The Electrochemical Society, 1988. 177. Baum, R. “Chemical Force Microscopy: Method Maps Functional Groups on Surfaces.” Chem. Eng. News 73 (1994): 6. 178. Houston, J. E., and T. A. Michalske. “The Interfacial-Force Microscope.” Nature 356 (1992): 266–7. 179. Babcock, K. “Magnetic Force Microscopy.” Photon. Spectra (1994). 180. Sidles, J. A., J. L. Garbini, K. J. Bruland, D. Rugar, O. Zuger, S. Hoen, and C. S. Yannoni. “Magnetic Resonance Force Microscopy.” Rev. Mod. Phys. 67 (1995): 249–65. 181. Wago, K., O. Zuger, R. Kendrick, C. S. Yannoni, and D. Rugar. “Low-Temperature Magnetic Resonance Force Detection.” J. Vac. Sci. Technol. B 14 (1996): 1197–201. 182. Electric field Measurements with the MultiModee AFM Santa Barbara, CA: Digital Instruments Inc., 1996. 183. Rugar, D., O. Zuger, S. Hoen, C. S. Yannoni, H.-M. Vieth, and R. D. Kendrick. “Force Detection of Nuclear Magnetic Resonance.” Science 264 (1994): 1560–3. 184. Thompson, C. A., R. W. Cross, and A. B. Kos. “Micromagnetic Scanning Microprobe System.” Rev. Sci. Instrum. 65 (1994): 383–9. 185. Vandervorst, W., T. Clarysse, P. De Wolf, L. Hellemans, J. Snauwaert, V. Privitera, and V. Raineri. “On the Determination of Two-Dimensional Carrier Distributions.” Nucl. Instrum. Methods B 96 (1995): 123–32. 186. Clarysse, T., P. De Wolf, H. Bender, and W. Vandervorst. “Recent Insights into the Physical Modeling of the Spreading Resistance Point Contact.” J. Vac. Sci. Technol. B 14 (1996): 358–68. 187. Shafai, C., D. J. Thomson, M. Simard-Normandin, G. Mattiussi, and P. J. Scanlon. “Delineation of Semiconductor Doping by Scanning Resistance Microscopy.” Appl. Phys. Lett. 64 (1994): 342–4. 188. Paesler, M. A., and P. J. Moyer. Near-Field Optics Theory, Instrumentation, and Applications. New York: Wiley, 1996. 189. Betzig, E., and J. K. Trautman. 
“Near-Field Optics: Microscopy, Spectroscopy, and Surface Modification beyond the Diffraction Limit.” Science 257 (1992): 189–96. 190. Duncan, W. M. “Near-Field Scanning Optical Microscope for Microelectronic Materials and Devices.” J. Vac. Sci. Technol. A 14 (1996): 1914–8. 191. Kopanski, J. J., J. F. Marchiando, and J. R. Lowney. “Scanning Capacitance Microscopy Measurements and Modeling: Progress towards Dopant Profiling of Silicon.” J. Vac. Sci. Technol. B 14 (1996): 242–7.

DK4126—Chapter28—23/5/2007—16:28—ANBARASAN—240457—XML MODEL CRC12a – pp. 1–76.

Electrical, Physical, and Chemical Characterization

28-71

192. Erickson, A., L. Sadwick, G. Neubauer, J. Kopanski, D. Adderton, and M. Rogers. “Quantitative Scanning Capacitance Microscopy Analysis of Two-Dimensional Dopant Concentrations at Nanoscale Dimensions.” J. Electron. Mat. 25 (1996): 301–4. 193. Neubauer, G., A. Erickson, C. C. Williams, J. J. Kopanski, M. Rodgers, and D. Adderton. “TwoDimensional Scanning Capacitance Microscopy Measurements of Cross-Sectioned Very Large Scale Integration Test Structures.” J. Vac. Sci. Technol. B 14 (1996): 426–32. 194. Krieger, J. “Use of Scanning Probe Microscopy Expanding.” Chem. Eng. News (1993): 30–1. 195. Liu, H., F. F. Fan, W. Lin, and A. J. Bard. “Scanning Electrochemical and Tunneling Ultramicroelectrode Microscope for High-Resolution Examination of Electrode Surfaces in Solution.” J. Am. Chem. Soc. 108 (1986): 3838–9. 196. Hochwitz, T., A. K. Henning, C. Levey, C. Daghlian, J. Slinkman, J. Never, P. Kaszuba, R. Gluck, R. Wells, and J. Pekarik. “Imaging Integrated Circuit Dopant Profiles with the Force-Based Scanning Kelvin Probe Microscope.” J. Vac. Sci. Technol. B 14 (1996): 440–6. 197. Wessels, B. W., and L. Q. Qian. “Scanning Tunneling Optical Spectroscopy of Semiconductor Thin Films and Quantum Wells.” J. Vac. Sci. Technol. B 10 (1992): 1803–6. 198. Shluger, A., C. Pisani, C. Roetti, and R. Orlando. “Ab Initio Simulation of the Interaction between Ionic Crystal Surfaces and the Atomic Force Microscope Tip.” J. Vac. Sci. Technol. A 8 (1990): 3967–72. 199. Ivanov, G. K., M. A. Kozhushner, and I. I. Oleinik. “Direct and Inverse Problems in the Theory of Scanning Tunneling Microscopy.” Surf. Sci. 331–333 (1995): 1191–6. 200. Chen, C. J. “In-Situ Characterization of Tip Electronic Structure in Scanning Tunneling Microscopy.” Ultramicroscopy 42–44 (1992): 147–53. 201. Keller, D. “Reconstruction of STM and AFM Images Distorted by Finite-Size Tips.” Surf. Sci. 253 (1991): 353–64. 202. Griffith, J. E., D. A. Grigg, G. P. Kochanski, M. J. Vasile, and P. E. Russell. “Metrology with Scanning Probe Microscopes.” In Technology of Proximal Probe Lithography, edited by C. R. K. Marian, 364–89. Bellingham, WA: SPIE Optical Engineering Press, 1993. 203. Benninghoven, A., J. L. Hunter, Jr., B. W. Schueler, H. H. Smith, H. W. Werner, eds. “Secondary Ion Mass Spectrometry: SIMS XIV, (San Diego).” Applied Surface Science, Vol. 231–232, 475–478. North-Holland: Elsevier, 2004. 204. Benninghoven, A., Y. Nihei, M. Kudo, Y. Homma, H. Yurimoto, and H. W. Werner, eds. “Secondary Ion Mass Spectrometry: SIMS XIII, (Nara).” Applied Surface Science, Vol. 203–204. North-Holland: Elsevier, 2003. 205. Benninghoven, A., P. Bertrand, H.-N.Migeon, and H. W. Werner, eds. Secondary Ion Mass Spectrometry: SIMS XII, (Brussels). Amsterdam: Elsevier, 2000. 206. Gillen, G., R. Lareau, J. Bennett, and F. Stevie, eds. Secondary Ion Mass Spectrometry: SIMS XI, (Orlando). New York: Wiley, 1998. 207. Wilson, R. G., F. A. Stevie, and C. A. Magee. Secondary Ion Mass Spectrometry: A Practical Handbook for Depth Profiling and Bulk Impurity Analysis. New York: Wiley, 1989. 208. Schumacher, M., H. N. Migeon, and B. Rasser. Proc. SIMS VIII Conf. Amsterdam (1991): 49. 209. Janssens, T., and W. Vandervorst. Proc. SIMS XII Conf. Brussels (1999): 151. 210. Maul, J., F. Schultz, and K. Wittmaack. Rad. Effects 18 (1973): 211. 211. McHugh, J. A. Rad. Effects 21 (1974): 209. 212. Williams, P. Proc. SIMS XI, Orlando (1997): 3. 213. Mayer, J. W., B. Y. Tsaur, S. S. Lau, and L-S. Hung. Nucl. Instrum. Methods B182/183 (1981): 1. 214. Littmark, U., and W. O. 
Hofer. Nucl. Instrum. Methods 168 (1981): 329. 215. Wittmaack, K. Vacuum 34 (1984): 119. 216. SRIM-2000, J. Ziegler. 217. Vickerman, J. C., and D. Briggs. ToF SIMS: Surface Analysis by Mass Spectrometry. Manchester, NH: IM Publications and Surface Spectra Ltd, 2001. 218. Vandervorst, W., et al. “Errors in Near-Surface and Interfacial Profiling of Boron and Arsenic.” Appl. Surf. Sci. 231–232 (2004): 618. 219. Jiang, Z., et al. Appl. Phys. Lett. 73 (1998): 315.

DK4126—Chapter28—23/5/2007—16:28—ANBARASAN—240457—XML MODEL CRC12a – pp. 1–76.

28-72

Handbook of Semiconductor Manufacturing Technology

220. Wittmaack, et al. J. Vac. Sci. Technol. B16 (1998): 272. 221. Several articles on sputter rate transients in: J. Vac. Sci. Tech. B18 (2000). Schueler, B., and D. F. Reich. J. Vac. Sci. Tech. B18 (2000): 496; Cooke, G. A., T. J. Ormsby, M. G. Dowsett, C. Parry, A. Murell, and E. J. H. Collard. J. Vac. Sci. Tech. B18 (2000): 493 Wittmack, K., J. Griesche, H. J. Osten, and S. B. Patel. J. Vac. Sci. Tech. B18 (2000): 524; Ronsheim, P. A., and J. J. Murphy. J. Vac. Sci. Tech. B18 (2000): 501. 222. Williams, P., and J. E. Baker. Nucl. Instrum. Methods 182/183 (1981): 15. 223. Boudewijn, P. R., H. W. P. Akerboom, and N. M. C. Kempeners. Spectrochim. Acta 39B (1984): 1567. 224. Hues, S. M., and P. Williams. Nucl. Instrum. Methods B15 (1986): 206. 225. Bullis, W. M. “Oxygen Concentration Measurement.” In Oxygen in Silicon, edited by F. Shimura, 94. Boston, MA: Academic Press, 1994. 226. Shaffner, T. J., and D. K. Schroder. “Characterization Techniques for Oxygen in Silicon.” In Oxygen in Silicon, edited by F. Shimura, 53–93. New York: Academic Press, 1994. 227. Pajot, B. “Characterization of Oxygen in Silicon by Infrared Absorption.” Analysis 5 (1977): 32–42. 228. Bullis, W. M., S. Perkowitz, and D. G. Seiler. Survey of Optical Characterization Methods for Materials, Processing, and Manufacturing in the Semiconductor Industry. Washington, DC: National Institute of Standards and Technology, 1995 (NIST 400-98). 229. Perkowitz, S., D. G. Seiler, and W. M. Duncan. “Optical Characterization in Microelectronics Manufacturing.” J. Res. NIST 99 (1994): 605–39. 230. Baber, S. C. “Net and Total Shallow Impurity Analysis of Silicon by Low Temperature Fourier Transform Infrared Spectroscopy.” Thin Solid Films 72 (1980): 201–10. 231. Wagner, P. “Infrared Absorption of Interstitial Oxygen in Silicon at Low Temperatures.” Appl. Phys. A 53 (1991): 20–5. 232. Moore, C. J. L., and C. J. Miner. “A Spatially Resolved Spectrally Resolved Photoluminescence Mapping System.” J. Cryst. Growth 103 (1990): 21–7. 233. Tajima, M., T. Masui, D. Itoh, and T. Nishino. “Calibration of the Photoluminescence Method for Determining As and Al Concentrations in Si.” J. Electrochem. Soc. 137 (1990): 3544–51. 234. Duncan, W. M., and M. L. Eastwood. “Fourier Transform Photoluminescence Analysis of Semiconductor Materials.” Proc. SPIE 822 (1987): 172–80. 235. Thewalt, M. L. W., A. G. Steele, and J. E. Huffman. “Photoluminescence Studies of UltrahighPurity Epitaxial Silicon.” Appl. Phys. Lett. 49 (1986): 1444–6. 236. Tajima, M. “Determination of Boron and Phosphorus Concentration in Silicon by Photoluminescence Analysis.” Appl. Phys. Lett. 32 (1978): 719–21. 237. Wolfe, J. P., and A. Mysyrowicz. “Excitonic Matter.” Sci. Am. 250 (1984): 98–107. 238. Smith, K. K. “Photoluminescence of Semiconductor Materials.” Thin Solid Films 84 (1981): 171–82. 239. Murray, R., K. Graff, B. Pajot, K. Strijckmans, S. Vandendriessche, B. Griepink, and H. Marchandise. “Interlaboratory Determination of Oxygen in Silicon for Certified Reference Materials.” J. Electrochem. Soc. 139 (1992): 3582–6. 240. Baghdadi, A., W. M. Bullis, M. C. Croarkin, Y. Li, R. I. Scace, R. W. Series, P. Stallhofer, and M. Watanabe. “Interlaboratory Determination of the Calibration Factor for the Measurement of the Interstitial Oxygen Content of Silicon by Infrared Absorption.” J. Electrochem. Soc. 136 (1989): 2015–24. 241. “Standard Test Methods for Oxygen Precipitation Characterization of Silicon Wafers by Measurement of Interstitial Oxygen Reduction.” ASTM F1239, 210–1. 
American Society for Testing and Materials, 1989. 242. Vandendriessche, S., B. Griepink, H. Marchandise, B. Pajot, R. Murray, K. Graff, and K. Strijckmans. The Certification of a Reference Material for the Determination of Oxygen in Semiconductor Silicon by Infrared Spectrometry. CRM 369 CA: 115(10)104911g, American Chemical Society, 1992, p. 339. 243. Gladden, W. K., S. R. Slaughter, W. M. Duncan, and A. Baghdadi. Automatic Determination of the Interstitial Oxygen Content of Silicon Wafers Polished on Both Sides. Washington, DC: National Institute of Standards and Technology, 1988 (NIST 400-81).

DK4126—Chapter28—23/5/2007—16:28—ANBARASAN—240457—XML MODEL CRC12a – pp. 1–76.

Electrical, Physical, and Chemical Characterization

28-73

244. Becker, D. A., R. M. Lindstrom, and T. Z. Hossain. “International Intercomparison for Trace Elements in Silicon Semiconductor Wafers by Neutron Activation Analysis.” In Semiconductor Characterization: Present Status and Future Needs, edited by W. M. Bullis, D. G. Seiler, and A. C. Diebold, 335–41. Woodbury, NY: American Institute of Physics, 1996. 245. McGuire, S. C., T. Z. Hossain, A. J. Filo, C. C. Swanson, and J. P. Lavine. “Neutron Activation for Semiconductor Material Characterization.” In Semiconductor Characterization: Present Status and Future Needs, edited by W. M. Bullis, D. G. Seiler, and A. C. Diebold, 329–34. Woodbury, NY: American Institute of Physics, 1996. 246. Smith, A. R., R. J. McDonald, H. Manini, D. L. Hurley, E. B. Norman, M. C. Vella, and R. W. Odom. “Low-Background Instrumental Neutron Activation Analysis of Silicon Semiconductor Materials.” J. Electrochem. Soc. 143 (1996): 339–46. 247. Paul, R. L., and R. M. Lindstrom. “Applications of Cold Neutron Prompt Gamma Activation Analysis to Characterization of Semiconductors.” In Seimiconductor Characterization: Present Status and Future Needs, edited by W. M. Bullis, D. G. Seiler, and A. C. Diebold, 342–5. Woodbury, NY: American Institute of Physics, 1996. 248. Lindstrom, R. M. “Activation Analysis of Electronics Materials.” In Microelectronics Processing: Inorganic Materials Characterization, edited by L. A. Casper, 294–307. Washington, DC: American Chemical Society, 1986. 249. Blondiaux, G., J-L. Debrun, and C. J. Maggiore. “Charged Particle Activation Analysis.” In Handbook of Modern Ion Beam Materials Analysis, edited by J. R. Tesmer, M. Nastasi, J. C. Barbour, C. J. Maggiore, and J. W. Mayer, 205–30. Pittsburgh, PA: Materials Research Society, 1995. 250. Haas, E. W., and R. Hofmann. “The Application of Radioanalytical Methods in Semiconductor Technology.” Solid-State Electron. 30 (1987): 329–37. 251. Hirvonen, J-P. “Nuclear Reaction Analysis: Particle-Gamma Reactions.” In Handbook of Modern Ion Beam Materials Analysis, edited by J. R. Tesmer, M. Nastasi, J. C. Barbour, C. J. Maggiore, and J. W. Mayer, 167–92. Pittsburgh, PA: Materials Research Society, 1995. 252. Lanford, W. A. “Nuclear Reactions for Hydrogen Analysis.” In Handbook of Modern Ion Beam Materials Analysis, edited by J. R. Tesmer, M. Nastasi, J. C. Barbour, C. J. Maggiore, and J. W. Mayer, 193–204. Pittsburgh, PA: Materials Research Society, 1995. 253. Downing, R. G., and G. P. Lamaze. “Nondestructive Characterization of Semiconductor Materials Using Neutron Depth Profiling.” In Semiconductor Characterization: Present Status and Future Needs, edited by W. M. Bullis, D. G. Seiler, and A. C. Diebold, 346–50. Woodbury, NY: American Institute of Physics, 1996. 254. Downing, R. G., J. T. Maki, and R. F. Fleming. “Application of Neutron Depth Profiling to Microelectronic Materials Processing.” In Microelectronics Processing: Inorganic Materials Characterization, edited by L. A. Casper, 163–80. Washington, DC: American Chemical Society, 1986. 255. Ehrstein, J. R., R. G. Downing, B. R. Stallard, D. S. Simons, and R. F. Fleming. “Comparison of Depth Profiling 10B in Silicon Using Spreading Resistance Profiling, Secondary Ion Mass Spectrometry, and Neutron Depth Profiling.” In Semiconductor Processing, ASTM STP 850, edited by D. C. Gupta, 409–25. Philadelphia, PA: American Society for Testing and Materials, 1984. 256. Ricci, E., and R. L. Hahn. 
“Rapid Calculation of Sensitivities, Interferences, and Optimum Bombarding Energies in 3He Activation Analysis.” Anal. Chem. 40 (1968): 54. 257. Feldman, L. C., and J. W. Mayer. Fundamentals of Surface and Thin Film Analysis. New York: NorthHolland, 1986. 258. Leavitt, J. A., L. C. McIntyre Jr.., and M. R. Weller. “Backscattering Spectrometry.” In Handbook of Modern Ion Beam Materials Analysis, edited by J. R. Tesmer, and M. Nastasi, 37–81. Pittsburgh, PA: Materials Research Society, 1995. 259. Anthony, J. M. “Ion Beam Characterization of Semiconductors.” In Materials Characterization: Present Status and Future Needs, edited by W. M. Bullis, D. G. Seiler, and A. C. Diebold, 366–76. Woodbury, NY: American Institute of Physics, 1996. 260. Jacobsen, F. M., M. J. Zarcone, D. Steski, K. Smith, P. Thieberger, K. G. Lynn, J. Throwe, and M. Cholewa. “Detection of Heavy Trace Impurities in Silicon.” Semicond. Int. (1996): 243–8.

DK4126—Chapter28—23/5/2007—16:28—ANBARASAN—240457—XML MODEL CRC12a – pp. 1–76.

28-74

Handbook of Semiconductor Manufacturing Technology

261. Mendenhall, M. H., and R. A. Weller. “High-Resolution Medium-Energy Backscattering Spectrometry.” Nucl. Instrum. Methods B 59/60 (1991): 120–3. 262. Baglin, J. E. E. “Elastic Recoil Spectrometry.” In Encyclopedia of Materials Characterization, edited by C. R. Brundle, C. A. Evans Jr.., and S. Wilson, 488–501. Boston, MA: Butterworth-Heinemann, 1992. 263. Barbour, J. C., and B. L. Doyle. “Elastic Recoil Detection: ERD.” In Handbook of Modern Ion Beam Materials Analysis, edited by J. R. Tesmer, and M. Nastasi, 83–138. Pittsburgh, PA: Materials Research Society, 1995. 264. Musket, R. G. “Particle-Induced X-Ray Emission.” In Encyclopedia of Materials Characterization, edited by C. R. Brundle, C. A. Evans Jr.., and S. Wilson, 357–69. Boston, MA: ButterworthHeinemann, 1992. 265. Tabacniks, M. H., A. J. Kellock, and J. E. E. Gaglin. “PIXE for Thin Film Analysis.” In Application of Accelerators in Research and Industry: Proceedings of the Fourteenth International Conference, edited by J. L. Duggan, and I. L. Morgan, 563–6. Woodbury, NY: American Institute of Physics, 1997. 266. Swanson, M. L. “Channeling.” In Handbook of Modern Ion Beam Materials Analysis, edited by J. R. Tesmer, and M. Nastasi, 231–300. Pittsburgh, PA: Materials Research Society, 1995. 267. Evans Analytical Group. Rutherford Backscattering Spectroscopy Theory Tutorial. http://www. eaglabs.com/en-US/references/tutorial/rbstheo/chanling.html (accessed on February 21, 2007). 268. Morris, W. G., S. Fesseha, and H. Bakhru. “Microbeam RBS and PIXE Applied to Microelectronics.” Nucl. Instrum. Methods B 24/25 (1987): 635–7. 269. Cross, B. J., D. C. Wherry, and T. H. Briggs. “New Methods for High-Performance X-Ray Fluorescence Thickness Measurements.” Plating Surf. Finishing (1988): 1–7. 270. Parekh, N., C. Nieuwenhuizen, J. Borstrok, and O. Elgersma. “Analysis of Thin Films in Silicon Integrated Circuit Technology by X-Ray Fluorescence Spectrometry.” J. Electrochem. Soc. 138 (1991): 1460–5. 271. Ernst, S., C. Lee, and J. Lee. “Thickness Measurement of Aluminum, Titanium, Titanium Silicide, and Tungsten Silicide Films by X-Ray Fluorescence.” J. Electrochem. Soc. 135 (1988): 2111–3. 272. Eichinger, P. “Total Reflection X-Ray Fluorescence.” In Encyclopedia of Materials Characterization, edited by C. R. Brundle, C. A. Evans Jr., S. Wilson, and L. E. Fitzpatrick, 349–56. Boston, MA: Butterworth-Heinemann, 1992. 273. Diebold, A. C., and B. Doris. “A Survey of Non-Destructive Surface Characterization Methods Used to Insure Reliable Gate Oxide Fabrication for Silicon IC Devices.” Surf. Interface Anal. 20 (1993): 127–39. 274. Klockenka¨mper, R. Total-Reflection X-Ray Fluorescence Analysis. New York: Wiley, 1997. 275. Jenkins, R., R. W. Gould, and D. Gedcke. Quantitative X-Ray Spectrometry. New York: Marcel Dekker, Inc., 1997. 276. Nichols, M. C., D. R. Boehme, R. W. Ryon, D. Wherry, B. Cross, and G. Aden. “Parameters Affecting X-Ray Microfluorescence (XRMF) Analysis.” In Advances in X-Ray Analysis, edited by C. S. Barrett, J. V. Gilfrich, R. Jenkins, D. E. Leyden, J. C. Russ, and P. K. Predecki, 45–51. New York: Plenum Publishing Corporation, 1987. 277. Isaacs, E. D., K. Evans-Lutterodt, M. A. Marcus, A. A. Macdowell, W. Lehnert, J. M. Vandenberg, S. Sputz., et al. “X-Ray Microbeam Techniques and Applications.” In Diagnostic Techniques for Semiconductor Materials and Devices, edited by P. Rai-Choudhury, J. L. Benton, D. K. Schroder, and T. J. Shaffner, 49–67. Pennington, NJ: The Electrochemical Society, 1997. 278. Attaelmanan, A., P. 
Voglis, A. Rindby, S. Larsson, and P. Engstrom. “Improved Capillary Optics Applied to Microbeam X-Ray Fluorescence: Resolution and Sensitivity.” Rev. Sci. Instrum. 66 (1995): 24–7. 279. Brundle, C. R. “X-Ray Photoelectron Spectroscopy.” In Encyclopedia of Materials Characterization, edited by C. R. Brundle, C. A. Evans Jr., S. Wilson, and L. E. Fitzpatrick, 282–99. Boston, MA: Butterworth-Heinemann, 1992. 280. Nieveen, W. In Proceedings 207th ECS Meeting, Vol. PV 2005-01, 208–22, Quebec, Canada 2005. 281. Seah, M. P., and W. A. Dench. “Quantitative Electron Spectroscopy of Surfaces: A Standard Data Base for Electron Inelastic Mean Free Paths in Solids.” Surf. Interface Anal. 1 (1979): 2–11.

DK4126—Chapter28—23/5/2007—16:28—ANBARASAN—240457—XML MODEL CRC12a – pp. 1–76.

Electrical, Physical, and Chemical Characterization

28-75

282. Moulder, J. F., W. F. Stickle, P. E. Sobol, and K. D. Bomben. Handbook of X-Ray Photoelectron Spectroscopy. Eden Prairie, MN: Physical Electronics Inc., 1995. 283. Moulder, J., ed. The PHI Interface. Vol. 21 (1), 6. Chanhassen, MN: Physical Electronics, 2005. 284. Strausser, Y. E. “Auger Electron Spectroscopy.” In Encyclopedia of Materials Characterization, edited by C. R. Brundle, C. A. Evans Jr., S. Wilson, and L. E. Fitzpatrick, 310–23. Boston, MA: Butterworth-Heinemann, 1992. 285. Auger, M. P., and M. J. Perrin. “Sur les Rayons b Secondaires Produits dans un Gaz par des Rayons X.” Comptes rendus 180 (1925): 65–8 (orig.). 286. Briggs, D., and M. P. Seah. Practical Surface Analysis by Auger and X-Ray Photoelectron Spectroscopy. New York: Wiley, 1984. 287. Harris, L. A. “Analysis of Materials by Electron-Excited Auger Electrons.” J. Appl. Phys. 39 (1968): 1419–27. 288. Handbook of Auger Electron Spectroscopy. Eden Prairie, MN: Physical Electronics Industries, Inc. 1976. 289. Shaffner, T. J. “Rapid Semi-Quantitative Analysis for Routine Applications of Scanning Auger Microscopy.” Scanning Electron Microsc. I (1980): 479–86. 290. Hall, P. M., and J. M. Morabito. “Compositional Depth Profiling by Auger Electron Spectroscopy.” CRC Crit. Solid State Mater. Sci. 8 (1978): 53–67. 291. Hofmann, S., and A. Zalar. “Auger Electron Spectroscopy Depth Profiling of Ni/Cr Multilayers by Sputtering with NC 2 Ions.” Thin Solid Films 60 (1979): 201–11. 292. McCarthy, G. J., J. M. Holzer, W. M. Syvinski, K. J. Martin, and R. G. Garvey. “Evaluation of Reference X-Ray Diffraction Patterns in the ICDD Powder Diffraction File.” In Advances in X-Ray Analysis, edited by C. S. Barrett, J. V. Gilfrich, I. C. Noyan, T. C. Huang, and P. K. Predecki, 369–76. New York: Plenum Press, 1991. 293. Goehner, R. P., and M. C. Nichols. “X-Ray Powder Diffraction.” In Metals Handbook Ninth Edition: Volume 10 Materials Characterization, edited by R. E. Whan, 333–43. Metals Park, OH: American Society for Metals, 1986. 294. Toney, M. F. “X-Ray Diffraction.” In Encyclopedia of Materials Characterization, edited by R. C. Brundle, C. A. J. Evans, and S. Wilson, 198–213. Boston, MA: Butterworth-Heinemann, 1992. 295. Cullity, B. D. Elements of X-Ray Diffraction. Reading, MA: Addison-Wesley Publishing Co., 1978. 296. Adams, B. L. “Crystallographic Texture Measurement and Analysis.” In Metals Handbook Ninth Edition: Volume 10 Materials Characterization, edited by R. E. Whan, 357–79. Metals Park, OH: American Society for Metals, 1986. 297. Bowen, D. K., and B. K. Tanner. High Resolution X-Ray Diffractometry and Topography. London, U.K.: Taylor & Francis Publishers Ltd, 1998. 298. Hart, M. “Bragg Angle Measurement and Mapping.” J. Cryst. Growth 55 (1981): 409–27. 299. Tanner, B. K. “X-Ray Scattering for Semiconductor Characterization.” In Semiconductor Characterization: Present Status and Future Needs, edited by W. M. Bullis, D. G. Seiler, and A. C. Diebold, 263–72. New York: American Institute of Physics, 1996. 300. Patel, J. R. “X-Ray Anomalous Transmission and Topography of Oxygen Precipitation in Silicon.” J. Appl. Phys. 44 (1973): 3903–6. 301. Pangborn, R. N. “X-Ray Topography.” In Metals Handbook Ninth Edition: Volume 10 Materials Characterization, edited by R. E. Whan, 365–79. Metals Park, OH: American Society for Metals, 1986. 302. Patel, J. R. “X-Ray Diffuse Scattering from Silicon Containing Oxygen Clusters.” J. Appl. Cryst. 8 (1975): 186–91. 303. Koppel, L. N., and L. Parobek. 
“Thin-Film Metrology by Rapid X-Ray Reflectometry.” In International Conference on Characterization and Metrology for ULSI Technology, edited by D. G. Seiler, 1. New York: American Institute of Physics, 1998. 304. Miyauchi, A., K. Usami, and T. Suzuki. “X-Ray Reflectivity Measurement of an Interface Layer between a Low Temperature Silicon Epitaxial Layer and HF-Treated Silicon Substrate.” J. Electrochem. Soc. 141 (1994): 1370–4.

DK4126—Chapter28—23/5/2007—16:28—ANBARASAN—240457—XML MODEL CRC12a – pp. 1–76.

28-76

Handbook of Semiconductor Manufacturing Technology

305. Padmore, H. A., and P. Pianetta. “X-Ray Microscopy and TXRF: Emerging Synchrotron Techniques for Semiconductor Characterization.” In International Conference on Characterization and Metrology for ULSI Technology, edited by D. G. Seiler, New York: American Institute of Physics, 1998. 306. Qadri, S. B., D. Ma, and M. Peckerar. “Double-Crystal X-Ray Topographic Determination of Local Strain in Metal-Oxide-Semiconductor Device Structures.” Appl. Phys. Lett. 51 (1987): 1827–9. 307. Zaumseil, P., U. Winter, M. Servidori, and F. Cembali. “Determination of Defect and Strain Distribution in Ion Implanted and Annealed Silicon by X-Ray Triple Crystal Diffractometry.” In Gettering and Defect Engineering in Semiconductor Technology/GADEST 1987, edited by H. Richter, 195–9. Germany: Academy Sciences, 1987. 308. Kawado, S., S. Kojima, and I. Maekawa. “Influence of Crystal Imperfection on High-Resolution Diffraction Profiles of Silicon Single Crystals Measured by Highly Collimated X-Ray Beams.” Appl. Phys. Lett. 58 (1991): 2246–8. 309. Huff, H. R., H. F. Schaake, J. T. Robinson, S. C. Baber, and D. Wong. “Some Observations on Oxygen Precipitation/Gettering in Device Processed Czochralski Silicon.” J. Electrochem. Soc. 130 (1983): 1551–5. 310. Tuomi, T., M. Tuominen, E. Prieur, J. Partanen, J. Lahtinen, and J. Laakkonen. “Synchrotron Section Topographic Study of Czochralski-Grown Silicon Wafers for Advanced Memory Circuits.” J. Electrochem. Soc. 142 (1995): 1699–701. 311. Partanen, J., T. Tuomi, and K. Katayama. “Comparison of Defect Images and Density between Synchrotron Section Topography and Infrared Light Scattering Microscopy in Heat-Treated Czochralski Silicon Crystals.” J. Electrochem. Soc. 139 (1992): 599–604. 312. Jiang, B. L., F. Shimura, and G. A. Rozgonyi. “X-Ray Moire Pattern in Dislocation-Free Silicon-onInsulator Wafers Prepared by Oxygen Ion Implantation.” Appl. Phys. Lett. 56 (1990): 352–4. 313. Turrell, G., and J. Corset. Raman Microscopy, Developments and Applications. London, U.K.: Academic Press Limited, 1996. 314. Ferraro, J. R., and K. Nakamoto. Introductory Raman Spectroscopy. San Diego, CA: Academic Press, Inc., 1994.

DK4126—Chapter28—23/5/2007—16:28—ANBARASAN—240457—XML MODEL CRC12a – pp. 1–76.

29

Failure Analysis

Lawrence C. Wagner
Texas Instruments, Inc.

29.1 Introduction
29.2 Failure Site Isolation
     Electrical Characterization † Tools for Bench Electrical Characterization † Die Exposure Techniques † Global Techniques † Probing
29.3 Physical Analysis Tools
     Package Analysis † Deprocessing † Parallel Polishing † Cross-Section Analysis † Microscopy † Transmission Electron Microscopy
29.4 Chemical Characterization
     X-Ray Analysis (Energy Dispersive) † Auger † Secondary Ion Mass Spectroscopy (SIMS) † Microspot FTIR † Others
29.5 Future of Failure Analysis
References
29.1 Introduction

Failure analysis (FA) describes the process of diagnosis of defective integrated circuits (ICs) [1]. Historically, this has been most closely tied to the analysis of packaged devices, primarily customer returns and qualification failures. However, defective circuits that require analysis occur at all stages of manufacture and use. Some of the primary applications include design debug, product and process ramps, yield enhancement, reliability test failures, and customer returns (see Table 29.1). There is a fairly common tool set for these diagnostic activities, although the implementations may vary significantly. In general, the FA process consists of two phases (see the general FA process flow in Figure 29.1). The first is determining the electrical cause of failure, or failure site isolation. This is the process of narrowing the scope of analysis from a complex integrated circuit down to a much simpler problem: the analysis of a single failing net, transistor, thin film, or junction. The tools used in the isolation process begin with an electrical test. In fact, electrical testing can in some cases provide failure site isolation by itself. This is particularly true in the case of single-bit memory failures, where the failure site is quickly isolated through electrical test alone to a very small area. The temperature and voltage dependence of the single-bit failure can further isolate the failure to a particular structure in the memory bit [2]. Logic failures can also be isolated or partially isolated by electrical testing. This is particularly true of ICs with scan-based Design for Test (DFT) structures [3,4]. Electrical testing also provides a method of placing the device in a failing condition. This failing condition is required for the application of the various physical failure site isolation tools.
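For illustration, the following is a minimal sketch of how a single-bit memory failure reported by electrical test can be reduced to physical array coordinates for inspection. The array geometry and decode constants here are hypothetical placeholders, not taken from this chapter; a real decode follows the design's address-scrambling and topology documentation.

    # Minimal sketch: translate a failing memory (address, data bit) pair
    # into physical array coordinates. All geometry constants are assumed.
    ROWS_PER_BANK = 512    # rows per bank (hypothetical)
    COLS_PER_ROW = 128     # word locations per row (hypothetical)
    IO_WIDTH = 8           # data bits per access (hypothetical)

    def decode_failing_bit(address: int, failing_bit: int) -> dict:
        """Map a tester-reported single-bit fail to bank/row/column."""
        word = address % COLS_PER_ROW
        row = (address // COLS_PER_ROW) % ROWS_PER_BANK
        bank = address // (COLS_PER_ROW * ROWS_PER_BANK)
        # Assume each data bit occupies its own column stripe within a word.
        column = word * IO_WIDTH + failing_bit
        return {"bank": bank, "row": row, "column": column}

    # Example: one failing address/bit pair from a memory test datalog.
    print(decode_failing_bit(address=0x1A2F, failing_bit=5))

With coordinates such as these in hand, the voltage and temperature behavior of the bit can then be used, as described above, to point toward a specific structure in the memory cell before any physical work begins.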


TABLE 29.1 Failure Analysis (FA) Is a Loosely Defined Term

Diagnostic Activity | Primary Failure Site Isolation Techniques | Primary Physical Analysis | Comments
Design debug | Mechanical probe; e-beam probe | Circuit analysis; deprocessing | Focused on identification of design errors or marginality
Wafer fab yield improvement: defect based | Electrical data analysis overlaid on defect map | Focused ion beam (FIB) cross section | Statistical approach with emphasis on efficient analysis of a large sample size
Wafer fab yield improvement: parametric based | Electrical analysis of test structures | Cross section of test structures | Interest is primarily in improvement in parameter means and distribution
Wafer fab yield improvement: unmodeled loss | Global techniques | | Typically associated with a design feature which interacts with the wafer fab process for a higher defect density
Assembly test yield improvement | Mechanical probe; e-beam probe; emission microscopy | Circuit analysis; deprocessing; cross section | Statistically based on most common failure bins
Qualification failures | Non-destructive package analysis: x-ray, scanning acoustic microscopy, time domain reflectometry | Decapsulation with visual/scanning electron microscope inspection | Typically unique failures must be successfully analyzed; no statistical population
Customer returns | All | All | Sample sizes vary greatly and statistical significance varies

Note: Several diagnostic activities that are commonly referred to as FA are summarized in Table 29.1. Failure analysis can have significantly different connotations in various semiconductor environments. These activities share a common tool set with varying emphasis and application.

These tools can be broadly categorized as global and probing techniques. Global techniques use secondary events such as thermal or light generation from a failure site to isolate a failure. These techniques are particularly powerful because they can be performed quickly and do not require an understanding of the circuit operation. Probing techniques allow direct electrical measurements on electrical nets or nodes in the failed device. Localization of defects by probing is frequently tedious and time consuming.

The failure site isolation process generally leads to an electrical cause of failure, for example, a net shorted to ground or an open net. It may also provide an accurate location of the failure, for example, the precise location of an open conductor or shunted signal lines. Once isolated, the physical cause of the failure remains to be determined. This is achieved through physical observation of the defect. Additional electrical characterization may be required during the process of exposing the defective area to further narrow down the location and type of failure. When a particle or other contamination is involved, chemical analysis also forms a critical part of understanding the source of the contamination. However, the physical cause of failure can cover a wide range of possible failure mechanisms. In a design debug activity, the physical cause may be a layout anomaly. In a package-related failure, it could be an adhesion failure between the mold compound and the die surface. The task of physical analysis utilizes a broad set of tools, which include various forms of microscopy as well as chemical analysis. Understanding the failure may entail other tasks such as circuit simulations to verify that the physical cause of failure matches the electrical cause of failure. This is particularly true for design debug activities. This is also often the case for subtle analog circuit failures, where small process shifts or circuit element mismatches can play a critical role.

In a broader sense, the FA process described above is an element of improvement processes in the semiconductor industry (see Figure 29.2).


[FIGURE 29.1 The general process flow of failure analysis (FA) can be broken down into the determination of the electrical cause of failure followed by determination of the physical cause of failure. Parallel paths are shown for die-related defects (electrical characterization, die exposure, global or probe failure site isolation, defect exposure, physical inspection, and chemical analysis) and package-related defects (electrical characterization, package analysis, defect/die exposure, physical inspection, and chemical analysis).]

The first part of this process is determining which failures are most significant to analyze and understand. The failure modes can provide signatures [5,6] of various categories of failures to sample from. A good example is yield analysis, where Pareto distributions of failure modes (electrical signatures of failure) drive the selection of devices to analyze; a minimal sketch of this follows Figure 29.2 below. Time is expended in analysis of the most common failures because elimination of these failures will have the greatest potential impact on yield. Closure of the improvement process consists of root cause identification and corrective actions. The root cause of the failure goes beyond understanding the physical cause of the failure. A good example is a conductive particle creating a short circuit. The physical cause of this failure can be fully understood in terms of the resistance, composition, and location of the particle in the FA process. The root cause of this failure additionally requires an understanding of how the particle was generated in the wafer processing tool. For example, the root cause of the particle might be mechanical wear in a load lock of the wafer fab tool, which is generating particles from a particular component in the load lock with the same composition as the particle.

[FIGURE 29.2 General flow of diagnosis in semiconductors: failure selection, electrical cause of failure, physical cause of failure, root cause of failure, and corrective action. FA generally includes determining the electrical and the physical cause of failure.]
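As a concrete illustration of the failure-selection step referenced above, the sketch below builds a Pareto of electrical failure signatures from tester bin counts, so that analysis effort goes first to the bins with the greatest potential yield impact. The bin names and counts are hypothetical illustration data, not from this chapter.

    # Minimal sketch: Pareto of electrical failure signatures from tester
    # bins. Bin labels and counts are hypothetical illustration data.
    from collections import Counter

    failing_bins = Counter({
        "IDDQ leakage": 412,
        "Continuity (open/short)": 267,
        "Single-bit memory fail": 181,
        "Functional (scan)": 96,
        "VOH parametric": 44,
    })

    total = sum(failing_bins.values())
    cumulative = 0.0
    for bin_name, count in failing_bins.most_common():
        cumulative += 100.0 * count / total
        print(f"{bin_name:28s} {count:5d}  cumulative {cumulative:5.1f}%")

    # Devices from the top one or two bins would be sampled first for FA,
    # since eliminating those mechanisms has the largest yield impact.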


29.2 Failure Site Isolation

As discussed above, failure site isolation begins with electrical characterization [7]. Understanding how a device fails electrically is a critical first step in the failure site isolation flow. In some cases, it can provide a relatively complete understanding of the failure, for example, single-bit memory failures. In most cases, it provides a good understanding of how to proceed with physical failure site isolation. It also provides an electrical test set-up to place the device into a failing condition. This is essential for the various techniques used in the physical failure isolation process. In addition, die exposure is required before proceeding to physical failure site isolation, except for the analysis of unpackaged devices or wafers. Global techniques are typically the first set of techniques applied, since they provide quick isolations when successful. If the global techniques are unsuccessful, more time-consuming probing techniques must be employed. In some cases, a more detailed understanding of the component-level failure is obtained by isolating and characterizing the failed electrical component.

29.2.1 Electrical Characterization

Electrical characterization begins with the analysis of the electrical failure signature. In many cases, this is the datalog from automatic test equipment (ATE). If ATE is not readily available, other methods may be attempted before shipping a part to a test floor. These include curve tracing, system functionality testing (typically a populated circuit board with a socket in place of the component to be tested, used to verify system operation), and emulation boards. A datalog is simply an electronic output of the results from the various tests performed as part of the production test program. Where a large number of datalogs are to be examined, such as in wafer fab yield analysis, it is customary to use binning (putting failures into categories by the type of electrical test failed) to provide a first-pass sorting of the devices; a minimal binning sketch follows this discussion. This helps to determine which devices to focus analysis on. In addition, this binning may be combined with wafer mapping to identify patterns within the wafer lots that may also drive the direction of the analysis. As die identification becomes more common, it becomes realistic to apply the concepts of wafer mapping to reliability and customer return failures [8].

The interpretation of datalogs is a key first step in the FA process. In general, the focus is on a hierarchy of failure modes. This hierarchy is used to determine the simplest electrical test conditions that can be used to create the failing condition. The first level in this hierarchy is continuity failures: opens and shorts. An open or shorted pin usually generates a large number of parametric and functional failures. Since the parametric and functional failures are due to the open or shorted pin, the only electrical characteristic that must be analyzed is the continuity failure. For example, an output shorted to ground will result in parametric failures such as voltage output high (VOH) as well as functional failures at each vector where the shorted output is expected to be high. These tend to drive quick isolation, since continuity testing is typically a measure of the assembly interconnections and the diodes directly connected to the pin. Hence the failure must occur within the pin circuitry or in the package connection to the pin.

The second level of the failure hierarchy is parametric failures. There are a number of parameters that are measured for each input and for each output of a circuit during production electrical testing. Many of these parameters can be measured without regard to the bias of other pins, except power and ground pins. However, measurements of some output pin parameters can require biasing that output into a particular logic state in order to make the measurement. In such cases, an output parameter failure may represent only a failure to get the output into the correct state for the measurement, and the failure which must be analyzed may in fact be a functional failure preventing the output from being in the anticipated state. However, most input or output parametric failures are generally isolated to the circuitry directly attached to the failing pin.
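The first-pass sorting referenced above can be as simple as mapping each device's failing tests onto the continuity/parametric/functional hierarchy just described. The sketch below is a minimal illustration; the test names and datalog layout are hypothetical, not those of any particular tester.

    # Minimal sketch: first-pass binning of ATE datalog records into the
    # continuity -> parametric -> functional hierarchy. Test names and the
    # record layout are hypothetical placeholders.
    CONTINUITY_TESTS = {"opens", "shorts"}
    PARAMETRIC_TESTS = {"VOH", "VOL", "IIL", "IIH", "IDDQ"}

    def bin_device(failed_tests: list) -> str:
        """Return the highest-priority failure bin for one device."""
        failed = set(failed_tests)
        if failed & CONTINUITY_TESTS:
            return "continuity"    # opens/shorts dominate everything else
        if failed & PARAMETRIC_TESTS:
            return "parametric"
        return "functional" if failed else "pass"

    # Example: a shorted output also fails VOH and functional vectors,
    # but the hierarchy bins it as a continuity failure.
    print(bin_device(["shorts", "VOH", "func_vec_112"]))   # -> continuity
    print(bin_device(["IDDQ"]))                            # -> parametric
    print(bin_device(["func_vec_7"]))                      # -> functional

Binning results like these are what would then be accumulated per die site to build the wafer maps mentioned above.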
failures are particularly amenable to the global isolation techniques discussed below. Functional failures without power supply leakage, on the other hand, are typically isolated using scan design-for-test methods or must be probed for isolation.

In complementary metal-oxide-silicon (CMOS) circuits, the quiescent power supply current (IDDQ) is typically measured in states where a low leakage level is anticipated for a good device. IDDQ leakage may be state dependent, i.e., it may occur only under specific logic conditions, or it may be state independent. For state-independent leakage, electrical biasing for isolation is not as critical. For state-dependent leakages, however, the capability to put the device into the failing condition is essential for successful analysis: the device must be electrically preconditioned into the failing state in order to measure or isolate the IDDQ failure. This can require sequentially entering a large number of input patterns. Static CMOS logic typically has an extremely low IDDQ, which allows detection of very low-level leakages in the device logic. For devices without low-leakage states, power supply leakage may still be a useful parameter for FA even if it is not measured as part of the production test program. By comparing the leakage on a failing device to that on a known good device, it may be possible to isolate the source of the increased leakage that is causing the observed failure. Variations in IDDQ with temperature can also provide useful insight into the type of failure mechanism being observed [9].

The final category is functional failures: failures that exhibit neither abnormal IDDQ leakage nor other parametric anomalies. Certain categories of functional failures can be isolated with good precision in the IC. These include memory failures, for which electrical data may precisely identify an area for physical analysis, and scan failures, for which tools exist to isolate the failure to certain nets, in some cases as precisely as a single net in the circuit. With the exception of these categories, functional failures are typically the most difficult type of failure to analyze. This is especially true of frequency-dependent and temperature-dependent functional failures. Functional FA is particularly demanding because global techniques tend to be less successful on this category of failures, forcing the analyst to resort to probing approaches.
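The known-good-device comparison described above lends itself to simple automation. The following is a minimal sketch in Python of flagging state-dependent IDDQ leakage; the per-state current values and the ratio threshold are illustrative stand-ins, not data from any real device or tester.

# Quiescent supply current (amperes) measured in a set of logic states
# for a known good reference device and for the failing device.
# All values here are illustrative only.
reference = {"s0": 2.1e-8, "s1": 2.3e-8, "s2": 2.0e-8, "s3": 2.2e-8}
failing = {"s0": 2.2e-8, "s1": 2.1e-8, "s2": 8.4e-6, "s3": 2.3e-8}

THRESHOLD = 10.0  # flag states drawing >10x the reference current
leaky = [s for s in reference if failing[s] / reference[s] > THRESHOLD]

if not leaky:
    print("no elevated IDDQ observed")
elif len(leaky) < len(reference):
    # these are the states into which the device must be preconditioned
    print("state-dependent IDDQ leakage in states:", leaky)
else:
    print("state-independent IDDQ leakage")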

29.2.2 Tools for Bench Electrical Characterization

Curve tracer [10] and parametric analyzer I–V curves are typically adequate for understanding continuity failures. While the datalog identifies the failing pins, I–V curves provide a better understanding of the failure; for example, a high resistance on a pin can appear as an open on a datalog. Parametric FA requirements range from simple leakage failures, which can be observed with a curve tracer or parametric analyzer, to more difficult parameters that require control of various pins on the device. Such control is typically needed to put an output into a particular logic state. For example, measurement of VOH and output-high current (IOH) requires putting the output into a high state; depending on the device type, this may require control of at least one and usually more input pins.

One factor contributing to the increasing difficulty of FA is the growth in device complexity. For electrical characterization, this is reflected in the increase in typical pin counts as well as transistor counts. The tools used for electrical characterization of devices and for biasing during failure site isolation depend very heavily on the complexity of the devices (see Table 29.2). For low pin count devices (2–28 pins), electrical characterization is best performed with simple bench test electronics. For continuity failures, this includes curve tracers and parametric analyzers. For parametric failures, this includes a range of meters and power supplies as well as customized test boxes, which are frequently hand built. Powered curve tracing, i.e., I–V curves generated with a static bias on the pins, provides a great deal of information about the possible location of the failure in the input or output structure. Analysis of functional failures usually also requires simple function or pattern generation capability as well as oscilloscopes for output measurement.

Medium pin count devices (through 300 pins) typically require a different range of equipment. The management and measurement of a larger number of pins require a level of automation for efficiency. Typically, a pin matrix with an automated curve tracer is significantly more efficient for continuity failures than manual curve tracers.

TABLE 29.2 Various Types of Electronics Are Used for Electrical Characterization of Failed Devices

Low pin count (2–28 pins):
  I–V characterization: standard curve tracer; parameter analyzer
  Parametric analysis: power supplies; meters; parameter analyzers; custom boxes
  Functional testing/timing: pulse/pattern generators; oscilloscopes

Medium pin count (28–300 pins):
  I–V characterization: pin matrices; automated curve tracer
  Parametric analysis: pin matrices; automated curve tracer; power supplies; meters
  Functional testing/timing: high pin count pattern generators; logic analyzers; application specific integrated circuit verification testers

High pin count (>300 pins):
  I–V characterization: automated curve tracer
  Parametric analysis: automated curve tracers; ATE
  Functional testing/timing: automatic test equipment (ATE)

Ultra high pin count (>1000 pins):
  I–V characterization: automated curve tracers; ATE
  Parametric analysis: automated curve tracers; ATE
  Functional testing/timing: ATE

A limited summary of the most common tools is presented above.

Parametric analysis may still be performed on a bench, with a pin matrix replacing the custom boxes. Some parametric analysis can also be performed on the automated curve tracers. Functional FA requires a much higher level of pin control, and simple bench electronics become increasingly ineffective as the pin count grows through this range. Through the lower part of this range, high pin count pattern generators and logic analyzers are the primary tools for characterization. As pin counts continue to grow, integration of these components into a single box occurs in the form of the application specific integrated circuit (ASIC) verification tester.

For high pin count devices (greater than 300 pins), there remain cases, such as continuity and simple parametric failures, where the automated curve tracer is still useful. However, where control of a significant number of pins is required, the use of ATE or production test equipment is increasingly indicated [11]. The issues of correlation and set-up time for ASIC verification testers make them significantly less effective in this pin count range. In general, the greater the pin count, the less effective lab-scale testers become relative to production testers. For ultra high pin count devices with more than a thousand pins, ATE is commonly the cost-effective method for all electrical measurements. For FA of a single high-volume product, the usefulness of the ASIC verification tester may extend well beyond this pin count range; for diverse product mixes, it tends to become limited at lower pin counts.

An additional consideration is that the electrical stimulus must be compatible with the physical failure site isolation tools, since the device must be biased while in the failure site isolation tool. For static DC biases, this can be achieved in a number of ways, including cable harness fixtures, even for relatively high pin count devices. For functional failures, this is a much more complex issue, since cabling can add significant correlation problems at speed. While the impact of cable harnesses on the analysis of timing failures is fairly clear, the timing of edges can also be critical to ensure that the expected electrical state is reached during electrical preconditioning.
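The interpretation of pin I–V data discussed above can be reduced to simple slope logic. The sketch below, a hedged illustration rather than any instrument's actual interface, distinguishes a true open from the high-resistance connection that a pass/fail datalog would also report as an open; the current arrays are synthetic stand-ins for curve tracer data.

import numpy as np

# Two pins that fail the same continuity test limit: one truly open,
# one with an abnormally high series resistance. The I-V slope near
# 0 V separates cases that a binned datalog lumps together.
v = np.linspace(-0.5, 0.5, 101)
i_true_open = np.random.normal(0.0, 1e-11, v.size)  # measurement noise only
i_resistive = v / 2.0e4                             # ~20 kohm path

def small_signal_resistance(v, i):
    slope = np.polyfit(v, i, 1)[0]  # conductance in A/V
    return np.inf if slope <= 0 else 1.0 / slope

for name, i in (("pin A", i_true_open), ("pin B", i_resistive)):
    r = small_signal_resistance(v, i)
    verdict = "open" if r > 1e8 else f"high resistance (~{r:.0f} ohm)"
    print(name, "->", verdict)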

29.2.3 Die Exposure Techniques

In addition to a suitable electrical stimulus, failure site isolation typically requires some level of sample preparation in order to expose the die [12]. Analysis of wafers or unpackaged dice is the obvious exception. Historically, this has been decapsulation or delidding. Decapsulation is the process of
removing the mold compound from the die surface of a plastic encapsulated device. Typically, the mold compound is removed only in the area above the die and bond wires. Hot fuming sulfuric and nitric acids are the most commonly used decapsulating agents. Jet etch systems [13] employing these acids, or mixtures of them, have become the dominant method for decapsulation. Jet etchers provide a safer alternative to the older techniques described below, as well as a more consistent delivery of the hot decapsulation agent. A low-level vacuum is used to create the jet and to hold the inverted device in place. The hole in the fluorocarbon vacuum seal for the device provides a masking effect that controls the size of the decapsulation opening. Consistent delivery of hot acid is essential for reducing the decapsulation time. In addition to impacting productivity, long exposure of the mold compound to the acid can result in absorption of the acid, which can lead to swelling of the mold compound and mechanical damage to the bond wires. Older techniques, which employ immersion in a decapsulating acid or dropping the acid onto a cavity prepared in the device, may also be used [14]. In general, the goal of decapsulation is to expose the top surface of the die while maintaining electrical continuity (Table 29.3).

In cases where it is more important to maintain the chemical integrity of the internal surfaces for analysis, dry decapsulation may be employed. This is a particularly important approach where metallization corrosion is observed or suspected. Thermomechanical decapsulation [15,16] is a specialized procedure typically reserved for failures that are expected to be due to metallization corrosion, although it has also been used on failures prone to recovery during wet chemical decapsulation. The primary disadvantage of this approach is the loss of electrical continuity. The primary advantage for corrosion failures is that contamination and corrosion products are not removed from the die surface by dissolution, allowing chemical analysis of the corrosion initiators and byproducts. For decapsulation-recoverable failures, this approach may yield a higher percentage of devices that can continue electrically through failure site isolation. A variety of approaches have been employed, involving various combinations of grinding, breaking, and heating. One technique is to crack a heated device along the interface between the upper lead frame and the mold compound; if the die surface is not exposed by the fracture, the top of the device is heated until the mold compound softens and the die can be lifted out with tweezers. In an earlier version, the backside was ground away to expose the die; the device was then heated and the die lifted or pried out of the mold compound. A third approach is to heat the device in a fume hood until it begins to smoke and then twist the package. Other variations have been employed with varied success rates. Techniques are often selected based on the sample size and required success rate.

Plasma decapsulation has also been used for plastic packages [17]. It has the potential advantage of high selectivity, typically using an ashing plasma (primarily oxygen with some fluorocarbon), which minimizes etching of the die and lead frame materials. The primary disadvantage is the time required.
Since the filler material in the mold compound is typically not etched by the oxygen plasma, etching of the polymer slows as filler material is exposed. This makes it essential to remove the exposed filler material periodically.

TABLE 29.3 Summary of Standard Approaches for Exposing the Die in Various Package Types

Plastic encapsulated IC:
  Jet etch: required for many modern mold compounds; offers the flexibility of various acids and acid mixtures
  Dry decapsulation: thermomechanical fracture of the package to expose the die without chemicals; useful for chemical analysis follow-up
  Plasma decapsulation: time-consuming approach used for niche applications where acid alters the failure mode or defect
  Laser decapsulation: expensive, and damage to the device being decapsulated is difficult to control
Cavity package:
  Mechanical: both grinding off the lid and fracturing the lid seal with a knife or razor edge
Backside:
  Mechanical: combination of grinding and mechanical removal, followed by polishing to a mirror finish

Laser decapsulation has also been attempted recently; efforts have been limited by the cost of the equipment and by the difficulty of completely removing the mold compound while maintaining the electrical performance of the device [18,19].

Delidding is employed for cavity packages [20], and a variety of techniques are available. The delidding of hermetic parts is normally a relatively straightforward mechanical process of either breaking the seal or grinding off the lid or lid seal. For ceramic-lid devices, the choice between breaking and grinding is usually determined by the extra risk to lead integrity in breaking the lid seal versus the longer time required to remove a lid by grinding. Fixtures with knife-edges for cracking open the lid seal provide a relatively safe and convenient approach for many packages. Metal lid seals can usually be fractured by tapping a sharp edge, such as a razor blade, into the seal. Mechanical milling may be required if access to the edges of the lid is limited, e.g., for cavity-down pin grid arrays.

As flip-chip mounting has become more important, backside failure site isolation techniques are more commonly used. Backside techniques are also increasing in popularity for the analysis of wire-bonded devices as the number of metallization levels on the die increases, since the additional levels greatly reduce the observability of the die from the top. Backside analysis requires that the back of the die be exposed and usually polished in preparation for analysis. The techniques range from very simple to complex procedures. In many cases, it is possible to use standard cross-sectioning equipment to polish the back of the die. Where localized polishing is required for thinner silicon, tools ranging from mechanical and ultrasonic mills to focused ion beam (FIB) and laser-enhanced silicon removal processes can be employed [21–23]. In addition, given the desirability of backside analysis for devices with many levels of metal or high metal area coverage, techniques have been developed to repackage wire-bonded devices for backside analysis. Typically, the device is thinned to approximately 100 µm for backside analysis. Since many backside techniques employ an infrared (IR) light probe, reducing the absorption of IR light in the sample improves signal to noise; at the same time, a device thinned to 100 µm retains reasonable mechanical stability. In cases where further thinning is required, FIB and chemically enhanced laser removal are most commonly used for local thinning of the device.
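The benefit of thinning can be estimated with a simple Beer-Lambert calculation. Assuming, purely for illustration, a free-carrier absorption coefficient of \alpha \approx 20\ \mathrm{cm^{-1}} for a moderately doped substrate at near-IR wavelengths (the actual value rises steeply with doping):

T = e^{-\alpha d}, \qquad T(500\ \mu\mathrm{m}) = e^{-20 \times 0.05} \approx 0.37, \qquad T(100\ \mu\mathrm{m}) = e^{-20 \times 0.01} \approx 0.82

Under this assumption a full-thickness die transmits roughly a third of the IR in a single pass, while a die thinned to 100 µm transmits over 80%, and the advantage compounds for techniques in which the light crosses the silicon twice.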

29.2.4 Global Techniques

Global techniques provide a methodology for quick isolation of the failure location. The two most dominant techniques have been hot spot detection (thermal) and emission microscopy [24,25]. In addition, a number of techniques have been developed to take advantage of particular types of defects that can be impacted by localized heating or carrier generation.

29.2.4.1 Hot Spot Detection

Historically, the earliest forms of hot spot detection were spatially resolved IR mapping and a boiling freon technique; these generally lacked good spatial and thermal resolution. With the advent of double-level metallization, a global thermal approach became critical for the detection of interlevel metallization shorts, and liquid crystal was developed into an excellent approach for detecting these resistive shorts as well as defects at the transistor level. The thermally sensitive liquid crystal is applied to the surface of the failed device, typically using a volatile solvent to spread the liquid crystal more uniformly. A liquid crystal with a thermal transition just above room temperature is ideal for hot spot detection [26–29]. The liquid crystal commonly known as K-18, which has a transition at approximately 30°C, has become the standard choice, although liquid crystals with other transition temperatures are used for failures that occur at specific temperatures or for higher power devices. Spots on the order of a few microns are achievable, with power sensitivity on the order of hundreds of microwatts. The hot spots appear as dark regions in the liquid crystal, and a polarizing lens enhances the visibility of the spot. Applying the bias in a slow AC fashion allows some control of the spot size through the duty cycle, as well as enhanced visibility from the resulting "blinking" effect. Thermal sensitivity can be further improved by temperature control: ideally, the liquid crystal should be held at a temperature just below the transition point. This reduces the power required to
create an observable transition, effectively increasing sensitivity. Liquid crystal proved to be a powerful technique for typical leakages on 5 V devices. As the power dissipation that can be generated at a defect site has fallen with decreasing power supply voltages, thermal techniques have become somewhat less useful, although they remain a critical part of the suite of global failure site isolation tools. In addition, the reduction in feature size has made the relatively large spot observed inadequate for precise failure site isolation.

Fluorescent microthermal imaging (FMI) has also been used for hot spot detection [30,31]. A material with a high thermal coefficient of fluorescence, primarily EuTTA, is applied to the device much as liquid crystal is. Fluorescence maps are generated using charge coupled device (CCD) cameras, and the fluorescence image of the biased failing device is mathematically compared to the fluorescence image of the unbiased die on a pixel-by-pixel basis. Hot spots are indicated by areas of reduced fluorescent intensity.

Black body radiation measurements have also been used for many years to measure localized temperature. This approach was originally limited by spatial resolution and was used primarily for localizing heat on printed circuit boards. Its spatial resolution and thermal sensitivity have recently been improved, making it more applicable to die-level failure site isolation. The ability to use such techniques from the backside to observe thermal behavior in the active areas of devices has also contributed to greater use of this thermal measurement approach [32].
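The pixel-by-pixel comparison at the heart of FMI amounts to straightforward image arithmetic. A minimal sketch follows, using synthetic arrays in place of real CCD frames; the coating response and the detection threshold are illustrative assumptions.

import numpy as np

# Synthetic 64 x 64 fluorescence maps of a EuTTA-coated die. The
# coating's fluorescence falls with temperature, so a hot spot appears
# as a local dip in the biased image relative to the unbiased image.
rng = np.random.default_rng(1)
unbiased = 1000.0 + rng.normal(0.0, 5.0, (64, 64))
yy, xx = np.mgrid[0:64, 0:64]
hot_spot = 60.0 * np.exp(-((xx - 40) ** 2 + (yy - 20) ** 2) / 10.0)
biased = unbiased - hot_spot

ratio = biased / unbiased   # pixel-by-pixel comparison
hot = ratio < 0.97          # threshold chosen for illustration
ys, xs = np.nonzero(hot)
print(f"{hot.sum()} hot pixels centered near ({xs.mean():.0f}, {ys.mean():.0f})")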

29.2.4.2 Emission Microscopy

Emission microscopy was also developed [33] to address failure site isolation issues with double-level metallization. In general, light emitted due to electron-hole recombination and hot electrons is detected by a light amplifier, which maps the location of the light intensity [25,34,35]. The resulting emission image is overlaid onto an optical microscope image, precisely identifying the location of the emission and, in most cases, the defect site. It should be noted that a defect located away from an emission site can be the cause of the emission; for example, a metallization short can result in saturated transistors, which will emit at the saturated transistor but not at the site of the short. Emission microscopy has seen limitations due to the rapid increase in the number of metallization layers. Since metal is opaque, the increase in layers significantly reduces the observability of emission sites. Emission sites are, however, observable from the backside of a device, which makes emission microscopy a key tool for backside FA. Several factors are important in utilizing emission microscopes from the backside [35,36]. The first is the transmission of IR light through the silicon: for p-type material, the absorption of IR light increases rapidly with doping level, so higher substrate doping levels increase the importance of thinning. Another factor is important for emission microscopy from both the front and the back: stronger emissions have been reported in the IR region than in the visible region for many of the common emission mechanisms.

29.2.4.3 Charging Based Global Techniques

Two techniques based on charging the interconnects in CMOS circuits with an electron beam have been developed to observe open circuits: charge induced voltage alteration (CIVA) [37] and low energy charge induced voltage alteration (LECIVA) [38]. These techniques are predicated on the assumption that CMOS signal traces predominantly connect a source/drain contact to the gate of another transistor. Generally, any charge on such a trace will be bled off at the source/drain contact; however, if the line is open, the gate side will charge, altering the behavior of the transistor. A key feature of these techniques is operation of the device at a constant current while monitoring voltage changes, which facilitates measurement and makes the techniques more sensitive. Light induced voltage alteration (LIVA) [39] is a laser-based equivalent of CIVA.

29.2.4.4 Thermal Generation Based Global Techniques

Several techniques based on heating interconnects with a scanning laser beam have been developed. Optical beam induced resistance change (OBIRCH) [40,41] is a technique for the detection of resistive elements. It detects resistance changes in the metallization caused by localized heating: in areas without anomalies, heat spreads uniformly, effectively limiting the temperature rise. In an area with a
discontinuity, heating produces a greater change in resistance than in the continuous area. It is assumed that defects that create an open or high resistance in a line also reduce the thermal conductivity in that area. Since this is a laser-based technique, it is possible to perform OBIRCH from the backside using an IR laser. In any event, IR lasers (wavelengths with energies below the bandgap of silicon) are required for these thermal analysis techniques in order to prevent photocurrent generation. Seebeck effect imaging (SEI) has also been reported; it uses the Seebeck effect to generate a potential on the open portion of a net connected to a gate, and the effect is monitored with constant-current biasing while measuring voltage changes, similar to the xIVA approaches discussed above. Thermally induced voltage alteration (TIVA) [42] is used in a similar fashion to detect power consumption changes due to shorts in metallization; it is very similar to OBIRCH but uses the constant-current methodology to enhance sensitivity. Additional techniques using heating lasers have been developed to observe other electrical characteristics of the device, including techniques that map characteristics which are not simple measurements. A common example is the mapping of a pass-fail condition, which has been shown to be an effective method for resistive via isolation [43–45]. Many acronyms have been generated for somewhat similar approaches; the common feature is the monitoring of some electrical characteristic as the heating laser is rastered over the device under test.
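Whatever parameter is monitored, these laser-stimulation techniques share essentially the same acquisition loop, sketched below in Python. The instrument interface is a hypothetical stand-in (a simulated constant-current supply readback), not a real tool API, and the defect location and signal levels are invented for illustration.

import numpy as np

def supply_voltage(x_um, y_um):
    # Stand-in for the constant-current supply readback while the
    # heating laser dwells at (x_um, y_um); a resistive defect at
    # (40, 25) um responds to local heating. Values are illustrative.
    defect = 0.004 * np.exp(-((x_um - 40) ** 2 + (y_um - 25) ** 2) / 8.0)
    return 1.200 + defect + np.random.normal(0.0, 2e-4)

xs = np.arange(0.0, 80.0, 1.0)  # raster grid in microns
ys = np.arange(0.0, 50.0, 1.0)
vmap = np.array([[supply_voltage(x, y) for x in xs] for y in ys])

# The candidate defect is the pixel deviating most from the background.
iy, ix = np.unravel_index(np.argmax(np.abs(vmap - np.median(vmap))), vmap.shape)
print(f"candidate defect site: x = {xs[ix]:.0f} um, y = {ys[iy]:.0f} um")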

29.2.4.5 Carrier Generation Based Techniques (Semi-Global)

The generation of carriers in silicon can be used to modify the behavior of the circuit while in operation. When the carriers are generated well away from a junction, recombination is likely, with no net impact on the operation of the device. However, if the carriers are generated near a junction, they can produce a net increase in current flow, which can be monitored as a power supply current. Two sources of carrier generation have been employed: electron beam induced current (EBIC) [46] uses an electron beam, while optical beam induced current (OBIC) [47] uses a light beam, typically a laser. These techniques have not been broadly used on very large scale integration (VLSI) devices, for a number of reasons. They are not classical global techniques, since an understanding of circuit operation is required both to interpret the results and to determine the sites to irradiate. EBIC requires that the electron beam penetrate to the silicon in order to generate carriers, which becomes impractical as the number of layers of metallization increases. OBIC suffers similar disadvantages on VLSI devices because the metallization is opaque to light. However, OBIC may become more popular as a backside technique, since access to the active silicon areas is not blocked by opaque metallization from the back: light near the bandgap of silicon transmits through the substrate while retaining enough energy for carrier generation (see the worked numbers below). Progress in magnetic imaging has also made it possible to track currents in the device with adequate spatial resolution to identify the current path on a layout. Most of the activity in this area has been on package-level shorts (see below) and leakage, but it is finding increased application at the die level [48–50].
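For the optically stimulated techniques above, the choice of laser wavelength follows directly from the silicon bandgap:

\lambda_g = \frac{hc}{E_g} \approx \frac{1.24\ \mathrm{eV\,\mu m}}{1.12\ \mathrm{eV}} \approx 1.1\ \mathrm{\mu m}

Photons just above the gap energy (e.g., the 1.064 µm line commonly used for backside OBIC) generate electron-hole pairs while still penetrating the substrate usefully, whereas photons below the gap energy (e.g., near 1.3 µm, typical of OBIRCH/TIVA-style stimulation) pass through the silicon and deposit only heat.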

29.2.5 Probing

Probing is the oldest method of failure site isolation [51]. Generically, probing is the measurement of the electrical signals within an IC under test. It ranges from contact measurement with a mechanical probe to noncontact measurement with an electron beam probe or optical methods.

29.2.5.1 Mechanical Probing

Mechanical probing has remained a valuable tool in the FA of functional failures. Computer-aided probe placement has extended the life of the optical microscope-based mechanical probe into the sub-micron regime, although such probing is difficult. The rapid increase in typical device complexity has made mechanical probing a less desirable approach for isolating VLSI failure sites, due to the increase in
the number of signals that must typically be probed. The simultaneous decrease in feature size has increased both the difficulty of contacting a particular signal trace and the impact of loading on measurements. High impedance probes can overcome some of the loading issues; typically, field effect transistors (FETs) are used at the probe tip to provide the high effective impedance. Increases in device complexity have been addressed through the use of computer aided design (CAD) navigation (using the CAD database or layout with automated stage movement to "drive" the device to a predetermined location, or to precisely locate an anomalous event such as light emission) together with higher resolution stages to assist in locating the nodes that must be probed. Once the defect has been localized, it is essential to be able to track that location through subsequent analysis processes; relocating the identified defective areas of a VLSI device in other failure analysis tools is a time-consuming task without CAD navigation. Contacting difficulties have been partly alleviated by computerized probe control and by migration from optical microscopes to scanning electron microscope (SEM)- and FIB-based probe systems, which improve resolution and depth of field. In addition, atomic force microscope (AFM)-based methods have been developed, and probe tip radii have become smaller to allow contacting of finer pitch metallization. For all of these advancements, mechanical probing of a large number of nets remains a tedious and difficult task.

In spite of the problems, mechanical probing remains a very important part of the failure analyst's arsenal of tools. It is particularly useful for isolating problems that require precise measurement of DC voltages. This type of problem arises regularly in the analysis of analog devices under power and in the characterization of failed circuit components. In addition, glitches and single-shot events are not readily detected by electron or optical beam probing, since those techniques function primarily as sampling oscilloscopes. Mechanical probing is also an invaluable tool in the characterization of failed components of an integrated circuit. With current technologies, an increase in the number of failures resulting from transistor parameter shifts or transistor process margin has led to increased interest in probing transistors after partial deprocessing [52,53]. An alternative for topside probing is the scanning probe microscope for electrical measurements: localized measurements of electrical properties such as capacitance and spreading resistance are possible with AFM derivatives, and comparisons of doping profiles and detection of leaky or open contacts are among the many capabilities [52–54].
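The loading problem mentioned above can be put in rough numbers. For a node driven through an effective resistance R_drive, the probe capacitance adds directly to the node time constant (the values below are purely illustrative):

\tau = R_{drive}\,(C_{node} + C_{probe})

With R_{drive} = 10\ \mathrm{k\Omega} and C_{node} = 50\ \mathrm{fF}, the unprobed node settles with \tau \approx 0.5\ \mathrm{ns}; a conventional passive probe with C_{probe} \approx 10\ \mathrm{pF} slows this to \tau \approx 100\ \mathrm{ns}, obliterating the signal, while an active FET probe at C_{probe} \approx 0.1\ \mathrm{pF} gives \tau \approx 1.5\ \mathrm{ns}, a far smaller perturbation.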

29.2.5.2 Electron Beam Probing

Electron beam probing provides an essentially load-less probe with high bandwidth. E-beam probe capability covers a broad range of applications. In its simplest form, it is the qualitative observation of high and low voltages, reflected in dark and bright conductors, respectively. Most e-beam effects are best observed at low electron beam energies (typically about 1 kV); in this range the surface remains nearly electrically neutral, i.e., incident electrons absorbed at the surface are offset by secondary and backscattered electrons leaving the surface. With the addition of AC device operation, dynamic voltage contrast can provide information about AC performance at various nodes. By adjusting the SEM sweep rate and the frequency of device operation, a striped appearance of the AC-biased portions of the circuit is achieved; hence, it is possible to determine qualitatively where the device is toggling and where it is not operating at the appropriate frequency. Other enhancements to the basic voltage contrast effect include image processing capability and quantitative measurement of voltages and waveforms.

E-beam probe systems are used routinely as noncontact sampling oscilloscopes. Operating as a sampling oscilloscope also further offsets the impact of surface charging. As with any sampling oscilloscope, a triggering input is required, together with a repetitive pattern loop. Image processing enhancements for the most part use comparisons of good and bad devices operating in the same mode; image subtraction identifies areas where different voltage levels occur on the good and the bad device. Operation of systems from workstations also allows direct access to design databases and their use for navigation around the circuit. Modern systems employ CAD navigation to quickly locate nodes and measure waveforms or extract timing information for all of the techniques discussed. Measurement is similar to a digitizing oscilloscope, which means that all
of the triggering and looping issues associated with a digitizing oscilloscope are factors in electron beam probing as well.

Probe point extraction has become an issue as the number of levels of metallization has increased. Historically, electron beam signals could be measured by capacitive coupling through one, and very occasionally more, layers of dielectric; probing through passivation, or probing the top metal layer minus one on depassivated devices, was routinely possible. A second useful approach to probing underlying metallization has been to anisotropically remove the dielectric covering all layers of metallization. This approach is limited by the spacing of the overlying metallization for both mechanical and electron beam probing: mechanical probing is limited by the spatial restriction of getting a probe tip through the upper metallization, and electron beam probing is limited by cross talk from the upper metallization. In addition, large-scale removal of dielectric has a significant impact on the parasitic capacitance within the IC, and hence on device operation, particularly speed.

The capabilities of the FIB are ideally suited to bringing points to the surface for either mechanical or electron beam probing. The FIB can generate high aspect ratio holes down to the underlying metallization, and its metal deposition feature can be used to fill the hole and create a small pad for mechanical or e-beam probing. The FIB can also be used to mill holes from the backside on flip-chip devices. These processes are often facilitated by the more rapid chemically enhanced laser removal.

There are several limitations for probing on leading edge technologies. The first is the increase in the number of metallization layers, which reduces the accessibility of nodes even with FIB point extraction. The second is the increased use of flip-chip attachment, which makes at-speed probing impractical except from the backside, due to the difficulty of providing an electrical stimulus. A third factor, which particularly impacts electron beam probing, is cross talk [51].
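The sampling-oscilloscope principle behind e-beam waveform acquisition is easy to demonstrate: one sample is taken per repetition of the pattern loop, at a delay that advances slowly relative to the trigger, so a fast waveform is reconstructed from slow measurements. The sketch below is illustrative only; the loop period, delay step, and probed waveform are invented.

import numpy as np

T_LOOP = 1e-6      # pattern loop period, s (one trigger per loop)
DELAY_STEP = 1e-9  # delay advance per trigger, s
N = 1000

def node_voltage(t):
    # Stand-in for the probed node: a 5 MHz square wave.
    return 3.3 if (t * 5e6) % 1.0 < 0.5 else 0.0

# The nth sample is taken n loops after the start, n*DELAY_STEP after
# the trigger; plotting samples against delay reconstructs 1 us of
# waveform at 1 ns equivalent-time resolution.
delays = np.arange(N) * DELAY_STEP
samples = np.array([node_voltage(n * T_LOOP + d) for n, d in enumerate(delays)])
print(samples[95:106])  # shows the falling edge at a 100 ns delay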

29.2.5.3 Isolating the Component

Mechanical probing also remains a powerful tool for the characterization of isolated failed components. Electrical characterization of failed components is often critical to understanding the root cause of a failure. This may take the form of transistor leakage measurement, measurement of the current drive of a transistor, verification of a high via resistance, or verification of a metal open or short. True isolation of a failed component may require cutting conductors to isolate the failed component from the remainder of the integrated circuit. It may also entail making probe contacts to conductors on different thin film layers.

A wide variety of tools have been developed for severing conductors in order to allow isolation of failure sites. The classical method is laser ablation. Its primary advantages are relatively low cost, speed, ease of use, and reasonable control of damage. Severing broad stripes may be difficult due to heat sinking by the metal, but nibbling at the broad stripe can be effective. Spatial resolution into the 0.5 µm regime has been demonstrated, although finer resolution is clearly preferred for most deep sub-micron work. Ablative laser systems provide an additional capability for removing selected areas of dielectric in order to open contact to metallization. In cases where failed components recover during deprocessing, parallel polishing may also be employed to expose the conductors for mechanical probing. In fact, for deep sub-micron technology, the preferred methodology is to polish down to a contact or via level to expose the contacts or vias for direct probing; slight etching of the dielectric is commonly used to make the contact or via rise slightly above the flat surface.

The FIB provides a wealth of capability for such probing. It cuts conductors, allowing component isolation. It provides selected-area deprocessing of both dielectrics (to expose conductors) and metallization (e.g., to work through buses). It creates probe points by drilling high aspect ratio holes to a conductor, filling the hole with metal, and expanding the top to form a probe point [22,55,56], from either the topside or the backside. In FIB, a gallium ion beam is rastered across a sample, sputtering the surface away. The diverse applications of FIB have largely arisen from the use of gases bled onto the sample during beam rastering. The earliest gases were organometallic compounds of tungsten and platinum, which the gallium beam decomposed to form conductors. Combined with the natural cutting capability of the FIB, this allowed the rewiring of circuits, which has become a critical part of design debug: it allows repair of identified design problems.

TABLE 29.4 The Various Gas Enhanced FIB Processes Are Summarized

Organometallics:
  Application: metal deposition
  Typical uses: metal interconnects for rewiring; sacrificial layer to avoid rounding near the top surface of a cross-section
Tetraethylorthosilicate (TEOS):
  Application: oxide deposition
  Typical uses: dielectric to allow more complex rewiring
Halogens:
  Application: enhanced dielectric etch
  Typical uses: passivation removal, e.g., over areas of buses; high aspect ratio vias for rewiring
Xenon difluoride:
  Application: enhanced metal etch
  Typical uses: metal removal
Water:
  Application: enhanced organic etch
  Typical uses: sectioning of photoresist; selected-area removal of polyimides
Proprietary etch processes:
  Application: copper and low-k dielectrics
  Typical uses: advanced material removal

This allows debug to continue on the repaired devices without processing additional silicon through a wafer fab, drives high confidence in redesign success, and has allowed customers to obtain prototype samples much more quickly. Recently, the addition of oxide deposition capability has greatly expanded the potential for rewiring. Enhanced etching (see Table 29.4) has also improved the capabilities for device repair. In enhanced etching, gases bled onto the sample react with the surface to form volatile compounds, which can be removed much more rapidly than simple sputtering allows, so etching is enhanced. These gases also provide a level of selectivity, since they may react with one material to generate a gas and not with another. The FIB is also a high-resolution cross-section tool, as described below.

29.2.5.4 Backside Probing

As flip-chip has become more popular, backside acquisition of signals has become a requirement. The first approach developed was to use a FIB to mill a backside hole into field oxide areas and to use e-beam probing of the exposed node (or, alternatively, mechanical probing of a FIB-constructed probe pad). This method suffers significant limitations in the time required to create the FIB holes and in the number of nets that can be accessed without damaging transistors.

Measurement of waveforms with the laser voltage probe (LVP), an optical probe sensitive to changes in refractive index with electrical potential based on the Franz-Keldysh effect, has been developed [51,57]. The LVP has a number of advantages: since measurements are made at the transistor rather than in the metallization network, all AC-active nodes are equally accessible (see Table 29.5 for a comparison of optical beam probing with existing techniques). The key limitation is that the incident photon beam must be focused to as small a spot as possible; diffraction limits this to a spot that is becoming significantly larger than a transistor. Since the shifts in the optical properties of the silicon are small, signal to noise is a major concern, and improvements in signal processing have increased the efficiency of this approach.

Another approach is an emission microscopy-based technique [58], picosecond imaging circuit analysis (PICA), which detects the light emissions that occur during the changing of state of CMOS transistors. This is a time-resolved emission microscopy approach, using the faint but normal emissions that occur due to hot carriers during state-to-state transitions in CMOS devices. Improvements in the efficiency of localized photon detectors have led to the development of single-point techniques, which allow timing measurements at a single point with high efficiency; the achievable optical detection efficiency is much higher for the single-spot approach. This approach has been integrated into systems that can measure timing on devices [59,60]. It is significant that while the other techniques acquire true waveforms, the single-point PICA approach provides timing information, i.e., an emission peak is observed during the time the transistors are transitioning between states. Thus the timing of the transition is known, but the shape of the rising and falling edges is not. The spatial resolution is limited by the ability to exclude light from adjacent circuitry.
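Single-point PICA timing extraction is essentially time-correlated photon counting: photon arrival times relative to the loop trigger are histogrammed over many repetitions, and the emission peaks mark the switching instants. A minimal sketch with synthetic photon data follows; the switching times, burst width, and count levels are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)
# Suppose the observed node switches 2.0 ns and 7.0 ns into a 10 ns
# pattern loop; each transition emits a faint photon burst ~50 ps wide
# on top of a uniform dark-count background.
events = np.concatenate([
    rng.normal(2.0e-9, 50e-12, 400),
    rng.normal(7.0e-9, 50e-12, 400),
    rng.uniform(0.0, 10e-9, 200),
])

counts, edges = np.histogram(events, bins=100, range=(0.0, 10e-9))
centers = 0.5 * (edges[:-1] + edges[1:])

hot = np.flatnonzero(counts > 20)  # bins well above the ~2-count background
groups = np.split(hot, np.flatnonzero(np.diff(hot) > 1) + 1)  # adjacent bins
for g in groups:
    t = np.average(centers[g], weights=counts[g])
    print(f"transition at ~{t * 1e9:.2f} ns")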

TABLE 29.5 Probing Technologies Are Compared Relative to Key Issues in Probing

Contact:
  Mechanical probe: mechanical; requires direct contact to the conductor
  Electron beam probe: non-contact; capacitive coupling allows probing of underlying conductors
  Optical beam probing: non-contact
  Single point PICA: non-contact; no incident beam used
Loading:
  Mechanical probe: capacitive and resistive load, minimized by high impedance probes
  Electron beam probe: none
  Optical beam probing: IR light may provide some electron-hole pair generation
  Single point PICA: none
DC accuracy:
  Mechanical probe: very high; can be enhanced by Kelvin probe techniques
  Electron beam probe: poor
  Optical beam probing: poor
  Single point PICA: poor
AC accuracy:
  Mechanical probe: need to be aware of loading effects
  Electron beam probe: limited by bandwidth but not by loading effects
  Optical beam probing: limited by bandwidth but not by loading effects
  Single point PICA: rise and fall times are difficult to obtain
Node accessibility:
  Mechanical probe: issues with probe placement and access to underlying nodes
  Electron beam probe: probe points required for extracting buried signals
  Optical beam probing: all signals accessible; limited by laser spot size
  Single point PICA: all signals accessible

An additional limitation of increasing importance is the weakness of emission at voltages below 1 V. The IR imaging of both LVP and single-point PICA is improved by the use of silicon immersion lenses (Table 29.5).

29.3 Physical Analysis Tools

Physical analysis tools are those used to identify the physical location and characteristics of the defect responsible for the electrical failure observation. They include a broad range of sample preparation and observation or microscopy techniques.

29.3.1 Package Analysis

Two very important and complementary techniques are used in the analysis of package defects: x-ray imaging and scanning acoustic microscopy (SAM) [61,62]. Packaging-related defects typically manifest themselves as continuity or leakage failures; for this reason, continuity and leakage failures are commonly analyzed with x-ray [63] and SAM [64]. In addition, SAM is commonly performed on all surface-mount devices that have been through a board assembly process, typically vapor phase or IR reflow. X-ray provides an excellent method for observing the electrical connections within a package; typically, the metal elements of the package can be clearly delineated. The exception is aluminum bond wires, which are difficult to distinguish because of the low atomic number of aluminum. While x-ray provides visibility into the metal interconnections, SAM provides information about the adhesion of various interfaces within the package, such as the mold compound to die and mold compound to leadframe interfaces.

Two additional tools play key roles in the analysis of failures in complex packages. Time domain reflectometry (TDR) is used in the analysis of open circuits; this technique measures the time for an electrical pulse to be reflected back to a package pin or ball [65] (see the worked example below). Similarly, the scanning magnetic microscope shows promise for the analysis of shorted devices by allowing the mapping of current through the defect path. As pointed out above, this technique can also be applied to die-level shorts and leakage, and a variety of magnetic sensors have been developed [48–50].
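A rough worked example of the TDR distance calculation, assuming an effective dielectric constant of \varepsilon_{eff} \approx 4 for the package routing (an illustrative value; real substrates vary):

v = \frac{c}{\sqrt{\varepsilon_{eff}}} \approx \frac{3\times10^{8}\ \mathrm{m/s}}{2} = 1.5\times10^{8}\ \mathrm{m/s}, \qquad d = \frac{v\,t}{2}

A reflection returning t = 40 ps after the incident edge then places the open at d \approx 3 mm along the trace. In practice the failing waveform is compared with that of a known good unit, which helps separate package-level opens from die-level ones.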

29.3.2 Deprocessing

There are three important approaches to exposing and observing defects in ICs: deprocessing, parallel polishing, and cross sectioning. Deprocessing [66–70] is the chemical removal of the various thin films formed in the wafer fabrication process in order to expose the wafer fabrication defect of interest. The order of layer removal is the reverse of the order of application during fabrication. The materials removed can be divided into three categories: metallization, dielectrics, and silicon. The processes used for deprocessing are typically rather similar to those used in the wafer fab to remove the same thin films; while chemically similar, however, there are marked differences. Typical wafer fab processes can use a significant overetch because their endpoint is determined by selectivity: a metallization etch, for example, will stop at the dielectric under the metallization, since no underlying metallization is left exposed. In the comparable FA deprocessing step, the metallization etch must be selective to the underlying dielectric as above, but it must also stop in the vias.

Metallization removal has historically been performed using wet chemicals where possible. Wet chemical metallization etches are generally highly selective with respect to dielectrics. The use of wet chemicals depends on some blocking mechanism to prevent etching of underlying metallization layers through the vias; this is the case where barrier-adhesion layers are used as part of the metallization structure, or where dissimilar metals such as tungsten plugs are used in the vias. In cases where etching of the underlying metallization is not blocked, either parallel polishing or plasma-based metallization etching must be used. Plasma-based chemistries for aluminum metallization are normally chlorine based, which adds significantly to cost and safety concerns.

For dielectrics, plasma-based removal has been the method of choice for many years. In general, etch endpoints must be time based rather than selectivity based, because the deprocessing etch must stop on both the metallization and the underlying dielectric. Early plasma applications used both barrel and planar type plasma etchers. As metallization traces became narrower and undercutting became an issue, anisotropy became a requirement as well; reactive ion etching (RIE) of dielectrics provides this greatly enhanced anisotropy. In recent years, magnetically enhanced RIE and inductively coupled plasma (ICP) processes have also become popular, improving control of the etch process and reducing the chance of damage to the device being deprocessed.

29.3.3 Parallel Polishing

With the poorly defined endpoints of deprocessing, particularly dielectric deprocessing, and the increased number of thin film layers to remove, it has become significantly more difficult and time-consuming to fully deprocess devices. These factors, along with the difficulty of avoiding artifacts during deprocessing, have led to increased use of parallel polishing. Parallel polishing is the mechanical removal of the thin films in order to expose the wafer fabrication defect that has caused the failure. Frequently, parallel polishing is used for the preliminary layer removal, followed by deprocessing, when the defect is expected to occur at a specific level. For example, emission microscopy may indicate a transistor-level defect, so that all of the metallization and interlevel dielectrics can be removed before deprocessing begins. Maintaining parallelism across a large die is difficult; this makes good failure site isolation critical for parallel polishing, since maintaining parallelism over a small isolated area is much easier than over a large one.

29.3.4 Cross-Section Analysis

A powerful alternative to deprocessing is cross-section analysis [71]. It is particularly useful when the position of the defect is precisely known. Cross-sections tend to be more definitive about the wafer fab process step during which the defect occurred; deprocessing can distort the location and composition of defects by creating defect replicas in underlying materials. As has been observed with many of the FA tools and techniques discussed above, the steady reduction in feature size has led to
more stringent demands on spatial resolution for defect inspection and identification, and this has clearly been the case with cross-section preparation. Early cross-sections were prepared by encapsulation in clear plastic followed by sawing, grinding, and polishing on standard metallurgical tools. This process continues to be used for assembly process defects such as package cracking, bond intermetallics, and die attach integrity. Cleaving and fracture along crystallographic planes also remains a useful tool: as a technique for preparing cross-sections of a general area, such as memory arrays, or of features on the order of a few microns, it remains quick and efficient. Precision on the order of a micron has been achieved with automated cleavers, which include a capability for liquid nitrogen cleaving that reduces smearing of metallization and other ductile materials.

Improved precision has been obtained with unencapsulated sectioning techniques. In this approach, the sample is attached with wax to a stub, which in turn is attached to a block supported on a Teflon bar. This assembly is mounted on a polishing wheel, where the sample is ground away or polished. The original use of this approach employed a lime glass wheel as the abrasive; polishing on a variety of abrasives, to improve the finish and to slow sample erosion, is part of the process as well. Features as small as a few tenths of a micron can be routinely cross-sectioned by this method.

However, FIB cross-sectioning has become the dominant cross-section preparation method for deep sub-micron features. The cross-section is prepared by sputtering or milling away a large box near the feature of interest; the box is typically terraced, with the deep end nearest the feature. The section face is then milled with decreasing beam current to create a polished surface that can be inspected by SEM or FIB. Dual column FIB systems, with SEM and FIB in the same instrument, provide a great improvement in cross-sectioning efficiency. Blanket ion mills provide an efficient and low-cost alternative for less precise cross-sections: a mechanical barrier is used to stop the ion beam in areas that are to be preserved, and the exposed areas are etched away very quickly.

29.3.5 Microscopy

A broad array of physical characterization techniques and tools is employed at the various phases of FA. These are generally non-destructive techniques used to observe physical features of the die or package, and they include microscopy of various types for the inspection and documentation of both deprocessed and cross-sectioned devices [72]; also see the chapter on Characterization. As feature sizes are reduced, the minimum size of killing defects (i.e., defects that cause electrical failures) is also reduced, and the spatial resolution of physical analysis tools, as well as chemical analysis tools, must be adequate to address the smaller sizes. In general, FA must be able to deal with defects on the order of 10%–20% of the minimum feature size; on-line tools more typically must deal with defects on the order of 30%–50% of the minimum feature size. Smaller defects are commonly dealt with using more advanced tools in a research environment.

Optical microscopy remains an integral part of the FA toolkit. It retains one significant advantage over other inspection tools: defects can be viewed through transparent dielectrics. Other significant advantages are ease of use and straightforward interpretation. While brightfield and, to some extent, darkfield applications are the dominant uses of the optical microscope in FA, other modes find useful niches. For example, interference contrast is particularly useful in the documentation of crystalline defects after deprocessing to silicon and decoration. Fluorescence microscopy can prove useful in the evaluation of contamination, for example in observing the spreading of an organic contaminant; it also provides the basis for the FMI technique discussed above. Polarized light microscopy finds applications in such areas as enhancing the visibility of liquid crystal transitions.

Confocal microscopy provides a method for approaching diffraction-limited resolution in optical microscopy, though with minimal depth of field; it approaches the physical limits of optical spatial resolution through the use of a pinhole aperture. Spinning disc configurations and computer addition of images at various heights allow the merging of images from different focal planes into a composite with nearly diffraction-limited resolution and effectively higher depth of field. Spinning disc systems provide real-time imaging by using a large number of pinholes. Single pinhole confocal
microscopes are not real time, due to the time required to scan in the z direction while grabbing images. These capabilities are further enhanced by the use of lasers in the laser scanning microscope (LSM), which improves acquisition times by increasing signal levels. In addition to enhancing the optical microscope, the LSM provides an ideal platform for optically based global failure site isolation techniques such as OBIC, LIVA, OBIRCH, and TIVA, discussed above.

Infrared microscopy finds a growing range of applications that depend on the IR transparency of silicon and other semiconductors. It is a vital element of backside analysis, providing the only current technology for seeing through a polished silicon surface. The infrared microscope differs in several significant ways from a conventional optical microscope. Since IR light is not visible, it cannot be observed directly; a converter must be used to change the IR image into a viewable format. The resolution of the IR microscope is poorer than that of a comparable optical microscope because the wavelength of the light used is longer: the theoretical limit is on the order of 0.5–0.6 of the wavelength, which is on the order of 0.3 µm for visible light and worse for IR. The use of silicon immersion lenses can improve the spatial resolution of IR microscopy by about a factor of three. Also, colors or variations with wavelength do not add significantly to the interpretation of IR region images. However, many of the optical microscope techniques, such as the use of lasers and confocal microscopy, remain applicable. Ultraviolet (UV) light microscopy has been developed with much better spatial resolution: since the wavelength of the light is shorter, a better theoretical resolution is possible. Since the transparency of the common dielectrics, silicon oxide and silicon nitride, extends into the UV region, UV microscopy provides a useful extension of the optical microscope. Like the IR microscope, it requires a converter, since the UV image is not directly observable. Monochromatic UV light is normally used to reduce chromatic aberration.

The usefulness of the optical microscope, however, continues to diminish, as the minimum feature size has passed the diffraction limit for the wavelengths of visible light. This has driven a marked trend toward greater SEM utilization relative to optical microscopy. The trend is in large part due to the resolution and depth of field limitations of the optical microscope, which come into play with smaller geometry devices and an increased number of layers; it also reflects the increased ease of use of the SEM, particularly the field emission scanning electron microscope (FESEM), which has become dominant in the semiconductor industry. The FESEM has become an essential tool for FA. Its very high magnification capability is essential for documenting defects in deep sub-micron processes, and the SEM provides an excellent depth of field for low magnification inspections such as bond wires. In addition to its value as a microscope, the FESEM provides the platform for other e-beam techniques such as backscattered electron imaging, electron beam probing, EBIC, and x-ray microchemical analysis. As killing defect sizes continue to shrink, observation and documentation of these defects require higher magnification and better resolution.
This requirement has fueled the transitions from optical microscopes to thermionic emission SEMs to FESEMs, and it now drives greater use of higher resolution options. These requirements are filled in part by higher resolution FESEMs, commonly referred to as in-the-lens systems, in which the sample is located within the electron lenses. Unfortunately, this arrangement severely limits sample size. The transmission electron microscope (TEM) and the AFM also provide tools that fill portions of the resolution gap, particularly the TEM, whose use has grown dramatically in recent years. The derivatives of the AFM [73,74] are commonly referred to as scanning probe microscopes and include a broad range of measurement capabilities, including voltage [58] (electric field), current (magnetic field), and capacitance [59], as described above. In addition, near-field optical microscopy can provide a limited optical approach to microscopy that is not diffraction limited [60,75].
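The diffraction-limit arithmetic behind the figures quoted above (a resolution of roughly 0.5–0.6 of the wavelength, about 0.3 μm for visible light, and an approximately threefold gain from silicon immersion lenses) can be illustrated with a short calculation. The Python sketch below is illustrative only; the numerical apertures are assumed, typical values rather than figures taken from this chapter.

```python
# Rayleigh-criterion resolution estimates for the microscopies discussed above.
# Illustrative sketch: wavelengths and numerical apertures are assumed values.

def rayleigh_resolution_um(wavelength_um, numerical_aperture):
    """Minimum resolvable separation, r = 0.61 * lambda / NA."""
    return 0.61 * wavelength_um / numerical_aperture

NA = 0.9  # assumed high-quality dry objective

for name, wl_um in [("UV", 0.25), ("visible", 0.50), ("near-IR through Si", 1.10)]:
    print(f"{name:20s} lambda = {wl_um:.2f} um -> r ~ "
          f"{rayleigh_resolution_um(wl_um, NA):.2f} um")

# A silicon solid immersion lens multiplies the effective NA by up to the
# refractive index of silicon (n ~ 3.5), consistent with the roughly 3x
# improvement in IR spatial resolution noted in the text.
n_si = 3.5
effective_na = min(NA * n_si, n_si)  # effective NA cannot exceed n
print(f"near-IR with Si SIL  -> r ~ {rayleigh_resolution_um(1.10, effective_na):.2f} um")
```

The visible-light case evaluates to roughly 0.34 μm, matching the 0.3 μm figure quoted above, while the 1.1 μm IR case is more than a factor of two worse until the solid immersion lens recovers the difference.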

29.3.6 Transmission Electron Microscopy

The need for better resolution has led to more widespread use of the TEM for FA. The TEM is, however, limited to some extent by sample preparation. TEM cross-section sample preparation techniques can be grouped into three categories: standard general-area approaches, specific-area approaches, and FIB approaches, which now dominate the sample preparation arena.


In addition, plan-view TEM sample preparation is becoming more popular. The classical approach to TEM sample preparation has been the thinning of a stack of devices glued together, followed by dimpling and ion milling. This approach is used to observe general processing features, such as layer thicknesses. Several approaches for mechanically polishing a specific area of interest from both sides have been developed as well. Although these approaches can be very time consuming, specific-area sections are essential for defects such as resistive via contacts. More rapid sections can be prepared using the FIB, which can mill away material from both sides of a defect, producing the thinned section required for TEM viewing. The only serious limitation of FIB sample preparation is ion beam damage to the sample from gallium ion implantation. Techniques have been identified to reduce this effect, but it is not eliminated.

29.4 Chemical Characterization

Contamination continues to be a primary cause of failures in ICs. It occurs primarily in the form of particles, but may take many other forms, such as spots from evaporation or co-implanted contaminants. The total levels of contamination are normally exceptionally small and concentrated in a very small volume. When selecting a method of analysis (also see the chapter on Characterization), several factors are critical: the spatial resolution of the technique, the volume of material to be analyzed, the sensitivity of the technique to the contaminant and background elements, the type of chemical information provided by the technique, and the ease of performing the analysis.

Most analytical techniques can be viewed in terms of four elements: the incident radiation; the physical interaction of the incident radiation with the sample; the radiation flux from the sample that results from that interaction, either a new form of radiation excited by the incident radiation or an attenuation of the incident radiation; and a detector for the radiation flux from the sample.

Spatial resolution is an exceptionally important requirement for analysis in semiconductors, since very small particles must be analyzed. In an FA environment, it is important to be able to perform analysis on particles that are 10%–20% of the minimum feature size. In general, the spatial resolution of a technique is dominated by the spot size of the incident radiation and, to some extent, by the dispersion of the incident radiation in the sample, as in the case of characteristic x-ray analysis. The requirement to accurately focus the incident radiation tends to make charged species, particularly electrons, most useful as incident radiation for high-resolution applications, because charged particle beams, particularly electron beams, are readily focused. However, the use of lasers and of new focusing techniques for electromagnetic radiation is bringing other techniques into prominence.

The depth of analysis is controlled by one of two factors: the depth of penetration of the incident beam, or the probability that the radiation flux from the sample escapes without inelastic loss as a function of sample depth. For example, the depth of electron beam penetration controls the x-ray analysis depth, whereas the depth of analysis for Auger, using the same incident radiation, is controlled by the escape cross-section of the Auger electron as a function of depth in the sample.

Sensitivity tends to be dominated by several factors, which can be viewed as impacting the signal-to-noise ratio of the technique. The first is the cross-section of the interaction of the incident radiation with the sample: it is very difficult to get high sensitivity from low cross-section interactions, since high cross-section events provide a higher signal level. The second factor is the detector and how efficiently it can collect the signal. This includes factors such as geometry, which determines the fraction of the signal that can be detected, and detector efficiency. It also includes the noise level of the detector and how it compares with the signal level.

The type of chemical information provided typically depends on the type of interaction between the incident radiation and the sample. There are basically two types of information obtained: atomic and molecular. Atomic (or elemental) information identifies which elements are present in the volume sampled. Molecular information defines, in some form, the chemical bonding between the elements and/or at least the oxidation state of the elements present.


TABLE 29.6 The Primary Qualities of Analytical Techniques Which Impact FA Are Listed for the Techniques Most Commonly Used

Technique                                  Spatial Resolution   Depth of Analysis   Sensitivity   Ease of Use/Interpretation   Data Type          Applications
Energy dispersive                          Good                 Bulk: 2–5 μm        Moderate      High                         Atomic             General purpose
Auger                                      Good                 Surface: 20–50 Å    Poor          Moderate                     Atomic             Adhesion problems
Secondary ion mass spectroscopy            Moderate             Surface             Very high     Low                          Atomic/molecular   Doping levels
Fourier transform infrared spectroscopy    Adequate             Varies              High          Moderate                     Molecular          Organic contamination

Primary benefit(s) of techniques are italicized.

As a generality, atomic information is most readily obtained from the interaction of the incident radiation with inner-shell electrons, while molecular information is best obtained from interactions that affect interatomic behavior, such as vibrational or rotational frequencies (Table 29.6).
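As a rough illustration of the signal-level reasoning above, the sketch below estimates detected counts from the incident dose, the interaction cross-section, and the detector collection. Every numerical value in it is an assumption chosen only to show the bookkeeping, not a figure from this chapter.

```python
# Back-of-the-envelope detected-signal estimate, illustrating how the
# interaction cross-section and the detector collection dominate sensitivity.
# All numbers are assumptions chosen for illustration only.
import math

E_CHARGE_C = 1.602e-19  # electron charge, coulombs

def detected_counts(beam_current_a, dwell_s, areal_density_cm2,
                    cross_section_cm2, solid_angle_sr, detector_efficiency):
    """counts ~ (incident electrons) x (events per electron) x (collected fraction)"""
    incident = beam_current_a * dwell_s / E_CHARGE_C
    events_per_electron = areal_density_cm2 * cross_section_cm2
    collected = (solid_angle_sr / (4.0 * math.pi)) * detector_efficiency
    return incident * events_per_electron * collected

counts = detected_counts(
    beam_current_a=1e-9,      # 1 nA probe current (assumed)
    dwell_s=100.0,            # acquisition time (assumed)
    areal_density_cm2=1e15,   # analyte atoms per cm^2 in the sampled volume (assumed)
    cross_section_cm2=1e-20,  # excitation cross-section, order of magnitude (assumed)
    solid_angle_sr=0.05,      # detector solid angle (assumed)
    detector_efficiency=0.9,  # assumed
)
print(f"expected signal ~ {counts:.2e} counts")  # compare with the detector noise floor
```

Halving the cross-section or the solid angle halves the expected counts, which is the sense in which low cross-section interactions and poor collection geometry limit sensitivity.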

29.4.1 X-Ray Analysis (Energy Dispersive)

Energy dispersive x-ray spectroscopy (EDS or EDX) remains the mainstay chemical analysis tool for FA. Its primary advantage is that it is used in conjunction with a SEM, which is already available in the FA laboratory. It has excellent spatial resolution, since the SEM focuses the electron beam, and information can be obtained quickly and interpreted easily by the analyst. The capability to move the electron beam also provides the ability to do dot mapping and line scanning. The depth of beam penetration is typically several microns; since the escape cross-section for x-rays is high, material down to that depth in the sample contributes to the analysis. This makes the surface a relatively small part of the sampled volume and limits the value of x-ray analysis for very thin surface films. The sensitivity of the technique is on the order of 0.1%–1% for most elements, but poorer for light elements. The primary disadvantages of the technique are this limited sensitivity, especially for light elements, and limited applicability to surface analysis. These do not, however, keep it from being the primary analytical technique for FA.

Wavelength dispersive (WD) analysis detects the same interaction as energy dispersive analysis; hence, the type of data collected is identical. However, the different method of detection improves the resolution of the x-ray peaks and yields an improved signal-to-noise ratio, resulting in better sensitivity. The drawback is that the analysis time is long relative to energy dispersive analysis. Microcalorimeters are likely to supplant EDS detectors, providing energy resolution on a par with WD together with many of the good characteristics of EDS [76].
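The several-micron sampling depth quoted above can be estimated with the widely used Kanaya–Okayama electron range formula. The sketch below applies it to silicon; the beam voltages are assumed, typical operating points rather than values from this chapter.

```python
# Kanaya-Okayama estimate of the electron penetration range, which sets the
# sampling depth for EDS x-ray analysis. Illustrative sketch; the beam
# voltages are assumed typical operating points.

def kanaya_okayama_range_um(e_kev, atomic_weight, atomic_number, density_g_cm3):
    """R = 0.0276 * A * E^1.67 / (Z^0.89 * rho), result in micrometers."""
    return 0.0276 * atomic_weight * e_kev**1.67 / (atomic_number**0.89 * density_g_cm3)

# Silicon: A = 28.09 g/mol, Z = 14, rho = 2.33 g/cm^3
for e_kev in (5, 10, 20, 30):
    print(f"{e_kev:2d} keV beam -> range ~ "
          f"{kanaya_okayama_range_um(e_kev, 28.09, 14, 2.33):.1f} um in Si")
```

At a typical 20 kV accelerating voltage the estimate is close to 5 μm in silicon, consistent with the "several microns" figure above, and it also shows why lowering the beam voltage is the usual way to make EDS somewhat more surface sensitive.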

29.4.2 Auger

Auger analysis is also an electron beam technique. The observed radiation is Auger electrons, which result from a two-electron process. Since the probability of escape from below the surface without inelastic loss is low, Auger electrons are detected only from the top 10–30 Å. Layers below the surface can be analyzed by sputter etching away the surface layers, typically in 50–500 Å increments; depth profiling is then possible by analysis of the sequential spectra. In addition, some limited molecular information can be obtained from shifts in the energy of the Auger electrons with oxidation state. The primary limitation of Auger is low sensitivity. For surface analysis, it is a significant improvement over x-ray analysis, since the volume of material analyzed is the surface only.


Auger application is best in the detection and thickness estimation of thin-film contamination; thus, Auger is an ideal technique for studying delaminations and disadhesions. It is commonly used to detect contamination on a bond pad. Great caution is required in the interpretation of data, particularly initial data, which frequently identifies only material adsorbed on the surface. The use of controls (known good material) should be a standard part of all but the most routine Auger analyses.
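Since the depth scale of a sputter profile is rarely measured directly, it is normally reconstructed from a sputter rate calibrated on a reference film. A minimal sketch of that bookkeeping follows; the sputter rate and step time are assumed values chosen to match the 50–500 Å increments mentioned above.

```python
# Reconstructing the depth scale of an Auger sputter depth profile.
# Minimal sketch assuming a constant, pre-calibrated sputter rate; the rate
# and step time below are assumed values.

SPUTTER_RATE_A_PER_S = 2.0  # Angstrom/s, calibrated on a reference film (assumed)
STEP_TIME_S = 50.0          # sputter time between spectra -> 100 Angstrom steps

def depth_scale_angstrom(n_spectra):
    """Depth assigned to each sequential spectrum after cumulative sputtering."""
    return [i * SPUTTER_RATE_A_PER_S * STEP_TIME_S for i in range(n_spectra)]

for i, depth in enumerate(depth_scale_angstrom(6)):
    print(f"spectrum {i}: depth ~ {depth:5.0f} Angstrom")
```

In practice the sputter rate is matrix dependent, so a depth scale calibrated on one film is only approximate once the profile crosses into a different material.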

29.4.3 Secondary Ion Mass Spectroscopy

Secondary ion mass spectroscopy (SIMS) is a very powerful FA tool when high sensitivity is required. The technique uses an incident high-energy ion beam, which sputters away surface atoms and atomic clusters, some of which are emitted as ions. These ions, both negative and positive, can be analyzed in a mass spectrometer. For depth profiling, the ion beam is rastered over the area of interest in a pattern of reducing size to minimize edge effects; the spatial resolution of SIMS can therefore be somewhat limited by the requirement to begin with a large-area raster. Some molecular information is available from the atomic clusters that are sputtered. SIMS is the most sensitive (on the order of 20 ppb) of the routinely used FA techniques. This makes the use of control samples essential, since many elements will be detected at very low levels even on the cleanest of devices, making interpretation of the data difficult. While SIMS is essentially a surface technique, it continuously sputters away the surface. This makes depth profiling a natural application of SIMS; in fact, a depth profile with only the mass peaks of interest plotted is the output format normally obtained. In addition, laser-based techniques that analyze ions sputtered from a surface have also been developed.
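Quantification of SIMS data is commonly performed with a relative sensitivity factor (RSF) calibrated on an ion-implanted standard, with the impurity concentration given by RSF times the ratio of impurity to matrix ion counts. The sketch below illustrates the conversion; the RSF and count values are assumed numbers, not measured data.

```python
# RSF-based quantification of a SIMS measurement:
#   C_impurity = RSF * I_impurity / I_matrix
# Illustrative sketch; the RSF and ion counts are assumed values. A real RSF
# is calibrated against an ion-implanted reference sample.

SI_ATOMIC_DENSITY_CM3 = 5.0e22  # atoms/cm^3 in the silicon matrix

def sims_concentration_cm3(rsf_cm3, impurity_counts, matrix_counts):
    return rsf_cm3 * impurity_counts / matrix_counts

rsf = 5.0e22  # atoms/cm^3 (assumed)
conc = sims_concentration_cm3(rsf, impurity_counts=20.0, matrix_counts=1.0e9)
print(f"concentration ~ {conc:.1e} atoms/cm^3")
print(f"atomic fraction ~ {conc / SI_ATOMIC_DENSITY_CM3 * 1e9:.0f} ppb")
# 20 impurity counts against a 1e9-count matrix signal works out to ~20 ppb
# here, the order of the routine SIMS sensitivity quoted in the text.
```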

The techniques summarized above are the workhorse tools for FA. They have the common advantage of utilizing a charged beam, which results in good spatial resolution. Unfortunately, they share the common limitation of providing little or no molecular information. The techniques summarized below fill this gap in some manner and can be employed in FA as required.

29.4.4 Microspot Fourier Transform Infrared Spectroscopy

Fourier transform infrared spectroscopy (FTIR) measures the absorption of IR light by a sample. The absorption is measured with an interferometer, with the spectrum expressed as the Fourier transform of the interferogram. Spectra can be obtained from a solvent extraction of a sample as well as directly from the sample. The development of microspot techniques has greatly enhanced the applicability of this technique to FA: areas of organic contamination only a few microns across can now be successfully analyzed in reflection mode. The technique is most useful in FA for the identification of organic contamination, since organic compounds have rotational and vibrational absorptions in the IR region. In most cases, the contaminant FTIR spectrum is compared with libraries of organic compounds or with spectra of suspected contamination sources. Micro-Raman spectroscopy has also been used and provides a good complement to the FTIR technique.
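The Fourier transform step can be illustrated with a synthetic interferogram: a single IR band produces a cosine in optical path difference, and the FFT recovers its wavenumber. All parameters in the sketch below are invented for illustration; a real instrument adds apodization, phase correction, and a background reference.

```python
# Recovering a spectrum from an FTIR interferogram by Fourier transform.
# Synthetic, illustrative example: a single IR band is encoded in the
# interferogram, then recovered with an FFT. All parameters are invented.
import numpy as np

n = 4096
dx_cm = 0.633e-4            # sampling step in optical path difference (cm), assumed
x = np.arange(n) * dx_cm

# A single narrow band at 1730 cm^-1 (carbonyl-stretch region, typical of
# organic contaminants) produces a cosine interferogram.
band_cm1 = 1730.0
interferogram = np.cos(2 * np.pi * band_cm1 * x)

spectrum = np.abs(np.fft.rfft(interferogram))   # magnitude spectrum
wavenumbers = np.fft.rfftfreq(n, d=dx_cm)       # frequency axis in cm^-1

print(f"recovered band ~ {wavenumbers[np.argmax(spectrum)]:.0f} cm^-1")
```

The spectral resolution in this picture is simply the reciprocal of the total scanned optical path difference, which is why higher-resolution FTIR instruments use longer mirror travel.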

29.4.5 Others

A number of other analytical techniques may be useful from time to time in FA; some of these fill specific niches in analytical capability. X-ray photoelectron spectroscopy (XPS) observes electrons that are ejected from atoms after interaction with monochromatic x-rays. Elemental characterization is by binding energy (the x-ray energy minus the kinetic energy of the electron), and oxidation state information is obtained from energy shifts with oxidation state. This is also a surface technique since, as with Auger, the probability of electron escape without energy loss from below the surface is low. The primary limitation of the technique is the difficulty of focusing the incident x-ray beam; although significant progress has been made in collimating x-ray beams, the spatial resolution remains inadequate for many FA applications. The technique may be used in conjunction with Auger analysis in order to complement the elemental information of Auger with the molecular information of XPS.
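The energy bookkeeping underlying XPS identification is simple: binding energy equals photon energy minus measured kinetic energy minus the spectrometer work function. A minimal sketch with an Al Kα source follows; the measured kinetic energy and work function are assumed values.

```python
# XPS elemental identification by energy bookkeeping:
#   binding energy = photon energy - measured kinetic energy - work function
# Illustrative sketch; the kinetic energy and work function are assumed values.

AL_K_ALPHA_EV = 1486.6   # Al K-alpha photon energy
PHI_EV = 4.5             # spectrometer work function (assumed)

def binding_energy_ev(kinetic_energy_ev, photon_ev=AL_K_ALPHA_EV, phi_ev=PHI_EV):
    return photon_ev - kinetic_energy_ev - phi_ev

# An assumed measured kinetic energy of 1382.8 eV maps to ~99.3 eV, in the
# Si 2p region; a shift of a few eV to higher binding energy would indicate
# oxidized silicon, the kind of oxidation-state information described above.
print(f"binding energy ~ {binding_energy_ev(1382.8):.1f} eV (Si 2p region)")
```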


Other techniques with FA applications include Rutherford backscattering spectroscopy, ion chromatography, and thermal gravimetric and thermal mechanical testing. Specifically, Rutherford backscattering spectroscopy can be used to measure stoichiometry, particularly in silicides. Ion chromatography of a water extraction has been useful in measuring total ionic surface contamination on samples with relatively large surface areas, such as packages and wafers. Thermal gravimetric and thermal mechanical testing are required to evaluate the curing of mold compounds. Even this lengthy listing omits such techniques as atomic absorption and x-ray diffraction. In addition, as TEM use has increased, electron energy loss spectroscopy (EELS) has become a valuable analytical tool, complementing EDS, which can be mounted on a TEM as well as on a SEM. Obviously, one challenge of FA is to select the analytical techniques that are best suited to a particular situation.

29.5 Future of Failure Analysis

With the decrease in product and technology life cycles, the emphasis in diagnostic activity will continue to shift toward more upstream activities (see Figure 29.3). Design debug capability to quickly assure second-pass silicon success will be essential for supporting customer requirements. Short life cycles will also drive a requirement to ramp yield more quickly, and will reduce the acceptability of the long qualification processes that result from qualification failures. A long-term reduction in qualification failures, coupled with improved reliability, should drive fewer qualification failure analyses. The customer return analysis flow is a very lagging indicator of quality and reliability performance and should become a less significant part of the improvement process, particularly for products with short lives.

In addition to moving diagnostic activities upstream, the technology roadmaps will drive significant changes in the tool requirements for FA. The development of tools for FA is driven by device and process complexity as well as by changes in packaging technology. Increases in the number of levels of metallization and transitions to flip-chip are driving the broader use of backside analysis tools. Smaller feature sizes are driving demands for tools with higher spatial resolution in both the fab and assembly arenas. In addition, the industry is rapidly transitioning to many new materials, such as copper metallization and low-k dielectrics currently, with high-k gates and porous dielectrics in the forecast.

Changes in FA technology have historically occurred in several ways. Abrupt changes typically result from radical changes in the technology. For example, surface mount packaging suddenly drove a strong

FIGURE 29.3 Failure analysis (FA) feedback loops are illustrated. Future emphasis is expected on shorter feedback loops. (The figure shows the flow from Design through Wafer fab, Assembly/Test, and Qualification to Customer use, with feedback paths for design debug and fab yield analysis, assembly yield analysis, qualification FA, and customer return analysis.)


[Figure (caption not recoverable): chart comparing the spatial resolution coverage of optical microscopy, thermionic (W/LaB6) SEM, FESEM, TEM, and AFM against minimum feature sizes from 0.25 μm down to 0.07 μm.]
