
Practical Guide to Image Analysis

ASM International® Materials Park, OH 44073-0002 www.asminternational.org


Copyright © 2000 by ASM International®. All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the written permission of the copyright owner.

First printing, December 2000

Great care is taken in the compilation and production of this Volume, but it should be made clear that NO WARRANTIES, EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, ARE GIVEN IN CONNECTION WITH THIS PUBLICATION. Although this information is believed to be accurate by ASM, ASM cannot guarantee that favorable results will be obtained from the use of this publication alone. This publication is intended for use by persons having technical skill, at their sole discretion and risk. Since the conditions of product or material use are outside of ASM's control, ASM assumes no liability or obligation in connection with any use of this information. No claim of any kind, whether as to products or information in this publication, and whether or not based on negligence, shall be greater in amount than the purchase price of this product or publication in respect of which damages are claimed. THE REMEDY HEREBY PROVIDED SHALL BE THE EXCLUSIVE AND SOLE REMEDY OF BUYER, AND IN NO EVENT SHALL EITHER PARTY BE LIABLE FOR SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES WHETHER OR NOT CAUSED BY OR RESULTING FROM THE NEGLIGENCE OF SUCH PARTY. As with any material, evaluation of the material under end-use conditions prior to specification is essential. Therefore, specific testing under actual conditions is recommended. Nothing contained in this book shall be construed as a grant of any right of manufacture, sale, use, or reproduction, in connection with any method, process, apparatus, product, composition, or system, whether or not covered by letters patent, copyright, or trademark, and nothing contained in this book shall be construed as a defense against any alleged infringement of letters patent, copyright, or trademark, or as a defense against liability for such infringement.

Comments, criticisms, and suggestions are invited, and should be forwarded to ASM International.

ASM International staff who worked on this project included E.J. Kubel, Jr., Technical Editor; Bonnie Sanders, Manager, Production; Nancy Hrivnak, Copy Editor; Kathy Dragolich, Production Supervisor; and Scott Henry, Assistant Director, Reference Publications.

Library of Congress Cataloging-in-Publication Data
Practical guide to image analysis.
p. cm.
Includes bibliographical references and index.
1. Metallography. 2. Image analysis. I. ASM International.
TN690.P6448 2000
669'.95—dc21
00-059347

ISBN 0-87170-688-1
SAN: 204-7586

ASM International®
Materials Park, OH 44073-0002
www.asminternational.org

Printed in the United States of America


About the Authors

John J. Friel is technical director at Princeton Gamma-Tech (Princeton, NJ). He received his undergraduate education at the University of Pennsylvania, his M.S. degree from Temple University, and his Ph.D. from the University of Pennsylvania. He did postdoctoral work at Lehigh University and worked at Homer Research Lab, Bethlehem Steel Corp., before joining PGT. In addition to his work on x-ray microanalysis for PGT, John serves as an adjunct professor of ceramics at Rutgers University and is a member of the International Centre for Diffraction Data (ICDD). John is the author of over 50 technical publications on subjects in materials science, x-ray microanalysis, and image analysis, and also authored a book entitled X-Ray and Image Analysis in Electron Microscopy. He is past president of the Microbeam Analysis Society and chairman of ASTM Subcommittee E04.11 on X-Ray and Electron Metallography.

James C. Grande is leader, Light Microscopy and Image Analysis, at GE Corporate Research and Development Center (Schenectady, NY). He received his B.S. degree in mechanical engineering from Northeastern University in 1980 and began working in the microscopy lab at the R&D Center as a metallographer during his undergraduate work. Jim has authored several articles, presented several talks on metallography and image-analysis techniques, and has taught training courses on stereology and image analysis. He has more than 20 years of experience in a research environment using image analysis to characterize many different materials in a variety of applications.

Dennis Hetzner is a research specialist at the Timken Co. (Canton, OH). Dennis received his B.S. and M.S. degrees in metallurgical engineering from Illinois Institute of Technology and his Ph.D. in metallurgical engineering from the University of Tennessee. He specializes in quantitative metallography and has conducted research in the areas of powder metal processing, high-temperature mechanical property testing, rolling contact fatigue, and laser glazing of bearings. He has presented several papers and tutorial lectures regarding the use of image analysis to solve problems related to quantification of materials microstructural features, and he teaches courses on quantitative image analysis. Dennis is chairman of ASTM Subcommittee E04.05 on Microindentation Hardness Testing, and he is a member of ASM International, the International Metallographic Society (IMS), and ASTM.

Krzysztof Kurzydłowski is head, Dept. of Materials Science and Engineering, Warsaw University of Technology (Warsaw, Poland). He received his undergraduate (1978) and Ph.D. (1981) degrees from Warsaw University of Technology and his D.Sc. degree (1990) from the Silesian University of Technology. His research interests include quantification of materials microstructures, materials modeling, design of polycrystalline materials and composites, environmental effects on materials properties, and prediction of in-service materials degradation. Kris has authored four books/monographs and authored or coauthored more than 50 technical publications. He is a member of the International Society for Stereology, the Materials Research Society, the European Materials Research Society, and the American Society of Mechanical Engineers, and he is a Fellow of the Institute of Materials.

Don Laferty is director, Research and Development, at Objective Imaging Ltd. (Cambridge, England). Don received his B.A. degree in physics and philosophy from Denison University in 1988. He has been involved with optics and digital imaging for more than 13 years. His early interests in real-time optical pattern recognition systems evolved into digital image analysis and microscopy upon joining Cambridge Instruments Inc. (now part of Leica Microsystems) in 1989. While at Cambridge Instruments, he was active in applying techniques based on mathematical morphology to scene segmentation problems in all areas of microscopy and in furthering the practical use of image processing and analysis in cytogenetics, pathology, metallurgy, materials, and other microscopy-related fields. Don currently is involved with high-performance hardware and software solutions for automated microscope-based image analysis.

Mahmoud T. Shehata has been a research scientist at the Materials Technology Laboratory/CANMET (Ottawa, Ontario, Canada) since 1978, conducting research in microstructural characterization of engineering materials for industrial clients. He received his B.S. degree in metallurgical engineering from the University of Cairo (Egypt) and his Ph.D. degree in materials science from McMaster University (Ontario, Canada). Mahmoud has used metallographic analysis throughout his research career, mostly in the area of microstructure/property relationships of engineering materials, particularly the effects of nonmetallic inclusions on steel properties. He is the author of more than 50 technical papers and more than 100 reports, and he is a member of several technical organizations including ASM International, the International Metallographic Society, and the Canadian Institute of Mining, Metallurgy, and Petroleum.

Vito Smolej is a research engineer at Carl Zeiss Vision (Munich, Germany). He received his undergraduate degree in technical physics from the University of Ljubljana (Slovenia) in 1971, his master's degree in biophysics from the University of Zagreb (Croatia) in 1977, and his Ph.D. in solid state chemistry from the University of Ljubljana in 1977. He did research and postdoctoral work at the Dept. of Applied Mathematics of the Josef Stefan Institute in Ljubljana and taught programming technologies and advanced programming languages at the University of Maribor (Kranj, Slovenia). After spending a postdoctoral year at the Max Planck Institute for Materials Science in Stuttgart (Germany) in 1982, Vito joined Carl Zeiss Canada, where he was involved with software-based image-analysis systems produced by Kontron Elektronik. In 1988, he moved to Kontron Elektronik Bild Analyse (which became Carl Zeiss Vision in 1997) in Munich, where he is involved in software development and systems design. He is author or coauthor of more than 40 technical publications.

George F. Vander Voort is director, Research and Technology, at Buehler Ltd. (Lake Bluff, IL). He received his B.S. degree in metallurgical engineering from Drexel University in 1967 and his M.S. degree in metallurgy and materials science from Lehigh University in 1974. He has 29 years of experience in the specialty steel industry with Bethlehem Steel and Carpenter Technology Corp. George is the author of more than 150 publications, including the book Metallography: Principles and Practice, as well as the ASM International video course Principles of Metallography. George has been active with ASTM since 1979 as a member of Committees E-4 on Metallography and E-28 on Mechanical Testing. He is a member of several technical organizations including ASM International, the International Metallographic Society, ASTM, the International Society for Stereology, the Microscope Society of America, and the State Microscopy Society of Illinois.

Leszek Wojnar is associate professor, Institute of Materials Science, Cracow University of Technology (Cracow, Poland). He studied at Cracow University of Technology and the Academy of Mining and Metallurgy, graduating in 1979, and received his Ph.D. degree from Cracow University of Technology in 1985. His research interests include the application of computer technology in materials science, including image analysis, stereology, materials engineering, and software development to assess weldability. Leszek has authored three books and more than 50 technical publications. His work Principles of Quantitative Fractography (1990) was the first complete monograph of its kind in Poland and earned him the D.Sc. degree. He is a member of the International Society for Stereology, the Polish Society for Materials Engineering, and the Polish Society for Stereology.


Contents

Preface

CHAPTER 1: Image Analysis: Historical Perspective
Don Laferty, Objective Imaging Ltd.
   Video Microscopy
   Beginnings: 1960s
   Growth: 1970s
   Maturity: 1980s
   Desktop Imaging: 1990s
   Truly Digital: 2000 and Beyond

CHAPTER 2: Introduction to Stereological Principles
George F. Vander Voort, Buehler Ltd.
   Sampling
   Specimen Preparation
   Volume Fraction
   Number per Unit Area
   Intersections and Interceptions per Unit Length
   Grain-Structure Measurements
   Inclusion Content
   Measurement Statistics
   Image Analysis
   Conclusions

CHAPTER 3: Specimen Preparation for Image Analysis
George F. Vander Voort, Buehler Ltd.
   Sampling
   Sectioning
   Specimen Mounting
   Grinding
   Polishing
   Examples of Preparation Procedures
   Etching
   Conclusions

CHAPTER 4: Principles of Image Analysis
James C. Grande, General Electric Research and Development Center
   Image Considerations
   Image Storage and Compression
   Image Acquisition
   Image Processing
   Feature Discrimination
   Binary Image Processing
   Further Considerations

CHAPTER 5: Measurements
John J. Friel, Princeton Gamma-Tech
   Contrast Mechanisms
   Direct Measurements
      Field Measurements
      Feature-Specific Measurements
   Derived Measurements
      Field Measurements
      Feature-Specific Derived Measurements
   Standard Methods

CHAPTER 6: Characterization of Particle Dispersion
Mahmoud T. Shehata, Materials Technology Laboratory/CANMET
   Number Density Variation Technique
   Nearest-Neighbor Spacing Distribution
   Dilation and Counting Technique
   Dirichlet Tessellation Technique
   Tessellation by Dilation Technique
   Conclusions

CHAPTER 7: Analysis and Interpretation
Leszek Wojnar, Cracow University of Technology
Krzysztof J. Kurzydłowski, Warsaw University of Technology
   Microstructure-Property Relationships
   Essential Characteristics for Microstructure Description
   Parameters and Their Evaluation
   Sampling Strategy and Its Effect on Results
   Bias Introduced by Specimen Preparation and Image Acquisition
   Bias Introduced by Image Processing and Digital Measurements
   Estimating Basic Characteristics
   Data Interpretation
   Data Interpretation Examples
   Conclusions

CHAPTER 8: Applications
Dennis W. Hetzner, The Timken Co.
   Gray Images
   Image Measurements
   Image Segmentation (Thresholding)
   Image Amendment
   Field and Feature-Specific Measurements
   Feature-Specific Distributions

CHAPTER 9: Color Image Processing
Vito Smolej, Carl Zeiss Vision
   Modeling Color
   Color Spaces
   Electronic Recording of Color Images
   Color Images
   Color Image Processing
   RGB-HLS Model Conversion
   Color Processing and Enhancement
   Color Discrimination
   Color Measurement
   Quantitative Example: Determining Phase Volume Content
   Conclusions

Index


Preface

Man has been using objects made from metals for more than 3000 years—objects ranging from domestic utensils, artwork, and jewelry to weapons made of brass alloys, silver, and gold. The alloys used for these objects were developed from empirical knowledge accumulated over centuries of trial and error. Prior to the late 1800s, engineers had no concept of the relationship between a material's properties and its structure. In most human endeavors, empirical observations are used to create things, and the scientific principles that govern how the materials behave lag far behind. Also, once the scientific concepts are understood, practicing metallurgists often have been slow to understand how to apply the theory to advance their industries.

The origins of the art of metallography date back to Sorby's work in 1863. Although his metallographic work was ignored for 20 years, the procedures he developed for revealing the microstructures of metals directly led to some of today's well-established relationships between structure and properties. During the past 140 years, metallography has been transformed from an art into a science. Concurrent with the advances in specimen preparation techniques has been the development of methodologies to better evaluate microstructural features quantitatively.

This book, as its title suggests, is intended to serve as a "practical guide" for applying image analysis procedures to evaluate microstructural features. Chapter 1 presents a historical overview of how quantitative image analysis developed and of the evolution of today's television- and computer-based analysis systems, and Chapter 2 introduces the science of stereology. The third chapter provides details of how metallographic specimens should be properly prepared for image analysis. Chapters 4 through 7 consider the principles of image analysis, what types of measurements can be made, the characterization of particle dispersions, and methods for analysis and interpretation of the results. Chapter 8 illustrates how macro programs are developed to perform several specific image analysis applications. Chapter 9 illustrates the use of color metallography for image analysis problems.

This book considers most of the aspects that are required to apply image analysis to materials problems. It should be useful to engineers, scientists, and technicians who need to extract quantitative information from material systems. The principles discussed can be applied to typical quality control problems and standards, as well as to problems that may be encountered in research and development projects. In many image analysis problems, statistical evaluation of the data is required. This book attempts to provide simple solutions for each problem presented; however, when necessary, a more rigorous analysis is included. Hopefully, readers will find all aspects of the book to be useful as their skill levels increase.

The authors represent a very diverse group of individuals, and each has been involved in some aspect of image analysis for 20 or more years. As indicated in their biographies, each brings a unique contribution to this book. Several are active members of ASTM Committee E04 on Metallography, and most are involved in professional societies dealing with testing, metallography, stereology, and materials. I enjoyed writing the Applications chapter, and I got a firsthand appreciation of the technical breadth and quality of the information contained in this book from having the opportunity to review each chapter. I would like to thank Ed Kubel of ASM, who has done an excellent job of technical editing on all chapters. This book should be an excellent addition to the technical literature and assist investigators at all levels of training and expertise in using image analysis.

Dennis W. Hetzner
June 2000



CHAPTER 1

Image Analysis: Historical Perspective

Don Laferty, Objective Imaging Ltd.

QUANTITATIVE MICROSCOPY, the ability to rapidly quantify microstructural features, is the result of developments that occurred over a period of more than 100 years, beginning in the mid-1800s. The roots of quantitative microscopy lie in the two logical questions scientists asked after the first microscopes were invented: how large is a particular feature, and how much of a particular constituent is present? P.P. Anosov first used a metallurgical microscope in 1841 to reveal the structure of a Damascus knife (Ref 1). Natural curiosity most likely spurred a further question: what is the volume fraction of each constituent? This interest in determining how to relate observations made using a microscope on a two-dimensional field of view to three dimensions is known as stereology. The first quantitative stereological relationship developed using microscopy is attributed to A. Delesse (Ref 2). From his work is derived the equivalency of area fraction (AA) and volume fraction (VV), or AA = VV. Many of the early studies of metallography (the study of the structure of metals and alloys) are attributed to Sorby. He traced the images of rocks onto paper using projected light; after cutting out and weighing the pieces of paper representing each phase, he estimated the volume fraction of the phases. Lineal analysis, the relationship between lineal fraction (LL) and volume fraction, or LL = VV, was demonstrated by Rosiwal in 1898 (Ref 3). Sauveur conducted one of the first studies to correlate chemical composition with structure in 1896 (Ref 4). From this work, the relationship between the carbon content of plain carbon steel and the volume fraction of the various constituents was discovered. Later, the relationship between volume fraction and points in a test grid was
established independently by Thompson (Ref 5) and Glagolev (Ref 6) in 1930 and 1931, respectively, giving the relationship PP = VV, where PP is the point count. From these first experiments has evolved the now well-known relationship:

PP = LL = AA = VV

Initially, the procedures developed to perform stereological measurements were based on laborious, time-consuming manual measurements. Of all these manual procedures, point counting is probably the most important. From a metallographer's perspective, point counting is the easiest way to manually estimate the volume fraction of a specific constituent. Regarding image analysis, point counting will be shown to be equally important.
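As a rough illustration of the point-count relationship just described (a sketch added for clarity, not part of the original text), PP can be measured on a segmented binary image by overlaying a regular grid of test points and recording the fraction that falls on the phase of interest; the array, grid spacing, and synthetic particle below are hypothetical.

```python
import numpy as np

def point_count_volume_fraction(phase_mask: np.ndarray, spacing: int = 16) -> float:
    """Estimate the volume fraction VV of a phase from a binary image
    using a systematic grid of test points (the point fraction PP estimates VV)."""
    grid = phase_mask[::spacing, ::spacing]   # test points every `spacing` pixels
    hits = np.count_nonzero(grid)             # test points falling on the phase
    return hits / grid.size                   # point fraction PP

# Hypothetical example: a 512 x 512 field containing one circular particle.
yy, xx = np.mgrid[0:512, 0:512]
mask = (xx - 256) ** 2 + (yy - 256) ** 2 < 100 ** 2

pp = point_count_volume_fraction(mask, spacing=16)
aa = np.count_nonzero(mask) / mask.size       # area fraction AA for comparison
print(f"PP = {pp:.3f}, AA = {aa:.3f}")        # both estimate the volume fraction VV
```

With a sufficiently fine grid, the point fraction converges on the area fraction, which in turn estimates the volume fraction, in keeping with PP = LL = AA = VV.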

Video Microscopy

Television, or TV, as we know it today, evolved from the early work of Philo Taylor Farnsworth in the 1920s. There were several commercial demonstrations in the late 1920s and early 1930s (Ref 7), but the technology was applied to building radar systems for the military during World War II and was not commercialized until after the war. One early video technique used the "flying spot": the output of a cathode ray tube was used as the source of illumination, and this bright spot was rastered (scanned) across a specimen. A detector tube was used to analyze the output signal. Systems such as this were used to evaluate blood cells (Ref 8) and to assess nonmetallic inclusions in steel (Ref 9). As the technology advanced, ordinary television cameras were used to convert the output image of the microscope into an electronic signal and a corresponding image on a video tube. These early systems were analog devices.

The advancement and continually increasing sophistication of television and computer systems have allowed the development of powerful image analysis (IA) systems, which have largely supplanted manual measurement methods. Today, measurements and calculations that previously required many hours to perform can be made in seconds and even microseconds. In reality, an IA system is only a simple point counter. However, operating a point counter in conjunction with a computer allows the use of highly sophisticated analysis algorithms to rapidly perform many different types of measurements. While the same measurements could be made manually, the measurement time would be prohibitive.

The practice of IA has seen many changes in the nearly 40 years since the development of the first television-based image analyzers in the 1960s. From the limited hardware systems first used for quantitative metallographic characterization to modern, highly flexible image processing software applications, IA has found a home in an enormous range of industrial and biomedical applications. The popularity of digital imaging in various forms is still growing. The explosion of affordable computer technologies during the 1990s, coupled with recent trends in digital-image acquisition devices, has placed renewed interest in how digital images are created, managed, processed, and analyzed. There now is an unprecedented, growing audience involved in the digital imaging world on a daily basis. Who would have imagined in the early days of IA, when an imaging system cost many tens—if not hundreds—of thousands of dollars, the power of the image processing software that would be available today for a few hundred dollars at the corner computer store?

Yet, beneath all these new technological developments, the underlying common elements that are particular to the flavor of imaging referred to as "scientific image analysis" have changed little since their very beginnings. Photographers say that good pictures are not "taken" but instead are carefully composed and considered; IA similarly relies on intelligent decisions regarding how a given subject—the specimen—should be prepared for study and what illumination and optical configurations provide the most meaningful information. In acquiring the image, the analyst who attends to details such as appropriate video levels and shading correction ensures reliable and repeatable results, which build confidence. The myriad digital image processing methods can be powerful allies when applied to both simple and complex imaging tasks. The real benefits of image processing, however, come only when the practitioner has the understanding and experience to choose the appropriate tools and, perhaps more importantly, knows the boundaries inside which tool use can be trusted.

The goal of IA is to distill a manageable set of meaningful quantitative descriptions from the specimen (or better, a set of specimens). In practice, successful quantification depends on an understanding of the nature of these measurements so that, when the proper parameters are selected, accuracy, precision, and repeatability, as well as the efficiency of the whole process, are maximized.

Set against the current, renewed emphasis on digital imaging is a history of TV-based IA that spans nearly four decades. Through various generations of systems, techniques for specimen preparation, image acquisition, processing, measurement, and analysis have evolved from the first systems of the early 1960s into the advanced general-purpose systems of the 1980s, finally arriving at the very broad spectrum of imaging options available in the 21st century. A survey of the methods used in practice today shows that, for microstructure evaluations, many of the actual image processing and analysis techniques used now do not differ all that much from those used decades ago.


Beginnings: 1960s

The technology advancement of the industrial age placed increasing importance on characterizing materials microstructures. This need drove the development of efficient, practical methods for the manual measurement of count, volume fraction, and size information for various microstructures. From these early attempts evolved the science of stereology, in which a mathematical framework was developed that allowed systematic, manual measurements using various overlay grids. Marked by the founding of the International Society for Stereology in 1961, these efforts continued to promote the development and use of this science for accurate and efficient manual measurements of two-dimensional and three-dimensional structures in specimens.

Despite the considerable labor savings that stereological principles provided, microstructure characterization using manual point and intercept counting is a time-consuming, tiring process. In many cases, tedious hours are spent achieving the desired levels of statistical confidence. This provided the background for a technological innovation that would alleviate part of a quantitative metallurgist's workload.

Image analysis as we know it today—in particular that associated with materials and microscopy—saw two major developments in the early 1960s: TV-based image analyzers and mathematical morphology. The first commercial IA system in the world was the Quantimet A from Metals Research in 1963, with the very first system off the production line being sold to British Steel in Sheffield, UK (Ref 10). Metals Research was established in 1957 by Cambridge University graduate Dr. Michael Cole and was based above Percivals Coach Company, beside the Champion of the Thames pub on King Street, Cambridge, UK. The person inspiring the design of the Quantimet was Dr. Colin Fisher, who joined the company in 1962, and for its design Metals Research received numerous awards, including the Queen's Award to Industry on six occasions. The "QTM" notation has been applied to IA systems because the Quantimet A was referred to as a quantitative television microscope (QTM). While this system served primarily as a densitometer, it was the beginning of the age of automation.

These early IA systems were purely hardware based. The Quantimet B was a complete system for analyzing the phase percentage of microstructures and included a purpose-built video camera and specialized hardware to measure and display image information (Fig. 1). While these early systems had relatively limited application, mainly geared toward phase-percentage analysis and counting, they achieved extremely high performance. It was necessary to continuously gather information from the live video signal, because there was no large-scale memory to hold the image information for a period longer than that of the video frame rate. Typically, only a few lines of video were being stored at one time. In one regard, these systems were very simple to use. For example, to gage the area percentage result using the original Quantimet A, the investigator needed simply to read the value from the continuously updated analog meter. Compared with the tedium of manual point counting using grid overlays, the immediate results produced by this new QTM gave a hint of the promise of applying television technology to microstructure characterization. The first system capable of storing a full black-and-white image was the Bausch and Lomb QMS, introduced in 1968 (Ref 11). Using a light pen, the operator could for the first time measure properties of individual objects, now referred to as feature-specific properties.

The second major foundation of IA in these early days was mathematical morphology, developed primarily by the French mathematicians J. Serra and G. Matheron at the Ecole des Mines de Paris (Ref 12). The mathematical framework for morphological image processing was introduced by applying topology and set theory to problems in the earth and materials sciences. In mathematical morphology, the image is treated in a numerical format as a set of valued points, and basic set transformations such as union and intersection are performed. This leads to concepts such as the erosion and dilation operations, which are, in one form or another, some of the most heavily used processing operations in applied IA even today.

Fig. 1 Quantitative television microscope, the Quantimet B (Metals Research, Cambridge, U.K.)


Growth: 1970s

By the 1970s, the field of IA was prepared for rapid growth into a wide range of applications. The micro-Videomat system was introduced by Carl Zeiss (Ref 13), and the Millipore πMC particle-measurement system was being marketed in America and Europe (Ref 14). The first IA system to use mathematical morphology was the Leitz texture analysis system (TAS), introduced in 1974. Also, a new field-specific system named the Histotrak image analyzer was introduced by the British firm Ealing-Beck (Ref 15).

In the meantime, Metals Research had become IMANCO (for Image Analyzing Computers), and its Quantimet 720 system offered a great deal more flexibility than the original systems of the 1960s (Fig. 2). Still hardware based, this second generation of systems offered many new and useful features. The Q720 used internal digital-signal processing hardware, a built-in binary morphological image processor with selection of structuring element and size via dials on the front panel, and advanced feature analysis with the size and shape of individual objects measured and reported on-screen in real time. The system was also flexible, due to programmability implemented via a logic matrix configured using sets of twisted pairs of wire. Other impressive innovations included a light pen for direct editing of the image on the video monitor and automated control of microscope stage and focus. Other systems offered in the day, such as the TAS and the pattern analysis system (PAS) (Bausch and Lomb, USA), had many similar processing and measurement capabilities.

Fig. 2 Q720 (IMANCO, Cambridge, U.K.) image analyzer with digital image processing hardware

The performance of early hardware-based systems was very high, even by the standards of today. Using analog tube-video cameras, high-resolution images of 896 × 704 pixels were achieved with around 10 frames/s display rates. That these specialized systems of the 1970s performed image acquisition, thresholding, binary morphological processing such as erosion and dilation, and feature measurement, and provided continuously displayed results for each video frame, many times a second, is impressive. The primary issues for these systems were their accuracy and reliability. In the best systems, care was taken to ensure that the geometry of the video input was homogeneous and that the system could accurately measure the possible range of shapes, sizes, orientations, and number of features within the image without encountering systematic problems.

The 1970s also saw the introduction of general-purpose computers coupled to IA systems. Dedicated systems were connected to general-purpose minicomputers so results could be more conveniently stored for later review (Fig. 3). Although the computer became an integral part of the overall system, image processing and analysis still were performed by the specialized hardware of the image analyzer. The introduction of the general-purpose computer into the system actually slowed down the process. After all, the IA systems of the day were processing nearly megapixel images very fast. For instance, the IMANCO Q360 achieved an analysis throughput of 20 fields per second, which included acquiring the image, analyzing it, and moving the specimen (Ref 16). This is a remarkable rate even for the technology of today, and the general-purpose computers of the 1970s could not even come close to this performance on their own.

Fig. 3 Q720 (IMANCO, Cambridge, U.K.) with minicomputer for flexible results handling


Each development during the 1970s led to more widespread use of IA in a variety of fields. From the introductory systems of the early 1960s that were applied primarily to metallurgical examinations, the 1970s saw image analyzers put to good use in cytology, botany, and cytogenetics, as well as in general materials and inspection applications. The price of these systems—measured typically in the hundreds of thousands of dollars— coupled with their specialized operation and complexity led to the rise of dedicated IA specialists, who required an in-depth understanding of all aspects of IA. Video tubes required regular calibration and maintenance, and shading effects complicated image acquisition. The various quirks associated with early computing systems required the appropriate level of respect. Also, IA specialists needed to learn how to combine all this knowledge to coax these systems into producing appropriate results for unique applications.

Maturity: 1980s

The heyday of hardware-based IA arrived in the 1980s, while at the same time a new paradigm of personal computer-based (PC-based) imaging began to emerge. Increasing power and declining cost of computers fueled both developments. In the case of systems that continued to use dedicated image processing hardware, computers now contained integrated microprocessors and built-in memory for flexible image and results storage. Systems appearing in the 1980s combined these features with vast new options for programmability, giving rise to increasingly sophisticated applications and penetration of scientific IA into many research and routine environments. Many of these systems still are used today.

Though generally slower than their purely hardware-based predecessors, systems of the 1980s made up for the loss of speed by being significantly easier to use and more flexible. Many systems of the early to mid-1980s provided a richer implementation of morphological image processing facilities than was possible in their purely hard-wired predecessors. These operations were performed primarily on binary images due to the memory and speed available at the time. For instance, Cambridge Instruments' Q900 system provided nearly megapixel-resolution imaging and allowed a wide range of morphological operations such as erosion, dilation, opening, closing, a host of skeletonization processes, and a full complement of Boolean operations (Fig. 4). The Q970 system included true-color, high-resolution image acquisition using a motorized color filter wheel (Fig. 5), a technique still used today in some high-end digital cameras. These and other systems embodied a range of image acquisition, processing, and measurement capabilities which, coupled with the flexibility offered by general-purpose microcomputers, became the commonplace, tried-and-true tools accepted today for their practicality in solving a broad range of IA applications.

Fig. 4 Q900 (Cambridge Instruments) with integrated microcomputer

Fig. 5 Q970 (Cambridge Instruments) with high-resolution true color acquisition

A revolutionary event during the 1980s was the introduction of the personal computer (PC). With both Apple-class and IBM-class computers, the costs of computing reached new lows, and it was not long before the first imaging systems relying heavily on PC technology were made available (Fig. 6). The first purely software-only approaches to image processing on the PC were hopelessly slow compared with their hardware-powered predecessors, but they did offer a low-cost alternative suitable for experimenting with image processing and analysis techniques without the need for expensive hardware. However, it still was necessary to acquire the image, so a variety of image-acquisition devices—the "frame-grabber" boards that fit into the PC open architecture—became popular (Ref 17). Many of these early devices included on-board image memory with special processors to improve performance for often-used facilities, such as look-up table (LUT) transformations and convolutions. In some respects, the architecture of the Apple computers of the 1980s, such as the Macintosh ("Mac"), was more suitable for dealing with the large amounts of memory required for image processing applications, as is evident in the popularity of the early Macs used for desktop publishing. With the introduction of the Microsoft Windows operating system for IBM-compatible computers, the availability of imaging software for both major PC platforms grew. For example, Optimas software (Bioscan, now part of Media Cybernetics, U.S.) was one of the first IA software packages introduced for the new Windows operating system.

Fig. 6 Early PC-based system, the Q10 (Cambridge Instruments/Olympus)

Desktop Imaging: 1990s

The development of large hardware-based systems peaked in the 1980s, followed in the 1990s by the rise of the PC as an acceptable platform for most traditional IA applications due to the dramatic improvement in PC performance. Development shifted from specialized, purpose-built IA hardware to supporting various frame-grabber cards, developing efficient software-based algorithms for image processing and analysis, and creating user interfaces designed for ease of use, with sufficient power for a variety of applications. Personal computer performance increased as PC prices dropped, and increasingly sophisticated software development tools resulted in many new software-based imaging products. The focus on software development and the reliance on off-the-shelf computers and components in the 1990s produced many dozens of new imaging companies, compared with the previous two decades, when only a few companies had the resources required to design and manufacture high-end imaging systems. At the same time, imaging was finding its way into dedicated application-specific turnkey systems, tailored to specific tasks in both industrial and biomedical areas.

In the early 1990s, specialized image processors for morphology provided a new level of performance allowing practical, routine use of gray-scale as well as binary morphological processing methods. Some of the major U.S. suppliers of metallographic consumables, such as Buehler Ltd., Leco Corp., and Struers Inc., introduced field-specific machines and systems, which performed limited but specialized measurements. For more generalized material analysis, systems such as Leica's Q570 (Fig. 7) and similar systems manufactured by Kontron and Clemex Technologies Inc. (Canada), which had high-speed processors for gray-scale image amendment, watershed transformations, and morphological reconstruction methods, were used to solve a wide variety of challenging image processing problems. These new capabilities, together with tried-and-true binary transforms and early gray-processing developments, such as autodelineation and convolution filters, offered the image analyst a broad range of tools having sufficient performance to allow experimentation and careful selection for the desired effect.

Fig. 7 Q570 (Leica) with high-speed gray-scale morphological processors

By the end of the decade, PC technology had advanced so rapidly that even the need for specific on-board processors was relegated to specialized real-time machine-vision applications. For traditional IA work, the only specialized hardware required was a PCI frame grabber, which quickly transferred image data directly into computer memory or onto the display. Computer central processing units (CPUs) now were sufficiently fast to handle most of the intensive pixel-processing jobs on their own.
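As a present-day counterpart to the gray-scale and watershed capabilities mentioned above (a sketch under assumed inputs, not a description of any particular commercial system), the separation of touching particles can be performed in a few lines with scipy and scikit-image; the synthetic image and marker selection are hypothetical.

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

# Hypothetical binary image of two touching particles to be separated.
yy, xx = np.mgrid[0:80, 0:80]
img = ((xx - 30) ** 2 + (yy - 40) ** 2 < 15 ** 2) | \
      ((xx - 50) ** 2 + (yy - 40) ** 2 < 15 ** 2)

# The distance transform is brightest at the particle centers.
distance = ndimage.distance_transform_edt(img)

# Seed markers: one labeled region per strong local maximum of the distance map
# (simple thresholding here; peak detection could be used instead).
markers, _ = ndimage.label(distance > 0.8 * distance.max())

# Watershed "flooding" of the inverted distance map splits the touching particles.
labels = watershed(-distance, markers, mask=img)
print(labels.max(), "separated regions")   # 2 for this synthetic case
```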

Truly Digital: 2000 and Beyond

Today, the IA landscape consists of a wide range of options. Basic imaging software libraries allow users to program their own customized solutions. General-purpose IA software packages, such as Image-Pro (Media Cybernetics, U.S.), support a range of acquisition and automation options and have macro capabilities. Fully configured systems offered by Zeiss and Clemex, similar to the earlier generations of IA systems, still are available for use in imaging applications in all aspects of microscopy. From a historical perspective, the current direct descendant in the lineage that began some 38 years ago with the first Quantimet A is Leica Microsystems' Q550MW (Fig. 8), a system that targets application-specific tasks in the analysis of material microstructures, not at all unlike its original ancestor.

Fig. 8 Q550MW (Leica Microsystems) materials workstation

What will the future bring in the area of imaging technology? A striking recent addition is the availability of fully digital cameras for use in both scientific and consumer applications. Unlike analog cameras or traditional photomicrography, the digital camera relies heavily on the use of a computer and software, and so this development too owes a debt to fast, inexpensive PC technology. Now, traditional image-documentation tasks are taking on a digital flavor, with the benefit that additional information regarding the specimen can be archived easily into an image database and electronically mailed to colleagues and clients using the latest major development in computing, networks, and the Internet. As this digital transition occurs, the need to understand the practical issues that arise when dealing with these images, particularly when processing them for quantitative information, becomes more important than ever.

Wherever the developments of the future lead, high-quality results that can be trusted and used with confidence always will depend on an in-depth understanding of the following fundamental aspects of scientific image analysis:

• Use the best procedures possible to prepare the specimens for analysis. No amount of image enhancement can correct problems created by poorly prepared specimens.
• Take care in acquiring high-quality images.
• Apply only the image processing necessary to reveal what counts.
• Keep the final goals in mind when deciding what and how to measure.

References

1. P.P. Anosov, Collected Works, Akad. Nauk SSSR, 1954
2. A. Delesse, Procede Mechanique Pour Determiner la Composition des Roches, Ann. Mines (IV), Vol 13, 1848, p 379
3. A. Rosiwal, On Geometric Rock Analysis. A Simple Surface Measurement to Determine the Quantitative Content of the Mineral Constituents of a Stony Aggregate, Verhandl. K.K. Geol. Reich., 1898, p 143
4. A. Sauveur, The Microstructure of Steel and the Current Theories of Hardening, TAIME, 1896, p 863
5. E. Thompson, Quantitative Microscopic Analysis, J. Geol., Vol 27, 1930, p 276
6. A.A. Glagolev, Mineralog. Mater., 1931, p 10
7. D.E. Fisher and J.F. Marshall, Tube: The Invention of Television, Counterpoint, 1996
8. W.E. Tolles, Methods of Automatic Quantification of Micro-Autoradiographs, Lab. Invest., Vol 8, 1959, p 1889
9. R.A. Bloom, H. Walz, and J.G. Koenig, An Electronic Scanner-Computer for Determining the Non-Metallic Inclusion Content of Steels, JISI, 1964, p 107
10. Leica Microsystems Customer History, http://www.leica-microsystems.com/
11. Bausch & Lomb advertisements, 1969
12. J. Serra, Image Analysis and Mathematical Morphology, Academic Press, 1982
13. The Microscope, Vol 18, 3rd quarter, July 1970, p xiii
14. The Microscope, Vol 19, 2nd quarter, April 1971, p xvii
15. The Microscope, Vol 23, 2nd quarter, April 1975, p vii
16. The Microscope, Vol 20, 1st quarter, January 1972, back cover
17. S. Inoué, Video Microscopy, Plenum Press, 1986


CHAPTER 2

Introduction to Stereological Principles

George F. Vander Voort, Buehler Ltd.

THE FUNDAMENTAL RELATIONSHIPS for stereology—the foundation of quantitative metallography—have been known for some time, but implementation of these concepts has been limited when performed manually due to the tremendous effort required. Further, while humans are quite good at pattern recognition (as in the identification of complex structures), they are less capable of accurate, repetitive counting. Many years ago, George Moore (Ref 1) and members of ASTM Committee E-4 on Metallography conducted a simple counting experiment, asking about 400 persons to count the number of times the letter "e" appeared in a paragraph without striking out the letters as they counted. The correct answer was obtained by only 3.8% of the group, and the results were not Gaussian. Only 4.3% had higher values, while 92% had lower values, some much lower. The standard deviation was 12.28. This experiment revealed a basic problem with manual ratings: if a familiar subject (as in Moore's experiment) results in only one out of 26 persons obtaining a correct count, what level of counting accuracy can be expected with a less familiar subject, such as microstructural features? By comparison, image analyzers are quite good at counting but not as competent at recognizing features of interest. Fortunately, there has been tremendous progress in the development of powerful, user-friendly image analyzers since the 1980s.

Chart methods for rating microstructures have been used for many years, chiefly for conformance to specifications. Currently, true quantitative procedures are replacing chart methods for such purposes, and they are used increasingly in quality control and research studies. Examples of the applications of stereological measurements were reviewed by Underwood (Ref 2).


Basically, two types of measurements of microstructures are made. The first group includes measurements of depth, such as depth of decarburization, depth of surface hardening, thickness of coatings and platings, and so forth. These measurements are made at a specific location (the surface) and may be subject to considerable variation. To obtain reproducible data, surface conditions must be measured at a number of positions on a given specimen and on several specimens if the material being sampled is rather large. Standard metrology methods, which can be automated, are used. Metrology methods are also used for individual feature analysis of particle size and shape measurement.

The second group of measurements belongs to the field referred to as stereology. This is the body of measurements that describe relationships between measurements made on the two-dimensional plane of polish and the characteristics of the three-dimensional microstructural features sampled. To facilitate communications, the International Society for Stereology (ISS) proposed a standard system of notation, as shown in Table 1 (Ref 3), which lists the most commonly used notations. Notations have not been standardized for many of the more recently developed procedures. These measurements can be made manually with the aid of templates outlining a fixed field area, systems of straight or curved lines of known length, or a number of systematically spaced points.

Table 1 Standard notation recommended by the International Society for Stereology

Symbol | Units | Description | Common name
P | ... | Number of point elements or test points | ...
PP | ... | Point fraction (number of point elements per total number of test points) | Point count
L | mm | Length of linear elements or test-line length | ...
PL | mm⁻¹ | Number of point intersections per unit length of test line | ...
LL | mm/mm | Sum of linear intercept lengths divided by total test-line length | Lineal fraction
A | mm² | Planar area of intercepted features or test area | ...
S | mm² | Surface area or interface area, generally reserved for curved surfaces | ...
V | mm³ | Volume of three-dimensional structural elements or test volume | ...
AA | mm²/mm² | Sum of areas of intercepted features divided by total test area | Areal fraction
SV | mm²/mm³ | Surface or interface area divided by total test volume (surface-to-volume ratio) | ...
VV | mm³/mm³ | Sum of volumes of structural features divided by total test volume | Volume fraction
N | ... | Number of features | ...
NL | mm⁻¹ | Number of interceptions of features divided by total test-line length | Lineal density
PA | mm⁻² | Number of point features divided by total test area | ...
LA | mm/mm² | Sum of lengths of linear features divided by total test area | Perimeter (total)
NA | mm⁻² | Number of interceptions of features divided by total test area | Areal density
PV | mm⁻³ | Number of points per test volume | ...
LV | mm/mm³ | Length of features per test volume | ...
NV | mm⁻³ | Number of features per test volume | Volumetric density
L̄ | mm | Mean linear interception distance, LL/NL | ...
Ā | mm² | Mean area intercept, AA/NA | ...
S̄ | mm² | Mean particle surface area, SV/NV | ...
V̄ | mm³ | Mean particle volume, VV/NV | ...

Note: Fractional parameters are expressed per unit length, area, or volume. Source: Ref 3

JOBNAME: PGIA−−spec 2 PAGE: 3 SESS: 24 OUTPUT: Thu Oct 26 14:44:16 2000 Introduction to Stereological Principles / 17

length, or a number of systematically spaced points. The simple counting measurements, PP, PL, NL, PA, and NA are most important and are easily made. These measurements are useful by themselves and can be used to derive other important relationships, and they can be made using semiautomatic tracing tablets or automatic image analyzers. This Chapter describes the basic rules of stereology with emphasis on how these procedures are applied manually. Other Chapters describe how these ideas can be implemented using image analysis (IA). Image analysis users should understand these principles clearly before using them. When developing a new measurement routine, it is good practice to compare IA data with data developed manually. It is easy to make a mistake in setting up a measurement routine, and the user needs a check against such occurrences.

Sampling

Sampling of the material is an important consideration, because measurement results must be representative of the material. Ideally, random sampling would be best, but it can rarely be performed, except for small parts such as fasteners, where a specific number can be drawn at random from a production lot. It generally is impossible to select specimens at random from the bulk mass of a large component such as a forging or casting, so the part is produced with additional material added to it, which provides material for test specimens. For a casting, it may be possible to trepan (machine a cylinder of material from a section) sections at locations that will be machined later in the production process anyway. Another approach is to cast a separate, small chunk of material of a specified size (called a “keel block”) along with the production castings, which provides material for test specimens. However, material from the keel block may produce results markedly different from those obtained from the casting if there is a large difference in size, and hence in solidification and cooling rates, between casting and keel block.

After obtaining specimens, there still is a sampling problem, particularly in wrought (hot worked) material, such as rolled, extruded, or forged product. Microstructural measurements made on a plane parallel to the deformation axis, for example, will often be quite different from those made on a plane perpendicular to the deformation axis, especially for features such as nonmetallic inclusions. In such cases, the practice is to compare results on similarly oriented planes. It generally is too time consuming to measure the microstructural feature of interest on the three primary planes of a flat product such as plate or sheet, so the true three-dimensional nature of the structure usually cannot be determined except, perhaps, in research studies.


The sampling plan also must specify the number of specimens to be tested. In practice, the number of specimens chosen is a compromise between minimizing testing cost and the desire to perform adequate testing to characterize the lot. Excessive testing is rare. Inadequate sampling is more likely due to physical constraints of some components and a desire to control testing costs. In the case of inclusion ratings, a testing plan was established years ago by the chart method as described in ASTM E 45. The procedure calls for sampling billets at locations representing the top and bottom or top, middle, and bottom of the first, middle, and last ingots on a heat. The plane of polish is longitudinal (parallel to the hot-working axis) at the midthickness location. This yields either six or nine specimens, providing an examination surface area of 160 mm2 per specimen, or a total of 960 and 1440 mm2, respectively. This small area establishes the inclusion content and is the basis for a decision as to the quality (and salability) of a heat of steel, which could weigh from 50 to 300 tons. For bottom-poured heats, there is no first, middle, and last ingot, and continuous casting eliminates ingots, so alternative sampling plans are required. In the writer’s work with inclusion testing, characterization is improved by using at least 18 or 27 specimens per heat from the surface, midradius, and center locations at each billet location (top and bottom or top, middle, and bottom of the first, middle, and last top-poured ingots; and three ingots at random from a bottom-poured heat).

Specimen Preparation

In the vast majority of work, the measurement part of the task is simple, and 90% or more of the difficulty is in preparing the specimens properly so that the true structure can be observed. Measurement of inclusions is done on as-polished specimens because etching brings out extraneous details that may obscure the detection of inclusions. Measurement of graphite in cast iron also is performed on as-polished specimens. It is possible, however, that shrinkage cavities often present in castings may interfere with detection of the graphite, because shrinkage cavities and graphite have overlapping gray scales. When the specimen must be etched to see the constituent of interest, it is best to etch the specimen so that only the constituent of interest is revealed; selective etchants are best. Preparation of specimens today is easier than ever before with the introduction of automated sample-preparation equipment, and specimens so prepared have better flatness than manually prepared specimens. This is especially important if the edge must be examined and measurements performed there. The preparation sequence must establish the true structure, free of any artifacts. Automated equipment can produce a much greater number of properly prepared specimens per day than the best manual operator. A more detailed description of specimen preparation is in Chapter 3.


Volume Fraction

It is well known that the amount of a second phase or constituent in a two-phase alloy can have a significant influence on its properties and behavior. Consequently, determination of the amount of the second phase is an important measurement. The amount of a second phase is defined as the volume of the second phase per unit volume, or volume fraction. There is no simple experimental technique to measure the volume of a second phase or constituent per unit volume of specimen directly. The closest approach might be an acid digestion method, in which a cube of metal is weighed and then partially dissolved in an appropriate electrolyte that dissolves the matrix but not the phase of interest. The residue is cleaned, dried, and weighed; the remains of the cube (after cleaning and drying) are weighed; and the weight loss is calculated. The weight of the undissolved second phase divided by the weight loss gives an estimate of the volume fraction of the second phase when the densities of the matrix and second phase are known. This is a tedious method, not applicable to all situations, and subject to interferences.

Three experimental approaches for estimating the volume fraction have been developed using microscopy methods: the area fraction, the lineal fraction, and the point fraction methods. The volume fraction was first estimated by areal (relating to area) analysis by A. Delesse, a French geologist, in 1848. He showed that the area fraction is an unbiased estimate of the volume fraction. Several procedures have been used on real structures. One is to trace the second phase or constituent with a planimeter and determine the area of each particle. These areas are summed and divided by the field area to obtain the area fraction, AA. Another approach is to weigh a photograph, then cut out the second-phase particles and weigh them; the two weights are used to calculate the area fraction, as the weight fraction of the micrograph should be equivalent to the area fraction. Both of these techniques are possible only with a coarse second phase. A third approach is the so-called “occupied squares” method. A clear plastic grid containing 500 small square boxes is superimposed over a micrograph or live image. The operator then counts the number of grid boxes that are completely filled, 3/4 filled, 1/2 filled, and 1/4 filled by the second phase or constituent. These data are used to calculate the area covered by the second phase, which then is divided by the image area to obtain the area fraction.

All three methods give a precise measurement of the area fraction of one field, but an enormous amount of effort must be expended per field. However, it is well recognized that the field-to-field variability in volume fraction has a larger influence on the precision of the volume fraction estimate than the error in rating a specific field, regardless of the procedure used. So, it is not


wise to spend a great deal of effort to obtain a very precise measurement on one or only a few fields.

Delesse also stated that the volume fraction could be determined by a lineal analysis approach, but he did not develop such a method. This was done in 1898 by A. Rosiwal, a German geologist, who demonstrated that the sum of the lengths of line segments within the phase of interest divided by the total line length, LL, would provide a valid estimate of the volume fraction with less effort than areal analysis. However, studies show that a third method, the point count, is more efficient than lineal analysis; that is, it yields the best precision with minimal effort (Ref 4). The point count method is described in ASTM E 562 and is widely used to estimate volume fractions of microstructural constituents. To perform this test, a clear plastic grid with a number of systematically spaced points is placed on a micrograph or a projection screen, or inserted as an eyepiece reticle (crosses primarily are used, where the “point” is the intersection of the arms, typically consisting of 9, 16, 25, 49, 64, or 100 points). The number of points lying on the phase or constituent of interest is counted and divided by the total number of grid points. Points lying on a boundary are counted as half-points. This procedure is repeated on a number of fields selected without bias; that is, without looking at the image. The point fraction, PP, is given by:

PP = Pα/PT   (Eq 1)

where Pα is the number of grid points lying inside the feature of interest, α, plus one-half the number of grid points lying on particle boundaries, and PT is the total number of grid points. Studies show that the point fraction is equivalent to the lineal fraction, LL, and the area fraction, AA, and all three are unbiased estimates of the volume fraction, VV, of the second-phase particles:

PP = LL = AA = VV   (Eq 2)

Point counting is much faster than lineal or areal analysis and is the preferred manual method. Point counting is always performed on the minor phase, where VV < 0.5. The amount of the major (matrix) phase can be determined by the difference. The fields measured should be selected at locations over the entire polished surface and not confined to a small portion of the specimen surface. The field measurements should be averaged, and the standard deviation can be used to assess the relative accuracy of the measurement, as described in ASTM E 562.
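Because the point count reduces to simple bookkeeping, the arithmetic of Eq 1 and 2 is easy to script. The following Python sketch is illustrative only (the function name and the field counts are hypothetical, not taken from this chapter); it applies Eq 1 field by field and averages the results as an estimate of VV per Eq 2.

```python
def point_fraction(inside_hits, boundary_hits, total_points):
    """Eq 1: P_P = (points inside + 0.5 * points on boundaries) / total grid points."""
    return (inside_hits + 0.5 * boundary_hits) / total_points

# Hypothetical counts from three fields rated with a 45-point grid
# (the first field matches the Fig. 1 example: 6 inside, 2 on boundaries).
fields = [(6, 2), (5, 3), (7, 1)]
pp_values = [point_fraction(inside, boundary, 45) for inside, boundary in fields]

# Eq 2: the average point fraction is an unbiased estimate of the volume fraction.
vv_estimate = sum(pp_values) / len(pp_values)
print([round(p, 3) for p in pp_values], round(vv_estimate, 3))
```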


In general, the number of points on the grid should be increased as the volume fraction of the feature of interest decreases. One study (Ref 4) suggested that the optimum number of grid test points is 3/VV. Therefore, for volume fractions of 0.5 (50%) and 0.01 (1%), the optimum numbers of grid points are 6 and 300, respectively. If the structure is heterogeneous, measurement precision is improved by using a low-point-density grid and increasing the number of fields measured. The field-to-field variability in the volume fraction has a greater influence on the measurement than the precision in measuring a specific field. Therefore, it is better to assess a greater number of fields with a low-point-density grid than to assess a small number of fields using a high-point-density grid, where the total number of points is constant. In manual measurements, the saying “Do more, less well” refers to this problem.

To illustrate the point-counting procedure, Fig. 1 shows a 1000× image of a microstructural model consisting of fourteen circular particles 15 µm in diameter within a field area of 16,830 µm2, or 0.0168 mm2. The total area of the circular particles is 2474.0 µm2, which is an area fraction of 0.147, or 14.7%. This areal measurement is a very accurate estimate of the volume fraction for such a geometrically simple microstructure and will be considered to be the true value. To demonstrate the use of point counting to estimate the volume fraction, a grid pattern was drawn over this field, producing 45 intersection points. Six of these intersections are completely within the particles and two lie on particle interfaces. The number of “hits” is, therefore, 6 plus 1/2 times 2, or 7. Thus, PP (7 divided by 45) is 0.155, or 15.5%, which agrees well with the calculated area fraction (5.4% greater).

Fig. 1 Point-counting method for estimating minor-phase volume fraction using a microstructural model containing identical-sized circular particles

For an actual microstructure, the time required to point count one field is far less than the time to do an areal analysis on that field. In practice, a number of fields would be point counted, and the average value would be a good estimate of the volume fraction, acquired in a small fraction of the time required to do areal analysis on an adequate number of fields. An areal fraction measurement can only be done easily when the feature of interest is large in size and of simple shape. Point counting is the simplest and most efficient technique to use to assess the volume fraction. The area fraction, AA, and the point fraction, PP, are unbiased estimates of the volume fraction, VV, as long as the sectioning plane intersects the structural features at random.

The lineal fraction, LL, can also be determined for the microstructural model shown in Fig. 1. The length of the horizontal and vertical line segments within the circular particles was measured and found to be 278.2 µm. The total test-line length is 1743 µm. Consequently, the lineal fraction (278.2 divided by 1743) is 0.16, or 16%. This is a slightly higher estimate, about 8.8% greater than that obtained by areal analysis. Again, if a number of fields were measured, the average would be a good estimate of the volume fraction. Lineal analysis becomes more tedious as the structural features become smaller. For coarse structures, however, it is rather simple to perform. Lineal analysis is commonly performed when a structural gradient must be measured; that is, a change in second-phase concentration as a function of distance from a surface or an interface.
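The lineal-analysis arithmetic can be expressed in the same way. This is a minimal sketch, assuming the intercept lengths have already been measured; the Fig. 1 totals quoted above are used as input, and the function name is mine.

```python
def lineal_fraction(intercept_lengths_um, total_line_length_um):
    """L_L = sum of intercept lengths inside the phase / total test-line length."""
    return sum(intercept_lengths_um) / total_line_length_um

# Fig. 1 model: 278.2 um of intercepts on 1743 um of test line (passed as one total).
print(round(lineal_fraction([278.2], 1743.0), 3))   # ~0.160, compared with A_A = 0.147
```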

Number per Unit Area

The count of the number of particles within a given measurement area, NA, is a useful microstructural parameter and is used in other calculations. Referring again to Fig. 1, there are 14 particles in the measurement area (16,830 µm2, or 0.0168 mm2). Therefore, the number of particles per unit area, NA, is 0.0008 per µm2, or 831.8 per mm2. The average cross-sectional area of the particles can be calculated by dividing the volume fraction, VV, by NA:

A = VV/NA   (Eq 3)

This yields an average area, A, of 176.725 µm2, which agrees extremely well with the calculated area of a 15 µm diameter circular particle of 176.71 µm2. The above example illustrates the calculation of the average area of particles in a two-phase microstructure using stereological field measurements rather than individual particle measurements.
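The same field counts give the average particle area without measuring any individual particle. A short sketch that simply reruns the Fig. 1 numbers quoted in the text (the variable names are mine, not the chapter’s):

```python
# Field measurements for the Fig. 1 model.
n_particles    = 14
field_area_um2 = 16830.0
vv             = 0.147          # area (volume) fraction from areal analysis

na     = n_particles / field_area_um2     # particles per square micron (~8.3e-4)
a_mean = vv / na                          # Eq 3: average cross-sectional area
print(round(na * 1e6, 1), "per mm^2;", round(a_mean, 1), "um^2")   # ~831.8; ~176.7
```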


Intersections and Interceptions per Unit Length

Counting of the number of intersections of a line of known length with particle boundaries or grain boundaries, PL, or the number of interceptions of particles or grains by a line of known length, NL, provides two very useful microstructural parameters. For space-filling grain structures (single phase), PL = NL, while for two-phase structures, PL = 2NL (this may differ by one count in actual cases).

Grain-Structure Measurements

For single-phase grain structures, it is usually easier to count the grain-boundary intersections with a line of known length, especially for circular test lines. This is the basis of the Heyn intercept grain size procedure described in ASTM E 112. For most work, a circular test grid composed of three concentric circles with a total line length of 500 mm is preferred. Grain size is defined by the mean lineal intercept length, l:

l = 1/PL = 1/NL   (Eq 4)

This equation must be modified, as described later, for two-phase structures. The value l can be used to calculate the ASTM grain size number. Grain size determination is discussed subsequently in more detail. PL measurements can be used to define the surface area per unit volume, SV, or the length per unit area, LA, of grain boundaries:

SV = 2PL   (Eq 5)

and

LA = (π/2)PL   (Eq 6)
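Equations 5 and 6 are direct multiplications of a measured PL. A small sketch follows, with a hypothetical PL value chosen only for illustration (the function names are mine):

```python
import math

def surface_area_per_volume(p_l):
    """Eq 5: S_V = 2 * P_L (e.g., mm^2/mm^3 when P_L is per mm of test line)."""
    return 2.0 * p_l

def boundary_length_per_area(p_l):
    """Eq 6: L_A = (pi / 2) * P_L (e.g., mm/mm^2)."""
    return (math.pi / 2.0) * p_l

p_l = 50.0   # hypothetical: 50 grain-boundary intersections per mm of test line
print(surface_area_per_volume(p_l), round(boundary_length_per_area(p_l), 1))
```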

For single-phase structures, PL and NL are equal, and either measurement can be used. For two-phase structures, it is best to measure PL to determine the phase-boundary surface area per unit volume, or phase-boundary length per unit area.

Partially Oriented Structures. Many deformation processes, particularly those that are not followed by recrystallization, produce partially oriented microstructures. It is not uncommon to see partially oriented grain structures in metals that have been cold deformed. The presence and definition of the orientation (Ref 3) can only be detected and defined by


examination of specimens on the principal planes. Certain microstructures have a high degree of preferred directionality on the plane of polish or within the sample volume. A structure is completely oriented if all of its elements are parallel. Partially oriented systems are those with features having both random and oriented elements. Once the orientation axis has been defined, a plane parallel to the orientation axis can be used for measurements. PL measurements are used to assess the degree of orientation of lines or surfaces. Several approaches can be used to assess the degree of orientation of a microstructure. For single-phase grain structures, a simple procedure is to make PL measurements parallel and perpendicular to the deformation axis on a longitudinally oriented specimen, as the orientation axis is usually the longitudinal direction. The degree of grain elongation is the ratio of perpendicular to parallel PL values; that is, PL⊥/PL∥. Another very useful procedure is to calculate the degree of orientation, Ω, using these PL values:

Ω = (PL⊥ − PL∥)/(PL⊥ + 0.571 PL∥)   (Eq 7)

To illustrate these measurements, consider a section of low-carbon steel sheet, cold rolled to reductions in thickness of 12, 30, and 70%. PL⊥ and PL∥ measurements were made using a grid with parallel straight test lines on a longitudinal section from each of four specimens (one specimen for each of the three reductions, plus one specimen of as-received material). The results are given in Table 2. As shown, cold working produces an increased orientation of the grains in the longitudinal direction.

Table 2 Degrees of grain orientation for four samples of low-carbon steel sheet

Sample | PL⊥(a) | PL∥(a) | PL⊥/PL∥ | Ω, %
As-received | 114.06 | 98.86 | 1.15 | 8.9
Cold rolled, 12% reduction | 126.04 | 75.97 | 1.66 | 29.6
Cold rolled, 30% reduction | 167.71 | 60.60 | 2.77 | 52.9
Cold rolled, 70% reduction | 349.40 | 34.58 | 10.1 | 85.3

(a) Number of grain-boundary intersections per millimeter
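The Table 2 entries follow directly from Eq 7 and the quoted PL counts. The sketch below recomputes them as a check; the function name is mine, and the data are simply the values listed in Table 2.

```python
def degree_of_orientation(pl_perp, pl_par):
    """Eq 7: Omega = (PL_perp - PL_par) / (PL_perp + 0.571 * PL_par)."""
    return (pl_perp - pl_par) / (pl_perp + 0.571 * pl_par)

# PL counts (intersections per mm) from Table 2.
samples = [
    ("As-received",   114.06, 98.86),
    ("12% reduction", 126.04, 75.97),
    ("30% reduction", 167.71, 60.60),
    ("70% reduction", 349.40, 34.58),
]
for name, perp, par in samples:
    ratio = perp / par
    omega = 100.0 * degree_of_orientation(perp, par)
    print(f"{name}: PL_perp/PL_par = {ratio:.2f}, Omega = {omega:.1f}%")
```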

Spacing. The spacing between second-phase particles or constituents is a very structure-sensitive parameter influencing strength, toughness, and ductile fracture behavior. NL measurements are more easily used to study the spacing of two-phase structures than PL measurements. Perhaps the commonest spacing measurement is that of the interlamellar spacing of eutectoid (such as pearlite) or eutectic structures (Ref 5). The true interlamellar spacing, σt, is difficult to measure, but the mean random spacing, σr, is readily assessable and is directly related to the mean true spacing:

σt = σr/2   (Eq 8)

The mean random spacing is determined by placing a test grid consisting of one or more concentric circles on the pearlite lamellae in an unbiased manner. The number of interceptions of the carbide lamellae with the test line(s) is counted and divided by the true length of the test line to obtain NL. The reciprocal of NL is the mean random spacing:

σr = 1/NL   (Eq 9)

The mean true spacing, σt, is 1/2 σr. To make accurate measurements, the lamellae must be clearly resolved; therefore, use of transmission electron microscope (TEM) replicas is quite common. NL measurements also are used to measure the interparticle spacing in a two-phase alloy, such as the spacing between carbides or intermetallic precipitates. The mean center-to-center planar spacing between particles over 360°, σ, is the reciprocal of NL. For the second-phase particles in the idealized two-phase structure shown in Fig. 1, a count of the number of particles intercepted by the horizontal and vertical test lines yields 22.5 interceptions. The total line length is 1743 µm; therefore, NL = 0.0129 per µm, or 12.9 per mm, and σ is 77.5 µm, or 0.0775 mm. The mean edge-to-edge distance between such particles over 360°, known as the mean free path, λ, is determined in like manner but requires knowledge of the volume fraction of the particles. The mean free path is calculated from:

λ = (1 − VV)/NL   (Eq 10)

For the structure illustrated in Fig. 1, the volume fraction of the particles was estimated as 0.147. Therefore, λ is 66.1 µm, or 0.066 mm. The mean lineal intercept distance, lα, for these particles is determined by:

lα = σ − λ   (Eq 11)

For this example, lα is 11.4 µm, or 0.0114 mm. This value is smaller than the caliper diameter of the particles because the test lines intercept the particles at random, not only at the maximum dimension. The calculated mean lineal intercept length for a circle with a 15 µm diameter is 11.78 µm. Again, stereological field measurements can be used to determine a characteristic dimension of individual features without performing individual particle measurements.
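The spacing chain of Eq 9 to 11 can be checked in the same way. This sketch simply reruns the Fig. 1 numbers quoted in the text; the variable names are illustrative only.

```python
# Fig. 1 model: interceptions counted on the horizontal and vertical test lines.
interceptions = 22.5
test_line_um  = 1743.0
vv            = 0.147            # volume fraction of the particles

n_l     = interceptions / test_line_um   # interceptions per micron of test line
sigma   = 1.0 / n_l                      # mean center-to-center spacing (reciprocal of N_L)
lam     = (1.0 - vv) / n_l               # Eq 10: mean free path
l_alpha = sigma - lam                    # Eq 11: mean lineal intercept of the particles
print(round(sigma, 1), round(lam, 1), round(l_alpha, 1))   # ~77.5, ~66.1, ~11.4 um
```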


Grain Size. Perhaps the most common quantitative microstructural measurement is that of the grain size of metals, alloys, and ceramic materials. Numerous procedures have been developed to estimate grain size; these procedures are summarized in detail in ASTM E 112 and described in Ref 6 to 9. Several types of grain sizes can be measured: ferrite grain size, austenite grain size, and prior-austenite grain size. Each type presents particular problems associated with revealing these boundaries so that an accurate rating can be obtained (Ref 6, 9). While this relates specifically to steels, ferrite grain boundaries are identical (geometrically) to grain boundaries in any alloy that does not exhibit annealing twins, while austenite grain boundaries are identical (geometrically) to grain boundaries in any alloy that exhibits annealing twins. Therefore, charts depicting ferrite grains in steel can be used to rate grain size in metals such as aluminum, chromium, and titanium, while charts depicting austenite grains can be used to rate grain size in metals such as copper, brass, cobalt, and nickel. A variety of parameters are used to measure grain size:

O Average grain diameter, d
O Average grain area, A
O Number of grains per unit area, NA
O Average intercept length, L
O Number of grains intercepted by a line of fixed length, N
O Number of grains per unit volume, NV
O Average grain volume, V

These parameters can be related to the ASTM grain size number, G. The ASTM grain-size scale was established using the English system of units, but no difficulty is introduced by using metric measurements, which are more common. The ASTM grain size equation is:

n = 2^(G−1)   (Eq 12)

where n is the number of grains per square inch at 100×. Multiplying n by 15.5 yields the number of grains per square millimeter, NA, at 1×. The metric grain size number, GM, which is used by the International Organization for Standardization (ISO) and many other countries, is based upon the number of grains per square millimeter, m, at 1×, and uses the following formula:

m = 8(2^GM)   (Eq 13)

The metric grain size number, GM, is slightly lower than the ASTM grain size number, G, for the same structure:

G = GM + 0.046   (Eq 14)

This very small difference usually can be ignored (unless the value is near a specification limit).
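The conversions implied by Eq 12 to 14 are simple to code. The sketch below (the function names are mine) round-trips NA and G and applies the metric offset; it assumes NA is expressed as grains per square millimeter at 1×.

```python
import math

def na_from_g(g):
    """Eq 12: n = 2**(G - 1) grains per in.^2 at 100x; times 15.5 gives grains per mm^2 at 1x."""
    return 15.5 * 2 ** (g - 1)

def g_from_na(na):
    """Invert Eq 12: recover G from grains per mm^2 at 1x."""
    return math.log2(na / 15.5) + 1

def g_from_gm(gm):
    """Eq 14: ASTM G is slightly larger than the metric grain size number GM."""
    return gm + 0.046

print(round(na_from_g(8.0), 1))        # ~1984 grains/mm^2 for G = 8
print(round(g_from_na(1984.0), 2))     # ~8.0
print(g_from_gm(8.0))                  # 8.046
```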


Planimetric Method. The oldest procedure for measuring the grain size of metals is the planimetric method, introduced by Zay Jeffries in 1916 based upon earlier work by Albert Sauveur. A circle of known size (generally 79.8 mm diameter, or 5000 mm2 area) is drawn on a micrograph or used as a template on a projection screen. The number of grains completely within the circle, n1, and the number of grains intersecting the circle, n2, are counted. For accurate counts, the grains must be marked off as they are counted, which makes this method slow. The number of grains per square millimeter at 1×, NA, is determined by:

NA = f (n1 + n2/2)   (Eq 15)

where f is the magnification squared divided by 5000 (the circle area). The average grain area, A, in square millimeters, is:

A = 1/NA   (Eq 16)

and the average grain diameter, d, in millimeters, is:

d = A^1/2 = 1/(NA)^1/2   (Eq 17)

The ASTM grain size, G, can be found by using the tables in ASTM E 112 or by the following equation:

G = [3.322 (log NA) − 2.95]   (Eq 18)

Figure 2 illustrates the planimetric method. Expressing grain size in terms of d is discouraged by ASTM Committee E-4 on Metallography because the calculation implies that grain cross sections are square in shape, which they are not. In theory, grains intercepted by a straight test line are, on average, bisected by the line, so counting each as one-half grain introduces no bias. If the test line is curved, however, bias is introduced (Ref 10). This bias decreases as the number of grains within the circle increases. If only a few grains are within the circle, the error is large; for example, there is about a 10% error if only 10 grains are within the circle. ASTM E 112 recommends adjusting the magnification so that about 50 grains are within the test circle. Under this condition, the error is reduced to about 2% (Ref 10). This degree of error is not excessive. If the magnification is decreased or the circle is enlarged to encompass more grains, for example, 100 or more, obtaining an accurate count of the grains inside the test circle becomes very difficult. There is a simple alternative to this problem, and one that is amenable to image analysis: if the test pattern is a square or rectangle, rather than a circle, the bias can be easily eliminated.


Counting of the grains intersecting the test line, n2, however, is slightly different. In this method, grains will intercept the four corners of the square or rectangle. Statistically, the portions intercepting the four corners would be parts of four such contiguous test patterns. So, when counting n2, the grains intercepting the four corners are not counted but are weighted collectively as 1. Count all of the other grains intercepting the test square or rectangle (of known size). Equation 15 is modified as follows:

NA = f (n1 + n2/2 + 1)   (Eq 19)

where n1 is still the number of grains completely within the test figure (square or rectangular grid), n2 is the number of grains intercepting the sides of the square or rectangle but not the four corners, the 1 accounts for the corner grain interceptions, and f is the magnification squared divided by the area of the square or rectangular grid.
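Both counting rules are easy to automate once n1 and n2 have been tallied. The sketch below uses hypothetical counts and function names of my own choosing; it implements Eq 15 for the Jeffries circle and Eq 19 for a rectangular test figure.

```python
def na_jeffries(n_inside, n_intersecting, magnification, circle_area_mm2=5000.0):
    """Eq 15: N_A = f * (n1 + n2/2), with f = M**2 / test-circle area on the image."""
    f = magnification ** 2 / circle_area_mm2
    return f * (n_inside + n_intersecting / 2.0)

def na_rectangle(n_inside, n_edge, magnification, rect_area_mm2):
    """Eq 19: rectangular test figure; corner grains are excluded from n2 and
    replaced by the +1 term."""
    f = magnification ** 2 / rect_area_mm2
    return f * (n_inside + n_edge / 2.0 + 1.0)

# Hypothetical counts at 100x on a 5000 mm^2 circle and a 120 x 90 mm image field.
print(round(na_jeffries(50, 20, 100.0), 1))             # 120.0 grains per mm^2
print(round(na_rectangle(48, 22, 100.0, 120 * 90), 1))  # ~55.6 grains per mm^2
```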

Fig. 2 The ferrite grain size of a carbon sheet steel (shown at 500×, 2% nital etch) was measured with images at 200, 500, and 1000× using the Jeffries planimetric method (79.8 mm diameter test circle). This produced NA values (using Eq 15) of 2407.3, 2674.2, and 3299 grains per mm2 (ASTM G values of 8.28, 8.43, and 8.73) for the 200, 500, and 1000× images, respectively. The planimetric method was also performed on these three images using the full rectangular image field and the alternative grain counting method (Eq 19). This produced NA values of 2400.4, 2506.6, and 2420.2 grains per mm2 (ASTM G values of 8.28, 8.34, and 8.29, respectively). This experiment shows that the standard planimetric method is influenced by the number of grains counted (n1 was 263, 39, and 10 for the 200, 500, and 1000× images, respectively). In practice, more than one field should be evaluated due to the potential for field-to-field variability.


Intercept Method. The intercept method, developed by Emil Heyn in 1904, is faster than the planimetric method because the micrograph or template does not require marking to obtain an accurate count. ASTM E 112 recommends use of a template consisting of three concentric circles with a total line length of 500 mm (template available from ASTM). The template is placed over the grain structure without bias, and the number of grain-boundary intersections, P, or the number of grains intercepted, N, is counted. Dividing P or N by the true line length, L, gives PL or NL, which are identical for a single-phase grain structure. It is usually easier to count grain-boundary intersections for single-phase structures. If a grain boundary is tangent to the line, it is counted as half an intersection. If a triple-point line junction is intersected, it is counted as 1.5 or 2. The latter is preferred because the small diameter of the inner circle introduces a slight bias to the measurement that is offset by weighting a triple-line intersection as 2 hits. The mean lineal intercept length, l, determined as shown in Eq 4, is a measure of ASTM grain size. The value l is smaller than the maximum grain diameter because the test lines do not intersect each grain at its maximum breadth. The ASTM grain size, G, can be determined by use of the tables in ASTM E 112 or can be calculated from:

G = [−6.644 (log l) − 3.288]   (Eq 20)

where l is in millimeters. Figure 3 illustrates the intercept method for a single-phase alloy.

Nonequiaxed Grains. Ideally, nonequiaxed grain structures should be measured on the three principal planes: longitudinal, planar, and transverse. However, in practice, measurements on any two of the three are adequate. For such structures, the intercept method is preferred, but the test grid should consist of a number of straight, parallel test lines (rather than circles) of known length oriented as described subsequently. Because the ends of the straight lines generally end within grains, these interceptions are counted as half-hits. Three mutually perpendicular orientations are evaluated using grain-interception counts:

O NLl—parallel to the grain elongation, longitudinal or planar surface
O NLt—perpendicular to the grain elongation (through-thickness direction), longitudinal or transverse surface
O NLP—perpendicular to the grain elongation (across width), planar or transverse surface

The average NL value is obtained from the cube root of the product of the three directional NL values. G is determined by reference to the tables in ASTM E 112 or by use of Eq 20 (l is the reciprocal of NL; see Eq 4).
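The intercept calculation itself is equally short. The following sketch assumes a 500 mm test-line template and a hypothetical grain count (function names are mine); Eq 20 then converts the mean intercept to an ASTM G value.

```python
import math

def mean_intercept_mm(n_intercepted, template_length_mm, magnification):
    """Heyn method: l = true test-line length / number of grains intercepted (Eq 4 form)."""
    true_length_mm = template_length_mm / magnification
    return true_length_mm / n_intercepted

def astm_g(l_mm):
    """Eq 20: G = -6.644 * log10(l, in mm) - 3.288."""
    return -6.644 * math.log10(l_mm) - 3.288

# Hypothetical count: 70 grains intercepted by the 500 mm three-circle grid at 500x.
l_bar = mean_intercept_mm(70, 500.0, 500.0)
print(round(l_bar * 1000, 1), "um; G =", round(astm_g(l_bar), 2))   # ~14.3 um, G ~ 8.97
```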


Fig. 3 The ferrite grain size of the specimen analyzed using the Jeffries method in Fig. 2 (shown at 200× magnification, 2% nital etch) was measured by the intercept method with a single test circle (79.8 mm diameter) at 200, 500, and 1000× magnifications. This yielded mean lineal intercept lengths of 17.95, 17.56, and 17.45 µm (for the 200, 500, and 1000× images, respectively), corresponding to ASTM G values of 8.2, 8.37, and 8.39, respectively. These are in reasonably good agreement. In practice, more than one field should be evaluated due to the field-to-field variability of specimens.

Two-Phase Grain Structures. The grain size of a particular phase in a two-phase structure requires determination of the volume fraction of the phase of interest, by point counting, for example. The minor, or second, phase is point counted, and the volume fraction of the major, or matrix, phase is determined by the difference. Next, a circular test grid is applied to the microstructure without bias, and the number of grains of the phase of interest intercepted by the test line, Nα, is counted. The mean lineal intercept length of the α grains, lα, is determined by:

lα = (VV)(L/M)/Nα   (Eq 21)

where L is the line length and M is the magnification. The ASTM grain size number can be determined from the tables in ASTM E 112 or by use of Eq 20. The method is illustrated in Fig. 4.
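Equation 21 can be checked against the Fig. 4 example. The sketch below uses the values quoted in the caption (VV = 0.452, a 79.8 mm diameter test circle, 500× magnification, 27 intercepted α grains); the function name is mine.

```python
import math

def mean_alpha_intercept_um(vv_alpha, line_length_mm, magnification, n_alpha):
    """Eq 21: l_alpha = V_V * (L / M) / N_alpha, returned in microns."""
    true_length_um = (line_length_mm / magnification) * 1000.0
    return vv_alpha * true_length_um / n_alpha

l_alpha = mean_alpha_intercept_um(0.452, math.pi * 79.8, 500.0, 27)
g = -6.644 * math.log10(l_alpha / 1000.0) - 3.288        # Eq 20, with l_alpha in mm
print(round(l_alpha, 1), "um; ASTM G =", round(g, 1))    # ~8.4 um, G ~ 10.5
```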


Fig. 4 Determination of α-phase grain size in the two-phase microstructure of heat treated Ti-8Al-1Mo-1V (1010 °C, or 1850 °F, air cooled (AC); 593 °C, or 1100 °F, for 8 h, AC), etched using Kroll’s reagent; original magnification 500×. The volume fraction of alpha grains, determined by point counting (α in the α-β eutectoid constituent is not included), is 0.452. The number of proeutectoid α grains, Nα, intercepted by a 79.8 mm diameter test circle is 27. From Eq 21, the mean lineal intercept length in the α phase is 8.4 µm (ASTM G = 10.5). In practice, more than one field should be evaluated.

Inclusion Content

Assessment of inclusion type and content commonly is performed on high-quality steels. Production evaluations use comparison chart methods such as those described in ASTM E 45, SAE J422a, ISO 4967, and the German standard SEP 1570 (DIN 50602). In these chart methods, the inclusion pictures are defined by type and graded by severity (amount). Either qualitative procedures (worst rating of each type observed) or quantitative procedures (all fields in a given area rated) are used. Only the


Japanese standard JIS-G-0555 uses actual volume fraction measurements for the rating of inclusion content, although the statistical significance of the data is questionable due to the limited number of counts required. Manual measurement of the volume fraction of inclusions requires considerable effort to obtain acceptable measurement accuracy due to the rather low volume fractions usually encountered (Ref 11). When the volume fraction is below 0.02, or 2%, which is the case for inclusions (even in free-machining steels), acceptable relative accuracies (Eq 22) cannot be obtained by manual point counting without a vast amount of counting time (Ref 11). Consequently, image analyzers are extensively used to overcome this problem. Image analyzers separate the oxide and sulfide inclusions on the basis of their gray-level differences. By using automated stage movement and autofocusing, enough field measurements can be made in a relatively short time to obtain reasonable statistical precision. Image analysis also is used to measure the length of inclusions and to determine stringer lengths. Two image analysis-based standards have been developed: ASTM E 1122 (Ref 12) and E 1245 (Ref 13–16). E 1122 produces Jernkontoret (JK) ratings using image analysis, which overcome most of the weaknesses of manual JK ratings. E 1245 is a stereological approach defining, for oxides and sulfides, the volume fraction (VV), number per unit area (NA), average length, average area, and the mean free path (spacing in the through-thickness direction). These statistical data are easily incorporated into a database, and mean values and standard deviations can be developed. This allows comparison of data from different tests using statistical methods to determine if the differences between the measurements are valid at a particular confidence limit (CL).
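As one illustration of such a comparison, the sketch below applies Welch’s t-test to hypothetical per-field oxide volume fractions from two heats; the data, the 0.05 significance threshold, and the use of SciPy are assumptions made for the example, not part of E 1245.

```python
from scipy import stats

# Hypothetical oxide volume fractions (%) measured field by field on two heats.
heat_a = [0.021, 0.018, 0.025, 0.020, 0.023, 0.019]
heat_b = [0.031, 0.027, 0.035, 0.029, 0.033, 0.028]

t_stat, p_value = stats.ttest_ind(heat_a, heat_b, equal_var=False)  # Welch's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The difference between the heats is significant at the 95% confidence level.")
```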

Measurement Statistics

It is necessary to make stereological measurements on a number of fields and average the results. Measurements on a single field may not be representative of bulk material conditions, because few (if any) materials are sufficiently homogeneous. Calculation of the standard deviation of field measurements provides a good indication of measurement variability, and it can be done quite simply with an inexpensive pocket calculator. A further refinement of statistical analysis is calculation of the 95% CL based on the standard deviation, s, of the field measurements. The 95% CL is calculated from the expression:

95% CL = ts/N^1/2   (Eq 22)


where t is the Student’s t value, which varies with N, the number of measurements. Many users standardize on a single value of t, 2, for calculations, irrespective of N. The measurement value is expressed as the average, X, ± the 95% CL value. This means that if the test were conducted 100 times, the average would fall within X ± the 95% CL in 95 of the tests. Next, it is possible to calculate the relative accuracy (% RA) of the measurement by:

% RA = (95% CL/X) × 100   (Eq 23)

Usually, a 10% relative accuracy is considered to be adequate. DeHoff (Ref 17) developed a simple formula to determine how many fields, N, must be measured to obtain a specific desired degree of relative accuracy at the 95% CL:

N = [(200 s)/(% RA · X)]^2   (Eq 24)
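Equations 22 to 24 chain together naturally. The sketch below (hypothetical point-fraction data; t is fixed at 2, as the text notes many users do) computes the 95% CL, the percent relative accuracy, and the approximate number of fields needed for a 10% relative accuracy; the function names are mine.

```python
import statistics

def ci95(values, t=2.0):
    """Eq 22: 95% CL = t * s / sqrt(N)."""
    return t * statistics.stdev(values) / len(values) ** 0.5

def percent_ra(values, t=2.0):
    """Eq 23: %RA = 100 * (95% CL) / mean."""
    return 100.0 * ci95(values, t) / statistics.mean(values)

def fields_needed(values, target_ra=10.0):
    """Eq 24 (DeHoff): N = (200 * s / (%RA * mean))**2 for the desired %RA."""
    s, mean = statistics.stdev(values), statistics.mean(values)
    return (200.0 * s / (target_ra * mean)) ** 2

point_fractions = [0.14, 0.17, 0.12, 0.16, 0.15, 0.13, 0.18, 0.15]
print(f"mean = {statistics.mean(point_fractions):.3f} +/- {ci95(point_fractions):.3f}")
print(f"%RA = {percent_ra(point_fractions):.1f}%; fields for 10% RA ~ {fields_needed(point_fractions):.0f}")
```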

Image Analysis

The measurements described in this brief review, and other measurements not discussed, can be made by use of automatic image analyzers. These devices rely primarily on the gray level of the image on the television monitor to detect the desired features. In some instances, complex image editing can be used to aid separation. Some structures, however, cannot be separated completely, which requires the use of semiautomatic digital tracing devices to improve measurement speed.

Conclusions

Many of the simple stereological counting measurements and simple relationships based on these parameters have been reviewed. More complex measurements are discussed in Chapters 5 to 8. The measurements described are easy to learn and use. Their application enables the metallographer to discuss microstructures in a more quantitative manner and reveals relationships between the structure and properties of the material.


References

1. G.A. Moore, Is Quantitative Metallography Quantitative?, Application of Modern Metallographic Techniques, STP 480, ASTM, 1970, p 3–48
2. E.E. Underwood, Applications of Quantitative Metallography, Mechanical Testing, Vol 8, Metals Handbook, 8th ed., American Society for Metals, 1973, p 37–47
3. E.E. Underwood, Quantitative Stereology, Addison-Wesley, 1970
4. J.E. Hilliard and J.W. Cahn, An Evaluation of Procedures in Quantitative Metallography for Volume-Fraction Analysis, Trans. AIME, Vol 221, April 1961, p 344–352
5. G.F. Vander Voort and A. Roósz, Measurement of the Interlamellar Spacing of Pearlite, Metallography, Vol 17, Feb 1984, p 1–17
6. H. Abrams, Grain Size Measurements by the Intercept Method, Metallography, Vol 4, 1971, p 59–78
7. G.F. Vander Voort, Grain Size Measurement, Practical Applications of Quantitative Metallography, STP 839, ASTM, 1984, p 85–131
8. G.F. Vander Voort, Examination of Some Grain Size Measurement Problems, Metallography: Past, Present and Future, STP 1165, ASTM, 1993, p 266–294
9. G.F. Vander Voort, Metallography: Principles and Practice, ASM International, 1999
10. S.A. Saltykov, Stereometric Metallography, 2nd ed., Metallurgizdat, Moscow, 1958
11. G.F. Vander Voort, Inclusion Measurement, Metallography as a Quality Control Tool, Plenum Press, New York, 1980, p 1–88
12. G.F. Vander Voort and J.F. Golden, Automating the JK Inclusion Analysis, Microstructural Science, Vol 10, Elsevier North-Holland, New York, 1982, p 277–290
13. G.F. Vander Voort, Measurement of Extremely Low Inclusion Contents by Image Analysis, Effect of Steel Manufacturing Processes on the Quality of Bearing Steels, STP 987, ASTM, 1988, p 226–249
14. G.F. Vander Voort, Characterization of Inclusions in a Laboratory Heat of AISI 303 Stainless Steel, Inclusions and Their Influence on Materials Behavior, ASM International, 1988, p 49–64
15. G.F. Vander Voort, Computer-Aided Microstructural Analysis of Specialty Steels, Mater. Charact., Vol 27 (No. 4), Dec 1991, p 241–260
16. G.F. Vander Voort, Inclusion Ratings: Past, Present and Future, Bearing Steels Into the 21st Century, STP 1327, ASTM, 1998, p 13–26
17. R.T. DeHoff, Quantitative Metallography, Techniques of Metals Research, Vol II, Part 1, Interscience, New York, 1968, p 221–253


CHAPTER 3

Specimen Preparation for Image Analysis

George F. Vander Voort, Buehler Ltd.

SPECIMEN PREPARATION is an extremely important precursor to image analysis work. In fact, more than 90% of the problems associated with image analysis work center on preparation. Once a well prepared specimen is obtained and the phase or constituent of interest is revealed selectively with adequate contrast, the actual image-analysis (IA) measurement is generally quite simple. Experience has demonstrated that getting the required image quality to the microscope is by far the biggest problem. Despite this, many treat the specimen preparation stage as a trivial exercise. However, the quality of the data is primarily a function of specimen preparation. This can be compared to the classic computer adage, “garbage in, garbage out.”

Sampling

The specimen or specimens being prepared must be representative of the material to be examined. Random sampling, as advocated by statisticians, rarely can be performed by metallographers. An exception is fastener testing, where a production lot can be randomly sampled. However, a large forging or casting, for example, cannot be sampled randomly because the component might be rendered useless commercially. Instead, systematically selected test locations are widely used, based on sampling convenience. Many material specifications dictate the sampling procedure. In failure studies, specimens usually are removed to study the origin of failure, examine highly stressed areas or secondary cracks, and so forth. This, of course, is not random sampling. It is rare to


encounter excessive sampling, because testing costs usually are closely controlled. Inadequate sampling is more likely to occur. In the vast majority of cases, a specimen must be removed from a larger mass and then prepared for examination. This requires application of one or more sectioning methods. For example, in a manufacturing facility, a piece may be cut from incoming metal barstock using a power hacksaw or an abrasive cutter used without a coolant. This sample is sent to the laboratory, where it must be cut smaller to obtain a size more convenient for preparation. All sectioning processes produce damage; some methods, such as flame cutting and dry abrasive cutting, produce extreme amounts of damage. Traditional laboratory sectioning procedures using abrasive cut-off saws introduce a minor amount of damage that varies with the material being cut and the thermal and mechanical history of the material. Generally, it is unwise to use the sample face from the original cut made in the shop as the starting point for metallographic preparation, because the depth of damage at this location can be quite extensive. This damage must be removed if the true structure is to be examined. However, the preparation sequence must be carefully planned and performed, because abrasive grinding and polishing steps also produce damage (depth of damage decreases with decreasing abrasive size), and preparation-induced artifacts can be interpreted as structural elements. The preparation method should be as simple as possible, yield consistent, high-quality results in a minimum of time and at minimum cost, and must be reproducible. The prepared specimen should reveal the true structure and have the following characteristics so that the features of interest can be segmented and measured:

O Deformation induced by sectioning, grinding, and polishing must be removed or be shallow enough to be removed by the etchant.
O Coarse grinding scratches must be removed, although very fine polishing scratches often do not interfere with image segmentation.
O Pullout, pitting, cracking of hard particles, smear, and so forth must be avoided.
O Relief (i.e., excessive surface height variations between structural features of different hardness) must be minimized.
O The surface must be flat, particularly at edges (if they are of interest).
O Coated or plated surfaces must be kept flat to be able to precisely measure width.
O Specimens must be cleaned adequately between preparation steps, after preparation, and after etching (avoid staining).
O The etchant chosen must be selective in its action (that is, it must reveal only the phase or constituent of interest, or at least produce strong contrast or color differences between two or more phases present), produce crisp, clear phase or grain boundaries, and produce strong contrast.


Many metallographic image analysis studies require more than one specimen. A classic case is evaluation of the inclusion content of steel. One specimen is not representative of the entire lot of steel, so sampling becomes important. ASTM standards E 45, E 1122, and E 1245 give advice on sampling procedures for inclusion studies. To study grain size, it is common to use a single specimen from a lot. This may or may not be adequate, depending on the nature of the lot. Good engineering judgment should dictate sampling. In many cases, a product specification may rigorously define the procedure. Because grain structure is not always equiaxed, it can be misleading to select only a plane oriented perpendicular to the deformation axis (transverse plane) for such a study. If grains are elongated due to processing, the transverse plane usually shows the grains as equiaxed in shape and smaller in diameter than the true grain size. To study the effect of deformation on the grain shape of wrought metals, a minimum of two sections is required: one perpendicular to, and the other parallel to, the direction of deformation. Techniques used to study anisotropic structures in metals incorporate unique vertical sampling procedures, such as in the trisector method (Ref 1–5). Preparation of metallographic specimens (Ref 6–8) generally requires five major operations: (1) sectioning, (2) mounting (optional), (3) grinding, (4) polishing, and (5) etching (optional).

Sectioning

Bulk samples for sectioning may be removed from larger pieces or parts using methods such as core drilling, band and hack sawing, flame cutting, and so forth. When these techniques must be used, the microstructure will be heavily altered in the area of the cut. It is necessary to resection the piece in the laboratory using an abrasive-wheel cutoff system to establish the location of the desired plane of polish. In the case of relatively brittle materials, sectioning may be accomplished by fracturing the specimen at the desired location.

Abrasive-Wheel Cutting. By far the most widely used sectioning devices in metallographic laboratories are abrasive cut-off machines (Fig. 1). All abrasive-wheel sectioning should be done wet; direct an ample flow of water containing a water-soluble oil additive for corrosion protection into the cut. Wet cutting produces a smooth surface finish and, most importantly, guards against excessive surface damage caused by overheating. Abrasive wheels should be selected according to the recommendations of the manufacturer. In general, the bond strength of the material that holds the abrasive together in the wheel must be decreased with increasing hardness of the workpiece to be cut, so the bond material can break down and release old, dulled abrasive and introduce new, sharp


abrasive to the cut. If the bond strength is too high, burning results, which severely damages the underlying microstructure. The use of proper bond strength eliminates the production of burnt surfaces. Bonding material may be a polymeric resin, a rubber-based compound, or a mixture of the two. In general, rubber provides the lowest-bond-strength wheels, which are used to cut the most difficult materials. Such cuts are characterized by an odor that can become rather strong; in such cases, there should be provisions to properly exhaust and ventilate the saw area. Specimens must be fixtured securely during cutting, and cutting pressure should be applied carefully to prevent wheel breakage. Some materials, such as commercial purity (CP) titanium (Fig. 2), are more prone to sectioning damage than many other materials.

Precision Saws. Precision saws (Fig. 3) commonly are used in metallographic preparation and may be used to section materials intended for IA. As the name implies, this type of saw is designed to make very precise cuts. They are smaller than the typical laboratory abrasive cut-off saw and use much smaller blades, typically from about 76 to 203 mm (3 to 8 in.) in diameter. These blades are most commonly of the nonconsumable type, made of copper-base alloys and having diamond or cubic boron nitride abrasive bonded to the periphery of the blade. Consumable blades incorporate alumina or silicon carbide abrasives with a rubber bond and only work on a machine that operates at speeds higher than 1500 rpm. These blades are much thinner than abrasive cutting wheels. The load applied during cutting is much less than that used for abrasive cutting,

Fig. 1 Abrasive cut-off machine used to section a specimen for metallographic preparation


and, therefore, much less heat is generated during cutting, and the depth of damage is very shallow. While small pieces that would normally be sectioned with an abrasive cutter can be cut with a precision saw, cutting time is appreciably greater, but the depth of damage is much less. These saws are widely used to section sintered carbides, ceramic materials, thermally sprayed coatings, printed circuit boards, and electronic components.

Fig. 2 Damage to a commercially pure titanium metallographic specimen resulting from sectioning using an abrasive cut-off wheel. The specimen was etched using modified Weck’s reagent.

Fig. 3 A precision saw used for precise sectioning of metallographic specimens


Specimen Mounting

The primary purpose of mounting metallographic specimens is to provide convenience in handling specimens of difficult shapes or sizes during the subsequent steps of metallographic preparation and examination. A secondary purpose is to protect and preserve outer edges or surface defects during metallographic preparation. Care must be exercised when selecting the mounting method so that it is in no way injurious to the microstructure of the specimen. The most likely sources of injurious effects are mechanical deformation and heat.

Clamp Mounting. Clamps offer a quick, convenient method to mount metallographic cross sections in the form of thin sheets, where several specimens can be clamped in sandwich form. Edge retention is excellent when done properly, and there is no problem with seepage of fluids from crevices between specimens. The outer clamp edges should be beveled to minimize damage to polishing cloths. Improper use of clamps leaves gaps between specimens, allowing fluids and abrasives to become entrapped and seep out, obscuring edges. Ways to minimize this problem include proper tightening of clamps, using plastic spacers between specimens, and coating specimen surfaces with epoxy before tightening. A disadvantage of clamps is the difficulty encountered in placing specimen identification information on the clamp.

Compression Mounting. The most common mounting method uses pressure and heat to encapsulate the specimen within a thermosetting or thermoplastic mounting material. Common thermosetting resins include phenolic (Bakelite), diallyl phthalate, and epoxy, while methyl methacrylate is the most commonly used thermoplastic mounting resin. Both thermosetting and thermoplastic materials require heat and pressure during the molding cycle. After curing, mounts made of thermosetting materials may be ejected from the mold at the maximum molding temperature, while mounts made of thermoplastic resins must be cooled to ambient temperature under pressure. However, cooling thermosetting resins under pressure to at least as low as 55 °C (130 °F) before ejection reduces shrinkage-gap formation. A thermosetting resin mount should never be water cooled after hot ejection from the molding temperature; this causes the metal to pull away from the resin, producing shrinkage gaps that promote poor edge retention (see Fig. 4). Thermosetting epoxy resins provide the best edge retention of these resins and are less affected by hot etchants than phenolic resins. Mounting presses vary from simple laboratory jacks with a heater and mold assembly to fully automated devices, as shown in Fig. 5. Compression mounting resins have the advantage that a fair amount of information can be scribed on the backside with a vibratory pencil-engraving device for specimen identification.


Castable Resins for Mounting. Cold mounting materials require neither pressure nor external heat and are recommended for mounting heat-sensitive and/or pressure-sensitive specimens. Acrylic resins are the most widely used castable resins due to their low cost and fast curing time. However, shrinkage is somewhat of a problem. Epoxy resins, although more expensive than acrylics, commonly are used because epoxy physically adheres to specimens and can be drawn into cracks and pores, particularly if a vacuum impregnation chamber is used. Therefore, epoxies are very suitable for mounting fragile or friable specimens and corrosion or oxidation specimens. Dyes or fluorescent agents are added to some epoxies to study porous specimens such as thermal spray coated specimens. Most epoxies are cured at room temperature, with curing times varying from 2 to 20 h. Some can be cured in less time at slightly

Fig. 4 Poor edge retention due to a shrinkage gap between the metal specimen and the resin mount, caused by water cooling a hot-ejected thermosetting resin mount. Specimen is carburized AISI 8620 alloy steel, etched using 2% nital.

Fig. 5 Automated mounting press used to encapsulate a metallographic specimen in a resin mount


elevated temperatures; the higher temperature must not adversely affect the specimen. Castable resins are not as convenient as compression mounts for scribing identification information on the mount. Edge Preservation. Edge preservation is a long-standing metallographic problem, the solution of which has resulted in the development and promotion of many “tricks” (most pertaining to mounting, but some to grinding and polishing). These methods include the use of backup material in the mount, the application of coatings to the surfaces before mounting, and the addition of a filler material to the mount. Plating of a compatible metal on the surface to be protected (electroless nickel is widely used) generally is considered to be the most effective procedure. However, image contrast at an interface between a specimen and the electroless nickel may be inadequate in certain cases. Figure 6 shows the surface of a specimen of AISI type 1215 free-machining steel (UNS G12150) that was salt bath nitrided. Both specimens (one plated with electroless nickel) are mounted in Epomet (Buehler Ltd., Lake Bluff, IL) thermosetting epoxy resin. For the plated specimen, it is hard to discern where the nitrided layer stops, because of poor image contrast between the nickel and nitrided surface (Fig. 6a). The problem does not exist for the unplated specimen (Fig. 6b). Edge-preservation problems have been reduced with advancements in equipment. For example, mounting presses now can cool the specimen to near ambient temperature under pressure, producing much tighter mounts. Gaps that form between specimen and resin are a major contributor to edge rounding, as shown in Fig. 4. Staining at shrinkage gaps also may be

Staining at shrinkage gaps also may be a problem, as shown in Fig. 7. Semiautomatic and automatic grinding/polishing equipment increases surface flatness and edge retention over that obtained using manual (hand) preparation. However, to obtain the best results, the position of the specimen holder relative to the platen must be adjusted so the outer edge of the specimen holder rotates out over the edge of the surface on the platen during grinding and polishing. Use of harder, woven and nonwoven napless surfaces for polishing using diamond abrasives maintains flatness better than softer cloths, such as canvas, billiard, and felt. Final polishing using low-nap cloths for short times introduces very little rounding compared with the use of higher nap, softer cloths. These procedures produce better edge retention with all thermosetting and thermoplastic mounting materials. Nevertheless, there are still differences between polymeric materials used for mounting. Thermosetting resins provide better edge retention than thermoplastic resins. Of the thermosetting resins, diallyl phthalate provides little improvement over the much less expensive phenolic compounds. By far, the best results are obtained with epoxy-base thermosetting resins that contain a filler material. For comparison, Fig. 8 shows micrographs of the nitrided 1215 steel specimen mounted in a phenolic resin (Fig. 8a) and in methyl methacrylate (Fig. 8b) at 1000×. These specimens were prepared in the same specimen holder as those shown in Fig. 6, but neither displays acceptable edge retention at 1000×. Figure 9 shows examples of perfect edge retention, as also illustrated in Fig. 6, for markedly different materials all mounted in the thermosetting epoxy resin.

Fig. 6 Visibility problem caused by plating the specimen surface with a compatible metal (electroless nickel in this case) to help edge retention. It is difficult to discern the free edge of (a) the plated, nitrided AISI 1215 steel specimen, due to poor image contrast between the nickel plate and the nitrided layer. By comparison, (b) the unplated specimen reveals good image contrast between specimen and thermosetting epoxy resin mount, which allows clear distinction of the nitrided layer. Etchant is 2% nital.

Fig. 7 Etching stains emanating from gaps between the specimen and resin mount. Specimen is M2 high-speed steel etched with Vilella’s reagent.


Fig. 8 These nitrided 1215 specimens were prepared in the same holder as those shown in Fig. 6 but did not exhibit acceptable edge retention due to the choice of mounting compound. Both thermosetting and thermoplastic mounting resins can result in poor edge retention if proper polishing techniques are not used, as seen in (a) thermosetting phenolic mount and (b) thermoplastic methyl methacrylate resin mount. Specimens were etched with 2% nital.


Fig. 9 Examples of perfect edge retention of two different materials in Epomet (Buehler Ltd., Lake Bluff, IL) thermosetting epoxy mounts. (a) Ion-nitrided H13 tool steel specimen etched with 2% nital. (b) Coated carbide tool specimen etched with Murakami’s reagent

Fig. 10 H13 annealed tool steel specimen, etched with 4% picral. Use of soft ceramic shot helps maintain edge retention.

Very fine aluminum oxide spheres have been added to epoxy mounts to help maintain edge retention. However, this really is not a satisfactory solution, because the particles are extremely hard (approximately 2000 HV, or Vickers hardness), and their grinding/polishing characteristics are incompatible with the softer metals placed inside the mount. Soft ceramic shot (approximately 775 HV) offers grinding/polishing characteristics more compatible with metallic specimens placed in the mount. Figure 10 shows an example of edge retention using Flat-Edge Filler (Buehler, Ltd., Lake Bluff, IL) soft ceramic shot in an epoxy mount. In summary, to obtain the best possible edge retention, use the following guidelines, some of which are more critical than others:


O Properly mounted specimens yield better edge retention than unmounted specimens; rounding is difficult, if not impossible, to prevent at a free edge. Hot compression mounts yield better edge preservation than castable resins.
O Electrolytic or electroless plating of the surface of interest provides excellent edge retention. If the compression mount is cooled too quickly after polymerization, the plating may be pulled away from the specimen, leaving a gap. When this happens, the plating is ineffective for edge retention.
O Thermoplastic compression mounting materials are less effective than thermosetting resins. The best thermosetting resin is the epoxy-based resin containing a hard filler material.
O Never hot eject a thermosetting resin after polymerization and cool it quickly to ambient (e.g., by water cooling), because a gap will form between specimen and mount due to the differences in thermal contraction rates. Automated mounting presses cool the mounted specimen to near ambient under pressure, greatly minimizing gap formation due to shrinkage.
O Automated grinding/polishing equipment produces flatter specimens than manual, or hand, preparation.
O In automated grinder/polisher use, central-pressure mode provides better flatness than individual-pressure mode (both modes defined later in this chapter).
O Orient the position of the smaller diameter specimen holder so its periphery slightly overlaps the periphery of the larger diameter platen as it rotates.
O Use pressure-sensitive-adhesive-backed silicon carbide (SiC) grinding paper (if SiC is used) and pressure-sensitive-adhesive-backed polishing cloths rather than stretched cloths.
O Use hard, napless surfaces for rough polishing until the final polishing step(s). Use a low-nap to medium-nap cloth for the final step, and keep it brief.
O Rigid grinding disks produce excellent flatness and edge retention and should be used when possible.

Grinding

Grinding should commence with the finest grit size that will establish an initially flat surface and remove the effects of sectioning within a few minutes. An abrasive grit size of 180 or 240 is coarse enough to use on specimen surfaces sectioned using an abrasive cut-off wheel. Rough surfaces, such as those produced using a hacksaw and bandsaw, usually require abrasive grit sizes in the range of 60 to 180 grit. The abrasive used for each succeeding grinding operation should be one or two grit sizes


smaller than that used in the preceding operation. A satisfactory fine grinding sequence might involve SiC papers having grit sizes of 240, 320, 400, and 600 grit (in the ANSI/CAMI scale). This technique is known as the traditional approach. As in abrasive-wheel sectioning, all grinding should be done wet using water, provided that water has no adverse effects on any constituents of the microstructure. Wet grinding minimizes loading of the abrasive with metal removed from the specimen being prepared and minimizes specimen heating. Each grinding step, while producing damage itself, must remove the damage from the previous step. Depth of damage decreases with the abrasive size, but so does the metal removal rate. For a given abrasive size, the depth of damage introduced is greater for soft materials than for hard materials.

There are a number of options available to circumvent the use of SiC paper. One option, used mainly with semiautomatic and automatic systems, is to grind a number of specimens placed in a holder simultaneously using a conventional grinding stone, generally made of coarse grit alumina, to remove cutting damage. This step, often called planar grinding, has the second goal of making all of the specimen surfaces coplanar. This requires a special purpose machine, because the stone must rotate at a high speed (≥1500 rpm) to cut effectively. The stone must be dressed regularly with a diamond tool to maintain flatness, and embedding of alumina abrasive in specimens can be a problem. Silicon carbide and alumina abrasive papers, usually of 120-, 180-, or 240-grit size, have been used for planar grinding and are very effective. Other materials have been used both for the planar grinding stage and to replace SiC paper after planar grinding. For very hard materials such as ceramics and sintered carbides, two or more metal-bonded or resin-bonded diamond disks having grit sizes from about 70 to 9 µm can be used. An alternative type of disk has diamond particles suspended in a resin applied in small blobs, or spots, to the disk surface. These are available with diamond sizes from 120 to 6 µm. Another type of disk, available in several diamond sizes, uses diamond attached to the edges of a perforated, screenlike metal disk. Another approach uses a stainless steel woven mesh “cloth” on a platen charged with coarse diamond, usually in slurry form, for planar grinding. After obtaining a planar surface, there are several single-step procedures available that avoid the need to use finer SiC papers, including the use of platens, woven polyester or silk PSA cloths, and rigid grinding disks. A coarse diamond size (most commonly 9 µm) is used with each of these.

Grinding Media. Grinding abrasives commonly used in the preparation of metallographic specimens are SiC, aluminum oxide, or alumina (Al2O3), emery (Al2O3-Fe3O4), composite ceramics, and diamond. Emery paper is rarely used today in metallography due to its low cutting


efficiency. SiC is more readily available than alumina as waterproof paper, although alumina papers do have a better cutting rate than SiC for some metals (Ref 8). These abrasives are generally bonded to paper, polymeric, or cloth backing materials of various weights in the form of sheets, disks, and belts of various sizes. Grinding wheels consisting of abrasives embedded in a bonding material see limited use. Abrasives also may be used in powder form by charging the grinding surfaces with loose abrasive particles or with abrasive in a premixed slurry or suspension. When grinding soft metals, such as lead, tin, cadmium, bismuth, and aluminum, SiC particles, particularly with the finer grit-size papers, embed readily in the metal specimen, as shown in Fig. 11. Embedding of diamond abrasive also is a problem with these soft metals, mainly with slurries when napless cloths are used (Fig. 12).

Grinding Equipment. Although rarely used in industry, stationary grinding paper supplied in strips or rolls still is used in some introductory instruction in metallographic techniques. Holding the specimen against the paper at the end farthest away, the operator manually draws the specimen along the paper toward himself or herself. Grinding in one direction usually keeps surfaces flatter than grinding in both directions. While this can be done dry for certain delicate materials, water usually is added to keep the specimen surface cool and to carry away the swarf. Most labs have belt grinders, which mainly are used to remove burrs from sectioning, to round edges that need not be preserved for examination, to flatten cut surfaces to be macroetched, and to remove sectioning damage. Generally, only very coarse abrasive papers (60 to 240 grit) are used. Most grinding work is done on a rotating wheel; that is, a motor-driven platen on which the SiC paper is attached.

Fig. 11 Silicon-carbide particles from grinding paper embedded in a “soft” 6061-T6 aluminum alloy weldment. Etchant was 0.5% HF (hydrofluoric acid).


Fig. 12 Fine (6 µm) diamond abrasive particles embedded in soft lead specimen

Lapping is an abrasive technique in which the abrasive particles roll freely on the surface of a carrier disk commonly made of cast iron or plastic. During the lapping process, the disk is charged with small amounts of a hard abrasive such as diamond or silicon carbide. Some platens, referred to as laps, are charged with diamond slurries. Initially the diamond particles roll over the lap surface (just as with other grinding surfaces), but soon they become embedded in and cut the surface, producing chips. Lapping disks can produce a flatter specimen than that produced by grinding, but lapping does not remove metal as does grinding, and, therefore, is not commonly used in metallographic preparation.

Polishing

Polishing is the final step (or steps) used to produce a deformation-free surface, which is flat, scratch-free, and mirrorlike in appearance. Such a surface is necessary for subsequent qualitative and quantitative metallographic interpretation. The polishing technique used should not introduce extraneous structures such as disturbed metal (Fig. 13), pitting (Fig. 14), dragging out of graphite and inclusions, “comet tailing” (Fig. 15), and staining (Fig. 16). Relief (height differences between different constituents, or between holes and constituents) (Fig. 17 and 18) must be minimized. Polishing usually consists of rough, intermediate, and final stages. Rough polishing traditionally is done using 6 or 3 µm diamond abrasive charged onto napless or low-nap cloths. For hard materials such as through-hardened steels, ceramics, and cemented carbides, an additional


rough polishing step may be required. For such materials, initial rough polishing may be followed by polishing with 1 µm diamond on a napless, low-nap, or medium-nap cloth. A compatible lubricant should be used sparingly to prevent overheating and/or surface deformation. Intermediate polishing should be performed thoroughly to keep final polishing to a minimum. Final polishing usually consists of a single step but could involve two steps, such as polishing using 0.3 µm and 0.05 µm alumina, or a final polishing step using alumina or colloidal silica followed by vibratory polishing, using either of these two abrasives.

Fig. 13 Examples of residual sectioning/grinding damage in polished specimens. (a) Waspaloy etched with Fry’s reagent. (b) Commercially pure titanium etched with Kroll’s reagent. Differential interference-contrast (DIC) illumination

Fig. 14 Polishing pits in as-polished cold drawn Cu-20% Zn specimen


Fig. 15 Comet tailing at hard nitride precipitates in AISI H13 tool steel. Differential interference-contrast illumination emphasizes topographical detail.

Fig. 16 Staining from polishing solution on as-polished Ti-6Al-2Sn-4Zr-2Mo titanium alloy


For inclusion analysis, a fine (1 µm) diamond abrasive may be adequate as the last preparation step. Traditionally, aqueous fine alumina slurries have been used for final polishing using medium-nap cloths. Alpha-alumina (0.3 µm) and gamma-alumina (0.05 µm) slurries (or suspensions) are popular for final polishing, either in sequence or singularly. Alumina abrasives made by the sol-gel process produce better surface finishes than alumina abrasives made by the traditional calcination process.

Fig. 17 Examples of relief (in this case, height differences between different constituents) at hypereutectic silicon particles in Al-19.85% Si aluminum alloy. (a) Excessive relief. (b) Minimum relief. Etchant is 0.5% HF (hydrofluoric acid).

Fig. 18 Relief (in this case, height differences between constituents and holes) in the microstructure of a braze. (a) Excessive relief. (b) Low relief. Etchant is glyceregia.


Calcined alumina abrasives always have some degree of agglomeration, regardless of the efforts to keep them from agglomerating, while sol-gel alumina is free of this problem. Basic colloidal silica suspensions (around 9.5 pH) and acidic alumina suspensions (3 to 4 pH) are very good final polishing abrasives, particularly for difficult-to-prepare materials. Vibratory polishers (Fig. 19) often are used for final polishing, particularly with more difficult-to-prepare materials, for image analysis studies, or for publication-quality work.

Fig. 19 Vibratory polisher for final polishing. Its use produces image-analysis and publication-quality specimens.

Mechanical Polishing. The term mechanical polishing frequently is used to describe the various polishing procedures involving the use of fine abrasives on cloth. The cloth may be attached to a rotating wheel or a vibratory polisher bowl. Cloths either are stretched over the wheel and held in place using an adjustable clamp on the platen periphery or held in place using a pressure-sensitive adhesive bonded to the back of the cloth. Cutting is less effective if a stretched cloth moves under the applied pressure during polishing. Stretched cloths can rip if used on an automated polishing head, especially when preparing unmounted specimens. In mechanical polishing, the specimens are held by hand, held mechanically in a fixture, or merely confined within the polishing area.

Electrolytic Polishing. Electrolytic polishing, or electropolishing, is rarely used to prepare specimens for image analysis work, because electropolished surfaces tend to be wavy rather than flat, so stage movement and focus control over any reasonable size area is difficult. Electropolishing tends to round edges associated with external surfaces, cracks, and pores. Also, in two-phase alloys, one phase polishes at a different rate than another, leading to excessive relief, and in some cases one phase may be attacked preferentially. Chemical polishing has the


same problems and restrictions. Consequently, electrolytic polishing is not recommended, except possibly as a very brief step at the end of a mechanical polishing cycle to remove minor damage that persists. Use of electropolishing should be limited to polishing single-phase structures where maximum polarized light response is required.

Manual Preparation. Hand-preparation techniques still follow the basic practice established many years ago, aside from the use of improved grinding surfaces, polishing cloths, and abrasives.

Specimen Movement during Grinding. For grinding, hold the specimen rigidly against the rotating SiC paper and slowly move it from the center to the edge of the wheel. Rinse after each step and examine to ensure that scratches are uniform and that grinding removed the previous cut or ground surface. After grinding on the first SiC paper (often 120 grit), rotate the specimen 45 to 90° and abrade as before on the next finer paper. Examine the specimen periodically to determine if the current abrasive paper removed the scratch marks from the previous step. Repeat the procedure through all SiC abrasive size papers in the particular grinding process. In some cases, it may be necessary to use more than one sheet of paper of a given size before moving to the next finer paper. This is a common situation for the first step and sometimes for the finer papers.

Specimen Movement during Polishing. For polishing, hold the specimen with one or both hands and rotate it around the wheel in a circular pattern in a direction counter to the rotation of the polishing wheel, which usually is counterclockwise. In addition, continuously move the specimen back and forth between the center and the edge of the wheel, thereby ensuring even distribution of the abrasive and uniform wear of the polishing cloth. (Some metallographers use a small wrist rotation while moving the specimen from the center to the edge of one side of the wheel.) The main reason to rotate the specimen is to prevent formation of comet tails, polishing artifacts that result from directional polishing of materials containing hard inclusions or precipitates (Fig. 15).

Polishing Pressure. In general, firm hand pressure is applied to the specimen. The correct amount of applied pressure must be determined by experience.

Washing and Drying. The specimen is washed and swabbed in warm running water, rinsed with ethanol, and dried in a stream of warm air. Excessively hot water may cause pitting of some materials. Scrubbing with cotton soaked in an aqueous soap solution followed by rinsing with water also is commonly used. Alcohol usually can be used to wash the specimen when the abrasive carrier is not soluble in water or if the specimen cannot tolerate water. Ultrasonic cleaning may be required if the specimen is porous or cracked.

Cleanness. Precautions for maintaining cleanness must be strictly observed. It usually is advisable to separate grinding operations from polishing operations, especially in a large, high-volume laboratory,


because coarse abrasive can carry over to a finer abrasive stage and produce problems.

Fig. 20 Simple automated mechanical polishing system

Fig. 21 Sophisticated automatic polishing system

Automatic Polishing. Mechanical polishing can be automated to a high degree using a wide variety of devices ranging from relatively simple systems (Fig. 20) to rather sophisticated, minicomputer-controlled or microprocessor-controlled devices (Fig. 21). Units also vary in capacity from a single specimen to a half-dozen or more at a time. These systems can be used for all grinding and polishing steps and enable an


operator to prepare a large number of specimens per day with a higher degree of quality than that obtained by hand polishing, and at reduced consumable costs. Automatic polishing devices produce the best surface flatness and edge retention. Two approaches for handling specimens are central force and individual force. Central force uses a specimen holder in which each specimen is held rigidly in place. The holder is pressed downward against the preparation surface with the force applied uniformly to the holder. This method yields the best edge retention and specimen flatness. Individual force uses a holder that holds specimens loosely in place; force is applied to each specimen by means of a piston (thus the term “individual force”). This method provides convenience in examining individual specimens during the preparation cycle, without the problem of regaining planarity for all specimens in the holder on the next step. Also, if the etch results are deemed inadequate, the specimen can simply be put back in the holder and the last step repeated. The drawback to this method is that slight rocking of the specimen may occur, especially if the specimen height is too great, which slightly reduces edge retention.

Polishing Cloths. The requirements of a good polishing cloth include the ability to hold an abrasive, long life, absence of any foreign material that may cause scratches, and absence of any processing chemical (such as dye or sizing) that may react with the specimen. Many cloths of different fabrics, woven or nonwoven, with a wide variety of naps, or napless, are available for metallographic polishing. Napless and low-nap cloths are recommended for rough polishing using diamond-abrasive compounds. Low-nap, medium-nap, and occasionally high-nap cloths are used for final polishing, but this step should be brief to minimize relief.

Polishing Abrasives. Polishing usually involves the use of one or more of the following abrasives: diamond, aluminum oxide (Al2O3), magnesium oxide (MgO), and silicon dioxide (SiO2). For certain materials, cerium oxide, chromium oxide, or iron oxide may be used. With the exception of diamond, these abrasives normally are used in a distilled-water suspension, but if the metal to be polished is not compatible with water, other solvents such as ethylene glycol, alcohol, kerosene, or glycerol may be required. All flammable materials must be handled with care to avoid accidents. See ASTM E 2014 and related textbooks, Material Safety Data Sheets (MSDSs), and so forth for guidance on safety issues. Diamond abrasive should be extended only with the carrier recommended by the manufacturer.

Examples of Preparation Procedures

The Traditional Method. Over the past 40 years, a general procedure has been developed that is quite successful for preparing most metals and alloys. The method is based on grinding using silicon carbide waterproof papers through a series of grits, then rough polishing with one or more sizes of diamond abrasive, followed by fine polishing with one or more alumina suspensions of different particle size. This procedure, called the “traditional” method here, is described in Table 1. It is used for manual preparation as well as with a machine, but the force applied to a specimen cannot be controlled as accurately or as consistently by hand as it can by machine. Complementary motion means that the specimen holder is rotated in the same direction as the platen; it does not apply to manual preparation. Some machines can be set so that the specimen holder rotates in the direction opposite to that of the platen, called “contra.” This provides a more aggressive action but was not part of the traditional approach. Contra rotation is similar to the manual polishing procedure of running the specimen in a circular path around the wheel in a direction opposite to that of the platen rotation. The steps of the traditional method are not rigid: other polishing cloths may be substituted, one or more of the polishing steps may be omitted, and times and pressures may be varied to suit the needs of the work or the material being prepared. This is the art side of metallography.

Table 1 Traditional method used to prepare most metal and alloy metallographic specimens

Polishing surface        Abrasive (type, size)                    Load, N (lb)   Speed, rpm   Direction(a)    Time, min
Waterproof paper         SiC (water cooled), 120 grit             27 (6)         240-300      Complementary   Until plane
Waterproof paper         SiC (water cooled), 240 grit             27 (6)         240-300      Complementary   1-2
Waterproof paper         SiC (water cooled), 320 grit             27 (6)         240-300      Complementary   1-2
Waterproof paper         SiC (water cooled), 400 grit             27 (6)         240-300      Complementary   1-2
Waterproof paper         SiC (water cooled), 600 grit             27 (6)         240-300      Complementary   1-2
Canvas                   Diamond paste with extender, 6 µm        27 (6)         120-150      Complementary   2
Billiard or felt cloth   Diamond paste with extender, 1 µm        27 (6)         120-150      Complementary   2
Microcloth pad           Aqueous α-alumina slurry, 0.3 µm         27 (6)         120-150      Complementary   2
Microcloth pad           Aqueous γ-alumina slurry, 0.05 µm        27 (6)         120-150      Complementary   2

(a) Complementary, in the same direction in which the wheel is rotating.

Contemporary Methods. During the 1990s, new concepts and new preparation materials were introduced that enable metallographers to shorten the process while producing better, more consistent results. Much of the effort focused on reducing or eliminating the use of silicon carbide paper in the five grinding steps. In all cases, an initial grinding step must be used, but there is a wide range of materials that can be substituted for SiC paper. If a central-force automated device is used, the first step must remove the sectioning damage on each specimen and bring all of the specimens in the holder to a common plane perpendicular to the axis of the specimen-holder drive system. This first step is often called planar grinding, and SiC paper can be used, although more than one sheet may be needed. Alternatives to SiC paper include the following:


O Alumina paper
O Alumina grinding stone
O Metal-bonded or resin-bonded diamond discs
O Wire mesh discs with metal-bonded diamond
O Stainless steel mesh cloths (diamond is applied during use)
O Rigid grinding discs (RGD) (diamond is applied during use)
O Lapping platens (diamond is applied and becomes embedded in the surface during use)

This huge range of products makes it difficult to determine what to use, because each product has advantages and disadvantages, and this is only the first step. One or more steps using diamond abrasives on napless surfaces usually follow planar grinding. Pressure-sensitive-adhesive-backed silk, nylon, or polyester cloths are widely used. These give good cutting rates, maintain flatness, and avoid relief. Silk cloths provide the best flatness and excellent surface finishes for the diamond size used. Synthetic chemotextiles are excellent for retaining second-phase particles and inclusions. Diamond suspensions are most popular for use with automated polishers because they can be added easily during polishing, although it is still best to charge the cloth initially with diamond paste of the same size to get polishing started quickly. Final polishing can be performed using a very fine diamond size, such as 0.1 µm diamond, depending on the material, needs, and personal preferences. Otherwise, final polishing is performed using colloidal silica or alumina slurries with low-nap to medium-nap cloths. For some materials, such as titanium and zirconium alloys, an “attack” polishing solution is added to the slurry to enhance deformation and scratch removal and improve polarized light response. Contra rotation is preferred because the slurry stays on the cloth better, although this will not work if the head rotates at a high rpm. Examples of generic and specific contemporary preparation practices are given in Tables 2 to 6. The starting abrasive size depends on the degree of cutting damage and the material. Never start with a coarser abrasive than necessary to remove the cutting damage and achieve planar conditions in a reasonable time.

Table 2 Generic four-step contemporary practice used to prepare many metal and alloy metallographic specimens

Polishing surface          Abrasive/grit size                                         Load, N (lb)   Speed, rpm/direction   Time, min
Waterproof discs           SiC(a)/120, 180, or 240                                    27 (6)         240-300/comp           Until plane
Napless cloth              Diamond/9 µm                                               27 (6)         120-150/comp           5
Napless cloth              Diamond/3 µm                                               27 (6)         120-150/comp           4
Low- or medium-nap cloth   Colloidal silica or sol-gel alumina suspension/~0.05 µm    27 (6)         120-150/contra         2

Comp, complementary; that is, in the same direction in which the wheel is rotating. Contra, opposite to the direction in which the wheel is rotating. (a) Water cooled
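Where a laboratory scripts or logs its preparation work, a practice such as the one in Table 2 can also be kept as structured data so that the same parameters are applied, and recorded, on every run. The following is a minimal sketch in Python; the field names, the step descriptions, and the idea of encoding the recipe this way are illustrative assumptions, not part of the practice itself.

from dataclasses import dataclass

@dataclass
class PrepStep:
    surface: str       # polishing surface
    abrasive: str      # abrasive and size
    load_n: int        # applied force, newtons
    speed_rpm: str     # platen speed range
    direction: str     # "comp" = same direction as platen, "contra" = opposite
    time_min: str      # duration, or "until plane" for the first step

# Generic four-step practice of Table 2, encoded as data (illustrative only).
GENERIC_FOUR_STEP = [
    PrepStep("waterproof SiC disc", "SiC, 120 to 240 grit, water cooled", 27, "240-300", "comp", "until plane"),
    PrepStep("napless cloth", "diamond, 9 um", 27, "120-150", "comp", "5"),
    PrepStep("napless cloth", "diamond, 3 um", 27, "120-150", "comp", "4"),
    PrepStep("low- or medium-nap cloth", "colloidal silica or sol-gel alumina, ~0.05 um", 27, "120-150", "contra", "2"),
]

if __name__ == "__main__":
    for i, step in enumerate(GENERIC_FOUR_STEP, start=1):
        print(f"Step {i}: {step.abrasive} on {step.surface}, "
              f"{step.load_n} N, {step.speed_rpm} rpm ({step.direction}), {step.time_min} min")

Printed into the laboratory record alongside the resulting micrographs, such a listing helps make the preparation as reproducible as this chapter recommends.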


The generic four-step procedure in Table 2 can be extended to five steps for difficult-to-prepare materials by adding a 1 µm diamond step on a napless cloth for 2 to 3 min as step 4. Similar procedures can be developed using rigid grinding discs, which generally are restricted for use with materials above a certain hardness (175 HV, for example), although some softer materials can be prepared using them. The rigid grinding disc can also be used for the planar grinding step. An example of such a practice, applicable to nearly all steels (results are marginal for solution-annealed austenitic stainless steels), is given in Table 3. The first step of planar grinding could also be performed using the rigid grinding disc and 30 µm diamond. Rigid grinding discs contain no abrasive; they must be charged during use, and suspensions are the easiest way to do this. Polycrystalline diamond suspensions are favored over monocrystalline synthetic diamond suspensions for most metals and alloys due to their higher cutting rate.

Table 3 Four-step contemporary practice used to prepare steel metallographic specimens using a rigid grinding disc

Polishing surface          Abrasive/grit size                                         Load, N (lb)   Speed, rpm/direction   Time, min
Waterproof discs           SiC(a)/120, 180, or 240                                    27 (6)         240-300/comp           Until plane
Rigid grinding disc        Diamond suspension/9 µm                                    27 (6)         120-150/comp           5
Napless cloth              Diamond/3 µm                                               27 (6)         120-150/comp           4
Low- or medium-nap cloth   Colloidal silica or sol-gel alumina suspension/~0.05 µm    27 (6)         120-150/contra         2

Comp, complementary; that is, in the same direction in which the wheel is rotating. Contra, opposite to the direction in which the wheel is rotating. (a) Water cooled

Table 4 Four-step practice used to prepare sintered-carbide metallographic specimens using two rigid grinding disc steps

Polishing surface      Abrasive/grit size                                         Load, N (lb)   Speed, rpm/direction   Time, min
Rigid grinding disc    Diamond suspension/30 µm                                   22 (5)         240-300/contra         5
Rigid grinding disc    Diamond suspension/9 µm                                    27 (6)         240-300/contra         4
Napless cloth          Diamond/3 µm                                               27 (6)         120-150/contra         3
Napless cloth          Colloidal silica or sol-gel alumina suspension/~0.05 µm    27 (6)         120-150/contra         2

Contra, opposite to the direction in which the wheel is rotating

Table 5 Four-step practice used to prepare aluminum alloy metallographic specimens

Polishing surface          Abrasive/grit size                                         Load, N (lb)   Speed, rpm/direction   Time, min
Waterproof discs           SiC(a)/240 or 320                                          22 (5)         240-300/comp           Until plane
Napless cloth              Diamond/9 µm                                               40 (9)         120-150/comp           5
Napless cloth              Diamond/3 µm                                               36 (8)         120-150/comp           3
Low- or medium-nap cloth   Colloidal silica or sol-gel alumina suspension/~0.05 µm    31 (7)         120-150/contra         2

Comp, complementary; that is, in the same direction in which the wheel is rotating. Contra, opposite to the direction in which the wheel is rotating. (a) Water cooled

Table 6 Three-step practice used to prepare titanium and Ti-alloy metallographic specimens

Polishing surface        Abrasive/grit size                                Load, N (lb)   Speed, rpm/direction   Time, min
Waterproof paper discs   SiC(a)/320                                        27 (6)         240-300/comp           Until plane
Napless cloth            Diamond/9 µm                                      27 (6)         120-150/contra         10
Medium-nap cloth         Colloidal silica plus attack polish(b)/~0.05 µm   27 (6)         120-150/contra         10

Comp, complementary; that is, in the same direction in which the wheel is rotating. Contra, opposite to the direction in which the wheel is rotating. (a) Water cooled. (b) Attack polish is five parts colloidal silica plus one part hydrogen peroxide, 30% concentration. Use with caution.


As examples of tailoring these types of procedures to other metals, alloys, and materials, three methods are shown in Tables 4 to 6 for sintered carbides (these methods also work for ceramics), aluminum alloys, and titanium alloys. Because sintered carbides and ceramics are cut with a precision saw that produces very little deformation and an excellent surface finish, a coarser grit diamond abrasive is not needed for planar grinding (Table 4). Pressure-sensitive-adhesive-backed silk cloths are excellent for sintered carbides; nylon is also quite popular.

A four-step practice for aluminum alloys is presented in Table 5. While MgO was long the preferred final polishing abrasive for aluminum and its alloys, it is a difficult abrasive to use and is not available in very fine sizes, so colloidal silica has replaced magnesia. This procedure retains all of the intermetallic precipitates observed in aluminum and its alloys and minimizes relief. Synthetic napless cloths may also be used for the final step with colloidal silica; they introduce less relief than a low-nap or medium-nap cloth but may not remove fine polishing scratches as well. For very pure aluminum alloys, this procedure can be followed by vibratory polishing to improve the surface finish, because these alloys are quite difficult to prepare totally free of fine polishing scratches.

The contemporary practice for titanium and its alloys (Table 6) demonstrates the use of an attack-polishing agent added to the final polishing abrasive to obtain the best results, especially for commercially pure titanium, a rather difficult metal to prepare free of deformation for color etching, heat tinting, and/or polarized light examination of the grain structure. Attack-polishing solutions added to the abrasive slurry or suspension must be treated with great care to avoid burns. (Caution: use good, safe laboratory practices and wear protective gloves.) This three-step practice could be modified to four steps by adding a 3 µm or 1 µm diamond step. There are a number of attack-polishing agents for use on titanium. The simplest is a mixture of 10 mL of 30% concentration hydrogen peroxide (caution: avoid skin contact) and 50 mL of colloidal silica. Some metallographers add either a small amount of Kroll's reagent to this mixture or a few milliliters of nitric and hydrofluoric acids; these latter additions may cause the suspension to gel. In general, these acid additions do little to


improve the action of the hydrogen peroxide (the safer 3% concentration is not effective). It is impossible to describe in this book all methods that can be used to prepare all materials, but the above examples illustrate the approach to use. The approach can be modified to suit other materials. Material-preparation methods can be found in many sources, such as Ref 6–8. Some ASTM standards also provide material-preparation guidelines, such as ASTM E 3 (general preparation suggestions), E 768 (guidelines to prepare steel specimens for inclusion analysis), and E 1920 (guidelines to prepare thermally sprayed metallic specimens).

Etching

Metallographic etching encompasses all processes used to reveal particular structural characteristics of a metal that are not evident in the as-polished condition. Examination of a properly polished specimen before etching may reveal structural aspects such as porosity, cracks, graphite, intermetallic precipitates, nitrides, and nonmetallic inclusions. Certain constituents are best measured using image analysis without etching, because etching reveals unwanted detail, making detection difficult or impossible. Classic examples of analyzing unetched specimens are the measurement of inclusions in steel and graphite in cast iron, although many intermetallic precipitates and nitrides also can be measured effectively in the as-polished condition. Grain size also can be revealed adequately in the as-polished condition using polarized light in certain nonferrous alloys having noncubic crystallographic structures, such as beryllium, hafnium, magnesium, titanium, uranium, and zirconium. Figure 22 shows the microstructure of beryllium viewed in cross-polarized light, which produces grain coloration rather than a flat etched appearance where only the grain boundaries are dark. This image could be used in color image analysis but would not be useful for image analysis using a black and white system.

Etching Procedures. Microscopical examination usually is limited to a maximum magnification of 1000×, the approximate useful limit of the light microscope, unless oil-immersion objectives are used. Many image analysis systems use relay lenses that yield higher screen magnifications, which may make detection of fine structures easier. However, resolution is not raised above the general limit of about 0.3 µm for the light microscope. Microscopical examination of a properly prepared specimen clearly reveals structural characteristics such as grain size, segregation, and the shape, size, and distribution of the phases and inclusions that are present. The microstructure also reveals prior mechanical and thermal treatments that the metal has received. Microstructural features are


measured either according to established image analysis procedures (ASTM standards, for example) or internally developed procedures. Etching is carried out by means of immersion or swabbing or electrolytically, using a suitable chemical solution that basically produces selective corrosion. Swabbing is preferred for metals and alloys that form a tenacious oxide surface layer when exposed to the atmosphere, such as stainless steels, aluminum, nickel, niobium, and titanium. It is best to use surgical grade cotton that will not scratch the polished surface. Etch time varies with etchant strength and can only be determined by experience. In general, for examination at high magnification, the etch depth should be shallow; while for examination at low magnification, a deeper etch yields better image contrast. Some etchants produce selective results; that is, only one phase is attacked or colored. For information on the vast number of etchants that have been developed, see Ref 6–8, 9, and ASTM E 407. After achieving the desired degree of etching, rinse the specimen under running water, displace the water from the specimen surface with alcohol (ethanol is safer to use than methanol), and dry the specimen under hot air. Drying can be challenging if there are cracks, pores, or other holes in the specimen, or shrinkage gaps between specimen and mount. Figure 23 shows two examples of drying problems that obscure the true microstructure. Etchants that reveal grain boundaries are very important for successful determination of the grain size. Grain boundary etchants are given in (Ref 6–9). Problems associated with grain boundary etching, particularly prior-austenite grain boundary etching, are given in Ref 7, 10, and 11.
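Because relay lenses raise screen magnification without raising the roughly 0.3 µm resolution limit noted above, it is worth confirming that the camera actually samples that limit adequately (about two pixels per resolvable distance) at the chosen objective. A minimal sketch follows; the camera pixel pitch and the magnifications are illustrative assumptions, not values from this chapter.

# Check pixel spacing at the specimen plane against the ~0.3 um resolution
# limit of the light microscope. All numeric inputs are assumed examples.

def pixel_size_um(camera_pixel_um: float, total_magnification: float) -> float:
    """Pixel spacing referred to the specimen plane, in micrometers."""
    return camera_pixel_um / total_magnification

def adequately_sampled(pixel_um: float, resolution_um: float = 0.3) -> bool:
    """Nyquist-style criterion: at least two pixels per resolvable distance."""
    return pixel_um <= resolution_um / 2.0

if __name__ == "__main__":
    camera_pixel = 6.5          # camera pixel pitch, um (assumed)
    objective = 50.0            # objective magnification (assumed)
    relay = 1.0                 # relay/adapter magnification (assumed)
    px = pixel_size_um(camera_pixel, objective * relay)
    print(f"Pixel size at specimen plane: {px:.3f} um")
    print("Adequately sampled for 0.3 um resolution:", adequately_sampled(px))

With the assumed 6.5 µm camera pixels, a 50× objective gives 0.13 µm pixels at the specimen, which just satisfies the criterion; a lower-power objective with the same camera would not.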

Fig. 22 Microstructure of beryllium viewed in crossed-polarized light (Ahrens polarizer/Polaroid filter analyzer + Berek prism pre-polarizer). The polarized light colorizes the grains, a phenomenon that is useful for color image analysis but harder to utilize in black and white image analysis. For color version of Fig. 22, see endsheets of book.


Measurement of grain size in austenitic, or face-centered cubic (fcc), metals that exhibit annealing twins is a commonly encountered problem. Etchants that will reveal grain boundaries but not twin boundaries are reviewed in Ref 7.

Selective Etching. Image analysis work is facilitated if the etchant selected improves the contrast between the feature of interest and everything else. Only a small number of the thousands of etchants that have been developed over the years are selective in nature. Although the selection of the best etchant and its proper use are very critical phases of the image analysis process, only a few publications have addressed this problem (Ref 12–14). Selective etchants, that is, etchants that preferentially attack or color a specific phase, are listed (Ref 6–9, 13, and 14) and shown in Fig. 13 and 14. Stansbury (Ref 15) describes how potentiostatic etching works and lists many preferential potentiostatic-etching methods. The potentiostat offers the ultimate in control over the etching process and is an outstanding tool for this purpose. Many tint etchants function selectively in that they color either the anodic or cathodic constituent in a microstructure. Tint etchants are listed and illustrated in several publications (Ref 6–8, 14, 16–21). A classic example of the different behavior of etchants is shown in Fig. 24, where low-carbon sheet steel has been etched using the standard nital and picral etchants and a color tint etch. Etching with 2% nital reveals the ferrite grain boundaries and cementite (Fig. 24a). Note that many of the ferrite grain boundaries are missing or very faint—a problem that degrades the accuracy of grain size ratings.

Fig. 23 Examples of conditions that obscure the true microstructure. (a) Improper drying of the specimen. (b) Water stains emanating from shrinkage gaps between 6061-T6 aluminum alloy and phenolic resin mount. Both specimens viewed using differential interference-contrast (DIC) illumination.


Etching with 4% picral reveals the cementite aggregates—this cannot be called pearlite because it is too nonlamellar in appearance and some of the cementite exists as simple grain boundary film—but no ferrite grain boundaries. If the interest is in knowing the amount and nature of the cementite (which can influence formability), then the picral etch is far superior to the nital etch, because picral reveals only the cementite. Tint etching using Beraha's solution (Klemm I etchant also can be used) colors the grains according to their crystallographic orientation (Fig. 24c).

Fig. 24 Examples of different behavior of etchants on the same low-carbon steel sheet. (a) 2% nital etch reveals ferrite grain boundaries and cementite. (b) 4% picral etch reveals cementite aggregates and no ferrite grain boundaries. (c) Tint etching with Beraha's solution colors all grains according to their crystallographic orientation. All specimens are viewed using bright field illumination. For color version of Fig. 24(c), see endsheets of book.


This image can now be used quite effectively to provide accurate grain size measurements using a color image analyzer, because all grains are colored. Figure 25 shows a somewhat more complex example of selective etching. The micrographs show the ferrite-cementite-iron phosphide ternary eutectic in gray iron. Etching sequentially with picral and nital reveals the eutectic surrounded by pearlite (Fig. 25a). Etching with boiling alkaline sodium picrate (Fig. 25b) colors only the cementite phase, including that in the surrounding pearlite (a higher magnification is required to see the very finely spaced cementite, which is more lightly colored).

Fig. 25 Examples of selective etching of ferrite-cementite-iron phosphide ternary eutectic in gray cast iron. (a) Picral/nital etch reveals the eutectic surrounded by pearlite. (b) Boiling alkaline sodium-picrate etch colors only the cementite phase. (c) Boiling Murakami's reagent etch darkly colors the iron phosphide and lightly colors cementite after prolonged etching. All specimens are viewed using bright field illumination.


Etching with boiling Murakami's reagent (Fig. 25c) colors the iron phosphide darkly and lightly colors the cementite after prolonged etching. The ferrite could be colored preferentially using Klemm I reagent. Selective etching has been commonly applied to stainless steels to detect, identify, and measure δ-ferrite, ferrite in dual-phase grades, and σ-phase. Figure 26 shows examples of the use of a number of popular etchants to reveal the microstructure of 7Mo Plus (Carpenter Technology Corporation, Reading, PA) (UNS S32950), a dual-phase stainless steel, in the hot-rolled and annealed condition. Figure 26(a) shows a well-delineated structure when the specimen was immersed in ethanolic 15% HCl for 30 min. All of the phase boundaries are clearly revealed, but there is no discrimination between ferrite and austenite, and twin boundaries in the austenite are not revealed. Glyceregia, a popular etchant for stainless steels, is not suitable for this grade because it appears to be rather orientation-sensitive (Fig. 26b). Many electrolytic etchants are used to etch stainless steels, but only a few have selective characteristics. Of the four shown in Fig. 26(c to f), only aqueous 60% nitric acid produces any gray-level discrimination between the phases, and it is weak. However, all nicely reveal the phase boundaries. Two electrolytic reagents are commonly used to color ferrite in dual-phase grades and δ-ferrite in martensitic grades (Fig. 26g, h). Of these, aqueous 20% sodium hydroxide (Fig. 26g) usually gives more uniform coloring of the ferrite. Murakami's and Groesbeck's reagents also are used for this purpose. Tint etchants developed by Beraha nicely color the ferrite phase, as illustrated in Fig. 26(i). Selective etching techniques have been more thoroughly developed for use on iron-base alloys than for other alloy systems but are not limited to iron-base alloys. For example, selective etching of β-phase in α-β copper alloys is a popular subject. Figure 27 illustrates coloring of β-phase in naval brass (UNS C46400) using Klemm I reagent. Selective etching has long been used to identify intermetallic phases in aluminum alloys; the method was used for many years before the development of energy-dispersive spectroscopy. It still is useful for image analysis work. Figure 28 shows selective coloration of θ-phase, CuAl2, in the Al-33% Cu eutectic alloy. Figure 29 illustrates the structure of a simple sintered tungsten carbide (WC-Co) cutting tool. In the as-polished condition (Fig. 29a), the cobalt binder is faintly visible against the more grayish tungsten carbide grains, and a few particles of graphite are visible. Light relief polishing brings out the outlines of the cobalt binder phase, but this image is not particularly useful for image analysis (Fig. 29b). Etching in a solution of hydrochloric acid saturated with ferric chloride (Fig. 29c) attacks the cobalt and provides good, uniform contrast for measurement of the cobalt binder phase. A subsequent etch using Murakami's reagent at room temperature reveals the edges of the tungsten carbide grains, which is useful to evaluate grain size (Fig. 29d).
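The practical benefit of a selective etch such as the ferric chloride etch just described is that the constituent of interest can often be isolated with a simple global threshold rather than elaborate processing. The sketch below, written with scikit-image, assumes a grayscale micrograph in which the selectively etched constituent is the dark phase; the file name, the minimum object size, and that polarity are illustrative assumptions.

# Estimate the area fraction of a selectively etched (dark) constituent,
# such as a binder phase darkened by a selective etch, using a global
# Otsu threshold. File name and dark-phase polarity are assumed examples.
from skimage import io, filters, morphology

image = io.imread("selectively_etched.tif", as_gray=True)

threshold = filters.threshold_otsu(image)     # separates dark phase from bright matrix
phase = image < threshold                     # True where the etched constituent lies
phase = morphology.remove_small_objects(phase, min_size=20)  # drop isolated noise pixels

area_fraction = phase.mean()                  # fraction of image pixels in the phase
print(f"Estimated area fraction of etched constituent: {area_fraction:.3f}")

If the etch is not selective, the gray-level histogram has no clean valley and a single threshold of this kind fails, which is the point made above about choosing etchants for image analysis.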


Fig. 26 Examples of selective etching to identify different phases in hot-rolled, annealed 7Mo Plus duplex stainless steel microstructure. Chemical etchants used were (a) immersion in 15% HCl in ethanol/30 min and (b) glyceregia/2 min. Electrolytic etchants used were (c) 60% HNO3/1 V direct current (dc)/20 s, platinum cathode; (d) 10% oxalic acid/6 V dc/75 s; (e) 10% CrO3/6 V dc/30 s; and (f) 2% H2SO4/5 V dc/30 s. Selective electrolytic etchants used were (g) 20% NaOH/Pt cathode/4 V dc/10 s and (h) 10 N KOH/Pt/3 V dc/4 s. (i) Tint etch, 200×. See text for description of microstructures. For color version of Fig. 26(i), see endsheets of book.


Fig. 26 (continued); panels (g), (h), and (i) are described in the caption above.


Electrolytic Etching and Anodizing. The procedure for electrolytic etching basically is the same as that used for electropolishing, except that voltage and current densities are considerably lower. The specimen is made the anode, and some relatively insoluble but conductive material such as stainless steel, graphite, or platinum is used for the cathode. Direct-current electrolysis is used for most electrolytic etching, and for small specimens (13 by 13 mm, or 0.5 by 0.5 in., surface to be etched), one or two standard 1.5 V direct current (dc) flashlight batteries provide an adequate power source, although the current level may be inadequate for some work. Electrolytic etching is commonly used with stainless steels, either to reveal grain boundaries without twin boundaries or to color δ-ferrite (Fig. 26), σ-phases, and χ-phases. Anodizing is a term applied to electrolytic etchants that develop grain coloration when viewed with crossed polarized light, as in the case of aluminum, niobium, tantalum, titanium, tungsten, uranium, vanadium, and zirconium (Ref 7). Figure 30 shows the grain structure of 5754 aluminum alloy sheet (UNS A95754) revealed by anodizing using Barker's reagent and viewed using crossed-polarized light. Again, color image analysis now makes this image useful for grain size measurements.
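Where grains are distinguished by color rather than by dark boundaries, as in the anodized aluminum just described or the tint-etched steel of Fig. 24(c), a color image analyzer in effect groups pixels by color before counting grains. The sketch below imitates that idea with scikit-image and scikit-learn; the file name, the assumed number of distinct grain colors, and the minimum grain area are illustrative assumptions, not a description of any commercial system.

# Group pixels of a color micrograph by color, then label contiguous patches
# of one color class as candidate grains. All inputs are assumed examples.
import numpy as np
from skimage import io, color, measure
from sklearn.cluster import KMeans

rgb = io.imread("anodized_grains.png")[..., :3]
lab = color.rgb2lab(rgb)                   # perceptually more uniform color space
h, w, _ = lab.shape

n_colors = 8                               # assumed number of distinct grain colors
labels = KMeans(n_clusters=n_colors, n_init=10).fit_predict(lab.reshape(-1, 3))
labels = labels.reshape(h, w)

# Shift by one so no cluster is mistaken for background by measure.label.
grains = measure.label(labels + 1, connectivity=1)
regions = [r for r in measure.regionprops(grains) if r.area > 50]

print(f"Candidate grains found: {len(regions)}")
print(f"Mean candidate grain area: {np.mean([r.area for r in regions]):.0f} pixels")

Actual grain size ratings would still follow the ASTM procedures cited earlier; the clustering step merely replaces the single gray-level threshold that a black and white system cannot apply to such images.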

Fig. 27 Selective etching of naval brass with Klemm I reagent reveals the β-phase (dark constituent) in the α-β copper alloy. (a) Transverse section. (b) Longitudinal section


Fig. 28 Selective tint etching of Al-33%Cu eutectic alloy. The θ phase is revealed. For color version of Fig. 28, see endsheets of book.

Fig. 29 Selective etching of sintered tungsten carbide-cobalt (WC-Co) cutting tool material. (a) Some graphite particles are visible in the as-polished condition. (b) Light relief polishing outlines cobalt binder phase. (c) Hydrochloric acid saturated with ferric chloride solution etch darkens the cobalt phase. (d) Subsequent Murakami's reagent etch reveals edges of WC grains. Viewed using bright field illumination


Heat Tinting. Although not commonly used, heat tinting (Ref 7) is an excellent method to obtain color contrast between constituents or grains. An unmounted polished specimen is placed face up in an air-fired furnace and held at a set temperature as an oxide film grows on the surface. Interference effects, as in tint etching, create coloration for film thicknesses within a certain range—approximately 20 to 500 nm. The observed color is a function of the film thickness. Thermal exposure must be such that it does not alter the microstructure. The correct temperature must be determined by the trial-and-error approach, but the procedure is reproducible and reliable. Figure 31 shows the grain structure of commercially pure (CP) titanium revealed by heat tinting. Interference Layer Method. The interference layer method (Ref 7), introduced by Pepperhoff in 1960, is another procedure used to form a film over the microstructure that generates color by interference effects. In this method, a suitable material is deposited on the polished specimen face by vapor deposition to produce a low-absorption, dielectric film having a high refractive index at a thickness within the range for interference. Very small differences in the natural reflectivity between constituents and the matrix can be dramatically enhanced by this method. Suitable materials for the production of evaporation layers are summarized in Ref 22 and 23. The technique is universally applicable but requires the use of a vacuum evaporator. Its main limitation is difficulty in obtaining a uniformly coated large surface area for measurement.

Fig. 30 Grain coloration of a heat-treated (340 °C, or 645 °F, 2 h) 5754 aluminum alloy sheet (longitudinal plane) obtained by anodizing using Barker's reagent (30 V direct current, 2 min). Viewed using crossed polarized light. For color version, see endsheets of book.


Fig. 31 Grain coloration of commercially pure titanium obtained by heat tinting, viewed using crossed polarized light. For color version of figure, see endsheets of book.

Conclusions

Preparation of metallographic specimens is based on scientific principles that are easily understood. Sectioning creates damage that must be removed by the grinding and polishing steps if the true structure is to be examined. Each sectioning process produces a certain amount of damage, thermal and/or mechanical; consequently, select a procedure that produces the least possible damage. Grinding also causes damage, with the depth of damage decreasing with decreasing abrasive size. Materials respond differently to the same size abrasive, so it is not possible to generalize on metal removal depth. Removal rates also decrease with decreasing abrasive size. With experience, good, reproducible procedures can be established by each laboratory for the materials being prepared. Automation in specimen preparation offers much more than a reduction in labor. Specimens prepared using automated devices consistently have much better flatness, edge retention, and relief control, and greater freedom from artifacts such as scratches, pullout, smearing, and comet tailing. Some image analysis work is performed on as-polished specimens, but many applications require an etching technique to reveal the microstructural constituent of interest. Selective etching techniques are best. These may involve immersion tint etchants, electrolytic etching, potentiostatic etching, or techniques such as heat tinting or vapor deposition. In each case, the goal is to reveal only the constituent of interest with strong contrast. If this is done, image analysis measurement procedures are vastly simplified and the data are more precise and reproducible.

References

1. J.R. Pickens and J. Gurland, "Metallographic Characterization of Fracture Surface Profiles on Sectioning Planes," Proc. Fourth International Congress for Stereology, National Bureau of Standards Special Publication 431, U.S. Government Printing Office, Washington, D.C., 1976, p 269–272
2. A.M. Gokhale, W.J. Drury, and S. Mishra, Recent Developments in Quantitative Fractography, Fractography of Modern Engineering Materials: Composites and Metals, Second Volume, STP 1203, ASTM, 1993, p 3–22
3. A.M. Gokhale, Unbiased Estimation of Curve Length in 3-D Using Vertical Slices, J. Microsc., Vol 159 (Part 2), August 1990, p 133–141
4. A.M. Gokhale and W.J. Drury, Efficient Vertical Sections: The Trisector, Metall. Mater. Trans. A, Vol 25, 1994, p 919–928
5. B.R. Morris, A.M. Gokhale, and G.F. Vander Voort, Grain Size Estimation in Anisotropic Materials, Metall. Mater. Trans. A, Vol 29, Jan 1998, p 237–244
6. Metallography and Microstructures, Vol 9, Metals Handbook, 9th ed., American Society for Metals, 1985
7. G.F. Vander Voort, Metallography: Principles and Practice, ASM International, 1999
8. L.E. Samuels, Metallographic Polishing by Mechanical Methods, 3rd ed., American Society for Metals, 1982
9. G. Petzow and V. Carle, Metallographic Etching, 2nd ed., ASM International, 1999
10. G.F. Vander Voort, Grain Size Measurement, Practical Applications of Quantitative Metallography, STP 839, ASTM, 1984, p 85–131
11. G.F. Vander Voort, Wetting Agents in Metallography, Mater. Charact., Vol 35 (No. 2), Sept 1995, p 135–137
12. A. Skidmore and L. Dillinger, Etching Techniques for Quantimet Evaluation, Microstruct., Vol 2, Aug/Sept 1971, p 23–24
13. G.F. Vander Voort, Etching Techniques for Image Analysis, Microstruct. Sci., Vol 9, Elsevier North-Holland, NY, 1981, p 135–154
14. G.F. Vander Voort, Phase Identification by Selective Etching, Appl. Metallography, Van Nostrand Reinhold Co., NY, 1986, p 1–19
15. E.E. Stansbury, Potentiostatic Etching, Appl. Metallography, Van Nostrand Reinhold Co., NY, 1986, p 21–39
16. E. Beraha and B. Shpigler, Color Metallography, American Society for Metals, 1977
17. G.F. Vander Voort, Tint Etching, Metal Progress, Vol 127, March 1985, p 31–33, 36–38, 41
18. E. Weck and E. Leistner, Metallographic Instructions for Colour Etching by Immersion, Part I: Klemm Colour Etching, Vol 77, Deutscher Verlag für Schweisstechnik GmbH, 1982
19. E. Weck and E. Leistner, Metallographic Instructions for Colour Etching by Immersion, Part II: Beraha Colour Etchants and Their Different Variants, Vol 77/II, Deutscher Verlag für Schweisstechnik GmbH, 1983
20. E. Weck and E. Leistner, Metallographic Instructions for Colour Etching by Immersion, Part III: Non-Ferrous Metals, Cemented Carbides and Ferrous Metals, Nickel-Base and Cobalt-Base Alloys, Vol 77/III, Deutscher Verlag für Schweisstechnik, 1986
21. P. Skocovsky, Colour Contrast in Metallographic Microscopy, Slovmetal, Žilina, 1993
22. H.E. Bühler and H.P. Hougardy, Atlas of Interference Layer Metallography, Deutsche Gesellschaft für Metallkunde, 1980
23. H.E. Bühler and I. Aydin, Applications of the Interference Layer Method, Appl. Metallography, Van Nostrand Reinhold Co., NY, 1986, p 41–51

CHAPTER 4

Principles of Image Analysis

James C. Grande, General Electric Research and Development Center

THE PROCESS by which a visualized scene is analyzed comprises specific steps that lead the user to either an enhanced image or data that can be used for further interpretation. A decision is required at each step before the next step can be taken, as shown in Fig. 1. In addition, many different algorithms can be used at each step to achieve desired effects and/or measurements. To illustrate the decision-making process, consider the following hypothetical situation. Visualize a polished section of nodular gray cast iron, best acquired using reflected bright-field illumination. After digitizing, the image is enhanced to delineate the edges more clearly. Then the threshold (gray-level range) of the graphite in the metal matrix is set, and the image is transformed into binary form. Next, some binary image processing is performed to eliminate the graphite flakes so the graphite nodules can be segmented as the features of interest. Finally, image analysis software measures the area fraction and size distribution of the nodules, providing data that can be compared against the specifications of the material being analyzed. This chapter discusses the practice of image processing for analysis and explores issues and concerns of which the user should be aware.
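The sequence just described can be made concrete in a few lines of code. The following is a minimal sketch only, not the routine of any particular commercial system, written with the open-source SciPy and scikit-image libraries; the file name, the use of Otsu's method for the threshold, and the size cutoff standing in for flake removal are all illustrative assumptions.

    import math
    from scipy import ndimage as ndi
    from skimage import io, filters, measure, morphology

    image = io.imread("cast_iron.tif", as_gray=True)    # hypothetical file name

    # enhance: light median smoothing to clean up edges before thresholding
    smoothed = ndi.median_filter(image, size=3)

    # threshold the dark graphite and convert the image to binary form
    binary = smoothed < filters.threshold_otsu(smoothed)

    # binary processing: remove small objects (a stand-in for the
    # flake-elimination step described in the text)
    binary = morphology.remove_small_objects(binary, min_size=50)

    # measure area fraction and size distribution of the remaining nodules
    labels = measure.label(binary)
    area_fraction = binary.mean()
    # diameters are in pixels; multiply by the calibration (e.g., µm/pixel) for real units
    diameters = [2.0 * math.sqrt(p.area / math.pi) for p in measure.regionprops(labels)]

In practice, each of these steps would be tuned interactively against known specimens before being trusted for routine analysis.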

Image Considerations

An image in its simplest form is an array of numbers representing the intensity of a visualized object at each spatial coordinate (x and y, or horizontal and vertical) (Fig. 2). The number array is the fundamental form by which mathematical calculations are performed to enhance an image or to make quantitative measurements of features contained in an image.


In the digital world, the image is composed of small, usually square (to avoid directional bias) picture elements called pixels. The gray level, or intensity, of each pixel relates to the number of light photons striking the detector within a camera. Images typically range in size from arrays of 256 × 256 pixels to as large as 4096 × 4096 pixels using specialized imaging devices. A myriad of cameras having wide-ranging resolutions and sensitivities is available today.

Fig. 1 Image analysis process steps. Each step has a decision point before the next step can be achieved.

Fig. 2 Actual image area with corresponding magnified view. The individual pixels are arranged in x, y coordinate space with a gray level, or intensity, associated with each one.


In the mid- to late 1980s, 512 × 512 pixel arrays were the standard. Older systems typically had 64 (2⁶) gray levels, whereas at the time of this publication, all commercial systems offer at least 256 (2⁸) gray levels, although there are systems having 4096 (2¹²) and 65,536 (2¹⁶) gray levels. These often are referred to as 6-bit, 8-bit, 12-bit, and 16-bit cameras, respectively. The process of converting an analog signal to a digital one has some limitations that must be considered during image quantification. For example, pixels that straddle the edge of a feature of interest can affect the accuracy and precision of each measurement because an image is composed of square pixels having discrete intensity levels. Whether a pixel resides inside or outside a feature edge can be quite arbitrary and depends on the positioning of the feature within the pixel array. In addition, the pixels along the feature edge effectively contain an intermediate intensity value that results from averaging adjacent pixels. Such considerations suggest minimizing pixel size and increasing the number of gray levels in a system, particularly if features of interest are very small relative to the entire image, at the most reasonable equipment cost.

Resolution versus Magnification. Two of the more confusing aspects of a digital image are the concepts of resolution and magnification. Resolution can be defined as the size of the smallest feature that can be resolved. For example, the theoretical limit at which it is no longer possible to distinguish two distinct adjacent lines using light as the imaging method is a separation distance of about 0.3 µm. Magnification, on the other hand, is the ratio of an object dimension in an image to the actual size of the object. Determining that ratio sometimes can be problematic, especially when the actual dimension is not known. The displayed dimension of pixels is determined by the true magnification of the imaging setup. However, the displayed pixel dimension can vary considerably with the display medium, such as a monitor or a hard-copy (paper) printout. This is because a typical screen resolution is 72 dots per inch (dpi), and unless the digitized image pixel resolution is exactly the same, the displayed image might be smaller or larger than the expected size due to the scaling of the visualizing software. For example, if an image is digitized into a computer as a 1024 × 1024 pixel array, the dpi could be virtually any number, depending on the imaging program used. If that same 1024 × 1024 image is converted to 150 dpi and viewed on a standard monitor, it would appear to be twice as large as expected due to the 72 dpi monitor resolution limit. The necessary printer resolution for a given image depends on the number of gray levels desired, the resolution of the image, and the specific print engine used. Typically, printers require a 4 × 4 dot array for each pixel if 16 shades of gray are needed. An improvement in output dpi by a factor of 1.5 to 2 is possible with many printers by optimizing the raster, which is a scanning pattern of parallel lines that forms the display of an image projected on a printing head of some design.


For example, a 300 dpi image having 64 gray levels requires a 600 dpi printer for correct reproduction. While these effects are consistent and can be accounted for, they still require careful attention because accurate depiction of size and shape can be dramatically affected by incorrect interpretation of the size of the pixel array used. It is possible to get around these effects by including a scale marker or resolution (e.g., µm/pixel) on all images. Then, accurate depiction of the true size of features in the image is achieved both on monitor display and on paper printout regardless of the enlargement. The actual size of a stored image is nearly meaningless unless the dimensional pixel size (or image size) is known, because the final magnification is strictly dependent on the image resolution and output device used.

Measurement Issues. Another issue with pixel arrays is determining what is adequate for a given application. The decision influences the sampling necessary to achieve adequate statistical relevance and the resolving power necessary to obtain accurate measurements. For example, if it is possible to resolve the features of interest using the same microscope setup and two cameras having differing resolutions, the camera having the lower resolution should be used because it will cover a much greater area of the sample. To illustrate this, consider that in a system using a 16× objective and a 1024 × 1024 pixel camera, each pixel represents 0.3 µm. Measuring 10 fields to provide sufficient sampling statistics gives a total area of 0.94 mm² (0.0015 in.²). Using the same objective but switching to a 760 × 574 pixel camera, the pixel size is 0.66 µm; to measure the same total area of 0.94 mm², only 5 fields are required. This could save substantial time if the analysis is complicated and slow, or if there are hundreds or thousands of samples to measure. However, this example assumes that it is possible to sufficiently resolve the features of interest using either camera and the same optical setup, which often is not the case. One of the key points to consider is whether or not the features of interest can be sufficiently resolved. Using a microscope, it is possible to envision a situation where camera resolution is not a concern because, if there are small features, magnification can easily be increased to accurately quantify, for instance, feature size and shape. While this logic is accurate, in reality there is much to be gained by maximizing the resolution of a given system within hardware and financial constraints. In general, the more pixels that can be "packed" into a feature, the more precise is the boundary detection when measuring the feature (Fig. 3). As mentioned previously, the tradeoff of increasing magnification to resolve small features is a greater sampling requirement. Due to the misalignment of square pixels with the actual edge of a feature, significant inaccuracies can occur when trying to quantify the shape of a feature with only a small number of pixels (Fig. 4).


If the user is doing more than just determining whether or not a feature exists, the relative accuracy of the system is the limiting factor in making physical property measurements or in correlating measurements with the microstructure. When small features exist within an array of larger features, increasing the magnification to improve resolving power forces the user to systematically account for edge effects and significantly increases the number of fields required to cover the same area that a lower magnification can cover. Again, the tradeoff has to be balanced against the accuracy needed, the system cost, and the speed desired for the application. If a high level of shape characterization is needed, a greater number of pixels may be needed to resolve subtle shape variations.
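The sampling arithmetic in the camera comparison above is easy to script. This is a minimal sketch of that bookkeeping only; the pixel sizes and target area are the illustrative values used in the text, not properties of any particular camera.

    def fields_needed(pixel_um, width_px, height_px, target_mm2):
        # area of one field of view, in mm^2, then the number of fields required
        field_mm2 = (width_px * pixel_um / 1000.0) * (height_px * pixel_um / 1000.0)
        return field_mm2, target_mm2 / field_mm2

    for name, pixel_um, w, h in [("1024 x 1024 camera", 0.30, 1024, 1024),
                                 ("760 x 574 camera", 0.66, 760, 574)]:
        field_area, n = fields_needed(pixel_um, w, h, target_mm2=0.94)
        print(name, round(field_area, 3), "mm^2 per field,", round(n, 1), "fields")
    # prints roughly 10 fields for the first camera and 5 for the second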

Fig. 3 Small features magnified over 25 times showing the differences in the size and number density of pixels within features when comparing a 760 × 560 pixel camera and a 1024 × 1024 pixel camera

Fig. 4 Three scenarios of the effects of a minute change in position of a circular feature within the pixel array and the inherent errors in size that can result


One way to determine the acceptable magnification is to begin with a much higher magnification and perform the measurements needed, then repeat the same measurement using successively lower magnifications. An analysis routine can be set up after determining the lowest acceptable magnification for the camera resolution used.

Image Storage and Compression

Many systems store images on a permanent medium (e.g., floppy, hard, and optical disks) using proprietary algorithms, which usually compress images to some degree. There also are standardized compression algorithms, for example, that of the Joint Photographic Experts Group (JPEG) and the tagged image file format (TIFF). The proliferation of proprietary algorithms makes it cumbersome for users of imaging systems to share images, but many systems offer the option to export images into standard formats. Care must be exercised when storing images in standard formats because considerable loss of information can occur during the image compression process. For instance, JPEG images are compressed by combining contiguous segments of like gray/color levels in an image. A 512 × 512 × 24-bit image having average color detail compresses to 30 KB when saved using a mid-range level of compression but shrinks to 10 KB when the same image without any features is compressed. The same image occupies 770 KB when stored in bitmap or TIFF form without any compression. In addition, repeated JPEG compression of an image by opening and saving it results in increasing information loss, even with identical settings. Therefore, it generally is recommended that very limited compression (no less than half the original size) be used for images that are intended for analysis, as opposed to images that are for archival and visualization purposes only. The errors associated with compression depend on the type of image being compressed and the size and gray-level range of the features to be quantified. If compression is necessary, it is recommended that image measurements be compared before and after compression to determine the inaccuracies introduced (if any) for a particular application. In general, avoid compression when measuring a large array of small features in an image. Compression is much less of an issue when measuring large features (e.g., coatings or layers on a substrate) that contain thousands of pixels.
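One practical way to apply that recommendation is to compare a measurement made on the original image with the same measurement made on a compressed copy. The following is a minimal sketch using the Pillow library; the file names are hypothetical, the fixed threshold of 128 is arbitrary, and any measurement of interest could be substituted for the area fraction used here.

    import numpy as np
    from PIL import Image

    original = Image.open("particles.tif").convert("L")   # hypothetical file
    original.save("particles_q50.jpg", quality=50)        # mid-range JPEG compression
    compressed = Image.open("particles_q50.jpg")

    a = np.asarray(original) < 128     # simple fixed threshold, for illustration only
    b = np.asarray(compressed) < 128

    print("area fraction, original:  ", a.mean())
    print("area fraction, compressed:", b.mean())
    print("pixels that changed state:", int(np.count_nonzero(a != b)))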

Image Acquisition

Image acquisition devices include light microscopes, electron microscopes (e.g., scanning electron, transmission electron, and Auger), laser scanning, and other systems that translate a visualized scene into analog or digital form.


The critical factor in determining whether useful information can be gleaned from an image is whether there is sufficient contrast between the features of interest and the background. The acquisition device presents its own set of constraints, which must be considered during the image processing phase of analysis. For instance, images produced using a transmission electron microscope (TEM) typically are difficult to analyze because the contrast mechanism produces gradual transitions of feature gray levels as the raster scans the sample. However, backscattered electrons can be used to improve contrast due to the different atomic numbers of different phases contained in a sample examined on a flat surface with no topographic features. Alternatively, elemental signal information might be used to distinguish features of interest in an appropriately equipped scanning electron microscope (SEM) based on the chemical composition of the features. When using a light microscope to image samples, dark-field illumination sometimes is used to illuminate features that do not ordinarily reflect most of the light to the objective, as usually occurs under bright-field illumination.

Images are electronically converted from an analog signal to a digital array by various means and transferred into computer random access memory (RAM) for further processing. Earlier imaging sensors were mainly of the vacuum tube type, designed for specific applications, such as low-light sensitivity and stability. The main limitations of these sensors were nonlinear light response and geometric distortion. The bulk of today's sensors are solid-state devices, which have nearly zero geometric distortion and linear light response and are very stable over time. Frame-acquisition electronics (often referred to as a frame grabber), the complementary part to the imaging sensor, converts the signal from the camera into a digital array. The frame grabber selected must match the camera being used: clock speed, signal voltage, input signals, and computer interface must be considered when matching the frame grabber to the camera. Some cameras have the digitizing hardware built in and only require the appropriate cable to transfer the data to the computer. An optical scanner is another imaging device that can produce low-cost, very high-resolution images with minimal distortion. The device, however, requires an intermediate imaging step to produce a print or negative that subsequently can be scanned into a computer.

Illumination uniformity and the inherent fluctuations that can occur with a camera are critical during the acquisition process. Setting up camera gain, offset, and other variables can be critical in attaining consistent results (Ref 1). Any system requires that two basic questions be answered:

• Do the size and shape of features change with position within the camera?
• Is the feature gray-level range the same over time?


Users generally turn to the use of dc power supplies, which isolate power from house current to minimize subtle voltage irregularities. Also, some systems contain feedback loops that continuously monitor the amount of light emanating from the light source and adjust the voltage to compensate for intensity fluctuations. Another way of achieving consistent intensities is to create a sample that can be used as a standard when setting up the system. This can be done by measuring either the actual intensity or feature size of a specified area on the sample.

Image Processing

Under ideal conditions, a digitized image can be directly binarized (converted to black and white) and measured to obtain the desired features. However, insufficient contrast, artifacts, and/or distortions very often prevent straightforward feature analysis. Image processing can be used in this situation to compensate for a plethora of image deficiencies, enabling fast and accurate analysis of features of interest. Gray-level image processing often is used to enhance features in an image either for visualization purposes or for subsequent quantification. The rapid increase in the number of algorithms over the years offers many ways to enhance images, and many of these algorithms can be used in real time with the advent of low-cost, high-performance computers.

Shading Correction. Image defects that are caused by uneven illumination or artifacts in the imaging path must be taken into account during image processing. Shading correction is used when a large portion of an image is darker or lighter than the rest of the image due to, for example, bulb misalignment or poor optics in the system. The relative differences between features of interest and the background usually are the same, but features in one area of the image have a different gray-level range than the same type of feature in another portion of the image. The main methods of shading correction use either a background reference image, actual or artificial, or polynomial fitting of nearest-neighbor pixels. A featureless reference image requires the acquisition of an image using the same lighting conditions but without the features of interest. The reference image is then subtracted from or divided into the shaded image (depending on the light response) to level the background. If a reference image cannot be obtained, it is sometimes possible to create a pseudoreference image by using rank-order processing (discussed later) to diminish the features and blend them into the background (Fig. 5). Polynomial fitting also can be used to create a pseudo-background image, but such an image is difficult to generate if the features are neither distinct nor somewhat evenly distributed.
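A minimal sketch of the two correction routes described above, using SciPy and scikit-image; the file names are hypothetical, and the large median window used to build the pseudo-background assumes the features are small relative to that window.

    import numpy as np
    from scipy import ndimage as ndi
    from skimage import io

    shaded = io.imread("shaded_field.tif", as_gray=True)      # hypothetical file

    # Route 1: an actual featureless reference image of the same field of view
    reference = io.imread("blank_field.tif", as_gray=True)    # hypothetical file
    corrected = shaded / np.clip(reference, 1e-6, None)       # divide for a multiplicative response

    # Route 2: a pseudo-background built with a large rank-order (median) filter
    pseudo_background = ndi.median_filter(shaded, size=101)   # window much larger than the features
    corrected_rank = shaded - pseudo_background               # subtract for an additive response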


Each shading correction methodology has its own advantages and limitations, which usually depend on the type of image and illumination used. Commercial systems usually use one shading correction method, optimized for that particular system; the choice also may depend on how easily a reference image can be obtained or on the degree of variation in the image.

Pixel point operations are a class of image enhancements that do not alter the relationship of pixels to their neighbors. This class of algorithms uses a type of transfer function, usually called a look-up table, or LUT, to translate original gray levels into new gray levels.

Fig. 5 Rank-order processing used to create a pseudoreference image. (a) Image without any features in the light path showing dust particles and shading from dark regions to light regions going from the upper left to the lower right. (b) Same image after shading correction. (c) Image of particles without shading correction. (d) Same image after shading correction showing uniform illumination across the entire image


For instance, a pseudocolor LUT enhancement simply correlates a color with a gray value and assigns a range of colors to the entire gray-level range in an image. This technique can be very useful to delineate subtle features. For example, it is nearly impossible to distinguish features having a difference of, say, five gray levels. However, it is possible to delineate such subtle features by assigning different colors to different gray-level ranges, because the human eye can distinguish different hues much better than it can different gray levels. Another useful enhancement uses a transfer function that changes the relationship between the input gray level and the output, or displayed, gray level from a linear one to another that enhances the desired image features (Fig. 6). This often is referred to as the gamma curve for the displayed image and has many useful effects, especially when viewing very bright objects with very dark features, such as thermal barrier coatings. An image can also be displayed as a histogram by summing all the pixels in uniform ranges of gray levels and plotting the number of pixels versus gray level (Fig. 7).
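A look-up table for an 8-bit image is simply a 256-entry array indexed by the original gray level. The following is a minimal sketch of the gamma-style transformation described above; the gamma value is an arbitrary example, not a recommended setting.

    import numpy as np

    gamma = 0.5   # values below 1 brighten dark pixels while leaving light pixels nearly unchanged
    lut = (255.0 * (np.arange(256) / 255.0) ** gamma).astype(np.uint8)

    def apply_lut(image_8bit, lut):
        # each output pixel is the table entry for its input gray level
        return lut[image_8bit]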

Fig. 6 Reflected bright-field image of an oxide coating before and after use of a gamma curve transformation that translates pixels with lower intensities to higher intensities while keeping the original lighter pixels near the same levels

Fig. 7 Example of a gray-level histogram generated from an image


An algorithm can be used to transform the histogram, distributing intermediate brightness values evenly throughout the full gray-level range (usually 0–255), a technique called histogram equalization. The effect is that an individual pixel has the same relative brightness but a gray level shifted from its original value. The shift in gray-level gradients often provides improved contrast of previously subtle features, as shown in Fig. 8.

Neighborhood kernel processing is a class of operations that translates individual pixels based on surrounding pixels. The concept of using a kernel, or two-dimensional array of numeric operators, provides a wide range of image enhancements, including:

• Sharpening an image
• Eliminating noise
• Smoothing edges
• Finding edges
• Accentuating subtle features

Fig. 8 Reflected-light image of an aluminum-silicon alloy before and after gray-level histogram equalization, which significantly improves contrast of the subtle smaller silicon particles by uniformly distributing intensities


These algorithms should be used carefully because the effect on an individual pixel depends on its neighbors; the output image after processing can vary considerably from image to image when making quantitative measurements. Numerous mathematical formulas, derivatives, and least-squares curve fitting also can be used to provide various enhancements. Neighborhood kernel processing includes rank-order, Gaussian, Laplacian, and averaging filters. An example of a rank-order filter is the median filter, which determines the median, or 50%, value of the set of gray values in the selected kernel and replaces the central value with that median value. The algorithm then translates the kernel to the next pixel and applies the same process (Fig. 9). A variety of operators, with the resulting image transformations, are illustrated in Fig. 10. Reference 2 describes many kernel filters in much greater detail, together with example images.

Arithmetic Processing of Images. Image processing that uses more than one image and combines them in some mathematical way is useful to accentuate subtle differences between images and to observe spatial dependencies. For example, adding images is used to increase the brightness in an image, averaging images is used to reduce noise, and subtracting images is used to correct for background shading (see the section "Shading Correction") and to highlight subtle and not-so-subtle differences. Other mathematical manipulations are used occasionally, but their effectiveness can vary widely due to the extreme values that can result when multiplying or dividing gray values from two images.

Frequency domain transformation is another image enhancement, particularly useful to distinguish patterns, remove very fine texture, and determine repeating periodic structures.

Fig. 9 Schematic showing how kernel processing works by moving kernel arrays of various sizes over an image and using a formula to transform the central pixel accordingly. In the example shown, a median filter is used.
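The kernel operation sketched in Fig. 9 can be written out directly: collect the neighborhood around each pixel, apply a formula, and write the result to the output image. This is a deliberately slow, minimal illustration in plain NumPy; library routines such as scipy.ndimage.median_filter perform the same operation far more efficiently.

    import numpy as np

    def median_filter_3x3(image):
        out = image.copy()                 # border pixels are left unchanged in this sketch
        rows, cols = image.shape
        for r in range(1, rows - 1):
            for c in range(1, cols - 1):
                neighborhood = image[r - 1:r + 2, c - 1:c + 2]   # the 3 x 3 kernel
                out[r, c] = np.median(neighborhood)              # replace the central value
        return out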


The most popular transform is the Fourier transform, which uses the fast Fourier transform (FFT) algorithm to quickly calculate the power spectrum and complex values in frequency space. Usually, the power spectrum display is used to determine periodic features or preferred orientations, which assists in determining the alignment in an electron microscope and in identifying fine periodic structures (Fig. 11). A more extensive description of transforms can be found in Ref 2.
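A minimal sketch of computing a power spectrum and suppressing selected frequency components with NumPy, in the spirit of Fig. 11; the logarithmic scaling is only for display, and the mask of components to keep is assumed to be prepared by the user.

    import numpy as np

    def power_spectrum(image):
        f = np.fft.fftshift(np.fft.fft2(image))   # move zero frequency to the center
        return np.log1p(np.abs(f) ** 2)           # log scale for display

    def remove_periodic(image, keep_mask):
        # keep_mask: boolean array (same shape), True where frequency components are kept
        f = np.fft.fftshift(np.fft.fft2(image))
        f[~keep_mask] = 0
        return np.real(np.fft.ifft2(np.fft.ifftshift(f)))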

Fig. 10 Examples of neighborhood kernel processing using various processes. (a) Original reflected-light image of a titanium alloy. Image using (b) gradient filter, (c) median filter, (d) Sobel operator, (e) top-hat processing, (f) gray-level opening

Fig. 11 Defect shown with different image enhancements. (a) High-resolution image from a transmission electron microscope of a silicon carbide defect in silicon showing the alignment of atoms. (b) Power spectrum after application of fast Fourier transform (FFT) showing dark peaks that result from the higher-frequency periodic silicon structure. (c) Defect after masking the periodic peaks and performing an inverse FFT


Feature Discrimination

Thresholding. As previously described, an image that has 256 gray values needs to be processed in such a way as to allow quantification by reducing the available gray values in an image to only the features of interest. The process in which 256 gray values are reduced to 2 gray values (black and white, or 0 and 1) is called thresholding. It is accomplished by selecting the gray-level range of the features of interest. Pixels within the selected gray-level range are assigned as foreground, or detected features, and everything else as background, or undetected features. In other terms, thresholding simply converts the image to a series of 0s and 1s, which represent undetected and detected features, respectively. Whether white features represent foreground or vice versa varies with image analysis systems, but it does not affect the analysis in any way and usually is a matter of the programmer's preference. The segmentation process usually yields one of three types of images, depending on the system: a black and white image, a bit-plane image, or a feature-boundary representation (Fig. 12). The difference between the methods is analogous to a drawing program versus a painting program. A drawing program creates images using lines and/or polygons to represent features and uses much less space; it also can quickly redraw, scale, and change an image comprising multiple features. By comparison, a painting program processes images one pixel at a time and allows the user to change the color of individual pixels because each image comprises various pixel arrangements. The replicated black and white image is more memory intensive because, generally, it creates another image of the same size and gray-level depth after processing and thresholding, and it requires the same amount of computer storage as the original image. A bit-plane image is a binary image, usually having a color that represents the features of interest; it often is easier to track binary image processing steps during development using the bit-plane method. Feature-boundary representation is more efficient when determining feature perimeter and shape. There is no inherent advantage to any methodology because the final measurements are similar and the range of processing algorithms and possible feature measurements remain competitive.

Segmentation. Basically, there are three ways that a user indicates to an image analysis system the appropriate threshold for segmentation using gray level:

• Enter the gray-level values that represent the desired range.
• Select both width (gray-level range) and location (gray-level values) by moving a slider along a gray-level spectrum bar (Fig. 13). This is known as the interactive method. Interactive selection usually affects the size of a colored overlay bit plane that is superimposed on the gray-level image, which allows setting the gray-level range to agree with the user's assessment of the correct feature boundaries.
• Determine if there are any peaks that correspond to many pixels within a specific gray-level range using a gray-level histogram (Fig. 14).


Interactive selection and histogram characteristic-peak thresholding methods are used frequently, sometimes together, depending on the particular type of image being viewed. Automatic thresholding often uses the histogram-peaks method to determine where to set the gray-level ranges for image segmentation. However, when using automatic thresholding, the user must be careful because changes in overall brightness, artifacts, or varying amounts of foreground features can change the location and the relative size of the peaks. Some advanced algorithms can overcome these variations.
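Automatic selection from the histogram can be illustrated with Otsu's method, one common histogram-based algorithm; it is used here only as an example and shares the sensitivity to changing brightness and area fraction noted above. The interactive method amounts to the same comparison with user-chosen limits.

    from skimage import filters

    def segment_auto(gray_image):
        t = filters.threshold_otsu(gray_image)    # threshold chosen from the histogram
        return gray_image > t                     # binary image of detected features

    def segment_range(gray_image, low, high):
        # interactive-style thresholding with an explicit gray-level range
        return (gray_image >= low) & (gray_image <= high)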

Fig. 12 Images showing the three main transformations from a gray-level image to a thresholded image. (a) Original gray-level image. (b) Black and white image. (c) Binary image using a colored bit plane. (d) Detected feature boundaries


There are more issues to consider when thresholding color images for features of interest. Most systems use red, green, and blue (RGB) channels to establish a color for each pixel in an image. It is difficult to determine the appropriate combination of red, green, and blue signals to distinguish features. Some systems allow the user to select a series of points in a color image and automatically calculate the RGB values, which are then used to threshold the entire image. A better methodology than RGB color space for many applications is to view a color image in hue, intensity, and saturation (HIS) space. The advantage of this method is that color information (hue and saturation) is separated from brightness (intensity). Hue essentially is the color a user observes, while saturation is the relative strength of the color. For example, translating "dark green" to an HIS perspective would use dark as the level of saturation (which generally ranges between 0 and 100%) and green as the hue observed.

Fig. 13 Interactive method of selecting gray levels with a graphic slider

Fig. 14 Thresholding gray levels in an image by selecting the gray-level peaks that are characteristic of the features of interest


While saturation describes the relative strength of color, intensity is associated with the brightness of the color. Intensity is analogous to thresholding of gray values in black and white space. Hue, intensity, and saturation space also is described as hue, lightness, and saturation (HLS) space, where L quantifies the dark-light aspect of colored light (see Chapter 9, "Color Image Processing").
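A minimal sketch of color thresholding using the hue/saturation/value conversion available in scikit-image (a close relative of the HIS and HLS descriptions above); the hue and saturation limits are arbitrary placeholders for a fairly strong green, not recommended values.

    from skimage import color

    def select_green(rgb_image):
        hsv = color.rgb2hsv(rgb_image)            # channels: hue, saturation, value, each in [0, 1]
        hue, sat = hsv[..., 0], hsv[..., 1]
        # hue near 1/3 corresponds to green; also require moderate saturation
        return (hue > 0.22) & (hue < 0.45) & (sat > 0.3)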

Fig. 15 Delineation filter enhances feature edges by sharpening the transition of gray values considerably, providing more leeway when thresholding. (a) Magnified original gray-level image of particles showing gradual transition of gray levels along the feature edges. (b) The same image after using a delineation filter


Nonuniform Segmentation. Selecting the threshold range of gray levels to segment foreground features sometimes results in overdetecting some features and underdetecting others. This is due not only to varying brightness across an image, but also often to the gradual change of gray levels while scanning across a feature. Delineation enhancement is a useful gray-level tool in this situation (Fig. 15). This algorithm processes the pixels that surround features by transforming their gradual change in gray level into a much steeper curve. In this way, once features initially fall within the selected gray-level range, the apparent size of a feature will not change much as a wider band of gray levels is selected to segment all features. There are other gray-level image processing tools that can be used to delineate edges prior to segmentation and to improve contrast in certain regions of an image, and their applicability to a specific application can be determined by experimenting with them.

Watershed Segmentation. Watershed transformations are iterative processes performed on images that have space-filling features, such as grains. The enhancement usually starts with the basic eroded point, or the last point that exists in a feature during successive erosions, often referred to as the ultimate eroded point. (Erosion and dilation are the removal and addition, respectively, of pixels at the boundary of features based on neighborhood relationships.) The basic eroded point is dilated until the edge of the dilating feature touches another dilating feature, leaving a line of separation (watershed line) between touching features. Another, much faster approach is to create a Euclidean distance map (EDM), which assigns successively brighter gray levels to each dilation iteration in a binary image (Ref 2). The advantage of this approach is that the periphery of each feature grows until impeded by the growth front of another feature. Although watershed segmentation is a powerful tool, it is fraught with application subtleties when applied to a wide range of images. The reader is encouraged to refer to Ref 2 and 3 to gain a better understanding of the proper use and optimization of this algorithm and for a detailed discussion of the use of watershed segmentation in different applications.

Texture Segmentation. Many images contain texture, such as lamellar structures, and features of widely varying size, which may or may not be the features of interest. Several gray-level algorithms are particularly well suited to images containing texture because of the inherent frequency or spatial relationships between structures. These operators usually transform gradually varying features (low frequency) or highly varying features (high frequency) into an image with significantly less texture. Algorithms such as the Laplacian, Variance, Roberts, Hurst, and Frei and Chen operators often are used either alone or in combination with other processing algorithms to delineate structures based on differing textures. Methodology to characterize banding and orientation of microstructures of metals and alloys is covered in ASTM E 1268 (Ref 4).

Pattern-matching algorithms are powerful processing tools used to discriminate features of interest in an image. Usually, they require prior knowledge of the general shape of the features contained in the image. For instance, if there are cylindrical fibers oriented in various ways within a two-dimensional section of a composite, a set of boundaries can be generated that correspond to the angles at which a cylinder might occur in three-dimensional space. The generated boundaries are matched to the actual fibers that exist in the section, and the fiber angles are calculated based on the matched patterns (Fig. 16). Generally, pattern-matching algorithms are used when required measurements cannot be directly made or calculated from the shape of a binary feature of interest.
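A minimal sketch of distance-map-based watershed separation of touching features, using SciPy and scikit-image; the footprint used for locating the ultimate points is an assumption that would need tuning for real images, which is exactly the kind of application subtlety Ref 2 and 3 discuss.

    import numpy as np
    from scipy import ndimage as ndi
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed

    def separate_touching(binary):
        distance = ndi.distance_transform_edt(binary)              # Euclidean distance map
        regions, _ = ndi.label(binary)
        peaks = peak_local_max(distance, footprint=np.ones((7, 7)), labels=regions)
        markers = np.zeros(binary.shape, dtype=int)
        markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)     # one marker per ultimate point
        return watershed(-distance, markers, mask=binary)          # labeled, separated features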

Binary Image Processing

Boolean Logic. Binary representation of images allows simple analysis of features of interest while disregarding background information. There are many algorithms that operate on binary images to correct for imperfect segmentation. Boolean logic is a powerful tool that compares two images on a pixel-by-pixel basis and then generates an output image containing the result of the Boolean combination.


Fig. 16 Pattern matching used for reconstructing glass fibers in a composite. (a) Bright-field image of a glass fiber composite with several broken fibers. (b) Computer-generated image after pattern matching, which reconstructs the fibers, enabling quantification of the degree of fiber breakage after processing

Four basic Boolean operations are:

• AND
• OR
• Exclusive OR (XOR)
• NOT

These basic four often are combined in various ways to obtain a desired result, as illustrated in Fig. 17. A simple way to represent Boolean logic is with a truth table, which shows the criteria that must be fulfilled for a pixel to be included in the output image. When comparing two images, the AND Boolean operation requires that the corresponding pixels from both images be ON (1 = ON, 0 = OFF). Such a truth table would look like this:

Image A   Image B   A AND B
0         0         0
0         1         0
1         0         0
1         1         1

If a pixel is ON in one image and OFF in the other, the resulting pixel will be OFF after the AND Boolean operator is applied. The OR operator requires only that one or the other corresponding pixel from either image be ON to yield a pixel that is ON.


The XOR operator produces an ON pixel as long as the corresponding pixels are different; that is, one is ON and one is OFF. If both pixels are ON or both are OFF, the resulting output will be an OFF value. The NOT operator is simply the inverse of an image but, when used in combination with other Boolean operators, can yield interesting and useful results. Some other truth tables are shown below:

Image A   Image B   A OR B   A XOR B
0         0         0        0
0         1         1        1
1         0         1        1
1         1         1        0
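On binary images stored as Boolean arrays, these operations map directly onto element-wise operators, as this minimal NumPy sketch shows:

    import numpy as np

    a = np.array([1, 1, 0, 0], dtype=bool)
    b = np.array([1, 0, 1, 0], dtype=bool)

    print((a & b).astype(int))   # AND -> [1 0 0 0]
    print((a | b).astype(int))   # OR  -> [1 1 1 0]
    print((a ^ b).astype(int))   # XOR -> [0 1 1 0]
    print((~a).astype(int))      # NOT -> [0 0 1 1]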

An important use of Boolean operations is combining multiple criteria, including spatial relationships, multiphase relationships among various materials, brightness differences, and size or morphology, within a set of images. It is important that the order and grouping of the operations be maintained when designating a particular sequence of Boolean operations. Feature-based Boolean logic is an extension of pixel-based Boolean logic in that individual features, rather than individual pixels, are compared between images (Fig. 18).

Fig. 17 Examples of Boolean operators using two images


The resultant image contains the entire feature instead of just the parts of the feature that are affected by the Boolean comparison. Feature-based logic uses artificial features, such as geometric shapes, and real features, such as grain boundaries, to ascertain information about features of interest. There is a plethora of uses for Boolean operators on binary images, and also in combination with gray-scale images. Examples include coating thickness measurements, stereological measurements, contiguity of phases, and location detection of features.

Morphological Binary Processing. Beyond combining images in unique ways to achieve a useful result, there also are algorithms that alter individual pixels of features within binary images. There are hundreds of specialized algorithms that might help particular applications and merit further experimentation (Ref 2, 3). Several of the most popular are mentioned below.

Hole filling is a common tool that removes internal "holes" within features. For example, one technique completely fills enclosed regions of features (Fig. 19a, b) using feature labeling: the inverted image is labeled, only those features (the holes) that do not touch the image edge are kept, and these are combined with the original image using the Boolean OR operator to reconstruct the original binary image with the holes filled in. There is no limit on how large or tortuous the feature shape can be; the only requirement for hole filling is that the hole is completely contained within a feature. A variation of this is morphological-based hole filling, in which the holes are treated as features in the inverted image and processed in the desired way before the image is inverted back. For example, if only holes of a certain size are to be filled, the image is simply inverted, features below the desired size are eliminated, and then the image is inverted back (Fig. 19a, c, d). It also is possible to fill holes based on other shape criteria.

Fig. 18 Feature-based Boolean logic operates on entire features when determining whether a feature is ON or OFF. This example shows the result when using the AND Boolean operator with image A and image B from Fig. 17. An image B outline is shown for illustrative purposes.


Erosion and Dilation. Common operations that use neighborhood relationships between pixels include erosion and dilation. These operations simply remove or add pixels at the periphery of a feature (both externally and internally, if internal boundaries exist) based on the shape and location of the neighborhood pixels. Erosion often is used to remove extraneous pixels, which may result from overdetection during thresholding when some noise has the same gray-level range as the features of interest. When erosion is used in combination with a subsequent dilation (referred to as "opening"), it is possible to separate touching particles. Dilation often is used to connect features by first dilating the features and then eroding them to return the features to their approximate original size and shape (referred to as "closing").
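A minimal sketch of these operations using SciPy's binary morphology routines; the structuring element (kernel) and iteration count are precisely the parameters the following paragraphs warn about, and the values here are arbitrary.

    from scipy import ndimage as ndi

    kernel = ndi.generate_binary_structure(2, 1)   # 4-connected, cross-shaped neighborhood

    def opening(binary, iterations=1):
        # erosion followed by dilation: removes stray pixels and can split touching features
        eroded = ndi.binary_erosion(binary, structure=kernel, iterations=iterations)
        return ndi.binary_dilation(eroded, structure=kernel, iterations=iterations)

    def closing(binary, iterations=1):
        # dilation followed by erosion: bridges small gaps between nearby features
        dilated = ndi.binary_dilation(binary, structure=kernel, iterations=iterations)
        return ndi.binary_erosion(dilated, structure=kernel, iterations=iterations)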

Fig. 19 Effects of different hole-filling methods. (a) Transmitted-light image containing an array of glass particles with some interstitial clear regions within the particles. (b) Same image after the application of the hole-filling algorithm with dark gray regions showing the filled regions. Identified areas 1, 2, and 3 show erroneously filled regions due to the arrangement of particles. (c) Inverted or negative of the first image, which treats the original interstitial holes as individual features. (d) Image after removing features below a certain size and inverting the image to its original binary order with only interstitial holes filled


Most image analysis systems allow the option of using several neighborhood-kernel patterns (Fig. 20) and also allow selection of the number of iterations used. However, great care must be exercised when using these algorithms because the resulting feature shape (especially for small features) can differ significantly from the original feature shape. Parameter selection can dramatically affect features in the resulting image: if too many iterations are used relative to the size of the feature, it can take on the shape of the neighborhood pattern used (Fig. 21). Nevertheless, some very useful results can be achieved with the right erosion/dilation kernel shape. For instance, using a vertical-shape closing in a binary image of a surface can remove edges that fold over themselves (Fig. 22), which allows determination of the roughness of an interface.

Skeletonization, SKIZ, Pruning, and Convex Hull. A specialized use of erosion that prevents the separation of features while eroding away pixels is called skeletonization, or thinning. This operation is useful when thinning thick, uneven feature boundaries. Caution is advised when using this algorithm on very thick boundaries because the resulting skeleton can change dramatically depending on the existence of just a few pixels on an edge or within a feature.

Fig. 20 Examples of the effects of erosion on a feature using kernels of various shapes and the associated shape of a single pixel after dilation using the same kernel


Skeleton by influence zones (SKIZ), a variation of skeletonization, operates by simultaneously growing all features in an image (or eroding the background) to the extent possible given the zones of influence of the growing features (Fig. 23). This is analogous to nearest-neighbor determinations: drawing a line segment from the edge of one feature to the edge of an adjacent feature gives a midpoint that defines the limit of each feature's zone of influence. The result of a SKIZ operation often replicates what an arrangement of grain boundaries looks like. Additionally, it is possible to measure the resulting zone sizes to quantify spatial clustering or statistics on the overall separation between features.
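A minimal sketch of skeletonization and a SKIZ-style zone map using scikit-image and SciPy. Realizing the SKIZ as a watershed grown over the distance transform of the background is one common implementation, assumed here for illustration; it is not necessarily how the operation is built into any particular commercial system.

    from scipy import ndimage as ndi
    from skimage.morphology import skeletonize
    from skimage.segmentation import watershed

    def skeleton(binary):
        return skeletonize(binary)                        # thins features to one-pixel-wide lines

    def zones_of_influence(binary):
        markers, _ = ndi.label(binary)                    # one label per feature
        distance = ndi.distance_transform_edt(~binary)    # distance from the nearest feature
        return watershed(distance, markers)               # labels grow outward until zones meet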

Fig. 21 Particle with elongated features showing the effect of using a number of octagonal-opening (erosion followed by dilation) iterations

Fig. 22 Use of a vertical shape closing in a binary image. (a) Reflected-light image of a coating having a tortuous interface. (b) Binary image of coating. (c) Binary image after hole filling and removal of small unconnected features. (d) Equally spaced vertical lines overlaid on the binary image. (e) Result after a Boolean AND of the lines and the binary image. (f) Image after vertical closing of 35 cycles, which closes off overlapping features of the interface for measuring roughness. (g) Binary image showing the lines before (dark gray) and the line segments filled in after closing (black). (h) Vertical lines overlaid on the original lightened gray-level image


Occasionally, unconnected boundaries remain after the skeletonization operation; these can be removed by using a pruning algorithm that eliminates skeleton branches terminating in free endpoints. The convex-hull operation can be used to fill concavities and smooth very jagged skeletons or feature peripheries. Basically, a convex-hull operation selectively dilates concave feature edges until they become convex (Fig. 24).

Further Considerations

The binary operations described in this chapter are only a partial list of the most frequently used operations and can be combined in useful ways to produce an image that lends itself to straightforward quantification of features of interest. Today, image analysis systems incorporate many processing tools to perform automated, or at least rapid, feature analysis. Creativity is the final tool that must be used to take full advantage of the power of image analysis. The user must determine whether the time spent developing a set of processing steps to achieve computerized analysis is justified for the application.

Fig. 23 Effects of the skeleton by influence zones (SKIZ) process. (a) Scanning electron microscope image of a superalloy. (b) Binary image of gamma prime particles overlaid on the original gray-level image with light gray particles touching the image boundary. (c) After application of SKIZ showing zones of influence. (d) Zones with original binary particles overlaid


Fig. 24 Images showing the use of various binary operations on a grain structure. (a) Original bright-field image of grain structure. (b) Binary image after removal of small disconnected features. (c) Binary image after skeletonization with many short arms extending from grain boundaries. (d) Binary image after pruning. (e) Binary image after pruning and after 3 iterations of convex hull to smooth boundaries. (f) Image showing grain structure after skeletonization of the convex-hulled image

For example, if you have a complicated image that has minimal contrast but somewhat obvious features to the human eye, and only a couple of images to quantify, then manual measurements or tracing of the features might be adequate. However, the benefit of automated image analysis is that sometimes-subtle feature characterizations can yield answers that the user might never have guessed based on cursory inspection of the microstructure.

References

1. E. Pirard, V. Lebrun, and J.-F. Nivart, Optimal Acquisition of Video Images in Reflected Light Microscopy, Microsc. Anal., Issue 37, 1999, p 19–21
2. J.C. Russ, The Image Processing Handbook, 2nd ed., CRC Press, 1994
3. L. Wojnar, Image Analysis, Applications in Materials Engineering, CRC Press, 1998
4. "Standard Practice for Assessing the Degree of Banding or Orientation of Microstructures," E 1268-94, Annual Book of ASTM Standards, ASTM, 1999


CHAPTER 5

Measurements

John J. Friel, Princeton Gamma Tech

THE ESSENCE of image analysis is making a measurement or series of measurements that quantify some aspect of the image of a microstructure. A microstructure often is the link between process and properties in materials science, and the extent to which the aspects of the microstructure can be quantified establishes the strength of the link. Image analysis measurements can be made manually, automatically, or even by comparing a series of images that already have been measured, as in comparison chart methods. Manual measurement usually involves counting features, such as points, intersections, and intercepts, but it also involves measuring length. Area and volume, on the other hand, usually are derived values. Generally, reference to the term image analysis is synonymous with automatic image analysis (AIA), in which a computer makes measurements on a digital image. In this case, measurements of size, such as area, longest dimension, and diameter, for example, are direct measurements. Other measurements can be derived from these “primitive” measures, including measures of shape, such as circularity, aspect ratio, and area equivalent diameter. All of these measurements are easily calculated by the computer for every feature in the image(s).
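The derived measures mentioned here follow directly from the primitive ones. The sketch below gives three common definitions; exact formulas vary somewhat among image analysis systems, so these are representative rather than definitive.

    import math

    def area_equivalent_diameter(area):
        # diameter of the circle having the same area as the feature
        return math.sqrt(4.0 * area / math.pi)

    def circularity(area, perimeter):
        # equals 1.0 for a perfect circle and decreases for irregular outlines
        return 4.0 * math.pi * area / perimeter ** 2

    def aspect_ratio(length, breadth):
        return length / breadth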

Contrast Mechanisms

The principles of how to acquire, segment, and calibrate images have been discussed in Chapter 4. However, one concept that must be considered before making measurements is the choice of signal and contrast mechanism. The contrast mechanism selected carries the information to be quantified, but it is the signal used for acquisition that actually carries one or more contrast mechanisms. It is useful, therefore, to distinguish between the contrast-bearing signals suitable for digitization and the contrast mechanism itself that will be used to enhance and quantify.


Table 1 Contrast mechanisms with associated imaging signals

Signal: Contrast mechanism(s)
Reflected light: Topographic, crystallographic, composition, true color, interference colors
Transmitted light: True color, interference colors and figures, biological/petrographic structure
Secondary electrons: Topographic, voltage, magnetic types 1 and 3
Backscattered electrons: Atomic number, topographic (trajectory), crystallographic (electron channeling patterns, electron backscattered patterns), magnetic type 2, biological structure (stained)
X-rays: Composition
Absorbed electrons: Atomic number, charge (EBIC), crystallographic, magnetic type 2
Transmitted electrons: Mass thickness, crystallographic (electron diffraction)
Cathodoluminescence: Composition, electron state

EBIC, electron beam induced current

In routine metallography, bright-field reflected-light microscopy is the usual signal, but it may carry many varied contrast types depending on the specimen, its preparation, and etching. The mode of operation of the microscope also affects the selection of contrast mechanisms. A list of some signals and contrast mechanisms is given in Table 1.

Direct Measurements

Field Measurements

Field measurements usually are collected over a specified number of fields, determined either by statistical considerations of precision or by compliance with a standard procedure. Standard procedures, or norms, are published by national and international standards organizations to conform to agreed-upon levels of precision. Field measurements also are the output of comparison chart methods. Statistical measures of precision, such as the 95% confidence interval (CI) or percent relative accuracy (%RA), are determined on a field-to-field basis rather than among individual features.
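These field-to-field statistics are easy to compute once the per-field values are in hand. The short Python sketch below is not from this book; the field values are hypothetical, and the Student's t value is the tabulated two-sided value for nine degrees of freedom. It computes the 95% CI of the mean and expresses it as a percentage of the mean (a relative-accuracy figure):

import math

# Hypothetical area-fraction values (%) measured on ten fields of one specimen
fields = [9.8, 10.4, 9.5, 11.0, 10.1, 9.9, 10.7, 10.3, 9.6, 10.2]

n = len(fields)
mean = sum(fields) / n
std_dev = math.sqrt(sum((x - mean) ** 2 for x in fields) / (n - 1))

# 95% confidence interval of the mean; 2.262 is the two-sided Student's t
# value for n - 1 = 9 degrees of freedom
ci_95 = 2.262 * std_dev / math.sqrt(n)

# Relative accuracy: the CI expressed as a percentage of the mean
pct_ra = 100.0 * ci_95 / mean

print(f"mean = {mean:.2f}%, 95% CI = +/-{ci_95:.2f}%, %RA = {pct_ra:.1f}%")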


Some standard procedures require a "worst-field" report, and often are influenced by agreements between producer and purchaser. For example, the purchaser may specify that the amount of a certain category of features, such as nonmetallic inclusions in steel, cannot exceed a specified limit. While such an approach does not reduce the number of fields to be measured, it does reduce the amount of information that needs to be reported. Reports for every field and feature are easily generated using automatic image analysis, but the amount of useful information may be no greater than that contained in the worst-field report.

When making any measurements on a field of features, it always is assumed that the image has been thresholded properly. In general, precision is limited by the number of pixels available to represent the contrast. Any error in segmenting the image to reveal and define the features will result in a bias in the analysis. Automatic image analysis usually results in a more precise and reproducible analysis because so many more features can be counted. However, there are times when manual image analysis is less biased because human perception can better identify the features. This effect is particularly true when features cannot be uniquely thresholded but are easily perceived. An example of this situation is the measurement and quantification of lamellar pearlite in steel if the magnification and contrast are sufficiently high. While the computer would count each lamella as a feature, a human observer would just perceive the pearlite phase.

Before starting to make measurements, the analyst must decide the number of fields to measure and at what magnification. This step assumes adequate sampling of the material. This is an area in which standard procedures can be helpful. Such standards may be published by standards-writing organizations or they may be in-house procedures developed to make meaningful and reproducible measurements on a particular material.

Count. One of the easiest measurements for a computer to perform on a field is to count the features. This may not be the easiest manual measurement, however, especially if the number of features is large. A similar measurement of great usefulness is a count of the number of intercepts. To make this measurement, the computer or the operator counts the number of times a series of selected test lines intercept the features. (A more detailed description of how to make this measurement is given below in the section "Intercept Measurements.") Once the number of intercepts is counted and the total length of the test line is known, the stereological parameter of lineal density, NL, can be calculated, and from it are derived many other stereological parameters. Some of these are discussed more fully in the "Field Measurements" section under "Derived Measurements." A list of some primitive field measurements follows:


- Field number
- Field area
- Number of features
- Number of features excluded
- Area of features
- Area of features filled
- Area fraction
- Number of intercepts
- NL (lineal density)
- NA (number of features divided by the total area of the field)

The length of the test line in a manual analysis is measured on a template superimposed on a photomicrograph or by use of a reticule in the microscope, each corrected for magnification. An AIA system uses the total length of the scan.

Area fraction is another easy measurement for an AIA system. For this measurement, the computer simply counts the number of pixels ascribed to a certain type of feature (phase) and divides by the total number of pixels in the image. This operation is most easily understood by visualizing the gray-level histogram. If the signal and contrast mechanism are suitably selected, the peaks in the histogram correspond to phases in the microstructure. With thresholds customarily set in pseudocolor, the area fraction of any phase is merely the sum of all pixels within a selected region of the histogram divided by the sum of pixels in the entire histogram. The example shown in Fig. 1(a) consists of a gray-scale image of a multiphase ceramic varistor acquired using the backscattered electron signal in a scanning electron microscope (SEM). The image was thresholded in pseudocolor as shown in Fig. 1(b). Pseudocolor means false color, and the term is used to distinguish it from the actual or "true color" that is seen in the microscope or color photomicrograph. Pseudocolor customarily is used to depict those intensity levels of the microstructure that have been assigned by the thresholding operation to a particular component (phase) within the specimen. After thresholds have been set on the image, measurements can be made on each segment separately. Figure 2 shows the histogram that the computer used to automatically set the thresholds, and the resulting area fractions of the phases are:

Low     High    Area, %
0       106     10.22
107     161     80.05
162     255     9.73
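As a sketch of how such area fractions follow directly from the threshold limits, the following Python fragment (NumPy assumed; the image here is random synthetic data standing in for a digitized gray-scale micrograph, and the three gray-level ranges are those of the example above) counts the pixels falling in each range:

import numpy as np

# Synthetic stand-in for an 8-bit gray-scale image (e.g., a backscattered
# electron image of a multiphase ceramic)
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(512, 512), dtype=np.uint8)

# Gray-level ranges (low, high) assigned to each phase by thresholding
phases = {"phase 1": (0, 106), "phase 2": (107, 161), "phase 3": (162, 255)}

for name, (low, high) in phases.items():
    in_range = (image >= low) & (image <= high)
    area_pct = 100.0 * np.count_nonzero(in_range) / image.size
    print(f"{name}: {area_pct:.2f}% of the field")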

Manually, area fraction usually is measured by point counting. The method used and precision that can be expected are described in ASTM E 562 (Ref 1). From a stereological standpoint, the area fraction, AA, is equivalent to the volume fraction, VV, for a field of features that do not have preferred orientation. The average area of features can be calculated from feature-specific measurements, but it also is possible to derive it from field measurements as follows:

\bar{A} = A_A / N_A    (Eq 1)

where \bar{A} is the average area of features over many fields, AA is the average area fraction, and NA is the average number of features per unit area. The advantage to determining average area in this manner compared with measuring it feature by feature is merely that it is easier to make just two measurements per field.

Fig. 1  Multiphase ceramic material in (a) gray scale and (b) pseudocolor

Fig. 2  Histogram of the image in Fig. 1


Intercept Measurements. The difference between the concepts of intercepts and intersections must be clearly understood to evaluate the requirements of a specified procedure. When either a manual or computer-generated test line encounters a feature, the boundary between background and feature edge is considered an intersection. As the test line continues, that part of the line superimposed upon the feature constitutes an intercept, and the step from the feature back to the background is another intersection. Therefore, there are two intersections per intercept, except in the case of space-filling features, where the number of intersections equals the number of intercepts. Figure 3 shows a microstructure consisting of space-filling grains with three test lines to illustrate counting intercepts and intersections. The lines are horizontal, which is appropriate for a random microstructure. If the features in the image have a preferred orientation, then the test lines should be random or aligned to measure a specific type of feature. The number of grains intercepted by each line totals 21, as does the number of intersections. The total length of the test lines, L, in this case equals 1290 µm, and the lineal density, NL (the number of intercepts of features divided by the total test-line length), equals 0.016. Actual measurements would be performed using more lines on more fields.

It also is possible to do more than count intercepts or intersections. The length of the test line on each feature can be measured in calibrated units of the microstructure. If the total length of the line segments (linear intercept lengths, Li) passing through features is known, it is possible to calculate the length fraction, LL, by dividing that total by the total length of the test lines. Moreover, it is possible to calculate the mean lineal intercept from the field measurement, NL, by the expression:

\bar{L}_3 = L_L / N_L    (Eq 2)

The mean lineal intercept usually is noted by the symbol \bar{L}_3 to indicate that it is an estimate of the intercept in three dimensions. Figure 4 shows a binary image of porosity in a ceramic with a test line drawn to illustrate an LL measurement. The total length of the test line is 100 µm, and the total length of the white portions, Li, is 42 µm; therefore, LL = 0.42. Using an image analysis system to measure the same image yields similar results; a computed value of NL = 0.036 is obtained by counting the intercepts of raster lines with features. From Eq 2, \bar{L}_3 = 11.67 µm. Computer measurement of the mean intercept length on each feature directly produces a mean lineal intercept value of 10.22 µm. The difference is attributable to the use of only one test line to illustrate the LL measurement.
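A minimal sketch of the same bookkeeping on a digital image is given below (Python with NumPy assumed; the striped test image and the calibration are hypothetical). It counts intercepts along every horizontal raster line, then forms NL, LL, and the mean lineal intercept of Eq 2:

import numpy as np

def lineal_analysis(binary, microns_per_pixel):
    # binary: 2-D array with features = 1, background = 0
    b = np.asarray(binary, dtype=np.uint8)
    # an intercept begins wherever a row steps from 0 to 1, or starts on a feature
    intercepts = (np.diff(b, axis=1) == 1).sum() + b[:, 0].sum()
    line_length = b.size * microns_per_pixel          # total raster length
    n_l = intercepts / line_length                    # intercepts per micron
    l_l = b.mean()                                    # length fraction along the lines
    l3 = l_l / n_l if n_l > 0 else float("nan")       # Eq 2
    return n_l, l_l, l3

# Hypothetical binary image: 10-pixel-wide vertical stripes every 25 pixels
img = np.zeros((100, 100), dtype=np.uint8)
for start in range(0, 100, 25):
    img[:, start:start + 10] = 1

n_l, l_l, l3 = lineal_analysis(img, microns_per_pixel=1.0)
print(f"N_L = {n_l:.3f}/um, L_L = {l_l:.2f}, mean lineal intercept = {l3:.1f} um")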


A more detailed review of intercept measurements and many other measurements can be found in Ref 2.

Duplex Feature Size. Special consideration is needed in dealing with a microstructure consisting of features distinguished only by their size or shape, where all features are of the same material. In this case, features cannot be discriminated on the basis of gray level. In such an instance, the computer finds and measures all the features after binarization of the image and selects only those that meet the selected criteria. The analysis may have to be performed several times to report data on each type of feature separately. The criterion needed to distinguish features can be straightforward, as in distinguishing fibers from pores, or the dividing line can be more subjective. For instance, the analyst might wonder what value to use to distinguish coarse from fine particles. A histogram describing the distribution of some size parameter, such as area or diameter, can be helpful. However, even better than a "number" histogram is one that is weighted by the selected parameter.

Fig. 3  Test lines used to measure intercepts on microstructure image

Fig. 4  Image showing length measurements


For example, large features are not given any more weight than small ones in a number histogram of area, but in an area-weighted histogram, the dividing line between coarse and fine is more easily observed. Using more than one operator to agree on the selection point improves the precision of the analysis by reducing the variance, but any inherent bias still remains. Determination of duplex grain size, described in the section "Grain Size," is an example of this situation.

Feature orientation in an image constitutes a field measurement, even though it could be determined by measuring the orientation of each feature and calculating the mean for the field. This is easily done using a computer, but there is a risk that there might not be enough pixels to sufficiently define the shape of small features. For this reason, orientation measurements are less precise. Moreover, if all features are weighted equally regardless of size, small ill-defined features will add significant error to the results, and the measurement may not be truly representative. Because orientation of features relates to material properties, measurements of many fields taken from different samples are more representative of the material than measurements summed from individual features. This situation agrees nicely with the metallographic principle, do more less well; that is, measurements taken from more samples and more fields give a better representative result than a lot of measurements on one field. A count of intercepts, NL, made in two or more directions on the specimen can be used either manually or automatically to derive a measure of preferred orientation. The directions, for example, might be perpendicular and parallel to the rolling direction in a wrought metal. The term orientation as used here refers to an alignment of features recognizable in a microscope or micrograph. It does not refer to crystallographic orientation, as might be ascertained using diffraction methods. ASTM E 1268 (Ref 3) describes a procedure for measuring and reporting banding in metals. The procedure calls for measuring NL perpendicular and parallel to the observed banding and calculating an anisotropy index, AI, or a degree of orientation, Ω12, as follows (Ref 4):

AI = N_{L\perp} / N_{L\parallel}    (Eq 3)

and

\Omega_{12} = \frac{N_{L\perp} - N_{L\parallel}}{N_{L\perp} + 0.571\,N_{L\parallel}}    (Eq 4)
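A sketch of the same calculation on a binary image is shown below (Python with NumPy assumed; the synthetic banded image is hypothetical). NL is counted with test lines perpendicular and parallel to the banding, and Eq 3 and 4 follow directly:

import numpy as np

def n_l(binary, direction):
    # intercepts per unit test-line length (in pixels); test lines run
    # "horizontal" (along rows) or "vertical" (along columns)
    b = np.asarray(binary, dtype=np.uint8)
    if direction == "vertical":
        b = b.T
    intercepts = (np.diff(b, axis=1) == 1).sum() + b[:, 0].sum()
    return intercepts / b.size

# Hypothetical banded structure: thin horizontal bands every 20 rows
img = np.zeros((200, 200), dtype=np.uint8)
img[::20, :] = 1

n_perp = n_l(img, "vertical")     # test lines perpendicular to the banding
n_par = n_l(img, "horizontal")    # test lines parallel to the banding

ai = n_perp / n_par                                      # Eq 3
omega_12 = (n_perp - n_par) / (n_perp + 0.571 * n_par)   # Eq 4
print(f"AI = {ai:.1f}, degree of orientation = {omega_12:.2f}")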

Mean Free Path. Microstructural features have a tendency to cluster, which is an aspect of a microstructure that is particularly difficult to quantify.


While possible approaches to this problem are discussed in the section "Derived Measurements," considered here is an easily measured field descriptor, the mean free path, λ, which can be calculated from NL and yields a single value per field:

\lambda = \frac{1 - A_A}{N_L}    (Eq 5)

In the case of space-filling grains, the area fraction equals one and, therefore, the mean free path is zero. However, for features that do not occupy 100% of the image, mean free path gives a measure of the distance between features on a field-by-field basis.

Surface Area. There is at least one way to approximate the surface area from two images using stereoscopy. If images are acquired from two different points of view, and the angle between them is known, the height at any point, Z, can be calculated on the basis of displacement from the optic axis as follows:

Z = \frac{P}{2M\,\sin(\alpha/2)}    (Eq 6)

where M is magnification, P is parallax distance, and α is the parallax angle. The derivation of this relationship can be found in Goldstein et al. (Ref 5) and other texts on scanning electron microscopy (SEM). The technique is particularly well suited to imaging using SEM because the stage is easily tilted to provide the parallax angle, and the depth of focus is so large. The height of any point can be calculated from manual measurements on photomicrographs, but automatic image analysis (AIA) makes it possible to compute an entire matrix of points over the surface. Assuming that corresponding points can be identified in each pair of images, their displacement in the direction perpendicular to the tilt axis can be measured, and their height above a reference surface can be calculated using Eq 6. The reference surface is the locus of points representing zero displacement between the left and the right images. Because the absolute magnitude of the reference height can be altered by shifting one entire image relative to the other, it is possible to define the surface of interest in relation to an arbitrarily specified surface. With a matrix of coordinates in x, y, and z, it then is possible to calculate the area of each of the finite number of planar rectangles or triangles defined by the points. The sum of these planes approximates the true surface area. For a description of this and other methods for measuring surface roughness as applied to fracture surfaces, see Underwood and Banerji (Ref 6). If the scale of the measurement can be varied, by magnification, for example, then the slope of the logarithm of measured surface area plotted against the logarithm of the scale yields the fractal dimension of the surface.
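A sketch of the height calculation is shown below (Python with NumPy assumed; the matched point coordinates, magnification, and tilt angle are hypothetical). Given the parallax of each matched point pair, Eq 6 gives its height above the reference surface; applied to a whole matrix of matched points, the same arithmetic yields the z-coordinates used to sum the planar facets:

import numpy as np

def parallax_height(x_left, x_right, magnification, tilt_deg):
    # Eq 6: Z = P / (2 M sin(alpha / 2)), with parallax P = x_left - x_right
    p = np.asarray(x_left, dtype=float) - np.asarray(x_right, dtype=float)
    alpha = np.radians(tilt_deg)
    return p / (2.0 * magnification * np.sin(alpha / 2.0))

# Hypothetical x positions (mm, measured on the micrographs) of the same
# points in the left and right members of a stereo pair
x_left = np.array([12.40, 15.12, 18.90])
x_right = np.array([12.10, 15.00, 18.95])

z = parallax_height(x_left, x_right, magnification=1000, tilt_deg=8.0)
print(z)   # heights at the specimen scale (mm here, since P is in mm)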


Fractal dimension is discussed further in the section "Derived Measurements." Direct measurement of surface area using methods such as profilometry or the scanning probe microscopy techniques of scanning tunneling microscopy (STM) and atomic force microscopy (AFM) will not be considered here because they are not image analysis techniques.

Feature-Specific Measurements

Feature-specific measurements logically imply the use of an AIA system. In the past, so-called semiautomatic systems were used in which the operator traced the outline of features on a digitizing tablet. This type of analysis is time consuming and is only useful for measuring a limited number of features. However, it does have the advantage of requiring the operator to confirm the entry of each feature into the data set. Although specimen preparation and image collection have been discussed previously, it should be emphasized again that automatic image analysis is meaningful only when the image(s) accurately reflect(s) the properties of the features to be measured. The feature-finding program ordinarily detects features using pseudocolor. As features are found, their position, area, and pixel intensity are recorded and stored. Other primitive measures of features can be made by determining Feret diameters (directed diameters, or DD) at some discrete number of angles. These include measures of length, such as longest dimension, breadth, and diameter. Theoretically, shape determination becomes more accurate with increasing use of Feret diameters. However, in practice, resolution, threshold setting, and image processing are more likely to be limiting factors than is the number of Feret diameters. A list of some feature-specific primitive measurements follows:

- Position x and y
- Area
- Area filled
- Directed diameters (including maximum, minimum, and average)
- Perimeter
- Inscribed x and y (including maximum and minimum)
- Tangent count
- Intercept count
- Hole count
- Feature number
- Feature angle

The area of a feature is one of the easiest measurements for a computer to make because it is merely the sum of the pixels selected by the threshold setting.


For any given pixel resolution, the smaller the feature, the less precise is its area measurement. This problem is even greater for shape measurements, as described in the section "Derived Measurements." If a microstructure contains features of significantly different sizes, it may be necessary to perform the analysis at two different magnifications. However, there is a magnification effect in which more features are detected at higher magnification, which may cause bias. Underwood (Ref 7) states in a discussion of the magnification effect that the investigator "sees" more at higher magnifications. Thus, more grains or particles are counted at higher magnification, so values of NA are greater. The same is true for lamellae, but spacings become smaller as more lamellae are counted. Other factors that can influence area measurement include threshold setting, which can affect the precision of area measurement, and specimen preparation and image processing, which can affect both the precision and bias of area measurement.

Length. Feature-specific descriptor functions such as maximum, minimum, and average are readily available with a computer, and are used to define longest dimension (max DD), breadth (min DD), and average diameter. Average diameter as used here refers to the average directed diameter of each feature, rather than the average over all of the features. Length measurements of individual features are not readily accommodated using manual methods, but they can be done. For example, the mean lineal intercept distance can be determined by averaging the chord lengths measured on each feature. As with area measures, the precision of length measurements is limited by pixel resolution, the number of directed diameters constructed by the computer, and threshold setting. Microstructures containing large and small features may have to be analyzed at two different magnifications, as with area measurements. Bias in area and length measurements is influenced by threshold setting and microscope and image analyzer calibration. Calibration should be performed at a magnification as close to that used for the analysis as possible, and, for SEMs, x and y should be calibrated separately (Ref 8).

Perimeter. Measurement of perimeter length requires special consideration because representation of the outer edge of features in a digital image consists of steps between adjacent pixels, which either are square or some other polygonal shape. The greater the pixel resolution, the closer will be the approximation to the true length of a curving perimeter. Because the computer knows the coordinates of every pixel, an approximation of the perimeter can be made by calculating the length of the diagonal line between the centers of each of the outer pixels and summing them. However, this approach typically still underestimates the true perimeter of most features. Therefore, AIA systems often use various adjustments to the diagonal distance to minimize bias.


Along with the measured approximation of the true perimeter, a convex perimeter can be constructed from the Feret diameters. These directed diameters form a polygon with sides touching the feature because they are constructed at some number of angles around the feature. The number of sides of the polygon depends on the number of directed diameters constructed by the software. From the polygon, a convex perimeter, or taut-string perimeter, approximates the perimeter that would be formed if a rubber band were stretched around the feature. The perimeter of a nearly circular feature can be computed from its diameter, a descriptor that is easier to measure precisely. Another more complicated approach to perimeter measurement is the Crofton perimeter (Ref 9). In this method, the derivation of which is beyond the scope of this chapter, the length, L, of a curved line, such as the perimeter of an irregular feature, is estimated from the number of intersections it makes with a set of straight lines of known spacing, given by the expression:

L = \frac{1}{2}\cdot\frac{\pi}{4}\,n\,r    (Eq 7)

In the above equation, L is the length of a curved line such as the perimeter of an object, n is the total number of intersections with a series of parallel test lines oriented at 45° to one another, and r is the spacing of the lines in calibrated units. The parallel lines at 45° and 135° have different spacings depending on whether they are drawn manually so they are spaced equally with the 0° and 90° lines, or whether the computer uses pixel diagonals. When an image analysis system uses pixel diagonals, the 45° and 135° line spacing must be corrected by a factor of 1/√2, as in the expression:

L = \frac{1}{2}\cdot\frac{\pi}{4}\left[r\,(n_0 + n_{90}) + \frac{r\,(n_{45} + n_{135})}{\sqrt{2}}\right]    (Eq 8)

Figure 5 shows two grids having the same arbitrary curve of unknown length superimposed. The grid lines in Fig. 5(a) are equally spaced; therefore, those at 45° and 135° do not necessarily coincide with the diagonals of the squares. In Fig. 5(b), the squares outlined by black lines represent pixels in a digital image, and the 45° and 135° lines in blue are constructed along the pixel diagonals. Equation 7 applies to the intersection count from Fig. 5(a), and Eq 8 applies to Fig. 5(b). For example, in Fig. 5(a), the number of intersections of the red curve with the grid equals 56; therefore, L = 22.0 in accordance with Eq 7. By comparison, in Fig. 5(b), n⊥ = 31 and n× = 36, where n⊥ refers to the number of intersections with the black square gridlines, and n× refers to intersections with the blue diagonal lines. Therefore, according to Eq 8, L = 22.2 compared with a measured length of 21.9, all in units of the grid.
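The arithmetic of Eq 7 and 8 is simple enough to check directly; the Python sketch below (not from the book) reproduces the numbers of the Fig. 5 example:

import math

def crofton_eq7(n_total, spacing):
    # Eq 7: L = (1/2)(pi/4) n r, n = total intersections with equally
    # spaced test lines at 0, 45, 90, and 135 degrees
    return 0.5 * (math.pi / 4.0) * n_total * spacing

def crofton_eq8(n_rect, n_diag, spacing):
    # Eq 8: pixel-diagonal case; the 45/135-degree counts are weighted by
    # their smaller spacing, r / sqrt(2)
    return 0.5 * (math.pi / 4.0) * (n_rect * spacing + n_diag * spacing / math.sqrt(2.0))

print(round(crofton_eq7(56, 1.0), 1))       # 22.0, as in Fig. 5(a)
print(round(crofton_eq8(31, 36, 1.0), 1))   # 22.2, as in Fig. 5(b)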


Position. While the computer knows the position of every pixel, the position of features needs to be defined. The centroid of a feature is an obvious choice for position, but there are shapes in which the centroid lies outside of the feature, such as "crescent moon" shapes and some shapes having reentrant angles. Different image analysis systems approach the problem differently, but one way to check how a particular analyzer defines the position is to have the computer put a mark or label on the feature.

Fig. 5  Grids used to measure Crofton perimeter. (a) Equally spaced grid lines. (b) Rectilinear lines and diagonal lines


Regardless of the way position is defined, it is a measurement best suited to computer-based methods. In this discussion, position refers to absolute position in image coordinates. However, there are situations in which position relative to other features is important, such as nearest-neighbor relationships and clustering of features, which are described in the section "Derived Measurements." Relative position concepts are most easily reported as field measurements. However, if nearest neighbors can be measured directly either manually or automatically, a distribution of measurements can be reported. An example of a problem that requires relative position information is distinguishing locked from liberated minerals in the mining industry. When the desired feature (ore mineral) is surrounded completely by minerals having no economic value, it is said to be locked. In this problem, it would be useful to measure the extent of contact between the minerals of interest and each of the surrounding minerals. There are other industrial problems that would benefit from data on relative position, but the measurement is difficult to make automatically without sufficient "intelligence" in a program specifically designed for this purpose.

Intensity. Although gray-level intensities are used in many image processing operations, the intensity at each pixel most often is used in image analysis only to threshold the image. Once the image is segmented into planes or pseudocolors, the original intensity information can be lost. A more complicated situation is that of a true color image. Here the information contained in the separate red, green, and blue intensities is necessary to segment the image based on shades of color (see Chapter 9, "Color-Image Processing"). Images constructed from spectroscopic techniques, such as x-ray mapping and microinfrared or Raman spectroscopy, also use intensity information. The intensity is necessary to display those regions of the image corresponding to some chemical information and for the computer to correlate or discriminate regions based on spectroscopic data. For example, Fig. 6 shows precipitated carbides in a steel after heat treatment for several months. The upper two images are x-ray maps of iron and silicon, which show areas in which iron seems to be depleted and regions in which silicon seems to be enriched, corresponding to carbide phases. The computer can be instructed to find areas rich in carbon from the intensity in a carbon map (not shown), and then the intensity of other elements can be used to interpret and compare composition among carbide grains. The lower two images, constructed by the computer to show iron and silicon intensity in just the carbides, show that there are two distinct phases, iron carbide (Fe3C) and silicon carbide (SiC). Other maps of manganese and chromium showed that these elements substituted for iron in the metal carbide, or M3C, while silicon formed a separate phase.


Fig. 6  X-ray intensity maps of carbides in steel

Derived Measurements

Field Measurements

Stereological Parameters. Stereology is a body of knowledge for characterizing three-dimensional features from their two-dimensional representations in planar sections. A detailed review of stereological relationships can be found in Chapter 2, "Introduction to Stereological Principles," and in Ref 4 and 10. The notation uses subscripts to denote a ratio. For example, NA refers to the number of features divided by the total area of the field. A feature could be a phase in a microstructure, a particle in a dispersion, or any other identifiable part of an image. Volume fraction, VV, is a quantity derived from the measured area fraction, AA, although in this case, the relationship is one of identity, VV = AA. There are various stereological parameters that are not directly measured but that correlate well with material properties. For example, the volume fraction of a dispersed phase may correlate with mechanical properties, and the length per area, LA, of grain boundaries exposed to a corrosive medium may correlate with corrosion resistance.


The easiest measurements to make are those that involve counting rather than measuring. For example, if a test grid of lines or points is used, the number of lines that intercept a feature of interest or the number of points that lie on the feature are counted and reported as NL or point count, PP. ASTM E 562 describes procedures for manual point counting and provides a table showing the expected precision depending on the number of points counted, the number of fields, and the volume fraction of the features (Ref 1). Automatic image analysis systems consider all pixels in the image, and it is left to the operator to tell the computer which pixels should be assigned to a particular phase by using pseudocolor. It also is easy to count the number of points of interest that intersect lines in a grid, PL. If the objects of interest are discrete features, such as particles, then the number of times the features intercept the test lines gives NL. For space-filling grains, PL = NL, and for particles, PL = 2NL. In an AIA system, the length of the test line is the entire raster; that is, the total length of lines comprising the image in calibrated units of the microstructure. Similarly, it is possible to count the number per area, NA, but this has the added difficulty of having to rigorously keep track of each feature or grain counted to avoid duplication. All of the parameters above that involve counting are directly measurable, and several other useful parameters can be derived from these measurements. For example, the surface area per volume, SV, can be calculated as follows:

S_V = 2 P_L    (Eq 9)

where SV refers to the total surface area of features divided by their volume, not the volume of the specimen. The length of lines per unit area of the section, LA, is defined as:

L_A = \frac{\pi}{2}\,P_L    (Eq 10)

Another useful relationship defines the average area of a particular phase counted over many fields as:

\bar{A} = \frac{A_A}{N_A}    (Eq 11)
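As a sketch of how these counting relationships chain together on one binary field, the Python fragment below (NumPy assumed; the random test image is hypothetical, and intercepts are counted along horizontal raster lines only) computes AA, NL, PL, and the derived quantities of Eq 5, 9, and 10:

import numpy as np

def field_stereology(binary, microns_per_pixel):
    b = np.asarray(binary, dtype=np.uint8)             # features = 1
    line_length = b.size * microns_per_pixel           # total raster length
    intercepts = (np.diff(b, axis=1) == 1).sum() + b[:, 0].sum()
    n_l = intercepts / line_length                     # intercepts per micron
    p_l = 2.0 * n_l                                    # intersections per micron (particles)
    a_a = b.mean()                                     # area fraction
    return {
        "A_A": a_a,
        "N_L": n_l,
        "S_V": 2.0 * p_l,                              # Eq 9
        "L_A": 0.5 * np.pi * p_l,                      # Eq 10
        "mean free path": (1.0 - a_a) / n_l if n_l > 0 else float("nan"),  # Eq 5
    }

# Hypothetical field: random speckle standing in for a thresholded dispersion
rng = np.random.default_rng(1)
img = (rng.random((256, 256)) < 0.15).astype(np.uint8)
print(field_stereology(img, microns_per_pixel=0.5))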

Today it is more common to use an AIA system to calculate the average area directly from the area measurement of each feature.

Grain Size. A field measurement that frequently correlates with material properties is grain size. The concept is based on the number of grains in a given area of the specimen, such as the number of grains per square inch at 100× magnification.


For more on grain size measurement, see Chapter 2, "Introduction to Stereological Principles;" Chapter 7, "Analysis and Interpretation;" Chapter 8, "Applications," and Ref 11. Realizing the importance of accurate grain size measurement, ASTM Committee E 4 on Metallography took on the task of standardizing grain size measurement. ASTM E 112 is the current standard for measuring grain size and calculating an ASTM G value (Ref 12). The relationship between G and the number of grains per square inch at a magnification of 100×, n, follows:

n = 2^{G-1}    (Eq 12)

However, G generally is calculated from various easily measured stereological parameters, such as NL and NA. ASTM E 1382 (Ref 13) describes the procedures for measuring G using automatic or semiautomatic image analysis, and gives two equations:

G = (6.643856\,\log_{10} N_L) - 3.288    (Eq 13)

where NL is in mm⁻¹, and:

G = (3.321928\,\log_{10} N_A) - 2.954    (Eq 14)

where NA is in mm⁻².
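The two relationships are one-line functions; the sketch below (plain Python, not taken from the standard itself; the NL value echoes the field report in Table 3, and the NA value is hypothetical) illustrates them:

import math

def g_from_nl(n_l_per_mm):
    # Eq 13, with N_L in intercepts per millimeter
    return 6.643856 * math.log10(n_l_per_mm) - 3.288

def g_from_na(n_a_per_mm2):
    # Eq 14, with N_A in grains per square millimeter
    return 3.321928 * math.log10(n_a_per_mm2) - 2.954

print(f"G from N_L = 63.7/mm:   {g_from_nl(63.7):.1f}")
print(f"G from N_A = 4200/mm^2: {g_from_na(4200.0):.1f}")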

The procedures prescribed in ASTM E 112 assume an approximately log-normal grain size distribution. There are other conditions in which grain size needs to be measured and reported differently, such as a situation in which a few large grains are present in a finer-grained matrix. This is reported as the largest grain observed in a sample, expressed as ALA (as large as) grain size. The procedure for making this measurement is described in ASTM E 930. Duplex grain size is an example of features distinguished by their size, shape, and position, discussed previously. ASTM E 1181 describes various duplex conditions, such as bimodal distributions, wide-range conditions, necklace conditions, and ALA. Figure 7 shows an image containing bimodal duplex grain size in an Inconel Alloy 718 (UNS N07718) nickel-base superalloy. Simply counting grains and measuring their average grain size (AGS) yields 1004 grains having an ASTM G value of 9.2. However, such an analysis completely mischaracterizes the sample because the grain distribution is bimodal. Figure 8 shows an area-weighted histogram of the microstructure in Fig. 7, which suggests a division in the distribution at an average diameter of approximately 50 µm (Ref 14). The number percent and area percent histograms are superimposed, and the area-weighted plot indicates the bimodal nature of the distribution.


The number percent of the coarse grains is only 2%, but the area percent is 32%. Repeating the analysis for grains with a diameter greater than 50 µm yields 22 grains having a G value of 4.9. The balance of the microstructure consists of 982 grains having a G value of 9.8. The report on grain size, as specified by ASTM E 1181 on duplex grain size, is given as: Duplex, Bimodal, 68% AGS ASTM No. 10, 32% AGS ASTM No. 5.
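A sketch of this kind of duplex check is given below (Python with NumPy assumed; the grain areas are synthetic and only mimic a bimodal population such as the one in Fig. 7). It reports the number percent and area percent of grains coarser than the chosen dividing diameter and builds the number-weighted and area-weighted histograms:

import numpy as np

# Synthetic per-grain areas (um^2): a fine population plus a few coarse grains
rng = np.random.default_rng(2)
areas = np.concatenate([
    rng.lognormal(mean=np.log(120.0), sigma=0.5, size=980),    # fine grains
    rng.lognormal(mean=np.log(4500.0), sigma=0.3, size=20),    # coarse grains
])

diam = np.sqrt(4.0 * areas / np.pi)   # area equivalent diameters, um
cut = 50.0                            # dividing diameter, um

coarse = diam > cut
number_pct = 100.0 * coarse.sum() / diam.size
area_pct = 100.0 * areas[coarse].sum() / areas.sum()
print(f"coarse grains: {number_pct:.1f}% by number, {area_pct:.1f}% by area")

# Number-weighted and area-weighted diameter histograms (the latter is the
# kind of plot that reveals the bimodal division)
bins = np.linspace(0.0, diam.max(), 20)
n_hist, _ = np.histogram(diam, bins=bins)
a_hist, _ = np.histogram(diam, bins=bins, weights=areas)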

Fig. 7  Binary image of duplex grain structure in Inconel 718 nickel-base superalloy

Fig. 8  Grain-size histograms of structure in Fig. 7. Source: Ref 14


The fractal dimension of a surface, such as a fracture surface, can be derived from measurements of microstructural features. Although fractal dimension is not a common measure of roughness, it can be calculated from measurements such as the length of a trace or the area of a surface. The use of fractal measurements in image analysis is described by Russ (Ref 15) and Underwood (Ref 16). The concept of fractals involves a change in some dimension as a function of scale. A profilometer provides a measure of roughness, but the scale is fixed by the size of the tip. A microscope capable of various magnifications provides a suitable way to change the scale. An SEM has a large range of magnification over which it can be operated, which makes it an ideal instrument for measuring length or area as a function of scale. Two linear measurements that can be made are the length of a vertical profile of a rough surface and the length of the outline of features in serial sections. These and other measurements for describing rough surfaces are extensively reviewed by Underwood and Banerji (Ref 6). If the surface area can be measured or estimated, using, for example, the stereoscopy discussed above, then the fractal dimension can be calculated. Such an analysis on a fracture surface is described by Friel and Pande (Ref 17). Figure 9 shows a pair of images of the alloy described in Ref 17 taken at two different tilt angles using an SEM. From stereopairs such as this, the "true" surface area can be calculated at different microscope magnifications spanning seven orders of magnitude in scale.

Fig. 9  Stereopair of a titanium alloy fracture surface

JOBNAME: PGIA−−spec 2 PAGE: 20 SESS: 54 OUTPUT: Thu Oct 26 14:55:36 2000 120 / Practical Guide to Image Analysis

such as this, the “true” surface area can be calculated at different microscope magnifications consisting of seven orders of magnitude in scale. Figure 10 shows a plot of log area versus log scale. The slope of the line fitted to this plot is equal to the fractal dimension, determined directly from the two-dimensional surface. Clustering. One of the more difficult aspects of a microstructure to quantitatively characterize is the degree to which specific features tend to cluster together. Pores and inclusions are two types of features that tend to cluster, often affecting materials properties. Although total porosity or inclusion content usually is not too difficult to measure, provided the specimen is prepared adequately, the spatial distribution of these features is much harder to quantify. One method to assess clustering uses the area of a cluster and the number of features in the cluster. This can be accomplished by dilating the features in the cluster until they fuse and then measuring the area of the agglomerate. Alternatively, a new image can be constructed in which intensity is based on the number of features clustered in regions of the original image. A particularly powerful approach to the problem makes use of Dirichlet tessellations. Dirichlet (Ref 18) actually did not use the term tessellation, which simply means mosaic. However, in this context, they are cells constructed by a computer by means of expanding regions outward from features until they meet those of their nearest neighbors. Figure 11 shows clustered, ordered, and random features with corresponding tessellation cells below each feature. Every point within each cell is closer to the feature that generated it than it is to any other feature—a point Dirichlet was first to prove mathematically. The area of the cells is roughly proportional to the nearest-neighbor distance. Once an image of cells is constructed for the three distribution types, as shown in Fig. 11, the entire capability of feature analysis is available to characterize the cells. For example, cell breadth is a measure of the first

Fig. 10  Fractal plot of fracture surface area versus scanning electron microscope magnification


Horizontal and vertical Feret diameters provide information about orientation. It is possible not only to make measurements on each field, but also on every cell in the field, and to report distributions. In the simplest case, the computer constructs a perpendicular bisector between the centroid of each feature and its neighbors. These lines are terminated when they meet another, and the result is an image consisting of polygons centered on each feature. Using this simple approach, it is conceivable that a large feature close to a small one would actually extend beyond its cell. That is, the cell centered on the large feature is smaller than the feature itself. This situation occurs because the midpoint of the centroid-to-centroid line may lie within the large particle. A more sophisticated approach suitable for image analysis is to grow the regions outward from the edges of the features, instead of their centroids, using a regions-of-influence (SKIZ) operator (see Chapter 4, "Principles of Image Analysis"). This approach requires a more powerful computer, but it yields more meaningful results. An example of tessellation cells constructed from features in a microstructure is shown in Fig. 12. The cells are constructed from the edges of the pores; therefore, the cell boundaries are not always straight lines. The small cells correspond to regions of the microstructure in which porosity is clustered.
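A sketch of the simple centroid-based construction is given below (Python with NumPy assumed; the centroid coordinates are hypothetical). Every pixel is assigned to its nearest feature centroid, and the resulting cell areas serve as a rough stand-in for nearest-neighbor spacing; an edge-based (SKIZ) construction, as described above, would grow from the feature outlines instead:

import numpy as np

def centroid_tessellation(shape, centroids):
    # label every pixel with the index of its nearest feature centroid
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    cy = centroids[:, 0][:, None, None]
    cx = centroids[:, 1][:, None, None]
    dist_sq = (yy[None, :, :] - cy) ** 2 + (xx[None, :, :] - cx) ** 2
    return dist_sq.argmin(axis=0)

# Hypothetical feature centroids as (row, column) positions
pts = np.array([[20, 20], [25, 90], [70, 40], [90, 95], [60, 75]])
cells = centroid_tessellation((128, 128), pts)

# Cell areas in pixels; small cells flag locally crowded features
cell_areas = np.bincount(cells.ravel())
print(cell_areas)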

Fig. 11  Images of clustered, ordered, and random features with their tessellated counterparts


Another measurement that can be calculated from an image of tessellations is local area fraction. This measure, described by Spitzig (Ref 19), and discussed in greater detail in Chapter 6, "Characterization of Particle Dispersion," is defined as the area of each feature divided by the area of its cell. The measurement is made on an image consisting of features of interest and their tessellation cells. To define this measure for the computer, the features are treated as holes within the cells, and local area fraction is computed as (area filled − area)/area filled, where area filled refers to the area of a cell with its feature (hole) filled in. Local area fraction typically is reported as a field measurement by averaging over all the cells within a field. For a review of various methods to quantify clustering, refer to Vander Voort (Ref 20), and also Chapter 6, "Characterization of Particle Dispersion," for a discussion on characterizing particle dispersions.
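A sketch of the per-cell bookkeeping is shown below (Python with NumPy assumed; the tiny two-cell example is hypothetical, and the feature pixels are assumed to lie inside the cells they generated). Each cell's local area fraction is simply its feature area divided by its cell area:

import numpy as np

def local_area_fraction(cells, features):
    # cells: integer cell label per pixel; features: boolean feature mask
    cells = np.asarray(cells)
    features = np.asarray(features, dtype=bool)
    n = cells.max() + 1
    cell_area = np.bincount(cells.ravel(), minlength=n)
    feature_area = np.bincount(cells[features].ravel(), minlength=n)
    return feature_area / cell_area

# Hypothetical example: two cells, one feature patch in each
cells = np.zeros((10, 10), dtype=int)
cells[:, 5:] = 1
features = np.zeros((10, 10), dtype=bool)
features[2:4, 1:3] = True    # 4 feature pixels in cell 0 (area 50)
features[5:8, 6:9] = True    # 9 feature pixels in cell 1 (area 50)

print(local_area_fraction(cells, features))   # [0.08, 0.18]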

Feature-Specific Derived Measurements

The number of feature-specific derived measurements is unlimited in the sense that it is possible to define any combination of primitive measurements to form a new feature descriptor. However, many such descriptors have become widely used and now are standard in computer-based image analyzer software. Table 2 lists some common feature-specific descriptors.

Shape. Feature-specific derived measurements are most useful as measures of shape. Shape is more easily measured on each feature than inferred from field measurements. A simple, common derived shape measurement is aspect ratio. To the computer, this is the ratio of the maximum Feret diameter to the Feret diameter perpendicular to it, sometimes called width.

Fig. 12  Tessellation cells constructed from features in a microstructure. (a) Scanning electron microscope photomicrograph of porosity in TiO2. (b) Tessellation cells constructed based on pores


Other common feature descriptors, such as circularity and form factor, purport to characterize the similarity of a feature to a circle. However, different descriptors are sensitive to different primitive measurements. For example, while both circularity and form factor quantify the similarity in outline of a feature to a circle, circularity is most sensitive to the aspect ratio, while form factor is sensitive to variations in perimeter curvature. Roughness is another such measurement that is even more perimeter-sensitive than is form factor.

Combined Selection Criteria. It often is useful to combine two or more descriptors when imposing selection criteria to determine which features should be included in the analysis. In other words, if more than one population exists in the image, each must be distinguished and analyzed separately. For example, it might be necessary to analyze small-area circular features, such as pores, or large-area, high-aspect-ratio features, such as fibers. Generally, selection criteria such as these can be applied separately or in combination (related by Boolean logic) in most AIA systems. Although the computer may allow for combining numerous criteria, experience shows that two selection criteria usually are sufficient to describe the vast majority of useful features. It may be necessary to use statistical methods to distinguish the populations if appropriate selection criteria are not obvious. One such method is stepwise regression, in which variables (size or shape parameters) are input into the regression one at a time rather than regressing all at the same time. An advantage of this technique is that the investigator can observe the effect of each variable and, being familiar with the sample, can select the most significant ones. A review of methods for distinguishing populations can be found in Ref 21.

Table 2  Common feature-specific descriptors

Descriptor                          Definition
Area (A)                            Pixel count
Perimeter (P)                       Diagonal pixel center to center
Longest dimension (L)               Directed diameter, maximum
Breadth (B)                         Directed diameter, minimum
Average diameter (D)                Directed diameter, average
Aspect ratio (AR)                   Maximum directed diameter / perpendicular directed diameter
Area equivalent diameter (AD)       √(4A/π) (diameter, if circular)
Form factor (FF)                    4πA/P² (perimeter-sensitive, always ≤1)
Circularity                         πL²/4A (longest-dimension-sensitive, always ≥1)
Mean intercept length               A / projected length (x or y)
Roughness                           P/πD (πD = perimeter of circle)
Volume of a sphere                  0.75225 × √(A³) (sphere rotated about D)
Volume of a prolate spheroid        8/3π × A²/L (ellipse rotated about L)
Fiber length                        0.25 × (P + √(P² − 16A))
Fiber width                         0.25 × (P − √(P² − 16A))
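A sketch of how several Table 2 descriptors follow from the primitive measurements is given below (plain Python, not taken from any particular analyzer; the rectangle-like test values are hypothetical):

import math

def shape_descriptors(area, perimeter, longest, breadth):
    form_factor = 4.0 * math.pi * area / perimeter ** 2        # always <= 1
    circularity = math.pi * longest ** 2 / (4.0 * area)        # always >= 1
    aspect_ratio = longest / breadth
    area_equiv_diam = math.sqrt(4.0 * area / math.pi)
    # fiber length/width are meaningful for long, ribbon-like features
    root = math.sqrt(max(perimeter ** 2 - 16.0 * area, 0.0))
    fiber_length = 0.25 * (perimeter + root)
    fiber_width = 0.25 * (perimeter - root)
    return dict(FF=form_factor, circularity=circularity, AR=aspect_ratio,
                AD=area_equiv_diam, fiber_length=fiber_length,
                fiber_width=fiber_width)

# Example: measurements resembling a 50 x 5 um rectangular fiber section
print(shape_descriptors(area=250.0, perimeter=110.0, longest=50.2, breadth=5.0))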


Indirect Measurements. Beyond using multiple selection criteria to screen features, it also is possible to use a combination of descriptors to make a measurement that cannot be made directly, such as the average thickness of curved fibers whose length is much greater than their thickness. While it may be possible to write an "intelligent" program to follow the fibers and measure their thickness at intervals along the length, there is an easier way. Because the length is much greater than the width, only two sides account for nearly the entire perimeter; the ends of the fibers are insignificant. Therefore, the perimeter divided by two gives the length of one side. The area is a direct measurement, so the area divided by the length of one side gives the average width, while ignoring variations in thickness. This is an example of using easily taken measurements to calculate a measurement that is not so easily made. Furthermore, it does not involve Ferets, which can be misleading on curving features. The derived descriptors fiber length and fiber width in Table 2 are applicable to this situation.

The ability of the user to define his or her own descriptors can be useful in ways other than making indirect measurements as previously described. It is possible that two or more primitive descriptors can be combined into a user-defined parameter that relates to material process or properties. In this case, a computer-based AIA system is virtually a necessity because the computer can make the tens of thousands of measurements while the investigator sorts through the data to select those that show significant correlation with properties.

Distributions of Measurements. One of the greatest advantages of an automatic image analysis system over manual methods is the capability to measure numerous parameters on every feature in a microstructure. With all of these data stored, it is tempting to report the complete distribution of many feature descriptors. It is beyond the scope of this chapter on measurements to delve into the statistics of various types of distributions. However, it should be reemphasized that field measurements taken on many fields often better characterize a material than do lists of feature measurements; in other words, do more less well! Reporting feature-specific measurements in a list or a plot must be done carefully to ensure that the report captures the essential characteristics of the distribution. In particle-size analysis, for example, a cumulative distribution commonly is used. However, if the complete distribution becomes unwieldy, three parameters can be used instead: mean diameter, standard deviation, and number of particles per unit volume (Ref 22). Plots of shape distributions are more difficult to interpret than those based on size. Shape measurements, such as aspect ratio and circularity, even when reported numerically, do not necessarily correspond intuitively to the physical processes that produced the features. Moreover, the same data may appear to be different when plotted using a different shape measure or on a different type of plot, such as a logarithmic graph.


In other cases, such as the measurements of duplex grain size described above, a weighted distribution is appropriate. A typical distribution of grain size is log normal, but there are grain-size distributions that are significantly different. A few large particles among hundreds or more small ones can have a significant effect on material properties, yet they may be lost in a histogram consisting of a count of features versus size. If the computer software calculates a median value, it is informative to compare the value with the mean value to look for a distribution that is skewed. A more complete treatment of statistics in image analysis and types of distributions is given in Chapter 7, "Analysis and Interpretation," and Chapter 8, "Applications." Nevertheless, before data can be analyzed statistically, the correct measurements must be made on an image acquired using a suitable contrast mechanism. The decision about what to report and how to illustrate the data requires care on the part of the operator. However, after these parameters are established, the capability of an AIA system to locate and measure thousands of features rapidly in many instances provides a savings in time and an increase in reproducibility over manual methods. Comparison chart methods may be fast, adequately precise, and free of bias for some measurements, but for others, it might be necessary to determine the distribution of feature-specific parameters to characterize the microstructure. The following example illustrates a routine image analysis. Figure 13 shows a microstructure consisting of ferrite in a plain carbon steel.

Fig. 13  Microstructure of plain carbon steel consisting of ferrite grains


Because the image consists only of space-filling grains, the measurements of area fraction and mean free path are not meaningful. However, in a real digital image, the grain boundaries comprise a finite number of pixels. Therefore, the measured area fraction is less than one, and the calculated mean free path is greater than zero. Even when image processing is used to thin the boundaries, their fraction is still not zero, and the reported results should take this into account. Tables 3 and 4 show typical field and feature measurements, respectively, made on a binary version of the image in Fig. 13.

Table 3  Field measurement report

Measurement                         Average
Field area, µm²                     365,730.8
Total features                      1,538
Total intercepts                    15,517
Area of features, µm²               252,340.9
Field NA, mm⁻²                      4.2
Field NL, mm⁻¹                      63.7
Average area of features, µm²       164.1
Average length of features, µm      14.7
Units per pixel, µm                 1.5

NA, number of features divided by area; NL, lineal density

Table 4  Feature measurement report

Feature                     Average      Median      Minimum     Maximum
Area                        164.07       103.74      5.79        2,316.15
Perimeter                   48.12        41.45       12.72       353.23
x Feret                     14.58        13.52       3.00        66.08
y Feret                     14.67        12.01       3.00        75.09
Max horizontal chord        13.42        12.01       3.00        60.07
Longest dimension           8.33         16.06       5.40        87.08
Breadth                     11.78        10.51       3.00        60.25
Average diam                15.43        13.62       5.15        73.97
Convex perimeter            48.47        42.80       16.17       232.37
Area equivalent diam        12.84        11.49       4.48        54.30
Form factor                 0.75         0.77        0.18        1.23
Circularity                 2.06         1.93        1.28        12.57
Mean x intercept            9.18         8.26        1.93        36.83
Mean y intercept            9.22         8.23        1.75        36.67
Roughness                   0.97         0.97        0.79        1.52
Volume of sphere            2,228.30     794.87      47.18       83,851.82
Aspect ratio                1.53         1.43        1.00        8.80

Note: length measurements are in µm, area in µm², and volume of sphere in µm³; form factor, circularity, roughness, and aspect ratio are dimensionless.

Standard Methods

Throughout this chapter, standard procedures have been cited where appropriate. These procedures are the result of consensus by a committee of experts representing various interested groups, and as such, they are a good place to start. A list of various relevant standards from ASTM is given in Tables 5 and 6.


Table 5  ASTM measurement- and materials-related standards

ASTM No.        Subject

Measurement standards
E 562           Manual point counting
E 1122          Inclusion rating by AIA
E 1382          Grain size by AIA
E 1181          Duplex grain size
E 1245          Inclusions by stereology
E 1268          Degree of banding
B 487           Coating thickness

Material standards
C 856           Microscopy of concrete
D 629           Microscopy of textiles
D 686           Microscopy of paper
D 1030          Microscopy of paper
D 2798          Microscopy of coal
D 3849          Electron microscopy of carbon black

AIA, automatic image analysis

Table 6  Microscopy-related standards

ASTM No.        Subject
E 3             Specimen preparation
E 766           SEM magnification calibration
E 883           Reflected-light microscopy
E 986           SEM beam size characterization
E 1351          Preparation of replicas
E 1558          Electropolishing
F 728           Microscope setup for line width

SEM, scanning electron microscope

The International Organization for Standardization (ISO) and other national standards agencies develop standards that relate to image analysis, but ASTM standards for materials have been developed over a period of 100 years, and often form the basis for standards published by other agencies. Some of the standards listed in Tables 5 and 6 deal directly with how to make measurements of, for instance, grain size and inclusions. Others, such as microscope magnification calibration, indirectly affect measurements. Still others cover the subject of microscopy of various materials and specimen preparation. Although these do not prescribe specific measurements, they are essential as the first step to valid image analysis.

References

1. "Standard Test Method for Determining Volume Fraction by Systematic Manual Point Count," E 562, Annual Book of ASTM Standards, Vol 03.01, ASTM, 1999, p 507
2. E.E. Underwood, Quantitative Metallography, Metallography and Microstructures, Vol 9, ASM Handbook, ASM International, 1985, p 123
3. "Standard Practice for Assessing the Degree of Banding or Orientation of Microstructures," E 1268, Annual Book of ASTM Standards, Vol 03.01, ASTM, 1999, p 780
4. E.E. Underwood, Quantitative Stereology, Addison-Wesley, 1970
5. J.I. Goldstein, D.E. Newbury, P. Echlin, D.C. Joy, A.D. Romig, C.E. Lyman, C. Fiori, and E. Lifshin, Scanning Electron Microscopy and X-ray Microanalysis, Plenum, 1992, p 264
6. E.E. Underwood and K. Banerji, Quantitative Fractography, Fractography, Vol 12, ASM Handbook, ASM International, 1987, p 193
7. E.E. Underwood, "Practical Solutions to Stereological Problems," Practical Applications of Quantitative Metallography, STP 839, ASTM, 1984, p 160
8. "Standard Practice for Calibrating the Magnification of a Scanning Electron Microscope," E 766, Annual Book of ASTM Standards, Vol 03.01, ASTM, 1999, p 614
9. M.P. do Carmo, Differential Geometry of Curves and Surfaces, Prentice-Hall, 1976, p 41
10. Stereology and Quantitative Metallography, STP 504, ASTM, 1972
11. G.F. Vander Voort, "Grain Size Measurement," Practical Applications of Quantitative Metallography, STP 839, ASTM, 1984, p 85
12. "Standard Test Methods for Determining Average Grain Size," E 112, Annual Book of ASTM Standards, Vol 03.01, ASTM, 1999, p 229
13. "Standard Test Methods for Determining Average Grain Size Using Semiautomatic and Automatic Image Analysis," E 1382, Annual Book of ASTM Standards, Vol 03.01, ASTM, 1999, p 855
14. G.F. Vander Voort and J.J. Friel, Image Analysis Measurements of Duplex Grain Structures, Mater. Charact., 1992, p 293
15. J.C. Russ, Practical Stereology, Plenum, 1986, p 124
16. E.E. Underwood, "Treatment of Reversed Sigmoidal Curves for Fractal Analysis," Advances in Video Technology for Microstructural Control, STP 1094, ASTM, 1991, p 354
17. J.J. Friel and C.S. Pande, J. Mater. Res., Vol 8, 1993, p 100
18. G.L. Dirichlet, "Über die Reduction der positiven quadratischen Formen mit drei unbestimmten ganzen Zahlen," J. für die reine und angewandte Mathematik, Vol 40, 1850, p 28
19. W.A. Spitzig, Metallography, Vol 18, 1985, p 235
20. G.F. Vander Voort, "Evaluation of Clustering of Second-Phase Particles," Advances in Video Technology for Microstructural Control, STP 1094, ASTM, 1991, p 242
21. J.C. Russ, Computer-Assisted Microscopy, Plenum, 1990, p 272
22. E.E. Underwood, Particle-Size Distribution, Quantitative Microscopy, McGraw-Hill, 1968, p 149


CHAPTER 6

Characterization of Particle Dispersion

Mahmoud T. Shehata, Materials Technology Laboratory/CANMET

IMAGE ANALYSIS is used to quantitatively characterize particle dispersion, that is, to determine whether, and to what extent, the dispersion is homogeneous or inhomogeneous. Particle dispersion, together with other factors such as volume fraction, size distribution, and shape, has a pronounced effect on many mechanical properties, such as fracture toughness (Ref 1–4) and sheet metal formability (Ref 5). Particle dispersion in this case refers to the dispersion of nonmetallic inclusions, precipitates, and second-phase particles in an alloy or a metal-matrix composite material. In all cases, the arrangement, size, and spacing of particles in space determine material properties; that is, it is the local volume fraction of the particles rather than the average value that determines properties (Ref 6). Examples of ordered, random, and clustered dispersions are shown in Fig. 1. A clustered dispersion of inclusions in a steel results in a lower fracture toughness than an ordered dispersion (Ref 3). This can be explained by the significant role of inclusions in void and/or crack initiation (Ref 7): it is the distribution of these voids or microcracks that determines the relative ease of void coalescence and the accumulation of damage up to the critical local level that causes failure. Therefore, to model the mechanical properties of a steel containing a dispersion of inclusions (or of any material containing second-phase particles), it is necessary to develop techniques that describe the dispersion quantitatively in terms of local volume fractions and local number densities rather than overall values. In mineral dressing, on the other hand, it is the local volume fractions of inclusions in the host grains, rather than the average values, that must be known to assess a dressing operation (Ref 8).


Five techniques used to characterize the dispersion of particles are described in this chapter. The examples that illustrate the techniques mainly involve inclusions in steels, but the techniques are equally applicable to any material containing second-phase particles. The degree of clustering in the dispersion is quantified for each technique. The techniques are presented in order of complexity:

1. A technique based on number density distribution measurements, sometimes referred to as the sparse sampling technique or grid/quadrant counting (Ref 9)
2. A technique based on nearest-neighbor spacing distribution measurements (Ref 10–12)
3. A technique called dilation and counting, which involves successive dilations and counting of the number of merging particles after each dilation step until they are all merged (see the sketch after this list)
4. A technique based on the construction of a Dirichlet network at mid-distance between particle centroids (Ref 13–15)
5. A technique that leads to the construction of a network (very similar to a Dirichlet network) at the mid-edge/edge-particle spacing rather than the centroid spacing. The procedure is based on the conditional dilation utility of the image analyzer, which essentially is a continued particle dilation in which a boundary is set up where the dilating particles meet. Conditional dilation is sometimes called inverse thinning, because the inverse binary image (background) is thinned down to a line.

Techniques 3, 4, and 5 are based on near-neighbor spacing rather than nearest-neighbor spacing.
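A minimal sketch of the dilation-and-counting idea in technique 3 is given below, assuming a binary particle image and the SciPy ndimage module; the function name and parameters are illustrative, not commands of any particular image analyzer. The number of separate objects is recorded after each dilation step until all particles have merged; counts fall off after fewer steps for a clustered dispersion than for an ordered or random one.

```python
# Hedged sketch of dilation and counting (technique 3); assumes SciPy is
# available and the input is a 2-D binary (particle = True) image.
import numpy as np
from scipy import ndimage

def dilation_counts(binary_img, max_steps=100):
    """Return the number of separate objects after each successive dilation."""
    img = np.asarray(binary_img, dtype=bool)
    counts = []
    for _ in range(max_steps):
        _, n_objects = ndimage.label(img)   # count connected particles
        counts.append(n_objects)
        if n_objects <= 1:                  # everything has merged into one object
            break
        img = ndimage.binary_dilation(img)  # one dilation step
    return counts
```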

The results obtained for a particle dispersion by all five techniques are compared in each case with results for ordered, random, and clustered dispersions. In addition, the techniques are evaluated, and their usefulness and limitations are discussed.

Fig. 1  Point patterns showing different point dispersions. (a) Ordered. (b) Random. (c) Clustered
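For experimentation with the techniques that follow, synthetic point patterns like those in Fig. 1 can be generated. The sketch below is illustrative only (not from the book) and assumes NumPy; the point counts, cluster count, and cluster spread are arbitrary choices.

```python
# Illustrative sketch: ordered, random, and clustered point patterns.
import numpy as np

rng = np.random.default_rng(0)
n, side = 400, 1.0

# (a) Ordered: points on a regular square grid (g*g points, close to n)
g = int(np.sqrt(n))
xs, ys = np.meshgrid(np.linspace(0.05, 0.95, g), np.linspace(0.05, 0.95, g))
ordered = np.column_stack([xs.ravel(), ys.ravel()])

# (b) Random: uniformly distributed (Poisson-like) positions
random_pts = rng.uniform(0.0, side, size=(n, 2))

# (c) Clustered: points scattered tightly around a few cluster centers
centers = rng.uniform(0.0, side, size=(10, 2))
clustered = (centers[rng.integers(0, len(centers), n)]
             + rng.normal(scale=0.02, size=(n, 2)))
```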


Number Density Variation Technique

The procedure involves counting the number of particles per unit area, N_A, in successive locations, or fields. For example, the number of inclusions is counted at various locations on a polished section of a steel sample (Fig. 2). This procedure is easily carried out using an automatic image analyzer equipped with an automated stage, so measurements at successive locations on a large sample are made automatically. A quantitative measure of the degree of inhomogeneity of the dispersion is the standard deviation, σ, defined as:

\sigma = \left[ \frac{1}{n} \sum_i \left( N_{A_i} - \bar{N}_A \right)^2 \right]^{1/2}    (Eq 1)

where N_Ai is the observed number of inclusions per unit area in the ith location (field of view) and N̄_A is the average number of particles per unit area over the sample. Maximum homogeneity is characterized by a minimum standard deviation; thus, the degree of homogeneity increases with decreasing standard deviation. To compare the relative homogeneity of samples with different number densities, the standard deviation must be normalized by the mean, which gives the coefficient of variation, V = σ / N̄_A.
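A minimal sketch of this calculation is shown below, assuming per-field particle counts from an image analyzer and NumPy; the function name and the example counts are illustrative.

```python
# Hedged sketch: number density variation (Eq 1) and coefficient of variation V.
import numpy as np

def number_density_variation(field_counts, field_area_mm2):
    NA = np.asarray(field_counts, dtype=float) / field_area_mm2  # N_A for each field
    NA_bar = NA.mean()                            # mean number density
    sigma = np.sqrt(np.mean((NA - NA_bar) ** 2))  # standard deviation, Eq 1
    return NA_bar, sigma, sigma / NA_bar          # V = sigma / mean

# Example with hypothetical counts from eight 0.06 mm^2 fields
print(number_density_variation([12, 40, 8, 35, 10, 42, 9, 38], 0.06))
```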

Fig. 2  Nonmetallic inclusions in steel. The inclusion dispersion is a combination of random and clustered dispersions.


It should be noted that the number density technique has an important limitation: the measured variation may depend on the area of the measuring field relative to the area, or size, of the clusters. For example, if the field area is so large that it contains a number of clusters (as in Fig. 2), it can be anticipated that the standard deviation will be very small. Such a large field does not allow the variation in number density between clustered and nonclustered regions to be detected, because both kinds of region are contained in the same measured field. In other words, a large measuring field implies a quasi-homogeneous distribution. The effect of field size is shown in the following table, which gives the coefficient of variation of number density measurements as a function of measuring field area for three different samples (Ref 6).

                Coefficient of variation, V, at field area, mm²
Sample        0.96        0.24        0.06        0.015
1             0.23        0.34        0.51        0.66
2             0.23        0.38        0.62        0.81
3             0.31        0.49        0.69        0.84

Note that the coefficient of variation increases as the area of the measuring field decreases. This suggests that the field area should be approximately as small as the area of a typical cluster if the number density variation technique is to sense clusters. For example, the field area used for the inclusion dispersion in Fig. 2 should not be larger than the size of a typical cluster, approximately 150 by 100 µm.
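The field-size effect can be explored numerically once particle centroid coordinates are available. The following is a hedged sketch, assuming NumPy and centroids expressed in millimetres; the function name, sample dimensions, and choice of field areas are illustrative.

```python
# Hedged sketch: coefficient of variation V for several field (cell) areas,
# obtained by binning particle centroids into square fields of each area.
import numpy as np

def V_vs_field_area(xy, width_mm, height_mm, field_areas_mm2):
    out = {}
    for a in field_areas_mm2:
        s = np.sqrt(a)                                   # side of a square field, mm
        nx, ny = max(int(width_mm // s), 1), max(int(height_mm // s), 1)
        counts, _, _ = np.histogram2d(
            xy[:, 0], xy[:, 1],
            bins=[nx, ny], range=[[0, nx * s], [0, ny * s]])
        counts = counts.ravel()
        out[a] = counts.std() / counts.mean()            # V = sigma / mean
    return out

# Example: centroids on a 10 x 10 mm section, field areas as in the table above
# V_vs_field_area(xy, 10.0, 10.0, [0.96, 0.24, 0.06, 0.015])
```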

Nearest-Neighbor Spacing Distribution

Nearest-neighbor spacing is easily measured using many automatic image analyzers. The x and y coordinates of all particle centroids in the dispersion are determined, and the distances between all pairs of particles are calculated. In a dispersion of N particles, the total number of paired spacings is N(N − 1)/2; however, only one spacing for each particle is the smallest, so there are N nearest-neighbor spacings in a dispersion of N particles. A frequency distribution of these N spacings can be determined readily using an image analyzer. An example is shown in Fig. 2, again for inclusions in a steel sample. To characterize particle dispersion using this technique, comparisons usually are made with a random distribution, which is assumed to follow a Poisson distribution (Fig. 3). In this case, the probability distribution function of the nearest-neighbor spacing χ is given by:

F(\chi) = 2\pi N_A \chi \, \exp\left(-\pi N_A \chi^2\right)    (Eq 2)


The expected mean, E(χ), and the expected variance, E(s)², are:

E(\chi) = \frac{1}{2\sqrt{N_A}}    (Eq 3)

E(s)^2 = \frac{4 - \pi}{4\pi} \cdot \frac{1}{N_A}    (Eq 4)

It is possible to characterize the dispersion by comparing the mean nearest-neighbor spacing and its variance with the expected values obtained for a random distribution (Ref 12). Based on the ratio Q of the observed to the expected (random) mean nearest-neighbor spacing and the ratio R of the observed to the expected variance of the nearest-neighbor spacing, the following comparisons can be made:

Distribution              Q value        R value
Random dispersion         1              1
Ordered dispersion        2 > Q > 1      0
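A minimal sketch of this comparison is given below, assuming particle centroid coordinates, the measured sample area, and SciPy's cKDTree for the nearest-neighbor search; the function name is illustrative. Q and R are the observed-to-expected ratios of the mean (Eq 3) and variance (Eq 4) of the nearest-neighbor spacing. For a random dispersion, both ratios should be close to 1, consistent with the first row of the table above.

```python
# Hedged sketch: Q and R ratios from nearest-neighbor spacings (Eq 2-4).
import numpy as np
from scipy.spatial import cKDTree

def q_and_r(xy, sample_area):
    NA = len(xy) / sample_area                  # number density N_A
    d, _ = cKDTree(xy).query(xy, k=2)           # k=2: column 0 is the point itself
    nn = d[:, 1]                                # nearest-neighbor spacings
    E_mean = 1.0 / (2.0 * np.sqrt(NA))          # expected mean, Eq 3
    E_var = (4.0 - np.pi) / (4.0 * np.pi * NA)  # expected variance, Eq 4
    return nn.mean() / E_mean, nn.var() / E_var # Q, R
```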