
Inventions and Inventors


MAGILL’S CHOICE

Inventions and Inventors

Volume 1
Abortion pill — Laminated glass
1 – 458

edited by Roger Smith

Salem Press, Inc. Pasadena, California

Hackensack, New Jersey

Copyright © 2002, by Salem Press, Inc.

All rights in this book are reserved. No part of this work may be used or reproduced in any manner whatsoever or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without written permission from the copyright owner except in the case of brief quotations embodied in critical articles and reviews. For information address the publisher, Salem Press, Inc., P.O. Box 50062, Pasadena, California 91115.

Essays originally appeared in Twentieth Century: Great Events (1992, 1996), Twentieth Century: Great Scientific Achievements (1994), and Great Events from History II: Business and Commerce Series (1994). New material has been added.

∞ The paper used in these volumes conforms to the American National Standard for Permanence of Paper for Printed Library Materials, Z39.48-1992 (R1997).

Library of Congress Cataloging-in-Publication Data
Inventions and inventors / edited by Roger Smith
p. cm. — (Magill’s choice)
Includes bibliographical references and index
ISBN 1-58765-016-9 (set : alk. paper) — ISBN 1-58765-017-7 (vol. 1 : alk. paper) — ISBN 1-58765-018-5 (vol. 2 : alk. paper)
1. Inventions—History—20th century—Encyclopedias. 2. Inventors—Biography—Encyclopedias. I. Smith, Roger. II. Series.
T20 .I59 2001
609—dc21

2001049412

Printed in the United States of America

Table of Contents

Publisher’s Note . . . ix
Editor’s Foreword . . . xi
Abortion pill . . . 1
Airplane . . . 6
Alkaline storage battery . . . 11
Ammonia . . . 16
Amniocentesis . . . 20
Antibacterial drugs . . . 24
Apple II computer . . . 28
Aqualung . . . 33
Artificial blood . . . 38
Artificial chromosome . . . 41
Artificial heart . . . 45
Artificial hormone . . . 50
Artificial insemination . . . 54
Artificial kidney . . . 58
Artificial satellite . . . 63
Aspartame . . . 67
Assembly line . . . 71
Atomic bomb . . . 76
Atomic clock . . . 80
Atomic-powered ship . . . 84
Autochrome plate . . . 88
BASIC programming language . . . 92
Bathyscaphe . . . 95
Bathysphere . . . 100
BINAC computer . . . 104
Birth control pill . . . 108
Blood transfusion . . . 113
Breeder reactor . . . 118
Broadcaster guitar . . . 122
Brownie camera . . . 130
Bubble memory . . . 138

Bullet train . . . 142
Buna rubber . . . 146
CAD/CAM . . . 151
Carbon dating . . . 158
Cassette recording . . . 163
CAT scanner . . . 167
Cell phone . . . 172
Cloning . . . 177
Cloud seeding . . . 183
COBOL computer language . . . 187
Color film . . . 192
Color television . . . 196
Colossus computer . . . 200
Communications satellite . . . 204
Community antenna television . . . 208
Compact disc . . . 217
Compressed-air-accumulating power plant . . . 225
Computer chips . . . 229
Contact lenses . . . 235
Coronary artery bypass surgery . . . 240
Cruise missile . . . 244
Cyclamate . . . 248
Cyclotron . . . 252

Diesel locomotive . . . 257
Differential analyzer . . . 262
Dirigible . . . 267
Disposable razor . . . 272
Dolby noise reduction . . . 279

Electric clock . . . 284
Electric refrigerator . . . 289
Electrocardiogram . . . 293
Electroencephalogram . . . 298
Electron microscope . . . 302
Electronic synthesizer . . . 307
ENIAC computer . . . 312

Fax machine . . . 316
Fiber-optics . . . 320
Field ion microscope . . . 325
Floppy disk . . . 330
Fluorescent lighting . . . 335
FM radio . . . 339
Food freezing . . . 343
FORTRAN programming language . . . 347
Freeze-drying . . . 351
Fuel cell . . . 355

Gas-electric car . . . 360
Geiger counter . . . 365
Genetic “fingerprinting” . . . 370
Genetically engineered insulin . . . 374
Geothermal power . . . 378
Gyrocompass . . . 382

Hard disk . . . 386
Hearing aid . . . 390
Heart-lung machine . . . 394
Heat pump . . . 398
Holography . . . 402
Hovercraft . . . 407
Hydrogen bomb . . . 412

IBM Model 1401 computer . . . 417
In vitro plant culture . . . 421
Infrared photography . . . 425
Instant photography . . . 430
Interchangeable parts . . . 434
Internal combustion engine . . . 442
The Internet . . . 446
Iron lung . . . 451

Laminated glass . . . 454



Publisher’s Note

To many people, the word “invention” brings to mind cleverly contrived gadgets and devices, such as safety pins, zippers, typewriters, and telephones—all of which have fascinating stories of invention behind them. However, the word actually has a much broader meaning, one that goes back to the Latin word invenire, for “to come upon.” In its broad sense, an invention can be any tangible device or contrivance, or even a process, that is brought into being by human imagination. It is in this broad sense that the term is used in Inventions and Inventors, the latest contribution to the Magill’s Choice reference books. This two-volume set contains articles on 195 twentieth century inventions, which span the full range of human imagination—from simple gadgets, such as disposable razors, to unimaginably complex medical breakthroughs, such as genetically engineered insulin. This set is not an encyclopedic catalog of the past century’s greatest inventions but rather a selective survey of noteworthy breakthroughs in the widest possible variety of fields.

A combination of several features sets Inventions and Inventors apart from other reference works on this subject: the diversity of its subject matter, the depth of its individual articles, and its emphasis on the people behind the inventions. The range of subjects covered here is unusually wide. In addition to articles on what might be considered “classic” inventions—such as airplanes, television, and satellites—the set has articles on inventions in fields as diverse as agriculture, biology, chemistry, computer science, consumer products, drugs and vaccines, energy, engineering, food science, genetic engineering, medical procedures, music, photography, physics, synthetics, transportation, and weapons technology.

Most of this set’s essays appeared earlier in Twentieth Century: Great Events (1992, 1996) and Twentieth Century: Great Scientific Achievements (1994). Its longest essays are taken from Great Events from History II: Business and Commerce Series (1994). Information in the articles has been updated, and completely new bibliographical notes have been added to all of them. Half the essays also have original sidebars on people behind the inventions.


At least one thousand words in length, each essay opens with a brief summary of the invention and its significance, followed by an annotated list of important personages behind it—including scientists, engineers, technicians, and entrepreneurs. The essay then examines the background to the invention, its process of discovery and innovation, and its impact on the world. Half the articles have entirely new sidebars on individuals who played important roles in the inventions’ development and promotion.

Users can find topics by using any of several different methods. Articles are alphabetically arranged under their titles, which use the names of the inventions themselves, such as “Abortion pill,” “Airplane,” “Alkaline storage battery,” “Ammonia,” and “Amniocentesis.” Many inventions are known by more than one name, however, and users may find what they are looking for in the general index, which lists topics under multiple terms. Several systems of cross-referencing direct users to articles of interest. Appended to every essay is a list of articles on related or similar inventions. Further help can be found in appendices at the end of volume two. The first, a Time Line, lists essay topics chronologically, by the years in which the inventions were first made. The second, a Topics by Category list, organizes essay topics under broader headings, with most topics appearing under at least two category headings.

Allowing for the many topics counted more than once, these categories include Consumer products (36 essays), Electronics (28), Communications (27), Medicine (25), Measurement and detection (24), Computer science (23), Home products (20), Materials (18), Medical procedures (17), Synthetics (17), Photography (16), Energy (16), Engineering (16), Physics (13), Food science (13), Drugs and vaccines (13), Transportation (11), Weapons technology (11), Genetic engineering (11), Aviation and space (10), Biology (9), Chemistry (9), Exploration (8), Music (7), Earth science (6), Manufacturing (6), and Agriculture (5).

More than one hundred scholars wrote the original articles used in these volumes. Because their names did not appear with their articles in the Twentieth Century sets, we cannot, unfortunately, list them here. However, we extend our thanks for their contributions. We also are indebted to Roger Smith for his help in assembling the topic list and in writing all the biographical sidebars.

Editor’s Foreword

The articles in Inventions and Inventors recount the birth and growth of important components in the technology of the twentieth century. They concern inventions ranging from processes, methods, sensors, and tests to appliances, tools, machinery, vehicles, electronics, and materials. To explain these various inventions, the essays deal with principles of physics, chemistry, engineering, biology, and computers—all intended for general readers.

From complex devices, such as electron microscopes, and phenomena difficult to define, such as the Internet, to things so familiar that they are seldom thought of as having individual histories at all, such as Pyrex glass and Velcro, all the inventions described here increased the richness of technological life. Some of these inventions, such as the rotary-dial telephone, have passed out of common use, at least in the United States and Europe, while others, such as the computer, are now so heavily relied upon that mass technological culture could scarcely exist without them. Each article, then, is at the same time a historical sketch and a technical explanation of an invention, written to inform and, I hope, intrigue.

Brief biographical sidebars accompany half the articles. The sidebars outline the lives of people who are in some way responsible for the inventions discussed: the original inventor, a person who made important refinements, an entrepreneur, or even a social crusader who fostered acceptance of a controversial invention, as Margaret Sanger did for the birth control pill. These little biographies, although offering only basic information, call forth the personal struggles behind inventions. That is a facet of inventions that needs emphasizing, because it shows that technology, which can seem bewilderingly impersonal and complex, is always rooted in human need and desire.

Roger Smith
Portland, Oregon



MAGILL’S CHOICE

Inventions and Inventors

Volume 2
Laser — Yellow fever vaccine
Index
459 – 936

edited by Roger Smith


Table of Contents

Laser . . . 459
Laser-diode recording process . . . 464
Laser eye surgery . . . 468
Laser vaporization . . . 472
Long-distance radiotelephony . . . 477
Long-distance telephone . . . 482

Mammography . . . 486
Mark I calculator . . . 490
Mass spectrograph . . . 494
Memory metal . . . 498
Microwave cooking . . . 502

Neoprene . . . 507
Neutrino detector . . . 511
Nuclear magnetic resonance . . . 516
Nuclear power plant . . . 520
Nuclear reactor . . . 525
Nylon . . . 529

Oil-well drill bit . . . 533
Optical disk . . . 537
Orlon . . . 541
Pacemaker . . . 545
Pap test . . . 549
Penicillin . . . 553
Personal computer . . . 558
Photoelectric cell . . . 562
Photovoltaic cell . . . 567
Plastic . . . 571
Pocket calculator . . . 576
Polio vaccine (Sabin) . . . 581
Polio vaccine (Salk) . . . 585
Polyester . . . 589

Polyethylene . . . 593
Polystyrene . . . 597
Propeller-coordinated machine gun . . . 601
Pyrex glass . . . 606

Radar . . . 611
Radio . . . 616
Radio crystal sets . . . 621
Radio interferometer . . . 625
Refrigerant gas . . . 630
Reserpine . . . 634
Rice and wheat strains . . . 638
Richter scale . . . 645
Robot (household) . . . 650
Robot (industrial) . . . 654
Rocket . . . 658
Rotary dial telephone . . . 663

SAINT . . . 668
Salvarsan . . . 673
Scanning tunneling microscope . . . 678
Silicones . . . 683
Solar thermal engine . . . 687
Sonar . . . 692
Stealth aircraft . . . 697
Steelmaking process . . . 701
Supercomputer . . . 709
Supersonic passenger plane . . . 714
Synchrocyclotron . . . 720
Synthetic amino acid . . . 724
Synthetic DNA . . . 729
Synthetic RNA . . . 733
Syphilis test . . . 737

Talking motion pictures . . . 741
Teflon . . . 746
Telephone switching . . . 751
Television . . . 756
Tevatron accelerator . . . 761

Thermal cracking process . . . 765
Tidal power plant . . . 770
Touch-tone telephone . . . 774
Transistor . . . 778
Transistor radio . . . 786
Tuberculosis vaccine . . . 791
Tungsten filament . . . 795
Tupperware . . . 799
Turbojet . . . 807
Typhus vaccine . . . 811

Ultracentrifuge . . . 815
Ultramicroscope . . . 819
Ultrasound . . . 823
UNIVAC computer . . . 828

Vacuum cleaner . . . 832
Vacuum tube . . . 837
Vat dye . . . 842
Velcro . . . 846
Vending machine slug rejector . . . 850
Videocassette recorder . . . 857
Virtual machine . . . 861
Virtual reality . . . 866
V-2 rocket . . . 871

Walkman cassette player . . . 875
Washing machine . . . 883
Weather satellite . . . 887
Xerography . . . 891
X-ray crystallography . . . 896
X-ray image intensifier . . . 901
Yellow fever vaccine . . . 905

Time Line . . . 909
Topics by Category . . . 915
Index . . . 923


Abortion pill

The invention: RU-486 was the first commercially available drug that prevented fertilized eggs from implanting themselves in the walls of women’s uteruses.

The people behind the invention:
Étienne-Émile Baulieu (b. 1926), a French biochemist and endocrinologist
Georges Teutsch, a French chemist
Alain Bélanger, a French chemist
Daniel Philibert, a French physicist and pharmacologist

Developing and Testing

In 1980, Alain Bélanger, a research chemist, was working with Georges Teutsch at Roussel Uclaf, a French pharmaceutical company. Teutsch and Bélanger were interested in understanding how changes in steroids affect the chemicals’ ability to bind to their steroid receptors. (Receptors are molecules on cells that can bind with certain chemical substances such as hormones. Receptors therefore act as connecting links to promote or prevent specific bodily activities or processes.) Bélanger synthesized several steroids that bonded to steroid receptors. Among these steroids was a compound that came to be called “RU-486.” Another member of the research project, Daniel Philibert, found that RU-486 blocked the activities of progesterone by binding tightly to the progesterone receptor.

Progesterone is a naturally occurring steroid hormone that prepares the wall of the uterus to accept a fertilized egg. Once this is done, the egg can become implanted and can begin to develop. The hormone also prevents the muscles of the uterus from contracting, which might cause the uterus to reject the egg. Therefore RU-486, by acting as a kind of shield between hormone and receptor, essentially stopped the progesterone from doing its job.

At the time, Teutsch’s group did not consider that RU-486 might be useful for deliberately interrupting human pregnancy. It was Étienne-Émile Baulieu, a biochemist and endocrinologist and a consultant for Roussel Uclaf, who made this connection. He persuaded the company to test RU-486 for its effects on fertility control. Many tests were performed on rabbits, rats, and monkeys; they showed that, even in the presence of progesterone, RU-486 could prevent secretory tissue from forming in the uterus, could change the timing of the menstrual cycle, and could terminate a pregnancy—that is, cause an abortion. The compound also seemed to be nontoxic, even in high doses.

In October of 1981, Baulieu began testing the drug with human volunteers. By 1985, major tests of RU-486 were being done in France, Great Britain, The Netherlands, Sweden, and China. When a relatively low dose of RU-486 was given orally, there was an 85 percent success rate in ending pregnancy; the woman’s body expelled the embryo and all the endometrial surface. Researchers found that if a low dose of a prostaglandin (a hormonelike substance that causes the smooth muscles of the uterus to contract, thereby expelling the embryo) was given two days later, the success rate rose to 96 percent. There were few side effects, and the low doses of RU-486 did not interfere with the actions of other steroid hormones that are necessary to keep the body working.

Étienne-Émile Baulieu

Étienne-Émile Baulieu was born in Strasbourg, France, in 1926. He moved to Paris for his advanced studies at the Faculty of Medicine and Faculty of Science of Pasteur College. He was an Intern of Paris from 1951 until he received a medical degree in 1955. He passed examinations qualifying him to become a teacher at state schools in 1958, and during the 1961-1962 academic year he was a visiting scientist in Columbia University’s Department of Biochemistry. In 1963 Baulieu was made a Doctor of Science and appointed director of a research unit at France’s National Institute of Health and Medical Science, a position he held until he retired in 1997. He also served as Head of Service of Hormonal Biochemistry of the Hospital of Bicêtre (1970-1997), professor of biochemistry at University of Paris-South (1970-1993), and consultant for Roussel Uclaf (1963-1997). Among his many honors are the Gregory Pincus Memorial Award (1978), awards from the National Academy of Medicine, the Christopher Columbus Discovery Award in Biomedical Research (1992), the Joseph Bolivar DeLee Humanitarian Award (1994), and Commander of the Legion of Honor (1990). Although busy with research and teaching duties, Baulieu was on the editorial board of several French and international newspapers, a member of scientific councils, and a participant in the Special Program in Human Reproduction of the World Health Organization.

In the March, 1990, issue of The New England Journal of Medicine, Baulieu and his coworkers reported that with one dose of RU-486, followed in thirty-six to forty-eight hours by a low dose of prostaglandin, 96 percent of the 2,040 women they studied had a complete abortion with few side effects. The women were monitored after receiving the prostaglandin to watch for side effects, which included nausea, vomiting, abdominal pain, and diarrhea. When they returned for a later checkup, fewer than 2 percent of the women complained of side effects. The researchers used two different prostaglandins; they found that one caused a quicker abortion but also brought about more pain and a longer period of bleeding.

Using the Drug

In September, 1988, the French government approved the distribution of RU-486 for use in government-controlled clinics. The next month, however, Roussel Uclaf stopped selling the drug because people opposed to abortion did not want RU-486 to be available and were threatening to boycott the company. Then, however, there were threats and pressure from the other side. For example, members of the World Congress of Obstetrics and Gynecology announced that they might boycott Roussel Uclaf if it did not make RU-486 available.

The French government, which controlled a 36 percent interest in Roussel Uclaf, ordered the company to start distributing the drug once more. By the fall of 1989, more than one-fourth of all early abortions in France were being done with RU-486 and a prostaglandin. The French government began helping to pay the cost of using RU-486 in 1990.


Testing for approval of RU-486 was completed in Great Britain and The Netherlands, but Roussel Uclaf's parent company, Hoechst AG, did not try to market the drug there or in any other country outside France. (In the United States, government regulations did not allow RU-486 to be tested using government funds.)

Medical researchers believe that RU-486 may be useful not only for abortions but also in other ways. For example, it may help in treating certain breast cancers and other tumors. RU-486 is also being investigated as a possible treatment for glaucoma—to lower pressure in the eye that may be caused by a high level of steroid hormone. It may be useful in promoting the healing of skin wounds and softening the cervix at birth, easing delivery. Researchers hope as well that some form of RU-486 may prove useful as a contraceptive—that is, not to prevent a fertilized egg from implanting itself in the mother's uterus but to prevent ovulation in the first place.

Impact

Groups opposed to abortion rights have spoken out against RU-486, while those who favor the right to abortion have urged its acceptance. The drug has been approved for use in China as well as in France. In the United States, however, the government has avoided giving its approval to the drug. Officials of the World Health Organization (WHO) have argued that RU-486 could prevent the deaths of women who undergo botched abortions. Under international law, WHO has the right to take control of the drug and make it available in poor countries at low cost. Because of the controversy surrounding the drug, however, WHO called for more testing to ensure that RU-486 is safe for women.

See also Amniocentesis; Antibacterial drugs; Artificial hormone; Birth control pill; Salvarsan.

Further Reading

Baulieu, Etienne-Emile, and Mort Rosenblum. The "Abortion Pill": RU-486, a Woman's Choice. New York: Simon & Schuster, 1991.
Butler, John Douglas, and David F. Walbert. Abortion, Medicine, and the Law. 4th ed. New York: Facts on File, 1992.


Lyall, Sarah. "Britain Allows Over-the-Counter Sales of Morning-After Pill." New York Times (January 15, 2001).
McCuen, Gary E. RU 486: The Abortion Pill Controversy. Hudson, Wis.: GEM Publications, 1992.
Nemecek, Sasha. "The Second Abortion Pill." Scientific American 283, no. 6 (December, 2000).
Zimmerman, Rachel. "Ads for Controversial Abortion Pill Set to Appear in National Magazines." Wall Street Journal (May 23, 2001).


Airplane

The invention: The first heavier-than-air craft to fly, the airplane revolutionized transportation and symbolized the technological advances of the twentieth century.

The people behind the invention:
Wilbur Wright (1867-1912), an American inventor
Orville Wright (1871-1948), an American inventor
Octave Chanute (1832-1910), a French-born American civil engineer

A Careful Search

Although people have dreamed about flying since the time of the ancient Greeks, it was not until the late eighteenth century that hot-air balloons and gliders made human flight possible. It was not until the late nineteenth century that enough experiments had been done with kites and gliders that people could begin to think seriously about powered, heavier-than-air flight. Two of these people were Wilbur and Orville Wright.

The Wright brothers making their first successful powered flight, at Kitty Hawk, North Carolina. (Library of Congress)


The Wright brothers were more than just tinkerers who accidentally found out how to build a flying machine. In 1899, Wilbur wrote the Smithsonian Institution for a list of books to help them learn about flying. They used the research of people such as George Cayley, Octave Chanute, Samuel Langley, and Otto Lilienthal to help them plan their own experiments with birds, kites, and gliders. They even built their own wind tunnel. They never fully trusted the results of other people's research, so they repeated the experiments of others and drew their own conclusions. They shared these results with Octave Chanute, who was able to offer them lots of good advice. They were continuing a tradition of excellence in engineering that began with careful research and avoided dangerous trial and error.

Slow Success

Before the brothers had set their minds to flying, they had built and repaired bicycles. This was a great help to them when they put their research into practice and actually built an airplane. From building bicycles, they knew how to work with wood and metal to make a lightweight but sturdy machine. Just as important, from riding bicycles, they got ideas about how an airplane needed to work. They could see that both bicycles and airplanes needed to be fast and light. They could also see that airplanes, like bicycles, needed to be kept under constant control to stay balanced, and that this control would probably take practice. This was a unique idea. Instead of building something solid that was controlled by levers and wheels like a car, the Wright brothers built a flexible airplane that was controlled partly by the movement of the pilot, like a bicycle.

The result was the 1903 Wright Flyer. The Flyer had two sets of wings, one above the other, which were about 12 meters from tip to tip. They made their own 12-horsepower engine, as well as the two propellers the engine spun. The craft had skids instead of wheels.
On December 14, 1903, the Wright brothers took the Wright Flyer to the shores of Kitty Hawk, North Carolina, where Wilbur Wright made the first attempt to fly the airplane. The first thing Wilbur found was that flying an airplane was not as easy as riding a bicycle. One wrong move sent him tumbling into the sand only moments after takeoff. Wilbur was not seriously hurt, but a few more days were needed to repair the Wright Flyer.

The Wright Brothers

Orville and his older brother Wilbur first got interested in aircraft when their father gave them a toy helicopter in 1878. Theirs was a large, supportive family. Their father, a minister, and their mother, a college graduate and inventor of household gadgets, encouraged all five of the children to be creative. Although Wilbur, born in 1867, was four years older than Orville, they were close as children. While in high school, they put out a weekly newspaper together, West Side News, and they opened their bicycle shop in 1892. Orville was the mechanically adept member of the team, the tinkerer; Wilbur was the deliberative one, the planner and designer.

Since the bicycle business was seasonal, they had time to pursue their interest in aircraft, puzzling out the technical problems and studying the successes and failures of others. They started with gliders, flying their first, which had a five-foot wing span, in 1899. They developed their own technique to control the gliders, the "wing-warping technique," after watching how birds fly. They attached wires to the trailing edges of the wings and pulled the wires to deform the wings' shape. They built a sixteen-foot glider in 1900 and spent a vacation in North Carolina gaining flying experience. Further designs and many more tests followed, including more than two hundred shapes of wing studied in their home-built wind tunnel, before their first successful engine-powered flight in 1903.

Neither man ever married. After Wilbur died of typhoid in 1912, Orville was stricken by the loss of his brother but continued to run their business until 1915. He last piloted an airplane himself in 1918 and died thirty years later. Their first powered airplane, the Wright Flyer, lives on at the National Air and Space Museum in Washington, D.C. Small parts from the aircraft were taken to the Moon by Neil Armstrong and Edwin Aldrin when they made the first landing there in 1969.

On December 17, 1903, at 10:35 a.m., after eight years of research and planning, Orville Wright took to the air for a historic twelve seconds. He covered 37 meters of ground and 152 meters of air space. Both brothers took two flights that morning. On the fourth flight, Wilbur flew for fifty-nine seconds over 260 meters of ground and through more than 800 meters of air space. After he had landed, a sudden gust of wind struck the plane, damaging it beyond repair. Yet no one was able to beat their record for three years.

Impact

Those first flights in 1903 got little publicity. Only a few people, such as Octave Chanute, understood the significance of the Wright brothers' achievement. For the next two years, they continued to work on their design, and by 1905 they had built the Wright Flyer III. Although Chanute tried to get them to enter flying contests, the brothers decided to be cautious and try to get their machine patented first, so that no one would be able to steal their ideas. News of their success spread slowly through the United States and Europe, giving hope to others who were working on airplanes of their own. When the Wright brothers finally went public with the Wright Flyer III, they inspired many new advances. By 1910, when the brothers started flying in air shows and contests, their feats were matched by another American, Glenn Hammond Curtiss. The age of the airplane had arrived.

Later in the decade, the Wright brothers began to think of military uses for their airplanes. They signed a contract with the U.S. Army Signal Corps and agreed to train military pilots. Aside from these achievements, the brothers from Dayton, Ohio, set the standard for careful research and practical experimentation. They taught the world not only how to fly but also how to design airplanes. Indeed, their methods of purposeful, meaningful, and highly organized research had an impact not only on airplane design but also on the field of aviation science in general.
See also Bullet train; Cruise missile; Dirigible; Gas-electric car; Propeller-coordinated machine gun; Rocket; Stealth aircraft; Supersonic passenger plane; Turbojet; V-2 rocket.


Further Reading

Brady, Tim. The American Aviation Experience: A History. Carbondale: Southern Illinois University Press, 2000.
Chanute, Octave, Marvin Wilks, Orville Wright, and Wilbur Wright. The Papers of Wilbur and Orville Wright: Including the Chanute-Wright Letters and Other Papers of Octave Chanute. New York: McGraw-Hill, 2000.
Culick, Fred, and Spencer Dunmore. On Great White Wings: The Wright Brothers and the Race for Flight. Toronto: McArthur, 2001.
Howard, Fred. Wilbur and Orville: A Biography of the Wright Brothers. Mineola, N.Y.: Dover Publications, 1998.


Alkaline storage battery

The invention: The nickel-iron alkaline battery was a lightweight, inexpensive portable power source for vehicles with electric motors.

The people behind the invention:
Thomas Alva Edison (1847-1931), American chemist, inventor, and industrialist
Henry Ford (1863-1947), American inventor and industrialist
Charles F. Kettering (1876-1958), American engineer and inventor

A Three-Way Race

The earliest automobiles were little more than pairs of bicycles harnessed together within a rigid frame, and there was little agreement at first regarding the best power source for such contraptions. The steam engine, which was well established for railroad and ship transportation, required an external combustion area and a boiler. Internal combustion engines required hand cranking, which could cause injury if the motor backfired. Electric motors were attractive because they did not require the burning of fuel, but they required batteries that could store a considerable amount of energy and could be repeatedly recharged. Ninety percent of the motorcabs in use in New York City in 1899 were electrically powered.

The first practical storage battery, which was invented by the French physicist Gaston Planté in 1859, employed electrodes (conductors that bring electricity into and out of a conducting medium) of lead and lead oxide and a sulfuric acid electrolyte (a solution that conducts electricity). In somewhat improved form, this remained the only practical rechargeable battery at the beginning of the twentieth century. Edison considered the lead-acid cell (battery) unsuitable as a power source for electric vehicles because using lead, one of the densest metals known, resulted in a heavy battery that added substantially to the power requirements of a motorcar. In addition, the use of an acid electrolyte required that


the battery container be either nonmetallic or coated with a nonmetal and thus less dependable than a steel container.

The Edison Battery

In 1900, Edison began experiments aimed at developing a rechargeable battery with inexpensive and lightweight metal electrodes and an alkaline electrolyte so that a metal container could be used. He had already been involved in manufacturing the nonrechargeable battery known as the Lalande cell, which had zinc and copper oxide electrodes and a highly alkaline sodium hydroxide electrolyte. Zinc electrodes could not be used in a rechargeable cell because the zinc would dissolve in the electrolyte. The copper electrode also turned out to be unsatisfactory.

After much further experimentation, Edison settled on the nickel-iron system for his new storage battery. In this system, the power-producing reaction involved the conversion of nickel oxide to nickel hydroxide together with the oxidation of iron metal to iron oxide, with both materials in contact with a potassium hydroxide solution. When the battery was recharged, the nickel hydroxide was converted into oxide and the iron oxide was converted back to the pure metal.

Although the basic ingredients of the Edison cell were inexpensive, they could not readily be obtained in adequate purity for battery use. Edison set up a new chemical works to prepare the needed materials. He purchased impure nickel alloy, which was then dissolved in acid, purified, and converted to the hydroxide. He prepared pure iron powder by using a multiple-step process.

Thomas A. Edison. (Library of Congress)

For use in the battery, the reactant powders had to be packed in pockets made of nickel-plated steel that had been perforated to allow the iron and nickel powders to come into contact with the electrolyte. Because the nickel compounds were poor electrical conductors, a flaky type of graphite was mixed with the nickel hydroxide at this stage.

Sales of the new Edison storage battery began in 1904, but within six months it became apparent that the battery was subject to losses in power and a variety of other defects. Edison took the battery off the market in 1905 and offered full-price refunds for the defective batteries. Not a man to abandon an invention, however, he spent the next five years examining the failed batteries and refining his design. He discovered that the repeated charging and discharging of the battery caused a shift in the distribution of the graphite in the nickel hydroxide electrode. By using a different type of graphite, he was able to eliminate this problem and produce a very dependable power source.

Thomas Alva Edison

Thomas Alva Edison (1847-1931) was America's most famous and prolific inventor. His astonishing success story, rising from a home-schooled child who worked as a newsboy to a leader in American industry, was celebrated in children's books, biographies, and movies. Corporations still bear his name, and his inventions and improvements of others' inventions—such as the light bulb, phonograph, and motion picture—shaped the way Americans live, work, and entertain themselves. The U.S. Patent Office issued Edison 1,093 patents during his lifetime, the most granted to one person.

Hailed as a genius, Edison himself emphasized the value of plain determination. "Genius is one percent inspiration and 99 percent perspiration," he insisted. He also understood the value of working with others. In fact, one of his greatest contributions to American technology involved organized research. At age twenty-three he sold the rights to his first major invention, an improved ticker-tape machine for Wall Street brokers, for $40,000. He invested the money in building an industrial research laboratory, the first ever. It led to his large facilities at Menlo Park, New Jersey, and, later, labs in other locations. At times as many as one hundred people worked for him, some of whom, such as Nikola Tesla and Reginald Fessenden, became celebrated inventors in their own right.

At his labs Edison not only developed electrical items, such as the light bulb and storage battery; he also produced an efficient mimeograph and worked on innovations in metallurgy, organic chemistry, photography and motion pictures, and phonography. The phonograph, he once said, was his favorite invention. Edison never stopped working. He was still receiving patents the year he died.

The Ford Motor Company, founded by Henry Ford, a former Edison employee, began the large-scale production of gasoline-powered automobiles in 1903 and introduced the inexpensive, easy-to-drive Model T in 1908. The introduction of the improved Edison battery in 1910 gave a boost to electric car manufacturers, but their new position in the market would be short-lived. In 1911, Charles Kettering invented an electric starter for gasoline-powered vehicles that eliminated the need for troublesome and risky hand cranking. By 1915, this device was available on all gasoline-powered automobiles, and public interest in electrically powered cars rapidly diminished. Although the Kettering starter required a battery, it required much less capacity than an electric motor would have and was almost ideally suited to the six-volt lead-acid battery.

Impact

Edison lost the race to produce an electrical power source that would meet the needs of automotive transportation. Instead, the internal combustion engine developed by Henry Ford became the standard. Interest in electrically powered transportation diminished as immense reserves of crude oil, from which gasoline could be obtained, were discovered first in the southwestern United States and then on the Arabian peninsula. Nevertheless, the Edison cell found a variety of uses and has been manufactured continuously throughout most of the twentieth century much as Edison designed it.
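The nickel-iron chemistry Edison settled on can be summarized by the standard overall cell reaction found in modern textbooks (this notation is an addition for clarity, not part of the original article; the "nickel oxide" of the text is written here as the oxyhydroxide NiOOH):

```latex
\[
\mathrm{Fe} \;+\; 2\,\mathrm{NiOOH} \;+\; 2\,\mathrm{H_2O}
\;\underset{\text{charge}}{\overset{\text{discharge}}{\rightleftharpoons}}\;
\mathrm{Fe(OH)_2} \;+\; 2\,\mathrm{Ni(OH)_2}
\qquad \text{(in aqueous KOH)}
\]
```

The potassium hydroxide electrolyte is not consumed in the overall reaction, which is one reason an alkali-resistant steel container could be used in place of the nonmetallic containers that lead-acid cells required.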
Electrically powered trucks proved to be well suited for local deliveries, and some department stores maintained fleets of such trucks into the mid-1920's. Electrical power is still preferable to internal combustion for indoor use, where exhaust fumes are a significant problem, so forklifts in factories and passenger transport vehicles at airports still make use of the Edison-type power source. The Edison battery also continues to be used in mines, in railway signals, in some communications equipment, and as a highly reliable source of standby emergency power.

See also Compressed-air-accumulating power plant; Internal combustion engine; Photoelectric cell; Photovoltaic cell.

Further Reading

Baldwin, Neil. Edison: Inventing the Century. Chicago: University of Chicago Press, 2001.
Boyd, Thomas Alvin. Professional Amateur: The Biography of Charles Franklin Kettering. New York: Arno Press, 1972.
Bryan, Ford R. Beyond the Model T: The Other Ventures of Henry Ford. Rev. ed. Detroit: Wayne State University Press, 1997.
Cramer, Carol. Thomas Edison. San Diego, Calif.: Greenhaven Press, 2001.
Israel, Paul. Edison: A Life of Invention. New York: Wiley, 2000.


Ammonia

The invention: The first successful method for converting nitrogen from the atmosphere and combining it with hydrogen to synthesize ammonia, a valuable compound used as a fertilizer.

The person behind the invention:
Fritz Haber (1868-1934), a German chemist who won the 1918 Nobel Prize in Chemistry

The Need for Nitrogen

The nitrogen content of the soil, essential to plant growth, is maintained normally by the deposition and decay of old vegetation and by nitrates in rainfall. If, however, the soil is used extensively for agricultural purposes, more intensive methods must be used to maintain soil nutrients such as nitrogen. One such method is crop rotation, in which successive divisions of a farm are planted in rotation with clover, corn, or wheat, for example, or allowed to lie fallow for a year or so. The clover is able to absorb nitrogen from the air and deposit it in the soil through its roots. As population has increased, however, farming has become more intensive, and the use of artificial fertilizers—some containing nitrogen—has become almost universal.

Nitrogen-bearing compounds, such as potassium nitrate and ammonium chloride, have been used for many years as artificial fertilizers. Much of the nitrate used, mainly potassium nitrate, came from Chilean saltpeter, of which a yearly amount of half a million tons was imported at the beginning of the twentieth century into Europe and the United States for use in agriculture. Ammonia was produced by dry distillation of bituminous coal and other low-grade fuel materials. Originally, coke ovens discharged this valuable material into the atmosphere, but more economical methods were found later to collect and condense these ammonia-bearing vapors.

At the beginning of the twentieth century, Germany had practically no source of fertilizer-grade nitrogen; almost all of its supply


came from the deserts of northern Chile. As demand for nitrates increased, it became apparent that the supply from these vast deposits would not be enough. Other sources needed to be found, and the almost unlimited supply of nitrogen in the atmosphere (80 percent nitrogen) was an obvious source.

Temperature and Pressure

When Fritz Haber and his coworkers began their experiments on ammonia production in 1904, Haber decided to repeat the experiments of the British chemists Sir William Ramsay and Sydney Young, who in 1884 had studied the decomposition of ammonia at about 800 degrees Celsius. They had found that a certain amount of ammonia was always left undecomposed. In other words, the reaction between ammonia and its constituent elements—nitrogen and hydrogen—had reached a state of equilibrium. Haber decided to determine the point at which this equilibrium took place at temperatures near 1,000 degrees Celsius. He tried several approaches: reacting pure hydrogen with pure nitrogen, and starting with pure ammonia gas and using iron filings as a catalyst. (Catalysts speed up a reaction without being consumed by it.) Having determined the point of equilibrium, he next tried different catalysts and found nickel to be as effective as iron, and calcium and manganese even better. At 1,000 degrees Celsius, the rate of reaction was enough to produce practical amounts of ammonia continuously.

Further work by Haber showed that increasing the pressure also increased the percentage of ammonia at equilibrium. For example, at 300 degrees Celsius, the percentage of ammonia at equilibrium at 1 atmosphere of pressure was very small, but at 200 atmospheres, the percentage of ammonia at equilibrium was far greater. A pilot plant was constructed and was successful enough to impress a chemical company, Badische Anilin-und Soda-Fabrik (BASF). BASF agreed to study Haber's process and to investigate different catalysts on a large scale.
Soon thereafter, the process became a commercial success.
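The equilibrium Haber studied is the reversible synthesis reaction (standard chemistry, added here for clarity; it is not written out in the original):

```latex
\[
\mathrm{N_2(g)} \;+\; 3\,\mathrm{H_2(g)} \;\rightleftharpoons\; 2\,\mathrm{NH_3(g)},
\qquad \Delta H^{\circ} \approx -92\ \mathrm{kJ\,mol^{-1}}
\]
```

Because four volumes of gas combine into two, Le Chatelier's principle predicts that high pressure shifts the equilibrium toward ammonia, just as Haber's 200-atmosphere results showed; and because the reaction is exothermic, lower temperatures favor ammonia but slow the reaction, which is why an effective catalyst was essential.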


(Nobel Foundation)

Fritz Haber

Fritz Haber's career is a warning to inventors: Beware of what you create, even if your intentions are honorable. Considered a leading chemist of his age, Haber was born in Breslau (now Wroclaw, Poland) in 1868. A brilliant student, he earned a doctorate quickly, specializing in organic chemistry, and briefly worked as an industrial chemist. Although he soon took an academic job, throughout his career Haber believed that science must benefit society—new theoretical discoveries must find practical applications.

Beginning in 1904, he applied new chemical techniques to fix atmospheric nitrogen in the form of ammonia. Nitrogen in the form of nitrates was urgently sought because nitrates were necessary to fertilize crops and natural sources were becoming rare. Only artificial nitrates could sustain the amount of agriculture needed to feed expanding populations. In 1908 Haber succeeded in finding an efficient, cheap process to make ammonia and convert it to nitrates, and by 1910 German manufacturers had built large plants to exploit his techniques. He was lauded as a great benefactor to humanity.

However, his efforts to help Germany during World War I, even though he hated war, turned his life into a nightmare. His wife committed suicide because of his chlorine gas research, which also poisoned his international reputation and tainted his 1918 Nobel Prize in Chemistry. After the war he redirected his energies to helping Germany rebuild its economy. Eight years of experiments in extracting gold from seawater ended in failure, but he did raise the Kaiser Wilhelm Institute for Physical Chemistry, which he directed, to international prominence. Nonetheless, Haber had to flee Adolf Hitler's Nazi regime in 1933 and died a year later, better known for his war research than for his fundamental service to agriculture and industry.

Impact

With the beginning of World War I, nitrates were needed more urgently for use in explosives than in agriculture. After the fall of Antwerp, 50,000 tons of Chilean saltpeter were discovered in the


harbor and fell into German hands. Because the ammonia from Haber's process could be converted readily into nitrates, it became an important war resource. Haber's other contribution to the German war effort was his development of poison gas, which was used for the chlorine gas attack on Allied troops at Ypres in 1915. He also directed research on gas masks and other protective devices.

At the end of the war, the 1918 Nobel Prize in Chemistry was awarded to Haber for his development of the process for making synthetic ammonia. Because the war was still fresh in everyone's memory, it became one of the most controversial Nobel awards ever made. A headline in The New York Times for January 26, 1920, stated: "French Attack Swedes for Nobel Prize Award: Chemistry Honor Given to Dr. Haber, Inventor of German Asphyxiating Gas." In a letter to the Times on January 28, 1920, the Swedish legation in Washington, D.C., defended the award.

Haber left Germany in 1933 under duress from the anti-Semitic policies of the Nazi authorities. He was invited to accept a position with the University of Cambridge, England, and died on a trip to Basel, Switzerland, a few months later, a great man whose spirit had been crushed by the actions of an evil regime.

See also Fuel cell; Refrigerant gas; Silicones; Thermal cracking process.

Further Reading

Goran, Morris Herbert. The Story of Fritz Haber. Norman: University of Oklahoma Press, 1967.
Jansen, Sarah. "Chemical-Warfare Techniques for Insect Control: Insect 'Pests' in Germany Before and After World War I." Endeavour 24, no. 1 (March, 2000).
Smil, Vaclav. Enriching the Earth: Fritz Haber, Carl Bosch, and the Transformation of World Food Production. Cambridge, Mass.: MIT Press, 2001.


Amniocentesis

The invention: A technique for removing amniotic fluid from pregnant women, amniocentesis became a life-saving tool for diagnosing fetal maturity, health, and genetic defects.

The people behind the invention:
Douglas Bevis, an English physician
Aubrey Milunsky (1936- ), an American pediatrician

How Babies Grow

For thousands of years, the inability to see or touch a fetus in the uterus was a staggering problem in obstetric care and in the diagnosis of the future mental and physical health of human offspring. A beginning to the solution of this problem occurred on February 23, 1952, when The Lancet published a study called "The Antenatal Prediction of a Hemolytic Disease of the Newborn." This study, carried out by physician Douglas Bevis, described the use of amniocentesis to assess the risk factors found in the fetuses of Rh-negative women impregnated by Rh-positive men. The article is viewed by many as a landmark in medicine that led to the wide use of amniocentesis as a tool for diagnosing fetal maturity, fetal health, and fetal genetic defects.

At the beginning of a human pregnancy (conception), an egg and a sperm unite to produce the fertilized egg that will become a new human being. After conception, the fertilized egg passes from the oviduct into the uterus, while dividing and becoming an organized cluster of cells capable of carrying out different tasks in the nine-month-long series of events leading up to birth. About a week after conception, the cluster of cells, now a "vesicle" (a fluid-filled sac containing the new human cells), attaches to the uterine lining, penetrates it, and becomes intimately intertwined with uterine tissues. In time, the merger between the vesicle and the uterus results in formation of a placenta that connects the mother and the embryo, and an amniotic sac filled with the amniotic fluid in which the embryo floats.


Eight weeks after conception, the embryo (now a fetus) is about 2.5 centimeters long and possesses all the anatomic elements it will have when it is born. At this time, about two and one-half months after her last menstruation, the expectant mother typically visits a physician and finds out she is pregnant. Also at this time, expecting mothers often begin to worry about possible birth defects in the babies they carry.

Physicians extract amniotic fluid directly from the womb and examine it to determine the health of the fetus. (Diagram labels: Amniotic Fluid, Placenta, Amniotic Sac, Uterus.)

Diabetic mothers and mothers older than thirty-five years have higher than usual chances of delivering babies who have birth defects. Many other factors inferred from the medical history an expecting mother provides to her physician can indicate the possible appearance of birth defects. In some cases, knowledge of possible physical problems in a fetus may allow their treatment in the uterus and save the newborn from problems that could persist throughout life or lead to death in early childhood. Information is obtained through the examination of the amniotic fluid in which the fetus is suspended throughout pregnancy. The process of obtaining this fluid is called "amniocentesis."

Diagnosing Diseases Before Birth

Amniocentesis is carried out in several steps. First, the placenta and the fetus are located by the use of ultrasound techniques. Next, the expecting mother may be given a local anesthetic; a long needle is then inserted carefully into the amniotic sac. As soon as amniotic fluid is seen, a small sample (about four teaspoons) is drawn into a hypodermic syringe and the syringe is removed. Amniocentesis is


nearly painless, and most patients feel only a little abdominal pressure during the procedure.

The amniotic fluid of early pregnancy resembles blood serum. As pregnancy continues, its content of substances from fetal urine and other fetal secretions increases. The fluid also contains fetal cells from skin and from the gastrointestinal, reproductive, and respiratory tracts. Therefore, it is of great diagnostic use. Immediately after the fluid is removed from the fetus, the fetal cells are separated out. Then, the cells are used for genetic analysis and the amniotic fluid is examined by means of various biochemical techniques.

One important use of the amniotic fluid from amniocentesis is the determination of its lecithin and sphingomyelin content. Lecithins and sphingomyelins are two types of body lipids (fatty molecules) that are useful diagnostic tools. Lecithins are important because they are essential components of the so-called pulmonary surfactant of mature lungs. The pulmonary surfactant acts at lung surfaces to prevent the collapse of the lung air sacs (alveoli) when a person exhales. Subnormal lecithin production in a fetus indicates that it most likely will exhibit respiratory distress syndrome or a disease called "hyaline membrane disease" after birth. Both diseases can be fatal, so it is valuable to determine whether fetal lecithin levels are adequate for appropriate lung function in the newborn baby.

This is particularly important in fetuses being carried by diabetic mothers, who frequently produce newborns with such problems. Often, when the risk of respiratory distress syndrome is identified through amniocentesis, the fetus in question is injected with hormones that help it produce mature lungs. This effect is then confirmed by the repeated use of amniocentesis. Many other problems can also be identified by the use of amniocentesis and corrected before the baby is born.
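In clinical practice, the lecithin and sphingomyelin measurements described above are usually compared as a ratio (the standard L/S criterion, which the original does not spell out):

```latex
\[
\text{L/S ratio} \;=\; \frac{[\text{lecithin}]}{[\text{sphingomyelin}]},
\qquad \text{L/S} \gtrsim 2 \;\Rightarrow\; \text{fetal lungs usually mature}
\]
```

Sphingomyelin levels stay roughly constant through late pregnancy while lecithin rises sharply as the lungs mature, so using the ratio, rather than lecithin alone, corrects for variation in how concentrated the sampled amniotic fluid happens to be.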
Consequences

In the years that have followed Bevis’s original observation, many improvements in the methodology of amniocentesis and in the techniques used in gathering and analyzing the genetic and biochemical information obtained have led to good results. Hundreds of debilitating hereditary diseases can be diagnosed, and some ameliorated, by


the examination of amniotic fluid and fetal cells isolated by amniocentesis. For many parents who have had a child afflicted by some hereditary disease, the use of the technique has become a major consideration in family planning. Furthermore, many physicians recommend strongly that all mothers over the age of thirty-four be tested by amniocentesis to assist in the diagnosis of Down syndrome, a congenital but nonhereditary form of mental deficiency. There remains the question of whether such solutions are morally appropriate, but parents—and society—now have a choice resulting from the techniques that have developed since Bevis’s 1952 observation. It is also hoped that these techniques will lead to means for correcting and preventing diseases and preclude the need for considering the therapeutic termination of any pregnancy.

See also Abortion pill; Birth control pill; CAT scanner; Electrocardiogram; Electroencephalogram; Mammography; Nuclear magnetic resonance; Pap test; Ultrasound; X-ray image intensifier.

Further Reading

Milunsky, Aubrey. Genetic Disorders and the Fetus: Diagnosis, Prevention, and Treatment. 3d ed. Baltimore: Johns Hopkins University Press, 1992.
Rapp, Rayna. Testing Women, Testing the Fetus: The Social Impact of Amniocentesis in America. New York: Routledge, 1999.
Rothenberg, Karen H., and Elizabeth Jean Thomson. Women and Prenatal Testing: Facing the Challenges of Genetic Technology. Columbus: Ohio State University Press, 1994.
Rothman, Barbara Katz. The Tentative Pregnancy: How Amniocentesis Changes the Experience of Motherhood. New York: Norton, 1993.


Antibacterial drugs

The invention: Sulfonamides and other drugs that have proved effective in combating many previously untreatable bacterial diseases.

The people behind the invention:
Gerhard Domagk (1895-1964), a German physician who was awarded the 1939 Nobel Prize in Physiology or Medicine
Paul Ehrlich (1854-1915), a German chemist and bacteriologist who was the cowinner of the 1908 Nobel Prize in Physiology or Medicine

The Search for Magic Bullets

Although quinine had been used to treat malaria long before the twentieth century, Paul Ehrlich, who discovered a large number of useful drugs, is usually considered the father of modern chemotherapy. Ehrlich was familiar with the technique of using dyes to stain microorganisms in order to make them visible under a microscope, and he suspected that some of these dyes might be used to poison the microorganisms responsible for certain diseases without hurting the patient. Ehrlich thus began to search for dyes that could act as “magic bullets” that would destroy microorganisms and cure diseases. From 1906 to 1910, Ehrlich tested numerous compounds that had been developed by the German dye industry. He eventually found that a number of complex trypan dyes would inhibit the protozoans that caused African sleeping sickness. Ehrlich and his coworkers also synthesized hundreds of organic compounds that contained arsenic. In 1910, he found that one of these compounds, salvarsan, was useful in curing syphilis, a sexually transmitted disease caused by the bacterium Treponema. This was an important discovery, because syphilis killed thousands of people each year. Salvarsan, however, was often toxic to patients, because it had to be taken in large doses for as long as two years to effect a cure. Ehrlich thus searched for and found a less toxic arsenic compound, neosalvarsan, which replaced salvarsan in 1912.


In 1915, tartar emetic (a compound containing the metal antimony) was found to be useful in treating kala-azar, which was caused by a protozoan. Kala-azar affected millions of people in Africa, India, and Asia, causing much suffering and many deaths each year. Two years later, it was discovered that injection of tartar emetic into the blood of persons suffering from bilharziasis killed the flatworms infecting the bladder, liver, and spleen. In 1920, suramin, a colorless compound developed from trypan red, was introduced to treat African sleeping sickness. It was much less toxic to the patient than any of the drugs Ehrlich had developed, and a single dose would give protection for more than a month. From the dye methylene blue, chemists made mepacrine, a drug that was effective against the protozoans that cause malaria. This chemical was introduced in 1933 and used during World War II; its principal drawback was that it could cause a patient’s skin to become yellow.

Well Worth the Effort

Gerhard Domagk had been trained in medicine, but he turned to research in an attempt to discover chemicals that would inhibit or kill microorganisms. In 1927, he became director of experimental pathology and bacteriology at the Elberfeld laboratories of the German chemical firm I. G. Farbenindustrie. Ehrlich’s discovery that trypan dyes selectively poisoned microorganisms suggested to Domagk that he look for antimicrobials in a new group of chemicals known as azo dyes. A number of these dyes were synthesized from sulfonamides and purified by Fritz Mietzsch and Josef Klarer. Domagk found that many of these dyes protected mice infected with the bacterium Streptococcus pyogenes. In 1932, he discovered that one of these dyes was much more effective than any tested previously. This red azo dye containing a sulfonamide was named prontosil rubrum.
From 1932 to 1935, Domagk began a rigorous testing program to determine the effectiveness and dangers of prontosil use at different doses in animals. Since all chemicals injected into animals or humans are potentially dangerous, Domagk determined the doses that harmed or killed. In addition, he worked out the lowest doses that would eliminate the pathogen. The firm supplied samples of the


drug to physicians to carry out clinical trials on humans. (Animal experimentation can give only an indication of which chemicals might be useful in humans and which doses are required.) Domagk thus learned which doses were effective and safe. This knowledge saved his daughter’s life. One day while knitting, Domagk’s daughter punctured her finger with a needle and was infected with virulent bacteria, which quickly multiplied and spread from the wound into neighboring tissues. In an attempt to alleviate the swelling, the infected area was lanced and allowed to drain, but this did not stop the infection from spreading. The child became critically ill with developing septicemia, or blood poisoning. In those days, more than 75 percent of those who acquired blood infections died. Domagk realized that the chances for his daughter’s survival were poor. In desperation, he obtained some of the powdered prontosil that had worked so well on infected animals. He extrapolated from his animal experiments how much to give his daughter so that the bacteria would be killed but his daughter would not be poisoned. Within hours of the first treatment, her fever dropped, and she recovered completely after repeated doses of prontosil.

Impact

Directly and indirectly, Ehrlich’s and Domagk’s work served to usher in a new medical age. Prior to the discovery that prontosil could be used to treat bacterial infection and the subsequent development of a series of sulfonamides, or “sulfa drugs,” there was no chemical defense against this type of disease; as a result, illnesses such as streptococcal infection, gonorrhea, and pneumonia held terrors of which they have largely been shorn. A small injury could easily lead to death. By following the clues presented by the synthetic sulfa drugs and how they worked to destroy bacteria, other scientists were able to develop an even more powerful type of drug, the antibiotic.
When the American bacteriologist René Dubos discovered that natural organisms could also be used to fight bacteria, interest was renewed in an earlier discovery by the Scottish bacteriologist Sir Alexander Fleming: the development of penicillin.


Antibiotics such as penicillin and streptomycin have become some of the most important tools in fighting disease. Antibiotics have replaced sulfa drugs for most uses, in part because they cause fewer side effects, but sulfa drugs are still used for a handful of purposes. Together, sulfonamides and antibiotics have offered the possibility of a cure to millions of people who previously would have had little chance of survival.

See also Penicillin; Polio vaccine (Sabin); Polio vaccine (Salk); Salvarsan; Tuberculosis vaccine; Typhus vaccine; Yellow fever vaccine.

Further Reading

Alstaedter, Rosemarie. From Germanin to Acylureidopenicillin: Research That Made History: Documentation of a Scientific Revolution: Dedicated to Gerhardt Domagk on the Eighty-fifth Anniversary of His Birth. Leverkusen, West Germany: Bayer AG, 1980.
Baumler, Ernst. Paul Ehrlich: Scientist for Life. New York: Holmes and Meier, 1984.
Galdston, Iago. Behind the Sulfa Drugs, a Short History of Chemotherapy. New York: D. Appleton-Century, 1943.
Nobel Lectures, Physiology or Medicine, 1922-1941. River Edge, N.J.: World Scientific, 1999.


Apple II computer

The invention: The first commercially available, preassembled personal computer, the Apple II helped move computers out of the workplace and into the home.

The people behind the invention:
Stephen Wozniak (1950- ), cofounder of Apple and designer of the Apple II computer
Steven Jobs (1955- ), cofounder of Apple
Regis McKenna (1939- ), owner of the Silicon Valley public relations and advertising company that handled the Apple account
Chris Espinosa (1961- ), the high school student who wrote the BASIC program shipped with the Apple II
Randy Wigginton (1960- ), a high school student and Apple software programmer

Inventing the Apple

As late as the 1960’s, not many people in the computer industry believed that a small computer could be useful to the average person. It was through the effort of two friends from the Silicon Valley—the high-technology area between San Francisco and San Jose—that the personal computer revolution was started. Both Steven Jobs and Stephen Wozniak had attended Homestead High School in Los Altos, California, and both developed early interests in technology, especially computers. In 1971, Wozniak built his first computer from spare parts. Shortly after this, he was introduced to Jobs. Jobs had already developed an interest in electronics (he once telephoned William Hewlett, cofounder of Hewlett-Packard, to ask for parts), and he and Wozniak became friends. Their first business together was the construction and sale of “blue boxes,” illegal devices that allowed the user to make long-distance telephone calls for free. After attending college, the two took jobs within the electronics industry. Wozniak began working at Hewlett-Packard, where he


studied calculator design, and Jobs took a job at Atari, the video game company. The friendship paid off again when Wozniak, at Jobs’s request, designed the game “Breakout” for Atari, and the pair was paid seven hundred dollars. In 1975, the Altair computer, a personal computer in kit form, was introduced by Micro Instrumentation and Telemetry Systems (MITS). Shortly thereafter, the first personal computer club, the Homebrew Computer Club, began meeting in Menlo Park, near Stanford University. Wozniak and Jobs began attending the meetings regularly. Wozniak eagerly examined the Altairs that others brought. He thought that the design could be improved. In only a few more weeks, he produced a circuit board and interfaces that connected it to a keyboard and a video monitor. He showed the machine at a Homebrew meeting and distributed photocopies of the design. In this new machine, which he named an “Apple,” Jobs saw a big opportunity. He talked Wozniak into forming a partnership to develop personal computers. Jobs sold his car, and Wozniak sold his two Hewlett-Packard calculators; with the money, they ordered printed circuit boards made. Their break came when Paul Terrell, a retailer, was so impressed that he ordered fifty fully assembled Apples. Within thirty days, the computers were completed, and they sold for a fairly high price: $666.66. During the summer of 1976, Wozniak kept improving the Apple. The new computer would come with a keyboard, an internal power supply, a built-in computer language called the “Beginner’s All-Purpose Symbolic Instruction Code” (BASIC), hookups for adding printers and other devices, and color graphics, all enclosed in a plastic case. The output would be seen on a television screen. The machine would sell for twelve hundred dollars.
Selling the Apple

Regis McKenna was the head of the Regis McKenna Public Relations agency, the best of the public relations firms serving the high-technology industries of the valley, and Jobs wanted it to handle the Apple account. At first, McKenna rejected the offer, but Jobs’s constant pleading finally convinced him. The agency’s first


Steven Jobs

While IBM and other corporations were devoting massive resources and talent to designing a small computer in 1975, Steven Paul Jobs and Stephen Wozniak, members of the tiny Homebrew Computer Club, put together the first truly user-friendly personal computer in Wozniak’s home. Jobs admitted later that “Woz” was the engineering brains. Jobs himself was the brains of design and marketing. Both had to scrape together money for the project from their small salaries as low-level electronics workers. Within eight years, Jobs headed the most progressive company in the new personal computer industry and was worth an estimated $210 million. Little in his background foretold such fast, large material success. Jobs was born in 1955 and became an orphan. Adopted by Paul and Clara Jobs, he grew up in California towns near the area that became known as Silicon Valley. He did not like school much and was considered a loner, albeit one who always had a distinctive way of thinking about things. Still in high school, he impressed William Hewlett, founder of Hewlett-Packard in Palo Alto, and won a summer job at the company, as well as some free equipment for one of his school projects. However, he dropped out of Reed College after one semester and became a hippie. He studied philosophy and Chinese and Indian mysticism. He became a vegetarian and practiced meditation. He even shaved his head and traveled to India on a spiritual pilgrimage. When he returned to America, however, he also returned to his interest in electronics and computers. Through various jobs at his original company, Apple, and elsewhere, he stayed in that field.

contributions to Apple were the colorful striped Apple logo and a color ad in Playboy magazine. In February, 1977, the first Apple Computer office was opened in Cupertino, California. By this time, two of Wozniak’s friends from Homebrew, Randy Wigginton and Chris Espinosa—both high school students—had joined the company. Their specialty was writing software. Espinosa worked through his Christmas vacation so that BASIC (the built-in computer language) could ship with the computer.


The team pushed ahead to complete the new Apple in time to display it at the First West Coast Computer Faire in April, 1977. At this time, the name “Apple II” was chosen for the new model. The Apple II computer debuted at the convention and included many innovations. The “motherboard” was far simpler and more elegantly designed than that of any previous computer, and the ease of connecting the Apple II to a television screen made it that much more attractive to consumers.

Consequences

The introduction of the Apple II computer launched what was to be a wave of new computers aimed at the home and small-business markets. Within a few months of the Apple II’s introduction, Commodore introduced its PET computer and Tandy Corporation/Radio Shack brought out its TRS-80. Apple continued to increase the types of things that its computers could do and worked out a distribution deal with the new ComputerLand chain of stores. In December, 1977, Wozniak began work on creating a floppy disk system for the Apple II. (A floppy disk is a small, flexible plastic disk coated with magnetic material. The magnetized surface enables computer data to be stored on the disk.) The cassette tape storage on which all personal computers then depended was slow and unreliable. Floppy disks, which had been introduced for larger computers by the International Business Machines (IBM) Corporation in 1970, were fast and reliable. As he did with everything that interested him, Wozniak spent almost all of his time learning about and designing a floppy disk drive. When the final drive shipped in June, 1978, it made possible development of more powerful software for the computer. By 1980, Apple had sold 130,000 Apple II’s. That year, the company went public, and Jobs and Wozniak, among others, became wealthy. Three years later, Apple became the youngest company to make the Fortune 500 list of the largest industrial companies.
By then, IBM had entered the personal computer field and had begun to dominate it, but the Apple II’s earlier success ensured that personal computers would not be a market fad. By the end of the 1980’s, 35 million personal computers would be in use.


See also BINAC computer; Colossus computer; ENIAC computer; Floppy disk; Hard disk; IBM Model 1401 computer; Personal computer; UNIVAC computer.

Further Reading

Carlton, Jim. Apple: The Inside Story of Intrigue, Egomania, and Business Blunders. Rev. ed. London: Random House, 1999.
Gold, Rebecca. Steve Wozniak: A Wizard Called Woz. Minneapolis: Lerner, 1994.
Linzmayer, Owen W. Apple Confidential: The Real Story of Apple Computer, Inc. San Francisco: No Starch Press, 1999.
Moritz, Michael. The Little Kingdom: The Private Story of Apple Computer. New York: Morrow, 1984.
Rose, Frank. West of Eden: The End of Innocence at Apple Computer. New York: Viking, 1989.


Aqualung

The invention: A device that allows divers to descend hundreds of meters below the surface of the ocean by enabling them to carry the oxygen they breathe with them.

The people behind the invention:
Jacques-Yves Cousteau (1910-1997), a French navy officer, undersea explorer, inventor, and author
Émile Gagnan, a French engineer who invented an automatic air-regulating device

The Limitations of Early Diving

Undersea dives have been made since ancient times for the purposes of spying, recovering lost treasures from wrecks, and obtaining natural treasures (such as pearls). Many attempts have been made since then to prolong the amount of time divers could remain underwater. The first device, described by the Greek philosopher Aristotle in 335 b.c.e., was probably the ancestor of the modern snorkel. It was a bent reed placed in the mouth, with one end above the water. In addition to depth limitations set by the length of the reed, pressure considerations also presented a problem. The pressure on a diver’s body increases by about one-tenth of an atmosphere (roughly 1.5 pounds per square inch) for every meter ventured below the surface. After descending only about 0.9 meter, a diver finds it difficult to inhale surface air through a snorkel because the human chest muscles are no longer strong enough to inflate the chest. In order to breathe at or below this depth, a diver must breathe air that has been pressurized; moreover, that pressure must be able to vary as the diver descends or ascends. Few changes were possible in the technology of diving until air compressors were invented during the early nineteenth century. Fresh, pressurized air could then be supplied to divers. At first, the divers who used this method had to wear diving suits, complete with fishbowl-like helmets. This “tethered” diving made divers relatively immobile but allowed them to search for sunken treasure or do other complex jobs at great depths.
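The pressure arithmetic behind the snorkel's depth limit can be sketched directly from the hydrostatic relation P = P_atm + ρgh, using standard values for seawater density and gravity (the constants below are textbook physical values, not figures from this article):

```python
ATM_PA = 101_325.0       # one standard atmosphere, in pascals
RHO_SEAWATER = 1_025.0   # typical seawater density, kg per cubic meter
G = 9.81                 # gravitational acceleration, m per second squared

def ambient_pressure_atm(depth_m):
    """Total (atmospheric plus hydrostatic) pressure at depth, in atmospheres."""
    return 1.0 + (RHO_SEAWATER * G * depth_m) / ATM_PA

# At 0.9 m, the water column already adds roughly 0.09 atm of pressure on
# the chest, which is more than unassisted chest muscles can overcome when
# trying to inhale unpressurized surface air through a tube.
print(round(ambient_pressure_atm(0.9), 2))   # ~1.09
print(round(ambient_pressure_atm(10.0), 2))  # ~1.99
```

The second figure shows the familiar rule of thumb that ambient pressure roughly doubles in the first 10 meters of seawater, which is why breathing gas must be delivered at ambient pressure.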


The Development of Scuba Diving

The invention of scuba gear gave divers more freedom to move about and made them less dependent on heavy equipment. (“Scuba” stands for self-contained underwater breathing apparatus.) Its development occurred in several stages. In 1880, Henry Fleuss of England developed an outfit that used a belt containing pure oxygen. Belt and diver were connected, and the diver breathed the oxygen over and over. A version of this system was used by the U.S. Navy in World War II spying efforts. Nevertheless, it had serious drawbacks: Pure oxygen was toxic to divers at depths greater than 9 meters, and divers could carry only enough oxygen for relatively short dives. It did have an advantage for spies, namely, that the oxygen—breathed over and over in a closed system—did not reach the surface in the form of telltale bubbles. The next stage of scuba development occurred with the design of metal tanks that were able to hold highly compressed air. This enabled divers to use air rather than the potentially toxic pure oxygen. More important, being hooked up to a greater supply of air meant that divers could stay under water longer. Initially, the main problem with the system was that the air flowed continuously through a mask that covered the diver’s entire face. This process wasted air, and the scuba divers expelled a continual stream of air bubbles that made spying difficult. The solution, according to Axel Madsen’s Cousteau (1986), was “a valve that would allow inhaling and exhaling through the same mouthpiece.” Jacques-Yves Cousteau’s father was an executive for Air Liquide—France’s main producer of industrial gases. He was able to direct Cousteau to Émile Gagnan, an engineer at the company’s Paris laboratory who had been developing an automatic gas shutoff valve for Air Liquide. This valve became the Cousteau-Gagnan regulator, a breathing device that fed air to the diver at just the right pressure whenever he or she inhaled.
With this valve—and funding from Air Liquide—Cousteau and Gagnan set out to design what would become the Aqualung. The first Aqualungs could be used at depths of up to 68.5 meters. During testing, however, the dangers of Aqualung diving became apparent. For example, unless divers ascended and descended in slow stages,


Jacques-Yves Cousteau

The son of a businessman who liked to travel, Jacques-Yves Cousteau acquired the same wanderlust. Born in 1910 in Saint-André-de-Cubzac, France, he was a sickly child, but he learned to love swimming and the ocean. He also took an interest in movies, producing his first film when he was thirteen. Cousteau graduated from France’s naval academy, but his career as an officer ended with a nearly fatal car accident in 1936. He went to Toulon, where he returned to his interests in the sea and photography, a period that culminated in his invention of the aqualung with Émile Gagnan in 1944. During World War II he also won a Légion d’honneur for his photographic espionage. The French Navy established the Underwater Research Group for Cousteau in 1944, and after the war the venture evolved into the freewheeling, worldwide voyages that Cousteau became famous for. Aboard the Calypso, a converted U.S. minesweeper, he and his crew conducted research and pioneered underwater photography. His 1957 documentary The Silent World (based on a 1953 book) won an Oscar and the Palme d’Or of the Cannes film festival. Subsequent movies and The Undersea World of Jacques Cousteau, a television series, established Cousteau as a leading environmentalist and science educator. His Cousteau Society, dedicated to exploring and protecting the oceans, attracted millions of members worldwide. Through it he launched another innovative technology, “Turbosails,” towering non-rotating cylinders that act as sails to reduce ships’ dependency on oil-fueled engines. A new ship propelled by them, the Alcyone, eventually replaced the Calypso. Cousteau inspired legions of oceanographers and environmentalists while calling attention to pressing problems in the world’s oceans. Although his later years were marked by family tragedies and controversy, he was revered throughout the world and had received many honors when he died in 1997.

it was likely that they would get “the bends” (decompression sickness), the feared disease of earlier, tethered deep-sea divers. Another problem was that, below 42.6 meters, divers encountered nitrogen narcosis. (This can lead to impaired judgment that may cause


fatal actions, including removing a mouthpiece or developing an overpowering desire to continue diving downward, to dangerous depths.) Cousteau believed that the Aqualung had tremendous military potential. During World War II, he traveled to London soon after the Normandy invasion, hoping to persuade the Allied Powers of its usefulness. He was not successful. So Cousteau returned to Paris and convinced France’s new government to use Aqualungs to locate and neutralize underwater mines laid along the French coast by the German navy. Cousteau was commissioned to combine minesweeping with the study of the physiology of scuba diving. Further research revealed that the use of helium-oxygen mixtures increased to 76 meters the depth to which a scuba diver could go without suffering nitrogen narcosis.

Impact

One way to describe the effects of the development of the Aqualung is to summarize Cousteau’s continued efforts to the present. In 1946, he and Philippe Tailliez established the Undersea Research Group of Toulon to study diving techniques and various aspects of life in the oceans. They studied marine life in the Red Sea from 1951 to 1952. From 1952 to 1956, they engaged in an expedition supported by the National Geographic Society. By that time, the Research Group had developed many techniques that enabled them to identify life-forms and conditions at great depths. Throughout their undersea studies, Cousteau and his coworkers continued to develop better techniques for scuba diving, for recording observations by means of still and television photography, and for collecting plant and animal specimens. In addition, Cousteau participated (with Swiss physicist Auguste Piccard) in the construction of the deep-submergence research vehicle, or bathyscaphe. In the 1960’s, he directed a program called Conshelf, which tested a human’s ability to live in a specially built underwater habitat.
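The nitrogen narcosis depth limits discussed above follow from nitrogen's partial pressure: air is roughly 79 percent nitrogen, and that fraction of the rising ambient pressure is what the diver's tissues absorb. A hedged sketch of the arithmetic, using the standard one-atmosphere-per-10-meters approximation (the 42.6-meter threshold is the article's figure; the constants are general approximations):

```python
N2_FRACTION_AIR = 0.79  # approximate nitrogen fraction of air

def ambient_atm(depth_m):
    """Ambient pressure in atmospheres, using the ~1 atm per 10 m seawater rule."""
    return 1.0 + depth_m / 10.0

def nitrogen_partial_pressure(depth_m, n2_fraction=N2_FRACTION_AIR):
    """Partial pressure of nitrogen (in atm) that a diver breathing this mixture
    experiences at the given depth."""
    return n2_fraction * ambient_atm(depth_m)

# At the article's 42.6 m narcosis threshold, a diver breathing air takes in
# nitrogen at roughly 4.2 atm; substituting helium for nitrogen (heliox)
# removes that nitrogen load, which is why the mixture extended safe depths.
print(round(nitrogen_partial_pressure(42.6), 2))
```

The same calculation with `n2_fraction=0.0` models a pure helium-oxygen mixture, illustrating why Cousteau's later research found heliox dives free of narcosis at depths where air was dangerous.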
He also wrote and produced films on underwater exploration that attracted, entertained, and educated millions of people. Cousteau has won numerous medals and scientific distinctions. These include the Gold Medal of the National Geographic Society


(1963), the United Nations International Environment Prize (1977), membership in the American and Indian academies of science (1968 and 1978, respectively), and honorary doctor of science degrees from the University of California, Berkeley (1970), Harvard University (1979), and Rensselaer Polytechnic Institute (1979).

See also Bathyscaphe; Bathysphere.

Further Reading

Cousteau, Jacques-Yves. The Silent World. New York: Harper & Brothers, 1952.
_____. “Lord of the Depths.” Time 153, no. 12 (March 29, 1999).
_____, and James Dugan. The Living Sea. London: Elm Tree, 1988.
Madsen, Axel. Cousteau: An Unauthorized Biography. New York: Beaufort Books, 1986.
Munson, Richard. Cousteau: The Captain and His World. New York: Paragon House, 1991.
Zanelli, Leo, and George T. Skuse. Sub-Aqua Illustrated Dictionary. New York: Oxford University Press, 1976.


Artificial blood

The invention: A perfluorocarbon emulsion that serves as a blood plasma substitute in the treatment of human patients.

The person behind the invention:
Ryoichi Naito (1906-1982), a Japanese physician

Blood Substitutes

The use of blood and blood products in humans is a very complicated issue. Substances present in blood can be dangerous or even deadly, especially when blood or blood products are taken from one person and given to another. This fact, combined with the necessity for long-term blood storage, a shortage of donors, and some patients’ refusal to use blood for religious reasons, brought about an intense search for a universal bloodlike substance. The life-sustaining properties of blood (for example, oxygen transport) can be entirely replaced by a synthetic mixture of known chemicals. Fluorocarbons are compounds that consist of molecules containing only fluorine and carbon atoms. These compounds are interesting to physiologists because they are chemically and pharmacologically inert and because they dissolve oxygen and other gases. Studies of fluorocarbons as blood substitutes began in 1966, when it was shown that a mouse breathing a fluorocarbon liquid treated with oxygen could survive. Subsequent research involved the use of fluorocarbons to play the role of red blood cells in transporting oxygen. Encouraging results led to the total replacement of blood in a rat, and the success of this experiment led in turn to trials in other mammals, culminating in 1979 with the use of fluorocarbons in humans.

Clinical Studies

The chemical selected for the clinical studies was Fluosol-DA, produced by the Japanese Green Cross Corporation. Fluosol-DA


consists of a 20 percent emulsion of two perfluorocarbons (perfluorodecalin and perfluorotripropylamine), emulsifiers, and salts that are included to give the chemical some of the properties of blood plasma. Fluosol-DA had been tested in monkeys, and it had shown a rapid reversible uptake and release of oxygen, a reasonably rapid excretion, no carcinogenicity or irreversible changes in the animals’ systems, and the recovery of blood components to normal ranges within three weeks of administration. The clinical studies were divided into three phases. The first phase consisted of the administration of Fluosol-DA to normal human volunteers. Twelve healthy volunteers were administered the chemical, and the emulsion’s effects on blood pressure and composition and on heart, liver, and kidney functions were monitored. No adverse effects were found in any case. The first phase ended in March, 1979, and based on its positive results, the second and third phases were begun in April, 1979. Twenty-four Japanese medical institutions were involved in the next two phases. The reasons for the use of Fluosol-DA instead of blood in the patients involved were various, and they included refusal of transfusion for religious reasons, lack of compatible blood, “bloodless” surgery for protection from risk of hepatitis, and treatment of carbon monoxide intoxication. Among the effects noticed by the patients were the following: a small increase in blood pressure, with no corresponding effects on respiration and body temperature; an increase in blood oxygen content; bodily elimination of half the chemical within six to nineteen hours, depending on the initial dose administered; no change in red-cell count or hemoglobin content of blood; no change in whole-blood coagulation time; and no significant blood-chemistry changes. These results made the clinical trials a success and opened the door for other, more extensive ones.
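The elimination figure reported above (half the chemical cleared within six to nineteen hours) describes roughly half-life-style clearance, so the fraction of a dose remaining at any time can be sketched with a simple exponential model. This is an illustrative assumption of first-order elimination, not a claim from the study; the specific half-life values below are merely points within the six-to-nineteen-hour range the text reports:

```python
def fraction_remaining(hours, half_life_hours):
    """Fraction of an administered dose remaining after `hours`, assuming
    simple exponential (first-order) elimination with the given half-life."""
    return 0.5 ** (hours / half_life_hours)

# With an assumed 6-hour half-life, one quarter of the dose remains at 12 hours;
# with an assumed 19-hour half-life, half remains at 19 hours.
print(fraction_remaining(12, 6))   # 0.25
print(fraction_remaining(19, 19))  # 0.5
```

The dose dependence the trials observed means a single fixed half-life would not fit all patients; the model only illustrates the shape of the clearance curve.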
Impact

Perfluorocarbon emulsions were initially proposed as oxygen-carrying resuscitation fluids, or blood substitutes, and the results of the pioneering studies show their success as such. Their success in this area, however, led to advanced studies and expanded use of


these compounds in many areas of clinical medicine and biomedical research. Perfluorocarbon emulsions are useful in cancer therapy, because they increase the oxygenation of tumor cells and therefore sensitize them to the effects of radiation or chemotherapy. Perfluorocarbons can also be used as “contrasting agents” to facilitate magnetic resonance imaging studies of various tissues; for example, the uptake of particles of the emulsion by the cells of malignant tissues makes it possible to locate tumors. Perfluorocarbons also have a high nitrogen solubility and therefore can be used to alleviate the potentially fatal effects of decompression sickness by “mopping up” nitrogen gas bubbles from the circulation system. They can also be used to preserve isolated organs and amputated extremities until they can be reimplanted or reattached. In addition, the emulsions are used in cell cultures to regulate gas supply and to improve cell growth and productivity. The biomedical applications of perfluorocarbon emulsions are multidisciplinary, involving areas as diverse as tissue imaging, organ preservation, cancer therapy, and cell culture. The successful clinical trials opened the door for new applications of these compounds, which rank among the most versatile compounds exploited by humankind.

See also Artificial heart; Artificial hormone; Artificial kidney; Blood transfusion; Coronary artery bypass surgery; Electrocardiogram; Heart-lung machine.

Further Reading

“Artificial Blood Product May Debut in Two Years.” Health Care Strategic Management 18, no. 8 (August, 2000).
“The Business of Blood: Ryoichi Naito and Fluosol-DA Artificial Blood.” Forbes 131 (January 17, 1983).
Glanz, James. “Pulse Quickens in Search for Blood Substitute.” Research & Development 34, no. 10 (September, 1992).
Tsuchida, E. Artificial Red Cells: Materials, Performances, and Clinical Study as Blood Substitutes. New York: Wiley, 1997.


Artificial chromosome

The invention: Originally developed for use in the study of natural chromosome behavior, the artificial chromosome proved to be a valuable tool for recombinant DNA technology.

The people behind the invention:
Jack W. Szostak (1952- ), a British-born Canadian professor at Harvard Medical School
Andrew W. Murray (1956- ), a graduate student

The Value of Artificial Chromosomes

The artificial chromosome gives biologists insight into the fundamental mechanisms by which cells replicate and plays an important role as a tool in genetic engineering technology. Soon after its invention in 1983 by Andrew W. Murray and Jack W. Szostak, the artificial chromosome was judged by scientists to be important, and its value in the field of medicine was exploited.

Chromosomes are essentially carriers of genetic information; that is, they possess the genetic code that is the blueprint for life. In higher organisms, the number and type of chromosomes that a cell contains in its nucleus are characteristic of the species. For example, each human cell has forty-six chromosomes, while the garden pea has fourteen and the guinea pig has sixty-four. The chromosome’s job in a dividing cell is to replicate and then distribute one copy of itself into each new “daughter” cell. This process, which is referred to as “mitosis” or “meiosis,” depending upon the actual mechanism by which it occurs, is of supreme importance to the continuation of life.

In 1953, when biophysicists James D. Watson and Francis Crick discovered the structure of deoxyribonucleic acid (DNA), an achievement for which they won the 1962 Nobel Prize in Physiology or Medicine, it was immediately apparent to them how the double-helical form of DNA (which looks something like a twisted ladder) might explain the mechanism behind cell division. During DNA replication, the chromosome unwinds to expose the thin threads of DNA. The two strands of the double helix separate, and each acts as a template for the formation of a new complementary strand, thus forming two complete and identical chromosomes that can be distributed to each new cell. This distribution process, which is referred to as “segregation,” relies on the chromosomes being pulled along a microtubule framework in the cell called the “mitotic spindle.”

Creating Artificial Chromosomes

An artificial chromosome is a laboratory-designed chromosome that possesses only those functional elements its creators choose. In order to be a true working chromosome, however, it must, at minimum, maintain the machinery necessary for replication and segregation.

By the early 1980’s, Murray and Szostak had recognized the possible advantages of using a simple, controlled model to study chromosome behavior, since there are several difficulties associated with studying chromosomes in their natural state. Because natural chromosomes are large and have poorly defined structures, it is almost impossible to sift out for study those elements that are essential for replication and segregation. Previous methods of altering a natural chromosome and observing the effects were difficult to use because the cells containing the altered chromosome usually died. Furthermore, even if the cell survived, analysis was complicated by the extensive amount of genetic information carried by the chromosome.

Artificial chromosomes, by contrast, are simple and have known components, although the functions of those components may be poorly understood. In addition, since artificial chromosomes are extra chromosomes that are carried by the cell, their alteration does not kill the cell. Prior to the synthesis of the first artificial chromosome, the essential functional chromosomal elements of replication and segregation had to be identified and harvested.
One of the three chromosomal elements thought to be required is the origin of replication, the site at which the synthesis of new DNA begins. The relatively weak interaction between DNA strands at this site facilitates their separation, making possible—with the help of appropriate enzymes— the subsequent replication of the strands into “sister chromatids.”


The second essential element is the “centromere,” a thinner segment of the chromosome that serves as the attachment site for the mitotic spindle. Sister chromatids are pulled to opposite ends of the dividing cell by the spindle apparatus, thus forming two identical daughter cells. The final functional elements are repetitive sequences of DNA called “telomeres,” which are located at both ends of the chromosome. The telomeres are needed to protect the terminal genes from degradation.

With all the functional elements at their disposal, Murray and Szostak proceeded to construct their first artificial chromosome. Once made, this chromosome would be inserted into yeast cells to replicate, since yeast cells are relatively simple and well characterized but otherwise resemble cells of higher organisms. Construction begins with a commonly used “bacterial plasmid,” a small, circular, autonomously replicating section of DNA. Enzymes are then called upon to create a gap in this “cloning vector” into which the three chromosomal elements are spliced. In addition, genes that confer some distinct trait, such as color, to yeast cells are also inserted, thus making it possible to determine which cells have actually taken up the new chromosome. Although their first attempt resulted in a chromosome that failed to segregate properly, by September, 1983, Murray and Szostak had announced in the prestigious British journal Nature their success in creating the first artificial chromosome.

Consequences

One of the most exciting aspects of the artificial chromosome is its application to recombinant DNA technology, which involves creating novel genetic materials by combining segments of DNA from various sources. For example, the artificial yeast chromosome can be used as a cloning vector. In this process, a segment of DNA containing some desired gene is inserted into an artificial chromosome and is then allowed to replicate in yeast until large amounts of the gene are produced. David T. Burke, Georges F. Carle, and Maynard Victor Olson at Washington University in St. Louis have pioneered the technique of combining human genes with artificial yeast chromosomes and have succeeded in cloning large segments of human DNA.


Although amplifying DNA in this manner had been done before, using bacterial plasmids as cloning vectors, the artificial yeast chromosome has the advantage of being able to hold much larger segments of DNA, thus allowing scientists to clone very large genes. This is of great importance, since the genes that cause diseases such as hemophilia and Duchenne muscular dystrophy are enormous. The most ambitious project for which the artificial yeast chromosome is being used is the national Human Genome Project, whose intent is to clone the entire human genome.

See also Artificial blood; Artificial hormone; Genetic “fingerprinting”; Genetically engineered insulin; In vitro plant culture; Synthetic DNA; Synthetic RNA.

Further Reading

“Evolving RNA with Enzyme-Like Action.” Science News 144 (August 14, 1993).

Freedman, David H. “Playing God: The Handmade Cell.” Discover 13, no. 8 (August, 1992).

Varshavsky, Alexander. “The 2000 Genetics Society of America Medal: Jack W. Szostak.” Genetics 157, no. 2 (February, 2001).


Artificial heart

The invention: The first successful artificial heart, the Jarvik-7, has helped to keep patients suffering from otherwise terminal heart disease alive while they await human heart transplants.

The people behind the invention:
Robert Jarvik (1946- ), the main inventor of the Jarvik-7
William Castle DeVries (1943- ), a surgeon at the University of Utah in Salt Lake City
Barney Clark (1921-1983), a Seattle dentist, the first recipient of the Jarvik-7

Early Success

The Jarvik-7 artificial heart was designed and produced by researchers at the University of Utah in Salt Lake City; it is named for the leader of the research team, Robert Jarvik. An air-driven pump made of plastic and titanium, it is the size of a human heart. It is made up of two hollow chambers of polyurethane and aluminum, each containing a flexible plastic membrane. The heart is implanted in a human being but must remain connected to an external air pump by means of two plastic hoses. The hoses carry compressed air to the heart, which then pumps the oxygenated blood through the pulmonary artery to the lungs and through the aorta to the rest of the body. The device is expensive, and initially the large, clumsy air compressor had to be wheeled from room to room along with the patient.

The device was new in 1982, and that same year Barney Clark, a dentist from Seattle, was diagnosed as having only hours to live. His doctor, cardiac specialist William Castle DeVries, proposed surgically implanting the Jarvik-7 heart, and Clark and his wife agreed. The Food and Drug Administration (FDA), which regulates the use of medical devices, had already given DeVries and his coworkers permission to implant up to seven Jarvik-7 hearts for permanent use. The operation was performed on Clark, and at first it seemed quite successful. Newspapers, radio, and television reported this medical breakthrough: the first time a severely damaged heart had been replaced by a totally artificial heart. It seemed DeVries had proved that an artificial heart could be almost as good as a human heart.

William C. DeVries

William Castle DeVries did not invent the artificial heart himself; however, he did develop the procedure to implant it. The first attempt took him seven and a half hours, and he needed fourteen assistants. A success, the surgery made DeVries one of the most talked-about doctors in the world.

DeVries was born in Brooklyn, New York, in 1943. His father, a Navy physician, was killed in action a few months later, and his mother, a nurse, moved with her son to Utah. As a child DeVries showed both considerable mechanical aptitude and athletic prowess. He won an athletic scholarship to the University of Utah, graduating with honors in 1966. He entered the state medical school and there met Willem Kolff, a pioneer in designing and testing artificial organs. Under Kolff’s guidance, DeVries began performing experimental surgeries on animals to test prototype mechanical hearts. He finished medical school in 1970 and from 1971 until 1979 was an intern and then a resident in surgery at the Duke University Medical Center in North Carolina.

DeVries returned to the University of Utah as an assistant professor of cardiovascular and thoracic surgery. In the meantime, Robert K. Jarvik had devised the Jarvik-7 artificial heart. DeVries experimented, implanting it in animals and cadavers until, following approval from the Food and Drug Administration, Barney Clark agreed to be the first test patient. He died 115 days after the surgery, having never left the hospital. Although controversy arose over the ethics and cost of the procedure, more artificial heart implantations followed, many by DeVries. Long administrative delays getting patients approved for surgery at Utah frustrated DeVries, so he moved to Humana Hospital-Audubon in Louisville, Kentucky, in 1984 and then took a professorship at the University of Louisville. In 1988 he left experimentation for a traditional clinical practice. The FDA withdrew its approval for the Jarvik-7 in 1990.

In 1999 DeVries retired from practice, but not from medicine. The next year he joined the Army Reserve and began teaching surgery at the Walter Reed Army Medical Center.

Soon after Clark’s surgery, DeVries went on to implant the device in several other patients with serious heart disease. For a time, all of them survived the surgery. As a result, DeVries was offered a position at Humana Hospital in Louisville, Kentucky. Humana offered to pay for the first one hundred implant operations.

The Controversy Begins

In the three years after DeVries’s operation on Barney Clark, however, doubts and criticism arose. Of the people who by then had received the plastic and metal device as a permanent replacement for their own diseased hearts, three had died (including Clark) and four had suffered serious strokes. The FDA asked Humana Hospital and Symbion (the company that manufactured the Jarvik-7) for complete, detailed histories of the artificial-heart recipients. It was determined that each of the patients who had died or been disabled had suffered from infection. Life-threatening infection, or “foreign-body response,” is a danger with the use of any artificial organ. The Jarvik-7, with its metal valves, plastic body, and Velcro attachments, seemed to draw bacteria like a magnet—and these bacteria proved resistant to even the most powerful antibiotics. By 1988, researchers had come to realize that severe infection was almost inevitable if a patient used the Jarvik-7 for a long period of time. As a result, experts recommended that the device be used for no longer than thirty days.

Questions of values and morality also became part of the controversy surrounding the artificial heart. Some people thought that it was wrong to offer patients a device that would extend their lives but leave them burdened with hardship and pain. At times DeVries claimed that it was worth the price for patients to be able to live another year; at other times, he admitted that if he thought a patient would have to spend the rest of his or her life in a hospital, he would think twice before performing the implant.
There were also questions about “informed consent”—the patient’s understanding that a medical procedure has a high risk of failure and may leave the patient in misery even if it succeeds. Getting truly informed consent from a dying patient is tricky because, understandably, the patient is probably willing to try anything. The Jarvik-7 raised several questions in this regard: Was the ordeal worth the risk? Was the patient’s suffering justifiable? Who should make the decision for or against the surgery: the patient, the researchers, or a government agency?

There was also the issue of cost. Should money be poured into expensive, high-technology devices such as the Jarvik heart, or should it be reserved for programs to help prevent heart disease in the first place? Expenses for each of DeVries’s patients had amounted to about one million dollars.

Humana’s and DeVries’s earnings were criticized in particular. Once the first one hundred free Jarvik-7 implantations had been performed, Humana Hospital could expect to make large amounts of money on the surgery. By that time, Humana would have so much expertise in the field that, though the surgical techniques could not be patented, it was expected to have a practical monopoly. DeVries himself owned thousands of shares of stock in Symbion. Many people wondered whether this was ethical.

Consequences

Given all the controversies, in December of 1985 a panel of experts recommended that the FDA allow the experiment to continue, but only with careful monitoring. Meanwhile, cardiac transplantation was becoming easier and more common. By the end of 1985, almost twenty-six hundred patients in various countries had received human heart transplants, and 76 percent of these patients had survived for at least four years. When the demand for donor hearts exceeded the supply, physicians turned to the Jarvik device and other artificial hearts to help see patients through the waiting period.

Experience with the Jarvik-7 made the world keenly aware of how far medical science still is from making the implantable permanent mechanical heart a reality. Nevertheless, the device was a breakthrough in the relatively new field of artificial organs. Since then, other artificial body parts have included heart valves, blood vessels, and inner ears that help restore hearing to the deaf.
See also Artificial blood; Artificial kidney; Blood transfusion; Coronary artery bypass surgery; Electrocardiogram; Heart-lung machine; Pacemaker; Velcro.


Further Reading

Fox, Renee C., and Judith P. Swazey. Spare Parts: Organ Replacement in American Society. New York: Oxford University Press, 1992.

Kunin, Calvin M., Joanne J. Debbins, and Julio C. Melo. “Infectious Complications in Four Long-Term Recipients of the Jarvik-7 Artificial Heart.” JAMA 259 (February 12, 1988).

Kunzig, Robert. “The Beat Goes On.” Discover 21, no. 1 (January, 2000).

Lawrie, Gerald M. “Permanent Implantation of the Jarvik-7 Total Artificial Heart: A Clinical Perspective.” JAMA 259 (February 12, 1988).


Artificial hormone

The invention: Synthesized oxytocin, a small polypeptide hormone from the pituitary gland that has shown how complex polypeptides and proteins may be synthesized and used in medicine.

The people behind the invention:
Vincent du Vigneaud (1901-1978), an American biochemist and winner of the 1955 Nobel Prize in Chemistry
Oliver Kamm (1888-1965), an American biochemist
Sir Edward Albert Sharpey-Schafer (1850-1935), an English physiologist
Sir Henry Hallett Dale (1875-1968), an English physiologist and winner of the 1936 Nobel Prize in Physiology or Medicine
John Jacob Abel (1857-1938), an American pharmacologist and biochemist

Body-Function Special Effects

In England in 1895, physician George Oliver and physiologist Edward Albert Sharpey-Schafer reported that a hormonal extract from the pituitary gland of a cow produced a rise in blood pressure (a pressor effect) when it was injected into animals. In 1901, Rudolph Magnus and Sharpey-Schafer discovered that extracts from the pituitary also could restrict the flow of urine (an antidiuretic effect). This observation was related to the fact that when a certain section of the pituitary was removed surgically from an animal, the animal excreted an abnormally large amount of urine.

In addition to the pressor and antidiuretic activities in the pituitary, two other effects were found in 1909. Sir Henry Hallett Dale, an English physiologist, was able to show that the extracts could cause the uterine muscle to contract (an oxytocic effect), and Isaac Ott and John C. Scott found that when lactating (milk-producing) animals were injected with the extracts, milk was released from the mammary gland.

Following the discovery of these various effects, attempts were made to concentrate and isolate the substance or substances that were responsible. John Jacob Abel was able to concentrate the pressor activity at The Johns Hopkins University using heavy metal salts and extraction with organic solvents. The results of the early work, however, were varied. Some investigators came to the conclusion that only one substance was responsible for all the activities, while others concluded that two or more substances were likely to be involved.

In 1928, Oliver Kamm and his coworkers at the drug firm of Parke, Davis and Company in Detroit reported a method for the separation of the four activities into two chemical fractions with high potency. One portion contained most of the pressor and antidiuretic activities, while the other contained the uterine-contracting and milk-releasing activities. Over the years, several names have been used for the two substances responsible for the effects. The generic name “vasopressin” generally has become the accepted term for the substance causing the pressor and antidiuretic effects, while the name “oxytocin” has been used for the other two effects. The two fractions that Kamm and his group had prepared were pure enough for the pharmaceutical firm to make them available for medical research related to obstetrics, surgical shock, and diabetes insipidus.

A Complicated Synthesis

The problem of these hormones and their nature interested Vincent du Vigneaud at the George Washington University School of Medicine. Working with Kamm, he was able to show that the sulfur content of both the oxytocin and the vasopressin fractions was a result of the amino acid cystine. This helped to strengthen the concept that these hormones were polypeptide, or proteinlike, substances. Du Vigneaud and his coworkers next tried to find a way of purifying oxytocin and vasopressin. This required not only the separation of the hormones themselves but also their separation from other impurities present in the preparations.
During World War II (1939-1945) and shortly thereafter, other techniques were developed that would give du Vigneaud the tools he needed to complete the job of purifying and characterizing the two hormonal factors. One of the most important was the countercurrent distribution method of chemist Lyman C. Craig at the Rockefeller Institute. Craig had developed an apparatus that could do multiple extractions, making possible separations of substances with similar properties. Du Vigneaud had used this technique in purifying his synthetic penicillin, and when he returned to the study of oxytocin and vasopressin in 1946, he used it on his purest preparations. The procedure worked well, and milligram quantities of pure oxytocin were available in 1949 for chemical characterization.

Using the available techniques, du Vigneaud and his coworkers were able to determine the structure of oxytocin. It was du Vigneaud’s goal to make synthetic oxytocin by duplicating the structure his group had worked out. Eventually, du Vigneaud’s synthetic oxytocin was obtained, and the method was published in the Journal of the American Chemical Society in 1953. Du Vigneaud’s oxytocin was next tested against naturally occurring oxytocin, and the two forms were found to act identically in every respect. In the final test, the synthetic form was found to induce labor when given intravenously to women about to give birth. Also, when microgram quantities of oxytocin were given intravenously to women who had recently given birth, milk was released from the mammary gland in less than a minute.

Consequences

The work of du Vigneaud and his associates demonstrated for the first time that it was possible to synthesize peptides that have properties identical to those of the natural ones and that these can be useful in certain medical conditions. Oxytocin has been used in the last stages of labor during childbirth. Vasopressin has been used in the treatment of diabetes insipidus, when an individual has an insufficiency of the natural hormone, much as insulin is used by persons having diabetes mellitus. After receiving the Nobel Prize in Chemistry in 1955, du Vigneaud continued his work on synthesizing chemical variations of the two hormones.
By making peptides that differed from oxytocin and vasopressin by one or more amino acids, it was possible to study how the structure of the peptide was related to its physiological activity.


After the structures of insulin and some of the smaller proteins were determined, they, too, were synthesized, although with greater difficulty. Other methods of carrying out the synthesis of peptides and proteins have been developed and are used today. The production of biologically active proteins, such as insulin and growth hormone, has been made possible by efficient methods of biotechnology. The genes for these proteins can be put inside microorganisms, which then make them in addition to their own proteins. The microorganisms are then harvested, and the useful protein hormones are isolated and purified.

See also Abortion pill; Artificial blood; Birth control pill; Genetically engineered insulin; Pap test.

Further Reading

Basava, Channa, and G. M. Anantharamaiah. Peptides: Design, Synthesis, and Biological Activity. Boston: Birkhäuser, 1994.

Bodanszky, Miklos. “Vincent du Vigneaud, 1901-1978.” Nature 279, no. 5710 (1979).

Vigneaud, Vincent du. “A Trail of Sulfur Research from Insulin to Oxytocin” [Nobel lecture]. In Chemistry, 1942-1962. River Edge, N.J.: World Scientific, 1999.


Artificial insemination

The invention: Practical techniques for the artificial insemination of farm animals that have revolutionized livestock breeding practices throughout the world.

The people behind the invention:
Lazzaro Spallanzani (1729-1799), an Italian physiologist
Ilya Ivanovich Ivanov (1870-1932), a Soviet biologist
R. W. Kunitsky, a Soviet veterinarian

Reproduction Without Sex

The tale is told of a fourteenth-century Arabian chieftain who sought to improve his mediocre breed of horses. Sneaking into the territory of a neighboring hostile tribe, he stimulated a prize stallion to ejaculate into a piece of cotton. Quickly returning home, he inserted this cotton into the vagina of his own mare, who subsequently gave birth to a high-quality horse. This may have been the first case of “artificial insemination,” the technique by which semen is introduced into the female reproductive tract without sexual contact.

The first scientific record of artificial insemination comes from Italy in the 1770’s. Lazzaro Spallanzani was one of the foremost physiologists of his time, well known for having disproved the theory of spontaneous generation, which states that living organisms can spring “spontaneously” from lifeless matter. There was some disagreement at that time about the basic requirements for reproduction in animals. It was unclear whether the sex act was necessary for an embryo to develop, or whether it was sufficient that the sperm and eggs come into contact.

Spallanzani began by studying animals in which union of the sperm and egg normally takes place outside the body of the female. He stimulated males and females to release their sperm and eggs, then mixed these sex cells in a glass dish. In this way, he produced young frogs, toads, salamanders, and silkworms.

Next, Spallanzani asked whether the sex act was also unnecessary for reproduction in those species in which fertilization normally takes place inside the body of the female. He collected semen that had been ejaculated by a male spaniel and, using a syringe, injected the semen into the vagina of a female spaniel in heat. Two months later, she delivered a litter of three pups, which bore some resemblance to both the mother and the male that had provided the sperm.

It was in animal breeding that Spallanzani’s techniques were to have their most dramatic application. In the 1880’s, an English dog breeder, Sir Everett Millais, conducted several experiments on artificial insemination. He was interested mainly in obtaining offspring from dogs that would not normally mate with one another because of differences in size. He followed Spallanzani’s methods to produce a cross between a short, low basset hound and the much larger bloodhound.

Long-Distance Reproduction

Ilya Ivanovich Ivanov was a Soviet biologist who was commissioned by his government to investigate the use of artificial insemination on horses. Unlike previous workers, who had used artificial insemination to get around certain anatomical barriers to fertilization, Ivanov began the use of artificial insemination to reproduce thoroughbred horses more effectively. His assistant in this work was the veterinarian R. W. Kunitsky.

In 1901, Ivanov founded the Experimental Station for the Artificial Insemination of Horses. As its director, he embarked on a series of experiments to devise the most efficient techniques for breeding these animals. Not content with the demonstration that the technique was scientifically feasible, he wished to ensure further that it could be practiced by Soviet farmers. If sperm from a male were to be used to impregnate females in another location, potency would have to be maintained for a long time. Ivanov first showed that the secretions from the sex glands were not required for successful insemination; only the sperm itself was necessary.
He demonstrated further that if a testicle were removed from a bull and kept cold, the sperm would remain alive. More useful than preservation of testicles, however, would be preservation of the ejaculated sperm. By adding certain salts to the sperm-containing fluids, and by keeping these at cold temperatures, Ivanov was able to preserve sperm for long periods.

Ivanov also developed instruments to inject the sperm, to hold the vagina open during insemination, and to hold the horse in place during the procedure. In 1910, Ivanov wrote a practical textbook with technical instructions for the artificial insemination of horses. He also trained some three hundred veterinary technicians in the use of artificial insemination, and the knowledge he developed quickly spread throughout the Soviet Union. Artificial insemination became the major means of breeding horses.

Until his death in 1932, Ivanov was active in researching many aspects of the reproductive biology of animals. He developed methods to treat reproductive diseases of farm animals and refined methods of obtaining, evaluating, diluting, preserving, and disinfecting sperm. He also began to produce hybrids between wild and domestic animals in the hope of producing new breeds that would be better able to withstand extreme weather conditions and would be more resistant to disease. His crosses included hybrids of ordinary cows with aurochs, bison, and yaks, as well as some more exotic crosses of zebras with horses. Ivanov also hoped to use artificial insemination to help preserve species that were in danger of becoming extinct. In 1926, he led an expedition to West Africa to experiment with the hybridization of different species of anthropoid apes.

Impact

The greatest beneficiaries of artificial insemination have been dairy farmers. Some bulls are able to sire genetically superior cows that produce exceptionally large volumes of milk. Under natural conditions, such a bull could father at most a few hundred offspring in its lifetime. Using artificial insemination, a prize bull can inseminate ten to fifteen thousand cows each year.
Since frozen sperm may be purchased through the mail, this also means that dairy farmers no longer need to keep dangerous bulls on the farm. Artificial insemination has become the main method of reproduction of dairy cows, with about 150 million cows (as of 1992) produced this way throughout the world.


In the 1980’s, artificial insemination gained added importance as a method of breeding rare animals. Animals kept in zoo cages that are unable to take part in normal mating may still produce sperm that can be used to inseminate a female artificially. Some species require specific conditions of housing or diet for normal breeding to occur, conditions not available in all zoos. Such animals can still reproduce using artificial insemination.

See also Abortion pill; Amniocentesis; Artificial chromosome; Birth control pill; Cloning; Genetic “fingerprinting”; Genetically engineered insulin; In vitro plant culture; Rice and wheat strains; Synthetic DNA.

Further Reading

Bearden, Henry Joe, and John W. Fuquay. Applied Animal Reproduction. 5th ed. Upper Saddle River, N.J.: Prentice Hall, 2000.

Foote, Robert H. Artificial Insemination to Cloning: Tracing Fifty Years of Research. Ithaca, N.Y.: Cornell University Press, 1998.

Hafez, Elsayed Saad Eldin. Reproduction in Farm Animals. 6th ed. Philadelphia: Lea and Febiger, 1993.

Herman, Harry August. Improving Cattle by the Millions: NAAB and the Development and Worldwide Application of Artificial Insemination. Columbia: University of Missouri Press, 1981.


Artificial kidney

The invention: A machine that removes waste end-products and poisons from the blood when human kidneys are not working properly.

The people behind the invention:
John Jacob Abel (1857-1938), a pharmacologist and biochemist known as the “father of American pharmacology”
Willem Johan Kolff (1911- ), a Dutch American clinician who pioneered the artificial kidney and the artificial heart

Cleansing the Blood

In the human body, the kidneys are the dual organs that remove waste matter from the bloodstream and send it out of the system as urine. If the kidneys fail to work properly, this cleansing process must be done artificially—such as by a machine.

John Jacob Abel was the first professor of pharmacology at Johns Hopkins University School of Medicine. Around 1912, he began to study the by-products of metabolism that are carried in the blood. This work was difficult, he realized, because it was nearly impossible to detect even the tiny amounts of the many substances in blood. Moreover, no one had yet developed a method or machine for taking these substances out of the blood.

In devising a blood-filtering system, Abel understood that he needed a saline solution and a membrane that would let some substances pass through but not others. Working with Leonard Rowntree and Benjamin B. Turner, he spent nearly two years figuring out how to build a machine that would perform dialysis—that is, remove metabolic by-products from blood. Finally their efforts succeeded. The first experiments were performed on rabbits and dogs.

In operating the machine, the blood leaving the patient was sent flowing through a celloidin tube that had been wound loosely around a drum. An anticlotting substance (hirudin, taken out of leeches) was added to the blood as it flowed through the tube. The drum, which was immersed in a saline and dextrose solution, rotated slowly. As blood flowed through the immersed tubing, the pressure of osmosis removed urea and other substances, but not the plasma or cells, from the blood. The celloidin membranes allowed oxygen to pass from the saline and dextrose solution into the blood, so that purified, oxygenated blood then flowed back into the arteries.

Abel studied the substances that his machine had removed from the blood, and he found that they included not only urea but also free amino acids. He quickly realized that his machine could be useful for taking care of people whose kidneys were not working properly. Reporting on his research, he wrote, “In the hope of providing a substitute in such emergencies, which might tide over a dangerous crisis . . . a method has been devised by which the blood of a living animal may be submitted to dialysis outside the body, and again returned to the natural circulation.” Abel’s machine removed large quantities of urea and other poisonous substances fairly quickly, so that the process, which he called “vividiffusion,” could serve as an artificial kidney during cases of kidney failure.

For his physiological research, Abel found it necessary to remove, study, and then replace large amounts of blood from living animals, all without dissolving the red blood cells, which carry oxygen to the body’s various parts. He realized that this process, which he called “plasmaphaeresis,” would make possible blood banks, where blood could be stored for emergency use.

In 1914, Abel published these two discoveries in a series of three articles in the Journal of Pharmacology and Applied Therapeutics, and he demonstrated his techniques in London, England, and Groningen, The Netherlands. Though he had suggested that his techniques could be used for medical purposes, he himself was interested mostly in continuing his biochemical research. So he turned to other projects in pharmacology, such as the crystallization of insulin, and never returned to studying vividiffusion.
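The way a dialyzer clears urea from the blood, as described above, is conventionally approximated by a single-compartment model in which the concentration decays exponentially with treatment time. A minimal sketch of that model follows; the clearance and volume figures are purely illustrative, not measurements from Abel's or Kolff's machines:

```python
import math

def urea_remaining(clearance_ml_min, volume_ml, minutes):
    """Fraction of urea left after a dialysis session under a
    single-compartment model: C(t)/C0 = exp(-K*t/V), where K is the
    dialyzer clearance and V the volume in which urea is distributed."""
    return math.exp(-clearance_ml_min * minutes / volume_ml)

# Illustrative figures: a dialyzer clearance of 150 mL/min, a 40-liter
# distribution volume, and a four-hour session give
# Kt/V = 150 * 240 / 40000 = 0.9.
removed = 1 - urea_remaining(150, 40_000, 240)
print(f"{removed:.0%} of urea removed")  # prints "59% of urea removed"
```

The exponential form explains why dialysis sessions run for hours: each additional minute removes a fixed fraction of the urea that remains, not a fixed amount.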
Refining the Technique

Georg Haas, a German biochemist working in Giessen, Germany, was also interested in dialysis; in 1915, he began to experiment with “blood washing.” After reading Abel’s 1914 writings, Haas tried substituting collodium for the celloidin that Abel had used
as a filtering membrane and using commercially prepared heparin instead of the homemade hirudin Abel had used to prevent blood clotting. He then used this machine on a patient and found that it showed promise, but he knew that many technical problems had to be worked out before the procedure could be used on many patients.

In 1937, Willem Johan Kolff was a young physician at Groningen. He felt sad to see patients die from kidney failure, and he wanted to find a way to cure others. Having heard his colleagues talk about the possibility of using dialysis on human patients, he decided to build a dialysis machine. Kolff knew that cellophane was an excellent membrane for dialyzing, and that heparin was a good anticoagulant, but he also realized that his machine would need to be able to treat larger volumes of blood than Abel’s and Haas’s had.

John Jacob Abel
Born in 1857, John Jacob Abel grew up in Cleveland, Ohio, and then attended the University of Michigan. He graduated in 1883 and studied for six years in Germany, which boasted the finest medical researchers of the times. He received a medical degree in 1888 in Strasbourg, transferred to Vienna, Austria, for more clinical experience, and then returned to the United States in 1891 to teach pharmacology at the University of Michigan. He had to organize his own laboratory, journal, and course of instruction.
His efforts attracted the notice of Johns Hopkins University, which then had the nation’s most progressive medical school. In 1893 Abel moved there and became the first American to hold the title of professor of pharmacology. He remained at Johns Hopkins until his retirement in 1932. His biochemical research illuminated the complex interaction in the endocrine system. He isolated epinephrine (adrenaline), used his artificial kidney apparatus to demonstrate the presence of amino acids in the blood, and investigated pituitary gland hormones and insulin.
Abel died in 1938, but his influence did not. His many students took Abel’s interest in the biochemical basis of pharmacology to other universities and commercial laboratories, modernizing American drug research.

During World War II (1939-1945), with the help of the director of a nearby enamel factory, Kolff built an artificial kidney that was first tried on a patient on March 17, 1943. Between March, 1943, and July 21, 1944, Kolff used his secretly constructed dialysis machines on fifteen patients, of whom only one survived. He published the results of his research in Acta Medica Scandinavica. Even though most of his patients had not survived, he had collected information and developed the technique until he was sure dialysis would eventually work.

Kolff brought machines to Amsterdam and The Hague and encouraged other physicians to try them; meanwhile, he continued to study blood dialysis and to improve his machines. In 1947, he brought improved machines to London and the United States. By the time he reached Boston, however, he had given away all of his machines. He did, however, explain the technique to John P. Merrill, a physician at the Harvard Medical School, who soon became the leading American developer of kidney dialysis and kidney-transplant surgery.

Kolff himself moved to the United States, where he became an expert not only in artificial kidneys but also in artificial hearts. He helped develop the Jarvik-7 artificial heart (named for its chief inventor, Robert Jarvik), which was implanted in a patient in 1982.

Impact

Abel’s work showed that the blood carried some substances that had not been previously known and led to the development of the first dialysis machine for humans. It also encouraged interest in the possibility of organ transplants. After World War II, surgeons had tried to transplant kidneys from one animal to another, but after a few days the recipient began to reject the kidney and die. In spite of these failures, researchers in Europe and America transplanted kidneys in several patients, and they used artificial kidneys to take care of the patients who were waiting for transplants.
In 1954, Merrill—to whom Kolff had demonstrated an artificial kidney—successfully transplanted kidneys in identical twins. After immunosuppressant drugs (used to prevent the body from rejecting newly transplanted tissue) were discovered in 1962, transplantation surgery became much more practical. After kidney transplants became common, the artificial kidney became simply a way of keeping a person alive until a kidney donor could be found.

See also Artificial blood; Artificial heart; Blood transfusion; Genetically engineered insulin; Reserpine.

Further Reading
Cogan, Martin G., Patricia Schoenfeld, and Frank A. Gotch. Introduction to Dialysis. 2d ed. New York: Churchill Livingstone, 1991.
DeJauregui, Ruth. One Hundred Medical Milestones That Shaped World History. San Mateo, Calif.: Bluewood Books, 1998.
Noordwijk, Jacob van. Dialysing for Life: The Development of the Artificial Kidney. Boston: Kluwer Academic Publishers, 2001.


Artificial satellite

The invention: Sputnik 1, the first object put into orbit around the earth, which began the exploration of space.

The people behind the invention:
Sergei P. Korolev (1907-1966), a Soviet rocket scientist
Konstantin Tsiolkovsky (1857-1935), a Soviet schoolteacher and the founder of rocketry in the Soviet Union
Robert H. Goddard (1882-1945), an American scientist and the founder of rocketry in the United States
Wernher von Braun (1912-1977), a German who worked on rocket projects
Arthur C. Clarke (1917- ), the author of more than fifty books and the visionary behind telecommunications satellites

A Shocking Launch

In Russian, sputnik means “satellite” or “fellow traveler.” On October 4, 1957, Sputnik 1, the first artificial satellite to orbit Earth, was placed into successful orbit by the Soviet Union. The launch of this small aluminum sphere, 0.58 meter in diameter and weighing 83.6 kilograms, opened the doors to the frontiers of space.

Orbiting Earth every 96 minutes, at 28,962 kilometers per hour, Sputnik 1 came within 215 kilometers of Earth at its closest point and 939 kilometers away at its farthest point. It carried equipment to measure the atmosphere and to experiment with the transmission of electromagnetic waves from space. Equipped with two radio transmitters (at different frequencies) that broadcast for twenty-one days, Sputnik 1 was in orbit for ninety-two days, until January 4, 1958, when it disintegrated in the atmosphere.

Sputnik 1 was launched using a Soviet intercontinental ballistic missile (ICBM) modified by Soviet rocket expert Sergei P. Korolev. After the launch of Sputnik 2, less than a month later, Chester Bowles, a former United States ambassador to India and Nepal, wrote: “Armed with a nuclear warhead, the rocket which launched Sputnik 1 could destroy New York, Chicago, or Detroit 18 minutes after the button was pushed in Moscow.”

Sergei P. Korolev
Sergei P. Korolev’s rocket launched the Space Age: Sputnik 1 climbed into outer space aboard one of his R-7 missiles. Widely considered the Soviet Union’s premier rocket scientist, he almost died in Joseph Stalin’s infamous Siberian prison camps before he could build the launchers that made his country a military superpower and a pioneer of space exploration.
Born in 1907, Korolev studied aeronautical engineering at the Kiev Polytechnic Institute. Upon graduation he helped found the Group for Investigation of Reactive Motion, which in the early 1930’s tested liquid-fuel rockets. His success attracted the military’s attention. It created the Reaction Propulsion Scientific Research Institute for him, and he was on the verge of testing a rocket-propelled airplane when he was arrested during a political purge in 1937 and sent as a prison laborer to the Kolyma gold mines. After Germany attacked Russia in World War II, Korolev was transferred to a prison research institute to help develop advanced aircraft.
After World War II, rehabilitated in the eyes of the Soviet authorities, Korolev was placed in charge of long-range ballistic missile research. In 1953 he began to build the R-7 intercontinental ballistic missile (ICBM). While other design bureaus concentrated on developing the ICBM into a Cold War weapon, Korolev built rockets that explored the Moon with probes. His goal was to send cosmonauts there too. With his designs and guidance, the Soviet space program proved that human space flight was possible in 1961, and so in 1962 he began development of the N-1, a booster that, like the American Saturn V, was powerful enough to send a crewed vehicle to the Moon. Tragically, Korolev died following minor surgery in 1966. The N-1 project was cancelled in 1971, along with Russian dreams of settling its citizens on the Moon.

Although the launch of Sputnik 1 came as a shock to the general public, it came as no surprise to those who followed rocketry. In June, 1957, the United States Air Force had issued a nonclassified memo stating that there was “every reason to believe that the Russian satellite shot would be made on the hundredth anniversary” of Konstantin Tsiolkovsky’s birth.

Thousands of Launches

Rockets have been used since at least the twelfth century, when Europeans and the Chinese were using black powder devices. In 1659, the Polish engineer Kazimir Semenovich published his Roketten für Luft und Wasser (rockets for air and water), which had a drawing of a three-stage rocket. Rockets were used and perfected for warfare during the nineteenth and twentieth centuries. Nazi Germany’s V-2 rocket (thousands of which were launched by Germany against England during the closing years of World War II) was the model for American and Soviet rocket designers between 1945 and 1957.

In the Soviet Union, Tsiolkovsky had been thinking and writing about space flight since the last decade of the nineteenth century, and in the United States, Robert H. Goddard had been thinking about and experimenting with rockets since the first decade of the twentieth century. Wernher von Braun had worked on rocket projects for Nazi Germany during World War II, and, as the war was ending in May, 1945, von Braun and several hundred other people involved in German rocket projects surrendered to American troops in Europe. Hundreds of other German rocket experts ended up in the Soviet Union to continue with their research. Tom Bower pointed out in his book The Paperclip Conspiracy: The Hunt for the Nazi Scientists (1987)—so named because American “recruiting officers had identified [Nazi] scientists to be offered contracts by slipping an ordinary paperclip onto their files”—that American rocketry research was helped tremendously by Nazi scientists who switched sides after World War II.

The successful launch of Sputnik 1 convinced people that space travel was no longer simply science fiction.
The successful launch of Sputnik 2 on November 3, 1957, carrying the first space traveler, a dog named Laika (who was euthanized in orbit because there were no plans to retrieve her), showed that the launch of Sputnik 1 was only the beginning of greater things to come.
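The orbital figures quoted above for Sputnik 1 are mutually consistent, and the 96-minute period can be cross-checked from the perigee and apogee with Kepler's third law. A rough sketch of that calculation in Python, using standard textbook values for Earth's mean radius and gravitational parameter:

```python
import math

MU_EARTH = 398_600.4418  # km^3/s^2, Earth's standard gravitational parameter
R_EARTH = 6_371.0        # km, mean radius of Earth

def orbital_period_minutes(perigee_km, apogee_km):
    """Period of an elliptical Earth orbit from Kepler's third law,
    T = 2*pi*sqrt(a^3/mu), where the semi-major axis a is measured
    from Earth's center (altitudes plus Earth's radius)."""
    a = R_EARTH + (perigee_km + apogee_km) / 2
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60

# Sputnik 1's quoted perigee (215 km) and apogee (939 km)
print(f"{orbital_period_minutes(215, 939):.1f} minutes")  # ~96 minutes, as quoted
```

The same formula shows why low orbits circle Earth in about an hour and a half while higher ones take longer: the period grows with the three-halves power of the orbit's size.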


Consequences

After October 4, 1957, the Soviet Union and other nations launched more experimental satellites. On January 31, 1958, the United States sent up Explorer 1, after failing to launch a Vanguard satellite on December 6, 1957.

Arthur C. Clarke, most famous for his many books of science fiction, published a technical paper in 1945 entitled “Extra-Terrestrial Relays: Can Rocket Stations Give World-Wide Radio Coverage?” In that paper, he pointed out that a satellite placed in orbit at the correct height and speed above the equator would be able to hover over the same spot on Earth. The placement of three such “geostationary” satellites would allow radio signals to be transmitted around the world. By the 1990’s, communications satellites were numerous.

In the first twenty-five years after Sputnik 1 was launched, from 1957 to 1982, more than two thousand objects were placed into various Earth orbits by more than twenty-four nations. On average, something was launched into space every 3.82 days during this twenty-five-year period, all beginning with Sputnik 1.

See also Communications satellite; Cruise missile; Rocket; V-2 rocket; Weather satellite.

Further Reading
Dickson, Paul. Sputnik: The Shock of the Century. New York: Walker, 2001.
Heppenheimer, T. A. Countdown: A History of Space Flight. New York: John Wiley & Sons, 1997.
Logsdon, John M., Roger D. Launius, and Robert W. Smith. Reconsidering Sputnik: Forty Years Since the Soviet Satellite. Australia: Harwood Academic, 2000.


Aspartame

The invention: An artificial sweetener with a comparatively natural taste widely used in carbonated beverages.

The people behind the invention:
Arthur H. Hayes, Jr. (1933- ), a physician and commissioner of the U.S. Food and Drug Administration (FDA)
James M. Schlatter (1942- ), an American chemist
Michael Sveda (1912- ), an American chemist and inventor
Ludwig Frederick Audrieth (1901- ), an American chemist and educator
Ira Remsen (1846-1927), an American chemist and educator
Constantin Fahlberg (1850-1910), a German chemist

Sweetness Without Calories

People have sweetened food and beverages since before recorded history. The most widely used sweetener is sugar, or sucrose. The only real drawback to the use of sucrose is that it is a nutritive sweetener: In addition to adding a sweet taste, it adds calories. Because sucrose is readily absorbed by the body, an excessive amount can be life-threatening to diabetics. This fact alone would make the development of nonsucrose sweeteners attractive.

There are three common nonsucrose sweeteners in use around the world: saccharin, cyclamates, and aspartame. Saccharin was the first of this group to be discovered, in 1879. Constantin Fahlberg synthesized saccharin based on the previous experimental work of Ira Remsen using toluene (derived from petroleum). This product was found to be three hundred to five hundred times as sweet as sugar, although some people could detect a bitter aftertaste.

In 1944, the chemical family of cyclamates was discovered by Ludwig Frederick Audrieth and Michael Sveda. Although these compounds are only thirty to eighty times as sweet as sugar, there was no detectable aftertaste. By the mid-1960’s, cyclamates had replaced saccharin as the leading nonnutritive sweetener in the United States. Although cyclamates are still in use throughout the


world, in October, 1969, the FDA removed them from the list of approved food additives because of tests that indicated possible health hazards.

A Political Additive

Aspartame is the latest in artificial sweeteners that are derived from natural ingredients—in this case, two amino acids, one from milk and one from bananas. Discovered by accident in 1965 by American chemist James M. Schlatter when he licked his fingers during an experiment, aspartame is 180 times as sweet as sugar.

In 1974, the FDA approved its use in dry foods such as gum and cereal and as a sugar replacement. Shortly after its approval for this limited application, the FDA held public hearings on the safety concerns raised by John W. Olney, a professor of neuropathology at Washington University in St. Louis. There was some indication that aspartame, when combined with the common food additive monosodium glutamate, caused brain damage in children. These fears were confirmed, but the risk of brain damage was limited to a small percentage of individuals with a rare genetic disorder. At this point, the public debate took a political turn: Senator William Proxmire charged FDA Commissioner Alexander M. Schmidt with public misconduct. This controversy resulted in aspartame being taken off the market in 1975.

In 1981, the new FDA commissioner, Arthur H. Hayes, Jr., reapproved aspartame for use in the same applications: as a tabletop sweetener, as a cold-cereal additive, in chewing gum, and for other miscellaneous uses. In 1983, the FDA approved aspartame for use in carbonated beverages, its largest application to date.

Later safety studies revealed that children with a rare metabolic disease, phenylketonuria, could not ingest this sweetener without severe health risks because of the presence of phenylalanine in aspartame. This condition results in a rapid buildup of phenylalanine in the blood.
Laboratories simulated this condition in rats and found that high doses of aspartame inhibited the synthesis of dopamine, a neurotransmitter. Once this happens, an increase in the frequency of seizures can occur. There was no direct evidence, however, that aspartame actually caused seizures in these experiments.


Many other compounds are being tested for use as sugar replacements, the sweetest being a relative of aspartame. This compound is seventeen thousand to fifty-two thousand times sweeter than sugar.

Impact

The business fallout from the approval of a new low-calorie sweetener occurred over a short span of time. In 1981, sales of this artificial sweetener by G. D. Searle and Company were $74 million. In 1983, sales rose to $336 million and exceeded half a billion dollars the following year. These figures represent sales of more than 2,500 tons of this product. In 1985, 3,500 tons of aspartame were consumed. Clearly, this product’s introduction was a commercial success for Searle. During this same period, the percentage of reduced-calorie carbonated beverages containing saccharin declined from 100 percent to 20 percent in an industry that had $4 billion in sales. Universally, consumers preferred products containing aspartame; the bitter aftertaste of saccharin was rejected in favor of the new, less powerful sweetener.

There is a trade-off in using these products. The FDA found evidence linking both saccharin and cyclamates to an elevated incidence of cancer. Cyclamates were banned in the United States for this reason. Public resistance to this measure caused the agency to back away from its position. The rationale was that, compared to other health risks associated with the consumption of sugar (especially for diabetics and overweight persons), the chance of getting cancer was slight and therefore a risk that many people would choose to ignore. The total domination of aspartame in the sweetener market seems to support this assumption.

See also Cyclamate; Genetically engineered insulin.

Further Reading
Blaylock, Russell L. Excitotoxins: The Taste That Kills. Santa Fe, N.Mex.: Health Press, 1998.
Hull, Janet Starr. Sweet Poison: How the World’s Most Popular Artificial Sweetener Is Killing Us—My Story. Far Hills, N.J.: New Horizon Press, 1999.


Roberts, Hyman Jacob. Aspartame (NutraSweet®): Is It Safe? Philadelphia: Charles Press, 1990.
Stegink, Lewis D., and Lloyd J. Filer. Aspartame: Physiology and Biochemistry. New York: M. Dekker, 1984.
Stoddard, Mary Nash. Deadly Deception: Story of Aspartame, Shocking Expose of the World’s Most Controversial Sweetener. Dallas: Odenwald Press, 1998.

71

Assembly line

The invention: A manufacturing technique pioneered in the automobile industry by Henry Ford that lowered production costs and helped bring automobile ownership within the reach of millions of Americans in the early twentieth century.

The people behind the invention:
Henry Ford (1863-1947), an American carmaker
Eli Whitney (1765-1825), an American inventor
Elisha King Root (1808-1865), the developer of division of labor
Oliver Evans (1755-1819), the inventor of power conveyors
Frederick Winslow Taylor (1856-1915), an efficiency engineer

A Practical Man

Henry Ford built his first “horseless carriage” by hand in his home workshop in 1896. In 1903, the Ford Motor Company was born. Ford’s first product, the Model A, sold for less than one thousand dollars, while other cars at that time were priced at five to ten thousand dollars each. When Ford and his partners tried, in 1905, to sell a more expensive car, sales dropped. Then, in 1907, Ford decided that the Ford Motor Company would build “a motor car for the great multitude.” It would be called the Model T.

The Model T came out in 1908 and was everything that Henry Ford said it would be. Ford’s Model T was a low-priced (about $850), practical car that came in one color only: black. In the twenty years during which the Model T was built, the basic design never changed. Yet the price of the Model T, or “Tin Lizzie,” as it was affectionately called, dropped over the years to less than half that of the original Model T. As the price dropped, sales increased, and the Ford Motor Company quickly became the world’s largest automobile manufacturer. The last of more than 15 million Model T’s was made in 1927.

Although it looked and drove almost exactly like the first Model T, these two automobiles were built in an entirely different way. The first was custom-built, while the last came off an assembly line. At first, Ford had built his cars in the same way everyone else


did: one at a time. Skilled mechanics would work on a car from start to finish, while helpers and runners brought parts to these highly paid craftsmen as they were needed. After finishing one car, the mechanics and their helpers would begin the next.

The Quest for Efficiency

Custom-built products are good when there is little demand and buyers are willing to pay the high labor costs. This was not the case with the automobile. Ford realized that in order to make a large number of quality cars at a low price, he had to find a more efficient way to build cars. To do this, he looked to the past and the work of others. He found four ideas: interchangeable parts, continuous flow, division of labor, and elimination of wasted motion.

Eli Whitney, the inventor of the cotton gin, was the first person to use interchangeable parts successfully in mass production. In 1798, the United States government asked Whitney to make several thousand muskets in two years. Instead of finding and hiring gunsmiths to make the muskets by hand, Whitney used most of his time and money to design and build special machines that could make large numbers of identical parts—one machine for each part that was needed to build a musket. These tools, and others Whitney made for holding, measuring, and positioning the parts, made it easy for semiskilled, and even unskilled, workers to build a large number of muskets.

[Photograph: Model-T assembly line in the Ford Motor Company’s Highland Park Factory. (Library of Congress)]

Production can be made more efficient by carefully arranging the different stages of production to create a “continuous flow.” Ford borrowed this idea from at least two places: the meat-packing houses of Chicago and an automatic grain mill run by Oliver Evans.

Ford’s idea for a moving assembly line came from Chicago’s great meat-packing houses in the late 1860’s. Here, the bodies of animals were moved along an overhead rail past a number of workers, each of whom made a certain cut, or handled one part of the packing job. This meant that many animals could be butchered and packaged in a single day.

Ford looked to Oliver Evans for an automatic conveyor system. In 1783, Evans had designed and operated an automatic grain mill that could be run by only two workers. As one worker poured grain into a funnel-shaped container, called a “hopper,” at one end of the mill, a second worker filled sacks with flour at the other end. Everything in between was done automatically, as Evans’s conveyors passed the grain through the different steps of the milling process without any help.

The idea of “division of labor” is simple: When one complicated job is divided into several easier jobs, some things can be made faster, with fewer mistakes, by workers who need fewer skills than ever before. Elisha King Root had used this principle to make the famous Colt “Six-Shooter.” In 1849, Root went to work for Samuel Colt at his Connecticut factory and proved to be a manufacturing genius. By dividing the work into very simple steps, with each step performed by one worker, Root was able to make many more guns in much less time. Before Ford applied Root’s idea to the making of engines, it took one worker one day to make one engine.
By breaking down the complicated job of making an automobile engine into eighty-four simpler jobs, Ford was able to make the process much more efficient. By assigning one person to each job, Ford’s company was able to make 352 engines per day—an increase of more than 400 percent.

Frederick Winslow Taylor has been called the “original efficiency expert.” His idea was that inefficiency was caused by wasted time and wasted motion. So Taylor studied ways to eliminate wasted motion. He proved that, in the long run, doing a job too quickly was as bad as doing it too slowly. “Correct speed is the speed at which men can work hour after hour, day after day, year in and year out, and remain continuously in good health,” he said. Taylor also studied ways to streamline workers’ movements. In this way, he was able to keep wasted motion to a minimum.

Henry Ford
Henry Ford (1863-1947) was more of a synthesizer and innovator than an inventor. Others invented the gasoline-powered automobile and the techniques of mass production, but it was Ford who brought the two together. The result was the assembly-line-produced Model T that the Ford Motor Company turned out in the millions from 1908 until 1927. And it changed America profoundly.
Ford’s idea was to lower production costs enough so that practically everyone could afford a car, not just the wealthy. He succeeded brilliantly. The first Model T’s cost $850, rock bottom for the industry, and by 1927 the price was down to $290. Americans bought them up like no other technological marvel in the nation’s history. For years, out of every one hundred cars on the road almost forty of them were Model T’s. The basic version came with nothing on the dashboard but an ignition switch, and the cars were quirky—so much so that an entire industry grew up to outfit them for the road and make sure they stayed running. Even then, they could only go up steep slopes backwards, and starting them was something of an art.
Americans took the Model T to heart, affectionately nicknaming it the flivver and Tin Lizzie. This “democratization of the automobile,” as Ford called it, not only gave common people modern transportation and made them more mobile than ever before; it started the American love affair with the car. Even after production stopped in 1927, the Model T Ford remained the archetype of American automobiles. As the great essayist E. B. White wrote in “Farewell My Lovely” (1936), his eulogy for the Model T, “…to a few million people who grew up with it, the old Ford practically was the American scene.”
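The productivity gain from dividing engine assembly can be checked with simple arithmetic, using the figures given above for Ford's engine line:

```python
# Craft production: one worker built one engine per day, so a crew of
# 84 produced 84 engines daily.  With the work divided into 84 simpler
# jobs, one worker per job, the same crew turned out 352 engines a day.
workers = 84
craft_output_per_day = workers * 1   # one engine per worker per day
line_output_per_day = 352            # engines per day after division of labor
ratio = line_output_per_day / craft_output_per_day
print(f"Output multiplied {ratio:.1f}x")  # prints "Output multiplied 4.2x"
```

The same head count producing more than four times the output is the "increase of more than 400 percent" cited above, achieved without adding workers or machines, only by reorganizing the work.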


Impact

The changeover from custom production to mass production was an evolution rather than a revolution. Henry Ford applied the four basic ideas of mass production slowly and with care, testing each new idea before it was used. In 1913, the first moving assembly line for automobiles was being used to make Model T’s. Ford was able to make his Tin Lizzies faster than ever, and his competitors soon followed his lead. He had succeeded in making it possible for millions of people to buy automobiles.

Ford’s work gave a new push to the Industrial Revolution. It showed Americans that mass production could be used to improve quality, cut the cost of making an automobile, and improve profits. In fact, the Model T was so profitable that in 1914 Ford was able to double the minimum daily wage of his workers, so that they too could afford to buy Tin Lizzies.

Although Americans account for only about 6 percent of the world’s population, they now own about 50 percent of its wealth. There are more than twice as many radios in the United States as there are people. The roads are crowded with more than 180 million automobiles. Homes are filled with the sounds and sights emitting from more than 150 million television sets. Never have the people of one nation owned so much. Where did all the products—radios, cars, television sets—come from? The answer is industry, which still depends on the methods developed by Henry Ford.

See also CAD/CAM; Color television; Interchangeable parts; Steelmaking process.

Further Reading
Abernathy, William, Kim Clark, and Alan Kantrow. Industrial Renaissance. New York: Basic Books, 1983.
Bruchey, Stuart. Enterprise: The Dynamic Economy of a Free People. Cambridge, Mass.: Harvard University Press, 1990.
Flink, James. The Car Culture. Cambridge, Mass.: MIT Press, 1975.
Hayes, Robert. Restoring Our Competitive Edge. New York: Wiley, 1984.
Olson, Sidney. Young Henry Ford: A Picture History of the First Forty Years. Detroit: Wayne State University Press, 1997.


Atomic bomb

The invention: A weapon of mass destruction created during World War II that utilized nuclear fission to create explosions equivalent to thousands of tons of trinitrotoluene (TNT).

The people behind the invention:
J. Robert Oppenheimer (1904-1967), an American physicist
Leslie Richard Groves (1896-1970), an American engineer and Army general
Enrico Fermi (1900-1954), an Italian American nuclear physicist
Niels Bohr (1885-1962), a Danish physicist

Energy on a Large Scale

The first evidence of uranium fission (the splitting of uranium atoms) was observed by German chemists Otto Hahn and Fritz Strassmann in Berlin at the end of 1938. When these scientists discovered radioactive barium impurities in neutron-irradiated uranium, they wrote to their colleague Lise Meitner in Sweden. She and her nephew, physicist Otto Robert Frisch, calculated the large release of energy that would be generated during the nuclear fission of certain elements. This result was reported to Niels Bohr in Copenhagen.

Meanwhile, similar fission energies were measured by Frédéric Joliot and his associates in Paris, who demonstrated the release of up to three additional neutrons during nuclear fission. It was recognized immediately that if neutron-induced fission released enough additional neutrons to cause at least one more such fission, a self-sustaining chain reaction would result, yielding energy on a large scale.

While visiting the United States from January to May of 1939, Bohr derived a theory of fission with John Wheeler of Princeton University. This theory led Bohr to predict that the common isotope uranium 238 (which constitutes 99.3 percent of naturally occurring uranium) would require fast neutrons for fission, but that the rarer uranium 235 would fission with neutrons of any energy. This meant
that uranium 235 would be far more suitable for use in any sort of bomb.

Uranium bombardment in a cyclotron led to the discovery of plutonium in 1940 and the discovery that plutonium 239 was fissionable—and thus potentially good bomb material. Uranium 238 was then used to “breed” (create) plutonium 239, which was then separated from the uranium by chemical methods.

During 1942, the Manhattan District of the Army Corps of Engineers was formed under General Leslie Richard Groves, who contracted with E. I. Du Pont de Nemours and Company to construct three secret atomic cities at a total cost of $2 billion. At Oak Ridge, Tennessee, twenty-five thousand workers built a 1,000-kilowatt reactor as a pilot plant. A second city of sixty thousand inhabitants was built at Hanford, Washington, where three huge reactors and remotely controlled plutonium-extraction plants were completed in early 1945.

A Sustained and Awesome Roar

Studies of fast-neutron reactions for an atomic bomb were brought together in Chicago in June of 1942 under the leadership of J. Robert Oppenheimer. He soon became a personal adviser to Groves, who built for Oppenheimer a laboratory for the design and construction of the bomb at Los Alamos, New Mexico. In 1943, Oppenheimer gathered two hundred of the best scientists in what was by now being called the Manhattan Project to live and work in this third secret city.

Two bomb designs were developed. A gun-type bomb called “Little Boy” used 15 kilograms of uranium 235 in a 4,500-kilogram cylinder about 2 meters long and 0.5 meter in diameter, in which a uranium bullet could be fired into three uranium target rings to form a critical mass.
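The chain-reaction criterion described earlier, that fission must release enough neutrons to cause at least one more fission, can be illustrated with a toy calculation. The multiplication factor k and the generation count below are illustrative numbers, not weapons data:

```python
# Toy model of the chain-reaction criterion: if each fission triggers
# k further fissions on average, the neutron population after n
# generations is k**n.  Illustrative numbers only.

def neutron_population(k, generations):
    """Neutron count after a number of generations, from one neutron."""
    return k ** generations

# k < 1: the reaction fizzles out; k > 1: it grows explosively.
print(neutron_population(0.9, 60))            # dwindles toward zero
print(f"{neutron_population(2, 60):.2e}")     # about 1.15e18 neutrons
```

The steep difference between the two cases is why a critical mass, enough material to push the multiplication factor above 1, is the dividing line between an inert lump of uranium and a bomb.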
An implosion-type bomb called “Fat Man” had a 5-kilogram spherical core of plutonium about the size of an orange, which could be squeezed inside a 2,300-kilogram sphere about 1.5 meters in diameter by properly shaped explosives to make the mass critical in the shorter time required for the faster plutonium fission process. A flat scrub region 200 kilometers southeast of Alamogordo, called Trinity, was chosen for the test site, and observer bunkers
were built about 10 kilometers from a 30-meter steel tower. On July 13, 1945, one of the plutonium bombs was assembled at the site; the next morning, it was raised to the top of the tower. Two days later, on July 16, after a short thunderstorm delay, the bomb was detonated at 5:30 a.m. The resulting implosion initiated a chain reaction of nearly 60 fission generations in about a microsecond. It produced an intense flash of light and a fireball that expanded to a diameter of about 600 meters in two seconds, rose to a height of more than 12 kilometers, and formed an ominous mushroom shape. Forty seconds later, an air blast hit the observer bunkers, followed by a sustained and awesome roar. Measurements confirmed that the explosion had the power of 18.6 kilotons of trinitrotoluene (TNT), nearly four times the predicted value. Impact On March 9, 1945, 325 American B-29 bombers dropped 2,000 tons of incendiary bombs on Tokyo, resulting in 100,000 deaths from the fire storms that swept the city. Nevertheless, the Japanese military refused to surrender, and American military plans called for an invasion of Japan, with estimates of up to a half million American casualties, plus as many as 2 million Japanese casualties. On August 6, 1945, after authorization by President Harry S. Truman, the B-29 Enola Gay dropped the uranium Little Boy bomb on Hiroshima at 8:15 a.m. On August 9, the remaining plutonium Fat Man bomb was dropped on Nagasaki. Approximately 100,000 people died at Hiroshima (out of a population of 400,000), and about 50,000 more died at Nagasaki. Japan offered to surrender on August 10, and after a brief attempt by some army officers to rebel, an official announcement by Emperor Hirohito was broadcast on August 15. 
The development of the thermonuclear fusion bomb, in which hydrogen isotopes could be fused together by the force of a fission explosion to produce helium nuclei and almost unlimited energy, had been proposed early in the Manhattan Project by physicist Edward Teller. Little effort was invested in the hydrogen bomb until after the surprise explosion of a Soviet atomic bomb in September, 1949, which had been built with information stolen from the Manhattan Project. After three years of development under Teller’s
guidance, the first successful H-bomb was exploded on November 1, 1952, obliterating the island of Elugelab in the Marshall Islands of the Pacific. The arms race then accelerated until each side had stockpiles of thousands of H-bombs.

The Manhattan Project opened a Pandora’s box of nuclear weapons that would plague succeeding generations, but it contributed more than merely weapons. About 19 percent of the electrical energy in the United States is generated by about 110 nuclear reactors producing more than 100,000 megawatts of power. More than 400 reactors in thirty countries provide 300,000 megawatts of the world’s power. Reactors have made possible the widespread use of radioisotopes in medical diagnosis and therapy. Many of the techniques for producing and using these isotopes were developed by the hundreds of nuclear physicists who switched to the field of radiation biophysics after the war, ensuring that the benefits of their wartime efforts would reach the public.

See also Airplane; Breeder reactor; Cruise missile; Hydrogen bomb; Rocket; Stealth aircraft; V-2 rocket.

Further Reading
Goudsmit, Samuel Abraham, and Albert E. Moyer. The History of Modern Physics, 1800-1950. Los Angeles: Tomash Publishers, 1983.
Henshall, Phillip. The Nuclear Axis: Germany, Japan, and the Atom Bomb Race, 1939-1945. Stroud: Sutton, 2000.
Krieger, David. Splitting the Atom: A Chronology of the Nuclear Age. Santa Barbara, Calif.: Nuclear Age Peace Foundation, 1998.
Smith, June. How the Atom Bombs Began, 1939-1946. London: Brockwell, 1988.


Atomic clock

The invention: A clock using the ammonia molecule as its oscillator that surpasses mechanical clocks in long-term stability, precision, and accuracy.

The person behind the invention:
Harold Lyons (1913-1984), an American physicist

Time Measurement

The accurate measurement of basic quantities, such as length, electrical charge, and temperature, is the foundation of science. The results of such measurements dictate whether a scientific theory is valid or must be modified or even rejected. Many experimental quantities change over time, but time cannot be measured directly. It must be measured by the occurrence of an oscillation or rotation, such as the twenty-four-hour rotation of the earth. For centuries, the rising of the Sun was sufficient as a timekeeper, but the need for more precision and accuracy increased as human knowledge grew. Progress in science can be measured by how accurately time has been measured at any given point.

In 1714, the British government, after the disastrous sinking of a British fleet in 1707 because of a miscalculation of longitude, offered a reward of 20,000 pounds for the invention of a ship’s chronometer (a very accurate clock). Latitude is determined by the altitude of the Sun above the southern horizon at noon local time, but the determination of longitude requires an accurate clock set at Greenwich, England, time. The difference between the ship’s clock and the local sun time gives the ship’s longitude. This permits the accurate charting of new lands, such as those that were being explored in the eighteenth century. John Harrison, an English instrument maker, eventually built a chronometer that was accurate within one minute after five months at sea. He received his reward from Parliament in 1765.

Atomic Clocks Provide Greater Stability

A clock contains four parts: energy to keep the clock operating, an oscillator, an oscillation counter, and a display. A grandfather
clock has weights that fall slowly, providing energy that powers the clock’s gears. The pendulum, a weight on the end of a rod, swings back and forth (oscillates) with a regular beat. The length of the rod determines the pendulum’s period of oscillation. The pendulum is attached to gears that count the oscillations and drive the display hands. There are limits to a mechanical clock’s accuracy and stability. The length of the rod changes as the temperature changes, so the period of oscillation changes. Friction in the gears changes as they wear out. Making the clock smaller increases its accuracy, precision, and stability. Accuracy is how close the clock is to telling the actual time. Stability indicates how the accuracy changes over time, while precision is the number of accurate decimal places in the display. A grandfather clock, for example, might be accurate to ten seconds per day and precise to a second, while having a stability of minutes per week. Applying an electrical signal to a quartz crystal will make the crystal oscillate at its natural vibration frequency, which depends on its size, its shape, and the way in which it was cut from the larger crystal. Since the faster a clock’s oscillator vibrates, the more precise the clock, a crystal-based clock is more precise than a large pendulum clock. By keeping the crystal under constant temperature, the clock is kept accurate, but it eventually loses its stability and slowly wears out. In 1948, Harold Lyons and his colleagues at the National Bureau of Standards (NBS) constructed the first atomic clock, which used the ammonia molecule as its oscillator. Such a clock is called an atomic clock because, when it operates, a nitrogen atom vibrates. The pyramid-shaped ammonia molecule is composed of a triangular base; there is a hydrogen atom at each corner and a nitrogen atom at the top of the pyramid. 
The nitrogen atom does not remain at the top; if it absorbs radio waves of the right energy and frequency, it passes through the base to produce an upside-down pyramid and then moves back to the top. This oscillation frequency occurs at 23,870 megacycles (1 megacycle equals 1 million cycles) per second. Lyons’s clock was actually a quartz-ammonia clock, since the signal from a quartz crystal produced radio waves of the crystal’s frequency that were fed into an ammonia-filled tube. If the radio waves were at 23,870 megacycles, the ammonia molecules absorbed the waves; a detector sensed this, and it sent no correction signal to the crystal. If radio waves deviated from 23,870 megacycles, the ammonia did not absorb them, the detector sensed the unabsorbed radio waves, and a correction signal was sent to the crystal. The atomic clock’s accuracy and precision were comparable to those of a quartz-based clock—one part in a hundred million—but the atomic clock was more stable because molecules do not wear out.

The atomic clock’s accuracy was improved by using cesium 133 atoms as the source of oscillation. These atoms oscillate at 9,192,631,770 plus or minus 20 cycles per second. They are accurate to a billionth of a second per day and precise to nine decimal places. A cesium clock is stable for years. Future developments in atomic clocks may see accuracies of one part in a million billions.

Impact

The development of stable, very accurate atomic clocks has far-reaching implications for many areas of science. Global positioning satellites send signals to receivers on ships and airplanes. By timing the signals, the receiver’s position is calculated to within several meters of its true location. Chemists are interested in finding the speed of chemical reactions, and atomic clocks are used for this purpose. The atomic clock led to the development of the maser (an acronym for microwave amplification by stimulated emission of radiation), which is used to amplify weak radio signals, and the maser led to the development of the laser, a light-frequency maser that has more uses than can be listed here.

Atomic clocks have been used to test Einstein’s theories of relativity that state that time on a moving clock, as observed by a stationary observer, slows down, and that a clock slows down near a large mass (because of the effects of gravity).
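The quartz-ammonia correction scheme described above amounts to a feedback loop: any drift of the crystal off the ammonia line shows up as unabsorbed radio power, and a correction signal steers the crystal back. A minimal sketch, in which the loop gain and the initial drift are invented for illustration (only the 23,870-megacycle figure comes from the text):

```python
# Feedback sketch of the quartz-ammonia loop: the ammonia absorption
# line serves as the frequency reference for the quartz oscillator.
AMMONIA_MC = 23_870              # ammonia line, megacycles per second

def correction(freq_mc, gain=0.5):
    """Error signal: zero when the crystal sits on the ammonia line."""
    return gain * (AMMONIA_MC - freq_mc)

freq = 23_874.0                  # pretend the crystal drifted 4 Mc high
for _ in range(20):
    freq += correction(freq)     # detector feedback pulls it back
print(round(freq, 3))            # settles back onto 23870.0
```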
Under normal conditions of low velocities and low mass, the changes in time are very small, but atomic clocks are accurate and stable enough to detect even these small changes. In such experiments, three sets of clocks were used—one group remained on Earth, one was flown west
around the earth on a jet, and the last set was flown east. By comparing the times of the in-flight sets with the stationary set, the predicted slowdowns of time were observed and the theories were verified. See also Carbon dating; Cyclotron; Electric clock; Laser; Synchrocyclotron; Tevatron accelerator. Further Reading Audoin, Claude, and Bernard Guinot. The Measurement of Time: Time, Frequency, and the Atomic Clock. New York: Cambridge University Press, 2001. Barnett, Jo Ellen. Time’s Pendulum: The Quest to Capture Time—From Sundials to Atomic Clocks. New York: Plenum Trade, 1998. Bendick, Jeanne. The First Book of Time. New York: F. Watts, 1970. “Ultra-Accurate Atomic Clock Unveiled at NIST Laboratory.” Research and Development 42, no. 2 (February, 2000).


Atomic-powered ship

The invention: The world’s first atomic-powered merchant ship demonstrated a peaceful use of atomic power.

The people behind the invention:
Otto Hahn (1879-1968), a German chemist
Enrico Fermi (1901-1954), an Italian American physicist
Dwight D. Eisenhower (1890-1969), president of the United States, 1953-1961

Splitting the Atom

In 1938, Otto Hahn, working at the Kaiser Wilhelm Institute for Chemistry, discovered that bombarding uranium atoms with neutrons causes them to split into two smaller, lighter atoms. A large amount of energy is released during this process, which is called “fission.” When one kilogram of uranium is fissioned, it releases the same amount of energy as does the burning of 3,000 metric tons of coal. The fission process also releases new neutrons. Enrico Fermi suggested that these new neutrons could be used to split more uranium atoms and produce a chain reaction. Fermi and his assistants produced the first human-made chain reaction at the University of Chicago on December 2, 1942.

Although the first use of this new energy source was the atomic bombs that were used to defeat Japan in World War II, it was later realized that a carefully controlled chain reaction could produce useful energy. The submarine Nautilus, launched in 1954, used the energy released from fission to make steam to drive its turbines. U.S. President Dwight David Eisenhower proposed his “Atoms for Peace” program in December, 1953. On April 25, 1955, President Eisenhower announced that the “Atoms for Peace” program would be expanded to include the design and construction of an atomic-powered merchant ship, and he signed the legislation authorizing the construction of the ship in 1956.
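The coal comparison can be double-checked with rough textbook figures (about 200 MeV released per fission, coal at roughly 29 megajoules per kilogram; neither constant is from this article):

```python
# Rough check of "1 kilogram of fissioned uranium = about 3,000 metric
# tons of coal."  Assumed constants: ~200 MeV per fission, coal at
# ~29 MJ/kg.
AVOGADRO = 6.022e23
MEV_TO_J = 1.602e-13

fissions = AVOGADRO * 1000 / 235          # atoms in 1 kg of uranium 235
energy_j = fissions * 200 * MEV_TO_J      # roughly 8e13 joules
coal_tons = energy_j / 29e6 / 1000        # equivalent coal, metric tons
print(round(coal_tons))                   # on the order of 3,000 tons
```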


Savannah’s Design and Construction A contract to design an atomic-powered merchant ship was awarded to George G. Sharp, Inc., on April 4, 1957. The ship was to carry approximately one hundred passengers (later reduced to sixty to reduce the ship’s cost) and 10,886 metric tons of cargo while making a speed of 21 knots, about 39 kilometers per hour. The ship was to be 181 meters long and 23.7 meters wide. The reactor was to provide steam for a 20,000-horsepower turbine that would drive the ship’s propeller. Most of the ship’s machinery was similar to that of existing ships; the major difference was that steam came from a reactor instead of a coal- or oil-burning boiler. New York Shipbuilding Corporation of Camden, New Jersey, won the contract to build the ship on November 16, 1957. States Marine Lines was selected in July, 1958, to operate the ship. It was christened Savannah and launched on July 21, 1959. The name Savannah was chosen to honor the first ship to use steam power while crossing an ocean. This earlier Savannah was launched in New York City in 1818. Ships are normally launched long before their construction is complete, and the new Savannah was no exception. It was finally turned over to States Marine Lines on May 1, 1962. After extensive testing by its operators and delays caused by labor union disputes, it began its maiden voyage from Yorktown, Virginia, to Savannah, Georgia, on August 20, 1962. The original budget for design and construction was $35 million, but by this time, the actual cost was about $80 million. Savannah‘s nuclear reactor was fueled with about 7,000 kilograms (15,400 pounds) of uranium. Uranium consists of two forms, or “isotopes.” These are uranium 235, which can fission, and uranium 238, which cannot. Naturally occurring uranium is less than 1 percent uranium 235, but the uranium in Savannah‘s reactor had been enriched to contain nearly 5 percent of this isotope. 
Thus, there was less than 362 kilograms of usable uranium in the reactor. The ship was able to travel about 800,000 kilometers on this initial fuel load. Three and a half million kilograms of water per hour flowed through the reactor under a pressure of 5,413 kilograms per square centimeter. It entered the reactor at 298.8 degrees Celsius and left at
317.7 degrees Celsius. Water leaving the reactor passed through a heat exchanger called a “steam generator.” In the steam generator, reactor water flowed through many small tubes. Heat passed through the walls of these tubes and boiled water outside them. About 113,000 kilograms of steam per hour were produced in this way at a pressure of 1,434 kilograms per square centimeter and a temperature of 240.5 degrees Celsius.

Labor union disputes dogged Savannah’s early operations, and it did not start its first trans-Atlantic crossing until June 8, 1964. Savannah was never a money maker. Even in the 1960’s, the trend was toward much bigger ships. It was announced that the ship would be retired in August, 1967, but that did not happen. It was finally put out of service in 1971. Later, Savannah was placed on permanent display at Charleston, South Carolina.

Consequences

Following the United States’ lead, Germany and Japan built atomic-powered merchant ships. The Soviet Union is believed to have built several atomic-powered icebreakers. Germany’s Otto Hahn, named for the scientist who first split the atom, began service in 1968, and Japan’s Mutsu was under construction as Savannah retired. Numerous studies conducted in the early 1970’s claimed to prove that large atomic-powered merchant ships were more profitable than oil-fired ships of the same size. Several conferences devoted to this subject were held, but no new ships were built. Although the U.S. Navy has continued to use reactors to power submarines, aircraft carriers, and cruisers, atomic power has not been widely used for merchant-ship propulsion. Labor union problems such as those that haunted Savannah, high insurance costs, and high construction costs are probably the reasons. Public opinion, after the reactor accidents at Three Mile Island (in 1979) and Chernobyl (in 1986), is also a factor.

See also Gyrocompass; Hovercraft; Nuclear reactor; Supersonic passenger plane.


Further Reading
Epstein, Sam, Beryl Epstein, and Raymond Burns. Enrico Fermi, Father of Atomic Power. Champaign, Ill.: Garrard Publishing, 1970.
Hahn, Otto, and Willy Ley. Otto Hahn: A Scientific Autobiography. New York: C. Scribner’s Sons, 1966.
Hoffman, Klaus. Otto Hahn: Achievement and Responsibility. New York: Springer, 2001.
“The Race to Power Bigger, Faster Ships.” Business Week 2305 (November 10, 1973).
“Underway on Nuclear Power.” All Hands 979 (November, 1998).


Autochrome plate

The invention: The first commercially successful process in which a single exposure in a regular camera produced a color image. The people behind the invention: Louis Lumière (1864-1948), a French inventor and scientist Auguste Lumière (1862-1954), an inventor, physician, physicist, chemist, and botanist Alphonse Seyewetz, a skilled scientist and assistant of the Lumière brothers Adding Color In 1882, Antoine Lumière, painter, pioneer photographer, and father of Auguste and Louis, founded a factory to manufacture photographic gelatin dry-plates. After the Lumière brothers took over the factory’s management, they expanded production to include roll film and printing papers in 1887 and also carried out joint research that led to fundamental discoveries and improvements in photographic development and other aspects of photographic chemistry. While recording and reproducing the actual colors of a subject was not possible at the time of photography’s inception (about 1822), the first practical photographic process, the daguerreotype, was able to render both striking detail and good tonal quality. Thus, the desire to produce full-color images, or some approximation to realistic color, occupied the minds of many photographers and inventors, including Louis and Auguste Lumière, throughout the nineteenth century. As researchers set out to reproduce the colors of nature, the first process that met with any practical success was based on the additive color theory expounded by the Scottish physicist James Clerk Maxwell in 1861. He believed that any color can be created by adding together red, green, and blue light in certain proportions. Maxwell, in his experiments, had taken three negatives through screens or filters of these additive primary colors. He then took slides made from these negatives and projected the slides through


Antoine Lumière and Sons

Antoine Lumière was explosive in temperament, loved a good fight, and despised Americans. With these qualities—and his sons to take care of the practicalities—he turned France into a leader of the early photography and film industries.

Lumière was born into a family of wine growers in 1840 and trained to be a sign painter. Bored with his job, he learned the new art of photography, set up a studio in Lyon, and began to experiment with ways to make his own photographic plates. Failures led to frustration, and frustration ignited his temper, which often ended in his smashing the furniture and glassware nearby.

His sons, Auguste, born 1862, and Louis, born 1864, came to the rescue. Louis, a science whiz as a teenager, succeeded where his father had failed. The dry plate he invented, Blue Label, was the most sensitive yet. The Lumières set up a factory to manufacture the plates and quickly found themselves wealthy, but the old man’s love of extravagant spending and parties led them to the door of bankruptcy in 1882. His sons had to take control to save the family finances.

The father, an ardent French patriot, soon threw himself into a new crusade. American tariffs made it impossible for the Lumières to make a profit selling their photographic plates in the United States, which so angered the old man that he looked for revenge. He found it in the form of Thomas Edison’s Kinetoscope in 1894. He got hold of samples, and soon the family factory was making motion picture film of its own and could undersell Edison in France. Louis also invented a projector, adapted from a sewing machine, that made it possible for movies to be shown to audiences.

Before Antoine Lumière died in Paris in 1911, he had the satisfaction of seeing his beloved France producing better, cheaper photographic products than those available from America, as well as becoming a pioneer in film making.

the same filters onto a screen so that their images were superimposed. As a result, he found that it was possible to reproduce the exact colors as well as the form of an object. Unfortunately, since colors could not be printed in their tonal relationships on paper before the end of the nineteenth century,
Maxwell’s experiment was unsuccessful. Although Frederick E. Ives of Philadelphia, in 1892, optically united three transparencies so that they could be viewed in proper alignment by looking through a peephole, viewing the transparencies was still not as simple as looking at a black-and-white photograph. The Autochrome Plate The first practical method of making a single photograph that could be viewed without any apparatus was devised by John Joly of Dublin in 1893. Instead of taking three separate pictures through three colored filters, he took one negative through one filter minutely checkered with microscopic areas colored red, green, and blue. The filter and the plate were exactly the same size and were placed in contact with each other in the camera. After the plate was developed, a transparency was made, and the filter was permanently attached to it. The black-and-white areas of the picture allowed more or less light to shine through the filters; if viewed from a proper distance, the colored lights blended to form the various colors of nature. In sum, the potential principles of additive color and other methods and their potential applications in photography had been discovered and even experimentally demonstrated by 1880. Yet a practical process of color photography utilizing these principles could not be produced until a truly panchromatic emulsion was available, since making a color print required being able to record the primary colors of the light cast by the subject. Louis and Auguste Lumière, along with their research associate Alphonse Seyewetz, succeeded in creating a single-plate process based on this method in 1903. It was introduced commercially as the autochrome plate in 1907 and was soon in use throughout the world. This process is one of many that take advantage of the limited resolving power of the eye. 
Grains or dots too small to be recognized as separate units are accepted in their entirety and, to the sense of vision, appear as tones and continuous color.
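That additive blending can be sketched numerically. The patches below are illustrative RGB triples, not measurements of real autochrome grains:

```python
# Additive-color sketch: a cluster of tiny pure red, green, and blue
# grains, too small to resolve, averages to one perceived color.

def perceived(grains):
    """Average RGB triples, as the eye does with unresolvably small dots."""
    n = len(grains)
    return tuple(round(sum(channel) / n) for channel in zip(*grains))

# Equal red, green, and blue grains read as neutral gray (each grain
# passes only a third of the light, hence gray rather than white).
print(perceived([(255, 0, 0), (0, 255, 0), (0, 0, 255)]))   # (85, 85, 85)

# Two red grains to one green grain shift the patch toward orange.
print(perceived([(255, 0, 0), (255, 0, 0), (0, 255, 0)]))   # (170, 85, 0)
```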


Impact

While the autochrome plate remained one of the most popular color processes until the 1930’s, it was eventually superseded by subtractive color processes. Leopold Mannes and Leopold Godowsky, both musicians and amateur photographic researchers who eventually joined forces with Eastman Kodak research scientists, did the most to perfect the Lumière brothers’ advances in making color photography practical. Their collaboration led to the introduction in 1935 of Kodachrome, a subtractive process in which a single sheet of film is coated with three layers of emulsion, each sensitive to one primary color. A single exposure produces a color image.

Color photography is now commonplace. The amateur market is enormous, and the snapshot is almost always taken in color. Commercial and publishing markets use color extensively. Even photography as an art form, which was done in black and white through most of its history, has turned increasingly to color.

See also Color film; Instant photography; Xerography.

Further Reading
Collins, Douglas. The Story of Kodak. New York: Harry N. Abrams, 1990.
Glendinning, Peter. Color Photography: History, Theory, and Darkroom Technique. Englewood Cliffs, N.J.: Prentice-Hall, 1985.
Lartigue, Jacques-Henri, and Georges Herscher. The Autochromes of J. H. Lartigue, 1912-1927. New York: Viking Press, 1981.
Tolstoy, Ivan. James Clerk Maxwell: A Biography. Chicago: University of Chicago Press, 1982.
Wood, John. The Art of the Autochrome: The Birth of Color Photography. Iowa City: University of Iowa Press, 1993.


BASIC programming language

The invention: An interactive computer system and simple programming language that made it easier for nontechnical people to use computers.

The people behind the invention:
John G. Kemeny (1926-1992), the chairman of Dartmouth’s mathematics department
Thomas E. Kurtz (1928- ), the director of the Kiewit Computation Center at Dartmouth
Bill Gates (1955- ), a cofounder and later chairman of the board and chief operating officer of the Microsoft Corporation

The Evolution of Programming

The first digital computers were developed during World War II (1939-1945) to speed the complex calculations required for ballistics, cryptography, and other military applications. Computer technology developed rapidly, and the 1950’s and 1960’s saw computer systems installed throughout the world. These systems were very large and expensive, requiring many highly trained people for their operation.

The calculations performed by the first computers were determined solely by their electrical circuits. In the 1940’s, the American mathematician John von Neumann and others pioneered the idea of computers storing their instructions in a program, so that changes in calculations could be made without rewiring their circuits. The programs were written in machine language, long lists of zeros and ones corresponding to on and off conditions of circuits. During the 1950’s, “assemblers” were introduced that used short names for common sequences of instructions and were, in turn, transformed into the zeros and ones intelligible to the computer. The late 1950’s saw the introduction of high-level languages, notably Formula Translation (FORTRAN), Common Business Oriented Language (COBOL), and Algorithmic Language (ALGOL), which used English words to
communicate instructions to the computer. Unfortunately, these high-level languages were complicated; they required some knowledge of the computer equipment and were designed to be used by scientists, engineers, and other technical experts.

Developing BASIC

John G. Kemeny was chairman of the department of mathematics at Dartmouth College in Hanover, New Hampshire. In 1962, Thomas E. Kurtz, Dartmouth’s computing director, approached Kemeny with the idea of implementing a computer system at Dartmouth College. Both men were dedicated to the idea that liberal arts students should be able to make use of computers. Although the English commands of FORTRAN and ALGOL were a tremendous improvement over the cryptic instructions of assembly language, they were both too complicated for beginners. Kemeny convinced Kurtz that they needed a completely new language, simple enough for beginners to learn quickly, yet flexible enough for many different kinds of applications.

The language they developed was known as the “Beginner’s All-purpose Symbolic Instruction Code,” or BASIC. The original language consisted of fourteen different statements. Each line of a BASIC program was preceded by a number. Line numbers were referenced by control flow statements, such as, “IF X = 9 THEN GOTO 200.” Line numbers were also used as an editing reference. If line 30 of a program contained an error, the programmer could make the necessary correction merely by retyping line 30.

Programming in BASIC was first taught at Dartmouth in the fall of 1964. Students were ready to begin writing programs after two hours of classroom lectures. By June of 1968, more than 80 percent of the undergraduates at Dartmouth could write a BASIC program. Most of them were not science majors and used their programs in conjunction with other nontechnical courses.
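The numbered-line, GOTO-style control flow described above can be mimicked with a toy interpreter. This sketch handles only a handful of statements and single "=" comparisons; it is an illustration, not Dartmouth's implementation:

```python
# Toy interpreter for a BASIC-like numbered-line language.  Supports
# only LET, PRINT, IF ... THEN GOTO, GOTO, and END.

def run(program):
    lines = sorted(program)              # execute in line-number order
    env = {}                             # variable bindings
    pc = 0                               # index into the sorted numbers
    while pc < len(lines):
        stmt = program[lines[pc]].strip()
        if stmt.startswith("LET"):
            name, expr = stmt[3:].split("=", 1)
            env[name.strip()] = eval(expr, {}, env)
            pc += 1
        elif stmt.startswith("PRINT"):
            print(eval(stmt[5:], {}, env))
            pc += 1
        elif stmt.startswith("IF"):
            cond, target = stmt[2:].split("THEN GOTO")
            if eval(cond.replace("=", "=="), {}, env):
                pc = lines.index(int(target))   # jump to the target line
            else:
                pc += 1
        elif stmt.startswith("GOTO"):
            pc = lines.index(int(stmt[4:]))
        else:                            # END stops the program
            break

# Sum the integers 1 through 5, BASIC style.
run({
    10: "LET X = 0",
    20: "LET N = 1",
    30: "LET X = X + N",
    40: "LET N = N + 1",
    50: "IF N = 6 THEN GOTO 70",
    60: "GOTO 30",
    70: "PRINT X",                       # prints 15
    80: "END",
})
```

Retyping the dictionary entry for line 30 corresponds to the editing-by-line-number workflow described above.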
Kemeny and Kurtz, and later others under their supervision, wrote more powerful versions of BASIC that included support for graphics on video terminals and structured programming. The creators of BASIC, however, always tried to maintain their original design goal of keeping BASIC simple enough for beginners.


Consequences

Kemeny and Kurtz encouraged the widespread adoption of BASIC by allowing other institutions to use their computer system and by placing BASIC in the public domain. Over time, they shaped BASIC into a powerful language with numerous features added in response to the needs of its users.

What Kemeny and Kurtz had not foreseen was the advent of the microprocessor chip in the early 1970’s, which revolutionized computer technology. By 1975, microcomputer kits were being sold to hobbyists for well under a thousand dollars. The earliest of these was the Altair. That same year, prelaw student William H. Gates (1955- ) was persuaded by a friend, Paul Allen, to drop out of Harvard University and help create a version of BASIC that would run on the Altair. Gates and Allen formed a company, Microsoft Corporation, to sell their BASIC interpreter, which was designed to fit into the tiny memory of the Altair. It was about as simple as the original Dartmouth BASIC but had to depend heavily on the computer hardware. Most computers purchased for home use still include a version of Microsoft Corporation’s BASIC.

See also BINAC computer; COBOL computer language; FORTRAN programming language; SAINT; Supercomputer.

Further Reading
Kemeny, John G., and Thomas E. Kurtz. True BASIC: The Structured Language System for the Future. Reference Manual. West Lebanon, N.H.: True BASIC, 1988.
Kurtz, Thomas E., and John G. Kemeny. BASIC. 5th ed. Hanover, N.H., 1970.
Spencer, Donald D. Great Men and Women of Computing. 2d ed. Ormond Beach, Fla.: Camelot Publishing, 1999.


Bathyscaphe

The invention: A submersible vessel capable of exploring the deepest trenches of the world’s oceans.

The people behind the invention:
William Beebe (1877-1962), an American biologist and explorer
Auguste Piccard (1884-1962), a Swiss-born Belgian physicist
Jacques Piccard (1922- ), a Swiss ocean engineer

Early Exploration of the Deep Sea

The first human penetration of the deep ocean was made by William Beebe in 1934, when he descended 923 meters into the Atlantic Ocean near Bermuda. His diving chamber was a 1.5-meter steel ball that he named Bathysphere, from the Greek word bathys (deep) and the word sphere, for its shape. He found that a sphere resists pressure in all directions equally and is not easily crushed if it is constructed of thick steel. The bathysphere weighed 2.5 metric tons. It had no buoyancy and was lowered from a surface ship on a single 2.2-centimeter cable; a broken cable would have meant certain death for the bathysphere’s passengers.

Numerous deep dives by Beebe and his engineer colleague, Otis Barton, were the first uses of submersibles for science. Through two small viewing ports, they were able to observe and photograph many deep-sea creatures in their natural habitats for the first time. They also made valuable observations on the behavior of light as the submersible descended, noting that the green surface water became pale blue at 100 meters, dark blue at 200 meters, and nearly black at 300 meters.

A technique called “contour diving” was particularly dangerous. In this practice, the bathysphere was slowly towed close to the seafloor. On one such dive, the bathysphere narrowly missed crashing into a coral crag, but the explorers learned a great deal about the submarine geology of Bermuda and the biology of a coral-reef community. Beebe wrote several popular and scientific books about his adventures that did much to arouse interest in the ocean.


Testing the Bathyscaphe

The next important phase in the exploration of the deep ocean was led by the Swiss physicist Auguste Piccard. In 1948, he launched a new type of deep-sea research craft that did not require a cable and that could return to the surface by means of its own buoyancy. He called the craft a bathyscaphe, which is Greek for “deep boat.”

Piccard began work on the bathyscaphe in 1937, supported by a grant from the Belgian National Scientific Research Fund. The German occupation of Belgium early in World War II cut the project short, but Piccard continued his work after the war. The finished bathyscaphe was named FNRS 2, for the initials of the Belgian fund that had sponsored the project. The vessel was ready for testing in the fall of 1948.

The first bathyscaphe, as well as later versions, consisted of two basic components: first, a heavy steel cabin to accommodate observers, which looked somewhat like an enlarged version of Beebe’s bathysphere; and second, a light container called a float, filled with gasoline, that provided lifting power because it was lighter than water. Enough iron shot was stored in silos to cause the vessel to descend. When this ballast was released, the gasoline in the float gave the bathyscaphe sufficient buoyancy to return to the surface.

Piccard’s bathyscaphe had a number of ingenious devices. Jacques-Yves Cousteau, inventor of the Aqualung six years earlier, contributed a mechanical claw that was used to take samples of rocks, sediment, and bottom creatures. A seven-barreled harpoon gun, operated by water pressure, was attached to the sphere to capture specimens of giant squids or other large marine animals for study. The harpoons had electrical-shock heads to stun the “sea monsters,” and if that did not work, the harpoon could give a lethal injection of strychnine poison. Inside the sphere were various instruments for measuring the deep-sea environment, including a Geiger counter for monitoring cosmic rays.
The air-purification system could support two people for up to twenty-four hours. The bathyscaphe had a radar mast to broadcast its location as soon as it surfaced. This was essential because there was no way for the crew to open the sphere from the inside.
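The float-and-ballast principle amounts to simple arithmetic: the craft rises whenever the seawater its float displaces weighs more than the craft itself. The sketch below illustrates this with assumed round figures (gasoline about 0.70 and seawater about 1.025 metric tons per cubic meter; the cabin, float, and ballast tonnages are hypothetical, not the specifications of any actual bathyscaphe).

```python
# Back-of-the-envelope buoyancy check for a bathyscaphe-like craft.
# All figures are illustrative assumptions, not real design data.

RHO_SEAWATER = 1.025   # metric tons per cubic meter
RHO_GASOLINE = 0.70    # lighter than seawater, hence the lift

def net_lift(cabin_t, gasoline_m3, ballast_t):
    """Positive result -> the craft rises; negative -> it sinks."""
    displaced = gasoline_m3 * RHO_SEAWATER                    # seawater pushed aside
    weight = cabin_t + gasoline_m3 * RHO_GASOLINE + ballast_t # total craft weight
    return displaced - weight

# With the iron shot aboard, the craft descends; releasing the ballast
# leaves the same float with enough excess lift to return to the surface.
print(net_lift(cabin_t=10.0, gasoline_m3=34.0, ballast_t=3.0))  # negative: descends
print(net_lift(cabin_t=10.0, gasoline_m3=34.0, ballast_t=0.0))  # positive: ascends
```

Because gasoline is nearly incompressible, the float keeps most of its lift even under deep-sea pressure, which is why Piccard chose it over air.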


Auguste Piccard

Auguste Piccard used balloons to set records in altitude both above sea level and below sea level. However, setting records was not his purpose: He went where no one had gone before for the sake of science.

Born in Basel, Switzerland, in 1884, Auguste and his twin brother, Jean-Félix Piccard, studied in Zurich. After university in 1913, Auguste, a physicist, and Jean-Félix, a chemist, took up hot-air ballooning, and they joined the balloon section of the Swiss Army in 1915. Auguste moved to Brussels, Belgium, in 1922 to take a professorship of applied physics, and there he continued his ballooning. His subject of interest was cosmic rays, and in order to study them he had to get above the thick lower layer of atmosphere. Accordingly, he designed hydrogen-filled balloons that could reach high altitude. A ball-shaped, pressurized gondola carried him, his instruments, and one colleague to 51,775 feet altitude in 1931 and to 53,152 feet in 1932. Both were records.

Auguste, working with his son Jacques, then turned his attention to the sea. In order to explore the largely unknown world underwater, he built the bathyscaphe. It was really just another type of balloon, one which was made of steel and carried him inside. His dives with his son in various models of bathyscaphe set record after record. Their 1953 dive down 10,300 feet into the Mediterranean Sea was the deepest until Jacques, accompanied by a U.S. Navy officer, descended to the deepest spot on Earth seven years later.


The FNRS 2 was first tested off the Cape Verde Islands with the assistance of the French navy. Although Piccard descended to only 25 meters, the dive demonstrated the potential of the bathyscaphe. On the second dive, the vessel was severely damaged by waves, and further tests were suspended. A redesigned and rebuilt bathyscaphe, renamed FNRS 3 and operated by the French navy, descended to a depth of 4,049 meters off Dakar, Senegal, on the west coast of Africa in early 1954.

In August, 1953, Auguste Piccard, with his son Jacques, launched a greatly improved bathyscaphe, the Trieste, which they named for the Italian city in which it was built. In September of the same year, the Trieste successfully dived to 3,150 meters in the Mediterranean Sea. The Piccards glimpsed, for the first time, animals living on the seafloor at that depth.

In 1958, the U.S. Navy purchased the Trieste and transported it to California, where it was equipped with a new cabin designed to enable the vessel to reach the seabed of the great oceanic trenches. Several successful descents were made in the Pacific by Jacques Piccard, and on January 23, 1960, Piccard, accompanied by Lieutenant Donald Walsh of the U.S. Navy, dived a record 10,916 meters to the bottom of the Mariana Trench near the island of Guam.

Impact

The oceans have always raised formidable barriers to humanity’s curiosity and understanding. In 1960, two events demonstrated the ability of humans to travel underwater for prolonged periods and to observe the extreme depths of the ocean. The nuclear submarine Triton circumnavigated the world while submerged, and Jacques Piccard and Lieutenant Donald Walsh descended nearly 11 kilometers to the bottom of the ocean’s greatest depression aboard the Trieste.

After sinking for four hours and forty-eight minutes, the Trieste landed in the Challenger Deep of the Mariana Trench, the deepest known spot on the ocean floor. The explorers remained on the bottom for only twenty minutes, but they answered one of the biggest questions about the sea: Can animals live in the immense cold and pressure of the deep trenches? Observations of red shrimp and flatfishes proved that the answer was yes.

The Trieste played another important role in undersea exploration when, in 1963, it located and photographed the wreckage of the nuclear submarine Thresher.
The Thresher had mysteriously disappeared on a test dive off the New England coast, and the Navy had been unable to find a trace of the lost submarine using surface vessels equipped with sonar and remote-control cameras on cables. Only the Trieste could actually search the bottom. On its third dive, the bathyscaphe found a piece of the wreckage, and it eventually photographed a 3,000-meter trail of debris that led to Thresher’s hull, at a depth of 2.5 kilometers.


These exploits showed clearly that scientific submersibles could be used anywhere in the ocean. Piccard’s work thus opened the last geographic frontier on Earth.

See also Aqualung; Bathysphere; Sonar; Ultrasound.

Further Reading
Ballard, Robert D., and Will Hively. The Eternal Darkness: A Personal History of Deep-Sea Exploration. Princeton, N.J.: Princeton University Press, 2000.
Piccard, Jacques, and Robert S. Dietz. Seven Miles Down: The Story of the Bathyscaphe Trieste. New York: Longmans, 1962.
Welker, Robert Henry. Natural Man: The Life of William Beebe. Bloomington: Indiana University Press, 1975.


Bathysphere

The invention: The first successful chamber for manned deep-sea diving missions.

The people behind the invention:
William Beebe (1877-1962), an American naturalist and curator of ornithology
Otis Barton (1899- ), an American engineer
John Tee-Van (1897-1967), an American general associate with the New York Zoological Society
Gloria Hollister Anable (1903?-1988), an American research associate with the New York Zoological Society

Inner Space

Until the 1930’s, the vast depths of the oceans had remained largely unexplored, although people did know something of the ocean’s depths. Soundings and nettings of the ocean bottom had been made many times by a number of expeditions since the 1870’s. Diving helmets had allowed humans to descend more than 91 meters below the surface, and the submarine allowed them to reach a depth of nearly 120 meters. There was no firsthand knowledge, however, of what it was like in the deepest reaches of the ocean: inner space.

The person who gave the world the first account of life at great depths was William Beebe. When he announced in 1926 that he was attempting to build a craft to explore the ocean, he was already a well-known naturalist. Although his only degrees had been honorary doctorates, he was graduated as a special student in the Department of Zoology of Columbia University in 1898. He began his lifelong association with the New York Zoological Society in 1899.

It was during a trip to the Galápagos Islands off the west coast of South America that Beebe turned his attention to oceanography. He became the first scientist to use a diving helmet in fieldwork, swimming in the shallow waters. He continued this shallow-water work at the new station he established in 1928, with the permission of English authorities, on the tiny island of Nonesuch in the Bermudas. Beebe realized, however, that he had reached the limits of the current technology and that to study the animal life of the ocean depths would require a new approach.

A New Approach

While he was considering various cylindrical designs for a new deep-sea exploratory craft, Beebe was introduced to Otis Barton. Barton, a young New Englander who had been trained as an engineer at Harvard University, had turned to the problems of ocean diving while doing postgraduate work at Columbia University. In December, 1928, Barton brought his blueprints to Beebe. Beebe immediately saw that Barton’s design was what he was looking for, and the two went ahead with the construction of Barton’s craft.

The “bathysphere,” as Beebe named the device, weighed 2,268 kilograms and had a diameter of 1.45 meters and steel walls 3.8 centimeters thick. The door, weighing 180 kilograms, would be fastened over a manhole with ten bolts. Four windows, made of fused quartz, were ordered from the General Electric Company at a cost of $500 each. A 250-watt water spotlight lent by the Westinghouse Company provided the exterior illumination, and a telephone lent by the Bell Telephone Laboratory provided a means of communicating with the surface. The breathing apparatus consisted of two oxygen tanks that allowed 2 liters of oxygen per minute to escape into the sphere. During the dive, the carbon dioxide and moisture were removed, respectively, by trays containing soda lime and calcium chloride. A winch would lower the bathysphere on a steel cable.

In early July, 1930, after several test dives, the first manned dive commenced. Beebe and Barton descended to a depth of 244 meters. A short circuit in one of the switches showered them with sparks momentarily, but the descent was largely a success. Beebe and Barton had descended farther than any human. Two more days of diving yielded a final dive record of 435 meters below sea level.
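The engineering challenge those thick steel walls answered follows from the hydrostatic pressure formula p = ρgh. The short calculation below, using an assumed average seawater density of 1,025 kg/m³, gives the pressure the hull bore at the bathysphere’s eventual 923-meter record depth.

```python
# Hydrostatic gauge pressure at depth: p = rho * g * h.
# Seawater density is an assumed average value.

RHO = 1025.0    # kg/m^3, seawater (assumed)
G = 9.81        # m/s^2, gravitational acceleration

def pressure_atm(depth_m):
    """Gauge pressure at a given depth, expressed in atmospheres."""
    pascals = RHO * G * depth_m
    return pascals / 101_325.0   # one standard atmosphere in pascals

# At the 923-meter record depth, the hull bore roughly ninety times
# atmospheric pressure, pressing inward on every square centimeter.
print(round(pressure_atm(923), 1))
```

The spherical shape distributes this load evenly, which is why, as the article notes, a thick steel ball resists crushing better than any other form.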
Beebe and the other members of his staff (ichthyologist John Tee-Van and zoologist Gloria Hollister Anable) saw many species of fish and other marine life that previously had been seen only after being caught in nets. These first dives proved that an undersea exploratory craft had potential value, at least for deep water. After 1932, the bathysphere went on display at the Century of Progress Exhibition in Chicago.

In late 1933, the National Geographic Society offered to sponsor another series of dives. Although a new record was not a stipulation, Beebe was determined to supply one. The bathysphere was completely refitted before the new dives. An unmanned test dive to 920 meters was made on August 7, 1934, once again off Nonesuch Island. Minor adjustments were made, and on the morning of August 11, the first dive commenced, attaining a depth of 765 meters and recording a number of new scientific observations. Several days later, on August 15, the weather was again right for the dive. This dive also paid rich dividends in the number of species of deep-sea life observed. Finally, with only a few turns of cable left on the winch spool, the bathysphere reached a record depth of 923 meters—almost a kilometer below the ocean’s surface.

Impact

Barton continued to work on the bathysphere design for some years. It was not until 1948, however, that his new design, the benthoscope, was finally constructed. It was similar in basic design to the bathysphere, though the walls were increased to withstand greater pressures. Other improvements were made, but the essential strengths and weaknesses remained. On August 16, 1949, Barton, diving alone, broke the record he and Beebe had set earlier, reaching a depth of 1,372 meters off the coast of Southern California.

The bathysphere effectively marked the end of the tethered exploration of the deep, but it pointed the way to other possibilities. The first advance in this area came in 1943, when undersea explorer Jacques-Yves Cousteau and engineer Émile Gagnan developed the Aqualung underwater breathing apparatus, which made possible unfettered and largely unencumbered exploration down to about 60 meters.
This was by no means deep diving, but it was clearly a step along the lines that Beebe had envisioned for underwater research.

A further step came in the development of the bathyscaphe by Auguste Piccard, the renowned Swiss physicist, who, in the 1930’s, had conquered the stratosphere in high-altitude balloons. The bathyscaphe was a balloon that operated in reverse. A spherical steel passenger cabin was attached beneath a large float filled with gasoline for buoyancy. Several tons of iron pellets held by electromagnets acted as ballast. The bathyscaphe would sink slowly to the bottom of the ocean, and when its passengers wished to return, the ballast would be dumped. The craft would then slowly rise to the surface. On September 30, 1953, Piccard touched bottom off the coast of Italy, some 3,000 meters below sea level.

See also Aqualung; Bathyscaphe; Sonar; Ultrasound.

Further Reading
Ballard, Robert D., and Will Hively. The Eternal Darkness: A Personal History of Deep-Sea Exploration. Princeton, N.J.: Princeton University Press, 2000.
Forman, Will. The History of American Deep Submersible Operations, 1775-1995. Flagstaff, Ariz.: Best, 1999.
Welker, Robert Henry. Natural Man: The Life of William Beebe. Bloomington: Indiana University Press, 1975.


BINAC computer

The invention: The first stored-program electronic computer completed in the United States.

The people behind the invention:
John Presper Eckert (1919-1995), an American electrical engineer
John W. Mauchly (1907-1980), an American physicist
John von Neumann (1903-1957), a Hungarian American mathematician
Alan Mathison Turing (1912-1954), an English mathematician

Computer Evolution

In the 1820’s, there was a need for error-free mathematical and astronomical tables for use in navigation, unreliable versions of which were being produced by human “computers.” The problem moved English mathematician and inventor Charles Babbage to design and partially construct some of the earliest prototypes of modern computers, with substantial but inadequate funding from the British government. In the 1880’s, the search by the U.S. Bureau of the Census for a more efficient method of compiling the 1890 census led American inventor Herman Hollerith to devise a punched-card calculator, a machine that reduced by several years the time required to process the data.

The emergence of modern electronic computers began during World War II (1939-1945), when there was an urgent need in the American military for reliable and quickly produced mathematical tables that could be used to aim various types of artillery. The calculation of very complex tables had progressed somewhat since Babbage’s day, and the human computers were being assisted by mechanical calculators. Still, the growing demand for increased accuracy and efficiency was pushing the limits of these machines. Finally, in 1946, following three years of intense work at the University of Pennsylvania’s Moore School of Engineering, John Presper Eckert and John W. Mauchly presented their solution to the problems in the form of the Electronic Numerical Integrator and Calculator (ENIAC), the world’s first electronic general-purpose digital computer.

The ENIAC, built under a contract with the Army’s Ballistic Research Laboratory, became a great success for Eckert and Mauchly, but even before it was completed, they were setting their sights on loftier targets. The primary drawback of the ENIAC was the great difficulty involved in programming it. Whenever the operators needed to instruct the machine to shift from one type of calculation to another, they had to reset a vast array of dials and switches, unplug and replug numerous cables, and make various other adjustments to the multiple pieces of hardware involved. Such a mode of operation was deemed acceptable for the ENIAC because, in computing firing tables, it would need reprogramming only occasionally. Yet if instructions could be stored in a machine’s memory, along with the data, such a machine would be able to handle a wide range of calculations with ease and efficiency.

The Turing Concept

The idea of a stored-program computer had first appeared in a paper published by English mathematician Alan Mathison Turing in 1937. In this paper, Turing described a hypothetical machine of quite simple design that could be used to solve a wide range of logical and mathematical problems. One significant aspect of this imaginary Turing machine was that the tape that would run through it would contain both information to be processed and instructions on how to process it. The tape would thus be a type of memory device, storing both the data and the program as sets of symbols that the machine could “read” and understand. Turing never attempted to construct this machine, and it was not until 1946 that he developed a design for an electronic stored-program computer, a prototype of which was built in 1950.
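Turing’s idea can be made concrete with a toy machine. The sketch below is a hypothetical illustration, not Turing’s 1937 formulation: a transition table plays the role of the instructions carried on the tape, while the symbol cells play the role of the data being processed.

```python
# Toy Turing machine: the rules table stands in for the "instructions on
# the tape," the symbol cells for the data. (Illustrative only; Turing's
# original formulation differs in its details.)

def run_turing(tape, rules, state="start", pos=0, max_steps=100):
    tape = dict(enumerate(tape))           # sparse tape; unwritten cells are blank
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, " ")
        write, move, state = rules[(state, symbol)]
        tape[pos] = write                  # write the new symbol
        pos += 1 if move == "R" else -1    # move the read/write head
    return "".join(tape[i] for i in sorted(tape)).strip()

# Rules to invert a binary string: read a bit, write its complement,
# move right; halt on the first blank cell.
invert = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", " "): (" ", "R", "halt"),
}
print(run_turing("1011", invert))   # -> 0100
```

Changing the `rules` dictionary, without touching the machine itself, yields an entirely different computation: exactly the stored-program flexibility the ENIAC’s plugboards lacked.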
In the meantime, John von Neumann, a Hungarian American mathematician acquainted with Turing’s ideas, joined Eckert and Mauchly in 1944 and contributed to the design of ENIAC’s successor, the Electronic Discrete Variable Automatic Computer (EDVAC), another project financed by the Army. The EDVAC was the first computer designed to incorporate the concept of the stored program.


In March of 1946, Eckert and Mauchly, frustrated by a controversy over patent rights for the ENIAC, resigned from the Moore School. Several months later, they formed the Philadelphia-based Electronic Control Company on the strength of a contract from the National Bureau of Standards and the Census Bureau to build a much grander computer, the Universal Automatic Computer (UNIVAC). They thus abandoned the EDVAC project, which was finally completed by the Moore School in 1952, but they incorporated the main features of the EDVAC into the design of the UNIVAC.

Building the UNIVAC, however, proved to be much more involved and expensive than anticipated, and the funds provided by the original contract were inadequate. Eckert and Mauchly, therefore, took on several other smaller projects in an effort to raise funds. On October 9, 1947, they signed a contract with the Northrop Corporation of Hawthorne, California, to produce a relatively small computer to be used in the guidance system of a top-secret missile called the Snark, which Northrop was building for the Air Force. This computer, the Binary Automatic Computer (BINAC), turned out to be Eckert and Mauchly’s first commercial sale and the first stored-program computer completed in the United States.

The BINAC was designed to be at least a preliminary version of a compact, airborne computer. It had two main processing units. These contained a total of fourteen hundred vacuum tubes, a drastic reduction from the eighteen thousand used in the ENIAC. There were also two memory units, as well as two power supplies, an input converter unit, and an input console, which used either a typewriter keyboard or an encoded magnetic tape (the first time such tape was used for computer input). Because of its dual processing, memory, and power units, the BINAC was actually two computers, each of which would continually check its results against those of the other in an effort to identify errors.
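That dual-unit checking scheme can be sketched in software, though in the BINAC it was done in hardware. In the simplified model below (a hypothetical illustration, not BINAC’s circuitry), every operation is computed twice and a comparator flags any disagreement instead of letting a faulty result propagate.

```python
# Simplified model of dual-redundant computation in the BINAC style:
# two "units" run the same operation and a comparator checks agreement.
# (Illustrative only; the BINAC performed this checking in hardware.)

def dual_compute(op, value, fault=None):
    """Run op twice; `fault`, if given, corrupts unit B to model a failure."""
    result_a = op(value)                                  # unit A
    result_b = fault(op(value)) if fault else op(value)   # unit B
    if result_a != result_b:
        raise RuntimeError(f"units disagree: {result_a!r} vs {result_b!r}")
    return result_a

square = lambda x: x * x
print(dual_compute(square, 12))                      # both units agree: 144

try:
    dual_compute(square, 12, fault=lambda r: r + 1)  # injected fault is caught
except RuntimeError as err:
    print("detected:", err)
```

The design trades doubled hardware for the ability to detect (though not correct) a single failing unit, the same trade-off later adopted in many fault-tolerant computers.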
The BINAC became operational in August, 1949. Public demonstrations of the computer were held in Philadelphia from August 18 through August 20.


Impact

The design embodied in the BINAC is the real source of its significance. It demonstrated successfully the benefits of the dual processor design for minimizing errors, a feature adopted in many subsequent computers. It showed the suitability of magnetic tape as an input-output medium. Its most important new feature was its ability to store programs in its relatively spacious memory, the principle that Eckert, Mauchly, and von Neumann had originally designed into the EDVAC. In this respect, the BINAC was a direct descendant of the EDVAC.

In addition, the stored-program principle gave electronic computers new powers, quickness, and automatic control that, as they have continued to grow, have contributed immensely to the aura of intelligence often associated with their operation. The BINAC successfully demonstrated some of these impressive new powers in August of 1949 to eager observers from a number of major American corporations. It helped to convince many influential leaders of the commercial segment of society of the promise of electronic computers. In doing so, the BINAC helped to ensure the further evolution of computers.

See also Apple II computer; Colossus computer; ENIAC computer; IBM Model 1401 computer; Personal computer; Supercomputer; UNIVAC computer.

Further Reading
Macrae, Norman. John von Neumann: The Scientific Genius Who Pioneered the Modern Computer, Game Theory, Nuclear Deterrence, and Much More. New York: Pantheon Books, 1992.
Spencer, Donald D. Great Men and Women of Computing. 2d ed. Ormond Beach, Fla.: Camelot Publishing, 1999.
Zientara, Marguerite. The History of Computing: A Biographical Portrait of the Visionaries Who Shaped the Destiny of the Computer Industry. Framingham, Mass.: CW Communications, 1981.


Birth control pill

The invention: An orally administered drug that inhibits ovulation in women, thereby greatly reducing the chance of pregnancy.

The people behind the invention:
Gregory Pincus (1903-1967), an American biologist
Min-Chueh Chang (1908-1991), a Chinese-born reproductive biologist
John Rock (1890-1984), an American gynecologist
Celso-Ramon Garcia (1921- ), a physician
Edris Rice-Wray (1904- ), a physician
Katherine Dexter McCormick (1875-1967), an American millionaire
Margaret Sanger (1879-1966), an American activist

An Ardent Crusader

Margaret Sanger was an ardent crusader for birth control and family planning. Having decided that a foolproof contraceptive was necessary, Sanger met with her friend, the wealthy socialite Katherine Dexter McCormick. A 1904 graduate in biology from the Massachusetts Institute of Technology, McCormick had the knowledge and the vision to invest in biological research. Sanger arranged a meeting between McCormick and Gregory Pincus, head of the Worcester Institutes of Experimental Biology. After listening to Sanger’s pleas for an effective contraceptive and McCormick’s offer of financial backing, Pincus agreed to focus his energies on finding a pill that would prevent pregnancy.

Pincus organized a team to conduct research on both laboratory animals and humans. The laboratory studies were conducted under the direction of Min-Chueh Chang, a Chinese-born scientist who had been studying sperm biology, artificial insemination, and in vitro fertilization. The goal of his research was to see whether pregnancy might be prevented by manipulation of the hormones usually found in a woman.


It was already known that there was one time when a woman could not become pregnant—when she was already pregnant. In 1921, Ludwig Haberlandt, an Austrian physiologist, had transplanted the ovaries from a pregnant rabbit into a nonpregnant one. The latter failed to produce ripe eggs, showing that some substance from the ovaries of a pregnant female prevents ovulation. This substance was later identified as the hormone progesterone by George W. Corner, Jr., and Willard M. Allen in 1928.

If progesterone could inhibit ovulation during pregnancy, maybe progesterone treatment could prevent ovulation in nonpregnant females as well. In 1937, this was shown to be the case by scientists from the University of Pennsylvania, who prevented ovulation in rabbits with injections of progesterone. It was not until 1951, however, when Carl Djerassi and other chemists devised inexpensive ways of producing progesterone in the laboratory, that serious consideration was given to the medical use of progesterone. The synthetic version of progesterone was called “progestin.”

Testing the Pill

In the laboratory, Chang tried more than two hundred different progesterone and progestin compounds, searching for one that would inhibit ovulation in rabbits and rats. Finally, two compounds were chosen: progestins derived from the root of a wild Mexican yam.

Pincus arranged for clinical tests to be carried out by Celso-Ramon Garcia, a physician, and John Rock, a gynecologist. Rock had already been conducting experiments with progesterone as a treatment for infertility. The treatment was effective in some women but required that large doses of expensive progesterone be injected daily. Rock was hopeful that the synthetic progestin that Chang had found effective in animals would be helpful in infertile women as well. With Garcia and Pincus, Rock treated another group of fifty infertile women with the synthetic progestin.
After treatment ended, seven of these previously infertile women became pregnant within half a year. Garcia, Pincus, and Rock also took several physiological measurements of the women while they were taking the progestin and were able to conclude that ovulation did not occur while the women were taking the progestin pill.



Margaret Sanger

Margaret Louise Higgins saw her mother die at the age of only fifty. The cause was tuberculosis, but Margaret, the sixth of eleven children, was convinced her mother’s string of pregnancies was what killed her. Her crusade to liberate women from the burden of unwanted, dangerous pregnancies lasted the rest of her life.

Born in Corning, New York, in 1879, she went to Claverack College and Hudson River Institute and joined a nursing program at White Plains Hospital, graduating in 1900. Two years later she married William Sanger, an architect and painter. They moved into New York City in 1910 and became part of Greenwich Village’s community of left-wing intellectuals, artists, and activists, such as John Reed, Upton Sinclair, and Emma Goldman. She used her free time to support liberal reform causes, participating in labor actions of the Industrial Workers of the World.

Working as a visiting nurse, she witnessed the health problems among poor women caused by poor hygiene and frequent pregnancies. In 1912 she began a newspaper column, “What Every Girl Should Know,” about reproductive health and education. The authorities tried to suppress some of the columns as obscene—for instance, one explaining venereal disease—but Sanger was undaunted. In 1914, she launched The Woman Rebel, a magazine promoting women’s liberation and birth control.

From then on, although threatened with legal action and jail, she vigorously fought the political battles for birth control. She published books, lectured, took part in demonstrations, opened a birth control clinic in Brooklyn (the nation’s first), started the Birth Control Federation of America (later renamed Planned Parenthood Federation of America), and traveled overseas to promote birth control in order to improve the standard of living in Third World countries and to curb population growth.
Sanger was not an inventor, but she contributed ideas to the invention of various birth control devices and in the 1950’s found the money needed for the research and development of oral contraceptives at the Worcester Foundation for Experimental Biology, which produced the first birth control pill. She died in Tucson, Arizona, in 1966.


Having shown that the hormone could effectively prevent ovulation in both animals and humans, the investigators turned their attention back to birth control. They were faced with several problems: whether side effects might occur in women using progestins for a long time, and whether women would remember to take the pill day after day, for months or even years. To solve these problems, the birth control pill was tested on a large scale. Because of legal problems in the United States, Pincus decided to conduct the test in Puerto Rico. The test started in April of 1956.

Edris Rice-Wray, a physician, was responsible for the day-to-day management of the project. As director of the Puerto Rico Family Planning Association, she had seen firsthand the need for a cheap, reliable contraceptive. The women she recruited for the study were married women from a low-income population living in a housing development in Río Piedras, a suburb of San Juan. Word spread quickly, and soon women were volunteering to take the pill that would prevent pregnancy.

In the first study, 221 women took a pill containing 10 milligrams of progestin and 0.15 milligrams of estrogen. (The estrogen was added to help control breakthrough bleeding.) Results of the test were reported in 1957. Overall, the pill proved highly effective in preventing conception. None of the women who took the pill according to directions became pregnant, and most women who wanted to get pregnant after stopping the pill had no difficulty. Nevertheless, 17 percent of the women had some unpleasant reactions, such as nausea or dizziness. The scientists believed that these mild side effects, as well as one death from congestive heart failure, were unrelated to the use of the pill. Even before the final results were announced, additional field tests were begun. In 1960, the U.S. Food and Drug Administration (FDA) approved the use of the pill developed by Pincus and his collaborators as an oral contraceptive.
Consequences

Within two years of approval by the FDA, more than a million women in the United States were using the birth control pill. New contraceptives were developed in the 1960's and 1970's, but the birth control pill remains the most widely used method of preventing pregnancy. More than 60 million women use the pill worldwide.

The greatest impact of the pill has been in the social and political world. Before Sanger began the push for the pill, birth control was often regarded as socially immoral and was often illegal as well. Women in those post-World War II years were expected to have a lifelong career as a mother to their many children. With the advent of the pill, a radical change occurred in society's attitude toward women's work. Women had increased freedom to work and to enter careers previously closed to them because of fears that they might get pregnant. Women could control more precisely when they would get pregnant and how many children they would have. The women's movement of the 1960's—with its change to more liberal social and sexual values—gained much of its strength from the success of the birth control pill.

Dispensers designed to help users keep track of the days on which they take their pills. (Image Club Graphics)

See also Abortion pill; Amniocentesis; Artificial hormone; Genetically engineered insulin; Mammography; Syphilis test; Ultrasound.

Further Reading

DeJauregui, Ruth. One Hundred Medical Milestones That Shaped World History. San Mateo, Calif.: Bluewood Books, 1998.
Tone, Andrea. Devices and Desires: A History of Contraceptives in America. New York: Hill and Wang, 2001.
Watkins, Elizabeth Siegel. On the Pill: A Social History of Oral Contraceptives, 1950-1970. Baltimore: Johns Hopkins University Press, 1998.


Blood transfusion

The invention: A technique that greatly enhanced surgery patients' chances of survival by replenishing the blood they lose in surgery with a fresh supply.

The people behind the invention:
Charles Drew (1904-1950), American pioneer in blood transfusion techniques
George Washington Crile (1864-1943), an American surgeon, author, and brigadier general in the U.S. Army Medical Officers' Reserve Corps
Alexis Carrel (1873-1944), a French surgeon
Samuel Jason Mixter (1855-1923), an American surgeon

Nourishing Blood Transfusions

It is impossible to say when and where the idea of blood transfusion first originated, although descriptions of this procedure are found in ancient Egyptian and Greek writings. The earliest documented case of a blood transfusion is that of Pope Innocent VIII. In April, 1492, the pope, who was gravely ill, was transfused with the blood of three young boys. All three boys died as a result, without bringing any relief to the pope. In the centuries that followed, there were occasional descriptions of blood transfusions, but it was not until the middle of the seventeenth century that the technique gained popularity, following the English physician and anatomist William Harvey's discovery of the circulation of the blood in 1628.

In the medical thought of those times, blood transfusion was considered to have a nourishing effect on the recipient. In many of those experiments, the human recipient received animal blood, usually from a lamb or a calf. Blood transfusion was tried as a cure for many different diseases, mainly those that caused hemorrhages, as well as for other medical problems and even for marital problems. Blood transfusion was a dangerous procedure, causing many deaths of both donors and recipients as a result of excessive blood


loss, infection, passage of blood clots into the circulatory systems of the recipients, passage of air into the blood vessels (air embolism), and transfusion reactions resulting from incompatible blood types. In the mid-nineteenth century, blood transfusions from animals to humans stopped after it was discovered that the serum of one species agglutinates and dissolves the blood cells of other species. A sharp drop in the use of blood transfusion came with the introduction of physiologic salt solution in 1875. Infusion of salt solution was simple and safer than blood transfusion.

Direct-Connection Blood Transfusions

In 1898, when George Washington Crile began his work on blood transfusions, the major obstacle he faced was the problem of blood clotting during transfusions. He realized that salt solutions were not helpful in severe cases of blood loss, when there is a need to restore the patient to consciousness, steady the heart action, and raise the blood pressure. At that time, he was experimenting with indirect blood transfusions by drawing the blood of the donor into a vessel, then transferring it into the recipient's vein by tube, funnel, and cannula, the same technique used in the infusion of saline solution. The solution to the problem of blood clotting came in 1902, when Alexis Carrel developed the technique of surgically joining blood vessels without exposing the blood to air or germs, either of which can lead to clotting. Crile learned this technique from Carrel and used it to join a peripheral artery in the donor to a peripheral vein of the recipient. Since the transfused blood remained sealed within the inner lining of the vessels, blood clotting did not occur. The first human blood transfusion of this type was performed by Crile in December, 1905. The patient, a thirty-five-year-old woman, was transfused by her husband but died a few hours after the procedure. The second, but first successful, transfusion was performed on August 8, 1906.
Charles Drew

While he was still in medical school, Charles Richard Drew saw a man's life saved with a blood transfusion. He also saw patients die because suitable donors could not be found. Impressed by both the life-saving power of transfusions and the dire need for more of them, Drew devoted his career to improving the nation's blood supply. His inventions saved untold thousands of lives, especially during World War II, before artificial blood was developed. Born in 1904 in Washington, D.C., Drew was a star athlete in high school, at Amherst College—from which he graduated in 1926—and even in medical school at McGill University in Montreal from 1928 to 1933. He returned to the U.S. capital to become a resident at Freedmen's Hospital of Howard University. While there he invented a method for separating plasma from whole blood and discovered that it was not necessary to recombine the plasma and red blood cells for transfusion. Plasma alone was sufficient, and by drying or freezing it, the plasma remained fresh enough over long periods to act as an emergency reserve. In 1938 Drew took a fellowship in blood research at Columbia Presbyterian Hospital in New York City. Employing his plasma preservation methods, he opened the first blood bank and wrote a dissertation on his techniques. He became the first African American to earn a Doctor of Science degree from Columbia University, in 1940. He organized another blood bank, this one in Great Britain, and in 1941 was appointed director of the American Red Cross blood donor project. However, Drew learned to his disgust that the Red Cross and U.S. government would not allow blood from African Americans and Caucasians to be mixed in the blood bank. There was no scientific reason for such segregation; bias prevailed. Drew angrily denounced the policy at a press conference and resigned from the Red Cross. He went back to Howard University as head of surgery and, later, director of Freedmen's Hospital. Drew died in 1950 following an automobile accident.

The patient, a twenty-three-year-old male, suffered from severe hemorrhaging following surgery to remove kidney stones. After all attempts to stop the bleeding had been exhausted without result, and the patient was dangerously weak, transfusion was considered as a last resort. One of the patient's brothers was the donor. Following the transfusion, the patient showed remarkable recovery and was strong enough to withstand surgery to remove the kidney and stop the bleeding. When his condition deteriorated a
few days later, another transfusion was done. This time, too, he showed remarkable improvement, which continued until his complete recovery.

For his first transfusions, Crile used the Carrel suture method, which required using very fine needles and thread. It was a very delicate and time-consuming procedure. At the suggestion of Samuel Jason Mixter, Crile developed a new method using a short tubal device with an attached handle to connect the blood vessels. By this method, 3 or 4 centimeters of the vessels to be connected were surgically exposed, clamped, and cut, just as under the previous method. Yet, instead of suturing the blood vessels together, the recipient's vein was passed through the tube and then cuffed back over the tube and tied to it. Then the donor's artery was slipped over the cuff. The clamps were opened, and blood was allowed to flow from the donor to the recipient. In order to accommodate different-sized blood vessels, tubes of four different sizes were made, ranging in diameter from 1.5 to 3 millimeters.

Impact

Crile's method was the preferred method of blood transfusion for a number of years. Following the publication of his book on transfusion, a number of modifications to the original method were published in medical journals. In 1913, Edward Lindeman developed a method of transfusing blood simply by inserting a needle through the patient's skin and into a surface vein, making it for the first time a nonsurgical method. This method allowed the exact quantity of blood transfused to be measured. It also allowed the donor to serve in multiple transfusions. This development opened the field of transfusions to all physicians. Lindeman's needle and syringe method also eliminated another major drawback of direct blood transfusion: the need to have both donor and recipient right next to each other.

See also Coronary artery bypass surgery; Electrocardiogram; Electroencephalogram; Heart-lung machine.


Further Reading

English, Peter C. Shock, Physiological Surgery, and George Washington Crile: Medical Innovation in the Progressive Era. Westport, Conn.: Greenwood Press, 1980.
Le Vay, David, and Roy Porter. Alexis Carrel: The Perfectibility of Man. Rockville, Md.: Kabel Publishers, 1996.
Malinin, Theodore I. Surgery and Life: The Extraordinary Career of Alexis Carrel. New York: Harcourt Brace Jovanovich, 1979.
May, Angelo M., and Alice G. May. The Two Lions of Lyons: The Tale of Two Surgeons, Alexis Carrel and René Leriche. Rockville, Md.: Kabel Publishers, 1992.


Breeder reactor

The invention: A plant that generates electricity from nuclear fission while creating new fuel.

The person behind the invention:
Walter Henry Zinn (1906-2000), the first director of the Argonne National Laboratory

Producing Electricity with More Fuel

The discovery of nuclear fission involved both the discovery that the nucleus of a uranium atom would split into two lighter elements when struck by a neutron and the observation that additional neutrons, along with a significant amount of energy, were released at the same time. These neutrons might strike other atoms and cause them to fission (split) also. That, in turn, would release more energy and more neutrons, triggering a chain reaction as the process continued to repeat itself, yielding a continuing supply of heat. Besides the possibility that an explosive weapon could be constructed, early speculation about nuclear fission included its use in the generation of electricity. The outbreak of World War II (1939-1945) meant that the explosive weapon would be developed first.

Both the weapons technology and the basic physics for the electrical reactor had their beginnings in Chicago with the world's first nuclear chain reaction. The first self-sustaining nuclear chain reaction occurred in a laboratory at the University of Chicago on December 2, 1942. It also became apparent at that time that there was more than one way to build a bomb. At this point, two paths were taken: one was to build an atomic bomb with enough fissionable uranium in it to explode when detonated, and another was to generate fissionable plutonium and build a bomb from it. Energy was released in both methods, but the second method also produced another fissionable substance. The observation that plutonium and energy could be produced together meant that it would be possible to design electric power systems that would produce fissionable plutonium in quantities as large as, or larger than, the amount of fissionable material consumed. This


is the breeder concept, the idea that while using up fissionable uranium 235, another fissionable element can be made. The full development of this concept for electric power was delayed until the end of World War II.

Electricity from Atomic Energy

On August 1, 1946, the Atomic Energy Commission (AEC) was established to control the development and explore the peaceful uses of nuclear energy. The Argonne National Laboratory was assigned the major responsibilities for pioneering breeder reactor technologies. Walter Henry Zinn was the laboratory's first director. He led a team that planned a modest facility (Experimental Breeder Reactor I, or EBR-I) for testing the validity of the breeding principle. Planning for this had begun in late 1944 and grew as a natural extension of the physics that developed the plutonium atomic bomb. The conceptual design details for a breeder-electric reactor were reasonably complete by late 1945.

On March 1, 1949, the AEC announced the selection of a site in Idaho for the National Reactor Station (later to be named the Idaho National Engineering Laboratory, or INEL). Construction at the INEL site in Arco, Idaho, began in October, 1949. Critical mass was reached in August, 1951. ("Critical mass" is the amount and concentration of fissionable material required to produce a self-sustaining chain reaction.) The system was brought to full operating power, 1.1 megawatts of thermal power, on December 19, 1951. The next day, December 20, at 11:00 a.m., steam was directed to a turbine generator. At 1:23 p.m., the generator was connected to the electrical grid at the site, and "electricity flowed from atomic energy," in the words of Zinn's console log of that day. Approximately 200 kilowatts of electric power were generated most of the time that the reactor was run. This was enough to satisfy the needs of the EBR-I facilities. The reactor was shut down in 1964 after five years of use primarily as a test facility.
It had also produced the first pure plutonium. With the first fuel loading, a conversion ratio of 1.01 was achieved, meaning that about 1 percent more new fuel was generated than was consumed. When later fuel loadings were made with plutonium, the conversion ratios were more favorable, reaching as high as 1.27. EBR-I was the first reactor to generate its own fuel and the first power reactor to use plutonium for fuel. The use of EBR-I also included pioneering work on fuel recovery and reprocessing. During its five-year lifetime, EBR-I operated with four different fuel loadings, each designed to establish specific benchmarks of breeder technology.

This reactor was seen as the first in a series of increasingly large reactors in a program designed to develop breeder technology. The reactor was replaced by EBR-II, which had been proposed in 1953 and was constructed from 1955 to 1964. EBR-II was capable of producing 20 megawatts of electrical power. It was approximately fifty times more powerful than EBR-I but still small compared to light-water commercial reactors of 600 to 1,100 megawatts in use toward the end of the twentieth century.

Consequences

The potential for peaceful uses of nuclear fission was dramatized with the start-up of EBR-I in 1951: it was the first reactor in the world to produce electricity, as well as the pioneer in a breeder reactor program. The breeder program was not the only reactor program being developed, however, and it eventually gave way to the light-water reactor design for use in the United States. Still, if energy resources fall into short supply, it is likely that the technologies first developed with EBR-I will find new importance. In France and Japan, commercial reactors make use of breeder reactor technology; these reactors require extensive fuel reprocessing.

Following the completion of tests with plutonium loading in 1964, EBR-I was shut down and placed in standby status. In 1966, it was declared a national historical landmark under the stewardship of the U.S. Department of the Interior. The facility was opened to the public in June, 1975.

See also Atomic bomb; Geothermal power; Nuclear power plant; Nuclear reactor; Solar thermal engine; Tidal power plant.


Further Reading

"Breeder Trouble." Technology Review 91, no. 5 (July, 1988).
Hippel, Frank von, and Suzanne Jones. "Birth of the Breeder." Bulletin of the Atomic Scientists 53, no. 5 (September/October, 1997).
Krieger, David. Splitting the Atom: A Chronology of the Nuclear Age. Santa Barbara, Calif.: Nuclear Age Peace Foundation, 1998.


Broadcaster guitar

The invention: The first commercially manufactured solid-body electric guitar, the Broadcaster revolutionized the guitar industry and changed the face of popular music.

The people behind the invention:
Leo Fender (1909-1991), designer of affordable and easily mass-produced solid-body electric guitars
Les Paul (Lester William Polfuss, b. 1915), a legendary guitarist and designer of solid-body electric guitars
Charlie Christian (1919-1942), an influential electric jazz guitarist of the 1930's

Early Electric Guitars

It has been estimated that between 1931 and 1937, approximately twenty-seven hundred electric guitars and amplifiers were sold in the United States. The Electro String Instrument Company, run by Adolph Rickenbacker and his designer partners, George Beauchamp and Paul Barth, produced two of the first commercially manufactured electric guitars—the Rickenbacker A-22 and A-25—in 1931. The Rickenbacker models were what are known as "lap steel" or Hawaiian guitars. A Hawaiian guitar is played with the instrument lying flat across the guitarist's knees. By the mid-1930's, the Gibson company had introduced an electric Spanish guitar, the ES-150. Legendary jazz guitarist Charlie Christian made this model famous while playing for Benny Goodman's orchestra. Christian was the first electric guitarist to be heard by a large American audience. He became an inspiration for future electric guitarists because he proved that the electric guitar could have its own unique solo sound. Along with Christian, the other figures who put the electric guitar on the musical map were blues guitarist T-Bone Walker, guitarist and inventor Les Paul, and engineer and inventor Leo Fender.

Early electric guitars were really no more than acoustic guitars with the addition of one or more pickups, which convert string vibrations to electrical signals that can be played through a speaker. Amplification made the guitar a more assertive musical instrument, and electrification ultimately would make it more flexible, giving it a more prominent role in popular music. Les Paul, always a compulsive inventor, began experimenting with ways of producing an electric solid-body guitar in the late 1930's. In 1929, at the age of thirteen, he had amplified his first acoustic guitar. Another influential inventor of the 1940's was Paul Bigsby, who built a prototype solid-body guitar for country music star Merle Travis in 1947. It was Leo Fender, however, who revolutionized the electric guitar industry by producing the first commercially viable solid-body electric guitar, the Broadcaster, in 1948.

Leo Fender

Leo Fender was born in the Anaheim, California, area in 1909. As a teenager, he began to build and repair guitars. By the 1930's, Fender was building and renting out public address systems for group gatherings. In 1937, after short tenures of employment with the Division of Highways and the U.S. Tire Company, he opened a radio repair company in Fullerton, California. Always looking to expand and invent new and exciting electrical gadgets, Fender and Clayton Orr "Doc" Kauffman started the K & F Company in 1944. Kauffman was a musician and a former employee of the Electro String Instrument Company. The K & F Company lasted until 1946 and produced steel guitars and amplifiers. After that partnership ended, Fender founded the Fender Electric Instruments Company. With the help of George Fullerton, who joined the company in 1948, Fender developed the Fender Broadcaster. The body of the Broadcaster was made of a solid plank of ash wood. The corners of the ash body were rounded, and there was a cutaway located under the joint with the solid maple neck, making it easier for the guitarist to access the higher frets.
The maple neck was bolted to the body of the guitar, which was unusual, since most guitar necks prior to the Broadcaster had been glued to the body. Frets were positioned directly into designed cuts made in the maple of the neck. The guitar had two pickups. The Fender Electric Instruments Company made fewer than one


thousand Broadcasters. In 1950, the name of the guitar was changed from the Broadcaster to the Telecaster, as the Gretsch company had already registered the name Broadcaster for some of its drums and banjos. Fender decided not to fight in court over use of the name. Leo Fender has been called the Henry Ford of the solid-body electric guitar, and the Telecaster became known as the Model T of the industry. The early Telecasters sold for $189.50. Besides being inexpensive, the Telecaster was a very durable instrument. Basically, the Telecaster was a continuation of the Broadcaster. Fender did not file for a patent on its unique bridge pickup until January 13, 1950, and he did not file for a patent on the Telecaster's unique body shape until April 3, 1951.

In the music industry during the late 1940's, it was important for a company to unveil new instruments at trade shows. At this time, there was only one important trade show, sponsored by the National Association of Music Merchants. The Broadcaster was first sprung on the industry at the 1948 trade show in Chicago. The industry had never seen anything like this guitar before. This new guitar existed only to be amplified; it was not merely an acoustic guitar that had been converted.

Impact

The Telecaster, as it would be called after 1950, remained in continuous production for more years than any other guitar of its type and was one of the industry's best sellers. From the beginning, it looked and sounded unique. The electrified acoustic guitars had a mellow woody tone, whereas the Telecaster had a clean twangy tone that made it popular with country and blues guitarists. The Telecaster could also be played at higher volume than previous electric guitars. Because Leo Fender attempted something revolutionary by introducing an electric solid-body guitar, there was no guarantee that his business venture would succeed. Fender Electric Instruments Company had fifteen employees in 1947.
At times during the early years of the company, it looked as though Fender's dreams would not come to fruition, but the company persevered and grew. Between 1948 and 1955, with an increase in employees, the company


was able to produce ten thousand Broadcaster/Telecaster guitars. Fender had taken a big risk, but it paid off enormously. Between 1958 and the mid-1970's, Fender produced more than 250,000 Telecasters. Other guitar manufacturers were placed in a position of having to catch up. Fender had succeeded in developing a process by which electric solid-body guitars could be manufactured profitably on a large scale.

Early Guitar Pickups

The first pickups used on a guitar can be traced back to the 1920's and the efforts of Lloyd Loar, but there was not strong interest on the part of the American public for the guitar to be amplified. The public did not become intrigued until the 1930's, when Charlie Christian's electric guitar performances with Benny Goodman woke up audiences to the potential of this new and exciting sound. It was not until the 1950's, though, that the electric guitar became firmly established. Leo Fender was the right man in the right place. He could not have known that his Fender guitars would help to usher in a whole new musical landscape. Since the electric guitar was the newest member of the family of guitars, it took some time for musical audiences to fully appreciate what it could do. The electric solid-body guitar has been called a dangerous, uncivilized instrument; the youth culture of the 1950's found in this new guitar a voice for its rebellion. Fender unleashed a revolution not only in the construction of a guitar but also in the way popular music would be approached henceforth.

Because of the ever-increasing demand for the Fender product, Fender Sales was established as a separate distribution company in 1953 by Don Randall. Fender Electric Instruments Company had fifteen employees in 1947, but by 1955, the company employed fifty people. By 1960, the number of employees had risen to more than one hundred.
Before Leo Fender sold the company to CBS on January 4, 1965, for $13 million, the company occupied twenty-seven buildings and employed more than five hundred workers. Always interested in finding new ways of designing a more nearly perfect guitar, Leo Fender again came up with a remarkable guitar in 1954, with the Stratocaster. There was talk in the guitar industry that Fender had gone too far with the introduction of the Stratocaster, but it became a huge success because of its versatility. It was the first commercial solid-body electric guitar to have three pickups and a vibrato bar. It was also easier to play than the Telecaster because of its double cutaway, contoured body, and scooped back. The Stratocaster sold for $249.50. Since its introduction, the Stratocaster has undergone some minor changes, but Fender and his staff basically got it right the first time.

Charlie Christian

Charlie Christian (1919-1942) did not invent the electric guitar, but he did pioneer its use. He was born to music, and for jazz aficionados he quickly developed into a legend, not only establishing a new solo instrument but also helping to invent a whole new type of jazz. Christian grew up in Texas, surrounded by a family of professional musicians. His parents and two brothers played trumpet, guitar, and piano, and sang, and Charlie was quick to imitate them. As a boy he made his own guitars out of cigar boxes and, according to a childhood friend, novelist Ralph Ellison, wowed his friends at school with his riffs. When he first heard an electric guitar in the mid-1930's, he made that his own, too. The acoustic guitar had been only a backup instrument in jazz because it was too quiet to soar in solos. In 1935, Eddie Durham found that electric guitars could swing side by side with louder instruments. Charlie, already an experienced performer with acoustic guitar and bass, immediately recognized the power and range of subtle expression possible with the electrified instrument. He bought a Gibson ES-150 and began to make musical history with his improvisations. He impressed producer John Hammond, who introduced him to big-band leader Benny Goodman in 1939. Notoriously hard to please, Goodman rejected Christian after an audition. However, Hammond later sneaked him on stage while the Goodman band was performing. Outraged, Goodman segued into a tune he was sure Christian did not know, "Rose Room." Christian was undaunted. He delivered an astonishingly inventive solo, and Goodman was won over despite himself. Christian's ensuing tenure with Goodman's band brought electric guitar solos into the limelight. However, it was during after-hours jam sessions at the Hotel Cecil in New York that Christian left his stylistic imprint on jazz. Including such jazz greats as Joe Guy, Thelonious Monk, and Kenny Clarke, the groups played around with new sounds. Out of these sessions bebop was born, and Christian was a central figure. Sick with tuberculosis, he had to quit playing in 1941 and died the following spring, only twenty-five years old.

The Gibson company entered the solid-body market in 1952 with the unveiling of the "Les Paul" model. After the Telecaster, the Les Paul guitar was the next significant solid-body to be introduced. Les Paul was a legendary guitarist who also had been experimenting with electric guitar designs for many years. The Gibson designers came up with a striking model that produced a thick rounded tone. Over the years, the Les Paul model has won a loyal following.

The Precision Bass

In 1951, Leo Fender introduced another revolutionary guitar, the Precision bass. At a cost of $195.50, the first electric bass would go on to dominate the market. The Fender company has manufactured numerous guitar models over the years, but the three that stand above all others in the field are the Telecaster, the Precision bass, and the Stratocaster. The Telecaster is considered to be more of a workhorse, whereas the Stratocaster is thought of as the thoroughbred of electric guitars. The Precision bass was in its own right a revolutionary guitar. With a styling that had been copied from the Telecaster, the Precision freed musicians from bulky oversized acoustic basses, which were prone to feedback. The name Precision had meaning: Fender's electric bass made it possible, with its frets, for the precise playing of notes; many acoustic basses were fretless. The original Precision bass model was manufactured from 1951 to 1954. The next version lasted from 1954 until June of 1957.
The Precision bass that went into production in June, 1957, with its split humbucking pickup, continued to be the standard electric bass on the market into the 1990’s. By 1964, the Fender Electric Instruments Company had grown enormously. In addition to Leo Fender, a number of crucial people worked for the organization, including George Fullerton and Don


Randall. Fred Tavares joined the company's research and development team in 1953. In May, 1954, Forrest White became Fender's plant manager. All these individuals played vital roles in the success of Fender, but the driving force behind the scenes was always Leo Fender. As Fender's health deteriorated, Randall commenced negotiations with CBS to sell the Fender company. In January, 1965, CBS bought Fender for $13 million. Eventually, Leo Fender regained his health, and he was hired as a technical adviser by CBS/Fender. He continued in this capacity until 1970. He remained determined to create more guitar designs of note. Although he never again produced anything that could equal his previous success, he never stopped trying to attain a new perfection of guitar design.

Fender died on March 21, 1991, in Fullerton, California, of complications from Parkinson's disease, from which he had suffered for years. He is remembered for his Broadcaster/Telecaster, Precision bass, and Stratocaster, which revolutionized popular music. Because the Fender company was able to mass-produce these and other solid-body electric guitars, new styles of music that relied on the sound of an electric guitar exploded onto the scene. The electric guitar manufacturing business grew rapidly after Fender introduced mass production. Besides American companies, guitar companies have flourished in Europe and Japan. The marriage between rock music and solid-body electric guitars was initiated by the Fender guitars. The Telecaster, Precision bass, and Stratocaster became synonymous with the explosive character of rock and roll music. The multibillion-dollar music business can point to Fender as the pragmatic visionary who put the solid-body electric guitar into the forefront of the musical scene. His innovative guitars have been used by some of the most important guitarists of the rock era, including Jimi Hendrix, Eric Clapton, and Jeff Beck.
More important, Fender guitars have remained bestsellers with the public worldwide. Amateur musicians purchased them by the thousands for their own entertainment. Owning and playing a Fender guitar, or one of the other electric guitars that followed, allowed these amateurs to feel closer to their musician idols. A large market for sheet music from popular artists also developed. In 1992, Fender was inducted into the Rock and Roll Hall of
Fame. He is one of the few non-musicians ever to be inducted. The sound of an electric guitar is the sound of exuberance, and since the Broadcaster was first unveiled in 1948, that sound has grown to be pervasive and enormously profitable.

See also Cassette recording; Dolby noise reduction; Electronic synthesizer.

Further Reading
Bacon, Tony, and Paul Day. The Fender Book. San Francisco: GPI Books, 1992.
Brosnac, Donald, ed. Guitars Made by the Fender Company. Westport, Conn.: Bold Strummer, 1986.
Freeth, Nick. The Electric Guitar. Philadelphia: Courage Books, 1999.
Trynka, Paul. The Electric Guitar: An Illustrated History. San Francisco: Chronicle Books, 1995.
Wheeler, Tom. American Guitars: An Illustrated History. New York: Harper & Row, 1982.
_____. "Electric Guitars." In The Guitar Book: A Handbook for Electric and Acoustic Guitarists. New York: Harper & Row, 1974.


Brownie camera

The invention: The first inexpensive and easy-to-use camera available to the general public, the Brownie revolutionized photography by making it possible for every person to become a photographer.

The people behind the invention:
George Eastman (1854-1932), founder of the Eastman Kodak Company
Frank A. Brownell, a camera maker for the Kodak Company who designed the Brownie
Henry M. Reichenbach, a chemist who worked with Eastman to develop flexible film
William H. Walker, a Rochester camera manufacturer who collaborated with Eastman

A New Way to Take Pictures

In early February of 1900, the first shipments of a new small box camera called the Brownie reached Kodak dealers in the United States and England. George Eastman, eager to put photography within the reach of everyone, had directed Frank Brownell to design a small camera that could be manufactured inexpensively but that would still take good photographs.

Advertisements for the Brownie proclaimed that everyone—even children—could take good pictures with the camera. The Brownie was aimed directly at the children's market, a fact indicated by its box, which was decorated with drawings of imaginary elves called "Brownies" created by the Canadian illustrator Palmer Cox. Moreover, the camera cost only one dollar.

The Brownie was made of jute board and wood, with a hinged back fastened by a sliding catch. It had an inexpensive two-piece glass lens and a simple rotary shutter that allowed both timed and instantaneous exposures to be made. With a lens aperture of approximately f/14 and a shutter speed of approximately 1/50 of a second, the Brownie was certainly capable of taking acceptable snapshots. It had no viewfinder; however, an optional clip-on reflecting viewfinder was available. The camera came loaded with a six-exposure roll of Kodak film that produced square negatives 2.5 inches on a side. This film could be developed, printed, and mounted for forty cents, and a new roll could be purchased for fifteen cents.

George Eastman's first career choice had been banking, but when he failed to receive a promotion he thought he deserved, he decided to devote himself to his hobby, photography. Having worked with a rigorous wet-plate process, he knew why there were few amateur photographers at the time—the whole process, from plate preparation to printing, was too expensive and too much trouble. Even so, he had already begun to think about the commercial possibilities of photography; after reading of British experiments with dry-plate technology, he set up a small chemical laboratory and came up with a process of his own. The Eastman Dry Plate Company became one of the most successful producers of gelatin dry plates.

Dry-plate photography had attracted more amateurs, but it was still a complicated and expensive hobby. Eastman realized that the number of photographers would have to increase considerably if the market for cameras and supplies were to have any potential. In the early 1880's, Eastman first formulated the policies that would make the Eastman Kodak Company so successful in years to come: mass production, low prices, foreign and domestic distribution, and selling through extensive advertising and by demonstration.

In his efforts to expand the amateur market, Eastman first tackled the problem of the glass-plate negative, which was heavy, fragile, and expensive to make. By 1884, his experiments with paper negatives had been successful enough that he changed the name of his company to The Eastman Dry Plate and Film Company.
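The Brownie's fixed settings noted earlier (roughly f/14 at 1/50 of a second) locked every snapshot to a single exposure level, which is why the camera worked only in reasonably bright light. A minimal sketch of that arithmetic, using the standard photographic exposure-value formula rather than anything from this article's sources:

```python
import math

# Approximate Brownie settings, as described in the text
aperture_n = 14.0     # f-number of the two-piece lens
shutter_t = 1 / 50    # shutter speed in seconds

# Conventional exposure-value formula: EV = log2(N^2 / t)
ev = math.log2(aperture_n ** 2 / shutter_t)
print(round(ev, 1))   # about EV 13, suited to daylight scenes
```

Any scene much darker than open daylight would simply come out underexposed, since the user could adjust neither aperture nor shutter speed.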
Since flexible roll film needed some sort of device to hold it steady in the camera’s focal plane, Eastman collaborated with William Walker to develop the Eastman-Walker roll-holder. Eastman’s pioneering manufacture and use of roll films led to the appearance on the market in the 1880’s of a wide array of hand cameras from a number of different companies. Such cameras were called “detective cameras” because they were small and could be used surreptitiously. The most famous of these, introduced by Eastman in 1888, was named the “Kodak”—a word he coined to be terse, distinctive, and easily
pronounced in any language. This camera's simplicity of operation was appealing to the general public and stimulated the growth of amateur photography.

The Camera

The Kodak was a box about seven inches long and four inches wide, with a one-speed shutter and a fixed-focus lens that produced reasonably sharp pictures. It came loaded with enough roll film to make one hundred exposures. The camera's initial price of twenty-five dollars included the cost of processing the first roll of film; the camera also came with a leather case and strap. After the film was exposed, the camera was mailed, unopened, to the company's plant in Rochester, New York, where the developing and printing were done. For an additional ten dollars, the camera was reloaded and sent back to the customer.

The Kodak was advertised in mass-market publications, rather than in specialized photographic journals, with the slogan: "You press the button, we do the rest." With his introduction of a camera that was easy to use and a service that eliminated the need to know anything about processing negatives, Eastman revolutionized the photographic market. Thousands of people no longer depended upon professional photographers for their portraits but instead learned to make their own. In 1892, the Eastman Dry Plate and Film Company became the Eastman Kodak Company, and by the mid-1890's, one hundred thousand Kodak cameras had been manufactured and sold, half of them in Europe by Kodak Limited.

Having popularized photography with the first Kodak, in 1900 Eastman turned his attention to the children's market with the introduction of the Brownie. The first five thousand cameras sent to dealers were sold immediately; by the end of the following year, almost a quarter of a million had been sold. The Kodak Company organized Brownie camera clubs and held competitions specifically for young photographers.
The Brownie came with an instruction booklet that gave children simple directions for taking successful pictures, and “The Brownie Boy,” an appealing youngster who loved photography, became a standard feature of Kodak’s advertisements.


Impact

Eastman followed the success of the first Brownie by introducing several additional models between 1901 and 1917. Each was a more elaborate version of the original. These Brownie box cameras were on the market until the early 1930's, and their success inspired other companies to manufacture box cameras of their own. In 1906, the Ansco company produced the Buster Brown camera in three sizes that corresponded to Kodak's Brownie camera range; in 1910 and 1914, Ansco made three more versions. The Seneca company's Scout box camera, in three sizes, appeared in 1913, and Sears Roebuck's Kewpie cameras, in five sizes, were sold beginning in 1916. In England, the Houghtons company introduced its first Scout camera in 1901, followed by another series of four box cameras in 1910 sold under the Ensign trademark. Other English manufacturers of box cameras included the James Sinclair company, with its Traveller Una of 1909, and the Thornton-Pickard company, with a Filma camera marketed in four sizes in 1912.

After World War I ended, several series of box cameras were manufactured in Germany by companies that had formerly concentrated on more advanced and expensive cameras. The success of box cameras in other countries, led by Kodak's Brownie, undoubtedly prompted this trend in the German photographic industry. The Ernemann Film K series of cameras in three sizes, introduced in 1919, and the all-metal Trapp Little Wonder of 1922 are examples of popular German box cameras.

In the early 1920's, camera manufacturers began making box-camera bodies from metal rather than from wood and cardboard. Machine-formed metal was less expensive than the traditional handworked materials. In 1924, Kodak's two most popular Brownie sizes appeared with aluminum bodies.
In 1928, Kodak Limited of England added two important new features to the Brownie—a built-in portrait lens, which could be brought in front of the taking lens by pressing a lever, and camera bodies in a range of seven different fashion colors. The Beau Brownie cameras, made in 1930, were the most popular of all the colored box cameras. The work of Walter Dorwin Teague, a leading American designer, these cameras had an Art Deco geometric pattern on the front panel, which was enameled in a color matching the leatherette covering of the camera body. Several other companies, including Ansco, again followed Kodak's lead and introduced their own lines of colored cameras.

In the 1930's, several new box cameras with interesting features appeared, many manufactured by leading film companies. In France, the Lumiere Company advertised a series of box cameras—the Luxbox, Scoutbox, and Lumibox—that ranged from a basic camera to one with an adjustable lens and shutter. In 1933, the German Agfa company restyled its entire range of box cameras, and in 1939, the Italian Ferrania company entered the market with box cameras in two sizes.

In 1932, Kodak redesigned its Brownie series to take the new 620 roll film, which it had just introduced. This film and the new Six-20 Brownies inspired other companies to experiment with variations of their own; some box cameras, such as the Certo Double-box, the Coronet Every Distance, and the Ensign E-20 cameras, offered a choice of two picture formats. Another new trend was a move toward smaller-format cameras using standard 127 roll film. In 1934, Kodak marketed the small Baby Brownie. Designed by Teague and made from molded black plastic, this little camera with a folding viewfinder sold for only one dollar—the price of the original Brownie in 1900. The Baby Brownie, the first Kodak camera made of molded plastic, heralded the move to the use of plastic in camera manufacture. Soon many others, such as the Altissa series of box cameras and the Voigtlander Brilliant V/6 camera, were being made from this new material.

Later Trends

By the late 1930's, flashbulbs had replaced flash powder for taking pictures in low light; again, the Eastman Kodak Company led the way in introducing this new technology as a feature on the inexpensive box camera.
The Falcon Press-Flash, marketed in 1939, was the first mass-produced camera to have flash synchronization and was followed the next year by the Six-20 Flash Brownie, which had a detachable flash gun. In the early 1940’s, other companies, such as Agfa-Ansco, introduced this feature on their own box cameras.


George Eastman

Frugal, bold, practical, generous to those who were loyal, impatient with dissent, and possessing a steely determination, George Eastman (1854-1932) rose to become one of the richest people of his generation. He abhorred poverty and did his best to raise others from it as well.

At age fourteen, when his father died, Eastman had to drop out of school to support his mother and sister. The missed opportunity for an education and the struggle to earn a living shaped his outlook. He worked at an insurance agency and then at a bank, keeping careful record of the money he earned. By the time he was twenty-five he had saved three thousand dollars and found his job as a banker to be unrewarding.

As a teenager, he had taught himself photography. However, that was only a start. He taught himself the physics and chemistry of photography too—and enough French and German to read the latest foreign scientific journals. His purpose was practical, to make cameras cheap and easy to use so that average people could own them. This launched him on the career of invention and business that took him away from banking and made his fortune.

At the same time he remembered his origins and family. Out of his first earnings, he bought photographs for his mother and a favorite teacher. He never stopped giving. At the company he founded, he gave substantial raises to employees, reduced their hours, and installed safety equipment, a medical department, and a lunch room. He gave millions to the Hampton Institute, Tuskegee Institute, Massachusetts Institute of Technology, and University of Rochester, while also establishing dental clinics for the poor.

In his old age he found he could no longer keep up with his younger scientific and business colleagues. In 1932, leaving behind a note that asked, simply, "My work is done, why wait?" he committed suicide. Even then he continued to give. His will left most of his vast fortune to charities.

[Portrait photograph of George Eastman (Smithsonian Institution)]

In the years after World War II, the box camera evolved into an eye-level camera, making it more convenient to carry and use. Many amateur photographers, however, still had trouble handling
paper-backed roll film and were taking their cameras back to dealers to be unloaded and reloaded. Kodak therefore developed a new system of film loading, using the Kodapak cartridge, which could be mass-produced with a high degree of accuracy by precision plastic-molding techniques. To load the camera, the user simply opened the camera back and inserted the cartridge. This new film was introduced in 1963, along with a series of Instamatic cameras designed for its use. Both were immediately successful.

The popularity of the film cartridge ended the long history of the simple and inexpensive roll film camera. The last English Brownie was made in 1967, and the series of Brownies made in the United States was discontinued in 1970. Eastman's original marketing strategy of simplifying photography in order to increase the demand for cameras and film continued, however, with the public's acceptance of cartridge-loading cameras such as the Instamatic.

From the beginning, Eastman had recognized that there were two kinds of photographers other than professionals. The first, he declared, were the true amateurs who devoted time enough to acquire skill in the complex processing procedures of the day. The second were those who merely wanted personal pictures or memorabilia of their everyday lives, families, and travels. The second class, he observed, outnumbered the first by almost ten to one. Thus, it was to this second kind of amateur photographer that Eastman had appealed, both with his first cameras and with his advertising slogan, "You press the button, we do the rest." Eastman had done much more than simply invent cameras and films; he had invented a system and then developed the means for supporting that system. This is essentially what the Eastman Kodak Company continued to accomplish with the series of Instamatics and other descendants of the original Brownie.
In the decade between 1963 and 1973, for example, approximately sixty million Instamatics were sold throughout the world. The research, manufacturing, and marketing activities of the Eastman Kodak Company have been so complex and varied that no one would suggest that the company's prosperity rests solely on the success of its line of inexpensive cameras and cartridge films, although these have continued to be important to the company. Like Kodak, however, most large companies in the photographic industry have expanded their research to satisfy the ever-growing demand from amateurs. The amateurism that George Eastman recognized and encouraged at the beginning of the twentieth century thus still flourished at its end.

See also Autochrome plate; Color film; Instant photography.

Further Reading
Brooke-Ball, Peter. George Eastman and Kodak. Watford, England: Exley, 1994.
Collins, Douglas. The Story of Kodak. New York: Harry N. Abrams, 1990.
Freund, Gisele. Photography and Society. Boston: David R. Godine, 1980.
Wade, John. A Short History of the Camera. Watford, England: Fountain Press, 1979.
West, Nancy Martha. Kodak and the Lens of Nostalgia. Charlottesville: University Press of Virginia, 2000.


Bubble memory

The invention: An early nonvolatile medium for storing information on computers.

The person behind the invention:
Andrew H. Bobeck (1926- ), a Bell Telephone Laboratories scientist

Magnetic Technology

The fanfare over the commercial prospects of magnetic bubbles was begun on August 8, 1969, by a report appearing in both The New York Times and The Wall Street Journal. The early 1970's would see the anticipation mount (at least in the computer world) with each prediction of the benefits of this revolution in information storage technology. Although it was not disclosed to the public until August of 1969, magnetic bubble technology had held the interest of a small group of researchers around the world for many years.

The organization that probably can claim the greatest research advances with respect to computer applications of magnetic bubbles is Bell Telephone Laboratories (later part of American Telephone and Telegraph). Basic research into the properties of certain ferrimagnetic materials started at Bell Laboratories shortly after the end of World War II (1939-1945). Ferrimagnetic substances are typically magnetic iron oxides. Research into the properties of these and related compounds accelerated after the discovery of ferrimagnetic garnets in 1956 (these are a class of ferrimagnetic oxide materials that have the crystal structure of garnet). Ferrimagnetism is similar to ferromagnetism, the phenomenon that accounts for the strong attraction of one magnetized body for another. The ferrimagnetic materials most suited for bubble memories contain, in addition to iron, the element yttrium or a metal from the rare earth series.

It was a fruitful collaboration between scientist and engineer, between pure and applied science, that produced this promising
breakthrough in data storage technology. In 1966, Bell Laboratories scientist Andrew H. Bobeck and his coworkers were the first to realize the data storage potential offered by the strange behavior of thin slices of magnetic iron oxides under an applied magnetic field. The first U.S. patent for a memory device using magnetic bubbles was filed by Bobeck in the fall of 1966 and issued on August 5, 1969.

Bubbles Full of Memories

The three basic functional elements of a computer are the central processing unit, the input/output unit, and memory. Most implementations of semiconductor memory require a constant power source to retain the stored data. If the power is turned off, all stored data are lost. Memory with this characteristic is called "volatile." Disks and tapes, which are typically used for secondary memory, are "nonvolatile." Nonvolatile memory relies on the orientation of magnetic domains, rather than on electrical currents, to sustain its existence.

One can visualize by analogy how this works by taking a group of permanent bar magnets, each labeled with N for north at one end and S for south at the other. If an arrow is painted on each magnet starting from the north end with the tip at the south end, an orientation can then be assigned to a magnetic domain (here, one whole bar magnet). Data are "stored" with these bar magnets by arranging them in rows, some pointing up, some pointing down. Different arrangements translate to different data.

In the binary world of the computer, all information is represented by two states. A stored data item (known as a "bit," or binary digit) is either on or off, up or down, true or false, depending on the physical representation. The "on" state is commonly labeled with the number 1 and the "off" state with the number 0. This is the principle behind magnetic disk and tape data storage. Now imagine a thin slice of a certain type of magnetic material in the shape of a 3-by-5-inch index card.
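Before following the thin-slice picture further, the bar-magnet encoding just described can be sketched in a few lines of code. This is purely an illustrative model; the orientation names and helper functions are invented for this example and are not part of any real bubble-memory interface:

```python
# Model a row of magnetic domains as orientations: "up" or "down".
# By the convention described in the text, "up" stands for binary 1
# and "down" for binary 0.

def domains_to_bits(domains):
    """Translate a row of domain orientations into a bit string."""
    return "".join("1" if d == "up" else "0" for d in domains)

def bits_to_byte(bits):
    """Interpret a string of eight bits as an integer value."""
    return int(bits, 2)

row = ["down", "up", "down", "down", "down", "down", "down", "up"]
bits = domains_to_bits(row)
print(bits, "=", bits_to_byte(bits))  # 01000001 = 65
```

The same mapping carries over to the bubbles themselves: a bubble present in the film plays the role of "up" (a 1 bit), and its absence plays the role of "down" (a 0 bit).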
Under a microscope, using a special source of light, one can see through this thin slice in many regions of the surface. Darker, snakelike regions can also be seen, representing domains of an opposite orientation (polarity) to the transparent regions. If a weak external magnetic field is then applied by
placing a permanent magnet of the same shape as the card on the underside of the slice, a strange thing happens to the dark serpentine pattern—the long domains shrink and eventually contract into "bubbles," tiny magnetized spots. Viewed from the side of the slice, the bubbles are cylindrically shaped domains having a polarity opposite to that of the material on which they rest. The presence or absence of a bubble indicates either a 0 or a 1 bit. Data bits are stored by moving the bubbles in the thin film. As long as the field is applied by the permanent magnet substrate, the data will be retained. The bubble is thus a nonvolatile medium for data storage.

Consequences

Magnetic bubble memory created quite a stir in 1969 with its splashy public introduction. Most of the manufacturers of computer chips immediately instituted bubble memory development projects. Texas Instruments, Philips, Hitachi, Motorola, Fujitsu, and International Business Machines (IBM) joined the race with Bell Laboratories to mass-produce bubble memory chips. Texas Instruments became the first major chip manufacturer to mass-produce bubble memories in the mid-to-late 1970's. By 1990, however, almost all the research into magnetic bubble technology had shifted to Japan. Hitachi and Fujitsu began to invest heavily in this area.

Mass production proved to be the most difficult task. Although the materials it uses are different, the process of producing magnetic bubble memory chips is similar to the process applied in producing semiconductor-based chips such as those used for random access memory (RAM). It is for this reason that major semiconductor manufacturers and computer companies initially invested in this technology.
Lower fabrication yields and reliability issues plagued early production runs, however, and, although these problems have mostly been solved, gains in the performance characteristics of competing conventional memories have limited the impact that magnetic bubble technology has had on the marketplace. The materials used for magnetic bubble memories are costlier and possess more complicated structures than those used for semiconductor or disk memory.

Speed and cost of materials are not the only bases for comparison. It is possible to perform some elementary logic with magnetic bubbles. Conventional semiconductor-based memory offers storage only. The capability of performing logic with magnetic bubbles puts bubble technology far ahead of other magnetic technologies with respect to functional versatility. A small niche market for bubble memory developed in the 1980's. Magnetic bubble memory can be found in intelligent terminals, desktop computers, embedded systems, test equipment, and similar microcomputer-based systems.

See also Computer chips; Floppy disk; Hard disk; Optical disk; Personal computer.

Further Reading
"Bubble Memory's Ruggedness Revives Interest for Military Use." Aviation Week and Space Technology 130, no. 3 (January 16, 1989).
Graff, Gordon. "Better Bubbles." Popular Science 232, no. 2 (February, 1988).
McLeod, Jonah. "Will Bubble Memories Make a Comeback?" Electronics 61, no. 14 (August, 1988).
Nields, Megan. "Bubble Memory Bursts into Niche Markets." Mini-Micro Systems 20, no. 5 (May, 1987).


Bullet train

The invention: An ultrafast passenger railroad system capable of moving passengers at speeds double or triple those of ordinary trains.

The people behind the invention:
Ikeda Hayato (1899-1965), Japanese prime minister from 1960 to 1964, who pushed for the expansion of public expenditures
Shinji Sogo (1901-1971), the president of the Japanese National Railways, the "father of the bullet train"

Building a Faster Train

By 1900, Japan had a world-class railway system, a logical result of the country's dense population and the needs of its modernizing economy. After 1907, the government controlled the system through the Japanese National Railways (JNR). In 1938, JNR engineers first suggested the idea of a train that would travel 125 miles per hour from Tokyo to the southern city of Shimonoseki. Construction of a rapid train began in 1940 but was soon stopped because of World War II.

The 311-mile railway between Tokyo and Osaka, the Tokaido Line, has always been the major line in Japan. By 1957, a business express along the line operated at an average speed of 57 miles per hour, but the double-track line was rapidly reaching its transport capacity. The JNR established two investigative committees to explore alternative solutions. In 1958, the second committee recommended the construction of a high-speed railroad on a separate double track, to be completed in time for the Tokyo Olympics of 1964. The Railway Technical Institute of the JNR concluded that it was feasible to design a line that would operate at an average speed of about 130 miles per hour, cutting time for travel between Tokyo and Osaka from six hours to three hours.

By 1962, about 17 miles of the proposed line were completed for test purposes. During the next two years, prototype trains were tested to correct flaws and make improvements in the design. The entire project was completed on schedule in July, 1964, with total construction costs of more than $1 billion, double the original estimates.

The Speeding Bullet

Service on the Shinkansen, or New Trunk Line, began on October 1, 1964, ten days before the opening of the Olympic Games. Commonly called the "bullet train" because of its shape and speed, the Shinkansen was an instant success with the public, both in Japan and abroad. As promised, the time required to travel between Tokyo and Osaka was cut in half. Initially, the system provided daily services of sixty trains consisting of twelve cars each, but the number of scheduled trains was almost doubled by the end of the year.

The Shinkansen was able to operate at its unprecedented speed because it was designed and operated as an integrated system, making use of countless technological and scientific developments. Tracks followed the standard gauge of 56.5 inches, rather than the more narrow gauge common in Japan. For extra strength, heavy

[Photograph: Japanese bullet trains. (PhotoDisc)]

welded rails were attached directly onto reinforced concrete slabs. The minimum radius of a curve was 8,200 feet, except where sharper curves were mandated by topography. In many ways similar to modern airplanes, the railway cars were made airtight in order to prevent ear discomfort caused by changes in pressure when trains enter tunnels.

The Shinkansen trains were powered by electric traction motors, with four 185-kilowatt motors on each car—one motor attached to each axle. This design had several advantages: It provided an even distribution of axle load for reducing strain on the tracks; it allowed the application of dynamic brakes (where the motor was used for braking) on all axles; and it prevented the failure of one or two units from interrupting operation of the entire train. The 25,000-volt electrical current was carried by trolley wire to the cars, where it was rectified into a pulsating current to drive the motors.

The Shinkansen system established a casualty-free record because of its maintenance policies combined with its computerized Centralized Traffic Control system. The control room at Tokyo Station was designed to maintain timely information about the location of all trains and the condition of all routes. Although train operators had some discretion in determining speed, automatic brakes also operated to ensure a safe distance between trains. At least once each month, cars were thoroughly inspected; every ten days, an inspection train examined the conditions of tracks, communication equipment, and electrical systems.

Impact

Public usage of the Tokyo-Osaka bullet train increased steadily because of the system's high speed, comfort, punctuality, and superb safety record.
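The performance figures quoted in this article can be cross-checked with simple arithmetic. The inputs below are the article's (311 miles Tokyo to Osaka, a three-hour trip, twelve-car trains with four 185-kilowatt motors per car); the calculation itself is mine:

```python
# Figures quoted in the article
route_miles = 311                 # Tokyo-Osaka Tokaido Line distance
trip_hours = 3                    # scheduled trip time at opening (down from six)
cars, motors_per_car, kw_per_motor = 12, 4, 185

avg_mph = route_miles / trip_hours
total_kw = cars * motors_per_car * kw_per_motor

print(round(avg_mph), "mph average;", total_kw, "kW installed per train")
# prints: 104 mph average; 8880 kW installed per train
```

An average above 100 miles per hour over the whole route, including stops, is consistent with the article's claim of roughly 130 miles per hour in sustained running, and the nearly 8,900 kilowatts of distributed traction power shows why spreading motors over every axle mattered.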
Businesspeople were especially happy that the rapid service allowed them to make the round-trip without the necessity of an overnight stay, and continuing modernization soon allowed nonstop trains to make a one-way trip in two and one-half hours, requiring speeds of 160 miles per hour in some stretches. By the early 1970’s, the line was transporting a daily average of 339,000 passengers in 240 trains, meaning that a train departed from Tokyo about every ten minutes.


The popularity of the Shinkansen system quickly resulted in demands for its extension into other densely populated regions. In 1972, a 100-mile stretch between Osaka and Okayama was opened for service. By 1975, the line was further extended to Hakata on the island of Kyushu, passing through the Kammon undersea tunnel. The cost of this 244-mile stretch was almost $2.5 billion. In 1982, lines were completed from Tokyo to Niigata and from Tokyo to Morioka. By 1993, the system had grown to 1,134 miles of track. Since high usage made the system extremely profitable, the sale of the JNR to private companies in 1987 did not appear to produce adverse consequences.

The economic success of the Shinkansen had a revolutionary effect on thinking about the possibilities of modern rail transportation, leading one authority to conclude that the line acted as "a savior of the declining railroad industry." Several other industrial countries were stimulated to undertake large-scale railway projects; France, especially, followed Japan's example by constructing high-speed electric railroads from Paris to Nice and to Lyon. By the mid-1980's, there were experiments with high-speed trains based on magnetic levitation and other radical innovations, but it was not clear whether such designs would be able to compete with the Shinkansen model.

See also Airplane; Atomic-powered ship; Diesel locomotive; Supersonic passenger plane.

Further Reading
French, Howard W. "Japan's New Bullet Train Draws Fire." New York Times (September 24, 2000).
Frew, Tim. Locomotives: From the Steam Locomotive to the Bullet Train. New York: Mallard Press, 1990.
Holley, David. "Faster Than a Speeding Bullet: High-Speed Trains Are Japan's Pride, Subject of Debate." Los Angeles Times (April 10, 1994).
O'Neill, Bill. "Beating the Bullet Train." New Scientist 140, no. 1893 (October 2, 1993).
Raoul, Jean-Claude. "How High-Speed Trains Make Tracks." Scientific American 277 (October, 1997).


Buna rubber

The invention: The first practical synthetic rubber product developed, Buna inspired the creation of other synthetic substances that eventually replaced natural rubber in industrial applications.

The people behind the invention:
Charles de la Condamine (1701-1774), a French naturalist
Charles Goodyear (1800-1860), an American inventor
Joseph Priestley (1733-1804), an English chemist
Charles Greville Williams (1829-1910), an English chemist

A New Synthetic Rubber

The discovery of natural rubber is often credited to the French scientist Charles de la Condamine, who, in 1736, sent the French Academy of Science samples of an elastic material used by Peruvian Indians to make balls that bounced. The material was primarily a curiosity until 1770, when Joseph Priestley, an English chemist, discovered that it rubbed out pencil marks, after which he called it “rubber.” Natural rubber, made from the sap of the rubber tree (Hevea brasiliensis), became important after Charles Goodyear discovered in 1839 that heating rubber with sulfur (a process called “vulcanization”) made it more elastic and easier to use. Vulcanized natural rubber came to be used to make raincoats, rubber bands, and motor vehicle tires.

Natural rubber is difficult to obtain (making one tire requires the amount of rubber produced by one tree in two years), and wars have often cut off supplies of this material to various countries. Therefore, efforts to manufacture synthetic rubber began in the late nineteenth century. Those efforts followed the discovery by English chemist Charles Greville Williams and others in the 1860’s that natural rubber was composed of thousands of molecules of a chemical called isoprene that had been joined to form giant, necklace-like molecules. The first successful synthetic rubber, Buna, was patented by Germany’s I. G. Farben Industrie in 1926. The success of this rubber led to the development of many other synthetic rubbers, which are now used in place of natural rubber in many applications.

It was an accident that finally showed Charles Goodyear (1800-1860) how to make rubber into a durable, practical material. For years he had been experimenting at home looking for ways to improve natural rubber—and producing stenches that drove his family and neighbors to distraction—when in 1839 he dropped a piece of rubber mixed with sulfur onto a hot stove. When he examined the charred specimen, he discovered it was not sticky, as hot natural rubber always is, and when he took it outside into the cold, it did not become brittle.

The son of an inventor, Goodyear invented much more than his vulcanizing process for rubber. He also patented a spring-lever faucet, pontoon boat, hay fork, and air pump, but he was never successful in making money from his inventions. Owner of a hardware store, he went broke during a financial panic in 1830 and had to spend time in debtor’s prison. He was never financially stable afterwards, often having to borrow money and sell his family’s belongings to support his experiments. And he had a large family—twelve children, of whom only half lived beyond childhood.

Even vulcanized rubber did not make Goodyear’s fortune. He delayed patenting it until Thomas Hancock, an Englishman, replicated Goodyear’s method of vulcanizing and began producing rubber in England. Goodyear sued and lost. Others stole his method, and although he won one large case, legal expenses took away most of the settlement. He borrowed more and more money to advertise his product, with some success. For example, Emperor Napoleon III awarded Goodyear the Cross of the Legion of Honor for his display at the 1851 Crystal Palace Exhibition in London. Nevertheless, Goodyear died deeply in debt.

Despite all the imitators, vulcanized rubber remained associated with Goodyear. Thirty-eight years after he died, the world’s largest rubber manufacturer took his name for the company’s title.

Charles Goodyear (Smithsonian Institution)


From Erasers to Gas Pumps

Natural rubber belongs to the group of chemicals called “polymers.” A polymer is a giant molecule that is made up of many simpler chemical units (“monomers”) that are attached chemically to form long strings. In natural rubber, the monomer is isoprene (2-methyl-1,3-butadiene). The first efforts to make a synthetic rubber used the discovery that isoprene could be made and converted into an elastic polymer. The synthetic rubber that was created from isoprene was, however, inferior to natural rubber. The first Buna rubber, which was patented by I. G. Farben in 1926, was better, but it was still less than ideal. Buna rubber was made by polymerizing the monomer butadiene in the presence of sodium. The name Buna comes from the first two letters of the words “butadiene” and “natrium” (German for sodium). Natural and Buna rubbers are called homopolymers because they contain only one kind of monomer.

The ability of chemists to make Buna rubber, along with its successful use, led to experimentation with the addition of other monomers to isoprene-like chemicals used to make synthetic rubber. Among the first great successes were materials that contained two alternating monomers; such materials are called “copolymers.” If the two monomers are designated A and B, part of a polymer molecule can be represented as (ABABABABABABABABAB). Numerous synthetic copolymers, which are often called “elastomers,” now replace natural rubber in applications where they have superior properties. All elastomers are rubbers, since objects made from them both stretch greatly when pulled and return quickly to their original shape when the tension is released.

Two other well-known rubbers developed by I. G. Farben are the copolymers called Buna-N and Buna-S. These materials combine butadiene and the monomers acrylonitrile and styrene, respectively. Many modern motor vehicle tires are made of synthetic rubber that differs little from Buna-S rubber.
This rubber was developed after the United States was cut off in the 1940’s, during World War II, from its Asian source of natural rubber. The solution to this problem was the development of a synthetic rubber industry based on GR-S rubber (government rubber plus styrene), which was essentially Buna-S rubber. This rubber is still widely used.
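The homopolymer and copolymer chain notations described above can be sketched in a few lines of code. This is purely illustrative (an assumption of this sketch, not actual chemistry software): each monomer is treated as a single letter rather than a real chemical structure.

```python
def homopolymer(monomer: str, units: int) -> str:
    """Chain built from one repeating monomer, like butadiene in Buna."""
    return monomer * units

def copolymer(a: str, b: str, pairs: int) -> str:
    """Strictly alternating two-monomer chain, like Buna-S."""
    return (a + b) * pairs

print(homopolymer("B", 8))     # BBBBBBBB
print(copolymer("B", "S", 8))  # BSBSBSBSBSBSBSBS
```

With B for butadiene and S for styrene, the alternating case reproduces the (BSBSBSBSBSBSBSBS) pattern of Buna-S described in the text.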


Buna-S rubber is often made by mixing butadiene and styrene in huge tanks of soapy water, stirring vigorously, and heating the mixture. The polymer contains equal amounts of butadiene and styrene (BSBSBSBSBSBSBSBS). When the molecules of the Buna-S polymer reach the desired size, the polymerization is stopped and the rubber is coagulated (solidified) chemically. Then, water and all the unused starting materials are removed, after which the rubber is dried and shipped to various plants for use in tires and other products. The major difference between Buna-S and GR-S rubber is that the method of making GR-S rubber involves the use of low temperatures.

Buna-N rubber is made in a fashion similar to that used for Buna-S, using butadiene and acrylonitrile. Both Buna-N and the related neoprene rubber, invented by Du Pont, are very resistant to gasoline and other liquid vehicle fuels. For this reason, they can be used in gas-pump hoses. All synthetic rubbers are vulcanized before they are used in industry.

Impact

Buna rubber became the basis for the development of the other modern synthetic rubbers. These rubbers have special properties that make them suitable for specific applications. One developmental approach involved the use of chemically modified butadiene in homopolymers such as neoprene. Made of chloroprene (chlorobutadiene), neoprene is extremely resistant to sun, air, and chemicals. It is so widely used in machine parts, shoe soles, and hoses that more than 400 million pounds are produced annually.

Another developmental approach involved copolymers that alternated butadiene with other monomers. For example, the successful Buna-N rubber (butadiene and acrylonitrile) has properties similar to those of neoprene. It differs sufficiently from neoprene, however, to be used to make items such as printing press rollers. About 200 million pounds of Buna-N are produced annually.
Some 4 billion pounds of the even more widely used polymer Buna-S/ GR-S are produced annually, most of which is used to make tires. Several other synthetic rubbers have significant industrial applications, and efforts to make copolymers for still other purposes continue.


See also Neoprene; Nylon; Orlon; Plastic; Polyester; Polyethylene; Polystyrene; Silicones; Teflon; Velcro.

Further Reading
Herbert, Vernon. Synthetic Rubber: A Project That Had to Succeed. Westport, Conn.: Greenwood Press, 1985.
Mossman, S. T. I., and Peter John Turnbull Morris. The Development of Plastics. Cambridge: Royal Society of Chemistry, 1994.
Von Hagen, Victor Wolfgang. South America Called Them: Explorations of the Great Naturalists, La Condamine, Humboldt, Darwin, Spruce. New York: A. A. Knopf, 1945.


CAD/CAM

The invention: Computer-Aided Design (CAD) and Computer-Aided Manufacturing (CAM) enhanced flexibility in engineering design, leading to higher quality and reduced time for manufacturing.

The people behind the invention:
Patrick Hanratty, a General Motors Research Laboratory worker who developed graphics programs
Jack St. Clair Kilby (1923- ), a Texas Instruments employee who first conceived of the idea of the integrated circuit
Robert Noyce (1927-1990), an Intel Corporation employee who developed an improved process of manufacturing integrated circuits on microchips
Don Halliday, an early user of CAD/CAM who created the Made-in-America car in only four months by using CAD and project management software
Fred Borsini, an early user of CAD/CAM who demonstrated its power

Summary of Event

Computer-Aided Design (CAD) is a technique whereby geometrical descriptions of two-dimensional (2-D) or three-dimensional (3-D) objects can be created and stored, in the form of mathematical models, in a computer system. Points, lines, and curves are represented as graphical coordinates. When a drawing is requested from the computer, transformations are performed on the stored data, and the geometry of a part or a full view from either a two- or a three-dimensional perspective is shown. CAD systems replace the tedious process of manual drafting, and computer-aided drawing and redrawing that can be retrieved when needed has improved drafting efficiency. A CAD system is a combination of computer hardware and software that facilitates the construction of geometric models and, in many cases, their analysis. It allows a wide variety of visual representations of those models to be displayed.
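The idea of storing geometry as coordinates and transforming it before display can be illustrated with a minimal sketch. This is an assumption-laden toy, not code from any actual CAD package: points are kept as coordinate pairs, and a rotation is applied to produce a new view.

```python
import math

Point = tuple[float, float]

def rotate(points: list[Point], degrees: float) -> list[Point]:
    """Rotate 2-D points about the origin -- the kind of transformation
    a CAD system applies to stored coordinates before drawing a view."""
    t = math.radians(degrees)
    c, s = math.cos(t), math.sin(t)
    return [(x * c - y * s, x * s + y * c) for x, y in points]

# A unit square stored as four corner coordinates, viewed after a
# quarter turn; tiny floating-point residues are rounded away:
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
print([(round(x, 6), round(y, 6)) for x, y in rotate(square, 90.0)])
# [(0.0, 0.0), (0.0, 1.0), (-1.0, 1.0), (-1.0, 0.0)]
```

Because the model itself is unchanged, the same stored coordinates can be redrawn at any angle or scale, which is what makes computer redrawing so much faster than manual redrafting.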


Computer-Aided Manufacturing (CAM) refers to the use of computers to control, wholly or partly, manufacturing processes. In practice, the term is most often applied to computer-based developments of numerical control technology; robots and flexible manufacturing systems (FMS) are included in the broader use of CAM systems. A CAD/CAM interface is envisioned as a computerized database that can be accessed and enriched by either design or manufacturing professionals during various stages of the product development and production cycle.

In CAD systems of the early 1990’s, the ability to model solid objects became widely available. The use of graphic elements such as lines and arcs and the ability to create a model by adding and subtracting solids such as cubes and cylinders are the basic principles of CAD and of simulating objects within a computer. CAD systems enable computers to simulate both taking things apart (sectioning) and putting things together for assembly. In addition to being able to construct prototypes and store images of different models, CAD systems can be used for simulating the behavior of machines, parts, and components. These abilities enable CAD to construct models that can be subjected to nondestructive testing; that is, even before engineers build a physical prototype, the CAD model can be subjected to testing and the results can be analyzed. As another example, designers of printed circuit boards have the ability to test their circuits on a CAD system by simulating the electrical properties of components.

During the 1950’s, the U.S. Air Force recognized the need for reducing the development time for special aircraft equipment. As a result, the Air Force commissioned the Massachusetts Institute of Technology to develop numerically controlled (NC) machines that were programmable. A workable demonstration of NC machines was made in 1952; this began a new era for manufacturing.
As the speed of an aircraft increased, the cost of manufacturing also increased because of stricter technical requirements. This higher cost provided a stimulus for the further development of NC technology, which promised to reduce errors in design before the prototype stage. The early 1960’s saw the development of mainframe computers. Many industries valued computing technology for its speed and for


its accuracy in lengthy and tedious numerical operations in design, manufacturing, and other business functional areas. Patrick Hanratty, working for General Motors Research Laboratory, saw other potential applications and developed graphics programs for use on mainframe computers. The use of graphics in software aided the development of CAD/CAM, allowing visual representations of models to be presented on computer screens and printers. The 1970’s saw an important development in computer hardware, namely the development and growth of personal computers (PCs). Personal computers became smaller as a result of the development of integrated circuits. Jack St. Clair Kilby, working for Texas Instruments, first conceived of the integrated circuit; later, Robert Noyce, working for Intel Corporation, developed an improved process of manufacturing integrated circuits on microchips. Personal computers using these microchips offered both speed and accuracy at costs much lower than those of mainframe computers. Five companies offered integrated commercial computer-aided design and computer-aided manufacturing systems by the first half of 1973. Integration meant that both design and manufacturing were contained in one system. Of these five companies—Applicon, Computervision, Gerber Scientific, Manufacturing and Consulting Services (MCS), and United Computing—four offered turnkey systems exclusively. Turnkey systems provide design, development, training, and implementation for each customer (company) based on the contractual agreement; they are meant to be used as delivered, with no need for the purchaser to make significant adjustments or perform programming. The 1980’s saw a proliferation of mini- and microcomputers with a variety of platforms (processors) with increased speed and better graphical resolution. This made the widespread development of computer-aided design and computer-aided manufacturing possible and practical. 
Major corporations spent large research and development budgets developing CAD/CAM systems that would automate manual drafting and machine tool movements. Don Halliday, working for Truesports Inc., provided an early example of the benefits of CAD/CAM. He created the Made-in-America car in only four months by using CAD and project management software. In the late 1980’s, Fred Borsini, the president of Leap Technologies in


Michigan, brought various products to market in record time through the use of CAD/CAM.

In the early 1980’s, much of the CAD/CAM industry consisted of software companies. The cost for a relatively slow interactive system in 1980 was close to $100,000. The late 1980’s saw the demise of minicomputer-based systems in favor of Unix work stations and PCs based on 386 and 486 microchips produced by Intel. By the time of the International Manufacturing Technology show in September, 1992, the industry could show numerous CAD/CAM innovations including tools, CAD/CAM models to evaluate manufacturability in early design phases, and systems that allowed use of the same data for a full range of manufacturing functions.

Impact

In 1990, CAD/CAM hardware sales by U.S. vendors reached $2.68 billion. In software alone, $1.42 billion worth of CAD/CAM products and systems were sold worldwide by U.S. vendors, according to International Data Corporation figures for 1990. CAD/CAM systems were in widespread use throughout the industrial world. Development lagged in advanced software applications, particularly in image processing, and in the communications software and hardware that ties processes together.

A reevaluation of CAD/CAM systems was being driven by the industry trend toward increased functionality of computer-driven numerically controlled machines. Numerical control (NC) software enables users to graphically define the geometry of the parts in a product, develop paths that machine tools will follow, and exchange data among machines on the shop floor. In 1991, NC configuration software represented 86 percent of total CAM sales. In 1992, the market shares of the five largest companies in the CAD/CAM market were 29 percent for International Business Machines, 17 percent for Intergraph, 11 percent for Computervision, 9 percent for Hewlett-Packard, and 6 percent for Mentor Graphics.
General Motors formed a joint venture with Ford and Chrysler to develop a common computer language in order to make the next generation of CAD/CAM systems easier to use. The venture was aimed particularly at problems that posed barriers to speeding up


the design of new automobiles. The three car companies all had sophisticated computer systems that allowed engineers to design parts on computers and then electronically transmit specifications to tools that make parts or dies. CAD/CAM technology was expected to advance on many fronts. As of the early 1990’s, different CAD/CAM vendors had developed systems that were often incompatible with one another, making it difficult to transfer data from one system to another. Large corporations, such as the major automakers, developed their own interfaces and network capabilities to allow different systems to communicate. Major users of CAD/CAM saw consolidation in the industry through the establishment of standards as being in their interests. Resellers of CAD/CAM products also attempted to redefine their markets. These vendors provide technical support and service to users. The sale of CAD/CAM products and systems offered substantial opportunities, since demand remained strong. Resellers worked most effectively with small and medium-sized companies, which often were neglected by the primary sellers of CAD/CAM equipment because they did not generate a large volume of business. Some projections held that by 1995 half of all CAD/CAM systems would be sold through resellers, at a cost of $10,000 or less for each system. The CAD/CAM market thus was in the process of dividing into two markets: large customers (such as aerospace firms and automobile manufacturers) that would be served by primary vendors, and small and medium-sized customers that would be serviced by resellers. CAD will find future applications in marketing, the construction industry, production planning, and large-scale projects such as shipbuilding and aerospace. Other likely CAD markets include hospitals, the apparel industry, colleges and universities, food product manufacturers, and equipment manufacturers. As the linkage between CAD and CAM is enhanced, systems will become more productive. 
The geometrical data from CAD will be put to greater use by CAM systems. CAD/CAM already had proved that it could make a big difference in productivity and quality. Customer orders could be changed much faster and more accurately than in the past, when a change could require a manual redrafting of a design. Computers could do


automatically in minutes what once took hours manually. CAD/CAM saved time by reducing, and in some cases eliminating, human error. Many flexible manufacturing systems (FMS) had machining centers equipped with sensing probes to check the accuracy of the machining process. These self-checks can be made part of numerical control (NC) programs. With the technology of the early 1990’s, some experts estimated that CAD/CAM systems were in many cases twice as productive as the systems they replaced; in the long run, productivity is likely to improve even more, perhaps up to three times that of older systems or even higher. As costs for CAD/CAM systems concurrently fall, the investment in a system will be recovered more quickly. Some analysts estimated that by the mid-1990’s, the recovery time for an average system would be about three years.

Another frontier in the development of CAD/CAM systems is expert (or knowledge-based) systems, which combine data with a human expert’s knowledge, expressed in the form of rules that the computer follows. Such a system will analyze data in a manner mimicking intelligence. For example, a 3-D model might be created from standard 2-D drawings. Expert systems will likely play a pivotal role in CAM applications. For example, an expert system could determine the best sequence of machining operations to produce a component.

Continuing improvements in hardware, especially increased speed, will benefit CAD/CAM systems. Software developments, however, may produce greater benefits. Wider use of CAD/CAM systems will depend on the cost savings from improvements in hardware and software as well as on the productivity of the systems and the quality of their product. The construction, apparel, automobile, and aerospace industries have already experienced increases in productivity, quality, and profitability through the use of CAD/CAM. A case in point is Boeing, which used CAD from start to finish in the design of the 757.
See also Differential analyzer; Mark I calculator; Personal computer; SAINT; Virtual machine; Virtual reality.


Further Reading
Groover, Mikell P., and Emory W. Zimmers, Jr. CAD/CAM: Computer-Aided Design and Manufacturing. Englewood Cliffs, N.J.: Prentice-Hall, 1984.
Jurgen, Ronald K. Computers and Manufacturing Productivity. New York: Institute of Electrical and Electronics Engineers, 1987.
McMahon, Chris, and Jimmie Browne. CAD/CAM: From Principles to Practice. Reading, Mass.: Addison-Wesley, 1993.
_____. CAD/CAM: Principles, Practice, and Manufacturing Management. 2d ed. Harlow, England: Addison-Wesley, 1998.
Medland, A. J., and Piers Burnett. CAD/CAM in Practice. New York: John Wiley & Sons, 1986.


Carbon dating

The invention: A technique that measures the radioactive decay of carbon 14 in organic substances to determine the ages of artifacts as old as ten thousand years.

The people behind the invention:
Willard Frank Libby (1908-1980), an American chemist who won the 1960 Nobel Prize in Chemistry
Charles Wesley Ferguson (1922-1986), a scientist who demonstrated that carbon 14 dates before 1500 b.c.e. needed to be corrected

One in a Trillion

Carbon dioxide in the earth’s atmosphere contains a mixture of three carbon isotopes (isotopes are atoms of the same element that contain different numbers of neutrons), which occur in the following percentages: about 99 percent carbon 12, about 1 percent carbon 13, and approximately one atom in a trillion of radioactive carbon 14. Plants absorb carbon dioxide from the atmosphere during photosynthesis, and then animals eat the plants, so all living plants and animals contain a small amount of radioactive carbon.

When a plant or animal dies, its radioactivity slowly decreases as the radioactive carbon 14 decays. The time it takes for half of any radioactive substance to decay is known as its “half-life.” The half-life for carbon 14 is known to be about fifty-seven hundred years. The carbon 14 activity will drop to one-half after one half-life, one-fourth after two half-lives, one-eighth after three half-lives, and so forth. After ten or twenty half-lives, the activity becomes too low to be measurable. Coal and oil, which were formed from organic matter millions of years ago, have long since lost any carbon 14 activity. Wood samples from an Egyptian tomb or charcoal from a prehistoric fireplace a few thousand years ago, however, can be dated with good reliability from the leftover radioactivity.

In the 1940’s, the properties of radioactive elements were still being discovered and were just beginning to be used to solve problems. Scientists still did not know the half-life of carbon 14, and archaeologists still depended mainly on historical evidence to determine the ages of ancient objects.

In early 1947, Willard Frank Libby started a crucial experiment in testing for radioactive carbon. He decided to test samples of methane gas from two different sources. One group of samples came from the sewage disposal plant at Baltimore, Maryland, which was rich in fresh organic matter. The other sample of methane came from an oil refinery, which should have contained only ancient carbon from fossils whose radioactivity should have completely decayed. The experimental results confirmed Libby’s suspicions: The methane from fresh sewage was radioactive, but the methane from oil was not. Evidently, radioactive carbon was present in fresh organic material, but it decays away eventually.

Tree-Ring Dating

In order to establish the validity of radiocarbon dating, Libby analyzed known samples of varying ages. These included tree-ring samples from the years 575 and 1075 and one redwood from 979 b.c.e., as well as artifacts from Egyptian tombs going back to about 3000 b.c.e. In 1949, he published an article in the journal Science that contained a graph comparing the historical ages and the measured radiocarbon ages of eleven objects. The results were accurate within 10 percent, which meant that the general method was sound.

The first archaeological object analyzed by carbon dating, obtained from the Metropolitan Museum of Art in New York, was a piece of cypress wood from the tomb of King Djoser of Egypt. Based on historical evidence, the age of this piece of wood was about forty-six hundred years. A small sample of carbon obtained from this wood was deposited on the inside of Libby’s radiation counter, giving a count rate that was about 40 percent lower than that of modern organic carbon. The resulting age of the wood calculated from its residual radioactivity was about thirty-eight hundred years, a difference of eight hundred years.
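The half-life arithmetic described above can be written out directly: solving the decay law for time gives age = half-life × log₂(original activity ÷ remaining activity). The sketch below is illustrative only, using the roughly fifty-seven-hundred-year half-life quoted in the text.

```python
import math

HALF_LIFE_YEARS = 5_700  # approximate half-life of carbon 14

def radiocarbon_age(activity_fraction: float) -> float:
    """Age implied by the fraction of modern carbon 14 activity remaining.

    Solves N/N0 = (1/2) ** (t / half_life) for t.
    """
    if not 0.0 < activity_fraction <= 1.0:
        raise ValueError("activity fraction must be in (0, 1]")
    return HALF_LIFE_YEARS * math.log2(1.0 / activity_fraction)

# One half-life leaves one-half the activity; two leave one-fourth:
print(round(radiocarbon_age(0.5)))   # 5700
print(round(radiocarbon_age(0.25)))  # 11400
```

Because activity falls geometrically, after ten or twenty half-lives the remaining fraction drops below what a counter can measure, which is why the method's useful range tops out at several tens of thousands of years.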
Considering that this was the first object to be analyzed, even such a rough agreement with the historic age was considered to be encouraging. The validity of radiocarbon dating depends on an important assumption—namely, that the abundance of carbon 14 in nature has


Willard Frank Libby

Born in 1908, Willard Frank Libby came from a family of farmers in Grand View, Colorado. They moved to Sebastopol, California, where Libby went through public school. He entered the University of California, Berkeley, in 1927, earning a bachelor of science degree in 1931 and a doctorate in 1933. He stayed on at Berkeley as an instructor of chemistry until he won the first of his three Guggenheim Fellowships in 1941. He moved to Princeton University to study, but World War II cut short his fellowship. Instead, he joined the Manhattan Project, helping design the atomic bomb at Columbia University’s Division of War Research.

After the war Libby became a professor of chemistry at the University of Chicago, where he conducted his research on carbon-14 dating. A leading expert in radiochemistry, he also investigated isotope tracers and the effects of fallout. However, his career saw as much public service as research. In 1954 President Dwight Eisenhower appointed him to the Atomic Energy Commission as its first chemist, and Libby directed the administration’s international Atoms for Peace program. He resigned in 1959 to take an appointment at the University of California, Los Angeles, as professor of chemistry and then in 1962 as director of the Institute of Geophysics and Planetary Physics, a position he held until he died in 1980.

Libby received the Nobel Prize in Chemistry in 1960 for developing carbon-14 dating. Among his many other honors were the American Chemical Society’s Willard Gibbs Award in 1958, the Albert Einstein Medal in 1959, and the Day Medal of the Geological Society of America in 1961. He was a member of the Advisory Board of the Guggenheim Memorial Foundation, the Office of Civil and Defense Mobilization, the National Science Foundation’s General Commission on Science, and the Academic Institution and also a director of Douglas Aircraft Company.

been constant for many thousands of years. If carbon 14 was less abundant at some point in history, organic samples from that era would have started with less radioactivity. When analyzed today, their reduced activity would make them appear to be older than they really are.


Charles Wesley Ferguson from the Tree-Ring Research Laboratory at the University of Arizona tackled this problem. He measured the age of bristlecone pine trees both by counting the rings and by using carbon 14 methods. He found that carbon 14 dates before 1500 b.c.e. needed to be corrected. The results show that radiocarbon dates are older than tree-ring counting dates by as much as several hundred years for the oldest samples. He knew that the number of tree rings had given him the correct age of the pines, because trees accumulate one ring of growth for every year of life. Apparently, the carbon 14 content in the atmosphere has not been constant. Fortunately, tree-ring counting gives reliable dates that can be used to correct radiocarbon measurements back to about 6000 b.c.e.

Impact

Some interesting samples were dated by Libby’s group. The Dead Sea Scrolls had been found in a cave by an Arab shepherd in 1947, but some Bible scholars at first questioned whether they were genuine. The linen wrapping from the Book of Isaiah was tested for carbon 14, giving a date of 100 b.c.e., which helped to establish its authenticity. Human hair from an Egyptian tomb was determined to be nearly five thousand years old. Well-preserved sandals from a cave in eastern Oregon were determined to be ninety-three hundred years old. A charcoal sample from a prehistoric site in western South Dakota was found to be about seven thousand years old.

The Shroud of Turin, located in Turin, Italy, has been a controversial object for many years. It is a linen cloth, more than four meters long, which shows the image of a man’s body, both front and back. Some people think it may have been the burial shroud of Jesus Christ after his crucifixion. A team of scientists in 1978 was permitted to study the shroud, using infrared photography, analysis of possible blood stains, microscopic examination of the linen fibers, and other methods. The results were ambiguous.
A carbon 14 test was not permitted because it would have required cutting a piece about the size of a handkerchief from the shroud.

A new method of measuring carbon 14 was developed in the late 1980’s. It is called “accelerator mass spectrometry,” or AMS. Unlike Libby’s method, it does not count the radioactivity of carbon. Instead, a mass spectrometer directly measures the ratio of carbon 14 to ordinary carbon. The main advantage of this method is that the sample size needed for analysis is about a thousand times smaller than before. The archbishop of Turin permitted three laboratories with the appropriate AMS apparatus to test the shroud material. The results agreed that the material was from the fourteenth century, not from the time of Christ. The figure on the shroud may be a watercolor painting on linen.

Since Libby’s pioneering experiments in the late 1940’s, carbon 14 dating has established itself as a reliable dating technique for archaeologists and cultural historians. Further improvements are expected to increase precision, to make it possible to use smaller samples, and to extend the effective time range of the method back to fifty thousand years or earlier.

See also Atomic clock; Geiger counter; Richter scale.

Further Reading
Goldberg, Paul, Vance T. Holliday, and C. Reid Ferring. Earth Sciences and Archaeology. New York: Kluwer Academic Plenum, 2001.
Libby, Willard Frank. “Radiocarbon Dating” [Nobel lecture]. In Chemistry, 1942-1962. River Edge, N.J.: World Scientific, 1999.
Lowe, John J. Radiocarbon Dating: Recent Applications and Future Potential. New York: John Wiley and Sons, 1996.

163

Cassette recording

The invention: Self-contained system making it possible to record and repeatedly play back sound without having to thread tape through a machine.

The person behind the invention:
Fritz Pfleumer, a German engineer whose work on audiotapes paved the way for audiocassette production

Smaller Is Better

The introduction of magnetic audio recording tape in 1929 was met with great enthusiasm, particularly in the entertainment industry, and specifically among radio broadcasters. Although somewhat practical methods for recording and storing sound for later playback had been around for some time, audiotape was much easier to use, store, and edit, and much less expensive to produce.

It was Fritz Pfleumer, a German engineer, who in 1929 filed the first audiotape patent. His detailed specifications indicated that tape could be made by bonding a thin coating of oxide to strips of either paper or film. Pfleumer also suggested that audiotape could be attached to filmstrips to provide higher-quality sound than was available with the film sound technologies in use at that time.

In 1935, the German electronics firm AEG produced a reliable prototype of a record-playback machine based on Pfleumer’s idea. By 1947, the American company 3M had refined the concept to the point where it was able to produce a high-quality tape using a plastic-based backing and red oxide. The tape recorded and reproduced sound with a high degree of clarity and dynamic range and would soon become the standard in the industry. Still, the tape was sold and used in a somewhat inconvenient open-reel format. The user had to thread it through a machine and onto a take-up reel. This process was somewhat cumbersome and complicated for the layperson. For many years, sound-recording technology remained a tool mostly for professionals.

In 1963, the first audiocassette was introduced by the Netherlands-based Philips NV company. This device could be inserted into a machine without threading. Rewind and fast-forward were faster, and it made no difference where the tape was stopped prior to the ejection of the cassette. By contrast, open-reel audiotape required that the tape be wound fully onto one or the other of the two reels before it could be taken off the machine. Technical advances allowed the cassette tape to be much narrower than the tape used in open reels and also allowed the tape speed to be reduced without sacrificing sound quality. Thus, the cassette was easier to carry around, and more sound could be recorded on a cassette tape. In addition, the enclosed cassette decreased wear and tear on the tape and protected it from contamination.

Creating a Market

One of the most popular uses for audiocassettes was to record music from radios and other audio sources for later playback. During the 1970’s, many radio stations developed “all music” formats in which entire albums were often played without interruption. That gave listeners an opportunity to record the music for later playback. At first, the music recording industry complained about this practice, charging that unauthorized recording of music from the radio was a violation of copyright laws. Eventually, the issue died down as the same companies began to recognize this new, untapped market for recorded music on cassette.

Audiocassettes, all based on the original Philips design, were being manufactured by more than sixty companies within only a few years of their introduction. In addition, spin-offs of that design were being used in many specialized applications, including dictation, storage of computer information, and surveillance. The emergence of videotape resulted in a number of formats for recording and playing back video based on the same principle. Although each is characterized by a different width of tape, each uses the same technique for tape storage and transport.
The cassette has remained a popular means of storing and retrieving information on magnetic tape for more than a quarter of a century. During the early 1990’s, digital technologies such as audio CDs (compact discs) and the more advanced CD-ROM (compact discs that reproduce sound, text, and images via computer) were beginning to store information in revolutionary new ways. With the development of this increasingly sophisticated technology, need for the audiocassette, once the most versatile, reliable, portable, and economical means of recording, storing, and playing back sound, became more limited.

Consequences

The cassette represented a new level of convenience for the audiophile, resulting in a significant increase in the use of recording technology in all walks of life. Even small children could operate cassette recorders and players, which led to their use in schools for a variety of instructional tasks and in the home for entertainment.

The recording industry realized that audiotape cassettes would allow consumers to listen to recorded music in places where record players were impractical: in automobiles, at the beach, even while camping. The industry also saw the need for widespread availability of music and information on cassette tape. It soon began distributing albums on audiocassette in addition to the long-play vinyl discs, and recording sales increased substantially. This new technology put recorded music into automobiles for the first time, again resulting in a surge in sales for recorded music. Eventually, information, including language instruction and books-on-tape, became popular commuter fare.

With the invention of the microchip, audiotape players became available in smaller and smaller sizes, making them truly portable. Audiocassettes underwent another explosion in popularity during the early 1980’s, when the Sony Corporation introduced the Walkman, an extremely compact, almost weightless cassette player that could be attached to clothing and used with lightweight earphones virtually anywhere. At the same time, cassettes were suddenly being used with microcomputers for backing up magnetic data files. Home video soon exploded onto the scene, bringing with it new applications for cassettes.
As had happened with audiotape, video camera-recorder units, called “camcorders,” were miniaturized to the point where 8-millimeter videocassettes capable of recording up to 90 minutes of live action and sound were widely available. These cassettes closely resembled the audiocassette first introduced in 1963.

See also Compact disc; Dolby noise reduction; Electronic synthesizer; FM radio; Transistor radio; Walkman cassette player.

Further Reading
Miller, Christopher. “The One Hundred Greatest Inventions: Audio and Video.” Popular Science 254, no. 4 (April, 1999).
Praag, Phil van. Evolution of the Audio Recorder. Waukesha, Wis.: EC Designs, 1997.
Stark, Craig. “Thirty Five Years of Tape Recording.” Stereo Review 58 (September, 1993).


CAT scanner

The invention: A technique that collects X-ray data from solid, opaque masses such as human bodies and uses a computer to construct a three-dimensional image.

The people behind the invention:
Godfrey Newbold Hounsfield (1919-　　), an English electronics engineer who shared the 1979 Nobel Prize in Physiology or Medicine
Allan M. Cormack (1924-1998), a South African-born American physicist who shared the 1979 Nobel Prize in Physiology or Medicine
James Ambrose, an English radiologist

A Significant Merger

Computerized axial tomography (CAT) is a technique that collects X-ray data from an opaque, solid mass such as a human body and uses a sophisticated computer to assemble those data into a three-dimensional image. This sophisticated merger of separate technologies led to another name for CAT, computer-assisted tomography (it came to be called computed tomography, or CT).

CAT is a technique of medical radiology, an area of medicine that began after the German physicist Wilhelm Conrad Röntgen’s 1895 discovery of the high-energy electromagnetic radiations he named “X rays.” Röntgen and others soon produced X-ray images of parts of the human body, and physicians were quick to learn that these images were valuable diagnostic aids.

In the late 1950’s and early 1960’s, Allan M. Cormack, a physicist at Tufts University in Massachusetts, pioneered a mathematical method for obtaining detailed X-ray absorption patterns in opaque samples meant to model biological samples. His studies used narrow X-ray beams that were passed through samples at many different angles. Because the technique probed test samples from many different points of reference, it became possible—by using the proper mathematics—to reconstruct the interior structure of a thin slice of the object being studied.


Cormack published his data but received almost no recognition because computers that could analyze the data in an effective fashion had not yet been developed. Nevertheless, X-ray tomography—the process of using X rays to produce detailed images of thin sections of solid objects—had been born. It remained for Godfrey Newbold Hounsfield of England’s Electrical and Musical Instruments (EMI) Limited (independently, and reportedly with no knowledge of Cormack’s work) to design the first practical CAT scanner.

Medical technicians studying CAT scan results. (PhotoDisc)

Godfrey Newbold Hounsfield

On his family farm outside Newark, Nottinghamshire, England, Godfrey Newbold Hounsfield (born 1919), the youngest of five children, was usually left to his own devices. The farm, he later wrote, offered an infinite variety of diversions, and his favorites were the many mechanical and electrical gadgets. By his teen years, he was making his own gadgets, such as an electrical recording machine, and experimenting with homemade gliders and water-propelled rockets. All these childhood projects taught him the fundamentals of practical reasoning.

During World War II he joined the Royal Air Force, where his talent with gadgets got him a position as an instructor at the school for radio mechanics. There, on his own, he built an oscilloscope and demonstration equipment. This initiative caught the eye of a high-ranking officer, who after the war arranged a scholarship so that Hounsfield could attend the Faraday Electrical Engineering College in London. Upon graduating in 1951, he took a research position with Electrical and Musical Instruments, Limited (EMI). His first assignments involved radar and guided weapons, but he also developed an interest in computers, and in 1958 he led the design team that put together England’s first all-transistor computer, the EMIDEC 1100. This experience, in turn, prepared him to follow through on his idea for computed tomography, which came to him in 1967.

EMI released its first CT scanner in 1971, and it so impressed the medical world that in 1979 Hounsfield and Allan M. Cormack shared the Nobel Prize in Physiology or Medicine for the invention. Hounsfield, who continued to work on improved computed tomography and other diagnostic imaging techniques, was knighted in 1981.

A Series of Thin Slices

Hounsfield, like Cormack, realized that X-ray tomography was the most practical approach to developing a medical body imager. It could be used to divide any three-dimensional object into a series of thin slices that could be reconstructed into images by using appropriate computers. Hounsfield developed another mathematical approach to the method. He estimated that the technique would make possible the very accurate reconstruction of images of thin body sections with a sensitivity well above that of the X-ray methodology then in use. Moreover, he proposed that his method would enable
researchers and physicians to distinguish between normal and diseased tissue. Hounsfield was correct about that.

The prototype instrument that Hounsfield developed was quite slow, requiring nine days to scan an object. Soon, he modified the scanner so that its use took only nine hours, and he obtained successful tomograms of preserved human brains and the fresh brains of cattle. The further development of the CAT scanner then proceeded quickly, yielding an instrument that required four and one-half minutes to gather tomographic data and twenty minutes to produce the tomographic image.

In late 1971, the first clinical CAT scanner was installed at Atkinson Morley’s Hospital in Wimbledon, England. By early 1972, the first patient, a woman with a suspected brain tumor, had been examined, and the resultant tomogram identified a dark, circular cyst in her brain. Additional data collection from other patients soon validated the technique. Hounsfield and EMI patented the CAT scanner in 1972, and the findings were reported at that year’s annual meeting of the British Institute of Radiology. Hounsfield published a detailed description of the instrument in 1973. Hounsfield’s clinical collaborator, James Ambrose, published on the clinical aspects of the technique. Neurologists all around the world were ecstatic about the new tool that allowed them to locate tissue abnormalities with great precision.

The CAT scanner consisted of an X-ray generator; a scanner unit composed of an X-ray tube and a detector in a circular chamber about which they could be rotated; a computer that could process all the data obtained; and a cathode-ray tube on which tomograms were viewed. To produce tomograms, the patient was placed on a couch, head inside the scanner chamber, and the emitter-detector was rotated 1 degree at a time. At each position, 160 readings were taken, converted to electrical signals, and fed into the computer. In the 180 degrees traversed, 28,800 readings were taken and processed. The computer then converted the data into a tomogram (a cross-sectional representation of the brain that shows the differences in tissue density). A Polaroid picture of the tomogram was then taken and interpreted by the physician in charge.

Consequences

Many neurologists agree that CAT is the most important method developed in the twentieth century to facilitate diagnosis of disorders of the brain.
Even the first scanners could distinguish between brain tumors and blood clots and help physicians to diagnose a variety of brain-related birth defects. In addition, the scanners are believed to have saved many lives by allowing physicians to avoid
the dangerous exploratory brain surgery once required in many cases and by replacing more dangerous techniques, such as pneumoencephalography, which required a physician to puncture the head for diagnostic purposes.

By 1975, improvements, including quicker reaction time and more complex emitter-detector systems, made it possible for EMI to introduce full-body CAT scanners to the world market. Then it became possible to examine other parts of the body—including the lungs, the heart, and the abdominal organs—for cardiovascular problems, tumors, and other structural health disorders. The technique became so ubiquitous that many departments of radiology changed their names to departments of medical imaging.

The use of CAT scanners has not been problem-free. Part of the reason for this is the high cost of the devices—ranging from about $300,000 for early models to $1 million for modern instruments—and resultant claims by consumer advocacy groups that the scanners are unnecessarily expensive toys for physicians. Still, CAT scanners have become important everyday diagnostic tools in many areas of medicine. Furthermore, continuation of the efforts of Hounsfield and others has led to more improvements of CAT scanners and to the use of nonradiologic nuclear magnetic resonance imaging in such diagnoses.

See also Amniocentesis; Electrocardiogram; Electroencephalogram; Mammography; Nuclear magnetic resonance; Pap test; Ultrasound; X-ray image intensifier.

Further Reading
Gambarelli, J. Computerized Axial Tomography: An Anatomic Atlas of Serial Sections of the Human Body: Anatomy—Radiology—Scanner. New York: Springer Verlag, 1977.
Raju, Tonse N. K. “The Nobel Chronicles.” Lancet 354, no. 9190 (November 6, 1999).
Thomas, Robert McG., Jr. “Allan Cormack, Seventy-Four, Nobelist Who Helped Invent CAT Scan.” New York Times (May 9, 1998).
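The reconstruction idea behind tomography—smearing each absorption reading back along its beam path and summing the views—can be illustrated with a deliberately tiny sketch. This is unfiltered back-projection, a simplification of the mathematics Hounsfield and Cormack actually used; the two viewing angles and the 8-by-8 grid standing in for a brain section are illustrative choices, not the scanner’s real 160 readings at each of 180 one-degree steps.

```python
import numpy as np

# A tiny "body section": zeros (uniform tissue) with one dense region, such
# as the cyst found in the first clinical patient.
phantom = np.zeros((8, 8))
phantom[2, 5] = 1.0  # dense spot at row 2, column 5

# Each X-ray reading is the total absorption along one beam path.
# Here we take just two projection angles: beams along rows and along columns.
row_sums = phantom.sum(axis=1)   # 0-degree view
col_sums = phantom.sum(axis=0)   # 90-degree view

# Back-projection: smear each reading back across its beam path and add the
# views together. Dense regions accumulate the most signal.
reconstruction = row_sums[:, None] + col_sums[None, :]

# The brightest pixel of the reconstruction locates the dense spot.
peak = tuple(int(i) for i in
             np.unravel_index(np.argmax(reconstruction), reconstruction.shape))
print(peak)  # (2, 5)
```

With only two angles the reconstruction is badly blurred along the beam directions; adding views from many angles, as the real scanner did, is what sharpens the cross-sectional image.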


Cell phone

The invention: Mobile telephone system controlled by computers to use a region’s radio frequencies, or channels, repeatedly, thereby accommodating large numbers of users.

The people behind the invention:
William Oliver Baker (1915-　　), the president of Bell Laboratories
Richard H. Frenkiel, the head of the mobile systems engineering department at Bell

The First Radio Telephones

The first recorded attempt to use radio technology to provide direct access to a telephone system took place in 1920. It was not until 1946, however, that Bell Telephone established the first such commercial system in St. Louis. The system had a number of disadvantages; users had to contact an operator who did the dialing and the connecting, and the use of a single radio frequency prevented simultaneous talking and listening. In 1949, a system was developed that used two radio frequencies (a “duplex pair”), permitting both the mobile unit and the base station to transmit and receive simultaneously and making a more normal sort of telephone conversation possible. This type of service, known as Mobile Telephone Service (MTS), was the norm in the field for many years.

The history of MTS is one of continuously increasing business usage. The development of the transistor made possible the design and manufacture of reasonably light, compact, and reliable equipment, but the expansion of MTS was slowed by the limited number of radio frequencies; there is nowhere near enough space on the radio spectrum for each user to have a separate frequency. In New York City, for example, New York Telephone Company was limited to just twelve channels for its more than seven hundred mobile subscribers, meaning that only twelve conversations could be carried on at once. In addition, because of possible interference, none of those channels could be reused in nearby cities; only fifty-four channels were available nationwide. By the late 1970’s, most of the systems in major cities were considered full, and new subscribers were placed on a waiting list; some people had been waiting for as long as ten years to become subscribers. Mobile phone users commonly experienced long delays in getting poor-quality channels.

The Cellular Breakthrough

In 1968, the Federal Communications Commission (FCC) requested proposals for the creation of high-capacity, spectrum-efficient mobile systems. Bell Telephone had already been lobbying for the creation of such a system for some years. In the early 1970’s, both Motorola and Bell Telephone proposed the use of cellular technology to solve the problems posed by mobile telephone service. Cellular systems involve the use of a computer to make it possible to use an area’s frequencies, or channels, repeatedly, allowing such systems to accommodate many more users.

A dominant trend in cell phone design is smaller and lighter units. (PhotoDisc)

A two-thousand-customer, 2100-square-mile cellular telephone system called the Advanced Mobile Phone Service, built by the AMPS Corporation, an AT&T subsidiary, became operational in Chicago in 1978. The Illinois Bell Telephone Company was allowed to make a limited commercial offering and obtained about fourteen hundred subscribers. American Radio Telephone Service was allowed to conduct a similar test in the Baltimore/Washington area. These first systems showed the technological feasibility and affordability of cellular service.

In 1979, Bell Labs of Murray Hill, New Jersey, received a patent

William Oliver Baker

For great discoveries and inventions to be possible in the world of high technology, inventors need great facilities—laboratories and workshops—with brilliant colleagues. These must be managed by imaginative administrators. One of the best was William Oliver Baker (b. 1915), who rose to become president of the legendary Bell Labs.

Baker started out as one of the most promising scientists of his generation. After earning a Ph.D. in chemistry at Princeton University, he joined the research section at Bell Telephone Laboratories in 1939. He studied the physics and chemistry of polymers, especially for use in electronics and telecommunications. During his research career he helped develop synthetic rubber and radar, found uses for polymers in communications and power cables, and participated in the discovery of microgels. In 1954 he ranked among the top-ten scientists in American industry and was asked to chair a National Research Council committee studying heat shields for missiles and satellites.

Administration suited him. The following year he took over as leader of research at Bell Labs and served as president from 1973 until 1979. Under his direction, basic discoveries and inventions poured out of the lab that later transformed the way people live and work: satellite communications, principles for programming high-speed computers, the technology for modern electronic communications, the superconducting solenoid, the maser, and the laser. His scientists won Nobel Prizes and legions of other honors, as did Baker himself, who received dozens of medals, awards, and honorary degrees. Moreover, he was an original member of the President’s Science Advisory Board, became the first chair of the National Science Information Council, and served on the National Science Board. His influence on American science and technology was deep and lasting.

for such a system. The inventor was Richard H. Frenkiel, head of the mobile systems engineering department under the leadership of Labs president William Baker. The patented method divides a city into small coverage areas called “cells,” each served by low-power transmitter-receivers. When a vehicle leaves the coverage
of one cell, calls are switched to the antenna and channels of an adjacent cell; a conversation underway is automatically transferred and continues without interruption. A channel used in one cell can be reused a few cells away for a different conversation. In this way, a few hundred channels can serve hundreds of thousands of users. Computers control the call-transfer process, effectively reducing the amount of radio spectrum required. Cellular systems thus actually use radio frequencies to transmit conversations, but because the equipment is so telephone-like, “cellular telephone” (or “cell phone”) became the accepted term for the new technology.

Each AMPS cell station is connected by wire to a central switching office, which determines when a mobile phone should be transferred to another cell as the transmitter moves out of range during a conversation. It does this by monitoring the strength of signals received from the mobile unit by adjacent cells, “handing off” the call when a new cell receives a stronger signal; this change is imperceptible to the user.

Impact

In 1982, the FCC began accepting applications for cellular system licenses in the thirty largest U.S. cities. By the end of 1984, there were about forty thousand cellular customers in nearly two dozen cities. Cellular telephone ownership boomed to 9 million by 1992. As cellular telephones became more common, they also became cheaper and more convenient to buy and to use. New systems developed in the 1990’s continued to make smaller, lighter, and cheaper cellular phones even more accessible.

Since the cellular telephone was made possible by the marriage of communications and computers, advances in both these fields have continued to change the industry at a rapid rate. Cellular phones have proven ideal for many people who need or want to keep in touch with others at all times. They also provide convenient emergency communication devices for travelers and field-workers. On the other hand, ownership of a cellular phone can also have its drawbacks; many users have found that they can never be out of touch—even when they would rather be.
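The handoff decision at the heart of the cellular design can be sketched as a simple comparison of signal strengths reported by neighboring cells. The hysteresis margin in this sketch is an illustrative safeguard against rapid back-and-forth handoffs between cells with similar signals, not a figure from the AMPS system itself.

```python
def choose_cell(current_cell, signal_strengths, hysteresis_db=3.0):
    """Pick the cell that should carry a call, given received signal
    strengths (in dBm) reported for the current and adjacent cells.

    The hysteresis margin keeps a call in place unless an adjacent cell
    is clearly stronger, avoiding "ping-pong" handoffs at cell borders.
    """
    best_cell = max(signal_strengths, key=signal_strengths.get)
    if best_cell != current_cell and (
        signal_strengths[best_cell]
        >= signal_strengths[current_cell] + hysteresis_db
    ):
        return best_cell  # hand the call off to the stronger cell
    return current_cell   # keep the call where it is

# A vehicle drives away from cell "A" toward cell "B":
readings = {"A": -95.0, "B": -88.0, "C": -101.0}
print(choose_cell("A", readings))  # B — clearly stronger, so hand off
```

In the real system, the central switching office runs this kind of comparison continuously for every call in progress, which is why the transfer is imperceptible to the user.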


See also Internet; Long-distance telephone; Rotary dial telephone; Telephone switching; Touch-tone telephone.

Further Reading
Carlo, George Louis, and Martin Schram. Cell Phones: Invisible Hazards in the Wireless Age. New York: Carroll and Graf, 2001.
“The Cellular Phone.” Newsweek 130, no. 24A (Winter 1997/1998).
Oliphant, Malcolm W. “How Mobile Telephony Got Going.” IEEE Spectrum 36, no. 8 (August, 1999).
Young, Peter. Person to Person: The International Impact of the Telephone. Cambridge: Granta Editions, 1991.


Cloning

The invention: Experimental technique for creating exact duplicates of living organisms by recreating their DNA.

The people behind the invention:
Ian Wilmut, an embryologist with the Roslin Institute
Keith H. S. Campbell, an experiment supervisor with the Roslin Institute
J. McWhir, a researcher with the Roslin Institute
W. A. Ritchie, a researcher with the Roslin Institute

Making Copies

On February 22, 1997, officials of the Roslin Institute, a biological research institution near Edinburgh, Scotland, held a press conference to announce startling news: They had succeeded in creating a clone—a biologically identical copy—from cells taken from an adult sheep. Although cloning had been performed previously with simpler organisms, the Roslin Institute experiment marked the first time that a large, complex mammal had been successfully cloned.

Cloning, or the production of genetically identical individuals, has long been a staple of science fiction and other popular literature. Clones do exist naturally, as in the example of identical twins. Scientists have long understood the process by which identical twins are created, and agricultural researchers have often dreamed of a method by which cheap identical copies of superior livestock could be created.

The discovery of the double helix structure of deoxyribonucleic acid (DNA), or the genetic code, by James Watson and Francis Crick in the 1950’s led to extensive research into cloning and genetic engineering. Using the discoveries of Watson and Crick, scientists were soon able to develop techniques to clone laboratory mice; however, the cloning of complex, valuable animals such as livestock proved to be hard going. Early versions of livestock cloning were technical attempts at duplicating the natural process of fertilized egg splitting that leads to the birth of identical twins. Artificially inseminated eggs were removed, split, and then reinserted into surrogate mothers. This method proved to be overly costly for commercial purposes, a situation aggravated by a low success rate.

Ian Wilmut

Ian Wilmut was born in Hampton Lucy, not far from Warwick in central England, in 1944. He found his life’s calling in embryology—and especially animal genetic engineering—while he was studying at the University of Nottingham, where his mentor was G. Eric Lamming, a leading expert on reproduction. After receiving his undergraduate degree, he attended Darwin College, Cambridge University. He completed his doctorate in 1973 upon submitting a thesis about freezing boar sperm. This came after he produced a viable calf, named Frosty, from a frozen embryo, the first time anyone had done so. Soon afterward he joined the Animal Breeding Research Station, which later became the Roslin Institute in Roslin, Scotland. He immersed himself in research, seldom working fewer than nine hours a day.

During the 1980’s he experimented with the insertion of genes into sheep embryos but concluded that cloning would be less time-consuming and less prone to failure. Joined by Keith Campbell in 1990, he cloned two Welsh mountain sheep from differentiated embryo cells, a feat similar to those of other reproductive experimenters. However, Dolly, who was cloned from adult cells, shook the world when her birth was announced in 1997. That same year Wilmut and Campbell produced another cloned sheep, Polly. Cloned from fetal skin cells, she was genetically altered to carry a human gene.

Wilmut’s technique for cloning from adult cells, which the laboratory patented, was a fundamentally new method of reproduction, but he had a loftier purpose in mind than simply establishing a first. He believed that animals genetically engineered to include human genes can produce proteins needed by people who, because of genetic diseases, cannot make the proteins themselves. The production of new treatments for old diseases, he told an astonished public after the revelation of Dolly, was his goal.


Model of a double helix. (PhotoDisc)

Nuclear Transfer

Researchers at the Roslin Institute found these earlier attempts to be fundamentally flawed. Even if the success rate could be improved, the number of clones created (of sheep, in this case) would still be limited. The Scots, led by embryologist Ian Wilmut and experiment supervisor Keith Campbell, decided to take an entirely different approach. The result was the first live birth of a mammal produced through a process known as “nuclear transfer.”

Nuclear transfer involves the replacement of the nucleus of an immature egg with a nucleus taken from another cell. Previous attempts at nuclear transfer had used cells from a single embryo, divided up and implanted into eggs. Because a sheep embryo has only about forty usable cells, this method also proved limiting.

The Roslin team therefore decided to grow their own cells in a laboratory culture. They took more mature embryonic cells than those previously used, and they experimented with the use of a nutrient mixture. One of their breakthroughs occurred when they discovered that these “cell lines” grew much more quickly when certain nutrients were absent.


Using this technique, the Scots were able to produce a theoretically unlimited number of genetically identical cell lines. The next step was to transfer the cell lines of the sheep into the nuclei of unfertilized sheep eggs. First, 277 nuclei with a full set of chromosomes were transferred to the unfertilized eggs. An electric shock was then used to cause the eggs to begin development, the shock performing the duty of fertilization. Of these eggs, twenty-nine developed enough to be inserted into surrogate mothers. All the embryos died before birth except one: a ewe the scientists named “Dolly.” Her birth on July 5, 1996, was witnessed by only a veterinarian and a few researchers. Not until the clone had survived the critical earliest stages of life was the success of the experiment disclosed; Dolly was more than seven months old by the time her birth was announced to a startled world.

Impact

The news that the cloning of sophisticated organisms had left the realm of science fiction and become a matter of accomplished scientific fact set off an immediate uproar. Ethicists and media commentators quickly began to debate the moral consequences of the use—and potential misuse—of the technology. Politicians in numerous countries responded to the news by calling for legal restrictions on cloning research. Scientists, meanwhile, speculated about the possible benefits and practical limitations of the process.

The issue that stirred the imagination of the broader public and sparked the most spirited debate was the possibility that similar experiments might soon be performed using human embryos. Although most commentators seemed to agree that such efforts would be profoundly immoral, many experts observed that they would be virtually impossible to prevent. “Could someone do this tomorrow morning on a human embryo?” Arthur L. Caplan, the director of the University of Pennsylvania’s bioethics center, asked reporters. “Yes. It would not even take too much science. The embryos are out there.”

Such observations conjured visions of a future that seemed marvelous to some, nightmarish to others. Optimists suggested that the
best and brightest of humanity could be forever perpetuated, creating an endless supply of Albert Einsteins and Wolfgang Amadeus Mozarts. Pessimists warned of a world overrun by clones of selfserving narcissists and petty despots, or of the creation of a secondary class of humans to serve as organ donors for their progenitors. The Roslin Institute’s researchers steadfastly proclaimed their own opposition to human experimentation. Moreover, most scientists were quick to point out that such scenarios were far from realization, noting the extremely high failure rate involved in the creation of even a single sheep. In addition, most experts emphasized more practical possible uses of the technology: improving agricultural stock by cloning productive and disease-resistant animals, for example, or regenerating endangered or even extinct species. Even such apparently benign schemes had their detractors, however, as other observers remarked on the potential dangers of thus narrowing a species’ genetic pool. Even prior to the Roslin Institute’s announcement, most European nations had adopted a bioethics code that flatly prohibited genetic experiments on human subjects. Ten days after the announcement, U.S. president Bill Clinton issued an executive order that banned the use of federal money for human cloning research, and he called on researchers in the private sector to refrain from such experiments voluntarily. Nevertheless, few observers doubted that Dolly’s birth marked only the beginning of an intriguing—and possibly frightening—new chapter in the history of science. See also Amniocentesis; Artificial chromosome; Artificial insemination; Genetic “fingerprinting”; In vitro plant culture; Rice and wheat strains. Further Reading Facklam, Margery, Howard Facklam, and Paul Facklam. From Cell to Clone: The Story of Genetic Engineering. New York: Harcourt Brace Jovanovich, 1979. Gillis, Justin. 
“Cloned Cows Are Fetching Big Bucks: Dozens of Genetic Duplicates Ready to Take Up Residence on U.S. Farms.” Washington Post (March 25, 2001).


Kolata, Gina Bari. Clone: The Road to Dolly, and the Path Ahead. New York: William Morrow, 1998.
Regalado, Antonio. "Clues Are Sought for Cloning's Fail Rate: Researchers Want to Know Exactly How an Egg Reprograms Adult DNA." Wall Street Journal (November 24, 2000).
Winslow, Ron. "Scientists Clone Pigs, Lifting Prospects of Replacement Organs for Humans." Wall Street Journal (August 17, 2000).


Cloud seeding

The invention: A technique for inducing rainfall by distributing dry ice or silver iodide into reluctant rainclouds.

The people behind the invention:
Vincent Joseph Schaefer (1906-1993), an American chemist and meteorologist
Irving Langmuir (1881-1957), an American physicist and chemist who won the 1932 Nobel Prize in Chemistry
Bernard Vonnegut (1914-1997), an American physical chemist and meteorologist

Praying for Rain

Beginning in 1943, an intense interest in the study of clouds developed into the practice of weather "modification." Working for the General Electric Research Laboratory, Nobel laureate Irving Langmuir and his assistant researcher and technician, Vincent Joseph Schaefer, began an intensive study of precipitation and its causes.

Past research and study had indicated two possible ways that clouds produce rain. The first possibility is "coalescing," a process by which tiny droplets of water vapor in a cloud merge after bumping into one another and become heavier and fatter until they drop to earth. The second possibility is the "Bergeron process" of droplet growth, named after the Swedish meteorologist Tor Bergeron. Bergeron's process relates to supercooled clouds—clouds that are at or below freezing temperatures and yet still contain both ice crystals and liquid water droplets. The size of a water droplet affects how long it can remain liquid despite freezing temperatures: larger droplets may remain liquid only down to about −4 degrees Celsius, while smaller droplets may not freeze until reaching −15 degrees Celsius or below. Precipitation occurs when the ice crystals become heavy enough to fall. If the temperature at some point below the cloud is warm enough, it will melt the ice crystals before they reach the earth, producing rain. If the temperature remains at the freezing point, the ice crystals retain their form and fall as snow.

Schaefer used a deep-freezing unit in order to observe water droplets in pure cloud form. To observe the droplets better, Schaefer lined the chest with black velvet and concentrated a beam of light inside. The first agent he introduced inside the supercooled freezer was his own breath. When that failed to form the desired ice crystals, he proceeded to try other agents. His hope was to form ice crystals that would then cause the moisture in the surrounding air to condense into more ice crystals, which would produce a miniature snowfall. He eventually achieved success when he tossed a handful of dry ice inside and was rewarded with the long-awaited snow. The freezer was held at the freezing point of water, 0 degrees Celsius, and not all the particles inside were ice crystals; when the dry ice was introduced, the stray supercooled water droplets froze instantly, producing ice crystals, or snowflakes.

Planting the First Seeds

On November 13, 1946, Schaefer took to the air over Mount Greylock with several pounds of dry ice in order to repeat the experiment in nature. After he had finished sprinkling, or seeding, a supercooled cloud, he instructed the pilot to fly underneath the cloud he had just seeded. Schaefer was greeted by the sight of snow. By the time it reached the ground, it had melted into the first-ever human-made rainfall.

Independently of Schaefer and Langmuir, another General Electric scientist, Bernard Vonnegut, was also seeking a way to cause rain. He found that silver iodide crystals, which have the same size and shape as ice crystals, could "fool" water droplets into condensing on them. When a certain chemical mixture containing silver iodide is heated on a special burner called a "generator," silver iodide crystals appear in the smoke of the mixture. Vonnegut's discovery allowed seeding to occur in a way very different from seeding with dry ice, but with the same result.
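The rain-or-snow logic of the Bergeron process described above can be sketched as a toy model. This is an illustrative simplification, not a meteorological calculation; the function name and the simple threshold test are assumptions made for the example:

```python
def precipitation_type(cloud_temp_c: float, below_cloud_temp_c: float) -> str:
    """Toy model of the Bergeron process: ice crystals form in a
    supercooled cloud, grow heavy enough to fall, and then either
    melt (rain) or keep their form (snow) depending on the
    temperature of the air below the cloud."""
    if cloud_temp_c > 0:
        return "no ice crystals (cloud not supercooled)"
    if below_cloud_temp_c > 0:
        return "rain"   # crystals melt on the way down
    return "snow"       # crystals retain their form

print(precipitation_type(-10.0, 5.0))   # rain
print(precipitation_type(-10.0, -2.0))  # snow
```

The model captures only the final step of the process the text describes: what reaches the ground depends on the air the falling crystals pass through.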
Using Vonnegut’s process, the seeding is done from the ground. The generators are placed outside and the chemicals are mixed. As the smoke wafts upward, it carries the newly formed silver iodide crystals with it into the clouds.


The results of the scientific experiments by Langmuir, Vonnegut, and Schaefer were alternately hailed and rejected as legitimate. Critics argued that the process of seeding is too complex and requires more than the simple addition of dry ice or silver iodide to produce rain. One of the major problems surrounding the question of weather modification by cloud seeding is the scarcity of knowledge about the earth's atmosphere. A journey begun more than fifty years ago is still a long way from being completed.

Impact

Although the statistical and other proofs needed to support cloud seeding are lacking, the 1946 discovery by the General Electric employees set off a wave of interest and demand for information that far surpassed the interest generated by the discovery of nuclear fission shortly before. The possibility of ending drought—and, in the process, hunger—excited many people. The discovery also prompted both legitimate and fraudulent "rainmakers" who used the information gathered by Schaefer, Langmuir, and Vonnegut to set up cloud-seeding businesses.

Weather modification, in its current stage of development, cannot be used to end worldwide drought. It does, however, produce beneficial results in some cases for the crops of smaller farms affected by drought. To understand the advances made in weather modification, new instruments are needed to record accurately the results of further experimentation. The storm of interest—both favorable and unfavorable—generated by the discoveries of Schaefer, Langmuir, and Vonnegut has had, and will continue to have, far-reaching effects on many aspects of society.

See also: Airplane; Artificial insemination; In vitro plant culture; Weather satellite.

Further Reading

Cole, Stephen. "Mexico Results Spur New Looking at Rainmaking." Washington Post (January 22, 2001).


Havens, Barrington S., James E. Jiusto, and Bernard Vonnegut. Early History of Cloud Seeding. Socorro, N.Mex.: Langmuir Laboratory, New Mexico Institute of Mining and Technology, 1978.
"Science and Technology: Cloudbusting." The Economist (August 21, 1999).
Villiers, Marq de. Water: The Fate of Our Most Precious Resource. Boston: Houghton Mifflin, 2000.


COBOL computer language

The invention: COBOL (Common Business Oriented Language), the first user-friendly, English-like computer programming language, developed out of work that began with wartime ballistics problems.

The people behind the invention:
Grace Murray Hopper (1906-1992), an American mathematician
Howard Hathaway Aiken (1900-1973), an American mathematician

Plain Speaking

Grace Murray Hopper, a mathematician, was a faculty member at Vassar College when World War II (1939-1945) began. She enlisted in the Navy and in 1943 was assigned to the Bureau of Ordnance Computation Project, where she worked on ballistics problems. In 1944, the Navy began using one of the first large-scale digital computers, the Automatic Sequence Controlled Calculator (ASCC), designed by an International Business Machines (IBM) Corporation team of engineers headed by Howard Hathaway Aiken, to solve ballistics problems. Hopper became the third programmer of the ASCC.

Hopper's interest in computer programming continued after the war ended. By the early 1950's, her work with programming languages had led to her development of FLOW-MATIC, the first English-language data processing compiler. Her work on FLOW-MATIC paved the way for her later work with COBOL.

Until Hopper developed FLOW-MATIC, digital computer programming was entirely machine-specific and written in machine code. A program designed for one computer could not be used on another. Every program was also problem-specific: the programmer would be told what problem the machine was going to be asked to solve and would then write a completely new program for that specific problem in that machine's code.


Grace Murray Hopper

Grace Brewster Murray was born in New York City in 1906. As a child she revered her great-grandfather, a U.S. Navy admiral, and her grandfather, an engineer. Her career melded their professions. She studied mathematics and physics at Vassar College, earning a bachelor's degree in 1928 and a master's degree in 1930, when she married Vincent Foster Hopper. She accepted a teaching post at Vassar but continued her studies, completing a doctorate at Yale University in 1934.

In 1943 she left academia for the Navy and was assigned to the Bureau of Ordnance Computation Project at Harvard University. She worked on the Mark I, one of the nation's first large-scale digital computers, and contributed to the development of major new models afterward, including the UNIVAC, built by the Eckert-Mauchly company that later became part of Sperry Corporation.

While still with the Navy project at Harvard, Hopper participated in a minor incident that forever marked computer slang. One day a moth became caught in a relay, causing the computer to malfunction. She and other technicians found it and ever after referred to correcting mechanical glitches as "debugging."

Hopper joined Sperry Corporation after the war and carried out her seminal work with the FLOW-MATIC and COBOL computer languages. Meanwhile, she retained her commission in the Naval Reserves, helping the service incorporate computers and COBOL into its armaments and administration systems. She retired from the Navy in 1966 and from Sperry in 1971, but the Navy soon called her out of retirement for temporary active duty to help with its computer systems. After her second retirement, the Navy, grateful for her tireless service, promoted her to rear admiral in 1985, making her one of its first women admirals. She was also awarded the Defense Distinguished Service Medal, the National Medal of Technology, and the Legion of Merit, and she was inducted into the Engineering and Science Hall of Fame in 1991. Hopper, nicknamed "Amazing Grace," died a year later.

Machine code was based on the programmer's knowledge of the physical characteristics of the computer as well as the requirements of the problem to be solved; that is, the programmer had to know what was happening within the machine as it worked through a series of calculations, which relays tripped when and in what order, and what mathematical operations were necessary to solve the problem. Programming was therefore a highly specialized skill requiring a unique combination of linguistic, reasoning, engineering, and mathematical abilities that not even all the mathematicians and electrical engineers who designed and built the early computers possessed.

While every computer still operates in response to the programming, or instructions, built into it, which are formatted in machine code, modern computers can accept programs written in nonmachine code—that is, in various automatic programming languages. They are able to accept such programs because specialized programs now exist to translate them into the appropriate machine code. These translating programs are known as "compilers," or "assemblers," and FLOW-MATIC was the first such program.

Hopper developed FLOW-MATIC after realizing that it would be necessary to eliminate unnecessary steps in programming to make computers more efficient. FLOW-MATIC was based, in part, on Hopper's recognition that certain elements, or commands, were common to many different programming applications. Hopper theorized that it would not be necessary to write a lengthy series of instructions in machine code to instruct a computer to begin a series of operations; instead, she believed that it would be possible to develop commands in an assembly language in such a way that a programmer could write one command, such as the word "add," that would translate into a sequence of several commands in machine code. Hopper's successful development of a compiler to translate programming languages into machine code thus meant that programming became faster and easier. From assembly languages such as FLOW-MATIC, it was a logical progression to the development of high-level computer languages, such as FORTRAN (Formula Translation) and COBOL.
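Hopper's insight, that a single English-like command such as "add" can stand for a fixed sequence of machine-level steps, can be illustrated with a toy "compiler." The command names and pseudo machine operations below are invented for illustration; they do not reproduce FLOW-MATIC or any real instruction set:

```python
# Hypothetical expansion table: each high-level command maps to a
# fixed sequence of pseudo machine-code operations.
EXPANSIONS = {
    "ADD": ["LOAD  A", "LOAD  B", "SUM", "STORE RESULT"],
    "MOVE": ["LOAD  SRC", "STORE DEST"],
}

def compile_program(source):
    """Translate a list of high-level commands into pseudo machine code."""
    machine_code = []
    for command in source:
        ops = EXPANSIONS.get(command.upper())
        if ops is None:
            raise ValueError(f"unknown command: {command}")
        machine_code.extend(ops)
    return machine_code

# One word of source expands into four machine operations.
print(compile_program(["ADD"]))
```

The point of the sketch is Hopper's one-to-many mapping: the programmer writes the short command, and the compiler emits the longer machine-level sequence.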
The Language of Business

Between 1955 (when FLOW-MATIC was introduced) and 1959, a number of attempts were made at developing a specifically business-oriented language. IBM and Remington Rand believed that the only way to market computers to the business community was through the development of a language that business people would be comfortable using. Remington Rand officials were especially committed to providing a language that resembled English. None of the attempts to develop a business-oriented language succeeded, however, and by 1959 Hopper and others working with the U.S. Department of Defense had persuaded representatives of various companies of the need to cooperate. On May 28 and 29, 1959, a conference sponsored by the Department of Defense was held at the Pentagon to discuss the problem of establishing a common language for the adaptation of electronic computers to data processing. As a result, the first distribution of COBOL took place on December 17, 1959.

Although many people were involved in the development of COBOL, Hopper played a particularly important role. She not only found solutions to technical problems but also succeeded in selling the concept of a common language from an administrative and managerial point of view. Hopper recognized that while the companies involved in the commercial development of computers were in competition with one another, the use of a common, business-oriented language would contribute to the growth of the computer industry as a whole, as well as simplify the training of computer programmers and operators.

Consequences

COBOL was the first common programming language developed for business data processing operations. Its development simplified the training required for computer users in business applications and demonstrated that computers could be practical tools in government and industry as well as in science. Prior to the development of COBOL, electronic computers had been characterized as expensive, oversized adding machines that were adequate for performing time-consuming mathematics but lacked the flexibility that business people required.
In addition, the development of COBOL freed programmers not only from the need to know machine code but also from the need to understand the physical functioning of the computers they were using. Programs could now be written that were both machine-independent and almost universally portable from one computer to another.


Finally, because Hopper and the other committee members worked under the auspices of the Department of Defense, the software was not copyrighted, and in a short period of time COBOL became widely available to anyone who wanted to use it. It diffused rapidly throughout the industry and contributed to the widespread adaptation of computers for use in countless settings.

See also: BASIC programming language; Colossus computer; ENIAC computer; FORTRAN programming language; SAINT.

Further Reading

Cohen, I. Bernard, Gregory W. Welch, and Robert V. D. Campbell. Makin' Numbers: Howard Aiken and the Computer. Cambridge, Mass.: MIT Press, 1999.
Cohen, I. Bernard. Howard Aiken: Portrait of a Computer Pioneer. Cambridge, Mass.: MIT Press, 1999.
Ferguson, David E. "The Roots of COBOL." Systems 3X World and As World 17, no. 7 (July, 1989).
Yount, Lisa. A to Z of Women in Science and Math. New York: Facts on File, 1999.


Color film

The invention: A photographic medium used to take full-color pictures.

The people behind the invention:
Rudolf Fischer (1881-1957), a German chemist
H. Siegrist (1885-1959), a German chemist and Fischer's collaborator
Benno Homolka (1877-1949), a German chemist

The Process Begins

Around the turn of the twentieth century, Arthur-Louis Ducos du Hauron, a French chemist and physicist, proposed a tripack (three-layer) process of film development in which three color negatives would be taken by means of superimposed films. This was a subtractive process. (In the "additive method" of making color pictures, the three colors are added in projection—that is, the colors are formed by the mixture of colored light of the three primary hues. In the "subtractive method," the colors are produced by the superposition of prints.) In Ducos du Hauron's process, the blue-light negative would be taken on the top film of the pack; a yellow filter below it would transmit the yellow light, which would reach a green-sensitive film and then fall upon the bottom of the pack, which would be sensitive to red light.

Tripacks of this type were unsatisfactory, however, because the light became diffused in passing through the emulsion layers, so the green and red negatives were not sharp. To obtain the real advantage of a tripack, the three layers must be coated one over the other so that the distance between the blue-sensitive and red-sensitive layers is a small fraction of a thousandth of an inch. Tripacks of this type were suggested by the early pioneers of color photography, who had the idea that the packs would be separated into three layers for development and printing. The manipulation of such systems proved to be very difficult in practice. It was also suggested, however, that it might be possible to develop such tripacks as a unit and then, by chemical treatment, convert the silver images into dye images.
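The difference between the additive and subtractive methods can be made concrete with simple arithmetic on RGB channel values. This is a deliberately simplified model, not photographic science: additive mixing is treated as summing light channels, while each subtractive dye or filter is modeled as multiplying the channels of the light passing through it:

```python
def additive_mix(*lights):
    """Additive method: colored lights combine; channels add (clipped at 255)."""
    return tuple(min(255, sum(light[i] for light in lights)) for i in range(3))

def subtractive_mix(white, *filters):
    """Subtractive method: each superimposed dye or filter passes only a
    fraction of each channel; channels multiply."""
    result = list(white)
    for f in filters:
        result = [round(r * fc / 255) for r, fc in zip(result, f)]
    return tuple(result)

RED, GREEN = (255, 0, 0), (0, 255, 0)
YELLOW, CYAN = (255, 255, 0), (0, 255, 255)

print(additive_mix(RED, GREEN))                         # (255, 255, 0): yellow light
print(subtractive_mix((255, 255, 255), YELLOW, CYAN))   # (0, 255, 0): green
```

Red and green lights add to yellow, while yellow and cyan dyes superimposed over white each subtract one primary, leaving green, which is the distinction the parenthetical above describes.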


Fischer's Theory

One of the earliest subtractive tripack methods that seemed to hold great promise was one suggested by Rudolf Fischer in 1912. He proposed a tripack that would be made by coating three emulsions on top of one another; the lowest one would be red-sensitive, the middle one green-sensitive, and the top one blue-sensitive. Chemical substances called "couplers," which would produce dyes in the development process, would be incorporated into the layers. In this method, the molecules of the developing agent, after becoming oxidized by developing the silver image, would react with the unoxidized form (the coupler) to produce the dye image.

The two types of developing agents described by Fischer are para-aminophenol and paraphenylenediamine (and their derivatives). The five types of dye that Fischer discovered are formed when silver images are developed by these two developing agents in the presence of suitable couplers. The five classes of dye he used (indophenols, indoanilines, indamines, indothiophenols, and azomethines) were already known when Fischer did his work, but it was he who discovered that the photographic latent image could be used to promote their formation from "coupler" and "developing agent." The indoaniline and azomethine types have been found to possess the necessary properties, but the other three suffer from serious defects. Because only p-phenylenediamine and its derivatives can be used to form the indoaniline and azomethine dyes, it has become the most widely used color developing agent.

Impact

In the early 1920's, Leopold Mannes and Leopold Godowsky made a great advance beyond the Fischer process. Working on a new process of color photography, they adopted coupler development, but instead of putting couplers into the emulsion as Fischer had, they introduced them during processing.
Finally, in 1935, the film was placed on the market under the name "Kodachrome," a name that had been used earlier for a two-color process. The first use of the new Kodachrome process, in 1935, was for 16-millimeter film. Color motion pictures could be made by the Kodachrome process as easily as black-and-white pictures, because the complex work involved (the color development of the film) was done under precise technical control. The definition (quality of the image) given by the process was soon sufficient to make it practical for 8-millimeter pictures, and in 1936, Kodachrome film was introduced in a 35-millimeter size for use in popular miniature cameras.

Soon thereafter, color processes were developed on a larger scale and new color materials were rapidly introduced. In 1940, the Kodak Research Laboratories worked out a modification of the Fischer process in which the couplers were put into the emulsion layers. These couplers are not dissolved in the gelatin layer itself, as the Fischer couplers are, but are carried in small particles of an oily material that dissolves the couplers, protects them from the gelatin, and protects the silver bromide from any interaction with the couplers. When development takes place, the oxidation product of the developing agent penetrates into the organic particles and reacts with the couplers so that the dyes are formed in small particles that are dispersed throughout the layers.

In one form of this material, Ektachrome (originally intended for use in aerial photography), the film is reversed to produce a color positive. It is first developed with a black-and-white developer, then reexposed and developed with a color developer that combines with the couplers in each layer to produce the appropriate dyes, all three of which are produced simultaneously in one development.

In summary, although Fischer did not succeed in putting his theory into practice, his work still forms the basis of most modern color photographic systems. Not only did he demonstrate the general principle of dye-coupling development, but the art is still mainly confined to one of the two types of developing agent, and two of the five types of dye, described by him.
See also: Autochrome plate; Brownie camera; Infrared photography; Instant photography.

Further Reading

Collins, Douglas. The Story of Kodak. New York: Harry N. Abrams, 1990.


Glendinning, Peter. Color Photography: History, Theory, and Darkroom Technique. Englewood Cliffs, N.J.: Prentice-Hall, 1985.
Wood, John. The Art of the Autochrome: The Birth of Color Photography. Iowa City: University of Iowa Press, 1993.


Color television

The invention: A system for broadcasting full-color images over the airwaves.

The people behind the invention:
Peter Carl Goldmark (1906-1977), the head of the CBS research and development laboratory
William S. Paley (1901-1990), the businessman who took over CBS
David Sarnoff (1891-1971), the longtime head of RCA

The Race for Standardization

Although color television had already been demonstrated by 1928 by the Scottish inventor John Logie Baird, two events mark 1940 as the beginning of color television. First, on February 12, 1940, the Radio Corporation of America (RCA) demonstrated its color television system privately to a group that included members of the Federal Communications Commission (FCC), the administrative body with the authority to set standards for an electronic color system. The demonstration did not go well; indeed, David Sarnoff, the head of RCA, canceled a planned public demonstration and returned his engineers to the Princeton, New Jersey, headquarters of RCA's laboratories.

Next, on September 1, 1940, the Columbia Broadcasting System (CBS) took the first step toward developing a color system that would become the standard for the United States. On that day, CBS demonstrated color television to the public, based on the research of an engineer, Peter Carl Goldmark. Goldmark placed a set of spinning filters in front of black-and-white television images, breaking them down into three primary colors and producing color television. The audience saw what was called "additive color."

Although Goldmark had been a researcher at CBS since January, 1936, he did not attempt to develop a color television system until March, 1940, after watching the Technicolor motion picture Gone with the Wind (1939). Inspired, Goldmark began to tinker in his tiny CBS laboratory in the headquarters building in New York City.

If a decision had been made in 1940, the CBS color standard would have been accepted as the national standard. The FCC was, at that time, more concerned with trying to establish a black-and-white standard for television; color television seemed decades away. In 1941, the FCC decided to adopt standards for black-and-white television only, leaving the issue of color unresolved—and the doors to the future of color broadcasting wide open.

Control of a potentially lucrative market, as well as personal rivalry, threw William S. Paley, the head of CBS, and Sarnoff into a race for the control of color television. Both companies would pay dearly in terms of money and time, but it would take until the 1960's before the United States became a nation of color television watchers.

RCA was at the time the acknowledged leader in the development of black-and-white television. CBS engineers soon discovered, however, that their company's color system would not work when combined with RCA black-and-white televisions. In other words, customers would need one set for black-and-white and one for color. Moreover, since the color system of CBS needed more broadcast frequency space than the black-and-white system in use, CBS was forced to ask the FCC to allocate new channel space in the then-unused ultrahigh frequency (UHF) band. In contrast, RCA scientists labored to make a compatible color system that required no additional frequency space.

No Time to Wait

Following the end of World War II in 1945, the suburbanites who populated new communities around America's cities wanted television sets right away; they did not want to wait for the government to decide on a color standard and then wait again while manufacturers redesigned assembly lines to make color sets. Rich with savings accumulated during the prosperity of the war years, Americans wanted to spend their money.
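Goldmark's field-sequential scheme, one monochrome field per primary color shown in rapid succession behind the spinning filter, can be sketched as a conceptual simulation. This illustrates only the principle of splitting and recombining the color channels, not the actual CBS signal format:

```python
# Each pixel of a color frame is split into three sequential
# monochrome fields, one per primary, as the filter wheel turns.
def split_into_fields(frame):
    """frame: list of (r, g, b) pixels -> three monochrome fields."""
    return (
        [p[0] for p in frame],  # field shown through the red filter
        [p[1] for p in frame],  # green field
        [p[2] for p in frame],  # blue field
    )

def recombine(red, green, blue):
    """The viewer's eye integrates the rapid sequence back into full color."""
    return list(zip(red, green, blue))

frame = [(200, 30, 60), (0, 255, 128)]
r, g, b = split_into_fields(frame)
assert recombine(r, g, b) == frame  # the round trip restores the color image
```

Because each field is an ordinary black-and-white image, the scheme let CBS build color on top of monochrome technology, which is also why it broke down when combined with receivers expecting a single continuous black-and-white signal.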
After the war, the FCC saw no reason to open up proceedings regarding color systems. Black-and-white was operational; customers were waiting in line for the new electronic marvel. To give its engineers time to create a compatible color system, RCA skillfully lobbied the members of the FCC to take no action.


There were other problems with the CBS mechanical color television. It was noisy and large, and its color balance was hard to maintain. CBS claimed that through further engineering work it would improve the actual sets. Yet RCA was able to convince other manufacturers to support it in preference to CBS, principally because of its proven manufacturing track record. In 1946, RCA demonstrated a new electronic color receiver with three picture tubes, one for each of the primary colors. Color reproduction was fairly true; although any movement on the screen caused color blurring, there was little flicker. It worked, however, and thus ended the invention phase of color television begun in 1940. The race for standardization would require seven more years of corporate struggle before the RCA system finally won adoption as the national standard in 1953.

Impact

Through the 1950's, black-and-white television remained the order of the day. Through the later years of the decade, only the National Broadcasting Company (NBC) television network was regularly airing programs in color. Full production and presentation of shows in color during prime time did not come until the mid-1960's; most industry observers date 1972 as the true arrival of color television. By 1972, color sets were found in more than half the homes in the United States. At that point, since color was so widespread, TV Guide stopped tagging color program listings with a special symbol and instead tagged only black-and-white shows, as it does to this day. Gradually, only cheap, portable sets were made for black-and-white viewing, while color sets came in all varieties, from tiny handheld pocket televisions to mammoth projection televisions.

See also: Autochrome plate; Community antenna television; Communications satellite; Fiber-optics; FM radio; Radio; Television; Transistor; Videocassette recorder.


Further Reading

Burns, R. W. Television: An International History of the Formative Years. London: Institution of Electrical Engineers in association with the Science Museum, 1998.
Fisher, David E., and Marshall Fisher. Tube: The Invention of Television. Washington, D.C.: Counterpoint, 1996.
Lewis, Tom. Empire of the Air: The Men Who Made Radio. New York: HarperPerennial, 1993.
Lyons, Eugene. David Sarnoff: A Biography. New York: Harper and Row, 1967.


Colossus computer

The invention: The first all-electronic calculating device, the Colossus computer was built to decipher German military codes during World War II.

The people behind the invention:
Thomas H. Flowers, an electronics expert
Max H. A. Newman (1897-1984), a mathematician
Alan Mathison Turing (1912-1954), a mathematician
C. E. Wynn-Williams, a member of the Telecommunications Research Establishment

An Undercover Operation

In 1939, during World War II (1939-1945), a team of scientists, mathematicians, and engineers met at Bletchley Park, outside London, to discuss the development of machines that would break the secret codes used in Nazi military communications. The Germans were using a machine called "Enigma" to communicate in code between headquarters and field units. Polish scientists, however, had been able to examine a German Enigma, and between 1928 and 1938 they were able to break its codes by using electromechanical codebreaking machines called "bombas." In 1938, the Germans made the Enigma more complicated, and the Poles were no longer able to break the codes. In 1939, the Polish machines and codebreaking knowledge passed to the British.

Alan Mathison Turing was one of the mathematicians gathered at Bletchley Park to work on codebreaking machines. Turing was one of the first people to conceive of the universality of digital computers. He first described the "Turing machine" in 1936 in an article published in the Proceedings of the London Mathematical Society. The Turing machine, a hypothetical device that can solve any problem that involves mathematical computation, is not restricted to only one task—hence its universality.

Turing suggested an improvement to the Bletchley codebreaking machine, the "Bombe," which had been modeled on the Polish bomba. This improvement increased the computing power of the machine. The new codebreaking machine replaced the tedious method of decoding by hand, which, in addition to being slow, was ineffective in dealing with complicated encryptions that were changed daily.

Building a Better Mousetrap

The Bombe was very useful. In 1942, when the Germans started using a more sophisticated cipher machine known as the "Fish," Max H. A. Newman, who was in charge of one subunit at Bletchley Park, believed that an automated device could be designed to break the codes produced by the Fish. Thomas H. Flowers, who was in charge of a switching group at the Post Office Research Station at Dollis Hill, had been approached in 1941 to build a special-purpose electromechanical device for Bletchley Park. The device was not useful, and Flowers was assigned to other problems. Flowers began to work closely with Turing, Newman, and C. E. Wynn-Williams of the Telecommunications Research Establishment (TRE) to develop a machine that could break the Fish codes. The Dollis Hill team worked on the tape driving and reading problems, and Wynn-Williams's team at TRE worked on electronic counters and the necessary circuitry.

Their efforts produced the "Heath Robinson," which could read two thousand characters per second. The Heath Robinson used vacuum tubes, an uncommon component in the early 1940's. The vacuum tubes performed more reliably and rapidly than the relays that had been used for counters. The Heath Robinson and its companion machines proved that high-speed electronic devices could successfully do cryptanalytic work (solve decoding problems). Entirely automatic in operation once started, the Heath Robinson was put together at Bletchley Park in the spring of 1943. It became obsolete for codebreaking shortly after it was put into use, so work began on a bigger, faster, and more powerful machine: the Colossus.
Flowers led the team that designed and built the Colossus in eleven months at Dollis Hill. The first Colossus (Mark I) was a bigger, faster version of the Heath Robinson and read about five thousand characters per second. Colossus had approximately fifteen hundred vacuum tubes, which was the largest number that had ever been used at that time. Although Turing and Wynn-Williams were not directly involved with the design of the Colossus, their previous work on the Heath Robinson was crucial to the project, since the first Colossus was based on the Heath Robinson.

Colossus became operational at Bletchley Park in December, 1943, and Flowers made arrangements for the manufacture of its components in case other machines were required. The request for additional machines came in March, 1944. The second Colossus, the Mark II, was extensively redesigned and was able to read twenty-five thousand characters per second because it was capable of performing parallel operations (carrying out several different operations at once, instead of one at a time); it also had a short-term memory. The Mark II went into operation on June 1, 1944. More machines were made, each with further modifications, until there were ten.

The Colossus machines were special-purpose, program-controlled electronic digital computers, the only known electronic programmable computers in existence in 1944. The use of electronics allowed for a tremendous increase in the internal speed of the machine.

Impact

The Colossus machines gave Britain the best codebreaking machines of World War II and provided information that was crucial for the Allied victory. The information decoded by Colossus, the actual messages, and their influence on military decisions would remain classified for decades after the war.

The later work of several of the people involved with the Bletchley Park projects was important in British computer development after the war. Newman’s and Turing’s postwar careers were closely tied to emerging computer advances.
Newman, who was interested in the impact of computers on mathematics, received a grant from the Royal Society in 1946 to establish a calculating machine laboratory at Manchester University. He was also involved with postwar computer growth in Britain. Several other members of the Bletchley Park team, including Turing, joined Newman at Manchester in 1948.

Before going to Manchester University, however, Turing joined Britain’s National Physical Laboratory (NPL). At NPL, Turing worked on an advanced computer known as the Pilot Automatic Computing Engine (Pilot ACE). While at NPL, Turing proposed the concept of a stored program, which was a controversial but extremely important idea in computing. A “stored” program is one that remains in residence inside the computer, making it possible for a particular program and data to be fed through an input device simultaneously. (The Heath Robinson and Colossus machines were limited by utilizing separate input tapes, one for the program and one for the data to be analyzed.) Turing was among the first to explain the stored-program concept in print. He was also among the first to imagine how subroutines could be included in a program. (A subroutine allows separate tasks within a large program to be done in distinct modules; in effect, it is a detour within a program. After the completion of the subroutine, the main program takes control again.)

See also Apple II computer; Differential analyzer; ENIAC computer; IBM Model 1401 computer; Personal computer; Supercomputer; UNIVAC computer.

Further Reading
Carter, Frank. Codebreaking with the Colossus Computer: Finding the K-Wheel Patterns—An Account of Some of the Techniques Used. Milton Keynes, England: Bletchley Park Trust, 1997.
Gray, Paul. “Computer Scientist: Alan Turing.” Time 153, no. 12 (March 29, 1999).
Hodges, Andrew. Alan Turing: The Enigma. New York: Walker, 2000.
Sale, Tony. The Colossus Computer, 1943-1996: And How It Helped to Break the German Lorenz Cipher in World War II. Cleobury Mortimer: M&M Baldwin, 1998.
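The subroutine idea described above can be sketched in modern terms. The following toy Python example is an editorial illustration, not period code, and every name in it is invented; it simply shows control detouring into a self-contained module and then returning to the main program:

```python
def checksum(codes):
    """Subroutine: a self-contained task the main program can call.
    Control transfers here, the task runs, and control returns."""
    return sum(codes) % 256

def main():
    message = [72, 69, 65, 84, 72]   # data prepared by the main program
    digest = checksum(message)       # the "detour" into the subroutine
    return f"checksum={digest}"      # the main program resumes afterward

result = main()
```

The point of the sketch is only the flow of control: the main routine pauses, the subroutine completes its distinct task, and execution picks up exactly where it left off.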


Communications satellite

The invention: Telstar 1, the world’s first commercial communications satellite, opened the age of live, worldwide television by connecting the United States and Europe.

The people behind the invention:
Arthur C. Clarke (1917-    ), a British science-fiction writer who in 1945 first proposed the idea of using satellites as communications relays
John R. Pierce (1910-    ), an American engineer who worked on the Echo and Telstar satellite communications projects

Science Fiction?

In 1945, Arthur C. Clarke suggested that a satellite orbiting high above the earth could relay television signals between different stations on the ground, making for a much wider range of transmission than that of the usual ground-based systems. Writing in the February, 1945, issue of Wireless World, Clarke said that satellites “could give television and microwave coverage to the entire planet.”

In 1956, John R. Pierce at the Bell Telephone Laboratories of the American Telephone & Telegraph Company (AT&T) began to urge the development of communications satellites. He saw these satellites as a replacement for the ocean-bottom cables then being used to carry transatlantic telephone calls. In 1950, about one-and-a-half million transatlantic calls were made, and that number was expected to grow to three million by 1960, straining the capacity of the existing cables; in 1970, twenty-one million calls were made. Communications satellites offered a good, cost-effective alternative to building more transatlantic telephone cables.

On January 19, 1961, the Federal Communications Commission (FCC) gave permission for AT&T to begin Project Telstar, the first commercial communications satellite bridging the Atlantic Ocean. AT&T reached an agreement with the National Aeronautics and Space Administration (NASA) in July, 1961, in which AT&T would pay $3 million for each Telstar launch. The Telstar project involved about four hundred scientists, engineers, and technicians at the Bell Telephone Laboratories, twenty more technical personnel at AT&T headquarters, and the efforts of more than eight hundred other companies that provided equipment or services.

Telstar 1 was shaped like a faceted sphere, was 88 centimeters in diameter, and weighed 80 kilograms. Most of its exterior surface (sixty of the seventy-four facets) was covered by 3,600 solar cells to convert sunlight into 15 watts of electricity to power the satellite. Each solar cell was covered with artificial sapphire to reduce the damage caused by radiation. The main instrument was a two-way radio able to handle six hundred telephone calls at a time or one television channel. The signal that the radio would send back to Earth was very weak—less than one-thirtieth the energy used by a household light bulb.

Large ground antennas were needed to receive Telstar’s faint signal. The main ground station was built by AT&T in Andover, Maine, on a hilltop informally called “Space Hill.” A horn-shaped antenna, weighing 380 tons, with a length of 54 meters and an open end with an area of 1,097 square meters, was mounted so that it could rotate to track Telstar across the sky. To protect it from wind and weather, the antenna was built inside an inflated dome, 64 meters in diameter and 49 meters tall. It was, at the time, the largest inflatable structure ever built. A second, smaller horn antenna in Holmdel, New Jersey, was also used.

International Cooperation

In February, 1961, the governments of the United States and England agreed to let the British Post Office and NASA work together to test experimental communications satellites. The British Post Office built a 26-meter-diameter steerable dish antenna of its own design at Goonhilly Downs, near Cornwall, England.
Under a similar agreement, the French National Center for Telecommunications Studies constructed a ground station, almost identical to the Andover station, at Pleumeur-Bodou, Brittany, France.

After testing, Telstar 1 was moved to Cape Canaveral, Florida, and attached to the Thor-Delta launch vehicle built by the Douglas Aircraft Company. The Thor-Delta was launched at 3:35 a.m. eastern standard time (EST) on July 10, 1962. Once in orbit, Telstar 1 took 157.8 minutes to circle the globe. The satellite came within range of the Andover station on its sixth orbit, and a television test pattern was transmitted to the satellite at 6:26 p.m. EST. At 6:30 p.m. EST, a tape-recorded black-and-white image of the American flag with the Andover station in the background, transmitted from Andover to Holmdel, opened the first television show ever broadcast by satellite. Live pictures of U.S. vice president Lyndon B. Johnson and other officials gathered at Carnegie Institution in Washington, D.C., followed on the AT&T program carried live on all three American networks.

Up to the moment of launch, it was uncertain if the French station would be completed in time to participate in the initial test. At 6:47 p.m. EST, however, Telstar’s signal was picked up by the station in Pleumeur-Bodou, and Johnson’s image became the first television transmission to cross the Atlantic. Pictures received at the French station were reported to be so clear that they looked like they had been sent from only forty kilometers away. Because of technical difficulties, the English station was unable to receive a clear signal.

The first formal exchange of programming between the United States and Europe occurred on July 23, 1962. This special eighteen-minute program, produced by the European Broadcasting Union, consisted of live scenes from major cities throughout Europe and was transmitted from Goonhilly Downs, where the technical difficulties had been corrected, to Andover via Telstar. On the previous orbit, a program entitled “America, July 23, 1962,” showing scenes from fifty television cameras around the United States, was beamed from Andover to Pleumeur-Bodou and seen by an estimated one hundred million viewers throughout Europe.
Consequences

Telstar 1 and the communications satellites that followed it revolutionized the television news and sports industries. Before, television networks had to ship film across the oceans, meaning delays of hours or days between the time an event occurred and the broadcast of pictures of that event on television on another continent. Now, news of major significance, as well as sporting events, can be viewed live around the world. The impact on international relations also was significant, with world opinion becoming able to influence the actions of governments and individuals, since those actions could be seen around the world as the events were still in progress.

More powerful launch vehicles allowed new satellites to be placed in geosynchronous orbits, circling the earth at a speed the same as the earth’s rotation rate. When viewed from the ground, these satellites appeared to remain stationary in the sky. This allowed continuous communications and greatly simplified the ground antenna system. By the late 1970’s, private individuals had built small antennas in their backyards to receive television signals directly from the satellites.

See also Artificial satellite; Cruise missile; Rocket; Weather satellite.

Further Reading
McAleer, Neil. Odyssey: The Authorised Biography of Arthur C. Clarke. London: Victor Gollancz, 1992.
Pierce, John Robinson. The Beginnings of Satellite Communications. San Francisco: San Francisco Press, 1968.
_____. Science, Art, and Communication. New York: C. N. Potter, 1968.
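The geosynchronous orbit mentioned above follows directly from Kepler’s third law. As an illustrative calculation (an editorial addition, not part of the original essay), setting the orbital period equal to one sidereal day gives the altitude at which a satellite appears to hang motionless:

```latex
T = 2\pi\sqrt{\frac{r^{3}}{GM}}
\quad\Longrightarrow\quad
r = \left(\frac{GM\,T^{2}}{4\pi^{2}}\right)^{1/3}
```

With $GM \approx 3.986 \times 10^{14}\ \mathrm{m^{3}/s^{2}}$ for the earth and $T \approx 86{,}164\ \mathrm{s}$ (one sidereal day), this gives $r \approx 42{,}164\ \mathrm{km}$ from the earth’s center, or roughly $35{,}800\ \mathrm{km}$ above the equator.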


Community antenna television

The invention: A system for connecting households in isolated areas to common antennas to improve television reception, community antenna television was a forerunner of modern cable-television systems.

The people behind the invention:
Robert J. Tarlton, the founder of CATV in eastern Pennsylvania
Ed Parsons, the founder of CATV in Oregon
Ted Turner (1938-    ), founder of the first cable superstation, WTBS

Growing Demand for Television

Television broadcasting in the United States began in the late 1930’s. After delays resulting from World War II, it exploded into the American public’s consciousness. The new medium relied primarily on existing broadcasting stations that quickly converted from radio to television formats. Consequently, the reception of television signals was centralized in large cities.

The demand for television quickly swept across the country. Ownership of television receivers increased dramatically, and those who could not afford their own flocked to businesses, usually taverns, or to the homes of friends with sets. People in urban areas had more opportunities to view the new medium and had the advantage of more broadcasts within the range of reception. Those in outlying regions were not so fortunate, as they struggled to see fuzzy pictures and were, in some cases, unable to receive a signal at all.

The situation for outlying areas worsened in 1948, when the Federal Communications Commission (FCC) implemented a ban on all new television stations while it considered how to expand the television market and how to deal with a controversy over color reception. This left areas without nearby stations in limbo, while people in areas with established stations reaped the benefits of new programming. The ban would remain in effect until 1952, when new stations came under construction across the country.


Poor reception in some areas and the FCC ban on new station construction together set the stage for the development of Community Antenna Television (CATV). CATV did not have a glamorous beginning. Late in 1949, two different men, frustrated by the slow movement of television to outlying areas, set up what would become the foundation of the multimillion-dollar cable industry.

Robert J. Tarlton was a radio salesman in Lansford, Pennsylvania, about sixty-five miles from Philadelphia. He wanted to move into television sales but lived in an area with poor reception. Together with friends, he founded Panther Valley Television and set up a master antenna in a mountain range that blocked the reception of Philadelphia-based broadcasting. For an installation fee of $125 and a fee of $3 per month, Panther Valley Television offered residents clear reception of the three Philadelphia stations via a coaxial cable wired to their homes. At the same time, Ed Parsons, of KAST radio in Astoria, Oregon, linked homes via coaxial cables to a master antenna set up to receive remote broadcasts. Both systems offered three channels, the major network affiliates, to subscribers.

By 1952, when the FCC ban was lifted, some seventy CATV systems provided small and rural communities with the wonders of television. That same year, the National Cable Television Association was formed to represent the interests of the young industry. Early systems could carry only one to three channels. In 1953, CATV began to use microwave relays, which could import distant signals to add more variety and pushed system capability to twelve channels. A system of towers began sprouting up across the country. These towers could relay a television signal from a powerful originating station to each cable system’s main antenna. This further opened the reception available to subscribers.

Pay Television

The notion of pay television also began at this time.
In 1951, the FCC authorized a test of Zenith Radio Corporation’s Phonevision in Chicago. Scrambled images could be sent as electronic impulses over telephone lines, then unscrambled by devices placed in subscribers’ homes. Subscribers could order a film over the telephone for a minimal cost, usually $1. Advertisers for the system promoted the idea of films for the “sick, aged, and sitterless.” This early test was a forerunner of the premium, or pay, channels of later decades.

Network opposition to CATV came in the late 1950’s. RCA chairman David Sarnoff warned against a pay television system that could soon fall under government regulation, as in the case of utilities. In April, 1959, the FCC found no basis for asserting jurisdiction or authority over CATV. This left the industry open to tremendous growth. By 1960, the industry included 640 systems with 700,000 subscribers. Ten years later, 2,490 systems were in operation, serving more than 4.5 million households.

This accelerated growth came at a price. In April, 1965, the FCC reversed itself and asserted authority over microwave-fed CATV. A year later, the entire cable system came under FCC control. The FCC quickly restricted the use of distant signals in the largest hundred markets.

The FCC movement to control cable systems stemmed from the agency’s desire to balance the television market. From the onset of television broadcasting, the FCC strived to maintain a balanced programming schedule. The goal was to create local markets in which local affiliate stations prospered from advertising and other community support and would not be unduly harmed by competition from larger metropolitan stations. In addition, growth of the industry ideally was to be uniform, with large and small cities receiving equal consideration.

Cable systems, particularly those that could receive distant signals via microwave relay, upset the balance. For example, a small Ohio town could receive New York channels as well as Chicago channels via cable, as opposed to receiving only the channels from one city. The balance was further upset with the creation of a new communications satellite, COMSAT, in 1963. This technology allowed a signal to be sent to the satellite, retransmitted back to Earth, and then picked up by a receiving station.
This further increased the range of cable offerings and moved the transmission of television signals to a national scale, as microwave-relayed transmissions worked best in a regional scope. These two factors led the FCC to freeze the cable industry from new development and construction in December, 1968. After 1972, when the cable freeze was lifted, the greatest impact of CATV would be felt.


Ted Turner

“The whole idea of grand things always turned me on,” Ted Turner said in a 1978 Playboy magazine interview. Irrepressible, tenacious, and flamboyant, Turner was groomed from childhood for grandness. Born Robert Edward Turner III in 1938 in Cincinnati, Ohio, he was raised by a harsh, demanding father who sent him to military preparatory schools and insisted he study business at Brown University instead of attending the U.S. Naval Academy, as the son wanted. Known as “Terrible Ted” in school for his high-energy, maverick ways, he became a champion debater, expert sailor, and natural leader.

When the Turner Advertising Company failed in 1960, and his father committed suicide, young Turner took it over and parlayed it into an empire, acquiring or creating television stations and revolutionizing how they were broadcast to Americans. From then on he acquired, innovated, and, often, shocked. He bought the Atlanta Braves baseball team and Hawks basketball team, often angering sports executives with his recruiting methods, earning the nicknames “Mouth of the South” and “Captain Outrageous” for his assertiveness. He won the prestigious America’s Cup in 1977 at the helm of the yacht Courageous. He bought Metro-Goldwyn-Mayer/United Artists and incensed movie purists by having black-and-white classics “colorized.” In 1995 he concluded a $7.5 billion merger of Turner Broadcasting and Time Warner and set about an insult-slinging business war with another media tycoon, Rupert Murdoch. Meanwhile, he went through three marriages, the last to movie star Jane Fonda, and became the largest private landholder in the nation, with luxury homes in six states.

However, Turner’s life was not all acquisition. He started a charitable foundation and sponsored the Olympics-like Goodwill Games between the United States and the Soviet Union to improve relations, for which Time magazine named him its man of the year in 1991.
However, Turner’s grandest shocker came in 1997 when he promised to donate $1 billion—$100 million each year for a decade—to the United Nations to help in feeding the poor, resettling refugees, and eradicating land mines. And he publicly challenged other super-rich people to use their vast wealth similarly.

(George Bennett)



Impact

The founding of cable television had a two-tier effect on the American public. The immediate impact of CATV was the opening of television to areas cut off from network broadcasting as a result of distance or topographical obstructions. Cable brought television to those who would have otherwise missed the early years of the medium.

As technology furthered the capabilities of the industry, a second impact emerged. Along with the 1972 lifting of the ban on cable expansion, the FCC established strict guidelines for the advancement of the industry. Issuing a 500-page blueprint for the expansion of cable, the FCC included limits on the use of imported distant signals, required the blacking out of some specific programs (films and serials, for example), and limited pay cable to films that were more than two years old and to sports. Another component of the guidelines required all systems that went into operation after March, 1972 (and all systems by March, 1977), to provide public access channels for education and local government. In addition, channels were to be made available for lease.

These access channels opened information to subscribers that would not normally be available. Local governments and school boards began to broadcast meetings, and even high school athletics soon appeared via public access channels. These channels also provided space to local educational institutions for home-based courses in a variety of disciplines.

Cable Communications Policy Act

Further FCC involvement came in the 1984 Cable Communications Policy Act, which deregulated the industry and opened the door for more expansion. This act removed local control over cable service rates and virtually made monopolies out of local providers by limiting competition. The late 1980’s brought a new technology, fiber optics, which promised to further advance the industry by increasing the quality of cable services and channel availability.
One area of the cable industry, pay television, took off in the 1970’s and early 1980’s. The first major pay channel was developed by the media giant Time-Life. It inaugurated Home Box Office (HBO) in 1975 as the first national satellite interconnected network. Early HBO programming primarily featured films but included no films less than two years old (meeting the 1972 FCC guidelines), no serials, and no advertisements. Other premium movie channels followed, including Showtime, Cinemax, and The Movie Channel. By the late 1970’s, cable systems offered multiple premium channels to their subscribers.

Superstations were another component of the cable industry that boomed in the 1970’s and 1980’s. The first, WTBS, was owned and operated by Ted Turner and broadcast from Atlanta, Georgia. It emphasized films and reruns of old television series. Cable systems that broadcast WTBS were asked to allocate the signal to channel 17, thus creating uniformity across the country for the superstation. Chicago’s WGN and New York City’s WOR soon followed, gaining access to homes across the nation via cable. Both these superstations emphasized sporting events in the early years and expanded to include films and other entertainment in the 1980’s.

Both pay channels and superstations transmitted via satellites (WTBS leased space from RCA, for example) and were picked up by cable systems across the country. Other stations with broadcasts intended solely for the cable industry opened in the 1980’s. Ted Turner started the Cable News Network in 1980 and followed with the all-news network Headline News. He added another channel with the Turner Network Television (TNT) in 1988. Other 1980’s additions included The Disney Channel, ESPN, The Entertainment Channel, The Discovery Channel, and Lifetime. The Cable-Satellite Public Affairs Network (C-SPAN) enhanced the cable industry’s presence in Washington, D.C., by broadcasting sessions of the House of Representatives.

Specialized networks for particular audiences also developed. Music Television (MTV), featuring songs played along with video sequences, premiered in 1981.
Nickelodeon, a children’s channel, and VH-1, a music channel aimed at baby boomers rather than MTV’s teenage audience, reflected the movement toward specialization. Other specialized channels, such as the Sci-Fi Channel and the Comedy Channel, went even further in targeting specific audiences.


Cable and the Public

The impact on the American public was tremendous. Information and entertainment became available around the clock. Cable provided a new level of service, information, and entertainment unavailable to nonsubscribers. One phenomenon that exploded in the late 1980’s was home shopping. Via The Home Shopping Club and QVC, two shopping channels offered through cable television, the American public could order a full range of products. Everything from jewelry to tools and home cleaning supplies to clothing and electronics was available to anyone with a credit card. Americans could now go shopping from home.

The cable industry was not without its competitors and critics. In the 1980’s, the videocassette recorder (VCR) opened the viewing market. Prerecorded cassettes of recent film releases as well as classics were made available for purchase or for a small rental fee. National chains of video rental outlets, such as Blockbuster Video and Video Towne, offered thousands of titles for rent. Libraries also began to stock films. This created competition for the cable industry, in particular the premium movie channels. To combat this competition, channels began to offer original productions unavailable on videocassette. The combined effect of the cable industry and the videocassette market was devastating to the motion picture industry. The wide variety of programming available at home encouraged the American public, especially baby boomers with children, to stay home and watch cable or rented films instead of going to theaters.

Critics of the cable industry seized on the violence, sexual content, and graphic language found in some of cable’s offerings. One parent responded by developing a lockout device that could make certain channels unavailable to children. Some premium channels developed an after-hours programming schedule that aired adult-theme programming only late at night. Another criticism stemmed from the repetition common on pay channels.
As a result of the limited supply of and large demand for films, pay channels were forced to repeat programs several times within a month and to rebroadcast films that were several years old. This led consumers to question the value of the additional monthly fee paid for such channels. To combat the problem, premium channels increased efforts aimed at original production and added more films that had not been box-office hits.

By the early 1990’s, as some eleven thousand cable systems were serving 56.2 million subscribers, a new cry for regulation began. Debates over services and increasingly high rates led the FCC and Congress to investigate the industry, opening the door for new guidelines on the cable industry. The non-cable networks—American Broadcasting Company (ABC), Columbia Broadcasting System (CBS), National Broadcasting Company (NBC), and newcomer Fox—stressed their concerns about the cable industry. These networks provided free programming, and cable systems profited from inclusion of network programming. Television industry representatives expressed the opinion that cable providers should pay for the privilege of retransmitting network broadcasts.

The impact on cable’s subscribers, especially concerning monthly cable rates, came under heavy debate in public and government forums. The administration in Washington, D.C., expressed concern that cable rates had risen too quickly and for no obvious reason other than profit-seeking by what were essentially monopolistic local cable systems. What was clear was that the cable industry had transformed the television experience and was going to remain a powerful force within the medium. Regulators and television industry leaders were left to determine how to maintain an equitable coexistence within the medium.

See also Color television; Communications satellite; Fiber-optics; Telephone switching; Television.

Further Reading
Baldwin, Thomas F., and D. Stevens McVoy. Cable Communication. Englewood Cliffs, N.J.: Prentice-Hall, 1983.
Brenner, Daniel L., and Monroe E. Price. Cable Television and Other Nonbroadcast Video: Law and Policy. New York: Clark Boardman, 1986.
Burns, R. W. Television: An International History of the Formative Years. London: Institution of Electrical Engineers in Association with the Science Museum, 1998.


Coleman, Wim. The Age of Broadcasting: Television. Carlisle, Mass.: Discovery Enterprises, 1997.
Negrine, Ralph M., ed. Cable Television and the Future of Broadcasting. New York: St. Martin’s Press, 1985.
Sconce, Jeffrey. Haunted Media: Electronic Presence from Telegraphy to Television. Durham, N.C.: Duke University Press, 2000.
Whittemore, Hank. CNN: The Inside Story. Boston: Little, Brown, 1990.


Compact disc

The invention: A plastic disk on which digitized music or computer data is stored.

The people behind the invention:
Akio Morita (1921- ), a Japanese physicist and engineer who was a cofounder of Sony
Wisse Dekker (1924- ), a Dutch businessman who led the Philips company
W. R. Bennett (1904-1983), an American engineer who was a pioneer in digital communications and who played an important part in the Bell Laboratories research program

Digital Recording

The digital system of sound recording, like the analog methods that preceded it, was developed by the telephone companies to improve the quality and speed of telephone transmissions. The system of electrical recording introduced by Bell Laboratories in the 1920's was part of this effort. Even Edison’s famous invention of the phonograph in 1877 was originally conceived as an accompaniment to the telephone. Although developed within the framework of telephone communications, these innovations found wide applications in the entertainment industry.

The basis of the digital recording system was a technique of sampling the electrical waveforms of sound called PCM, or pulse code modulation. PCM measures the characteristics of these waves and converts them into numbers. This technique was developed at Bell Laboratories in the 1930's to transmit speech. At the end of World War II, engineers of the Bell System began to adapt PCM technology for ordinary telephone communications.

The problem of turning sound waves into numbers was that of finding a method that could quickly and reliably manipulate millions of them. The answer to this problem was found in electronic computers, which used binary code to handle millions of computations in a few seconds. The rapid advance of computer technology and the
semiconductor circuits that gave computers the power to handle complex calculations provided the means to bring digital sound technology into commercial use. In the 1960's, digital transmission and switching systems were introduced to the telephone network. Pulse code modulation of audio signals into digital code achieved standards of reproduction that exceeded even the best analog system, creating an enormous dynamic range of sounds with no distortion or background noise.

The importance of digital recording went beyond the transmission of sound because it could be applied to all types of magnetic recording in which the source signal is transformed into an electric current. There were numerous commercial applications for such a system, and several companies began to explore the possibilities of digital recording in the 1970's. Researchers at the Sony, Matsushita, and Mitsubishi electronics companies in Japan produced experimental digital recording systems. Each developed its own PCM processor, an integrated circuit that changes audio signals into digital code. It does not continuously transform sound but instead samples it by analyzing thousands of minute slices of it per second. Sony’s PCM-F1 was the first analog-to-digital conversion chip to be produced. This gave Sony a lead in the research into and development of digital recording.

All three companies had strong interests in both audio and video electronics equipment and saw digital recording as a key technology because it could deal with both types of information simultaneously. They devised recorders for use in their manufacturing operations. After using PCM techniques to turn sound into digital code, they recorded this information onto tape, using not magnetic audio tape but the more advanced video tape, which could handle much more information.
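The sampling idea behind PCM (slicing the waveform thousands of times per second and turning each slice into a binary integer) can be sketched in a few lines of Python. This is an illustrative model, not the Bell Laboratories or Sony implementation; the 44,100-samples-per-second rate and sixteen-bit word size are the figures later adopted for the compact disc:

```python
import math

def pcm_encode(signal, sample_rate=44100, bits=16, duration=0.001):
    """Sample an analog waveform and quantize each sample to a binary integer."""
    max_code = 2 ** (bits - 1) - 1          # 32767 for sixteen-bit samples
    samples = []
    for i in range(int(sample_rate * duration)):
        t = i / sample_rate                 # time of this slice, in seconds
        amplitude = signal(t)               # analog value in the range -1.0 to 1.0
        samples.append(round(amplitude * max_code))
    return samples

# Encode one millisecond of a 1,000-hertz sine tone.
tone = lambda t: math.sin(2 * math.pi * 1000 * t)
codes = pcm_encode(tone)
print(len(codes))   # 44 slices in one millisecond at 44,100 samples per second
```

Playback reverses the process: each stored integer is converted back to a voltage at the same fixed rate, which is why a digital copy suffers none of the generational loss of analog tape.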
The experiments with digital recording occurred simultaneously with the accelerated development of video recording technology and owed much to the enhanced capabilities of video recorders. At this time, videocassette recorders were being developed in several corporate laboratories in Japan and Europe. The Sony Corporation was one of the companies developing video recorders at this time. Its U-matic machines were successfully used to record digitally. In 1972, the Nippon Columbia Company began to make its master recordings digitally on an Ampex video recording machine.


Links Among New Technologies

There were powerful links between the new sound recording systems and the emerging technologies of storing and retrieving video images. The television had proved to be the most widely used and profitable electronic product of the 1950's, but with the market for color television saturated by the end of the 1960's, manufacturers had to look for a replacement product. A machine to save and replay television images was seen as the ideal companion to the family TV set. The great consumer electronics companies—General Electric and RCA in the United States, Philips and Telefunken in Europe, and Sony and Matsushita in Japan—began experimental programs to find a way to save video images.

RCA’s experimental teams took the lead in developing a videodisc system, called Selectavision, that used an electronic stylus to read changes in capacitance on the disc. The greatest challenge to them came from the Philips company of Holland. Its optical videodisc used a laser beam to read information on a revolving disc, in which a layer of plastic contained coded information. With the aid of the engineering department of the Deutsche Grammophon record company, Philips had an experimental laser disc in hand by 1964.

The Philips Laservision videodisc was not a commercial success, but it carried forward an important idea. The research and engineering work carried out in the laboratories at Eindhoven in Holland proved that the laser reader could do the job. More important, Philips engineers had found that this fragile device could be mass-produced as a cheap and reliable component of a commercial product. The laser optical decoder was applied to reading the binary codes of digital sound. By the end of the 1970's, Philips engineers had produced a working system. Ten years of experimental work on the Laservision system proved to be a valuable investment for the Philips corporation.
Around 1979, it started to work on a digital audio disc (DAD) playback system. This involved more than the basic idea of converting the output of the PCM conversion chip onto a disc. The lines of pits on the compact disc carry a great amount of information: the left- and right-hand tracks of the stereo system are identified, and a sequence
of pits also controls the motor speed and corrects any error in the laser reading of the binary codes. This research was carried out jointly with the Sony Corporation of Japan, which had produced a superior method of encoding digital sound with its PCM chips. The binary codes that carried the information were manipulated by Sony’s sixteen-bit microprocessor. Its PCM chip for analog-to-digital conversion was also employed. Together, Philips and Sony produced a commercial digital playback record that they named the compact disc. The name is significant, as it does more than indicate the size of the disc—it indicates family ties with the highly successful compact cassette. Philips and Sony had already worked to establish this standard in the magnetic tape format and aimed to make their compact disc the standard for digital sound reproduction.

Philips and Sony began to demonstrate their compact digital disc (CD) system to representatives of the audio industry in 1981. They were not alone in digital recording. The Japanese Victor Company, a subsidiary of Matsushita, had developed a version of digital recording from its VHD video disc design. It was called audio high density disc (AHD). Instead of the small CD disc, the AHD system used a ten-inch vinyl disc. Each digital recording system used a different PCM chip with a different rate of sampling the audio signal. The recording and electronics industries’ decision to standardize on the Philips/Sony CD system was therefore a major victory for these companies and an important event in the digital era of sound recording.

Although not much larger than a 3.5-inch floppy disk, a compact disc can store more than five hundred times as much data. (PhotoDisc)

Sony had found out the hard way that the technical performance of an innovation is irrelevant when compared with the politics of turning it into an industrywide standard. Although the pioneer in videocassette recorders, Sony had been beaten by its rival, Matsushita, in establishing the video recording standard. This mistake was not repeated in the digital standards negotiations, and many companies were persuaded to license the new technology. In 1982, the technology was announced to the public. The following year, the compact disc was on the market.

The Apex of Sound Technology

The compact disc represented the apex of recorded sound technology. Simply put, here at last was a system of recording in which there was no extraneous noise—no surface noise of scratches and pops, no tape hiss, no background hum—and no damage was done to the recording as it was played. In principle, a digital recording will last forever, and each play will sound as pure as the first. The compact disc could also play much longer than the vinyl record or long-playing cassette tape.

Despite these obvious technical advantages, the commercial success of digital recording was not ensured. There had been several other advanced systems that had not fared well in the marketplace, and the conspicuous failure of quadrophonic sound in the 1970's had not been forgotten within the industry of recorded sound. Historically, there were two key factors in the rapid acceptance of a new system of sound recording and reproduction: a library of prerecorded music to tempt the listener into adopting the system and a continual decrease in the price of the playing units to bring them within the budgets of more buyers. By 1984, there were about a thousand titles available on compact disc in the United States; that number had doubled by 1985. Although many of these selections were classical music—it was naturally assumed that audiophiles would be the first to buy digital equipment—popular music was well represented.
The first CD available for purchase was an album by popular entertainer Billy Joel. The first CD-playing units cost more than $1,000, but Akio Morita of Sony was determined that the company should reduce the price of players even if it meant selling them below cost. Sony’s audio engineering department improved the performance of the players while reducing size and cost. By 1984, Sony had a small CD unit on the market for $300. Several of Sony’s competitors, including Matsushita, had followed its lead into digital reproduction. There were several compact disc players available in 1985 that cost less than $500. Sony quickly applied digital technology to the popular personal stereo and to automobile sound systems. Sales of CD units increased roughly tenfold from 1983 to 1985.

Akio Morita

Akio Morita was born in Nagoya, Japan, in 1921 into a family owning one of the country’s oldest, most prosperous sake breweries. As the eldest son, Morita was expected to take over its management from his father. However, business did not interest him as a child. Electronics did, especially radios. He made his own radio and phonograph and resolved to be a scientist. He succeeded, but in an ironic twist, he also became one of the twentieth century’s most successful businessmen.

After taking a degree in physics from Osaka Imperial University in 1944, he worked at the Naval Research Center. There he met Masaru Ibuka. Although Ibuka was twelve years older and much more reserved in temperament, the two became fast friends. After World War II, they borrowed the equivalent of about $500 from Morita’s father and opened the Tokyo Telecommunications Company, making voltmeters and, later, tape recorders. To help along sluggish sales, Morita visited local schools to demonstrate the tape recorder’s usefulness in teaching. He was so successful that a third of Japan’s elementary schools bought them.

From then on, Morita, as vice president of the company, was the lead man in marketing and sales strategy. He bought rights from Western Electric Company to manufacture transistors in 1954, and soon the company was turning out transistor radios. Sales soared. They changed the name to Sony (based on the Latin word for sound, sonus) because it was more memorable.

Despite an American bias against Japanese products—which many Americans regarded as shoddy imitations—Morita launched Sony America in 1960. In 1963 Sony became the first Japanese company to sell its stock in America and in 1970 the first to be listed on the New York Stock Exchange, opening an American factory two years later. Morita became president of Sony Corporation in 1971 and board chairman in 1976. In 1984 Sony earnings exceeded $5 billion, a ten-million-fold increase in worth in less than forty years. As important for Japanese industry and national honor, Morita and Sony moved Japanese electronics into the leading edge of technical sophistication and craftsmanship.

Impact on Vinyl Recording

When the compact disc was announced in 1982, the vinyl record was the leading form of recorded sound, with 273 million units sold annually compared to 125 million prerecorded cassette tapes. The compact disc sold slowly, beginning with 800,000 units shipped in 1983 and rising to 53 million in 1986. By that time, the cassette tape had taken the lead, with slightly fewer than 350 million units. The vinyl record was in decline, with only about 110 million units shipped. Compact discs first outsold vinyl records in 1988. In the ten years from 1979 to 1988, the sales of vinyl records dropped nearly 80 percent. In 1989, CDs accounted for more than 286 million sales, but cassettes still led the field with total sales of 446 million. The compact disc finally passed the cassette in total sales in 1992, when more than 300 million CDs were shipped, an increase of 22 percent over the figure for 1991.

The introduction of digital recording had an invigorating effect on the industry of recorded sound, which had been unable to fully recover from the slump of the late 1970's. Sales of recorded music had stagnated in the early 1980's, and an industry accustomed to steady increases in output became eager to find a new product or style of music to boost its sales. The compact disc was the product to revitalize the market for both recordings and players.
During the 1980's, worldwide sales of recorded music jumped from $12 billion to $22 billion, with about half of the sales volume accounted for by digital recordings by the end of the decade.

The success of digital recording served in the long run to undermine the commercial viability of the compact disc. This was a play-only technology, like the vinyl record before it. Once users had become accustomed to the pristine digital sound, they clamored for digital recording capability. The alliance of Sony and Philips broke down in the search for a digital tape technology for home use. Sony produced a digital tape system called DAT, while Philips responded with a digital version of its compact audio tape called DCC. Sony answered the challenge of DCC with its Mini Disc (MD) product, which can record and replay digitally.

The versatility of digital recording has opened up a wide range of consumer products. Compact disc technology has been incorporated into the computer, in which CD-ROM readers convert the digital code of the disc into sound and images. Many home computers have the capability to record and replay sound digitally. Digital recording is the basis for interactive audio/video computer programs in which the user can interface with recorded sound and images. Philips has established a strong foothold in interactive digital technology with its CD-I (compact disc interactive) system, which was introduced in 1990. This acts as a multimedia entertainer, providing sound, moving images, games, and interactive sound and image publications such as encyclopedias. The future of digital recording will be broad-based systems that can record and replay a wide variety of sounds and images and that can be manipulated by users of home computers.

See also Cassette recording; Dolby noise reduction; Electronic synthesizer; FM radio; Laser-diode recording process; Optical disk; Transistor; Videocassette recorder; Walkman cassette player.

Further Reading

Copeland, Peter. Sound Recordings. London: British Library, 1991.
Heerding, A. A Company of Many Parts. Cambridge: Cambridge University Press, 1998.
Marshall, David V. Akio Morita and Sony. Watford: Exley, 1995.
Morita, Akio, with Edwin M. Reingold and Mitsuko Shimomura. Made in Japan: Akio Morita and Sony. London: HarperCollins, 1994.
Nathan, John. Sony: The Private Life. Boston, Mass.: Houghton Mifflin, 1999.
Schlender, Brenton R. “How Sony Keeps the Magic Going.” Fortune 125 (February 24, 1992).


Compressed-air-accumulating power plant

The invention: Plants that can be used to store energy in the form of compressed air when electric power demand is low and use it to produce energy when power demand is high.

The organization behind the invention:
Nordwestdeutsche Kraftwerke, a German company

Power, Energy Storage, and Compressed Air

Energy, which can be defined as the capacity to do work, is essential to all aspects of modern life. One familiar kind of energy, which is produced in huge amounts by power companies, is electrical energy, or electricity. Most electricity is produced in a process that consists of two steps. First, a fossil fuel such as coal is burned and the resulting heat is used to make steam. Then, the steam is used to operate a turbine system that produces electricity. Electricity has myriad applications, including the operation of heaters, home appliances of many kinds, industrial machinery, computers, and artificial illumination systems.

An essential feature of electricity manufacture is the production of the particular amount of electricity that is needed at a given time. If moment-to-moment energy requirements are not met, the city or locality involved will experience a “blackout,” the most obvious feature of which is the loss of electrical lighting. To prevent blackouts, it is essential to store extra electricity at times when power production exceeds power demands. Then, when power demands exceed the capacity to make energy by normal means, stored energy can be used to make up the difference.

One successful modern procedure for such storage is the compressed-air-accumulation process, pioneered by the Nordwestdeutsche Kraftwerke company’s compressed-air-accumulating power plant, which opened in December, 1978. The plant, which is located in Huntorf, Germany (at the time, West Germany), makes compressed air during periods of low electricity demand, stores the
air in an underground cavern, and uses it to produce extra electricity during periods of high demand.

Plant Operation and Components

The German 300-megawatt compressed-air-accumulating power plant in Huntorf produces extra electricity from stored compressed air that will provide up to four hours per day of local peak electricity needs. The energy-storage process, which is vital to meeting very high peak electric power demands, is viable for electric power plants whose total usual electric outputs range from 25 megawatts to the 300 megawatts produced at Huntorf. It has been suggested, however, that the process is most suitable for 25- to 50-megawatt plants.

The energy-storage procedure used at Huntorf is quite simple. All the surplus electricity that is made in nonpeak-demand periods is utilized to drive an air compressor. The compressor pumps air from the surrounding atmosphere into an airtight underground storage cavern. When extra electricity is required, the stored compressed air is released and passed through a heating unit to be warmed, after which it is used to run gas-turbine systems that produce electricity. This sequence of events is the same as that used in any gas-turbine generating system; the only difference is that the compressed air can be stored for any desired period of time rather than having to be used immediately.

One requirement of any compressed-air-accumulating power plant is an underground storage chamber. The Huntorf plant utilizes a cavern that was hollowed out some 450 meters below the surface of the earth. The cavern was created by drilling a hole into an underground salt deposit and pumping in water. The water dissolved the salt, and the resultant saltwater solution (brine) was pumped out of the deposit. The process of pumping in water and removing brine was continued until the cavern reached the desired size. This type of storage cavern is virtually leak-free.
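The charge-and-discharge cycle described above can be modeled with a short Python sketch: run the compressor when demand falls below the plant's steady output, run the gas turbines from storage when demand rises above it. All of the figures below are illustrative assumptions, not Huntorf specifications:

```python
# Toy dispatch model of a compressed-air-accumulating power plant.
# Capacity, base output, and efficiency are assumed values for illustration.
CAVERN_LIMIT_MWH = 1200.0   # energy storable in the cavern as compressed air
ROUND_TRIP_EFF = 0.75       # fraction of stored energy recovered as electricity
BASE_OUTPUT_MW = 250.0      # steady output of the conventional generating units

def dispatch(hourly_demand_mw):
    """Run the compressor in off-peak hours, the gas turbines in peak hours."""
    stored = 0.0
    served = []
    for demand in hourly_demand_mw:
        if demand <= BASE_OUTPUT_MW:
            surplus = BASE_OUTPUT_MW - demand
            stored = min(CAVERN_LIMIT_MWH, stored + surplus)   # compress and store
            served.append(demand)
        else:
            deficit = demand - BASE_OUTPUT_MW
            usable = min(deficit, stored * ROUND_TRIP_EFF)     # release stored air
            stored -= usable / ROUND_TRIP_EFF
            served.append(BASE_OUTPUT_MW + usable)
    return served, stored

# A day with a quiet night, a moderate afternoon, and a four-hour 300 MW peak.
demand = [150.0] * 12 + [200.0] * 8 + [300.0] * 4
served, remaining = dispatch(demand)
print(max(served))   # 300.0: the evening peak is fully covered from storage
```

Nothing in the loop depends on the storage medium, which is why the same load-shifting logic applies to hydro-storage; the differences the article describes (terrain, efficiency, construction cost) show up only in the three constants.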
The preparation of such underground salt-dome caverns has been performed roughly since the middle of the twentieth century. Until the Huntorf endeavor, such caves were used to stockpile petroleum and natural gas for later use. It is also possible to use mined, hard-rock caverns for compressed-air accumulation when it is necessary to compress air to pressures higher than those that can be maintained effectively in a salt-dome cavern.

Schematic of a compressed-air-accumulating power plant.

The essential machinery that must be added to conventional power plants to turn them into compressed-air-accumulating power plants are motor-driven air compressors and gas-turbine generating systems. This equipment must be connected appropriately so that in the storage mode, the overall system will compress air for storage in the underground cavern, and in the power-production mode, the system will produce electricity from the stored compressed air.

Large compressed-air-accumulating power plants require specially constructed machinery. For example, the compressors that are used at Huntorf were developed specifically for that plant by Sulzer, a Swiss company. When the capacity of such plants is no higher than 50 megawatts, however, standard, readily available components can be used. This means that relatively small compressed-air-accumulating power plants can be constructed for a reasonable cost.

Consequences

The development of compressed-air-accumulating power plants has had a significant impact on the electric power industry, adding to its capacity to store energy. The main storage methods available prior to the development of compressed-air-accumulation methodology were batteries and water that was pumped uphill (hydro-storage). Battery technology is expensive, and its capacity is insufficient for major, long-term power storage. Hydro-storage is a more viable technology.


Compressed-air energy-storage systems have several advantages over hydro-storage. First, they can be used in areas where flat terrain makes it impossible to use hydro-storage. Second, compressed-air storage is more efficient than hydro-storage. Finally, the fact that standard plant components can be used, along with several other factors, means that 25- to 50-megawatt compressed-air storage plants can be constructed much more quickly and cheaply than comparable hydro-storage plants.

The attractiveness of compressed-air-accumulating power plants has motivated efforts to develop hard-rock cavern construction techniques that cut costs and make it possible to use high-pressure air storage. In addition, aquifers (underground strata of porous rock that normally hold groundwater) have been used successfully for compressed-air storage. It is expected that compressed-air-accumulating power plants will be widely used in the future, which will help to decrease pollution and cut the use of fossil fuels.

See also Alkaline storage battery; Breeder reactor; Fuel cell; Geothermal power; Heat pump; Nuclear power plant; Tidal power plant.

Further Reading

“Compressed Air Stores Electricity.” Popular Science 242, no. 5 (May, 1993).
Lee, Daehee. “Power to Spare: Compressed Air Energy Storage.” Mechanical Engineering 113, no. 7 (July, 1991).
Shepard, Sam, and Septimus van der Linden. “Compressed Air Energy Storage Adapts Proven Technology to Address Market Opportunities.” Power Engineering 105, no. 4 (April, 2001).
Zink, John C. “Who Says You Can’t Store Electricity?” Power Engineering 101, no. 3 (March, 1997).


Computer chips

The invention: Also known as a microprocessor, a computer chip combines the basic logic circuits of a computer on a single silicon chip.

The people behind the invention:
Robert Norton Noyce (1927-1990), an American physicist
William Shockley (1910-1989), an American coinventor of the transistor who was a cowinner of the 1956 Nobel Prize in Physics
Marcian Edward Hoff, Jr. (1937- ), an American engineer
Jack St. Clair Kilby (1923- ), an American researcher and assistant vice president of Texas Instruments

The Shockley Eight

The microelectronics industry began shortly after World War II with the invention of the transistor. While radar was being developed during the war, it was discovered that certain crystalline substances, such as germanium and silicon, possess unique electrical properties that make them excellent signal detectors. This class of materials became known as “semiconductors,” because they are neither conductors nor insulators of electricity.

Immediately after the war, scientists at Bell Telephone Laboratories began to conduct research on semiconductors in the hope that they might yield some benefits for communications. The Bell physicists learned to control the electrical properties of semiconductor crystals by “doping” (treating) them with minute impurities. When two thin wires for current were attached to this material, a crude device was obtained that could amplify the voice. The transistor, as this device was called, was developed late in 1947.

The transistor duplicated many functions of vacuum tubes; it was also smaller, required less power, and generated less heat. The three Bell Laboratories scientists who guided its development—William Shockley, Walter H. Brattain, and John Bardeen—won the 1956 Nobel Prize in Physics for their work.


Shockley left Bell Laboratories and went to Palo Alto, California, where he formed his own company, Shockley Semiconductor Laboratories, which was a subsidiary of Beckman Instruments. Palo Alto is the home of Stanford University, which, in 1954, set aside 655 acres of land for a high-technology industrial area known as Stanford Research Park. One of the first small companies to lease a site there was Hewlett-Packard. Many others followed, and the surrounding area of Santa Clara County gave rise in the 1960's and 1970's to a booming community of electronics firms that became known as “Silicon Valley.”

On the strength of his prestige, Shockley recruited eight young scientists from the eastern United States to work for him. One was Robert Norton Noyce, an Iowa-bred physicist with a doctorate from the Massachusetts Institute of Technology. Noyce came to Shockley’s company in 1956. The “Shockley Eight,” as they became known in the industry, soon found themselves at odds with their boss over issues of research and development. Seven of the dissenting scientists negotiated with industrialist Sherman Fairchild, and they convinced the remaining holdout, Noyce, to join them as their leader. The Shockley Eight defected in 1957 to form a new company, Fairchild Semiconductor, in nearby Mountain View, California. Shockley’s company, which never recovered from the loss of these scientists, soon went out of business.

Despite their tiny size, individual computer chips contain the basic logic circuits of entire computers. (PhotoDisc)

Jack St. Clair Kilby

Maybe the original, deepest inspiration for the integrated circuit chip was topographical: As a boy Jack Kilby (b. 1923) often accompanied his father, an electrical engineer, on trips over the circuit of roads through his flat home state, Kansas. In any case, he learned to love things electrical, and radios especially, from his father.

Young Kilby had just started studying at the University of Illinois on his way to a degree in electrical engineering when World War II started. He joined the Office of Strategic Services (OSS), which sent him into Japanese-occupied territory to train local freedom fighters. He found the radios given to him to be heavy and unreliable, so he got hold of components on his own and built better, smaller radios.

The “better, smaller” theme stayed with him. His first job out of college was with Centralab in Milwaukee, Wisconsin, where he designed ever smaller circuits. However, the bulky, hot vacuum tubes then in use limited miniaturization. In 1952, Centralab and Kilby eagerly incorporated the newly invented transistors into their designs. Kilby found, however, that all the electrical connections needed to hook up transistors and wires in a complex circuit also limited miniaturization.

He moved to Texas Instruments in 1958. The company was working on a modular approach to miniaturization with snap-together standardized parts. Kilby had a better idea: place everything for a specific circuit on a chip of silicon. Along with many other inventors, Kilby was soon looking for ways to put this new integrated circuit to work. He experimented with their use in computers and in generating solar power. He helped to develop the first hand-held calculator. Soon integrated circuits were in practically every electronic gadget, so that by the year 2000 his invention supported an electronic equipment industry that earned more than a trillion dollars a year.

Among his many awards, Kilby shared the 2000 Nobel Prize in Physics with Zhores I. Alferov and Herbert Kroemer, both of whom also miniaturized electronics.


Integrating Circuits

Research efforts at Fairchild Semiconductor and Texas Instruments, in Dallas, Texas, focused on putting several transistors on one piece, or “chip,” of silicon. The first step involved making miniaturized electrical circuits. Jack St. Clair Kilby, a researcher at Texas Instruments, succeeded in making a circuit on a chip that consisted of tiny resistors, transistors, and capacitors, all of which were connected with gold wires. He and his company filed for a patent on this “integrated circuit” in February, 1959. Noyce and his associates at Fairchild Semiconductor followed in July of that year with an integrated circuit manufactured by means of a “planar process,” which involved laying down several layers of semiconductor that were isolated by layers of insulating material. Although Kilby and Noyce are generally recognized as coinventors of the integrated circuit, Kilby alone received a membership in the National Inventors Hall of Fame for his efforts.

Consequences

By 1968, Fairchild Semiconductor had grown to a point at which many of its key Silicon Valley managers had major philosophical differences with the East Coast management of their parent company. This led to a major exodus of top-level management and engineers. Many started their own companies. Noyce, Gordon E. Moore, and Andrew Grove left Fairchild to form a new company in Santa Clara called Intel with $2 million that had been provided by venture capitalist Arthur Rock. Intel’s main business was the manufacture of computer memory integrated circuit chips. By 1970, Intel was able to develop and bring to market a random-access memory (RAM) chip that was subsequently purchased in large quantities by several major computer manufacturers, providing large profits for Intel.

In 1969, Marcian Edward Hoff, Jr., an Intel research and development engineer, met with engineers from Busicom, a Japanese firm.
These engineers wanted Intel to design a set of integrated circuits for Busicom’s desktop calculators, but Hoff told them their specifications were too complex. Nevertheless, Hoff began to think about the possibility of incorporating all the logic circuits of a computer central processing unit (CPU) into one chip. He began to design a chip called a “microprocessor,” which, when combined with a chip that would hold a program and one that would hold data, would become a small, general-purpose computer. Noyce encouraged Hoff and his associates to continue his work on the microprocessor, and Busicom contracted with Intel to produce the chip. Federico Faggin, who was hired from Fairchild, did the chip layout and circuit drawings. In January, 1971, the Intel team finished its first working microprocessor, the 4004.

Circuitry of a typical computer chip. (PhotoDisc)

The following year, Intel made a higher-capacity microprocessor, the 8008, for Computer Terminals Corporation. That company contracted with Texas Instruments to produce a chip with the same specifications as the 8008, which was produced in June, 1972. Other manufacturers soon produced their own microprocessors. The Intel microprocessor became the most widely used computer chip in the budding personal computer industry and may take significant credit for the PC “revolution” that soon followed.

Microprocessors have become so common that people use them every day without realizing it. In addition to being used in computers,
the microprocessor has found its way into automobiles, microwave ovens, wristwatches, telephones, and many other ordinary items. See also Bubble memory; Floppy disk; Hard disk; Optical disk; Personal computer; Virtual machine. Further Reading Ceruzzi, Paul E. A History of Modern Computing. Cambridge, Mass.: MIT Press, 2000. Reid, T. R. The Chip: How Two Americans Invented the Microchip and Launched a Revolution. New York: Random House, 2001. Slater, Robert. Portraits in Silicon. Cambridge, Mass.: MIT Press, 1987.


Contact lenses

The invention: Small plastic devices that fit under the eyelids, contact lenses, or “contacts,” frequently replace the more familiar eyeglasses that many people wear to correct vision problems.

The people behind the invention:
Leonardo da Vinci (1452-1519), an Italian artist and scientist
Adolf Eugen Fick (1829-1901), a German glassblower
Kevin Tuohy, an American optician
Otto Wichterle (1913-1998), a Czech chemist
William Feinbloom (1904-1985), an American optometrist

An Old Idea

There are two main types of contact lenses: hard and soft. Both types are made of synthetic polymers (plastics). The basic concept of the contact lens was conceived by Leonardo da Vinci in 1508. He proposed that vision could be improved if small glass ampules filled with water were placed in front of each eye. Nothing came of the idea until glass scleral lenses were invented by the German glassblower Adolf Fick. Fick’s large, heavy lenses covered the pupil of the eye, its colored iris, and part of the sclera (the white of the eye). Fick’s lenses were not useful, since they were painful to wear.

In the mid-1930’s, however, plastic scleral lenses were developed by various organizations and people, including the German company I. G. Farben and the American optometrist William Feinbloom. These lenses were light and relatively comfortable; they could be worn for several hours at a time.

In 1945, the American optician Kevin Tuohy developed corneal lenses, which covered only the cornea of the eye. Reportedly, Tuohy’s invention was inspired by the fact that his nearsighted wife could not bear scleral lenses but hated to wear eyeglasses. Tuohy’s lenses were hard contact lenses made of rigid plastic, but they were much more comfortable than scleral lenses and could be worn for longer periods of time. Soon after, other people developed soft contact lenses, which cover both the cornea and the iris. At present,


many kinds of contact lenses are available. Both hard and soft contact lenses have advantages for particular uses.

Eyes, Tears, and Contact Lenses

The camera-like human eye automatically focuses itself and adjusts to the prevailing light intensity. In addition, it never runs out of “film” and makes a continuous series of visual images. In the process of seeing, light enters the eye and passes through the clear, dome-shaped cornea, through the hole (the pupil) in the colored iris, and through the clear eye lens, which can change shape by means of muscle contraction. The lens focuses the light, which next passes across the jellylike “vitreous humor” and hits the retina. There, light-sensitive retinal cells send visual images to the optic nerve, which transmits them to the brain for interpretation.

Many people have 20/20 (normal) vision, which means that they can clearly see letters on a designated line of a standard eye chart placed 20 feet away. Nearsighted (myopic) people have vision of 20/40 or worse. This means that, 20 feet from the eye chart, they see clearly what people with 20/20 vision can see clearly at a greater distance. Myopia (nearsightedness) is one of the four most common visual defects. The others are hyperopia, astigmatism, and presbyopia. All are called “refractive errors” and are corrected with appropriate eyeglasses or contact lenses.

Myopia, which occurs in 30 percent of humans, occurs when the eyeball is too long for the lens’s focusing ability and images of distant objects focus before they reach the retina, causing blurry vision. Hyperopia, or farsightedness, occurs when the eyeballs are too short. In hyperopia, the eye’s lenses cannot focus images of nearby objects by the time those images reach the retina, resulting in blurry vision. A more common condition is astigmatism, in which incorrectly shaped corneas make all objects appear blurred. Finally, presbyopia, part of the aging process, causes the lens of the eye to lose its elasticity. It causes progressive difficulty in seeing nearby objects. In myopic, hyperopic, or astigmatic people, bifocal (two-lens) systems are used to correct presbyopia, whereas monofocal systems are used to correct presbyopia in people whose vision is otherwise normal.
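The Snellen ratios above can be read as simple fractions: 20/40 vision means seeing at 20 feet what a normal eye sees at 40 feet, or half of normal acuity. The short Python sketch below illustrates this arithmetic; the function name and sample values are invented for the example and do not come from this article.

```python
def decimal_acuity(test_distance_ft: float, normal_distance_ft: float) -> float:
    """Convert a Snellen fraction (e.g., 20/40) to a decimal acuity score.

    20/40 means the viewer sees at 20 feet what a person with normal
    vision sees at 40 feet, i.e., half the normal acuity.
    """
    return test_distance_ft / normal_distance_ft

print(decimal_acuity(20, 20))  # normal (20/20) vision -> 1.0
print(decimal_acuity(20, 40))  # the myopic example in the text -> 0.5
```

A score below 1.0 corresponds to vision worse than 20/20, which is the sense in which the article calls 20/40 vision "20/40 or worse."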


William Feinbloom

William Feinbloom started his career in eye care when he was only three, helping his father, an optometrist, in his practice. Born in Brooklyn, New York, in 1904, Feinbloom studied at the Columbia School of Optometry and graduated at nineteen. He later earned degrees in physics, mathematics, biophysics, and psychology, all of it to help him treat people who suffered visual impairments. His many achievements on behalf of the partially sighted won him professional accolades as the “father of low vision.”

In 1932, while working in a clinic, Feinbloom produced the first of his special vision-enhancing inventions. He ground three-power lenses, imitating the primary lens of a refracting telescope, and fit them in a frame for an elderly patient whose vision could not be treated. The patient was again able to see, and when news of this miracle later reached Pope Pius XI, he sent a special blessing to Feinbloom. Feinbloom soon opened his own practice and during the next fifty years invented a series of new lenses for people with macular degeneration and other vision diseases, as well as making the first set of contact lenses in America.

In 1978 Feinbloom bequeathed his practice to the Pennsylvania College of Optometry, which named it the William Feinbloom Vision Rehabilitation Center. Every year the William Feinbloom Award honors a vision-care specialist who has improved the delivery and quality of optometric service. Feinbloom died in 1985.

Modern contact lenses, which many people prefer to eyeglasses, are used to correct all common eye defects as well as many others not mentioned here. The lenses float on the layer of tears that is made continuously to nourish the eye and keep it moist. They fit under the eyelids and either over the cornea or over both the cornea and the iris, and they correct visual errors by altering the eye’s focal length enough to produce 20/20 vision. In addition to being more attractive than eyeglasses, contact lenses correct visual defects more effectively than eyeglasses can. Some soft contact lenses (all are made of flexible plastics) can be worn almost continuously. Hard lenses are


made of more rigid plastic and last longer, though they can usually be worn only for six to nine hours at a time. The choice of hard or soft lenses must be made on an individual basis.

The disadvantages of contact lenses include the fact that they must be cleaned frequently to prevent eye irritation. Furthermore, people who do not produce adequate amounts of tears (a condition called “dry eyes”) cannot wear them. Also, arthritis, many allergies, and poor manual dexterity caused by old age or physical problems make many people poor candidates for contact lenses.

Impact

The invention of Plexiglas hard scleral contact lenses set the stage for the development of the widely used corneal hard lenses by Tuohy. The development of soft contact lenses available to the general public began in Czechoslovakia in the 1960’s. It led to the sale, starting in the 1970’s, of the popular, soft contact lenses pioneered by Otto Wichterle. The Wichterle lenses, which cover both the cornea and the iris, are made of a plastic called HEMA (short for hydroxyethyl methacrylate). These very thin lenses have disadvantages that include the requirement of disinfection between uses, incomplete astigmatism correction, low durability, and the possibility of chemical combination with some medications, which can damage the eyes.

Contact lenses are placed directly on the surface of the eye. (Digital Stock)

Therefore, much research is being carried out to improve them. For this reason, and because of the continued


popularity of hard lenses, new kinds of soft and hard lenses are continually coming on the market.

See also Artificial heart; Disposable razor; Hearing aid; Laser eye surgery; Pacemaker.

Further Reading
“The Contact Lens.” Newsweek 130 (Winter, 1997/1998).
Hemphill, Clara. “A Quest for Better Vision: Spectacles over the Centuries.” New York Times (August 8, 2000).
Koetting, Robert A. History of the Contact Lens. Irvine, Calif.: Allergan, 1978.
Lubick, Naomi. “The Hard and the Soft.” Scientific American 283, no. 4 (October, 2000).


Coronary artery bypass surgery

The invention: The most widely used procedure of its type, coronary bypass surgery uses veins from the patient’s legs to improve circulation to the heart.

The people behind the invention:
Rene Favaloro (1923-2000), a heart surgeon
Donald B. Effler (1915- ), a member of the surgical team that performed the first coronary artery bypass operation
F. Mason Sones (1918-1985), a physician who developed an improved technique of X-raying the heart’s arteries

Fighting Heart Disease

In the mid-1960’s, the leading cause of death in the United States was coronary artery disease, claiming nearly 250 deaths per 100,000 people. Because this number was so alarming, much research was being conducted on the heart. Most of the public’s attention was focused on heart transplants performed separately by the famous surgeons Christiaan Barnard and Michael DeBakey. Yet other, less dramatic procedures were being developed and studied.

A major problem with coronary artery disease, besides the threat of death, is chest pain, or angina. Individuals whose arteries are clogged with fat and cholesterol are frequently unable to deliver enough oxygen to their heart muscles. This may result in angina, which causes enough pain to limit their physical activities. Some of the heart research in the mid-1960’s was an attempt to find a surgical procedure that would eliminate angina in heart patients. The various surgical procedures had varying success rates.

In the late 1950’s and early 1960’s, a team of physicians in Cleveland was studying surgical procedures that would eliminate angina. The team was composed of Rene Favaloro, Donald B. Effler, F. Mason Sones, and Laurence Groves. They were working on the concept, proposed by Dr. Arthur M. Vineberg from McGill University in Montreal, of implanting a healthy artery from the chest into the heart. This bypass procedure would provide the heart with another


source of blood, resulting in enough oxygen to overcome the angina. Yet Vineberg’s surgery was often ineffective because it was hard to determine exactly where to implant the new artery.

New Techniques

Before bypass surgery (left) the blockage in the artery threatens to cut off bloodflow; after surgery to graft a piece of vein (right), the blood can flow around the blockage.

In order to make Vineberg’s proposed operation successful, better diagnostic tools were needed. These were provided by the work of Sones. He developed a diagnostic procedure, called “arteriography,” whereby a catheter was inserted into an artery in the arm, which he ran all the way into the heart. He then injected a dye into the coronary arteries and photographed them with a high-speed motion-picture camera. This provided an image of the heart, which made it easy to determine where the blockages were in the coronary arteries.

Using this tool, the team tried several new techniques. First, the surgeons tried to ream out the deposits found in the narrow portion of the artery. They found, however, that this actually reduced blood flow. Second, they tried slitting the length of the blocked area of the artery and suturing in a strip of tissue that would increase the diameter of the opening. This was also ineffective because it often resulted in turbulent blood flow. Finally, the team attempted to reroute the flow of blood around the blockage by suturing in other tissue, such as a portion of a vein from the upper leg. This bypass procedure removed the part of the artery that was clogged and replaced it with a clear vessel, thereby restoring blood flow through the artery. This new method was introduced by Favaloro in 1967.

In order for Favaloro and other heart surgeons to perform coronary artery surgery successfully, several other medical techniques had to be developed. These included extracorporeal circulation and microsurgical techniques.


Extracorporeal circulation is the process of diverting the patient’s blood flow from the heart and into a heart-lung machine. This procedure was developed in 1953 by U.S. surgeon John H. Gibbon, Jr. Since the blood does not flow through the heart, the heart can be temporarily stopped so that the surgeons can isolate the artery and perform the surgery on motionless tissue. Microsurgery is necessary because some of the coronary arteries are less than 1.5 millimeters in diameter. Since these arteries had to be sutured, optical magnification and very delicate and sophisticated surgical tools were required.

After performing this surgery on numerous patients, follow-up studies were able to determine the surgery’s effectiveness. Only then was the value of coronary artery bypass surgery recognized as an effective procedure for reducing angina in heart patients.

Consequences

According to the American Heart Association, approximately 332,000 bypass surgeries were performed in the United States in 1987, an increase of 48,000 from 1986. These figures show that the work by Favaloro and others has had a major impact on the health of United States citizens. The future outlook is also positive. It has been estimated that five million people had coronary artery disease in 1987. Of this group, an estimated 1.5 million had heart attacks and 500,000 died. Of those living, many experienced angina. Research has developed new surgical procedures and new drugs to help fight coronary artery disease. Yet coronary artery bypass surgery is still a major form of treatment.

See also Artificial blood; Artificial heart; Blood transfusion; Electrocardiogram; Heart-lung machine; Pacemaker.

Further Reading
Bing, Richard J. Cardiology: The Evolution of the Science and the Art. 2d ed. New Brunswick, N.J.: Rutgers University Press, 1999.


Faiola, Anthony. “Doctor’s Suicide Strikes at Heart of Argentina’s Health Care Crisis: Famed Cardiac Surgeon Championed the Poor.” Washington Post (August 25, 2000).
Favaloro, René G. The Challenging Dream of Heart Surgery: From the Pampas to Cleveland. Boston: Little, Brown, 1994.


Cruise missile

The invention: Aircraft weapons system that makes it possible to attack both land and sea targets with extreme accuracy without endangering the lives of the pilots.

The person behind the invention:
Rear Admiral Walter M. Locke (1930- ), U.S. Navy project manager

From the Buzz Bombs of World War II

During World War II, Germany developed and used two different types of missiles: ballistic missiles and cruise missiles. A ballistic missile is one that does not use aerodynamic lift in order to fly. It is fired into the air by powerful rocket engines and reaches a high altitude; when its engines are out of fuel, it descends on its flight path toward its target. The German V-2 was the first ballistic missile. The United States and other countries subsequently developed a variety of highly sophisticated and accurate ballistic missiles.

The other missile used by Germany was a cruise missile called the V-1, which was also called the flying bomb or the buzz bomb. The V-1 used aerodynamic lift in order to fly, just as airplanes do. It flew relatively low and was slow; by the end of the war, the British, against whom it was used, had developed techniques for countering it, primarily by shooting it down.

After World War II, both the United States and the Soviet Union carried on the Germans’ development of both ballistic and cruise missiles. The United States discontinued serious work on cruise missile technology during the 1950’s: The development of ballistic missiles of great destructive capability had been very successful. Ballistic missiles armed with nuclear warheads had become the basis for the U.S. strategy of attempting to deter enemy attacks with the threat of a massive missile counterattack. In addition, aircraft carriers provided an air-attack capability similar to that of cruise missiles. Finally, cruise missiles were believed to be too vulnerable to being shot down by enemy aircraft or surface-to-air missiles.


While ballistic missiles are excellent for attacking large, fixed targets, they are not suitable for attacking moving targets. They can be very accurately aimed, but since they are not very maneuverable during their final descent, they are limited in their ability to change course to hit a moving target, such as a ship. During the 1967 war, the Egyptians used a Soviet-built cruise missile to sink the Israeli ship Elath. The U.S. military, primarily the Navy and the Air Force, took note of the Egyptian success and within a few years initiated cruise missile development programs.

The Development of Cruise Missiles

The United States probably could have developed cruise missiles similar to 1990’s models as early as the 1960’s, but it would have required a huge effort. The goal was to develop missiles that could be launched from ships and planes using existing launching equipment, could fly long distances at low altitudes at fairly high speeds, and could reach their targets with a very high degree of accuracy. If the missiles flew too slowly, they would be fairly easy to shoot down, like the German V-1’s. If they flew at too high an altitude, they would be vulnerable to the same type of surface-based missiles that shot down Gary Powers, the pilot of the U.S. U-2 spy plane, in 1960. If they were inaccurate, they would be of little use.

The early Soviet cruise missiles were designed to meet their performance goals without too much concern about how they would be launched. They were fairly large, and the ships that launched them required major modifications. The U.S. goal of being able to launch using existing equipment, without making major modifications to the ships and planes that would launch them, played a major part in the missiles’ torpedo-like shape: Sea-Launched Cruise Missiles (SLCMs) had to fit in the submarine’s torpedo tubes, and Air-Launched Cruise Missiles (ALCMs) were constrained to fit in rotary launchers.

The size limitation also meant that small, efficient jet engines would be required that could fly the long distances required without needing too great a fuel load. Small, smart computers were needed to provide the required accuracy. The engine and computer technologies began to be available in the 1970’s, and they blossomed in the 1980’s.


The U.S. Navy initiated cruise missile development efforts in 1972; the Air Force followed in 1973. In 1977, the Joint Cruise Missile Project was established, with the Navy taking the lead. Rear Admiral Walter M. Locke was named project manager. The goal was to develop air-, sea-, and ground-launched cruise missiles. By coordinating activities, encouraging competition, and requiring the use of common components wherever possible, the cruise missile development program became a model for future weapon-system development efforts. The primary contractors included Boeing Aerospace Company, General Dynamics, and McDonnell Douglas.

In 1978, SLCMs were first launched from submarines. Over the next few years, increasingly demanding tests were passed by several versions of cruise missiles. By the mid-1980’s, both antiship and antiland missiles were available. An antiland version could be guided to its target with extreme accuracy by comparing a map programmed into its computer to the picture taken by an on-board video camera. The typical cruise missile is between 18 and 21 feet long, about 21 inches in diameter, and has a wingspan of between 8 and 12 feet. Cruise missiles travel slightly below the speed of sound and have a range of around 1,350 miles (antiland) or 250 miles (antiship). Both conventionally armed and nuclear versions have been fielded.

Consequences

Cruise missiles have become an important part of the U.S. arsenal. They provide a means of attacking targets on land and water without having to put an aircraft pilot’s life in danger. Their value was demonstrated in 1991 during the Persian Gulf War. One of their uses was to “soften up” defenses prior to sending in aircraft, thus reducing the risk to pilots. Overall estimates are that about 85 percent of cruise missiles used in the Persian Gulf War arrived on target, which is an outstanding record. It is believed that their extreme accuracy also helped to minimize noncombatant casualties.
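The map-matching guidance described above, comparing a stored map against what the camera sees, can be illustrated with a toy example. The Python sketch below is only a schematic illustration under simplifying assumptions (tiny binary grids, a brute-force search, invented function and variable names); it is not the actual guidance algorithm used in any missile.

```python
def best_offset(map_grid, image, search=2):
    """Slide a small 'camera view' over a stored reference map and
    return the (dx, dy) offset with the fewest mismatched cells.
    This brute-force comparison only illustrates the idea of matching
    an on-board picture against a stored map."""
    h, w = len(image), len(image[0])
    best, best_dxy = None, (0, 0)
    for dy in range(search + 1):
        for dx in range(search + 1):
            mismatches = sum(
                map_grid[dy + y][dx + x] != image[y][x]
                for y in range(h) for x in range(w)
            )
            if best is None or mismatches < best:
                best, best_dxy = mismatches, (dx, dy)
    return best_dxy

# The stored map contains the camera image's pattern shifted by (1, 1).
ref = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
]
img = [[1, 1], [1, 0]]
print(best_offset(ref, img))  # -> (1, 1)
```

Finding where the camera view best fits the stored map tells the navigation system where it actually is, so the course can be corrected accordingly.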
See also Airplane; Atomic bomb; Hydrogen bomb; Rocket; Stealth aircraft; V-2 rocket.


Further Reading
Collyer, David G. Buzz Bomb. Deal, Kent, England: Kent Aviation Historical Research Society, 1994.
McDaid, Hugh, and David Oliver. Robot Warriors: The Top Secret History of the Pilotless Plane. London: Orion Media, 1997.
Macknight, Nigel. Tomahawk Cruise Missile. Osceola, Wis.: Motorbooks International, 1995.
Werrell, Kenneth P. The Evolution of the Cruise Missile. Maxwell Air Force Base, Ala.: Air University Press, 1997.


Cyclamate

The invention: An artificial sweetener introduced to the American market in 1950 under the trade name Sucaryl.

The person behind the invention:
Michael Sveda (1912-1999), an American chemist

A Foolhardy Experiment

The first synthetic sugar substitute, saccharin, was developed in 1879. It became commercially available in 1907 but was banned for safety reasons in 1912. Sugar shortages during World War I (1914-1918) resulted in its reintroduction. Two other artificial sweeteners, Dulcin and P-4000, were introduced later but were banned in 1950 for causing cancer in laboratory animals.

In 1937, Michael Sveda was a young chemist working on his Ph.D. at the University of Illinois. A flood in the Ohio valley had ruined the local pipe-tobacco crop, and Sveda, a smoker, had been forced to purchase cigarettes. One day while in the laboratory, Sveda happened to brush some loose tobacco from his lips and noticed that his fingers tasted sweet. Having a curious, if rather foolhardy, nature, Sveda tasted the chemicals on his bench to find which one was responsible for the taste. The culprit was the forerunner of cyclohexylsulfamate, the material that came to be known as “cyclamate.” Later, on reviewing his career, Sveda explained the serendipitous discovery with the comment: “God looks after . . . fools, children, and chemists.”

Sveda joined E. I. Du Pont de Nemours and Company in 1939 and assigned the patent for cyclamate to his employer. In June of 1950, after a decade of testing on animals and humans, Abbott Laboratories announced that it was launching Sveda’s artificial sweetener under the trade name Sucaryl. Du Pont followed with its own sweetener product, Cyclan. A Time magazine article in 1950 announced the new product and noted that Abbott had warned that because the product was a sodium salt, individuals with kidney problems should consult their doctors before adding it to their food.


Cyclamate had no calories, but it was thirty to forty times sweeter than sugar. Unlike saccharin, cyclamate left no unpleasant aftertaste. The additive was also found to improve the flavor of some foods, such as meat, and was used extensively to preserve various foods. By 1969, about 250 food products contained cyclamates, including cakes, puddings, canned fruit, ice cream, salad dressings, and, in its most important use, carbonated beverages.

It was originally thought that cyclamates were harmless to the human body. In 1959, the chemical was added to the GRAS (generally recognized as safe) list. Materials on this list, such as sugar, salt, pepper, and vinegar, did not have to be rigorously tested before being added to food. In 1964, however, a report cited evidence that cyclamates and saccharin, taken together, were a health hazard. Its publication alarmed the scientific community. Numerous investigations followed.

Shooting Themselves in the Foot

Initially, the claims against cyclamate had been that it caused diarrhea or prevented drugs from doing their work in the body. By 1969, these claims had begun to include the threat of cancer. Ironically, the evidence that sealed the fate of the artificial sweetener was provided by Abbott itself.

A private Long Island company had been hired by Abbott to conduct an extensive toxicity study to determine the effects of long-term exposure to the cyclamate-saccharin mixtures often found in commercial products. The team of scientists fed rats daily doses of the mixture to study the effect on reproduction, unborn fetuses, and fertility. In each case, the rats were declared to be normal. When the rats were killed at the end of the study, however, those that had been exposed to the higher doses showed evidence of bladder tumors. Abbott shared the report with investigators from the National Cancer Institute and then with the U.S. Food and Drug Administration (FDA).

The doses required to produce the tumors were equivalent to an individual drinking 350 bottles of diet cola a day. That was more than one hundred times greater than that consumed even by those people who consumed a high amount of cyclamate. A six-person


panel of scientists met to review the data and urged the ban of all cyclamates from foodstuffs. In October, 1969, amid enormous media coverage, the federal government announced that cyclamates were to be withdrawn from the market by the beginning of 1970.

In the years following the ban, the controversy continued. Doubt was cast on the results of the independent study linking sweetener use to tumors in rats, because the study was designed not to evaluate cancer risks but to explain the effects of cyclamate use over many years. Bladder parasites, known as “nematodes,” found in the rats may have affected the outcome of the tests. In addition, an impurity found in some of the saccharin used in the study may have led to the problems observed. Extensive investigations such as the three-year project conducted at the National Cancer Research Center in Heidelberg, Germany, found no basis for the widespread ban. In 1972, however, rats fed high doses of saccharin alone were found to have developed bladder tumors. At that time, the sweetener was removed from the GRAS list. An outright ban was averted by the mandatory use of labels alerting consumers that certain products contained saccharin.

Impact

The introduction of cyclamate heralded the start of a new industry. For individuals who had to restrict their sugar intake for health reasons, or for those who wished to lose weight, there was now an alternative to giving up sweet food. The Pepsi-Cola company put a new diet drink formulation on the market almost as soon as the ban was instituted. In fact, it ran advertisements the day after the ban was announced showing the Diet Pepsi product boldly proclaiming “Sugar added—No Cyclamates.”

Sveda, the discoverer of cyclamates, was not impressed with the FDA’s decision on the sweetener and its handling of subsequent investigations. He accused the FDA of “a massive cover-up of elemental blunders” and claimed that the original ban was based on sugar politics and bad science.
For the manufacturers of cyclamate, meanwhile, the problem lay with the wording of the Delaney amendment, the legislation that


regulates new food additives. The amendment states that the manufacturer must prove that its product is safe, rather than the FDA having to prove that it is unsafe. The onus was on Abbott Laboratories to deflect concerns about the safety of the product, and it remained unable to do so.

See also Aspartame; Genetically engineered insulin.

Further Reading
Kaufman, Leslie. “Michael Sveda, the Inventor of Cyclamates, Dies at Eighty-Seven.” New York Times (August 21, 1999).
Lawler, Philip F. Sweet Talk: Media Coverage of Artificial Sweeteners. Washington, D.C.: Media Institute, 1986.
Remington, Dennis W. The Bitter Truth About Artificial Sweeteners. Provo, Utah: Vitality House, 1987.
Whelan, Elizabeth M. “The Bitter Truth About a Sweetener Scare.” Wall Street Journal (August 26, 1999).


Cyclotron

The invention: The first successful magnetic resonance accelerator for protons, the cyclotron gave rise to the modern era of particle accelerators, which are used by physicists to study the structure of atoms.

The people behind the invention:
Ernest Orlando Lawrence (1901-1958), an American nuclear physicist who was awarded the 1939 Nobel Prize in Physics
M. Stanley Livingston (1905-1986), an American nuclear physicist
Niels Edlefsen (1893-1971), an American physicist
David Sloan (1905- ), an American physicist and electrical engineer

The Beginning of an Era

The invention of the cyclotron by Ernest Orlando Lawrence marks the beginning of the modern era of high-energy physics. Although the energies of newer accelerators have increased steadily, the principles incorporated in the cyclotron have been fundamental to succeeding generations of accelerators, many of which were also developed in Lawrence’s laboratory. The care and support for such machines have also given rise to “big science”: the massing of scientists, money, and machines in support of experiments to discover the nature of the atom and its constituents.

At the University of California, Lawrence took an interest in the new physics of the atomic nucleus, which had been developed by the British physicist Ernest Rutherford and his followers in England, and which was attracting more attention as the development of quantum mechanics seemed to offer solutions to problems that had long preoccupied physicists. In order to explore the nucleus of the atom, however, suitable probes were required. An artificial means of accelerating ions to high energies was also needed.

During the late 1920’s, various means of accelerating alpha particles, protons (hydrogen ions), and electrons had been tried, but


none had been successful in causing a nuclear transformation when Lawrence entered the field. The high voltages required exceeded the resources available to physicists. It was believed that more than a million volts would be required to accelerate an ion to sufficient energies to penetrate even the lightest atomic nuclei. At such voltages, insulators broke down, releasing sparks across great distances. European researchers even attempted to harness lightning to accomplish the task, with fatal results.

Early in April, 1929, Lawrence discovered an article by a German electrical engineer that described a linear accelerator of ions that worked by passing an ion through two sets of electrodes, each of which carried the same voltage and increased the energy of the ions correspondingly. By spacing the electrodes appropriately and using an alternating electrical field, this “resonance acceleration” of ions could speed subatomic particles to many times the energy applied in each step, overcoming the problems presented when one tried to apply a single charge to an ion all at once. Unfortunately, the spacing of the electrodes would have to be increased as the ions were accelerated, since they would travel farther between each alternation of the phase of the accelerating charge, making an accelerator impractically long in those days of small-scale physics.

Fast-Moving Streams of Ions

Lawrence knew that a magnetic field would cause the ions to be deflected and form a curved path. If the electrodes were placed across the diameter of the circle formed by the ions’ path, the ions should spiral out as they were accelerated, staying in phase with the accelerating charge until they reached the periphery of the magnetic field. This, it seemed to him, afforded a means of producing indefinitely high energies without using high voltages by recycling the accelerated ions through the same electrodes. Many scientists doubted that such a method would be effective.
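The reason the spiraling ions stay in phase is a standard result of elementary physics, not stated explicitly in the essay: an ion of charge q and mass m in a uniform magnetic field B circles at the frequency f = qB/(2πm), which does not depend on the ion's speed or orbit radius. The short Python sketch below illustrates this; the constants are rounded textbook values and the 1-tesla field is an illustrative assumption.

```python
import math

# Rounded physical constants (SI units)
PROTON_CHARGE = 1.602e-19   # coulombs
PROTON_MASS = 1.673e-27     # kilograms

def cyclotron_frequency(charge, mass, field_tesla):
    """Revolution frequency f = q*B / (2*pi*m) of an ion in a uniform
    magnetic field. Because f is independent of the ion's speed and
    orbit radius, the spiraling ions stay in step with a fixed-frequency
    accelerating voltage as they gain energy."""
    return charge * field_tesla / (2 * math.pi * mass)

# A proton in a 1.0-tesla field circles roughly 15 million times per second.
f = cyclotron_frequency(PROTON_CHARGE, PROTON_MASS, 1.0)
print(f"{f / 1e6:.1f} MHz")
```

Since each revolution crosses the accelerating gap twice, a fixed radio-frequency voltage at this frequency keeps adding energy on every pass, which is what lets the cyclotron reach high energies without ever applying a high voltage.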
No mechanism was known that would keep the circulating ions in sufficiently tight orbits to avoid collisions with the walls of the accelerating chamber. Others tried unsuccessfully to use resonance acceleration. A graduate student, M. Stanley Livingston, continued Lawrence's work. For his dissertation project, he used a brass cylinder 10 centimeters in diameter sealed with wax to hold a vacuum, a half-pillbox of copper mounted on an insulated stem to serve as the electrode, and a Hartley radio frequency oscillator producing 10 watts. The hydrogen molecular ions were produced by a thermionic cathode (mounted near the center of the apparatus) from hydrogen gas admitted through an aperture in the side of the cylinder after a vacuum had been produced by a pump. Once the ions formed, the oscillating electrical field drew them out and accelerated them as they passed through the cylinder. The accelerated ions spiraled out in a magnetic field produced by a 10-centimeter electromagnet to a collector. By November, 1930, Livingston had observed peaks in the collector current as he tuned the magnetic field through the value calculated to produce acceleration. Borrowing a stronger magnet and tuning his radio frequency oscillator appropriately, Livingston produced 80,000-electronvolt ions at his collector on January 2, 1931, thus demonstrating the principle of magnetic resonance acceleration.

Ernest Orlando Lawrence

A man of great energy and gusty temper, Ernest Orlando Lawrence danced for joy when one of his cyclotrons accelerated a particle to more than one million electron volts. That amount of energy was important, according to contemporary theorists, because it was enough to penetrate the nucleus of a target atom. For giving physicists a tool with which to examine the subatomic realm, Lawrence received the 1939 Nobel Prize in Physics, among many other honors.

The grandson of Norwegian immigrants, Lawrence was born in Canton, South Dakota, in 1901. After high school, he went to St. Olaf College, the University of South Dakota, the University of Minnesota, and Yale University, where he completed a doctorate in physics in 1925. After postgraduate fellowships at Yale, he became a professor at the University of California, Berkeley, the youngest on campus. In 1936 the university made him director of its radiation laboratory. Now named the Lawrence Berkeley National Laboratory, it has stayed at the forefront of physics and high-technology research ever since.

Before World War II Lawrence and his brother, Dr. John Lawrence, also at the university, worked together to find practical biological and medical applications for the radioisotopes made in Lawrence's particle accelerators. During the war Lawrence participated in the Manhattan Project, which made the atomic bomb. He was a passionate anticommunist and after the war argued before Congress for funds to develop death rays and radiation bombs from research with his cyclotrons; however, he was also an American delegate to the Geneva Conference in 1958, which sought a ban on atomic bomb tests.

Lawrence helped solve the mystery of cosmic particles, invented a method for measuring ultra-small time intervals, and calculated with high precision the ratio of the charge of an electron to its mass, a fundamental constant of nature. Lawrence died in 1958 in Palo Alto, California.

Impact

Demonstration of the principle led to the construction of a succession of large cyclotrons, beginning with a 25-centimeter cyclotron developed in the spring and summer of 1931 that produced one-million-electronvolt protons. With the support of the Research Corporation, Lawrence secured a large electromagnet that had been developed for radio transmission and an unused laboratory to house it: the Radiation Laboratory. The 69-centimeter cyclotron built with the magnet was used to explore nuclear physics. It accelerated deuterons, ions of heavy hydrogen (deuterium) that contain, in addition to the proton, the neutron, which was discovered by Sir James Chadwick in 1932. The accelerated deuteron, which injected neutrons into target atoms, was used to produce a wide variety of artificial radioisotopes. Many of these, such as technetium and carbon 14, were discovered with the cyclotron and were later used in medicine. By 1939, Lawrence had built a 152-centimeter cyclotron for medical applications, including therapy with neutron beams. In that year, he won the Nobel Prize in Physics for the invention of the cyclotron and the production of radioisotopes.
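The energies quoted above follow from two simple cyclotron relations, which a few lines of Python can check: the orbital frequency of a nonrelativistic ion, f = qB/2πm, does not depend on the ion's speed or orbit radius, which is why a fixed-frequency oscillator stays in resonance with the spiraling particle; and the kinetic energy when the orbit has grown to radius r is E = (qBr)²/2m. The field strength and radius used below are illustrative values, not measurements from Lawrence's machines.

```python
import math

Q = 1.602e-19   # proton charge, coulombs
M = 1.673e-27   # proton mass, kilograms

def cyclotron_frequency(b_field):
    """Orbital frequency (Hz) of a nonrelativistic proton: f = qB / (2*pi*m)."""
    return Q * b_field / (2 * math.pi * M)

def orbit_radius(speed, b_field):
    """Radius (m) of the circular path: r = m*v / (q*B)."""
    return M * speed / (Q * b_field)

def extraction_energy_ev(b_field, radius):
    """Kinetic energy (eV) of a proton whose orbit has grown to `radius`:
    E = (q*B*r)**2 / (2*m), converted from joules to electron volts."""
    return (Q * b_field * radius) ** 2 / (2 * M) / Q

B = 1.0  # tesla, an illustrative field strength

# The resonance condition: frequency is the same for slow and fast protons,
# even though the fast proton circles on a much larger radius.
f = cyclotron_frequency(B)              # about 15 MHz at 1 tesla
r_slow = orbit_radius(1.0e6, B)         # orbit of a slower proton
r_fast = orbit_radius(1.0e7, B)         # ten times the speed, ten times the radius
```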
During World War II, Lawrence and the members of his Radiation Laboratory developed electromagnetic separation of uranium ions to produce the uranium 235 required for the atomic bomb. After the war, the 467-centimeter cyclotron was completed as a synchrocyclotron, which modulated the frequency of the accelerating fields to compensate for the increasing mass of ions as they approached the speed of light. The principle of synchronous acceleration, invented by Lawrence's associate, the American physicist Edwin Mattison McMillan, became fundamental to proton and electron synchrotrons.

The cyclotron and the Radiation Laboratory were the center of accelerator physics throughout the 1930's and well into the postwar era. The invention of the cyclotron not only provided a new tool for probing the nucleus but also gave rise to new forms of organizing scientific work and to applications in nuclear medicine and nuclear chemistry. Cyclotrons were built in many laboratories in the United States, Europe, and Japan, and they became a standard tool of nuclear physics.

See also Atomic bomb; Electron microscope; Field ion microscope; Geiger counter; Hydrogen bomb; Mass spectrograph; Neutrino detector; Scanning tunneling microscope; Synchrocyclotron; Tevatron accelerator.

Further Reading

Childs, Herbert. An American Genius: The Life of Ernest Orlando Lawrence. New York: Dutton, 1968.
Close, F. E., Michael Marten, and Christine Sutton. The Particle Explosion. New York: Oxford University Press, 1994.
Pais, Abraham. Inward Bound: Of Matter and Forces in the Physical World. New York: Clarendon Press, 1988.
Wilson, Elizabeth K. "Fifty Years of Heavy Chemistry." Chemical and Engineering News 78, no. 13 (March 27, 2000).


Diesel locomotive

The invention: An internal combustion engine in which ignition is achieved by the use of high-temperature compressed air, rather than a spark plug.

The people behind the invention:
Rudolf Diesel (1858-1913), a German engineer and inventor
Sir Dugald Clerk (1854-1932), a British engineer
Gottlieb Daimler (1834-1900), a German engineer
Henry Ford (1863-1947), an American automobile magnate
Nikolaus Otto (1832-1891), a German engineer and Daimler's teacher

A Beginning in Winterthur

By the beginning of the twentieth century, new means of providing society with power were needed. The steam engines that were used to run factories and railways were no longer sufficient, since they were too heavy and inefficient. At that time, Rudolf Diesel, a German mechanical engineer, invented a new engine. His diesel engine was much more efficient than previous power sources. It also appeared that it would be able to run on a wide variety of fuels, ranging from oil to coal dust. Diesel first showed that his engine was practical by building a diesel-driven locomotive that was tested in 1912.

In the 1912 test runs, the first diesel-powered locomotive was operated on the track of the Winterthur-Romanshorn rail line in Switzerland. The locomotive was built by a German company, Gesellschaft für Thermo-Lokomotiven, which was owned by Diesel and his colleagues. Immediately after the test runs at Winterthur proved its efficiency, the locomotive—which had been designed to pull express trains on Germany's Berlin-Magdeburg rail line—was moved to Berlin and put into service. It worked so well that many additional diesel locomotives were built. In time, diesel engines were also widely used to power many other machines, including those that ran factories, motor vehicles, and ships.


Rudolf Diesel

Unbending, suspicious of others, but also exceptionally intelligent, Rudolf Christian Karl Diesel led a troubled life and came to a mysterious end. His parents, expatriate Germans, lived in Paris when he was born in 1858, and he spent his early childhood there. In 1870, just as he was starting his formal education, his family fled to England on the outbreak of the Franco-Prussian War, which turned the French against Germans. In England, Diesel spent much of his spare time in museums, educating himself. His father, a leather craftsman, was unable to support his family, so as a teenager Diesel was packed off to Augsburg, Germany, where he was largely on his own. Although these experiences made him fluent in English, French, and German, his was not a stable or happy childhood.

He threw himself into his studies, finishing his high school education three years ahead of schedule, and entered the Technical College of Munich, where he was the star student. Once, during his school years, he saw a demonstration of a Chinese firestick. The firestick was a tube with a plunger. When a small piece of flammable material was put in one end and the plunger pushed down rapidly toward it, the heat of the compressed air in the tube ignited the material. The demonstration later inspired Diesel to adapt the principle to an engine.

His was the first engine to run successfully with compressed-air ignition, but it was not the first design. So although he received the patent for the diesel engine, he had to fight challenges in court from other inventors over licensing rights. He always won, but the strain of litigation worsened his tendency to stubborn self-reliance, and this led him into difficulties. The first compression engines were unreliable and unwieldy, but Diesel rebuffed all suggestions for modifications, requiring that builders follow his original design. His attitude led to delays in development of the engine and lost him financial support.
In 1913, while crossing the English Channel aboard a ship, Diesel disappeared. His body was never found, and although the authorities concluded that Diesel committed suicide, no one knows what happened.


Diesels, Diesels Everywhere

In the 1890's, the best engines available were steam engines that were able to convert only 5 to 10 percent of input heat energy to useful work. The burgeoning industrial society and a widespread network of railroads needed better, more efficient engines to help businesses make profits and to speed up the rate of transportation available for moving both goods and people, since the maximum speed was only about 48 kilometers per hour.

In 1894, Rudolf Diesel, then thirty-five years old, appeared in Augsburg, Germany, with a new engine that he believed would demonstrate great efficiency. The diesel engine demonstrated at Augsburg ran for only a short time. It was, however, more efficient than other existing engines. In addition, Diesel predicted that his engines would move trains faster than could be done by existing engines and that they would run on a wide variety of fuels. Experimentation proved the truth of his claims; even the first working motive diesel engine (the one used in the Winterthur test) was capable of pulling heavy freight and passenger trains at maximum speeds of up to 160 kilometers per hour.

By 1912, Diesel, a millionaire, saw the wide use of diesel locomotives in Europe and the United States and the conversion of hundreds of ships to diesel power. Rudolf Diesel's role in the story ends here, a result of his mysterious death in 1913—believed to be a suicide by the authorities—while crossing the English Channel on the steamer Dresden. Others involved in the continuing saga of diesel engines were the Britisher Sir Dugald Clerk, who improved diesel design, and the American Adolphus Busch (of beer-brewing fame), who bought the North American rights to the diesel engine.

The diesel engine is related to automobile engines invented by Nikolaus Otto and Gottlieb Daimler. The standard Otto-Daimler (or Otto) engine was first widely commercialized by American auto magnate Henry Ford.
The diesel and Otto engines are internal combustion engines. This means that they do work when a fuel is burned and causes a piston to move in a tight-fitting cylinder. In diesel engines, unlike Otto engines, the fuel is not ignited by a spark from a spark plug. Instead, ignition is accomplished by the use of high-temperature compressed air.
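The temperature reached by the compressed air can be estimated from the ideal-gas relation for adiabatic compression, T2 = T1 × r^(γ−1), where r is the compression ratio and γ ≈ 1.4 for air. The sketch below uses an illustrative diesel-like ratio of 16:1, not a figure from the article; real engines run somewhat cooler because heat leaks into the cylinder walls.

```python
GAMMA = 1.4  # ratio of specific heats for air

def compression_temp_k(t_intake_k, ratio):
    """Ideal adiabatic end-of-compression temperature: T2 = T1 * r**(gamma - 1)."""
    return t_intake_k * ratio ** (GAMMA - 1)

def kelvin_to_fahrenheit(t_k):
    """Convert kelvins to degrees Fahrenheit."""
    return (t_k - 273.15) * 9 / 5 + 32

# Air at roughly room temperature (300 K) compressed 16:1, an illustrative
# diesel-like ratio, reaches about 909 K (around 1,180 degrees Fahrenheit),
# hot enough to ignite sprayed fuel oil without any spark plug.
t2 = compression_temp_k(300.0, 16.0)
t2_f = kelvin_to_fahrenheit(t2)
```

By contrast, an Otto-engine-like ratio of 8:1 yields only about 690 K under the same assumptions, which is why a spark is needed there.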


[Figure: The four strokes of a diesel engine (intake, compression, power, exhaust). (Robert Bosch Corporation)]
In common "two-stroke" diesel engines, pioneered by Sir Dugald Clerk, a starter causes the engine to make its first stroke. This draws in air and compresses it sufficiently to raise its temperature to 900 to 1,000 degrees Fahrenheit. At this point, fuel (usually oil) is sprayed into the cylinder, ignites, and causes the piston to make its second, power-producing stroke. At the end of that stroke, more air enters as waste gases leave the cylinder; air compression occurs again; and the power-producing stroke repeats itself. This process then occurs continuously, without restarting.

Impact

Proof of the functionality of the first diesel locomotive set the stage for the use of diesel engines to power many machines. Although Rudolf Diesel did not live to see it, diesel engines were widely used within fifteen years after his death. At first, their main applications were in locomotives and ships. Then, because diesel engines are more efficient and more powerful than Otto engines, they were modified for use in cars, trucks, and buses. At present, motor vehicle diesel engines are most often used in buses and long-haul trucks. In contrast, diesel engines are not as popular in automobiles as Otto engines, although European automakers make much wider use of diesel engines than American automakers do. Many enthusiasts, however, view diesel automobiles as the wave of the future. This optimism is based on the durability of the engine, its great power, and the wide range and economical nature of the fuels that can be used to run it. The drawbacks of diesels include the unpleasant odor and high pollutant content of their emissions.

Modern diesel engines are widely used in farm and earth-moving equipment, including balers, threshers, harvesters, bulldozers, rock crushers, and road graders. Construction of the Alaskan oil pipeline relied heavily on equipment driven by diesel engines. Diesel engines are also commonly used in sawmills, breweries, coal mines, and electric power plants. Diesel's brainchild has become a widely used power source, just as he predicted. It is likely that the use of diesel engines will continue and will expand, as the demands of energy conservation require more efficient engines and as moves toward fuel diversification require engines that can be used with various fuels.

See also Bullet train; Gas-electric car; Internal combustion engine.

Further Reading

Cummins, C. Lyle. Diesel's Engine. Wilsonville, Oreg.: Carnot Press, 1993.
Diesel, Eugen. From Engines to Autos: Five Pioneers in Engine Development and Their Contributions to the Automotive Industry. Chicago: H. Regnery, 1960.
Nitske, Robert W., and Charles Morrow Wilson. Rudolf Diesel: Pioneer of the Age of Power. Norman: University of Oklahoma Press, 1965.


Differential analyzer

The invention: An electromechanical device capable of solving differential equations.

The people behind the invention:
Vannevar Bush (1890-1974), an American electrical engineer
Harold L. Hazen (1901-1980), an American electrical engineer

Electrical Engineering Problems Become More Complex

After World War I, electrical engineers encountered increasingly difficult differential equations as they worked on vacuum-tube circuitry, telephone lines, and, particularly, long-distance power transmission lines. These calculations were lengthy and tedious. Two of the many steps required to solve them were to draw a graph manually and then to determine the area under the curve (essentially, accomplishing the mathematical procedure called integration).

In 1925, Vannevar Bush, a faculty member in the Electrical Engineering Department at the Massachusetts Institute of Technology (MIT), suggested that one of his graduate students devise a machine to determine the area under the curve. They first considered a mechanical device but later decided to seek an electrical solution. Realizing that a watt-hour meter such as that used to measure electricity in most homes was very similar to the device they needed, Bush and his student refined the meter and linked it to a pen that automatically recorded the curve. They called this machine the Product Integraph, and MIT students began using it immediately. In 1927, Harold L. Hazen, another MIT faculty member, modified it in order to solve the more complex second-order differential equations (it originally solved only first-order equations).

The Differential Analyzer

The original Product Integraph had solved problems electrically, and Hazen's modification had added a mechanical integrator. Although the revised Product Integraph was useful in solving the types of problems mentioned above, Bush thought the machine could be improved by making it an entirely mechanical integrator, rather than a hybrid electrical and mechanical device.

In late 1928, Bush received funding from MIT to develop an entirely mechanical integrator, and he completed the resulting Differential Analyzer in 1930. This machine consisted of numerous interconnected shafts on a long, tablelike framework, with drawing boards flanking one side and six wheel-and-disk integrators on the other. Some of the drawing boards were configured to allow an operator to trace a curve with a pen that was linked to the Analyzer, thus providing input to the machine. The other drawing boards were configured to receive output from the Analyzer via a pen that drew a curve on paper fastened to the drawing board.

The wheel-and-disk integrator, which Hazen had first used in the revised Product Integraph, was the key to the operation of the Differential Analyzer. The rotational speed of the horizontal disk was the input to the integrator, and it represented one of the variables in the equation. The smaller wheel rolled on the top surface of the disk, and its speed, which was different from that of the disk, represented the integrator's output. The distance from the wheel to the center of the disk could be changed to accommodate the equation being solved, and the resulting geometry caused the two shafts to turn so that the output was the integral of the input. The integrators were linked mechanically to other devices that could add, subtract, multiply, and divide. Thus, the Differential Analyzer could solve complex equations involving many different mathematical operations. Because all the linkages and calculating devices were mechanical, the Differential Analyzer actually acted out each calculation. Computers of this type, which create an analogy to the physical world, are called analog computers.
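The way interconnected integrators act out a differential equation can be sketched in a few lines of simulation. The example below is not a model of Bush's actual hardware; it simply chains two numerical integrators in a feedback loop to solve y″ = −y, the way the Analyzer's shafts would, producing a cosine curve. Each "accumulate input × dt" step plays the role of one wheel-and-disk unit.

```python
import math

def solve_harmonic(steps=100_000):
    """Integrate y'' = -y from t = 0 to t = pi with two chained integrators,
    mimicking the feedback wiring of a differential analyzer."""
    dt = math.pi / steps
    y, y_prime = 1.0, 0.0                 # initial conditions: y(0) = 1, y'(0) = 0
    for _ in range(steps):
        y_double_prime = -y               # the "adder" that forms the equation
        y_prime += y_double_prime * dt    # first integrator: y'' -> y'
        y += y_prime * dt                 # second integrator: y' -> y
    return y                              # approximates cos(pi) = -1
```

Feeding each integrator's output back into the next is exactly the trick the Analyzer performed mechanically: the machine's shafts, not a programmer, carried the intermediate results.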
The Differential Analyzer fulfilled Bush's expectations, and students and researchers found it very useful. Although each different problem required Bush's team to set up a new series of mechanical linkages, the researchers using the calculations viewed this as a minor inconvenience. Students at MIT used the Differential Analyzer in research for doctoral dissertations, master's theses, and bachelor's theses. Other researchers worked on a wide range of problems with the Differential Analyzer, mostly in electrical engineering, but also in atomic physics, astrophysics, and seismology. An English researcher, Douglas Hartree, visited Bush's laboratory in 1933 to learn about the Differential Analyzer and to use it in his own work on the atomic field of mercury. When he returned to England, he built several analyzers based on his knowledge of MIT's machine. The U.S. Army also built a copy in order to carry out the complex calculations required to create artillery firing tables (which specified the proper barrel angle to achieve the desired range). Other analyzers were built by industry and universities around the world.

Vannevar Bush

One of the most politically powerful scientists of the twentieth century, Vannevar Bush was born in 1890 in Everett, Massachusetts. He studied at Tufts College in Boston, not only earning two degrees in engineering but also registering his first patent while still an undergraduate. He worked for General Electric Company briefly after college and then conducted research on submarine detection for the U.S. Navy during World War I. After the war he became a professor of electrical power transmission (and later dean of the engineering school) at the Massachusetts Institute of Technology (MIT). He also acted as a consultant for industry and started companies of his own, including (with two others) Raytheon Corporation. While at MIT he developed the Product Integraph and Differential Analyzer to aid in solving problems related to electrical power transmission.

Starting in 1939, Bush became a key science administrator. He was president of the Carnegie Institution of Washington from 1939 until 1955, chaired the National Advisory Committee for Aeronautics from 1939 until 1941, in 1940 was appointed chairman of the President's National Defense Research Committee, and from 1941 until 1946 was director of the Office of Scientific Research and Development. This meant he was President Franklin Roosevelt's science adviser during World War II and oversaw wartime military research, including involvement in the Manhattan Project that built the first atomic bombs. After the war he worked for peaceful applications of atomic power and was instrumental in inaugurating the National Science Foundation in 1950. Between 1957 and 1959 he served as chairman of the MIT Corporation, retaining an honorary chairmanship thereafter. All these political and administrative roles meant he exercised enormous influence in deciding which scientific projects were supported financially.

Having received many honorary degrees and awards, including the National Medal of Science (1964), Bush died in 1974.

Impact

As successful as the Differential Analyzer had been, Bush wanted to make another, better analyzer that would be more precise, more convenient to use, and more mathematically flexible. In 1932, Bush began seeking money for his new machine, but because of the Depression it was not until 1936 that he received adequate funding for the Rockefeller Analyzer, as it came to be known. Bush left MIT in 1938, but work on the Rockefeller Analyzer continued. It was first demonstrated in 1941, and by 1942, it was being used in the war effort to calculate firing tables and design radar antenna profiles. At the end of the war, it was the most important computer in existence.

All the analyzers, which were mechanical computers, faced serious limitations in speed because of the momentum of the machinery, and in precision because of slippage and wear. The digital computers that were being developed after World War II (even at MIT) were faster, more precise, and capable of executing more powerful operations because they were electrical computers. As a result, during the 1950's, they eclipsed differential analyzers such as those built by Bush. Descendants of the Differential Analyzer remained in use as late as the 1990's, but they played only a minor role.
See also Colossus computer; ENIAC computer; Mark I calculator; Personal computer; SAINT; UNIVAC computer.


Further Reading

Bush, Vannevar. Pieces of the Action. New York: Morrow, 1970.
Marcus, Alan I., and Howard P. Segal. Technology in America. Fort Worth, Tex.: Harcourt Brace College, 1999.
Spencer, Donald D. Great Men and Women of Computing. Ormond Beach, Fla.: Camelot Publishing, 1999.
Zachary, G. Pascal. Endless Frontier: Vannevar Bush, Engineer of the American Century. Cambridge, Mass.: MIT Press, 1999.


Dirigible

The invention: A rigid lighter-than-air aircraft that played a major role in World War I and in international air traffic until a disastrous accident destroyed the industry.

The people behind the invention:
Ferdinand von Zeppelin (1838-1917), a retired German general
Theodor Kober (1865-1930), Zeppelin's private engineer

Early Competition

When the Montgolfier brothers launched the first hot-air balloon in 1783, engineers—especially those in France—began working on ways to use machines to control the speed and direction of balloons. They thought of everything: rowing through the air with silk-covered oars; building movable wings; using a rotating fan, an airscrew, or a propeller powered by a steam engine (1852) or an electric motor (1882). At the end of the nineteenth century, the internal combustion engine was invented. It promised higher speeds and more power. Up to this point, however, the balloons were not rigid. A rigid airship could be much larger than a balloon and could fly farther.

In 1890, a rigid airship designed by David Schwarz of Dalmatia was tested in St. Petersburg, Russia. The test failed because there were problems with inflating the dirigible. A second test, in Berlin in 1897, was only slightly more successful, since the hull leaked and the flight ended in a crash. Schwarz's airship was made of an entirely rigid aluminum cylinder. Ferdinand von Zeppelin had a different idea: His design was based on a rigid frame.

Zeppelin knew about balloons from having fought in two wars in which they were used: the American Civil War of 1861-1865 and the Franco-Prussian War of 1870-1871. He wrote down his first "thoughts about an airship" in his diary on March 25, 1874, inspired by an article about flying and international mail. Zeppelin soon lost interest in this idea of civilian uses for an airship and concentrated instead on the idea that dirigible balloons might become an important part of modern warfare. He asked the German government to fund his research, pointing out that France had a better military air force than Germany did. Zeppelin's patriotism was what kept him trying, in spite of money problems and technical difficulties.

In 1893, in order to get more money, Zeppelin tried to persuade the German military and engineering experts that his invention was practical. Even though a government committee decided that his work was worth a small amount of funding, the army was not sure that Zeppelin's dirigible was worth the cost. Finally, the committee chose Schwarz's design. In 1896, however, Zeppelin won the support of the powerful Union of German Engineers, which in May, 1898, gave him 800,000 marks to form a stock company called the Association for the Promotion of Airship Flights. In 1899, Zeppelin began building his dirigible in Manzell at Lake Constance. In July, 1900, the airship was finished and ready for its first test flight.

Several Attempts

Zeppelin, together with his engineer, Theodor Kober, had worked on the design since May, 1892, shortly after Zeppelin's retirement from the army. They had finished the rough draft by 1894, and though they made some changes later, this was the basic design of the Zeppelin. An improved version was patented in December, 1897.

In the final prototype, called the LZ 1, the engineers tried to make the airship as light as possible. They used a light internal combustion engine and designed a frame made of the light metal aluminum. The airship was 128 meters long and had a diameter of 11.7 meters when inflated. Twenty-four zinc-aluminum girders ran the length of the ship, being drawn together at each end. Sixteen rings held the body together. The engineers stretched an envelope of smooth cotton over the framework to reduce wind resistance and to protect the gas bags from the sun's rays. Seventeen gas bags made of rubberized cloth were placed inside the framework. Together they held more than 11,000 cubic meters of hydrogen gas, which would lift 11,090 kilograms. Two motor gondolas were attached to the sides, each with a 16-horsepower gasoline engine, spinning four propellers.
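Hydrogen's lifting ability follows from Archimedes' principle: the gross lift equals the gas volume times the difference between the densities of air and hydrogen. The sketch below uses rounded sea-level densities and an example volume of about 11,300 cubic meters (roughly the LZ 1's capacity, an assumption for illustration); these figures are not drawn from Zeppelin's own records, and the useful lift of a real ship is lower once structure weight, gas impurity, and incomplete inflation are accounted for.

```python
AIR_DENSITY = 1.23       # kg per cubic meter at sea level (approximate)
HYDROGEN_DENSITY = 0.09  # kg per cubic meter at sea level (approximate)

def gross_lift_kg(gas_volume_m3):
    """Buoyant lift of a hydrogen-filled envelope, before subtracting
    the weight of the frame, envelope, engines, and crew."""
    return gas_volume_m3 * (AIR_DENSITY - HYDROGEN_DENSITY)

# Roughly 11,300 cubic meters of hydrogen gives on the order of
# 12,900 kg of gross lift, each cubic meter contributing about 1.1 kg.
lift = gross_lift_kg(11_300)
```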


Count Ferdinand von Zeppelin

The Zeppelin, the first lighter-than-air craft that was powered and steerable, began as a retirement project. Count Ferdinand von Zeppelin was born near Lake Constance in southern Germany in 1838 and grew up in a family long used to aristocratic privilege and government service. After studying engineering at the University of Tübingen, he was commissioned as a lieutenant of engineers.

In 1863 he traveled to the United States and, armed with a letter of introduction from President Abraham Lincoln, toured the Union emplacements. The observation balloons then used to see behind enemy lines impressed him. He learned all he could about them and even flew up in one to seven hundred feet. His enthusiasm for airships stayed with him throughout his career, but he was not really able to apply himself to the problem until he retired (as a brigadier general) in 1890. Then he concentrated on the struggle to line up financing and attract talented help. He found investors for 90 percent of the money he needed and got the rest from his wife's inheritance.

The first LZ's (Luftschiff Zeppelin) had troubles, but setbacks did not stop him. He was a stubborn, determined man. By the time he died in 1917 near Berlin he had seen ninety-two airships built. And because his design was so thoroughly associated with lighter-than-air vessels in the mind of the German public, they have ever after been known as zeppelins. However, he had already recognized their vulnerability as military aircraft, his main interest, and so he had turned his attention to designs for large airplanes as bombers.

The test flight did not go well. The two main questions—whether the craft was strong enough and fast enough—could not be answered because little things kept going wrong; for example, a crankshaft broke and a rudder jammed. The first flight lasted no more than eighteen minutes, with a maximum speed of 13.7 kilometers per hour. During all three test flights, the airship was in the air for a total of only two hours, going no faster than 28.2 kilometers per hour.

Zeppelin had to drop the project for some years because he ran out of money, and his company was dissolved. The LZ 1 was wrecked in the spring of 1901. A second airship was tested in November, 1905, and January, 1906. Both tests were unsuccessful, and in the end the ship was destroyed during a storm. By 1906, however, the German government was convinced of the military usefulness of the airship, though it would not give money to Zeppelin unless he agreed to design one that could stay in the air for at least twenty-four hours. The third Zeppelin failed this test in the autumn of 1907. Finally, in the summer of 1908, the LZ 4 not only proved itself to the military but also attracted great publicity. It flew for more than twenty-four hours and reached a speed of more than 60 kilometers per hour. Caught in a storm at the end of this flight, the airship was forced to land and exploded, but money came from all over Germany to build another.

Impact

Most rigid airships were designed and flown in Germany. Of the 161 that were built between 1900 and 1938, 139 were made in Germany, and 119 were based on the Zeppelin design. More than 80 percent of the airships were built for the military. The Germans used more than one hundred for gathering information and for bombing during World War I (1914-1918). Starting in May, 1915, airships bombed Warsaw, Poland; Bucharest, Romania; Salonika, Greece; and London, England. This was mostly a fear tactic, since the attacks did not cause great damage, and the English antiaircraft defense improved quickly. By 1916, the German army had lost so many airships that it stopped using them, though the navy continued.

Airships were first used for passenger flights in 1910. By 1914, the Delag (German Aeronautic Stock Company) used seven passenger airships for sightseeing trips around German cities. There were still problems with engine power and weather forecasting, and it was difficult to move the airships on the ground.
After World War I, the Zeppelins that were left were given to the Allies as payment, and the Germans were not allowed to build airships for their own use until 1925. In the 1920’s and 1930’s, it became cheaper to use airplanes for short flights, so airships were useful mostly for long-distance flight. A British airship made the first transatlantic flight in 1919. The British hoped to connect their empire by means of airships starting in 1924, but the 1930 crash of the R-101, in which most of the leading English aeronauts were killed, brought that hope to an end. The United States Navy built the Akron (1931) and the Macon (1933) for long-range naval reconnaissance, but both airships crashed.

Only the Germans continued to use airships on a regular basis. In 1929, the world tour of the Graf Zeppelin was a success. Regular flights between Germany and South America started in 1932, and in 1936, German airships bearing Nazi swastikas flew to Lakehurst, New Jersey. The tragic explosion of the hydrogen-filled Hindenburg in 1937, however, brought the era of the rigid airship to a close. The U.S. secretary of the interior vetoed the sale of nonflammable helium, fearing that the Nazis would use it for military purposes, and the German government had to stop transatlantic flights for safety reasons. In 1940, the last two remaining rigid airships were destroyed.

See also Airplane; Gyrocompass; Stealth aircraft; Supersonic passenger plane; Turbojet.

Further Reading
Brooks, Peter. Zeppelin: Rigid Airships, 1893-1940. London: Putnam, 1992.
Chant, Christopher. The Zeppelin: The History of German Airships from 1900-1937. New York: Barnes and Noble Books, 2000.
Griehl, Manfred, and Joachim Dressel. Zeppelin! The German Airship Story. New York: Sterling Publishing, 1990.
Syon, Guillaume de. Zeppelin!: Germany and the Airship, 1900-1939. Baltimore: Johns Hopkins University Press, 2001.


Disposable razor

The invention: An inexpensive shaving blade that replaced the traditional straight-edged razor and transformed shaving razors into a frequent household purchase item.

The people behind the invention:
King Camp Gillette (1855-1932), inventor of the disposable razor
Steven Porter, the machinist who created the first three disposable razors for King Camp Gillette
William Emery Nickerson (1853-1930), an expert machine inventor who created the machines necessary for mass production
Jacob Heilborn, an industrial promoter who helped Gillette start his company and became a partner
Edward J. Stewart, a friend and financial backer of Gillette
Henry Sachs, an investor in the Gillette Safety Razor Company
John Joyce, an investor in the Gillette Safety Razor Company
William Painter (1838-1906), an inventor who inspired Gillette
George Gillette, an inventor, King Camp Gillette’s father

A Neater Way to Shave

In 1895, King Camp Gillette thought of the idea of a disposable razor blade. Gillette spent years drawing different models, and finally Steven Porter, a machinist and Gillette’s associate, created from those drawings the first three disposable razors that worked. Gillette soon founded the Gillette Safety Razor Company, which became the leading seller of disposable razor blades in the United States.

George Gillette, King Camp Gillette’s father, had been a newspaper editor, a patent agent, and an inventor. He never invented a very successful product, but he loved to experiment. He encouraged all of his sons to figure out how things work and how to improve on them. King was always inventing something new and had many patents, but he was unsuccessful in turning them into profitable businesses.

Gillette worked as a traveling salesperson for Crown Cork and


Seal Company. William Painter, one of Gillette’s friends and the inventor of the crown cork, presented Gillette with a formula for making a fortune: Invent something that would constantly need to be replaced. Painter’s crown cork was used to cap beer and soda bottles. It was a tin cap covered with cork, used to form a tight seal over a bottle. Soda and beer companies could use a crown cork only once and needed a steady supply.

King took Painter’s advice and began thinking of everyday items that needed to be replaced often. After owning a Star safety razor for some time, King realized that the razor blade had not been improved for a long time. He studied all the razors on the market and found that both the common straight razor and the safety razor featured a heavy V-shaped piece of steel, sharpened on one side. King reasoned that a thin piece of steel sharpened on both sides would create a better shave and could be thrown away once it became dull. The idea of the disposable razor had been born.

Gillette made several drawings of disposable razors. He then made a wooden model of the razor to better explain his idea. Gillette’s first attempt to construct a working model was unsuccessful, as the steel was too flimsy. Steven Porter, a Boston machinist, decided to try to make Gillette’s razor from his drawings. He produced three razors, and in the summer of 1899 King was the first man to shave with a disposable razor.

Changing Consumer Opinion

In the early 1900’s, most people considered a razor to be a once-in-a-lifetime purchase. Many fathers handed down their razors to their sons. Straight razors needed constant and careful attention to keep them sharp. The thought of throwing a razor in the garbage after several uses was contrary to the general public’s idea of a razor. If Gillette’s razor had not provided a much less painful and faster shave, it is unlikely that the disposable would have been a success.
Even with its advantages, public opinion against the product was still difficult to overcome. Financing a company to produce the razor proved to be a major obstacle. King did not have the money himself, and potential investors were skeptical. Skepticism arose both because of public perceptions of the product and because of its manufacturing process. Mass production appeared to be impossible, but the disposable razor would never be profitable if produced using the methods used to manufacture its predecessor. William Emery Nickerson, an expert machine inventor, had looked at Gillette’s razor and said it was impossible to create a machine to produce it. He was convinced to reexamine the idea and finally created a machine that would create a workable blade. In the process, Nickerson changed Gillette’s original model. He improved the handle and frame so that it would better support the thin steel blade.

In the meantime, Gillette was busy getting his patent assigned to the newly formed American Safety Razor Company, owned by Gillette, Jacob Heilborn, Edward J. Stewart, and Nickerson. Gillette owned considerably more shares than anyone else. Henry Sachs provided additional capital, buying shares from Gillette. The stockholders decided to rename the company the Gillette Safety Razor Company. It soon spent most of its money on machinery and lacked the capital it needed to produce and advertise its product. The only offer the company had received was from a group of New York investors who were willing to give $125,000 in exchange for 51 percent of the company. None of the directors wanted to lose control of the company, so they rejected the offer.

John Joyce, a friend of Gillette, rescued the financially insecure new company. He agreed to buy $100,000 worth of bonds from the company for sixty cents on the dollar, purchasing the bonds gradually as the company needed money. He also received an equivalent amount of company stock. After an investment of $30,000, Joyce had the option of backing out. This deal enabled the company to start manufacturing and advertising.

Impact

The company used $18,000 to perfect the machinery to produce the disposable razor blades and razors. Originally the directors wanted to sell each razor with twenty blades for three dollars.
Joyce insisted on a price of five dollars. In 1903, five dollars was about one-third of the average American’s weekly salary, and a high-quality straight razor could be purchased for about half that price.


The other directors were skeptical, but Joyce threatened to buy up all the razors for three dollars and sell them himself for five dollars. Joyce had the financial backing to make this promise good, so the directors agreed to the higher price.

The Gillette Safety Razor Company contracted with Townsend & Hunt for exclusive sales. The contract stated that Townsend & Hunt would buy 50,000 razors with twenty blades each during a period of slightly more than a year and would purchase 100,000 sets per year for the following four years. The first advertisement for the product appeared in System Magazine in early fall of 1903, offering the razors by mail order. By the end of 1903, only fifty-one razors had been sold.

Since Gillette and most of the directors of the company were not salaried, Gillette had needed to keep his job as salesman with Crown Cork and Seal. At the end of 1903, he received a promotion that meant relocation from Boston to London. Gillette did not want to go and pleaded with the other directors, but they insisted that the company could not afford to put him on salary. The company decided to reduce the number of blades in a set from twenty to twelve in an effort to increase profits without noticeably raising the cost of a set. Gillette resigned the title of company president and left for England. Shortly thereafter, Townsend & Hunt changed its name to the Gillette Sales Company, and three years later the sales company sold out to the parent company for $300,000.

Sales of the new type of razor were increasing rapidly in the United States, and Joyce wanted to sell patent rights to European companies for a small percentage of sales. Gillette thought that that would be a horrible mistake and quickly traveled back to Boston. He had two goals: to stop the sale of patent rights, based on his conviction that the foreign market would eventually be very lucrative, and to become salaried by the company.
Gillette accomplished both these goals and soon moved back to Boston. Despite the fact that Joyce and Gillette had been good friends for a long time, their business views often differed. Gillette set up a holding company in an effort to gain back controlling interest in the Gillette Safety Razor Company. He borrowed money and convinced his allies in the company to invest in the holding company, eventually regaining control. He was reinstated as president of the company. One clear disagreement was that Gillette wanted to relocate the company to Newark, New Jersey, and Joyce thought that that would be a waste of money. Gillette authorized company funds to be invested in a Newark site. The idea was later dropped, costing the company a large amount of capital.

King Camp Gillette

At age sixteen, King Camp Gillette (1855-1932) saw all of his family’s belongings consumed in the Great Chicago Fire. He had to drop out of school because of it and earn his own living. The catastrophe and the sudden loss of security that followed shaped his ambitions. He was not about to risk destitution ever again. He made himself a successful traveling salesman but still felt he was earning too little. So he turned his mind to inventions, hoping to get rich quick. The disposable razor was his only venture, but it was enough. After its long preparation for marketing Gillette’s invention and some subsequent turmoil among its board of directors, the Gillette Safety Razor Company was a phenomenal success and a bonanza for Gillette. He became wealthy. He retired in 1913, just ten years after the company opened, his security assured.

His mother had written cookbooks, one of which was a bestseller. As an adult, Gillette got the writing bug himself and wrote four books, but his theme was far loftier than cooking—social theory and security for the masses. Like Karl Marx he argued that economic competition squanders human resources and leads to deprivation, which in turn leads to crime. So, he reasoned, getting rid of economic competition will end misery and crime. He recommended that a centralized agency plan production and oversee distribution, a recommendation that America resoundingly ignored. However, other ideas of his eventually found acceptance, such as air conditioning for workers and government assistance for the unemployed. In 1922 Gillette moved to Los Angeles, California, and devoted himself to raising oranges and collecting his share of the company profits. However, he seldom felt free enough with his money to donate it to charity or finance social reform.

Gillette was not a very wise businessman and made many costly mistakes. Joyce even accused him of deliberately trying to keep the stock price low so that Gillette could purchase more stock. Joyce eventually bought out Gillette, who retained his title as president but had little say about company business. With Gillette out of a management position, the company became more stable and more profitable.

The biggest problem the company faced was that it would soon lose its patent rights. After the patent expired, the company would have competition. The company decided that it could either cut prices (and therefore profits) to compete with the lower-priced disposables that would inevitably enter the market, or it could create a new line of even better razors. The company opted for the latter strategy. Weeks before the patent expired, the Gillette Safety Razor Company introduced a new line of razors.

Both World War I and World War II were big boosts to the company, which contracted with the government to supply razors to almost all the troops. This transaction created a huge increase in sales and introduced thousands of young men to the Gillette razor. Many of them continued to use Gillettes after returning from the war.

Aside from the shaky start of the company, its worst financial difficulties were during the Great Depression. Most Americans simply could not afford Gillette blades, and many used a blade for an extended time and then resharpened it rather than throwing it away. If it had not been for the company’s foreign markets, the company would not have shown a profit during the Great Depression. Gillette’s obstinacy about not selling patent rights to foreign investors proved to be an excellent decision.

The company advertised through sponsoring sporting events, including the World Series. Gillette had many celebrity endorsements from well-known baseball players.
Before it became too expensive for one company to sponsor an entire event, Gillette had exclusive advertising during the World Series, various boxing matches, the Kentucky Derby, and football bowl games. Sponsoring these events was costly, but sports spectators were the typical Gillette customers.

The Gillette Company created many products that complemented razors and blades, including shaving cream, women’s razors, and electric razors. The company expanded into new products including women’s cosmetics, writing utensils, deodorant, and wigs. One of the main reasons for obtaining a more diverse product line was that a one-product company is less stable, especially in a volatile market. The Gillette Company had learned that lesson in the Great Depression.

Gillette continued to thrive by following the principles the company had used from the start. The majority of Gillette’s profits came from foreign markets, and its employees looked to improve products and find opportunities in other departments as well as their own.

See also Contact lenses; Memory metal; Steelmaking process.

Further Reading
Adams, Russell B., Jr. King C. Gillette: The Man and His Wonderful Shaving Device. Boston: Little, Brown, 1978.
Dowling, Tim. Inventor of the Disposable Culture: King Camp Gillette, 1855-1932. London: Short, 2001.
“Gillette: Blade-runner.” The Economist 327 (April 10, 1993).
Killgren, Lucy. “Nicking Gillette.” Marketing Week 22 (June 17, 1999).
McKibben, Gordon. Cutting Edge: Gillette’s Journey to Global Leadership. Boston, Mass.: Harvard Business School Press, 1998.
Thomas, Robert J. New Product Success Stories: Lessons from Leading Innovators. New York: John Wiley, 1995.
Zeien, Alfred M. The Gillette Company. New York: Newcomen Society of the United States, 1999.


Dolby noise reduction

The invention: Electronic device that raises the signal-to-noise ratio of sound recordings and greatly improves the sound quality of recorded music.

The people behind the invention:
Emil Berliner (1851-1929), a German inventor
Ray Milton Dolby (1933- ), an American inventor
Thomas Alva Edison (1847-1931), an American inventor

Phonographs, Tapes, and Noise Reduction

The main use of record, tape, and compact disc players is to listen to music, although they are also used to listen to recorded speeches, messages, and various forms of instruction. Thomas Alva Edison invented the first sound-reproducing machine, which he called the “phonograph,” and patented it in 1877. Ten years later, a practical phonograph (the “gramophone”) was marketed by a German, Emil Berliner.

Phonographs recorded sound by using diaphragms that vibrated in response to sound waves and controlled needles that cut grooves representing those vibrations into the first phonograph records, which in Edison’s machine were metal cylinders and in Berliner’s were flat discs. The recordings were then played by reversing the recording process: Placing a needle in the groove in the recorded cylinder or disk caused the diaphragm to vibrate, re-creating the original sound that had been recorded.

In the 1920’s, electrical recording methods developed that produced higher-quality recordings, and then, in the 1930’s, stereophonic recording was developed by various companies, including the British company Electrical and Musical Industries (EMI). Almost simultaneously, the technology of tape recording was developed. By the 1940’s, long-playing stereo records and tapes were widely available. As recording techniques improved further, tapes became very popular, and by the 1960’s, they had evolved into both studio master recording tapes and the audio cassettes used by consumers.


Hisses and other noises associated with sound recording and its environment greatly diminished the quality of recorded music. In 1967, Ray Dolby invented a noise reducer, later named “Dolby A,” that could be used by recording studios to reduce tape noise and so improve signal-to-noise ratios. Several years later, his “Dolby B” system, designed for home use, became standard equipment in all types of playback machines. Later, Dolby and others designed improved noise-suppression systems.

Recording and Tape Noise

Sound is made up of vibrations of varying frequencies—sound waves—that sound recorders can convert into grooves on plastic records, varying magnetic arrangements on plastic tapes covered with iron particles, or tiny pits on compact discs. The following discussion will focus on tape recordings, for which the original Dolby noise reducers were designed.

Tape recordings are made by a process that converts sound waves into electrical impulses that cause the iron particles in a tape to reorganize themselves into particular magnetic arrangements. The process is reversed when the tape is played back. In this process, the particle arrangements are translated first into electrical impulses and then into sound that is produced by loudspeakers. Erasing a tape causes the iron particles to move back into their original spatial arrangement.

Whenever a recording is made, undesired sounds such as hisses, hums, pops, and clicks can mask the nuances of recorded sound, annoying and fatiguing listeners. The first attempts to do away with undesired sounds (noise) involved making tapes, recording devices, and recording studios quieter. Such efforts did not, however, remove all undesired sounds. Furthermore, advances in recording technology increased the problem of noise by producing better instruments that “heard” and transmitted to recordings increased levels of noise. Such noise is often caused by the components of the recording system; tape hiss is an example of such noise.
This type of noise is most discernible in quiet passages of recordings, because loud recorded sounds often mask it.
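Signal-to-noise ratio is usually expressed in decibels, which makes this masking effect easy to quantify. The short sketch below uses illustrative amplitude values (not figures from this entry) to show why a hiss that stays at a constant level is inaudible behind loud music but obvious in a quiet passage.

```python
import math

def snr_db(signal_rms, noise_rms):
    """Signal-to-noise ratio in decibels for RMS amplitudes."""
    return 20 * math.log10(signal_rms / noise_rms)

# Tape hiss remains roughly constant, so the audible margin
# depends entirely on how loud the recorded passage is.
hiss = 0.001                # constant noise floor (illustrative value)
loud = snr_db(1.0, hiss)    # 60 dB: the hiss is effectively masked
quiet = snr_db(0.01, hiss)  # 20 dB: the hiss is plainly audible
```

With the same noise floor, dropping the program level by a factor of one hundred costs 40 dB of signal-to-noise ratio, which is exactly why quiet passages expose tape hiss.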


Ray Dolby

Ray Dolby, born in Portland, Oregon, in 1933, became an electronics engineer while still in high school in 1952. That is when he began working part time for Ampex Corporation, helping develop the first videotape recorder. He was responsible for the electronics in the Ampex VTR, which was marketed in 1956. The next year he finished a bachelor of science degree at Stanford University, won a Marshall Scholarship and National Science Foundation grant, and went to Cambridge University in England for graduate studies. He received a Ph.D. in 1961 and a fellowship to Pembroke College, during which he also consulted for the United Kingdom Atomic Energy Authority. After two years in India as a United Nations adviser, he set up Dolby Laboratories in London.

It was there that he produced the sound suppression equipment that made him famous to audiophiles and movie goers, particularly in the 1970’s for the Dolby stereo (“surround sound”) that enlivened such blockbusters as Star Wars. In 1976 he moved to San Francisco and opened new offices for his company. The holder of more than fifty patents, Dolby published monographs on videotape recording, long wavelength X-ray analysis, and noise reduction. He is among the most honored scientists in the recording industry. Among many other awards, he received an Oscar, Emmy, Samuel L. Warner Memorial Award, gold and silver medals from the Audio Engineering Society, and the National Medal of Technology. England made him an honorary Officer of the Most Excellent Order of the British Empire, and Cambridge University and York University awarded him honorary doctorates.

Because of the problem of noise in quiet passages of recorded sound, one early attempt at noise suppression involved the reduction of noise levels by using “dynaural” noise suppressors. These devices did not alter the loud portions of a recording; instead, they reduced the very high and very low frequencies in the quiet passages in which noise became most audible. The problem with such devices was, however, that removing the high and low frequencies could also affect the desirable portions of the recorded sound. These suppressors could not distinguish desirable from undesirable sounds. As recording techniques improved, dynaural noise suppressors caused more and more problems, and their use was finally discontinued.

Another approach to noise suppression is sound compression during the recording process. This compression is based on the fact that most noise remains at a constant level throughout a recording, regardless of the sound level of a desired signal (such as music). To carry out sound compression, the lowest-level signals in a recording are electronically elevated above the sound level of all noise. Musical nuances can be lost when the process is carried too far, because the maximum sound level is not increased by devices that use sound compression. To return the music or other recorded sound to its normal sound range for listening, devices that “expand” the recorded music on playback are used. Two potential problems associated with the use of sound compression and expansion are the difficulty of matching the two processes and the introduction into the recording of noise created by the compression devices themselves.

In 1967, Ray Dolby developed Dolby A to solve these problems as they related to tape noise (but not to microphone signals) in the recording and playing back of studio master tapes. The system operated by carrying out ten-decibel compression during recording and then restoring (noiselessly) the range of the music on playback. This was accomplished by expanding the sound exactly to its original range. Dolby A was very expensive and was thus limited to use in recording studios. In the early 1970’s, however, Dolby invented the less expensive Dolby B system, which was intended for consumers.

Consequences

The development of Dolby A and Dolby B noise-reduction systems is one of the most important contributions to the high-quality recording and reproduction of sound. For this reason, Dolby A quickly became standard in the recording industry. In similar fashion, Dolby B was soon incorporated into virtually every high-fidelity stereo cassette deck to be manufactured.
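The compress-on-record, expand-on-playback principle can be sketched in a few lines. This is only a numerical illustration of companding under an assumed power-law curve; Dolby’s actual systems worked on separate frequency bands with carefully matched circuits, which the sketch does not attempt to model.

```python
def compress(x, ratio=2.0):
    """Lift low-level samples (|x| <= 1) above a fixed noise floor."""
    sign = 1.0 if x >= 0 else -1.0
    return sign * (abs(x) ** (1.0 / ratio))

def expand(y, ratio=2.0):
    """Exact inverse of compress(), restoring the original dynamics."""
    sign = 1.0 if y >= 0 else -1.0
    return sign * (abs(y) ** ratio)

signal = [0.9, 0.05, -0.3, 0.004]         # normalized samples
recorded = [compress(s) for s in signal]  # quiet samples are raised
restored = [expand(r) for r in recorded]  # playback reverses the boost
```

Because expand() exactly mirrors compress(), the original dynamics come back intact; a mismatched pair would distort them, which is why matching the two processes was one of the key design problems noted above.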
Dolby’s discoveries spurred advances in the field of noise reduction. For example, the German company Telefunken and the Japanese companies Sanyo and Toshiba, among others, developed their own noise-reduction systems. Dolby Laboratories countered by producing an improved system: Dolby C. The competition in the area of noise reduction continues, and it will continue as long as changes in recording technology produce new, more sensitive recording equipment.

See also Cassette recording; Compact disc; Electronic synthesizer; FM radio; Radio; Transistor; Transistor radio; Walkman cassette player.

Further Reading
Alkin, E. G. M. Sound Recording and Reproduction. 3d ed. Boston: Focal Press, 1996.
Baldwin, Neil. Edison: Inventing the Century. Chicago: University of Chicago Press, 2001.
Wile, Frederic William. Emile Berliner, Maker of the Microphone. New York: Arno Press, 1974.


Electric clock

The invention: Electrically powered time-keeping device with a quartz resonator that has led to the development of extremely accurate, relatively inexpensive electric clocks that are used in computers and microprocessors.

The person behind the invention:
Warren Alvin Marrison (1896-1980), an American scientist

From Complex Mechanisms to Quartz Crystals

Warren Alvin Marrison’s fabrication of the electric clock began a new era in time-keeping. Electric clocks are more accurate and more reliable than mechanical clocks, since they have fewer moving parts and are less likely to malfunction. An electric clock is a device that generates a string of electric pulses. The most frequently used electric clocks are called “free running” and “periodic,” which means that they generate a continuous sequence of electric pulses that are equally spaced.

There are various kinds of electronic “oscillators” (materials that vibrate) that can be used to manufacture electric clocks. The material most commonly used as an oscillator in electric clocks is crystalline quartz. Because quartz (silicon dioxide) is a completely oxidized compound (which means that it does not deteriorate readily) and is virtually insoluble in water, it is chemically stable and resists chemical processes that would break down other materials. Quartz is a “piezoelectric” material, which means that it is capable of generating electricity when it is subjected to pressure or stress of some kind. In addition, quartz has the advantage of generating electricity at a very stable frequency, with little variation. For these reasons, quartz is an ideal material to use as an oscillator.

Early electric clock. (PhotoDisc)

The Quartz Clock

A quartz clock is an electric clock that makes use of the piezoelectric properties of a quartz crystal. When a quartz crystal vibrates, a difference of electric potential is produced between two of its faces. The crystal has a natural frequency (rate) of vibration that is determined by its size and shape. If the crystal is placed in an oscillating electric circuit that has a frequency that is nearly the same as that of the crystal, it will vibrate at its natural frequency and will cause the frequency of the entire circuit to match its own frequency.

Piezoelectricity is electricity, or “electric polarity,” that is caused by the application of mechanical pressure on a “dielectric” material (one that does not conduct electricity), such as a quartz crystal. The process also works in reverse; if an electric charge is applied to the dielectric material, the material will experience a mechanical distortion. This reciprocal relationship is called “the piezoelectric effect.” The phenomenon of electricity being generated by the application of mechanical pressure is called the direct piezoelectric effect, and the phenomenon of mechanical stress being produced as a result of the application of electricity is called the converse piezoelectric effect.

When a quartz crystal is used to create an oscillator, the natural frequency of the crystal can be used to produce other frequencies that can power clocks. The natural frequency of a quartz crystal is nearly constant if precautions are taken when it is cut and polished and if it is maintained at a nearly constant temperature and pressure. After a quartz crystal has been used for some time, its frequency usually varies slowly as a result of physical changes. If allowances are made for such changes, quartz-crystal clocks such as those used in laboratories can be manufactured that will accumulate errors of only a few thousandths of a second per month. The quartz crystals that are typically used in watches, however, may accumulate errors of tens of seconds per year.

Warren Alvin Marrison

Born in Inverary, Ontario, Canada, in 1896, Warren Alvin Marrison completed high school at Kingston Collegiate Institute in Ontario and attended Queen’s University in Kingston, where he studied science. World War I interrupted his studies, and while serving in the Royal Flying Corps as an electronics researcher, he began his life-long interest in radio. He graduated from university with a degree in engineering physics in 1920, transferred to Harvard University in 1921, and earned a master’s degree. After his studies, he worked for the Western Electric Company in New York, helping to develop a method to record sound on film.

He moved to the company’s Bell Laboratory in 1925 and studied how to produce frequency standards for radio transmissions. This research led him to use quartz crystals as oscillators, and he was able to step down the frequency enough that it could power a motor. Because the motor revolved at the same rate as the crystal’s frequency, he could determine the number of vibrations per time unit of the crystal and set a frequency standard. However, because the vibrations were constant over time, the crystal also measured time, and a new type of clock was born. For his work, Marrison received the British Horological Institute’s Gold Medal in 1947 and the Clockmakers’ Company’s Tompion Medal in 1955. He died in California in 1980.

There are other materials that can be used to manufacture accurate electric clocks. For example, clocks that use the element rubidium typically would accumulate errors no larger than a few ten-thousandths of a second per year, and those that use the element cesium would experience errors of only a few millionths of a second per year. Quartz is much less expensive than rarer materials such as


rubidium and cesium, and it is easy to use in such common applications as computers. Thus, despite their relative inaccuracy, electric quartz clocks are extremely useful and popular, particularly for applications that require accurate timekeeping over a relatively short period of time. In such applications, quartz clocks may be adjusted periodically to correct for accumulated errors.

Impact

The electric quartz clock has contributed significantly to the development of computers and microprocessors. The computer’s control unit controls and synchronizes all data transfers and transformations in the computer system and is the key subsystem in the computer itself. Every action that the computer performs is implemented by the control unit. The computer’s control unit uses inputs from a quartz clock to derive timing and control signals that regulate the actions in the system that are associated with each computer instruction. The control unit also accepts, as input, control signals generated by other devices in the computer system.

The other primary impact of the quartz clock is in making the construction of multiphase clocks a simple task. A multiphase clock is a clock that has several outputs that oscillate at the same frequency. These outputs may generate electric waveforms of different shapes or of the same shape, which makes them useful for various applications. It is common for a computer to incorporate a single-phase quartz clock that is used to generate a two-phase clock.

See also Atomic clock; Carbon dating; Electric refrigerator; Fluorescent lighting; Microwave cooking; Television; Vacuum cleaner; Washing machine.

Further Reading
Barnett, Jo Ellen. Time’s Pendulum: From Sundials to Atomic Clocks, the Fascinating History of Time Keeping and How Our Discoveries Changed the World. San Diego: Harcourt Brace, 1999.
Dennis, Maggie, and Carlene Stephens. "Engineering Time: Inventing the Electronic Wristwatch." British Journal for the History of Science 33, no. 119 (December, 2000).
Ganeri, Anita. From Candle to Quartz Clock: The Story of Time and Timekeeping. London: Evans Brothers, 1996.
Thurber, Karl. "All the Time in the World." Popular Electronics 14, no. 10 (October, 1997).


Electric refrigerator

The invention: An electrically powered and hermetically sealed food-storage appliance that replaced iceboxes, improved production, and lowered food-storage costs.

The people behind the invention:
Marcel Audiffren, a French monk
Christian Steenstrup (1873-1955), an American engineer
Fred Wolf, an American engineer

Ice Preserves America's Food

Before the development of refrigeration in the United States, a relatively warm climate made it difficult to preserve food. Meat spoiled within a day and milk could spoil within an hour after milking. In early America, ice was stored below ground in icehouses that had roofs at ground level. George Washington had a large icehouse at his Mount Vernon estate. By 1876, America was consuming more than 2 million tons of ice each year, which required 4,000 horses and 10,000 men to deliver.

Several related inventions were needed before mechanical refrigeration was developed. James Watt invented the condenser, an important refrigeration system component, in 1769. In 1805, Oliver Evans presented the idea of continuous circulation of a refrigerant in a closed cycle. In this closed cooling cycle, a liquid refrigerant evaporates to a gas at low temperature, absorbing heat from its environment and thereby producing "cold," which is circulated around an enclosed cabinet. To maintain this cooling cycle, the refrigerant gas must be returned to liquid form through condensation by compression. The first closed-cycle vapor-compression refrigerator, which was patented by Jacob Perkins in 1834, used ether as a refrigerant.

Iceboxes were used in homes before refrigerators were developed. Ice was cut from lakes and rivers in the northern United States or produced by ice machines in the southern United States. An ice machine using air was patented by John Gorrie at New Orleans in 1851. Ferdinand Carre introduced the first successful commercial
ice machine, which used ammonia as a refrigerant, in 1862, but it was too large for home use and produced only a pound of ice per hour. Ice machinery became very dependable after 1890 but was plagued by low efficiency. Very warm summers in 1890 and 1891 cut natural ice production dramatically and increased demand for mechanical ice production. Ice consumption continued to increase after 1890; by 1914, 21 million tons of ice were used annually. The high prices charged for ice and the extremely low efficiency of home iceboxes gradually led the public to demand a substitute for ice refrigeration.

Refrigeration for the Home

Domestic refrigeration required a compact unit with a built-in electric motor that did not require supervision or maintenance. Marcel Audiffren, a French monk, conceived the idea of an electric refrigerator for home use around 1910. The first electric refrigerator, which was invented by Fred Wolf in 1913, was called the Domelre, which stood for domestic electric refrigerator. This machine used condensation equipment that was housed in the home's basement. In 1915, Alfred Mellowes built the first refrigerator to contain all of its components; this machine was known as Guardian's Frigerator. General Motors acquired Guardian in 1918 and began to mass produce refrigerators. Guardian was renamed Frigidaire in 1919. In 1918, the Kelvinator Company, run by Edmund Copeland, built the first refrigerator with automatic controls, the most important of which was the thermostatic switch. Despite these advances, by 1920 only a few thousand homes had refrigerators, which cost about $1,000 each.

The General Electric Company (GE) purchased the rights to the General Motors refrigerator, which was based on an improved design submitted by one of its engineers, Christian Steenstrup. Steenstrup's innovative design included a motor and reciprocating compressor that were hermetically sealed with the refrigerant.
This unit, known as the GE Monitor Top, was first produced in 1927. A patent on this machine was filed for in 1926 and granted to Steenstrup in 1930. Steenstrup became chief engineer of GE's electric refrigeration department and accumulated thirty-nine additional patents in refrigeration over the following years. By 1936, he had more than one hundred patents to his credit in refrigeration and other areas.

Further refinement of the refrigerator came with the development of Freon, a nonexplosive, nontoxic, and noncorrosive refrigerant discovered by Thomas Midgley, Jr., in 1928. Freon used lower pressures than ammonia did, which meant that lighter materials and lower temperatures could be used in refrigeration.

During the years following the introduction of the Monitor Top, the cost of refrigerators dropped from $1,000 in 1918 to $400 in 1926, and then to $170 in 1935. Sales of units increased from 200,000 in 1926 to 1.5 million in 1935. Initially, refrigerators were sold separately from their cabinets, which commonly were used wooden iceboxes. Frigidaire began making its own cabinets in 1923, and by 1930, refrigerators that combined machinery and cabinet were sold.

Throughout the 1930's, refrigerators were well-insulated, hermetically sealed steel units that used evaporator coils to cool the food compartment. The refrigeration system was moved from on top of to below the food storage area, which made it possible to raise the food storage area to a more convenient level. Special light bulbs that produced radiation to kill taste- and odor-bearing bacteria were used in refrigerators. Other developments included sliding shelves, shelves in doors, rounded and styled cabinet corners, ice cube trays, and even a built-in radio.

The freezing capacity of early refrigerators was inadequate. Only a package or two of food could be kept cool at a time, ice cubes melted, and only a minimal amount of food could be kept frozen. The two-temperature refrigerator, consisting of one compartment providing normal cooling and a separate compartment for freezing, was developed by GE in 1939. Evaporator coils for cooling were placed within the refrigerator walls, providing more cooling capacity and more space for food storage.
Frigidaire introduced a Cold Wall compartment, while White-Westinghouse introduced a Colder Cold system. After World War II, GE introduced the refrigerator-freezer combination.


Impact

Audiffren, Wolf, Steenstrup, and others combined the earlier inventions of Watt, Perkins, and Carre with the development of electric motors to produce the electric refrigerator. The development of domestic electric refrigeration had a tremendous effect on the quality of home life. Reliable, affordable refrigeration allowed consumers a wider selection of food and increased flexibility in their daily consumption. The domestic refrigerator with increased freezer capacity spawned the growth of the frozen food industry. Without the electric refrigerator, households would still depend on unreliable supplies of ice.

See also Fluorescent lighting; Food freezing; Freeze-drying; Microwave cooking; Refrigerant gas; Robot (household); Tupperware; Vacuum cleaner; Washing machine.

Further Reading

Anderson, Oscar Edward. Refrigeration in America: A History of a New Technology and Its Impact. Princeton: Princeton University Press, 1953.
Donaldson, Barry, Bernard Nagengast, and Gershon Meckler. Heat and Cold: Mastering the Great Indoors: A Selective History of Heating, Ventilation, Air-Conditioning and Refrigeration from the Ancients to the 1930's. Atlanta, Ga.: American Society of Heating, Refrigerating and Air-Conditioning Engineers, 1994.
Woolrich, Willis Raymond. The Men Who Created Cold: A History of Refrigeration. New York: Exposition Press, 1967.


Electrocardiogram

The invention: Device for analyzing the electrical currents of the human heart.

The people behind the invention:
Willem Einthoven (1860-1927), a Dutch physiologist and winner of the 1924 Nobel Prize in Physiology or Medicine
Augustus D. Waller (1856-1922), a British physician and researcher
Sir Thomas Lewis (1881-1945), an English physiologist

Horse Vibrations

In the late 1800's, there was substantial research interest in the electrical activity that took place in the human body. Researchers studied many organs and systems in the body, including the nerves, eyes, lungs, muscles, and heart. Because of a lack of available technology, this research was tedious and frequently inaccurate. Therefore, the development of the appropriate instrumentation was as important as the research itself.

The initial work on the electrical activity of the heart (detected from the surface of the body) was conducted by Augustus D. Waller and published in 1887. Many credit him with the development of the first electrocardiogram. Waller used a Lippmann's capillary electrometer (named for its inventor, the French physicist Gabriel-Jonas Lippmann) to determine the electrical charges in the heart and called his recording a "cardiograph." The recording was made by placing a series of small tubes on the surface of the body. The tubes contained mercury and sulfuric acid. As an electrical current passed through the tubes, the mercury would expand and contract. The resulting images were projected onto photographic paper to produce the first cardiograph. Yet Waller had only limited success with the device and eventually abandoned it.

In the early 1890's, Willem Einthoven, who became a good friend of Waller, began using the same type of capillary tube to study the electrical currents of the heart. Einthoven also had a difficult time
working with the instrument. His laboratory was located in an old wooden building near a cobblestone street. Teams of horses pulling heavy wagons would pass by and cause his laboratory to vibrate. This vibration affected the capillary tube, causing the cardiograph to be unclear. In his frustration, Einthoven began to modify his laboratory. He removed the floorboards and dug a hole some ten to fifteen feet deep. He lined the walls with large rocks to stabilize his instrument. When this failed to solve the problem, Einthoven, too, abandoned Lippmann's capillary tube. Yet Einthoven did not abandon the idea, and he began to experiment with other instruments.

Electrocardiographs over the Phone

In order to continue his research on the electrical currents of the heart, Einthoven began to work with a new device, the d'Arsonval galvanometer (named for its inventor, the French biophysicist Arsène d'Arsonval). This instrument had a heavy coil of wire suspended between the poles of a horseshoe magnet. Changes in electrical activity would cause the coil to move; however, Einthoven found that the coil was too heavy to record the small electrical changes found in the heart. Therefore, he modified the instrument by replacing the coil with a silver-coated quartz thread (string). The movements could be recorded by transmitting the deflections through a microscope and projecting them on photographic film. Einthoven called the new instrument the "string galvanometer."

In developing his string galvanometer, Einthoven was influenced by the work of one of his teachers, Johannes Bosscha. In the 1850's, Bosscha had published a study describing the technical complexities of measuring very small amounts of electricity. He proposed the idea that a galvanometer modified with a needle hanging from a silk thread would be more sensitive in measuring the tiny electric currents of the heart.
By 1905, Einthoven had improved the string galvanometer to the point that he could begin using it for clinical studies. In 1906, he had his laboratory connected to the hospital in Leiden by a telephone wire. With this arrangement, Einthoven was able to study in his laboratory electrocardiograms derived from patients in the hospital, which was located a mile away. With this source of subjects, Einthoven was able to use his galvanometer to study many heart problems. As a result of these studies, Einthoven identified the following heart problems: blocks in the electrical conduction system of the heart; premature beats of the heart, including two premature beats in a row; and enlargements of the various chambers of the heart. He was also able to study how the heart behaved during the administration of cardiac drugs.

Willem Einthoven

Willem Einthoven was born in 1860 on the Island of Java, now part of Indonesia. His father was a Dutch army medical officer, and his mother was the daughter of the Finance Director for the Dutch East Indies. When his father died in 1870, his mother moved with her six children to Utrecht, Holland. Einthoven entered the University of Utrecht in 1878 intending to become a physician like his father, but physics and physiology attracted him more.

During his education two research projects that he conducted brought him notice. The first involved the articulation of the elbow, which he undertook after a sports injury of his own elbow. (He remained an avid participant in sports his whole life.) The second, which earned him his doctorate in 1885, examined stereoscopy and color variation. Because of the keen investigative abilities these studies displayed, he was at once appointed professor of physiology at the University of Leiden. He took up the position the next year, after qualifying as a general practitioner.

Einthoven conducted research into asthma and the optics and electrical activity of vision before turning his attention to the heart. He developed the electrocardiogram in order to measure the heart's electrical activity accurately and tested its applications and capacities with many students and visiting scientists, helping thereby to widen interest in it as a diagnostic tool. For this work he received the 1924 Nobel Prize in Physiology or Medicine. In his later years, Einthoven studied problems in acoustics and the electrical activity of the sympathetic nervous system. He died in Leiden in 1927.


A major researcher who communicated with Einthoven about the electrocardiogram was Sir Thomas Lewis, who is credited with developing the electrocardiogram into a useful clinical tool. One of Lewis's important accomplishments was his identification of atrial fibrillation, the overactive state of the upper chambers of the heart. During World War I, Lewis was involved with studying soldiers' hearts. He designed a series of graded exercises, which he used to test the soldiers' ability to perform work. From this study, Lewis was able to use similar tests to diagnose heart disease and to screen recruits who had heart problems.

Impact

As Einthoven published additional studies on the string galvanometer in 1903, 1906, and 1908, greater interest in his instrument was generated around the world. In 1910, the instrument, now called the "electrocardiograph," was installed in the United States. It was the foundation of a new laboratory for the study of heart disease at Johns Hopkins University.

As time passed, the use of the electrocardiogram—or "EKG," as it is familiarly known—increased substantially. The major advantage of the EKG is that it can be used to diagnose problems in the heart without incisions or the use of needles. It is relatively painless for the patient; in comparison with other diagnostic techniques, moreover, it is relatively inexpensive.

Recent developments in the use of the EKG have been in the area of stress testing. Since many heart problems are more evident during exercise, when the heart is working harder, EKGs are often given to patients as they exercise, generally on a treadmill. The clinician gradually increases the intensity of work the patient is doing while monitoring the patient's heart. The use of stress testing has helped to make the EKG an even more valuable diagnostic tool.
See also Amniocentesis; Artificial heart; Blood transfusion; CAT scanner; Coronary artery bypass surgery; Electroencephalogram; Heart-lung machine; Mammography; Nuclear magnetic resonance; Pacemaker; Ultrasound; X-ray image intensifier.


Further Reading

Cline, Barbara Lovett. Men Who Made a New Physics: Physicists and the Quantum Theory. Chicago: University of Chicago Press, 1987.
Hollman, Arthur. Sir Thomas Lewis: Pioneer Cardiologist and Clinical Scientist. New York: Springer, 1997.
Lewis, Thomas. Collected Works on Heart Disease. 1912. Reprint. New York: Classics of Cardiology Library, 1991.
Snellen, H. A. Two Pioneers of Electrocardiography: The Correspondence Between Einthoven and Lewis from 1908-1926. Rotterdam: Donker Academic Publications, 1983.
_____. Willem Einthoven, 1860-1927, Father of Electrocardiography: Life and Work, Ancestors and Contemporaries. Boston: Kluwer Academic Publishers, 1995.


Electroencephalogram

The invention: A system of electrodes that measures brain wave patterns in humans, making possible a new era of neurophysiology.

The people behind the invention:
Hans Berger (1873-1941), a German psychiatrist and research scientist
Richard Caton (1842-1926), an English physiologist and surgeon

The Electrical Activity of the Brain

Hans Berger's search for the human electroencephalograph (English physiologist Richard Caton had described the electroencephalogram, or "brain wave," in rabbits and monkeys in 1875) was motivated by his desire to find a physiological method that might be applied successfully to the study of the long-standing problem of the relationship between the mind and the brain. His scientific career, therefore, was directed toward revealing the psychophysical relationship in terms of principles that would be rooted firmly in the natural sciences and would not have to rely upon vague philosophical or mystical ideas.

During his early career, Berger attempted to study psychophysical relationships by making plethysmographic measurements of changes in the brain circulation of patients with skull defects. In plethysmography, an instrument is used to indicate and record by tracings the variations in size of an organ or part of the body. Later, Berger investigated temperature changes occurring in the human brain during mental activity and the action of psychoactive drugs. He became disillusioned, however, by the lack of psychophysical understanding generated by these investigations.

Next, Berger turned to the study of the electrical activity of the brain, and in the 1920's he set out to search for the human electroencephalogram. He believed that the electroencephalogram would finally provide him with a physiological method capable of furnishing insight into mental functions and their disturbances.


Berger made his first unsuccessful attempt at recording the electrical activity of the brain in 1920, using the scalp of a bald medical student. He then attempted to stimulate the cortex of patients with skull defects by using a set of electrodes to apply an electrical current to the skin covering the defect. The main purpose of these stimulation experiments was to elicit subjective sensations. Berger hoped that eliciting these sensations might give him some clue about the nature of the relationship between the physiochemical events produced by the electrical stimulus and the mental processes revealed by the patients’ subjective experience. The availability of many patients with skull defects—in whom the pulsating surface of the brain was separated from the stimulating electrodes by only a few millimeters of tissue—reactivated Berger’s interest in recording the brain’s electrical activity.

Hans Berger

Hans Berger, the father of electroencephalography, was born in Neuses bei Coburg, Germany, in 1873. He entered the University of Jena in 1892 as a medical student and became an assistant in the psychiatric clinic in 1897. In 1912 he was appointed the clinic's chief doctor and then its director and a university professor of psychiatry. In 1919 he was chosen as rector of the university.

Berger hoped to settle the long-standing philosophical question about the brain and the mind by finding observable physical processes that correlated with thoughts and feelings. He started off by studying the blood circulation in the head and brain temperature. Even though this work founded psychophysiology, he failed to find objective evidence of subjective states until he started examining fluctuations in the electrical potential of the brain in 1924.

His 1929 paper describing the electroencephalograph later provided medicine with a basic diagnostic tool, but the instrument proved to be a very confusing probe of the human psyche for him. His colleagues in psychiatry and medicine did not accept his correlations between physical phenomena and mental states. Berger retired as professor emeritus in 1938 and died three years later in Jena.


Small, Tremulous Movements

Berger used several different instruments in trying to detect brain waves, but all of them used a similar method of recording. Electrical oscillations deflected a mirror upon which a light beam was projected. The deflections of the light beam were proportional to the magnitude of the electrical signals. The movement of the spot of the light beam was recorded on photographic paper moving at a speed no greater than 3 centimeters per second.

In July, 1924, Berger observed small, tremulous movements of the instrument while recording from the skin overlying a bone defect in a seventeen-year-old patient. In his first paper on the electroencephalogram, Berger described this case briefly as his first successful recording of an electroencephalogram. At the time of these early studies, Berger already had used the term "electroencephalogram" in his diary. Yet for several years he had doubts about the origin of the electrical signals he recorded. As late as 1928, he almost abandoned his electrical recording studies.

The publication of Berger's first paper on the human electroencephalogram in 1929 had little impact on the scientific world. It was either ignored or regarded with open disbelief. At this time, even though Berger himself was not completely free of doubts about the validity of his findings, he managed to continue his work. He published additional contributions to the study of the electroencephalogram in a series of fourteen papers. As his research progressed, Berger became increasingly confident and convinced of the significance of his discovery.

Impact

The long-range impact of Berger's work is incontestable. When Berger published his last paper on the human electroencephalogram in 1938, the new approach to the study of brain function that he inaugurated in 1929 had gathered momentum in many centers, both in Europe and in the United States. As a result of his pioneering work, a new diagnostic method had been introduced into medicine.
Physiology had acquired a new investigative tool. Clinical neurophysiology had been liberated from its dependence upon the functional
anatomical approach, and electrophysiological exploration of complex functions of the central nervous system had begun in earnest. Berger's work had finally received its well-deserved recognition. Many of those who undertook the study of the electroencephalogram were able to bring a far greater technical knowledge of neurophysiology to bear upon the problems of the electrical activity of the brain. Yet the community of neurological scientists has not ceased to look with respect upon the founder of electroencephalography, who, despite overwhelming odds and isolation, opened a new area of neurophysiology.

See also Amniocentesis; CAT scanner; Electrocardiogram; Mammography; Nuclear magnetic resonance; Ultrasound; X-ray image intensifier.

Further Reading

Barlow, John S. The Electroencephalogram: Its Patterns and Origins. Cambridge, Mass.: MIT Press, 1993.
Berger, Hans. Hans Berger on the Electroencephalogram of Man. New York: Elsevier, 1969.


Electron microscope

The invention: A device for viewing extremely small objects that uses electron beams and "electron lenses" instead of the light rays and optical lenses used by ordinary microscopes.

The people behind the invention:
Ernst Ruska (1906-1988), a German engineer, researcher, and inventor who shared the 1986 Nobel Prize in Physics
Hans Busch (1884-1973), a German physicist
Max Knoll (1897-1969), a German engineer and professor
Louis de Broglie (1892-1987), a French physicist who won the 1929 Nobel Prize in Physics

Reaching the Limit

The first electron microscope was constructed by Ernst Ruska and Max Knoll in 1931. Scientists who look into the microscopic world always demand microscopes of higher and higher resolution (resolution is the ability of an optical instrument to distinguish closely spaced objects). As early as 1834, George Airy, the eminent British astronomer, theorized that there should be a natural limit to the resolution of optical microscopes. In 1873, two Germans, Ernst Abbe, cofounder of the Carl Zeiss Optical Works at Jena, and Hermann von Helmholtz, the famous physicist and philosopher, independently published papers on this issue. Both arrived at the same conclusion as Airy: Resolution is limited by the wavelength of light. Specifically, light cannot resolve details smaller than one-half of its wavelength.

One solution to this limitation was to experiment with light, or electromagnetic radiation, of shorter and shorter wavelengths. At the beginning of the twentieth century, Joseph Edwin Barnard experimented on microscopes using ultraviolet light. Such instruments, however, only modestly improved the resolution. In 1912, German physicist Max von Laue considered using X rays. At the time, however, it was hard to turn "X-ray microscopy" into a physical reality. The wavelengths of X rays are exceedingly
short, but for the most part they are used to penetrate matter, not to illuminate objects. It appeared that microscopes had reached their limit.

Matter Waves

In a new microscopy, then, light—and electromagnetic radiation in general—had to be replaced as the medium that carried image information. In 1924, French theoretical physicist Louis de Broglie advanced a startling hypothesis: Matter on the scale of subatomic particles possesses wave characteristics. De Broglie also concluded that the speed of low-mass subatomic particles, such as electrons, is related to wavelength. Specifically, higher speeds correspond to shorter wavelengths.

When Knoll and Ruska built the first electron microscope in 1931, they had never heard about de Broglie's "matter waves." Ruska recollected that when, in 1932, he and Knoll first learned about de Broglie's idea, he realized that those matter waves would have to be many times shorter in wavelength than light waves.

The core component of the new instrument was the electron beam, or "cathode ray," as it was usually called then. The cathode-ray tube was invented in 1857 and was the source of a number of discoveries, including X rays. In 1896, Olaf Kristian Birkeland, a Norwegian scientist, after experimenting with the effect of parallel magnetic fields on the electron beam of the cathode-ray tube, concluded that cathode rays that are concentrated on a focal point by means of a magnet are as effective as parallel light rays that are concentrated by means of a lens.

From around 1910, German physicist Hans Busch was the leading researcher in the field. In 1926, he published his theory on the trajectories of electrons in magnetic fields. His conclusions confirmed and expanded upon those of Birkeland.
As a result, Busch has been recognized as the founder of a new field later known as “electron optics.” His theoretical study showed, among other things, that the analogy between light and lenses on the one hand, and electron beams and electromagnetic lenses, on the other hand, was accurate.
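The wavelength advantage that de Broglie's hypothesis promised the new instrument can be put in rough numbers. The sketch below is illustrative only and is not drawn from the original essay: it applies the standard nonrelativistic de Broglie relation, wavelength = h / sqrt(2meV), with an arbitrarily chosen accelerating voltage of 60 kilovolts, to compare an electron's wavelength with that of visible light.

```python
import math

# Physical constants (SI units).
h = 6.626e-34    # Planck's constant, J*s
m_e = 9.109e-31  # electron rest mass, kg
q_e = 1.602e-19  # elementary charge, C

def de_broglie_wavelength(volts):
    """De Broglie wavelength (meters) of an electron accelerated through
    `volts` volts, using the nonrelativistic relation h / sqrt(2*m*e*V)."""
    return h / math.sqrt(2.0 * m_e * q_e * volts)

green_light = 550e-9                 # green light is roughly 550 nanometers
lam = de_broglie_wavelength(60_000)  # illustrative 60 kV accelerating voltage
print(f"electron wavelength: {lam * 1e12:.1f} picometers")
print(f"ratio to green light: {green_light / lam:,.0f}")
```

At tens of kilovolts the electron wavelength is on the order of picometers, roughly one hundred thousand times shorter than visible light, so the half-wavelength limit that stopped optical microscopy posed no fundamental obstacle to electron optics (in practice, lens aberrations keep electron microscopes well short of this theoretical bound).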


Ernst Ruska

Ernst August Friedrich Ruska was born in 1906 in Heidelberg to Professor Julius Ruska and his wife, Elisabeth. In 1925 he left home for the Technical College of Munich, moving two years later to the Technical College of Berlin and gaining practical training at nearby Siemens and Halske Limited. During his university days he became interested in vacuum tube technology and worked at the Institute of High Voltage, participating in the development of a high-performance cathode-ray oscilloscope. His interests also lay with the theory and application of electron optics.

In 1929, as part of his graduate work, Ruska published a proof of Hans Busch's theory explaining possible lenslike effects of a magnetic field on an electron stream, which led to the invention of the polschuh lens. It formed the core of the electron microscope that Ruska built with his mentor, Max Knoll, in 1931. Ruska completed his doctoral studies in 1934, but he had already found work in industry, believing that further technical development of electron microscopes was beyond the means of university laboratories. He worked for Fernseh Limited from 1933 to 1937 and for Siemens from 1937 to 1955.

Following World War II he helped set up the Institute of Electron Optics and worked in the Faculty of Medicine and Biology of the German Academy of Sciences. He joined the Fritz Haber Institute of the Max Planck Society in Berlin in 1949 and took over as director of its Institute for Electron Microscopy in 1955, keeping the position until he retired in 1974. His life-long work with electron microscopy earned Ruska half of the 1986 Nobel Prize in Physics. He died two years later. To honor his memory, European manufacturers of electron microscopes instituted the Ernst Ruska Prizes, one for researchers of materials and optics and one for biomedical researchers.

Beginning in 1928, Ruska, as a graduate student at the Berlin Institute of Technology, worked on refining Busch’s work. He found that the energy of the electrons in the beam was not uniform. This nonuniformity meant that the images of microscopic objects would ultimately be fuzzy. Knoll and Ruska were able to work from the
recognition of this problem to the design and materialization of a concentrated electron "writing spot" and to the actual construction of the electron microscope. By April, 1931, they had established a technological landmark with the "first constructional realization of an electron microscope."

Impact

The world's first electron microscope, which took its first photographic record on April 7, 1931, was rudimentary. Its two-stage magnification enlarged the sample only sixteen times. Since Ruska and Knoll's creation, however, progress in electron microscopy has been spectacular. Such an achievement is one of the prominent examples that illustrate the historically unprecedented pace of science and technology in the twentieth century.

In 1935, for the first time, the electron microscope surpassed the optical microscope in resolution. The problem of damaging the specimen by the heating effects of the electron beam proved to be more difficult to resolve. In 1937, a team at the University of Toronto constructed the first generally usable electron microscope. In 1942, a group headed by James Hillier at the Radio Corporation of America produced commercial transmission electron microscopes.

In 1939 and 1940, research papers on electron microscopes began to appear in Sweden, Canada, the United States, and Japan; from 1944 to 1947, papers appeared in Switzerland, France, the Soviet Union, The Netherlands, and England. Following research work in laboratories, commercial transmission electron microscopes using magnetic lenses with short focal lengths also appeared in these countries.

See also Cyclotron; Field ion microscope; Geiger counter; Mass spectrograph; Neutrino detector; Scanning tunneling microscope; Synchrocyclotron; Tevatron accelerator; Ultramicroscope.

Further Reading

Cline, Barbara Lovett. Men Who Made a New Physics: Physicists and the Quantum Theory. Chicago: University of Chicago Press, 1987.

306

/

Electron microscope

Hawkes, P. W. The Beginnings of Electron Microscopy. Orlando: Academic Press, 1985. Marton, Ladislaus. Early History of the Electron Microscope. 2d ed. San Francisco: San Francisco Press, 1994. Rasmussen, Nicolas. Picture Control: The Electron Microscope and the Transformation of Biology in America, 1940-1960. Stanford, Calif.: Stanford University Press, 1997.


Electronic synthesizer

The invention: Portable electronic device that both simulates the sounds of acoustic instruments and creates entirely new sounds.

The person behind the invention:
Robert A. Moog (1934- ), an American physicist, engineer, and inventor

From Harmonium to Synthesizer

The harmonium, or acoustic reed organ, is commonly viewed as having evolved into the modern electronic synthesizer, which can be used to create many kinds of musical sounds, from the sounds of single or combined acoustic musical instruments to entirely original sounds. The first instrument to be called a synthesizer was patented by the Frenchman J. A. Dereux in 1949. Dereux’s synthesizer, which amplified the acoustic properties of harmoniums, led to the development of the recording organ.

Next, several European and American inventors altered and augmented the properties of such synthesizers. This stage of the process was followed by the invention of electronic synthesizers, which initially used electronically generated sounds to imitate acoustic instruments. It was not long, however, before such synthesizers were used to create sounds that could not be produced by any other instrument. Among the early electronic synthesizers were those made in Germany by Herbert Eimert and Robert Beyer in 1953, and the American Olson-Belar synthesizers, which were developed in 1954. Continual research produced better and better versions of these large, complex electronic devices.

Portable synthesizers, which are often called “keyboards,” were then developed for concert and home use. These instruments became extremely popular, especially in rock music. In 1964, Robert A. Moog, an electronics professor, created what are thought by many to be the first portable synthesizers to be made available to the public. Several other well-known portable synthesizers, such as ARP and Buchla synthesizers, were also introduced at about the same time. Currently, many companies manufacture studio-quality synthesizers of various types.

Synthesizer Components and Operation

Modern synthesizers make music electronically by building up musical phrases via numerous electronic circuits and combining those phrases to create musical compositions. In addition to duplicating the sounds of many instruments, such synthesizers also enable their users to create virtually any imaginable sound. Many sounds have been created on synthesizers that could not have been created in any other way.

Synthesizers use sound-processing and sound-control equipment that controls “white noise” audio generators and oscillator circuits. This equipment can be manipulated to produce a huge variety of sound frequencies and frequency mixtures, in the same way that a beam of white light can be manipulated to produce a particular color or mixture of colors. Once the desired products of a synthesizer’s noise generator and oscillators are produced, percussive sounds that contain all or many audio frequencies are mixed with many chosen individual sounds and altered by using various electronic processing components. The better the quality of the synthesizer, the more processing components it will possess. Among these components are sound amplifiers, sound mixers, sound filters, reverberators, and sound-combination devices.

Sound amplifiers are voltage-controlled devices that change the dynamic characteristics of any given sound made by a synthesizer. Sound mixers make it possible to combine and blend two or more manufactured sounds while controlling their relative volumes. Sound filters affect the frequency content of sound mixtures by increasing or decreasing the amplitude of the sound frequencies within particular frequency ranges, which are called “bands.” Sound filters can be either band-pass filters or band-reject filters; they operate by increasing or decreasing the amplitudes of sound frequencies within given ranges (such as treble or bass).
Reverberators (or “reverb” units) produce artificial echoes that can have significant musical effects. There are also many other varieties of sound-processing elements, among them sound-envelope generators, spatial locators, and frequency shifters. Ultimately, the sound-combination devices put together the results of the various groups of audio generating and processing elements, shaping the sound that has been created into its final form.

Robert Moog

Robert Moog, born in 1934, grew up in the Queens borough of New York City, a tough area for a brainy kid. To avoid the bullies who picked on him because he was a nerd, Moog spent a lot of time helping his father with his hobby, electronics. At fourteen, he built his own theremin, an eerie-sounding forerunner of electric instruments. Moog’s mother, meanwhile, force-fed him piano lessons. He liked science better and majored in physics at Queens College and then Cornell University, but he did not forget the music. While in college, he designed a kit for making theremins and advertised it, selling enough of them to run up a sizable bankroll.

Also while in college, Moog, acting on a suggestion from a composer, put together the first easy-to-play electronic synthesizer. Other music synthesizers already existed, but they were large, complex, and expensive—suitable only for recording studios. When Moog unveiled his synthesizer in 1965, it was portable, sold for one-tenth the price, and gave musicians virtually an orchestra at their fingertips. It became a stage instrument.

Walter Carlos used a Moog synthesizer in 1969 for his album Switched-on Bach, electronic renditions of Johann Sebastian Bach’s concertos. It was a hit and won a Grammy award. The album made Moog and his new instrument famous. Its reputation grew when the Beatles used it for “Because” on Abbey Road and Carlos recorded the score for Stanley Kubrick’s classic movie A Clockwork Orange on a Moog. With the introduction of the even more portable Minimoog, the popularity of synthesizers soared, especially among rock musicians but also in jazz and other styles.

Moog sold his company and moved to North Carolina in 1978. There he started another company, Big Briar, devoted to designing special instruments, such as a keyboard that can be played with as much expressive subtlety as a violin and an interactive piano.
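The signal path sketched in this section (oscillator, filter, voltage-controlled amplifier acting as an envelope, and mixer) can be modeled in a few lines of Python. This is a toy illustration, not any manufacturer’s design; the sample rate, filter coefficient, and envelope shape are arbitrary choices.

```python
import math

SAMPLE_RATE = 8000  # samples per second; a low rate keeps the sketch small

def oscillator(freq, seconds):
    """Sine-wave oscillator: the basic tone source."""
    n = int(SAMPLE_RATE * seconds)
    return [math.sin(2 * math.pi * freq * i / SAMPLE_RATE) for i in range(n)]

def low_pass(samples, alpha=0.1):
    """One-pole low-pass filter: a crude stand-in for a 'sound filter'."""
    out, prev = [], 0.0
    for s in samples:
        prev += alpha * (s - prev)
        out.append(prev)
    return out

def envelope(samples, attack=0.1):
    """Voltage-controlled-amplifier stand-in: linear attack, linear decay."""
    n = len(samples)
    a = int(n * attack)
    shaped = []
    for i, s in enumerate(samples):
        gain = i / a if i < a else (n - i) / (n - a)
        shaped.append(s * gain)
    return shaped

def mix(*voices):
    """Sound mixer: sum the voices and scale to keep the result in range."""
    return [sum(vals) / len(voices) for vals in zip(*voices)]

# Two filtered, enveloped voices blended into one half-second tone:
tone = mix(envelope(low_pass(oscillator(440.0, 0.5))),
           envelope(low_pass(oscillator(660.0, 0.5))))
```

Chaining the stages in this order mirrors the text: the oscillators supply raw frequencies, the filter shapes their spectrum, the envelope shapes their loudness over time, and the mixer combines the voices.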


A variety of control elements are used to integrate the operation of synthesizers. Most common is the keyboard, which provides the name most often used for portable electronic synthesizers. Portable synthesizer keyboards are most often pressure-sensitive devices (meaning that the harder one presses the key, the louder the resulting sound will be) that resemble the black-and-white keyboards of more conventional musical instruments such as the piano and the organ. These synthesizer keyboards produce two simultaneous outputs: control voltages that govern the pitches of oscillators, and timing pulses that sustain synthesizer responses for as long as a particular key is depressed. Unseen but present are the integrated voltage controls that govern overall signal generation and processing.

In addition to voltage controls and keyboards, synthesizers contain buttons and other switches that can transpose their sound ranges and other qualities. Using the appropriate buttons or switches makes it possible for a single synthesizer to imitate different instruments—or groups of instruments—at different times. Other synthesizer control elements include sample-and-hold devices and random voltage sources, which make it possible, respectively, to sustain particular musical effects and to add various effects to the music that is being played.

Electronic synthesizers are complex and flexible instruments. The various types and models of synthesizers make it possible to produce many different kinds of music, and many musicians use a variety of keyboards to give them great flexibility in performing and recording.

Impact

The development and wide dissemination of studio and portable synthesizers has led to their frequent use to combine the sound properties of various musical instruments; a single musician can thus produce, inexpensively and with a single instrument, sound combinations that previously could have been produced only by a large number of musicians playing various instruments.
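The keyboard outputs described earlier can be made concrete with a little arithmetic. Under the one-volt-per-octave convention associated with Moog-style instruments, each semitone adds 1/12 volt of control voltage, and the oscillator doubles its pitch for each added volt. A small Python sketch follows; the base frequency (middle C) and the key numbering are illustrative assumptions, not a particular instrument’s calibration.

```python
def key_to_cv(key, base_key=0):
    """Keyboard output: a control voltage, at one volt per octave
    (1/12 volt per semitone above an arbitrary base key)."""
    return (key - base_key) / 12.0

def cv_to_freq(cv, base_freq=261.63):
    """Oscillator response: pitch doubles for every added volt.
    The base frequency here (middle C) is an illustrative choice."""
    return base_freq * 2.0 ** cv

# Pressing the key one octave (12 semitones) above the base key doubles the pitch:
print(round(cv_to_freq(key_to_cv(12)), 2))   # 523.26
```

The exponential mapping is what lets a single voltage control span the keyboard smoothly: equal voltage steps always produce equal musical intervals.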
(Understandably, many players of acoustic instruments have been upset by this development, since it means that they are hired to play less often than they were before synthesizers were developed.) Another consequence of synthesizer use has been the development of entirely original varieties of sound, although this area has been less thoroughly explored, for commercial reasons. The development of synthesizers has also led to the design of other new electronic music-making techniques and to the development of new electronic musical instruments.

Opinions about synthesizers vary from person to person—and, in the case of certain illustrious musicians, from time to time. One well-known musician initially proposed that electronic synthesizers would replace many or all conventional instruments, particularly pianos. Two decades later, though, this same musician noted that not even the best modern synthesizers could match the quality of sound produced by pianos made by manufacturers such as Steinway and Baldwin.

See also Broadcaster guitar; Cassette recording; Compact disc; Dolby noise reduction; Transistor.

Further Reading

Hopkin, Bart. Gravikords, Whirlies and Pyrophones: Experimental Musical Instruments. Roslyn, N.Y.: Ellipsis Arts, 1996.

Koerner, Brendan I. “Back to Music’s Future.” U.S. News & World Report 122, no. 8 (March 3, 1997).

Nunziata, Susan. “Moog Keyboard Offers Human Touch.” Billboard 104, no. 7 (February 15, 1992).

Shapiro, Peter. Modulations: A History of Electronic Music: Throbbing Words on Sound. New York: Caipirinha Productions, 2000.


ENIAC computer

The invention: The first general-purpose electronic digital computer.

The people behind the invention:
John Presper Eckert (1919-1995), an electrical engineer
John William Mauchly (1907-1980), a physicist, engineer, and professor
John von Neumann (1903-1957), a Hungarian American mathematician, physicist, and logician
Herman Heine Goldstine (1913- ), an army mathematician
Arthur Walter Burks (1915- ), a philosopher, engineer, and professor
John Vincent Atanasoff (1903-1995), a mathematician and physicist

A Technological Revolution

The Electronic Numerical Integrator and Calculator (ENIAC) was the first general-purpose electronic digital computer. By demonstrating the feasibility and value of electronic digital computation, it initiated the computer revolution. The ENIAC was developed during World War II (1939-1945) at the Moore School of Electrical Engineering by a team headed by John William Mauchly and John Presper Eckert, who were working on behalf of the U.S. Ordnance Ballistic Research Laboratory (BRL) at the Aberdeen Proving Ground in Maryland. Early in the war, the BRL’s need to generate ballistic firing tables already far outstripped the combined abilities of the available differential analyzers and teams of human computers.

In 1941, Mauchly had seen the special-purpose electronic computer developed by John Vincent Atanasoff to solve sets of linear equations. Atanasoff’s computer was severely limited in scope and was never fully completed. The functioning prototype, however, helped convince Mauchly of the feasibility of electronic digital computation and so led to Mauchly’s formal proposal in April, 1943, to develop the general-purpose ENIAC. The BRL, in desperate need of computational help, agreed to fund the project, with Lieutenant Herman Heine Goldstine overseeing it for the U.S. Army.

This first substantial electronic computer was designed, built, and debugged within two and one-half years. Even given the highly talented team, this could be done only by taking as few design risks as possible. The ENIAC ended up as an electronic version of prior computers: Its functional organization was similar to that of the differential analyzer, while it was programmed via a plugboard (which was something like a telephone switchboard), much like the earlier electromechanical calculators made by the International Business Machines (IBM) Corporation. Another consequence was that the internal representation of numbers was decimal rather than the now-standard binary, since the familiar electromechanical computers used decimal digits.

Although the ENIAC was completed only after the end of the war, it was used primarily for military purposes. In fact, the first production run on the system was a two-month calculation needed for the design of the hydrogen bomb. John von Neumann, working as a consultant to both the Los Alamos Scientific Laboratory and the ENIAC project, arranged for the production run immediately prior to ENIAC’s formal dedication in 1946.

A Very Fast Machine

The ENIAC was an impressive machine: It contained 18,000 vacuum tubes, weighed 27 metric tons, and occupied a large room. The final cost to the U.S. Army was about $486,000. For this price, the army received a machine that computed up to a thousand times faster than its electromechanical precursors; for example, addition and subtraction required only 200 microseconds (200 millionths of a second). At its dedication ceremony, the ENIAC was fast enough to calculate a fired shell’s trajectory faster than the shell itself took to reach its target. The machine also was much more complex than any predecessor and employed a risky new technology in vacuum tubes; this caused much concern about its potential reliability.
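The firing-table work that motivated the ENIAC amounts to integrating a shell’s equations of motion step by step until impact. The ENIAC itself was programmed with plugboards rather than in any programming language, but the underlying arithmetic can be sketched in Python; the drag coefficient, step size, and launch values below are illustrative stand-ins, not actual table parameters.

```python
import math

def trajectory_range(v0, angle_deg, drag=0.00005, dt=0.01, g=9.81):
    """Step a shell's motion forward in time (Euler's method) and
    return the horizontal distance traveled when it returns to the ground.
    The drag model and constants are toy values for illustration."""
    angle = math.radians(angle_deg)
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)
    while y >= 0.0:
        speed = math.hypot(vx, vy)
        vx -= drag * speed * vx * dt          # air resistance opposes motion
        vy -= (g + drag * speed * vy) * dt    # gravity plus drag
        x += vx * dt
        y += vy * dt
    return x

# Each (muzzle velocity, elevation angle) pair yields one table entry:
print(round(trajectory_range(500.0, 45.0)))
```

A firing table tabulated thousands of such entries over a grid of velocities and elevations, which is why a differential analyzer or a room of human computers could not keep up.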
In response to this concern, Eckert, the lead engineer, imposed strict safety factors on all components, requiring the design to use components at a level well below the manufacturers’ specified limits. The result was a machine that ran for as long as three days without a hardware malfunction.

Programming the ENIAC was effected by setting switches and physically connecting accumulators, function tables (a kind of manually set read-only memory), and control units. Connections were made via cables running between plugboards. This was a laborious and error-prone process, often requiring a one-day setup time. The team recognized this problem, and in early 1945, Eckert, Mauchly, and von Neumann worked on the design of a new machine. Their basic idea was to treat both program and data in the same way, and in particular to store them in the same high-speed memory; in other words, they planned to produce a stored-program computer. Von Neumann described and explained this design in his “First Draft of a Report on the EDVAC” (EDVAC is an acronym for Electronic Discrete Variable Automatic Computer). In his report, von Neumann contributed new design techniques and provided the first general, comprehensive description of the stored-program architecture.

After the delivery of the ENIAC, von Neumann suggested that it could be wired up so that a set of instructions would be permanently available and could be selected by entries in the function tables. Engineers implemented the idea, providing sixty instructions that could be invoked from the programs stored in the function tables. Despite slowing down the computer’s calculations, this technique was so superior to plugboard programming that it was used exclusively thereafter. In this way, the ENIAC was converted into a kind of primitive stored-program computer.

Impact

The ENIAC’s electronic speed and the stored-program design of the EDVAC posed a serious engineering challenge: to produce a computer memory that would be large, inexpensive, and fast. Without such fast memories, the electronic control logic would spend most of its time idling.
Vacuum tubes themselves (used in the control) were not an effective answer because of their large power requirements and heat generation. The EDVAC design draft proposed using mercury delay lines, which had been used earlier in radars. These delay lines converted an electronic signal into a slower acoustic signal in a mercury solution; for continuous storage, the signal picked up at the other end was regenerated and sent back into the mercury. Maurice Vincent Wilkes at the University of Cambridge was the first to complete such a system, in May, 1949. One month earlier, Frederic Calland Williams and Tom Kilburn at Manchester University had brought their prototype computer into operation, which used cathode-ray tubes (CRTs) for its main storage. Thus, England took an early lead in developing computing systems, largely because of a more immediate practical design approach.

In the meantime, Eckert and Mauchly formed the Electronic Control Company (later the Eckert-Mauchly Computer Corporation). They produced the Binary Automatic Computer (BINAC) in 1949 and the Universal Automatic Computer (UNIVAC) I in 1951; both machines used mercury storage. The memory problem that the ENIAC introduced was finally resolved with the invention of the magnetic core in the early 1950’s. Core memory was installed on the ENIAC and soon on all new machines. The ENIAC continued in operation until October, 1955, when parts of it were retired to the Smithsonian Institution.

The ENIAC proved the viability of digital electronics and led directly to the development of stored-program computers. Its impact can be seen in every modern digital computer.

See also Apple II computer; BINAC computer; Colossus computer; IBM Model 1401 computer; Personal computer; Supercomputer; UNIVAC computer.

Further Reading

Burks, Alice R., and Arthur W. Burks. The First Electronic Computer: The Atanasoff Story. Ann Arbor: University of Michigan Press, 1990.

McCartney, Scott. ENIAC: The Triumphs and Tragedies of the World’s First Computer. New York: Berkley Books, 2001.

Slater, Robert. Portraits in Silicon. Cambridge, Mass.: MIT Press, 1989.

Stern, Nancy B. From ENIAC to UNIVAC: An Appraisal of the Eckert-Mauchly Computers. Bedford, Mass.: Digital Press, 1981.


Fax machine

The invention: Originally known as the “facsimile machine,” a machine that converts written and printed images into electrical signals that can be sent via telephone, computer, or radio.

The person behind the invention:
Alexander Bain (1818-1903), a Scottish inventor

Sending Images

The invention of the telegraph and telephone during the latter half of the nineteenth century gave people the ability to send information quickly over long distances. With the invention of radio and television technologies, voices and moving pictures could be seen around the world as well. Oddly, however, the facsimile process—which involves the transmission of pictures, documents, or other physical data over distance—predates all these modern devices, since a simple facsimile apparatus (usually called a fax machine) was patented in 1843 by Alexander Bain. This early device used a pendulum to synchronize the transmitting and receiving units; it did not convert the image into an electrical format, however, and it was quite crude and impractical. Nevertheless, it reflected the desire to send images over long distances, which remained a technological goal for more than a century.

Facsimile machines developed in the period around 1930 enabled news services to provide newspapers around the world with pictures for publication. It was not until the 1970’s, however, that technological advances made small fax machines available for everyday office use.

Scanning Images

Both the fax machines of the 1930’s and those of today operate on the basis of the same principle: scanning. In early machines, an image (a document or a picture) was attached to a roller, placed in the fax machine, and rotated at a slow and fixed speed (which must be the same at each end of the link) in a bright light. Light from the image was reflected from the document in varying degrees, since dark areas reflect less light than lighter areas do. A lens moved across the page one line at a time, concentrating and directing the reflected light to a photoelectric tube. This tube would respond to the change in light level by varying its electric output, thus converting the image into an output signal whose intensity varied with the changing light and dark spots of the image.

Much like the signal from a microphone or television camera, this modulated (varying) wave could then be broadcast by radio or sent over telephone lines to a receiver that performed a reverse function. At the receiving end, a light bulb was made to vary its intensity to match the varying intensity of the incoming signal. The output of the light bulb was concentrated through a lens onto photographically sensitive paper, thus re-creating the original image as the paper was rotated.

Early fax machines were bulky and often difficult to operate. Advances in semiconductor and computer technology in the 1970’s, however, made the goal of creating an easy-to-use and inexpensive fax machine realistic. Instead of a photoelectric tube that consumes a relatively large amount of electrical power, a row of small photodiode semiconductors is used to measure light intensity. Instead of a power-consuming light source, low-power light-emitting diodes (LEDs) are used. Some 1,728 light-sensitive diodes are placed in a row, and the image to be scanned is passed over them one line at a time. Each diode registers either a dark or a light portion of the image. As each diode is checked in sequence, it produces a signal for one picture element, also known as a “pixel” or “pel.” Because many diodes are used, there is no need for a focusing lens; the diode bar is as wide as the page being scanned, and each pixel represents a portion of a line on that page.
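The behavior of the photodiode row can be sketched as a simple thresholding step: each diode turns the light reflected from its slice of a line into a single black-or-white pixel. A minimal Python illustration follows; the reflectance values and the threshold are hypothetical, and a real scan line would have 1,728 entries rather than five.

```python
def scan_line(reflectances, threshold=0.5):
    """Model one pass over the photodiode row: each diode compares the
    light reflected from its slice of the page against a threshold and
    reports one pixel, 0 for dark ink and 1 for light paper."""
    return [1 if light >= threshold else 0 for light in reflectances]

# A line containing a dark mark in the middle of white paper:
print(scan_line([0.9, 0.9, 0.1, 0.2, 0.9]))   # [1, 1, 0, 0, 1]
```

Repeating this over successive lines as the page advances yields the full bitmap that the modem then transmits.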
Since most fax transmissions take place over public telephone system lines, the signal from the photodiodes is transmitted by means of a built-in computer modem in much the same format that computers use to transmit data over telephone lines. The receiving fax uses its modem to convert the audible signal into a sequence that varies in intensity in proportion to the original signal. This varying signal is then sent in proper sequence to a row of 1,728 small wires over which a chemically treated paper is passed. As each wire receives a signal that represents a black portion of the scanned image, the wire heats and, in contact with the paper, produces a black dot that corresponds to the transmitted pixel. As the page is passed over these wires one line at a time, the original image is re-created.

Consequences

The fax machine has long been in use in many commercial and scientific fields. Weather data in the form of pictures are transmitted from orbiting satellites to ground stations; newspapers receive photographs from international news sources via fax; and, using a very expensive but very high-quality fax device, newspapers and magazines are able to transmit full-size proof copies of each edition to printers thousands of miles away so that a publication edited in one country can reach newsstands around the world quickly.

With the technological advances that have been made in recent years, however, fax transmission has become a part of everyday life, particularly in business and research environments. The ability to send a copy of a letter, document, or report quickly over thousands of miles means that information can be shared in a matter of minutes rather than in a matter of days. In fields such as advertising and architecture, it is often necessary to send pictures or drawings to remote sites. Indeed, the fax machine has played an important role in providing information to distant observers of political unrest when other sources of information (such as radio, television, and newspapers) are shut down.

In fact, there has been a natural coupling of computers, modems, and fax devices. Since modern faxes are sent as computer data over phone lines, specialized and inexpensive modems (which allow two computers to share data) have been developed that allow any computer user to send and receive faxes without bulky machines. For example, a document—including drawings, pictures, or graphics of some kind—is created in a computer and transmitted directly to another fax machine.
That computer can also receive a fax transmission and either display it on the computer’s screen or print it on the local printer. Since fax technology is now within the reach of almost anyone who is interested in using it, there is little doubt that it will continue to grow in popularity.


See also Communications satellite; Instant photography; Internet; Personal computer; Xerography.

Further Reading

Bain, Alexander, and Leslie William Davidson. Autobiography. New York: Longmans, Green, 1973.

Cullen, Scott. “Telecommunications in the Office.” Office Systems 16, no. 12 (December, 1999).

Holtzmann, Gerald J. “Just the Fax.” Inc. 20, no. 13 (September 15, 1998).

Hunkin, Tim. “Just Give Me the Fax.” New Scientist 137, no. 1860 (February 13, 1993).


Fiber-optics

The invention: The application of glass fibers to electronic communications and other fields to carry large volumes of information quickly, smoothly, and cheaply over great distances.

The people behind the invention:
Samuel F. B. Morse (1791-1872), the American artist and inventor who developed the electromagnetic telegraph system
Alexander Graham Bell (1847-1922), the Scottish American inventor and educator who invented the telephone and the photophone
Theodore H. Maiman (1927- ), the American physicist and engineer who invented the solid-state laser
Charles K. Kao (1933- ), a Chinese-born electrical engineer
Zhores I. Alferov (1930- ), a Russian physicist and mathematician

The Singing Sun

In 1844, Samuel F. B. Morse, inventor of the telegraph, sent his famous message, “What hath God wrought?” by electrical impulses traveling at the speed of light over a 66-kilometer telegraph wire strung between Washington, D.C., and Baltimore. Ever since that day, scientists have worked to find faster, less expensive, and more efficient ways to convey information over great distances.

At first, the telegraph was used to report stock-market prices and the results of political elections. The telegraph was quite important in the American Civil War (1861-1865). The first transcontinental telegraph message was sent by Stephen J. Field, chief justice of the California Supreme Court, to U.S. president Abraham Lincoln on October 24, 1861. The message declared that California would remain loyal to the Union. By 1866, telegraph lines had reached all across the North American continent, and a telegraph cable had been laid beneath the Atlantic Ocean to link the Old World with the New World.


Zhores I. Alferov

To create a telephone system that transmitted with light, perfecting fiber-optic cables was only half the solution. There also had to be a small, reliable, energy-efficient light source. In the 1960’s engineers realized that lasers were the best candidate. However, early gas lasers were bulky, and semiconductor lasers, while small, were temperamental and had to be cooled in liquid nitrogen. Nevertheless, the race was on to devise a semiconductor laser that produced a continuous beam and did not need to be cooled. The race was between a Bell Labs team in the United States and a Russian team led by Zhores I. Alferov, neither of which knew much about the other.

Alferov was born in 1930 in Vitebsk, Byelorussia, then part of the Soviet Union. He earned a degree in electronics from the V. I. Ulyanov (Lenin) Electrotechnical Institute in Leningrad (now St. Petersburg). As part of his graduate studies, he became a researcher at the A. F. Ioffe Physico-Technical Institute in the same city, receiving a doctorate in physics and mathematics in 1970. By then he was one of the world’s leading experts in semiconductor lasers.

Alferov found that he could improve the laser’s performance by sandwiching very thin layers of gallium arsenide and metal, insulated in silicon, in such a way that electrons flowed only along a 0.03-millimeter strip, producing light in the process. This double-heterojunction narrow-stripe laser was the answer, producing a steady beam at room temperature. Alferov published his results a month before the American team came up with almost precisely the same solution. The question of who was first was not settled until much later, during which time both Bell Labs and Alferov’s institute went on to further refinements of the technology.

Alferov rose to become a dean at the St. Petersburg Technical University and vice-president of the Russian Academy of Sciences. In 2000 he shared the Nobel Prize in Physics.

Another inventor made the leap from the telegraph to the telephone. Alexander Graham Bell, a teacher of the deaf, was interested in the physical way speech works. In 1875, he started experimenting with ways to transmit sound vibrations electrically. He realized that an electrical current could be adjusted to resemble the vibrations of speech. Bell patented his invention on March 7, 1876. On July 9, 1877, he founded the Bell Telephone Company.

In 1880, Bell invented a device called the “photophone.” He used it to demonstrate that speech could be transmitted on a beam of light. Light is a form of electromagnetic energy. It travels in a vibrating wave. When the amplitude (height) of the wave is adjusted, a light beam can be made to carry messages. Bell’s invention included a thin mirrored disk that converted sound waves directly into a beam of light. At the receiving end, a selenium resistor connected to a headphone converted the light back into sound. “I have heard a ray of sun laugh and cough and sing,” Bell wrote of his invention.

Although Bell proved that he could transmit speech over distances of several hundred meters with the photophone, the device was awkward and unreliable, and it never became popular as the telephone did. Not until one hundred years later did researchers find important practical uses for Bell’s idea of talking on a beam of light. Two other major discoveries needed to be made first: the development of the laser and of high-purity glass.

Theodore H. Maiman, an American physicist and electrical engineer at Hughes Research Laboratories in Malibu, California, built the first laser. The laser produces an intense, narrowly focused beam of light that can be adjusted to carry huge amounts of information. The word itself is an acronym for light amplification by the stimulated emission of radiation. It soon became clear, though, that even bright laser light can be broken up and absorbed by smog, fog, rain, and snow. So in 1966, Charles K. Kao, an electrical engineer at the Standard Telecommunications Laboratories in England, suggested that glass fibers could be used to transmit message-carrying beams of laser light without disruption from weather.

Fiber Optics Are Tested

Optical glass fiber is made from common materials, mostly silica, soda, and lime.
The inside of a delicate silica glass tube is coated with a hundred or more layers of extremely thin glass. The tube is then heated to 2,000 degrees Celsius and collapsed into a thin glass rod, or preform. The preform is then pulled into thin strands of fiber. The fibers are coated with plastic to protect them from being nicked or scratched, and then they are covered in flexible cable.

Fiber-optics

The earliest glass fibers contained many impurities and defects, so they did not carry light well. Signal repeaters were needed every few meters to energize (amplify) the fading pulses of light. In 1970, however, researchers at the Corning Glass Works in New York developed a fiber pure enough to carry light at least one kilometer without amplification.

Fiber optic strands. (PhotoDisc)

The telephone industry quickly became involved in the new fiber-optics technology. Researchers believed that a bundle of optical fibers as thin as a pencil could carry several hundred telephone calls at the same time. Optical fibers were first tested by telephone companies in big cities, where the great volume of calls often overloaded standard underground phone lines.

On May 11, 1977, American Telephone & Telegraph Company (AT&T), along with Illinois Bell Telephone, Western Electric, and Bell Telephone Laboratories, began the first commercial test of fiber-optics telecommunications in downtown Chicago. The system consisted of a 2.4-kilometer cable laid beneath city streets. The cable, only 1.3 centimeters in diameter, linked an office building in the downtown business district with two telephone exchange centers. Voice and video signals were coded into pulses of laser light and transmitted through the hair-thin glass fibers. The tests showed that a single pair of fibers could carry nearly six hundred telephone conversations at once very reliably and at a reasonable cost.

Six years later, in October, 1983, Bell Laboratories succeeded in transmitting the equivalent of six thousand telephone signals through an optical fiber cable that was 161 kilometers long. Since that time, countries all over the world, from England to Indonesia, have developed optical communications systems.
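A rough sanity check on the trial figures above, assuming each call is carried as a standard 64 kbit/s digital voice channel (the article does not state the encoding, so that channel rate is an assumption, not a reported figure):

```python
# Back-of-envelope capacity check for the 1977 Chicago fiber trial.
# Assumption (not from the article): one phone call = one standard
# 64 kbit/s digital voice channel.
VOICE_CHANNEL_KBPS = 64

def calls_supported(line_rate_mbps):
    """Whole voice channels that fit in a given line rate."""
    return int(line_rate_mbps * 1000 // VOICE_CHANNEL_KBPS)

# ~600 simultaneous calls on one fiber pair implies a line rate of roughly:
required_mbps = 600 * VOICE_CHANNEL_KBPS / 1000        # 38.4 Mbit/s

# The 1983 Bell Labs result (6,000 voice signals over 161 km) scales the
# same way, an order of magnitude higher:
required_mbps_1983 = 6000 * VOICE_CHANNEL_KBPS / 1000  # 384 Mbit/s
```

The point of the arithmetic is only that a few tens of megabits per second, modest by optical standards, already outruns what a copper pair of the same era could carry over such distances.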


Consequences

Fiber optics has had a great impact on telecommunications. A single fiber can now carry thousands of conversations with no electrical interference. These fibers are less expensive, weigh less, and take up much less space than copper wire. As a result, people can carry on conversations over long distances without static and at a low cost.

One of the first uses of fiber optics and perhaps its best-known application is the fiberscope, a medical instrument that permits internal examination of the human body without surgery or X-ray techniques. The fiberscope, or endoscope, consists of two fiber bundles. One of the fiber bundles transmits bright light into the patient, while the other conveys a color image back to the eye of the physician. The fiberscope has been used to look for ulcers, cancer, and polyps in the stomach, intestine, and esophagus of humans. Medical instruments, such as forceps, can be attached to the fiberscope, allowing the physician to perform a range of medical procedures, such as clearing a blocked windpipe or cutting precancerous polyps from the colon.

See also Cell phone; Community antenna television; Communications satellite; FM radio; Laser; Long-distance radiotelephony; Long-distance telephone; Telephone switching.

Further Reading
Carey, John, and Neil Gross. “The Light Fantastic: Optoelectronics May Revolutionize Computers—and a Lot More.” Business Week (May 10, 1993).
Free, John. “Fiber Optics Head for Home.” Popular Science 238 (March, 1991).
Hecht, Jeff. City of Light: The Story of Fiber Optics. Oxford: Oxford University Press, 1999.
Paul, Noel C. “Laying Down the Line with Huge Projects to Circle the Globe in Fiber Optic Cable.” Christian Science Monitor (March 29, 2001).
Shinal, John G., with Timothy J. Mullaney. “At the Speed of Light.” Business Week (October 9, 2000).


Field ion microscope

The invention: A microscope that uses ions formed in high-voltage electric fields to view atoms on metal surfaces.

The people behind the invention:
Erwin Wilhelm Müller (1911-1977), a physicist, engineer, and research professor
J. Robert Oppenheimer (1904-1967), an American physicist

To See Beneath the Surface

In the early twentieth century, developments in physics, especially quantum mechanics, paved the way for the application of new theoretical and experimental knowledge to the problem of viewing the atomic structure of metal surfaces. Of primary importance were American physicist George Gamow’s 1928 theoretical explanation of the field emission of electrons by quantum mechanical means and J. Robert Oppenheimer’s 1928 prediction of the quantum mechanical ionization of hydrogen in a strong electric field.

In 1936, Erwin Wilhelm Müller developed his field emission microscope, the first in a series of instruments that would exploit these developments. It was to be the first instrument to view atomic structures—although not the individual atoms themselves—directly. Müller’s subsequent field ion microscope utilized the same basic concepts used in the field emission microscope yet proved to be a much more powerful and versatile instrument. By 1956, Müller’s invention allowed him to view the crystal lattice structure of metals in atomic detail; it actually showed the constituent atoms.

The field emission and field ion microscopes make it possible to view the atomic surface structures of metals on fluorescent screens. The field ion microscope is the direct descendant of the field emission microscope. In the case of the field emission microscope, the images are projected by electrons emitted directly from the tip of a metal needle, which constitutes the specimen under investigation.


These electrons produce an image of the atomic lattice structure of the needle’s surface. The needle serves as the electron-donating electrode in a vacuum tube, also known as the “cathode.” A fluorescent screen that serves as the electron-receiving electrode, or “anode,” is placed opposite the needle. When sufficient electrical voltage is applied across the cathode and anode, the needle tip emits electrons, which strike the screen. The image produced on the screen is a projection of the electron source—the needle surface’s atomic lattice structure.

Müller studied the effect of needle shape on the performance of the microscope throughout much of 1937. When the needles had been properly shaped, Müller was able to realize magnifications of up to 1 million times. This magnification allowed Müller to view what he called “maps” of the atomic crystal structure of metals, since the needles were so small that they were often composed of only one simple crystal of the material. While the magnification may have been great, however, the resolution of the instrument was severely limited by the physics of emitted electrons, which caused the images Müller obtained to be blurred.

Improving the View

In 1943, while working in Berlin, Müller realized that the resolution of the field emission microscope was limited by two factors. The electron velocity, a particle property, was extremely high and uncontrollably random, causing the micrographic images to be blurred. In addition, the electrons had an unsatisfactorily high wavelength. When Müller combined these two factors, he was able to determine that the field emission microscope could never depict single atoms; it was a physical impossibility for it to distinguish one atom from another. By 1951, this limitation led him to develop the technology behind the field ion microscope. In 1952, Müller moved to the United States and founded the Pennsylvania State University Field Emission Laboratory.
He perfected the field ion microscope between 1952 and 1956. The field ion microscope utilized positive ions instead of electrons to create the atomic surface images on the fluorescent screen.
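The million-fold magnifications Müller achieved follow from simple point projection: particles fly nearly radially from the tiny needle tip to the distant screen, so magnification is roughly the screen distance divided by the tip radius. The tip radius and screen distance below are typical textbook values, not figures given in this article, and the image-compression factor β is an empirical correction:

```python
def projection_magnification(screen_distance_m, tip_radius_m, beta=1.5):
    # Point-projection imaging: a surface feature of size d on a tip of
    # radius r appears on a screen at distance L with size d * L / (beta * r).
    # beta (~1.5-1.8 in practice) accounts for compression of the ion
    # trajectories by the field; both numbers here are illustrative.
    return screen_distance_m / (beta * tip_radius_m)

# A 50 nm tip radius and a screen 7.5 cm away give the magnification
# range the article quotes:
m = projection_magnification(0.075, 50e-9)   # ~1,000,000x
```

No lenses are involved at all, which is why such an extreme magnification was attainable with 1930s-1950s technology: the geometry of the tip does the work.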


Erwin Müller

Erwin Müller’s scientific goal was to see an individual atom, and to that purpose he invented ever more powerful microscopes. He was born in Berlin, Germany, in 1911 and attended the city’s Technische Hochschule, earning a diploma in engineering in 1935 and a doctorate in physics in 1936. Following his studies he worked as an industrial researcher. Still a neophyte scientist, he discovered the principle of the field emission microscope and was able to produce an image of a structure only two nanometers in diameter on the surface of a cathode.

In 1941 Müller discovered field desorption by reversing the polarity of the electron emitter at very low temperatures so that surface atoms evaporated in the electric field. In 1947 he left industry and began an academic career, teaching physical chemistry at the Altenburg Engineering School. The following year he was appointed a department head at the Fritz Haber Institute. While there, he found that by having a cathode absorb gas ions and then re-emit them he could produce greater magnification.

In 1952 Müller became a professor at Pennsylvania State University. Applying the new field-ion emission principle, he was able to achieve his goal, images of individual atoms, in 1956. Almost immediately chemists and physicists adopted the field-ion microscope to conduct basic research concerning the underlying behavior of field ionization and interactions among absorbed atoms. He further aided such research by coupling a field-ion microscope and mass spectrometer, calling the combination an atom-probe field-ion microscope; it could both magnify and chemically analyze atoms. Müller died in 1977. He received the National Medal of Science posthumously, one of many honors for his contributions to microscopy.

When an easily ionized gas—at first hydrogen, but usually helium, neon, or argon—was introduced into the evacuated tube, the emitted electrons ionized the gas atoms, creating a stream of positively charged particles, much as Oppenheimer had predicted in 1928. Müller’s use of positive ions circumvented one of the resolution problems inherent in the use of imaging electrons. Like the electrons, however, the positive ions traversed the tube with unpredictably random velocities. Müller eliminated this problem by cryogenically cooling the needle tip with a supercooled liquefied gas such as nitrogen or hydrogen.

By 1956, Müller had perfected the means of supplying imaging positive ions by filling the vacuum tube with an extremely small quantity of an inert gas such as helium, neon, or argon. By using such a gas, Müller was assured that no chemical reaction would occur between the needle tip and the gas; any such reaction would alter the surface atomic structure of the needle and thus alter the resulting microscopic image. The imaging ions allowed the field ion microscope to image the emitter surface to a resolution of between two and three angstroms, making it ten times more accurate than its close relative, the field emission microscope.

Consequences

The immediate impact of the field ion microscope was its influence on the study of metallic surfaces. It is a well-known fact of materials science that the physical properties of metals are influenced by the imperfections in their constituent lattice structures. It was not possible to view the atomic structure of the lattice, and thus the finest detail of any imperfection, until the field ion microscope was developed. The field ion microscope is the only instrument powerful enough to view the structural flaws of metal specimens in atomic detail.

Although the instrument may be extremely powerful, the extremely large electrical fields required in the imaging process preclude the instrument’s application to all but the hardiest of metallic specimens. The field strength of 500 million volts per centimeter exerts an average stress on metal specimens in the range of almost 1 ton per square millimeter. Metals such as iron and platinum can withstand this strain because of the shape of the needles into which they are formed.
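The stress figure quoted above can be checked by treating the field as exerting the Maxwell electrostatic tension ε₀E²/2 on the tip surface; the formula is standard electrostatics rather than anything stated in the article, but the numbers agree:

```python
# Electrostatic (Maxwell) tension on a conductor surface in a field E:
#   p = eps0 * E^2 / 2
EPS0 = 8.854e-12                 # vacuum permittivity, F/m
E = 500e6 * 100                  # 500 million V/cm, converted to V/m

stress_pa = 0.5 * EPS0 * E ** 2          # ~1.1e10 Pa

TON_FORCE_PER_MM2_IN_PA = 9.81e9         # 1 metric ton-force per mm^2
stress_tons_per_mm2 = stress_pa / TON_FORCE_PER_MM2_IN_PA   # ~1.1
```

The result, roughly eleven gigapascals, is indeed on the order of one ton-force per square millimeter, near the ultimate tensile strength of the strongest metals, which is why only very robust specimens survive imaging.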
Yet this limitation of the instrument makes it extremely difficult to examine biological materials, which cannot withstand the amount of stress that metals can. A practical by-product in the study of field ionization—field evaporation—eventually allowed surface scientists to view the atomic structures of large biological molecules. By embedding molecules such as phthalocyanine within the metal needle, scientists have been able to view the atomic structures of large biological molecules by field evaporating much of the surrounding metal until the biological material remains at the needle’s surface.

See also Cyclotron; Electron microscope; Mass spectrograph; Neutrino detector; Scanning tunneling microscope; Sonar; Synchrocyclotron; Tevatron accelerator; Ultramicroscope.

Further Reading
Gibson, J. M. “Tools for Probing ‘Atomic’ Action.” IEEE Spectrum 22, no. 12 (December, 1985).
Kunetka, James W. Oppenheimer: The Years of Risk. Englewood Cliffs, N.J.: Prentice-Hall, 1982.
Schweber, Silvan S. In the Shadow of the Bomb: Bethe, Oppenheimer, and the Moral Responsibility of the Scientist. Princeton, N.J.: Princeton University Press, 2000.
Tsong, Tien Tzou. Atom-Probe Field Ion Microscopy: Field Ion Emission and Surfaces and Interfaces at Atomic Resolution. New York: Cambridge University Press, 1990.


Floppy disk

The invention: Inexpensive magnetic medium for storing and moving computer data.

The people behind the invention:
Andrew D. Booth (1918- ), an English inventor who developed paper disks as a storage medium
Reynold B. Johnson (1906-1998), a design engineer at IBM’s research facility who oversaw development of magnetic disk storage devices
Alan Shugart (1930- ), an engineer at IBM’s research laboratory who first developed the floppy disk as a means of mass storage for mainframe computers

First Tries

When the International Business Machines (IBM) Corporation decided to concentrate on the development of computers for business use in the 1950’s, it faced a problem that had troubled the earliest computer designers: how to store data reliably and inexpensively. In the early days of computers (the early 1940’s), a number of ideas were tried. The English inventor Andrew D. Booth produced spinning paper disks on which he stored data by means of punched holes, only to abandon the idea because of the insurmountable engineering problems he foresaw.

The next step was “punched” cards, an idea first used when the French inventor Joseph-Marie Jacquard invented an automatic weaving loom for which patterns were stored in pasteboard cards. The idea was refined by the English mathematician and inventor Charles Babbage for use in his “analytical engine,” an attempt to build a kind of computing machine. Although it was simple and reliable, it was not fast enough, nor did it store enough data, to be truly practical.

The Ampex Corporation demonstrated its first magnetic audiotape recorder after World War II (1939-1945). Shortly after that, the Binary Automatic Computer (BINAC) was introduced with a storage device that appeared to be a large tape recorder. A more advanced machine, the Universal Automatic Computer (UNIVAC), used metal tape instead of plastic (plastic was easily stretched or even broken). Unfortunately, metal tape was considerably heavier, and its edges were razor-sharp and thus dangerous. Improvements in plastic tape eventually produced sturdy media, and magnetic tape became (and remains) a practical medium for storage of computer data.

Still later designs combined Booth’s spinning paper disks with magnetic technology to produce rapidly rotating “drums.” Whereas a tape might have to be fast-forwarded nearly to its end to locate a specific piece of data, a drum rotating at speeds up to 12,500 revolutions per minute (rpm) could retrieve data very quickly and could store more than 1 million bits (or approximately 125 kilobytes) of data. In May, 1955, these drums evolved, under the direction of Reynold B. Johnson, into IBM’s hard disk unit. The hard disk unit consisted of fifty platters, each 2 feet in diameter, rotating at 1,200 rpm. Both sides of the disk could be used to store information. When the operator wished to access the disk, at his or her command a read/write head was moved to the right disk and to the side of the disk that held the desired data. The operator could then read data from or record data onto the disk. To speed things even more, the next version of the device, similar in design, employed one hundred read/write heads—one for each of its fifty double-sided disks. The only remaining disadvantage was its size, which earned IBM’s first commercial unit the nickname “jukebox.”

The First Floppy

The floppy disk drive developed directly from hard disk technology. It did not take shape until the late 1960’s under the direction of Alan Shugart (it was announced by IBM as a ready product in 1970).
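The speed advantage of rotating storage over tape can be put in numbers: on average, a requested bit is half a revolution away, so the mean rotational delay is half the revolution time. The formula is general; only the rpm and capacity figures come from the text above:

```python
def avg_rotational_latency_ms(rpm):
    # On average the requested bit is half a revolution away,
    # so the mean wait is half of one revolution period.
    return 0.5 * 60_000 / rpm   # 60,000 ms per minute

drum_ms = avg_rotational_latency_ms(12_500)   # drum at 12,500 rpm -> 2.4 ms
disk_ms = avg_rotational_latency_ms(1_200)    # IBM unit at 1,200 rpm -> 25 ms

# The article's capacity conversion also checks out:
kilobytes = 1_000_000 / 8 / 1_000             # 1 million bits = 125 KB
```

A few milliseconds to reach any bit, against seconds or minutes of fast-forwarding a tape, is what made rotating media the natural path to the hard disk and, from there, the floppy.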
First created to help restart the operating systems of mainframe computers that had gone dead, the floppy seemed in some ways to be a step back, for it operated more slowly than a hard disk drive and did not store as much data. Initially, it consisted of a single thin plastic disk eight inches in diameter and was developed without the protective envelope in which it is now universally encased. The addition of that jacket gave the floppy its single greatest advantage over the hard disk: portability with reliability.

Another advantage soon became apparent: The floppy is resilient to damage. In a hard disk drive, the read/write heads must hover thousandths of a centimeter over the disk surface in order to attain maximum performance. Should even a small particle of dust get in the way, or should the drive unit be bumped too hard, the head may “crash” into the surface of the disk and ruin its magnetic coating; the result is a permanent loss of data. Because the floppy operates with the read/write head in contact with the flexible plastic disk surface, individual particles of dust or other contaminants are not nearly as likely to cause disaster.

As a result of its advantages, the floppy disk was the logical choice for mass storage in personal computers (PCs), which were developed a few years after the floppy disk’s introduction. The floppy is still an important storage device even though hard disk drives for PCs have become less expensive. Moreover, manufacturers continually are developing new floppy formats and new floppy disks that can hold more data.

Three-and-one-half-inch disks improved on the design of earlier floppies by protecting their magnetic media within hard plastic shells and using sliding metal flanges to protect the surfaces on which recording heads make contact. (PhotoDisc)


Consequences

Personal computing would have developed very differently were it not for the availability of inexpensive floppy disk drives. When IBM introduced its PC in 1981, the machine provided as standard equipment a connection for a cassette tape recorder as a storage device; a floppy disk was only an option (though an option few did not take). The awkwardness of tape drives—their slow speed and sequential nature of storing data—presented clear obstacles to the acceptance of the personal computer as a basic information tool. By contrast, the floppy drive gives computer users relatively fast storage at low cost.

Floppy disks provided more than merely economical data storage. Since they are built to be removable (unlike hard drives), they represented a basic means of transferring data between machines. Indeed, prior to the popularization of local area networks (LANs), the floppy was known as a “sneaker” network: One merely carried the disk by foot to another computer. Floppy disks were long the primary means of distributing new software to users. Even the very flexible floppy showed itself to be quite resilient to the wear and tear of postal delivery. Later, the 3.5-inch disk improved upon the design of the original 8-inch and 5.25-inch floppies by protecting the disk medium within a hard plastic shell and by using a sliding metal door to protect the area where the read/write heads contact the disk.

By the late 1990’s, floppy disks were giving way to new data-storage media, particularly CD-ROMs—durable laser-encoded disks that hold more than 700 megabytes of data. As the price of blank CDs dropped dramatically, floppy disks tended to be used mainly for short-term storage of small amounts of data. Floppy disks were also being used less and less for data distribution and transfer, as computer users turned increasingly to sending files via e-mail on the Internet, and software providers made their products available for downloading on Web sites.
See also Bubble memory; Compact disc; Computer chips; Hard disk; Optical disk; Personal computer.


Further Reading
Brandel, Mary. “IBM Fashions the Floppy.” Computerworld 33, no. 23 (June 7, 1999).
Chposky, James, and Ted Leonsis. Blue Magic: The People, Power, and Politics Behind the IBM Personal Computer. New York: Facts on File, 1988.
Freiberger, Paul, and Michael Swaine. Fire in the Valley: The Making of the Personal Computer. New York: McGraw-Hill, 2000.
Grossman, Wendy. Remembering the Future: Interviews from Personal Computer World. New York: Springer, 1997.


Fluorescent lighting

The invention: A form of electrical lighting that uses a glass tube coated with phosphor that gives off a cool bluish light and emits ultraviolet radiation.

The people behind the invention:
Vincenzo Cascariolo (1571-1624), an Italian alchemist and shoemaker
Heinrich Geissler (1814-1879), a German glassblower
Peter Cooper Hewitt (1861-1921), an American electrical engineer

Celebrating the “Twelve Greatest Inventors”

On the night of November 23, 1936, more than one thousand industrialists, patent attorneys, and scientists assembled in the main ballroom of the Mayflower Hotel in Washington, D.C., to celebrate the one hundredth anniversary of the U.S. Patent Office. A transport liner over the city radioed the names chosen by the Patent Office as America’s “Twelve Greatest Inventors,” and, as the distinguished group strained to hear those names, “the room was flooded for a moment by the most brilliant light yet used to illuminate a space that size.” Thus did The New York Times summarize the commercial introduction of the fluorescent lamp. The twelve inventors present were Thomas Alva Edison, Robert Fulton, Charles Goodyear, Charles Hall, Elias Howe, Cyrus Hall McCormick, Ottmar Mergenthaler, Samuel F. B. Morse, George Westinghouse, Wilbur Wright, and Eli Whitney. There was, however, no name to bear the honor for inventing fluorescent lighting. That honor is shared by many who participated in a very long series of discoveries.

The fluorescent lamp operates as a low-pressure, electric discharge inside a glass tube that contains a droplet of mercury and a gas, commonly argon. The inside of the glass tube is coated with fine particles of phosphor. When electricity is applied to the gas, the mercury gives off a bluish light and emits ultraviolet radiation.


When bathed in the strong ultraviolet radiation emitted by the mercury, the phosphor fluoresces (emits light).

The setting for the introduction of the fluorescent lamp began at the beginning of the 1600’s, when Vincenzo Cascariolo, an Italian shoemaker and alchemist, discovered a substance that gave off a bluish glow in the dark after exposure to strong sunlight. The fluorescent substance was apparently barium sulfide and was so unusual for that time and so valuable that its formulation was kept secret for a long time. Gradually, however, scholars became aware of the preparation secrets of the substance and studied it and other luminescent materials.

Further studies in fluorescent lighting were made by the German physicist Johann Wilhelm Ritter. He observed the luminescence of phosphors that were exposed to various “exciting” lights. In 1801, he noted that some phosphors shone brightly when illuminated by light that the eye could not see (ultraviolet light). Ritter thus discovered the ultraviolet region of the light spectrum. The use of phosphors to transform ultraviolet light into visible light was an important step in the continuing development of the fluorescent lamp.

The British mathematician and physicist Sir George Gabriel Stokes studied the phenomenon as well.
It was he who, in 1852, termed the afterglow “fluorescence.”

Geissler Tubes

While these advances were being made, other workers were trying to produce a practical form of electric light. In 1706, the English physicist Francis Hauksbee devised an electrostatic generator, which is used to accelerate charged particles to very high levels of electrical energy. He then connected the device to a glass “jar,” used a vacuum pump to evacuate the jar to a low pressure, and tested his generator. In so doing, Hauksbee obtained the first human-made electrical glow discharge by “capturing lightning” in a jar.

In 1854, Heinrich Geissler, a glassblower and apparatus maker, opened his shop in Bonn, Germany, to make scientific instruments; in 1855, he produced a vacuum pump that used liquid mercury as an evacuation fluid. That same year, Geissler made the first gaseous conduction lamps while working in collaboration with the German scientist Julius Plücker. Plücker referred to these lamps as “Geissler tubes.” Geissler was able to create red light with neon gas filling a lamp and light of nearly all colors by using certain types of gas within each of the lamps. Thus, both the neon sign business and the science of spectroscopy were born.

Geissler tubes were studied extensively by a variety of workers. At the beginning of the twentieth century, the practical American engineer Peter Cooper Hewitt put these studies to use by marketing the first low-pressure mercury vapor lamps. The lamps were quite successful, although they required high voltage for operation, emitted an eerie blue-green light, and shone dimly by comparison with their eventual successor, the fluorescent lamp.

At about the same time, systematic studies of phosphors had finally begun. By the 1920’s, a number of investigators had discovered that the low-pressure mercury vapor discharge marketed by Hewitt was an extremely efficient method for producing ultraviolet light, if the mercury and rare gas pressures were properly adjusted. With a phosphor to convert the ultraviolet light back to visible light, the Hewitt lamp made an excellent light source.

Impact

The introduction of fluorescent lighting in 1936 presented the public with a completely new form of lighting that had enormous advantages of high efficiency, long life, and relatively low cost. By 1938, production of fluorescent lamps was well under way.
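The phosphor step described above converts each ultraviolet photon into a single lower-energy visible photon. A quick calculation with the photon-energy relation E = hc/λ shows how much energy that conversion necessarily discards; the 253.7 nm mercury line and the 550 nm green output are standard physics-handbook figures, not values given in the article:

```python
# Photon-energy bookkeeping for a fluorescent lamp's phosphor step.
H = 6.626e-34      # Planck's constant, J*s
C = 2.998e8        # speed of light, m/s
EV = 1.602e-19     # joules per electron volt

def photon_energy_ev(wavelength_nm):
    return H * C / (wavelength_nm * 1e-9) / EV

uv = photon_energy_ev(253.7)        # dominant low-pressure Hg line, ~4.9 eV
visible = photon_energy_ev(550.0)   # green light near peak eye response, ~2.3 eV

# Fraction of each UV photon's energy released as heat in the phosphor
# (the Stokes shift): one photon in, one lower-energy photon out.
stokes_loss = 1 - 253.7 / 550.0     # ~0.54
```

Even with roughly half the photon energy lost in the phosphor, the lamp far outperforms an incandescent filament, which radiates most of its power as invisible infrared.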
By April, 1938, four sizes of fluorescent lamps in various colors had been offered to the public, and more than two hundred thousand lamps had been sold. During 1939 and 1940, two great expositions—the New York World’s Fair and the San Francisco International Exposition—helped popularize fluorescent lighting. Thousands of tubular fluorescent lamps formed a great spiral in the “motor display salon,” the car showroom of the General Motors exhibit at the New York World’s Fair. Fluorescent lamps lit the Polish Restaurant and hung in vertical clusters on the flagpoles along the Avenue of the Flags at the fair, while two-meter-long, upright fluorescent tubes illuminated buildings at the San Francisco International Exposition.

When the United States entered World War II (1939-1945), the demand for efficient factory lighting soared. In 1941, more than twenty-one million fluorescent lamps were sold. Technical advances continued to improve the fluorescent lamp. By the 1990’s, this type of lamp supplied most of the world’s artificial lighting.

See also Electric clock; Electric refrigerator; Microwave cooking; Television; Tungsten filament; Vacuum cleaner; Washing machine.

Further Reading
Bowers, B. “New Lamps for Old: The Story of Electric Lighting.” IEE Review 41, no. 6 (November 16, 1995).
Dake, Henry Carl, and Jack De Ment. Fluorescent Light and Its Applications, Including Location and Properties of Fluorescent Materials. Brooklyn, N.Y.: Chemical Publishing, 1941.
“EPA Sees the Light on Fluorescent Bulbs.” Environmental Health Perspectives 107, no. 12 (December, 1999).
Harris, J. B. “Electric Lamps, Past and Present.” Engineering Science and Education Journal 2, no. 4 (August, 1993).
“How Fluorescent Lighting Became Smaller.” Consulting-Specifying Engineer 23, no. 2 (February, 1998).


FM radio

The invention: A method of broadcasting radio signals by modulating the frequency, rather than the amplitude, of radio waves. FM radio greatly improved the quality of sound transmission.

The people behind the invention:
Edwin H. Armstrong (1890-1954), the inventor of FM radio broadcasting
David Sarnoff (1891-1971), the founder of RCA

An Entirely New System

Because early radio broadcasts used amplitude modulation (AM) to transmit their sounds, they were subject to a sizable amount of interference and static. Since good AM reception relies on the amount of energy transmitted, energy sources in the atmosphere between the station and the receiver can distort or weaken the original signal. This is particularly irritating for the transmission of music.

Edwin H. Armstrong provided a solution to this technological constraint. A graduate of Columbia University, Armstrong made a significant contribution to the development of radio with his basic inventions for circuits for AM receivers. (Indeed, the monies Armstrong received from his earlier inventions financed the development of the frequency modulation, or FM, system.) Armstrong was one among many contributors to AM radio. For FM broadcasting, however, Armstrong must be ranked as the most important inventor.

During the 1920’s, Armstrong established his own research laboratory in Alpine, New Jersey, across the Hudson River from New York City. With a small staff of dedicated assistants, he carried out research on radio circuitry and systems for nearly three decades. At that time, Armstrong also began to teach electrical engineering at Columbia University. From 1928 to 1933, Armstrong worked diligently at his private laboratory at Columbia University to construct a working model of an FM radio broadcasting system. With the primitive limitations then imposed on the state of vacuum tube technology, a number of Armstrong’s experimental circuits required as many as one hundred tubes. Between July, 1930, and January, 1933, Armstrong filed four basic FM patent applications. All were granted simultaneously on December 26, 1933.

Armstrong sought to perfect FM radio broadcasting, not to offer radio listeners better musical reception but to create an entirely new radio broadcasting system. On November 5, 1935, Armstrong made his first public demonstration of FM broadcasting in New York City to an audience of radio engineers. An amateur station based in suburban Yonkers, New York, transmitted these first signals. The scientific world began to consider the advantages and disadvantages of Armstrong’s system; other laboratories began to craft their own FM systems.

Corporate Conniving

Because Armstrong had no desire to become a manufacturer or broadcaster, he approached David Sarnoff, head of the Radio Corporation of America (RCA). As the owner of the top manufacturer of radio sets and the top radio broadcasting network, Sarnoff was interested in all advances of radio technology. Armstrong first demonstrated FM radio broadcasting for Sarnoff in December, 1933. This was followed by visits from RCA engineers, who were sufficiently impressed to recommend to Sarnoff that the company conduct field tests of the Armstrong system.

In 1934, Armstrong, with the cooperation of RCA, set up a test transmitter at the top of the Empire State Building, sharing facilities with the experimental RCA television transmitter. From 1934 through 1935, tests were conducted using the Empire State facility, to mixed reactions of RCA’s best engineers. AM radio broadcasting already had a performance record of nearly two decades. The engineers wondered if this new technology could replace something that had worked so well. This less-than-enthusiastic evaluation fueled the skepticism of RCA lawyers and salespeople.
RCA had too much invested in the AM system, both as a leading manufacturer and as the dominant owner of the major radio network of the time, the National Broadcasting Company (NBC). Sarnoff was in no rush to adopt FM. To


change systems would risk the millions of dollars RCA was making as America emerged from the Great Depression. In 1935, Sarnoff advised Armstrong that RCA would cease any further research and development activity in FM radio broadcasting. (Still, engineers at RCA laboratories continued to work on FM to protect the corporate patent position.) Sarnoff declared to the press that his company would push the frontiers of broadcasting by concentrating on research and development of radio with pictures, that is, television. As a tangible sign, Sarnoff ordered that Armstrong’s FM radio broadcasting tower be removed from the top of the Empire State Building.

Armstrong was outraged. By the mid-1930’s, the development of FM radio broadcasting had become a mission for Armstrong. For the remainder of his life, Armstrong devoted his considerable talents to the promotion of FM radio broadcasting.

Impact

After the break with Sarnoff, Armstrong proceeded with plans to develop his own FM operation. Allied with two of RCA’s biggest manufacturing competitors, Zenith and General Electric, Armstrong pressed ahead. In June of 1936, at a Federal Communications Commission (FCC) hearing, Armstrong proclaimed that FM broadcasting was the only static-free, noise-free, and uniform system—both day and night—available. He argued, correctly, that AM radio broadcasting had none of these qualities.

During World War II (1939-1945), Armstrong gave the military permission to use FM with no compensation. That patriotic gesture cost Armstrong millions of dollars when the military soon became all FM. It did, however, expand interest in FM radio broadcasting. World War II had provided a field test of equipment and use.

By the 1970’s, FM radio broadcasting had grown tremendously. By 1972, one in three radio listeners tuned into an FM station some time during the day. Advertisers began to use FM radio stations to reach the young and affluent audiences that were turning to FM stations in greater numbers.
By the late 1970’s, FM radio stations were outnumbering AM stations. By 1980, nearly half of radio listeners tuned into FM stations


on a regular basis. A decade later, FM radio listening accounted for more than two-thirds of audience time. Armstrong’s predictions that listeners would prefer the clear, static-free sounds offered by FM radio broadcasting had come to pass by the mid-1980’s, nearly fifty years after Armstrong had commenced his struggle to make FM radio broadcasting a part of commercial radio.

See also Community antenna television; Communications satellite; Dolby noise reduction; Fiber-optics; Radio; Radio crystal sets; Television; Transistor radio.

Further Reading

Lewis, Tom. Empire of the Air: The Men Who Made Radio. New York: HarperPerennial, 1993.
Sobel, Robert. RCA. New York: Stein and Day, 1986.
Streissguth, Thomas. Communications: Sending the Message. Minneapolis, Minn.: Oliver Press, 1997.


Food freezing

The invention: It was long known that low temperatures helped to protect food against spoiling; the invention that made frozen food practical was a method of freezing items quickly. Clarence Birdseye’s quick-freezing technique made possible a revolution in food preparation, storage, and distribution.

The people behind the invention:
Clarence Birdseye (1886-1956), a scientist and inventor
Donald K. Tressler (1894-1981), a researcher at Cornell University
Amanda Theodosia Jones (1835-1914), a food-preservation pioneer

Feeding the Family

In 1917, Clarence Birdseye developed a means of quick-freezing meat, fish, vegetables, and fruit without substantially changing their original taste. His system of freezing was called by Fortune magazine “one of the most exciting and revolutionary ideas in the history of food.” Birdseye went on to refine and perfect his method and to promote the frozen foods industry until it became a commercial success nationwide.

It was during a trip to Labrador, where he worked as a fur trader, that Birdseye was inspired by this idea. Birdseye’s new wife and five-week-old baby had accompanied him there. In order to keep his family well fed, he placed barrels of fresh cabbages in salt water and then exposed the vegetables to freezing winds. Successful at preserving vegetables, he went on to freeze a winter’s supply of ducks, caribou, and rabbit meat.

In the following years, Birdseye experimented with many freezing techniques. His equipment was crude: an electric fan, ice, and salt water. His earliest experiments were on fish and rabbits, which he froze and packed in old candy boxes. By 1924, he had borrowed money against his life insurance and was lucky enough to find three partners willing to invest in his new General Seafoods Company


(later renamed General Foods), located in Gloucester, Massachusetts.

Although it was Birdseye’s genius that put the principles of quick-freezing to work, he did not actually invent quick-freezing. The scientific principles involved had been known for some time. As early as 1842, a patent for freezing fish had been issued in England. Nevertheless, the commercial exploitation of the freezing process could not have happened until the end of the 1800’s, when mechanical refrigeration was invented. Even then, Birdseye had to overcome major obstacles.

Finding a Niche

By the 1920’s, there still were few mechanical refrigerators in American homes. It would take years before adequate facilities for food freezing and retail distribution would be established across the United States. By the late 1930’s, frozen foods had, indeed, found their role in commerce but still could not compete with canned or fresh foods. Birdseye had to work tirelessly to promote the industry, writing and delivering numerous lectures and articles to advance its popularity. His efforts were helped by scientific research conducted at Cornell University by Donald K. Tressler and by C. R. Fellers of what was then Massachusetts State College. Also, during World War II (1939-1945), more Americans began to accept the idea: Rationing, combined with a shortage of canned foods, contributed to the demand for frozen foods. The armed forces made large purchases of these items as well.

General Foods was the first to use a system of extremely rapid freezing of perishable foods in packages. Under the Birdseye system, fresh foods, such as berries or lobster, were packaged snugly in convenient square containers. Then, the packages were pressed between refrigerated metal plates under pressure at 50 degrees below zero. Two types of freezing machines were used.
The “double belt” freezer consisted of two metal belts that moved through a 15-meter freezing tunnel, while a special salt solution was sprayed on the surfaces of the belts. This double-belt freezer was used only in permanent installations and was soon replaced by the “multiplate” freezer, which was portable and required only 11.5 square meters of floor space compared to the double belt’s 152 square meters.


Amanda Theodosia Jones

Amanda Theodosia Jones (1835-1914) was close to her brother. When he suddenly died while they were at school and she was left to contact relatives and make the necessary arrangements for his remains, she was devastated. She had a nervous breakdown at seventeen and could not believe he was entirely gone. She was sure that he remained an active presence in her life, and she became a spiritualist and medium so that they could talk during séances.

Jones always claimed she did not come up with the idea for the vacuum packing method for preserving food, an important technique before freezing foods became practicable. It was her brother who gave it to her. She did the actual experimental work herself, however, and with the aid of Leroy C. Cooley got the first of their seven patents for food processing. In 1873 she launched The Women’s Canning and Preserving Company, and it was more than just a company. It was a mission. All the officers, stockholders, and employees were women. “This is a woman’s industry,” she insisted, and ran the company so that it was a training school for working women.

In the 1880’s, the spirit of invention moved Jones again. Concerned about the high rate of accidents among oil drillers, she examined the problem. Simply add a safety valve to pipes to control the release of the crude oil, she told drillers in Pennsylvania. The idea had not occurred to them, but they tried it, and it so improved safety that Jones won wide praise.

The multiplate freezer also made it possible to apply the technique of quick-freezing to seasonal crops. People were able to transport these freezers easily from one harvesting field to another, where they were used to freeze crops such as peas fresh off the vine. The handy multiplate freezer consisted of an insulated cabinet equipped with refrigerated metal plates. Stacked one above the other, these plates were capable of being opened and closed to receive food products and to compress them with evenly distributed pressure. Each aluminum plate had internal passages through which ammonia flowed and expanded at a temperature of −3.8 degrees Celsius, thus causing the foods to freeze. A major benefit of the new frozen foods was that their taste and


vitamin content were not lost. Ordinarily, when food is frozen slowly, ice crystals form, which slowly rupture food cells, thus altering the taste of the food. With quick-freezing, however, the food looks, tastes, and smells like fresh food. Quick-freezing also cuts down on bacteria.

Impact

During the months between one food harvest and the next, humankind requires trillions of pounds of food to survive. In many parts of the world, an adequate supply of food is available; elsewhere, much food goes to waste and many go hungry. Methods of food preservation such as those developed by Birdseye have done much to help those who cannot obtain proper fresh foods. Preserving perishable foods also means that they will be available in greater quantity and variety all year-round. In all parts of the world, both tropical and arctic delicacies can be eaten in any season of the year.

With the rise in popularity of frozen “fast” foods, nutritionists began to study their effect on the human body. Research has shown that fresh food is the most beneficial. In an industrial nation with many people, however, the distribution of fresh commodities is difficult. It may be many decades before scientists know the long-term effects on generations raised primarily on frozen foods.

See also Electric refrigerator; Freeze-drying; Microwave cooking; Polystyrene; Refrigerant gas; Tupperware.

Further Reading

Altman, Linda Jacobs. Women Inventors. New York: Facts on File, 1997.
Tressler, Donald K. The Memoirs of Donald K. Tressler. Westport, Conn.: Avi Publishing, 1976.
_____, and Clifford F. Evers. The Freezing Preservation of Foods. New York: Avi Publishing, 1943.


FORTRAN programming language

The invention: The first major computer programming language, FORTRAN supported programming in a mathematical language that was natural to scientists and engineers and achieved unsurpassed success in scientific computation.

The people behind the invention:
John Backus (1924- ), an American software engineer and manager
John W. Mauchly (1907-1980), an American physicist and engineer
Herman Heine Goldstine (1913- ), a mathematician and computer scientist
John von Neumann (1903-1957), a Hungarian American mathematician and physicist

Talking to Machines

Formula Translation, or FORTRAN—the first widely accepted high-level computer language—was completed by John Backus and his coworkers at the International Business Machines (IBM) Corporation in April, 1957. Designed to support programming in a mathematical language that was natural to scientists and engineers, FORTRAN achieved unsurpassed success in scientific computation.

Computer languages are means of specifying the instructions that a computer should execute and the order of those instructions. Computer languages can be divided into categories of progressively higher degrees of abstraction. At the lowest level is binary code, or machine code: Binary digits, or “bits,” specify in complete detail every instruction that the machine will execute. This was the only language available in the early days of computers, when such machines as the ENIAC (Electronic Numerical Integrator and Calculator) required hand-operated switches and plugboard connections. All higher levels of language are implemented by having a program translate instructions written in the higher language into binary machine language (also called “object code”). High-level languages (also called “programming languages”) are largely or entirely independent of the underlying machine structure. FORTRAN was the first language of this type to win widespread acceptance.

The emergence of machine-independent programming languages was a gradual process that spanned the first decade of electronic computation. One of the earliest developments was the invention of “flowcharts,” or “flow diagrams,” by Herman Heine Goldstine and John von Neumann in 1947. Flowcharting became the most influential software methodology during the first twenty years of computing.

Short Code was the first language to be implemented that contained some high-level features, such as the ability to use mathematical equations. The idea came from John W. Mauchly, and it was implemented on the BINAC (Binary Automatic Computer) in 1949 with an “interpreter”; later, it was carried over to the UNIVAC (Universal Automatic Computer) I. Interpreters are programs that do not translate commands into a series of object-code instructions; instead, they directly execute (interpret) those commands. Every time the interpreter encounters a command, that command must be interpreted again. “Compilers,” however, convert the entire command into object code before it is executed.

Much early effort went into creating ways to handle commonly encountered problems—particularly scientific mathematical calculations. A number of interpretive languages arose to support these features. As long as such complex operations had to be performed by software (computer programs), however, scientific computation would be relatively slow. Therefore, Backus lobbied successfully for a direct hardware implementation of these operations on IBM’s new scientific computer, the 704.
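The interpreter-versus-compiler distinction can be illustrated with a toy calculator for flat, left-to-right arithmetic (a schematic Python sketch; the token format and the names `interpret` and `compile_tokens` are invented for the example and do not model any historical system):

```python
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def interpret(tokens):
    # Interpreter: walks the source afresh on every run, so each
    # command is re-examined each time it executes.
    total = float(tokens[0])
    for i in range(1, len(tokens), 2):
        total = OPS[tokens[i]](total, float(tokens[i + 1]))
    return total

def compile_tokens(tokens):
    # "Compiler": performs the token analysis once, up front, and
    # returns a runnable artifact that replays the finished recipe.
    steps = [(OPS[tokens[i]], float(tokens[i + 1]))
             for i in range(1, len(tokens), 2)]
    first = float(tokens[0])
    def run():
        total = first
        for op, arg in steps:
            total = op(total, arg)
        return total
    return run
```

Both routes give the same answer; the difference is that the compiled form pays the analysis cost only once, which is why compilation mattered so much for speed.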
Backus then started the Programming Research Group at IBM in order to develop a compiler that would allow programs to be written in a mathematically oriented language rather than a machine-oriented language. In November of 1954, the group defined an initial version of FORTRAN.


A More Accessible Language

Before FORTRAN was developed, a computer had to perform a whole series of tasks to make certain types of mathematical calculations. FORTRAN made it possible for the same calculations to be performed much more easily. In general, FORTRAN supported constructs with which scientists were already acquainted, such as functions and multidimensional arrays. In defining a powerful notation that was accessible to scientists and engineers, FORTRAN opened up programming to a much wider community.

Backus’s success in getting the IBM 704’s hardware to support scientific computation directly, however, posed a major challenge: Because such computation would be much faster, the object code produced by FORTRAN would also have to be much faster. The lower-level compilers preceding FORTRAN produced programs that were usually five to ten times slower than their hand-coded counterparts; therefore, efficiency became the primary design objective for Backus. The highly publicized claims for FORTRAN met with widespread skepticism among programmers. Much of the team’s efforts, therefore, went into discovering ways to produce the most efficient object code.

The efficiency of the compiler produced by Backus, combined with its clarity and ease of use, guaranteed the system’s success. By 1959, many IBM 704 users programmed exclusively in FORTRAN. By 1963, virtually every computer manufacturer either had delivered or had promised a version of FORTRAN. Incompatibilities among manufacturers were minimized by the popularity of IBM’s version of FORTRAN; every company wanted to be able to support IBM programs on its own equipment. Nevertheless, there was sufficient interest in obtaining a standard for FORTRAN that the American National Standards Institute adopted a formal standard for it in 1966. A revised standard was adopted in 1978, yielding FORTRAN 77.
Consequences

In demonstrating the feasibility of efficient high-level languages, FORTRAN inaugurated a period of great proliferation of programming languages. Most of these languages attempted to provide similar or better high-level programming constructs oriented toward a different, nonscientific programming environment. COBOL, for example, stands for “Common Business Oriented Language.”

FORTRAN, while remaining the dominant language for scientific programming, has not found general acceptance among nonscientists. An IBM project established in 1963 to extend FORTRAN found the task too unwieldy and instead ended up producing an entirely different language, PL/I, which was delivered in 1966.

In the beginning, Backus and his coworkers believed that their revolutionary language would virtually eliminate the burdens of coding and debugging. Instead, FORTRAN launched software as a field of study and an industry in its own right. In addition to stimulating the introduction of new languages, FORTRAN encouraged the development of operating systems. Programming languages had already grown into simple operating systems called “monitors.” Operating systems since then have been greatly improved so that they support, for example, simultaneously active programs (multiprogramming) and the networking (combining) of multiple computers.

See also BASIC programming language; COBOL computer language; SAINT.

Further Reading

Goff, Leslie. “Born of Frustration.” Computerworld 33, no. 6 (February 8, 1999).
Moreau, René. The Computer Comes of Age: The People, the Hardware, and the Software. Cambridge, Mass.: MIT Press, 1984.
Slater, Robert. Portraits in Silicon. Cambridge, Mass.: MIT Press, 1987.
Stern, Nancy B. From ENIAC to UNIVAC: An Appraisal of the Eckert-Mauchly Computers. Bedford, Mass.: Digital Press, 1981.


Freeze-drying

The invention: Method for preserving foods and other organic matter by freezing them and using a vacuum to remove their water content without damaging their solid matter.

The people behind the invention:
Earl W. Flosdorf (1904- ), an American physician
Ronald I. N. Greaves (1908- ), an English pathologist
Jacques Arsène d’Arsonval (1851-1940), a French physicist

Freeze-Drying for Preservation

Drying, or desiccation, is known to preserve biomaterials, including foods. In freeze-drying, water is evaporated in a frozen state in a vacuum, by means of sublimation (the process of changing a solid to a vapor without first changing it to a liquid). In 1811, John Leslie had first caused freezing by means of the evaporation and sublimation of ice. In 1813, William Wollaston demonstrated this process to the Royal Society of London. It does not seem to have occurred to either Leslie or Wollaston to use sublimation for drying. That distinction goes to Richard Altmann, a German histologist, who dried pieces of frozen tissue in 1890. Later, in 1903, Vansteenberghe freeze-dried the rabies virus. In 1906, Jacques Arsène d’Arsonval removed water at a low temperature for distillation. Since water removal is the essence of drying, d’Arsonval is often credited with the discovery of freeze-drying, but the first clearly recorded use of sublimation for preservation was by Leon Shackell in 1909. His work was widely recognized, and he freeze-dried a variety of biological materials.

The first patent for freeze-drying was issued to Henri Tival, a French inventor, in 1927. In 1934, William Elser received patents for a modern freeze-drying apparatus that supplied heat for sublimation. In 1933, Earl W. Flosdorf had freeze-dried human blood serum and plasma for clinical use. The subsequent efforts of Flosdorf led to commercial freeze-drying applications in the United States.


Freeze-Drying of Foods

With the freeze-drying technique fairly well established for biological products, it was a natural extension for Flosdorf to apply the technique to the drying of foods. As early as 1935, Flosdorf experimented with the freeze-drying of fruit juices and milk. An early British patent was issued to Franklin Kidd, a British inventor, in 1941 for the freeze-drying of foods. An experimental program on the freeze-drying of food was also initiated at the Low Temperature Research Station at Cambridge University in England, but until World War II, freeze-drying was only an occasionally used scientific tool.

It was the desiccation of blood plasma from the frozen state, performed by the American Red Cross for the U.S. armed forces, that provided the first spectacular, extensive use of freeze-drying. This work demonstrated the vast potential of freeze-drying for commercial applications. In 1949, Flosdorf published the first book on freeze-drying, which laid the foundation for freeze-drying of foods and remains one of the most important contributions to large-scale operations in the field. In the book, Flosdorf described the freeze-drying of fruit juices, milk, meats, oysters, clams, fish fillets, coffee and tea extracts, fruits, vegetables, and other products. Flosdorf also devoted an entire chapter to describing the equipment used for both batch and continuous processing, and he discussed cost analysis. The holder of more than fifteen patents covering various aspects of freeze-drying, Flosdorf dominated the move toward commercialization in the United States.

Simultaneously, researchers in England were developing freeze-drying applications under the leadership of Ronald I. N. Greaves. The food crisis during World War II had led to the recognition that dried foods cut the costs of transporting, storing, and packaging foods in times of emergency. Thus, in 1951, the British Ministry of Food Research was established at Aberdeen, Scotland.
Scientists at Aberdeen developed a vacuum contact plate freeze-dryer that improved product quality and reduced the time required for rehydration (replacement of the water removed in the freeze-drying process so that the food can be used). In 1954, trials of initial freeze-drying, followed by the ordinary process of vacuum drying, were carried out. The abundance of


membranes within plant and animal tissues was a major obstacle to the movement of water vapor, thus limiting the drying rate. In 1956, two Canadian scientists developed a new method of improving the freeze-drying rate for steaks by impaling the steaks on spiked heater plates. This idea was adapted in 1957 by interposing sheets of expanded metal, instead of spikes, between the drying surfaces of the frozen food and the heating platens. Because of the substantially higher freeze-drying rates that it achieved, the process was called “accelerated freeze-drying.”

In 1960, Greaves described an ingenious method of freeze-drying liquids. It involved continuously scraping the dry layer during its formation. This led to a continuous process for freeze-drying liquids. During the remainder of the 1960’s, freeze-drying applications proliferated with the advent of several techniques for controlling and improving the effectiveness of the freeze-drying process.

Impact

Flosdorf’s vision and ingenuity in applying freeze-drying to foods has revolutionized food preservation. He was also responsible for making a laboratory technique a tremendous commercial success. Freeze-drying is important because it stops the growth of microorganisms, inhibits deleterious chemical reactions, and facilitates distribution and storage.

Freeze-dried foods are easily prepared for consumption by adding water (rehydration). When freeze-dried properly, most foods, either raw or cooked, can be rehydrated quickly to yield products that are equal in quality to their frozen counterparts. Freeze-dried products retain most of their nutritive qualities and have a long storage life, even at room temperature.

Freeze-drying is not, however, without disadvantages. The major disadvantage is the high cost of processing. Thus, to this day, the great potential of freeze-drying has not been fully realized.
The drying of cell-free materials, such as coffee and tea extracts, has been extremely successful, but the obstacles imposed by the cell membranes in foods such as fruits, vegetables, and meats have limited the application to expensive specialty items such as freeze-dried soups and to foods for armies, campers, and astronauts. Future economic changes may create a situation in which the high cost of freeze-drying is more than offset by the cost of transportation and storage.

See also Electric refrigerator; Food freezing; Polystyrene; Tupperware.

Further Reading

Comello, Vic. “Improvements in Freeze Drying Expand Application Base.” Research and Development 42, no. 5 (May, 2000).
Flosdorf, Earl William. Freeze-Drying: Drying by Sublimation. New York: Reinhold, 1949.
Noyes, Robert. Freeze Drying of Foods and Biologicals. Park Ridge, N.J.: Noyes Development Corporation, 1968.


Fuel cell

The invention: An electrochemical cell that directly converts energy from reactions between oxidants and fuels, such as liquid hydrogen, into electrical energy.

The people behind the invention:
Francis Thomas Bacon (1904-1992), an English engineer
Sir William Robert Grove (1811-1896), an English inventor
Georges Leclanché (1839-1882), a French engineer
Alessandro Volta (1745-1827), an Italian physicist

The Earth’s Resources

Because of the earth’s rapidly increasing population and the dwindling of fossil fuels (natural gas, coal, and petroleum), there is a need to design and develop new ways to obtain energy and to encourage its intelligent use. The burning of fossil fuels to create energy causes a slow buildup of carbon dioxide in the atmosphere, creating pollution that poses many problems for all forms of life on this planet. Chemical and electrical studies can be combined to create electrochemical processes that yield clean energy. Because of their very high rate of efficiency and their nonpolluting nature, fuel cells may provide the solution to the problem of finding sufficient energy sources for humans. The simple reaction of hydrogen and oxygen to form water in such a cell can provide an enormous amount of clean (nonpolluting) energy. Moreover, hydrogen and oxygen are readily available.

Studies by Alessandro Volta, Georges Leclanché, and William Grove preceded the work of Bacon in the development of the fuel cell. Bacon became interested in the idea of a hydrogen-oxygen fuel cell in about 1932. His original intent was to develop a fuel cell that could be used in commercial applications.

The Fuel Cell Emerges

In 1800, the Italian physicist Alessandro Volta experimented with solutions of chemicals and metals that were able to conduct


electricity. He found that two pieces of metal and such a solution could be arranged in such a way as to produce an electric current. His creation was the first electrochemical battery, a device that produced energy from a chemical reaction. Studies in this area were continued by various people, and in the late nineteenth century, Georges Leclanché invented the dry cell battery, which is now commonly used.

The work of William Grove followed that of Leclanché. His first significant contribution was the Grove cell, an improved form of the cells described above, which became very popular. Grove experimented with various forms of batteries and eventually invented the “gas battery,” which was actually the earliest fuel cell. It is worth noting that his design incorporated separate test tubes of hydrogen and oxygen, which he placed over strips of platinum.

After studying the design of Grove’s fuel cell, Bacon decided that, for practical purposes, the use of platinum and other precious metals should be avoided. By 1939, he had constructed a cell in which nickel replaced the platinum used.

The theory behind the fuel cell can be described in the following way. If a mixture of hydrogen and oxygen is ignited, energy is released in the form of a violent explosion. In a fuel cell, however, the reaction takes place in a controlled manner. Electrons lost by the hydrogen gas flow out of the fuel cell and return to be taken up by the oxygen in the cell. The electron flow provides electricity to any device that is connected to the fuel cell, and the water that the fuel cell produces can be purified and used for drinking.

Bacon’s studies were interrupted by World War II. After the war was over, however, Bacon continued his work. Sir Eric Keightley Rideal of Cambridge University in England supported Bacon’s studies; later, others followed suit. In January, 1954, Bacon wrote an article entitled “Research into the Properties of the Hydrogen/Oxygen Fuel Cell” for a British journal.
He was surprised at the speed with which news of the article spread throughout the scientific world, particularly in the United States.

After a series of setbacks, Bacon demonstrated a forty-cell unit that had increased power. This advance showed that the fuel cell was not merely an interesting toy; it had the capacity to do useful work. At this point, the General Electric Company (GE), an American corporation, sent a representative to England to offer employment in the United States to senior members of Bacon’s staff. Three scientists accepted the offer.

A high point in Bacon’s career was the announcement that the American Pratt and Whitney Aircraft company had obtained an order to build fuel cells for the Apollo project, which ultimately put two men on the Moon in 1969. Toward the end of his career in 1978, Bacon hoped that commercial applications for his fuel cells would be found.
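The controlled hydrogen-oxygen reaction described above sets a well-known ceiling on the voltage a single cell can deliver. The short calculation below is a standard electrochemistry estimate, not a figure from this article; the thermodynamic constants are ordinary textbook values:

```python
# Maximum (reversible) voltage of one hydrogen-oxygen cell:
# E = -dG / (n * F), for the reaction H2 + 1/2 O2 -> H2O (liquid).
FARADAY = 96485.0        # coulombs per mole of electrons
DELTA_G = -237_100.0     # J/mol, standard Gibbs free energy of formation of liquid water
N_ELECTRONS = 2          # electrons transferred per molecule of H2

ideal_voltage = -DELTA_G / (N_ELECTRONS * FARADAY)
print(round(ideal_voltage, 2))   # about 1.23 volts per cell
```

Real cells deliver somewhat less because of internal losses, which is why practical units such as Bacon’s forty-cell demonstration stack many cells in series to reach useful voltages.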

[Figure: Parts of a basic fuel cell: hydrogen (H2) enters at the anode and oxygen (O2) at the cathode, separated by an electrolyte between porous electrodes.]

Impact

Because they are lighter and more efficient than batteries, fuel cells have proved to be useful in the space program. Beginning with the Gemini 5 spacecraft, alkaline fuel cells (in which a water solution of potassium hydroxide, a basic, or alkaline, chemical, is placed) have been used for more than ten thousand hours in space. The fuel cells used aboard the space shuttle deliver the same amount of power as batteries weighing ten times as much. On a typical seven-day mission, the shuttle’s fuel cells consume 680 kilograms (1,500 pounds) of hydrogen and generate 719 liters (190 gallons) of water that can be used for drinking.

Major technical and economic problems must be overcome in order to design fuel cells for practical applications, but some important advancements have been made. A few test vehicles that use fuel


Francis Bacon

Born in Billericay, England, in 1904, Francis Thomas Bacon completed secondary school at the prestigious Eton College and then attended Trinity College, Cambridge University. In 1932 he started his long search for a practical fuel cell based upon the oxygen-hydrogen (Hydrox) reaction with an alkaline electrolyte and inexpensive nickel electrodes. In 1940 the British Admiralty set him up in full-time experimental work at King’s College, London, and then moved him to the Anti-Submarine Experimental Establishment because the Royal Navy wanted fuel cells for their submarines. After World War II Cambridge University appointed him to the faculty at the Department of Chemical Engineering, and he worked intensively on his fuel cell research. In 1959 he proved the worth of his work by producing a fuel cell capable of powering a small truck.

It was not until the 1990’s, however, that fuel cells were taken seriously as the main power source for automobiles. In 1998, for instance, Iceland enlisted the help of DaimlerChrysler, Shell Oil, and Norsk Hydro to convert all its transportation vehicles, including its fishing boats, to fuel cell power, part of its long-range plans for a completely “hydrogen economy.” Meanwhile, Bacon had the satisfaction of seeing his invention become a power source for American space vehicles and stations. He died in 1992 in Cambridge.

cells as a source of power have been constructed. Fuel cells using hydrogen as a fuel and oxygen to burn the fuel have been used in a van built by General Motors Corporation. Thirty-two fuel cells are installed below the floorboards, and tanks of liquid oxygen are carried in the back of the van. A power plant built in New York City contains stacks of hydrogen-oxygen fuel cells, which can be put on line quickly in response to power needs. The Sanyo Electric Company has developed an electric car that is partially powered by a fuel cell. These tremendous technical advances are the result of the singleminded dedication of Francis Thomas Bacon, who struggled all of his life with an experiment he was convinced would be successful.
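The electrochemistry behind Bacon’s alkaline cell is standard textbook material rather than something this entry spells out: hydroxide ions from the potassium hydroxide solution shuttle between the electrodes while hydrogen and oxygen combine to form water.

```latex
\begin{align*}
\text{Anode:}   \quad & 2\,\mathrm{H_2} + 4\,\mathrm{OH^-} \longrightarrow 4\,\mathrm{H_2O} + 4\,e^- \\
\text{Cathode:} \quad & \mathrm{O_2} + 2\,\mathrm{H_2O} + 4\,e^- \longrightarrow 4\,\mathrm{OH^-} \\
\text{Overall:} \quad & 2\,\mathrm{H_2} + \mathrm{O_2} \longrightarrow 2\,\mathrm{H_2O}
\end{align*}
```

The overall reaction explains why shuttle crews could drink the by-product: the only thing the cell produces, besides electricity, is water.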


See also Alkaline storage battery; Breeder reactor; Compressed-air-accumulating power plant; Fluorescent lighting; Geothermal power; Heat pump; Photoelectric cell; Photovoltaic cell; Solar thermal engine; Tidal power plant.

Further Reading
Eisenberg, Anne. “Fuel Cell May Be the Future ‘Battery.’” New York Times (October 21, 1999).
Hoverstein, Paul. “Century-Old Invention Finding a Niche Today.” USA Today (June 3, 1994).
Kufahl, Pam. “Electric: Lighting Up the Twentieth Century.” Utility Business 3, no. 7 (June, 2000).
Stobart, Richard. Fuel Cell Technology for Vehicles. Warrendale, Pa.: Society of Automotive Engineers, 2001.


Gas-electric car

The invention: A hybrid automobile with both an internal combustion engine and an electric motor.

The people behind the invention:
Victor Wouk (1919- ), an American engineer
Tom Elliott, executive vice president of American Honda Motor Company
Hiroyuki Yoshino, president and chief executive officer of Honda Motor Company
Fujio Cho, president of Toyota Motor Corporation

Announcing Hybrid Vehicles

At the 2000 North American International Auto Show in Detroit, not only did the Honda Motor Company show off its new Insight model, it also announced expanded use of its new technology. Hiroyuki Yoshino, president and chief executive officer, said that Honda’s integrated motor assist (IMA) system would be expanded to other mass-market models. The system basically fits a small electric motor directly on a one-liter, three-cylinder internal combustion engine. The two share the workload of powering the car, but the gasoline engine does not start up until it is needed. The electric motor is powered by a nickel-metal hydride (Ni-MH) battery pack, with the IMA system automatically recharging the energy pack during braking. Tom Elliott, Honda’s executive vice president, said the vehicle was a continuation of the company’s philosophy of making the latest environmental technology accessible to consumers. The $18,000 Insight was a two-seat sporty car that used many innovations to reduce its weight and improve its performance.

Fujio Cho, president of Toyota, also spoke at the Detroit show, where his company showed off its new $20,000 hybrid Prius. The Toyota Prius relied more on the electric motor and had more energy-storage capacity than the Insight, but was a four-door, five-seat model. The Toyota Hybrid System divided the power from its 1.5-liter gasoline engine and directed it to drive the wheels and a generator. The generator alternately powered the motor and recharged the batteries. The electric motor was coupled with the gasoline engine to power the wheels under normal driving. The gasoline engine supplied average power needs, with the electric motor helping at the peaks; at low speeds, it was all electric. A variable transmission seamlessly switched back and forth between the gasoline engine and electric motor or applied both of them.

Variations on an Idea

Automobiles generally use gasoline or diesel engines for driving, electric motors that start the main motors, and a means of recharging the batteries that power starter motors and other devices. In solely electric cars, gasoline engines are eliminated entirely, and the batteries that power the vehicles are recharged from stationary sources. In hybrid cars, the relationship between gasoline engines and electric motors is changed so that electric motors handle some or all of the driving. This comes at the expense of an increased number of batteries or other energy-storage devices. Possible in many combinations, “hybrids” couple the low-end torque and regenerative braking potential of electric motors with the range and efficient packaging of gasoline, natural gas, or even hydrogen fuel power plants. The return is greater energy efficiency and reduced pollution. With sufficient energy-storage capacity, an electric motor can actually propel a car from a standing start to a moving speed. In hybrid vehicles, the gasoline engines, which are more energy-efficient at higher speeds, then kick in. However, the gasoline engines in these vehicles are smaller, lighter, and more efficient than ordinary gas engines. Designed for average—not peak—driving conditions, they reduce air pollution and considerably improve fuel economy. Batteries in hybrid vehicles are recharged partly by the gas engines and partly by regenerative braking; a third of the energy from slowing the car is turned into electricity.
What has finally made hybrids feasible at reasonable cost is the development of computer technology, which allows sophisticated controls to coordinate electrical and mechanical power.
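The kind of coordination described above can be sketched as a toy control loop. Every number here (power ratings, speed threshold, regeneration fraction) is invented for illustration and is not drawn from any real Insight or Prius specification; `split_power` is a hypothetical helper, and real hybrid controllers are far more sophisticated.

```python
# Toy control loop for a parallel hybrid: the electric motor covers
# low-speed driving and peak demand, the gasoline engine covers average
# load, and braking recharges the battery. All constants are invented.

ENGINE_MAX_KW = 50.0      # small, efficient gasoline engine
MOTOR_MAX_KW = 25.0       # electric assist motor
ELECTRIC_ONLY_KMH = 20.0  # below this speed, run on the motor alone
REGEN_FRACTION = 0.33     # roughly a third of braking energy recovered

def split_power(demand_kw, speed_kmh):
    """Return (engine_kw, motor_kw); negative motor power means recharging."""
    if demand_kw < 0:                       # braking: regenerate
        return 0.0, demand_kw * REGEN_FRACTION
    if speed_kmh < ELECTRIC_ONLY_KMH and demand_kw <= MOTOR_MAX_KW:
        return 0.0, demand_kw               # all-electric at low speed
    engine = min(demand_kw, ENGINE_MAX_KW)  # engine supplies average load
    motor = min(demand_kw - engine, MOTOR_MAX_KW)  # motor tops up the peaks
    return engine, motor

print(split_power(10.0, 15.0))   # low-speed cruising: motor only
print(split_power(60.0, 90.0))   # hard acceleration: engine plus motor
print(split_power(-30.0, 50.0))  # braking: battery recharges
```

The same skeleton covers both layouts the entry goes on to describe: a parallel hybrid lets either source drive the wheels, while a series hybrid would route all engine output through the generator branch.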


Victor Wouk

H. Piper, an American engineer, filed the first patent for a hybrid gas-electric powered car in 1905, and from then until 1915 such cars were popular, although not common, because they could accelerate faster than plain gas-powered cars. Then the gas-only models became as swift, and their hybrid cousins fell by the wayside. Interest in hybrids revived with the unheard-of gasoline prices during the 1973 oil crisis. The champion of their comeback—the father of the modern hybrid electric vehicle (HEV)—was Victor Wouk. Born in 1919 in New York City, Wouk earned a math and physics degree from Columbia University in 1939 and a doctorate in electrical engineering from the California Institute of Technology in 1942. In 1946 he founded Beta Electric Corporation, which he led until 1959, when he founded and was president of another company, Electronic Energy Conversion Corporation. After 1970, he became an independent consultant, hoping to build an HEV that people would prefer to gas-guzzlers.

With his partner, Charles Rosen, Wouk gutted the engine compartment of a Buick Skylark and installed batteries designed for police cars, a 20-kilowatt direct-current electric motor, and an RX-2 Mazda rotary engine. Only a test vehicle, it still got better gas mileage (thirty miles per gallon) than the original Skylark and met the requirements for emissions control set by the Clean Air Act of 1970, unlike all American automobiles of the era. Moreover, Wouk designed an HEV that would get fifty miles per gallon and pollute one-eighth as much as gas-powered automobiles. However, the oil crisis ended, gas prices went down, and consumers and the government lost interest. Wouk continued to publish, lecture, and design; still, it was not until the 1990’s that high gas prices and concerns over pollution made HEV’s attractive yet again.
Wouk holds twelve patents, mostly for speed and braking controls in electric vehicles but also for air conditioning, high-voltage direct-current power sources, and life extenders for incandescent lamps.


One way to describe hybrids is to separate them into two types: parallel, in which either of the two power plants can propel the vehicle, and series, in which the auxiliary power plant is used to charge the battery rather than propel the vehicle. Honda’s Insight is a simplified parallel hybrid that uses a small but efficient gasoline engine. The electric motor assists the engine, providing extra power for acceleration or hill climbing, helps provide regenerative braking, and starts the engine. However, it cannot run the car by itself. Toyota’s Prius is a parallel hybrid whose power train allows some series features. Its engine runs only at an efficient speed and load and is combined with a unique power-splitting device that allows the car to operate as a parallel hybrid: motor alone, engine alone, or both together. It can act as a series hybrid with the engine charging the batteries rather than powering the vehicle. It also provides a continuously variable transmission using a planetary gear set that allows interaction among the engine, the motor, and the differential that drives the wheels.

Impact

In 2001 Honda and Toyota marketed gas-electric hybrids that offered better than 60-mile-per-gallon fuel economy and met California’s stringent standards for “super ultra-low emissions” vehicles. Both companies achieved these standards without the inconvenience of fully electric cars, which could go only about a hundred miles on a single battery charge and required such gimmicks as kerosene-powered heaters. As a result, other manufacturers were beginning to follow suit. Ford, for example, promised a hybrid sport utility vehicle (SUV) by 2003. Other automakers, including General Motors and DaimlerChrysler, also announced development of alternative-fuel and low-emission vehicles.
An example is the ESX3 concept car, which uses a 1.5-liter, direct-injection diesel combined with an electric motor and a lithium-ion battery. While American automakers were planning to offer some “full hybrids”—cars capable of running on battery power alone at low speeds—they were focusing more enthusiastically on electrically assisted gasoline engines called “mild hybrids.” Full hybrids typically increase gas mileage by up to 60 percent; mild hybrids, by only 10 or 20 percent. The “mild hybrid” approach uses regenerative braking with electrical systems of a much lower voltage and storage capacity than those of full hybrids, a much cheaper approach. But there is still enough energy available to allow the gasoline engine to turn off automatically when a vehicle stops and turn on instantly when the accelerator is touched. Because the “mild hybrid” approach adds only $1,000 to $1,500 to a vehicle’s price, it is likely to be used in many models. Full hybrids cost much more but achieve more benefits.

See also Airplane; Diesel locomotive; Hovercraft; Internal combustion engine; Supersonic passenger plane; Turbojet.

Further Reading
Morton, Ian. “Honda Insight Hybrid Makes Heavy Use of Light Metal.” Automotive News 74, no. 5853 (December 20, 1999).
Peters, Eric. “Hybrid Cars: The Hope, Hype, and Future.” Consumers’ Research Magazine 83, no. 6 (June, 2000).
Reynolds, Kim. “Burt Rutan Ponders the Hybrid Car.” Road and Track 51, no. 11 (July, 2000).
Swoboda, Frank. “‘Hybrid’ Cars Draw Waiting List of Buyers.” Washington Post (May 3, 2001).
Yamaguchi, Jack. “Toyota Prius IC/Electric Hybrid Update.” Automotive Engineering International 108, no. 12 (December, 2000).


Geiger counter

The invention: The first electronic device able to detect and measure radioactivity in atomic particles.

The people behind the invention:
Hans Geiger (1882-1945), a German physicist
Ernest Rutherford (1871-1937), a British physicist
Sir John Sealy Edward Townsend (1868-1957), an Irish physicist
Sir William Crookes (1832-1919), an English physicist
Wilhelm Conrad Röntgen (1845-1923), a German physicist
Antoine-Henri Becquerel (1852-1908), a French physicist

Discovering Natural Radiation

When radioactivity was discovered and first studied, the work was done with rather simple devices. In the 1870’s, Sir William Crookes learned how to create a very good vacuum in a glass tube. He placed electrodes in each end of the tube and studied the passage of electricity through the tube. This simple device became known as the “Crookes tube.” In 1895, Wilhelm Conrad Röntgen was experimenting with a Crookes tube. It was known that when electricity went through a Crookes tube, one end of the glass tube might glow. Certain mineral salts placed near the tube would also glow. In order to observe carefully the glowing salts, Röntgen had darkened the room and covered most of the Crookes tube with dark paper. Suddenly, a flash of light caught his eye. It came from a mineral sample placed some distance from the tube and shielded by the dark paper; yet when the tube was switched off, the mineral sample went dark. Experimenting further, Röntgen became convinced that some ray from the Crookes tube had penetrated the mineral and caused it to glow. Since light rays were blocked by the black paper, he called the mystery ray an “X ray,” with “X” standing for unknown.

Antoine-Henri Becquerel heard of the discovery of X rays and, in February, 1896, set out to discover whether glowing minerals themselves emitted X rays. Some minerals, called “phosphorescent,” begin to glow when activated by sunlight. Becquerel’s experiment involved wrapping photographic film in black paper, setting various phosphorescent minerals on top, and leaving them in the sun. He soon learned that phosphorescent minerals containing uranium would expose the film. A series of cloudy days, however, brought a great surprise. Anxious to continue his experiments, Becquerel decided to develop film that had not been exposed to sunlight. He was astonished to discover that the film was deeply exposed. Some emanations must be coming from the uranium, he realized, and they had nothing to do with sunlight. Thus, natural radioactivity was discovered by accident with a simple piece of photographic film.

Rutherford and Geiger

Ernest Rutherford joined the world of international physics at about the same time that radioactivity was discovered. Studying the “Becquerel rays” emitted by uranium, Rutherford eventually distinguished three different types of radiation, which he named “alpha,” “beta,” and “gamma” after the first three letters of the Greek alphabet. He showed that alpha particles, the least penetrating of the three, are the nuclei of helium atoms (two protons and two neutrons tightly bound together). It was later shown that beta particles are electrons. Gamma rays, which are far more penetrating than either alpha or beta particles, were shown to be similar to X rays, but with higher energies. Rutherford became director of the associated research laboratory at Manchester University in 1907, and Hans Geiger became an assistant. At this time, Rutherford was trying to prove that alpha particles carry a double positive charge. The best way to do this was to measure the electric charge that a stream of alpha particles would bring to a target. By dividing that charge by the total number of alpha particles that fell on the target, one could calculate the charge of a single alpha particle. The problem lay in counting the particles and in proving that every particle had been counted.
Hans Geiger

Atomic radiation was the first physical phenomenon that humans discovered that they could not detect with any of their five natural senses. Hans Geiger found a way to make radiation observable. Born into a family with an academic tradition in 1882, Geiger became an academician himself. His father was a professor of linguistics at the University of Erlangen, where Geiger completed his own doctorate in physics in 1906. One of the world’s centers for experimental physics at the time was England, and there Geiger went in 1907. He became an assistant to Ernest Rutherford at the University of Manchester and thereby began the first of a series of successful collaborations during his career, all devoted to detecting or explaining types of radiation. Rutherford had distinguished three types of radiation. In 1908, he and Geiger built the first device to sense alpha particles. It gave them evidence for Rutherford’s conjecture that the atom was structured like a miniature solar system. Geiger also worked closely with Ernest Marsden, James Chadwick, and Walther Bothe on aspects of radiation physics.

Geiger’s stay in England ended with the outbreak of World War I in 1914. He returned to Germany and served as an artillery officer. Immediately after the war he took up university posts again, first in Berlin, then in Kiel, Tübingen, and back to Berlin. With Walther Müller he perfected a compact version of the radiation detector in 1925, the Geiger-Müller counter. It became the standard radiation sensor for scientists thereafter and, during the rush to locate uranium deposits during the 1950’s, for prospectors. Geiger used it to prove the existence of the Compton effect, which concerned the scattering of X rays, and his experiments further proved beyond doubt that light can take the form of quanta. He also discovered cosmic-ray showers with his detector. Geiger remained in Germany during World War II, although he vigorously opposed the Nazi party’s treatment of scientists. He died in Potsdam in 1945, after losing his home and possessions during the Allied occupation of Berlin.

Basing their design upon work done by Sir John Sealy Edward Townsend, a former colleague of Rutherford, Geiger and Rutherford constructed an electronic counter. It consisted of a long brass tube, sealed at both ends, from which most of the air had been pumped. A thin wire, insulated from the brass, was suspended down the middle of the tube. This wire was connected to batteries producing about thirteen hundred volts and to an electrometer, a device that could measure the voltage of the wire. This voltage could be increased until a spark jumped between the wire and the tube. If the voltage was turned down a little, the tube was ready to operate. An alpha particle entering the tube would ionize (knock some electrons away from) at least a few atoms. These electrons would be accelerated by the high voltage and, in turn, would ionize more atoms, freeing more electrons. This process would continue until an avalanche of electrons struck the central wire and the electrometer registered the voltage change. Since the tube was nearly ready to arc because of the high voltage, every alpha particle, even if it had very little energy, would initiate a discharge. The most complex of the early radiation detection devices—the forerunner of the Geiger counter—had just been developed. The two physicists reported their findings in February, 1908.

Impact

Their first measurements showed that one gram of radium emitted 34 thousand million alpha particles per second. Soon, the number was refined to 32.8 thousand million per second. Next, Geiger and Rutherford measured the amount of charge emitted by radium each second. Dividing this number by the previous number gave them the charge on a single alpha particle. Just as Rutherford had anticipated, the charge was double that of a hydrogen ion (a proton). This proved to be the most accurate determination of the fundamental charge until the American physicist Robert Andrews Millikan conducted his classic oil-drop experiment in 1911. Another fundamental result came from a careful measurement of the volume of helium emitted by radium each second.
Using that value, other properties of gases, and the number of helium nuclei emitted each second, they were able to calculate Avogadro’s number more directly and accurately than had previously been possible. (Avogadro’s number enables one to calculate the number of atoms in a given amount of material.)
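The charge-per-particle arithmetic above can be replayed with modern constants. The count rate is the figure quoted in the text; the collected current is a hypothetical electrometer reading chosen for illustration, and the elementary charge is the present-day value, so this is a sanity check of the "double charge" conclusion rather than a reconstruction of the 1908 measurement.

```python
# Illustrative version of the Rutherford-Geiger calculation: divide the
# total charge collected per second by the number of alphas counted
# per second to get the charge on a single alpha particle.

ELEMENTARY_CHARGE = 1.602e-19   # coulombs (modern value)
alphas_per_second = 3.4e10      # "34 thousand million" alphas per second
current_amperes = 1.09e-8       # hypothetical measured charge per second

charge_per_alpha = current_amperes / alphas_per_second
print(round(charge_per_alpha / ELEMENTARY_CHARGE, 1))  # -> 2.0
```

A ratio of 2.0 is exactly Rutherford's claim: each alpha carries twice the fundamental charge, consistent with a helium nucleus containing two protons.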


The true Geiger counter evolved when Geiger replaced the central wire of the tube with a needle whose point lay just inside a thin entrance window. This counter was much more sensitive to alpha and beta particles and also to gamma rays. By 1928, with the assistance of Walther Müller, Geiger made his counter much more efficient, responsive, durable, and portable. There are probably few radiation facilities in the world that do not have at least one Geiger counter or one of its compact modern relatives.

See also Carbon dating; Gyrocompass; Radar; Richter scale; Sonar.

Further Reading
Campbell, John. Rutherford: Scientist Supreme. Christchurch, New Zealand: AAS Publications, 1999.
Halacy, D. S. They Gave Their Names to Science. New York: Putnam, 1967.
Krebs, A. T. “Hans Geiger: Fiftieth Anniversary of the Publication of His Doctoral Thesis, 23 July 1906.” Science 124 (1956).
Weir, Fred. “Muscovites Check Radishes for Radiation; a $50 Personal Geiger Counter Gives Russians a Sense of Confidence at the Market.” Christian Science Monitor (November 4, 1999).


Genetic “fingerprinting”

The invention: A technique for using the unique characteristics of each human being’s DNA to identify individuals, establish connections among relatives, and identify criminals.

The people behind the invention:
Alec Jeffreys (1950- ), an English geneticist
Victoria Wilson (1950- ), an English geneticist
Swee Lay Thein (1951- ), a biochemical geneticist

Microscopic Fingerprints

In 1985, Alec Jeffreys, a geneticist at the University of Leicester in England, developed a method of deoxyribonucleic acid (DNA) analysis that provides a visual representation of the human genetic structure. Jeffreys’s discovery had an immediate, revolutionary impact on problems of human identification, especially the identification of criminals. Whereas earlier techniques, such as conventional blood typing, provide evidence that is merely exclusionary (indicating only whether a suspect could or could not be the perpetrator of a crime), DNA fingerprinting provides positive identification. For example, under favorable conditions, the technique can establish with virtual certainty whether a given individual is a murderer or rapist. The applications are not limited to forensic science; DNA fingerprinting can also establish definitive proof of parenthood (paternity or maternity), and it is invaluable in providing markers for mapping disease-causing genes on chromosomes. In addition, the technique is utilized by animal geneticists to establish paternity and to detect genetic relatedness between social groups.

DNA fingerprinting (also referred to as “genetic fingerprinting”) is a sophisticated technique that must be executed carefully to produce valid results. The technical difficulties arise partly from the complex nature of DNA. DNA, the genetic material responsible for heredity in all higher forms of life, is an enormously long, double-stranded molecule composed of four different units called “bases.” The bases on one strand of DNA pair with complementary bases on the other strand. A human being contains twenty-three pairs of chromosomes; one member of each chromosome pair is inherited from the mother, the other from the father. The order, or sequence, of bases forms the genetic message, which is called the “genome.” Scientists did not know the sequence of bases in any sizable stretch of DNA prior to the 1970’s because they lacked the molecular tools to split DNA into fragments that could be analyzed. This situation changed with the advent of biotechnology in the mid-1970’s.

The door to DNA analysis was opened with the discovery of bacterial enzymes called “DNA restriction enzymes.” A restriction enzyme binds to DNA whenever it finds a specific short sequence of base pairs (analogous to a code word), and it splits the DNA at a defined site within that sequence. A single enzyme finds millions of cutting sites in human DNA, and the resulting fragments range in size from tens of base pairs to hundreds or thousands. The fragments are exposed to a radioactive DNA probe, which can bind to specific complementary DNA sequences in the fragments. X-ray film detects the radioactive pattern. The developed film, called an “autoradiograph,” shows a pattern of DNA fragments, which is similar to a bar code and can be compared with patterns from known subjects.

The Presence of Minisatellites

The uniqueness of a DNA fingerprint depends on the fact that, with the exception of identical twins, no two human beings have identical DNA sequences. Of the three billion base pairs in human DNA, many will differ from one person to another. In 1985, Jeffreys and his coworkers, Victoria Wilson at the University of Leicester and Swee Lay Thein at the John Radcliffe Hospital in Oxford, discovered a way to produce a DNA fingerprint.
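The restriction-digest step described above can be sketched in a few lines. EcoRI and its GAATTC recognition sequence are a standard textbook example (not one this entry names), the input sequence is invented, and the cut position is simplified; real EcoRI cuts between the G and the first A of its site.

```python
# Sketch of a restriction digest: scan the DNA for the enzyme's
# recognition sequence, cut at each occurrence, and report the
# fragment lengths that form the "bar code" on the autoradiograph.

def digest(dna, site="GAATTC"):
    """Cut dna at every occurrence of site; return the fragments."""
    fragments, start = [], 0
    i = dna.find(site)
    while i != -1:
        fragments.append(dna[start:i])  # fragment up to the cut
        start = i                       # cut placed at site start
        i = dna.find(site, i + 1)       # (real EcoRI cuts G^AATTC)
    fragments.append(dna[start:])       # trailing fragment
    return fragments

dna = "AAAGAATTCCCCCGAATTCTT"           # invented 21-base sequence
print([len(f) for f in digest(dna)])   # -> [3, 10, 8]
```

Separating such fragments by length, then probing them, is what produces the bar-code-like band pattern the text describes.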
Jeffreys had found previously that human DNA contains many repeated minisequences called “minisatellites.” Minisatellites consist of sequences of base pairs repeated in tandem, and the number of repeated units varies widely from one individual to another. Every person, with the exception of identical twins, has a different number of tandem repeats and, hence, different lengths of minisatellite DNA. By using two labeled DNA probes to detect two different minisatellite sequences, Jeffreys obtained a unique fragment band pattern that was completely specific to an individual.

The power of the technique derives from the law of chance, which indicates that the probability (chance) that two or more independent events will occur simultaneously is calculated as the product of their separate probabilities. As Jeffreys discovered, the likelihood of two unrelated people having completely identical DNA fingerprints is extremely small—less than one in ten trillion. Given the population of the world, it is clear that the technique can distinguish any one person from everyone else. Jeffreys called his band patterns “DNA fingerprints” because of their ability to individualize. As he stated in his landmark research paper, published in the English scientific journal Nature in 1985, probes to minisatellite regions of human DNA produce “DNA ‘fingerprints’ which are completely specific to an individual (or to his or her identical twin) and can be applied directly to problems of human identification, including parenthood testing.”

Consequences

In addition to being used in human identification, DNA fingerprinting has found applications in medical genetics. In the search for a cause of, a diagnostic test for, and ultimately a treatment of an inherited disease, it is necessary to locate the defective gene on a human chromosome. Gene location is accomplished by a technique called “linkage analysis,” in which geneticists use marker sections of DNA as reference points to pinpoint the position of a defective gene on a chromosome. The minisatellite DNA probes developed by Jeffreys provide a potent and valuable set of markers that are of great value in locating disease-causing genes. Soon after its discovery, DNA fingerprinting was used to locate the defective genes responsible for several diseases, including fetal hemoglobin abnormality and Huntington’s disease.
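The law-of-chance argument behind the technique can be made concrete. The per-locus match frequencies below are invented for illustration; the point is only that independent probabilities multiply into a vanishingly small combined figure, which is how Jeffreys's one-in-ten-trillion estimates arise.

```python
# Multiply hypothetical per-locus band-match frequencies to get the
# probability that two unrelated people match at every locus at once.

per_locus_match = [0.25, 0.1, 0.05, 0.02, 0.01]  # invented frequencies

combined = 1.0
for p in per_locus_match:
    combined *= p

print(combined)          # about 2.5e-07: one chance in four million
print(round(1 / combined))  # odds against a coincidental full match
```

With the dozens of variable loci a real minisatellite probe reveals, the product shrinks far below this five-locus toy figure.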
Genetic fingerprinting has also had a major impact on genetic studies of higher animals. Because DNA sequences are conserved in evolution, humans and other vertebrates have many sequences in common. This commonality enabled Jeffreys to use his probes to human minisatellites to bind to the DNA of many different vertebrates, ranging from mammals to birds, reptiles, amphibians, and fish; this made it possible for him to produce DNA fingerprints of these vertebrates. In addition, the technique has been used to discern the mating behavior of birds, to determine paternity in zoo primates, and to detect inbreeding in imperiled wildlife. DNA fingerprinting can also be applied to animal breeding problems, such as the identification of stolen animals, the verification of semen samples for artificial insemination, and the determination of pedigree.

The technique is not foolproof, however, and results may be far from ideal. Especially in the area of forensic science, there was a rush to use the tremendous power of DNA fingerprinting to identify purported murderers and rapists, and the need for scientific standards was often neglected. Some problems arose because forensic DNA fingerprinting in the United States is generally conducted in private, unregulated laboratories. In the absence of rigorous scientific controls, the DNA fingerprint bands of two completely unknown samples cannot be matched precisely, and the results may be unreliable.

See also Amniocentesis; Artificial chromosome; Cloning; In vitro plant culture; Rice and wheat strains; Synthetic amino acid; Synthetic DNA; Synthetic RNA.

Further Reading
Bodmer, Walter, and Robin McKie. “Probing the Present.” In The Book of Man: The Human Genome Project. New York: Scribner, 1985.
Caetano-Anolles, Gustavo, and Peter M. Gresshoff. DNA Markers: Protocols, Applications, and Overviews. New York: Wiley-VCH, 1997.
Krawczak, Michael, and Jorg Schmidtke. DNA Fingerprinting. 2d ed. New York: Springer-Verlag, 1998.
Schacter, Bernice Zeldin. Issues and Dilemmas of Biotechnology: A Reference Guide. Westport, Conn.: Greenwood Press, 1999.


Genetically engineered insulin

The invention: Artificially manufactured human insulin (Humulin) as a medication for people suffering from diabetes.

The people behind the invention:
Irving S. Johnson (1925- ), an American zoologist who was vice president of research at Eli Lilly Research Laboratories
Ronald E. Chance (1934- ), an American biochemist at Eli Lilly Research Laboratories

What Is Diabetes?

Carbohydrates (sugars and related chemicals) are the main food and energy source for humans. In wealthy countries such as the United States, more than 50 percent of the food people eat is made up of carbohydrates, while in poorer countries the carbohydrate content of diets is higher, from 70 to 90 percent. Normally, most carbohydrates that a person eats are used (or metabolized) quickly to produce energy. Carbohydrates not needed for energy are either converted to fat or stored as a glucose polymer called “glycogen.” Most adult humans carry about a pound of body glycogen; this substance is broken down to produce energy when it is needed.

Certain diseases prevent the proper metabolism and storage of carbohydrates. The most common of these diseases is diabetes mellitus, usually called simply “diabetes.” It is found in more than seventy million people worldwide. Diabetic people cannot produce or use enough insulin, a hormone secreted by the pancreas. When their condition is not treated, the eyes may deteriorate to the point of blindness, the kidneys may stop working properly, blood vessels may be damaged, and the person may fall into a coma and die. In fact, diabetes is the third most common killer in the United States. Most of the problems surrounding diabetes are caused by high levels of glucose in the blood. Cataracts often form in diabetics, as excess glucose is deposited in the lens of the eye.

Important symptoms of diabetes include constant thirst, excessive urination, and large amounts of sugar in the blood and in the urine. The glucose tolerance test (GTT) is the best way to find out whether a person is suffering from diabetes. People given a GTT are first told to fast overnight. In the morning their blood glucose level is measured; then they are asked to drink about a fourth of a pound of glucose dissolved in water. During the next four to six hours, the blood glucose level is measured repeatedly. In nondiabetics, glucose levels do not rise above a certain amount during a GTT, and the level drops quickly as the glucose is assimilated by the body. In diabetics, the blood glucose levels rise much higher and do not drop as quickly. The extra glucose then shows up in the urine.

Treating Diabetes

Until the 1920’s, diabetes could be controlled only through a diet very low in carbohydrates, and this treatment was not always successful. Then Sir Frederick G. Banting and Charles H. Best found a way to prepare purified insulin from animal pancreases and gave it to patients. This gave diabetics their first chance to live a fairly normal life. Banting and his coworkers won the 1923 Nobel Prize in Physiology or Medicine for their work. The usual treatment for diabetics became regular shots of insulin. Drug companies took the insulin from the pancreases of cattle and pigs slaughtered by the meat-packing industry. Unfortunately, animal insulin has two disadvantages. First, about 5 percent of diabetics are allergic to it and can have severe reactions. Second, the world supply of animal pancreases goes up and down depending on how much meat is being bought. Between 1970 and 1975, the supply of insulin fell sharply as people began to eat less red meat, yet the number of diabetics continued to increase. So researchers began to look for a better way to supply insulin. Studying pancreases of people who had donated their bodies to science, researchers found that human insulin did not cause allergic reactions.
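The way a GTT result is read can be sketched as a simple decision rule. The entry gives no numeric cutoffs, so the thresholds below are the commonly cited two-hour plasma-glucose values (in milligrams per deciliter) and should be taken as an assumption for illustration, not as clinical guidance.

```python
# Toy interpretation of a glucose tolerance test's two-hour reading.
# Thresholds (mg/dL) are commonly cited values, assumed here for
# illustration; they are not taken from the article.

def interpret_gtt(two_hour_glucose_mg_dl):
    if two_hour_glucose_mg_dl >= 200:
        return "consistent with diabetes"
    if two_hour_glucose_mg_dl >= 140:
        return "impaired glucose tolerance"
    return "normal"

print(interpret_gtt(110))  # normal
print(interpret_gtt(165))  # impaired glucose tolerance
print(interpret_gtt(230))  # consistent with diabetes
```

This captures the pattern the text describes: in nondiabetics the reading stays low and falls quickly, while in diabetics it climbs well past the threshold.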
Scientists realized that it would be best to find a chemical or biological way to prepare human insulin, and pharmaceutical companies worked hard toward this goal. Eli Lilly and Company was the first to succeed, and on May 14, 1982, it filed a new drug application with the Food and Drug Administration (FDA) for the human insulin preparation it named “Humulin.”

Humulin is made by genetic engineering. Irving S. Johnson, who worked on the development of Humulin, described Eli Lilly’s method for producing it. The common bacterium Escherichia coli is used. Two strains of the bacterium are produced by genetic engineering: The first strain is used to make a protein called an “A chain,” and the second strain is used to make a “B chain.” After the bacteria are harvested, the A and B chains are removed and purified separately. Then the two chains are combined chemically. When they are purified once more, the result is Humulin, which has been proved by Ronald E. Chance and his Eli Lilly coworkers to be chemically, biologically, and physically identical to human insulin.

Consequences

The FDA and other regulatory agencies around the world approved genetically engineered human insulin in 1982. Humulin does not trigger allergic reactions, and its supply does not fluctuate. It has brought an end to the fear that there would be a worldwide shortage of insulin.

Humulin is important as well in being the first genetically engineered industrial chemical. It began an era in which such advanced technology could be a source for medical drugs, chemicals used in farming, and other important industrial products. Researchers hope that genetic engineering will help in the understanding of cancer and other diseases, and that it will lead to ways to grow enough food for a world whose population continues to rise.

See also
Artificial chromosome; Artificial insemination; Cloning; Genetic “fingerprinting”; Synthetic amino acid; Synthetic DNA; Synthetic RNA.

Further Reading
Berger, Abi. “Gut Cells Engineered to Produce Insulin.” British Medical Journal 321, no. 7275 (December 16, 2000).
“Genetically Engineered Duckweed to Produce Insulin.” Resource 6, no. 3 (March, 1999).


“Lilly Gets FDA Approval for New Insulin Formula.” Wall Street Journal (October 3, 1985).
Williams, Linda. “UC Regents Sue Lilly in Dispute Over Biotech Patent for Insulin.” Los Angeles Times (February 8, 1990).


Geothermal power

The invention:
Energy generated from the earth’s natural hot springs.

The people behind the invention:
Prince Piero Ginori Conti (1865-1939), an Italian nobleman and industrialist
Sir Charles Parsons (1854-1931), an English engineer
B. C. McCabe, an American businessman

Developing a Practical System

The first successful use of geothermal energy was at Larderello in northern Italy. The Larderello geothermal field, located near the city of Pisa about 240 kilometers northwest of Rome, contains many hot springs and fumaroles (steam vents). In 1777, these springs were found to be rich in boron, and in 1818, Francesco de Larderel began extracting the useful mineral borax from them. Shortly after 1900, Prince Piero Ginori Conti, director of the Larderello borax works, conceived the idea of using the steam for power production. An experimental electrical power plant was constructed at Larderello in 1904 to provide electric power to the borax plant. After this initial experiment proved successful, a 250-kilowatt generating station was installed in 1913 and commercial power production began.

As the Larderello field grew, additional geothermal sites throughout the region were prospected and tapped for power. Power production grew steadily until the 1940’s, when production reached 130 megawatts; however, the Larderello power plants were destroyed late in World War II (1939-1945). After the war, the generating plants were rebuilt, and they were producing more than 400 megawatts by 1980.

The Larderello power plants encountered many of the technical problems that were later to concern other geothermal facilities. For example, hydrogen sulfide in the steam was highly corrosive to copper, so the Larderello power plant used aluminum for electrical connections much more than did conventional power plants of the
time. Also, the low pressure of the steam in early wells at Larderello presented problems. The first generators simply used steam to drive a generator and vented the spent steam into the atmosphere. A system of this sort, called a “noncondensing system,” is useful for small generators but not efficient enough to produce large amounts of power. Most steam engines derive power not only from the pressure of the steam but also from the vacuum created when the steam is condensed back to water. Geothermal systems that generate power from condensation, as well as direct steam pressure, are called “condensing systems.” Most large geothermal generators are of this type.

Condensation of geothermal steam presents special problems not present in ordinary steam engines: There are other gases present that do not condense. Instead of a vacuum, condensation of steam contaminated with other gases would result in only a limited drop in pressure and, consequently, very low efficiency. Initially, the operators of Larderello tried to use the steam to heat boilers that would, in turn, generate pure steam. Eventually, a device was developed that removed most of the contaminating gases from the steam. Although later wells at Larderello and other geothermal fields produced steam at greater pressure, these engineering innovations improved the efficiency of any geothermal power plant.

Expanding the Idea

In 1913, the English engineer Sir Charles Parsons proposed drilling an extremely deep (12-kilometer) hole to tap the earth’s deep heat. Power from such a deep hole would not come from natural steam as at Larderello but would be generated by pumping fluid into the hole and generating steam (as hot as 500 degrees Celsius) at the bottom. In modern terms, Parsons proposed tapping “hot dry-rock” geothermal energy. (No such plant has been commercially operated yet, but research is being actively pursued in several countries.)

The first use of geothermal energy in the United States was for direct heating.
In 1890, the municipal water company of Boise, Idaho, began supplying hot water from a geothermal well. Water was piped from the well to homes and businesses along appropriately named Warm Springs Avenue. At its peak, the system served more than four hundred customers, but as cheap natural gas became available, the number declined.

Although Larderello was the first successful geothermal electric power plant, the modern era of geothermal electric power began with the opening of the Geysers Geothermal Field in California. Early attempts began in the 1920’s, but it was not until 1955 that B. C. McCabe, a Los Angeles businessman, leased 14.6 square kilometers in the Geysers area and founded the Magma Power Company. The first 12.5-megawatt generator was installed at the Geysers in 1960, and production increased steadily from then on. The Geysers surpassed Larderello as the largest producing geothermal field in the 1970’s, and more than 1,000 megawatts were being generated by 1980. By the end of 1980, geothermal plants had been installed in thirteen countries, with a total capacity of almost 2,600 megawatts, and projects with a total capacity of more than 15,000 megawatts were being planned in more than twenty countries.

Impact

Geothermal power has many attractive features. Because the steam is naturally heated and under pressure, generating equipment can be simple, inexpensive, and quickly installed. Equipment and installation costs are offset by savings in fuel. It is economically practical to install small generators, a fact that makes geothermal plants attractive in remote or underdeveloped areas. Most important to a world faced with a variety of technical and environmental problems connected with fossil fuels, geothermal power does not deplete fossil fuel reserves, produces little pollution, and contributes little to the greenhouse effect.

Despite its attractive features, geothermal power has some limitations. Geologic settings suitable for easy geothermal power production are rare; there must be a hot rock or magma body close to the surface.
Although it is technically possible to pump water from an external source into a geothermal well to generate steam, most geothermal sites require a plentiful supply of natural underground water that can be tapped as a source of steam. In contrast, fossil-fuel generating plants can be at any convenient location.
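The advantage of condensing systems over simply venting spent steam, described earlier, can be illustrated with the ideal Carnot limit on a heat engine. The temperatures below are illustrative assumptions for a low-pressure geothermal plant, not figures from this article, and real plants fall well short of the Carnot bound.

```python
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Ideal upper bound on the fraction of heat convertible to work."""
    return 1.0 - t_cold_k / t_hot_k

T_STEAM = 453.0      # ~180 deg C wellhead steam (assumed)
T_VENT = 373.0       # spent steam vented to the atmosphere (100 deg C)
T_CONDENSER = 313.0  # ~40 deg C condenser sink (assumed)

eta_noncondensing = carnot_efficiency(T_STEAM, T_VENT)
eta_condensing = carnot_efficiency(T_STEAM, T_CONDENSER)
print(f"noncondensing limit: {eta_noncondensing:.1%}")
print(f"condensing limit:    {eta_condensing:.1%}")
```

Condensing the exhaust lowers the effective cold-sink temperature, which is why condensing systems dominate large installations.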


See also
Breeder reactor; Compressed-air-accumulating power plant; Fuel cell; Heat pump; Nuclear power plant; Solar thermal engine; Thermal cracking process; Tidal power plant.

Further Reading
Appleyard, Rollo. Charles Parsons: His Life and Work. London: Constable, 1933.
Boyle, Godfrey. Renewable Energy: Power for a Sustainable Future. Oxford: Oxford University Press, 1998.
Cassedy, Edward S. Prospects for Sustainable Energy: A Critical Assessment. New York: Cambridge University Press, 2000.
Parsons, Robert Hodson. The Steam Turbine and Other Inventions of Sir Charles Parsons, O.M. New York: Longmans Green, 1946.


Gyrocompass

The invention:
The first practical navigational device that enabled ships and submarines to stay on course without relying on the earth’s unreliable magnetic poles.

The people behind the invention:
Hermann Anschütz-Kaempfe (1872-1931), a German inventor and manufacturer
Jean-Bernard-Léon Foucault (1819-1868), a French experimental physicist and inventor
Elmer Ambrose Sperry (1860-1930), an American engineer and inventor

From Toys to Tools

A gyroscope consists of a rapidly spinning wheel mounted in a frame that enables the wheel to tilt freely in any direction. The wheel’s angular momentum allows it to maintain its “attitude” even when the whole device is turned or rotated. These devices have been used to solve problems arising in such areas as sailing and navigation. For example, a gyroscope aboard a ship maintains its orientation even while the ship is rolling. Among other things, this allows the extent of the roll to be measured accurately. Moreover, the spin axis of a free gyroscope can be adjusted to point toward true north. It will (with some exceptions) stay that way despite changes in the direction of a vehicle in which it is mounted.

Gyroscopic effects were employed in the design of various objects long before the theory behind them was formally known. A classic example is a child’s top, which balances, seemingly in defiance of gravity, as long as it continues to spin. Boomerangs and flying disks derive stability and accuracy from the spin imparted by the thrower. Likewise, the accuracy of rifles improved when barrels were manufactured with internal spiral grooves that caused the emerging bullet to spin.

In 1852, the French inventor Jean-Bernard-Léon Foucault built the first gyroscope, a measuring device consisting of a rapidly spinning wheel mounted within concentric rings that allowed the wheel to move freely about two axes. This device, like the Foucault pendulum, was used to demonstrate the rotation of the earth around its axis, since the spinning wheel, which is not fixed, retains its orientation in space while the earth turns under it. The gyroscope had a related interesting property: As it continued to spin, the force of the earth’s rotation caused its axis to rotate gradually until it was oriented parallel to the earth’s axis, that is, in a north-south direction. It is this property that enables the gyroscope to be used as a compass.

When Magnets Fail

In 1904, Hermann Anschütz-Kaempfe, a German manufacturer working in the Kiel shipyards, became interested in the navigation problems of submarines used in exploration under the polar ice cap. By 1905, efficient working submarines were a reality, and it was evident to all major naval powers that submarines would play an increasingly important role in naval strategy.

Submarine navigation posed problems, however, that could not be solved by instruments designed for surface vessels. A submarine needs to orient itself under water in three dimensions; it has no automatic horizon with respect to which it can level itself. Navigation by means of stars or landmarks is impossible when the submarine is submerged. Furthermore, in an enclosed metal hull containing machinery run by electricity, a magnetic compass is worthless. To a lesser extent, increasing use of metal, massive moving parts, and electrical equipment had also rendered the magnetic compass unreliable in conventional surface battleships.

It made sense for Anschütz-Kaempfe to use the gyroscopic effect to design an instrument that would enable a ship to maintain its course while under water. Yet producing such a device would not be easy. First, it needed to be suspended in such a way that it was free to turn in any direction with as little mechanical resistance as possible.
At the same time, it had to be able to resist the inevitable pitching and rolling of a vessel at sea. Finally, a continuous power supply was required to keep the gyroscopic wheels spinning at high speed.

The original Anschütz-Kaempfe gyrocompass consisted of a pair of spinning wheels driven by an electric motor. The device was connected to a compass card visible to the ship’s navigator. Motor, gyroscope, and suspension system were mounted in a frame that allowed the apparatus to remain stable despite the pitch and roll of the ship.

In 1906, the German navy installed a prototype of the Anschütz-Kaempfe gyrocompass on the battleship Undine and subjected it to exhaustive tests under simulated battle conditions, sailing the ship under forced draft and suddenly reversing the engines, changing the position of heavy turrets and other mechanisms, and firing heavy guns. In conditions under which a magnetic compass would have been worthless, the gyrocompass proved a satisfactory navigational tool, and the results were impressive enough to convince the German navy to undertake installation of gyrocompasses in submarines and heavy battleships, including the battleship Deutschland.

Elmer Ambrose Sperry, a New York inventor intimately associated with pioneer electrical development, was independently working on a design for a gyroscopic compass at about the same time. In 1907, he patented a gyrocompass consisting of a single rotor mounted within two concentric shells, suspended by fine piano wire from a frame mounted on gimbals. The rotor of the Sperry compass operated in a vacuum, which enabled it to rotate more rapidly. The Sperry gyrocompass was in use on larger American battleships and submarines on the eve of World War I (1914-1918).

Elmer Sperry

Although Elmer Ambrose Sperry, born in 1860, had only a grade school education as a child in rural New York, the equipment used on local farms piqued his interest in machinery, and he learned about technology on his own. He attended a local teachers’ college and, graduating in 1880, was determined to become an inventor. He was especially interested in the application of electricity. He designed his own arc lighting system and opened the Sperry Electric Light, Motor, and Car Brake Company to sell it, changing its name to Sperry Electric Company in 1887. He made such progress in devising electric mining equipment, electric brakes for automobiles and streetcars, and his own electric car that General Electric bought him out. In 1900 Sperry opened a laboratory in Washington, D.C., and continued research on a gyroscope that he had begun in 1896. After more than a decade he patented his device, and after successful trials aboard the USS Worden, he established the Sperry Gyroscope Company in 1910, later supplying the American, British, and Russian navies as well as commercial ships. In 1914 he successfully demonstrated a gyrostabilizer for aircraft and expanded his company to manufacture aeronautical technology. Before he sold the company in 1926 he had registered more than four hundred patents. Sperry died in Brooklyn in 1930.

Impact

The ability to navigate submerged submarines was of critical strategic importance in World War I. Initially, the German navy had an advantage both in the number of submarines at its disposal and in their design and maneuverability. The German U-boat fleet declared all-out war on Allied shipping, and, although their efforts to blockade England and France were ultimately unsuccessful, the tremendous toll they inflicted helped maintain the German position and prolong the war. To a submarine fleet operating throughout the Atlantic and in the Caribbean, as well as in near-shore European waters, effective long-distance navigation was critical.

Gyrocompasses were standard equipment on submarines and battleships and, increasingly, on larger commercial vessels during World War I, World War II (1939-1945), and the period between the wars. The devices also found their way into aircraft, rockets, and guided missiles. Although the compasses were made more accurate and easier to use, the fundamental design differed little from that invented by Anschütz-Kaempfe.

See also
Atomic-powered ship; Dirigible; Hovercraft; Radar; Sonar.

Further Reading
Hughes, Thomas Parke. Elmer Sperry: Inventor and Engineer. Baltimore: Johns Hopkins University Press, 1993.
_____. Science and the Instrument-Maker: Michelson, Sperry, and the Speed of Light. Washington: Smithsonian Institution Press, 1976.
Sorg, H. W. “From Serson to Draper: Two Centuries of Gyroscopic Development.” Journal of the Institute of Navigation 23, no. 4 (Winter, 1976-1977).


Hard disk

The invention:
A large-capacity, permanent magnetic storage device built into most personal computers.

The people behind the invention:
Alan Shugart (1930- ), an engineer who first developed the floppy disk
Philip D. Estridge (1938?-1985), the director of IBM’s product development facility
Thomas J. Watson, Jr. (1914-1993), the chief executive officer of IBM

The Personal Oddity

When the International Business Machines (IBM) Corporation introduced its first microcomputer, called simply the IBM PC (for “personal computer”), the occasion was less a dramatic invention than the confirmation of a trend begun some years before. A number of companies had introduced microcomputers before IBM; one of the best known at that time was Apple Corporation’s Apple II, for which software for business and scientific use was quickly developed. Nevertheless, the microcomputer was quite expensive and was often looked upon as an oddity, not as a useful tool.

Under the leadership of Thomas J. Watson, Jr., IBM, which had previously focused on giant mainframe computers, decided to develop the PC. A design team headed by Philip D. Estridge was assembled in Boca Raton, Florida, and it quickly developed its first, pacesetting product. It is an irony of history that IBM anticipated selling only one hundred thousand or so of these machines, mostly to scientists and technically inclined hobbyists. Instead, IBM’s product sold exceedingly well, and its design parameters, as well as its operating system, became standards.

The earliest microcomputers used a cassette recorder as a means of mass storage; a floppy disk drive capable of storing approximately 160 kilobytes of data was initially offered only as an option. While home hobbyists were accustomed to using a cassette recorder
for storage purposes, such a system was far too slow and awkward for use in business and science. As a result, virtually every IBM PC sold was equipped with at least one 5.25-inch floppy disk drive.

Memory Requirements

All computers require memory of two sorts in order to carry out their tasks. One type of memory is main memory, or random access memory (RAM), which is used by the computer’s central processor to store data it is using while operating. The type of memory used for this function is typically built of silicon-based integrated circuits that have the advantage of speed (to allow the processor to fetch or store the data quickly), but the disadvantage of possibly losing or “forgetting” data when the electric current is turned off. Further, such memory generally is relatively expensive.

To reduce costs, another type of memory—long-term storage memory, known also as “mass storage”—was developed. Mass storage devices include magnetic media (tape or disk drives) and optical media (such as the compact disc read-only memory, or CD-ROM). While the speed with which data may be retrieved from or stored in such devices is rather slow compared to the central processor’s speed, a disk drive—the most common form of mass storage used in PCs—can store relatively large amounts of data quite inexpensively.

Early floppy disk drives (so called because the magnetically treated material on which data are recorded is made of a very flexible plastic) held 160 kilobytes of data using only one side of the magnetically coated disk (about eighty pages of normal, double-spaced, typewritten information). Later developments increased storage capacities to 360 kilobytes by using both sides of the disk and later, with increasing technological ability, 1.44 megabytes (millions of bytes). In contrast, mainframe computers, which are typically connected to large and expensive tape drive storage systems, could store gigabytes (millions of megabytes) of information.
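The capacities quoted above can be put on a single scale. A quick sketch, assuming decimal prefixes and the rate implied by the article of about two kilobytes per double-spaced page:

```python
KB = 1_000
MB = 1_000 * KB

# ~80 pages per 160-kilobyte floppy side, per the article's estimate.
BYTES_PER_PAGE = 2 * KB

capacities = {
    "single-sided floppy": 160 * KB,
    "double-sided floppy": 360 * KB,
    "high-density floppy": int(1.44 * MB),
    "PC XT hard disk": 10 * MB,
}

for name, size in capacities.items():
    pages = size // BYTES_PER_PAGE
    print(f"{name}: {pages:,} pages of double-spaced text")
```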
While such capacities seem large, the needs of business and scientific users soon outstripped available space. Since even the mailing list of a small business or a scientist’s mathematical model of a chemical reaction easily could require greater storage potential than
early PCs allowed, the need arose for a mass storage device that could accommodate very large files of data. The answer was the hard disk drive, also known as a “fixed disk drive,” reflecting the fact that the disk itself is not only rigid but also permanently installed inside the machine. In 1955, IBM had envisioned the notion of a fixed, hard magnetic disk as a means of storing computer data, and, under the direction of Alan Shugart in the 1960’s, the floppy disk was developed as well.

As the engineers of IBM’s facility in Boca Raton refined the idea of the original PC to design the new IBM PC XT, it became clear that chief among the needs of users was the availability of large-capability storage devices. The decision was made to add a 10-megabyte hard disk drive to the PC. On March 8, 1983, less than two years after the introduction of its first PC, IBM introduced the PC XT. Like the original, it was an evolutionary design, not a revolutionary one. The inclusion of a hard disk drive, however, signaled that mass storage devices in personal computers had arrived.

Consequences

Above all else, any computer provides a means for storing, ordering, analyzing, and presenting information. If the personal computer is to become the information appliance some have suggested it will be, the ability to manipulate very large amounts of data will be of paramount concern.

Hard disk technology was greeted enthusiastically in the marketplace, and the demand for hard drives has seen their numbers increase as their quality increases and their prices drop. It is easy to understand one reason for such eager acceptance: convenience. Floppy-bound computer users find themselves frequently changing (or “swapping”) their disks in order to allow programs to find the data they need. Moreover, there is a limit to how much data a single floppy disk can hold.
The advantage of a hard drive is that it allows users to keep seemingly unlimited amounts of data and programs stored in their machines and readily available. Also, hard disk drives are capable of finding files and transferring their contents to the processor much more quickly than a floppy drive. A user may thus create exceedingly large files, keep
them on hand at all times, and manipulate data more quickly than with a floppy. Finally, while a hard drive is a slow substitute for main memory, it allows users to enjoy the benefits of larger memories at significantly lower cost.

The introduction of the PC XT with its 10-megabyte hard drive was a milestone in the development of the PC. Over the next two decades, the size of computer hard drives increased dramatically. By 2001, few personal computers were sold with hard drives with less than three gigabytes of storage capacity, and hard drives with more than thirty gigabytes were becoming the standard. Indeed, for less money than a PC XT cost in the mid-1980’s, one could buy a fully equipped computer with a hard drive holding sixty gigabytes—a storage capacity equivalent to six thousand 10-megabyte hard drives.

See also
Bubble memory; Compact disc; Computer chips; Floppy disk; Optical disk; Personal computer.

Further Reading
Chposky, James, and Ted Leonsis. Blue Magic: The People, Power, and Politics Behind the IBM Personal Computer. New York: Facts on File, 1988.
Freiberger, Paul, and Michael Swaine. Fire in the Valley: The Making of the Personal Computer. 2d ed. New York: McGraw-Hill, 2000.
Grossman, Wendy. Remembering the Future: Interviews from Personal Computer World. New York: Springer, 1997.
Watson, Thomas J., and Peter Petre. Father, Son and Co.: My Life at IBM and Beyond. New York: Bantam Books, 2000.


Hearing aid

The invention:
Miniaturized electronic amplifier worn inside the ears of hearing-impaired persons.

The organization behind the invention:
Bell Labs, the research and development arm of the American Telephone and Telegraph Company

Trapped in Silence

Until the middle of the twentieth century, people who experienced hearing loss had little hope of being able to hear sounds without the use of large, awkward, heavy appliances. For many years, the only hearing aids available were devices known as ear trumpets. The ear trumpet tried to compensate for hearing loss by increasing the number of sound waves funneled into the ear canal. A wide, bell-like mouth similar to the bell of a musical trumpet narrowed to a tube that the user placed in his or her ear. Ear trumpets helped a little, but they could not truly increase the volume of the sounds heard.

Beginning in the nineteenth century, inventors tried to develop electrical devices that would serve as hearing aids. The telephone was actually a by-product of Alexander Graham Bell’s efforts to make a hearing aid. Following the invention of the telephone, electrical engineers designed hearing aids that employed telephone technology, but those hearing aids were only a slight improvement over the old ear trumpets. They required large, heavy battery packs and used a carbon microphone similar to the receiver in a telephone. More sensitive than purely physical devices such as the ear trumpet, they could transmit a wider range of sounds but could not amplify them as effectively as electronic hearing aids now do.

Transistors Make Miniaturization Possible

Two types of hearing aids exist: body-worn and head-worn. Body-worn hearing aids permit the widest range of sounds to be heard, but because of the devices’ larger size, many hearing-
impaired persons do not like to wear them. Head-worn hearing aids, especially those worn completely in the ear, are much less conspicuous. In addition to in-ear aids, the category of head-worn hearing aids includes both hearing aids mounted in eyeglass frames and those worn behind the ear.

All hearing aids, whether head-worn or body-worn, consist of four parts: a microphone to pick up sounds, an amplifier, a receiver, and a power source. The microphone gathers sound waves and converts them to electrical signals; the amplifier boosts, or increases, those signals; and the receiver then converts the signals back into sound waves. In effect, the hearing aid is a miniature radio. After the receiver converts the signals back to sound waves, those waves are directed into the ear canal through an earpiece or ear mold. The ear mold generally is made of plastic and is custom fitted from an impression taken from the prospective user’s ear.

Effective head-worn hearing aids could not be built until the electronic circuit was developed in the early 1950’s. The same invention—the transistor—that led to small portable radios and tape players allowed engineers to create miniaturized, inconspicuous hearing aids. Depending on the degree of amplification required, the amplifier in a hearing aid contains three or more transistors. Transistors first replaced vacuum tubes in devices such as radios and phonographs, and then engineers realized that they could be used in devices for the hearing-impaired.

The research at Bell Labs that led to the invention of the transistor rose out of military research during World War II. The vacuum tubes used, for example, in radar installations to amplify the strength of electronic signals were big, were fragile because they were made of blown glass, and gave off high levels of heat when they were used. Transistors, however, made it possible to build solid-state integrated circuits. These are made from crystals of materials such as germanium or arsenic alloys and therefore are much less fragile than glass. They are also extremely small (in fact, some integrated circuits are barely visible to the naked eye) and give off no heat during use.

The number of transistors in a hearing aid varies depending upon the amount of amplification required. The first transistor is the most important for the listener in terms of the quality of sound heard. If the frequency response is set too high—that is, if the device is too sensitive—the listener will be bothered by distracting background noise. Theoretically, there is no limit on the amount of amplification that a hearing aid can be designed to provide, but there are practical limits. The higher the amplification, the more power is required to operate the hearing aid. This is why body-worn hearing aids can convey a wider range of sounds than head-worn devices can. It is the power source—not the electronic components—that is the limiting factor. A body-worn hearing aid includes a larger battery pack than can be used with a head-worn device. Indeed, despite advances in battery technology, the power requirements of a head-worn hearing aid are such that a 1.4-volt battery that could power a wristwatch for several years will last only a few days in a hearing aid.

Consequences

The invention of the electronic hearing aid made it possible for many hearing-impaired persons to participate in a hearing world. Prior to the invention of the hearing aid, hearing-impaired children often were unable to participate in routine school activities or function effectively in mainstream society. Instead of being able to live at home with their families and enjoy the same experiences that were available to other children their age, often they were forced to attend special schools operated by the state or by charities. Hearing-impaired people were singled out as being different and were limited in their choice of occupations. Although not every hearing-impaired person can be helped to hear with a hearing aid—particularly in cases of total hearing loss—the electronic hearing aid has ended restrictions for many hearing-impaired people. Hearing-impaired children are now included in public school classes, and hearing-impaired adults can now pursue occupations from which they were once excluded.

Today, many deaf and hearing-impaired persons have chosen to live without the help of a hearing aid.
They believe that they are not disabled but simply different, and they point out that their “disability” often allows them to appreciate and participate in life in unique and positive ways. For them, the use of hearing aids is a choice, not a necessity. For those who choose, hearing aids make it possible to participate in the hearing world.
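The amplifier boost described in this article is conventionally quoted in decibels. A small sketch of the arithmetic; the tenfold per-stage gain is a hypothetical figure for illustration, not one given in the article:

```python
import math

def voltage_gain_db(v_out: float, v_in: float) -> float:
    """Express a voltage ratio as a gain in decibels."""
    return 20.0 * math.log10(v_out / v_in)

# Hypothetical three-stage amplifier, each transistor stage boosting the
# signal tenfold (the article notes three or more transistors are typical).
stage_gain = 10.0
overall_gain = stage_gain ** 3

print(voltage_gain_db(stage_gain, 1.0))    # gain of one stage, in dB
print(voltage_gain_db(overall_gain, 1.0))  # gain of the whole chain, in dB
```

Because decibels are logarithmic, the gains of cascaded stages simply add: three 20 dB stages give 60 dB overall.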


See also Artificial heart; Artificial kidney; Cell phone; Contact lenses; Heart-lung machine; Pacemaker.

Further Reading
Alexander, Howard. “Hearing Aids: Smaller and Smarter.” New York Times (November 26, 1998).
Fong, Petti. “Guess What’s the New Buzz in Hearing Aids.” Business Week, no. 3730 (April 30, 2001).
Levitt, Harry. “Noise Reduction in Hearing Aids: A Review.” Journal of Rehabilitation Research and Development 38, no. 1 (January/February, 2001).


Heart-lung machine

The invention: The first artificial device to oxygenate and circulate blood during surgery, the heart-lung machine began the era of open-heart surgery.

The people behind the invention:
John H. Gibbon, Jr. (1903-1974), a cardiovascular surgeon
Mary Hopkinson Gibbon (1905- ), a research technician
Thomas J. Watson (1874-1956), chairman of the board of IBM
T. L. Stokes and J. B. Flick, researchers in Gibbon’s laboratory
Bernard J. Miller (1918- ), a cardiovascular surgeon and research associate
Cecelia Bavolek, the first human to undergo open-heart surgery successfully using the heart-lung machine

A Young Woman’s Death

In the first half of the twentieth century, cardiovascular medicine had many triumphs. Effective anesthesia, antiseptic conditions, and antibiotics made surgery safer. Blood-typing, anti-clotting agents, and blood preservatives made blood transfusion practical. Cardiac catheterization (feeding a tube into the heart), electrocardiography, and fluoroscopy (visualizing living tissues with an X-ray machine) made the nonsurgical diagnosis of cardiovascular problems possible. As of 1950, however, there was no safe way to treat damage or defects within the heart. To make such a correction, this vital organ’s function had to be interrupted. The problem was to keep the body’s tissues alive while working on the heart. While some surgeons practiced so-called blind surgery, in which they inserted a finger into the heart through a small incision without observing what they were attempting to correct, others tried to reduce the body’s need for circulation by slowly chilling the patient until the heart stopped. Still other surgeons used “cross-circulation,” in which the patient’s circulation was connected to a donor’s circulation. All these approaches carried profound risks of hemorrhage, tissue damage, and death. In February of 1931, Gibbon witnessed the death of a young woman whose lung circulation was blocked by a blood clot. Because her blood could not pass through her lungs, she slowly lost consciousness from lack of oxygen. As he monitored her pulse and breathing, Gibbon thought about ways to circumvent the obstructed lungs and straining heart and provide the oxygen required. Because surgery to remove such a blood clot was often fatal, the woman’s surgeons operated only as a last resort. Though the surgery took only six and one-half minutes, she never regained consciousness. This experience prompted Gibbon to pursue what few people then considered a practical line of research: a way to circulate and oxygenate blood outside the body.

A Woman’s Life Restored

Gibbon began the project in earnest in 1934, when he returned to the laboratory of Edward D. Churchill at Massachusetts General Hospital for his second surgical research fellowship. He was assisted by Mary Hopkinson Gibbon. Together, they developed, using cats, a surgical technique for removing blood from a vein, supplying the blood with oxygen, and returning it to an artery using tubes inserted into the blood vessels. Their objective was to create a device that would keep the blood moving, spread it over a very thin layer to pick up oxygen efficiently and remove carbon dioxide, and avoid both clotting and damaging blood cells. In 1939, they reported that prolonged survival after heart-lung bypass was possible in experimental animals. World War II (1939-1945) interrupted the progress of this work; it was resumed by Gibbon at Jefferson Medical College in 1944. Shortly thereafter, he attracted the interest of Thomas J. Watson, chairman of the board of the International Business Machines (IBM) Corporation, who provided the services of IBM’s experimental physics laboratory and model machine shop as well as the assistance of staff engineers.
IBM constructed and modified two experimental machines over the next seven years, and IBM engineers contributed significantly to the evolution of a machine that would be practical in humans. Gibbon’s first attempt to use the pump-oxygenator in a human being was in a fifteen-month-old baby. This attempt failed, not because of a malfunction or a surgical mistake but because of a misdiagnosis. The child died following surgery because the real problem had not been corrected by the surgery. On May 6, 1953, the heart-lung machine was first used successfully on Cecelia Bavolek. In the six months before surgery, Bavolek had been hospitalized three times for symptoms of heart failure when she tried to engage in normal activity. While her circulation was connected to the heart-lung machine for forty-five minutes, the surgical team headed by Gibbon was able to close an opening between her atria and establish normal heart function. Two months later, an examination of the defect revealed that it was fully closed; Bavolek resumed a normal life. The age of open-heart surgery had begun.

Consequences

The heart-lung bypass technique alone could not make open-heart surgery truly practical. When it was possible to keep tissues alive by diverting blood around the heart and oxygenating it, other questions already under investigation became even more critical: how to prolong the survival of bloodless organs, how to measure oxygen and carbon dioxide levels in the blood, and how to prolong anesthesia during complicated surgery. Thus, following the first successful use of the heart-lung machine, surgeons continued to refine the methods of open-heart surgery. The heart-lung apparatus set the stage for the advent of “replacement parts” for many types of cardiovascular problems. Cardiac valve replacement was first successfully accomplished in 1960 by placing an artificial ball valve between the left atrium and ventricle. In 1957, doctors performed the first coronary bypass surgery, grafting sections of a leg vein into the heart’s circulation system to divert blood around clogged coronary arteries.
Likewise, the first successful heart transplant (1967) and the controversial Jarvik-7 artificial heart implantation (1982) required the ability to stop the heart and keep the body’s tissues alive during time-consuming and delicate surgical procedures. Gibbon’s heart-lung machine paved the way for all these developments.


See also Artificial heart; Blood transfusion; CAT scanner; Coronary artery bypass surgery; Electrocardiogram; Iron lung; Mammography; Nuclear magnetic resonance; Pacemaker; X-ray image intensifier.

Further Reading
DeJauregui, Ruth. One Hundred Medical Milestones That Shaped World History. San Mateo, Calif.: Bluewood Books, 1998.
Romaine-Davis, Ada. John Gibbon and His Heart-Lung Machine. Philadelphia: University of Pennsylvania Press, 1991.
Shumacker, Harris B. A Dream of the Heart: The Life of John H. Gibbon, Jr., Father of the Heart-Lung Machine. Santa Barbara, Calif.: Fithian Press, 1999.
Watson, Thomas J., and Peter Petre. Father, Son and Co.: My Life at IBM and Beyond. New York: Bantam Books, 2000.


Heat pump

The invention: A device that warms and cools buildings efficiently and cheaply by moving heat from one area to another.

The people behind the invention:
T. G. N. Haldane, a British engineer
Lord Kelvin (William Thomson, 1824-1907), a British mathematician, scientist, and engineer
Sadi Carnot (1796-1832), a French physicist and thermodynamicist

The Heat Pump

A heat pump is a device that takes in heat at one temperature and releases it at a higher temperature. When operated to provide heat (for example, for space heating), the heat pump is said to operate in the heating mode; when operated to remove heat (for example, for air conditioning), it is said to operate in the cooling mode. Some type of work must be done to drive the pump, no matter which mode is being used. There are two general types of heat pumps: vapor compression pumps and absorption pumps. The basic principle of vapor compression cycle heat pumps is derived from the work of Sadi Carnot in the early nineteenth century. Carnot’s work was published in 1824. It was William Thomson (later to become known as Lord Kelvin), however, who first proposed a practical heat pump system, or “heat multiplier,” as it was known then, and he also indicated that a refrigerating machine could be used for heating. Thomson’s heat pump used air as its working fluid. Thomson claimed that his heat pump was able to produce heat by using only 3 percent of the energy that would be required for direct heating. Absorption cycle machines have an even longer history. Refrigerators based on the use of sulfuric acid and water date back to 1777. Systems using this fluid combination, improved and modified by Edmond Carré, were used extensively in Paris cafés in the late 1800’s. In 1849, a patent was filed by Ferdinand Carré for the working-fluid pair of ammonia and water in absorption cycle machines.


Refrigerator or Heater

In the early twentieth century, many people (including some electrical engineers) believed that electrical energy could never be used economically to produce large quantities of heat under ordinary conditions. A few researchers, however, believed that it was possible to produce heat by using electrical energy if that energy was first converted to mechanical energy and if the Carnot principle was then used to pump heat from a lower to a higher temperature. In 1927, T. G. N. Haldane carried out detailed experiments showing that the heat pump can be made to operate in either the heating mode or the cooling mode. A heat pump in the cooling mode works like a refrigerator; a heat pump in the heating mode supplies heat for heating. Haldane demonstrated that a refrigerator could be modified to work as a heating unit. He used a vapor compression cycle refrigerator for his demonstration. In the design of a refrigerating device, the primary objective is the production of cold rather than heat, but the two operations are complementary. The process of producing cold is simply that of pumping heat from a relatively cold to a relatively hot source, but in the refrigeration process particular attention is paid to the prevention of the leakage of heat into the cold source, whereas no attempt is made to prevent the escape of heat from the hot source. If a refrigerating device were treated as a heat pump in which the primary product is the heat rejected to the hot source, the order of importance would be reversed, and every opportunity would be taken to allow heat to leak into the cold source and every precaution would be taken against allowing heat to leak out of the hot source. The components of a heat pump that operates on the principle of vapor compression include an electric motor, a compressor, an evaporator, and a condenser.
The compressor sucks in gas from the evaporator and compresses it to a pressure that corresponds to a saturation temperature that is slightly higher than that of the required heat. From the compressor, the compressed gas passes to the condenser, where it is cooled and condensed, thereby giving up a large quantity of heat to the water or other substance that it is intended to heat. The condensed gas then passes through the expansion valve, where a sudden reduction of pressure takes place. This reduction of pressure lowers the boiling point of the liquid, which therefore vaporizes and takes in heat from the medium surrounding the evaporator. After evaporation, the gas passes on to the compressor, and the cycle is complete. Haldane was the first person in the United Kingdom to install a heat pump. He was also the first person to install a domestic heat pump to provide hot water and space heating.

[Figure: Components of a heat pump. Electric power drives the compressor; low-pressure vapor from the evaporator (heat in) is compressed to high-pressure vapor, condensed in the condenser (heat out), and returned through the expansion valve as low-pressure liquid to the evaporator.]
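The appeal of pumping heat rather than generating it directly can be illustrated with the ideal (Carnot) limit on the heating coefficient of performance, which depends only on the two absolute temperatures involved. A minimal sketch, with illustrative temperatures (real machines achieve only a fraction of this limit):

```python
def carnot_heating_cop(hot_c, cold_c):
    """Ideal (Carnot) heating COP between a cold source and a hot sink, in Celsius."""
    hot_k = hot_c + 273.15    # convert to absolute temperature (kelvin)
    cold_k = cold_c + 273.15
    return hot_k / (hot_k - cold_k)

# Heating a room at 21 C using outdoor air at 0 C as the cold source:
print(round(carnot_heating_cop(21.0, 0.0), 1))  # 14.0
```

Even a modest fraction of this ideal figure means each unit of electrical work moves several units of heat, which is why the heat pump can undercut direct electric heating.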

Impact

Since Haldane’s demonstration of the use of the heat pump, the device has been highly successful in people’s homes, especially in those regions where both heating and cooling are required for single- and multifamily residences (for example, Australia, Japan, and the United States). This is the case because the heat pump can provide both heating and cooling; therefore, the cost of a heat pump system can be spread over both heating and cooling seasons. Total annual sales of heat pumps worldwide have risen to the millions, with most sales being made in Japan and the United States. The use of heat pumps can save energy. In addition, because they are electric, they can save significant quantities of oil, especially in the residential retrofit and replacement markets and when used as add-on devices for existing heating systems. Some heat pumps are now available that may compete cost-effectively with other heating systems in meeting the heating demands of cooler regions.


Technological developments by heat pump manufacturers are continually improving the performance and cost-effectiveness of heat pumps. The electric heat pump will continue to dominate the residential market, although engine-driven systems are likely to have a greater impact on the multifamily market.

See also Breeder reactor; Compressed-air-accumulating power plant; Fuel cell; Geothermal power; Nuclear power plant; Solar thermal engine; Tidal power plant.

Further Reading
Kavanaugh, Stephen P., and Kevin D. Rafferty. Ground-Source Heat Pumps: Design of Geothermal Systems for Commercial and Institutional Buildings. Atlanta: American Society of Heating, Refrigerating and Air-Conditioning Engineers, 1997.
Nisson, Ned. “Efficient and Affordable.” Popular Science 247, no. 2 (August, 1995).
Using the Earth to Heat and Cool Homes. Washington, D.C.: U.S. Department of Energy, 1983.


Holography

The invention: A lensless system of three-dimensional photography that was one of the most important developments in twentieth century optical science.

The people behind the invention:
Dennis Gabor (1900-1979), a Hungarian-born inventor and physicist who was awarded the 1971 Nobel Prize in Physics
Emmett Leith (1927- ), a radar researcher who, with Juris Upatnieks, produced the first laser holograms
Juris Upatnieks (1936- ), a radar researcher who, with Emmett Leith, produced the first laser holograms

Easter Inspiration

The development of photography in the early 1900’s made possible the recording of events and information in ways unknown before the twentieth century: the photographing of star clusters, the recording of the emission spectra of heated elements, the storing of data in the form of small recorded images (for example, microfilm), and the photographing of microscopic specimens, among other things. Because of its vast importance to the scientist, the science of photography has developed steadily. An understanding of the photographic and holographic processes requires some knowledge of the wave behavior of light. Light is an electromagnetic wave that, like a water wave, has an amplitude and a phase. The amplitude corresponds to the wave height, while the phase indicates which part of the wave is passing a given point at a given time. A cork floating in a pond bobs up and down as waves pass under it. The position of the cork at any time depends on both amplitude and phase: The phase determines on which part of the wave the cork is floating at any given time, and the amplitude determines how high or low the cork can be moved. Waves from more than one source arriving at the cork combine in ways that depend on their relative phases. If the waves meet in the same phase, they add and produce a large amplitude; if they arrive out of phase, they subtract and produce a small amplitude. The total amplitude, or intensity, depends on the phases of the combining waves. Dennis Gabor, the inventor of holography, was intrigued by the way in which the photographic image of an object was stored by a photographic plate but was unable to devote any consistent research effort to the question until the 1940’s. At that time, Gabor was involved in the development of the electron microscope. On Easter morning in 1947, as Gabor was pondering the problem of how to improve the electron microscope, the solution came to him. He would attempt to take a poor electron picture and then correct it optically. The process would require coherent electron beams—that is, electron waves with a definite phase. This two-stage method was inspired by the work of Lawrence Bragg. Bragg had formed the image of a crystal lattice by diffracting the photographic X-ray diffraction pattern of the original lattice. This double diffraction process is the basis of the holographic process. Bragg’s method was limited because of his inability to record the phase information of the X-ray photograph. Therefore, he could study only those crystals for which the phase relationship of the reflected waves could be predicted.

Waiting for the Laser

Gabor devised a way of capturing the phase information after he realized that adding coherent background to the wave reflected from an object would make it possible to produce an interference pattern on the photographic plate. When the phases of the two waves are identical, a maximum intensity will be recorded; when they are out of phase, a minimum intensity is recorded. Therefore, what is recorded in a hologram is not an image of the object but rather the interference pattern of the two coherent waves. This pattern looks like a collection of swirls and blank spots. The hologram (or photograph) is then illuminated by the reference beam, and part of the transmitted light is a replica of the original object wave.
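The phase-dependent addition that produces these maxima and minima can be sketched numerically by treating each wave as a complex amplitude; the recorded intensity is the squared magnitude of their sum. This is a generic illustration of two-beam interference, not a model of any particular hologram:

```python
import cmath

def combined_intensity(a1, a2, phase_difference):
    """Intensity of two superposed coherent waves (amplitudes in arbitrary units)."""
    total = a1 + a2 * cmath.exp(1j * phase_difference)
    return abs(total) ** 2

# Two equal-amplitude waves reinforce when in phase and cancel
# when half a cycle (pi radians) apart.
print(combined_intensity(1.0, 1.0, 0.0))                  # 4.0 (maximum)
print(round(combined_intensity(1.0, 1.0, cmath.pi), 12))  # 0.0 (minimum)
```

It is exactly this dependence of intensity on relative phase that lets the hologram's swirls and blank spots encode the phase information an ordinary photograph discards.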
When viewing this object wave, one sees an exact replica of the original object. The major impediment at the time in making holograms using any form of radiation was a lack of coherent sources. For example, the coherence of the mercury lamp used by Gabor and his assistant Ivor Williams was so short that they were able to make holograms of only about a centimeter in diameter. The early results were rather poor in terms of image quality and also had a double image. For this reason, there was little interest in holography, and the subject lay almost untouched for more than ten years.

Dennis Gabor

The eldest son of a mine director, Dennis Gabor was born in 1900 in Budapest, Hungary. At fifteen, suddenly developing an intense interest in optics and photography, Gabor and his brother set up their own home laboratory and experimented in those fields as well as with X rays and radioactivity. The love of physics never left him. Gabor graduated from the Berlin Technische Hochschule in 1924 and earned a doctorate of engineering in 1927 after developing a high-speed cathode ray oscillograph and a new kind of magnetic lens for controlling electrons. After graduate school he joined Siemens and Halske Limited and invented a high-pressure mercury lamp, which was later used widely in street lamps. In 1933, Gabor left Germany because of the rise of Nazism and moved to England. He worked in industrial research until 1948, improving gas-discharge tubes and stereoscopic cinematography, but he also published scientific papers on his own, including the first of many on communications theory. At the beginning of 1949, Gabor became a faculty member of the Imperial College of Science and Technology in London, first as a reader in electronics and later as a professor of applied physics. During his academic years came more inventions, including the hologram, an electron-velocity spectroscope, an analog computer, a flat color television tube, and a new type of thermionic converter. He also built a cloud chamber for detecting subatomic particles and used it to study electron interactions. As interested in theory as he was in applied physics, Gabor published papers on theoretical aspects of communications, plasma, magnetrons, and fusion. In his later years he worried deeply about the modern tendency for technology to advance out of step with social institutions and wrote popular books outlining his belief that social reform should be given priority. Gabor became a member of Britain’s Royal Society in 1956 and was awarded its Rumford Medal in 1968. In 1971 he received the Nobel Prize in Physics for inventing holography. He died in London in 1979.

Interest in the field was rekindled after the laser (light amplification by stimulated emission of radiation) was developed in 1962. Emmett Leith and Juris Upatnieks, who were conducting radar research at the University of Michigan, published the first laser holograms in 1963. The laser was an intense light source with a very long coherence length. Its monochromatic nature improved the resolution of the images greatly. Also, there was no longer any restriction on the size of the object to be photographed. The availability of the laser allowed Leith and Upatnieks to propose another improvement in holographic technique. Before 1964, holograms were made of only thin transparent objects. A small region of the hologram bore a one-to-one correspondence to a region of the object. Only a small portion of the image could be viewed at one time without the aid of additional optical components. Illuminating the transparency diffusely allowed the whole image to be seen at one time. This development also made it possible to record holograms of diffusely reflected three-dimensional objects. Gabor had seen from the beginning that this should make it possible to create three-dimensional images. After the early 1960’s, the field of holography developed very quickly. Because holography is different from conventional photography, the two techniques often complement each other. Gabor saw his idea blossom into a very important technique in optical science.

Impact

The development of the laser and the publication of the first laser holograms in 1963 caused a blossoming of the new technique in many fields.
Soon, techniques were developed that allowed holograms to be viewed with white light. It also became possible for holograms to reconstruct multicolored images. Holographic methods have been used to map terrain with radar waves and to conduct surveillance in the fields of forestry, agriculture, and meteorology.


By the 1990’s, holography had become a multimillion-dollar industry, finding applications in advertising, as an art form, and in security devices on credit cards, as well as in scientific fields. An alternate form of holography, also suggested by Gabor, uses sound waves. Acoustical imaging is useful whenever the medium around the object to be viewed is opaque to light rays—for example, in medical diagnosis. Holography has affected many areas of science, technology, and culture.

See also Color film; Electron microscope; Infrared photography; Laser; Mammography; Mass spectrograph; X-ray crystallography.

Further Reading
Greguss, Pál, Tung H. Jeong, and Dennis Gabor. Holography: Commemorating the Ninetieth Anniversary of the Birth of Dennis Gabor. Bellingham, Wash.: SPIE Optical Engineering Press, 1991.
Kasper, Joseph Emil, and Steven A. Feller. The Complete Book of Holograms: How They Work and How to Make Them. Mineola, N.Y.: Dover, 2001.
McNair, Don. How to Make Holograms. Blue Ridge Summit, Pa.: Tab Books, 1983.
Saxby, Graham. Holograms: How to Make and Display Them. New York: Focal Press, 1980.


Hovercraft

The invention: A vehicle requiring no surface contact for traction that moves freely over a variety of surfaces—particularly water—while supported on a self-generated cushion of air.

The people behind the invention:
Christopher Sydney Cockerell (1910- ), a British engineer who built the first hovercraft
Ronald A. Shaw (1910- ), an early pioneer in aerodynamics who experimented with hovercraft
Sir John Isaac Thornycroft (1843-1928), a Royal Navy architect who was the first to experiment with air-cushion theory

Air-Cushion Travel

The air-cushion vehicle was first conceived by Sir John Isaac Thornycroft of Great Britain in the 1870’s. He theorized that if a ship had a plenum chamber (a box open at the bottom) for a hull and it were pumped full of air, the ship would rise out of the water and move faster, because there would be less drag. The main problem was keeping the air from escaping from under the craft. In the early 1950’s, Christopher Sydney Cockerell was experimenting with ways to reduce both the wave-making and frictional resistance that craft had to water. In 1953, he constructed a punt with a fan that supplied air to the bottom of the craft, which could thus glide over the surface with very little friction. The air was contained under the craft by specially constructed side walls. In 1955, the first true “hovercraft,” as Cockerell called it, was constructed of balsa wood. It weighed only 127 grams and traveled over water at a speed of 13 kilometers per hour. On November 16, 1956, Cockerell successfully demonstrated his model hovercraft at the patent agent’s office in London. It was immediately placed on the “secret” list, and Saunders-Roe Ltd. was given the first contract to build hovercraft in 1957. The first experimental piloted hovercraft, the SR.N1, which had a weight of 3,400 kilograms and could carry three people at a speed of 25 knots, was completed on May 28, 1959, and publicly demonstrated on June 11, 1959.

Ground Effect Phenomenon

In a hovercraft, a jet airstream is directed downward through a hole in a metal disk, which forces the disk to rise. The jet of air has a reverse effect of its own that forces the disk away from the surface. Some of the air hitting the ground bounces back against the disk to add further lift. This is called the “ground effect.” The ground effect is such that the greater the under-surface area of the hovercraft, the greater the reverse thrust of the air that bounces back. This makes the hovercraft a mechanically efficient machine because it provides three functions. First, the ground effect reduces friction between the craft and the earth’s surface. Second, it acts as a spring suspension to reduce some of the vertical acceleration effects that arise from travel over an uneven surface. Third, it provides a safe and comfortable ride at high speed, whatever the operating environment. The air cushion can distribute the weight of the hovercraft over almost its entire area so that the cushion pressure is low. The basic elements of the air-cushion vehicle are a hull, a propulsion system, and a lift system. The hull, which accommodates the crew, passengers, and freight, contains both the propulsion and lift systems. The propulsion and lift systems can be driven by the same power plant or by separate power plants. Early designs used only one unit, but this proved to be a problem when adequate power was not achieved for movement and lift. Better results are achieved when two units are used, since far more power is used to lift the vehicle than to propel it. For lift, high-speed centrifugal fans are used to drive the air through jets that are located under the craft. A redesigned aircraft propeller is used for propulsion. Rudderlike fins and an air fan that can be swiveled to provide direction are placed at the rear of the craft.
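Because the cushion spreads the craft's weight over nearly its whole footprint, the pressure it must sustain is modest. A rough sketch using the SR.N1's 3,400-kilogram weight (the footprint area below is an assumed figure for illustration, not a specification from the article):

```python
def cushion_pressure_pa(mass_kg, footprint_m2, g=9.81):
    """Average air-cushion pressure (pascals) needed to support the craft's weight."""
    return mass_kg * g / footprint_m2

# SR.N1 weighed about 3,400 kg; assume a roughly 42 square-meter cushion footprint.
print(round(cushion_pressure_pa(3400, 42.0)))  # 794 Pa -- under 1 percent of atmospheric pressure
```

A cushion pressure of a few hundred pascals is why even soft surfaces such as mud, sand, or water can support a hovercraft that would mire a wheeled vehicle.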
Several different air systems can be used, depending on whether a skirt system is used in the lift process. The plenum chamber system, the peripheral jet system, and several types of recirculating air systems have all been successfully tried without skirting. A variety of rigid and flexible skirts have also proved to be satisfactory, depending on the use of the vehicle. Skirts are used to hold the air for lift. Skirts were once hung like curtains around hovercraft. Instead of simple curtains to contain the air, there are now complicated designs that contain the cushion, duct the air, and even provide a secondary suspension. The materials used in the skirting have also changed from a rubberized fabric to pure rubber and nylon and, finally, to neoprene, a lamination of nylon and plastic.

Sir John Isaac Thornycroft

To be truly ahead of one’s time as an inventor, one must simply know everything there is to know about a specialty and then imagine something useful that contemporary technology is not quite ready for. John Isaac Thornycroft was such an inventor. Born in 1843 in what were then the Papal States (Rome, Italy), he trained as an engineer and became a naval architect. He opened a boatbuilding and engineering company at Chiswick in London in 1866 and began looking for ways to improve the performance of small seacraft. In 1877 he delivered the HMS Lightning, England’s first torpedo boat, to the Royal Navy. He continued to make torpedo boats for coastal waters, nicknamed “scooters,” and made himself a leading expert on boat design. He introduced stabilizers and modified hull and propeller shapes in order to reduce drag from the hull’s contact with water and thereby increase a boat’s speed. One of his best ideas was to have the boat ride on a cushion of air, so that air acted as a lubricant between the hull and water. He even filed patents for the concept and built models, but the power-source technology of the day was simply too inefficient. Engines were too heavy for the amount of power they put out. None could lift a full-size boat off the water and keep it on an air cushion. So the hovercraft had to wait until the 1950’s and the incorporation of sophisticated internal combustion engines into the design. Meanwhile, Thornycroft and the company named after him continued to make innovative transports and engines: a steam-powered van in 1896, a gas engine in 1902, and heavy trucks in 1912 that the British government used during World War I. By the time Thornycroft died in 1928, on the Isle of Wight, he had been knighted by a grateful government, which would benefit from his company’s products and his advanced ideas for the rest of the twentieth century.

The three basic types of hovercraft are the amphibious, nonamphibious, and semiamphibious models. The amphibious type can travel over water and land, whereas the nonamphibious type is restricted to water travel. The semiamphibious model is also restricted to water travel but may terminate travel by nosing up on a prepared ramp or beach. All hovercraft contain built-in buoyancy tanks in the side skirting as a safety measure in the event that a hovercraft must settle on the water. Most hovercraft are equipped with gas turbines and use either propellers or water-jet propulsion.

Impact

Hovercraft are used primarily for short passenger ferry services. Great Britain was the only nation to produce a large number of hovercraft. The British built larger and faster craft and pioneered their successful use as ferries across the English Channel, where they could reach speeds of 111 kilometers per hour (60 knots) and carry more than four hundred passengers and almost one hundred vehicles. France and the former Soviet Union have also effectively demonstrated hovercraft river travel, and the Soviets have experimented with military applications as well. The military adaptations of hovercraft have been more diversified. Beach landings have been performed effectively, and the United States used hovercraft for river patrols during the Vietnam War. Other uses also exist for hovercraft. They can be used as harbor pilot vessels and for patrolling shores in a variety of police- and customs-related duties. Hovercraft can also serve as flood-rescue craft and fire-fighting vehicles. Even a hoverfreighter is being considered.
The air-cushion theory in transport systems is rapidly developing. It has spread to trains and smaller people movers in many countries. The smooth, rapid, clean, and efficient operation of such vehicles makes hovercraft technology attractive to transportation designers around the world.

See also Airplane; Atomic-powered ship; Bullet train; Gyrocompass.

Further Reading

Amyot, Joseph R. Hovercraft Technology, Economics, and Applications. Amsterdam: Elsevier, 1989.
Croome, Angela. Hover Craft. 4th ed. London: Hodder and Stoughton, 1984.
Gromer, Cliff. “Flying Low.” Popular Mechanics 176, no. 9 (September, 1999).
McLeavy, Roy. Hovercraft and Hydrofoils. London: Jane’s Publishing, 1980.
Pengelley, Rupert. “Hovercraft Cushion the Blow of Amphibious Operations.” Jane’s Navy International 104, no. 8 (October 1, 1999).
Robertson, Don. A Restless Spirit. Newport, Isle of Wight: Cross Publishing, 1994.

Hydrogen bomb

The invention: Popularly known as the “H-bomb,” the hydrogen bomb differs from the original atomic bomb in using fusion, rather than fission, to create a thermonuclear explosion almost a thousand times more powerful.

The people behind the invention:
Edward Teller (1908- ), a Hungarian-born theoretical physicist
Stanislaw Ulam (1909-1984), a Polish-born mathematician

Crash Development

A few months before the 1942 creation of the Manhattan Project, the United States-led effort to build the atomic (fission) bomb, physicist Enrico Fermi suggested to Edward Teller that such a bomb could release more energy by heating a mass of the hydrogen isotope deuterium and igniting the fusion of hydrogen into helium. Fusion is the process whereby two atoms come together to form a larger atom, a process that ordinarily occurs only in stars such as the Sun. Physicists Hans Bethe, George Gamow, and Teller had been studying fusion since 1934 and knew of the tremendous energy that could be released by this process—even more energy than the fission (atom-splitting) process that would create the atomic bomb.
Initially, Teller dismissed Fermi’s idea, but later in 1942, in collaboration with Emil Konopinski, he concluded that a hydrogen bomb, or superbomb, could be made. For practical considerations, it was decided that the design of the superbomb would have to wait until after the war.
In 1946, a secret conference on the superbomb was held in Los Alamos, New Mexico, attended by, among other Manhattan Project veterans, Stanislaw Ulam and Klaus Emil Julius Fuchs. Supporting the investigation of Teller’s concept, the conferees requested a more complete mathematical analysis of his own admittedly crude calculations on the dynamics of the fusion reaction. In 1947, Teller believed that these calculations might take years. Two years later, however,

the Soviet explosion of an atomic bomb convinced Teller that America’s Cold War adversary was hard at work on its own superbomb. Even when new calculations cast further doubt on his designs, Teller began a vigorous campaign for crash development of the hydrogen bomb, or H-bomb.

The Superbomb

Scientists knew that fusion reactions could be induced by the explosion of an atomic bomb. The basic problem was simple and formidable: How could fusion fuel be heated and compressed long enough to achieve significant thermonuclear burning before the atomic fission explosion blew the assembly apart? A major part of the solution came from Ulam in 1951. He proposed using the energy from an exploding atomic bomb to induce significant thermonuclear reactions in adjacent fusion fuel components. This arrangement, in which the A-bomb (the primary) is physically separated from the fusion fuel (the secondary), became known as the “Teller-Ulam configuration.”
All H-bombs are cylindrical, with an atomic device at one end and the other components filling the remaining space. Energy from the exploding primary could be transported by X rays and would therefore affect the fusion fuel at near light speed—before the arrival of the explosion. Frederick de Hoffman’s work verified and enriched the new concept.
In the revised method, moderated X rays from the primary irradiate a reactive plastic medium surrounding concentric and generally cylindrical layers of fusion and fission fuel in the secondary. Instantly, the plastic becomes a hot plasma that compresses and heats the inner layer of fusion fuel, which in turn compresses a central core of fissile plutonium to supercriticality.
Thus compressed, and bombarded by fusion-produced, high-energy neutrons, the fission element expands rapidly in a chain reaction from the inside out, further compressing and heating the surrounding fusion fuel, releasing more energy and more neutrons that induce fission in a fuel casing-tamper made of normally stable uranium 238.
With its equipment to refrigerate the hydrogen isotopes, the device created to test Teller’s new concept weighed more than sixty tons. During Operation Ivy, it was tested at Elugelab in the Marshall Islands on November 1, 1952. Exceeding the expectations of all concerned and vaporizing the island, the explosion equaled 10.4 million tons of trinitrotoluene (TNT), which meant that it was about seven hundred times more powerful than the atomic bomb dropped on Hiroshima, Japan, in 1945. A version of this device weighing about 20 tons was prepared for delivery by specially modified Air Force B-36 bombers in the event of an emergency during wartime.
In development at Los Alamos before the 1952 test was a device weighing only about 4 tons, a “dry bomb” that did not require refrigeration equipment or liquid fusion fuel; when sufficiently compressed and heated in its molded-powder form, the new fusion fuel component, lithium-6 deuteride, instantly produced tritium, an isotope of hydrogen. This concept was tested during Operation Castle at Bikini atoll in 1954 and produced a yield of 15 million tons of TNT, the largest-ever nuclear explosion created by the United States.

Edward Teller

To call Edward Teller “controversial” is equivalent to saying that the hydrogen bomb is “destructive”—an enormous understatement. His forceful support for nuclear arms prompted some to label him a war criminal, while others consider him one of the most thoughtful statesmen among scientists.
Teller was born into a Jewish family in Budapest, Hungary, in 1908. He left his homeland to flee the anti-Semitic fascist government of the late 1920’s and attended the University of Leipzig in Germany. In 1930 he completed his doctorate and hoped to settle into an academic career there, but he fled Germany when Adolf Hitler came to power. Teller migrated to the United States in 1935 and taught at George Washington University, where with George Gamow he studied aspects of quantum mechanics and nuclear physics. He became a U.S. citizen in 1941.
Teller was among the first physicists to realize the possibility of an atomic (fission) bomb, and he became a central figure in the Manhattan Project that built it during World War II. However, he was already exploring the idea of a “superbomb” that explodes because of a fusion reaction. He helped persuade President Harry Truman to finance a project to build it and continued to influence the politics of nuclear weapons and power afterward. Teller developed the theoretical basis for the hydrogen bomb and its rough design—and so is known as its father. However, controversy later erupted over credit. Mathematician Stanislaw Ulam claimed he contributed key insights and calculations, a claim Teller vehemently denied. Teller, however, did credit a young physicist, Richard L. Garwin, with creating the successful working design for the first bomb.
Fiercely anticommunist, Teller argued for a strong nuclear arsenal to make the Soviet Union afraid of attacking the United States and supported space-based missile defense systems. He served as director of the Lawrence Livermore National Laboratory, professor at the University of California at Berkeley, and senior fellow at the nearby Hoover Institution. In his nineties he outraged environmentalists by suggesting that the atmosphere could be manipulated with technology to offset the effects of global warming.

Consequences

Teller was not alone in believing that the world could produce thermonuclear devices capable of causing great destruction. Months before Fermi suggested to Teller the possibility of explosive thermonuclear reactions on Earth, Japanese physicist Tokutaro Hagiwara had proposed that a uranium 235 bomb could ignite significant fusion reactions in hydrogen. The Soviet Union successfully tested an H-bomb dropped from an airplane in 1955, one year before the United States did so. Teller became the scientific adviser on nuclear affairs to many presidents, from Dwight D. Eisenhower to Ronald Reagan.
The widespread blast and fallout effects of H-bombs assured the mutual destruction of the users of such weapons. During the Cold War (from about 1947 to 1991), both the United States and the Soviet Union possessed H-bombs. “Testing” these bombs made each side aware of how powerful the other side was. Everyone wanted to avoid nuclear war.
It was thought that no one would try to start a war that would end in the world’s destruction. This theory was called deterrence: The United States wanted to let the Soviet Union know that it had just as many bombs as, or more bombs than, the Soviets did, so that the leaders of the Soviet Union would be deterred from starting a war.
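The yield comparisons quoted above can be checked with simple arithmetic. The sketch below (an illustration added here, not part of the original entry; the Hiroshima bomb's yield is taken as roughly 15,000 tons of TNT, a commonly cited estimate) confirms the "about seven hundred times" figure:

```python
# Rough yield comparisons, in tons of TNT equivalent.
# HIROSHIMA_TONS is an assumption (~15 kilotons, commonly cited);
# the two test yields are those given in the text.
HIROSHIMA_TONS = 15_000
IVY_MIKE_TONS = 10_400_000    # "Mike" test, Elugelab, November 1, 1952
CASTLE_TONS = 15_000_000      # Operation Castle test, Bikini atoll, 1954

# Ratio of each thermonuclear test to the Hiroshima fission bomb.
print(round(IVY_MIKE_TONS / HIROSHIMA_TONS))   # 693, i.e. about 700 times
print(round(CASTLE_TONS / HIROSHIMA_TONS))     # 1000
```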

Teller knew that the availability of H-bombs on both sides was not enough to guarantee that such weapons would never be used. It was also necessary to make the Soviet Union aware of the existence of the bombs through testing. He consistently advised against U.S. participation with the Soviet Union in a moratorium (period of waiting) on nuclear weapons testing. Largely because of Teller’s urging that underground testing be continued, the United States rejected a total moratorium in favor of the 1963 Atmospheric Test Ban Treaty.
During the 1980’s, Teller, among others, convinced President Reagan to embrace the Strategic Defense Initiative (SDI). Teller argued that SDI components, such as the space-based “Excalibur,” a nuclear bomb-powered X-ray laser weapon proposed by the Lawrence Livermore National Laboratory, would make thermonuclear war not merely unimaginable but theoretically impossible.

See also Airplane; Atomic bomb; Cruise missile; Rocket; Stealth aircraft; V-2 rocket.

Further Reading

Blumberg, Stanley A., and Louis G. Panos. Edward Teller, Giant of the Golden Age of Physics: A Biography. New York: Scribner’s, 1990.
Clash, James M. “Teller Tells It.” Forbes (May 17, 1999).
Teller, Edward, Wendy Teller, and Wilson Talley. Conversations on the Dark Secrets of Physics. New York: Plenum Press, 1991.
York, Herbert E. The Advisors: Oppenheimer, Teller, and the Superbomb. Stanford, Calif.: Stanford University Press, 1989.


IBM Model 1401 computer

The invention: A relatively small, simple, and inexpensive computer that is often credited with having launched the personal computer age.

The people behind the invention:
Howard H. Aiken (1900-1973), an American mathematician
Charles Babbage (1792-1871), an English mathematician and inventor
Herman Hollerith (1860-1929), an American inventor

Computers: From the Beginning

Computers evolved into their modern form over a period of thousands of years as a result of humanity’s efforts to simplify the process of counting. Two counting devices that are considered to be very simple, early computers are the abacus and the slide rule. These calculating devices are representative of digital and analog computers, respectively, because an abacus counts numbers of things, while the slide rule calculates length measurements.
The first modern computer, which was planned by Charles Babbage in 1833, was never built. It was intended to perform complex calculations with a data processing/memory unit controlled by punched cards. In 1944, Harvard University’s Howard H. Aiken and the International Business Machines (IBM) Corporation built such a computer—the huge, punched-tape-controlled Automatic Sequence Controlled Calculator, or Mark I ASCC, which could perform complex mathematical operations in seconds.
During the next fifteen years, computer advances produced digital computers that used binary arithmetic for calculation, incorporated simplified components that decreased the sizes of computers, had much faster calculating speeds, and were transistorized. Although practical computers had become much faster than they had been only a few years earlier, they were still huge and extremely expensive. In 1959, however, IBM introduced the Model 1401 computer. Smaller, simpler, and much cheaper than the multimillion-dollar computers that were available, the IBM Model 1401 computer was also relatively easy to program and use. Its low cost, simplicity of operation, and very wide use have led many experts to view the IBM Model 1401 computer as beginning the age of the personal computer.

Computer Operation and IBM’s Model 1401

Modern computers are essentially very fast calculating machines that are capable of sorting, comparing, analyzing, and outputting information, as well as storing it for future use. Many sources credit Aiken’s Mark I ASCC as the first modern computer to be built. This huge, five-ton machine used thousands of relays to perform complex mathematical calculations in seconds. Soon after its introduction, other companies produced computers that were faster and more versatile than the Mark I. The computer development race was on.
All these early computers utilized the decimal system for calculations until it was found that binary arithmetic, whose numbers are combinations of the binary digits 1 and 0, was much more suitable for the purpose. The advantage of the binary system is that the electronic switches that make up a computer (tubes, transistors, or chips) can be either on or off; in the binary system, the on state can be represented by the digit 1, the off state by the digit 0. Strung together correctly, binary digits can be inputted rapidly and used for high-speed computations. In fact, the computer term “bit” is a contraction of the phrase “binary digit.”
A computer consists of input and output devices, a storage device (memory), arithmetic and logic units, and a control unit. In most cases, a central processing unit (CPU) combines the logic, arithmetic, memory, and control aspects. Instructions are loaded into the memory via an input device, processed, and stored. Then the CPU issues commands to the other parts of the system to carry out computations or other functions and output the data as needed.
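The binary principle described above can be illustrated with a few lines of modern Python (an illustration added here, not anything that ran on the machines of 1959):

```python
# Each bit (binary digit) is 1 or 0 -- a switch that is on or off.

def to_bits(n: int) -> str:
    """Return the binary representation of a non-negative integer."""
    if n == 0:
        return "0"
    bits = ""
    while n > 0:
        bits = str(n % 2) + bits   # peel off the lowest-order bit
        n //= 2
    return bits

def from_bits(bits: str) -> int:
    """Rebuild the integer: each 1-bit contributes a power of two."""
    value = 0
    for b in bits:
        value = value * 2 + int(b)
    return value

print(to_bits(13))         # 1101
print(from_bits("1101"))   # 13
```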
Most output is printed as hard copy or displayed on cathode-ray tube monitors, or screens. The early modern computers—such as the Mark I ASCC—were huge because their information circuits were large relays or tubes. Computers became smaller and smaller as the tubes were replaced—

first with transistors, then with simple integrated circuits, and then with silicon chips. Each technological changeover also produced more powerful, more cost-effective computers.
In the 1950’s, with reliable transistors available, IBM began the development of two types of computers that were completed by about 1959. The larger version was the Stretch computer, which was advertised as the most powerful computer of its day. Customized for each individual purchaser (for example, the Atomic Energy Commission), a Stretch computer cost $10 million or more. Some innovations in Stretch computers included semiconductor circuits, new switching systems that quickly converted various kinds of data into one language understood by the CPU, rapid data readers, and devices that seemed to anticipate future operations.

Consequences

The IBM Model 1401 was the first computer sold in very large numbers. It led IBM and other companies to seek to develop less expensive, more versatile, smaller computers that could be sold to small businesses and to individuals. Six years after the development of the Model 1401, other IBM models—and those made by other companies—became available that were more compact and had larger memories.
The search for compactness and versatility continued. A major development was the invention of integrated circuits by Jack S. Kilby of Texas Instruments; these integrated circuits became available by the mid-1960’s. They were followed by even smaller “microprocessors” (computer chips) that became available in the 1970’s. Computers continued to become smaller and more powerful. Input and storage devices also decreased rapidly in size. At first, the punched cards invented by Herman Hollerith, founder of the Tabulating Machine Company (which later became IBM), were read by bulky readers. In time, less bulky magnetic tapes and more compact readers were developed, after which magnetic disks and compact disc drives were introduced.
Many other advances have been made. Modern computers can talk, create art and graphics, compose music, play games, and operate robots. Further advancement is expected as societal needs

change. Many experts believe that it was the sale of large numbers of IBM Model 1401 computers that began the trend.

See also Apple II computer; BINAC computer; Colossus computer; ENIAC computer; Personal computer; Supercomputer; UNIVAC computer.

Further Reading

Carroll, Paul. Big Blues: The Unmaking of IBM. New York: Crown, 1993.
Chposky, James, and Ted Leonsis. Blue Magic: The People, Power, and Politics Behind the IBM Personal Computer. New York: Facts on File, 1988.
Manes, Stephen, and Paul Andrews. Gates: How Microsoft’s Mogul Reinvented an Industry. New York: Doubleday, 1993.


In vitro plant culture

The invention: Method for propagating plants in artificial media that has revolutionized agriculture.

The people behind the invention:
Georges Michel Morel (1916-1973), a French physiologist
Philip Cleaver White (1913- ), an American chemist

Plant Tissue Grows “In Glass”

In the mid-1800’s, biologists began pondering whether a cell isolated from a multicellular organism could live separately if it were provided with the proper environment. In 1902, with this question in mind, the German plant physiologist Gottlieb Haberlandt attempted to culture (grow) isolated plant cells under sterile conditions on an artificial growth medium. Although his cultured cells never underwent cell division under these “in vitro” (in glass) conditions, Haberlandt is credited with originating the concept of cell culture.
Subsequently, scientists attempted to culture plant tissues and organs rather than individual cells and tried to determine the medium components necessary for the growth of plant tissue in vitro. In 1934, Philip White grew the first organ culture, using tomato roots. The discovery of plant hormones, which are compounds that regulate growth and development, was crucial to the successful culture of plant tissues; in 1939, Roger Gautheret, P. Nobécourt, and White independently reported the successful culture of plant callus tissue. “Callus” is an irregular mass of dividing cells that often results from the wounding of plant tissue. Plant scientists were fascinated by the perpetual growth of such tissue in culture and spent years establishing optimal growth conditions and exploring the nutritional and hormonal requirements of plant tissue.

Plants by the Millions

A lull in botanical research occurred during World War II, but immediately afterward there was a resurgence of interest in applying tissue culture techniques to plant research. Georges Morel, a

plant physiologist at the National Institute for Agronomic Research in France, was one of many scientists during this time who had become interested in the formation of tumors in plants as well as in studying various pathogens such as fungi and viruses that cause plant disease. To further these studies, Morel adapted existing techniques in order to grow tissue from a wider variety of plant types in culture, and he continued to try to identify factors that affected the normal growth and development of plants.
Morel was successful in culturing tissue from ferns and was the first to culture monocot plants. Monocots have certain features that distinguish them from the other classes of seed-bearing plants, especially with respect to seed structure. More important, the monocots include the economically important species of grasses (the major plants of range and pasture) and cereals.
For these cultures, Morel utilized a small piece of the growing tip of a plant shoot (the shoot apex) as the starting tissue material. This tissue was placed in a glass tube, supplied with a medium containing specific nutrients, vitamins, and plant hormones, and allowed to grow in the light. Under these conditions, the apex tissue grew roots and buds and eventually developed into a complete plant. Morel was able to generate whole plants from pieces of the shoot apex that were only 100 to 250 micrometers in length.
Morel also investigated the growth of parasites such as fungi and viruses in dual culture with host-plant tissue. Using results from these studies and culture techniques that he had mastered, Morel and his colleague Claude Martin regenerated virus-free plants from tissue that had been taken from virally infected plants.
Tissues from certain tropical species, dahlias, and potato plants were used for the original experiments, but after Morel adapted the methods for the generation of virus-free orchids, plants that had previously been difficult to propagate by any means, the true significance of his work was recognized. Morel was the first to recognize the potential of the in vitro culture methods for the mass propagation of plants. He estimated that several million plants could be obtained in one year from a single small piece of shoot-apex tissue. Plants generated in this manner were clonal (genetically identical organisms prepared from a single plant).

In vitro plant culture has been especially useful for species such as palm trees that cannot be propagated by other methods, such as by sowing seeds or grafting. (PhotoDisc)

With other methods of plant propagation, there is often great variation in the traits of the plants produced, but as a result of Morel’s ideas, breeders could select for some desirable trait in a particular plant and then produce multiple clonal plants, all of which expressed the desired trait. The methodology also allowed for the production of virus-free plant material, which minimized both the spread of potential pathogens during shipping and losses caused by disease.

Consequences

Variations on Morel’s methods are used to propagate plants used for human food consumption; plants that are sources of fiber, oil, and livestock feed; forest trees; and plants used in landscaping and in the floral industry. In vitro stocks are preserved under deep-freeze conditions, and disease-free plants can be proliferated quickly at any time of the year after shipping or storage.
The in vitro multiplication of plants has been especially useful for species such as coconut and certain palms that cannot be propagated by other methods, such as by sowing seeds or grafting, and has also become important in the preservation and propagation of

rare plant species that might otherwise have become extinct. Many of these plants are sources of pharmaceuticals, oils, fragrances, and other valuable products.
The capability of regenerating plants from tissue culture has also been crucial in basic scientific research. Plant cells grown in culture can be studied more easily than intact plants, and scientists have gained an in-depth understanding of plant physiology and biochemistry by using this method. This information and the methods of Morel and others have made possible the genetic engineering and propagation of crop plants that are resistant to disease or to disastrous environmental conditions such as drought and freezing. In vitro techniques have truly revolutionized agriculture.

See also Artificial insemination; Cloning; Genetically engineered insulin; Rice and wheat strains.

Further Reading

Arbury, Jim, Richard Bird, Mike Honour, Clive Innes, and Mike Salmon. The Complete Book of Plant Propagation. Newtown, Conn.: Taunton Press, 1997.
Clarke, Graham. The Complete Book of Plant Propagation. London: Seven Dials, 2001.
Hartmann, Hudson T. Plant Propagation: Principles and Practices. 6th ed. London: Prentice-Hall, 1997.
Heuser, Charles. The Complete Book of Plant Propagation. Newtown, Conn.: Taunton Press, 1997.


Infrared photography

The invention: The first application of color to infrared photography, which performs tasks not possible for ordinary photography.

The person behind the invention:
Sir William Herschel (1738-1822), a pioneering English astronomer

Invisible Light

Photography developed rapidly in the nineteenth century when it became possible to record the colors and shades of visible light on sensitive materials. Visible light is a form of radiation that consists of electromagnetic waves, which also make up other forms of radiation such as X rays and radio waves. Visible light occupies the range of wavelengths from about 400 nanometers (1 nanometer is 1 billionth of a meter) to about 700 nanometers in the electromagnetic spectrum. Infrared radiation occupies the range from about 700 nanometers to about 1,350 nanometers. Infrared rays cannot be seen by the human eye, but they behave in the same way that rays of visible light behave; they can be reflected, diffracted (broken up), and refracted (bent).
Sir William Herschel, a British astronomer, discovered infrared rays in 1800 by measuring the temperature of the heat that they produced. The term “infrared,” which was probably first used about 1800, indicates rays with wavelengths longer than those at the red (long-wavelength) end of the visible spectrum but shorter than those of microwaves, which appear higher on the electromagnetic spectrum. Infrared film is therefore sensitive to infrared radiation that the human eye cannot see or record.
Dyes that were sensitive to infrared radiation were discovered early in the twentieth century, but they were not widely used until the 1930’s. Because these dyes produced only black-and-white images, their usefulness to artists and researchers was limited. After 1930, however, a tidal wave of infrared photographic applications appeared.
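The wavelength ranges given above lend themselves to a short illustration (a sketch in modern Python added here for clarity; the category labels are informal, not standard photographic terminology):

```python
# Classify a wavelength (in nanometers) using the approximate ranges
# quoted in the text: visible light ~400-700 nm, and film-recordable
# infrared ~700-1,350 nm.

def classify(wavelength_nm: float) -> str:
    if wavelength_nm < 400:
        return "shorter than visible (e.g., ultraviolet)"
    if wavelength_nm <= 700:
        return "visible"
    if wavelength_nm <= 1350:
        return "infrared (photographically recordable)"
    return "longer infrared (recorded as heat, not on film)"

print(classify(550))   # visible
print(classify(900))   # infrared (photographically recordable)
```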

The Development of Color-Sensitive Infrared Film

In the early 1940’s, military intelligence used infrared viewers for night operations and for gathering information about the enemy. One device commonly used for such purposes was called a “snooperscope.” Aerial photography with black-and-white infrared film was used to locate enemy hiding places and equipment. The images that were produced, however, often lacked clear definition.
The development in 1942 of the first color-sensitive infrared film, Ektachrome Aero Film, became possible when researchers at the Eastman Kodak Company’s laboratories solved some complex chemical and physical problems that had hampered the development of color infrared film up to that point. Regular color film is sensitive to all visible colors of the spectrum; infrared color film is sensitive to violet, blue, and red light as well as to infrared radiation. Typical color film has three layers of emulsion, sensitized to blue, green, and red. Infrared color film, however, has its three emulsion layers sensitized to green, red, and infrared. Infrared wavelengths are recorded as reds of varying densities, depending on the intensity of the infrared radiation: the more infrared radiation there is, the darker the red that is recorded.
In infrared photography, a filter is placed over the camera lens to block the unwanted rays of visible light. The filter blocks visible and ultraviolet rays but allows infrared radiation to pass. All three layers of infrared film are sensitive to blue, so a yellow filter is used; all blue radiation is absorbed by this filter.
In regular photography, color film consists of three basic layers: the top layer is sensitive to blue light, the middle layer to green, and the third layer to red. Exposing the film to light causes a latent image to be formed in the silver halide crystals that make up each of the three layers.
In infrared photography, color film consists of a top layer that is sensitive to infrared radiation, a middle layer sensitive to green, and a bottom layer sensitive to red. “Reversal processing” produces blue in the infrared-sensitive layer, yellow in the green-sensitive layer, and magenta in the red-sensitive layer. The blue, yellow, and magenta layers of the film produce the “false colors” that accentuate the various levels of infrared radiation shown as red in a color transparency, slide, or print.
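The layer-by-layer dye scheme just described can be summarized as a small lookup table (an illustrative sketch added here, not Kodak's actual processing chemistry):

```python
# Emulsion layers, top to bottom, as described in the text.
REGULAR_FILM_LAYERS = ["blue", "green", "red"]
INFRARED_FILM_LAYERS = ["infrared", "green", "red"]

# Dye produced in each layer of infrared color film by reversal
# processing, per the description above.
REVERSAL_DYE = {
    "infrared": "blue",   # infrared-sensitive layer yields blue dye
    "green": "yellow",    # green-sensitive layer yields yellow dye
    "red": "magenta",     # red-sensitive layer yields magenta dye
}

# The non-complementary pairing is what produces the "false colors."
for layer in INFRARED_FILM_LAYERS:
    print(f"{layer}-sensitive layer -> {REVERSAL_DYE[layer]} dye")
```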

Sir William Herschel

During his long career Sir William Herschel passed from human music to the music of the spheres, and in doing so revealed the invisible unlike any astronomer before him.
He was born Friedrich Wilhelm Herschel in Hannover, Germany, in 1738. Like his brothers, he trained to be a musician in a local regimental band. In 1757 he had to flee to England because his regiment was on the losing side of a war. Settling in the town of Bath, he supported himself with music, eventually becoming the organist for the city’s celebrated Octagon Chapel. He studied the music theory in Robert Smith’s book on harmonics and, discovering another book by Smith about optics and astronomy, read that too. He was immediately hooked.
By 1773 he was assembling his own telescopes, and within ten years he had built the most powerful instruments in the land. He interested King George III in astronomy and was rewarded with a royal pension that gave him the leisure to survey the heavens. Herschel looked deeper into space than anyone before him. He discovered thousands of double stars and nebulae that had been invisible to astronomers with less powerful telescopes than his. He was the first person in recorded history to discover a planet—Uranus.
While trying to learn the construction of the sun, he conducted hundreds of experiments with light. He found, unexpectedly, that he could feel heat from the sun even when visible light was filtered out, and concluded that some solar radiation—in this case infrared—was invisible to human eyes.
Late in his career Herschel addressed the grandest of all invisible aspects of nature: the structure of the universe. His investigations led him to conclude that the nebulae he had so often observed were in themselves vast clouds of stars, very far away—they were galaxies. It was a key conceptual step in the development of modern cosmology. By the time Herschel died in 1822, he had trained his sister Caroline and his son John to carry on his work. Both became celebrated astronomers in their own right.

(Library of Congress)

The color of the dye that is formed in a particular layer bears no relationship to the color of light to which the layer is sensitive. If the relationship is not complementary, the resulting colors will be false. This means that objects whose colors appear similar to the human eye will not necessarily be recorded as similar colors on infrared film. A red rose with healthy green leaves will appear on infrared color film as yellow with red leaves, because the chlorophyll contained in the leaves reflects infrared radiation and causes the green leaves to be recorded as red. Infrared radiation from about 700 nanometers to about 900 nanometers on the electromagnetic spectrum can be recorded by infrared color film. Above 900 nanometers, infrared radiation exists as heat patterns that must be recorded by nonphotographic means.

Impact

Infrared photography has proved to be valuable in many of the sciences and the arts. It has been used to create artistic images that are often unexpected visual explosions of everyday views. Because infrared radiation penetrates haze easily, infrared films are often used in mapping areas or determining vegetation types. Many cloud-covered tropical areas would be impossible to map without infrared photography. False-color infrared film can differentiate between healthy and unhealthy plants, so it is widely used to study insect and disease problems in plants.
Medical research uses infrared photography to trace blood flow, detect and monitor tumor growth, and study many other physiological functions that are invisible to the human eye. Some forms of cancer can be detected by infrared analysis before any other tests can perceive them. Infrared film is used in criminology to photograph illegal activities in the dark and to study evidence at crime scenes. Powder burns around a bullet hole, which are often invisible to the eye, show clearly on infrared film.
In addition, forgeries in documents and works of art can often be seen clearly when photographed on infrared film. Archaeologists have used infrared film to locate ancient sites that are invisible in daylight. Wildlife biologists also document the behavior of animals at night with infrared equipment.
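The false-color behavior described at the start of this entry can be sketched as a simple band shift. The band-to-dye assignments below are the conventional mapping used by false-color infrared film (an assumption for illustration; the text above does not spell them out): infrared renders as red, red as green, and green as blue.

```python
# Conventional false-color infrared mapping (assumed, for illustration):
# each band an object reflects is rendered one "slot" down the spectrum.
FALSE_COLOR = {"infrared": "red", "red": "green", "green": "blue"}

def rendered_bands(reflected: list[str]) -> list[str]:
    """Map the bands an object reflects strongly to the dye colors the
    film produces for it."""
    return [FALSE_COLOR[band] for band in reflected]

# A red rose petal reflects red light plus some infrared; red and green
# dyes combine to look yellow, matching the yellow rose described above.
print(rendered_bands(["red", "infrared"]))   # ['green', 'red']

# Healthy leaves reflect infrared strongly (chlorophyll), so they render red.
print(rendered_bands(["infrared"]))          # ['red']
```
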


See also Autochrome plate; Color film; Fax machine; Instant photography.

Further Reading
Collins, Douglas. The Story of Kodak. New York: Harry N. Abrams, 1990.
Cummins, Richard. “Infrared Revisited.” Petersen’s Photographic Magazine 23 (February, 1995).
Paduano, Joseph. The Art of Infrared Photography. 4th ed. Buffalo, N.Y.: Amherst Media, 1998.
Richards, Dan. “The Strange Otherworld of Infrared.” Popular Photography 62, no. 6 (June, 1998).
White, Laurie. Infrared Photography Handbook. Amherst, N.Y.: Amherst Media, 1995.


Instant photography

The invention: Popularly known by its Polaroid trade name, a camera capable of producing finished photographs immediately after its film was exposed.

The people behind the invention:
Edwin Herbert Land (1909-1991), an American physicist and chemist
Howard G. Rogers (1915-    ), a senior researcher at Polaroid and Land’s collaborator
William J. McCune (1915-    ), an engineer and head of the Polaroid team
Ansel Adams (1902-1984), an American photographer and Land’s technical consultant

The Daughter of Invention

Because he was a chemist and physicist interested primarily in research relating to light and vision, and to the materials that affect them, it was perhaps inevitable that Edwin Herbert Land should be drawn into the field of photography. Land founded the Polaroid Corporation in 1937. During the summer of 1943, while Land and his wife were vacationing in Santa Fe, New Mexico, with their three-year-old daughter, Land stopped to take a picture of the child. After the picture was taken, his daughter asked to see it. When she was told she could not see the picture immediately, she asked how long it would be. Within an hour of his daughter’s question, Land had conceived a preliminary plan for the camera, the film, and the physical chemistry of what would become the instant camera. Such a device would, he hoped, produce a picture immediately after exposure.

Within six months, Land had solved most of the essential problems of the instant photography system. He and a small group of associates at Polaroid worked on the project in secret. Howard G. Rogers was Land’s collaborator in the laboratory. Land conferred responsibility for the engineering and mechanical phase of the project on William J. McCune, who led the team that eventually designed the original camera and the machinery that produced both the camera and Land’s new film.

The first Polaroid Land camera—the Model 95—produced photographs measuring 8.25 by 10.8 centimeters; there were eight pictures to a roll. Rather than being black-and-white, the original Polaroid prints were sepia-toned (producing a warm, reddish-brown color). The reasons for the sepia coloration were chemical rather than aesthetic; as soon as Land’s researchers could devise a workable formula for sharp black-and-white prints (about ten months after the camera was introduced commercially), they replaced the sepia film.

A Sophisticated Chemical Reaction

Although the mechanical process involved in the first demonstration camera was relatively simple, it was merely the means by which a highly sophisticated chemical reaction—the diffusion transfer process—was produced. In the basic diffusion transfer process, when an exposed negative image is developed, the undeveloped portion corresponds to the opposite aspect of the image, the positive. Almost all self-processing instant photography materials operate according to three phases—negative development, diffusion transfer, and positive development. These occur simultaneously, so that positive image formation begins instantly. With black-and-white materials, the positive was originally completed in about sixty seconds; with color materials (introduced later), the process took somewhat longer.

The basic phenomenon of silver in solution diffusing from one emulsion to another was first observed in the 1850’s, but no practical use was made of it until 1939. The photographic use of diffusion transfer for producing normal continuous-tone images was investigated actively from the early 1940’s by Land and his associates. The instant camera using this method was demonstrated in 1947 and marketed in 1948. The fundamentals of photographic diffusion transfer are simplest in a black-and-white peel-apart film.
The negative sheet is exposed in the camera in the normal way. It is then pulled out of the camera, or film pack holder, by a paper tab. Next, it passes through a set of rollers, which press it face-to-face with a sheet of receiving material included in the film pack. Simultaneously, the rollers rupture a pod of reagent chemicals, which are spread evenly between the two layers. The reagent contains a strong alkali and a silver halide solvent, both of which diffuse into the negative emulsion. There the alkali activates the developing agent, which immediately reduces the exposed halides to a negative image. At the same time, the solvent dissolves the unexposed halides. The silver in the dissolved halides forms the positive image.

Edwin H. Land

Born in Bridgeport, Connecticut, in 1909, Edwin Herbert Land developed an obsession with color vision. As a boy, he slept with a copy of an optics textbook under his pillow. When he went to Harvard to study physics, he found the instruction too elementary and spent much of his time educating himself at the New York Public Library. While there, he thought of the first of his many sight-related inventions. He realized that by lining up tiny crystals and embedding them in clear plastic he could make a large, inexpensive light polarizer. He patented the idea for this “Polaroid” lens in 1929 (the first of more than five hundred patents) and in 1932 set up a commercial laboratory with his Harvard physics professor, George Wheelwright III. Five years later he opened the Polaroid Corporation in Boston to exploit the commercial potential of the lenses. They were to be used most famously as sunglasses, camera filters, eyeglasses for producing three-dimensional effects in movies, and glare-reduction screens for visual display terminals.

In 1937, with Joseph Mallory, Land invented the vectograph—a device that superimposed two photographs in order to create a three-dimensional image. The invention dramatically improved aerial photography during World War II and the Cold War. In fact, Land had a hand in designing both the camera carried aboard Lockheed’s U-2 spy plane and the plane itself.

While not busy running the Polaroid Corporation and overseeing development of its cameras, Land pursued his passion for experimenting with color and developed a widely respected theory of color vision. When he retired in 1982, he launched the Rowland Institute for Science in Boston, once described as a cross between a private laboratory and a private art gallery. (Land had a deep interest in modern art.) He and other scientists there conducted research on artificial intelligence, genetics, microscopy, holography, protein dynamics, and color vision. Land died in 1991 in Cambridge, Massachusetts, but the institute carries forward his legacy of scientific curiosity and practical application.

Impact

The Polaroid Land camera had a tremendous impact on the photographic industry as well as on amateur and professional photographers. Ansel Adams, who was known for his monumental, ultrasharp black-and-white panoramas of the American West, suggested to Land ways in which the tonal value of Polaroid film could be enhanced, as well as new applications for Polaroid photographic technology.

Soon after it was introduced, Polaroid photography became part of the American way of life and changed the face of amateur photography forever. By the 1950’s, Americans had become accustomed to a world of recorded visual information through films, magazines, and newspapers; they had also become enthusiastic picture-takers as a result of the growing trend toward simpler and more convenient cameras. By allowing these photographers not only to record their perceptions but also to see the results almost immediately, Polaroid brought people closer to the creative process.

See also Autochrome plate; Brownie camera; Color film; Fax machine; Xerography.

Further Reading
Adams, Ansel. Polaroid Land Photography Manual. New York: Morgan & Morgan, 1963.
Innovation/Imagination: Fifty Years of Polaroid Photography. New York: H. N. Abrams in association with the Friends of Photography, 1999.
McElheny, Victor K. Insisting on the Impossible: The Life of Edwin Land. Cambridge, Mass.: Perseus Books, 1998.
Olshaker, Mark. The Instant Image. New York: Stein & Day, 1978.
Wensberg, Peter C. Land’s Polaroid. Boston: Houghton Mifflin, 1987.


Interchangeable parts

The invention: A key idea of the late Industrial Revolution, the interchangeability of parts made possible the mass production of identical products.

The people behind the invention:
Henry M. Leland (1843-1932), president of Cadillac Motor Car Company in 1908, known as a master of precision
Frederick Bennett, the British agent for Cadillac Motor Car Company, who convinced the Royal Automobile Club to run the standardization test at Brooklands, England
Henry Ford (1863-1947), founder of Ford Motor Company, who introduced the moving assembly line into the automobile industry in 1913

An American Idea

Mass production is a twentieth century methodology that is, for the most part, a result of nineteenth century ideas. Although its origins were mostly American, it is a phenomenon that has changed the entire world. The use of interchangeable parts, the feasibility of which was demonstrated by the Cadillac Motor Car Company in 1908, was instrumental in making mass production possible.

The British phase of the Industrial Revolution saw the application of division of labor, the first principle of industrialization, to capitalist-directed manufacturing processes. Centralized power sources were connected through shafts, pulleys, and belts to machines housed in factories. Even after these dramatic changes, the British preferred to produce unique, handcrafted products formed one step at a time using general-purpose machine tools. Seldom did they make separate components to be assembled into standardized products.

Stories about American products assembled from fully interchangeable parts began to reach Great Britain. In 1851, the British public saw a few of these products on display at an exhibition in London’s Crystal Palace. In 1854, they were informed by one of their own investigative commissions that American manufacturers were


building military weapons and a number of consumer products with separately made parts that could be easily assembled, with little filing and fitting, by semiskilled workers. English industrialists had probably heard as much as they ever wanted to about this so-called “American system of manufacturing” by the first decade of the twentieth century, when word came that American companies were building automobiles with parts manufactured so precisely that they were interchangeable.

The Cadillac

During the fall of 1907, Frederick Bennett, an Englishman who served as the British agent for the Cadillac Motor Car Company, paid a visit to the company’s Detroit, Michigan, factory and was amazed at what he saw. He later described the assembling of the relatively inexpensive Cadillac vehicles as a demonstration of the beauty and practicality of precision. He was convinced that if his countrymen could see what he had seen, they would also be impressed.

Most automobile builders at the time claimed that their vehicles were built with handcrafted quality, yet at the same time they advertised that they could supply repair parts that would fit perfectly. In actuality, machining and filing were almost always required when parts were replaced, and only shops with proper equipment could do the job.

Upon his return to London, Bennett convinced the Royal Automobile Club to sponsor a test of the precision of automobile parts. A standardization test was set to begin on February 29, 1908, and all of the companies then selling automobiles were invited to participate. Only the company that Bennett represented, Cadillac, was willing to enter the contest. Three one-cylinder Cadillacs, each painted a different color, were taken from stock at the company’s warehouse in London to a garage near the Brooklands race track. The cars were first driven around the track ten times to prove that they were operable.
British mechanics then dismantled the vehicles, placing their parts in piles in the center of the garage, making sure that there was no way of identifying from which car each internal piece came. Then, as a further test, eighty-nine randomly selected parts were removed from the piles


and replaced with new ones straight from Cadillac’s storeroom in London. The mechanics then proceeded to reassemble the automobiles, using only screwdrivers and wrenches. After the reconstruction, which took two weeks, the cars were driven from the garage. They were a motley-looking trio, with fenders, doors, hoods, and wheels of mixed colors. All three were then driven five hundred miles around the Brooklands track.

The British were amazed. Cadillac was awarded the club’s prestigious Dewar Trophy, considered in the young automobile industry to be almost the equivalent of a Nobel Prize. A number of European and American automobile manufacturers began to consider the promise of interchangeable parts and the assembly line system.

Henry M. Leland

Cadillac’s precision-built automobiles were the result of a lifetime of experience of Henry M. Leland, an American engineer. Known in Detroit at the turn of the century as a master of precision, Leland became the primary connection between a series of nineteenth century attempts to make interchangeable parts and the large-scale use of precision parts in mass production manufacturing during the twentieth century.

The first American use of truly interchangeable parts had occurred in the military, nearly three-quarters of a century before the test at Brooklands. Thomas Jefferson had written from France about a demonstration of uniform parts for musket locks in 1785. A few years later, Eli Whitney attempted to make muskets for the American military by producing separate parts for assembly using specialized machines. He was never able to produce the precision necessary for truly interchangeable parts, but he promoted the idea intensely. It was in 1822 at the Harpers Ferry Armory in Virginia, and then a few years later at the Springfield Armory in Massachusetts, that the necessary accuracy in machining was finally achieved on a relatively large scale.
Leland began his career at the Springfield Armory in 1863, at the age of nineteen. He worked as a tool builder during the Civil War years and soon became an advocate of precision manufacturing. In 1890, Leland moved to Detroit, where he began a firm, Leland &


Henry Martyn Leland

Henry Martyn Leland (1843-1932) is the unsung giant of early automobile manufacturing: he launched two of the best-known American car companies, Cadillac and Lincoln, influenced the success of General Motors, and introduced the use of interchangeable parts. Had he allowed a model to be named after him, as Henry Ford and Ransom Olds did, he might have become a household name, too, but he refused any such suggestion.

Leland worked in factories during his youth. During the Civil War he honed his skills as a machinist at the U.S. Armory in Springfield, Massachusetts, helping build rifles with interchangeable parts. After the war, he learned how to machine parts to within one-thousandth of an inch, fabricated the first mechanical barber’s clippers, and refined the workings of air brakes for locomotives.

This was all warm-up. In 1890 he moved to Detroit and opened his own business, Leland and Faulconer Manufacturing Company, specializing in automobile engines. The 10.25-horsepower engine he built for Olds in 1901 was rejected, but the single-cylinder (“one-lunger”) design that powered the first Cadillacs set him on the high road in the automotive industry.

More innovations followed. He developed the electric starter, electric lights, and dimmable headlights. During World War I he built airplane engines for the U.S. government, and afterward converted the design for use in his new creation, the Lincoln. Throughout, he demanded precision from himself and from those working for him. Once, for example, he complained to Alfred P. Sloan that a lot of ball bearings Sloan had sold him varied from the required engineering tolerances and showed Sloan a few misshapen bearings to prove the claim. “Even though you make thousands,” Leland admonished Sloan, “the first and last should be precisely the same.” Sloan took the lesson very seriously.
When he later led General Motors to the top of the industry, he credited Leland with teaching him what mass production was all about.

Faulconer, that would become internationally known for precision machining. His company did well supplying parts to the bicycle industry and internal combustion engines and transmissions to early


automobile makers. In 1899, Leland & Faulconer became the primary supplier of engines to the first of the major automobile producers, the Olds Motor Works.

In 1902, the directors of another Detroit firm, the Henry Ford Company, found themselves in a desperate situation. Henry Ford, the company founder and chief engineer, had resigned after a disagreement with the firm’s key owner, William Murphy. Leland was asked to take over the reorganization of the company. Because it could no longer use Ford’s name, the business was renamed in memory of the French explorer who had founded Detroit two hundred years earlier, Antoine de la Mothe Cadillac. Leland was appointed president of the Cadillac Motor Car Company.

The company, under his influence, soon became known for its precision manufacturing. He disciplined its suppliers, rejecting anything that did not meet his specifications, and insisted on precision machining for all parts. By 1906, Cadillac was outselling all of its competitors, including Oldsmobile and Ford’s new venture, the Ford Motor Company. After the Brooklands demonstration in 1908, Cadillac became recognized worldwide for quality and interchangeability at a reasonable price.

Impact

The Brooklands demonstration went a long way toward proving that mass-produced goods could be durable and of relatively high quality. It showed that standardized products, although often less costly to make, were not necessarily cheap substitutes for handcrafted and painstakingly fitted products. It also demonstrated that, through the use of interchangeable parts, the job of repairing such complex machines as automobiles could be made comparatively simple, moving maintenance and repair work from the well-equipped machine shop to the neighborhood garage or even to the home. Because of the international publicity Cadillac received, Leland’s methods began to be emulated by others in the automobile industry.
His precision manufacturing, as his daughter-in-law would later write in his biography, “laid the foundation for the future American [automobile] industry.” The successes of automobile manufacturers quickly led to the introduction of mass production methods, and


strategies designed to promote their necessary corollary, mass consumption, in many other American businesses.

In 1909, Cadillac was acquired by William Crapo Durant as the flagship company of his new holding company, which he labeled General Motors. Leland continued to improve his production methods, while also influencing his colleagues in the other General Motors companies to implement many of his techniques. By the mid-1920’s, General Motors had become the world’s largest manufacturer of automobiles. Much of its success resulted from extensions of Leland’s ideas. The company began offering a number of brand-name vehicles in a variety of price ranges for marketing purposes, while still keeping the costs of production down by including in each design a large number of commonly used, highly standardized components.

Henry Leland resigned from Cadillac during World War I after trying to convince Durant that General Motors should play an important part in the war effort by contracting to build Liberty aircraft engines for the military. He formed his own firm, named after his favorite president, Abraham Lincoln, and went on to build about four thousand aircraft engines in 1917 and 1918. In 1919, ready to make automobiles again, Leland converted the Lincoln Motor Company into a car manufacturer. Again he influenced the industry by setting high standards for precision, but in 1921 an economic recession forced his new venture into receivership. Ironically, Lincoln was purchased at auction by Henry Ford. Leland retired, his name overshadowed by those of individuals to whom he had taught the importance of precision and interchangeable parts. Ford, as one example, went on to become one of America’s industrial legends by applying the standardized parts concept.

Ford and the Assembly Line

In 1913, Henry Ford, relying on the ease of fit made possible through the use of machined and stamped interchangeable parts, introduced the moving assembly line to the automobile industry.
He had begun production of the Model T in 1908 using stationary assembly methods, bringing parts to assemblers. After having learned how to increase component production significantly through experiments with interchangeable parts and moving assembly methods in the magneto department, he began to apply the same concept to final assembly. In the spring of 1913, Ford workers began dragging car frames past stockpiles of parts for assembly. Soon a power source was attached to the cars through a chain drive, and the vehicles were pulled past the stockpiles at a constant rate. From this time on, the pace of the tasks performed by assemblers would be controlled by the rhythm of the moving line. As demand for the Model T increased, the number of employees along the line was increased, and the jobs were broken into smaller and simpler tasks.

With stationary assembly methods, the time required to assemble a Model T had averaged twelve and one-half person-hours. Dragging the chassis past the parts cut the time to six hours per vehicle, and the power-driven, constant-rate line produced a Model T with only ninety-three minutes of labor time. Because of these amazing increases in productivity, Ford was able to lower the selling price of the basic model from $900 in 1910 to $260 in 1925. He had revolutionized automobile manufacturing: The average family could now afford an automobile.

Soon the average family would also be able to afford many of the other new products they had seen in magazines and newspapers. At the turn of the century, there were many new household appliances, farm machines, ready-made fashions, and prepackaged food products on the market, but only the wealthier class could afford most of these items. Major consumer goods retailers such as Sears, Roebuck and Company, Montgomery Ward, and the Great Atlantic and Pacific Tea Company were anxious to find lower-priced versions of these products to sell to a growing middle-class constituency. The methods of mass production that Henry Ford had popularized seemed to carry promise for these products as well.
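The productivity figures above work out as follows. This is a worked-arithmetic sketch: the labor times and prices are taken from the text, while the ratios and percentage are computed here for illustration.

```python
# Labor time per Model T, in person-minutes (figures from the text).
stationary = 12.5 * 60   # stationary assembly: 12.5 person-hours
dragged = 6 * 60         # chassis dragged past parts stockpiles: 6 hours
moving_line = 93         # power-driven, constant-rate line: 93 minutes

print(f"dragging vs. stationary: {stationary / dragged:.1f}x faster")
print(f"moving line vs. stationary: {stationary / moving_line:.1f}x faster")

# Price of the basic model (from the text).
price_1910, price_1925 = 900, 260
print(f"price cut, 1910 to 1925: {100 * (price_1910 - price_1925) / price_1910:.0f}%")
```

The moving line thus cut labor time roughly eightfold, which is what made the 71 percent price reduction possible.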
During the 1920’s, by working with such key manufacturers as Whirlpool, Hoover, General Electric, and Westinghouse, these large distributors helped introduce mass production methods into a large number of consumer product industries. They changed class markets into mass markets.

The movement toward precision also led to the birth of a separate industry based on the manufacture of machine tools. A general-purpose lathe, milling machine, or grinder could be used for a number of operations, but mass production industries called for narrow-purpose machines designed for high-speed use in performing one specialized step in the production process. Many more machines were now required, one at each step in the production process. Each machine had to be simpler to operate, with more automatic features, because of an increased dependence on unskilled workers. The machine tool industry became the foundation of modern production.

The miracle of mass production that followed, in products as diverse as airplanes, communication systems, and hamburgers, would not have been possible without the precision insisted upon by Henry Leland in the first decade of the twentieth century. It would not have come about without the lessons learned by Henry Ford in the use of specialized machines and assembly methods, and it would not have occurred without the growth of the machine tool industry. Cadillac’s demonstration at Brooklands in 1908 proved the practicality of precision manufacturing and interchangeable parts to the world. It inspired American manufacturers to continue to develop these ideas; it convinced Europeans that such production was possible; and, for better or for worse, it played a major part in changing the world.

See also Assembly line; CAD/CAM; Internal combustion engine.

Further Reading
Hill, Frank Ernest. The Automobile: How It Came, Grew, and Has Changed Our Lives. New York: Dodd, Mead, 1967.
Hounshell, David A. From the American System to Mass Production, 1800-1932. Baltimore: Johns Hopkins University Press, 1984.
Leland, Ottilie M., and Minnie Dubbs Millbrook. Master of Precision: Henry M. Leland. 1966. Reprint. Detroit: Wayne State University Press, 1996.
Marcus, Alan I., and Howard P. Segal. Technology in America: A Brief History. Fort Worth, Tex.: Harcourt Brace College, 1999.
Nevins, Allan, and Frank Ernest Hill. The Times, the Man, the Company. Vol. 1 in Ford. New York: Charles Scribner’s Sons, 1954.


Internal combustion engine

The invention: The most common type of engine in automobiles and many other vehicles, the internal combustion engine is characterized by the fact that it burns its fuel internally—in contrast to engines, such as the steam engine, that burn fuel in external furnaces.

The people behind the invention:
Sir Harry Ralph Ricardo (1885-1974), an English engineer
Oliver Thornycroft (1885-1956), an engineer and works manager
Sir David Randall Pye (1886-1960), an engineer and administrator
Sir Robert Waley Cohen (1877-1952), a scientist and industrialist

The Internal Combustion Engine: 1900-1916

By the beginning of the twentieth century, internal combustion engines were almost everywhere. City streets in Berlin, London, and New York were filled with automobile and truck traffic; gasoline- and diesel-powered boat engines were replacing sails; stationary steam engines for electrical generation were being edged out by internal combustion engines. Even aircraft use was at hand: To progress from the Wright brothers’ first manned flight in 1903 to the fighting planes of World War I took only a little more than a decade.

The internal combustion engines of the time, however, were primitive in design. They were heavy (10 to 15 pounds per horsepower of output, as opposed to 1 to 2 pounds today), slow (typically 1,000 revolutions per minute or less, as opposed to 2,000 to 5,000 today), and extremely inefficient in extracting the energy content of their fuel. These were not major drawbacks for stationary applications, or even for road traffic that rarely went faster than 30 or 40 miles per hour, but the advent of military aircraft and tanks demanded that engines be made more efficient.


Engine and Fuel Design

Harry Ricardo, son of an architect and grandson (on his mother’s side) of an engineer, was a central figure in the necessary redesign of internal combustion engines. As a schoolboy, he built a coal-fired steam engine for his bicycle, and at Cambridge University he produced a single-cylinder gasoline motorcycle, incorporating many of his own ideas, which won a fuel-economy competition when it traveled almost 40 miles on a quart of gasoline. He also began development of a two-cycle engine called the “Dolphin,” which was later produced for use in fishing boats and automobiles. In fact, in 1911, Ricardo took his new bride on their honeymoon trip in a Dolphin-powered car.

The impetus that led to major engine research came in 1916, when Ricardo was an engineer in his family’s firm. The British government asked for newly designed tank engines, which had to operate in the dirt and mud of battle at a tilt of up to 35 degrees and could not give off telltale clouds of blue oil smoke. Ricardo solved the problem with a special piston design and with air circulation around the carburetor and within the engine to keep the oil cool.

Design work on the tank engines turned Ricardo into a full-fledged research engineer. In 1917, he founded his own company, and a remarkable series of discoveries quickly followed. He investigated the problem of detonation of the fuel-air mixture in the internal combustion cylinder. The mixture is supposed to be ignited by the spark plug at the top of the compression stroke, with a controlled flame front spreading at a rate about equal to the speed of the piston head as it moves downward in the power stroke. Some fuels, however, detonated (ignited spontaneously throughout the entire fuel-air mixture) as a result of the compression itself, causing loss of fuel efficiency and damage to the engine.
With the cooperation of Robert Waley Cohen of Shell Petroleum, Ricardo evaluated chemical mixtures of fuels and found that paraffins (such as n-heptane, the current low-octane standard) detonated readily, but aromatics such as toluene were nearly immune to detonation. He established a “toluene number” rating to describe the tendency of various fuels to detonate; this number was replaced in the 1920’s by the “octane number” devised by Thomas Midgley at the Delco laboratories in Dayton, Ohio.

Standard Four-Stroke Internal Combustion Engine (diagram). The four cycles of a standard internal combustion engine (left to right): (1) intake, when air enters the cylinder and mixes with gasoline vapor; (2) compression, when the cylinder is sealed and the piston moves up to compress the air-fuel mixture; (3) power, when the spark plug ignites the mixture, creating more pressure that propels the piston downward; and (4) exhaust, when the burned gases exit the cylinder through the exhaust port.

The fuel work was carried out in an experimental engine designed by Ricardo that allowed direct observation of the flame front as it spread and permitted changes in compression ratio while the engine was running. Three principles emerged from the investigation: the fuel-air mixture should be admitted with as much turbulence as possible, for thorough mixing and efficient combustion; the spark plug should be centrally located to prevent distant pockets of the mixture from detonating before the flame front reaches them; and the mixture should be kept as cool as possible to prevent detonation.

These principles were then applied in the first truly efficient side-valve (“L-head”) engine—that is, an engine with the valves in a chamber at the side of the cylinder, in the engine block, rather than overhead, in the engine head. Ricardo patented this design, and after winning a patent dispute in court in 1932, he received royalties or consulting fees for it from engine manufacturers all over the world.
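The four-stroke sequence described in the diagram caption can be sketched as a simple repeating cycle. This is an illustrative model only; the two-crankshaft-revolutions-per-cycle arithmetic is standard for four-stroke engines and is added here, not taken from the text.

```python
from itertools import cycle

# Stroke names and order follow the diagram caption.
STROKES = ("intake", "compression", "power", "exhaust")

def crank_revolutions(n_strokes: int) -> float:
    """Each stroke is half a crankshaft revolution, so one complete
    four-stroke cycle takes two revolutions."""
    return n_strokes / 2

engine = cycle(STROKES)                       # the cycle repeats indefinitely
first_cycle = [next(engine) for _ in range(4)]
print(first_cycle)                            # ['intake', 'compression', 'power', 'exhaust']
print(crank_revolutions(4))                   # 2.0
```
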


Impact

The side-valve engine was the workhorse design for automobile and marine engines until after World War II. With its valves actuated directly by a camshaft in the crankcase, it is simple, rugged, and easy to manufacture. Overhead valves with overhead camshafts are the standard in automobile engines today, but the side-valve engine is still found in marine applications and in small engines for lawn mowers, home generator systems, and the like. In its widespread use and its decades of employment, the side-valve engine represents a scientific and technological breakthrough in the twentieth century.

Ricardo and his colleagues, Oliver Thornycroft and D. R. Pye, went on to create other engine designs—notably, the sleeve-valve aircraft engine that was the basic pattern for most of the great British planes of World War II and early versions of the aircraft jet engine. For his technical advances and service to the government, Ricardo was elected a Fellow of the Royal Society in 1929, and he was knighted in 1948.

See also Alkaline storage battery; Assembly line; Diesel locomotive; Dirigible; Gas-electric car; Interchangeable parts; Thermal cracking process.

Further Reading

A History of the Automotive Internal Combustion Engine. Warrendale, Pa.: Society of Automotive Engineers, 1976.

Mowery, David C., and Nathan Rosenberg. Paths of Innovation: Technological Change in Twentieth Century America. New York: Cambridge University Press, 1999.

Ricardo, Harry R. Memories and Machines: The Pattern of My Life. London: Constable, 1968.


The Internet

The invention: A worldwide network of interlocking computer systems, developed out of a U.S. government project to improve military preparedness.

The people behind the invention:
Paul Baran, a researcher for the RAND Corporation
Vinton G. Cerf (1943- ), an American computer scientist regarded as the “father of the Internet”

Cold War Computer Systems

In 1957, the world was stunned by the launching of the satellite Sputnik I by the Soviet Union. The international image of the United States as the world’s technology superpower and its perceived edge in the Cold War were instantly brought into question. As part of the U.S. response, the Defense Department quickly created the Advanced Research Projects Agency (ARPA) to conduct research into “command, control, and communications” systems.

Military planners in the Pentagon ordered ARPA to develop a communications network that would remain usable in the wake of a nuclear attack. The solution, proposed by Paul Baran, a scientist at the RAND Corporation, was the creation of a network of linked computers that could route communications around damage to any part of the system. Because the centralized control of data flow by major “hub” computers would make such a system vulnerable, the system could not have any central command, and all surviving points had to be able to reestablish contact following an attack on any single point. This redundancy of connectivity (later known as “packet switching”) would not monopolize a single circuit for communications, as telephones do, but would automatically break up computer messages into smaller packets, each of which could reach a destination by rerouting along different paths.

ARPA then began attempting to link university computers over telephone lines. The historic connecting of four sites conducting ARPA research was accomplished in 1969 at a computer laboratory


at the University of California at Los Angeles (UCLA), which was connected to computers at the University of California at Santa Barbara, the Stanford Research Institute, and the University of Utah. UCLA graduate student Vinton Cerf played a major role in establishing the connection, which was first known as “ARPAnet.” By 1971, more than twenty sites had been connected to the network, including supercomputers at the Massachusetts Institute of Technology and Harvard University; by 1981, there were more than two hundred computers on the system.

The Development of the Internet

Because factors such as equipment failure, overtaxed telecommunications lines, and power outages can quickly reduce or abort (“crash”) computer network performance, the ARPAnet managers and others quickly sought to build still larger “internetting” projects. In the late 1980’s, the National Science Foundation built its own network of five supercomputer centers to give academic researchers access to high-power computers that had previously been available only to military contractors. The “NSFnet” connected university networks by linking them to the closest regional center; its development put ARPAnet out of commission in 1990. The economic savings that could be gained from the use of electronic mail (“e-mail”), which reduced postage and telephone costs, were motivation enough for many businesses and institutions to invest in hardware and network connections.

The evolution of ARPAnet and NSFnet eventually led to the creation of the “Internet,” an international web of interconnected government, education, and business computer networks that has been called “the largest machine ever constructed.” Using appropriate software, a computer terminal or personal computer can send and receive data via an “Internet Protocol” packet (an electronic envelope with an address).
Communications programs on the intervening networks “read” the addresses on packets moving through the Internet and forward the packets toward their destinations. From approximately one thousand networks in the mid-1980’s, the Internet grew to an estimated thirty thousand connected networks by 1994, with an estimated 25 million users accessing it regularly. The


Vinton Cerf

Although Vinton Cerf is widely hailed as the “father of the Internet,” he himself disavows that honor. He has repeatedly emphasized that the Internet was built on the work of countless others, and that he and his partner merely happened to make a crucial contribution at a turning point in Internet development.

The path leading Cerf to the Internet began early. He was born in New Haven, Connecticut, in 1943. He read widely, devouring L. Frank Baum’s Oz books and science fiction novels—especially those dealing with real-science themes. When he was ten, a book called The Boy Scientist fired his interest in science. After starting high school in Los Angeles in 1958, he got his first glimpse of computers, which were very different devices in those days. During a visit to a Santa Monica lab, he inspected a computer filling three rooms with wires and vacuum tubes that analyzed data from a Canadian radar system built to detect sneak missile attacks from the Soviet Union. Two years later he and a friend began programming a paper-tape computer at UCLA while they were still in high school.

After graduating from Stanford University in 1965 with a degree in computer science, Cerf worked for IBM for two years, then entered graduate school at UCLA. His work on multiprocessing computer systems got sidetracked when a Defense Department request came in asking for help on a packet-switching project. This new project drew him into the brand-new field of computer networking on a system that became known as the ARPAnet.

In 1972 Cerf returned to Stanford as an assistant professor. There he and a colleague, Robert Kahn, developed the concepts and protocols that became the basis of the modern Internet—a term they coined in a paper they delivered in 1974. Afterward Cerf made development of the Internet the focus of his distinguished career, and he later moved back into the business world. In 1994 he returned to MCI as senior vice president of Internet architecture.
Meanwhile, he founded the Internet Society in 1992 and the Internet Societal Task Force in 1999.

majority of Internet users live in the United States and Europe, but the Internet has continued to expand internationally as telecommunications lines are improved in other countries.
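The packet-switching idea described above (messages broken into addressed packets that travel independently and are reassembled at the destination) can be illustrated with a short sketch. This is a toy model with invented names, not an implementation of any real Internet protocol.

```python
# Toy illustration of packet switching: a message is split into small,
# individually addressed packets; each may travel a different route, and
# the receiver reassembles them in sequence order.
import random

def make_packets(message, dest, size=8):
    """Break a message into numbered packets, each carrying its address."""
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return [{"dest": dest, "seq": n, "data": c} for n, c in enumerate(chunks)]

def route(packet, paths):
    """Forward each packet independently along any available path."""
    return random.choice(paths), packet

def reassemble(packets):
    """Sort arrived packets by sequence number and rejoin the data."""
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = make_packets("Hello from ARPAnet-era packet switching", "host-B")
arrived = [route(p, ["path-1", "path-2", "path-3"])[1] for p in packets]
random.shuffle(arrived)  # packets may arrive out of order
print(reassemble(arrived))  # the original message, restored
```

Because every packet carries its own destination and sequence number, no single circuit is monopolized, and the loss of any one path only forces rerouting, not failure.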


Impact

Most individual users access the Internet through modems attached to their home personal computers by subscribing to local area networks. These services make available information sources such as on-line encyclopedias and magazines and embrace electronic discussion groups and bulletin boards on nearly every specialized interest area imaginable. Many universities converted large libraries to electronic form for Internet distribution, with an ambitious example being Cornell University’s conversion to electronic form of more than 100,000 books on the development of America’s infrastructure. Numerous corporations and small businesses soon began to market their products and services over the Internet.

Problems soon became apparent with the commercial use of the new medium, however, as the protection of copyrighted material proved to be difficult; data and other text available on the system can be “downloaded,” or electronically copied. To protect their resources from unauthorized use via the Internet, therefore, most companies set up a “firewall” computer to screen incoming communications.

The economic policies of the Bill Clinton administration highlighted the development of the “information superhighway” for improving the delivery of social services and encouraging new businesses; however, many governmental agencies and offices, including the U.S. Senate and House of Representatives, have been slow to install high-speed fiber-optic network links. Nevertheless, the Internet soon came to contain numerous information sites to improve public access to the institutions of government.

See also Cell phone; Communications satellite; Fax machine; Personal computer.

Further Reading

Abbate, Janet. Inventing the Internet. Cambridge, Mass.: MIT Press, 2000.

Brody, Herb. “Net Cerfing.” Technology Review (Cambridge, Mass.) 101, no. 3 (May-June, 1998).

Bryant, Stephen. The Story of the Internet. London: Pearson Education, 2000.


Rodriguez, Karen. “Plenty Deserve Credit as ‘Father’ of the Internet.” Business Journal 17, no. 27 (October 22, 1999).

Stefik, Mark J., and Vinton Cerf. Internet Dreams: Archetypes, Myths, and Metaphors. Cambridge, Mass.: MIT Press, 1997.

“Vint Cerf.” Forbes 160, no. 7 (October 6, 1997).

Wolinsky, Art. The History of the Internet and the World Wide Web. Berkeley Heights, N.J.: Enslow, 1999.


Iron lung

The invention: A mechanical respirator that saved the lives of victims of poliomyelitis.

The people behind the invention:
Philip Drinker (1894-1972), an engineer who made many contributions to medicine
Louis Shaw (1886-1940), a respiratory physiologist who assisted Drinker
Charles F. McKhann III (1898-1988), a pediatrician and founding member of the American Board of Pediatrics

A Terrifying Disease

Poliomyelitis (polio, or infantile paralysis) is an infectious viral disease that damages the central nervous system, causing paralysis in many cases. Its effect results from the destruction of neurons (nerve cells) in the spinal cord. In many cases, the disease produces crippled limbs and the wasting away of muscles. In others, polio results in the fatal paralysis of the respiratory muscles. It is fortunate that use of the Salk and Sabin vaccines beginning in the 1950’s has virtually eradicated the disease.

In the 1920’s, poliomyelitis was a terrifying disease. Paralysis of the respiratory muscles caused rapid death by suffocation, often within only a few hours after the first signs of respiratory distress had appeared. In 1929, Philip Drinker and Louis Shaw, both of Harvard University, reported the development of a mechanical respirator that would keep those afflicted with the disease alive for indefinite periods of time. This device, soon nicknamed the “iron lung,” helped thousands of people who suffered from respiratory paralysis as a result of poliomyelitis or other diseases.

Development of the iron lung arose after Drinker, then an assistant professor in Harvard’s Department of Industrial Hygiene, was appointed to a Rockefeller Institute commission formed to improve methods for resuscitating victims of electric shock. The best-known use of the iron lung—treatment of poliomyelitis—was a result of numerous epidemics of the disease that occurred from 1898 until


the 1920’s, each leaving thousands of Americans paralyzed. The concept of the iron lung reportedly arose from Drinker’s observation of physiological experiments carried out by Shaw and Drinker’s brother, Cecil. The experiments involved the placement of a cat inside an airtight box—a body plethysmograph—with the cat’s head protruding from an airtight collar. Shaw and Cecil Drinker then measured the volume changes in the plethysmograph to identify normal breathing patterns. Philip Drinker then placed cats paralyzed by curare inside plethysmographs and showed that they could be kept breathing artificially by use of air from a hypodermic syringe connected to the device. Next, they proceeded to build a human-sized plethysmograph-like machine, with a five-hundred-dollar grant from the New York Consolidated Gas Company. This was done by a tinsmith and the Harvard Medical School machine shop.

Breath for Paralyzed Lungs

The first machine was tested on Drinker and Shaw, and after several modifications were made, a workable iron lung was made available for clinical use. This machine consisted of a metal cylinder large enough to hold a human being. One end of the cylinder, which contained a rubber collar, slid out on casters along with a stretcher on which the patient was placed. Once the patient was in position and the collar was fitted around the patient’s neck, the stretcher was pushed back into the cylinder and the iron lung was made airtight. The iron lung then “breathed” for the patient by using an electric blower to alternately remove and replace air inside the machine.

In the human chest, inhalation occurs when the diaphragm contracts and powerful muscles (which are paralyzed in poliomyelitis sufferers) expand the rib cage. This lowers the air pressure in the lungs and allows inhalation to occur. In exhalation, the diaphragm and chest muscles relax, and air is expelled as the chest cavity returns to its normal size.
In cases of respiratory paralysis treated with an iron lung, the air coming into or leaving the iron lung alternately compressed the patient’s chest, producing artificial exhalation, and then allowed it to expand so that the chest could fill with air. In this way, iron lungs “breathed” for the patients using them.


Careful examination of each patient was required to allow technicians to adjust the rate of operation of the machine. A cooling system and ports for drainage lines, intravenous lines, and the other apparatus needed to maintain a wide variety of patients were included in the machine.

The first person treated in an iron lung was an eight-year-old girl afflicted with respiratory paralysis resulting from poliomyelitis. The iron lung kept her alive for five days. Unfortunately, she died from heart failure as a result of pneumonia. The next iron lung patient, a Harvard University student, was confined to the machine for several weeks and later recovered enough to resume a normal life.

Impact

The Drinker respirator, or iron lung, came into use in 1929 and soon was considered indispensable, saving the lives of poliomyelitis victims until the development of the Salk vaccine in the 1950’s. Although the iron lung is no longer used, it played a critical role in the development of modern respiratory care, proving that large numbers of patients could be kept alive with mechanical support. The iron lung and polio treatment began an entirely new era in the treatment of respiratory conditions.

In addition to receiving a number of awards and honorary degrees for his work, Drinker was elected president of the American Industrial Hygiene Association in 1942 and became chairman of Harvard’s Department of Industrial Hygiene.

See also Electrocardiogram; Electroencephalogram; Heart-lung machine; Pacemaker; Polio vaccine (Sabin); Polio vaccine (Salk).

Further Reading

DeJauregui, Ruth. One Hundred Medical Milestones That Shaped World History. San Mateo, Calif.: Bluewood Books, 1998.

Hawkins, Leonard C. The Man in the Iron Lung: The Frederick B. Snite, Jr., Story. Garden City, N.Y.: Doubleday, 1956.

Rudulph, Mimi. Inside the Iron Lung. Buckinghamshire: Kensal Press, 1984.


Laminated glass

The invention: Double sheets of glass separated by a thin layer of plastic sandwiched between them.

The people behind the invention:
Edouard Benedictus (1879-1930), a French artist
Katharine Burr Blodgett (1898-1979), an American physicist

The Quest for Unbreakable Glass

People have been fascinated for centuries by the delicate transparency of glass and the glitter of crystals. They have also been frustrated by the brittleness and fragility of glass. When glass breaks, it forms sharp pieces that can cut people severely. During the 1800’s and early 1900’s, a number of people demonstrated ways to make “unbreakable” glass.

In 1855 in England, the first “unbreakable” glass panes were made by embedding thin wires in the glass. The embedded wire grid held the glass together when it was struck or subjected to the intense heat of a fire. Wire glass is still used in windows that must be fire resistant. The concept of embedding the wire within a glass sheet so that the glass would not shatter was a predecessor of the concept of laminated glass.

A series of inventors in Europe and the United States worked on the idea of using a durable, transparent inner layer of plastic between two sheets of glass to prevent the glass from shattering when it was dropped or struck by an impact. In 1899, Charles E. Wade of Scranton, Pennsylvania, obtained a patent for a kind of glass that had a sheet or netting of mica fused within it to bind it. In 1902, Earnest E. G. Street of Paris, France, proposed coating glass battery jars with pyroxylin plastic (celluloid) so that they would hold together if they cracked. In Swindon, England, in 1905, John Crewe Wood applied for a patent for a material that would prevent automobile windshields from shattering and injuring people when they broke. He proposed cementing a sheet of material such as celluloid between two sheets of glass. When the window was broken, the inner material would hold the glass splinters together so that they would not cut anyone.


Katharine Burr Blodgett

Besides the danger of shattering, glass poses another problem. It reflects light, as much as 10 percent of the rays hitting it, and that is bad for many precision instruments. Katharine Burr Blodgett cleared away that problem.

Blodgett was born in 1898 in Schenectady, New York, just months after her father died. Her widowed mother, intent upon giving her and her brother the best upbringing possible, devoted herself to their education and took them abroad to live for extended periods. She succeeded. Blodgett attended Bryn Mawr and then earned a master’s degree in physics from the University of Chicago. With the help of a family friend, Irving Langmuir, who later won a Nobel Prize in Chemistry, she was promised a job at the General Electric (GE) research laboratory. However, Langmuir first wanted her to study more physics. Blodgett went to Cambridge University and under the guidance of Ernest Rutherford became the first woman to receive a doctorate in physics there. Then she went to work at GE.

Collaborating with Langmuir, Blodgett found that she could coat glass with a film one layer of molecules at a time, a feat never accomplished before. Moreover, the color of light reflected differed with the number of layers of film. She discovered that by adjusting the number of layers she could cancel out the light reflected by the glass beneath, so as much as 99 percent of natural light would pass through the glass. Producing almost no reflection, this treated glass was “invisible.” It was perfect for lenses, such as those in cameras and microscopes. Blodgett also devised a way to measure the thickness of films based on the wavelengths of light they reflect—a color gauge—that became a standard laboratory technique.

Blodgett died in the town of her birth in 1979.

Remembering a Fortuitous Fall

In his patent application, Edouard Benedictus described himself as an artist and painter. He was also a poet, musician, and philosopher who was descended from the philosopher Baruch Benedictus Spinoza; he seemed an unlikely contributor to the progress of glass manufacture. In 1903, Benedictus was cleaning


his laboratory when he dropped a glass bottle that held a nitrocellulose solution. The solvents, which had evaporated during the years that the bottle had sat on a shelf, had left a strong celluloid coating on the glass. When Benedictus picked up the bottle, he was surprised to see that it had not shattered: It was starred, but all the glass fragments had been held together by the internal celluloid coating. He looked at the bottle closely, labeled it with the date (November, 1903) and the height from which it had fallen, and put it back on the shelf.

One day some years later (the date is uncertain), Benedictus became aware of vehicular collisions in which two young women received serious lacerations from broken glass. He wrote a poetic account of a daydream he had while he was thinking intently about the two women. He described a vision in which the faintly illuminated bottle that had fallen some years before but had not shattered appeared to float down to him from the shelf. He got up, went into his laboratory, and began to work on an idea that originated with his thoughts of the bottle that would not splinter.

Benedictus found the old bottle and devised a series of experiments that he carried out until the next evening. By the time he had finished, he had made the first sheet of Triplex glass, for which he applied for a patent in 1909. He also founded the Société du Verre Triplex (The Triplex Glass Society) in that year. In 1912, the Triplex Safety Glass Company was established in England. The company sold its products for military equipment in World War I, which began two years later.

Triplex glass was the predecessor of laminated glass. Laminated glass is composed of two or more sheets of glass with a thin layer of plastic (usually polyvinyl butyral, although Benedictus used pyroxylin) laminated between the glass sheets using pressure and heat. The plastic layer will yield rather than rupture when subjected to loads and stresses.
This prevents the glass from shattering into sharp pieces. Because of this property, laminated glass is also known as “safety glass.”

Impact

Even after the protective value of laminated glass was known,


the product was not widely used for some years. There were a number of technical difficulties that had to be solved, such as the discoloring of the plastic layer when it was exposed to sunlight; the relatively high cost; and the cloudiness of the plastic layer, which obscured vision—especially at night. Nevertheless, the expanding automobile industry and the corresponding increase in the number of accidents provided the impetus for improving the qualities and manufacturing processes of laminated glass. In the early part of the century, almost two-thirds of all injuries suffered in automobile accidents involved broken glass.

Laminated glass is used in many applications in which safety is important. It is typically used in all windows in cars, trucks, ships, and aircraft. Thick sheets of bullet-resistant laminated glass are used in banks, jewelry displays, and military installations. Thinner sheets of laminated glass are used as security glass in museums, libraries, and other areas where resistance to break-in attempts is needed. Many buildings have large ceiling skylights that are made of laminated glass; if the glass is damaged, it will not shatter, fall, and hurt people below. Laminated glass is used in airports, hotels, and apartments in noisy areas and in recording studios to reduce the amount of noise that is transmitted. It is also used in safety goggles and in viewing ports at industrial plants and test chambers. Edouard Benedictus’s recollection of the bottle that fell but did not shatter has thus helped make many situations in which glass is used safer for everyone.

See also Buna rubber; Contact lenses; Neoprene; Plastic; Pyrex glass; Silicones.

Further Reading

Eastman, Joel W. Styling vs. Safety: The American Automobile Industry and the Development of Automotive Safety, 1900-1966. Lanham: University Press of America, 1984.

Fariss, Robert H. “Fifty Years of Safer Windshields.” CHEMTECH 23, no. 9 (September, 1993).

Miel, Rhoda.
“New Process Promises Safer Glass.” Automotive News 74, no. 5863 (February 28, 2000).


Polak, James L. “Eighty Years Plus of Automotive Glass Development: Windshields Were Once an Option, Today They Are an Integral Part of the Automobile.” Automotive Engineering 98, no. 6 (June, 1990).


Laser

The invention: Taking its name from the acronym for light amplification by stimulated emission of radiation, a laser is a beam of electromagnetic radiation that is monochromatic, highly directional, and coherent. Lasers have found multiple applications in electronics, medicine, and other fields.

The people behind the invention:
Theodore Harold Maiman (1927- ), an American physicist
Charles Hard Townes (1915- ), an American physicist who was a cowinner of the 1964 Nobel Prize in Physics
Arthur L. Schawlow (1921-1999), an American physicist, cowinner of the 1981 Nobel Prize in Physics
Mary Spaeth (1938- ), the American inventor of the tunable laser

Coherent Light

Laser beams differ from other forms of electromagnetic radiation in consisting of a single wavelength, being highly directional, and having waves whose crests and troughs are aligned. A laser beam launched from Earth has produced a spot a few kilometers wide on the Moon, nearly 400,000 kilometers away. Ordinary light would have spread much more and produced a spot several times wider than the Moon. Laser light can also be concentrated so as to yield an enormous intensity of energy, more than that of the surface of the Sun, an impossibility with ordinary light.

In order to appreciate the difference between laser light and ordinary light, one must examine how light of any kind is produced. An ordinary light bulb contains atoms of gas. For the bulb to light up, these atoms must be excited to a state of energy higher than their normal, or ground, state. This is accomplished by sending a current of electricity through the bulb; the current jolts the atoms into the higher-energy state. This excited state is unstable, however, and the atoms will spontaneously return to their ground state by ridding themselves of excess energy.


Scanner device using a laser beam to read shelf labels. (PhotoDisc)

As these atoms emit energy, light is produced. The light emitted by a lamp full of atoms is disorganized and emitted in all directions randomly. This type of light, common to all ordinary sources, from fluorescent lamps to the Sun, is called “incoherent light.”

Laser light is different. The excited atoms in a laser emit their excess energy in a unified, controlled manner. The atoms remain in the excited state until there are a great many excited atoms. Then, they are stimulated to emit energy, not independently, but in an organized fashion, with all their light waves traveling in the same direction, crests and troughs perfectly aligned. This type of light is called “coherent light.”

Theory to Reality

In 1958, Charles Hard Townes of Columbia University, together with Arthur L. Schawlow, explored the requirements of the laser in a theoretical paper. In the Soviet Union, F. A. Butayeva and V. A. Fabrikant had amplified light in 1957 using mercury; however, their work was not published until two years later, and then not in a scientific journal. The work of the Soviet scientists, therefore, received virtually no attention in the Western world.

Mary Spaeth

Born in 1938, Mary Dietrich Spaeth, inventor of the tunable laser, learned to put things together early. When she was just three years old, her father began giving her tools to play with. She learned to use them well and got interested in science along the way. She studied mathematics and physics at Valparaiso University, graduating in 1960, and earned a master’s degree in nuclear physics from Wayne State University in 1962. The same year she joined Hughes Aircraft Company as a researcher.

While waiting for supplies for her regular research in 1966, she examined the lasers in her laboratory. She wondered if, by adding dyes, she could cause the beams to change colors. Cobbling together two lasers—one to boost the power of the test laser—with Duco cement, she added dyes and succeeded at once. She found that she could produce light in a wide range of colors with different dyes. The tunable dye laser afterward was used to separate isotopes in nuclear reactor fuel, to purify plutonium for weapons, and to boost the power of ground-based astronomical telescopes. She also invented a resonant reflector for ruby range finders and performed basic research on passive Q switches used in lasers.

Because Spaeth considered Hughes’s promotion policies discriminatory toward women scientists, she moved to the Lawrence Livermore National Laboratory in 1974. In 1986 she became the deputy associate director of its Laser Isotope Separation program.

In 1960, Theodore Harold Maiman constructed the first laser in the United States using a single crystal of synthetic pink ruby, shaped into a cylindrical rod about 4 centimeters long and 0.5 centimeter across. The ends, polished flat and made parallel to within about a millionth of a centimeter, were coated with silver to make them mirrors.

It is a property of stimulated emission that stimulated light waves will be aligned exactly (crest to crest, trough to trough, and with respect to direction) with the radiation that does the stimulating. From the group of excited atoms, one atom returns to its ground


state, emitting light. That light hits one of the other excited atoms and stimulates it to fall to its ground state and emit light. The two light waves are exactly in step. The light from these two atoms hits other excited atoms, which respond in the same way, “amplifying” the total sum of light. If the first atom emits light in a direction parallel to the length of the crystal cylinder, the mirrors at both ends bounce the light waves back and forth, stimulating more light and steadily building up an increasing intensity of light. The mirror at one end of the cylinder is constructed to let through a fraction of the light, enabling the light to emerge as a straight, intense, narrow beam.

Consequences

When the laser was introduced, it was an immediate sensation. In the eighteen months following Maiman’s announcement that he had succeeded in producing a working laser, about four hundred companies and several government agencies embarked on work involving lasers. Activity centered on improving lasers, as well as on exploring their applications. At the same time, there was equal activity in publicizing the near-miraculous promise of the device, in applications covering the spectrum from “death” rays to sight-saving operations. A popular film in the James Bond series, Goldfinger (1964), showed the hero under threat of being sliced in half by a laser beam—an impossibility at the time the film was made because of the low power-output of the early lasers.

In the first decade after Maiman’s laser, there was some disappointment. Successful use of lasers was limited to certain areas of medicine, such as repairing detached retinas, and to scientific applications, particularly in connection with standards: The speed of light was measured with great accuracy, as was the distance to the Moon. By 1990, partly because of advances in other fields, essentially all the laser’s promise had been fulfilled, including the death ray and James Bond’s slicer.
Yet the laser continued to find its place in technologies not envisioned at the time of the first laser. For example, lasers are now used in computer printers, in compact disc players, and even in arterial surgery.
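The buildup of light between the two mirrors described above can be illustrated with a toy calculation. The gain and mirror-reflectivity figures below are illustrative assumptions, not measurements of Maiman's ruby laser; this is a minimal sketch of the round-trip amplification idea, not a physical model.

```python
# Toy model of light building up between a laser's two mirrors.
# One round trip multiplies the intensity by a gain factor (stimulated
# emission) and loses a fraction at the partially transmitting mirror.
# All numbers here are illustrative assumptions, not ruby-laser data.

GAIN_PER_PASS = 1.10      # assumed amplification per round trip
OUTPUT_MIRROR_R = 0.95    # assumed reflectivity of the output mirror

def cavity_intensity(round_trips, seed=1.0):
    """Return relative intensity inside the cavity after n round trips."""
    intensity = seed
    for _ in range(round_trips):
        intensity *= GAIN_PER_PASS * OUTPUT_MIRROR_R
    return intensity

# Net round-trip factor is 1.10 * 0.95 = 1.045 > 1, so the light grows
# on every pass; the 5 percent the mirror transmits is the output beam.
print(cavity_intensity(0))                           # 1.0
print(cavity_intensity(100) > cavity_intensity(50))  # True
```

As long as the gain per pass exceeds the mirror loss, intensity grows on every bounce; the fraction the output mirror lets through is the emitted beam.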

See also: Atomic clock; Compact disc; Fiber-optics; Holography; Laser-diode recording process; Laser vaporization; Optical disk.

Further Reading

Townes, Charles H. How the Laser Happened: Adventures of a Scientist. New York: Oxford University Press, 1999.
Weber, Robert L. Pioneers of Science: Nobel Prize Winners in Physics. 2d ed. Philadelphia: A. Hilger, 1988.
Yen, W. M., Marc D. Levenson, and Arthur L. Schawlow. Lasers, Spectroscopy, and New Ideas: A Tribute to Arthur L. Schawlow. New York: Springer-Verlag, 1987.

Laser-diode recording process

The invention: Video and audio playback system that uses a low-power laser to decode information digitally stored on reflective disks.

The organization behind the invention:
The Philips Corporation, a Dutch electronics firm

The Development of Digital Systems

Since the advent of the computer age, it has been the goal of many equipment manufacturers to provide reliable digital systems for the storage and retrieval of video and audio programs. A need for such devices was perceived for several reasons. Existing storage media (movie film and 12-inch, vinyl, long-playing records) were relatively large and cumbersome to manipulate and were prone to degradation, breakage, and unwanted noise. Thus, during the late 1960's, two different methods for storing video programs on disc were invented. A mechanical system was demonstrated by the Telefunken Company, while the Radio Corporation of America (RCA) introduced an electrostatic device (a device that used static electricity).

The first commercially successful system, however, was developed during the mid-1970's by the Philips Corporation. Philips devoted considerable resources to creating a digital video system, read by light beams, which could reproduce an entire feature-length film from one 12-inch videodisc. An integral part of this innovation was the fabrication of a device small enough and fast enough to read the vast amounts of greatly compacted data stored on the 12-inch disc without introducing unwanted noise. Although Philips was aware of the other formats, the company opted to use an optical scanner with a small "semiconductor laser diode" to retrieve the digital information. The laser diode is only a fraction of a millimeter in size, operates quite efficiently with high amplitude and relatively low power (0.1 watt), and can be used continuously. Because this configuration operates at a high frequency, its information-carrying capacity is quite large.


Although the digital videodisc system (called "laservision") works well, the low level of noise and the clear images offered by this system were masked by the low quality of the conventional television monitors on which they were viewed. Furthermore, the high price of the playback systems and the discs made them noncompetitive with the videocassette recorders (VCRs) that were then capturing the market for home systems. VCRs had the additional advantage that programs could be recorded or copied easily. The Philips Corporation turned its attention to utilizing this technology in an area where low noise levels and high quality would be more readily apparent: audio disc systems. By 1979, it had perfected the basic compact disc (CD) system, which soon revolutionized the world of stereophonic home systems.

Reading Digital Discs with Laser Light

Digital signals (signals composed of numbers) are stored on discs as "pits" impressed into the plastic disc and then coated with a thin reflective layer of aluminum. A laser beam, manipulated by delicate, fast-moving mirrors, tracks and reads the digital information as changes in light intensity. These data are then converted to a varying electrical signal that contains the video or audio information. The data are recovered by means of a sophisticated pickup that consists of the semiconductor laser diode, a polarizing beam splitter, an objective lens, a collective lens system, and a photodiode receiver.

The beam from the laser diode is focused by a collimator lens (a lens that collects and focuses light) and then passes through the polarizing beam splitter (PBS). This device acts like a one-way mirror mounted at 45 degrees to the light path. Light from the laser passes through the PBS as if it were a window, but the light emerges in a polarized state (which means that the vibration of the light takes place in only one plane).
For the beam reflected from the CD surface, however, the PBS acts like a mirror, since the reflected beam has an opposite polarization. The light is thus deflected toward the photodiode detector. The objective lens is needed to focus the light onto the disc surface. On the outer surface of the transparent disc, the main spot of light has a diameter of 0.8 millimeter, which narrows to only 0.0017 millimeter at the reflective surface. At the surface, the spot is about three times the size of the microscopic pits (0.0005 millimeter). The data encoded on the disc determine the relative intensity of the reflected light, on the basis of the presence or absence of pits. When the reflected laser beam enters the photodiode, a modulated light beam is changed into a digital signal that becomes an analog (continuous) audio signal after several stages of signal processing and error correction.

Consequences

The development of the semiconductor laser diode and associated circuitry for reading stored information has made CD audio systems practical and affordable. These systems can offer the quality of a live musical performance with a clarity that is undisturbed by noise and distortion. Digital systems also offer several other significant advantages over analog devices. The dynamic range (the difference between the softest and the loudest signals that can be stored and reproduced) is considerably greater in digital systems. In addition, digital systems can be copied precisely; the signal is not degraded by copying, as is the case with analog systems. Finally, error-correcting codes can be used to detect and correct errors in transmitted or reproduced digital signals, allowing greater precision and a higher-quality output sound.

Besides laser video systems, there are many other applications for laser-read CDs. Compact disc read-only memory (CD-ROM) is used to store computer text. One standard CD can store 500 megabytes of information, which is about twenty times the storage of a hard-disk drive on a typical home computer. Compact disc systems can also be integrated with conventional televisions (called CD-V) to present twenty minutes of sound and five minutes of sound with picture. Finally, CD systems connected with a computer (CD-I) mix audio, video, and computer programming.
These devices allow the user to stop at any point in the program, request more information, and receive that information as sound with graphics, film clips, or as text on the screen.

See also: Compact disc; Laser; Videocassette recorder; Walkman cassette player.


Further Reading

Atkinson, Terry. "Picture This: CD's with Video, By Christmas '87." Los Angeles Times (February 20, 1987).
Botez, Dan, and Luis Figueroa. Laser-Diode Technology and Applications II: 16-19 January 1990, Los Angeles, California. Bellingham, Wash.: SPIE, 1990.
Clemens, Jon K. "Video Disks: Three Choices." IEEE Spectrum 19, no. 3 (March, 1982).
"Self-Pulsating Laser for DVD." Electronics Now 67, no. 5 (May, 1996).

Laser eye surgery

The invention: The first significant clinical ophthalmic application of any laser system was the treatment of retinal tears with a pulsed ruby laser.

The people behind the invention:
Charles J. Campbell (1926- ), an ophthalmologist
H. Christian Zweng (1925- ), an ophthalmologist
Milton M. Zaret (1927- ), an ophthalmologist
Theodore Harold Maiman (1927- ), the physicist who developed the first laser

Monkeys and Rabbits

The term "laser" is an acronym for light amplification by the stimulated emission of radiation. The development of the laser for ophthalmic (eye) surgery arose from the initial concentration of conventional light by magnifying lenses. Within a laser, atoms are highly energized. When one of these atoms loses its energy in the form of light, it stimulates other atoms to emit light of the same frequency and in the same direction. A cascade of these identical light waves is soon produced, which then oscillate back and forth between the mirrors in the laser cavity. One mirror is only partially reflective, allowing some of the laser light to pass through. This light can be concentrated further into a small burst of high intensity.

On July 7, 1960, Theodore Harold Maiman made public his discovery of the first laser: a ruby laser. Shortly thereafter, ophthalmologists began using ruby lasers for medical purposes. The first significant medical uses of the ruby laser occurred in 1961, with experiments on animals conducted by Charles J. Campbell in New York, H. Christian Zweng, and Milton M. Zaret. Zaret and his colleagues produced photocoagulation (a thickening or drawing together of substances by use of light) of the eyes of rabbits by flashes from a ruby laser. Sufficient energy was delivered to cause immediate thermal injury to the retina and iris of the rabbit. The beam also was


directed to the interior of the rabbit eye, resulting in retinal coagulations. The team examined the retinal lesions and pointed out both the possible advantages of laser as a tool for therapeutic photocoagulation and the potential applications in medical research. In 1962, Zweng, along with several of his associates, began experimenting with laser photocoagulation on the eyes of monkeys and rabbits in order to establish parameters for the use of lasers on the human eye.

Reflected by Blood

The vitreous humor, a transparent jelly that usually fills the vitreous cavity of the eyes of younger individuals, commonly shrinks with age, with myopia, or with certain pathologic conditions. As these conditions occur, the vitreous humor begins to separate from the adjacent retina. In some patients, the separating vitreous humor produces a traction (pulling), causing a retinal tear to form. Through this opening in the retina, liquefied vitreous humor can pass to a site underneath the retina, producing retinal detachment and loss of vision.

A laser can be used to cause photocoagulation of a retinal tear. As a result, an adhesive scar forms between the retina surrounding the tear and the underlying layers so that, despite traction, the retina does not detach. If more than a small area of retina has detached, the laser often is ineffective and major retinal detachment surgery must be performed. Thus, in the experiments of Campbell and Zweng, the ruby laser was used to prevent, rather than treat, retinal detachment.

In subsequent experiments with humans, all patients were treated with the experimental laser photocoagulator without anesthesia. Although usually no attempt was made to seal holes or tears, the diseased portions of the retina were walled off satisfactorily so that no detachments occurred. One problem that arose involved microaneurysms. A "microaneurysm" is a tiny aneurysm, or blood-filled bubble extending from the wall of a blood vessel.
When attempts to obliterate microaneurysms were unsuccessful, the researchers postulated that the color of the ruby pulse so resembled the red of blood that the light was reflected rather than absorbed. They believed that another lasing material emitting light in another part of the spectrum might have performed more successfully.


Previously, xenon-arc lamp photocoagulators had been used to treat retinal tears. The long exposure time required of these systems, combined with their broad spectral range emission (versus the single wavelength output of a laser), however, made the retinal spot on which the xenon-arc could be focused too large for many applications. Focused laser spots on the retina could be as small as 50 microns.

Consequences

The first laser in ophthalmic use by Campbell, Zweng, and Zaret, among others, was a solid laser: Maiman's ruby laser. While the results they achieved with this laser were more impressive than with the previously used xenon-arc, in the decades following these experiments, argon gas replaced ruby as the most frequently used material in treating retinal tears.

Argon laser energy is delivered to the area around the retinal tear through a slit lamp or by using an intraocular probe introduced directly into the eye. The argon wavelength is transmitted through the clear structures of the eye, such as the cornea, lens, and vitreous. This beam is composed of blue-green light that can be effectively aimed at the desired portion of the eye. Nevertheless, the beam can be absorbed by cataracts and by vitreous or retinal blood, decreasing its effectiveness. Moreover, while the ruby laser was found to be highly effective in producing an adhesive scar, it was not useful in the treatment of vascular diseases of the eye. A series of laser sources, each with different characteristics, was considered, investigated, and used clinically for various durations during the period that followed Campbell and Zweng's experiments.
Other laser types that are being adapted for use in ophthalmology are carbon dioxide lasers for scleral surgery (surgery on the tough, white, fibrous membrane covering the entire eyeball except the area covered by the cornea) and eye wall resection, dye lasers to kill or slow the growth of tumors, excimer lasers for their ability to break down corneal tissue without heating, and pulsed erbium lasers used to cut intraocular membranes.
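The advantage of the small focused spot mentioned earlier can be made concrete with a power-density calculation. The 1-watt beam power and the 1-millimeter comparison spot below are illustrative assumptions; only the 50-micron spot diameter comes from the text.

```python
# Why a tightly focused spot matters: the same beam power concentrated
# on a smaller spot gives a far higher power density at the retina.
# The 1-watt power and the 1-mm comparison spot are assumed values.
import math

def power_density_w_per_m2(power_watts, spot_diameter_m):
    """Beam power divided by the area of a circular spot."""
    radius = spot_diameter_m / 2
    return power_watts / (math.pi * radius ** 2)

laser_spot = power_density_w_per_m2(1.0, 50e-6)  # 50-micron laser spot
broad_spot = power_density_w_per_m2(1.0, 1e-3)   # assumed 1-mm broad spot

# A 20x smaller diameter concentrates the power 400x more densely:
print(round(laser_spot / broad_spot))  # 400
```

Because density scales with the inverse square of spot diameter, shrinking the spot by a factor of twenty raises the power density four hundredfold, which is what lets a modest beam coagulate tissue at a precise point.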


See also: Contact lenses; Coronary artery bypass surgery; Laser; Laser vaporization.

Further Reading

Constable, Ian J., and Arthur Siew Ming Lin. Laser: Its Clinical Uses in Eye Diseases. Edinburgh: Churchill Livingstone, 1981.
Guyer, David R. Retina, Vitreous, Macula. Philadelphia: Saunders, 1999.
Hecht, Jeff. Laser Pioneers. Rev. ed. Boston: Academic Press, 1992.
Smiddy, William E., Lawrence P. Chong, and Donald A. Frambach. Retinal Surgery and Ocular Trauma. Philadelphia: Lippincott, 1995.

Laser vaporization

The invention: Technique using laser light beams to vaporize the plaque that clogs arteries.

The people behind the invention:
Albert Einstein (1879-1955), a theoretical physicist
Theodore Harold Maiman (1927- ), inventor of the laser

Light, Lasers, and Coronary Arteries

Visible light, a type of electromagnetic radiation, is actually a form of energy. The fact that the light beams produced by a light bulb can warm an object demonstrates that this is the case. Light beams are radiated in all directions by a light bulb. In contrast, the device called the "laser" produces light that travels in the form of a "coherent" unidirectional beam. Coherent light beams can be focused on very small areas, generating sufficient heat to melt steel.

The term "laser" was invented in 1957 by R. Gordon Gould of Columbia University. It stands for light amplification by stimulated emission of radiation, the means by which laser light beams are made. Many different materials, including solid ruby gemstones, liquid dye solutions, and mixtures of gases, can produce such beams in a process called "lasing." The different types of lasers yield light beams of different colors that have many uses in science, industry, and medicine. For example, ruby lasers, which were developed in 1960, are widely used in eye surgery.

In 1983, a group of physicians in Toulouse, France, used a laser for cardiovascular treatment. They used the laser to vaporize the "atheroma" material that clogs the arteries in the condition called "atherosclerosis." The technique that they used is known as "laser vaporization surgery."

Laser Operation, Welding, and Surgery

Lasers are electronic devices that emit intense beams of light when a process called "stimulated emission" occurs. The principles of laser operation, including stimulated emission, were established by Albert Einstein and other scientists in the first third of the twentieth century. In 1960, Theodore H. Maiman of the Hughes Research Center in Malibu, California, built the first laser, using a ruby crystal to produce a laser beam composed of red light.

All lasers are made up of three main components. The first of these, the laser's "active medium," is a solid (like Maiman's ruby crystal), a liquid, or a gas that can be made to lase. The second component is a flash lamp or some other light energy source that puts light into the active medium. The third component is a pair of mirrors that are situated on both sides of the active medium and are designed in such a way that one mirror transmits part of the energy that strikes it, yielding the light beam that leaves the laser.

Lasers can produce energy because light is one of many forms of energy that are called, collectively, electromagnetic radiation (among the other forms of electromagnetic radiation are X rays and radio waves). These forms of electromagnetic radiation have different wavelengths; the smaller the wavelength, the higher the energy level. The energy is delivered in packets called "quanta." The emission of light quanta from atoms that are said to be in the "excited state" produces energy, and the absorption of quanta by unexcited atoms (atoms said to be in the "ground state") excites those atoms.

The familiar light bulb spontaneously and haphazardly emits light of many wavelengths from excited atoms. This emission occurs in all directions and at widely varying times. In contrast, the light reflection between the mirrors at the ends of a laser causes all of the many excited atoms present in the active medium simultaneously to emit light waves of the same wavelength. This process is called "stimulated emission." Stimulated emission ultimately causes a laser to yield a beam of coherent light, which means that the wavelength, emission time, and direction of all the waves in the laser beam are the same.
The use of focusing devices makes it possible to convert an emitted laser beam into a point source that can be as small as a few thousandths of an inch in diameter. Such focused beams are very hot, and they can be used for such diverse functions as cutting or welding metal objects and performing delicate surgery. The nature of the active medium used in a laser determines the wavelength of its emitted light beam; this in turn dictates both the energy of the emitted quanta and the appropriate uses for the laser.
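The rule stated above, that shorter wavelengths carry higher-energy quanta, can be checked with the formula E = hc/λ. The specific wavelengths used below (694 nanometers for ruby lasers, 10.6 micrometers for carbon dioxide lasers) are commonly cited values, not figures given in this article.

```python
# Photon energy E = h*c / wavelength: shorter wavelength, higher energy.
# The wavelengths below are standard textbook values (assumptions here,
# not taken from the article): ruby lasers emit red light near 694 nm;
# carbon dioxide lasers emit infrared light near 10.6 micrometers.

H = 6.626e-34   # Planck's constant, joule-seconds
C = 2.998e8     # speed of light, meters per second

def photon_energy_joules(wavelength_m):
    return H * C / wavelength_m

ruby = photon_energy_joules(694e-9)   # about 2.9e-19 J per quantum
co2 = photon_energy_joules(10.6e-6)   # about 1.9e-20 J per quantum
print(ruby > co2)  # True
```

The red ruby quantum carries roughly fifteen times the energy of the infrared carbon dioxide quantum, which is why different active media suit different surgical tasks.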


A blocked artery (top) can be threaded with a flexible optical fiber or bundle of fibers until it reaches the blockage; the fiber then emits laser light, vaporizing the plaque (bottom) and restoring circulation.

Maiman's ruby laser, for example, has been used since the 1960's in eye surgery to reattach detached retinas. This is done by focusing the laser on the tiny retinal tear that causes a retina to become detached. The very hot, high-intensity light beam then "welds" the retina back into place, bloodlessly, by burning it to produce scar tissue. The burning process has no effect on nearby tissues. Other types of lasers have been used in surgeries on the digestive tract and the uterus since the 1970's.

In 1983, a group of physicians began using lasers to treat cardiovascular disease. The original work, which was carried out by a number of physicians in Toulouse, France, involved the vaporization of atheroma deposits (atherosclerotic plaque) in a human artery. This very exciting event added a new method to medical science's arsenal of life-saving techniques.

Consequences

Since their discovery, lasers have been used for many purposes in science and industry. Such uses include the study of the laws of chemistry and physics, photography, communications, and surveying. Lasers have been utilized in surgery since the mid-1960's, and their use has had a tremendous impact on medicine.

The first type of laser surgery to be conducted was the repair of detached retinas via ruby lasers. This technique has become the method of choice for such eye surgery because it takes only minutes to perform rather than the hours required for conventional surgical methods. It is also beneficial because the lasing of the surgical site cauterizes that site, preventing bleeding. In the late 1970's, the use of other lasers for abdominal cancer surgery and uterine surgery began and flourished. In these forms of surgery, more powerful lasers are used.

In the 1980's, laser vaporization surgery (LVS) began to be used to clear atherosclerotic plaque (atheromas) from clogged arteries. This methodology gives cardiologists a useful new tool. Before LVS was available, surgeons dislodged atheromas by means of "transluminal angioplasty," which involved pushing small, fluoroscope-guided inflatable balloons through clogged arteries.

See also: Blood transfusion; CAT scanner; Coronary artery bypass surgery; Electrocardiogram; Laser; Laser eye surgery; Ultrasound.

Further Reading

Fackelmann, Kathleen. "Internal Laser Blast Might Ease Heart Pain." USA Today (March 8, 1999).
Hecht, Jeff. Laser Pioneers. Rev. ed. Boston: Academic Press, 1992.
"Is Cervical Laser Therapy Painful?" Lancet no. 8629 (January 14, 1989).
Lothian, Cheri L. "Laser Angioplasty: Vaporizing Coronary Artery Plaque." Nursing 22, no. 1 (January, 1992).
"New Cool Laser Procedure Has Promise for Treating Blocked Coronary Arteries." Wall Street Journal (May 15, 1989).
Rundle, Rhonda L. "FDA Approves Laser Systems for Angioplasty." Wall Street Journal (February 3, 1992).
Sutton, C. J. G., and Michael P. Diamond. Endoscopic Surgery for Gynecologists. Philadelphia: W. B. Saunders, 1993.

Long-distance radiotelephony

The invention: The first radio transmissions from the United States to Europe opened a new era in telecommunications.

The people behind the invention:
Guglielmo Marconi (1874-1937), Italian inventor of transatlantic telegraphy
Reginald Aubrey Fessenden (1866-1932), an American radio engineer
Lee de Forest (1873-1961), an American inventor
Harold D. Arnold (1883-1933), an American physicist
John J. Carty (1861-1932), an American electrical engineer

An Accidental Broadcast

The idea of commercial transatlantic communication was first conceived by Italian physicist and inventor Guglielmo Marconi, the pioneer of wireless telegraphy. Marconi used a spark transmitter to generate radio waves that were interrupted, or modulated, to form the dots and dashes of Morse code. The rapid generation of sparks created an electromagnetic disturbance that sent radio waves of different frequencies into the air: a broad, noisy transmission that was difficult to tune and detect.

The inventor Reginald Aubrey Fessenden produced an alternative method that became the basis of radio technology in the twentieth century. His continuous radio waves kept to one frequency, making them much easier to detect at long distances. Furthermore, the continuous waves could be modulated by an audio signal, making it possible to transmit the sound of speech.

Fessenden used an alternator to generate electromagnetic waves at the high frequencies required in radio transmission. It was specially constructed at the laboratories of the General Electric Company. The machine was shipped to Brant Rock, Massachusetts, in 1906 for testing. Radio messages were sent to a boat cruising offshore, and the feasibility of radiotelephony was thus demonstrated. Fessenden followed this success with a broadcast of messages and


music between Brant Rock and a receiving station constructed at Plymouth, Massachusetts.

The equipment installed at Brant Rock had a range of about 160 kilometers. The transmission distance was determined by the strength of the electric power delivered by the alternator, which was measured in watts. Fessenden's alternator was rated at 500 watts, but it usually delivered much less power. Yet this was sufficient to send a radio message across the Atlantic. Fessenden had built a receiving station at Machrihanish, Scotland, to test the operation of a large rotary spark transmitter that he had constructed. An operator at this station picked up the voice of an engineer at Brant Rock who was sending instructions to Plymouth. Thus, the first radiotelephone message had been sent across the Atlantic by accident.

Fessenden, however, decided not to make this startling development public. The station at Machrihanish was destroyed in a storm, making it impossible to carry out further tests. The successful transmission undoubtedly had been the result of exceptionally clear atmospheric conditions that might never again favor the inventor.

One of the parties following the development of the experiments in radio telephony was the American Telephone and Telegraph (AT&T) Company. Fessenden entered into negotiations to sell his system to the telephone company, but, because of the financial panic of 1907, the sale was never made.

Virginia to Paris and Hawaii

The English physicist John Ambrose Fleming had invented a two-element (diode) vacuum tube in 1904 that could be used to generate and detect radio waves. Two years later, the American inventor Lee de Forest added a third element to the diode to produce his "audion" (triode), which was a more sensitive detector. John J. Carty, head of a research and development effort at AT&T, examined these new devices carefully.
He became convinced that an electronic amplifier, incorporating the triode into its design, could be used to increase the strength of telephone signals and to carry them over long distances. On Carty's advice, AT&T purchased the rights to de Forest's audion.

Reginald Aubrey Fessenden

Reginald Aubrey Fessenden was born in Canada in 1866 to a small-town minister and his wife. After graduating from Bishop's College in Lennoxville, Quebec, he took a job as head of Whitney Institute in Bermuda. However, he was brilliant and volatile and had greater ambitions. After two years, he landed a job as a tester for his idol, Thomas Edison. Soon he was working as an engineer and chemist.

Fessenden became a professor of electrical engineering at Purdue University in 1892 and then, a year later, at the University of Pittsburgh. His ideas were often advanced, so far advanced that some were not developed until much later, and by others. His first patented invention, an electrolyte detector in 1900, was far more sensitive than others in use and made it possible to pick up radio signals carrying complex sound. To transmit such signals, he pioneered the use of carrier waves. During his career he registered more than three hundred patents.

Suspicious and feisty, he also spent a lot of time in disputes, and frequently in court, over his inventions. He sued his backers at the National Electric Signaling Company over rights to operate a connection to Great Britain, and won a $406,000 settlement, which bankrupted the company. He sued the Radio Corporation of America (RCA), claiming it prevented him from exploiting his own patents commercially. RCA settled out of court but was enriched by Fessenden's inventions. Having returned to Bermuda, Fessenden died in 1932. He never won the fame and wealth that he felt his work on radio had earned him.

A team of about twenty-five researchers, under the leadership of physicist Harold D. Arnold, were assigned the job of perfecting the triode and turning it into a reliable amplifier. The improved triode was responsible for the success of transcontinental cable telephone service, which was introduced in January, 1915. The triode was also the basis of AT&T's foray into radio telephony. Carty's research plan called for a system with three components: an oscillator to generate the radio waves, a modulator to add the audio signals to the waves, and an amplifier to transmit the radio waves. The total power output of the system was 7,500 watts, enough to send the radio waves over thousands of kilometers.
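The oscillator-plus-modulator portion of a system like the one described here amounts to amplitude modulation of a continuous carrier wave by an audio signal. The sketch below is illustrative only; the carrier frequency, audio tone, and modulation depth are assumed values, not figures from the 1915 apparatus.

```python
# Sketch of an oscillator + modulator stage: a continuous carrier wave
# is amplitude-modulated by an audio signal. All numeric parameters
# here are illustrative assumptions, not historical values.
import math

CARRIER_HZ = 50_000   # assumed radio carrier frequency
AUDIO_HZ = 440        # assumed audio tone being transmitted
MOD_DEPTH = 0.5       # assumed modulation index (0 to 1)

def am_sample(t):
    """One sample of the amplitude-modulated signal at time t (seconds)."""
    audio = math.sin(2 * math.pi * AUDIO_HZ * t)
    carrier = math.sin(2 * math.pi * CARRIER_HZ * t)
    return (1 + MOD_DEPTH * audio) * carrier

# The envelope (1 + MOD_DEPTH * audio) swings between 0.5 and 1.5, so
# the modulated signal's magnitude never exceeds 1.5:
peak = max(abs(am_sample(n / 1_000_000)) for n in range(10_000))
print(peak <= 1.5)  # True
```

The audio signal rides on the carrier as a slowly varying envelope; a receiver recovers the speech by tracking that envelope, which is why Fessenden's single-frequency continuous waves could carry voice where spark transmissions could not.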


The apparatus was installed in the U.S. Navy's radio tower in Arlington, Virginia, in 1915. Radio messages from Arlington were picked up at a receiving station in California, a distance of 4,000 kilometers, then at a station in Pearl Harbor, Hawaii, which was 7,200 kilometers from Arlington. AT&T's engineers had succeeded in joining the company telephone lines with the radio transmitter at Arlington; therefore, the president of AT&T, Theodore Vail, could pick up his telephone and talk directly with someone in California.

The next experiment was to send a radio message from Arlington to a receiving station set up in the Eiffel Tower in Paris. After several unsuccessful attempts, the telephone engineers in the Eiffel Tower finally heard Arlington's messages on October 21, 1915. The AT&T receiving station in Hawaii also picked up the messages. The two receiving stations had to send their reply by telegraph to the United States because both stations were set up to receive only. Two-way radio communication was still years in the future.

Impact

The announcement that messages had been received in Paris was front-page news and brought about an outburst of national pride in the United States. The demonstration of transatlantic radio telephony was more important as publicity for AT&T than as a scientific advance. All the credit went to AT&T and to Carty's laboratory. Both Fessenden and de Forest attempted to draw attention to their contributions to long-distance radio telephony, but to no avail.

The Arlington-to-Paris transmission was a triumph for corporate public relations and corporate research. The development of the triode had been achieved with large teams of highly trained scientists, in contrast to the small-scale efforts of Fessenden and de Forest, who had little formal scientific training. Carty's laboratory was an example of the new type of industrial research that was to dominate the twentieth century.
The golden days of the lone inventor, in the mold of Thomas Edison or Alexander Graham Bell, were gone.

In the years that followed the first transatlantic radio telephone messages, little was done by AT&T to advance the technology or to develop a commercial service. The equipment used in the 1915 demonstration was more a makeshift laboratory apparatus than a prototype for a new radio technology. The messages sent were short and faint. There was a great gulf between hearing "hello" and "goodbye" amid the static. The many predictions of a direct telephone connection between New York and other major cities overseas were premature. It was not until 1927 that a transatlantic radio circuit was opened for public use. By that time, a new technological direction had been taken, and the method used in 1915 had been superseded by shortwave radio communication.

See also: Communications satellite; Internet; Long-distance telephone; Radio; Radio crystal sets; Radiotelephony; Television.

Further Reading

Marconi, Degna. My Father: Marconi. Toronto: Guernica Editions, 1996.
Masini, Giancarlo. Marconi. New York: Marsilio, 1995.
Seitz, Frederick. The Cosmic Inventor: Reginald Aubrey Fessenden. Philadelphia: American Philosophical Society, 1999.
Streissguth, Thomas. Communications: Sending the Message. Minneapolis, Minn.: Oliver Press, 1997.

Long-distance telephone

The invention: System for conveying voice signals via wires over long distances.

The people behind the invention:
Alexander Graham Bell (1847-1922), a Scottish American inventor
Thomas A. Watson (1854-1934), an American electrical engineer

The Problem of Distance

The telephone may be the most important invention of the nineteenth century. The device developed by Alexander Graham Bell and Thomas A. Watson opened a new era in communication and made it possible for people to converse over long distances for the first time. During the last two decades of the nineteenth century and the first decade of the twentieth century, the American Telephone and Telegraph (AT&T) Company continued to refine and upgrade telephone facilities, introducing such innovations as automatic dialing and long-distance service.

One of the greatest challenges faced by Bell engineers was to develop a way of maintaining signal quality over long distances. Telephone wires were susceptible to interference from electrical storms and other natural phenomena, and electrical resistance and radiation caused a fairly rapid drop-off in signal strength, which made long-distance conversations barely audible or unintelligible.

By 1900, Bell engineers had discovered that signal strength could be improved somewhat by wrapping the main wire conductor with thinner wires called “loading coils” at prescribed intervals along the length of the cable. Using this procedure, Bell extended long-distance service from New York to Denver, Colorado, which was then considered the farthest point that could be reached with acceptable quality. The result, however, was still unsatisfactory, and Bell engineers realized that some form of signal amplification would be necessary to improve the quality of the signal.
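The signal-budget reasoning behind this search for amplification can be illustrated with a toy calculation. The attenuation and gain figures below are invented for illustration only (the article gives no such numbers); the point is simply that line loss grows with distance, so without periodic amplification the signal on a very long circuit falls hopelessly below audibility.

```python
# Toy illustration (not historical data): why periodic amplification
# matters on a long telephone line. All figures are invented for clarity.

def received_level_db(distance_km, loss_db_per_km,
                      repeater_spacing_km=None, repeater_gain_db=0.0):
    """Signal level at the far end, relative to the transmitted level (0 dB)."""
    loss = distance_km * loss_db_per_km
    gain = 0.0
    if repeater_spacing_km:
        n_repeaters = int(distance_km // repeater_spacing_km)
        gain = n_repeaters * repeater_gain_db
    return -loss + gain

# Without amplification, even a small per-km loss is ruinous over 4,800 km:
print(received_level_db(4800, 0.05))            # -240 dB, far below audibility
# With a 60 dB amplifier every 1,200 km, the loss budget closes:
print(received_level_db(4800, 0.05, 1200, 60))  # 0 dB
```

This is why the audion tube mattered: loss is cumulative, but amplifiers placed at intervals can restore the signal before it is lost in the noise.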


A breakthrough came in 1906, when Lee de Forest invented the “audion tube,” which could send and amplify radio waves. Bell scientists immediately recognized the potential of the new device for long-distance telephony and began building amplifiers that would be placed strategically along the long-distance wire network. Work progressed so quickly that by 1909, Bell officials were predicting that the first transcontinental long-distance telephone service, between New York and San Francisco, was imminent. In that year, Bell president Theodore N. Vail went so far as to promise the organizers of the Panama-Pacific Exposition, scheduled to open in San Francisco in 1914, that Bell would offer a demonstration at the exposition.

The promise was risky, because certain technical problems associated with sending a telephone signal over a 4,800-kilometer wire had not yet been solved. De Forest’s audion tube was a crude device, but progress was being made. Two more breakthroughs came in 1912, when de Forest improved on his original concept and Bell engineer Harold D. Arnold improved it further. Bell bought the rights to de Forest’s vacuum-tube patents in 1913 and completed the construction of the New York-San Francisco circuit. The last connection was made at the Utah-Nevada border on June 17, 1914.

Success Leads to Further Improvements

Bell’s long-distance network was tested successfully on June 29, 1914, but the official demonstration was postponed until January 25, 1915, to accommodate the Panama-Pacific Exposition, which had also been postponed. On that date, a connection was established between Jekyll Island, Georgia, where Theodore Vail was recuperating from an illness, and New York City, where Alexander Graham Bell was standing by to talk to his former associate Thomas Watson, who was in San Francisco. When everything was in place, the following conversation took place.

Bell: “Hoy! Hoy! Mr. Watson? Are you there? Do you hear me?”
Watson: “Yes, Dr. Bell, I hear you perfectly.
Do you hear me well?”
Bell: “Yes, your voice is perfectly distinct. It is as clear as if you were here in New York.”

The first transcontinental telephone conversation transmitted by wire was followed quickly by another that was transmitted via radio. Although the Bell company was slow to recognize the potential of radio wave amplification for the “wireless” transmission of telephone conversations, by 1909 the company had made a significant commitment to conduct research in radio telephony. On April 4, 1915, a wireless signal was transmitted by Bell technicians from Montauk Point on Long Island, New York, to Wilmington, Delaware, a distance of more than 320 kilometers. Shortly thereafter, a similar test was conducted between New York City and Brunswick, Georgia, via a relay station at Montauk Point. The total distance of the transmission was more than 1,600 kilometers. Finally, in September, 1915, Vail placed a successful transcontinental radiotelephone call from his office in New York to Bell engineering chief J. J. Carty in San Francisco.

Only a month later, the first telephone transmission across the Atlantic Ocean was accomplished via radio from Arlington, Virginia, to the Eiffel Tower in Paris, France. The signal was detectable, although its quality was poor. It would be ten years before true transatlantic radio-telephone service would begin.

The Bell company recognized that creating a nationwide long-distance network would increase the volume of telephone calls simply by increasing the number of destinations that could be reached from any single telephone station. As the network expanded, each subscriber would have more reason to use the telephone more often, thereby increasing Bell’s revenues. Thus, the company’s strategy became one of tying local and regional networks together to create one large system.

Impact

Just as the railroads had interconnected centers of commerce, industry, and agriculture all across the continental United States in the nineteenth century, the telephone promised to bring a new kind of interconnection to the country in the twentieth century: instantaneous voice communication.
During the first quarter century after the invention of the telephone and during its subsequent commercialization, the emphasis of telephone companies was to set up central office switches that would provide interconnections among subscribers within a fairly limited geographical area. Large cities were wired quickly, and by the beginning of the twentieth century most were served by telephone switches that could accommodate thousands of subscribers.

The development of transcontinental telephone service was a milestone in the history of telephony for two reasons. First, it was a practical demonstration of the almost limitless applications of this innovative technology. Second, for the first time in its brief history, the telephone network took on a national character. It became clear that large central office networks, even in large cities such as New York, Chicago, and Baltimore, were merely small parts of a much larger, universally accessible communication network that spanned a continent. The next step would be to look abroad, to Europe and beyond.

See also Cell phone; Fax machine; Internet; Long-distance radiotelephony; Rotary dial telephone; Telephone switching; Touch-tone telephone.

Further Reading
Coe, Lewis. The Telephone and Its Several Inventors: A History. Jefferson, N.C.: McFarland, 1995.
Mackay, James A. Alexander Graham Bell: A Life. New York: J. Wiley, 1997.
Young, Peter. Person to Person: The International Impact of the Telephone. Cambridge: Granta Editions, 1991.


Mammography

The invention: The first X-ray procedure for detecting and diagnosing breast cancer.

The people behind the invention:
Albert Salomon, the first researcher to use X-ray technology instead of surgery to identify breast cancer
Jacob Gershon-Cohen (1899-1971), a breast cancer researcher

Studying Breast Cancer

Medical researchers have been studying breast cancer for more than a century. At the end of the nineteenth century, however, no one knew how to detect breast cancer until it was quite advanced. Often, by the time it was detected, it was too late for surgery; many patients who did have surgery died. So after X-ray technology first appeared in 1896, cancer researchers were eager to experiment with it.

The first scientist to use X-ray techniques in breast cancer experiments was Albert Salomon, a German surgeon. Trying to develop a biopsy technique that could tell which tumors were cancerous and thereby avoid unnecessary surgery, he X-rayed more than three thousand breasts that had been removed from patients during breast cancer surgery. In 1913, he published the results of his experiments, showing that X rays could detect breast cancer. Different types of X-ray images, he said, showed different types of cancer.

Though Salomon is recognized as the inventor of breast radiology, he never actually used his technique to diagnose breast cancer. In fact, breast cancer radiology, which came to be known as “mammography,” was not taken up quickly by other medical researchers. Those who did try to reproduce his research often found that their results were not conclusive.

During the 1920’s, however, more research was conducted in Leipzig, Germany, and in South America. Eventually, the Leipzig researchers, led by Erwin Payr, began to use mammography to diagnose cancer. In the 1930’s, a Leipzig researcher named W. Vogel published a paper that accurately described differences between cancerous and noncancerous tumors as they appeared on X-ray photographs.

Researchers in the United States paid little attention to mammography until 1926. That year, a physician in Rochester, New York, was using a fluoroscope to examine heart muscle in a patient and discovered that the fluoroscope could be used to make images of breast tissue as well. The physician, Stafford L. Warren, then developed a stereoscopic technique that he used in examinations before surgery. Warren published his findings in 1930; his article also described changes in breast tissue that occurred because of pregnancy, lactation (milk production), menstruation, and breast disease. Yet Warren’s technique was complicated and required equipment that most physicians of the time did not have. Eventually, he lost interest in mammography and went on to other research.

Using the Technique

In the late 1930’s, Jacob Gershon-Cohen became the first clinician to advocate regular mammography for all women to detect breast cancer before it became a major problem. Mammography was not very expensive, he pointed out, and it was already quite accurate. A milestone in breast cancer research came in 1956, when Gershon-Cohen and others began a five-year study of more than 1,300 women to test the accuracy of mammography for detecting breast cancer. Each woman studied was screened once every six months. Of the 1,055 women who finished the study, 92 were diagnosed with benign tumors and 23 with malignant tumors. Remarkably, out of all these, only one diagnosis turned out to be wrong.

During the same period, Robert Egan of Houston began tracking breast cancer X rays. Over a span of three years, one thousand X-ray photographs were used to make diagnoses. When these diagnoses were compared to the results of surgical biopsies, it was confirmed that mammography had produced 238 correct diagnoses of cancer, out of 240 cases. Egan therefore joined the crusade for regular breast cancer screening.
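The arithmetic implied by these figures is worth making explicit. The short Python sketch below simply restates the numbers quoted above; it is an illustration, not part of either study's methodology.

```python
# Accuracy figures implied by the numbers reported above.

# Gershon-Cohen's five-year study: 92 benign and 23 malignant diagnoses,
# with only one diagnosis proving wrong.
gershon_cohen_accuracy = (92 + 23 - 1) / (92 + 23)

# Egan's Houston series: 238 correct cancer diagnoses out of 240 cases.
egan_accuracy = 238 / 240

print(f"Gershon-Cohen study: {gershon_cohen_accuracy:.1%} correct")  # about 99.1%
print(f"Egan series: {egan_accuracy:.1%} correct")                   # about 99.2%
```

Both studies, in other words, reported diagnostic accuracy above 99 percent, which explains why they carried such weight with clinicians.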
Once mammography was finally accepted by doctors in the late 1950’s and early 1960’s, researchers realized that they needed a way to teach mammography quickly and effectively to those who would use it. A study was done, and it showed that any radiologist could conduct the procedure with only five days of training.


In the early 1970’s, the American Cancer Society and the National Cancer Institute joined forces on a nationwide breast cancer screening program called the “Breast Cancer Detection Demonstration Project.” Its goal in 1971 was to screen more than 250,000 women over the age of thirty-five.

Since the 1960’s, however, some people had argued that mammography was dangerous because it used radiation on patients. In 1976, Ralph Nader, a consumer advocate, stated that women who were to undergo mammography should be given consent forms that would list the dangers of radiation. In the years that followed, mammography was refined to reduce the amount of radiation needed to detect cancer. It became a standard tool for diagnosis, and doctors recommended that women have a mammogram every two or three years after the age of forty.

Impact

Radiology is not a science that concerns only breast cancer screening. While it does provide the technical facilities necessary to practice mammography, the photographic images obtained must be interpreted by general practitioners, as well as by specialists. Once Gershon-Cohen had demonstrated the viability of the technique, a means of training was devised that made it fairly easy for clinicians to learn how to practice mammography successfully. Once all these factors—accuracy, safety, simplicity—were in place, mammography became an important factor in the fight against breast cancer.

The progress made in mammography during the twentieth century was a major improvement in the effort to keep more women from dying of breast cancer. The disease has always been one of the primary contributors to the number of female cancer deaths that occur annually in the United States and around the world. This high figure stems from the fact that women had no way of detecting the disease until tumors were in an advanced state. Once Salomon’s procedure was utilized, physicians had a means by which they could look inside breast tissue without engaging in exploratory surgery, thus giving women a screening technique that was simple and inexpensive. By 1971, a quarter million women over age thirty-five had been screened. Twenty years later, that number was in the millions.

See also Amniocentesis; CAT scanner; Electrocardiogram; Electroencephalogram; Holography; Nuclear magnetic resonance; Pap test; Syphilis test; Ultrasound.

Further Reading
“First Digital Mammography System Approved by FDA.” FDA Consumer 34, no. 3 (May/June, 2000).
Hindle, William H. Breast Care: A Clinical Guidebook for Women’s Primary Health Care Providers. New York: Springer, 1999.
Okie, Susan. “More Women Are Getting Mammograms: Experts Agree That the Test Has Played Big Role in Reducing Deaths from Breast Cancer.” Washington Post (January 21, 1997).
Wolbarst, Anthony B. Looking Within: How X-ray, CT, MRI, Ultrasound, and Other Medical Images Are Created, and How They Help Physicians Save Lives. Berkeley: University of California Press, 1999.


Mark I calculator

The invention: Early digital calculator designed to solve differential equations that was a forerunner of modern computers.

The people behind the invention:
Howard H. Aiken (1900-1973), Harvard University professor and architect of the Mark I
Clair D. Lake (1888-1958), a senior engineer at IBM
Francis E. Hamilton (1898-1972), an IBM engineer
Benjamin M. Durfee (1897-1980), an IBM engineer

The Human Computer

The physical world can be described by means of mathematics. In principle, one can accurately describe nature down to the smallest detail. In practice, however, this is impossible except for the simplest of atoms. Over the years, physicists have had great success in creating simplified models of real physical processes whose behavior can be described by the branch of mathematics called “calculus.” Calculus relates quantities that change over a period of time. The equations that relate such quantities are called “differential equations,” and they can be solved precisely in order to yield information about those quantities. Most natural phenomena, however, can be described only by differential equations that can be solved only approximately. These equations are solved by numerical means that involve performing a tremendous number of simple arithmetic operations (repeated additions and multiplications).

It has been the dream of many scientists since the late 1700’s to find a way to automate the process of solving these equations. In the early 1900’s, people who spent day after day performing the tedious operations that were required to solve differential equations were known as “computers.” During the two world wars, these human computers created ballistics tables by solving the differential equations that described the hurling of projectiles and the dropping of bombs from aircraft. The war effort was largely responsible for accelerating the push to automate the solution to these problems.
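The kind of numerical solution described above can be sketched in a few lines of modern Python. This is an illustration of the general technique (Euler's method), not the Mark I's actual routine: each step costs just one multiplication and one addition, and an accurate answer demands an enormous number of steps, which is exactly the drudgery the human computers, and then the machine, were asked to perform.

```python
# Euler's method: solve dy/dt = f(t, y) by repeated small steps.
# Each step is one multiplication and one addition, the sort of simple
# arithmetic that human "computers" once ground through by hand.

def euler(f, y0, t0, t1, steps):
    """Approximate y(t1) for dy/dt = f(t, y), given y(t0) = y0."""
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        y += h * f(t, y)   # one multiply, one add per step
        t += h
    return y

# Example: dy/dt = y with y(0) = 1, whose exact solution is e^t.
approx = euler(lambda t, y: y, 1.0, 0.0, 1.0, 100_000)
print(approx)  # close to e = 2.71828...
```

Note the trade-off: 100,000 steps buys accuracy to a few decimal places, which is trivial for a machine but a career's worth of hand calculation.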


A Computational Behemoth

The ten-year period from 1935 to 1945 can be considered the prehistory of the development of the digital computer. (In a digital computer, digits represent magnitudes of physical quantities. These digits can have only certain values.) Before this time, all machines for automatic calculation were either analog in nature (in which case, physical quantities such as current or voltage represent the numerical values of the equation and can vary in a continuous fashion) or were simplistic mechanical or electromechanical adding machines.

This was the situation that faced Howard Aiken. At the time, he was a graduate student working on his doctorate in physics. His dislike for the tremendous effort required to solve the differential equations used in his thesis drove him to propose, in the fall of 1937, constructing a machine that would automate the process. He proposed taking existing business machines that were commonly used in accounting firms and combining them into one machine that would be controlled by a series of instructions. One goal was to eliminate all manual intervention in the process in order to maximize the speed of the calculation.

Aiken’s proposal came to the attention of Thomas Watson, who was then the president of International Business Machines Corporation (IBM). At that time, IBM was a major supplier of business machines and did not see much of a future in such “specialized” machines. It was the pressure provided by the computational needs of the military in World War II that led IBM to invest in building automated calculators. In 1939, a contract was signed in which IBM agreed to use its resources (personnel, equipment, and finances) to build a machine for Howard Aiken and Harvard University.

IBM brought together a team of seasoned engineers to fashion a working device from Aiken’s sketchy ideas. Clair D. Lake, who was selected to manage the project, called on two talented engineers—Francis E. Hamilton and Benjamin M.
Durfee—to assist him. After four years of effort, which was interrupted at times by the demands of the war, a machine was constructed that worked remarkably well. Completed in January, 1943, at Endicott, New York, it was then disassembled and moved to Harvard University in Cambridge, Massachusetts, where it was reassembled. Known as the IBM automatic sequence controlled calculator (ASCC), it began operation in the spring of 1944 and was formally dedicated and revealed to the public on August 7, 1944. Its name indicates the machine’s distinguishing feature: the ability to load automatically the instructions that control the sequence of the calculation. This capability was provided by punching holes, representing the instructions, in a long, ribbonlike paper tape that could be read by the machine.

Computers of that era were big, and the ASCC was particularly impressive. It was 51 feet long by 8 feet tall, and it weighed 5 tons. It contained more than 750,000 parts, and when it was running, it sounded like a room filled with sewing machines. The ASCC later became known as the Harvard Mark I.

Impact

Although this machine represented a significant technological achievement at the time and contributed ideas that would be used in subsequent machines, it was almost obsolete from the start. It was electromechanical, since it relied on relays, but it was built at the dawn of the electronic age. Fully electronic computers offered better reliability and faster speeds. Howard Aiken continued, without the help of IBM, to develop successors to the Mark I. Because he resisted using electronics, however, his machines did not significantly affect the direction of computer development.

For all its complexity, the Mark I operated reasonably well, first solving problems related to the war effort and then turning its attention to the more mundane tasks of producing specialized mathematical tables. It remained in operation at the Harvard Computational Laboratory until 1959, when it was retired and disassembled. Parts of this landmark computational tool are now kept at the Smithsonian Institution.

See also BASIC programming language; Differential analyzer; Personal computer; Pocket calculator; UNIVAC computer.


Further Reading
Cohen, I. Bernard. Howard Aiken: Portrait of a Computer Pioneer. Cambridge, Mass.: MIT Press, 1999.
Ritchie, David. The Computer Pioneers: The Making of the Modern Computer. New York: Simon and Schuster, 1986.
Slater, Robert. Portraits in Silicon. Cambridge, Mass.: MIT Press, 1987.


Mass spectrograph

The invention: The first device used to measure the mass of atoms, which was found to be the result of the combination of isotopes.

The people behind the invention:
Francis William Aston (1877-1945), an English physicist who was awarded the 1922 Nobel Prize in Chemistry
Sir Joseph John Thomson (1856-1940), an English physicist
William Prout (1785-1850), an English biochemist
Ernest Rutherford (1871-1937), an English physicist

Same Element, Different Weights

Isotopes are different forms of a chemical element that act similarly in chemical or physical reactions. Isotopes differ in two ways: They possess different atomic weights and different radioactive transformations.

In 1803, John Dalton proposed a new atomic theory of chemistry that claimed that chemical elements in a compound combine by weight in whole number proportions to one another. By 1815, William Prout had taken Dalton’s hypothesis one step further and claimed that the atomic weights of elements were integral (the integers are the positive and negative whole numbers and zero) multiples of the weight of the hydrogen atom. For example, if the weight of hydrogen was 1, then the weight of carbon was 12, and that of oxygen 16. Over the next decade, several carefully controlled experiments were conducted to determine the atomic weights of a number of elements. Unfortunately, the results of these experiments did not support Prout’s hypothesis. For example, the atomic weight of chlorine was found to be 35.5. It took a theory of isotopes, developed in the early part of the twentieth century, to verify Prout’s original theory.

After his discovery of the electron, Sir Joseph John Thomson, the leading physicist at the Cavendish Laboratory in Cambridge, England, devoted much of his remaining research years to determining the nature of “positive electricity.” (Since electrons are negatively charged, most electricity is negative.) While developing an instrument sensitive enough to analyze these positive rays, Thomson invited Francis William Aston to work with him at the Cavendish Laboratory. Recommended by J. H. Poynting, who had taught Aston physics at Mason College, Aston began a lifelong association at Cavendish, and Trinity College became his home.

When electrons are stripped from an atom, the atom becomes positively charged. Through the use of magnetic and electrical fields, it is possible to channel the resulting positive rays into parabolic tracks. By examining photographic plates of these tracks, Thomson was able to identify the atoms of different elements.

Aston’s first contribution at Cavendish was to improve the instrument used to photograph the parabolic tracks. He developed a more efficient pump to create the required vacuum and devised a camera that would provide sharper photographs. By 1912, the improved apparatus had provided proof that the individual molecules of a substance have the same mass. While working on the element neon, however, Thomson obtained two parabolas, one with a mass of 20 and the other with a mass of 22, which seemed to contradict the previous findings that molecules of any substance have the same mass. Aston was given the task of resolving this mystery.

Treating Particles Like Light

In 1919, Aston began to build a device called a “mass spectrograph.” The idea was to treat ionized or positive atoms like light. He reasoned that, because light can be dispersed into a rainbowlike spectrum and analyzed by means of its different colors, the same procedure could be used with atoms of an element such as neon. By creating a device that used magnetic fields to focus the stream of particles emitted by neon, he was able to create a mass spectrum and record it on a photographic plate. The heavier mass of neon (the first neon isotope) was collected on one part of a spectrum and the lighter neon (the second neon isotope) showed up on another.
This mass spectrograph was a magnificent apparatus: The masses could be analyzed without reference to the velocity of the particles, which was a problem with the parabola method devised by Thomson. Neon possessed two isotopes: one with a mass of 20 and the other with a mass of 22, in a ratio of 10:1. When combined, this gave the atomic weight 20.20, which was the accepted weight of neon.
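The 20.20 figure is just an abundance-weighted average, and the same arithmetic dissolves the chlorine puzzle mentioned earlier. The Python sketch below is illustrative: the 10:1 neon ratio is the one quoted above, while chlorine's roughly 3:1 mixture of masses 35 and 37 is a well-known modern approximation, not a figure from this article.

```python
# Abundance-weighted average mass: how isotope mixtures yield the
# fractional atomic weights that puzzled chemists before Aston.

def atomic_weight(isotopes):
    """isotopes: list of (mass_number, relative_abundance) pairs."""
    total = sum(abundance for _, abundance in isotopes)
    return sum(mass * abundance for mass, abundance in isotopes) / total

# Neon: masses 20 and 22 in roughly a 10:1 ratio, as Aston found.
print(round(atomic_weight([(20, 10), (22, 1)]), 1))   # 20.2

# Chlorine: masses 35 and 37 at roughly 3:1 explain the "non-integral" 35.5.
print(round(atomic_weight([(35, 3), (37, 1)]), 1))    # 35.5
```

Each isotope's mass is a whole number, as Prout's hypothesis demanded; only the mixture produces a fractional average.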


Francis William Aston Francis W. Aston was born near Birmingham, England, in 1877 to William Aston, a farmer and metals dealer, and Fanny Charlotte Hollis, a gunmaker’s daughter. As a boy he loved to perform experiments by himself in his own small laboratory at home. His diligence helped him earn top marks in school, and he attended Mason College (later the University of Birmingham). However, he failed to win a scholarship to continue his studies after graduation in 1901. He did not give up on experiments, however, even while holding a job as the chemist for a local brewery. He built his own equipment and investigated the nature of electricity. This work attracted the attention of the most famous researchers of the day. He finally got a scholarship in 1903 to the University of Birmingham and then joined the staff of Joseph John Thomson at the Royal Institution in London and Cambridge University, which remained his home until his death in 1945. Aston liked to work alone as much as possible. Given his unflagging attention to the details of measurement and his inventiveness with experimental equipment, his colleagues respected his lone-dog approach. Their trust was rewarded. After refining the mass spectrograph, Aston was able to explain a thorny problem in chemistry by showing that elements are composed of differing percentages of isotopes and that atomic weight varied slightly depending on the density of their atoms’ nuclei. The research earned him the Nobel Prize in Chemistry in 1922. Aston’s solitude extended into his private life. He never married, lavishing his affection instead on animals, outdoor sports, photography, travel, and music.

Aston’s accomplishment in developing the mass spectrograph was recognized immediately by the scientific community. His was a simple device that was capable of accomplishing a large amount of research quickly. The field of isotope research, which had been opened up by Aston’s research, ultimately played an important part in other areas of physics.


Impact

The years following 1919 were highly charged with excitement, since month after month new isotopes were announced. Chlorine had two; bromine had isotopes of 79 and 81, which gave an almost exact atomic weight of 80; krypton had six isotopes; and xenon had even more. In addition to the discovery of nonradioactive isotopes, the “whole-number rule” for chemistry was verified: Protons were the basic building blocks for different atoms, and they occurred exclusively in whole numbers.

Aston’s original mass spectrograph had an accuracy of 1 in 1,000. In 1927, he built a new instrument that was ten times more accurate. The new apparatus was sensitive enough to measure Albert Einstein’s law of mass-energy conversion during a nuclear reaction. Between 1927 and 1935, Aston reviewed all the elements that he had worked on earlier and published updated results. He also began to build a still more accurate instrument, which proved to be of great value to nuclear chemistry.

The discovery of isotopes opened the way to further research in nuclear physics and completed the speculations begun by Prout during the previous century. Although radioactivity was discovered separately, isotopes played a central role in the field of nuclear physics and chain reactions.

See also Cyclotron; Electron microscope; Neutrino detector; Scanning tunneling microscope; Synchrocyclotron; Tevatron accelerator; Ultramicroscope.

Further Reading
Aston, Francis William. “Mass Spectra and Isotopes” [Nobel lecture]. In Chemistry, 1922-1941. River Edge, N.J.: World Scientific, 1999.
Squires, Gordon. “Francis Aston and the Mass Spectrograph.” Journal of the Chemical Society, Dalton Transactions, no. 23 (1998).
Thackray, Arnold. Atoms and Powers: An Essay on Newtonian Matter-Theory and the Development of Chemistry. Cambridge, Mass.: Harvard University Press, 1970.


Memory metal

The invention: Known as nitinol, a metal alloy that returns to its original shape, after being deformed, when it is heated to the proper temperature.

The person behind the invention:
William Buehler (1923-    ), an American metallurgist

The Alloy with a Memory

In 1960, William Buehler developed an alloy that consisted of 53 to 57 percent nickel (by weight) and the balance titanium. This alloy, which is called nitinol, turned out to have remarkable properties. Nitinol is a “memory metal,” which means that, given the proper conditions, objects made of nitinol can be restored to their original shapes even after they have been radically deformed. The return to the original shape is triggered by heating the alloy to a moderate temperature. As the metal “snaps back” to its original shape, considerable force is exerted and mechanical work can be done.

Alloys made of nickel and titanium have great potential in a wide variety of industrial and government applications. These include: for the computer market, a series of high-performance electronic connectors; for the medical market, intravenous fluid devices that feature precise fluid control; for the consumer market, eyeglass frame components; and, for the industrial market, power cable couplings that provide durability at welded joints.

The Uncoiling Spring

At one time, the “uncoiling spring experiment” was used to amuse audiences, and a number of scientists have had fun with nitinol in front of unsuspecting viewers. It is now generally recognized that the shape memory effect involves a thermoelastic transformation at the atomic level. This process is unique in that the transformation back to the original shape occurs as a result of stored elastic energy that assists the chemical driving force that is unleashed by heating the metal.


The mechanism, simply stated, is that shape memory alloys are rather easily deformed below their “critical temperature.” Provided that the extent of the deformation is not too great, the original, undeformed state can be recovered by heating the alloy to a temperature just above the critical temperature. It is also significant that substantial stresses are generated when a deformed specimen “springs back” to its original shape. This phenomenon is very peculiar compared to the ordinary behavior of most materials.

Researchers at the Naval Ordnance Laboratory discovered nitinol by accident in the process of trying to learn how to make titanium less brittle. They tried adding nickel, and when they were showing a wire of the alloy to some administrators, someone smoking a cigar held his match too close to the sample, causing the nitinol to spring back into shape.

One of the first applications of the discovery was a new way to link hydraulic lines on the Navy’s F-14 fighter jets. The nitinol “sleeve” was cooled with liquid nitrogen, which enlarged the sample. Then it was slipped into place between two pipes. When the sleeve was warmed up, it contracted, clamping the pipes together and keeping them clamped with a force of nearly 50,000 pounds per square inch.

Nitinol is not an easy alloy with which to work. When it is drilled or passed through a lathe, it becomes hardened and resists change. Welding nitinol and electroplating it have become manufacturing nightmares. It also resists taking on a desired shape. The frictional forces of many processes heat the nitinol, which activates its memory. Its fantastic elasticity also causes difficulties. If it is placed in a press with too little force, the spring comes out of the die unchanged. With too much force, the metal breaks into fragments. Using oil as a cooling lubricant and taking a step-wise approach to altering the alloy, however, allows it to be fashioned into particular shapes.
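The deform-then-heat cycle just described can be caricatured as a small state machine. The Python sketch below is purely illustrative: the shapes, the temperature value, and the class itself are invented, and a real transformation occurs over a temperature range rather than at a single sharp threshold.

```python
# Toy model of the shape-memory cycle: deform the alloy below its
# critical temperature, then heat it past that temperature to recover
# the original shape. All values are illustrative, not nitinol data.

class MemoryMetal:
    def __init__(self, original_shape, critical_temp_c):
        self.original_shape = original_shape
        self.shape = original_shape
        self.critical_temp_c = critical_temp_c

    def deform(self, new_shape, temp_c):
        # Easily deformed only in the low-temperature (martensitic) state.
        if temp_c < self.critical_temp_c:
            self.shape = new_shape

    def heat(self, temp_c):
        # Heating through the critical temperature triggers shape recovery.
        if temp_c >= self.critical_temp_c:
            self.shape = self.original_shape

wire = MemoryMetal(original_shape="coil", critical_temp_c=70)
wire.deform("straight", temp_c=20)
print(wire.shape)   # straight
wire.heat(temp_c=80)
print(wire.shape)   # coil
```

The F-14 coupling exploits the same cycle in reverse order of convenience: cool and stretch the sleeve, slip it over the pipes, and let ambient warmth trigger the recovery that clamps the joint.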
One unique use of nitinol occurs in cardiac surgery. Surgical tools made of nitinol can be bent up to 90 degrees, allowing them to be passed into narrow vessels and then retrieved. The tools are then straightened out in an autoclave so that they can be reused.

Consequences

Many of the technical problems of working with nitinol have been solved, and manufacturers of the alloy are selling more than twenty different nitinol products to countless companies in the fields of medicine, transportation, consumer products, and toys. Nitinol toys include blinking movie posters, butterflies with flapping wings, and dinosaurs whose tails move; all these applications are driven by a contracting bit of wire that is connected to a watch battery. The "Thermobile" and the "Icemobile" are toys whose wheels are set in motion by hot water or by ice cubes.

Orthodontists sometimes use nitinol wires and springs in braces because the alloy pulls with a force that is more gentle and even than that of stainless steel, thus causing less pain. Nitinol does not react with organic materials, and it is also useful as a new type of blood-clot filter. Best of all, however, is the use of nitinol for eyeglass frames. If the wearer deforms the frames by sitting on them (and people do so frequently), the optometrist simply dips the crumpled frames in hot water and the frames regain their original shape.

From its beginnings as an "accidental" discovery, nitinol has gone on to affect various fields of science and technology, from the "Cryofit" couplings used in the hydraulic tubing of aircraft to the pin-and-socket contacts used in electrical circuits. Nitinol has also found its way into integrated circuit packages. In an age of energy conservation, the unique phase transformation of nickel-titanium alloys allows them to be used in low-temperature heat engines. The world has abundant resources of low-grade thermal energy, and the recovery of this energy can be accomplished by the use of materials such as nitinol.
Despite the limitations imposed on heat engines working at low temperatures across a small temperature change, sources of low-grade heat are so widespread that the economical conversion of a fractional percentage of that energy could have a significant impact on the world’s energy supply. Nitinol has also become useful as a material capable of absorbing internal vibrations in structural materials, and it has been used as “Harrington rods” to treat scoliosis (curvature of the spine).

See also: Disposable razor; Neoprene; Plastic; Steelmaking process; Teflon; Tungsten filament.

Further Reading

Gisser, Kathleen R. C., et al. "Nickel-Titanium Memory Metal." Journal of Chemical Education 71, no. 4 (April, 1994).
Iovine, John. "The World's 'Smartest' Metal." Poptronics 1, no. 12 (December, 2000).
Jackson, Curtis M., H. J. Wagner, and Roman Jerzy Wasilewski. 55-Nitinol: The Alloy with a Memory: Its Physical Metallurgy, Properties, and Applications. Washington, D.C.: Technology Utilization Office, 1972.
Walker, Jearl. "The Amateur Scientist." Scientific American 254, no. 5 (May, 1986).

Microwave cooking

The invention: A system of high-speed cooking that uses microwave radiation to agitate liquid molecules, raising temperatures by friction.

The people behind the invention:
Percy L. Spencer (1894-1970), an American engineer
Heinrich Hertz (1857-1894), a German physicist
James Clerk Maxwell (1831-1879), a Scottish physicist

The Nature of Microwaves

Microwaves are electromagnetic waves, as are radio waves, X rays, and visible light. Water waves and sound waves are wave-shaped disturbances of particles in the media—water in the case of water waves and air or water in the case of sound waves—through which they travel. Electromagnetic waves, however, are wavelike variations of intensity in electric and magnetic fields. Electromagnetic waves were first studied in 1864 by James Clerk Maxwell, who explained mathematically their behavior and velocity.

Electromagnetic waves are described in terms of their "wavelength" and "frequency." The wavelength is the length of one cycle, which is the distance from the highest point of one wave to the highest point of the next wave, and the frequency is the number of cycles that occur in one second. Frequency is measured in units called "hertz," named for the German physicist Heinrich Hertz. The frequencies of microwaves run from 300 to 3,000 megahertz (1 megahertz equals 1 million hertz, or 1 million cycles per second), corresponding to wavelengths of 100 to 10 centimeters.

Microwaves travel in the same way that light waves do; they are reflected by metallic objects, absorbed by some materials, and transmitted by other materials. When food is subjected to microwaves, it heats up because the microwaves make the water molecules in foods (water is the most common compound in foods) vibrate. Water is a "dipole molecule," which means that it contains both positive and negative charges. When the food is subjected to microwaves, the dipole water molecules try to align themselves with the alternating electromagnetic field of the microwaves. This causes the water molecules to collide with one another and with other molecules in the food. Consequently, heat is produced as a result of friction.

Development of the Microwave Oven

Percy L. Spencer apparently discovered the principle of microwave cooking while he was experimenting with a radar device at the Raytheon Company. A candy bar in his pocket melted after being exposed to microwaves. After realizing what had happened, Spencer made the first microwave oven from a milk can and applied for two patents, "Method of Treating Foodstuffs" and "Means for Treating Foodstuffs," on October 8, 1945, giving birth to microwave-oven technology. Spencer wrote that his invention "relates to the treatment of foodstuffs and, more particularly, to the cooking thereof through the use of electromagnetic energy."

Though the use of electromagnetic energy for heating was recognized at that time, the frequencies that were used were lower than 50 megahertz. Spencer discovered that heating at such low frequencies takes a long time. He eliminated the time disadvantage by using shorter wavelengths in the microwave region. Wavelengths of 10 centimeters or shorter were comparable to the average dimensions of foods. When these wavelengths were used, the heat that was generated became intense, the energy that was required was minimal, and the process became efficient enough to be exploited commercially.

Although Spencer's patents refer to the cooking of foods with microwave energy, neither deals directly with a microwave oven. The actual basis for a microwave oven may be patents filed by other researchers at Raytheon. A patent by Karl Stiefel in 1949 may be the forerunner of the microwave oven, and in 1950, Fritz Gross received a patent entitled "Cooking Apparatus," which specifically describes an oven that is very similar to modern microwave ovens.
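The wavelength figures quoted above follow directly from the relation wavelength = c/f. A quick check of the quoted band limits, plus the 2,450-megahertz home-oven frequency discussed later in this article (the approximate speed-of-light constant is the only assumption):

```python
# Check of the wavelength/frequency figures in the text: wavelength = c / f.
C = 3.0e8  # speed of light in meters per second (approximate)

def wavelength_cm(freq_hz):
    """Free-space wavelength in centimeters for a given frequency in hertz."""
    return C / freq_hz * 100.0

print(wavelength_cm(300e6))   # 300 MHz corresponds to 100 cm
print(wavelength_cm(3000e6))  # 3,000 MHz corresponds to 10 cm
print(wavelength_cm(2450e6))  # home-oven frequency: about 12 cm
```

The 12-centimeter result for 2,450 megahertz illustrates Spencer's point that oven wavelengths are comparable to the dimensions of typical foods.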
Percy L. Spencer

Percy L. Spencer (1894-1970) had an unpromising background for the inventor of the twentieth century's principal innovation in the technology of cooking. He was orphaned while still a young boy and never completed grade school. However, he possessed a keen curiosity and the imaginative intelligence to educate himself and recognize how to make things better.

In 1941 the magnetron, which produces microwaves, was so complex and difficult to make that fewer than two dozen were produced in a day. This pace delayed the campaign to improve radar, which used magnetrons, so Spencer, while working for Raytheon Corporation, set out to speed things along. He simplified the design and made it more efficient at the same time. Production of magnetrons soon increased more than a thousandfold.

In 1945 he discovered by accident that microwaves could heat chocolate past the melting point. He immediately tried an experiment by training microwaves on popcorn kernels and was delighted to see them puff up straight away. The first microwave oven based on his discovery stood five feet, six inches tall and weighed 750 pounds, suitable only for restaurants. However, it soon got smaller, thanks to researchers at Raytheon. And after some initial hostility from cooks, it became popular. Raytheon bought Amana Refrigeration in 1965 to manufacture the home models and marketed them worldwide.

Meanwhile, Spencer had become a senior vice president at the company and a member of its board of directors. Raytheon named one of its buildings after him, the U.S. Navy presented him with the Distinguished Service Medal for his contributions, and in 1999 he entered the Inventors Hall of Fame.

Perhaps the first mention of a commercial microwave oven was made in the November, 1946, issue of Electronics magazine. This article described the newly developed Radarange as a device that could bake biscuits in 29 seconds, cook hamburgers in 35 seconds, and grill a hot dog in 8 to 10 seconds. Another article that appeared a month later mentioned a unit that had been developed specifically for airline use. The frequency used in this oven was 3,000 megahertz. Within a year, a practical model 13 inches wide, 14 inches deep, and 15 inches high appeared, and several new models were operating in and around Boston. In June, 1947, Electronics magazine reported the installation of a Radarange in a restaurant, signaling the commercial use of microwave cooking. It was reported that this
method more than tripled the speed of service. The Radarange became an important addition to a number of restaurants, and in 1948, Bernard Proctor and Samuel Goldblith used it for the first time to conduct research into microwave cooking.

In the United States, the radio frequencies that can be used for heating are allocated by the Federal Communications Commission (FCC). The two most popular frequencies for microwave cooking are 915 and 2,450 megahertz, and the 2,450 frequency is used in home microwave ovens. It is interesting that patents filed by Spencer in 1947 mention a frequency on the order of 2,450 megahertz. This fact is another example of Spencer's vision in the development of microwave cooking principles. The Raytheon Company concentrated on using 2,450 megahertz, and in 1955, the first domestic microwave oven was introduced. It was not until the late 1960's, however, that the price of the microwave oven decreased sufficiently for the device to become popular.

The first patent describing a microwave heating system being used in conjunction with a conveyor was issued to Spencer in 1952. Later, based on this development, continuous industrial applications of microwaves were developed.

Impact

Initially, microwaves were viewed as simply an efficient means of rapidly converting electric energy to heat. Since that time, however, they have become an integral part of many applications. Because of the pioneering efforts of Percy L. Spencer, microwave applications in the food industry for cooking and for other processing operations have flourished. In the early 1970's, there were eleven microwave oven companies worldwide, two of which specialized in food processing operations, but the growth of the microwave oven industry has paralleled the growth in the radio and television industries. In 1984, microwave ovens accounted for more shipments than had ever been achieved by any appliance—9.1 million units.
By 1989, more than 75 percent of the homes in the United States had microwave ovens, and in the 1990's, microwavable foods were among the fastest-growing products in the food industry. Microwave energy facilitates reductions in operating costs and required energy, higher-quality and more reliable products, and positive environmental effects. To some degree, the use of industrial microwave energy remains in its infancy. New and improved applications of microwaves will continue to appear.

See also: Electric refrigerator; Fluorescent lighting; Food freezing; Robot (household); Television; Tupperware; Vacuum cleaner; Washing machine.

Further Reading

Baird, Davis, R. I. G. Hughes, and Alfred Nordmann. Heinrich Hertz: Classical Physicist, Modern Philosopher. Boston: Kluwer Academic, 1998.
Roman, Mark. "That Marvelous Machine in Your Kitchen." Reader's Digest (February, 1990).
Scott, Otto. The Creative Ordeal: The Story of Raytheon. New York: Atheneum, 1974.
Simpson, Thomas K. Maxwell on the Electromagnetic Field: A Guided Study. New Brunswick, N.J.: Rutgers University Press, 1997.
Tolstoy, Ivan. James Clerk Maxwell: A Biography. Chicago: University of Chicago Press, 1982.


Neoprene

The invention: The first commercially practical synthetic rubber, Neoprene gave a boost to polymer chemistry and the search for new materials.

The people behind the invention:
Wallace Hume Carothers (1896-1937), an American chemist
Arnold Miller Collins (1899-    ), an American chemist
Elmer Keiser Bolton (1886-1968), an American chemist
Julius Arthur Nieuwland (1879-1936), a Belgian American priest, botanist, and chemist

Synthetic Rubber: A Mirage?

The growing dependence of the industrialized nations upon elastomers (elastic substances) and the shortcomings of natural rubber motivated the twentieth century quest for rubber substitutes. By 1914, rubber had become nearly as indispensable as coal or iron. The rise of the automobile industry, in particular, had created a strong demand for rubber. Unfortunately, the availability of rubber was limited by periodic shortages and spiraling prices. Furthermore, the particular properties of natural rubber, such as its lack of resistance to oxygen, oils, and extreme temperatures, restrict its usefulness in certain applications. These limitations stimulated a search for special-purpose rubber substitutes.

Interest in synthetic rubber dates back to the 1860 discovery by the English chemist Greville Williams that the main constituent of rubber is isoprene, a liquid hydrocarbon. Nineteenth century chemists attempted unsuccessfully to transform isoprene into rubber. The first large-scale production of a rubber substitute occurred during World War I. A British blockade forced Germany to begin to manufacture methyl rubber in 1916, but methyl rubber turned out to be a poor substitute for natural rubber. When the war ended in 1918, a practical synthetic rubber was still only a mirage. Nevertheless, a breakthrough was on the horizon.

Mirage Becomes Reality

In 1930, chemists at E. I. Du Pont de Nemours discovered the elastomer known as neoprene. Of the more than twenty chemists who helped to make this discovery possible, four stand out: Elmer Bolton, Julius Nieuwland, Wallace Carothers, and Arnold Collins.

Bolton directed Du Pont's dyestuffs department in the mid-1920's. Largely because of the rapidly increasing price of rubber, he initiated a project to synthesize an elastomer from acetylene, a gaseous hydrocarbon. In December, 1925, Bolton attended the American Chemical Society's convention in Rochester, New York, and heard a presentation dealing with acetylene reactions. The presenter was Julius Nieuwland, the foremost authority on the chemistry of acetylene.

Nieuwland was a professor of organic chemistry at the University of Notre Dame. (One of his students was the legendary football coach Knute Rockne.) The priest-scientist had been investigating acetylene reactions for more than twenty years. Using a copper chloride catalyst he had discovered, he isolated a new compound, divinylacetylene (DVA). He later treated DVA with a vulcanizing (hardening) agent and succeeded in producing a rubberlike substance, but the substance proved to be too soft for practical use.

Bolton immediately recognized the importance of Nieuwland's discoveries and discussed with him the possibility of using DVA as a raw material for a synthetic rubber. Seven months later, an alliance was formed that permitted Du Pont researchers to use Nieuwland's copper catalyst. Bolton hoped that the catalyst would be the key to making an elastomer from acetylene. As it turned out, Nieuwland's catalyst was indispensable for manufacturing neoprene.

Over the next several years, Du Pont scientists tried unsuccessfully to produce rubberlike materials. Using Nieuwland's catalyst, they managed to prepare DVA and also to isolate monovinylacetylene (MVA), a new compound that eventually proved to be the vital intermediate chemical in the making of neoprene. Reactions of MVA and DVA, however, produced only hard, brittle materials.

In 1928, Du Pont hired a thirty-one-year-old Harvard instructor, Wallace Carothers, to direct the organic chemicals group. He began a systematic exploration of polymers (complex molecules). In early
1930, he accepted an assignment to investigate the chemistry of DVA. He appointed one of his assistants, Arnold Collins, to conduct the laboratory experiments. Carothers suggested that Collins should explore the reaction between MVA and hydrogen chloride. His suggestion would lead to the discovery of neoprene.

One of Collins's experiments yielded a new liquid, and on April 17, 1930, he recorded in his laboratory notebook that the liquid had solidified into a rubbery substance. When he dropped it on a bench, it bounced. This was the first batch of neoprene. Carothers named Collins's liquid "chloroprene." Chloroprene is analogous structurally to isoprene, but it polymerizes much more rapidly. Carothers conducted extensive investigations of the chemistry of chloroprene and related compounds. His studies were the foundation for Du Pont's development of an elastomer that was superior to all previously known synthetic rubbers.

Du Pont chemists, including Carothers and Collins, formally introduced neoprene—originally called "DuPrene"—on November 3, 1931, at the meeting of the American Chemical Society in Akron, Ohio. Nine months later, the new elastomer began to be sold.

Impact

The introduction of neoprene was a milestone in humankind's development of new materials. It was the first synthetic rubber worthy of the name. Neoprene possessed higher tensile strength than rubber and much better resistance to abrasion, oxygen, heat, oils, and chemicals. Its main applications included jacketing for electric wires and cables, work-shoe soles, gasoline hoses, and conveyor and power-transmission belting. By 1939, when Adolf Hitler's troops invaded Poland, nearly every major industry in America was using neoprene. After the Japanese bombing of Pearl Harbor, in 1941, the elastomer became even more valuable to the United States. It helped the United States and its allies survive the critical shortage of natural rubber that resulted when Japan seized Malayan rubber plantations.
A scientifically and technologically significant side effect of the introduction of neoprene was the stimulus that the breakthrough gave to polymer research. Chemists had long debated whether polymers were mysterious aggregates of smaller units or were genuine molecules. Carothers ended the debate by demonstrating in a series of now-classic papers that polymers were indeed ordinary—but very large—molecules. In the 1930's, he put polymer studies on a firm footing. The advance of polymer science led, in turn, to the development of additional elastomers and synthetic fibers, including nylon, which was invented by Carothers himself in 1935.

See also: Buna rubber; Nylon; Orlon; Plastic; Polyester; Polyethylene; Polystyrene; Silicones; Teflon.

Further Reading

Furukawa, Yasu. Inventing Polymer Science: Staudinger, Carothers, and the Emergence of Macromolecular Chemistry. Philadelphia: University of Pennsylvania Press, 1998.
Hermes, Matthew E. Enough for One Lifetime: Wallace Carothers, Inventor of Nylon. Washington, D.C.: American Chemical Society and the Chemical Heritage Foundation, 1996.
Taylor, Graham D., and Patricia E. Sudnik. Du Pont and the International Chemical Industry. Boston, Mass.: Twayne, 1984.

Neutrino detector

The invention: A device that provided the first direct evidence that the Sun runs on thermonuclear power and challenged existing models of the Sun.

The people behind the invention:
Raymond Davis, Jr. (1914-    ), an American chemist
John Norris Bahcall (1934-    ), an American astrophysicist

Missing Energy

In 1871, Hermann von Helmholtz, the German physicist, anatomist, and physiologist, suggested that no ordinary chemical reaction could be responsible for the enormous energy output of the Sun. By the 1920's, astrophysicists had realized that the energy radiated by the Sun must come from nuclear fusion, in which protons or nuclei combine to form larger nuclei and release energy. These reactions were assumed to be taking place deep in the interior of the Sun, in an immense thermonuclear furnace, where the pressures and temperatures were high enough to allow fusion to proceed. Conventional astronomical observations could record only the particles of light emitted by the much cooler outer layers of the Sun and could not provide evidence for the existence of a thermonuclear furnace in the interior.

Then scientists realized that the neutrino might be used to prove that this huge furnace existed. Of all the particles released in the fusion process, only one type—the neutrino—interacts so infrequently with matter that it can pass through the Sun and reach the earth. These neutrinos provide a way to verify directly the hypothesis of thermonuclear energy generated in stars.

The neutrino was "invented" in 1930 by the Austrian physicist Wolfgang Pauli to account for the apparent missing energy in the beta decay, or emission of an electron, from radioactive nuclei. He proposed that an unseen nuclear particle, which he called a neutrino, was also emitted in beta decay, and that it carried off the "missing" energy. To balance the energy but not be observed in the decay process, Pauli's hypothetical particle had to have no electrical
charge, have little or no mass, and interact only very weakly with ordinary matter. Typical neutrinos would have to be able to pass through millions of miles of ordinary matter in order to reach the earth. Scientists' detectors, and even the whole earth or Sun, were essentially transparent as far as Pauli's neutrinos were concerned.

Because the neutrino is so difficult to detect, it took more than twenty-five years to confirm its existence. In 1956, Clyde Cowan and Frederick Reines, both physicists at the Los Alamos National Laboratory, built the world's largest scintillation counter, a device to detect the small flash of light given off when the neutrino strikes ("interacts" with) a certain substance in the apparatus. They placed this scintillation counter near the Savannah River Nuclear Reactor, which was producing about 1 trillion neutrinos every second. Although only one neutrino interaction was observed in their detector every twenty minutes, Cowan and Reines were able to confirm the existence of Pauli's elusive particle. The task of detecting the solar neutrinos was even more formidable. If an apparatus similar to the Cowan and Reines detector were employed to search for the neutrinos from the Sun, only one interaction could be expected every few thousand years.

Missing Neutrinos

At about the same time that Cowan and Reines performed their experiment, another type of neutrino detector was under development by Raymond Davis, Jr., a chemist at the Brookhaven National Laboratory. Davis employed an idea, originally suggested in 1948 by the nuclear physicist Bruno Pontecorvo, that when a neutrino interacts with a chlorine-37 nucleus, it produces a nucleus of argon 37. Any argon so produced could then be extracted from large volumes of chlorine-rich liquid by passing helium gas through the liquid. Since argon 37 is radioactive, it is relatively easy to detect.

Davis tested a version of this neutrino detector, containing about 3,785 liters of carbon tetrachloride liquid, near a nuclear reactor at the Brookhaven National Laboratory from 1954 to 1956. In the scientific paper describing his results, Davis suggested that this type of neutrino detector could be made large enough to permit detection of solar neutrinos.

Although Davis’s first attempt to detect solar neutrinos from a limestone mine at Barberton, Ohio, failed, he continued his search with a much larger detector 1,478 meters underground in the Homestake Gold Mine in Lead, South Dakota. The cylindrical tank (6.1 meters in diameter, 16 meters long, and containing 378,540 liters of perchloroethylene) was surrounded by water to shield the detector from neutrons emitted by trace quantities of uranium and thorium in the walls of the mine. The experiment was conducted underground to shield it from cosmic radiation. To describe his results, Davis coined a new unit, the “solar neutrino unit” (SNU), with 1 SNU indicating the production of one atom of argon 37 every six days. Astrophysicist John Norris Bahcall, using the best available astronomical models of the nuclear reactions going on in the sun’s interior, as well as the physical properties of the neutrinos, had predicted a capture rate of 50 SNUs in 1963. The 1967 results from Davis’s detector, however, had an upper limit of only 3 SNUs.

Consequences

The main significance of the detection of solar neutrinos by Davis was the direct confirmation that thermonuclear fusion must be occurring at the center of the Sun. The low number of solar neutrinos Davis detected, however, has called into question some of the fundamental beliefs of astrophysics. As Bahcall explained: "We know more about the Sun than about any other star. . . . The Sun is also in what is believed to be the best-understood stage of stellar evolution. . . . If we are to have confidence in the many astronomical and cosmological applications of the theory of stellar evolution, it ought at least to give the right answers about the Sun."

Many solutions to the problem of the "missing" solar neutrinos have been proposed. Most of these solutions can be divided into two broad classes: those that challenge the model of the sun's interior and those that challenge the understanding of the behavior of the neutrino. Since the number of neutrinos produced is very sensitive to the temperature of the sun's interior, some astrophysicists have suggested that the true solar temperature may be lower than expected. Others suggest that the sun's outer layer may absorb more neutrinos than expected. Some physicists, however, believe neutrinos may occur in several different forms, only one of which can be detected by the chlorine detectors.

Neutrinos (chargeless, nearly massless subatomic particles) can pass through most forms of matter without interacting with other nuclear particles; by contrast, alpha rays are stopped by skin, beta rays by a thin sheet of aluminum, and gamma rays and X rays by lead.

Davis’s discovery of the low number of neutrinos reaching Earth has focused years of attention on a better understanding of how the Sun generates its energy and how the neutrino behaves. New and more elaborate solar neutrino detectors have been built with the aim of understanding stars, including the Sun, as well as the physics and behavior of the elusive neutrino. See also Radio interferometer; Weather satellite. Further Reading Bartusiak, Marcia. “Underground Astronomer.” Astronomy 28, no. 1 (January, 2000). “Neutrino Test to Probe Sun.” New Scientist 140, no. 1898 (November 6, 1993). “Pioneering Neutrino Astronomers to Share 2000 Wolf Prize in Physics.” Physics Today 53, no. 3 (March, 2000). Schwarzschild, Bertram. “Can Helium Mixing Explain the Solar Neutrino Shortages?” Physics Today 50, no. 3 (March, 1997). Zimmerman, Robert. “The Shadow Boxer.” The Sciences 36, no. 1 (January/February, 1996).

Nuclear magnetic resonance

The invention: A procedure that uses hydrogen atoms in the human body, strong electromagnets, radio waves, and detection equipment to produce images of sections of the brain.

The people behind the invention:
Raymond Damadian (1936-    ), an American physicist and inventor
Paul C. Lauterbur (1929-    ), an American chemist
Peter Mansfield (1933-    ), a scientist at the University of Nottingham, England

Peering into the Brain

Doctors have always wanted the ability to look into the skull and see the human brain without harming the patient who is being examined. Over the years, various attempts were made to achieve this ability. At one time, the use of X rays, which were first used by Wilhelm Conrad Röntgen in 1895, seemed to be an option, but it was found that X rays are absorbed by bone, so the skull made it impossible to use X-ray technology to view the brain. The relatively recent use of computed tomography (CT) scanning, a computer-assisted imaging technology, made it possible to view sections of the head and other areas of the body, but the technique requires that the part of the body being "imaged," or viewed, be subjected to a small amount of radiation, thereby putting the patient at risk. Positron emission tomography (PET) could also be used, but it requires that small amounts of radiation be injected into the patient, which also puts the patient at risk.

Since the early 1940's, however, a new technology had been developing. This technology, which appears to pose no risk to patients, is called "nuclear magnetic resonance spectroscopy." It was first used to study the molecular structures of pure samples of chemicals. This method developed until it could be used to follow one chemical as it changed into another, and then another, in a living cell. By 1971, Raymond Damadian had proposed that body images that were
more vivid and more useful than X rays could be produced by means of nuclear magnetic resonance spectroscopy. In 1978, he founded his own company, FONAR, which manufactured the scanners that are necessary for the technique.

Magnetic Resonance Images

The first nuclear magnetic resonance images (MRIs) were published by Paul Lauterbur in 1973. Although there seemed to be no possibility that MRI could be harmful to patients, everyone involved in MRI research was very cautious. In 1976, Peter Mansfield, at the University of Nottingham, England, obtained an MRI of his partner's finger. The next year, Paul Bottomley, a member of Waldo Hinshaw's research group at the same university, put his left wrist into an experimental machine that the group had developed. A vivid cross section that showed layers of skin, muscle, bone, muscle, and skin, in that order, appeared on the machine's monitor. Studies with animals showed no apparent memory or other brain problems. In 1978, Electrical and Musical Industries (EMI), a British corporate pioneer in electronics that merged with Thorn in 1980, obtained the first MRI of the human head. It took six minutes.

An MRI of the brain, or any other part of the body, is made possible by the water content of the body. The gray matter of the brain contains more water than the white matter does. The blood vessels and the blood itself also have water contents that are different from those of other parts of the brain. Therefore, the different structures and areas of the brain can be seen clearly in an MRI. Bone contains very little water, so it does not appear on the monitor. This is why the skull and the backbone cause no interference when the brain or the spinal cord is viewed.

Every water molecule contains two hydrogen atoms and one oxygen atom. A strong electromagnetic field causes the hydrogen nuclei to line up like marchers in a parade. Radio waves can be used to change the position of these aligned hydrogen nuclei.
When the radio waves are discontinued, a small radio signal is produced as the nuclei return to their marching position. This distinct radio signal is the basis for the production of the image on a computer screen.
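The alignment and radio-signal behavior described above occurs at a characteristic rate, the Larmor frequency, which is proportional to the strength of the magnetic field. The article gives no numbers, but a back-of-the-envelope sketch using the standard gyromagnetic ratio of the proton (a textbook constant, not a figure from the article) shows the radio frequencies involved:

```python
# Larmor frequency of hydrogen nuclei (protons) in a magnetic field:
# f = gamma * B, where gamma is the proton gyromagnetic ratio.
# GAMMA_H is a standard physical constant, not taken from the article.

GAMMA_H = 42.577  # MHz per tesla

def larmor_frequency_mhz(field_tesla: float) -> float:
    """Frequency (MHz) of the radio signal emitted by hydrogen nuclei."""
    return GAMMA_H * field_tesla

# Field strengths typical of clinical scanners
for b in (0.5, 1.5, 3.0):
    print(f"{b} T -> {larmor_frequency_mhz(b):.1f} MHz")
```

At the 1.5-tesla field of a typical clinical scanner, hydrogen nuclei signal at roughly 64 megahertz, squarely in the radio band, which is why the technique works with radio waves rather than X rays.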


Hydrogen was selected for use in MRI work because it is very abundant in the human body, it is part of the water molecule, and it has the proper magnetic qualities. The nucleus of the hydrogen atom consists of a single proton, a particle with a positive charge. The signal from the hydrogen's proton is comparatively strong.

There are several methods by which the radio signal from the hydrogen atom can be converted into an image. Each method uses a computer to create first a two-dimensional, then a three-dimensional, image. Peter Mansfield's team at the University of Nottingham holds the patent for the slice-selection technique that makes it possible to excite and image selectively a specific cross section of the brain or any other part of the body. This is the key patent in MRI technology. Damadian was granted a patent that described the use of two coils, one to drive and one to pick up signals across selected portions of the human body. EMI, the company that introduced the X-ray scanner for CT images, developed a commercial prototype for the MRI. The British Technology Group, a state-owned company that helps to bring innovations to the marketplace, has sixteen separate MRI-related patents. Ten years after EMI produced the first image of the human brain, patents and royalties were still being sorted out.

Consequences

MRI technology has revolutionized medical diagnosis, especially in regard to the brain and the spinal cord. For example, in multiple sclerosis, the loss of the covering on nerve cells can be detected. Tumors can be identified accurately. The painless and noninvasive use of MRI has almost completely replaced the myelogram, which involves using a needle to inject dye into the spine. Although there is every indication that the use of MRI is very safe, there are some people who cannot benefit from this valuable tool. Those whose bodies contain metal cannot be placed into the MRI machine. No one instrument can meet everyone's needs.
The development of MRI stands as an example of the interaction of achievements in various fields of science. Fundamental physics, biochemistry, physiology, electronic image reconstruction, advances in superconducting wires, the development of computers, and advancements in anatomy all contributed to the development of MRI. Its development is also the result of international efforts. Scientists and laboratories in England and the United States pioneered the technology, but contributions were also made by scientists in France, Switzerland, and Scotland. This kind of interaction and cooperation can only lead to greater understanding of the human brain.

See also
Amniocentesis; CAT scanner; Electrocardiogram; Electroencephalogram; Mammography; Ultrasound; X-ray image intensifier.

Further Reading
Elster, Allen D., and Jonathan H. Burdette. Questions and Answers in Magnetic Resonance Imaging. 2d ed. St. Louis, Mo.: Mosby, 2001.
Mackay, R. Stuart. Medical Images and Displays: Comparisons of Nuclear Magnetic Resonance, Ultrasound, X-rays, and Other Modalities. New York: Wiley, 1984.
Mattson, James, and Merrill Simon. The Story of MRI: The Pioneers of NMR and Magnetic Resonance in Medicine. Jericho, N.Y.: Dean Books, 1996.
Wakefield, Julie. "The 'Indomitable' MRI." Smithsonian 31, no. 3 (June, 2000).
Wolbarst, Anthony B. Looking Within: How X-ray, CT, MRI, Ultrasound, and Other Medical Images Are Created, and How They Help Physicians Save Lives. Berkeley: University of California Press, 1999.


Nuclear power plant

The invention: The first full-scale commercial nuclear power plant, which gave birth to the nuclear power industry.

The people behind the invention:
Enrico Fermi (1901-1954), an Italian American physicist who won the 1938 Nobel Prize in Physics
Otto Hahn (1879-1968), a German physical chemist who won the 1944 Nobel Prize in Chemistry
Lise Meitner (1878-1968), an Austrian-Swedish physicist
Hyman G. Rickover (1898-1986), a Polish American naval officer

Discovering Fission

Nuclear fission involves the splitting of an atomic nucleus, leading to the release of large amounts of energy. Nuclear fission was discovered in Germany in 1938 by Otto Hahn after he had bombarded uranium with neutrons and observed traces of radioactive barium. When Hahn's former associate, Lise Meitner, heard of this, she realized that the neutrons might have split the uranium nuclei (each of which holds 92 protons) into two smaller nuclei to produce barium (56 protons) and krypton (36 protons). Meitner and her nephew, Otto Robert Frisch, were able to calculate the enormous energy that would be released in this type of reaction. They published their results early in 1939.

Nuclear fission was quickly verified in several laboratories, and the Danish physicist Niels Bohr soon demonstrated that the rare uranium 235 (U-235) isotope is much more likely to fission than the common uranium 238 (U-238) isotope, which makes up 99.3 percent of natural uranium. It was also recognized that fission would produce additional neutrons that could cause new fissions, producing even more neutrons and thus creating a self-sustaining chain reaction. In this process, the fissioning of one kilogram of U-235 would release about as much energy as the burning of three million kilograms of coal.

The first controlled chain reaction was demonstrated on December 2, 1942, in a nuclear reactor at the University of Chicago, under


the leadership of Enrico Fermi. He used a graphite moderator to slow the neutrons by collisions with carbon atoms. "Critical mass" was achieved when the mass of graphite and uranium assembled was large enough that the number of neutrons not escaping from the pile would be sufficient to sustain a U-235 chain reaction. Cadmium control rods could be inserted to absorb neutrons and slow the reaction.

It was also recognized that the U-238 in the reactor would absorb accelerated neutrons to produce the new element plutonium, which is also fissionable. During World War II (1939-1945), large reactors were built to "breed" plutonium, which was easier to separate than U-235. An experimental breeder reactor at Arco, Idaho, was the first to use the energy of nuclear fission to produce a small amount of electricity (about 100 watts) on December 20, 1951.

Nuclear Electricity

Power reactors designed to produce substantial amounts of electricity use the heat generated by fission to produce steam or hot gas to drive a turbine connected to an ordinary electric generator. The first power reactor design to be developed in the United States was the pressurized water reactor (PWR). In the PWR, water under high pressure is used both as the moderator and as the coolant. After circulating through the reactor core, the hot pressurized water flows through a heat exchanger to produce steam. Reactors moderated by "heavy water" (in which the hydrogen in the water is replaced with deuterium, which contains an extra neutron) can operate with natural uranium.

The pressurized water system was used in the first reactor to produce substantial amounts of power, the experimental Mark I reactor. It was started up on May 31, 1953, at the Idaho National Engineering Laboratory. The Mark I became the prototype for the reactor used in the first nuclear-powered submarine. Under the leadership of Hyman G.
Rickover, who was head of the Division of Naval Reactors of the Atomic Energy Commission (AEC), Westinghouse Electric Corporation was engaged to build a PWR system to power the submarine USS Nautilus. It began sea trials in January of 1955 and ran for two years before refueling.


Cooling towers of a nuclear power plant. (PhotoDisc)

In the meantime, the first experimental nuclear power plant for generating electricity was completed in the Soviet Union in June of 1954, under the direction of the Soviet physicist Igor Kurchatov. It produced 5 megawatts of electric power. The first full-scale nuclear power plant was built in England under the direction of the British nuclear engineer Sir Christopher Hinton. It began producing about 90 megawatts of electric power in October, 1956.


On December 2, 1957, on the fifteenth anniversary of the first controlled nuclear chain reaction, the Shippingport Atomic Power Station in Shippingport, Pennsylvania, became the first full-scale commercial nuclear power plant in the United States. It produced about 60 megawatts of electric power for the Duquesne Light Company until 1964, when its reactor core was replaced, increasing its power to 100 megawatts with a maximum capacity of 150 megawatts.

Consequences

The opening of the Shippingport Atomic Power Station marked the beginning of the nuclear power industry in the United States, with all of its glowing promise and eventual problems. It was predicted that electrical energy would become too cheap to meter. The AEC hoped to encourage the participation of industry, with government support limited to research and development. It encouraged a variety of reactor types in the hope of extending technical knowledge. The Dresden Nuclear Power Station, completed by Commonwealth Edison in September, 1959, at Morris, Illinois, near Chicago, was the first full-scale privately financed nuclear power station in the United States.

By 1973, forty-two plants were in operation producing 26,000 megawatts, fifty more were under construction, and about one hundred were on order. Industry officials predicted that 50 percent of the nation's electric power would be nuclear by the end of the twentieth century.

The promise of nuclear energy has not been completely fulfilled. Growing concerns about safety and waste disposal have led to increased efforts to delay or block the construction of new plants. The cost of nuclear plants rose as legal delays and inflation pushed costs higher, so that many in the planning stages could no longer be competitive. The 1979 Three Mile Island accident in Pennsylvania and the much more serious 1986 Chernobyl accident in the Soviet Union increased concerns about the safety of nuclear power.
Nevertheless, by 1986, more than one hundred nuclear power plants were operating in the United States, producing about 60,000 megawatts of power. More than three hundred reactors in twenty-five countries provide about 200,000 megawatts of electric power worldwide.


Many believe that, properly controlled, nuclear energy offers a clean-energy solution to the problem of environmental pollution.

See also
Breeder reactor; Compressed-air-accumulating power plant; Fuel cell; Geothermal power; Nuclear reactor; Solar thermal engine; Tidal power plant.

Further Reading
Henderson, Harry. Nuclear Power: A Reference Handbook. Santa Barbara, Calif.: ABC-CLIO, 2000.
Rockwell, Theodore. The Rickover Effect: The Inside Story of How Admiral Hyman Rickover Built the Nuclear Navy. New York: J. Wiley, 1995.
Shea, William R. Otto Hahn and the Rise of Nuclear Physics. Boston: D. Reidel, 1983.
Sime, Ruth Lewin. Lise Meitner: A Life in Physics. Berkeley: University of California Press, 1996.


Nuclear reactor

The invention: The first nuclear reactor to produce substantial quantities of plutonium, making it practical to produce usable amounts of energy from a chain reaction.

The people behind the invention:
Enrico Fermi (1901-1954), an American physicist
Martin D. Whitaker (1902-1960), the first director of Oak Ridge National Laboratory
Eugene Paul Wigner (1902-1995), the director of research and development at Oak Ridge

The Technology to End a War

The construction of the nuclear reactor at Oak Ridge National Laboratory in 1943 was a vital part of the Manhattan Project, the effort by the United States during World War II (1939-1945) to develop an atomic bomb. The successful operation of that reactor was a major achievement not only for the project itself but also for the general development and application of nuclear technology. The first director of the Oak Ridge National Laboratory was Martin D. Whitaker; the director of research and development was Eugene Paul Wigner.

The nucleus of an atom is made up of protons and neutrons. "Fission" is the process by which the nucleus of certain elements is split in two by a neutron from some material that emits an occasional neutron naturally. When an atom splits, two things happen: A tremendous amount of thermal energy is released, and two or three neutrons, on the average, escape from the nucleus. If all the atoms in a kilogram of "uranium 235" were to fission, they would produce as much heat energy as the burning of 3 million kilograms of coal. The neutrons that are released are important, because if at least one of them hits another atom and causes it to fission (and thus to release more energy and more neutrons), the process will continue. It will become a self-sustaining chain reaction that will produce a continuing supply of heat.
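The coal comparison above can be checked with rough numbers. The constants below are standard textbook values (about 200 MeV released per fission, and a typical heating value for coal), not figures taken from the article:

```python
# Rough check: fissioning 1 kg of U-235 versus burning coal.
# All constants are standard textbook values, assumed for illustration.

AVOGADRO = 6.022e23                        # atoms per mole
U235_MOLAR_MASS = 235.0                    # grams per mole
ENERGY_PER_FISSION_J = 200e6 * 1.602e-19   # ~200 MeV per fission, in joules
COAL_ENERGY_J_PER_KG = 29e6                # typical heating value of coal

atoms_per_kg = 1000.0 / U235_MOLAR_MASS * AVOGADRO
fission_energy_j = atoms_per_kg * ENERGY_PER_FISSION_J
coal_equivalent_kg = fission_energy_j / COAL_ENERGY_J_PER_KG

print(f"Energy from fissioning 1 kg of U-235: {fission_energy_j:.2e} J")
print(f"Equivalent mass of coal burned: {coal_equivalent_kg:.2e} kg")
```

The result comes out on the order of three million kilograms of coal per kilogram of U-235, consistent with the figure in the text.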


Inside a reactor, a nuclear chain reaction is controlled so that it proceeds relatively slowly. The most familiar use for the heat thus released is to boil water and make steam to turn the turbine generators that produce electricity to serve industrial, commercial, and residential needs. The fissioning process in a weapon, however, proceeds very rapidly, so that all the energy in the atoms is produced and released virtually at once. The first application of nuclear technology, which used a rapid chain reaction, was to produce the two atomic bombs that ended World War II.

Breeding Bomb Fuel

The work that began at Oak Ridge in 1943 was made possible by a major event that took place in 1942. At the University of Chicago, Enrico Fermi had demonstrated for the first time that it was possible to achieve a self-sustaining atomic chain reaction. More important, the reaction could be controlled: It could be started up, it could generate heat and sufficient neutrons to keep itself going, and it could be turned off. That first chain reaction was very slow, and it generated very little heat; but it demonstrated that controlled fission was possible.

Any heat-producing nuclear reaction is an energy conversion process that requires fuel. There is only one readily fissionable element that occurs naturally and can be used as fuel. It is a form of uranium called uranium 235. It makes up less than 1 percent of all naturally occurring uranium. The remainder is uranium 238, which does not fission readily. Even uranium 235, however, must be enriched before it can be used as fuel. The process of enrichment increases the concentration of uranium 235 sufficiently for a chain reaction to occur. Enriched uranium is used to fuel the reactors used by electric utilities. Also, the much more plentiful uranium 238 can be converted into plutonium 239, a form of the human-made element plutonium, which does fission readily. That conversion process is the way fuel is produced for a nuclear weapon.
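The contrast drawn above between a slow, controlled reactor reaction and the very rapid reaction in a weapon can be reduced to a single number, the effective multiplication factor k (a standard term, not used in the article): the average number of neutrons from each fission that go on to cause another fission. A toy generation-by-generation sketch:

```python
# Toy model of neutron population over successive fission generations.
# k = 1 is a steady, controlled chain reaction (a reactor at power);
# k well above 1 grows explosively. Values chosen for illustration only.

def population_after(generations: int, k: float, start: float = 1.0) -> float:
    """Neutron population after a number of fission generations."""
    n = start
    for _ in range(generations):
        n *= k
    return n

print(population_after(80, 1.0))    # controlled: population stays constant
print(population_after(80, 1.005))  # slightly supercritical: gradual rise
print(population_after(80, 2.0))    # uncontrolled: astronomical growth
```

Control rods, which absorb neutrons, are the means of holding k at or below 1 in a reactor.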
Therefore, the major objective of the Oak Ridge effort was to develop a pilot operation for separating plutonium from the uranium in which it was produced. Large-scale plutonium production, which had never been attempted before, eventually would be done at the Hanford Engineer Works in Washington. First, however, plutonium had to be produced successfully on a small scale at Oak Ridge.

Part of the Oak Ridge National Laboratory, where plutonium was separated to create the first atomic bomb. (Martin Marietta)

The reactor was started up on November 4, 1943. By March 1, 1944, the Oak Ridge laboratory had produced several grams of plutonium. The material was sent to the Los Alamos laboratory in New Mexico for testing. By July, 1944, the reactor operated at four times its original power level. By the end of that year, however, plutonium production at Oak Ridge had ceased, and the reactor thereafter was used principally to produce radioisotopes for physical and biological research and for medical treatment. Ultimately, the Hanford Engineer Works' reactors produced the plutonium for the bomb that was dropped on Nagasaki, Japan, on August 9, 1945. The original objectives for which Oak Ridge had been built had been achieved, and subsequent activity at the facility was directed toward peacetime missions that included basic studies of the structure of matter.

Impact

The most immediate impact of the work done at Oak Ridge was its contribution to ending World War II. When the atomic bombs were dropped, the war ended, and the United States emerged intact. The immediate and long-range devastation to the people of Japan,


however, opened the public's eyes to the almost unimaginable death and destruction that could be caused by a nuclear war. Fears of such a war remain to this day, especially as more and more nations develop the technology to build nuclear weapons.

On the other hand, great contributions to human civilization have resulted from the development of nuclear energy. Electric power generation, nuclear medicine, spacecraft power, and ship propulsion have all profited from the pioneering efforts at the Oak Ridge National Laboratory. Currently, the primary use of nuclear energy is to produce electric power. Handled properly, nuclear energy may help to solve the pollution problems caused by the burning of fossil fuels.

See also
Breeder reactor; Compressed-air-accumulating power plant; Fuel cell; Geothermal power; Heat pump; Nuclear power plant; Solar thermal engine; Tidal power plant.

Further Reading
Epstein, Sam, Beryl Epstein, and Raymond Burns. Enrico Fermi: Father of Atomic Power. Champaign, Ill.: Garrard, 1970.
Johnson, Leland, and Daniel Schaffer. Oak Ridge National Laboratory: The First Fifty Years. Knoxville: University of Tennessee Press, 1994.
Morgan, K. Z., and Ken M. Peterson. The Angry Genie: One Man's Walk Through the Nuclear Age. Norman: University of Oklahoma Press, 1999.
Wagner, Francis S. Eugene P. Wigner, An Architect of the Atomic Age. Toronto: Rákóczi Foundation, 1981.


Nylon

The invention: A resilient, high-strength polymer with applications ranging from women's hose to safety nets used in space flights.

The people behind the invention:
Wallace Hume Carothers (1896-1937), an American organic chemist
Charles M. A. Stine (1882-1954), an American chemist and director of chemical research at Du Pont
Elmer Keiser Bolton (1886-1968), an American industrial chemist

Pure Research

In the twentieth century, American corporations created industrial research laboratories. Their directors became the organizers of inventions, and their scientists served as the sources of creativity. The research program of E. I. Du Pont de Nemours and Company (Du Pont), through its most famous invention—nylon—became the model for scientifically based industrial research in the chemical industry.

During World War I (1914-1918), Du Pont tried to diversify, concerned that after the war it would not be able to expand with only explosives as a product. Charles M. A. Stine, Du Pont's director of chemical research, proposed that Du Pont should move into fundamental research by hiring first-rate academic scientists and giving them freedom to work on important problems in organic chemistry. He convinced company executives that a program to explore the fundamental science underlying Du Pont's technology would ultimately result in discoveries of value to the company. In 1927, Du Pont gave him a new laboratory for research.

Stine visited universities in search of brilliant, but not-yet-established, young scientists. He hired Wallace Hume Carothers. Stine suggested that Carothers do fundamental research in polymer chemistry.


Before the 1920's, polymers were a mystery to chemists. Polymeric materials were the result of ingenious laboratory practice, and this practice ran far ahead of theory and understanding. German chemists debated whether polymers were aggregates of smaller units held together by some unknown special force or genuine molecules held together by ordinary chemical bonds. German chemist Hermann Staudinger asserted that they were large molecules with endlessly repeating units. Carothers shared this view, and he devised a scheme to prove it by synthesizing very large molecules by simple reactions in such a way as to leave no doubt about their structure. Carothers's synthesis of polymers revealed that they were ordinary molecules but giant in size.

The Longest Molecule

In April, 1930, Carothers's research group produced two major innovations: neoprene synthetic rubber and the first laboratory-synthesized fiber. Neither result was the goal of their research. Neoprene was an incidental discovery during a project to study short polymers of acetylene. During experimentation, an unexpected substance appeared that polymerized spontaneously. Carothers studied its chemistry and developed the process into the first successful synthetic rubber made in the United States.

The other discovery was an unexpected outcome of the group's project to synthesize polyesters by the reaction of acids and alcohols. Their goal was to create a polyester that could react indefinitely to form a substance with high molecular weight. The scientists encountered a molecular weight limit of about 5,000 units to the size of the polyesters, until Carothers realized that the reaction also produced water, which was decomposing polyesters back into acid and alcohol. Carothers and his associate Julian Hill devised an apparatus to remove the water as it formed. The result was a polyester with a molecular weight of more than 12,000, far higher than any previous polymer.
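Why removing water mattered so much can be illustrated with the relationship now known as the Carothers equation for step-growth polymerization (not stated in the article): the average chain length is 1/(1 − p), where p is the fraction of reactive end groups that have reacted. The repeat-unit mass of 100 g/mol below is an illustrative assumption, not a figure from the text:

```python
# Carothers equation for step-growth (condensation) polymerization:
# average degree of polymerization X_n = 1 / (1 - p), where p is the
# fractional conversion of end groups. Repeat-unit mass is assumed.

REPEAT_UNIT_MASS = 100.0  # g/mol, illustrative assumption

def conversion_needed(target_mw: float) -> float:
    """Fractional conversion p required to reach a target molecular weight."""
    x_n = target_mw / REPEAT_UNIT_MASS  # average number of repeat units
    return 1.0 - 1.0 / x_n

for mw in (5_000, 12_000):
    print(f"Molecular weight {mw}: conversion p = {conversion_needed(mw):.4f}")
```

Under these assumptions, a molecular weight of 5,000 already requires 98 percent conversion, and 12,000 requires over 99 percent; removing the water that was reversing the reaction is what let conversion climb those last fractions of a percent.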
Hill, while removing a sample from the apparatus, found that he could draw it out into filaments that on cooling could be stretched to form very strong fibers. This procedure, called “cold-drawing,” oriented the molecules from a random arrangement into a long, linear


one of great strength. The polyester fiber, however, was unsuitable for textiles because of its low melting point.

In June, 1930, Du Pont promoted Stine; his replacement as research director was Elmer Keiser Bolton. Bolton wanted to control fundamental research more closely, relating it to projects that would pay off and not allowing the research group freedom to pursue purely theoretical questions. Despite their differences, Carothers and Bolton shared an interest in fiber research.

On May 24, 1934, Bolton's assistant Donald Coffman "drew" a strong fiber from a new polyamide. This was the first nylon fiber, although not the one commercialized by Du Pont. The nylon fiber was high-melting and tough, and it seemed that a practical synthetic fiber might be feasible. By summer of 1934, the fiber project was the heart of the research group's activity. The polyamide with the best fiber properties was nylon 5-10, the numbers referring to the number of carbon atoms in the amine and acid chains. Yet the nylon 6-6 prepared on February 28, 1935, became Du Pont's nylon. Nylon 5-10 had some advantages, but Bolton realized that its components would be unsuitable for commercial production, whereas those of nylon 6-6 could be obtained from chemicals in coal.

A determined Bolton pursued nylon's practical development, a process that required nearly four years. Finally, in April, 1937, Du Pont filed a patent for synthetic fibers, which included a statement by Carothers that there was no previous work on polyamides; this was a major breakthrough. After Carothers's death on April 29, 1937, the patent was issued posthumously and assigned to Du Pont. Du Pont made the first public announcement of nylon on October 27, 1938.

Impact

Nylon was a generic term for polyamides, and several types of nylon became commercially important in addition to nylon 6-6. These nylons found widespread use as both a fiber and a moldable plastic.
Since it resisted abrasion and crushing, was nonabsorbent, was stronger than steel on a weight-for-weight basis, and was almost nonflammable, it embraced an astonishing range of uses: in


laces, screens, surgical sutures, paint, toothbrushes, violin strings, coatings for electrical wires, lingerie, evening gowns, leotards, athletic equipment, outdoor furniture, shower curtains, handbags, sails, luggage, fish nets, carpets, slip covers, bus seats, and even safety nets on the space shuttle.

The invention of nylon stimulated notable advances in the chemistry and technology of polymers. Some historians of technology have even dubbed the postwar period the "age of plastics," the age of synthetic products based on the chemistry of giant molecules made by ingenious chemists and engineers.

The success of nylon and other synthetics, however, has come at a cost. Several environmental problems have surfaced, such as those created by the nondegradable feature of some plastics, and there is the problem of the increasing utilization of valuable, vanishing resources, such as petroleum, which contains the essential chemicals needed to make polymers. The challenge to reuse and recycle these polymers is being addressed by both scientists and policymakers.

See also
Buna rubber; Neoprene; Orlon; Plastic; Polyester; Polyethylene; Polystyrene.

Further Reading
Furukawa, Yasu. Inventing Polymer Science: Staudinger, Carothers, and the Emergence of Macromolecular Chemistry. Philadelphia: University of Pennsylvania Press, 1998.
Handley, Susannah. Nylon: The Story of a Fashion Revolution: A Celebration of Design from Art Silk to Nylon and Thinking Fibres. Baltimore: Johns Hopkins University Press, 1999.
Hermes, Matthew E. Enough for One Lifetime: Wallace Carothers, Inventor of Nylon. Washington, D.C.: American Chemical Society and the Chemical Heritage Foundation, 1996.
Joyce, Robert M. Elmer Keiser Bolton: June 23, 1886-July 30, 1968. Washington, D.C.: National Academy Press, 1983.


Oil-well drill bit

The invention: A rotary cone drill bit that enabled oil-well drillers to penetrate hard rock formations.

The people behind the invention:
Howard R. Hughes (1869-1924), an American lawyer, drilling engineer, and inventor
Walter B. Sharp (1860-1912), an American drilling engineer, inventor, and partner to Hughes

Digging for Oil

A rotary drill rig of the 1990's is basically unchanged in its essential components from its earlier versions of the 1900's. A drill bit is attached to a line of hollow drill pipe. The latter passes through a hole on a rotary table, which acts essentially as a horizontal gear wheel and is driven by an engine. As the rotary table turns, so do the pipe and drill bit.

During drilling operations, mud-laden water is pumped under high pressure down the sides of the drill pipe and jets out with great force through the small holes in the rotary drill bit against the bottom of the borehole. This fluid then returns outside the drill pipe to the surface, carrying with it rock material cuttings from below. Circulated rock cuttings and fluids are regularly examined at the surface to determine the precise type and age of rock formation and for signs of oil and gas.

A key part of the total rotary drilling system is the drill bit, which has sharp cutting edges that make direct contact with the geologic formations to be drilled. The first bits used in rotary drilling were paddlelike "fishtail" bits, fairly successful for softer formations, and tubular coring bits for harder surfaces.

In 1893, M. C. Baker and C. E. Baker brought a rotary water-well drill rig to Corsicana, Texas, for modification to deeper oil drilling. This rig led to the discovery of the large Corsicana-Powell oil field in Navarro County, Texas. This success also motivated its operators, the American Well and Prospecting Company, to begin the first large-scale manufacture of rotary drilling rigs for commercial sale.


In the earliest rotary drilling for oil, short fishtail bits were the tool of choice because they were then best able to bore through a wide range of geologic strata without needing frequent replacement. Even so, many bits were typically required in the course of any given oil well, particularly in coastal drilling in the Gulf of Mexico. Especially when encountering locally harder rock units such as limestone, dolomite, or gravel beds, fishtail bits would typically either curl backward or break off in the hole, requiring the time-consuming work of pulling out all drill pipe and "fishing" to retrieve fragments and clear the hole.

Because of the frequent bit wear and damage, numerous small blacksmith shops established themselves near drill rigs, dressing or sharpening bits with a hand forge and hammer. Each bit-forging shop had its own particular way of shaping bits, producing a wide variety of designs. Nonstandard bit designs were frequently modified further as experiments to meet the specific requests of local drillers encountering specific drilling difficulties in given rock layers.

Speeding the Process

In 1907 and 1908, patents were obtained in New Jersey and Texas for steel, cone-shaped drill bits incorporating a roller-type coring device with many serrated teeth. Later in 1908, both patents were bought by lawyer Howard R. Hughes. Although comparatively weak rocks such as sands, clays, and soft shales could be drilled rapidly (at rates exceeding 30 meters per hour), in harder shales, lime-dolostones, and gravels, drill rates of 1 meter per hour or less were not uncommon. Conventional drill bits of the time had average operating lives of three to twelve hours. Economic drilling mandated increases in both bit life and drilling rate. Directly motivated by his petroleum prospecting interests, Hughes and his partner, Walter B. Sharp, undertook what were probably the first recorded systematic studies of drill bit performance matched against specific rock layers.

Howard R. Hughes

Howard Hughes (1905-1976) is famous for having been one of the most dashing, innovative, quirky tycoons of the twentieth century. It all started with his father, Howard R. Hughes. In fact it was the father's enterprise, Hughes Tool Company, that the son took over at age eighteen and built into an immense financial empire based on high-tech products.

The senior Hughes was born in Lancaster, Missouri, in 1869. He spent his boyhood in Keokuk, Iowa, where his own father practiced law. He himself studied law at Harvard University and the University of Iowa and then joined his father's practice, but not for long. In 1901 news came of a big oil strike near Beaumont, Texas. Like hundreds of other ambitious men, Hughes headed there. By 1906 he had immersed himself in the technical problems of drilling and began experimenting to improve drill bits. He produced a wooden model of the roller-type drill two years later while in Oil City, Louisiana. With business associate Walter Sharp he successfully tested a prototype in an oil well in the Goose Creek field near Houston. It drilled faster and more efficiently than those then in use.

Hughes and Sharp opened the Sharp-Hughes Tool Company to manufacture the drills and related equipment, and their products quickly became the industry standard. A shrewd business strategist, Hughes leased, rather than sold, his drill bits for $30,000 per well, retaining his patents to preserve his monopoly over the rotary drill technology. After Sharp died in 1912, Hughes changed the company to the Hughes Tool Company. When Hughes himself died in 1924, he left his son, then a student at Rice Institute (later Rice University), the company and a million-dollar fortune, which Hughes junior would eventually multiply hundreds of times over.

Although many improvements in detail and materials have been made to the Hughes cone bit since its inception in 1908, its basic design is still used in rotary drilling. One of Hughes's major innovations was the much larger size of the cutters, symmetrically distributed as a large number of small individual teeth on the outer face of two or more cantilevered bearing pins. In addition, "hard facing" was applied to drill bit teeth to increase usable life. Hard facing is a metallurgical process basically consisting of welding a thin layer of a hard metal or alloy of special composition to a metal surface to increase its resistance to abrasion and heat. A less noticeable but equally essential innovation, not included in other drill bit patents,


was an ingeniously designed gauge surface that provided strong uniform support for all the drill teeth. The force-fed oil lubrication was another new feature included in Hughes’s patent and prototypes, reducing the power necessary to rotate the bit by 50 percent over that of prior mud or water lubricant designs.

Impact

In 1925, the first superhard facing was used on cone drill bits. In addition, the first so-called self-cleaning rock bits appeared from Hughes, with significant advances in roller bearings and bit tooth shape translating into increased drilling efficiency. The much larger teeth were more adaptable to drilling in a wider variety of geological formations than earlier models. In 1928, tungsten carbide was introduced as an additional bit facing hardener by Hughes metallurgists. This, together with other improvements, resulted in the Hughes ACME tooth form, which has been in almost continuous use since 1926.

Many other drilling support technologies, such as drilling mud, mud circulation pumps, blowout detectors and preventers, and pipe properties and connectors, have enabled rotary drilling rigs to reach new depths (exceeding 5 kilometers in 1990). The successful experiments by Hughes in 1908 were critical initiators of these developments.

See also Geothermal power; Steelmaking process; Thermal cracking process.

Further Reading

Brantly, John Edward. History of Oil Well Drilling. Houston: Gulf Publishing, 1971.
Charlez, Philippe A. Rock Mechanics. Vol. 2: Petroleum Applications. Paris: Editions Technip, 1997.
Rao, Karanam Umamaheshwar, and Misra Banabihari. Principles of Rock Drilling. Brookfield, Vt.: Balkema, 1998.


Optical disk

The invention: A nonmagnetic storage medium for computers that can hold much greater quantities of data than similar-size magnetic media, such as hard and floppy disks.

The people behind the invention:
Klaas Compaan, a Dutch physicist
Piet Kramer, head of Philips’ optical research laboratory
Lou F. Ottens, director of product development for Philips’ musical equipment division
George T. de Kruiff, manager of Philips’ audio-product development department
Joop Sinjou, a Philips project leader

Holograms Can Be Copied Inexpensively

Holography is a lensless photographic method that uses laser light to produce three-dimensional images. This is done by splitting a laser beam into two beams. One of the beams is aimed at the object whose image is being reproduced so that the laser light will reflect from the object and strike a photographic plate or film. The second beam of light is reflected from a mirror near the object and also strikes the photographic plate or film. The “interference pattern,” which is simply the pattern created by the differences between the two reflected beams of light, is recorded on the photographic surface. The recording that is made in this way is called a “hologram.” When laser light or white light strikes the hologram, an image is created that appears to be a three-dimensional object.

Early in 1969, Radio Corporation of America (RCA) engineers found a way to copy holograms inexpensively by impressing interference patterns on a nickel sheet that then became a mold from which copies could be made. Klaas Compaan, a Dutch physicist, learned of this method and had the idea that images could be recorded in a similar way and reproduced on a disk the size of a phonograph record. Once the images were on the disk, they could be projected onto a screen in any sequence. Compaan saw the possibilities of such a technology in the fields of training and education.


Computer Data Storage Breakthrough

In 1969, Compaan shared his idea with Piet Kramer, who was the head of Philips’ optical research laboratory. The idea intrigued Kramer. Between 1969 and 1971, Compaan spent much of his time working on the development of a prototype. By September, 1971, Compaan and Kramer, together with a handful of others, had assembled a prototype that could read a black-and-white video signal from a spinning glass disk. Three months later, they demonstrated it for senior managers at Philips. In July, 1972, a color prototype was demonstrated publicly.

After the demonstration, Philips began to consider putting sound, rather than images, on the disks. The main attraction of that idea was that the 12-inch (305-millimeter) disks would hold up to forty-eight hours of music. Very quickly, however, Lou F. Ottens, director of product development for Philips’ musical equipment division, put an end to any talk of a long-playing audio disk. Ottens had developed the cassette-tape cartridge in the 1960’s. He had plenty of experience with the recording industry, and he had no illusions that the industry would embrace that new medium. He was convinced that the recording companies would consider forty-eight hours of music unmarketable. He also knew that any new medium would have to offer a dramatic improvement over existing vinyl records.

In 1974, only three years after the first microprocessor (the basic element of computers) was invented, designing a digital consumer product—rather than an analog product such as those that were already commonly accepted—was risky. (Digital technology uses numbers to represent information, whereas analog technology represents information by mechanical or physical means.) When George T. de Kruiff became Ottens’s manager of audio-product development in June, 1974, he was amazed that there were no digital circuit specialists in the audio department.
De Kruiff recruited new digital engineers, bought computer-aided design tools, and decided that the project should go digital. Within a few months, Ottens’s engineers had rigged up a digital system. They used an audio signal that was representative of an acoustical wave, sampled it to change it to digital form, and encoded it as a series of pulses. On the disk itself, they varied the length of the “dimples” that were used to represent the sound so that the rising and falling edges of the series of pulses corresponded to the dimples’ walls. A helium-neon laser was reflected from the dimples to photodetectors that were connected to a digital-to-analog converter.

In 1978, Philips demonstrated a prototype for Polygram (a West German company) and persuaded Polygram to develop an inexpensive disk material with the appropriate optical qualities. Most important was that the material could not warp. Polygram spent about $150,000 and three months to develop the disk. In addition, it was determined that the gallium-arsenide (GaAs) laser would be used in the project. Sharp Corporation agreed to manufacture a long-life GaAs diode laser to Philips’ specifications. The optical-system designers wanted to reduce the number of parts in order to decrease manufacturing costs and improve reliability. Therefore, the lenses were simplified, and considerable work was devoted to developing an error-correction code. Philips and Sony engineers also worked together to create a standard format. In 1983, Philips made almost 100,000 units of optical disks.
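The sampling-and-encoding step the engineers worked out is what later became standard pulse-code modulation (PCM) on audio compact discs, which store 16-bit samples taken 44,100 times per second. The short sketch below illustrates the principle only; the function names, the test tone, and the one-millisecond duration are hypothetical examples, not Philips’ actual circuitry.

```python
import math

# Pulse-code modulation (PCM) sketch: sample an analog waveform at a
# fixed rate and quantize each sample to a signed 16-bit integer.

SAMPLE_RATE = 44_100   # samples per second (the audio-CD standard)
BITS = 16              # bits per sample (the audio-CD standard)

def pcm_encode(signal, duration_s, rate=SAMPLE_RATE, bits=BITS):
    """Sample a continuous signal (a function of time returning values
    in -1..1) and quantize it to integers of the given bit depth."""
    max_level = 2 ** (bits - 1) - 1          # 32767 for 16 bits
    n_samples = int(duration_s * rate)
    return [round(signal(i / rate) * max_level) for i in range(n_samples)]

# A 1-kilohertz test tone, encoded for one millisecond of audio.
tone = lambda t: math.sin(2 * math.pi * 1000 * t)
samples = pcm_encode(tone, 0.001)
print(len(samples), samples[:4])
```

On a real disc these numbers are then serialized as pulses, with the walls of the pits (“dimples”) marking the rising and falling edges, as the article describes.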

(Figure: An optical memory. A laser beam strikes the spinning optical disk; arrows indicate the direction of rotation and the direction of the laser.)


Consequences

In 1983, one of the most successful consumer products of all time was introduced: the optical-disk system. The overwhelming success of optical-disk reproduction led to the growth of a multibillion-dollar industry around optical information and laid the groundwork for a whole crop of technologies that promise to revolutionize computer data storage.

Common optical-disk products are the compact disc (CD), the compact disc read-only memory (CD-ROM), the write-once, read-many (WORM) disk, the erasable disk, and CD-I (interactive CD). The CD-ROM, the WORM, and the erasable optical disk, all of which are used in computer applications, can hold more than 550 megabytes, from 200 to 800 megabytes, and 650 megabytes of data, respectively. The CD-ROM is a nonerasable disc that is used to store computer data. After the write-once operation is performed, a WORM becomes a read-only optical disk. An erasable optical disk can be erased and rewritten easily. CD-ROMs, coupled with expert-system technology, are expected to make data retrieval easier. The CD-ROM, the WORM, and the erasable optical disk may replace magnetic hard and floppy disks as computer data storage devices.

See also Bubble memory; Compact disc; Computer chips; Floppy disk; Hard disk; Holography.

Further Reading

Fox, Barry. “Head to Head in the Recording Wars.” New Scientist 136, no. 1843 (October 17, 1992).
Goff, Leslie. “Philips’ Eye on the Future.” Computerworld 33, no. 32 (August 9, 1999).
Kolodziej, Stan. “Optical Discs: The Dawn of a New Era in Mass Storage.” Canadian Datasystems 14, no. 9 (September, 1982): 36-39.
Savage, Maria. “Beyond Film.” Bulletin of the American Society for Information Science 7, no. 1 (October, 1980).


Orlon

The invention: A synthetic fiber made from polyacrylonitrile that has become widely used in textiles and in the preparation of high-strength carbon fibers.

The people behind the invention:
Herbert Rein (1899-1955), a German chemist
Ray C. Houtz (1907), an American chemist

A Difficult Plastic

“Polymers” are large molecules that are made up of chains of many smaller molecules, called “monomers.” Materials that are made of polymers are also called polymers, and some polymers, such as proteins, cellulose, and starch, occur in nature. Most polymers, however, are synthetic materials, which means that they were created by scientists. The twenty-year period beginning in 1930 was the age of great discoveries in polymers by both chemists and engineers. During this time, many of the synthetic polymers, which are also known as plastics, were first made and their uses found. Among these polymers were nylon, polyester, and polyacrylonitrile.

The last of these materials, polyacrylonitrile (PAN), was first synthesized by German chemists in the late 1920’s. They linked more than one thousand of the small, organic molecules of acrylonitrile to make a polymer. The polymer chains of this material had the properties that were needed to form strong fibers, but there was one problem. Instead of melting when heated to a high temperature, PAN simply decomposed. This made it impossible, with the technology that existed then, to make fibers. The best method available to industry at that time was the process of melt spinning, in which fibers were made by forcing molten polymer through small holes and allowing it to cool. Researchers realized that, if PAN could be put into a solution, the same apparatus could be used to spin PAN fibers. Scientists in Germany and the United States tried to find a solvent or liquid that would dissolve PAN, but they were unsuccessful until World War II began.


Fibers for War

In 1938, the German chemist Walter Reppe developed a new class of organic solvents called “amides.” These new liquids were able to dissolve many materials, including some of the recently discovered polymers. When World War II began in 1939, both the Germans and the Allies needed to develop new materials for the war effort. Materials such as rubber and fibers were in short supply. Thus, there was increased governmental support for chemical and industrial research on both sides of the war. This support was to result in two independent solutions to the PAN problem.

In 1942, Herbert Rein, while working for I. G. Farben in Germany, discovered that PAN fibers could be produced from a solution of polyacrylonitrile dissolved in the newly synthesized solvent dimethylformamide. At the same time, Ray C. Houtz, who was working for E. I. Du Pont de Nemours in Wilmington, Delaware, found that the related solvent dimethylacetamide would also form excellent PAN fibers. His work was patented, and some fibers were produced for use by the military during the war. In 1950, Du Pont began commercial production of a form of polyacrylonitrile fibers called Orlon. The Monsanto Company followed with a fiber called Acrilan in 1952, and other companies began to make similar products in 1958.

There are two ways to produce PAN fibers. In both methods, polyacrylonitrile is first dissolved in a suitable solvent. The solution is next forced through small holes in a device called a “spinneret.” The solution emerges from the spinneret as thin streams of a thick, gooey liquid. In the “wet spinning method,” the streams then enter another liquid (usually water or alcohol), which extracts the solvent from the solution, leaving behind the pure PAN fiber. After air drying, the fiber can be treated like any other fiber. The “dry spinning method” uses no liquid. Instead, the solvent is evaporated from the emerging streams by means of hot air, and again the PAN fiber is left behind.
In 1944, another discovery was made that is an important part of the polyacrylonitrile fiber story. W. P. Coxe of Du Pont and L. L. Winter at Union Carbide Corporation found that, when PAN fibers are heated under certain conditions, the polymer decomposes and changes into graphite (one of the elemental forms of carbon) but still


keeps its fiber form. In contrast to most forms of graphite, these fibers were exceptionally strong. These were the first carbon fibers ever made. Originally known as “black Orlon,” they were first produced commercially by the Japanese in 1964, but they were too weak to find many uses. After new methods of graphitization were developed jointly by labs in Japan, Great Britain, and the United States, the strength of the carbon fibers was increased, and the fibers began to be used in many fields.

Impact

As had been predicted earlier, PAN fibers were found to have some very useful properties. Their discovery and commercialization helped pave the way for the acceptance and wide use of polymers. The fibers derive their properties from the stiff, rodlike structure of polyacrylonitrile. Known as acrylics, these fibers are more durable than cotton, and they are the best alternative to wool for sweaters. Acrylics are resistant to heat and chemicals, can be dyed easily, resist fading or wrinkling, and are mildew-resistant. Thus, after their introduction, PAN fibers were very quickly made into yarns, blankets, draperies, carpets, rugs, sportswear, and various items of clothing. Often, the fibers contain small amounts of other polymers that give them additional useful properties.

A significant amount of PAN fiber is used in making carbon fibers. These lightweight fibers are stronger for their weight than any known material, and they are used to make high-strength composites for applications in aerospace, the military, and sports. A “fiber composite” is a material made from two parts: a fiber, such as carbon or glass, and something to hold the fibers together, which is usually a plastic called an “epoxy.” Fiber composites are used in products that require great strength and light weight. Their applications can be as ordinary as a tennis racket or fishing pole or as exotic as an airplane tail or the body of a spacecraft.
See also Buna rubber; Neoprene; Nylon; Plastic; Polyester; Polyethylene; Polystyrene.


Further Reading

Handley, Susannah. Nylon: The Story of a Fashion Revolution: A Celebration of Design from Art Silk to Nylon and Thinking Fibres. Baltimore: Johns Hopkins University Press, 1999.
Hunter, David. “Du Pont Bids Adieu to Acrylic Fibers.” Chemical Week 146, no. 24 (June 20, 1990).
Kornheiser, Tony. “So Long, Orlon.” Washington Post (June 13, 1990).
Seymour, Raymond Benedict, and Roger Stephen Porter. Manmade Fibers: Their Origin and Development. New York: Elsevier Applied Science, 1993.


Pacemaker

The invention: A small device using transistor circuitry that regulates the heartbeat of the patient in whom it is surgically emplaced.

The people behind the invention:
Ake Senning (1915), a Swedish physician
Rune Elmquist, co-inventor of the first pacemaker
Paul Maurice Zoll (1911), an American cardiologist

Cardiac Pacing

The fundamentals of cardiac electrophysiology (the electrical activity of the heart) were determined during the eighteenth century; the first successful cardiac resuscitation by electrical stimulation occurred in 1774. The use of artificial pacemakers for resuscitation was demonstrated in 1929 by Mark Lidwell. Lidwell and his coworkers developed a portable apparatus that could be connected to a power source. The pacemaker was used successfully on several stillborn infants after other methods of resuscitation failed. Nevertheless, these early machines were unreliable.

Ake Senning’s first experience with the effect of electrical stimulation on cardiac physiology was memorable; grasping a radio ground wire, Senning felt a brief episode of ventricular arrhythmia (irregular heartbeat). Later, he was able to apply a similar electrical stimulation to control a heartbeat during surgery. The principle of electrical regulation of the heart was valid. It was shown that pacemakers introduced intravenously into the sinus node area of a dog’s heart could be used to control the heartbeat rate. Although Paul Maurice Zoll utilized a similar apparatus in several patients with cardiac arrhythmia, it was not appropriate for extensive clinical use; it was large and often caused unpleasant sensations or burns. In 1957, however, Ake Senning observed that attaching stainless steel electrodes to a child’s heart made it possible to regulate the heart’s rate of contraction. Senning considered this to represent the beginning of the era of clinical pacing.


Development of Cardiac Pacemakers

Senning’s observations of the successful use of the cardiac pacemaker had allowed him to identify the problems inherent in the device. He realized that the attachment of the device to the lower, ventricular region of the heart made possible more reliable control, but other problems remained unsolved. It was inconvenient, for example, to carry the machine externally; a cord was wrapped around the patient that allowed the pacemaker to be recharged, which had to be done frequently. Also, for unknown reasons, heart resistance would increase with use of the pacemaker, which meant that increasingly large voltages had to be used to stimulate the heart. Levels as high as 20 volts could cause quite a “start” in the patient. Furthermore, there was a continuous threat of infection.

In 1957, Senning and his colleague Rune Elmquist developed a pacemaker that was powered by rechargeable nickel-cadmium batteries, which had to be recharged once a month. Although Senning and Elmquist did not yet consider the pacemaker ready for human testing, fate intervened. A forty-three-year-old man was admitted to the hospital suffering from an atrioventricular block, an inability of the electrical stimulus to travel along the conductive fibers of the “bundle of His” (a band of cardiac muscle fibers). As a result of this condition, the patient required repeated cardiac resuscitation. Similar types of heart block were associated with a mortality rate higher than 50 percent per year and nearly 95 percent over five years. Senning implanted two pacemakers (one failed) into the myocardium of the patient’s heart, one of which provided a regulatory rate of 64 beats per minute. Although the pacemakers required periodic replacement, the patient remained alive and active for twenty years. (He later became president of the Swedish Association for Heart and Lung Disease.)
During the next five years, the development of more reliable and more complex pacemakers continued, and implanting the pacemaker through the vein rather than through the thorax made the procedure simpler. The first pacemakers were of the “asynchronous” type, which generated a regular charge that overrode the natural pacemaker in the heart. The rate could be set by the physician but could not be altered if the need arose. In 1963, an atrial-triggered synchronous pacemaker was installed by a Swedish team. The advantage of this apparatus lay in its ability to trigger a heart contraction only when the normal heart rhythm was interrupted. Most of these pacemakers contained a sensing device that detected the atrial impulse and generated an electrical discharge only when the heart rate fell below 68 to 72 beats per minute.

The biggest problems during this period lay in the size of the pacemaker and the short life of the battery. The expiration of the electrical impulse sometimes caused the death of the patient. In addition, the most reliable method of checking the energy level of the battery was to watch for a decreased pulse rate. As improvements were made in electronics, the pacemaker became smaller, and in 1972, the more reliable lithium-iodine batteries were introduced. These batteries made it possible to store more energy and to monitor the energy level more effectively. The use of this type of power source essentially eliminated the battery as the limiting factor in the longevity of the pacemaker. The period of time that a pacemaker could operate continuously in the body increased from a period of days in 1958 to five to ten years by the 1970’s.

Consequences

The development of electronic heart pacemakers revolutionized cardiology. Although the initial machines were used primarily to control cardiac bradycardia, the often life-threatening slowing of the heartbeat, a wide variety of arrhythmias and problems with cardiac output can now be controlled through the use of these devices. The success associated with the surgical implantation of pacemakers is attested by the frequency of its use. Prior to 1960, only three pacemakers had been implanted. During the 1990’s, however, some 300,000 were implanted each year throughout the world. In the United States, the prevalence of implants is on the order of 1 per 1,000 persons in the population. Pacemaker technology continues to improve.
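The demand logic of the synchronous pacemaker described above, firing a stimulus only when the heart’s own rate falls below a set threshold, can be sketched in a few lines. This is an illustrative toy, not a real device algorithm: the 70-beats-per-minute threshold and the beat timings are hypothetical examples.

```python
# Demand-pacing sketch: watch the interval since the last sensed
# natural beat, and insert a stimulus whenever that interval would
# correspond to a rate below the threshold.

THRESHOLD_BPM = 70
MAX_INTERVAL_S = 60.0 / THRESHOLD_BPM   # longest allowed gap between beats

def paced_beats(natural_beat_times):
    """Given times (in seconds) of sensed natural beats, return the
    times at which the device would add a stimulus of its own."""
    stimuli = []
    last_beat = natural_beat_times[0]
    for t in natural_beat_times[1:]:
        # While the natural gap is too long, fill it with paced beats.
        while t - last_beat > MAX_INTERVAL_S:
            last_beat += MAX_INTERVAL_S
            stimuli.append(round(last_beat, 3))
        last_beat = t
    return stimuli

# A natural rhythm that pauses for three seconds between 2.0 s and 5.0 s:
print(paced_beats([0.0, 0.8, 1.6, 2.0, 5.0, 5.8]))
```

A steady natural rhythm above the threshold produces no stimuli at all, which is exactly the advantage the Swedish team’s design had over the earlier asynchronous type.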
Newer models can sense pH and oxygen levels in the blood, as well as respiratory rate. They have become further sensitized to minor electrical disturbances and can adjust accordingly. The use of easily sterilized circuitry has eliminated the danger of infection. Once the pacemaker


has been installed in the patient, the basic electronics require no additional attention. With the use of modern pacemakers, many forms of electrical arrhythmias need no longer be life-threatening.

See also Artificial heart; Contact lenses; Coronary artery bypass surgery; Electrocardiogram; Hearing aid; Heart-lung machine.

Further Reading

Bigelow, W. G. Cold Hearts: The Story of Hypothermia and the Pacemaker in Heart Surgery. Toronto: McClelland and Stewart, 1984.
Greatbatch, Wilson. The Making of the Pacemaker: Celebrating a Lifesaving Invention. Amherst, N.Y.: Prometheus Books, 2000.
“The Pacemaker.” Newsweek 130, no. 24A (Winter, 1997/1998).
Thalen, H. J. The Artificial Cardiac Pacemaker: Its History, Development and Clinical Application. London: Heinemann Medical, 1969.


Pap test

The invention: A cytologic technique for diagnosing uterine cancer, the second most common fatal cancer in American women.

The people behind the invention:
George N. Papanicolaou (1883-1962), a Greek-born American physician and anatomist
Charles Stockard (1879-1939), an American anatomist
Herbert Traut (1894-1972), an American gynecologist

Cancer in History

Cancer, first named by the ancient Greek physician Hippocrates of Cos, is one of the most painful and dreaded forms of human disease. It occurs when body cells run wild and interfere with the normal activities of the body. The early diagnosis of cancer is extremely important because early detection often makes it possible to effect successful cures. The modern detection of cancer is usually done by the microscopic examination of the cancer cells, using the techniques of the area of biology called “cytology,” or cell biology.

Development of cancer cytology began in 1867, after L. S. Beale reported tumor cells in the saliva from a patient who was afflicted with cancer of the pharynx. Beale recommended the use in cancer detection of microscopic examination of cells shed or removed (exfoliated) from organs including the digestive, the urinary, and the reproductive tracts. Soon, other scientists identified numerous striking differences between normal and cancerous cells, including differences in cell size and shape and in the size and complexity of cell nuclei.

Modern cytologic detection of cancer evolved from the work of George N. Papanicolaou, a Greek physician who trained at the University of Athens Medical School. In 1913, he emigrated to the United States. In 1917, he began studying sex determination of guinea pigs with Charles Stockard at New York’s Cornell Medical College. Papanicolaou’s efforts required him to obtain ova (egg cells) at a precise period in their maturation cycle, a process that required an indicator


of the time at which the animals ovulated. In search of this indicator, Papanicolaou designed a method that involved microscopic examination of the vaginal discharges from female guinea pigs. Initially, Papanicolaou sought traces of blood, such as those seen in the menstrual discharges from both primates and humans. Papanicolaou found no blood in the guinea pig vaginal discharges. Instead, he noticed changes in the size and the shape of the uterine cells shed in these discharges. These changes recurred in a fifteen- to sixteen-day cycle that correlated well with the guinea pig menstrual cycle.

“New Cancer Detection Method”

Papanicolaou next extended his efforts to the study of humans. This endeavor was designed originally to identify whether comparable changes in the exfoliated cells of the human vagina occurred in women. Its goal was to gain an understanding of the human menstrual cycle. In the course of this work, Papanicolaou observed distinctive abnormal cells in the vaginal fluid from a woman afflicted with cancer of the cervix. This led him to begin to attempt to develop a cytologic method for the detection of uterine cancer, the second most common type of fatal cancer in American women of the time.

In 1928, Papanicolaou published his cytologic method of cancer detection in the Proceedings of the Third Race Betterment Conference, held in Battle Creek, Michigan. The work was received well by the news media (for example, the January 5, 1928, New York World credited him with a “new cancer detection method”). Nevertheless, the publication—and others he produced over the next ten years—was not very interesting to gynecologists of the time. Rather, they preferred use of the standard methodology of uterine cancer diagnosis (cervical biopsy and curettage). Consequently, in 1932, Papanicolaou turned his energy toward studying human reproductive endocrinology problems related to the effects of hormones on cells of the reproductive system.
One example of this work was published in a 1933 issue of The American Journal of Anatomy, where he described “the sexual cycle in the human female.” Other such efforts resulted in better understanding of


reproductive problems that include amenorrhea and menopause. It was not until Papanicolaou’s collaboration with gynecologist Herbert Traut (beginning in 1939), which led to the publication of Diagnosis of Uterine Cancer by the Vaginal Smear (1943), that clinical acceptance of the method began to develop. Their monograph documented an impressive, irrefutable group of studies of both normal and disease states that included nearly two hundred cases of cancer of the uterus. Soon, many other researchers began to confirm these findings; by 1948, the newly named American Cancer Society noted that the “Pap” smear seemed to be a very valuable tool for detecting vaginal cancer. Wide acceptance of the Pap test followed, and, beginning in 1947, hundreds of physicians from all over the world flocked to Papanicolaou’s course on the subject. They learned his smear/diagnosis techniques and disseminated them around the world.

Impact

The Pap test has been cited by many physicians as being the most significant and useful modern discovery in the field of cancer research. One way of measuring its impact is the realization that the test allows the identification of uterine cancer in the earliest stages, long before other detection methods can be used. Moreover, because of resultant early diagnosis, the disease can be cured in more than 80 percent of all cases identified by the test. In addition, Pap testing allows the identification of cancer of the uterine cervix so early that its cure rate can be nearly 100 percent.

Papanicolaou extended the use of the smear technique from examination of vaginal discharges to diagnosis of cancer in many other organs from which scrapings, washings, and discharges can be obtained. These tissues include the colon, the kidney, the bladder, the prostate, the lung, the breast, and the sinuses. In most cases, such examination of these tissues has made it possible to diagnose cancer much sooner than is possible by using other existing methods.
As a result, the smear method has become a basis of cancer control in national health programs throughout the world.


See also Amniocentesis; Birth control pill; Mammography; Syphilis test; Ultrasound.

Further Reading

Apgar, Barbara, Lawrence L. Gabel, and Robert T. Brown. Oncology. Philadelphia: W. B. Saunders, 1998.
Entman, Stephen S., and Charles B. Rush. Office Gynecology. Philadelphia: Saunders, 1995.
Glass, Robert H., Michèle G. Curtis, and Michael P. Hopkins. Glass’s Office Gynecology. 5th ed. Baltimore: Williams & Wilkins, 1999.
Rushing, Lynda, and Nancy Joste. Abnormal Pap Smears: What Every Woman Needs to Know. Amherst, N.Y.: Prometheus Books, 2001.


Penicillin

The invention: The first successful and widely used antibiotic drug, penicillin has been called the twentieth century’s greatest “wonder drug.”

The people behind the invention:
Sir Alexander Fleming (1881-1955), a Scottish bacteriologist, cowinner of the 1945 Nobel Prize in Physiology or Medicine
Baron Florey (1898-1968), an Australian pathologist, cowinner of the 1945 Nobel Prize in Physiology or Medicine
Ernst Boris Chain (1906-1979), an émigré German biochemist, cowinner of the 1945 Nobel Prize in Physiology or Medicine

The Search for the Perfect Antibiotic

During the early twentieth century, scientists were aware of antibacterial substances but did not know how to make full use of them in the treatment of diseases. Sir Alexander Fleming discovered penicillin in 1928, but he was unable to duplicate his laboratory results of its antibiotic properties in clinical tests; as a result, he did not recognize the medical potential of penicillin. Between 1935 and 1940, penicillin was purified, concentrated, and clinically tested by pathologist Baron Florey, biochemist Ernst Boris Chain, and members of their Oxford research group. Their achievement has since been regarded as one of the greatest medical discoveries of the twentieth century.

Florey was a professor at Oxford University in charge of the Sir William Dunn School of Pathology. Chain had worked for two years at Cambridge University in the laboratory of Frederick Gowland Hopkins, an eminent chemist and discoverer of vitamins. Hopkins recommended Chain to Florey, who was searching for a candidate to lead a new biochemical unit in the Dunn School of Pathology.

In 1938, Florey and Chain formed a research group to investigate the phenomenon of antibiosis, or the antagonistic association between different forms of life. The union of Florey’s medical knowledge and Chain’s biochemical expertise proved to be an ideal combination for exploring the antibiosis potential of penicillin. Florey and Chain began their investigation with a literature search in which Chain came across Fleming’s work and added penicillin to their list of potential antibiotics.

Their first task was to isolate pure penicillin from a crude liquid extract. A culture of Fleming’s original Penicillium notatum was maintained at Oxford and was used by the Oxford group for penicillin production. Extracting large quantities of penicillin from the medium was a painstaking task, as the solution contained only one part of the antibiotic in ten million. When enough of the raw juice was collected, the Oxford group focused on eliminating impurities and concentrating the penicillin. The concentrated liquid was then freeze-dried, leaving a soluble brown powder.

Spectacular Results

In May, 1940, Florey’s clinical tests of the crude penicillin proved its value as an antibiotic. Following extensive controlled experiments with mice, the Oxford group concluded that they had discovered an antibiotic that was nontoxic and far more effective against pathogenic bacteria than any of the known sulfa drugs. Furthermore, penicillin was not inactivated after injection into the bloodstream but was excreted unchanged in the urine. Continued tests showed that penicillin did not interfere with white blood cells and had no adverse effect on living cells. Bacteria susceptible to the antibiotic included those responsible for gas gangrene, pneumonia, meningitis, diphtheria, and gonorrhea. American researchers later proved that penicillin was also effective against syphilis.

In January, 1941, Florey injected a volunteer with penicillin and found that there were no side effects to treatment with the antibiotic. In February, the group began treatment of Albert Alexander, a forty-three-year-old policeman with a serious staphylococci and streptococci infection that was resisting massive doses of sulfa drugs.
Alexander had been hospitalized for two months after an infection in the corner of his mouth had spread to his face, shoulder, and lungs. After receiving an injection of 200 milligrams of penicillin, Alexander showed remarkable progress, and for the next ten days his condition improved. Unfortunately, the Oxford production facility was unable to generate enough penicillin to overcome Alexander's advanced infection completely, and he died on March 15. A later case involving a fourteen-year-old boy with staphylococcal septicemia and osteomyelitis had a more spectacular result: The patient made a complete recovery in two months. In all the early clinical treatments, patients showed vast improvement, and most recovered completely from infections that resisted all other treatment.

Sir Alexander Fleming

In 1900 Alexander Fleming (1881-1955) enlisted in the London Scottish Regiment, hoping to see action in the South African (Boer) War then underway between Great Britain and South Africa's independent Afrikaner republics. However, the war ended too soon for him. So, having come into a small inheritance, he decided to become a physician instead. Accumulating honors and prizes along the way, he succeeded and became a fellow of the Royal College of Surgeons of England in 1909. His mentor was Sir Almroth Wright. Fleming assisted him at St. Mary's Hospital in Paddington, and they were at the forefront of the burgeoning field of bacteriology. They were, for example, among the first to treat syphilis with the newly discovered Salvarsan, and they championed immunization through vaccination. With the outbreak of World War I, Fleming followed Wright into the Royal Army Medical Corps, conducting research on battlefield wounds at a laboratory near Boulogne. The infections Fleming inspected horrified him. After the war, again at St. Mary's Hospital, he dedicated himself to finding antibacterial agents. He succeeded twice: "lysozyme" in 1921 and penicillin in 1928. To his great disappointment, he was unable to produce pure, potent concentrations of the drug. That had to await the work of Ernst Chain and Howard Florey in 1940. Meanwhile, Fleming studied the antibacterial properties of sulfa drugs. He was overjoyed that Chain and Florey succeeded where he had failed and that penicillin saved lives during World War II and afterward, but he was taken aback when with them he began to receive a stream of tributes, awards, decorations, honorary degrees, and fellowships, including the Nobel Prize in Physiology or Medicine in 1945. He was by nature a reserved man. However, he adjusted to his role as one of the most lionized medical researchers of his generation and continued his work, both as a professor of medicine at the University of London from 1928 until 1948 and as director of the same St. Mary's Hospital laboratory where he had started his career (renamed the Wright-Fleming Institute in 1948). He died soon after he retired in 1955.

Impact

Penicillin is among the greatest medical discoveries of the twentieth century. Florey and Chain's chemical and clinical research brought about a revolution in the treatment of infectious disease. Almost every organ in the body is vulnerable to bacteria. Before penicillin, the only antimicrobial drugs available were quinine, arsenic, and sulfa drugs. Of these, only the sulfa drugs were useful for treatment of bacterial infection, but their high toxicity often limited their use. With this small arsenal, doctors were helpless to treat thousands of patients with bacterial infections.

The work of Florey and Chain achieved particular attention because of World War II and the need for treatments of such scourges as gas gangrene, which had infected the wounds of numerous World War I soldiers. With the help of Florey and Chain's Oxford group, scientists at the U.S. Department of Agriculture's Northern Regional Research Laboratory developed a highly efficient method for producing penicillin using fermentation. After an extended search, scientists were also able to isolate a more productive penicillin strain, Penicillium chrysogenum. By 1945, a strain was developed that produced five hundred times more penicillin than Fleming's original mold had.

Penicillin, the first of the "wonder drugs," remains one of the most powerful antibiotics in existence. Diseases such as pneumonia, meningitis, and syphilis are still treated with penicillin.
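The production problem that fermentation solved can be appreciated from the concentration figure given earlier: the crude culture fluid held only about one part of penicillin in ten million. A rough back-of-the-envelope calculation (a sketch added here for illustration; it assumes the fluid has roughly the density of water and ignores all extraction losses) shows the scale involved:

```python
# Crude Oxford culture fluid: about 1 part penicillin in 10 million.
CONCENTRATION = 1 / 10_000_000  # grams of penicillin per gram of fluid

def crude_fluid_liters(grams_of_penicillin: float) -> float:
    """Liters of crude fluid needed, assuming ~1 g/mL fluid density
    and perfect (lossless) recovery of the antibiotic."""
    grams_of_fluid = grams_of_penicillin / CONCENTRATION
    return grams_of_fluid / 1000.0  # ~1000 g of water-like fluid per liter

# A single 200-milligram injection (the first dose given to Albert
# Alexander) implies on the order of 2,000 liters of raw fluid.
print(crude_fluid_liters(0.2))
```

Even under these generous assumptions, one clinical dose corresponds to thousands of liters of raw fluid, which is why laboratory-scale extraction could not keep a single patient supplied.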
Penicillin and other antibiotics also had a broad impact on other fields of medicine, as major operations such as heart surgery, organ transplants, and management of severe burns became possible once the threat of bacterial infection was minimized.


Florey and Chain received numerous awards for their achievement, the greatest of which was the 1945 Nobel Prize in Physiology or Medicine, which they shared with Fleming for his original discovery. Florey was among the most effective medical scientists of his generation, and Chain earned similar accolades in the science of biochemistry. This combination of outstanding medical and chemical expertise made possible one of the greatest discoveries in human history.

See also Antibacterial drugs; Artificial hormone; Genetically engineered insulin; Polio vaccine (Sabin); Polio vaccine (Salk); Reserpine; Salvarsan; Tuberculosis vaccine; Typhus vaccine; Yellow fever vaccine.

Further Reading
Bickel, Lennard. Florey: The Man Who Made Penicillin. Carlton South, Victoria, Australia: Melbourne University Press, 1995.
Clark, Ronald William. The Life of Ernst Chain: Penicillin and Beyond. New York: St. Martin's Press, 1985.
Hughes, William Howard. Alexander Fleming and Penicillin. Hove: Wayland, 1979.
Mateles, Richard I. Penicillin: A Paradigm for Biotechnology. Chicago: Candida Corporation, 1998.


Personal computer

The invention: Originally a tradename of the IBM Corporation, "personal computer" has become a generic term for increasingly powerful desktop computing systems using microprocessors.

The people behind the invention:
Tom J. Watson (1874-1956), the founder of IBM, who set corporate philosophy and marketing principles
Frank Cary (1920), the chief executive officer of IBM at the time of the decision to market a personal computer
John Opel (1925), a member of the Corporate Management Committee
George Belzel, a member of the Corporate Management Committee
Paul Rizzo, a member of the Corporate Management Committee
Dean McKay (1921), a member of the Corporate Management Committee
William L. Sydnes, the leader of the original twelve-member design team

Shaking up the System

For many years, the International Business Machines (IBM) Corporation had been set in its ways, sticking to traditions established by its founder, Tom Watson, Sr. If it hoped to enter the new microcomputer market, however, it was clear that only nontraditional methods would be useful. Apple Computer was already beginning to make inroads into large IBM accounts, and IBM stock was starting to stagnate on Wall Street. A 1979 Business Week article asked: "Is IBM just another stodgy, mature company?" The microcomputer market was expected to grow more than 40 percent in the early 1980's, but IBM would have to make some changes in order to bring a competitive personal computer (PC) to the market. The decision to build and market the PC was made by the company's Corporate Management Committee (CMC). CMC members included chief executive officer Frank Cary, John Opel, George
Belzel, Paul Rizzo, Dean McKay, and three senior vice presidents. In July of 1980, Cary gave the order to proceed. He wanted the PC to be designed and built within a year. The CMC approved the initial design of the PC one month later. Twelve engineers, with William L. Sydnes as their leader, were appointed as the design team. At the end of 1980, the team had grown to 150. Most parts of the PC had to be produced outside IBM. Microsoft Corporation won the contract to produce the PC’s disk operating system (DOS) and the BASIC (Beginner’s All-purpose Symbolic Instruction Code) language that is built into the PC’s read-only memory (ROM). Intel Corporation was chosen to make the PC’s central processing unit (CPU) chip, the “brains” of the machine. Outside programmers wrote software for the PC. Ten years earlier, this strategy would have been unheard of within IBM since all aspects of manufacturing, service, and repair were traditionally taken care of in-house. Marketing the System IBM hired a New York firm to design a media campaign for the new PC. Readers of magazines and newspapers saw the character of Charlie Chaplin advertising the new PC. The machine was delivered on schedule on August 12, 1981. The price of the basic “system unit” was $1,565. A system with 64 kilobytes of random access memory (RAM), a 13-centimeter single-sided disk drive holding 160 kilobytes, and a monitor was priced at about $3,000. A system with color graphics, a second disk drive, and a dot matrix printer cost about $4,500. Many useful computer programs had been adapted to the PC and were available when it was introduced. VisiCalc from Personal Software—the program that is credited with “making” the microcomputer revolution—was one of the first available. Other packages included a comprehensive accounting system by Peachtree Software and a word processing package called Easywriter by Information Unlimited Software. As the selection of software grew, so did sales. 
In the first year after its introduction, the IBM PC went from a zero market share to 28 percent of the market. Yet the credit for the success of the PC does not go to IBM alone. Many hundreds of companies were able to pro-
duce software and hardware for the PC. Within two years, powerful products such as Lotus Corporation’s 1-2-3 business spreadsheet had come to the market. Many believed that Lotus 1-2-3 was the program that caused the PC to become so phenomenally successful. Other companies produced hardware features (expansion boards) that increased the PC’s memory storage or enabled the machine to “drive” audiovisual presentations such as slide shows. Business especially found the PC to be a powerful tool. The PC has survived because of its expansion capability. IBM has continued to upgrade the PC. In 1983, the PC/XT was introduced. It had more expansion slots and a fixed disk offering 10 million bytes of storage for programs and data. Many of the companies that made expansion boards found themselves able to make whole PCs. An entire range of PC-compatible systems was introduced to the market, many offering features that IBM did not include in the original PC. The original PC has become a whole family of computers, sold by both IBM and other companies. The hardware and software continue to evolve; each generation offers more computing power and storage with a lower price tag. Consequences IBM’s entry into the microcomputer market gave microcomputers credibility. Apple Computer’s earlier introduction of its computer did not win wide acceptance with the corporate world. Apple did, however, thrive within the educational marketplace. IBM’s name already carried with it much clout, because IBM was a successful company. Apple Computer represented all that was great about the “new” microcomputer, but the IBM PC benefited from IBM’s image of stability and success. IBM coined the term personal computer and its acronym PC. The acronym PC is now used almost universally to refer to the microcomputer. It also had great significance with users who had previously used a large mainframe computer that had to be shared with the whole company. This was their personal computer. 
That was important to many PC buyers, since the company mainframe was perceived as being complicated and slow. The PC owner now had complete control.


See also Apple II computer; BINAC computer; Colossus computer; ENIAC computer; Floppy disk; Hard disk; IBM Model 1401 computer; Internet; Supercomputer; UNIVAC computer.

Further Reading
Ceruzzi, Paul E. A History of Modern Computing. Cambridge, Mass.: MIT Press, 2000.
Chposky, James, and Ted Leonsis. Blue Magic: The People, Power, and Politics Behind the IBM Personal Computer. New York: Facts on File, 1988.
Freiberger, Paul, and Michael Swaine. Fire in the Valley: The Making of the Personal Computer. New York: McGraw-Hill, 2000.
Grossman, Wendy. Remembering the Future: Interviews from Personal Computer World. New York: Springer, 1997.


Photoelectric cell

The invention: The first devices to make practical use of the photoelectric effect, photoelectric cells were of decisive importance in the electron theory of metals.

The people behind the invention:
Julius Elster (1854-1920), a German experimental physicist
Hans Friedrich Geitel (1855-1923), a German physicist
Wilhelm Hallwachs (1859-1922), a German physicist

Early Photoelectric Cells

The photoelectric effect was known to science in the early nineteenth century when the French physicist Alexandre-Edmond Becquerel wrote of it in connection with his work on glass-enclosed primary batteries. He discovered that the voltage of his batteries increased with intensified illumination and that green light produced the highest voltage. Since Becquerel researched batteries exclusively, however, the liquid-type photocell was not discovered until 1929, when the Wein and Arcturus cells were introduced commercially. These cells were miniature voltaic cells arranged so that light falling on one side of the front plate generated a considerable amount of electrical energy. The cells had short lives, unfortunately; when subjected to cold, the electrolyte froze, and when subjected to heat, the gas generated would expand and explode the cells.

What came to be known as the photoelectric cell, a device connecting light and electricity, had its beginnings in the 1880's. At that time, scientists noticed that a negatively charged metal plate lost its charge much more quickly in the light (especially ultraviolet light) than in the dark. Several years later, researchers demonstrated that this phenomenon was not an "ionization" effect because of the air's increased conductivity, since the phenomenon took place in a vacuum but did not take place if the plate were positively charged. Instead, the phenomenon had to be attributed to the light that excited the electrons of the metal and caused them to fly off: A neutral plate even acquired a slight positive charge under
the influence of strong light. Study of this effect not only contributed evidence to an electronic theory of matter—and, as a result of some brilliant mathematical work by the physicist Albert Einstein, later increased knowledge of the nature of radiant energy—but also further linked the studies of light and electricity. It even explained certain chemical phenomena, such as the process of photography. It is important to note that all the experimental work on photoelectricity accomplished prior to the work of Julius Elster and Hans Friedrich Geitel was carried out before the existence of the electron was known. Explaining Photoelectric Emission After the English physicist Sir Joseph John Thomson’s discovery of the electron in 1897, investigators soon realized that the photoelectric effect was caused by the emission of electrons under the influence of radiation. The fundamental theory of photoelectric emission was put forward by Einstein in 1905 on the basis of the German physicist Max Planck’s quantum theory (1900). Thus, it was not surprising that light was found to have an electronic effect. Since it was known that the longer radio waves could shake electrons into resonant oscillations and the shorter X rays could detach electrons from the atoms of gases, the intermediate waves of visual light would have been expected to have some effect upon electrons—such as detaching them from metal plates and therefore setting up a difference of potential. The photoelectric cell, developed by Elster and Geitel in 1904, was a practical device that made use of this effect. In 1888, Wilhelm Hallwachs observed that an electrically charged zinc electrode loses its charge when exposed to ultraviolet radiation if the charge is negative, but is able to retain a positive charge under the same conditions. 
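Einstein's account can be stated compactly. The relation below (added here for reference; the essay itself gives no formula) says that the maximum kinetic energy of an ejected electron equals the energy of one absorbed light quantum minus the work function, the minimum energy needed to free an electron from the metal's surface:

```latex
E_{\max} = h\nu - W
```

Here $h$ is Planck's constant, $\nu$ the frequency of the light, and $W$ the work function of the metal. Because $E_{\max}$ depends on frequency rather than intensity, light below the threshold frequency $\nu_0 = W/h$ ejects no electrons no matter how bright it is, while the number of ejected electrons (and hence the current) scales with intensity. This accounts both for the special effectiveness of ultraviolet light observed by Hallwachs and for the usefulness of alkali metals such as potassium, whose low work functions let even visible light eject electrons.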
Julius Elster and Hans Geitel

Nicknamed the Castor and Pollux of physics after the twins of Greek mythology, Johann Philipp Ludwig Julius Elster and Hans Friedrich Geitel were among the most productive teams in the history of science. Elster, born in 1854, and Geitel, born in 1855, met in 1875 while attending university in Heidelberg, Germany. Graduate studies took them to separate cities, but then in 1881 they were together again as mathematics and physics teachers at Herzoglich Gymnasium in Wolfenbüttel. In 1884 they began their scientific collaboration, which lasted more than thirty years and produced more than 150 reports. Essentially experimentalists, they investigated phenomena that were among the greatest mysteries of the times. Their first works concerned the electrification of flames and the electrical properties of thunderstorms. They went on to study the photoelectric effect, thermal electron emission, practical uses for photocells, and Becquerel rays in the earth and air. They developed a method for measuring electrical phenomena in gases that remained the standard for the following forty years. Their greatest achievements, however, lay with radioactivity and radiation. Their demonstration that incandescent filaments emitted "negative electricity" proved beyond doubt that electrons, which J. J. Thomson had recently claimed to have detected, did in fact exist. They also proved that radioactivity, such as that from uranium, came wholly from within the atom, not from environmental influences. Ernest Rutherford, the great British physicist, said in 1913 that Elster and Geitel had contributed more to the understanding of terrestrial and atmospheric radioactivity than anyone else. The pair were practically inseparable until Elster died in 1920. Geitel died three years later.

The year after Hallwachs's observation, 1889, Elster and Geitel discovered a photoelectric effect caused by visible light; however, they used the alkali metals potassium and sodium for their experiments instead of zinc. The Elster-Geitel photocell (a vacuum emission cell, as opposed to a gas-filled cell) consisted of an evacuated glass bulb containing two electrodes. The cathode consisted of a thin film of a rare, chemically active metal (such as potassium) that lost its electrons fairly readily; the anode was simply a wire sealed in to complete the circuit. This anode was maintained at a positive potential in order to collect the negative charges released by light from the cathode. The Elster-Geitel photocell resembled two other types of vacuum tubes in existence at the time: the cathode-ray tube, in which the cathode emitted electrons under the influence of a high potential, and the thermionic valve (a valve that permits the passage of current in one direction
only), in which it emitted electrons under the influence of heat. Like both of these vacuum tubes, the photoelectric cell could be classified as an “electronic” device. The new cell, then, emitted electrons when stimulated by light, and at a rate proportional to the intensity of the light. Hence, a current could be obtained from the cell. Yet Elster and Geitel found that their photoelectric currents fell off gradually; they therefore spoke of “fatigue” (instability). It was discovered later that most of this change was not a direct effect of a photoelectric current’s passage; it was not even an indirect effect but was caused by oxidation of the cathode by the air. Since all modern cathodes are enclosed in sealed vessels, that source of change has been completely abolished. Nevertheless, the changes that persist in modern cathodes often are indirect effects of light that can be produced independently of any photoelectric current. Impact The Elster-Geitel photocell was, for some twenty years, used in all emission cells adapted for the visible spectrum, and throughout the twentieth century, the photoelectric cell has had a wide variety of applications in numerous fields. For example, if products leaving a factory on a conveyor belt were passed between a light and a cell, they could be counted as they interrupted the beam. Persons entering a building could be counted also, and if invisible ultraviolet rays were used, those persons could be detected without their knowledge. Simple relay circuits could be arranged that would automatically switch on street lamps when it grew dark. The sensitivity of the cell with an amplifying circuit enabled it to “see” objects too faint for the human eye, such as minor stars or certain lines in the spectra of elements excited by a flame or discharge. 
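The conveyor-belt counting scheme described above amounts to watching the cell's output current and registering each moment the beam is interrupted. A minimal sketch of that relay logic (the threshold and current values here are hypothetical, chosen only for illustration):

```python
def count_beam_breaks(samples, threshold):
    """Count objects passing a photocell by counting falling edges:
    transitions from 'beam on' (current above threshold) to
    'beam blocked' (current at or below threshold)."""
    count = 0
    beam_on = True  # assume the beam starts unobstructed
    for current in samples:
        if beam_on and current <= threshold:
            count += 1        # an object has just interrupted the beam
            beam_on = False
        elif not beam_on and current > threshold:
            beam_on = True    # the object has cleared the beam
    return count

# Simulated photocurrent readings (arbitrary units): three objects
# cross the beam, each producing a dip below the threshold.
readings = [12, 12, 1, 1, 12, 12, 2, 12, 0, 1, 12]
print(count_beam_breaks(readings, threshold=5))
```

The same edge-detecting idea, implemented with a relay rather than software, underlies the automatic street-lamp switches mentioned below: the circuit acts only when the photocurrent crosses a set level.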
The fact that the current depended on the intensity of the light made it possible to construct photoelectric meters that could judge the strength of illumination without risking human error—for example, to determine the right exposure for a photograph. A further use for the cell was to make talking films possible. The early “talkies” had depended on gramophone records, but it was very difficult to keep the records in time with the film. Now, the waves of speech and music could be recorded in a “sound track” by turning the
sound first into current through a microphone and then into light with a neon tube or magnetic shutter; next, the variations in the intensity of this light on the side of the film were photographed. By reversing the process and running the film between a light and a photoelectric cell, the visual signals could be converted back to sound.

See also Alkaline storage battery; Photovoltaic cell; Solar thermal engine.

Further Reading
Hoberman, Stuart. Solar Cell and Photocell Experimenters Guide. Indianapolis, Ind.: H. W. Sams, 1965.
Perlin, John. From Space to Earth: The Story of Solar Electricity. Ann Arbor, Mich.: Aatec Publications, 1999.
Walker, R. C., and T. M. C. Lance. Photoelectric Cell Applications: A Practical Book Describing the Uses of Photoelectric Cells in Television, Talking Pictures, Electrical Alarms, Counting Devices, Etc. 3d ed. London: Sir I. Pitman & Sons, 1938.


Photovoltaic cell

The invention: Drawing their energy directly from the Sun, the first photovoltaic cells powered instruments on early space vehicles and held out hope for future uses of solar energy.

The people behind the invention:
Daryl M. Chapin (1906-1995), an American physicist
Calvin S. Fuller (1902-1994), an American chemist
Gerald L. Pearson (1905), an American physicist

Unlimited Energy Source

All the energy that the world has at its disposal ultimately comes from the Sun. Some of this solar energy was trapped millions of years ago in the form of vegetable and animal matter that became the coal, oil, and natural gas that the world relies upon for energy. Some of this fuel is used directly to heat homes and to power factories and gasoline vehicles. Much of this fossil fuel, however, is burned to produce the electricity on which modern society depends.

The amount of energy available from the Sun is difficult to imagine, but some comparisons may be helpful. During each forty-hour period, the Sun provides the earth with as much energy as the earth's total reserves of coal, oil, and natural gas. It has been estimated that the amount of energy provided by the sun's radiation matches the earth's reserves of nuclear fuel every forty days. The annual solar radiation that falls on about twelve hundred square miles of land in Arizona matched the world's estimated total annual energy requirement for 1960. Scientists have been searching for many decades for inexpensive, efficient means of converting this vast supply of solar radiation directly into electricity.

The Bell Solar Cell

Throughout its history, the Bell System has needed to be able to transmit, modulate, and amplify electrical signals. Until the 1930's, these tasks were accomplished by using insulators and metallic conductors. At that time, semiconductors, which have electrical properties that are between those of insulators and those of conductors, were developed. One of the most important semiconductor materials is silicon, which is one of the most common elements on the earth. Unfortunately, silicon is usually found in the form of compounds such as sand or quartz, and it must be refined and purified before it can be used in electrical circuits. This process required much initial research, and very pure silicon was not available until the early 1950's.

Electric conduction in silicon is the result of the movement of negative charges (electrons) or positive charges (holes). One way of accomplishing this is by deliberately adding phosphorus or arsenic atoms, which have five outer electrons, to the silicon. This addition creates a type of semiconductor that has excess negative charges (an n-type semiconductor). Adding boron atoms, which have three outer electrons, creates a semiconductor that has excess positive charges (a p-type semiconductor).

Calvin Fuller made an important study of the formation of p-n junctions, which are the points at which p-type and n-type semiconductors meet, by using the process of diffusing impurity atoms—that is, adding atoms of materials that would increase the level of positive or negative charges, as described above. Fuller's work stimulated interest in using the process of impurity diffusion to create cells that would turn solar energy into electricity. Fuller and Gerald Pearson made the first large-area p-n junction by using the diffusion process. Daryl Chapin, Fuller, and Pearson made a similar p-n junction very close to the surface of a silicon crystal, which was then exposed to sunlight. The cell was constructed by first making an ingot of arsenic-doped silicon that was then cut into very thin slices.
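The doping rule described above, where five outer electrons make an n-type (electron-donating) impurity and three make a p-type (hole-creating) one, can be summarized in a short sketch (the valence table here is a didactic simplification added for illustration, not taken from the original text):

```python
# Outer (valence) electron counts for silicon and common dopants.
VALENCE = {"Si": 4, "P": 5, "As": 5, "B": 3}

def dopant_type(element: str) -> str:
    """Classify a dopant in a silicon lattice by comparing its
    outer-electron count with silicon's four."""
    extra = VALENCE[element] - VALENCE["Si"]
    if extra > 0:
        return "n-type (donor: excess electrons)"
    if extra < 0:
        return "p-type (acceptor: excess holes)"
    return "neutral (no doping effect)"

print(dopant_type("As"))  # arsenic donates electrons, giving n-type silicon
print(dopant_type("B"))   # boron creates holes, giving p-type silicon
```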
Then a very thin layer of p-type silicon was formed over the surface of the n-type wafer, providing a p-n junction close to the surface of the cell. Once the cell cooled, the p-type layer was removed from the back of the cell and lead wires were attached to the two surfaces. When light was absorbed at the p-n junction, electron-hole pairs were produced, and the electric field that was present at the junction forced the electrons to the n side and the holes to the p side. The recombination of the electrons and holes takes place after the electrons have traveled through the external wires, where they do
useful work. Chapin, Fuller, and Pearson announced in 1954 that the resulting photovoltaic cell was the most efficient (6 percent) means then available for converting sunlight into electricity.

The first experimental use of the silicon solar battery was in amplifiers for electrical telephone signals in rural areas. An array of 432 silicon cells, capable of supplying 9 watts of power in bright sunlight (roughly 21 milliwatts per cell), was used to charge a nickel-cadmium storage battery. This, in turn, powered the amplifier for the telephone signal. The electrical energy derived from sunlight during the day was sufficient to keep the storage battery charged for continuous operation. The system was successfully tested for six months of continuous use in Americus, Georgia, in 1956. Although it was a technical success, the silicon solar cell was not ready to compete economically with conventional means of producing electrical power.

Consequences

One of the immediate applications of the solar cell was to supply electrical energy for Telstar satellites. These cells are used extensively on all satellites to generate power. The success of the U.S. satellite program prompted serious suggestions in 1965 for the use of an orbiting power satellite. A large satellite could be placed into a synchronous orbit of the earth. It would collect sunlight, convert it to microwave radiation, and beam the energy to an Earth-based receiving station. Many technical problems must be solved, however, before this dream can become a reality.

Solar cells are used in small-scale applications such as power sources for calculators. Large-scale applications are still not economically competitive with more traditional means of generating electric power. The development of the Third World countries, however, may provide the incentive to search for less-expensive solar cells that can be used, for example, to provide energy in remote villages. As the standards of living in such areas improve, the need for electric power will grow. Solar cells may be able to provide the necessary energy while safeguarding the environment for future generations.

See also Alkaline storage battery; Fluorescent lighting; Fuel cell; Photoelectric cell; Solar thermal engine.

Further Reading
Green, Martin A. Power to the People: Sunlight to Electricity Using Solar Cells. Sydney, Australia: University of New South Wales Press, 2000.
_____. "Photovoltaics: Technology Overview." Energy Policy 28, no. 14 (November, 2000).
Perlin, John. From Space to Earth: The Story of Solar Electricity. Ann Arbor, Mich.: Aatec Publications, 1999.


Plastic

The invention: The first totally synthetic thermosetting plastic, which paved the way for modern materials science.

The people behind the invention:
John Wesley Hyatt (1837-1920), an American inventor
Leo Hendrik Baekeland (1863-1944), a Belgian-born chemist, consultant, and inventor
Christian Friedrich Schönbein (1799-1868), a German chemist who produced guncotton, the first artificial polymer
Adolf von Baeyer (1835-1917), a German chemist

Exploding Billiard Balls

In the 1860's, the firm of Phelan and Collender offered a prize of ten thousand dollars to anyone producing a substance that could serve as an inexpensive substitute for ivory, which was somewhat difficult to obtain in large quantities at reasonable prices. Earlier, Christian Friedrich Schönbein had laid the groundwork for a breakthrough in the quest for a new material in 1846 by the serendipitous discovery of nitrocellulose, more commonly known as "guncotton," which was produced by the reaction of nitric acid with cotton.

An American inventor, John Wesley Hyatt, while looking for a substitute for ivory as a material for making billiard balls, discovered that the addition of camphor to nitrocellulose under certain conditions led to the formation of a white material that could be molded and machined. He dubbed this substance "celluloid," and this product is now acknowledged as the first synthetic plastic. Celluloid won the prize for Hyatt, and he promptly set out to exploit his product. Celluloid was used to make baby rattles, collars, dentures, and other manufactured goods. As a billiard ball substitute, however, it was not really adequate, for various reasons. First, it is thermoplastic—in other words, a material that softens when heated and can then be easily deformed or molded. It was thus too soft for billiard ball use. Second, it was highly flammable, hardly a desirable characteristic. A widely circulated, perhaps apocryphal, story claimed that celluloid billiard balls detonated when they collided.

Truly Artificial

Since celluloid can be viewed as a derivative of a natural product, it is not a completely synthetic substance. Leo Hendrik Baekeland has the distinction of being the first to produce a completely artificial plastic. Born in Ghent, Belgium, Baekeland emigrated to the United States in 1889 to pursue applied research, a pursuit not encouraged in Europe at the time. One area in which Baekeland hoped to make an inroad was in the development of an artificial shellac. Shellac at the time was a natural and therefore expensive product, and there would be a wide market for any reasonably priced substitute. Baekeland's research scheme, begun in 1905, focused on finding a solvent that could dissolve the resinous products from a certain class of organic chemical reaction. The particular resins he used had been reported in the mid-1800's by the German chemist Adolf von Baeyer. These resins were produced by the condensation reaction of formaldehyde with a class of chemicals called "phenols." Baeyer found that frequently the major product of such a reaction was a gummy residue that was virtually impossible to remove from glassware. Baekeland focused on finding a material that could dissolve these resinous products. Such a substance would prove to be the shellac substitute he sought. These efforts proved frustrating, as an adequate solvent for these resins could not be found. After repeated attempts to dissolve these residues, Baekeland shifted the orientation of his work. Abandoning the quest to dissolve the resin, he set about trying to develop a resin that would be impervious to any solvent, reasoning that such a material would have useful applications. Baekeland's experiments involved the manipulation of phenol-formaldehyde reactions through precise control of the temperature and pressure at which the reactions were performed.
Many of these experiments were performed in a 1.5-meter-tall reactor vessel, which he called a "Bakelizer." In 1907, these meticulous experiments paid off when Baekeland opened the reactor to reveal a clear solid that was heat resistant, nonconducting, and machinable. Experimentation proved that the material could be dyed practically any color in the manufacturing process, with no effect on the physical properties of the solid. Baekeland filed a patent for this new material in 1907. (This patent was filed one day before that filed by James Swinburne, a British electrical engineer who had developed a similar material in his quest to produce an insulating material.) Baekeland dubbed his new creation "Bakelite" and announced its existence to the scientific community on February 15, 1909, at the annual meeting of the American Chemical Society. Among its first uses was the manufacture of ignition parts for the rapidly growing automobile industry.

John Wesley Hyatt

John Wesley Hyatt's parents wanted him to be a minister, a step up in status from his father's job as a blacksmith. Born in 1837 in Starkey, New York, Hyatt received the standard primary education and then obediently went to a seminary as a teenager. However, his mind was on making things rather than spirituality; he was especially ingenious with machinery. The seminary held him only a year. He became a printer's apprentice at sixteen and later set up shop in Albany. His mind ranged beyond printing, too. He invented a method to make emery wheels for sharpening cutlery, which brought him his first patent at twenty-four.

In an attempt to win the Phelan and Collender Company contest for artificial billiard balls, he developed several moldable compounds from wood pulp. He started the Embossing Company in Albany to make chess and checker pieces from the compounds and put his youngest brother in charge. With another brother he experimented with guncotton until he invented celluloid. In 1872, he and his brothers started the Celluloid Manufacturing Company. They designed new milling machinery for the new substance and turned out billiard balls, bowling balls, golf club heads, and other sporting goods but then branched out into domestic items, such as boxes, handles, combs, and even collars. Celluloid became the basic material of photographic film and, later, motion picture film.

Meanwhile, Hyatt continued to invent: machinery for cutting and molding plastic and rolling steel, a water purification system, a method for squeezing juice from sugar cane, an industrial sewing machine, and roller bearings for heavy machinery. He registered more than 250 patents, which is impressive for a person with no formal scientific or technical training. The Society of Chemical Industry awarded Hyatt its prestigious Perkin Medal in 1914. Hyatt died in 1920.

Impact

Bakelite proved to be the first of a class of compounds called "synthetic polymers." Polymers are long chains of molecules chemically linked together. There are many natural polymers, such as cotton. The discovery of synthetic polymers led to vigorous research into the field and attempts to produce other useful artificial materials. These efforts met with a fair amount of success; by 1940, a multitude of new products unlike anything found in nature had been discovered, including such items as polystyrene and low-density polyethylene.

In addition, artificial substitutes for natural polymers, such as rubber, were a goal of polymer chemists. One of the results of this research was the development of neoprene. Industries also were interested in developing synthetic polymers to produce materials that could be used in place of natural fibers such as cotton. The most dramatic success in this area was achieved by the Du Pont chemist Wallace Carothers, who had also developed neoprene. Carothers focused his energies on forming a synthetic fiber similar to silk, resulting in the synthesis of nylon.

Synthetic polymers constitute one branch of a broad area known as "materials science." Novel, useful materials produced synthetically from a variety of natural materials have allowed for tremendous progress in many areas. Examples of these new materials include high-temperature superconductors, composites, ceramics, and plastics.
These materials are used to make the structural components of aircraft, artificial limbs and implants, tennis rackets, garbage bags, and many other common objects. See also Buna rubber; Contact lenses; Laminated glass; Neoprene; Nylon; Orlon; Polyester; Polyethylene; Polystyrene; Pyrex glass; Silicones; Teflon; Velcro.


Further Reading

Amato, Ivan. "Chemist: Leo Baekeland." Time 153, no. 12 (March 29, 1999).
Clark, Tessa. Bakelite Style. Edison, N.J.: Chartwell Books, 1997.
Fenichell, Stephen. Plastic: The Making of a Synthetic Century. New York: HarperBusiness, 1997.
Sparke, Penny. The Plastics Age: From Bakelite to Beanbags and Beyond. Woodstock, N.Y.: Overlook Press, 1990.


Pocket calculator

The invention: The first portable and reliable hand-held calculator capable of performing a wide range of mathematical computations.

The people behind the invention:
Jack St. Clair Kilby (1923- ), the inventor of the semiconductor microchip
Jerry D. Merryman (1932- ), the first project manager of the team that invented the first portable calculator
James Van Tassel (1929- ), an inventor and expert on semiconductor components

An Ancient Dream

In the earliest accounts of civilizations that developed number systems to perform mathematical calculations, evidence has been found of efforts to fashion a device that would permit people to perform these calculations with reduced effort and increased accuracy. The ancient Babylonians are regarded as the inventors of the first abacus (or counting board, from the Greek abakos, meaning "board" or "tablet"). It was originally little more than a row of shallow grooves with pebbles or bone fragments as counters.

The next step in mechanical calculation did not occur until the early seventeenth century. John Napier, a Scottish baron and mathematician, originated the concept of "logarithms" as a mathematical device to make calculating easier. This concept led to the first slide rule, created by the English mathematician William Oughtred of Cambridge. Oughtred's invention consisted of two identical, circular logarithmic scales held together and adjusted by hand. The slide rule made it possible to perform rough but rapid multiplication and division. Oughtred's invention in 1623 was paralleled by the work of a German professor, Wilhelm Schickard, who built a "calculating clock" the same year. Because the record of Schickard's work was lost until 1935, however, the French mathematician Blaise Pascal was generally thought to have built the first mechanical calculator, the "Pascaline," in 1645.
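The principle behind Napier's logarithms and Oughtred's slide rule is that adding the logarithms of two numbers gives the logarithm of their product, so a hard multiplication becomes an easy addition. As an illustrative aside (not part of the original article), that principle can be sketched in a few lines of Python:

```python
import math

def slide_rule_multiply(a, b):
    """Multiply two positive numbers the way a slide rule does:
    add their logarithms, then convert the sum back to a product
    with the inverse (exponential) function."""
    return math.exp(math.log(a) + math.log(b))

# Sliding one logarithmic scale along another adds lengths that
# represent logarithms: log(2) + log(3) = log(6).
product = slide_rule_multiply(2.0, 3.0)
print(round(product, 6))  # prints 6.0
```

A physical slide rule performs the same addition mechanically, which is why its answers are rapid but only as precise as the scales can be read.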


Other versions of mechanical calculators were built in later centuries, but none was rapid or compact enough to be useful beyond specific laboratory or mercantile situations. Meanwhile, the dream of such a machine continued to fascinate scientists and mathematicians. The development that made a fast, small calculator possible did not occur until the middle of the twentieth century, when Jack St. Clair Kilby of Texas Instruments invented the silicon microchip (or integrated circuit) in 1958. An integrated circuit is a tiny complex of electronic components and their connections that is produced in or on a small slice of semiconductor material such as silicon.

Patrick Haggerty, then president of Texas Instruments, wrote in 1964 that "integrated electronics" would "remove limitations" that determined the size of instruments, and he recognized that Kilby's invention of the microchip made possible the creation of a portable, hand-held calculator. He challenged Kilby to put together a team to design a calculator that would be as powerful as the large, electromechanical models in use at the time but small enough to fit into a coat pocket. Working with Jerry D. Merryman and James Van Tassel, Kilby began to work on the project in October, 1965.

An Amazing Reality

At the outset, there were basically five elements that had to be designed. These were the logic designs that enabled the machine to perform the actual calculations, the keyboard or keypad, the power supply, the readout display, and the outer case. Kilby recalls that once a particular size for the unit had been determined (something that could be easily held in the hand), project manager Merryman was able to develop the initial logic designs in three days. Van Tassel contributed his experience with semiconductor components to solve the problems of packaging the integrated circuit. The display required a thermal printer that would work on a low power source.
The machine also had to include a microencapsulated ink source so that the paper readouts could be imprinted clearly. Then the paper had to be advanced for the next calculation. Kilby, Merryman, and Van Tassel filed for a patent on their work in 1967.

Although this relatively small, working prototype of the minicalculator made obsolete the transistor-operated design of the much larger desk calculators, the cost of setting up new production lines and the need to develop a market made it impractical to begin production immediately. Instead, Texas Instruments and Canon of Tokyo formed a joint venture, which led to the introduction of the Canon Pocketronic Printing Calculator in Japan in April, 1970, and in the United States that fall. Built entirely of Texas Instruments parts, this four-function machine with three metal oxide semiconductor (MOS) circuits was similar to the prototype designed in 1967. The calculator was priced at $400, weighed 740 grams, and measured 101 millimeters wide by 208 millimeters long by 49 millimeters high. It could perform twelve-digit calculations and worked up to four decimal places.

In September, 1972, Texas Instruments put the Datamath, its first commercial hand-held calculator using a single MOS chip, on the retail market. It weighed 340 grams and measured 75 millimeters wide by 137 millimeters long by 42 millimeters high. The Datamath was priced at $120 and included a full-floating decimal point that could appear anywhere among the numbers on its eight-digit, light-emitting diode (LED) display. It came with a rechargeable battery that could also be connected to a standard alternating current (AC) outlet. The Datamath also had the ability to conserve power while awaiting the next keyboard entry. Finally, the machine had a built-in limited amount of memory storage.

Jerry D. Merryman

In 1965 Texas Instruments assigned two engineers to join Jack St. Clair Kilby, inventor of the integrated circuit, in an effort to produce a pocket-sized calculator: James H. Van Tassel, a specialist in semiconductor components, and Jerry D. Merryman, a versatile engineer who became the project manager. It took Merryman only seventy-two hours to work out the logic design for the calculator, and the team set about designing, fabricating, and testing its components. After two years, it had a prototype, the first pocket calculator. However, it required a large, strong pocket: it measured 4.25 inches by 6.12 inches by 1.76 inches and weighed 2.8 pounds. Kilby, Van Tassel, and Merryman filed for a patent and received it in 1975. In 1989 the team was jointly presented the Holley Medal for the achievement by the American Society of Mechanical Engineers. By then Merryman held sixty other patents, foreign and domestic.

Born in 1932, Merryman grew up in Hearne, Texas, and after high school went to Texas A&M University. He never graduated, but he did become extraordinarily adept at electrical engineering, teaching himself what he needed to know while doing small jobs on his own. He was said to have almost an intuitive sense for circuitry. After he joined Texas Instruments in 1963, he quickly earned a reputation for solving complex problems, one of the reasons he was made part of the hand-held calculator team. He became a Texas Instruments Fellow in 1975 and helped design semiconductor manufacturing equipment, particularly by adapting high-speed lasers for use in extremely fine optical lithography. He also invented thermal data systems. Along with Kilby and Van Tassel, Merryman received the George R. Stibitz Computer Pioneer Award in 1997.

True pocket calculators fit as easily in shirt pockets as pencils and pens. (PhotoDisc)


Consequences

Prior to 1970, most calculating machines were of such dimensions that professional mathematicians and engineers were either tied to their desks or else carried slide rules whenever they had to be away from their offices. By 1975, Keuffel & Esser, the largest slide rule manufacturer in the world, was producing its last model, and mechanical engineers found that problems that had previously taken a week could now be solved in an hour using the new machines. That year, the Smithsonian Institution accepted the world's first miniature electronic calculator for its permanent collection, noting that it was the forerunner of more than one hundred million pocket calculators then in use. By the 1990's, more than fifty million portable units were being sold each year in the United States.

In general, the electronic pocket calculator revolutionized the way in which people related to the world of numbers. Moreover, the portability of the hand-held calculator made it ideal for use in remote locations, such as those a petroleum engineer might have to explore. Its rapidity and reliability made it an indispensable instrument for construction engineers, architects, and real estate agents, who could figure the volume of a room and other building dimensions almost instantly and then produce cost estimates almost on the spot.

See also Cell phone; Differential analyzer; Mark I calculator; Personal computer; Transistor radio; Walkman cassette player.

Further Reading

Ball, Guy. Collector's Guide to Pocket Calculators. Tustin, Calif.: Wilson/Barnett Publishing, 1996.
Clayton, Mark. "Calculators in Class: Freedom from Scratch Paper or 'Crutch'?" Christian Science Monitor (May 23, 2000).
Lederer, Victor. "Calculators: The Applications Are Unlimited." Administrative Management 38 (July, 1977).
Lee, Jennifer. "Throw Teachers a New Curve." New York Times (September 2, 1999).
"The Semiconductor Becomes a New Marketing Force." Business Week (August 24, 1974).


Polio vaccine (Sabin)

The invention: Albert Bruce Sabin's vaccine was the first to stimulate long-lasting immunity against polio without the risk of causing paralytic disease.

The people behind the invention:
Albert Bruce Sabin (1906-1993), a Russian-born American virologist
Jonas Edward Salk (1914-1995), an American physician, immunologist, and virologist
Renato Dulbecco (1914- ), an Italian-born American virologist who shared the 1975 Nobel Prize in Physiology or Medicine

The Search for a Living Vaccine

Almost a century ago, the first major poliomyelitis (polio) epidemic was recorded. Thereafter, epidemics of increasing frequency and severity struck the industrialized world. By the 1950's, as many as sixteen thousand individuals, most of them children, were being paralyzed by the disease each year.

Poliovirus enters the body through ingestion by the mouth. It replicates in the throat and the intestines and establishes an infection that normally is harmless. From there, the virus can enter the bloodstream. In some individuals it makes its way to the nervous system, where it attacks and destroys nerve cells crucial for muscle movement. The presence of antibodies in the bloodstream will prevent the virus from reaching the nervous system and causing paralysis. Thus, the goal of vaccination is to administer poliovirus that has been altered so that it cannot cause disease but nevertheless will stimulate the production of antibodies to fight the disease.

Albert Bruce Sabin received his medical degree from New York University College of Medicine in 1931. Polio was epidemic in 1931, and for Sabin polio research became a lifelong interest. In 1936, while working at the Rockefeller Institute, Sabin and Peter Olinsky successfully grew poliovirus using tissues cultured in vitro. Tissue culture proved to be an excellent source of virus. Jonas Edward Salk soon developed an inactive polio vaccine consisting of virus grown from tissue culture that had been inactivated (killed) by chemical treatment. This vaccine became available for general use in 1955, almost fifty years after poliovirus had first been identified.

Sabin, however, was not convinced that an inactivated virus vaccine was adequate. He believed that it would provide only temporary protection and that individuals would have to be vaccinated repeatedly in order to maintain protective levels of antibodies. Knowing that natural infection with poliovirus induced lifelong immunity, Sabin believed that a vaccine consisting of a living virus was necessary to produce long-lasting immunity. Also, unlike the inactive vaccine, which is injected, a living virus (weakened so that it would not cause disease) could be taken orally and would invade the body and replicate of its own accord.

Sabin was not alone in his beliefs. Hilary Koprowski and Harold Cox also favored a living virus vaccine and had, in fact, begun searching for weakened strains of poliovirus as early as 1946 by repeatedly growing the virus in rodents. When Sabin began his search for weakened virus strains in 1953, a fiercely competitive contest ensued to achieve an acceptable live virus vaccine.

Rare, Mutant Polioviruses

Sabin's approach was based on the principle that, as viruses acquire the ability to replicate in a foreign species or tissue (for example, in mice), they become less able to replicate in humans and thus less able to cause disease. Sabin used tissue culture techniques to isolate those polioviruses that grew most rapidly in monkey kidney cells. He then employed a technique developed by Renato Dulbecco that allowed him to recover individual virus particles. The recovered viruses were injected directly into the brains or spinal cords of monkeys in order to identify those viruses that did not damage the nervous system.
These meticulously performed experiments, which involved approximately nine thousand monkeys and more than one hundred chimpanzees, finally enabled Sabin to isolate rare mutant polioviruses that would replicate in the intestinal tract but not in the nervous systems of chimpanzees or, it was hoped, of humans. In addition, the weakened virus strains were shown to stimulate antibodies when they were fed to chimpanzees; this was a critical attribute for a vaccine strain.

By 1957, Sabin had identified three strains of attenuated viruses that were ready for small experimental trials in humans. A small group of volunteers, including Sabin's own wife and children, were fed the vaccine with promising results. Sabin then gave his vaccine to virologists in the Soviet Union, Eastern Europe, Mexico, and Holland for further testing. Combined with smaller studies in the United States, these trials established the effectiveness and safety of his oral vaccine. During this period, the strains developed by Cox and by Koprowski were also being tested in millions of persons in field trials around the world. In 1958, two laboratories independently compared the vaccine strains and concluded that the Sabin strains were superior. In 1962, after four years of deliberation by the U.S. Public Health Service, all three of Sabin's vaccine strains were licensed for general use.

Albert Sabin

Born in Bialystok, Poland, in 1906, Albert Bruce Sabin emigrated with his family to the United States in 1921. Like Jonas Salk, the other great inventor of a polio vaccine, Sabin earned his medical degree at New York University (1931), where he began his research on polio. While in the U.S. Army Medical Corps during World War II, he helped produce vaccines for dengue fever and Japanese encephalitis. After the war he returned to his professorship at the University of Cincinnati College of Medicine and Children's Hospital Research Foundation. The polio vaccine he developed there saved millions of children worldwide from paralytic polio. Many of these lives were doubtless saved because of his refusal to patent the vaccine, thereby making it simpler to produce and distribute and less expensive to administer.

Sabin's work brought him more than forty honorary degrees from American and foreign universities and medals from the governments of the United States and the Soviet Union. He was president of the Weizmann Institute of Science after 1970 and later became a professor of biomedicine at the Medical University of South Carolina. He died in 1993.


Consequences

The development of polio vaccines ranks as one of the triumphs of modern medicine. In the early 1950's, paralytic polio struck 13,500 out of every 100 million Americans. The use of the Salk vaccine greatly reduced the incidence of polio, but outbreaks of paralytic disease continued to occur: fifty-seven hundred cases were reported in 1959 and twenty-five hundred cases in 1960. In 1962, the oral Sabin vaccine became the vaccine of choice in the United States. Since its widespread use, the number of paralytic cases in the United States has dropped precipitously, eventually averaging fewer than ten per year. Worldwide, the oral vaccine prevented an estimated 5 million cases of paralytic poliomyelitis between 1970 and 1990.

The oral vaccine is not without problems. Occasionally, the living virus mutates to a disease-causing (virulent) form as it multiplies in the vaccinated person. When this occurs, the person may develop paralytic poliomyelitis. The inactive vaccine, in contrast, cannot mutate to a virulent form. Ironically, nearly every case of polio in the United States is caused by the vaccine itself. In the developing countries of the world, the issue of vaccination is more pressing. Millions receive neither form of polio vaccine; as a result, at least 250,000 individuals are paralyzed or die each year. The World Health Organization and other health providers continue to work toward the very practical goal of completely eradicating this disease.

See also Antibacterial drugs; Birth control pill; Iron lung; Penicillin; Polio vaccine (Salk); Reserpine; Salvarsan; Tuberculosis vaccine; Typhus vaccine; Yellow fever vaccine.

Further Reading

DeJauregui, Ruth. 100 Medical Milestones That Shaped World History. San Mateo, Calif.: Bluewood Books, 1998.
Grady, Denise. "As Polio Fades, Dr. Salk's Vaccine Re-emerges." New York Times (December 14, 1999).
Plotkin, Stanley A., and Edward A. Mortimer. Vaccines. 2d ed. Philadelphia: W. B. Saunders, 1994.
Seavey, Nina Gilden, Jane S. Smith, and Paul Wagner. A Paralyzing Fear: The Triumph over Polio in America. New York: TV Books, 1998.


Polio vaccine (Salk)

The invention: Jonas Salk's vaccine was the first that prevented polio, resulting in the virtual eradication of crippling polio epidemics.

The people behind the invention:
Jonas Edward Salk (1914-1995), an American physician, immunologist, and virologist
Thomas Francis, Jr. (1900-1969), an American microbiologist

Cause for Celebration

Poliomyelitis (polio) is an infectious disease that can adversely affect the central nervous system, causing paralysis and great muscle wasting due to the destruction of motor neurons (nerve cells) in the spinal cord. Epidemiologists believe that polio has existed since ancient times, and evidence of its presence in Egypt, circa 1400 b.c.e., has been presented. Fortunately, the Salk vaccine and the later vaccine developed by the American virologist Albert Bruce Sabin can prevent the disease. Consequently, except in underdeveloped nations, polio is rare. Moreover, although there is still no cure for polio once a person develops it, a large number of polio cases end without paralysis or any observable effect.

Polio is often called "infantile paralysis" because it is seen most often in children. It is caused by a virus and begins with body aches, a stiff neck, and other symptoms that are very similar to those of a severe case of influenza. In some cases, within two weeks after its onset, the course of polio begins to lead to muscle wasting and paralysis.

On April 12, 1955, the world was thrilled with the announcement that Jonas Edward Salk's poliomyelitis vaccine could prevent the disease. It was reported that schools were closed in celebration of this event. Salk, the son of a New York City garment worker, has since become one of the most well-known and publicly venerated medical scientists in the world.

Vaccination is a method of disease prevention by immunization, whereby a small amount of virus is injected into the body to prevent a viral disease. The process depends on the production of antibodies (body proteins that are specifically coded to prevent the disease spread by the virus) in response to the vaccination. Vaccines are made of weakened or killed virus preparations.

Jonas Salk

The son of a garment industry worker, Jonas Edward Salk was born in New York City in 1914. He worked his way through school, graduating from New York University School of Medicine in 1938. Afterward he joined microbiologist Thomas Francis, Jr., in developing a vaccine for influenza. In 1942, Salk began a research fellowship at the University of Michigan and subsequently joined the epidemiology faculty. He moved to the University of Pittsburgh in 1947, directing its Viral Research Lab, and while there developed his vaccine for poliomyelitis. The discovery catapulted Salk into worldwide fame, but he was a controversial figure among scientists. Although Salk received the Presidential Medal of Freedom, a Congressional gold medal, and the Nehru Award for International Understanding, he was turned down for membership in the National Academy of Sciences. In 1963 he opened the Salk Institute for Biological Sciences in La Jolla, California. Well aware of his reputation among medical researchers, he once joked, "I couldn't possibly have become a member of this institute if I hadn't founded it myself." He died in 1995.

Electrifying Results

The Salk vaccine was produced in two steps. First, polio viruses were grown in monkey kidney tissue cultures. These polio viruses were then killed by treatment with the right amount of formaldehyde to produce an effective vaccine. The killed-virus polio vaccine was found to be safe and to cause the production of antibodies against the disease, a sign that it should prevent polio.

In early 1952, Salk tested a prototype vaccine against Type I poliovirus on children who were afflicted with the disease and were thus deemed safe from reinfection. This test showed that the vaccination greatly elevated the concentration of polio antibodies in these children. On July 2, 1952, encouraged by these results, Salk vaccinated forty-three children who had never had polio with vaccines against each of the three virus types (Type I, Type II, and Type III). All inoculated children produced high levels of polio antibodies, and none of them developed the disease. Consequently, the vaccine appeared to be both safe in humans and likely to become an effective public health tool. In 1953, Salk reported these findings in the Journal of the American Medical Association.

In April, 1954, nationwide testing of the Salk vaccine began, via the mass vaccination of American schoolchildren. The results of the trial were electrifying. The vaccine was safe, and it greatly reduced the incidence of the disease. In fact, it was estimated that Salk's vaccine gave schoolchildren 60 to 90 percent protection against polio.

Salk was instantly praised. Then, however, several cases of polio occurred as a consequence of the vaccine. Its use was immediately suspended by the U.S. surgeon general, pending a complete examination. Soon, it was evident that all the cases of vaccine-derived polio were attributable to faulty batches of vaccine made by one pharmaceutical company. Salk and his associates were in no way responsible for the problem. Appropriate steps were taken to ensure that such an error would not be repeated, and the Salk vaccine was again released for use by the public.

Consequences

The first reports on the polio epidemic in the United States had occurred on June 27, 1916, when one hundred residents of Brooklyn, New York, were afflicted. Soon, the disease had spread. By August, twenty-seven thousand people had developed polio. Nearly seven thousand afflicted people died, and many survivors of the epidemic were permanently paralyzed to varying extents. In New York City alone, nine thousand people developed polio and two thousand died.
Chaos reigned as large numbers of terrified people attempted to leave and were turned back by police. Smaller polio epidemics occurred throughout the nation in the years that followed (for example, the Catawba County, North Carolina, epidemic of 1944). A particularly horrible aspect of polio was the fact that more than 70 percent of polio victims were small children. Adults caught it too; the most famous of these adult polio victims was U.S. President Franklin D. Roosevelt. There was no cure for the disease. The best available treatment was physical therapy.

As of August, 1955, more than four million polio vaccinations had been given. The Salk vaccine appeared to work very well. There were only half as many reported cases of polio in 1956 as there had been in 1955. It appeared that polio was being conquered. By 1957, the number of cases reported nationwide had fallen below six thousand. Thus, in two years, its incidence had dropped by about 80 percent. This was very exciting, and soon other countries clamored for the vaccine. By 1959, ninety other countries had been supplied with the Salk vaccine. Worldwide, the disease was being eradicated. The introduction of an oral polio vaccine by Albert Bruce Sabin supported this progress.

Salk received many honors, including honorary degrees from American and foreign universities, the Lasker Award, a Congressional Medal for Distinguished Civilian Service, and membership in the French Legion of Honor, yet he received neither the Nobel Prize nor membership in the American National Academy of Sciences. It is believed by many that this neglect was a result of the personal antagonism of some members of the scientific community who strongly disagreed with his theories of viral inactivation.

See also Antibacterial drugs; Birth control pill; Iron lung; Penicillin; Polio vaccine (Sabin); Reserpine; Salvarsan; Tuberculosis vaccine; Typhus vaccine; Yellow fever vaccine.

Further Reading

DeJauregui, Ruth. 100 Medical Milestones That Shaped World History. San Mateo, Calif.: Bluewood Books, 1998.
Plotkin, Stanley A., and Edward A. Mortimer. Vaccines. 2d ed. Philadelphia: W. B. Saunders, 1994.
Seavey, Nina Gilden, Jane S. Smith, and Paul Wagner. A Paralyzing Fear: The Triumph over Polio in America. New York: TV Books, 1998.
Smith, Jane S. Patenting the Sun: Polio and the Salk Vaccine. New York: Anchor/Doubleday, 1991.


Polyester

The invention: A synthetic fibrous polymer used especially in fabrics.

The people behind the invention:
Wallace H. Carothers (1896-1937), an American polymer chemist
Hilaire de Chardonnet (1839-1924), a French polymer chemist
John R. Whinfield (1901-1966), a British polymer chemist

A Story About Threads

Human beings have worn clothing since prehistoric times. At first, clothing consisted of animal skins sewed together. Later, people learned to spin threads from the fibers in plant or animal materials and to weave fabrics from the threads (for example, wool, silk, and cotton).

By the end of the nineteenth century, efforts were begun to produce synthetic fibers for use in fabrics. These efforts were motivated by two concerns. First, it seemed likely that natural materials would become too scarce to meet the needs of a rapidly increasing world population. Second, a series of natural disasters—affecting the silk industry in particular—had demonstrated the problems of relying solely on natural fibers for fabrics.

The first efforts to develop synthetic fabric focused on artificial silk, because of the high cost of silk, its beauty, and the fact that silk production had been interrupted by natural disasters more often than the production of any other material. The first synthetic silk was rayon, which was originally patented by a French count, Hilaire de Chardonnet, and was later much improved by other polymer chemists. Rayon is a semisynthetic material that is made from wood pulp or cotton. Because there was a need for synthetic fabrics whose manufacture did not require natural materials, other avenues were explored. One of these avenues led to the development of totally synthetic polyester fibers. In the United States, the best-known of these is Dacron, which is manufactured by E. I. Du Pont de Nemours. Easily made into threads, Dacron is widely used in clothing. It is also used to make audiotapes and videotapes and in automobile and boat bodies.

From Polymers to Polyester

Dacron belongs to a group of chemicals known as “synthetic polymers.” All polymers are made of giant molecules, each of which is composed of a large number of simpler molecules (“monomers”) that have been linked, chemically, to form long strings. Efforts by industrial chemists to prepare synthetic polymers developed in the twentieth century after it was discovered that many natural building materials and fabrics (such as rubber, wood, wool, silk, and cotton) were polymers, and as the ways in which monomers could be joined to make polymers became better understood.

One group of chemists who studied polymers sought to make inexpensive synthetic fibers to replace expensive silk and wool. Their efforts led to the development of well-known synthetic fibers such as nylon and Dacron. Wallace H. Carothers of Du Pont pioneered the development of polyamide polymers, collectively called “nylon,” and was the first researcher to attempt to make polyester. It was British polymer chemists John R. Whinfield and J. T. Dickson of Calico Printers Association (CPA) Limited, however, who in 1941 perfected and patented polyester that could be used to manufacture clothing.

The first polyester fiber products were produced in 1950 in Great Britain by London’s British Imperial Chemical Industries, which had secured the British patent rights from CPA. This polyester, which was made of two monomers, terephthalic acid and ethylene glycol, was called Terylene. In 1951, Du Pont, which had acquired Terylene patent rights for the Western Hemisphere, began to market its own version of this polyester, which was called Dacron. Soon, other companies around the world were selling polyester materials of similar composition.

Dacron and other polyesters are used in many items in the United States. Made into fibers and woven, Dacron becomes cloth.
When pressed into thin sheets, it becomes Mylar, which is used in videotapes and audiotapes. Dacron polyester, mixed with other materials, is also used in many industrial items, including motor vehicle and boat bodies. Terylene and similar polyester preparations serve the same purposes in other countries.

The production of polyester begins when monomers are mixed in huge reactor tanks and heated, which causes them to form giant polymer chains composed of thousands of alternating monomer units. If T represents terephthalic acid and E represents ethylene glycol, a small part of a necklace-like polymer can be shown in the following way: (TETETETETE). Once each batch of polyester polymer has the desired composition, it is processed for storage until it is needed. In this procedure, the material, in liquid form in the high-temperature reactor, is passed through a device that cools it and forms solid strips. These strips are then diced, dried, and stored.

When polyester fiber is desired, the diced polyester is melted and then forced through tiny holes in a “spinneret” device; this process is called “extruding.” The extruded polyester cools again, while passing through the spinneret holes, and becomes fine fibers called “filaments.” The filaments are immediately wound into threads that are collected in rolls. These rolls of thread are then dyed and used to weave various fabrics. If polyester sheets or other forms of polyester are desired, the melted, diced polyester is processed in other ways. Polyester preparations are often mixed with cotton, glass fibers, or other synthetic polymers to produce various products.

Impact

The development of polyester was a natural consequence of the search for synthetic fibers that developed from work on rayon. Once polyester had been developed, its great utility led to its widespread use in industry. In addition, the profitability of the material spurred efforts to produce better synthetic fibers for specific uses. One example is that of stretchy polymers such as Helance, which is a form of nylon.
In addition, new chemical types of polymer fibers were developed, including the polyurethane materials known collectively as “spandex” (for example, Lycra and Vyrene).

The wide variety of uses for polyester is amazing. Mixed with cotton, it becomes wash-and-wear clothing; mixed with glass, it is used to make boat and motor vehicle bodies; combined with other materials, it is used to make roofing materials, conveyor belts, hoses, and tire cords. In Europe, polyester has become the main packaging material for consumer goods, and the United States does not lag far behind in this area. The future is sure to hold more uses for polyester and the invention of new polymers. These spinoffs of polyester will be essential in the development of high technology.

See also Buna rubber; Neoprene; Nylon; Orlon; Plastic; Polyethylene; Polystyrene.

Further Reading
Furukawa, Yasu. Inventing Polymer Science: Staudinger, Carothers, and the Emergence of Macromolecular Chemistry. Philadelphia: University of Pennsylvania Press, 1998.
Handley, Susannah. Nylon: The Story of a Fashion Revolution, A Celebration of Design from Art Silk to Nylon and Thinking Fibres. Baltimore: Johns Hopkins University Press, 1999.
Hermes, Matthew E. Enough for One Lifetime: Wallace Carothers, Inventor of Nylon. Washington, D.C.: American Chemical Society and the Chemical Heritage Foundation, 1996.
Smith, Matthew Boyd. Polyester: The Indestructible Fashion. Atglen, Pa.: Schiffer, 1998.


Polyethylene

The invention: An artificial polymer with strong insulating properties and many other applications.

The people behind the invention:
Karl Ziegler (1898-1973), a German chemist
Giulio Natta (1903-1979), an Italian chemist
August Wilhelm von Hofmann (1818-1892), a German chemist

The Development of Synthetic Polymers

In 1841, August Hofmann completed his Ph.D. with Justus von Liebig, a German chemist and founding father of organic chemistry. One of Hofmann’s students, William Henry Perkin, discovered that coal tars could be used to produce brilliant dyes. The German chemical industry, under Hofmann’s leadership, soon took the lead in this field, primarily because the discipline of organic chemistry was much more developed in Germany than elsewhere.

The realities of the early twentieth century found the chemical industry struggling to produce synthetic substitutes for natural materials that were in short supply, particularly rubber. Rubber is a natural polymer, a material composed of a long chain of small molecules that are linked chemically. An early synthetic rubber, neoprene, was one of many synthetic polymers (some others were Bakelite, polyvinyl chloride, and polystyrene) developed in the 1920’s and 1930’s. Another polymer, polyethylene, was developed in 1936 by Imperial Chemical Industries. Polyethylene was a tough, waxy material that was produced at high temperature and at pressures of about one thousand atmospheres. Its method of production made the material expensive, but it was useful as an insulating material.

World War II and the material shortages associated with it brought synthetic materials into the limelight. Many new uses for polymers were discovered, and after the war they were in demand for the production of a variety of consumer goods, although polyethylene was still too expensive to be used widely.


Organometallics Provide the Key

Karl Ziegler, an organic chemist with an excellent international reputation, spent most of his career in Germany. With his international reputation and lack of political connections, he was a natural candidate to take charge of the Kaiser Wilhelm Institute for Coal Research (later renamed the Max Planck Institute) in 1943. Wise planners saw him as a director who would be favored by the conquering Allies. His appointment was a shrewd one, since he was allowed to retain his position after World War II ended. Ziegler thus played a key role in the resurgence of German chemical research after the war.

Before accepting the position at the Kaiser Wilhelm Institute, Ziegler made it clear that he would take the job only if he could pursue his own research interests in addition to conducting coal research. The location of the institute in the Ruhr Valley meant that abundant supplies of ethylene were available from the local coal industry, so it is not surprising that Ziegler began experimenting with that material.

Although Ziegler’s placement as head of the institute was an important factor in his scientific breakthrough, his previous research was no less significant. Ziegler devoted much time to the field of organometallic compounds, which are compounds that contain a metal atom that is bonded to one or more carbon atoms. Ziegler was interested in organoaluminum compounds, which are compounds that contain aluminum-carbon bonds.

Ziegler was also interested in polymerization reactions, which involve the linking of thousands of smaller molecules into the single long chain of a polymer. Several synthetic polymers were known, but chemists could exert little control on the actual process. It was impossible to regulate the length of the polymer chain, and the extent of branching in the chain was unpredictable. It was as a result of studying the effect of organoaluminum compounds on these chain formation reactions that the key discovery was made.
Ziegler and his coworkers already knew that ethylene would react with organoaluminum compounds to produce hydrocarbons, which are compounds that contain only carbon and hydrogen and that have varying chain lengths. Regulating the product chain length continued to be a problem.


At this point, fate intervened in the form of a trace of nickel left in a reactor from a previous experiment. The nickel caused the chain lengthening to stop after two ethylene molecules had been linked. Ziegler and his colleagues then tried to determine whether metals other than nickel caused a similar effect with a longer polymeric chain. Several metals were tested, and the most important finding was that a trace of titanium chloride in the reactor caused the deposition of large quantities of high-density polyethylene at low pressures.

Ziegler licensed the procedure, and within a year, Giulio Natta had modified the catalysts to give high yields of polymers with highly ordered side chains branching from the main chain. This opened the door for the easy production of synthetic rubber. For their discovery of Ziegler-Natta catalysts, Ziegler and Natta shared the 1963 Nobel Prize in Chemistry.

Consequences

Ziegler’s process produced polyethylene that was much more rigid than the material produced at high pressure. His product also had a higher density and a higher softening temperature. Industrial exploitation of the process was unusually rapid, and within ten years more than twenty plants utilizing the process had been built throughout Europe, producing more than 120,000 metric tons of polyethylene. This rapid exploitation was one reason Ziegler and Natta were awarded the Nobel Prize after such a relatively short time. By the late 1980’s, total production stood at roughly 18 billion pounds worldwide. Other polymeric materials, including polypropylene, can be produced by similar means.

The ready availability and low cost of these versatile materials have radically transformed the packaging industry. Polyethylene bottles are far lighter than their glass counterparts; in addition, gases and liquids do not diffuse into polyethylene very easily, and it does not break easily.
As a result, more and more products are bottled in containers made of polyethylene or other polymers. Other novel materials possessing properties unparalleled by any naturally occurring material (Kevlar, for example, which is used to make bullet-resistant vests) have also been an outgrowth of the availability of low-cost polymeric materials.


See also Buna rubber; Neoprene; Nylon; Orlon; Plastic; Polyester; Polystyrene.

Further Reading
Boor, John. Ziegler-Natta Catalysts and Polymerizations. New York: Academic Press, 1979.
Clarke, Alison J. Tupperware: The Promise of Plastic in 1950s America. Washington, D.C.: Smithsonian Institution Press, 1999.
Natta, Giulio. “From Stereospecific Polymerization to Asymmetric Autocatalytic Synthesis of Macromolecules.” In Chemistry, 1963-1970. River Edge, N.J.: World Scientific, 1999.
Ziegler, Karl. “Consequences and Development of an Invention.” In Chemistry, 1963-1970. River Edge, N.J.: World Scientific, 1999.


Polystyrene

The invention: A clear, moldable polymer with many industrial uses whose overuse has also threatened the environment.

The people behind the invention:
Edward Simon, an American chemist
Charles Gerhardt (1816-1856), a French chemist
Marcellin Pierre Berthelot (1827-1907), a French chemist

Polystyrene Is Characterized

In the late eighteenth century, a scientist by the name of Casper Neuman described the isolation of a chemical called “storax” from a balsam tree that grew in Asia Minor. This isolation led to the first report on the physical properties of the substance later known as “styrene.” The work of Neuman was confirmed and expanded upon years later, first in 1839 by Edward Simon, who evaluated the temperature dependence of styrene, and later by Charles Gerhardt, who proposed its molecular formula. The work of these two men sparked an interest in styrene and its derivatives.

Polystyrene belongs to a special class of molecules known as polymers. A polymer (the name means “many parts”) is a giant molecule formed by combining small molecular units, called “monomers.” This combination results in a macromolecule whose physical properties—especially its strength and flexibility—are significantly different from those of its monomer components. Such polymers are often simply called “plastics.”

Polystyrene has become an important material in modern society because it exhibits a variety of physical characteristics that can be manipulated for the production of consumer products. Polystyrene is a “thermoplastic,” which means that it can be softened by heat and then reformed, after which it can be cooled to form a durable and resilient product. At 94 degrees Celsius, polystyrene softens; at room temperature, however, it rings like a metal when struck. Because of the glasslike nature and high refractive index of polystyrene, products made from it are known for their shine and attractive texture. In addition, the material is characterized by a high level of water resistance and by electrical insulating qualities. It is also flammable, can be dissolved or softened by many solvents, and is sensitive to light. These qualities make polystyrene a valuable material in the manufacture of consumer products.

Plastics on the Market

In 1866, Marcellin Pierre Berthelot prepared styrene from ethylene and benzene mixtures in a heated reaction flask. This was the first synthetic preparation of polystyrene. In 1925, the Naugatuck Chemical Company began to operate the first commercial styrene/polystyrene manufacturing plant. In the 1930’s, the Dow Chemical Company became involved in the manufacturing and marketing of styrene/polystyrene products. Dow’s Styron 666 was first marketed as a general-purpose polystyrene in 1938. This material was the first plastic product to demonstrate polystyrene’s excellent mechanical properties and ease of fabrication.

The advent of World War II increased the need for plastics. When the Allies’ supply of natural rubber was interrupted, chemists sought to develop synthetic substitutes. The use of additives with polymer species was found to alter some of the physical properties of those species. Adding substances called “elastomers” during the polymerization process was shown to give a rubberlike quality to a normally brittle species. An example of this is Dow’s Styron 475, which was marketed in 1948 as the first “impact” polystyrene. It is called an impact polystyrene because it also contains butadiene, which increases the product’s resistance to breakage. The continued characterization of polystyrene products has led to the development of a worldwide industry that fills a wide range of consumer needs.

Following World War II, the plastics industry revolutionized many aspects of modern society.
Polystyrene is only one of the many plastics involved in this process, but it has found its way into a multitude of consumer products. Disposable kitchen utensils, trays and packages, cups, videocassettes, insulating foams, egg cartons, food wrappings, paints, and appliance parts are only a few of the typical applications of polystyrenes. In fact, the production of polystyrene has grown to exceed 5 billion pounds per year.

The tremendous growth of this industry in the postwar era has been fueled by a variety of factors. Having studied the physical and chemical properties of polystyrene, chemists and engineers were able to envision particular uses and to tailor the manufacture of the product to fit those uses precisely. Because of its low cost of production, superior performance, and light weight, polystyrene has become the material of choice for the packaging industry. The automobile industry also enjoys its benefits: polystyrene’s density is lower than that of glass or steel, so using it reduces the weight of automobiles and thereby increases gas efficiency.

Impact

There is no doubt that the marketing of polystyrene has greatly affected almost every aspect of modern society. From computer keyboards to food packaging, the use of polystyrene has had a powerful impact on both the quality and the prices of products. Its use is not, however, without drawbacks; it has also presented humankind with a dilemma. The wholesale use of polystyrene has created an environmental problem that represents a danger to wildlife, adds to roadside pollution, and greatly contributes to the volume of solid waste in landfills.

Polystyrene has become a household commodity because it lasts. The reciprocal effect of this fact is that it may last forever. Unlike natural products, which decompose upon burial, polystyrene is very difficult to convert into degradable forms. The newest challenge facing engineers and chemists is to provide for the safe and efficient disposal of plastic products. Thermoplastics such as polystyrene can be melted down and remolded into new products, which makes recycling and reuse of polystyrene a viable option, but this option requires the cooperation of the same consumers who have benefited from the production of polystyrene products.
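The density argument behind polystyrene's automotive use can be made concrete with a short sketch. The densities below are typical handbook values chosen for illustration; the article itself quotes no figures.

```python
# Back-of-envelope mass comparison for a part of fixed volume made from
# polystyrene, glass, or steel. Densities (g/cm^3) are typical handbook
# values assumed for illustration, not figures from the text.

DENSITY_G_PER_CM3 = {
    "polystyrene": 1.05,
    "glass": 2.5,
    "steel": 7.85,
}

def part_mass_g(material: str, volume_cm3: float) -> float:
    """Mass in grams of a part of the given volume and material."""
    return DENSITY_G_PER_CM3[material] * volume_cm3

VOLUME = 1000.0  # a hypothetical one-liter body panel
for material in DENSITY_G_PER_CM3:
    print(f"{material}: {part_mass_g(material, VOLUME):.0f} g")
```

On these assumed densities, the same part weighs several times less in polystyrene than in glass, and roughly seven times less than in steel, which is the whole of the fuel-efficiency argument.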
See also Food freezing; Nylon; Orlon; Plastic; Polyester; Polyethylene; Pyrex glass; Teflon; Tupperware.


Further Reading
Fenichell, Stephen. Plastic: The Making of a Synthetic Century. New York: HarperBusiness, 1997.
Mossman, S. T. I. Early Plastics: Perspectives, 1850-1950. London: Science Museum, 1997.
Wünsch, J. R. Polystyrene: Synthesis, Production and Applications. Shropshire, England: Rapra Technology, 2000.


Propeller-coordinated machine gun

The invention: A mechanism that synchronized machine gun fire with propeller movement to prevent World War I fighter plane pilots from shooting off their own propellers during combat.

The people behind the invention:
Anthony Herman Gerard Fokker (1890-1939), a Dutch-born American entrepreneur, pilot, aircraft designer, and manufacturer
Roland Garros (1888-1918), a French aviator
Max Immelmann (1890-1916), a German aviator
Raymond Saulnier (1881-1964), a French aircraft designer and manufacturer

French Innovation

The first true aerial combat of World War I took place in 1915. Before then, weapons attached to airplanes were inadequate for any real combat work. Hand-held weapons and clumsily mounted machine guns were used by pilots and crew members in attempts to convert their observation planes into fighters. On April 1, 1915, this situation changed. From an airfield near Dunkerque, France, a French airman, Lieutenant Roland Garros, took off in an airplane equipped with a device that would make his plane the most feared weapon in the air at that time.

During a visit to Paris, Garros met with Raymond Saulnier, a French aircraft designer. In April of 1914, Saulnier had applied for a patent on a device that mechanically linked the trigger of a machine gun to a cam on the engine shaft. Theoretically, such an assembly would allow the gun to fire between the moving blades of the propeller. Unfortunately, the available machine gun Saulnier used to test his device was a Hotchkiss gun, which tended to fire at an uneven rate. On Garros’s arrival, Saulnier showed him a new invention: a steel deflector shield that, when fastened to the propeller, would deflect the small percentage of mistimed bullets that would otherwise destroy the blade.


The first test-firing was a disaster, shooting the propeller off and destroying the fuselage. The deflector braces were then modified, streamlining the shield into a wedge shape with gutter channels for deflected bullets. The invention was attached to a Morane-Saulnier monoplane, and on April 1, Garros took off alone toward the German lines. Success was immediate. Garros shot down a German observation plane that morning. During the next two weeks, Garros shot down five more German aircraft.

German Luck

The German high command, frantic over the effectiveness of the French “secret weapon,” sent out spies to try to steal the secret and also ordered engineers to develop a similar weapon. Luck was with them. On April 18, 1915, despite warnings by his superiors not to fly over enemy-held territory, Garros was forced to crash-land behind German lines with engine trouble. Before he could destroy his aircraft, Garros and his plane were captured by German troops. The secret weapon was revealed.

The Germans were ecstatic about the opportunity to examine the new French weapon. Unlike the French, the Germans had the first air-cooled machine gun, the Parabellum, which shot continuous bands of one hundred bullets and was reliable enough to be adapted to a timing mechanism.

In May of 1915, Anthony Herman Gerard Fokker was shown Garros’s captured plane and was ordered to copy the idea. Instead, Fokker and his assistant designed a new firing system. It is unclear whether Fokker and his team were already working on a synchronizer or to what extent they knew of Saulnier’s previous work in France. Within several days, however, they had constructed a working prototype and attached it to a Fokker Eindecker 1 airplane. The design consisted of a simple linkage of cams and push-rods connected to the oil-pump drive of an Oberursel engine and the trigger of a Parabellum machine gun. The firing of the gun had to be timed precisely to fire its six hundred rounds per minute between the twelve-hundred-revolutions-per-minute propeller blades.

Anthony Herman Gerard Fokker

Anthony Fokker was born on the island of Java in the Dutch East Indies (now Indonesia) in 1890. He returned to his parents’ home country, the Netherlands, to attend school and then studied aeronautics in Germany. He built his first plane in 1910 and established Fokker Aeroplanbau near Berlin in 1912. His monoplanes were highly esteemed when World War I erupted in 1914, and he offered his designs to both the German and the French governments. The Germans hired him. By the end of the war his fighters, especially the Dr I triplane and D VII biplane, were practically synonymous with German air warfare because they had been the scourge of Allied pilots.

In 1922 Fokker moved to the United States and opened the Atlantic Aircraft Corporation in New Jersey. He had lost enthusiasm for military aircraft and turned his skills toward producing advanced designs for civilian use. The planes his company turned out established one first after another. His T-2 monoplane became the first to fly nonstop from coast to coast, New York to San Diego. His ten-seat airliner, the F VII/3m, carried Lieutenant Commander Richard Byrd over the North Pole in 1926 and Charles Kingsford-Smith across the Pacific Ocean in 1928. By the time Fokker died in New York in 1939, he had become a visionary. He foresaw passenger planes as the means to knit together the far-flung nations of the world into a network of rapid travel and communications.

Fokker took his invention to Doberitz air base, and after a series of exhausting trials before the German high command, both on the ground and in the air, he was allowed to take two prototypes of the machine-gun-mounted airplanes to Douai in German-held France.

At Douai, two German pilots crowded into the cockpit with Fokker and were given demonstrations of the plane’s capabilities. The airmen were Oswald Boelcke, a test pilot and veteran of forty reconnaissance missions, and Max Immelmann, a young, skillful aviator who was assigned to the front.

When the first combat-ready versions of Fokker’s Eindecker 1 were delivered to the front lines, one was assigned to Boelcke, the other to Immelmann. On August 1, 1915, with their aerodrome under attack from nine English bombers, Boelcke and Immelmann manned their aircraft and attacked. Boelcke’s gun jammed, and he was forced to cut off his attack and return to the aerodrome. Immelmann, however, succeeded in shooting down one of the bombers with his synchronized machine gun. It was the first victory credited to the Fokker-designed weapon system.

Impact

At the outbreak of World War I, military strategists and commanders on both sides saw the wartime function of airplanes as a means to supply intelligence information behind enemy lines or as airborne artillery spotting platforms. As the war progressed and aircraft flew more or less freely across the trenches, providing vital information to both armies, it became apparent to ground commanders that while it was important to obtain intelligence on enemy movements, it was important also to deny the enemy similar information.

Early in the war, the French used airplanes as strategic bombing platforms. As both armies began to use their air forces for strategic bombing of troops, railways, ports, and airfields, it became evident that aircraft would have to be employed against enemy aircraft to prevent reconnaissance and bombing raids. With the invention of the synchronized forward-firing machine gun, pilots could use their aircraft as attack weapons. A pilot finally could coordinate control of his aircraft and his armaments with maximum efficiency.

This conversion of aircraft from nearly passive observation platforms to attack fighters is the single greatest innovation in the history of aerial warfare. The development of fighter aircraft forced a change in military strategy, tactics, and logistics and ushered in the era of modern warfare. Fighter planes are responsible for the battle-tested military adage: Whoever controls the sky controls the battlefield.

See also Airplane; Radar; Stealth aircraft.
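The timing problem Fokker's linkage had to solve can be checked with simple arithmetic, using the two rates the article quotes: six hundred rounds per minute from the gun and twelve hundred revolutions per minute from the propeller. The two-blade propeller count below is an assumption added for illustration, not a figure from the text.

```python
# Rough timing budget for a gun-propeller synchronizer.
# Rates are from the text; the blade count is an assumption.

ROUNDS_PER_MINUTE = 600
PROPELLER_RPM = 1200
BLADES = 2  # assumed two-blade propeller

seconds_between_shots = 60 / ROUNDS_PER_MINUTE      # time between rounds
seconds_per_revolution = 60 / PROPELLER_RPM         # time per propeller turn
blade_passages_per_minute = PROPELLER_RPM * BLADES  # blades crossing the muzzle

# Each shot interval spans several blade passages, so the cam linkage only
# needs to delay a trigger pull briefly until the blades clear the muzzle.
passages_per_shot = blade_passages_per_minute / ROUNDS_PER_MINUTE

print(f"{seconds_between_shots:.3f} s between shots")
print(f"{seconds_per_revolution:.3f} s per propeller revolution")
print(f"{passages_per_shot:.0f} blade passages per shot")
```

Under these assumptions a blade crosses the line of fire 2,400 times a minute, four times per round fired, which is why a purely mechanical cam-and-push-rod delay was sufficient.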


Further Reading
Dierikx, M. L. J. Fokker: A Transatlantic Biography. Washington, D.C.: Smithsonian Institution Press, 1997.
Franks, Norman L. R. Aircraft Versus Aircraft: The Illustrated Story of Fighter Pilot Combat from 1914 to the Present Day. New York: Barnes & Noble Books, 1999.
Guttman, Jon. Fighting Firsts: Fighter Aircraft Combat Debuts from 1914 to 1944. London: Cassell, 2000.


Pyrex glass

The invention: A superhard and durable glass product with widespread uses in industry and home products.

The people behind the invention:
Jesse T. Littleton (1888-1966), the chief physicist of Corning Glass Works’ research department
Eugene G. Sullivan (1872-1962), the founder of Corning’s research laboratories
William C. Taylor (1886-1958), an assistant to Sullivan

Cooperating with Science

By the twentieth century, Corning Glass Works had a reputation as a corporation that cooperated with the world of science to improve existing products and develop new ones. In the 1870’s, the company had hired university scientists to advise on improving the optical quality of glasses, an early example of today’s common practice of academics consulting for industry.

When Eugene G. Sullivan established Corning’s research laboratory in 1908 (the first of its kind devoted to glass research), the task that he undertook with William C. Taylor was that of making a heat-resistant glass for railroad lantern lenses. The problem was that ordinary flint glass (the kind in bottles and windows, made by melting together silica sand, soda, and lime) has a fairly high thermal expansion but a poor heat conductivity. The glass thus expands unevenly when exposed to heat. This condition can cause the glass to break, sometimes violently. Colored lenses for oil or gas railroad signal lanterns sometimes shattered if they were heated too much by the flame that produced the light and were then sprayed by rain or wet snow. This changed a red “stop” light to a clear “proceed” signal and caused many accidents or near misses in railroading in the late nineteenth century.

Two solutions were possible: to improve the thermal conductivity or to reduce the thermal expansion. The first is what metals do: When exposed to heat, most metals have an expansion much greater than that of glass, but they conduct heat so quickly that they expand nearly equally throughout and seldom lose structural integrity from uneven expansion. Glass, however, is an inherently poor heat conductor, so this approach was not possible. Therefore, a formulation had to be found that had little or no thermal expansivity. Pure silica (one example is quartz) fits this description, but it is expensive and, with its high melting point, very difficult to work.

The formulation that Sullivan and Taylor devised was a borosilicate glass—essentially a soda-lime glass with the lime replaced by borax, with a small amount of alumina added. This gave the low thermal expansion needed for signal lenses. It also turned out to have good acid-resistance, which led to its being used for the battery jars required for railway telegraph systems and other applications. The glass was marketed as “Nonex” (for “nonexpansion glass”).

From the Railroad to the Kitchen

Jesse T. Littleton joined Corning’s research laboratory in 1913. The company had a very successful lens and battery jar material, but no one had even considered it for cooking or other heat-transfer applications, because the prevailing opinion was that glass absorbed and conducted heat poorly. This meant that, in glass pans, cakes, pies, and the like would cook on the top, where they were exposed to hot air, but would remain cold and wet (or at least undercooked) next to the glass surface. As a physicist, Littleton knew that glass absorbed radiant energy very well. He thought that the heat-conduction problem could be solved by using the glass vessel itself to absorb and distribute heat.

Glass also had a significant advantage over metal in baking. Metal bakeware mostly reflects radiant energy to the walls of the oven, where it is lost ultimately to the surroundings. Glass would absorb this radiation energy and conduct it evenly to the cake or pie, giving a better result than that of the metal bakeware.
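The expansion argument at the heart of this chapter can be put in rough numbers using the standard first-order estimate that the stress in a constrained, suddenly chilled glass surface scales as the product of Young's modulus, the expansion coefficient, and the temperature change. All constants below are typical handbook values assumed for illustration; none appear in the text.

```python
# First-order thermal-stress comparison of ordinary soda-lime (flint)
# glass and a borosilicate glass of the Nonex/Pyrex type, using
# sigma ~ E * alpha * delta_T. All constants are assumed handbook values.

E_GLASS_PA = 70e9             # Young's modulus (Pa), similar for both glasses
ALPHA_SODA_LIME = 9.0e-6      # thermal expansion (1/K), soda-lime glass
ALPHA_BOROSILICATE = 3.3e-6   # thermal expansion (1/K), borosilicate glass
DELTA_T = 100.0               # K: a hot signal lens hit by cold rain

def thermal_stress_pa(young_modulus, alpha, delta_t):
    """Approximate surface stress (Pa) from a sudden temperature change."""
    return young_modulus * alpha * delta_t

stress_soda_lime = thermal_stress_pa(E_GLASS_PA, ALPHA_SODA_LIME, DELTA_T)
stress_borosilicate = thermal_stress_pa(E_GLASS_PA, ALPHA_BOROSILICATE, DELTA_T)

print(f"soda-lime:    {stress_soda_lime / 1e6:.0f} MPa")
print(f"borosilicate: {stress_borosilicate / 1e6:.0f} MPa")
```

On these assumed figures the borosilicate surface sees roughly a third of the stress of ordinary glass for the same thermal shock, which is the quantitative face of Sullivan and Taylor's solution.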
Moreover, glass would not absorb and carry over flavors from one baking effort to the next, as some metals do. Littleton took a cut-off battery jar home and asked his wife to bake a cake in it. He took it to the laboratory the next day, handing pieces around and not disclosing the method of baking until all had agreed that the results were excellent. With this agreement, he was able to commit laboratory time to developing variations on the Nonex formula that were more suitable for cooking. The result was Pyrex, patented and trademarked in May of 1915.

Jesse T. Littleton

To prove that glass is good for baking, place an uncooked pie in a pie tin and place another pie pan under it, made half of tin and half of non-expanding glass. Place it in the oven. That is the experiment Jesse Talbot Littleton, Jr., used at Corning Glass Works soon after he hired on in 1913. The story behind it began with a ceramic dish that cracked when his wife baked a cake. That would not happen, he realized, with the right kind of glass. Although his wife baked a cake successfully in a glass battery jar bottom at his request, Littleton had to demonstrate the feat for his superiors scientifically. The half of the pie over the glass, it turned out, cooked faster and more evenly. Kitchen glassware was born.

Littleton was born in Belle Haven, Virginia, in 1888. After taking degrees from Southern University and Tulane University, he earned a doctorate in physics from the University of Wisconsin in 1911. He briefly vowed to remain a bachelor and dedicate his life to physics, but Besse Cook, a pretty Mississippi school teacher, turned his head, and so he got married instead. He was the first physicist added to the newly organized research laboratories at Corning in New York. There he studied practical problems involved in the industrial applications of glass, including tempering, and helped invent a gas pressure meter to measure the flow of air in blowing glass and a sensitive, faster thermometer.

He rose rapidly in the organization. In 1920 he became chief of the physical lab, assistant director of research in 1940, vice president in 1943, director of all Corning research and development in 1946, and general technical adviser in 1951. Littleton retired a year later and, a passionate outdoorsman, devoted himself to hunting and fishing. A leading figure in the ceramics industry, he belonged to the American Association for the Advancement of Science, the American Physical Society, and the American Institute of Engineers, and was an editor for the Journal of Applied Physics. He died in 1966.

Impact

In the 1930’s, Pyrex “Flameware” was introduced, with a new glass formulation that could resist the increased heat of stovetop cooking. In the half century since Flameware was introduced, Corning went on to produce a variety of other products and materials: tableware in tempered opal glass; cookware in Pyroceram, a glass product that during heat treatment gained such mechanical strength as to be virtually unbreakable; even hot plates and stoves topped with Pyroceram.

In the same year that Pyrex was marketed for cooking, it was also introduced for laboratory apparatus. Laboratory glassware had been coming from Germany at the beginning of the twentieth century; World War I cut off the supply. Corning filled the gap with Pyrex beakers, flasks, and other items. The delicate blown-glass equipment that came from Germany was completely displaced by the more rugged and heat-resistant machine-made Pyrex ware. Any number of operations are possible with Pyrex that cannot be performed safely in flint glass: Test tubes can be thrust directly into burner flames, with no preliminary warming; beakers and flasks can be heated on hot plates; and materials that dissolve when exposed to heat can be made into solutions directly in Pyrex storage bottles, a process that cannot be performed in regular glass. The list of such applications is almost endless.

Pyrex has also proved to be the material of choice for mirrors in the great reflector telescopes, beginning in 1934 with that at Mount Palomar. By its nature, astronomical observation must be done with the scope open to the weather. This means that the mirror must not change shape with temperature variations, which rules out metal mirrors.
Silvered (or aluminized) Pyrex serves very well, and Corning has developed great expertise in casting and machining Pyrex blanks for mirrors of all sizes.


See also Laminated glass; Microwave cooking; Plastic; Polystyrene; Teflon; Tupperware.

Further Reading

Blaszczyk, Regina Lee. Imagining Consumers: Design and Innovation from Wedgwood to Corning. Baltimore: Johns Hopkins University Press, 2000.
Graham, Margaret B. W., and Alec T. Shuldiner. Corning and the Craft of Innovation. New York: Oxford University Press, 2001.
Rogove, Susan Tobier, and Marcia B. Steinhauer. Pyrex by Corning: A Collector’s Guide. Marietta, Ohio: Antique Publications, 1993.
Stage, Sarah, and Virginia Bramble Vincenti. Rethinking Home Economics: Women and the History of a Profession. Ithaca, N.Y.: Cornell University Press, 1997.


Radar

The invention: An electronic system for detecting objects at great distances, radar was a major factor in the Allied victory of World War II and now pervades modern life, including scientific research.

The people behind the invention:
Sir Robert Watson-Watt (1892-1973), the father of radar, who proposed the chain air-warning system
Arnold F. Wilkins, the person who first calculated the intensity of a radio wave
William C. Curtis (1914-1976), an American engineer

Looking for Thunder

Sir Robert Watson-Watt, a scientist with twenty years of experience in government, led the development of the first radar, an acronym for radio detection and ranging. “Radar” refers to any instrument that uses the reflection of radio waves to determine the distance, direction, and speed of an object.

In 1915, during World War I (1914-1918), Watson-Watt joined Great Britain’s Meteorological Office. He began work on the detection and location of thunderstorms at the Royal Aircraft Establishment in Farnborough and remained there throughout the war. Thunderstorms were known to be a prolific source of “atmospherics” (audible disturbances produced in radio receiving apparatus by atmospheric electrical phenomena), and Watson-Watt began the design of an elementary radio direction finder that gave the general position of such storms.

Research continued after the war and reached a high point in 1922, when sealed-off cathode-ray tubes first became available. With assistance from J. F. Herd, a fellow Scot who had joined him at Farnborough, he constructed an instantaneous direction finder, using the new cathode-ray tubes, that gave the direction of thunderstorm activity. It was admittedly of low sensitivity, but it worked, and it was the first of its kind.
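The ranging and speed measurements defined above reduce to simple arithmetic: a pulse travels out and back at the speed of light, so range is half the round-trip time times c, and radial speed follows from the Doppler shift of the returned wave. A small illustrative sketch (the delay and frequency values are invented examples, not measurements from any particular radar):

```python
# Radar ranging and Doppler speed, illustrative only.
C = 299_792_458.0  # speed of light in m/s

def radar_range(round_trip_s):
    """Target distance from the pulse's round-trip time: the pulse covers 2R."""
    return C * round_trip_s / 2

def doppler_speed(f_transmit_hz, f_shift_hz):
    """Radial speed from the Doppler shift; the factor of 2 appears because
    the wave is shifted once on the way out and once again on reflection."""
    return C * f_shift_hz / (2 * f_transmit_hz)

# A 1-millisecond echo delay puts the target roughly 150 km away.
print(f"range: {radar_range(1e-3) / 1000:.1f} km")
# A 200 Hz shift on an assumed 3 GHz carrier is about a 10 m/s closing speed.
print(f"speed: {doppler_speed(3e9, 200):.2f} m/s")
```

The same two relations underlie everything from the air-defense chains described here to the distance-measuring instruments for pilots mentioned later in the article.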


William C. Curtis

In addition to radar’s applications in navigation, civil aviation, and science, it rapidly became an integral part of military aircraft by guiding weaponry and detecting enemy aircraft and missiles. The research and development industry that grew to provide offensive and defensive systems greatly expanded the opportunities for young scientists during the Cold War. Among them was William C. Curtis (1914-1976), one of the most influential African Americans in defense research.

Curtis graduated from the Tuskegee Institute (later Tuskegee University), where he later served as its first dean of engineering. While there, he helped form and train the Tuskegee Airmen, a famous squadron of African American fighter pilots during World War II. He also worked for the Radio Corporation of America (RCA) for twenty-three years. It was while at RCA that he contributed innovations to military radar. These include the Black Cat weapons system, the MG-3 fire control system, the 300-A weapon radar system, and the Airborne Interceptor Data Link.

Watson-Watt did much of this work at a new site at Ditton Park, near Slough, where the National Physical Laboratory had a field station devoted to radio research. In 1927, the two endeavors were combined as the Radio Research Station; it came under the general supervision of the National Physical Laboratory, with Watson-Watt as the first superintendent. This became a center with unrivaled expertise in direction finding using the cathode-ray tube and in studying the ionosphere using radio waves. No doubt these facilities were a factor when Watson-Watt invented radar in 1935. As radar developed, its practical uses expanded. Meteorological services around the world, using ground-based radar, gave warning of approaching rainstorms. Airborne radars proved to be a great help to aircraft by allowing them to recognize potentially hazardous storm areas. This type of radar was used also to assist research into cloud and rain physics. In this type of research, radar-equipped research aircraft observe the radar echoes inside a cloud as rain develops, and then fly through the cloud, using on-board instruments to measure the water content.

Technician at a modern radar display. (PhotoDisc)


Aiming Radar at the Moon

The principles of radar were further developed through the discipline of radio astronomy. This field began with certain observations made by the American electrical engineer Karl Jansky in 1933 at the Bell Laboratories at Holmdel, New Jersey. Radio astronomers learn about objects in space by intercepting the radio waves that these objects emit. Jansky found that radio signals were coming to Earth from space. He called these mysterious pulses “cosmic noise.” In particular, there was an unusual amount of radio noise when the antennas were pointed at the Sun, which increased at the time of sunspot activity.

All this information lay dormant until after World War II (1939-1945), at which time many investigators turned their attention to interpreting the cosmic noise. The pioneers were Sir Bernard Lovell at Manchester, England; Sir Martin Ryle at Cambridge, England; and Joseph Pawsey of the Commonwealth Scientific and Industrial Research Organization in Australia. The intensity of these radio waves was first calculated by Arnold F. Wilkins. As more powerful tools became available toward the end of World War II, curiosity caused experimenters to try to detect radio signals from the Moon. This was accomplished successfully in the late 1940’s and led to experiments on other objects in the solar system: planets, satellites, comets, and asteroids.

Impact

Radar introduced some new and revolutionary concepts into warfare, and in doing so gave birth to entirely new branches of technology. In the application of radar to marine navigation, the long-range navigation system developed during the war was taken up at once by the merchant fleets, which used military-style radar equipment without modification. In addition, radar systems that could detect buoys and other ships and obstructions in closed waters, particularly under conditions of low visibility, proved particularly useful to peacetime marine navigation.
In the same way, radar was adopted to assist in the navigation of civil aircraft. The various types of track guidance systems developed after the war were aimed at guiding aircraft in the critical last hundred kilometers or so of their run into an airport. Subsequent improvements in the system meant that an aircraft could place itself on an approach or landing path with great accuracy. The ability of radar to measure distance to an extraordinary degree of accuracy resulted in the development of an instrument that provided pilots with a direct measurement of the distances between airports. Along with these aids, ground-based radars were developed for the control of aircraft along the air routes or in the airport control area.

The development of electronic computers can be traced back to the enormous advances in circuit design that were an integral part of radar research during the war. During that time, some elements of electronic computing had been built into bombsights and other weaponry; later, it was realized that a whole range of computing operations could be performed electronically. By the end of the war, many pulse-forming networks, pulse-counting circuits, and memory circuits existed in the form needed for an electronic computer.

Finally, the developing radio technology has continued to help astronomers explore the universe. Large radio telescopes exist in almost every country and enable scientists to study the solar system in great detail. Radar-assisted cosmic background radiation studies have been a building block for the big bang theory of the origin of the universe.

See also Airplane; Cruise missile; Radio interferometer; Sonar; Stealth aircraft.

Further Reading

Brown, Louis. A Radar History of World War II: Technical and Military Imperatives. Philadelphia: Institute of Physics, 1999.
Latham, Colin, and Anne Stobbs. Pioneers of Radar. Gloucestershire: Sutton, 1999.
Rowland, John. The Radar Man: The Story of Sir Robert Watson-Watt. New York: Roy Publishers, 1964.
Watson-Watt, Robert Alexander. The Pulse of Radar: The Autobiography of Sir Robert Watson-Watt. New York: Dial Press, 1959.


Radio

The invention: The first radio transmissions of music and voice laid the basis for the modern radio and television industries.

The people behind the invention:
Guglielmo Marconi (1874-1937), an Italian physicist and inventor
Reginald Aubrey Fessenden (1866-1932), an American radio pioneer

True Radio

The first major experimenter in the United States to work with wireless radio was Reginald Aubrey Fessenden. This transplanted Canadian was a skilled, self-made scientist, but unlike the American inventor Thomas Alva Edison, he lacked the business skills to gain the full credit and wealth that such pathbreaking work might have merited. Guglielmo Marconi, in contrast, is most often remembered as the person who invented wireless (as opposed to telegraphic) radio.

There was a great difference between the contributions of Marconi and Fessenden. Marconi limited himself to experiments with radio telegraphy; that is, he sought to send through the air messages that were currently being sent by wire—signals consisting of dots and dashes. Fessenden sought to perfect radio telephony, or voice communication by wireless transmission. Fessenden thus pioneered the essential precursor of modern radio broadcasting. At the beginning of the twentieth century, Fessenden spent much time and energy publicizing his experiments, thus promoting interest in the new science of radio broadcasting.

Fessenden began his career as an inventor while working for the U.S. Weather Bureau. He set out to invent a radio system by which to broadcast weather forecasts to users on land and at sea. Fessenden believed that his technique of using continuous waves in the radio frequency range (rather than the interrupted waves Marconi had used to produce the dots and dashes of Morse code) would provide the power necessary to carry Morse telegraph code yet be effective enough to handle voice communication. He would turn out to be correct. He conducted experiments as early as 1900 at Rock Point, Maryland, about 80 kilometers south of Washington, D.C., and registered his first patent in the area of radio research in 1902.

Fame and Glory

In 1900, Fessenden asked the General Electric Company to produce a high-speed generator of alternating current—or alternator—to use as the basis of his radio transmitter. This proved to be the first major request for wireless radio apparatus that could project voices and music. It took the engineers three years to design and deliver the alternator. Meanwhile, Fessenden worked on an improved radio receiver.

To fund his experiments, Fessenden aroused the interest of financial backers, who put up one million dollars to create the National Electric Signalling Company in 1902. Fessenden, along with a small group of handpicked scientists, worked at Brant Rock on the Massachusetts coast south of Boston. Working outside the corporate system, Fessenden sought fame and glory based on his own work, rather than on something owned by a corporate patron.

Fessenden’s moment of glory came on December 24, 1906, with the first announced broadcast of his radio telephone. Using an ordinary telephone microphone and his special alternator to generate the necessary radio energy, Fessenden alerted ships up and down the Atlantic coast with his wireless telegraph and arranged for newspaper reporters to listen in from New York City. Fessenden made himself the center of the show. He played the violin, sang, and read from the Bible. Anticipating what would become standard practice fifty years later, Fessenden also transmitted the sounds of a phonograph recording. He ended his first broadcast by wishing those listening “a Merry Christmas.” A similar, equally well-publicized demonstration came on December 31.
Although Fessenden was skilled at drawing attention to his invention and must be credited, among others, as one of the engineering founders of the principles of radio, he was far less skilled at making money with his experiments, and thus his long-term impact was limited. The National Electric Signalling Company had a fine beginning and for a time was a supplier of equipment to the United Fruit Company. The financial panic of 1907, however, wiped out an opportunity to sell the Fessenden patents—at a vast profit—to a corporate giant, the American Telephone and Telegraph Corporation.

Impact

Had there been more receiving equipment available and in place, a massive audience could have heard Fessenden’s first broadcast. He had the correct idea, even to the point of playing a crude phonograph record. Yet Fessenden, Marconi, and their rivals were unable to establish a regular series of broadcasts. Their “stations” were experimental and promotional. It took the stresses of World War I to encourage broader use of wireless radio based on Fessenden’s experiments. Suddenly, communicating from ship to ship or from a ship to shore became a frequent matter of life or death. Generating publicity was no longer necessary. Governments fought over crucial patent rights. The Radio Corporation of America (RCA) pooled vital knowledge. Ultimately, RCA came to acquire the Fessenden patents. Radio broadcasting commenced, and the radio industry, with its multiple uses for mass communication, was off and running.

Antique tabletop radio. (PhotoDisc)


Guglielmo Marconi

Guglielmo Marconi failed his entrance examinations to the University of Bologna in 1894. He had a weak educational background, particularly in science, but he was not about to let that—or his father’s disapproval—stop him after he conceived a deep interest in wireless telegraphy during his teenage years.

Marconi was born in 1874 to a wealthy Italian landowner and an Irish whiskey distiller’s daughter and grew up both in Italy and England. His parents provided tutors for him, but he and his brother often accompanied their mother, a socialite, on extensive travels. He acquired considerable social skills, easy self-confidence, and determination from the experience. Thus, when he failed his exams, he simply tried another route for his ambitions. He and his mother persuaded a science professor to let Marconi use a university laboratory unofficially. His father thought it a waste of time. However, he changed his mind when his son succeeded in building equipment that could transmit electronic signals around their house without wires, an achievement right at the vanguard of technology.

Now supported by his father’s money, Marconi and his brother built an elaborate set of equipment—including an oscillator, coherer, galvanometer, and antennas—that they hoped would send a signal outside over a long distance. His brother walked off a mile and a half, out of sight, with the galvanometer and a rifle. When the galvanometer moved, indicating a signal had arrived from the oscillator, he fired the rifle to let Marconi know he had succeeded. The incident is widely cited as the first radio transmission.

Marconi went on to send signals over greater and greater distances. He patented a tuner to permit transmissions at specific frequencies, and he started the Wireless Telegraph and Signal Company to bring his inventions to the public; its American branch was the Radio Corporation of America (RCA).
He not only grew wealthy at a young age; he also was awarded half of the 1909 Nobel Prize in Physics for his work. He died in Rome in 1937, one of the most famous inventors in the world.


See also Communications satellite; Compact disc; Dolby noise reduction; FM radio; Long-distance radiotelephony; Radio crystal sets; Television; Transistor; Transistor radio.

Further Reading

Fessenden, Helen May Trott. Fessenden: Builder of Tomorrows. New York: Arno Press, 1974.
Lewis, Tom. Empire of the Air: The Men Who Made Radio. New York: HarperPerennial, 1993.
Masini, Giancarlo. Marconi. New York: Marsilio Publishers, 1995.
Seitz, Frederick. The Cosmic Inventor: Reginald Aubrey Fessenden, 1866-1932. Philadelphia: American Philosophical Society, 1999.


Radio crystal sets

The invention: The first primitive radio receivers, crystal sets led to the development of the modern radio.

The people behind the invention:
H. H. Dunwoody (1842-1933), an American inventor
Sir John A. Fleming (1849-1945), a British scientist-inventor
Heinrich Rudolph Hertz (1857-1894), a German physicist
Guglielmo Marconi (1874-1937), an Italian engineer-inventor
James Clerk Maxwell (1831-1879), a Scottish physicist
Greenleaf W. Pickard (1877-1956), an American inventor

From Morse Code to Music

In the 1860’s, James Clerk Maxwell demonstrated that electricity and light had electromagnetic and wave properties. The conceptualization of electromagnetic waves led Maxwell to propose that such waves, made by an electrical discharge, would eventually be sent long distances through space and used for communication purposes. Near the end of the nineteenth century, the technology that produced and transmitted the needed Hertzian (or radio) waves was devised by Heinrich Rudolph Hertz, Guglielmo Marconi (inventor of the wireless telegraph), and many others. The resultant radio broadcasts, however, were limited to the dots and dashes of the Morse code.

Then, in 1901, H. H. Dunwoody and Greenleaf W. Pickard invented the crystal set. Crystal sets were the first radio receivers that made it possible to hear music and the many other types of now-familiar radio programs. In addition, the simple construction of the crystal set enabled countless amateur radio enthusiasts to build “wireless receivers” (the name for early radios) and to modify them. Although, except as curiosities, crystal sets were long ago replaced by more effective radios, they are where it all began.


Crystals, Diodes, Transistors, and Chips

Radio broadcasting works by means of electromagnetic radio waves, which are low-energy cousins of light waves. All electromagnetic waves have characteristic vibration frequencies and wavelengths. This article deals mostly with long radio waves of frequencies from 550 to 1,600 kilocycles (kilohertz), which can be seen on amplitude-modulation (AM) radio dials. Frequency-modulation (FM), shortwave, and microwave radio transmission use higher-energy radio frequencies.

The broadcasting of radio programs begins with the conversion of sound to electrical impulses by means of microphones. Then, radio transmitters turn the electrical impulses into radio waves that are broadcast together with higher-energy carrier waves. The combined waves travel at the speed of light to listeners. Listeners hear radio programs by using radio receivers that pick up broadcast waves through antenna wires and reverse the steps used in broadcasting. This is done by converting those waves to electrical impulses and then into sound waves. The two main types of radio broadcasting are AM and FM, which modulate either the power (amplitude) or the frequency of the broadcast waves.

The crystal set radio receiver of Dunwoody and Pickard had many shortcomings. These led to the major modifications that produced modern radios. Crystal sets, however, began the radio industry and fostered its development. Today, it is possible to purchase somewhat modified forms of crystal sets as curiosity items. All crystal sets, original or modern versions, are crude AM radio receivers composed of four components: an antenna wire, a crystal detector, a tuning circuit, and a headphone or loudspeaker. Antenna wires (aerials) pick up radio waves broadcast by external sources. Originally simple wires, today’s aerials are made to work better by means of insulation and grounding.
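In modern terms, the crystal detector's job is envelope detection of an AM signal: the galena crystal acts as a diode that passes only one polarity of the radio-frequency wave, and the headphone's sluggish response smooths away the carrier, leaving the audio envelope. A numerical sketch of that idea (sample rate, frequencies, and filter constant are all chosen for illustration, scaled well below real broadcast frequencies):

```python
# Envelope detection of an AM signal -- the operation a galena crystal
# performs in a crystal set. Pure Python; all values are illustrative.
import math

FS = 200_000        # samples per second (assumed)
F_CARRIER = 20_000  # "carrier" frequency, scaled down for the demo
F_AUDIO = 500       # audio tone being broadcast

def am_signal(n):
    """A carrier whose amplitude is modulated by an audio tone."""
    t = n / FS
    envelope = 1.0 + 0.5 * math.sin(2 * math.pi * F_AUDIO * t)
    return envelope * math.sin(2 * math.pi * F_CARRIER * t)

def detect(samples, alpha=0.05):
    """Half-wave rectify (the diode), then a one-pole low-pass
    (the smoothing) to recover the audio envelope."""
    out, y = [], 0.0
    for s in samples:
        rectified = max(s, 0.0)        # the diode passes one polarity only
        y += alpha * (rectified - y)   # smooth away the carrier frequency
        out.append(y)
    return out

signal = [am_signal(n) for n in range(2000)]
audio = detect(signal)
# The recovered output varies at the slow audio rate, not the carrier rate.
print(f"recovered swing: {max(audio[800:]) - min(audio[800:]):.2f}")
```

Note that nothing in this chain needs a power supply, which matches the article's point that a crystal set runs entirely on the energy captured by the aerial.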
The crystal detector of a crystal set is a mineral crystal that allows radio waves to be selected (tuned). The original detectors were crystals of a lead-sulfur mineral, galena. Later, other minerals (such as silicon and carborundum) were also found to work. The tuning circuit is composed of 80 to 100 turns of insulated wire, wound on a 0.33-inch support. Some surprising supports used in homemade tuning circuits include cardboard toilet-paper-roll centers and Quaker Oats cereal boxes. When realism is desired in collector crystal sets, the coil is usually connected to a wire probe selector called a “cat’s whisker.” In some such crystal sets, a condenser (capacitor) and additional components are used to extend the range of tunable signals. Headphones convert chosen radio signals to sound waves that are heard by only one listener. If desired, loudspeakers can be used to enable a roomful of listeners to hear chosen programs.

An interesting characteristic of the crystal set is the fact that its operation does not require an external power supply. Offsetting this are its short reception range and a great difficulty in tuning or maintaining tuned-in radio signals. The short range of these radio receivers led to, among other things, the use of power supplies (house current or batteries) in more sophisticated radios. Modern solutions to tuning problems include using manufactured diode vacuum tubes to replace crystal detectors, which are a kind of natural diode. The first manufactured diodes, used in later crystal sets and other radios, were invented by John Ambrose Fleming, a colleague of Marconi’s. Other modifications of crystal sets that led to more sophisticated modern radios include more powerful aerials, better circuits, and vacuum tubes. Then came miniaturization, which was made possible by the use of transistors and silicon chips.

Impact

The impact of the invention of crystal sets is almost incalculable, since they began the modern radio industry. These early radio receivers enabled countless radio enthusiasts to build radios, to receive radio messages, and to become interested in developing radio communication systems. Crystal sets can be viewed as having spawned all the variant modern radios.
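The coil and capacitor described above form an LC resonator, which selects the station: the circuit responds most strongly at f = 1/(2π√(LC)). A quick sketch with component values plausible for a hobbyist crystal set (the specific inductance and capacitor settings are assumptions, not figures from the article):

```python
# Resonant frequency of a crystal set's LC tuning circuit:
# f = 1 / (2 * pi * sqrt(L * C)). Component values are illustrative.
import math

def resonant_frequency_hz(inductance_h, capacitance_f):
    return 1.0 / (2 * math.pi * math.sqrt(inductance_h * capacitance_f))

L_COIL = 240e-6  # ~240 uH, plausible for ~90 turns on a cardboard tube (assumed)
for c_pf in (100, 200, 365):  # variable-capacitor settings, in picofarads
    f = resonant_frequency_hz(L_COIL, c_pf * 1e-12)
    print(f"C = {c_pf:3d} pF -> f = {f / 1000:6.0f} kHz")
```

With these assumed values, sweeping the capacitor from about 100 pF to 365 pF tunes the circuit from roughly 1,030 kHz down to about 540 kHz, covering the lower part of the 550-1,600 kilohertz AM band mentioned earlier; reaching the top of the band would require a smaller minimum capacitance or a tapped coil.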
These include boom boxes and other portable radios; navigational radios used in ships and supersonic jet airplanes; and the shortwave, microwave, and satellite networks used in the various aspects of modern communication. The later miniaturization of radios and the development of sophisticated radio system components (for example, transistors and silicon chips) set the stage for both television and computers.


Certainly, if one tried to assess the ultimate impact of crystal sets by simply counting the number of modern radios in the United States, one would find that few Americans more than ten years old own fewer than two radios. Typically, one of these is run by house electric current and the other is a portable set that is carried almost everywhere.

See also FM radio; Long-distance radiotelephony; Radio; Television; Transistor radio.

Further Reading

Masini, Giancarlo. Marconi. New York: Marsilio, 1995.
Sievers, Maurice L. Crystal Clear: Vintage American Crystal Sets, Crystal Detectors, and Crystals. Vestal, N.Y.: Vestal Press, 1991.
Tolstoy, Ivan. James Clerk Maxwell: A Biography. Chicago: University of Chicago Press, 1982.


Radio interferometer

The invention: An astronomical instrument that combines multiple radio telescopes into a single system that makes possible the exploration of distant space.

The people behind the invention:
Sir Martin Ryle (1918-1984), an English astronomer
Karl Jansky (1905-1950), an American radio engineer
Hendrik Christoffel van de Hulst (1918- ), a Dutch radio astronomer
Harold Irving Ewen (1922- ), an American astrophysicist
Edward Mills Purcell (1912-1997), an American physicist

Seeing with Radio

Since the early 1600’s, astronomers have relied on optical telescopes for viewing stellar objects. Optical telescopes detect the visible light from stars, galaxies, quasars, and other astronomical objects. Throughout the late twentieth century, astronomers developed more powerful optical telescopes for peering deeper into the cosmos and viewing objects located hundreds of millions of light-years away from the earth.

In 1933, Karl Jansky, an American radio engineer with Bell Telephone Laboratories, constructed a radio antenna receiver for locating sources of telephone interference. Jansky discovered a daily radio burst that he was able to trace to the center of the Milky Way galaxy. In 1935, Grote Reber, another American radio engineer, followed up Jansky’s work with the construction of the first dish-shaped “radio” telescope. Reber used his 9-meter-diameter radio telescope to repeat Jansky’s experiments and to locate other radio sources in space. He was able to map precisely the locations of various radio sources in space, some of which later were identified as galaxies and quasars.

Following World War II (that is, after 1945), radio astronomy blossomed with the help of surplus radar equipment. Radio astronomy tries to locate objects in space by picking up the radio waves that they emit. In 1944, the Dutch astronomer Hendrik Christoffel van de Hulst proposed that hydrogen atoms emit radio waves with a 21-centimeter wavelength. Because hydrogen is the most abundant element in the universe, van de Hulst’s discovery explained the nature of extraterrestrial radio waves. His theory later was confirmed by the American radio astronomers Harold Irving Ewen and Edward Mills Purcell of Harvard University. By coupling the newly invented computer technology with radio telescopes, astronomers were able to generate a radio image of a star almost identical to the star’s optical image. A major advantage of radio telescopes over optical telescopes is the ability of radio telescopes to detect extraterrestrial radio emissions day or night, as well as their ability to bypass the cosmic dust that dims or blocks visible light.

More with Less

After 1945, major research groups were formed in England, Australia, and The Netherlands. Sir Martin Ryle was head of the Mullard Radio Astronomy Observatory of the Cavendish Laboratory, University of Cambridge. He had worked with radar for the Telecommunications Research Establishment during World War II. The radio telescopes developed by Ryle and other astronomers operate on the same basic principle as satellite television receivers. A constant stream of radio waves strikes the parabolic-shaped reflector dish, which aims all the radio waves at a focusing point above the dish. The focusing point directs the concentrated radio beam to the center of the dish, where it is sent to a radio receiver, then an amplifier, and finally to a chart recorder or computer.

With large-diameter radio telescopes, astronomers can locate stars and galaxies that cannot be seen with optical telescopes. This ability to detect more distant objects is called “resolution.” Like optical telescopes, large-diameter radio telescopes have better resolution than smaller ones.
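The resolution advantage of a wide aperture follows the diffraction rule θ ≈ 1.22 λ/D radians: at the 21-centimeter hydrogen wavelength, even a large single dish resolves coarsely, and interferometry improves matters by making D the baseline between dishes rather than one dish's diameter. A small sketch comparing apertures (the dish and baseline sizes echo figures used later in this article, but the comparison itself is illustrative):

```python
# Angular resolution theta ~ 1.22 * lambda / D, comparing a single dish
# with interferometer baselines at the 21-cm hydrogen line.
import math

WAVELENGTH_M = 0.21  # the 21-centimeter hydrogen emission

def resolution_arcsec(aperture_m):
    """Diffraction-limited beam width for an aperture (or baseline) in meters."""
    theta_rad = 1.22 * WAVELENGTH_M / aperture_m
    return math.degrees(theta_rad) * 3600

for label, d in [("30 m single dish", 30),
                 ("1.6 km baseline", 1_600),
                 ("32 km baseline", 32_000),
                 ("8,000 km VLBI baseline", 8_000_000)]:
    print(f"{label:24s} -> {resolution_arcsec(d):11.4f} arcsec")
```

The 30-meter dish resolves only about half a degree at this wavelength, while a transcontinental VLBI baseline reaches milliarcsecond territory, which is why baselines, not dish diameters, became the route to sharp radio images.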
Very large radio telescopes were constructed in the late 1950’s and early 1960’s (Jodrell Bank, England; Green Bank, West Virginia; Arecibo, Puerto Rico). Instead of just building larger radio telescopes to achieve greater resolution, however, Ryle developed a method called “interferometry.” In Ryle’s method, a computer is used to combine the incoming radio waves

One use of VLBI is to navigate a spacecraft: By measuring the angular separation between a fixed radio star, such as a quasar, and a moving spacecraft, the craft's location, orientation, and path can be precisely monitored and adjusted.

of two or more movable radio telescopes pointed at the same stellar object. Suppose that one had a 30-meter-diameter radio telescope. Its radio wave-collecting area would be limited by its diameter. If a second identical 30-meter-diameter radio telescope were linked with the first, then one would have an interferometer. The two radio telescopes would point at exactly the same stellar object, and the radio emissions from this object captured by the two telescopes would be combined by computer to produce a higher-resolution image. If the two radio telescopes were located 1.6 kilometers apart, then their combined resolution would be equivalent to that of a single radio telescope dish 1.6 kilometers in diameter.

Ryle constructed the first true radio telescope interferometer at the Mullard Radio Astronomy Observatory in 1955. He used combinations of radio telescopes to produce interferometers containing about twelve radio receivers. Ryle's interferometer greatly improved radio telescope resolution for detecting stellar radio sources, mapping the locations of stars and galaxies, assisting in the discovery of


"quasars" (quasi-stellar radio sources), measuring the earth's revolution around the Sun, and measuring the motion of the solar system through space.

Consequences

Following Ryle's discovery, interferometers were constructed at radio astronomy observatories throughout the world. The United States established the National Radio Astronomy Observatory (NRAO) in rural Green Bank, West Virginia. The NRAO is operated by nine eastern universities and is funded by the National Science Foundation. At Green Bank, a three-telescope interferometer was constructed, with each radio telescope having a 26-meter-diameter dish.

During the late 1970's, the NRAO constructed the largest radio interferometer in the world, the Very Large Array (VLA). The VLA, located approximately 80 kilometers west of Socorro, New Mexico, consists of twenty-seven 25-meter-diameter radio telescopes linked by a supercomputer. The VLA has a resolution equivalent to that of a single radio telescope 32 kilometers in diameter.

Even larger radio telescope interferometers can be created with a technique known as "very long baseline interferometry" (VLBI). VLBI has been used to construct a radio telescope having an effective diameter of several thousand kilometers. Such an arrangement involves the precise synchronization of radio telescopes located in several different parts of the world. Supernova 1987A in the Large Magellanic Cloud was studied using a VLBI arrangement between observatories located in Australia, South America, and South Africa.

Launching radio telescopes into orbit and linking them with ground-based radio telescopes could produce a radio telescope whose effective diameter would be larger than that of the earth. Such instruments will enable astronomers to map the distribution of galaxies, quasars, and other cosmic objects, to understand the origin and evolution of the universe, and possibly to detect meaningful radio signals from extraterrestrial civilizations.
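The aperture equivalences described in this article (two 30-meter dishes on a 1.6-kilometer baseline resolving like a single 1.6-kilometer dish, and the VLA resolving like a 32-kilometer dish) follow from the diffraction limit, in which angular resolution improves in proportion to the aperture or baseline. A minimal sketch of that arithmetic, using the 21-centimeter hydrogen line and the standard 1.22λ/D small-angle approximation (the formula is standard optics; the specific figures are illustrative, not from this article):

```python
import math

ARCSEC_PER_RADIAN = math.degrees(1.0) * 3600  # roughly 206,265

def angular_resolution_arcsec(wavelength_m, aperture_m):
    """Diffraction-limited angular resolution, theta ~ 1.22 * lambda / D."""
    return 1.22 * wavelength_m / aperture_m * ARCSEC_PER_RADIAN

wavelength = 0.21  # meters: the 21-centimeter hydrogen line

single_dish = angular_resolution_arcsec(wavelength, 30.0)       # one 30-m dish
interferometer = angular_resolution_arcsec(wavelength, 1600.0)  # 1.6-km baseline

print(f"30-m dish:       {single_dish:7.1f} arcseconds")
print(f"1.6-km baseline: {interferometer:7.1f} arcseconds")
```

Linking the two dishes improves the resolution by the ratio of the apertures, 1,600/30, or roughly a factor of fifty; the interferometer gains the resolution, though not the collecting area, of the equivalent giant dish.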
See also Artificial satellite; Communications satellite; Neutrino detector; Radar; Rocket; Weather satellite.


Further Reading

Graham-Smith, Francis. Sir Martin Ryle: A Biographical Memoir. London: Royal Society, 1987.

Malphrus, Benjamin K. The History of Radio Astronomy and the National Radio Astronomy Observatory: Evolution Toward Big Science. Malabar, Fla.: Krieger, 1996.

Pound, Robert V. Edward Mills Purcell: August 30, 1912-March 7, 1997. Washington, D.C.: National Academy Press, 2000.


Refrigerant gas

The invention: A safe refrigerant gas for domestic refrigerators, dichlorodifluoromethane helped promote a rapid growth in the acceptance of electrical refrigerators in homes.

The people behind the invention:
Thomas Midgley, Jr. (1889-1944), an American engineer and chemist
Charles F. Kettering (1876-1958), an American engineer and inventor who was the head of research for General Motors
Albert Henne (1901-1967), an American chemist who was Midgley's chief assistant
Frédéric Swarts (1866-1940), a Belgian chemist

Toxic Gases

Refrigerators, freezers, and air conditioners have had a major impact on the way people live and work in the twentieth century. With them, people can live more comfortably in hot and humid areas, and a great variety of perishable foods can be transported and stored for extended periods. As recently as the early nineteenth century, the foods most regularly available to Americans were bread and salted meats. Items now considered essential to a balanced diet, such as vegetables, fruits, and dairy products, were produced and consumed only in small amounts.

Through the early part of the twentieth century, the pattern of food storage and distribution evolved to make perishable foods more available. Farmers shipped dairy products and frozen meats to mechanically refrigerated warehouses. Smaller stores and most American households used iceboxes to keep perishable foods fresh. The iceman was a familiar figure on the streets of American towns, delivering large blocks of ice regularly.

In 1930, domestic mechanical refrigerators were being produced in increasing numbers. Most of them were vapor compression machines, in which a gas was compressed in a closed system of pipes outside the refrigerator by a mechanical pump and condensed to a


liquid. The liquid was pumped into a sealed chamber in the refrigerator and allowed to evaporate to a gas. The process of evaporation removes heat from the environment, thus cooling the interior of the refrigerator.

The major drawback of early home refrigerators involved the types of gases used. In 1930, these included ammonia, sulfur dioxide, and methyl chloride. These gases were acceptable if the refrigerator's gas pipes never sprang a leak. Unfortunately, leaks sometimes occurred, and all these gases are toxic. Ammonia and sulfur dioxide both have unpleasant odors; if they leaked, at least they would be detected rapidly. Methyl chloride, however, can form a dangerously explosive mixture with air, and it has only a very faint, and not unpleasant, odor. In a hospital in Cleveland during the 1920's, a refrigerator with methyl chloride leaked, and there was a disastrous explosion of the methyl chloride-air mixture. After that, methyl chloride for use in refrigerators was mixed with a small amount of a very bad-smelling compound to make leaks detectable. (The same tactic is used with natural gas.)

Three-Day Success

General Motors, through its Frigidaire division, had a substantial interest in the domestic refrigerator market. Frigidaire refrigerators used sulfur dioxide as the refrigerant gas. Charles F. Kettering, director of research for General Motors, decided that Frigidaire needed a new refrigerant gas that would have good thermal properties but would be nontoxic and nonexplosive. In early 1930, he sent Lester S. Keilholtz, chief engineer of General Motors' Frigidaire division, to Thomas Midgley, Jr., a mechanical engineer and self-taught chemist, and challenged them to develop such a new gas. Midgley's associates, Albert Henne and Robert McNary, researched what types of compounds might already fit Kettering's specifications.
Working with research that had been done by the Belgian chemist Frédéric Swarts in the late nineteenth and early twentieth centuries, Midgley, Henne, and McNary realized that dichlorodifluoromethane would have ideal thermal properties and the right boiling point for a refrigerant gas. The only question left to be answered was whether the compound was toxic.


The chemists prepared a few grams of dichlorodifluoromethane and put it, along with a guinea pig, into a closed chamber. They were delighted to see that the animal seemed to suffer no ill effects at all and was able to breathe and move normally. They were briefly puzzled when a second batch of the compound killed a guinea pig almost instantly. Soon, they discovered that an impurity in one of the ingredients had produced a potent poison in their refrigerant gas. A simple washing procedure completely removed the poisonous contaminant. This astonishingly successful research project was completed in three days.

The boiling point of dichlorodifluoromethane is −29.8 degrees Celsius. It is nontoxic and nonflammable and possesses excellent thermal properties. When Midgley was awarded the Perkin Medal for industrial chemistry in 1937, he gave the audience a graphic demonstration of the properties of dichlorodifluoromethane: He inhaled deeply of its vapors and exhaled gently into a jar containing a burning candle. The candle flame promptly went out. This visual evidence proved that dichlorodifluoromethane was not poisonous and would not burn.

Impact

The availability of this safe refrigerant gas, which was renamed Freon, led to drastic changes in the United States. The current patterns of food production, distribution, and consumption are a direct result, as is air conditioning. Air conditioning was developed early in the twentieth century; by the late 1970's, most American cars and residences were equipped with air conditioning, and other countries with hot climates followed suit. Consequently, major relocations of populations and businesses have become possible. Since World War II, there have been steady migrations to the "Sun Belt," the states spanning the United States from southeast to southwest, because air conditioners have made these areas much more livable.
Freon is a member of a family of chemicals called “chlorofluorocarbons.” In addition to refrigeration, it is also used as a propellant in aerosols and in the production of polystyrene plastics. In 1974, scientists began to suspect that chlorofluorocarbons, when released into the air, might have a serious effect on the environment. They


speculated that the compounds might migrate into the stratosphere, where they could be decomposed by the intense ultraviolet light in sunlight that is normally prevented from reaching the earth's surface by the thin but vital layer of ozone in the stratosphere. In the process, large amounts of the ozone layer might also be destroyed, letting in the dangerous ultraviolet light. In addition to possible climatic effects, the resulting increase in ultraviolet light reaching the earth's surface would raise the incidence of skin cancers. As a result, chemical manufacturers are trying to develop alternative refrigerant gases that will not harm the ozone layer.

See also Electric refrigerator; Food freezing; Microwave cooking.

Further Reading

Leslie, Stuart W. Boss Kettering. New York: Columbia University Press, 1983.

Mahoney, Thomas A. "The Seventy-one-year Saga of CFC's." Air Conditioning, Heating and Refrigeration News (March 15, 1999).

Preville, Cherie R., and Chris King. "Cooling Takes Off in the Roaring Twenties." Air Conditioning, Heating and Refrigeration News (April 30, 2001).


Reserpine

The invention: A drug with unique hypertension-decreasing effects that provides clinical medicine with a versatile and effective tool.

The people behind the invention:
Robert Wallace Wilkins (1906- ), an American physician and clinical researcher
Walter E. Judson (1916- ), an American clinical researcher

Treating Hypertension

Excessively elevated blood pressure, clinically known as "hypertension," has long been recognized as a pervasive and serious human malady. In a few cases, hypertension is recognized as an effect brought about by particular pathologies (diseases or disorders). Often, however, hypertension occurs as the result of unknown causes. Despite the uncertainty about its origins, unattended hypertension leads to potentially dramatic health problems, including increased risk of kidney disease, heart disease, and stroke.

Recognizing the need to treat hypertension in a relatively straightforward and effective way, Robert Wallace Wilkins, a clinical researcher at Boston University's School of Medicine and the head of Massachusetts Memorial Hospital's Hypertension Clinic, began to experiment with reserpine in the early 1950's. Initially, the samples that were made available to Wilkins were crude and unpurified. Eventually, however, a purified version was used.

Reserpine has a long and fascinating history of use, both clinically and in folk medicine, in India. The source of reserpine is the root of the shrub Rauwolfia serpentina, first mentioned in Western medical literature in the 1500's but virtually unknown, or at least unaccepted, outside India until the mid-twentieth century. Crude preparations of the shrub had been used for a variety of ailments in India for centuries prior to its use in the West.

Wilkins's work with the drug did not begin on an encouraging note, because reserpine does not act rapidly, a fact that had been


noted in Indian medical literature. The standard observation in Western pharmacotherapy, however, was that most drugs work rapidly; if a week has elapsed without positive effects being shown by a drug, the conventional Western wisdom is that it is unlikely to work at all. Additionally, physicians and patients alike tend to look for rapid improvement, or at least positive indications. Reserpine is deceptive in this temporal context, and Wilkins and his coworkers were nearly deceived. In working with crude preparations of Rauwolfia serpentina, they were becoming very pessimistic when a patient who had been treated for many consecutive days began to show symptomatic relief. Nevertheless, only after months of treatment did Wilkins become a believer in the drug's beneficial effects.

The Action of Reserpine

When preparations of pure reserpine became available in 1952, the drug did not at first appear to be the active ingredient in the crude preparations. When patients' heart rate and blood pressure began to drop after weeks of treatment, however, the investigators saw that reserpine was indeed responsible for the improvements.

Once reserpine's activity began, Wilkins observed a number of important and unique consequences. Both the crude preparations and pure reserpine significantly reduced the two most meaningful measures of blood pressure. These two measures are systolic blood pressure and diastolic blood pressure. Systolic pressure represents the peak of pressure produced in the arteries following a contraction of the heart. Diastolic pressure is the low point that occurs when the heart is resting. To lower the mean blood pressure in the system significantly, both of these pressures must be reduced. The administration of low doses of reserpine produced an average drop in pressure of about 15 percent, a figure that was considered less than dramatic but still highly significant.
The complex phenomenon of blood pressure is determined by a multitude of factors, including the resistance of the arteries, the force of contraction of the heart, and the heartbeat rate. In addition to lowering the blood pressure, reserpine reduced the heartbeat rate by about 15 percent, providing an important auxiliary action.
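One way to see why both readings must fall is through the common clinical approximation for mean arterial pressure, which weights the diastolic reading more heavily than the systolic because the heart spends more of each cycle at rest than in contraction. The formula and the sample readings below are illustrative assumptions, not figures from this article:

```python
def mean_arterial_pressure(systolic, diastolic):
    """Common clinical approximation: MAP = diastolic + (pulse pressure) / 3,
    where pulse pressure = systolic - diastolic."""
    return diastolic + (systolic - diastolic) / 3.0

# Hypothetical hypertensive readings, in mm Hg.
before = mean_arterial_pressure(150, 95)
after = mean_arterial_pressure(150 * 0.85, 95 * 0.85)  # ~15% drop in both

print(f"MAP before treatment: {before:.1f} mm Hg")
print(f"MAP after treatment:  {after:.1f} mm Hg")
```

Because the approximation is linear, a 15 percent reduction in both the systolic and the diastolic readings lowers the mean pressure by the same 15 percent; lowering only one of the two readings reduces the mean far less.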


In the early 1950's, various therapeutic drugs were used to treat hypertension. Wilkins recognized that reserpine's major contribution would be as a drug that could be used in combination with drugs that were already in use. His studies established that reserpine, combined with at least one of the drugs already in use, produced an additive effect in lowering blood pressure. Indeed, at times, the drug combinations produced a "synergistic effect," which means that the combination of drugs created an effect that was more effective than the sum of the effects of the drugs when they were administered alone.

Wilkins also discovered that reserpine was most effective when administered in low dosages. Increasing the dosage did not increase the drug's effect significantly, but it did increase the likelihood of unwanted side effects. This fact meant that reserpine was indeed most effective when administered in low dosages along with other drugs.

Wilkins believed that reserpine's most unique effects were not those found directly in the cardiovascular system but those produced indirectly by the brain. Hypertension is often accompanied by neurotic anxiety, which is both a consequence of the justifiable fears of future negative health changes brought on by prolonged hypertension and contributory to the hypertension itself. Wilkins's patients invariably felt better mentally, were less anxious, and were sedated, but in an unusual way. Reserpine made patients drowsy but did not generally cause sleep, and if sleep did occur, patients could be awakened easily. Such effects are now recognized as characteristic of tranquilizing drugs, or antipsychotics. In effect, Wilkins had discovered a new and important category of drugs: tranquilizers.

Impact

Reserpine holds a vital position in the historical development of antihypertensive drugs for two reasons.
First, it was the first drug that was discovered to block activity in areas of the nervous system that use norepinephrine or its close relative dopamine as transmitter substances. Second, it was the first hypertension drug to be widely accepted and used. Its unusual combination of characteristics made it effective in most patients.


Since the 1950's, medical science has rigorously examined cardiovascular functioning and diseases such as hypertension. Many influences, such as diet and stress, have been recognized as factors in hypertension. Controlling diet and life-style helps tremendously in treating hypertension, but if the nervous system could not be partially controlled, many cases of hypertension would continue to be problematic. Reserpine has made that control possible.

See also Abortion pill; Antibacterial drugs; Artificial kidney; Birth control pill; Salvarsan.

Further Reading

MacGregor, G. A., and Norman M. Kaplan. Hypertension. 2d ed. Abingdon: Health Press, 2001.

"Reconsidering Reserpine." American Family Physician 45 (March, 1992).

Weber, Michael A. Hypertension Medicine. Totowa, N.J.: Humana, 2001.


Rice and wheat strains

The invention: Artificially created high-yielding wheat and rice varieties that are helping food producers in developing countries keep pace with population growth.

The people behind the invention:
Orville A. Vogel (1907-1991), an agronomist who developed high-yielding semidwarf winter wheats and equipment for wheat research
Norman E. Borlaug (1914- ), a distinguished agricultural scientist
Robert F. Chandler, Jr. (1907-1999), an international agricultural consultant and director of the International Rice Research Institute, 1959-1972
William S. Gaud (1907-1977), a lawyer and the administrator of the U.S. Agency for International Development, 1966-1969

The Problem of Hunger

In the 1960's, agricultural scientists created new, high-yielding strains of rice and wheat designed to fight hunger in developing countries. Although the introduction of these new grains raised levels of food production in poor countries, population growth and other factors limited the success of the so-called "Green Revolution."

Before World War II, many countries of Asia, Africa, and Latin America exported grain to Western Europe. After the war, however, these countries began importing food, especially from the United States. By 1960, they were importing about nineteen million tons of grain a year; that level nearly doubled to thirty-six million tons in 1966. Rapidly growing populations forced the largest developing countries, China, India, and Brazil in particular, to import huge amounts of grain. Famine was averted on the Indian subcontinent in 1966 and 1967 only by the United States shipping wheat to the region. The United States then changed its food policy. Instead of contributing food aid directly to hungry countries, the U.S. began


working to help such countries feed themselves. The new rice and wheat strains were introduced just as countries in Africa and Asia were gaining their independence from the European nations that had colonized them. The Cold War was still going strong, and Washington and other Western capitals feared that the Soviet Union was gaining influence in the emerging countries. To help counter this threat, the U.S. Agency for International Development (USAID) was active in the Third World in the 1960's, directing or contributing to dozens of agricultural projects, including building rural infrastructure (farm-to-market roads, irrigation projects, and rural electric systems), introducing modern agricultural techniques, and importing fertilizer or constructing fertilizer factories in other countries. By raising the standard of living of impoverished people in developing countries through applying technology to agriculture, policymakers hoped to eliminate the socioeconomic conditions that would support communism.

The Green Revolution

It was against this background that William S. Gaud, administrator of USAID from 1966 to 1969, first talked about a "green revolution" in a 1968 speech before the Society for International Development in Washington, D.C. The term "green revolution" has been used to refer to both the scientific development of high-yielding food crops and the broader socioeconomic changes in a country's agricultural sector stemming from farmers' adoption of these crops.

In 1947, S. C. Salmon, a United States Department of Agriculture (USDA) scientist, brought a wheat-dwarfing gene to the United States. Developed in Japan, the gene produced wheat on a short stalk that was strong enough to bear a heavy head of grain. Orville Vogel, another USDA scientist, then introduced the gene into local wheat strains, creating a successful dwarf variety known as Gaines wheat. Under irrigation, Gaines wheat produced record yields.
After hearing about Vogel’s work, Norman Borlaug, who headed the Rockefeller Foundation’s wheat-breeding program in Mexico, adapted Gaines wheat, later called “miracle wheat,” to a variety of growing conditions in Mexico.


Workers in an Asian rice field. (PhotoDisc)

Success with the development of high-yielding wheat varieties persuaded the Rockefeller and Ford foundations to pursue similar ends in rice culture. The foundations funded the International Rice Research Institute (IRRI) in Los Banos, Philippines, appointing as director Robert F. Chandler, Jr., an international agricultural consultant. Under his leadership, IRRI researchers cross-bred Peta, a tall variety of rice from Indonesia, with Dee-geo-woo-gen, a dwarf rice from Taiwan, to produce a new strain, IR-8. Released in 1966 and dubbed "miracle rice," IR-8 produced yields double those of other Asian rice varieties and in a shorter time, 120 days in contrast to 150 to 180 days.

Statistics from India illustrate the expansion of the new grain varieties. During the 1966-1967 growing season, Indian farmers planted improved rice strains on 900,000 hectares, or 2.5 percent of the total area planted in rice. By 1984-1985, the surface area planted in improved rice varieties stood at 23.4 million hectares, or 56.9 percent of the total. The rate of adoption was even faster for wheat. In 1966-1967, improved varieties covered 500,000 hectares, comprising 4.2 percent of the total wheat crop. By the 1984-1985 growing season, the surface area had expanded to 19.6 million hectares, or 82.9 percent of the total wheat crop.
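The Indian adoption figures above also imply the total area planted in each crop. A quick derivation (the planted areas and percentage shares come from this article; the totals are computed from them and rounded):

```python
# (hectares planted in improved varieties, share of the total crop area)
india = {
    "rice, 1966-1967":  (900_000,    0.025),
    "rice, 1984-1985":  (23_400_000, 0.569),
    "wheat, 1966-1967": (500_000,    0.042),
    "wheat, 1984-1985": (19_600_000, 0.829),
}

# total area = improved area / improved share
totals = {season: planted / share for season, (planted, share) in india.items()}

for season, total_ha in totals.items():
    print(f"{season}: about {total_ha / 1e6:.1f} million hectares in total")
```

The derived totals (about 36 million hectares of rice in 1966-1967, for example) also show that the total planted area itself grew over the period, so the improved varieties spread even faster than the percentages alone suggest.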


To produce such high yields, IR-8 and other genetically engineered varieties of rice and wheat required the use of irrigation, fertilizers, and pesticides. Irrigation further increased food production by allowing year-round farming and the planting of multiple crops on the same plot of land, either two crops of high-yielding grain varieties or one grain crop and another food crop.

Expectations

The rationale behind the introduction of high-yielding grains in developing countries was that it would start a cycle of improvement in the lives of the rural poor. High-yielding grains would lead to bigger harvests and better-nourished and healthier families. If better nutrition enabled more children to survive, the need to have large families to ensure care for elderly parents would ease. A higher survival rate of children would lead couples to use family planning, slowing overall population growth and allowing per capita food intake to rise.

The greatest impact of the Green Revolution has been seen in Asia, which experienced dramatic increases in rice production, and on the Indian subcontinent, with increases in rice and wheat yields. Latin America, especially Mexico, enjoyed increases in wheat harvests. Subsaharan Africa initially was left out of the revolution, as scientists paid scant attention to increasing the yields of such staple food crops as yams, cassava, millet, and sorghum. By the 1980's, however, this situation was being remedied with new research directed toward millet and sorghum.

Research is conducted by a network of international agricultural research centers. Backed by both public and private funds, these centers cooperate with international assistance agencies, private foundations, universities, multinational corporations, and government agencies to pursue and disseminate research into improved crop varieties to farmers in the Third World. IRRI and the International Maize and Wheat Improvement Center (CIMMYT) in Mexico City are two of these agencies.


Impact

Expectations went unrealized in the first few decades following the Green Revolution. Despite the higher yields from millions of tons of improved grain seeds imported into the developing world, lower-yielding grains still accounted for much of the surface area planted in grain. The reasons for this explain the limits and impact of the Green Revolution.

The subsistence mentality dies hard. The main targets of Green Revolution programs were small farmers, people whose crops provide barely enough to feed their families and provide seed for the next crop. If an experimental grain failed, they faced starvation. Such farmers hedged their bets when faced with a new proposition, for example, by intercropping, alternating rows of different grains in the same field. In this way, even if one crop failed, another might feed the family. Poor farmers in developing countries also were likely to be illiterate and not eager to try something they did not fully understand. Also, by definition, poor farmers often did not have the means to purchase the inputs (irrigation, fertilizer, and pesticides) required to grow the improved varieties.

In many developing countries, therefore, rich farmers tended to be the innovators. More likely than poor farmers to be literate, they also had the money to exploit fully the improved grain varieties. They also were more likely than subsistence-level farmers to be in touch with the monetary economy, making purchases from the agricultural supply industry and arranging sales through established marketing channels, rather than producing primarily for personal or family use.

Once wealthy farmers adopted the new grains, it often became more difficult for poor farmers to do so. Increased demand for limited supplies, such as pesticides and fertilizers, raised costs, while bigger-than-usual harvests depressed market prices.
With high sales volumes, owners of large farms could withstand the higher costs and lower-per-unit profits, but smaller farmers often could not. Often, the result of adopting improved grains was that small farmers could no longer make ends meet solely by farming. Instead, they were forced to hire themselves out as laborers on large farms. Surges of laborers into a limited market depressed rural wages,


Orville A. Vogel

Born in 1907, Orville Vogel grew up on a farm in eastern Nebraska, and farming remained his passion for his entire life. He earned bachelor's and master's degrees in agriculture from the University of Nebraska, and then a doctorate in agronomy from Washington State University (WSU) in 1939. Eastern Washington agreed with him, and he stayed there. He began his career as a wheat breeder in 1931 for the U.S. Department of Agriculture, stationed at WSU. During the next forty-two years, he also took on the responsibilities of associate agronomist for the university's Division of Agronomy, and from 1960 until his retirement in 1973 he was professor of agronomy.

At heart Vogel was an experimenter and tinkerer, renowned among his peers for his keen powers of observation and his unselfishness. In addition to the wheat strains he bred that helped launch the Green Revolution, he took part in the search for plant varieties resistant to snow mold and foot rot. However, according to the father of the Green Revolution, Nobel laureate Norman Borlaug, Vogel's greatest contribution may not have been semi-dwarf wheat varieties but the many innovations in farming equipment he built as a sideline. These unheralded inventions automated the planting and harvesting of research plots, and so made research much easier and faster to carry out.

In recognition of his achievements, Vogel received the U.S. National Medal of Science in 1975 and entered the Agricultural Research Service's Science Hall of Fame in 1987. Vogel died in Washington in 1991.

making it even more difficult for small farmers to eke out a living. The result was that rich farmers got richer and poor farmers got poorer. Often, small farmers who could no longer support their families would leave rural areas and migrate to the cities, seeking work and swelling the ranks of the urban poor.

Mixed Results

The effects of the Green Revolution were thus mixed. The dissemination of improved grain varieties unquestionably increased grain harvests in some of the poorest countries of the world. Seed


companies developed, produced, and sold commercial quantities of improved grains, and fertilizer and pesticide manufacturers logged sales to developing countries thanks to USAID-sponsored projects.

Along with disrupting the rural social structure and encouraging rural flight to the cities, the Green Revolution has had other negative effects. For example, the millions of tube wells sunk in India to irrigate crops reduced groundwater levels in some regions faster than they could be recharged. In other areas, excessive use of pesticides created health hazards, and fertilizer use led to streams and ponds being clogged by weeds. The scientific community became concerned that the use of improved varieties of grain, many of which were developed from the same mother variety, reduced the genetic diversity of the world's food crops, making them especially vulnerable to attack by disease or pests.

Perhaps the most significant impact of the Green Revolution is the change it wrought in the income and class structure of rural areas; often, malnutrition was not eliminated in either the countryside or the cities. Almost without exception, the relative position of peasants deteriorated. Many analysts admit that the Green Revolution did not end world hunger, but they argue that it did buy time. The poorest of the poor would be even worse off without it.

See also Artificial chromosome; Cloning; Genetic "fingerprinting"; Genetically engineered insulin; In vitro plant culture.

Further Reading

Glaeser, Bernhard, ed. The Green Revolution Revisited: Critique and Alternatives. London: Allen & Unwin, 1987.

Hayami, Yujiro, and Masao Kikuchi. A Rice Village Saga: Three Decades of Green Revolution in the Philippines. Lanham, Md.: Barnes and Noble, 2000.

Karim, M. Bazlul. The Green Revolution: An International Bibliography. New York: Greenwood Press, 1986.

Lipton, Michael, and Richard Longhurst. New Seeds and Poor People. Baltimore: Johns Hopkins University Press, 1989.

Perkins, John H. Geopolitics and the Green Revolution: Wheat, Genes, and the Cold War. New York: Oxford University Press, 1997.

Richter scale

The invention: A scale for measuring the strength of earthquakes based on their seismograph recordings.
The people behind the invention:
Charles F. Richter (1900-1985), an American seismologist
Beno Gutenberg (1889-1960), a German American seismologist
Kiyoo Wadati (1902), a pioneering Japanese seismologist
Giuseppe Mercalli (1850-1914), an Italian physicist, volcanologist, and meteorologist

Earthquake Study by Eyewitness Report

Earthquakes range in strength from barely detectable tremors to catastrophes that devastate large regions and take hundreds of thousands of lives. Yet the human impact of earthquakes is not an accurate measure of their power; minor earthquakes in heavily populated regions may cause great destruction, whereas powerful earthquakes in remote areas may go unnoticed. To study earthquakes, it is essential to have an accurate means of measuring their power.

The first attempt to measure the power of earthquakes was the development of intensity scales, which relied on damage effects and reports by witnesses to measure the force of vibration. The first such scale was devised by geologists Michele Stefano de Rossi and François-Alphonse Forel in 1883. It ranked earthquakes on a scale of 1 to 10. The de Rossi-Forel scale proved to have two serious limitations: Its level 10 encompassed a great range of effects, and its description of effects on human-made and natural objects was so specifically European that it was difficult to apply the scale elsewhere. To remedy these problems, Giuseppe Mercalli published a revised intensity scale in 1902. The Mercalli scale, as it came to be called, added two levels to the high end of the de Rossi-Forel scale, making its highest level 12. It also was rewritten to make it more globally applicable. With later modifications by Charles F. Richter, the Mercalli scale is still in use.

Intensity measurements, even though they are somewhat subjective, are very useful in mapping the extent of earthquake effects. Nevertheless, intensity measurements are still not ideal measuring techniques. Intensity varies from place to place and is strongly influenced by geologic features, and different observers frequently report different intensities. There is a need for an objective method of describing the strength of earthquakes with a single measurement.

Charles F. Richter

Charles Francis Richter was born in Ohio in 1900. After his mother divorced his father, she moved the family to Los Angeles in 1909. A precocious student, Richter entered the University of Southern California at sixteen and transferred to Stanford University a year later, majoring in physics. He graduated in 1920 and finished a doctorate in theoretical physics at the California Institute of Technology in 1928. While Richter was a graduate student at Caltech, Nobel laureate Robert A. Millikan lured him away from his original interest, astronomy, to become an assistant at the seismology laboratory. Richter realized that seismology was then a relatively new discipline and that he could help it mature. He stayed with it—and Caltech—for the rest of his university career, retiring as professor emeritus in 1970. In 1971 he opened a consulting firm—Lindvall, Richter and Associates—to assess the earthquake readiness of structures. Richter published more than two hundred articles about earthquakes and earthquake engineering and two influential books, Elementary Seismology and Seismicity of the Earth (with Beno Gutenberg). These works, together with his teaching, trained a generation of earthquake researchers and gave them a basic tool, the Richter scale, to work with. He died in California in 1985.

Measuring Earthquakes One Hundred Kilometers Away

An objective technique for determining the power of earthquakes was devised in the early 1930’s by Richter at the California Institute of Technology in Pasadena, California. The eventual usefulness of the scale that came to be called the “Richter scale” was completely unforeseen at first.

Graphic representation of the Richter scale showing examples of historically important earthquakes. The original chart plotted amplified maximum ground motion (in microns) against magnitude, grading earthquakes from “Not felt” through Minor, Small, Moderate (“Damage begins; fatalities rare”), Strong, Major, and Great (“Great devastation; many fatalities possible”), with the San Francisco (1906), New Madrid, Missouri (1812), and Alaska (1964) earthquakes marked.

In 1931, the California Institute of Technology was preparing to issue a catalog of all earthquakes detected by its seismographs in the preceding three years. Several hundred earthquakes were listed, most of which had not been felt by humans, but detected only by instruments. Richter was concerned about the possible misinterpretations of the listing. With no indication of the strength of the earthquakes, the public might overestimate the risk of earthquakes in areas where seismographs were numerous and underestimate the risk in areas where seismographs were few. To remedy the lack of a measuring method, Richter devised the scale that now bears his name. On this scale, earthquake force is expressed in magnitudes, which in turn are expressed in whole numbers and decimals. Each increase of one magnitude indicates a tenfold jump in the earthquake’s force. These measurements were defined for a standard seismograph located one hundred kilometers from the earthquake. By comparing records for earthquakes recorded on different


devices at different distances, Richter was able to create conversion tables for measuring magnitudes for any instrument at any distance.

Impact

Richter had hoped to create a rough means of separating small, medium, and large earthquakes, but he found that the scale was capable of making much finer distinctions. Most magnitude estimates made with a variety of instruments at various distances from earthquakes agreed to within a few tenths of a magnitude. Richter formally published a description of his scale in January, 1935, in the Bulletin of the Seismological Society of America. Other systems of estimating magnitude had been attempted, notably that of Kiyoo Wadati, published in 1931, but Richter’s system proved to be the most workable scale yet devised and rapidly became the standard.

Over the next few years, the scale was refined. One critical refinement was in the way seismic recordings were converted into magnitude. Earthquakes produce many types of waves, but it was not known which type should be the standard for magnitude. So-called surface waves travel along the surface of the earth. It is these waves that produce most of the damage in large earthquakes; therefore, it seemed logical to let these waves be the standard. Earthquakes deep within the earth, however, produce few surface waves. Magnitudes based on surface waves would therefore be too small for these earthquakes. Deep earthquakes produce mostly waves that travel through the solid body of the earth; these are the so-called body waves. It became apparent that two scales were needed: one based on surface waves and one on body waves. Richter and his colleague Beno Gutenberg developed scales for the two different types of waves, which are still in use. Magnitudes estimated from surface waves are symbolized by a capital M, and those based on body waves are denoted by a lowercase m.
From a knowledge of Earth movements associated with seismic waves, Richter and Gutenberg succeeded in defining the energy output of an earthquake in measurements of magnitude. A magnitude 6 earthquake releases about as much energy as a one-megaton nuclear explosion; a magnitude 0 earthquake releases about as much energy as a small car dropped off a two-story building.
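The logarithmic bookkeeping behind these comparisons is easy to make concrete. In the sketch below, the tenfold amplitude rule comes from the scale itself; the roughly 31.6-fold energy step per magnitude is an added assumption, taken from the standard Gutenberg-Richter energy relation (log10 E ≈ 1.5M + constant) rather than stated in this entry:

```python
def amplitude_ratio(m1, m2):
    """Ratio of recorded ground-motion amplitudes between two magnitudes.

    Each whole-number increase on the Richter scale corresponds to a
    tenfold jump in the trace amplitude on a standard seismograph.
    """
    return 10.0 ** (m2 - m1)


def energy_ratio(m1, m2):
    """Approximate ratio of radiated seismic energy between two magnitudes.

    Assumes the standard Gutenberg-Richter relation log10(E) = 1.5*M + const,
    so each magnitude step multiplies the energy by about 31.6.
    """
    return 10.0 ** (1.5 * (m2 - m1))


# A magnitude 7 shock compared with a magnitude 6 shock:
print(amplitude_ratio(6, 7))          # 10.0 (ten times the ground motion)
print(round(energy_ratio(6, 7), 1))   # 31.6 (roughly thirty times the energy)
```

The asymmetry between the two ratios is why "one magnitude higher" understates how much more destructive a large earthquake can be.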


See also Carbon dating; Geiger counter; Gyrocompass; Scanning tunneling microscope; Sonar.

Further Reading
Bates, Charles C., Thomas Frohock Gaskell, and Robert B. Rice. Geophysics in the Affairs of Man: A Personalized History of Exploration Geophysics and Its Allied Sciences of Seismology and Oceanography. New York: Pergamon Press, 1982.
Davison, Charles. The Founders of Seismology. 1927. Reprint. New York: Arno Press, 1978.
Howell, Benjamin F. An Introduction to Seismological Research: History and Development. Cambridge, England: Cambridge University Press, 1990.


Robot (household)

The invention: The first available personal robot, the Hero 1 could speak, carry small objects in a gripping arm, and sense light, motion, sound, and time.
The people behind the invention:
Karel Capek (1890-1938), a Czech playwright
The Heath Company, an American electronics manufacturer

Personal Robots

In 1920, the Czech playwright Karel Capek introduced the term robot, which he used to refer to intelligent, humanoid automatons that were subservient to humans. Robots such as those described by Capek have not yet been developed; their closest counterparts are the nonintelligent automatons used by industry and by private individuals.

Most industrial robots are heavy-duty, immobile machines designed to replace humans in routine, undesirable, monotonous jobs. Most often, they use programmed gripping arms to carry out tasks such as spray painting cars, assembling watches, and shearing sheep. Modern personal robots are smaller, more mobile, less expensive models that serve mostly as toys or teaching tools. In some cases, they can be programmed to carry out activities such as walking dogs or serving mixed drinks. Usually, however, it takes more effort to program a robot to perform such activities than it does to do them oneself.

The Hero 1, which was first manufactured by the Heath Company in 1982, has been a very popular personal robot. Conceived as a toy and a teaching tool, the Hero 1 can be programmed to speak; to sense light, sound, motion, and time; and to carry small objects. The Hero 1 and other personal robots are often viewed as tools that will someday make it possible to produce intelligent robots.


Hero 1 Operation

The concept of artificial beings serving humanity has existed since antiquity (for example, it is found in Greek mythology). Such devices, which are now called robots, were first actualized, in a simple form, in the 1960’s. Then, in the mid-1970’s, the manufacture of personal robots began. One of the first personal robots was the Turtle, which was made by the Terrapin Company of Cambridge, Massachusetts. The Turtle was a toy that entertained owners via remote control, programmable motion, a beeper, and blinking displays. The Turtle was controlled by a computer to which it was linked by a cable.

Among the first significant personal robots was the Hero 1. This robot, which was usually sold in the form of a $1,000 kit that had to be assembled, is a squat, thirty-nine-pound mobile unit containing a head, a body, and a base. The head contains control boards, sensors, and a manipulator arm. The body houses control boards and related electronics, while the base contains a three-wheel-drive unit that renders the robot mobile.

The Heath Company, which produced the Hero 1, viewed it as providing entertainment for and teaching people who are interested in robot applications. To facilitate these uses, the following abilities were incorporated into the Hero 1: independent operation via rechargeable batteries; motion- and distance/position-sensing capability; light, sound, and language use/recognition; a manipulator arm to carry out simple tasks; and easy programmability.

The Hero 1 is powered by four rechargeable batteries arranged as two 12-volt power supplies. Recharging is accomplished by means of a recharging box that is plugged into a home outlet. It takes six to eight hours to recharge depleted batteries, and complete charging is signaled by an indicator light. In the functioning robot, the power supplies provide 5-volt and 12-volt outputs to logic and motor circuits, respectively.

The Hero 1 moves by means of a drive mechanism in its base.
The mechanism contains three wheels, two of which are unpowered drones. The third wheel, which is powered for forward and reverse motion, is connected to a stepper motor that makes possible directional steering. Also included in the powered wheel is a metal disk
with spaced reflective slots that helps Hero 1 to identify its position. As the robot moves, light is used to count the slots, and the slot count is used to measure the distance the robot has traveled, and therefore its position. The robot’s “senses,” located in its head, consist of sound, light, and motion detectors as well as a phoneme synthesizer (phonemes are sounds, or units of speech). All these components are connected with the computer. The Hero 1 can detect sounds between 200 and 5,000 hertz. Its motion sensor detects all movement within a 15-foot radius. The phoneme synthesizer is capable of producing most words by using combinations of 64 phonemes. In addition, the robot keeps track of time by using an internal clock/calendar. The Hero 1 can carry out various tasks by using a gripper that serves as a hand. The arm on which the gripper is located is connected to the back of the robot’s head. The head (and, therefore, the arm) can rotate 350 degrees horizontally. In addition, the arm contains a shoulder motor that allows it to rise or drop 150 degrees vertically, and its forearm can be either extended or retracted. Finally, a wrist motor allows the gripper’s tip to rotate by 350 degrees, and the two-fingered gripper can open up to a maximum width of 3.5 inches. The arm is not useful except as an educational tool, since its load-bearing capacity is only about a pound and its gripper can exert a force of only 6 ounces. The computational capabilities of the robot are much more impressive than its physical capabilities. Programming is accomplished by means of a simple keypad located on the robot’s head, which provides an inexpensive, easy-to-use method of operator-computer communication. To make things simpler for users who want entertainment without having to learn robotics, a manual mode is included for programming. In the manual mode, a hand-held teaching pendant is connected to Hero 1 and used to program all the motion capabilities of the robot. 
The programming of sensory and language abilities, however, must be accomplished by using the keyboard. Using the keyboard and the various options that are available enables Hero 1 owners to program the robot to perform many interesting activities.
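The slotted-disk arrangement described above is a simple optical wheel encoder: counting slots gives wheel revolutions, and revolutions times circumference gives distance. A minimal sketch of the idea follows; the slots-per-revolution and wheel-size figures are illustrative assumptions, not Heath specifications:

```python
import math

SLOTS_PER_REVOLUTION = 32   # illustrative value, not the actual Hero 1 count
WHEEL_DIAMETER_CM = 10.0    # illustrative wheel size


def distance_traveled_cm(slot_count):
    """Convert a count of reflective slots seen by the light sensor
    into distance driven by the powered wheel, in centimeters."""
    circumference = math.pi * WHEEL_DIAMETER_CM
    revolutions = slot_count / SLOTS_PER_REVOLUTION
    return revolutions * circumference


# Counting 64 slots means the powered wheel turned twice:
print(round(distance_traveled_cm(64), 1))  # 62.8
```

With these assumed numbers, each slot corresponds to just under a centimeter of travel, which is how a single light sensor can double as a position gauge.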


Consequences

The Hero 1 had a huge impact on robotics; thousands of people purchased it and used it for entertainment, study, and robot design. The Heath Company itself learned from the Hero 1 and later introduced an improved version: Heathkit 2000. This personal robot, which costs between $2,000 and $4,500, has ten times the capabilities of Hero 1, operates via radio-controlled keyboard, contains a voice synthesizer that can be programmed in any language, and plugs itself in for recharging.

Other companies, including the Androbot Company in California, have manufactured personal robots that sell for up to $10,000. One such robot is the Androbot BOB (brains on board). It can guard a home, call the police, walk at 2.5 kilometers per hour, and sing. Androbot has also designed Topo, a personal robot that can serve drinks. Still other robots can sort laundry and/or vacuum-clean houses. Although modern robots lack intelligence and merely have the ability to move when they are directed to by a program or by remote control, there is no doubt that intelligent robots will be developed in the future.

See also Electric refrigerator; Microwave cooking; Robot (industrial); Vacuum cleaner; Washing machine.

Further Reading
Aleksander, Igor, and Piers Burnett. Reinventing Man: The Robot Becomes Reality. London: Kogan Page, 1983.
Asimov, Isaac. Robots: Machines in Man’s Image. New York: Harmony Books, 1985.
Bell, Trudy E. “Robots in the Home: Promises, Promises.” IEEE Spectrum 22, no. 5 (May, 1985).
Whalen, Bernie. “Upscale Consumers Adopt Home Robots, but Widespread Lifestyle Impact Is Years Away.” Marketing News 17, no. 24 (November 25, 1983).


Robot (industrial)

The invention: The first industrial robots, Unimates were designed to replace humans in undesirable, hazardous, and monotonous jobs.
The people behind the invention:
Karel Capek (1890-1938), a Czech playwright
George C. Devol, Jr. (1912), an American inventor
Joseph F. Engelberger (1925), an American entrepreneur

Robots, from Concept to Reality

The 1920 play Rossum’s Universal Robots, by Czech writer Karel Capek, introduced robots to the world. Capek’s humanoid robots—robot, a word created by Capek, essentially means slave—revolted and took over the world, which made the concept of robots somewhat frightening. The development of robots, which are now defined as machines that do work that would ordinarily be carried out by humans, has not yet advanced to the stage of being able to produce humanoid robots, however, much less robots capable of carrying out a revolt. Most modern robots are found in industry, where they perform dangerous or monotonous tasks that previously were done by humans.

The first industrial robots were the Unimates (short for “universal automaton”), which were derived from a robot design invented by George C. Devol and patented in 1954. The first Unimate prototypes, developed by Devol and Joseph F. Engelberger, were completed in 1962 by Unimation Incorporated and tested in industry. They were so successful that the company, located in Danbury, Connecticut, manufactured and sold thousands of Unimates to companies in the United States and abroad. Unimates are very versatile at performing routine industrial tasks and are easy to program and reprogram. The tasks they perform include various steps in automobile manufacturing, spray painting, and running lathes. The huge success of the Unimates led companies in other countries to produce their own industrial robots, and advancing technology has improved all industrial robots tremendously.


A New Industrial Revolution

Each of the first Unimate robots, which were priced at $25,000, was almost five feet tall and stood on a four-foot by five-foot base. It has often been said that a Unimate resembles the gun turret of a minitank, set atop a rectangular box. In operation, such a robot will swivel, swing, and/or dip and turn at the wrist of its hydraulically powered arm, which has a steel hand. The precisely articulated hand can pick up an egg without breaking it. At the same time, however, it is powerful enough to lift a hundred-pound weight.

The Unimate is a robotic jack of all trades: It can be programmed, in about an hour, to carry out a complex operation, after which it can have its memory erased and be reprogrammed in another hour to do something entirely different. In addition, programming a Unimate requires no special training. The programmer simply uses a teach-cable selector that allows the programmer to move the Unimate arm through the desired operation. This selector consists of a group of pushbutton control boxes, each of which is equipped with buttons in opposed pairs. Each button pair records the motion that will put a Unimate arm through one of five possible motions, in opposite directions. For example, pushing the correct buttons will record a motion in which the robot’s arm moves out to one side, aims upward, and angles appropriately to carry out the first portion of its intended job. If the Unimate overshoots, undershoots, or otherwise performs the function incorrectly, the activity can be fine-tuned with the buttons. Once the desired action has been performed correctly, pressing a “record” button on the robot’s main control panel enters the operation into its computer memory. In this fashion, Unimates can be programmed to carry out complex actions that require as many as two hundred commands.
Each command tells the Unimate to move its arm or hand in a given way by combining the following five motions: sliding the arm forward, swinging the arm horizontally, tilting the arm up or down, bending the wrist up or down, and swiveling the hand in a half-circle clockwise or counterclockwise. Before pressing the “record” button on the Unimate’s control panel, the operator can also command the hand to grasp an item when in a particular position. Furthermore, the strength of the
grasp can be controlled, as can the duration of time between each action. Finally, the Unimate can be instructed to start or stop another routine (such as operating a paint sprayer) at any point. Once the instructor is satisfied with the robot’s performance, pressing a “repeat continuous” control starts the Unimate working. The robot will stop repeating its program only when it is turned off.

Inside the base of an original Unimate is a magnetic drum that contains its memory. The drum turns intermittently, moving each of two hundred long strips of metal beneath recording heads. This strip movement brings specific portions of each strip—dictated by particular motions—into position below the heads. When the “record” button is pressed after a motion is completed, the hand position is recorded as a series of numbers that tells the computer the complete hand position in each of the five permissible movement modes. Once “repeat continuous” is pressed, the computer begins the command series by turning the drum appropriately, carrying out each memorized command in the chosen sequence. When the sequence ends, the computer begins again, and the process repeats until the robot is turned off. If a Unimate user wishes to change the function of such a robot, its drum can be erased and reprogrammed. Users can also remove programmed drums, store them for future use, and replace them with new drums.

Consequences

The first Unimates had a huge impact on industrial manufacturing. In time, different sizes of robots became available so that additional tasks could be performed, and the robots’ circuitry was improved. Because they have no eyes and cannot make judgments, Unimates are limited to relatively simple tasks that are coordinated by means of timed operations and simple computer interactions.
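This record/repeat cycle — teach a position, store it as a numbered command, then replay the whole sequence in a loop — can be sketched in a few lines. The axis names mirror the five Unimate motions named in the text, but the list-based "drum" and the method names are illustrative stand-ins, not the actual drum encoding:

```python
# Toy model of the Unimate's teach-and-repeat cycle. The five axes below
# correspond to the motions described in the text; everything else is an
# illustrative assumption.
AXES = ("slide", "swing", "tilt", "wrist_bend", "wrist_swivel")


class TeachAndRepeat:
    def __init__(self):
        self.program = []  # stands in for the magnetic drum's command strips

    def record(self, **position):
        """Store the currently taught hand position as one command."""
        self.program.append({axis: position.get(axis, 0) for axis in AXES})

    def erase(self):
        """Wipe the 'drum' so the robot can be reprogrammed."""
        self.program.clear()

    def repeat(self, cycles=1):
        """Replay the recorded sequence, as 'repeat continuous' would."""
        executed = []
        for _ in range(cycles):
            executed.extend(self.program)
        return executed


robot = TeachAndRepeat()
robot.record(slide=5, swing=90)            # move out and swing horizontally
robot.record(tilt=-30, wrist_swivel=180)   # dip the arm and swivel the hand
print(len(robot.repeat(cycles=3)))         # 6 (two commands replayed 3 times)
```

The design point the sketch illustrates is that the "program" is nothing but a fixed sequence of stored positions replayed blindly, which is exactly why the real machines needed no judgment or sensing to be useful.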
Most of the thousands of modern Unimates and their multinational cousins in industry are very similar to the original Unimates in terms of general capabilities, although they can now assemble watches and perform other delicate tasks that the original Unimates could not perform. The crude magnetic drums and computer controls have given way to silicon chips and microcomputers, which


have made the robots more accurate and reliable. Some robots can even build other robots, and others can perform tasks such as mowing lawns and walking dogs. Various improvements have been planned that will ultimately lead to some very interesting and advanced modifications. It is likely that highly sophisticated humanoid robots like those predicted by Karel Capek will be produced at some future time. One can only hope that these robots will not rebel against their human creators.

See also CAD/CAM; Robot (household); SAINT; Virtual machine.

Further Reading
Aleksander, Igor, and Piers Burnett. Reinventing Man: The Robot Becomes Reality. London: Kogan Page, 1983.
Asimov, Isaac. Robots: Machines in Man’s Image. New York: Harmony Books, 1985.
Chakravarty, Subrata N. “Springtime for an Ugly Duckling.” Forbes 127, no. 9 (April, 1981).
Hartley, J. “Robots Attack the Quiet World of Arc Welding.” Engineer 246, no. 6376 (June, 1978).
Lamb, W. G. Unimates at Work. Edited by C. W. Burckhardt. Basel, Switzerland: Birkhauser Verlag, 1975.
Tuttle, Howard C. “Robots’ Contribution: Faster Cycles, Better Quality.” Production 88, no. 5 (November, 1981).


Rocket

The invention: Liquid-fueled rockets developed by Robert H. Goddard made possible all later developments in modern rocketry, which in turn has made the exploration of space practical.
The person behind the invention:
Robert H. Goddard (1882-1945), an American physics professor

History in a Cabbage Patch

Just as the age of air travel began on an out-of-the-way shoreline at Kitty Hawk, North Carolina, with the Wright brothers’ airplane in 1903, so too the seemingly impossible dream of spaceflight began in a cabbage patch in Auburn, Massachusetts, with Robert H. Goddard’s launch of a liquid-fueled rocket on March 16, 1926. On that clear, cold day, with snow still on the ground, Goddard launched a three-meter-long rocket using liquid oxygen and gasoline. The flight lasted only about two and one-half seconds, during which the rocket rose 12 meters and landed about 56 meters away.

Although the launch was successful, the rocket’s design was clumsy. At first, Goddard had thought that a rocket would be steadier if the motor and nozzles were ahead of the fuel tanks, rather like a horse and buggy. After this first launch, it was clear that the motor needed to be placed at the rear of the rocket. Although Goddard had spent several years working on different pumps to control the flow of fuel to the motor, the first rocket had no pumps or electrical system. Henry Sacks, a Clark University machinist, launched the rocket by turning a valve, placing an alcohol stove beneath the motor, and dashing for safety. Goddard and his coworker Percy Roope watched the launch from behind an iron wall.

Despite its humble setting, this simple event changed the course of history. Many people saw in Goddard’s launch the possibilities for high-altitude research, space travel, and modern weaponry. Although Goddard invented and experimented mostly in private,


others in the United States, the Soviet Union, and Germany quickly followed in his footsteps. The V-2 rockets used by Nazi Germany in World War II (1939-1945) included many of Goddard’s designs and ideas.

A Lifelong Interest

Goddard’s success was no accident. He had first become interested in rockets and space travel when he was seventeen, no doubt because of reading books such as H. G. Wells’s The War of the Worlds (1898) and Garrett P. Serviss’s Edison’s Conquest of Mars (1898). In 1907, he sent to several scientific journals a paper describing his ideas about traveling through a near vacuum. Although the essay was rejected, Goddard began thinking about liquid fuels in 1909. After finishing his doctorate in physics at Clark University and postdoctoral studies at Princeton University, he began to experiment.

One of the things that made Goddard so successful was his ability to combine things he had learned from chemistry, physics, and engineering into rocket design. More than anyone else at the time, Goddard had the ability to combine ideas with practice. Goddard was convinced that the key for moving about in space was the English physicist and mathematician Sir Isaac Newton’s third law of motion (for every action there is an equal and opposite reaction). To prove this, he showed that a gun recoiled when it was fired in a vacuum.

During World War I (1914-1918), Goddard moved to the Mount Wilson Observatory in California, where he investigated the use of black powder and smokeless powder as rocket fuel. Goddard’s work led to the invention of the bazooka, a weapon that was much used during World War II, as well as bombardment and antiaircraft rockets. After World War I, Goddard returned to Clark University. By 1920, mostly because of the experiments he had done during the war, he had decided that a liquid-fuel motor, with its smooth thrust, had the best chance of boosting a rocket into space. The most powerful fuel was hydrogen, but it is very difficult to handle.
Oxygen had many advantages, but it was hard to find and extremely dangerous, since it boils at −183 degrees Celsius and explodes when it comes in contact with oils, greases, and flames. Other possible fuels were propane, ether, kerosene, or gasoline, but they all had serious disadvantages. Finally, Goddard found a local source of oxygen and was able to begin testing its thrust.

Robert H. Goddard

In 1920 The New York Times made fun of Robert Hutchings Goddard (1882-1945) for claiming that rockets could travel through outer space to the Moon. It was impossible, the newspaper’s editorial writer confidently asserted, because in outer space the engine would have no air to push against and so could not move the rocket. A sensitive, quiet man, the Clark University physics professor was stung by the public rebuke, all the more so because it displayed ignorance of basic physics. “Every vision is a joke,” Goddard said, somewhat bitterly, “until the first man accomplishes it.” Goddard had already proved that a rocket could move in a vacuum, but he refrained from rebutting the Times article. In 1919 he had become the first American to describe mathematically the theory of rocket propulsion in his classic article “A Method of Reaching Extreme Altitudes,” and during World War I he had acquired experience designing solid-fuel rockets. However, even though he was the world’s leading expert on rocketry, he decided to seek privacy for his experiments. His successful launch of a liquid-fuel rocket in 1926, followed by new designs that reached ever higher altitudes, was a source of satisfaction, as were his 214 patents, but real recognition of his achievements did not come his way until World War II. In 1942 he was named director of research at the U.S. Navy’s Bureau of Aeronautics, for which he worked on jet-assisted takeoff rockets and variable-thrust liquid-propellant rockets. In 1943 the Curtiss-Wright Corporation hired him as a consulting engineer, and in 1945 he became director of the American Rocket Society. The New York Times finally apologized to Goddard for its 1920 article on the morning after Apollo 11 took off for the Moon in 1969. However, Goddard, who battled tuberculosis most of his life, had died twenty-four years earlier.


Another problem was designing a fuel pump. Goddard and his assistant Nils Riffolt spent years on this problem before the historic test flight of March, 1926. In the end, because of pressure from the Smithsonian Institution and others who were funding his research, Goddard decided to do without a pump and use an inert gas to push the fuel into the explosion chamber.

Goddard worked without much funding between 1920 and 1925. Riffolt helped him greatly in designing a pump, and Goddard’s wife, Esther, photographed some of the tests and helped in other ways. Clark University had granted him some research money in 1923, but by 1925 money was in short supply, and the Smithsonian Institution did not seem willing to grant more. Goddard was convinced that his research would be taken seriously if he could show some serious results, so on March 16, 1926, he launched a rocket even though his design was not yet perfect. The success of that launch not only changed his career but also set the stage for rocketry experiments both in the United States and in Europe.

Impact

Goddard was described as being secretive and a loner. He never tried to cash in on his invention but continued his research during the next three years. On July 17, 1929, Goddard launched a rocket carrying a camera and instruments for measuring temperature and air pressure. The New York Times published a story about the noisy crash of this rocket and local officials’ concerns about public safety. The article also mentioned Goddard’s idea that a similar rocket might someday strike the Moon. When American aviation hero Charles A. Lindbergh learned of Goddard’s work, Lindbergh helped him to get grants from the Carnegie Institution and the Guggenheim Foundation. By the middle of 1930, Goddard and a small group of assistants had established a full-time research program near Roswell, New Mexico. Now that money was not so much of a problem, Goddard began to make significant advances in almost every area of astronautics.
In 1941, Goddard launched a rocket to a height of 2,700 meters. Flight stability was helped by a gyroscope, and he was finally able to use a fuel pump.


During the 1920’s and 1930’s, members of the American Rocket Society and the German Society for Space Travel continued their own research. When World War II began, rocket research became a high priority for the American and German governments. Germany’s success with the V-2 rocket was a direct result of Goddard’s research and inventions, but the United States did not benefit fully from Goddard’s work until after his death. Nevertheless, Goddard remains modern rocketry’s foremost pioneer—a scientist with vision, understanding, and practical skill.

See also Airplane; Artificial satellite; Communications satellite; Cruise missile; Hydrogen bomb; Stealth aircraft; Supersonic passenger plane; Turbojet; V-2 rocket; Weather satellite.

Further Reading
Alway, Peter. Retro Rockets: Experimental Rockets, 1926-1941. Ann Arbor, Mich.: Saturn Press, 1996.
Goddard, Robert Hutchings. The Autobiography of Robert Hutchings Goddard, Father of the Space Age: Early Years to 1927. Worcester, Mass.: A. J. St. Onge, 1966.
Lehman, Milton. Robert H. Goddard: Pioneer of Space Research. New York: Da Capo Press, 1988.


Rotary dial telephone

The invention: The first device allowing callers to connect their telephones to other parties without the aid of an operator; the rotary dial telephone preceded the touch-tone phone.

The people behind the invention:
Alexander Graham Bell (1847-1922), an American inventor
Antoine Barnay (1883-1945), a French engineer
Elisha Gray (1835-1901), an American inventor

Rotary Telephone Dials Make Phone Linkups Automatic

The telephone uses electricity to carry sound messages over long distances. When a call is made from a telephone set, the caller speaks into a telephone transmitter, and the resultant sound waves are converted into electrical signals. The electrical signals are then transported over a telephone line to the receiver of a second telephone set that was designated when the call was initiated. This receiver reverses the process, converting the electrical signals into the sounds heard by the recipient of the call. The process continues as the parties talk to each other.

The telephone was invented in the 1870’s and patented in 1876 by Alexander Graham Bell. Bell’s patent application barely preceded an application submitted by his competitor Elisha Gray. After a heated patent battle, which Bell won, Bell founded the Bell Telephone Company, which later came to be called the American Telephone and Telegraph Company.

At first, the transmission of phone calls between callers and recipients was carried out manually, by switchboard operators. In 1923, however, automation began with Antoine Barnay’s development of the rotary telephone dial. This dial emitted variable electrical impulses that could be decoded automatically and used to link the telephone sets of callers and call recipients. In time, the rotary dial system gave way to push-button dialing and other more modern networking techniques.




Rotary-dial telephone. (Image Club Graphics)

Telephones, Switchboards, and Automation

The carbon transmitter, which is still used in many modern telephone sets, was the key to Alexander Graham Bell’s development of the telephone. This type of transmitter—and its more modern replacements—operates like an electric version of the human ear. When a person talks into a carbon transmitter-equipped telephone, the sound waves that are produced strike an electrically connected metal diaphragm and cause it to vibrate. The speed of vibration of this electric eardrum varies in accordance with the changes in air pressure caused by the changing tones of the speaker’s voice.

Behind the diaphragm of a carbon transmitter is a cup filled with powdered carbon. As the vibrations cause the diaphragm to press against the carbon, the electrical signals—electrical currents of varying strength—pass out of the instrument through a telephone wire. Once the electrical signals reach the receiver of the phone being called, they activate electromagnets in the receiver that make a second diaphragm vibrate. This vibration converts the electrical signals into sounds that are very similar to the sounds made by the person who is speaking. A telephone receiver may therefore be viewed as an electric mouth.

In modern telephone systems, transportation of the electrical signals between any two phone sets requires the passage of those signals through vast telephone networks consisting of huge numbers of wires, radio systems, and other media. The linkup of any two


Alexander Graham Bell

During the funeral for Alexander Graham Bell in 1922, telephone service throughout the United States stopped for one minute to honor him. To most people he was the inventor of the telephone. In fact, his genius ranged much further.

Bell was born in Edinburgh, Scotland, in 1847. His father, an elocutionist who invented a phonetic alphabet, and his mother, who was deaf, imbued him with deep curiosity, especially about sound. As a boy Bell became an exceptional pianist, and he produced his first invention, for cleaning wheat, at fourteen. After Edinburgh’s Royal High School, he attended classes at Edinburgh University and University College, London, but at the age of twenty-three, battling tuberculosis, he left school to move with his parents to Ontario, Canada, to convalesce. Meanwhile, he worked on his idea for a telegraph capable of sending multiple messages at once. From it grew the basic concept for the telephone. He developed it while teaching Visible Speech at the Boston School for Deaf Mutes after 1871. Assisted by Thomas Watson, he succeeded in sending speech over a wire and was issued a patent for his device, among the most valuable ever granted, in 1876. His demonstration of the telephone later that year at Philadelphia’s Centennial Exhibition and its subsequent development into a household appliance brought him wealth and fame.

He moved to Nova Scotia, Canada, and continued inventing. He created a photophone, tetrahedron modules for construction, and an airplane, the Silver Dart, which flew in 1909. Even though existing technology made them impracticable, some of his ideas anticipated computers and magnetic sound recording. His last patented invention, tested three years before his death, was a hydrofoil. Capable of reaching seventy-one miles per hour and freighting fourteen thousand pounds, the HD-4 was then the fastest watercraft in the world.

Bell also helped found the National Geographic Society in 1888 and became its president in 1898. He hired Gilbert Grosvenor to edit the society’s famous magazine, National Geographic, and together they planned the format—breathtaking photography and vivid writing—that made it one of the world’s best-known magazines.


phone sets was originally, however, accomplished manually—on a relatively small scale—by a switchboard operator who made the necessary connections by hand. In such switchboard systems, each telephone set in the network was associated with a jack connector in the switchboard. The operator observed all incoming calls, identified the phone sets for which they were intended, and then used wires to connect the appropriate jacks. At the end of the call, the jacks were disconnected. This cumbersome methodology limited the size and efficiency of telephone networks and invaded the privacy of callers.

The development of automated switching systems soon solved these problems and made switchboard operators obsolete. It was here that Antoine Barnay’s rotary dial was used, making possible an exchange that automatically linked the phone sets of callers and call recipients in the following way. First, a caller lifted a telephone “off the hook,” causing a switchhook, like those used in modern phones, to close the circuit that connected the telephone set to the telephone network. Immediately, a dial tone (still familiar to callers) came on to indicate that the automatic switching system could handle the planned call. When the phone dial was used, each number or letter that was dialed produced a fixed number of clicks. Every click indicated that an electrical pulse had been sent to the network’s automatic switching system, causing switches to change position slightly. Immediately after a complete telephone number was dialed, the overall operation of the automatic switchers connected the two telephone sets. This connection was carried out much more quickly and accurately than had been possible when telephone operators at manual switchboards made the connection.

Impact

The telephone has become the world’s most important communication device. Most adults use it between six and eight times per day, for personal and business calls. This widespread use has developed because huge changes have occurred in telephones and telephone networks. For example, automatic switching and the rotary dial system were only the beginning of changes in phone calling.
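The pulse scheme described earlier is simple enough to sketch in a few lines of code. The following Python fragment is only an illustration, not any historical exchange’s actual logic: it models a dial that emits one current pulse per unit dialed, with “0” sending ten pulses, and an exchange that counts the pulses in each train to recover the digit. (Letters on the dial were handled by mapping them onto the same ten digit positions; this sketch covers digits only.)

```python
def pulses_for_digit(digit):
    """A rotary dial emits one pulse per unit dialed; "0" emits ten."""
    return 10 if digit == "0" else int(digit)

def decode_pulse_trains(trains):
    """The exchange counts the pulses in each train to recover each digit."""
    return "".join("0" if count == 10 else str(count) for count in trains)

number = "5551234"
trains = [pulses_for_digit(d) for d in number]   # [5, 5, 5, 1, 2, 3, 4]
print(decode_pulse_trains(trains))               # prints 5551234
```

Because each pulse physically stepped the exchange’s switches, a pause between trains was all that marked the boundary between one digit and the next.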


Touch-tone dialing replaced Barnay’s electrical pulses with pairs of audio tones chosen so that they are unlikely to be imitated by the human voice. This much-improved system can be used to send calls over much longer distances than was possible with the rotary dial system, and it also interacts well with both answering machines and computers.

Another advance in modern telephoning is the use of radio transmission techniques in mobile phones, rendering telephone cords obsolete. The mobile phone communicates with base stations arranged in “cells” throughout the service area covered. As the user changes location, the phone link automatically moves from cell to cell in a cellular network.

In addition, the use of microwave, laser, and fiber-optic technologies has helped to lengthen the distance over which phone calls can be transmitted. These technologies have also increased the number of messages that phone networks can handle simultaneously and have made it possible to send radio and television programs (such as cable television), scientific data (via modems), and written messages (via facsimile, or “fax,” machines) over phone lines. Many other advances in telephone technology are expected as society’s needs change and new technology is developed.

See also Cell phone; Internet; Long-distance telephone; Telephone switching; Touch-tone telephone.

Further Reading
Aitken, William. Who Invented the Telephone? London: Blackie and Son, 1939.
Coe, Lewis. The Telephone and Its Several Inventors: A History. Jefferson, N.C.: McFarland, 1995.
Evenson, A. Edward. The Telephone Patent Conspiracy of 1876: The Elisha Gray-Alexander Bell Controversy and Its Many Players. Jefferson, N.C.: McFarland, 2000.
Lisser, Eleena de. “Telecommunications: If You Have a Rotary Phone, Press 1: The Trials of Using the Old Apparatus.” Wall Street Journal (July 28, 1994).
Mackay, James A. Alexander Graham Bell: A Life. New York: J. Wiley, 1997.


SAINT

The invention: Taking its name from the acronym for symbolic automatic integrator, SAINT is recognized as the first “expert system”—a computer program designed to perform mental tasks requiring human expertise.

The person behind the invention:
James R. Slagle (1934-1994), an American computer scientist

The Advent of Artificial Intelligence

In 1944, the Harvard-IBM Mark I was completed. This was an electromechanical (that is, not fully electronic) digital computer that was operated by means of coding instructions punched into paper tape. The machine took about six seconds to perform a multiplication operation, twelve for a division operation. In the following year, 1945, the world’s first fully electronic digital computer, the Electronic Numerical Integrator and Calculator (ENIAC), became operational. This machine, which was constructed at the University of Pennsylvania, was thirty meters long, three meters high, and one meter deep.

At the same time that these machines were being built, a similar machine was being constructed in the United Kingdom: the Automatic Computing Engine (ACE). A key figure in the British development was Alan Turing, a mathematician who had used computers to break German codes during World War II. After the war, Turing became interested in the area of “computing machinery and intelligence.” He posed the question “Can machines think?” and set the following problem, which is known as the “Turing test.” This test involves an interrogator who sits at a computer terminal and asks questions on the terminal about a subject for which he or she seeks intelligent answers. The interrogator does not know, however, whether the system is linked to a human or if the responses are, in fact, generated by a program that is acting intelligently. If the interrogator cannot tell the difference between the human operator and the computer system, then the system is said to have passed the Turing test and to have exhibited intelligent behavior.


SAINT: An Expert System

In the attempt to answer Turing’s question and create machines that could pass the Turing test, researchers investigated techniques for performing tasks that were considered to require expert levels of knowledge. These tasks included games such as checkers, chess, and poker, chosen because the total possible number of variations in each game was very large. This led the researchers to several interesting questions for study. How do experts make a decision in a particular set of circumstances? How can a problem such as a game of chess be represented in terms of a computer program? Is it possible to know why the system chose a particular solution?

One researcher, James R. Slagle at the Massachusetts Institute of Technology, chose to develop a program that would be able to solve elementary symbolic integration problems (involving the manipulation of integrals in calculus) at the level of a good college freshman. The program that Slagle constructed was known as SAINT, an acronym for symbolic automatic integrator, and it is acknowledged as the first “expert system”—a system that performs at the level of a human expert.

An expert system has three basic components: a knowledge base, in which domain-specific information is held (for example, rules on how best to perform certain types of integration problems); an inference engine, which decides how to break down a given problem utilizing the rules in the knowledge base; and a human-computer interface, which inputs data—in this case, the integral to be solved—and outputs the result of performing the integration. Another feature of expert systems is their ability to explain their reasoning.

The integration problems that could be solved by SAINT took the form of elementary integral functions, on which SAINT could perform indefinite integration (also called “antidifferentiation”). In addition, it was capable of performing definite and indefinite integration on trivial extensions of these problems. SAINT was tested on a set of eighty-six problems, fifty-four of which were drawn from the MIT final examinations in freshman calculus; it succeeded in solving all but two. Slagle then added more rules to the knowledge base so that problems of the type SAINT had encountered but could not solve could be solved in the future.
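SAINT itself was a large LISP program, but the division of labor it pioneered (a knowledge base of rules driven by an inference routine) can be suggested in miniature. The following Python sketch is a loose illustration of the idea, not Slagle’s actual rules or representation: it integrates a tiny expression language by matching each expression against a handful of knowledge-base rules.

```python
from fractions import Fraction

# Expressions are tuples: ("const", c), ("var",) for x, ("pow", n) for x**n,
# ("sum", a, b), and ("scale", c, e) for c*e.  Each branch below is one
# "rule" in a miniature knowledge base; the function is the inference loop.
def integrate(expr):
    kind = expr[0]
    if kind == "const":              # rule: integral of c is c*x
        return ("scale", expr[1], ("var",))
    if kind == "var":                # rule: integral of x is x^2/2
        return ("scale", Fraction(1, 2), ("pow", 2))
    if kind == "pow":                # rule: integral of x^n is x^(n+1)/(n+1)
        n = expr[1]
        return ("scale", Fraction(1, n + 1), ("pow", n + 1))
    if kind == "sum":                # rule: integration is linear over sums
        return ("sum", integrate(expr[1]), integrate(expr[2]))
    if kind == "scale":              # rule: constant factors move outside
        return ("scale", expr[1], integrate(expr[2]))
    raise ValueError(f"no rule in the knowledge base matches {kind!r}")

# Integrate 3x^2 + 5, yielding 3*(x^3/3) + 5x in unsimplified form.
result = integrate(("sum", ("scale", 3, ("pow", 2)), ("const", 5)))
```

The point of the architecture shows even at this scale: adding a rule enlarges the class of solvable problems without touching the inference loop, which is how Slagle extended SAINT after its test run.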


[Diagram: Basic structure of an expert system. A user interface and a global database feed a control mechanism that searches and resolves against the knowledge base of facts and rules.]

The power of the SAINT system was based, in part, on its ability to perform integration through the adoption of a “heuristic” processing system. A heuristic method is one that helps in discovering a problem’s solution by making plausible but fallible guesses about the best strategy to apply next to the current problem situation. A heuristic is a rule of thumb that makes it possible to take shortcuts in reaching a solution, rather than having to go through every step in a solution path. These heuristic rules are contained in the knowledge base. SAINT was written in the LISP programming language and ran on an IBM 7090 computer. The program and research were Slagle’s doctoral dissertation.

Consequences

The SAINT system that Slagle developed was significant for several reasons: First, it was the first serious attempt at producing a program that could come close to passing the Turing test. Second, it brought the idea of representing an expert’s knowledge in a computer program together with strategies for solving complex and difficult problems in an area that previously required human expertise. Third, it identified the area of knowledge-based systems and


James R. Slagle

James R. Slagle was born in 1934 in Brooklyn, New York, and attended nearby St. John’s University. He majored in mathematics and graduated with a bachelor of science degree in 1955, also winning the highest scholastic average award. While earning his master’s degree (1957) and doctorate (1961) at the Massachusetts Institute of Technology (MIT), he was a staff mathematician in the university’s Lincoln Laboratory.

Slagle taught in MIT’s electrical engineering department part-time after completing his dissertation on the first expert computer system and then moved to Lawrence Livermore National Laboratory near Berkeley, California. While working there he also taught at the University of California. From 1967 until 1974 he was an adjunct member of the computer science faculty of Johns Hopkins University in Baltimore, Maryland, and then was appointed chief of the computer science laboratory at the Naval Research Laboratory (NRL) in Washington, D.C., receiving the Outstanding Handicapped Federal Employee of the Year Award in 1979. He was later made a special assistant in the Navy Center for Applied Research in Artificial Intelligence at NRL but left in 1984 to become Distinguished Professor of Computer Science at the University of Minnesota.

In these various positions Slagle helped mature the fledgling discipline of artificial intelligence, publishing the influential book Artificial Intelligence in 1971. He developed an expert system designed to set up other expert systems—A Generalized Network-based Expert System Shell, or AGNESS. He also worked on parallel expert systems, artificial neural networks, time-based logic, and methods for uncovering causal knowledge in large databases. He died in 1994.

showed that computers could feasibly be used for programs that did not relate to business data processing. Fourth, the SAINT system showed how the use of heuristic rules and information could lead to the solution of problems that could not have been solved previously because of the amount of time needed to calculate a solution. SAINT’s major impact was in outlining the uses of these techniques, which led to continued research in the subfield of artificial intelligence that became known as expert systems.


See also BASIC programming language; CAD/CAM; COBOL computer language; Differential analyzer; FORTRAN programming language; Robot (industrial).

Further Reading
Campbell-Kelly, Martin, and William Aspray. Computer: A History of the Information Machine. New York: Basic Books, 1996.
Ceruzzi, Paul E. A History of Modern Computing. Cambridge, Mass.: MIT Press, 2000.
Rojas, Raúl, ed. Encyclopedia of Computers and Computer History. London: Fitzroy Dearborn, 2001.


Salvarsan

The invention: The first successful chemotherapeutic agent for the treatment of syphilis.

The people behind the invention:
Paul Ehrlich (1854-1915), a German research physician and chemist
Wilhelm von Waldeyer (1836-1921), a German anatomist
Friedrich von Frerichs (1819-1885), a German physician and professor
Sahachiro Hata (1872-1938), a Japanese physician and bacteriologist
Fritz Schaudinn (1871-1906), a German zoologist

The Great Pox

The ravages of syphilis on humankind are seldom discussed openly. A disease that struck all varieties of people and was transmitted by direct and usually sexual contact, syphilis was both feared and reviled. Many segments of society across all national boundaries were secure in their belief that syphilis was divine punishment of the wicked for their evil ways. It was not until 1903 that bacteriologists Élie Metchnikoff and Pierre-Paul-Émile Roux demonstrated the transmittal of syphilis to apes, ending the long-held belief that syphilis was exclusively a human disease.

The disease destroyed families, careers, and lives, driving its infected victims mad, destroying the brain, or destroying the cardiovascular system. It was methodical and slow, but in every case it killed with singular precision. There was no hope of a safe and effective cure prior to the discovery of Salvarsan.

Prior to 1910, conventional treatment consisted principally of mercury or, later, potassium iodide. Mercury, however, administered in large doses, led to severe ulcerations of the tongue, jaws, and palate. Swelling of the gums and loosening of the teeth resulted, along with dribbling saliva and an attending fetid odor. These side effects of mercury treatment were so severe that many preferred to suffer the disease to the end rather than undergo the standard cure. About 1906, Metchnikoff and Roux demonstrated that mercurial ointments, applied very early, at the first appearance of the primary lesion, were effective.

Once the spirochete-type bacteria invaded the bloodstream and tissues, the infected person experienced symptoms of varying nature and degree—high fever, intense headaches, and excruciating pain. The patient’s skin often erupted in pustular lesions similar in appearance to smallpox. It was the distinguishing feature of these pustular lesions that gave syphilis its other name: the “Great Pox.” Death brought the only relief then available.

Poison Dyes

Paul Ehrlich became fascinated by the reactions of dyes with biological cells and tissues while a student at the University of Strasbourg under Wilhelm von Waldeyer. It was von Waldeyer who sparked Ehrlich’s interest in the chemical viewpoint of medicine. As a student, Ehrlich spent hours at the laboratory experimenting with different dyes on various tissues. In 1878, he published a book that detailed the discriminate staining of cells and cellular components by various dyes.

Ehrlich joined Friedrich von Frerichs at the Charité Hospital in Berlin, where Frerichs allowed Ehrlich to do as much research as he wanted. Ehrlich began studying atoxyl in 1908, the year he won, jointly with Metchnikoff, the Nobel Prize in Physiology or Medicine for his work on immunity. Atoxyl was effective against trypanosomes—parasites responsible for a variety of infections, notably sleeping sickness—but also imposed serious side effects upon the patient, not the least of which was blindness. It was Ehrlich’s study of atoxyl, and of several hundred derivatives sought as alternatives to atoxyl in trypanosome treatment, that led to the development of derivative 606 (Salvarsan).
Although compound 606 was the first chemotherapeutic to be used effectively against syphilis, it had been discontinued as an atoxyl alternative and shelved as useless for five years. The discovery and development of compound 606 was enhanced by two critical events. First, the Germans Fritz Schaudinn and Erich Hoffmann discovered that syphilis is a bacterially caused disease. The causative microorganism is a spirochete so frail and gossameric in substance that it is nearly impossible to detect by casual microscopic examination; Schaudinn chanced upon it one day in March, 1905. This discovery led, in turn, to German bacteriologist August von Wassermann’s development of the now famous test for syphilis: the Wassermann test.

Second, a Japanese bacteriologist, Sahachiro Hata, who had studied syphilis in rabbits in Japan, came to Frankfurt in 1909 to study syphilis with Ehrlich. Hata’s assignment was to test every atoxyl derivative ever developed under Ehrlich for its efficacy in syphilis treatment. After hundreds of tests and clinical trials, Ehrlich and Hata announced Salvarsan as a “magic bullet” that could cure syphilis at the April, 1910, Congress of Internal Medicine in Wiesbaden, Germany.

The wonder drug Salvarsan was often called “Ehrlich’s silver bullet,” after its developer, Paul Ehrlich. (Library of Congress)

The announcement was electrifying. The remedy was immediately and widely sought, but it was not without its problems. A few deaths resulted from its use, and it was not safe for treatment of the gravely ill. Some of the difficulties inherent in Salvarsan were overcome by the development of Neosalvarsan in 1912 and sodium salvarsan in 1913. Although Ehrlich achieved much, he fell short of his own assigned goal, a chemotherapeutic that would cure in one injection.

Impact

The significance of the development of Salvarsan as an antisyphilitic chemotherapeutic agent cannot be overstated. Syphilis at that time was as frightening and horrifying as leprosy and was a virtual sentence of slow, torturous death. Salvarsan was such a significant development that Ehrlich was recommended for a 1912 and 1913 Nobel Prize for his work in chemotherapy.

It was several decades before any further significant advances in “wonder drugs” occurred, namely, the discovery of Prontosil in 1932 and its first clinical use in 1935. On the heels of Prontosil—a sulfa drug—came other sulfa drugs.
The sulfa drugs would remain supreme in the fight against bacterial infection until the antibiotics arrived. The first of these, penicillin, was discovered in 1928, although its clinical value was not recognized until World War II (1939-1945). With the discovery of streptomycin in 1943 and Aureomycin in 1944, the assault against bacteria was finally on a sound basis. Medicine possessed an arsenal with which to combat the pathogenic microbes that for centuries before had visited misery and death upon humankind.

See also Abortion pill; Antibacterial drugs; Birth control pill; Penicillin; Reserpine; Syphilis test; Tuberculosis vaccine; Typhus vaccine; Yellow fever vaccine.

Further Reading
Bäumler, Ernst. Paul Ehrlich: Scientist for Life. New York: Holmes & Meier, 1984.
Leyden, John G. “From Nobel Prize to Courthouse Battle: Paul Ehrlich’s ‘Wonder Drug’ for Syphilis Won Him Acclaim but also Led Critics to Hound Him.” Washington Post (July 27, 1999).
Quétel, Claude. History of Syphilis. Baltimore: Johns Hopkins University Press, 1992.


Scanning tunneling microscope

The invention: A major advance on the field ion microscope, the scanning tunneling microscope has pointed toward new directions in the visualization and control of matter at the atomic level.

The people behind the invention:
Gerd Binnig (1947-    ), a West German physicist who was a cowinner of the 1986 Nobel Prize in Physics
Heinrich Rohrer (1933-    ), a Swiss physicist who was a cowinner of the 1986 Nobel Prize in Physics
Ernst Ruska (1906-1988), a West German engineer who was a cowinner of the 1986 Nobel Prize in Physics
Antoni van Leeuwenhoek (1632-1723), a Dutch naturalist

The Limit of Light

The field of microscopy began at the end of the seventeenth century, when Antoni van Leeuwenhoek developed the first optical microscope. In this type of microscope, a magnified image of a sample is obtained by directing light onto it and then passing the light through a lens system. Van Leeuwenhoek’s microscope allowed him to observe the existence of life on a scale that is invisible to the naked eye. Since then, developments in the optical microscope have revealed the existence of single cells, pathogenic agents, and bacteria.

There is a limit, however, to the resolving power of optical microscopes. Known as “Abbe’s barrier,” after the German physicist and lens maker Ernst Abbe, this limit means that objects smaller than about 400 nanometers (0.4 micrometer) cannot be viewed by conventional microscopes.

In 1925, the physicist Louis de Broglie predicted that electrons would exhibit wave behavior as well as particle behavior. This prediction was confirmed by Clinton J. Davisson and Lester H. Germer of Bell Telephone Laboratories in 1927. It was found that high-energy electrons have shorter wavelengths than low-energy electrons and that electrons with sufficient energies exhibit wavelengths comparable to the diameter of the atom. In 1927, Hans Busch showed in a mathematical analysis that current-carrying coils behave like electron lenses and that they obey the same lens equation that governs optical lenses.

Using these findings, Ernst Ruska developed the electron microscope in the early 1930’s. By 1944, the German corporation of Siemens and Halske had manufactured electron microscopes with a resolution of 7 nanometers; modern instruments are capable of resolving objects as small as 0.5 nanometer. This development made it possible to view structures as small as a few atoms across, as well as large atoms and large molecules. The electron beam used in this type of microscope, however, limits the usefulness of the device. First, to avoid the scattering of the electrons, the samples must be put in a vacuum, which limits the applicability of the microscope to samples that can sustain such an environment. Most important, some fragile samples, such as organic molecules, are inevitably destroyed by the high-energy beams required for high resolutions.

Viewing Atoms

From 1936 to 1955, Erwin Wilhelm Müller developed the field ion microscope (FIM), which used an extremely sharp needle to hold the sample. This was the first microscope to make possible the direct viewing of atomic structures, but it was limited to samples capable of sustaining the high electric fields necessary for its operation.

In the early 1970’s, Russell D. Young and Clayton Teague of the National Bureau of Standards (NBS) developed the “topografiner,” a new kind of FIM. In this microscope, the sample is placed at a large distance from the tip of the needle. The tip is scanned across the surface of the sample with a precision of about a nanometer. The precision in the three-dimensional motion of the tip was obtained by using three legs made of piezoelectric crystals. These materials change shape in a reproducible manner when subjected to a voltage.
The extent of expansion or contraction of the crystal depends on the amount of voltage that is applied. Thus, the operator can control the motion of the probe by varying the voltage acting on the three legs. The resolution of the topografiner is limited by the size of the probe.


Gerd Binnig and Heinrich Rohrer

Both Gerd Binnig and Heinrich Rohrer believe an early and pleasurable introduction to teamwork led to their later success in inventing the scanning tunneling microscope, for which they shared the 1986 Nobel Prize in Physics with Ernst Ruska.

Binnig was born in Frankfurt, Germany, in 1947. He acquired an early interest in physics but was always deeply influenced by classical music, introduced to him by his mother, and the rock music that his younger brother played for him. Binnig played in rock bands as a teenager and learned to enjoy the creative interplay of teamwork. At J. W. Goethe University in Frankfurt he earned a bachelor’s degree (1973) and doctorate (1978) in physics and then took a position at International Business Machines’ Zurich Research Laboratory. There he recaptured the pleasures of working with a talented team after joining Rohrer in research.

Rohrer had been at the Zurich facility since just after it opened in 1963. He was born in Buch, Switzerland, in 1933, and educated at the Swiss Federal Institute of Technology in Zurich, where he completed his doctorate in 1960. After postdoctoral work at Rutgers University, he joined the IBM research team, a time that he describes as among the most enjoyable passages of his career.

In addition to the Nobel Prize, the pair also received the German Physics Prize, the Otto Klung Prize, the Hewlett-Packard Prize, and the King Faisal Prize. Rohrer became an IBM Fellow in 1986 and was selected to manage the physical sciences department at the Zurich Research Laboratory. He retired from IBM in July 1997. Binnig became an IBM Fellow in 1987.

The idea for the scanning tunneling microscope (STM) arose when Heinrich Rohrer of the International Business Machines (IBM) Corporation's Zurich research laboratory met Gerd Binnig in Frankfurt in 1978. The STM is very similar to the topografiner. In the STM, however, the tip is kept at a height of less than a nanometer away from the surface, and the voltage that is applied between the specimen and the probe is low. Under these conditions, the electron cloud of atoms at the end of the tip overlaps with the electron cloud of atoms at the surface of the specimen. This overlapping results in a measurable electrical current flowing through the vacuum or insulating material existing between the tip and the sample. When the probe is moved across the surface and the voltage between the probe and sample is kept constant, the change in the distance between the probe and the surface (caused by surface irregularities) results in a change of the tunneling current.

Two methods are used to translate these changes into an image of the surface. The first method involves changing the height of the probe to keep the tunneling current constant; the voltage used to change the height is translated by a computer into an image of the surface. The second method scans the probe at a constant height away from the sample; the voltage across the probe and sample is changed to keep the tunneling current constant. These changes in voltage are translated into the image of the surface. The main limitation of the technique is that it is applicable only to conducting samples or to samples with some surface treatment.

Consequences

In October, 1989, the STM was successfully used in the manipulation of matter at the atomic level. By letting the probe sink into the surface of a metal-oxide crystal, researchers at Rutgers University were able to dig a square hole about 250 atoms across and 10 atoms deep. A more impressive feat was reported in the April 5, 1990, issue of Nature: Donald M. Eigler and Erhard K. Schweizer of IBM's Almaden Research Center spelled out their employer's three-letter acronym using thirty-five atoms of xenon. This ability to move and place individual atoms precisely raises several possibilities, which include the creation of custom-made molecules, atomic-scale data storage, and ultrasmall electrical logic circuits.

The success of the STM has led to the development of several new microscopes that are designed to study other features of sample surfaces.
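The first imaging method described above, raising or lowering the tip to hold the tunneling current at a setpoint and recording the tip height as the image, can be sketched as a toy feedback loop. All constants here (decay rate, setpoint, gain) are illustrative, not values from the text:

```python
import math

# Toy model: the tunneling current falls off exponentially with the
# tip-sample gap, which is why the STM is so sensitive to height.
KAPPA = 10.0    # illustrative decay constant, 1/nm
I0 = 1.0        # current at zero gap (arbitrary units)
SETPOINT = 0.1  # desired tunneling current (arbitrary units)

def tunneling_current(gap_nm):
    return I0 * math.exp(-2.0 * KAPPA * gap_nm)

def constant_current_scan(surface_heights, gain=0.05, steps=500):
    """Scan across a surface, adjusting tip height at each point so the
    current stays at SETPOINT; the recorded tip heights form the image."""
    tip_z = surface_heights[0] + 0.5  # start 0.5 nm above the first point
    image = []
    for z_surface in surface_heights:
        for _ in range(steps):  # simple proportional feedback loop
            error = tunneling_current(tip_z - z_surface) - SETPOINT
            tip_z += gain * error  # too much current -> retract the tip
        image.append(tip_z)
    return image

# A surface with a single 0.2 nm atomic step:
surface = [0.0] * 5 + [0.2] * 5
profile = constant_current_scan(surface)
# The recorded tip heights reproduce the 0.2 nm step.
```

Because the feedback holds the current (and thus the gap) constant, the trace of tip heights follows the surface topography, which is how the constant-current mode produces an image.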
Although they all use the scanning probe technique to make measurements, they use different techniques for the actual detection. The most popular of these new devices is the atomic force microscope (AFM). This device measures the tiny electric forces that exist between the electrons of the probe and the electrons of the sample without the need for electron flow, which makes the technique particularly useful in imaging nonconducting surfaces. Other scanned probe microscopes use physical properties such as temperature and magnetism to probe the surfaces.

See also Cyclotron; Electron microscope; Ion field microscope; Mass spectrograph; Neutrino detector; Sonar; Synchrocyclotron; Tevatron accelerator; Ultramicroscope.

Further Reading

Morris, Michael D. Microscopic and Spectroscopic Imaging of the Chemical State. New York: M. Dekker, 1993.
Wiesendanger, Roland. Scanning Probe Microscopy: Analytical Methods. New York: Springer-Verlag, 1998.
_____, and Hans-Joachim Güntherodt. Scanning Tunneling Microscopy II: Further Applications and Related Scanning Techniques. 2d ed. New York: Springer, 1995.
_____. Scanning Tunneling Microscopy III: Theory of STM and Related Scanning Probe Methods. 2d ed. New York: Springer, 1996.


Silicones

The invention: Synthetic polymers characterized by lubricity, extreme water repellency, thermal stability, and inertness that are widely used in lubricants, protective coatings, paints, adhesives, electrical insulation, and prosthetic replacements for body parts.

The people behind the invention:
Eugene G. Rochow (1909- ), an American research chemist
Frederic Stanley Kipping (1863-1949), a Scottish chemist and professor
James Franklin Hyde (1903- ), an American organic chemist

Synthesizing Silicones

Frederic Stanley Kipping, in the first four decades of the twentieth century, made an extensive study of the organic (carbon-based) chemistry of the element silicon. He had a distinguished academic career and summarized his silicon work in a lecture in 1937 ("Organic Derivatives of Silicon"). Since Kipping did not have available any naturally occurring compounds with chemical bonds between carbon and silicon atoms (organosilicon compounds), it was necessary for him to find methods of establishing such bonds. The basic method involved replacing atoms in naturally occurring silicon compounds with carbon atoms from organic compounds. While Kipping was probably the first to prepare a silicone and was certainly the first to use the term silicone, he did not pursue the commercial possibilities of silicones. Yet his careful experimental work was a valuable starting point for all subsequent workers in organosilicon chemistry, including those who later developed the silicone industry.

On May 10, 1940, chemist Eugene G. Rochow of the General Electric (GE) Company's corporate research laboratory in Schenectady, New York, discovered that methyl chloride gas, passed over a heated mixture of elemental silicon and copper, reacted to form compounds with silicon-carbon bonds. Kipping had shown that these silicon compounds react with water to form silicones.


The importance of Rochow's discovery was that it opened the way to a continuous process that did not consume expensive metals, such as magnesium, or flammable ether solvents, such as those used by Kipping and other researchers. The copper acts as a catalyst, and the desired silicon compounds are formed with only minor quantities of by-products. This "direct synthesis," as it came to be called, is now done commercially on a large scale.

Silicone Structure

Silicones are examples of what chemists call polymers. Basically, a polymer is a large molecule made up of many smaller molecules that are linked together. At the molecular level, silicones consist of long, repeating chains of atoms. In this molecular characteristic, silicones resemble plastics and rubber. Silicone molecules have a chain composed of alternate silicon and oxygen atoms. Each silicon atom bears two organic groups as substituents, while the oxygen atoms serve to link the silicon atoms into a chain. The silicon-oxygen backbone of the silicones is responsible for their unique and useful properties, such as the ability of a silicone oil to remain liquid over an extremely broad temperature range and to resist oxidative and thermal breakdown at high temperatures.

A fundamental scientific consideration with silicone, as with any polymer, is to obtain the desired physical and chemical properties in a product by closely controlling its chemical structure and molecular weight. Oily silicones with thousands of alternating silicon and oxygen atoms have been prepared. The average length of the molecular chain determines the flow characteristics (viscosity) of the oil. In samples with very long chains, rubber-like elasticity can be achieved by cross-linking the silicone chains in a controlled manner and adding a filler such as silica. High degrees of cross-linking could produce a hard, intractable material instead of rubber.
Eugene G. Rochow

Eugene George Rochow was born in 1909 and grew up in the rural New Jersey town of Maplewood. There his father, who worked in the tanning industry, and his big brother maintained a small attic laboratory. They experimented with electricity, radio (Eugene put together his own crystal set in an oatmeal box), and chemistry. Rochow followed his brother to Cornell University in 1927. The Great Depression began during his junior year, and although he had to take jobs as a lecture assistant to get by, he managed to earn his bachelor's degree in chemistry in 1931 and his doctorate in 1935. Luck came his way in the extremely tight job market: General Electric (GE) hired him for his expertise in inorganic chemistry.

In 1938 the automobile industry, among other manufacturers, had a growing need for high-temperature-resistant insulators. Scientists at Corning were convinced that silicone would have the best properties for the purpose, but they could not find a way to synthesize it cheaply and in large volume. When word about their ideas got back to Rochow at GE, he reasoned that a flexible silicone able to withstand temperatures of 200 to 300 degrees Celsius could be made by bonding silicon to carbon. His research succeeded in producing methyl silicone in 1939, and he devised a way to make it cheaply in 1941. It was the first commercially practical silicone. His process is still used.

After World War II GE asked him to work on an effort to make aircraft carriers nuclear powered. However, Rochow was a Quaker and pacifist, and he refused. Instead, he moved to Harvard University as a chemistry professor in 1948 and remained there until his retirement in 1970. As the founder of a new branch of industrial chemistry, he received most of the discipline's awards and medals, including the Perkin Award, and honorary doctorates.

The action of water on the compounds produced from Rochow's direct synthesis is a rapid method of obtaining silicones, but it does not provide much control of the molecular weight. Further development work at GE and at the Dow-Corning company showed that the best procedure for controlled formation of silicone polymers involved treating the crude silicones with acid to produce a mixture from which high yields of an intermediate called "D4" could be obtained by distillation. The intermediate D4 could be polymerized in a controlled manner by use of acidic or basic catalysts. Wilton I. Patnode of GE and James F. Hyde of Dow-Corning made important advances in this area. Hyde's discovery of the use of traces of potassium hydroxide as a polymerization catalyst for D4 made possible
the manufacture of silicone rubber, which is the most commercially valuable of all the silicones.

Impact

Although Kipping's discovery and naming of the silicones occurred from 1901 to 1904, the practical use and impact of silicones started in 1940, with Rochow's discovery of direct synthesis. Production of silicones in the United States came rapidly enough to permit them to have some influence on military supplies for World War II (1939-1945). In aircraft communication equipment, extensive waterproofing of parts by silicones resulted in greater reliability of the radios under tropical conditions of humidity, where condensing water could be destructive. Silicone rubber, because of its ability to withstand heat, was used in gaskets under high-temperature conditions, in searchlights, and in the engines on B-29 bombers. Silicone grease applied to aircraft engines also helped to protect spark plugs from moisture and promote easier starting.

After World War II, the uses for silicones multiplied. Silicone rubber appeared in many products from caulking compounds to wire insulation to breast implants for cosmetic surgery. Silicone rubber boots were used on the moon walks where ordinary rubber would have failed. Silicones in their present form owe much to years of patient developmental work in industrial laboratories. Basic research, such as that conducted by Kipping and others, served to point the way and catalyzed the process of commercialization.

See also Buna rubber; Neoprene; Nylon; Plastic; Polystyrene; Teflon.

Further Reading

Clarson, Stephen J. Silicones and Silicone-Modified Materials. Washington, D.C.: American Chemical Society, 2000.
Koerner, G. Silicones, Chemistry and Technology. Boca Raton, Fla.: CRC Press, 1991.
Potter, Michael, and Noel R. Rose. Immunology of Silicones. New York: Springer, 1996.
Smith, A. Lee. The Analytical Chemistry of Silicones. New York: Wiley, 1991.


Solar thermal engine

The invention: The first commercially practical plant for generating electricity from solar energy.

The people behind the invention:
Frank Shuman (1862-1918), an American inventor
John Ericsson (1803-1889), an American engineer
Augustin Mouchout (1825-1911), a French physics professor

Power from the Sun

According to tradition, the Greek scholar Archimedes used reflective mirrors to concentrate the rays of the Sun and set afire the ships of an attacking Roman fleet in 212 B.C.E. The story illustrates the long tradition of using mirrors to concentrate solar energy from a large area onto a small one, producing very high temperatures.

With the backing of Napoleon III, the Frenchman Augustin Mouchout built, between 1864 and 1872, several steam engines that were powered by the Sun. Mirrors concentrated the sun's rays to a point, producing a temperature that would boil water. The steam drove an engine that operated a water pump. The largest engine had a cone-shaped collector, or "axicon," lined with silver-plated metal. The French government operated the engine for six months but decided it was too expensive to be practical.

John Ericsson, the American famous for designing and building the Civil War ironclad ship Monitor, built seven steam-driven solar engines between 1871 and 1878. In Ericsson's design, rays were focused onto a line rather than a point. Long mirrors, curved into a parabolic shape, tracked the Sun. The rays were focused onto a water-filled tube mounted above the reflectors to produce steam. The engineer's largest engine, which used an 11- × 16-foot trough-shaped mirror, delivered nearly 2 horsepower. Because his solar engines were ten times more expensive than conventional steam engines, Ericsson converted them to run on coal to avoid financial loss.


Frank Shuman, a well-known inventor in Philadelphia, Pennsylvania, entered the field of solar energy in 1906. The self-taught engineer believed that curved, movable mirrors were too expensive. His first large solar engine was a hot-box, or flat-plate, collector. It lay flat on the ground and had blackened pipes filled with a liquid that had a low boiling point. The solar-heated vapor ran a 3.5-horsepower engine.

Shuman's wealthy investors formed the Sun Power Company to develop and construct the largest solar plant ever built. The site chosen was in Egypt, but the plant was built near Shuman's home for testing before it was sent to Egypt. When the inventor added ordinary flat mirrors to reflect more sunlight into each collector, he doubled the heat production of the collectors. The 572 trough-type collectors were assembled in twenty-six rows. Water was piped through the troughs and converted to steam. A condenser converted the steam to water, which reentered the collectors. The engine pumped 3,000 gallons of water per minute and produced 14 horsepower; performance was expected to improve 25 percent in the sunny climate of Egypt.

British investors requested that Professor C. V. Boys review the solar plant before it was shipped to Egypt. Boys pointed out that the bottom of each collector was not receiving any direct solar energy; in fact, heat was being lost through the bottom. He suggested that each row of flat mirrors be replaced by a single parabolic reflector, and Shuman agreed. Shuman thought Boys's idea was original, but he later realized it was based on Ericsson's design.

The company finally constructed the improved plant in Meadi, Egypt, a farming district on the Nile River. Five solar collectors, spaced 25 feet apart, were built in a north-south line. Each was about 200 feet long and 10 feet wide. Trough-shaped reflectors were made of mirrors held in place by brass springs that expanded and contracted with changing temperatures.
The parabolic mirrors shifted automatically so that the rays were always focused on the boiler. Inside the 15-inch boiler that ran down the middle of the collector, water was heated and converted to steam. The engine produced more than 55 horsepower, which was enough to pump 6,000 gallons of water per minute.

Trough-shaped collectors with flat mirrors (above) produced enough solar thermal energy to pump 3,000 gallons of water per minute. Trough-shaped collectors with parabolic mirrors (below) produced enough solar thermal energy to pump 6,000 gallons of water per minute.

The purchase price of Shuman's solar plant was twice as high as that of a coal-fired plant, but its operating costs were far lower. In Egypt, where coal was expensive, the entire purchase price would be recouped in four years. Afterward, the plant would operate for practically nothing. The first practical solar engine was now in operation, providing enough energy to drive a large-scale irrigation system in the floodplain of the Nile River.

By 1914, Shuman's work was enthusiastically supported, and solar plants were planned for India and Africa. Shuman hoped to build 20,000 reflectors in the Sahara Desert and generate energy equal to all the coal mined in one year, but the outbreak of World War I ended his dreams of large-scale solar developments. The Meadi project was abandoned in 1915, and Shuman died before the war ended. Powerful nations lost interest in solar power and began to replace coal with oil. Rich oil reserves were discovered in many desert zones that were ideal locations for solar power.

Impact

Although World War I ended Frank Shuman's career, his breakthrough proved to the world that solar power held great promise for the future. His ideas were revived in 1957, when the Soviet Union planned a huge solar project for Siberia. A large boiler was fixed on a platform 140 feet high. Parabolic mirrors, mounted on 1,300 railroad cars, revolved on circular tracks to focus light on the boiler. The full-scale model was never built, but the design inspired the solar power tower.

In the Mojave Desert near Barstow, California, an experimental power tower, Solar One, began operation in 1982. The system collects solar energy to deliver steam to turbines that produce electric power. The 30-story tower is surrounded by more than 1,800 mirrors that adjust continually to track the Sun. Solar One generates about 10 megawatts, enough power for 5,000 people. Solar One was expensive, but future power towers will generate electricity as cheaply as fossil fuels can. If the costs of the air and water pollution caused by coal burning were considered, solar power plants would already be recognized as cost effective. Meanwhile, Frank Shuman's success in establishing and operating a thoroughly practical large-scale solar engine continues to inspire research and development.

See also Compressed-air-accumulating power plant; Fuel cell; Geothermal power; Nuclear power plant; Photoelectric cell; Photovoltaic cell; Tidal power plant.

Further Reading

De Kay, James T. Monitor: The Story of the Legendary Civil War Ironclad and the Man Whose Invention Changed the Course of History. New York: Ballantine, 1999.


Mancini, Thomas R., James M. Chavez, and Gregory J. Kolb. "Solar Thermal Power Today and Tomorrow." Mechanical Engineering 116, no. 8 (August, 1994).
Moore, Cameron M. "Cooking Up Electricity with Sunlight." The World & I 12, no. 7 (July, 1997).
Parrish, Michael. "Enron Makes Electrifying Proposal: Energy: The Respected Developer Announces a Huge Solar Plant and a Breakthrough Price." Los Angeles Times (November 5, 1994).


Sonar

The invention: A device that detects sound waves transmitted through water, sonar was originally developed to detect enemy submarines but is also used in navigation, fish location, and ocean mapping.

The people behind the invention:
Jacques Curie (1855-1941), a French physicist
Pierre Curie (1859-1906), a French physicist
Paul Langévin (1872-1946), a French physicist

Active Sonar, Submarines, and Piezoelectricity

Sonar, which stands for sound navigation and ranging, is the American name for a device that the British call "asdic." There are two types of sonar. Active sonar, the more widely used of the two types, detects and locates underwater objects when those objects reflect sound pulses sent out by the sonar. Passive sonar merely listens for sounds made by underwater objects. Passive sonar is used mostly when the loud signals produced by active sonar cannot be used (for example, in submarines).

The invention of active sonar was the result of American, British, and French efforts, although it is often credited to Paul Langévin, who built the first working active sonar system by 1917. Langévin's original reason for developing sonar was to locate icebergs, but the horrors of German submarine warfare in World War I led to the new goal of submarine detection. Both Langévin's short-range system and long-range modern sonar depend on the phenomenon of "piezoelectricity," which was discovered by Pierre and Jacques Curie in 1880. (Piezoelectricity is electricity that is produced by certain materials, such as certain crystals, when they are subjected to pressure.)

Since its invention, active sonar has been improved and its capabilities have been increased. Active sonar systems are used to detect submarines, to navigate safely, to locate schools of fish, and to map the oceans.


Sonar Theory, Development, and Use

Although active sonar had been developed by 1917, it was not available for military use until World War II. An interesting major use of sonar before that time was measuring the depth of the ocean. That use began when the 1922 German Meteor Oceanographic Expedition was equipped with an active sonar system. The system was to be used to help pay German World War I debts by aiding in the recovery of gold from wrecked vessels. It was not used successfully to recover treasure, but the expedition's use of sonar to determine ocean depth led to the discovery of the Mid-Atlantic Ridge. This development revolutionized underwater geology.

Active sonar operates by sending out sound pulses, often called "pings," that travel through water and are reflected as echoes when they strike large objects. Echoes from these targets are received by the system, amplified, and interpreted. Sound is used instead of light or radar because its absorption by water is much lower. The time that passes between ping transmission and the return of an echo is used to identify the distance of a target from the system by means of a method called "echo ranging." The basis for echo ranging is the normal speed of sound in seawater (about 5,000 feet per second). The distance of the target from the sonar system is calculated by means of a simple equation: range = speed of sound × 0.5 × elapsed time. The elapsed time is divided in half because it is made up of the time taken to reach the target and the time taken for the echo to return. The ability of active sonar to show detail increases as the energy of transmitted sound pulses is raised by decreasing the sound wavelength.

Figuring out active sonar data is complicated by many factors.
These include the roughness of the ocean, which scatters sound and causes the strength of echoes to vary, making it hard to estimate the size and identity of a target; the speed of the sound wave, which changes in accordance with variations in water temperature, pressure, and saltiness; and noise caused by waves, sea animals, and ships, which limits the range of active sonar systems.

A simple active pulse sonar system produces a piezoelectric signal of a given frequency and time duration. Then, the signal is amplified and turned into sound, which enters the water. Any echo that is produced returns to the system to be amplified and used to determine the identity and distance of the target. Most active sonar systems are mounted near surface vessel keels or on submarine hulls in one of three ways. The first and most popular mounting method permits vertical rotation and scanning of a section of the ocean whose center is the system's location. The second method, which is most often used in depth sounders, directs the beam downward in order to measure ocean depth. The third method, called wide scanning, involves the use of two sonar systems, one mounted on each side of the vessel, in such a way that the two beams that are produced scan the whole ocean at right angles to the direction of the vessel's movement.

Active single-beam sonar operation applies an alternating voltage to a piezoelectric crystal, making it part of an underwater loudspeaker (transducer) that creates a sound beam of a particular frequency. When an echo returns, the system becomes an underwater microphone (receiver) that identifies the target and determines its range. The sound frequency that is used is determined by the sonar's purpose and the fact that the absorption of sound by water increases with frequency. For example, long-range submarine-seeking sonar systems (whose detection range is about ten miles) operate at 3 to 40 kilohertz. In contrast, short-range systems that work at about 500 feet (in mine sweepers, for example) use 150 kilohertz to 2 megahertz.

Active sonar detects and locates underwater objects that reflect sound pulses sent out by the sonar.

Paul Langévin

If he had not published the Special Theory of Relativity in 1905, Albert Einstein once said, Paul Langévin would have done so not long afterward. Born in Paris in 1872, Langévin was among the foremost physicists of his generation. He studied in the best French schools of science (with such teachers as Pierre Curie and Jean Perrin) and became a professor of physics at the Collège de France in 1904. He moved to the Sorbonne in 1909.

Langévin's research was always widely influential. In addition to his invention of active sonar, he was especially noted for his studies of the molecular structure of gases, his analysis of secondary X rays from irradiated metals, his theory of magnetism, and his work on piezoelectricity and piezoceramics. His suggestion that magnetic properties are linked to the valence electrons of atoms inspired Niels Bohr's classic model of the atom. In his later career, as a champion of Einstein's theories of relativity, Langévin worked on the implications of the space-time continuum.

During World War II, Langévin, a pacifist, publicly denounced the Nazis and their occupation of France. They jailed him for it. He escaped to Switzerland in 1944, returning as soon as France was liberated. He died in late 1946.

Impact

Modern active sonar has affected military and nonmilitary activities ranging from submarine location to undersea mapping and fish location. In all these uses, two very important goals have been to increase the ability of sonar to identify a target and to increase the effective range of sonar. Much work related to these two goals has involved the development of new piezoelectric materials and the replacement of natural minerals (such as quartz) with synthetic piezoelectric ceramics.
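The echo-ranging arithmetic described earlier (range = speed of sound × 0.5 × elapsed time, with sound traveling roughly 5,000 feet per second in seawater) can be sketched directly. The wavelength helper, which illustrates the frequency trade-off discussed above, is an added example rather than a formula from the text:

```python
# Echo ranging: the elapsed time covers the round trip to the target and
# back, so the one-way range is half the total sound path.
SPEED_OF_SOUND_FT_PER_S = 5000.0  # nominal speed of sound in seawater

def echo_range_ft(elapsed_s):
    """One-way distance to a target from the round-trip echo time."""
    return SPEED_OF_SOUND_FT_PER_S * 0.5 * elapsed_s

def wavelength_ft(frequency_hz):
    """Wavelength of a sonar ping; a higher frequency means a shorter
    wavelength (finer detail) but greater absorption by the water."""
    return SPEED_OF_SOUND_FT_PER_S / frequency_hz

print(echo_range_ft(4.0))      # an echo after 4 s -> target 10000.0 ft away
print(wavelength_ft(40000.0))  # a 40 kHz ping -> 0.125 ft wavelength
```

The two helpers make the design tension concrete: raising the frequency sharpens the detail a ping can resolve, while the growing absorption shortens the usable range.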


Efforts have also been made to redesign the organization of sonar systems. One very useful development has been changing beam-making transducers from one-beam units to multibeam modules made of many small piezoelectric elements. Systems that incorporate these developments have many advantages, particularly the ability to search simultaneously in many directions. In addition, systems have been redesigned to be able to scan many echo beams simultaneously with electronic scanners that feed into a central receiver. These changes, along with computer-aided tracking and target classification, have led to the development of greatly improved active sonar systems. It is expected that sonar systems will become even more powerful in the future, finding uses that have not yet been imagined.

See also Aqualung; Bathyscaphe; Bathysphere; Geiger counter; Gyrocompass; Radar; Richter scale; Ultrasound.

Further Reading

Curie, Marie. Pierre Curie. New York: Dover Publications, 1923.
Hackmann, Willem Dirk. Seek and Strike: Sonar, Anti-Submarine Warfare, and the Royal Navy, 1914-54. London: H.M.S.O., 1984.
Segrè, Emilio. From X-Rays to Quarks: Modern Physicists and Their Discoveries. San Francisco: W. H. Freeman, 1980.
Senior, John E. Marie and Pierre Curie. Gloucestershire: Sutton, 1998.


Stealth aircraft

The invention: The first generation of "radar-invisible" aircraft, stealth planes were designed to elude enemy radar systems.

The people behind the invention:
Lockheed Corporation, an American research and development firm
Northrop Corporation, an American aerospace firm

Radar

During World War II, two weapons were developed that radically altered the thinking of the U.S. military-industrial establishment and the composition of U.S. military forces. These weapons were the atomic bombs that were dropped on the Japanese cities of Hiroshima and Nagasaki by U.S. forces and "radio detection and ranging," or radar. Radar saved the English during the Battle of Britain, and it was radar that made it necessary to rethink aircraft design. With radar, attacking aircraft can be detected hundreds of miles from their intended targets, which makes it possible for those aircraft to be intercepted before they can attack.

During World War II, radar, using microwaves, was able to relay the number, distance, direction, and speed of German aircraft to British fighter interceptors. This development allowed the fighter pilots of the Royal Air Force, "the few" who were so highly praised by Winston Churchill, to shoot down four times as many planes as they lost. Because of the development of radar, American airplane design strategy has been to reduce the planes' cross sections, reduce or eliminate the use of metal by replacing it with composite materials, and eliminate the angles that are found on most aircraft control surfaces. These actions help make aircraft less visible, and in some cases almost invisible, to radar. The Lockheed F-117A Nightrider and the Northrop B-2 Stealth Bomber are the results of these efforts.

Airborne "Ninjas"

Hidden inside Lockheed Corporation is a research and development organization that is unique in the corporate world. This facility has provided the Air Force with the Sidewinder heat-seeking missile; the SR-71, a titanium-skinned aircraft that can fly at four times the speed of sound; and, most recently, the F-117A Nightrider. The Nightrider eluded Iraqi radar so effectively during the 1991 Persian Gulf War that the Iraqis nicknamed it Shaba, which is an Arabic word that means ghost. In an unusual move for military projects, the Nightrider was delivered to the Air Force in 1982, before the plane had been perfected. This was done so that Air Force pilots could test fly the plane and provide input that could be used to improve the aircraft before it went into full production.

The Northrop B-2 Stealth Bomber was the result of a design philosophy that was completely different from that of the F-117A Nightrider. The F-117A, for example, has a very angular appearance, but the angles are all greater than 180 degrees. This configuration spreads out radar waves rather than allowing them to be concentrated and sent back to their point of origin. The B-2, however, stays away from angles entirely, opting for a smooth surface that also acts to spread out the radar energy. (The B-2 so closely resembles the YB-49 Flying Wing, which was developed in the late 1940's, that it even has the same wingspan.) The surface of the aircraft is covered with radar-absorbing material and carries its engines and weapons inside to reduce the radar cross section. There are no vertical control surfaces, which has the disadvantage of making the aircraft unstable, so the stabilizing system uses computers to make small adjustments in the control elements on the trailing edges of the wings, thus increasing the craft's stability.

The F-117A Nightrider and the B-2 Stealth Bomber are the "ninjas" of military aviation. Capable of striking powerfully, rapidly, and invisibly, these aircraft added a dimension to the U.S. Air Force that did not exist previously.
Before the advent of these aircraft, missions that required radar-avoidance tactics had to be flown below the horizon of ground-based radar, roughly 30.5 meters (100 feet) above the ground. Such low-altitude flight is dangerous because of both the increased difficulty of maneuvering and the aircraft’s vulnerability to ground fire. Additionally, such flying does not conceal the aircraft from the airborne radar carried by such craft as the American E-3A AWACS and the former Soviet Mainstay. In a major conflict, the only aircraft
that could effectively penetrate enemy airspace would be the Nightrider and the B-2. The purpose of the B-2 was to carry nuclear weapons into hostile airspace undetected. With the demise of the Soviet Union, mainland China seemed the only remaining major nuclear threat. For this reason, many defense experts believed that there was no longer a need for two radar-invisible planes, and cuts in U.S. military expenditures threatened the B-2 program during the early 1990’s.

Consequences

The development of the Nightrider and the B-2 meant that the former Soviet Union would have had to spend at least $60 billion to upgrade its air defense forces to meet the challenge offered by these aircraft. This fact, combined with the evolution of the Strategic Defense Initiative, commonly called “Star Wars,” led to the United States’ victory in the arms race.

Additionally, stealth technology has found its way onto the conventional battlefield. As was shown in 1991 during the Desert Storm campaign in Iraq, targets that have strategic importance are often surrounded by a network of anti-air missiles and gun emplacements. During the Desert Storm air war, the F-117A was the only Allied aircraft to be assigned to targets in Baghdad. Nightriders destroyed more than 47 percent of the strategic areas that were targeted, and every pilot and plane returned to base unscathed.

Since the world appears to be moving away from superpower conflicts and toward smaller regional conflicts, stealth aircraft may come to be used more for surveillance than for air attacks. This is particularly true because the SR-71, which previously played the primary role in surveillance, has been retired from service.

See also Airplane; Cruise missile; Hydrogen bomb; Radar; Rocket; Turbojet; V-2 rocket.

Further Reading

Chun, Clayton K. S. The Lockheed F-117A. Santa Monica, Calif.: Rand, 1991.


Goodall, James C. America’s Stealth Fighters and Bombers. Osceola, Wis.: Motorbooks, 1992.
Pape, Garry R., and John M. Campbell. Northrop Flying Wings: A History of Jack Northrop’s Visionary Aircraft. Atglen, Pa.: Schiffer, 1995.
Thornborough, Anthony M. Stealth. London: Ian Allen, 1991.


Steelmaking process

The invention: Known as the basic oxygen, or L-D, process, a method for producing steel that worked about twelve times faster than earlier methods.

The people behind the invention:
Henry Bessemer (1813-1898), the English inventor of a process for making steel from iron
Robert Durrer (1890-1978), a Swiss scientist who first proved the workability of the oxygen process in a laboratory
F. A. Loosley (1891-1966), head of research and development at Dofasco Steel in Canada
Theodor Suess (1894-1956), works manager at Voest

Ferrous Metal

The modern industrial world is built on ferrous metal. Until 1857, ferrous metal meant cast iron and wrought iron, though a few specialty uses of steel, especially for cutlery and swords, had existed for centuries. In 1857, Henry Bessemer developed the first large-scale method of making steel, the Bessemer converter. By the 1880’s, modification of his concepts (particularly the development of a “basic” process that could handle ores high in phosphorus) had made large-scale production of steel possible.

Bessemer’s invention depended on the use of ordinary air, infused into the molten metal, to burn off excess carbon. Bessemer himself had recognized that if it had been possible to use pure oxygen instead of air, oxidation of the carbon would be far more efficient and rapid. Pure oxygen was not available in Bessemer’s day, except at very high prices, so steel producers settled for what was readily available: ordinary air. In 1929, however, the Linde-Fränkl process for separating the oxygen in air from the other elements was discovered, and for the first time inexpensive oxygen became available.

Nearly twenty years elapsed before the ready availability of pure oxygen was applied to refining the method of making steel. The first experiments were carried out in Switzerland by Robert Durrer. In
1949, he succeeded in making steel expeditiously in a laboratory setting through the use of a blast of pure oxygen. Switzerland, however, had no large-scale metallurgical industry, so the Swiss turned the idea over to the Austrians, who for centuries had exploited the large deposits of iron ore in a mountain in central Austria. Theodor Suess, the works manager of the state-owned Austrian steel complex, Voest, instituted some pilot projects. The results were sufficiently favorable to induce Voest to authorize construction of production converters. In 1952, the first “heat” (as a batch of steel is called) was “blown in” at the Voest works in Linz. The following year, another converter was put into production at the works in Donauwitz. These two initial locations (Linz and Donauwitz) led to the basic oxygen process sometimes being referred to as the L-D process.

The L-D Process

The basic oxygen, or L-D, process makes use of a converter similar to the Bessemer converter. Unlike the Bessemer, however, the L-D converter blows pure oxygen into the molten metal from above through a water-cooled injector known as a lance. The oxygen burns off the excess carbon rapidly, and the molten metal can then be poured off into ingots, which can later be reheated and formed into the ultimately desired shape. The great advantage of the process is the speed with which a “heat” reaches the desirable metallurgical composition for steel, with a carbon content between 0.1 percent and 2 percent. The basic oxygen process requires about forty minutes. In contrast, the prevailing method of making steel, the open-hearth furnace (which transferred the technique from the closed Bessemer converter to an open-burning furnace to which the necessary additives could be introduced by hand), requires eight to eleven hours for a “heat,” or batch. The L-D process was not without its drawbacks, however.
It was adopted by the Austrians because, by carefully calibrating the timing and amount of oxygen introduced, they could turn their moderately phosphoric ore into steel without further intervention. The process required ore of a standardized metallurgical, or chemical, content, for which the lancing had been calculated. It produced a large amount of iron-oxide dust that polluted the surrounding atmosphere, and it required a lining in the converter of dolomitic brick. The specific chemical content of the brick contributed to the chemical mixture that produced the desired result.

The Austrians quickly realized that the process was an improvement. In May, 1952, the patent specifications for the new process were turned over to a new company, Brassert Oxygen Technik, or BOT, which filed patent applications around the world. BOT embarked on an aggressive marketing campaign, bringing potential customers to Austria to observe the process in action. Despite BOT’s efforts, the new process was slow to catch on, even though in 1953 BOT licensed a U.S. firm, Kaiser Engineers, to spread the process in the United States.

Many factors serve to explain the reluctance of steel producers around the world to adopt the new process. One of these was the large investment most major steel producers had in their open-hearth furnaces. Another was uncertainty about the pollution factor. Later, special pollution-control equipment would be developed to deal with this problem. A third concern was whether the necessary refractory liners for the new converters would be available. A fourth was the fact that the new process could handle a load that contained no more than 30 percent scrap, preferably less. In practice, therefore, it would work only where a blast furnace smelting ore was already set up.

One of the earliest firms to show serious interest in the new technology was Dofasco, a Canadian steel producer. Between 1952 and 1954, Dofasco, pushed by its head of research and development, F. A. Loosley, built pilot operations to test the methodology. The results were sufficiently promising that in 1954 Dofasco built the first basic oxygen furnace outside Austria. Dofasco had recently built its own blast furnace, so it had ore available on site. It was able to devise ways of dealing with the pollution problem, and it found refractory liners that would work.
It became the first North American producer of basic oxygen steel.

Having bought the licensing rights in 1953, Kaiser Engineers was looking for a U.S. steel producer adventuresome enough to invest in the new technology. It found that producer in McLouth Steel, a small steel plant in Detroit, Michigan. Kaiser Engineers supplied much of the technical advice that enabled McLouth to build the first U.S. basic oxygen steel facility, though McLouth also sent one of its engineers to Europe to observe the Austrian operations. McLouth, which had backing from General Motors, also made use of technical descriptions in the literature.

Henry Bessemer

Henry Bessemer was born in the small village of Charlton, England, in 1813. His father was an early example of a technician, specializing in steam engines, and operated a business making metal type for printing presses. The elder Bessemer wanted his son to attend university, but Henry preferred to study under his father. During his apprenticeship, he learned the properties of alloys. At seventeen he moved to London to open his own business, which fabricated specialty metals. Three years later the Royal Academy held an exhibition of Bessemer’s work. His career, well begun, moved from one invention to another until at his death in 1898 he held 114 patents. Among them were processes for casting type and producing graphite for pencils; methods for manufacturing glass, sugar, bronze powder, and ships; and his best-known creation, the Bessemer converter for making steel from iron. Bessemer built his first converter in 1855; fifteen years later Great Britain was producing half of the world’s steel.

Bessemer’s life and career were models of early Industrial Age industry, prosperity, and longevity. A millionaire from patent royalties, he retired at fifty-nine, lived another twenty-six years, working on yet more inventions and cultivating astronomy as a hobby, and was married for sixty-four years. Among his many awards and honors was a knighthood, bestowed by Queen Victoria.

The Specifications Question

One factor that held back adoption of basic oxygen steelmaking was the question of specifications. Many major engineering projects came with precise specifications detailing the type of steel to be used and even the method of its manufacture. Until basic oxygen steel was recognized as an acceptable form by the engineering fraternity, so that job specifications included it as appropriate in specific applications, it could not find large-scale markets. It took a number of years for engineers to modify their specifications so that basic oxygen steel could be used.

The next major conversion to the new steelmaking process occurred in Japan. The Japanese had learned of the process early, while Japanese metallurgical engineers were touring Europe in 1951. Some of them stopped off at the Voest works to look at the pilot projects there, and they talked with the Swiss inventor, Robert Durrer. These engineers carried knowledge of the new technique back to Japan. In 1957 and 1958, Yawata Steel and Nippon Kokan, the largest and third-largest steel producers in Japan, decided to implement the basic oxygen process. An important contributor to this decision was the Ministry of International Trade and Industry, which brokered a licensing arrangement through Nippon Kokan, which in turn had signed a one-time payment arrangement with BOT. The licensing arrangement allowed other producers besides Nippon Kokan to use the technique in Japan.

The Japanese made two important technical improvements in the basic oxygen technology. They developed a multiholed lance for blowing in oxygen, thus dispersing it more effectively in the molten metal and prolonging the life of the refractory lining of the converter vessel. They also pioneered the OG process for recovering some of the gases produced in the converter. This procedure reduced the pollution generated by the basic oxygen converter.

The first large American steel producer to adopt the basic oxygen process was Jones and Laughlin, which decided to implement the new process for several reasons. It had some of the oldest equipment in the American steel industry, ripe for replacement. It also had experienced significant technical difficulties at its Aliquippa plant, difficulties it was unable to solve by modifying its open-hearth procedures.
It therefore signed an agreement with Kaiser Engineers to build some of the new converters for Aliquippa. These converters were constructed on license from Kaiser Engineers by Pennsylvania Engineering, with the exception of the lances, which were imported from Voest in Austria. Subsequent lances, however, were built in the United States. Some of Jones and Laughlin’s production managers were sent to Dofasco for training, and technical
advisers were brought to the Aliquippa plant both from Kaiser Engineers and from Austria.

Other European countries were somewhat slower to adopt the new process. A major cause for the delay was the necessary modification of the process to fit the high phosphoric ores available in Germany and France. Europeans also experimented with modifications of the basic oxygen technique by developing converters that revolved. These converters, known as Kaldo in Sweden and Rotor in Germany, proved in the end to have sufficient technical difficulties that they were abandoned in favor of the standard basic oxygen converter. The problems they had been designed to solve could be better dealt with through modifications of the lance and through adjustments in additives.

By the mid-1980’s, the basic oxygen process had spread throughout the world. Neither Japan nor the European Community was producing any steel by the older, open-hearth method. In conjunction with the electric arc furnace, fed largely on scrap metal, the basic oxygen process had transformed the steel industry of the world.

Impact

The basic oxygen process has significant advantages over older procedures. It does not require additional heat, whereas the open-hearth technique calls for the infusion of nine to twelve gallons of fuel oil to raise the temperature of the metal to the level necessary to burn off all the excess carbon. The investment cost of the converter is about half that of an open-hearth furnace. Fewer refractories are required, less than half those needed in an open-hearth furnace. Most important of all, however, a “heat” requires less than an hour, as compared with the eight or more hours needed for a “heat” in an open-hearth furnace.

There were some disadvantages to the basic oxygen process. Perhaps the most important was the limited amount of scrap that could be included in a “heat,” a maximum of 30 percent.
Because the process required at least 70 percent new ore, it could be operated most effectively only in conjunction with a blast furnace. Counterbalancing this last factor was the rapid development of the electric arc furnace, which could operate with 100 percent scrap. A firm with its
own blast furnace could, with both an oxygen converter and an electric arc furnace, handle the available raw material. The advantages of the basic oxygen process overrode the disadvantages. Some other new technologies combined to produce this effect. The most important of these was continuous casting. Because of the short time required for a “heat,” it was possible, if a plant had two or three converters, to synchronize output with the fill needs of a continuous caster, thus largely canceling out some of the economic drawbacks of the batch process. Continuous production, always more economical, was now possible in the basic steel industry, particularly after development of computer-controlled rolling mills. These new technologies forced major changes in the world’s steel industry. Labor requirements for the basic oxygen converter were about half those for the open-hearth furnace. The high speed of the new technology required far less manual labor but much more technical expertise. Labor requirements were significantly reduced, producing major social dislocations in steel-producing regions. This effect was magnified by the fact that demand for steel dropped sharply in the 1970’s, further reducing the need for steelworkers. The U.S. steel industry was slower than either the Japanese or the European to convert to the basic oxygen technique. The U.S. industry generally operated with larger quantities, and it took a number of years before the basic oxygen technique was adapted to converters with an output equivalent to that of the open-hearth furnace. By the time that had happened, world steel demand had begun to drop. U.S. companies were less profitable, failing to generate internally the capital needed for the major investment involved in abandoning open-hearth furnaces for oxygen converters. 
Although union contracts enabled companies to change work assignments when new technologies were introduced, there was stiff resistance to reducing employment of steelworkers, most of whom had lived all their lives in one-industry towns. Finally, engineers at the steel firms were wedded to the old methods and reluctant to change, as were the large bureaucracies of the big U.S. steel firms.

The basic oxygen technology in steel is part of a spate of new technical developments that have revolutionized industrial production, drastically reducing the role of manual labor and dramatically increasing the need for highly skilled individuals with technical expertise. Because capital costs are significantly lower than for alternative processes, it has allowed a number of developing countries to enter heavy industry and compete successfully with the old industrial giants. It has thus changed the face of the steel industry.

See also Assembly line; Buna rubber; Disposable razor; Laminated glass; Memory metal; Neoprene; Oil-well drill bit; Pyrex glass.

Further Reading

Bain, Trevor. Banking the Furnace: Restructuring of the Steel Industry in Eight Countries. Kalamazoo, Mich.: W. E. Upjohn Institute for Employment Research, 1992.
Gold, Bela, Gerhard Rosegger, and Myles G. Boylan, Jr. Evaluating Technological Innovations: Methods, Expectations, and Findings. Lexington, Mass.: Lexington Books, 1980.
Hall, Christopher. Steel Phoenix: The Fall and Rise of the U.S. Steel Industry. New York: St. Martin’s Press, 1997.
Hoerr, John P. And the Wolf Finally Came: The Decline of the American Steel Industry. Pittsburgh, Pa.: University of Pittsburgh Press, 1988.
Lynn, Leonard H. How Japan Innovates: A Comparison with the United States in the Case of Oxygen Steelmaking. Boulder, Colo.: Westview Press, 1982.
Seely, Bruce Edsall. Iron and Steel in the Twentieth Century. New York: Facts on File, 1994.


Supercomputer

The invention: A computer that had the greatest computational power that then existed.

The person behind the invention:
Seymour R. Cray (1928-1996), American computer architect and designer

The Need for Computing Power

Although modern computers have roots in concepts first proposed in the early nineteenth century, it was only around 1950 that they became practical. Early computers enabled their users to calculate equations quickly and precisely, but it soon became clear that even more powerful computers—machines capable of receiving, computing, and sending out data with great precision and at the highest speeds—would enable researchers to use computer “models,” which are programs that simulate the conditions of complex experiments.

Few computer manufacturers gave much thought to building the fastest machine possible, because such an undertaking is expensive and because the business use of computers rarely demands the greatest processing power. The first company to build computers specifically to meet scientific and governmental research needs was Control Data Corporation (CDC). The company had been founded in 1957 by William Norris, and its young vice president for engineering was the highly respected computer engineer Seymour R. Cray. When CDC decided to limit high-performance computer design, Cray struck out on his own, starting Cray Research in 1972. His goal was to design the most powerful computer possible. To that end, he needed to choose the principles by which his machine would operate; that is, he needed to determine its architecture.

The Fastest Computer

All computers rely upon certain basic elements to process data. Chief among these elements are the central processing unit, or CPU
(which handles data), memory (where data are stored temporarily before and after processing), and the bus (the interconnection between memory and the processor, and the means by which data are transmitted to or from other devices, such as a disk drive or a monitor). The structure of early computers was based on ideas developed by the mathematician John von Neumann, who, in the 1940’s, conceived a computer architecture in which the CPU controls all events in a sequence: It fetches data from memory, performs calculations on those data, and then stores the results in memory. Because it functions in sequential fashion, the speed of this “scalar processor” is limited by the rate at which the processor is able to complete each cycle of tasks. Before Cray produced his first supercomputer, other designers tried different approaches. One alternative was to link a vector processor to a scalar unit. A vector processor achieves its speed by performing computations on a large series of numbers (called a vector) at one time rather than in sequential fashion, though specialized and complex programs were necessary to make use of this feature. In fact, vector processing computers spent most of their time operating as traditional scalar processors and were not always efficient at switching back and forth between the two processing types. Another option chosen by Cray’s competitors was the notion of “pipelining” the processor’s tasks. A scalar processor often must wait while data are retrieved or stored in memory. Pipelining techniques allow the processor to make use of idle time for calculations in other parts of the program being run, thus increasing the effective speed. A variation on this technique is “parallel processing,” in which multiple processors are linked. 
If each of, for example, eight central processors is given a portion of a computing task to perform, the task will be completed more quickly than the traditional von Neumann architecture, with its single processor, would allow.

Ever the pragmatist, however, Cray decided to employ proved technology rather than use advanced techniques in his first supercomputer, the Cray 1, which was introduced in 1976. Although the Cray 1 did incorporate vector processing, Cray used a simple form of vector calculation that made the technique practical and easy to use. Most striking about this computer was its shape, which was far more modern than its internal design. The Cray 1 was shaped like a cylinder with a small section missing and a hollow center, with what appeared to be a bench surrounding it. The shape of the machine was designed to minimize the length of the interconnecting wires that ran between circuit boards, allowing electricity to travel the shortest possible distance. The bench concealed an important part of the cooling system that kept the machine at an appropriate operating temperature.

Seymour R. Cray

Seymour R. Cray was born in 1928 in Chippewa Falls, Wisconsin. The son of a civil engineer, he became interested in radio and electronics as a boy. After graduating from high school in 1943, he joined the U.S. Army, was posted to Europe in an infantry communications platoon, and fought in the Battle of the Bulge. Back from the war, he pursued his interest in electronics in college while majoring in mathematics at the University of Minnesota. Upon graduation in 1950, he took a job at Engineering Research Associates. It was there that he first learned about computers. In fact, he helped design the ERA 1103 (later sold as the UNIVAC 1103), one of the earliest digital computers.

Cray co-founded Control Data Corporation in 1957. Based on his ideas, the company built large-scale, high-speed computers. In 1972 he founded his own company, Cray Research Incorporated, with the intention of employing new processing methods and simplifying architecture and software to build the world’s fastest computers. He succeeded, and the series of computers that the company marketed made possible computer modeling as a central part of scientific research in areas as diverse as meteorology, oil exploration, and nuclear weapons design. Through the 1970’s and 1980’s Cray Research was at the forefront of supercomputer technology, which became one of the symbols of American technological leadership.

In 1989 Cray left Cray Research to form still another company, Cray Computer Corporation. He planned to build the next-generation supercomputer, the Cray 3, but advances in microprocessor technology undercut the demand for supercomputers. Cray Computer entered bankruptcy in 1995. A year later he died from injuries sustained in an automobile accident near Colorado Springs, Colorado.
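The parallel-processing idea described above (splitting one task among several processors, then combining the partial results) can be sketched in a few lines of Python. This is a conceptual illustration only, not a model of any particular machine’s hardware; the eight-way split simply mirrors the eight-processor example in the text.

```python
# Conceptual sketch of parallel decomposition: divide one computing
# task into independent portions, "assign" each portion to a processor,
# and combine the partial results afterward.

def partial_sums(data, n_procs):
    """Split data into n_procs contiguous chunks and return the sum of
    each chunk, as if each chunk ran on its own processor."""
    chunk = (len(data) + n_procs - 1) // n_procs  # ceiling division
    return [sum(data[i:i + chunk]) for i in range(0, len(data), chunk)]

data = list(range(1_000))
parts = partial_sums(data, 8)   # eight independent sub-tasks
total = sum(parts)              # the combining step
assert total == sum(data)       # matches the single-processor result
```

The speedup in real machines comes from running those chunks simultaneously; the combining step, which must run serially, is one reason parallel machines rarely achieve a full n-fold gain.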


The measurements that describe the performance of supercomputers are called MIPS (millions of instructions per second) for scalar processors and megaflops (millions of floating-point operations per second) for vector processors. (Floating-point numbers are numbers expressed in scientific notation, such as 10²⁷.) Whereas the fastest computer before the Cray 1 was capable of some 35 MIPS, the Cray 1 was capable of 80 MIPS. Moreover, the Cray 1 was theoretically capable of vector processing at the rate of 160 megaflops, a rate unheard of at the time.

Consequences

Seymour Cray first estimated that there would be few buyers for a machine as advanced as the Cray 1, but his estimate turned out to be incorrect. There were many scientists who wanted to perform computer modeling (in which scientific ideas are expressed in such a way that computer-based experiments can be conducted) and who needed raw processing power. When dealing with natural phenomena such as the weather or geological structures, or in rocket design, researchers need to make calculations involving large amounts of data.

Before computers, advanced experimental modeling was simply not possible, since even the modest calculations for the development of atomic energy, for example, consumed days and weeks of scientists’ time. With the advent of supercomputers, however, large-scale computation of vast amounts of information became possible. Weather researchers can design a detailed program that allows them to analyze complex and seemingly unpredictable weather events such as hurricanes; geologists searching for oil fields can gather data about successful finds to help identify new ones; and spacecraft designers can “describe” in computer terms experimental ideas that are too costly or too dangerous to carry out. As supercomputer performance evolves, there is little doubt that scientists will make ever greater use of its power.
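A megaflops figure translates directly into wall-clock time for a given workload. In the back-of-the-envelope sketch below, the one-billion-operation workload is hypothetical; only the 160-megaflop peak rate comes from the text.

```python
# Back-of-the-envelope timing from a processing rate in megaflops
# (millions of floating-point operations per second).

def seconds_for(flops_needed: float, megaflops: float) -> float:
    """Seconds needed to finish a floating-point workload at a rate."""
    return flops_needed / (megaflops * 1_000_000)

workload = 1e9  # a hypothetical model run of one billion operations
print(seconds_for(workload, 160.0))  # at the Cray 1 vector peak: 6.25 s
```

At 160 megaflops, a billion-operation model run finishes in about six seconds, which is what made routine large-scale simulation feasible for the first time.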
See also Apple II computer; BINAC computer; Colossus computer; ENIAC computer; IBM Model 1401 computer; Personal computer; UNIVAC computer.


Further Reading

Edwards, Owen. “Seymour Cray.” Forbes 154, no. 5 (August 29, 1994).
Lloyd, Therese, and Stanley N. Wellborn. “Computers’ Next Frontiers.” U.S. News & World Report 99 (August 26, 1985).
Slater, Robert. Portraits in Silicon. Cambridge, Mass.: MIT Press, 1987.
Zipper, Stuart. “Chief Exec. Leaves Cray Computer.” Electronic News 38, no. 1908 (April, 1992).


Supersonic passenger plane

The invention: The first commercial airliner to fly passengers at speeds in excess of the speed of sound.

The people behind the invention:
Sir Archibald Russell (1904-    ), a designer with the British Aircraft Corporation
Pierre Satre (1909-    ), technical director at Sud-Aviation
Julian Amery (1919-    ), British minister of aviation, 1962-1964
Geoffroy de Courcel (1912-    ), French minister of aviation, 1962
William T. Coleman, Jr. (1920-    ), U.S. secretary of transportation, 1975-1977

Birth of Supersonic Transportation

On January 21, 1976, the Anglo-French Concorde became the world’s first supersonic airliner to carry passengers on scheduled commercial flights. British Airways flew a Concorde from London’s Heathrow Airport to the Persian Gulf emirate of Bahrain in three hours and thirty-eight minutes. At about the same time, Air France flew a Concorde from Paris’s Charles de Gaulle Airport to Rio de Janeiro, Brazil, in seven hours and twenty-five minutes. The Concordes’ cruising speeds were about twice the speed of sound, or 1,350 miles per hour.

On May 24, 1976, the United States and Europe became linked for the first time by commercial supersonic air transportation. British Airways inaugurated flights between Dulles International Airport in Washington, D.C., and Heathrow Airport. Likewise, Air France inaugurated flights between Dulles International Airport and Charles de Gaulle Airport. The London-Washington, D.C., flight was flown in an unprecedented time of three hours and forty minutes. The Paris-Washington, D.C., flight was flown in three hours and fifty-five minutes.


The Decision to Build the SST

Events leading to the development and production of the Anglo-French Concorde went back almost twenty years and included approximately $3 billion in investment costs. Issues surrounding the development and final production of the supersonic transport (SST) were extremely complex and at times highly emotional. The concept of developing an SST brought with it environmental concerns and questions, safety issues both in the air and on the ground, political intrigue of international proportions, and enormous economic problems stemming from the costs of operations, research, and development.

In England, the decision to begin the SST project was made in October, 1956. Under the promotion of Morien Morgan of the Royal Aircraft Establishment in Farnborough, England, it was decided at the Aviation Ministry headquarters in London to begin development of a supersonic aircraft. This decision was based on the intense competition from the American Boeing 707 and Douglas DC-8 subsonic jets going into commercial service. There was little point in developing another subsonic plane; the alternative was to go above the speed of sound.

In November, 1956, at Farnborough, the first meeting of the Supersonic Transport Aircraft Committee, known as STAC, was held. Members of the STAC proposed that development costs would be in the range of $165 million to $260 million, depending on the range, speed, and payload of the chosen SST. The committee also projected that by 1970, there would be a world market for at least 150 to 500 supersonic planes. Estimates were that the supersonic plane would recover its entire research and development cost through thirty sales.

The British, in order to continue development of an SST, needed a European partner as a way of sharing the costs and preempting objections to proposed funding by England’s Treasury. In 1960, the British government gave the newly organized British Aircraft Corporation (BAC) $1 million for an SST feasibility study.
Sir Archibald Russell, BAC’s chief supersonic designer, visited Pierre Satre, the technical director at the French firm of Sud-Aviation. Satre’s suggestion was to evolve an SST from Sud-Aviation’s highly successful subsonic Caravelle transport. By September, 1962, an agreement was reached by Sud and BAC design teams on a new SST, the Super Caravelle.


There was a bitter battle over the choice of engines, with two British engine firms, Bristol-Siddeley and Rolls-Royce, as contenders. Bristol-Siddeley, whose managing director was Sir Arnold Hall, in collaboration with the French aero-engine company SNECMA, was eventually awarded the contract for the engines. The engine chosen was a "civilianized" version of the Olympus, which Bristol had been developing for the multirole TSR-2 combat plane.

The Concorde Consortium

On November 29, 1962, the Concorde Consortium was created by an agreement between England and the French Republic, signed by Ministers of Aviation Julian Amery and Geoffroy de Courcel. The first Concorde, Model 001, rolled out from Sud-Aviation's St. Martin-du-Touch assembly plant on December 11, 1968. The second, Model 002, was completed at the British Aircraft Corporation a few months later. Eight years later, on January 21, 1976, the Concorde became the world's first supersonic airliner to carry passengers on scheduled commercial flights.

Development of the SST did not come easily for the Anglo-French consortium. The nature of supersonic flight created numerous problems and uncertainties not present in subsonic flight. The SST traveled faster than the speed of sound. Sound travels at 760 miles per hour at sea level at a temperature of 59 degrees Fahrenheit. This speed drops to about 660 miles per hour at sixty-five thousand feet, the cruising altitude of the SST, where the air temperature falls to 70 degrees below zero. The Concorde was designed to fly at a maximum of 1,450 miles per hour. The European designers could therefore use an aluminum alloy construction and stay below the critical skin-friction temperatures that would have required other airframe alloys, such as titanium.

The Concorde was designed with a slender curved wing surface. The design incorporated widely separated engine nacelles, each housing two Olympus 593 jet engines.
The Concorde was also designed with a “droop snoot,” providing three positions: the supersonic configuration, a heat-visor retracted position for subsonic flight, and a nose-lowered position for landing patterns.
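The speeds quoted above imply the Concorde's cruise Mach number directly. The short helper below is an illustrative arithmetic check, not a figure from the text; the 760 and 660 miles-per-hour values for the speed of sound come from the passage above.

```python
# Rough Mach-number check using the figures quoted in the text:
# the speed of sound is about 760 mph at sea level (59 degrees F)
# and about 660 mph at the Concorde's 65,000-foot cruise altitude.

def mach_number(airspeed_mph: float, speed_of_sound_mph: float) -> float:
    """Mach number is true airspeed divided by the local speed of sound."""
    return airspeed_mph / speed_of_sound_mph

# At cruise altitude, the Concorde's 1,450 mph works out to about Mach 2.2.
cruise_mach = mach_number(1450, 660)
print(f"Concorde cruise: Mach {cruise_mach:.2f}")

# The same airspeed at sea level, where sound is faster, is a lower Mach number.
print(f"Same speed at sea level: Mach {mach_number(1450, 760):.2f}")
```

This is why "Mach 2" describes the Concorde even though its miles-per-hour speed was fixed: the Mach number depends on the local speed of sound, which falls with altitude.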


Impact

Early SST designers were faced with questions such as the intensity and ionization effect of cosmic rays at flight altitudes of sixty to seventy thousand feet. The "cascade effect" concerned the intensification of cosmic radiation when particles from outer space struck a metallic cover. Scientists looked for ways to shield passengers from this hazard inside the aluminum or titanium shell of an SST flying high above the protective blanket of the troposphere. Experts questioned whether the risk of being struck by meteorites was any greater for the SST than for subsonic jets and looked for evidence on wind shear of jet streams in the stratosphere.

Other questions concerned the strength and frequency of clear air turbulence above forty-five thousand feet, whether the higher ozone content of the air at SST cruise altitude would affect the materials of the aircraft, whether SST flights would upset or destroy the protective nature of the earth's ozone layer, the effect of aerodynamic heating on material strength, and the tolerable strength of sonic booms over populated areas. These and other questions consumed the designers and researchers involved in developing the Concorde. Through design research and flight tests, many of the questions were resolved or found to be less significant than anticipated. Several others, however, developed into environmental, economic, and international controversies.

In late 1975, the British and French governments requested permission to use the Concorde at New York's John F. Kennedy International Airport and at Dulles International Airport for scheduled flights between the United States and Europe. In December, 1975, as a result of strong opposition from anti-Concorde environmental groups, the U.S. House of Representatives approved a six-month ban on SSTs coming into the United States so that the impact of flights could be studied. Secretary of Transportation William T.
Coleman, Jr., held hearings to prepare for a decision by February 5, 1976, as to whether to allow the Concorde into U.S. airspace. The British and French, if denied landing rights, threatened to take the United States to an international court, claiming that treaties had been violated. The treaties in question were the Chicago Convention and the Bermuda agreements of February 11, 1946, and March 27, 1946. These treaties prohibited the United States from banning aircraft that both France and Great Britain had certified to be safe. The Environmental Defense Fund contended that the United States had the right to ban SST aircraft on environmental grounds.

Under pressure from both sides, Coleman decided to allow limited Concorde service at Dulles and John F. Kennedy airports for a sixteen-month trial period. Service into John F. Kennedy Airport, however, was delayed by a ban imposed by the Port Authority of New York and New Jersey while a suit brought by the airlines was pending. During the test period, detailed records were to be kept on the Concorde's noise levels, vibration, and engine emission levels. Other provisions included that the plane would not fly at supersonic speeds over the continental United States; that all flights could be cancelled by the United States with four months' notice, or immediately if they proved harmful to the health and safety of Americans; and that at the end of a year, four months of study would begin to determine whether the trial period should be extended.

The Concorde's noise was one of the primary issues in determining whether the plane should be allowed into U.S. airports. The Federal Aviation Administration measured the effective perceived noise in decibels. After three months of monitoring, the Concorde's departure noise at 3.5 nautical miles was found to vary from 105 to 130 decibels. The Concorde's approach noise at one nautical mile from the runway threshold varied from 115 to 130 decibels. These readings were approximately equal to the noise levels of other four-engine jets, such as the Boeing 747, on landing but were twice as loud on takeoff.

The Economics of Operation

Another issue of significance was the economics of the Concorde's operation and its tremendous investment costs. In 1956, early predictions of Great Britain's STAC had been for a world market of 150 to 500 supersonic planes.
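The "twice as loud" takeoff comparison reflects a standard psychoacoustic rule of thumb rather than a figure from the hearings: perceived loudness roughly doubles with every 10-decibel increase. A minimal sketch of that rule:

```python
def loudness_ratio(db_a: float, db_b: float) -> float:
    """Approximate perceived-loudness ratio between two noise levels,
    using the rule of thumb that loudness doubles every 10 dB."""
    return 2 ** ((db_a - db_b) / 10)

# A Concorde takeoff measured about 10 dB above a Boeing 747's would be
# perceived as roughly twice as loud; equal readings give a ratio of 1.
print(loudness_ratio(120, 110))  # 2.0
print(loudness_ratio(115, 115))  # 1.0
```

The specific decibel values in the call are illustrative; the monitored Concorde readings in the passage above spanned 105 to 130 decibels.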
In November, 1976, Great Britain's Gerald Kaufman and France's Marcel Cavaille said that production of the Concorde would not continue beyond the sixteen vehicles then contracted for with BAC and Sud-Aviation. There was no demand by U.S. airline corporations for the plane. Because the planes could not fly at supersonic speeds over populated areas on account of the sonic boom phenomenon, markets for the SST had to be separated by at least three thousand miles, with flight paths over mostly water or desert. Studies indicated that there were only twelve to fifteen routes in the world for which the Concorde was suitable.

The planes were expensive, at a price of approximately $74 million each, and had a limited seating capacity of one hundred passengers. The plane's range was about four thousand miles. These statistics compared unfavorably with those of the Boeing 747, which cost $35 million, seated 360, and had a range of six thousand miles. In addition, the International Air Transport Association negotiated fares for Concorde flights equivalent to the current first-class fares plus 20 percent. The marketing promotion for the Anglo-French Concorde was thus limited to the elite business traveler who valued speed over cost of transportation. Given these factors, the recovery of research and development costs for Great Britain and France would never occur.

See also: Airplane; Bullet train; Dirigible; Rocket; Stealth aircraft; Turbojet; V-2 rocket.

Further Reading
Ellingsworth, Rosalind K. "Concorde Stresses Time, Service." Aviation Week and Space Technology 105 (August 16, 1976).
Kozicharow, Eugene. "Concorde Legal Questions Raised." Aviation Week and Space Technology 104 (January 12, 1976).
Ropelewski, Robert. "Air France Poised for Concorde Service." Aviation Week and Space Technology 104 (January 19, 1976).
Sparaco, Pierre. "Official Optimism Grows for Concorde's Return." Aviation Week and Space Technology 154, no. 8 (February 19, 2001).
Trubshaw, Brian. Concorde: The Inside Story. Thrupp, Stroud: Sutton, 2000.


Synchrocyclotron

The invention: A powerful particle accelerator that performed better than its predecessor, the cyclotron.

The people behind the invention:
Edwin Mattison McMillan (1907-1991), an American physicist who won the Nobel Prize in Chemistry in 1951
Vladimir Iosifovich Veksler (1907-1966), a Soviet physicist
Ernest Orlando Lawrence (1901-1958), an American physicist
Hans Albrecht Bethe (1906- ), a German American physicist

The First Cyclotron

The synchrocyclotron is a large electromagnetic apparatus designed to accelerate atomic and subatomic particles to high energies. It therefore falls under the broad class of scientific devices known as "particle accelerators." By the early 1920's, the experimental work of physicists such as Ernest Rutherford and George Gamow demanded an artificial means of generating streams of atomic and subatomic particles at energies much greater than those occurring naturally. This requirement led Ernest Orlando Lawrence to develop the cyclotron, the prototype for most modern accelerators. The synchrocyclotron was developed in response to the limitations of the early cyclotron.

In September, 1930, Lawrence announced the basic principles behind the cyclotron. Ionized—that is, electrically charged—particles are admitted into the central section of a circular metal drum. Once inside the drum, the particles are exposed to an electric field alternating within a constant magnetic field. The combined action of the electric and magnetic fields accelerates the particles along a circular path, or orbit, steadily increasing the particles' energy and orbital radii. This process continues until the particles reach the desired energy and velocity and are extracted from the machine for use in experiments ranging from particle-to-particle collisions to the synthesis of radioactive elements.
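The property that makes a fixed-frequency cyclotron work can be made concrete. For a particle of charge q and mass m in a magnetic field B, the non-relativistic orbital frequency is f = qB/(2πm), independent of the particle's speed, so one fixed-frequency alternating field stays in step orbit after orbit. The formula and the sample field strength below are standard-physics illustrations, not figures from the text.

```python
import math

# Physical constants in SI units (approximate).
PROTON_CHARGE = 1.602e-19   # coulombs
PROTON_MASS = 1.673e-27     # kilograms

def cyclotron_frequency(charge_c: float, field_t: float, mass_kg: float) -> float:
    """Non-relativistic orbital frequency f = qB / (2*pi*m), in hertz.
    It does not depend on the particle's speed, which is why a
    fixed-frequency alternating field can keep accelerating it."""
    return charge_c * field_t / (2 * math.pi * mass_kg)

# Illustrative value: a proton in a 1.5-tesla field circles the drum
# about 23 million times per second.
f = cyclotron_frequency(PROTON_CHARGE, 1.5, PROTON_MASS)
print(f"{f / 1e6:.1f} MHz")
```

Note that the orbit radius grows with each energy gain while the frequency stays fixed, so faster particles simply trace larger circles in the same time.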


Although Lawrence was interested in the practical applications of his invention in medicine and biology, the cyclotron also was applied to a variety of experiments in a subfield of physics called "high-energy physics." Among the earliest applications were studies of the subatomic, or nuclear, structure of matter. The energetic particles generated by the cyclotron made possible the very type of experiment that Rutherford and Gamow had attempted earlier. These experiments, which bombarded lithium targets with streams of highly energetic accelerated protons, attempted to probe the inner structure of matter.

Although funding for scientific research on a large scale was scarce before World War II (1939-1945), Lawrence nevertheless conceived of a 467-centimeter cyclotron that would generate particles with energies approaching 100 million electronvolts. By the end of the war, increases in the public and private funding of scientific research and a demand for higher-energy particles created a situation in which this plan looked as if it would become reality, were it not for an inherent limit in the physics of cyclotron operation.

Overcoming the Problem of Mass

In 1937, Hans Albrecht Bethe discovered a severe theoretical limitation on the energies that could be produced in a cyclotron. Physicist Albert Einstein's special theory of relativity had demonstrated that as any particle with mass gains velocity relative to the speed of light, its mass increases. Bethe showed that this increase in mass would gradually slow the orbital rotation of each particle; because the frequency of the alternating electric field remains constant, the particles eventually fall out of step with the field and can no longer be accelerated. This factor set an upper limit on the energies that any cyclotron could produce.

Edwin Mattison McMillan, a colleague of Lawrence at Berkeley, proposed a solution to Bethe's problem in 1945. Simultaneously and independently, Vladimir Iosifovich Veksler of the Soviet Union proposed the same solution. They suggested that the frequency of the alternating electric field be slowed to match the decreasing rotational frequencies of the accelerating particles—in essence, "synchronizing" the electric field with the moving particles. The result was the synchrocyclotron.

Prior to World War II, Lawrence and his colleagues had obtained the massive electromagnet for the new 100-million-electronvolt cyclotron. This 467-centimeter magnet would become the heart of the new Berkeley synchrocyclotron. After initial tests proved successful, the Berkeley team decided that it would be reasonable to convert the cyclotron magnet for use in a new synchrocyclotron. The apparatus was operational in November of 1946.

The machine's high energies combined with economic factors to make the synchrocyclotron a major achievement for the Berkeley Radiation Laboratory. The synchrocyclotron required less voltage than the cyclotron to produce higher energies because the obstacles cited by Bethe were virtually nonexistent. In essence, the energies produced by synchrocyclotrons are limited only by the economics of building them. These factors led to the planning and construction of other synchrocyclotrons in the United States and Europe. In 1957, the Berkeley apparatus was redesigned to achieve energies of 720 million electronvolts, at that time the record for cyclotrons of any kind.

Impact

Previously, scientists had had to rely on natural sources of highly energetic subatomic and atomic particles with which to experiment. In the mid-1920's, the American physicist Robert Andrews Millikan began his experimental work on cosmic rays, which are one natural source of energetic particles called "mesons." Mesons are charged particles with a mass more than two hundred times that of the electron and are therefore of great benefit in high-energy physics experiments. In February of 1949, McMillan announced the first synthetically produced mesons, using the synchrocyclotron.
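McMillan and Veksler's synchronization idea can be put in numbers. By special relativity, a particle moving at a fraction β of the speed of light has Lorentz factor γ = 1/√(1 − β²), and its orbital frequency falls from qB/(2πm) to qB/(2πγm); the synchrocyclotron slows its driving field to track that drop. The formulas are standard physics; the 1.5-tesla field is an illustrative assumption, not a figure from the text.

```python
import math

# Approximate SI constants for a proton.
PROTON_CHARGE = 1.602e-19   # coulombs
PROTON_MASS = 1.673e-27     # kilograms (rest mass)

def lorentz_factor(beta: float) -> float:
    """Gamma = 1 / sqrt(1 - beta^2), where beta is speed as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

def orbital_frequency(field_t: float, beta: float) -> float:
    """Relativistic orbital frequency qB / (2*pi*gamma*m), in hertz.
    As the particle speeds up, gamma grows and the frequency drops,
    so the driving field must be slowed to stay synchronized."""
    gamma = lorentz_factor(beta)
    return PROTON_CHARGE * field_t / (2 * math.pi * gamma * PROTON_MASS)

# In an illustrative 1.5-tesla field, the required drive frequency
# falls noticeably as the proton approaches the speed of light.
for beta in (0.1, 0.5, 0.8):
    print(f"v = {beta:.1f}c -> {orbital_frequency(1.5, beta) / 1e6:.1f} MHz")
```

A fixed-frequency cyclotron effectively assumes γ = 1; the widening gap between the fixed frequency and the true orbital frequency is exactly Bethe's limit, and sweeping the drive frequency downward during acceleration removes it.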
McMillan's theoretical insight led not only to the synchrocyclotron but also to the electron synchrotron, the proton synchrotron, the microtron, and the linear accelerator. Both proton and electron synchrotrons have been used successfully to produce precise beams of muons and pi-mesons, or pions (a type of meson).


The increased use of accelerator apparatus ushered in a new era of physics research, which has become dominated increasingly by large accelerators and, subsequently, larger teams of scientists and engineers required to run individual experiments. More sophisticated machines have generated energies in excess of 2 trillion electronvolts at the United States' Fermi National Accelerator Laboratory, or Fermilab, in Illinois. Part of the huge Tevatron apparatus at Fermilab, which generates these particles, is a proton synchrotron, a direct descendant of McMillan and Lawrence's early efforts.

See also: Atomic bomb; Cyclotron; Electron microscope; Field ion microscope; Geiger counter; Hydrogen bomb; Mass spectrograph; Neutrino detector; Scanning tunneling microscope; Tevatron accelerator.

Further Reading
Bernstein, Jeremy. Hans Bethe: Prophet of Energy. New York: Basic Books, 1980.
McMillan, Edwin. "The Synchrotron: A Proposed High-Energy Particle Accelerator." Physical Review 68 (September, 1945).
_____. "Vladimir Iosifovich Veksler." Physics Today (November, 1966).
"Witness to a Century." Discover 20 (December, 1999).


Synthetic amino acid

The invention: A method for synthesizing amino acids by combining water, hydrogen, methane, and ammonia and exposing the mixture to an electric spark.

The people behind the invention:
Stanley Lloyd Miller (1930- ), an American professor of chemistry
Harold Clayton Urey (1893-1981), an American chemist who won the 1934 Nobel Prize in Chemistry
Aleksandr Ivanovich Oparin (1894-1980), a Russian biochemist
John Burdon Sanderson Haldane (1892-1964), a British scientist

Prebiological Evolution

The origin of life on Earth has long been a tough problem for scientists to solve. While most scientists can envision the development of life through geologic time from simple single-cell bacteria to complex mammals by the processes of mutation and natural selection, they have found it difficult to develop a theory to explain how organic materials were first formed and organized into life-forms. This stage in the development of life, before biologic systems arose, is called "chemical evolution"; it occurred between 4.5 and 3.5 billion years ago. Although great advances in genetics and biochemistry have shown the intricate workings of the cell, relatively little light has been shed on the origins of this intricate cellular machinery. Some experiments, however, have provided important data from which to build a scientific theory of the origin of life. The first of these experiments was the classic work of Stanley Lloyd Miller.

Miller worked with Harold Clayton Urey, a Nobel laureate, on the environments of the early earth. John Burdon Sanderson Haldane, a British biochemist, had suggested in 1929 that the earth's early atmosphere was a reducing one—that is, it contained no free oxygen. In 1952, Urey published a seminal work in planetology, The Planets, in which he elaborated on Haldane's suggestion, and he postulated that the earth had formed from a cold stellar dust cloud. Urey thought that the earth's primordial atmosphere probably contained elements in the approximate relative abundances found in the solar system and the universe. It had been discovered in 1929 that the Sun is approximately 87 percent hydrogen, and by 1935 it was known that hydrogen encompassed the vast majority (92.8 percent) of atoms in the universe. Urey reasoned that the earth's early atmosphere contained mostly hydrogen, with the oxygen, nitrogen, and carbon atoms chemically bonded to hydrogen to form water, ammonia, and methane. Most important, free oxygen could not exist in the presence of such an abundance of hydrogen.

As early as the mid-1920's, Aleksandr Ivanovich Oparin, a Russian biochemist, had argued that the organic compounds necessary for life had been built up on the early earth by chemical combinations in a reducing atmosphere. The energy from the Sun would have been sufficient to drive the reactions to produce life. Haldane later proposed that the organic compounds would accumulate in the oceans to produce a "dilute organic soup" and that life might have arisen by some unknown process from that mixture of organic compounds.

Primordial Soup in a Bottle

Miller combined the ideas of Oparin and Urey and designed a simple but elegant experiment. He decided to mix the gases presumed to exist in the early atmosphere (water vapor, hydrogen, ammonia, and methane) and expose them to an electrical spark to determine which, if any, organic compounds were formed. To do this, he constructed a relatively simple system, essentially consisting of two Pyrex flasks connected by tubing in a roughly circular pattern. The water and gases in the smaller flask were boiled, and the resulting gas was forced through the tubing into a larger flask that contained tungsten electrodes. As the gases passed the electrodes, an electrical spark was generated, and from this larger flask the gases and any other compounds were condensed. The gases were recycled through the system, whereas the organic compounds were trapped at the bottom of the system.


Miller was trying to simulate conditions that had prevailed on the early earth. During the one week of operation, Miller extracted and analyzed the residue of compounds at the bottom of the system. The results were truly astounding. He found that numerous organic compounds had, indeed, been formed in only that one week. As much as 15 percent of the carbon (originally in the gas methane) had been combined into organic compounds, and at least 5 percent of the carbon was incorporated into biochemically important compounds. The most important compounds produced were some of the twenty amino acids essential to life on Earth.

[Figure: The Miller-Urey experiment. Boiling water carries water vapor, methane (CH4), ammonia (NH3), and hydrogen (H2) past spark electrodes; a condenser cools the gases, and the cooled water collects the organic compounds, from which samples are taken for chemical analysis.]

The formation of amino acids is significant because they are the building blocks of proteins. Proteins consist of a specific sequence of amino acids assembled into a well-defined pattern. Proteins are necessary for life for two reasons. First, they are important structural materials used to build the cells of the body. Second, the enzymes that increase the rate of the multitude of biochemical reactions of life are also proteins. Miller had not produced proteins in the laboratory, but he had shown clearly that the precursors of proteins—the amino acids—were easily formed in a reducing environment with the appropriate energy.

Perhaps the most important aspect of the experiment was the ease with which the amino acids were formed. Of all the thousands of organic compounds known to chemists, amino acids were among those formed by this simple experiment. This strongly implied that one of the first steps in chemical evolution was not only possible but also highly probable. All that was necessary for the synthesis of amino acids were the common gases of the solar system, a reducing environment, and an appropriate energy source, all of which were present on the early Earth.

Consequences

Miller opened an entirely new field of research with his pioneering experiments. His results showed that much about chemical evolution could be learned by experimentation in the laboratory. As a result, Miller and many others soon tried variations on his original experiment by altering the combination of gases, using other gases, and trying other types of energy sources. Almost all the essential amino acids have been produced in these laboratory experiments.

Miller's work was based on the presumed composition of the primordial atmosphere of Earth. The composition of this atmosphere was calculated on the basis of the abundance of elements in the universe. If this reasoning is correct, then it is highly likely that many other bodies in the universe have similar atmospheres and are near energy sources similar to the Sun. Moreover, Miller's experiment strongly suggests that amino acids, and perhaps life as well, should have formed on other planets.

See also: Artificial hormone; Artificial kidney; Synthetic DNA; Synthetic RNA.


Further Reading
Dronamraju, Krishna R., and J. B. S. Haldane. Haldane's Daedalus Revisited. New York: Oxford University Press, 1995.
Lipkin, Richard. "Early Earth May Have Had Two Key RNA Bases." Science News 148, no. 1 (July 1, 1995).
Miller, Stanley L., and Leslie E. Orgel. The Origins of Life on the Earth. Englewood Cliffs, N.J.: Prentice-Hall, 1974.
Nelson, Kevin E., Matthew Levy, and Stanley L. Miller. "Peptide Nucleic Acids Rather than RNA May Have Been the First Genetic Molecule." Proceedings of the National Academy of Sciences of the United States of America 97, no. 8 (April 11, 2000).
Yockey, Hubert P. "Walther Löb, Stanley L. Miller, and Prebiotic 'Building Blocks' in the Silent Electrical Discharge." Perspectives in Biology and Medicine 41, no. 1 (Autumn, 1997).


Synthetic DNA

The invention: A method for replicating viral deoxyribonucleic acid (DNA) in a test tube that paved the way for genetic engineering.

The people behind the invention:
Arthur Kornberg (1918- ), an American physician and biochemist
Robert L. Sinsheimer (1920- ), an American biophysicist
Mehran Goulian (1929- ), a physician and biochemist

The Role of DNA

Until the mid-1940's, it was believed that proteins were the carriers of genetic information, the source of heredity. Proteins appeared to be the only biological molecules complex enough to encode the enormous amount of genetic information required to reproduce even the simplest organism. Nevertheless, proteins could not be shown to have genetic properties, and by 1944, it was demonstrated conclusively that deoxyribonucleic acid (DNA) was the material that transmitted hereditary information. It was discovered that DNA isolated from a strain of infective bacteria that can cause pneumonia was able to transform a strain of noninfective bacteria into an infective strain; in addition, the infectivity trait was transmitted to future generations. Subsequently, it was established that DNA is the genetic material in virtually all forms of life.

Once DNA was known to be the transmitter of genetic information, scientists sought to discover how it performs its role. DNA is a polymeric molecule composed of four different units, called "deoxynucleotides." The units consist of a sugar, a phosphate group, and a base; they differ only in the nature of the base, which is always one of four related compounds: adenine, guanine, cytosine, or thymine. The way in which such a polymer could transmit genetic information, however, was difficult to discern. In 1953, biophysicists James D. Watson and Francis Crick brilliantly determined the three-dimensional structure of DNA by analyzing X-ray diffraction photographs of DNA fibers. From their analysis of the structure of DNA, Watson and Crick inferred DNA's mechanism of replication. Their work led to an understanding of gene function in molecular terms.

Watson and Crick showed that DNA has a very long double-stranded (duplex) helical structure. DNA has a duplex structure because each base forms a link to a specific base on the opposite strand: adenine pairs with thymine, and guanine pairs with cytosine. The discovery of this complementary pairing of bases provided a model to explain the two essential functions of a hereditary molecule: It must preserve the genetic code from one generation to the next, and it must direct the development of the cell. Watson and Crick also proposed that DNA is able to serve as a mold (or template) for its own reproduction because the two strands of the DNA polymer can separate. Upon separation, each strand acts as a template for the formation of a new complementary strand: an adenine base in the existing strand gives rise to a thymine in the new strand, a guanine gives rise to a cytosine, and so on. In this manner, a new double-stranded DNA is generated that is identical to the parent DNA.

DNA in a Test Tube

Watson and Crick's theory provided a valuable model for the reproduction of DNA, but it did not explain the biological mechanism by which the process occurs. The biochemical pathway of DNA reproduction and the role of the enzymes required for catalyzing the reproduction process were discovered by Arthur Kornberg and his coworkers. For his success in achieving DNA synthesis in a test tube and for discovering and isolating an enzyme—DNA polymerase—that catalyzed DNA synthesis, Kornberg won the 1959 Nobel Prize in Physiology or Medicine.

To achieve DNA replication in a test tube, Kornberg found that a small amount of preformed DNA must be present, in addition to the DNA polymerase enzyme and all four of the deoxynucleotides that occur in DNA.
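The template mechanism follows mechanically from complementary base pairing (adenine with thymine, guanine with cytosine) and is simple enough to sketch in code. The snippet below is an illustration of the pairing logic only, not a model of the polymerase enzyme itself.

```python
# Watson-Crick pairing: A-T and G-C. Each strand fully determines its
# complement, which is how one DNA molecule can yield two identical copies.
PAIRING = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complementary_strand(template: str) -> str:
    """Return the strand that would be assembled base by base
    against the given template strand."""
    return "".join(PAIRING[base] for base in template)

template = "ATGCCGTA"
copy = complementary_strand(template)
print(copy)                                    # TACGGCAT
# Copying the copy regenerates the original template strand.
assert complementary_strand(copy) == template
```

Because complementation is its own inverse, separating the two strands and completing each one produces two duplexes identical to the parent, which is exactly the replication scheme Watson and Crick proposed.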
Kornberg discovered that the base composition of the newly made DNA was determined solely by the base composition of the preformed DNA, which had been used as a template in the test-tube synthesis. This result showed that DNA polymerase obeys instructions dictated by the template DNA; it is thus said to be "template-directed." DNA polymerase was the first template-directed enzyme to be discovered.

Although test-tube synthesis was a most significant achievement, important questions about the precise character of the newly made DNA were still unanswered. Methods of analyzing the order, or sequence, of the bases in DNA were not available, and hence it could not be shown directly whether DNA made in the test tube was an exact copy of the template DNA or merely an approximate copy. In addition, some DNAs prepared by DNA polymerase appeared to be branched structures. Since chromosomes in living cells contain long, linear, unbranched strands of DNA, this branching might have indicated that DNA synthesized in a test tube was not equivalent to DNA synthesized in the living cell.

Kornberg realized that the best way to demonstrate that newly synthesized DNA is an exact copy of the original was to test the new DNA for biological activity in a suitable system. Kornberg reasoned that a demonstration of infectivity in viral DNA produced in a test tube would prove that polymerase-catalyzed synthesis was virtually error-free and equivalent to natural, biological synthesis. The experiment, carried out by Kornberg and Mehran Goulian at Stanford University and Robert L. Sinsheimer at the California Institute of Technology, was a complete success. The viral DNAs produced in a test tube by the DNA polymerase enzyme, using a viral DNA template, were fully infective. This synthesis showed that DNA polymerase could copy not merely a single gene but an entire chromosome of a small virus without error.

Consequences

The purification of DNA polymerase and the preparation of biologically active DNA were major achievements that influenced biological research on DNA for decades. Kornberg's methodology proved invaluable in the discovery of other enzymes that synthesize DNA.
These enzymes have been isolated from Escherichia coli bacteria and from other bacteria, viruses, and higher organisms. The test-tube preparation of viral DNA also had significance for the study of genes and chromosomes. In the mid-1960's, it had not been established that a chromosome contains a continuous strand of DNA. Kornberg and Sinsheimer's synthesis of a viral chromosome proved that it was, indeed, a very long strand of uninterrupted DNA.

Kornberg and Sinsheimer's work laid the foundation for subsequent recombinant DNA research and for genetic engineering technology. This technology promises to revolutionize both medicine and agriculture. The enhancement of food production and the generation of new drugs and therapies are only a few of the benefits that may be expected.

See also: Artificial chromosome; Artificial hormone; Cloning; Genetic "fingerprinting"; Genetically engineered insulin; In vitro plant culture; Synthetic amino acid; Synthetic RNA.

Further Reading
Baker, Tania A., and Arthur Kornberg. DNA Replication. 2d ed. New York: W. H. Freeman, 1991.
Kornberg, Arthur. The Golden Helix: Inside Biotech Ventures. Sausalito, Calif.: University Science Books, 1995.
_____. For the Love of Enzymes: The Odyssey of a Biochemist. Cambridge, Mass.: Harvard University Press, 1991.
Sinsheimer, Robert. The Strands of a Life: The Science of DNA and the Art of Education. Berkeley: University of California Press, 1994.


Synthetic RNA

The invention: A method for synthesizing the biological molecule RNA that established that this process can occur outside the living cell.

The people behind the invention:
Severo Ochoa (1905-1993), a Spanish biochemist who shared the 1959 Nobel Prize in Physiology or Medicine
Marianne Grunberg-Manago (1921- ), a French biochemist
Marshall W. Nirenberg (1927- ), an American biochemist who won the 1968 Nobel Prize in Physiology or Medicine
Peter Lengyel (1929- ), a Hungarian American biochemist

RNA Outside the Cells

In the early decades of the twentieth century, genetics had not been experimentally united with biochemistry. This merging soon occurred, however, with work involving the mold Neurospora crassa. This Nobel award-winning work by biochemist Edward Lawrie Tatum and geneticist George Wells Beadle showed that genes control the production of proteins, which are major functional molecules in cells. Yet no one knew the chemical composition of genes and chromosomes, or, rather, the molecules of heredity. The American bacteriologist Oswald T. Avery and his colleagues at New York’s Rockefeller Institute determined experimentally that the molecular basis of heredity was a large polymer known as deoxyribonucleic acid (DNA). Avery’s discovery triggered a furious worldwide search for the particular structural characteristics of DNA that allow for the known biological characteristics of genes. One of the most famous studies in the history of science solved this problem in 1953. Scientists James D. Watson, Francis Crick, and Maurice H. F. Wilkins postulated that DNA exists as a double helix. That is, two long strands twist about each other in a predictable pattern, with each single strand held to the other by weak, reversible linkages known as “hydrogen bonds.” About this time, researchers recognized also that a molecule closely related to DNA, ribonucleic acid (RNA), plays an important role in transcribing the genetic information as well as in other biological functions.

Severo Ochoa was born in Spain as the science of genetics was developing. In 1942, he moved to New York University, where he studied the bacterium Azotobacter vinelandii. Specifically, Ochoa was focusing on the question of how cells process energy in the form of organic molecules such as the sugar glucose to provide usable biological energy in the form of adenosine triphosphate (ATP). With postdoctoral fellow Marianne Grunberg-Manago, he studied enzymatic reactions capable of incorporating inorganic phosphate (a compound consisting of one atom of phosphorus and four atoms of oxygen) into adenosine diphosphate (ADP) to form ATP. One particularly interesting reaction was followed by monitoring the amount of radioactive phosphate reacting with ADP. Following separation of the reaction products, it was discovered that the main product was not ATP but a much larger molecule. Chemical characterization demonstrated that this product was a polymer of adenosine monophosphate. When other nucleoside diphosphates, such as inosine diphosphate, were used in the reaction, the corresponding polymer of inosine monophosphate was formed. Thus, in each case, a polymer (a long string of building-block units) was formed. The polymers formed were synthetic RNAs, and the enzyme responsible for the conversion became known as “polynucleotide phosphorylase.” This finding, once the early skepticism was resolved, was received by biochemists with great enthusiasm because no technique had ever been discovered previously by which a nucleic acid similar to RNA could be synthesized outside the cell.

Learning the Language

Ochoa, Peter Lengyel, and Marshall W. Nirenberg at the National Institutes of Health took advantage of this breakthrough to synthesize different RNAs useful in cracking the genetic code.
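The bookkeeping of the phosphorylase reaction described above, in which each nucleoside diphosphate adds one unit to the growing chain and releases one inorganic phosphate, can be sketched in a few lines. This is only a cartoon of the stoichiometry with an invented function name, not a chemical simulation:

```python
# Schematic sketch of the polynucleotide phosphorylase reaction:
# n nucleoside diphosphates -> one RNA polymer of n units + n phosphates.
# Models only the bookkeeping of the reaction, not its chemistry.
def polymerize(ndps):
    """Link nucleoside diphosphates (e.g., 'ADP', 'IDP') into a chain,
    releasing one inorganic phosphate per unit incorporated."""
    chain = "".join(ndp[0] for ndp in ndps)   # keep each base letter
    released_phosphate = len(ndps)            # one phosphate per linkage
    return chain, released_phosphate

# ADP gives poly(A); inosine diphosphate gives the poly(I) analogue.
print(polymerize(["ADP"] * 6))   # ('AAAAAA', 6)
print(polymerize(["IDP"] * 3))   # ('III', 3)
```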
Crick had postulated that the flow of information in biological systems is from DNA to RNA to protein. In other words, genetic information contained in the DNA structure is transcribed into complementary RNA structures, which, in turn, are translated into protein. Protein synthesis, an extremely complex process, involves bringing a type of RNA known as messenger RNA together with amino acids and huge cellular organelles known as ribosomes. Yet investigators did not know the nature of the nucleic acid alphabet—for example, how many single units of the RNA polymer were needed to code for each amino acid, and the order that the units must be in to stand for a “word” in the nucleic acid language. In 1961, Nirenberg demonstrated that the polymer of synthetic RNA with multiple units of uracil (poly U) would “code” only for a protein containing the amino acid phenylalanine. Each three units (U’s) gave one phenylalanine. Therefore, genetic words each contain three letters: UUU translates into phenylalanine. Poly A, the first polymer discovered with polynucleotide phosphorylase, coded for a protein containing multiple lysines. That is, AAA translates into the amino acid lysine. The words containing combinations of letters, such as AUG, were not as easily studied, but Nirenberg, Ochoa, and Gobind Khorana of the University of Wisconsin eventually uncovered the exact translation for each amino acid. In RNA, there are four possible letters (A, U, G, and C) and three letters in each word. Accordingly, there are sixty-four possible words. With only twenty amino acids, it became clear that more than one RNA word can translate into a given amino acid. Yet no given word stands for more than one amino acid. A few words do not translate into any amino acid; they are stop signals, telling the ribosome to cease translating RNA. The question of which direction an RNA is translated is critical. For example, CAA codes for the amino acid glutamine, but the reverse, AAC, translates to the amino acid asparagine. Such a difference is critical because the exact sequence of a protein determines its activity—that is, what it will do in the body and therefore what genetic trait it will express.
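The triplet logic described above can be sketched in a short script. The codon table below is deliberately partial, listing only the words mentioned in the text, whereas the real genetic code has sixty-four entries:

```python
# Toy decoder for the triplet genetic code. Only the codons discussed
# in the text are listed; the full code has 4**3 = 64 three-letter words.
CODON_TABLE = {
    "UUU": "phenylalanine",  # poly(U) -> poly-phenylalanine (Nirenberg, 1961)
    "AAA": "lysine",         # poly(A) -> poly-lysine
    "CAA": "glutamine",      # reading direction matters:
    "AAC": "asparagine",     # the reverse of CAA is a different word
    "UAA": "stop",           # one of the stop signals
}

def translate(rna):
    """Read an RNA string left to right, three letters per word."""
    protein = []
    for i in range(0, len(rna) - 2, 3):
        word = CODON_TABLE.get(rna[i:i + 3], "unknown")
        if word == "stop":       # stop signal: the ribosome ceases translating
            break
        protein.append(word)
    return protein

print(translate("UUUUUUUUU"))       # three phenylalanines
print(translate("CAAAACUAAUUU"))    # glutamine, asparagine, then stop
```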
Consequences

Synthetic RNAs provided the key to understanding the genetic code. The genetic code is universal; it operates in all organisms, simple or complex. It is even used by viruses, which stand at the border of life but are not alive. Spelling out the genetic code was one of the top discoveries of the twentieth century. Nearly all work in molecular biology depends on this knowledge.

The availability of synthetic RNAs has also provided hybridization tools for molecular geneticists. Hybridization is a technique in which an RNA is allowed to bind in a complementary fashion to DNA under investigation. The greater the similarity between RNA and DNA, the greater the amount of binding. The differential binding allows for seeking, finding, and ultimately isolating a target DNA from a large, diverse pool of DNA—in short, finding a needle in a haystack. Hybridization has become an indispensable aid in experimental molecular genetics as well as in applied sciences, such as forensics.

See also Artificial chromosome; Artificial hormone; Cloning; Genetic “fingerprinting”; Genetically engineered insulin; In vitro plant culture; Synthetic amino acid; Synthetic DNA.

Further Reading
“Biochemist Severo Ochoa Dies: Won Nobel Prize.” Washington Post (November 3, 1993).
Santesmases, Maria Jesus. “Severo Ochoa and the Biomedical Sciences in Spain Under Franco, 1959-1975.” Isis 91, no. 4 (December, 2000).
“Severo Ochoa, 1905-1993.” Nature 366, no. 6454 (December, 1993).


Syphilis test

The invention: The first simple test for detecting the presence of the venereal disease syphilis, which led to better syphilis control and other advances in immunology.

The people behind the invention:
Reuben Leon Kahn (1887-1974), a Soviet-born American serologist and immunologist
August von Wassermann (1866-1925), a German physician and bacteriologist

Columbus’s Discoveries

Syphilis is one of the chief venereal diseases, a group of diseases whose name derives from Venus, the Roman goddess of love. The term “venereal” arose from the idea that the diseases were transmitted solely by sexual contact with an infected individual. Although syphilis is almost always passed from one person to another in this way, it occasionally arises after contact with objects used by infected people in highly unclean surroundings, particularly in the underdeveloped countries of the world.

It is believed by many that syphilis was introduced to Europe by the members of Spanish explorer Christopher Columbus’s crew—supposedly after they were infected by sexual contact with West Indian women—during their voyages of exploration. Columbus is reported to have died of heart and brain problems very similar to symptoms produced by advanced syphilis. At that time, according to many historians, syphilis spread rapidly over sixteenth century Europe. The name “syphilis” was coined by the Italian physician Girolamo Fracastoro in 1530 in an epic poem he wrote.

Modern syphilis is much milder than the original disease and relatively uncommon. Yet, if it is not identified and treated appropriately, syphilis can be devastating and even fatal. It can also be passed from pregnant mothers to their unborn children. In these cases, the afflicted children will develop serious health problems that can include paralysis, insanity, and heart disease. Therefore, the understanding, detection, and cure of syphilis are important worldwide.

Syphilis is caused by a spiral-shaped germ called a “spirochete.” Spirochetes enter the body through breaks in the skin or through the mucous membranes, regardless of how they are transmitted. Once spirochetes enter the body, they spread rapidly. During the first four to six weeks after infection, syphilis—said to be in its primary phase—is very contagious. During this time, it is identified by the appearance of a sore, or chancre, at the entry site of the infecting spirochetes. The chancre disappears quickly, and within six to twenty-four weeks, the disease shows itself as a skin rash, feelings of malaise, and other flulike symptoms (secondary-phase syphilis). These problems also disappear quickly in most cases, and here is where the real trouble—latent syphilis—begins. In latent syphilis, now totally without symptoms, spirochetes that have spread through the body may lodge in the brain or the heart. When this happens, paralysis, mental incapacitation, and death may follow.

Testing Before Marriage

Because of the danger to unborn children, Americans wishing to marry must be certified as being free of the disease before a marriage license is issued. The cure for syphilis is easily accomplished through the use of penicillin or other types of antibiotics, though no vaccine is yet available to prevent the disease. It is for this reason that syphilis detection is particularly important.

The first viable test for syphilis was originated by August von Wassermann in 1906. In this test, blood samples are taken and treated in a medical laboratory. The treatment of the samples is based on the fact that the blood of infected persons has formed antibodies to fight the syphilis spirochete, and that these antibodies will react with certain body chemicals to cause the blood sample to clot. This indicates that the person has the disease. After the syphilis has been cured, the antibodies disappear, as does the clotting.
Although the Wassermann test was effective in 95 percent of all infected persons, it was very time-consuming (requiring a two-day incubation period) and complex. In 1923, Reuben Leon Kahn developed a modified syphilis test, “the standard Kahn test,” that was simpler and faster: The test was complete after only a few minutes. By 1925, Kahn’s test had become the standard syphilis test of the United States Navy and later became a worldwide test for the detection of the disease.

Kahn soon realized that his test was not perfect and that in some cases the results were incorrect. This led him to a broader study of the immune reactions at the center of the Kahn test. He investigated the role of various tissues in immunity, as compared to the role of blood antibodies and white blood cells. Kahn showed, for example, that different tissues of immunized or nonimmunized animals possessed differing immunologic capabilities. Furthermore, the immunologic capabilities of test animals varied with their age, being very limited in newborns and increasing as they matured. This effort led, by 1951, to Kahn’s “universal serological reaction,” a precipitation reaction in which blood serum was tested against a reagent composed of tissue lipids. Kahn viewed it as a potentially helpful chemical indicator of how healthy or ill an individual was. This effort is viewed as an important landmark in the development of the science of immunology.

Impact

At the time that Kahn developed his standard Kahn test for syphilis, the Wassermann test was used all over the world for the diagnosis of syphilis. As has been noted, one of the great advantages of the standard Kahn test was its speed: minutes versus days. For example, in October, 1923, Kahn is reported to have tested forty serum samples in fifteen minutes.

Kahn’s efforts have been important to immunology and to medicine. Among the consequences of his endeavors was the stimulation of other developments in the field, including the VDRL test (originated by the Venereal Disease Research Laboratory), which has replaced the Kahn test as one of the most often used screening tests for syphilis.
Even more specific syphilis tests developed later include a fluorescent antibody test to detect the presence of the antibody to the syphilis spirochete.


See also Abortion pill; Amniocentesis; Antibacterial drugs; Birth control pill; Mammography; Pap test; Penicillin; Ultrasound.

Further Reading
Cates, William, Jr., Richard B. Rothenberg, and Joseph H. Blount. “Syphilis Control.” Sexually Transmitted Diseases 23, no. 1 (January, 1996).
Cobb, W. Montague. “Reuben Leon Kahn.” Journal of the National Medical Association 63 (September, 1971).
Quétel, Claude. History of Syphilis. Baltimore: Johns Hopkins University Press, 1992.
St. Louis, Michael E., and Judith N. Wasserheit. “Elimination of Syphilis in the United States.” Science 281, no. 5375 (July, 1998).


Talking motion pictures

The invention: The first practical system for linking sound with moving pictures.

The people behind the invention:
Harry Warner (1881-1958), the brother who used sound to fashion a major filmmaking company
Albert Warner (1884-1967), the brother who persuaded theater owners to show Warner films
Samuel Warner (1887-1927), the brother who adapted sound-recording technology to filmmaking
Jack Warner (1892-1978), the brother who supervised the making of Warner films

Taking the Lead

The silent films of the early twentieth century had live sound accompaniment featuring music and sound effects. Neighborhood theaters made do with a piano and violin; larger “picture palaces” in major cities maintained resident orchestras of more than seventy members. During the late 1920’s, Warner Bros. led the American film industry in producing motion pictures with their own soundtracks, which were first recorded on synchronized records and later added onto the film beside the images.

The ideas that led to the addition of sound to film came from corporate-sponsored research by the American Telephone and Telegraph Company (AT&T) and the Radio Corporation of America (RCA). Both companies worked to improve sound recording and playback, AT&T to help in the design of long-distance telephone equipment and RCA as part of the creation of better radio sets. Yet neither company could, or would, enter filmmaking. AT&T was willing to contract its equipment out to Paramount or one of the other major Hollywood studios of the day; such studios, however, did not want to risk their sizable profit positions by junking silent films. The giants of the film industry were doing fine with what they had and did not want to switch to something that had not been proved.


In 1924, Warner Bros. was a prosperous, though small, corporation that produced films with the help of outside financial backing. That year, Harry Warner approached the important Wall Street investment banking house of Goldman, Sachs and secured the help he needed. As part of this initial wave of expansion, Warner Bros. acquired a Los Angeles radio station in order to publicize its films. Through this deal, the four Warner brothers learned of the new technology that the radio and telephone industries had developed to record sound, and they succeeded in securing the necessary equipment from AT&T. During the spring of 1925, the brothers devised a plan by which they could record the most popular musical artists on film and then offer these “shorts” as added attractions to theaters that booked their features. As a bonus, Warner Bros. could add recorded orchestral music to its feature films and offer this music to theaters that relied on small musical ensembles.

“Vitaphone”

On August 6, 1926, Warner Bros. premiered its new “Vitaphone” technology. The first package consisted of a traditional silent film (Don Juan) with a recorded musical accompaniment, plus six recordings of musical talent highlighted by a performance from Giovanni Martinelli, the most famous opera tenor of the day. The most celebrated early Vitaphone feature was The Jazz Singer, which premiered in October, 1927. The film was silent for much of its length, but as soon as Al Jolson, the star, broke into song, the new technology took over. The film was an immediate hit. The Jazz Singer package, which included accompanying shorts with sound, forced theaters in cities that rarely held films over for more than a single week to ask to have the package stay for two, three, and sometimes four straight weeks. The Jazz Singer did well at the box office, but skeptics questioned the staying power of talkies.
If sound was so important, they wondered, why hadn’t The Jazz Singer moved to the top of the all-time box-office list? Such success, though, would come a year later with The Singing Fool, also starring Jolson. From its opening day (September 20, 1928), it was the financial success of its time; produced for an estimated $200,000, it took in $5 million. In New York City, The Singing Fool registered the heaviest business in Broadway history, with an advance sale that exceeded $100,000 (equivalent to more than half a million dollars in 1990’s currency).

In the early days of sound films, cameras had to be soundproofed so their operating noises would not be picked up by the primitive sound-recording equipment. (Library of Congress)


Impact

The coming of sound transformed filmmaking, ushering in what became known as the golden age of Hollywood. By 1930, there were more reporters stationed in the filmmaking capital of the world than in any capital of Europe or Asia.

The Warner Brothers

Businessmen rather than inventors, the four Warner brothers were hustlers who knew a good thing when they saw it. They started out running theaters in 1903, evolved into film distributors, and began making their own films in 1909, in defiance of the Patents Company, a trust established by Thomas A. Edison to eliminate competition from independent filmmakers. Harry Warner was the president of the company, Sam and Jack were vice presidents in charge of production, and Abe (or Albert) was the treasurer. Theirs was a small concern. Their silent films and serials attracted few audiences, and during World War I they made training films for the government. In fact, their film about syphilis, Open Your Eyes, was their first real success. In 1918, however, they released My Four Years in Germany, a dramatized documentary, and it was their first blockbuster. Although considered gauche upstarts, they were suddenly taken seriously by the movie industry. When Sam first heard an actor talk on screen in an experimental film at the Bell lab in New York in 1925, he recognized a revolutionary opportunity. He soon convinced Jack that talking movies would be a gold mine. However, Harry and Abe were against the idea because of its costs—and because earlier attempts at “talkies” had been dismal failures. Sam and Jack tricked Harry into seeing an experimental film of an orchestra, however, and he grew enthusiastic despite his misgivings. Within a year, the brothers released the all-music Don Juan. The rave notices from critics astounded Harry and Abe. Still, they thought sound in movies was simply a novelty.
When Sam pointed out that they could make movies in which the actors talked, as on stage, Harry, who detested actors, snorted, “Who the hell wants to hear actors talk?” Sam and Jack pressed for dramatic talkies, nonetheless, and prevailed upon Harry to finance them. The silver screen has seldom been silent since.


As a result of its foresight, Warner Bros. was the sole small competitor of the early 1920’s to succeed in joining the Hollywood elite, producing successful films for consumption throughout the world. After Warner Bros.’ innovation, the soundtrack became one of the features that filmmakers controlled when making a film. Indeed, sound became a vital part of the filmmaker’s art; music, in particular, could make or break a film. Finally, the coming of sound helped make films a dominant medium of mass culture, both in the United States and throughout the world. Innumerable fashions, expressions, and designs were soon created or popularized by filmmakers. Many observers had not viewed the silent cinema as especially significant; with the coming of the talkies, however, there was no longer any question about the social and cultural importance of films. As one clear consequence of the new power of the movie industry, within a few years of the coming of sound, the notorious Hays Code mandating prior restraint of film content went into effect. The pairing of images and sound caused talking films to be deemed simply too powerful for uncensored presentation to audiences; although the Hays Code was gradually weakened and eventually abandoned, less onerous “rating systems” would continue to be imposed on filmmakers by various regulatory bodies.

See also Autochrome plate; Dolby noise reduction; Electronic synthesizer; Television.

Further Reading
Brayer, Elizabeth. George Eastman: A Biography. Baltimore: Johns Hopkins University Press, 1996.
Crafton, Donald. The Talkies: American Cinema’s Transition to Sound, 1926-1931. Berkeley: University of California Press, 1999.
Geduld, Harry M. The Birth of the Talkies: From Edison to Jolson. Bloomington: Indiana University Press, 1975.
Neale, Stephen. Cinema and Technology: Image, Sound, Colour. London: Macmillan Education, 1985.
Wagner, A. F. Recollections of Thomas A. Edison: A Personal History of the Early Days of the Phonograph, the Silent and Sound Film, and Film Censorship. 2d ed. London: City of London Phonograph & Gramophone Society, 1996.


Teflon

The invention: A fluorocarbon polymer whose chemical inertness and physical properties have made it useful for many applications, from nonstick cookware coatings to suits for astronauts.

The person behind the invention:
Roy J. Plunkett (1910-1994), an American chemist

Nontoxic Refrigerant Sought

As the use of mechanical refrigeration increased in the late 1930’s, manufacturers recognized the need for a material to replace sulfur dioxide and ammonia, which, although they were the commonly used refrigerants of the time, were less than ideal for the purpose. The material sought had to be nontoxic, odorless, colorless, and not flammable. Thomas Midgley, Jr., and Albert Henne of General Motors Corporation’s Frigidaire Division concluded, from studying published reports listing properties of a wide variety of chemicals, that hydrocarbon-like materials with hydrogen atoms replaced by chlorine and fluorine atoms would be appropriate. Their conclusion led to the formation of a joint effort between the General Motors Corporation’s Frigidaire Division and E. I. Du Pont de Nemours to research and develop the chemistry of fluorocarbons. In this research effort, a number of scientists began making and studying the large number of individual chemicals in the general class of compounds being investigated. It fell to Roy J. Plunkett to do a detailed study of tetrafluoroethylene, a compound consisting of two carbon atoms, each of which is attached to the other as well as to two fluorine atoms.

The “Empty” Tank

Tetrafluoroethylene, at normal room temperature and pressure, is a gas that is supplied to users in small pressurized cylinders. On the morning of the day of the discovery, Plunkett attached such a tank to his experimental apparatus and opened the tank’s valve. To his great surprise, no gas flowed from the tank. Plunkett’s subsequent actions transformed this event from an experiment gone wrong into a historically significant discovery. Rather than replacing the tank with another and going on with the work planned for the day, Plunkett, who wanted to know what had happened, examined the “empty” tank. When he weighed the tank, he discovered that it was not empty; it did contain the chemical that was listed on the label. Opening the valve and running a wire through the opening proved that what had happened had not been caused by a malfunctioning valve. Finally, Plunkett sawed the cylinder in half and discovered what had happened. The chemical in the tank was no longer a gas; instead, it was a waxy white powder. Plunkett immediately recognized the meaning of the presence of the solid. The six-atom molecules of the tetrafluoroethylene gas had somehow linked with one another to form much larger molecules. The gas had polymerized, becoming polytetrafluoroethylene, a solid with a high molecular weight.

Capitalizing on this occurrence, Plunkett, along with other Du Pont chemists, performed a series of experiments and soon learned to control the polymerization reaction so that the product could be produced, its properties could be studied, and applications for it could be developed. The properties of the substance were remarkable indeed. It was unaffected by strong acids and bases, withstood high temperatures without reacting or melting, and was not dissolved by any solvent that the scientists tried. In addition to this highly unusual behavior, the polymer had surface properties that made it very slick. It was so slippery that other materials placed on its surface slid off in much the same way that beads of water slide off the surface of a newly waxed automobile. Although these properties were remarkable, no applications were suggested immediately for the new material.
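The scale of the change Plunkett observed can be put in rough numbers: a tetrafluoroethylene monomer (C2F4) has a molar mass near 100 g/mol, so a chain of many linked units reaches the high molecular weights typical of a solid polymer. The sketch below is back-of-the-envelope arithmetic only; the chain length chosen is illustrative, not a measured value for Teflon:

```python
# Rough molar-mass arithmetic for tetrafluoroethylene (C2F4) and a
# polytetrafluoroethylene chain of n repeat units. Standard atomic
# masses in g/mol; the chain length below is purely illustrative.
C, F = 12.011, 18.998

monomer = 2 * C + 4 * F        # one six-atom C2F4 unit
def chain_mass(n):
    return n * monomer         # an addition polymer of n linked units

print(round(monomer, 3))          # about 100 g/mol per unit
print(round(chain_mass(10_000)))  # about a million g/mol for 10,000 units
```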
The polymer might have remained a laboratory curiosity if a conversation had not taken place between Leslie R. Groves, the head of the Manhattan Project (which engineered the construction of the first atomic bombs), and a Du Pont chemist who described the polymer to him. The Manhattan Project research team was hunting for an inert material to use for gaskets to seal pumps and piping. The gaskets had to be able to withstand the highly corrosive uranium hexafluoride with which the team was working. This uranium compound is fundamental to the process of upgrading uranium for use in explosive devices and power reactors. Polytetrafluoroethylene proved to be just the material that they needed, and Du Pont proceeded, throughout World War II and after, to manufacture gaskets for use in uranium enrichment plants. The high level of secrecy of the Manhattan Project in particular and atomic energy in general delayed the commercial introduction of the polymer, which was called Teflon, until the late 1950’s. At that time, the first Teflon-coated cooking utensils were introduced.

Roy J. Plunkett

Roy J. Plunkett was born in 1910 in New Carlisle, Ohio. In 1932 he received a bachelor’s degree in chemistry from Manchester College and transferred to Ohio State University for graduate school, earning a master’s degree in 1933 and a doctorate in 1936. The same year he went to work for E. I. Du Pont de Nemours and Company as a research chemist at the Jackson Laboratory in Deepwater, New Jersey. Less than two years later, when he was only twenty-seven years old, he found the strange polymer of tetrafluoroethylene whose trade name became Teflon. It would turn out to be among Du Pont’s most famous products. In 1938 Du Pont appointed Plunkett the chemical supervisor at its largest plant, the Chamber Works in Deepwater, which produced tetraethyl lead. He held the position until 1952 and afterward directed the company’s Freon Products Division. He retired in 1975. In 1985 he was inducted into the Inventors Hall of Fame, and after his death in 1994, Du Pont created the Plunkett Award, presented to inventors who find new uses for Teflon and Tefzel, a related fluoropolymer, in aerospace, automotive, chemical, or electrical applications.

Impact

Plunkett’s thoroughness in following up a chance observation gave the world a material that has found a wide variety of uses, ranging from home kitchens to outer space. Some applications make use


of Teflon’s slipperiness, others make use of its inertness, and others take advantage of both properties. The best-known application of Teflon is as a nonstick coating for cookware. Teflon’s very slippery surface initially proved troublesome because it was difficult to attach to other materials. Early versions of Teflon-coated cookware shed their surface coatings easily, even when care was taken to avoid scraping them off. A suitable bonding process was soon developed, however, and the present coated surfaces are very rugged and provide a noncontaminating coating that can be cleaned easily.

An important space application for Teflon is its use on the outer skins of suits worn by astronauts. (PhotoDisc)

Teflon has proved to be a useful material in making devices that are implanted in the human body. It is easily formed into various shapes and is one of the few materials that the human body does not reject. Teflon has been used to make heart valves, pacemakers, bone and tendon substitutes, artificial corneas, and dentures. Teflon’s space applications have included its use as the outer skin of the suits worn by astronauts, as insulating coating on wires and cables in spacecraft that must resist high-energy cosmic radiation, and as heat-resistant nose cones and heat shields on spacecraft.

See also Buna rubber; Neoprene; Nylon; Plastic; Polystyrene; Pyrex glass; Tupperware.


Further Reading
Friedel, Robert. “The Accidental Inventor.” Discover 17, no. 10 (October, 1996).
“Happy Birthday, Teflon.” Design News 44, no. 8 (April, 1988).
“Teflon.” Newsweek 130, 24a (Winter, 1997/1998).


Telephone switching

The invention: The first completely automatic electronic system for switching telephone calls.

The people behind the invention:
Almon B. Strowger (1839-1902), an American inventor
Charles Wilson Hoover, Jr. (1925- ), supervisor of memory system development
Wallace Andrew Depp (1914- ), director of Electronic Switching
Merton Brown Purvis (1923- ), designer of switching matrices

Electromechanical Switching Systems

The introduction of electronic switching technology into the telephone network was motivated by the desire to improve the quality of the telephone system, add new features, and reduce the cost of switching technology. Telephone switching systems have three features: signaling, control, and switching functions. There were several generations of telephone switching equipment before the first fully electronic switching “office” (device) was designed.

The first automatic electromechanical (partly electrical and partly mechanical) switching office was the Strowger step-by-step switch. Strowger switches relied upon the dial pulses generated by rotary dial telephones to move their switching elements to the proper positions to connect one telephone with another. In the step-by-step process, the first digit dialed moved the first mechanical switch into position, the second digit moved the second mechanical switch into position, and so forth, until the proper telephone connection was established. These Strowger switching offices were quite large, and they lacked flexibility and calling features.

The second generation of automatic electromechanical telephone switching offices was of the “crossbar” type. Initially, crossbar switches relied upon a specialized electromechanical controller called a “marker” to establish call connections. Electromechanical telephone switching offices had difficulty implementing additional features and were unable to handle large numbers of incoming calls.

Electronic Switching Systems

In the early 1940’s, research into the programmed control of switching offices began at the American Telephone and Telegraph Company’s Bell Labs. This early research resulted in a trial office being put into service in Morris, Illinois, in 1960. The Morris switch used a unique memory called the “flying spot store.” It used a photographic plate as a program memory, and the memory was accessed optically. In order to change the memory, one had to scratch out or cover parts of the photographic plate.

Before the development of the Morris switch, gas tubes had been used to establish voice connections. This was accomplished by applying a voltage difference across the end points of the conversation. When this voltage difference was applied, the gas tubes would conduct electricity, thus establishing the voice connection. The Morris trial showed that gas tubes could not support the voltages that the new technology required to make telephones ring or to operate pay telephones.

The knowledge gained from the Morris trial led to the development of the first full-scale, commercial, computer-controlled electronic switch, the electronic switching system 1 (ESS-1). The first ESS-1 went into service in New Jersey in 1965. In the ESS-1, electromechanical switching elements, or relays, were controlled by computer software. A centralized computer handled call processing. Because the telephone service of an entire community depends on the reliability of the telephone switching office, the ESS-1 had two central processors, so that one would be available if the other broke down. The switching system of the ESS-1 was composed of electromechanical relays; the control of the switching system was electronic, but the switching itself remained mechanical.
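The step-by-step selection described earlier, in which each dialed digit moves the next switching stage into position, can be modeled as successive choices down a tree of selectors. This is a toy sketch; the class name, subscriber numbers, and line labels are all invented for illustration:

```python
# Toy model of Strowger step-by-step switching: each dialed digit
# selects the next stage, so the full digit sequence traces a path
# through the office to exactly one subscriber line.
class StepByStepOffice:
    def __init__(self):
        self.lines = {}   # complete digit sequence -> subscriber line

    def assign(self, number, subscriber):
        self.lines[number] = subscriber

    def dial(self, number):
        path = ""
        for digit in number:   # first digit positions the first switch,
            path += digit      # second digit the second, and so on
        return self.lines.get(path, "no such line")

office = StepByStepOffice()
office.assign("2368", "subscriber A")   # invented directory entries
office.assign("2369", "subscriber B")
print(office.dial("2368"))   # subscriber A
print(office.dial("9999"))   # no such line
```

Note what the model makes visible: there is no central intelligence, only the digit sequence itself steering the path, which is why step-by-step offices were inflexible and hard to extend with new calling features.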
Almon B. Strowger

Some people thought Almon B. Strowger was strange, perhaps even demented. Certainly, he was hot-tempered, restless, and argumentative. One thing he was not, however, was unimaginative.

Born near Rochester, New York, in 1839, Strowger was old enough to fight for the Union at the second battle of Manassas during the American Civil War. The bloody battle apparently shattered and embittered him. He wandered slowly west after the war, taught himself undertaking, and opened a funeral home in Topeka, Kansas, in 1882. There began his running war with telephone operators, which continued when he moved his business to Kansas City. With the help of technicians (whom he later cheated), he built the first “collar box,” an automatic switching device, in 1887. The round contraption held a pencil that could be revolved to different pins arranged around it in order to change phone connections. Two years later he produced a more sophisticated device that was operated by push-button, and despite initial misgivings he brought out a rotary dial device in 1896. That same year, he sold the rights to his patents to business partners for $1,800; in 1898, he sold his share in the Strowger Automatic Dial Telephone Exchange for $10,000. He moved to St. Petersburg, Florida, opened a small hotel, and died there in 1902. It surely would have done his temper no good to learn that fourteen years later the Bell system bought his patents for $2.5 million.

Bell Labs developed models to demonstrate the concept of integrating digital transmission and switching systems. Unfortunately, the solid-state electronics necessary for such an undertaking had not developed sufficiently at that time, so the commercial development of digital switching was not pursued. New versions of the ESS continued to employ electromechanical technology, although mechanical switching elements can cause impulse noise in voice signals and are larger and more difficult to maintain than electronic switching elements. Ten years later, however, Bell Labs began to develop a digital toll switch, the ESS-4, in which both switching and control functions were electronic.

Although the ESS-1 was the first electronically controlled switching system, it did not switch voices electronically. The ESS-1 used computer control to move mechanical contacts in order to establish a conversation. In a fully electronic switching system, the voices are
digitized before switching is performed. This technique, which is called “digital switching,” is still used.

The advent of electronically controlled switching systems made possible features such as call forwarding, call waiting, and detailed billing for long-distance calls. Changing these services became a matter of simply changing tables in computer programs. Telephone maintenance personnel could communicate with the central processor of the ESS-1 by using a teletype, and they could change numbers simply by typing commands on the teletype. In electromechanically controlled telephone switching systems, however, changing numbers required rewiring.

Consequences

Electronic switching has greatly decreased the size of switching offices. When telephone switches were electromechanical, a large area was needed to house the many mechanical switches that were required. In the era of electronic switching, voices are switched digitally by computer, and digitization of the voice prior to transmission improves voice quality. In this method, voice samples are read into a computer memory and then read out of the memory when it is time to connect a caller with a desired number. Basically, electronic telephone systems are specialized computer systems that move digitized voice samples between customers.

Telephone networks are moving toward complete digitization. Digitization was first applied to the transmission of voice signals, which made it possible for a single pair of copper wires to be shared by a number of telephone users. Currently, voices are digitized upon their arrival at the switching office. If the final destination of the telephone call is not connected to the particular switching office, the voice is sent to the remote office by means of digital circuits. Analog voice signals are still sent between the switching office and homes or businesses. In the future, digitization of the voice signal will occur in the telephone sets themselves.
Digital voice signals will be sent directly from one telephone to another. This will provide homes with direct digital communication. A network that provides such services is called the “integrated services digital network” (ISDN).
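The scheme described above, in which voice samples are written into memory and then read out toward the connected party, is essentially a time-slot interchange. A minimal sketch follows; the subscriber names and sample values are invented for illustration.

```python
# Minimal sketch of digital switching by time-slot interchange: one frame of
# digitized voice samples is written into memory, then read back out
# according to a connection map.

def switch_frame(samples, connections):
    """Route one frame of voice samples.

    samples: dict mapping subscriber -> that subscriber's voice sample
    connections: dict mapping listener -> the talker they are connected to
    """
    # Write phase: every incoming sample is stored in memory.
    memory = dict(samples)
    # Read phase: each listener's outgoing slot is filled by reading the
    # memory location belonging to the party connected to them.
    return {listener: memory[talker] for listener, talker in connections.items()}

# Two calls in progress: A<->B and C<->D.
frame = {"A": 11, "B": 22, "C": 33, "D": 44}
links = {"A": "B", "B": "A", "C": "D", "D": "C"}
print(switch_frame(frame, links))  # A hears B's sample (22), B hears A's (11)
```

Note that "switching" here is nothing but memory addressing, which is why a digital office is, as the text says, a specialized computer that moves voice samples between customers.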


See also Cell phone; Long-distance telephone; Rotary dial telephone; Touch-tone telephone.

Further Reading

Briley, Bruce E. Introduction to Telephone Switching. Reading, Mass.: Addison-Wesley, 1983.
Talley, David. Basic Electronic Switching for Telephone Systems. 2d ed. Rochelle Park, N.J.: Hayden, 1982.
Thompson, Richard A. Telephone Switching Systems. Boston: Artech House, 2000.


Television

The invention: A system that converts moving pictures and sounds into electronic signals that can be broadcast at great distances.

The people behind the invention:
Vladimir Zworykin (1889-1982), a Russian-born American electronic engineer and recipient of the National Medal of Science
Paul Gottlieb Nipkow (1860-1940), a German engineer and inventor
Alan A. Campbell Swinton (1863-1930), a Scottish engineer and Fellow of the Royal Society
Charles F. Jenkins (1867-1934), an American physicist, engineer, and inventor

The Persistence of Vision

In 1894, an American inventor, Charles F. Jenkins, described a scheme for electrically transmitting moving pictures. Jenkins’s idea, however, was only one in an already long tradition of theoretical television systems. In 1842, for example, the English physicist Alexander Bain had invented an automatic copying telegraph for sending still pictures. Bain’s system scanned images line by line. Similarly, the wide recognition of the persistence of vision—the mind’s ability to retain a visual image for a short period of time after the image has been removed—led to experiments with systems in which the image to be projected was repeatedly scanned line by line. Rapid scanning of images became the underlying principle of all television systems, both electromechanical and all-electronic.

In 1884, a German inventor, Paul Gottlieb Nipkow, patented a complete television system that utilized a mechanical sequential scanning system and a photoelectric cell sensitized with selenium for transmission. The selenium photoelectric cell converted the light values of the image being scanned into electrical impulses to be transmitted to a receiver, where the process would be reversed. The electrical impulses led to light of varying brightnesses being produced and projected onto a rotating disk that was scanned to reproduce the original image. If the system—that is, the transmitter and the receiver—were in perfect synchronization and if the disk rotated quickly enough, persistence of vision enabled the viewer to see a complete image rather than a series of moving points of light.

Schematic of a television picture tube: inside a glass envelope, an electron gun fires an electron beam through deflection and focus coils onto a phosphor screen.

For a television image to be projected onto a screen of reasonable size and retain good quality and high resolution, any system employing only thirty to one hundred lines (as early mechanical systems did) is inadequate. A few systems were developed that utilized two hundred or more lines, but the difficulties these presented made the possibility of an all-electronic system increasingly attractive. These difficulties were not generally recognized until the early 1930’s, when television began to move out of the laboratory and into commercial production.

Interest in all-electronic television paralleled interest in mechanical systems, but solutions to technical problems proved harder to achieve. In 1908, a Scottish engineer, Alan A. Campbell Swinton, proposed what was essentially an all-electronic television system. Swinton theorized that the use of magnetically deflected cathode-ray tubes for both the transmitter and receiver in a system was possible. In 1911, Swinton formally presented his idea to the Röntgen Society in London, but the technology available did not allow for practical experiments.

Vladimir Zworykin

Born in 1889, Vladimir Kosma Zworykin grew up in Murom, a small town two hundred miles east of Moscow. His father ran a riverboat service, and Zworykin sometimes helped him, but his mind was on electricity, which he studied on his own while aboard his father’s boats. In 1906, he entered the St. Petersburg Institute of Technology, and there he became acquainted with the idea of television through the work of Professor Boris von Rosing. Zworykin assisted Rosing in his attempts to transmit pictures with a cathode-ray tube.

He served with the Russian Signal Corps during World War I, but then fled to the United States after the Bolshevist Revolution. In 1920 he got a job at Westinghouse’s research laboratory in Pittsburgh, helping develop radio tubes and photoelectric cells. He became an American citizen in 1924 and completed a doctorate at the University of Pittsburgh in 1926. By then he had already demonstrated his iconoscope and applied for a patent. Unable to interest Westinghouse in his invention, he moved to the Radio Corporation of America (RCA) in 1929, and later became director of its electronics research laboratory. RCA’s president, David Sarnoff, also a Russian immigrant, had faith in Zworykin and his ideas. Before Zworykin retired in 1954, RCA had invested $50 million in television.

Among the many awards Zworykin received for his culture-changing invention was the National Medal of Science, presented by President Lyndon Johnson in 1966. Zworykin died on his birthday in 1982.

Zworykin’s Picture Tube

In 1923, Vladimir Zworykin, a Russian-born electronic engineer working for the Westinghouse Electric Corporation, filed a patent application for the “iconoscope,” or television transmission tube. On March 17, 1924, Zworykin applied for a patent for a two-way system. The first cathode-ray tube receiver had a cathode, a modulating grid, an anode, and a fluorescent screen.
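The line-by-line scanning principle that underlies both the mechanical and the electronic systems can be shown in miniature: a two-dimensional image is serialized at the transmitter and regrouped by a synchronized receiver. The sketch below is a toy illustration, not tied to any particular system.

```python
# Toy sketch of raster scanning: a 2-D image becomes a serial signal,
# one row (scan line) at a time, and a receiver that knows the line
# width rebuilds the picture. Persistence of vision does the rest.

def scan(image):
    """Transmitter: emit brightness values row by row as one serial signal."""
    return [pixel for row in image for pixel in row]

def rebuild(signal, width):
    """Receiver: regroup the serial signal into rows of the same width."""
    return [signal[i:i + width] for i in range(0, len(signal), width)]

picture = [[0, 9, 0],
           [9, 0, 9]]
sent = scan(picture)                # [0, 9, 0, 9, 0, 9]
assert rebuild(sent, 3) == picture  # a synchronized receiver restores the image
```

The `width` parameter plays the role of synchronization: if transmitter and receiver disagree about where each line begins, the rebuilt picture is scrambled, which is why the text stresses that the two ends had to be "in perfect synchronization."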


Early console model television. (PhotoDisc)

Zworykin later admitted that the results were very poor and the system, as shown, was still far removed from a practical television system. Zworykin’s employers were so unimpressed that they admonished him to forget television and work on something more useful. Zworykin’s interest in television was thereafter confined to his nonworking hours, as he spent the next year working on photographic sound recording. It was not until the late 1920’s that he was able to devote his full attention to television. Ironically, Westinghouse had by then resumed research in television, but Zworykin was not part of the team. After he returned from a trip to France, where in 1928 he had witnessed an exciting demonstration of an electrostatic tube, Westinghouse indicated that it was not interested. This lack of corporate support in Pittsburgh led Zworykin to approach the Radio Corporation of America (RCA). According to reports, Zworykin demonstrated his system to the Institute of Radio Engineers at Rochester, New York, on November 18, 1929, claiming to have developed a
working picture tube, a tube that would revolutionize television development. Finally, RCA recognized the potential.

Impact

The picture tube, or “kinescope,” developed by Zworykin changed the history of television. Within a few years, mechanical systems disappeared and television technology began to utilize systems similar to Zworykin’s, with cathode-ray tubes at both ends of the system.

At the transmitter, the image is focused upon a mosaic screen composed of light-sensitive cells. A stream of electrons sweeps the image, and each cell sends off an electric current pulse as it is hit by the electrons, the light and shade of the focused image regulating the amount of current. This string of electrical impulses, after amplification and modification into ultrahigh-frequency wavelengths, is broadcast by antenna to be picked up by any attuned receiver, where it is retransformed into a moving picture in the cathode-ray tube receiver. The cathode-ray tubes contain no moving parts, as the electron stream is guided entirely by electric attraction.

Although both the iconoscope and the kinescope were far from perfect when Zworykin initially demonstrated them, they set the stage for all future television development.

See also Color television; Community antenna television; Communications satellite; Fiber-optics; FM radio; Holography; Internet; Radio; Talking motion pictures.

Further Reading

Abramson, Albert. Zworykin: Pioneer of Television. Urbana: University of Illinois Press, 1995.
Sconce, Jeffrey. Haunted Media: Electronic Presence from Telegraphy to Television. Durham, N.C.: Duke University Press, 2000.
Zworykin, Vladimir Kosma, and George Ashmun Morton. Television: The Electronics of Image Transmission in Color and Monochrome. 2d ed. New York: J. Wiley, 1954.


Tevatron accelerator

The invention: A particle accelerator that generated collisions between beams of protons and antiprotons at the highest energies ever recorded.

The people behind the invention:
Robert Rathbun Wilson (1914), an American physicist and director of Fermilab from 1967 to 1978
John Peoples (1933), an American physicist and deputy director of Fermilab from 1987

Putting Supermagnets to Use

The Tevatron is a particle accelerator, a large electromagnetic device used by high-energy physicists to generate subatomic particles at sufficiently high energies to explore the basic structure of matter. The Tevatron is a circular, tubelike track 6.4 kilometers in circumference that employs a series of superconducting magnets to accelerate beams of protons, which carry a positive charge in the atom, and antiprotons, the proton’s negatively charged equivalent, at energies up to 1 trillion electronvolts (equal to 1 teraelectronvolt, or 1 TeV; hence the name Tevatron). An electronvolt is the amount of energy that an electron gains in passing through an electrical potential difference of 1 volt.

The Tevatron is located at the Fermi National Accelerator Laboratory, also known as Fermilab. The laboratory was one of several built in the United States during the 1960’s. The heart of the original Fermilab was the 6.4-kilometer main accelerator ring, which was capable of accelerating protons to energies approaching 500 billion electronvolts, or 0.5 teraelectronvolt.

The idea to build the Tevatron grew out of a concern for the millions of dollars spent annually on electricity to power the main ring, the need for higher energies to explore the inner depths of the atom and the consequences of new theories of both matter and energy, and the growth of superconductor technology. Planning for a second accelerator ring, the Tevatron, to be installed beneath the main ring began in 1972.


Robert Rathbun Wilson, the director of Fermilab at that time, realized that the only way the laboratory could achieve the higher energies needed for future experiments without incurring intolerable electricity costs was to design a second accelerator ring that employed magnets made of superconducting material. Extremely powerful magnets are the heart of any particle accelerator; charged particles such as protons are given a “push” as they pass through an electromagnetic field. Each successive push along the path of the circular accelerator track gives the particle more and more energy. The enormous magnetic fields required to accelerate massive particles such as protons to energies approaching 1 trillion electronvolts would require electricity expenditures far beyond Fermilab’s operating budget. Wilson estimated, however, that using superconducting materials, which have virtually no resistance to electrical current, would make it possible for the Tevatron to achieve double the main ring’s magnetic field strength, doubling energy output without significantly increasing energy costs.

Tevatron to the Rescue

The Tevatron was conceived in three phases. Most important, however, were Tevatron I and Tevatron II, where the highest energies were to be generated and where it was hoped new experimental findings would emerge. Tevatron II experiments were designed to be very similar to other proton beam experiments, except that in this case, the protons would be accelerated to an energy of 1 trillion electronvolts. More important still were the proton-antiproton colliding-beam experiments of Tevatron I. In this phase, beams of protons and antiprotons rotating in opposite directions are caused to collide in the Tevatron, producing a combined, or center-of-mass, energy approaching 2 trillion electronvolts, nearly three times the energy achievable at the largest accelerator at the Centre Européen de Recherche Nucléaire (the European Center for Nuclear Research, or CERN).
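Why colliding beams yield so much more usable energy than a beam striking a stationary target follows from standard relativistic kinematics. The comparison below is a back-of-the-envelope sketch; the formulas are textbook results, and the proton rest energy of 0.938 GeV is not from the text.

```python
import math

# Center-of-mass energy: head-on collider versus fixed target.
# For two equal beams head-on, the beam energies simply add; for a beam
# hitting a stationary proton, E_cm = sqrt(2*m*E + 2*m^2) (energies in GeV,
# with c = 1), so most of the beam energy goes into motion, not collisions.

M_P = 0.938  # proton rest energy in GeV (approximate)

def ecm_collider(beam_energy_gev):
    """Two equal beams colliding head-on."""
    return 2 * beam_energy_gev

def ecm_fixed_target(beam_energy_gev):
    """One beam striking a stationary proton."""
    return math.sqrt(2 * M_P * beam_energy_gev + 2 * M_P**2)

print(ecm_collider(1000))                # 2000 GeV = 2 TeV, as at the Tevatron
print(round(ecm_fixed_target(1000), 1))  # only about 43.3 GeV
```

A 1 TeV beam on a fixed target yields a center-of-mass energy of only about 43 GeV, while two 1 TeV beams head-on yield the full 2 TeV quoted in the text, which is the whole case for colliding-beam machines.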
John Peoples was faced with the problem of generating a beam of antiprotons of sufficient intensity to collide efficiently with a beam of protons. Knowing that he had the use of a large proton accelerator—the old main ring—Peoples employed the two-ring mode in which 120 billion electronvolt protons from the main ring are aimed
at a fixed tungsten target, generating antiprotons, which scatter from the target. These particles were extracted and accumulated in a smaller storage ring, where they could be accelerated to relatively low energies. After sufficient numbers of antiprotons were collected, they were injected into the Tevatron, along with a beam of protons, for the colliding-beam experiments. On October 13, 1985, Fermilab scientists reported a proton-antiproton collision with a center-of-mass energy measured at 1.6 trillion electronvolts, the highest energy ever recorded.

Consequences

The Tevatron’s success at generating high-energy proton-antiproton collisions affected future plans for accelerator development in the United States and offered the potential for important discoveries in high-energy physics at energy levels that no other accelerator could achieve.

Physics recognizes four forces in nature: the electromagnetic force, the gravitational force, the strong nuclear force, and the weak nuclear force. A major goal of the physics community is to formulate a theory that will explain all these forces: the so-called grand unification theory. In 1967, one of the first of the so-called gauge theories was developed that unified the weak nuclear force and the electromagnetic force. One consequence of this theory was that the weak force was carried by massive particles known as “bosons.” The search for three of these particles—the intermediate vector bosons W+, W−, and Z0—led to the rush to conduct colliding-beam experiments in the early 1970’s. Because the Tevatron was still in the planning phase at this time, these particles were discovered by a team of international scientists based in Europe. In 1989, Tevatron physicists reported the most accurate measure to date of the Z0 mass.
The Tevatron is thought to be the only particle accelerator in the world with sufficient power to conduct further searches for the elusive Higgs boson, a particle attributed to weak interactions by University of Edinburgh physicist Peter Higgs in order to account for the large masses of the intermediate vector bosons. In addition, the Tevatron has the ability to search for the so-called top quark. Quarks are believed to be the constituent particles of protons and neutrons.
Evidence has been gathered of five of the six quarks believed to exist; physicists have yet to detect evidence of the most massive, the top quark.

See also Atomic bomb; Cyclotron; Electron microscope; Field ion microscope; Geiger counter; Hydrogen bomb; Mass spectrograph; Neutrino detector; Scanning tunneling microscope; Synchrocyclotron.

Further Reading

Hilts, Philip J. Scientific Temperaments: Three Lives in Contemporary Science. New York: Simon and Schuster, 1984.
Ladbury, Ray. “Fermilab Tevatron Collider Group Goes over the Top—Cautiously.” Physics Today 47, no. 6 (June, 1994).
Lederman, Leon M. “The Tevatron.” Scientific American 264, no. 3 (March, 1991).
Wilson, Robert R., and Raphael Littauer. Accelerators: Machines of Nuclear Physics. London: Heinemann, 1962.


Thermal cracking process

The invention: A process that increased the yield of refined gasoline extracted from raw petroleum by using heat to convert complex hydrocarbons into simpler gasoline hydrocarbons, thereby making possible the development of the modern petroleum industry.

The people behind the invention:
William M. Burton (1865-1954), an American chemist
Robert E. Humphreys (1942), an American chemist

Gasoline, Motor Vehicles, and Thermal Cracking

Gasoline is a liquid mixture of hydrocarbons (chemicals made up of only hydrogen and carbon) that is used primarily as a fuel for internal combustion engines. It is produced by petroleum refineries that obtain it by processing petroleum (crude oil), a naturally occurring mixture of thousands of hydrocarbons, the molecules of which can contain from one to sixty carbon atoms.

Gasoline production begins with the “fractional distillation” of crude oil in a fractionation tower, where it is heated to about 400 degrees Celsius at the tower’s base. This heating vaporizes most of the hydrocarbons that are present, and the vapor rises in the tower, cooling as it does so. At various levels of the tower, various portions (fractions) of the vapor containing simple hydrocarbon mixtures become liquid again, are collected, and are piped out as “petroleum fractions.” Gasoline, the petroleum fraction that boils between 30 and 190 degrees Celsius, is mostly a mixture of hydrocarbons that contain five to twelve carbon atoms.

Only about 25 percent of petroleum will become gasoline via fractional distillation. This amount of “straight-run” gasoline is not sufficient to meet the world’s needs, so numerous methods have been developed to produce the needed amounts of gasoline. The first such method, “thermal cracking,” was developed in 1913 by William M. Burton of Standard Oil of Indiana. Burton’s cracking process used heat to convert complex hydrocarbons (whose molecules contain many carbon atoms) into simpler gasoline hydrocarbons (whose molecules contain fewer carbon atoms), thereby increasing the yield of gasoline from petroleum. Later advances in petroleum technology, including both an improved Burton method and other methods, increased the gasoline yield still further.

More Gasoline!

Starting in about 1900, gasoline became important as a fuel for the internal combustion engines of the new vehicles called automobiles. By 1910, half a million automobiles traveled American roads. Soon, the great demand for gasoline—which was destined to grow and grow—required both the discovery of new crude oil fields around the world and improved methods for refining the petroleum mined from these new sources. Efforts were made to increase the yield of gasoline—at that time, about 15 percent—from petroleum. The Burton method was the first such method.

At the time that the cracking process was developed, Burton was the general superintendent of the Whiting refinery, owned by Standard Oil of Indiana. The Burton process was developed in collaboration with Robert E. Humphreys and F. M. Rogers. This three-person research group began work knowing that heating petroleum fractions that contained hydrocarbons more complex than those present in gasoline—a process called “coking”—produced kerosene, coke (a form of carbon), and a small amount of gasoline. The process needed to be improved substantially, however, before it could be used commercially.

Initially, Burton and his coworkers used the “heavy fuel” fraction of petroleum (the 66 percent of petroleum that boils at a temperature higher than the boiling temperature of kerosene). Soon, they found that it was better to use only the part of the material that contained its smaller hydrocarbons (those containing fewer carbon atoms), all of which were still much larger than those present in gasoline. The cracking procedure attempted first involved passing the starting material through a hot tube.
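The boiling-point cuts described above can be turned into a toy classifier. The 30 to 190 degree gasoline cut comes from the text; the sample compounds and their approximate boiling points are illustrative, not real assay data.

```python
# Toy sketch of sorting crude-oil components into fractions by boiling
# point, using the gasoline cut quoted in the text (30-190 degrees Celsius).

def classify(boiling_point_c):
    """Assign a component to a fraction by its boiling point."""
    if boiling_point_c < 30:
        return "gases"
    if boiling_point_c <= 190:
        return "gasoline"
    return "heavier fractions (kerosene, fuel oil, ...)"

# Approximate boiling points in degrees Celsius (illustrative values).
crude = {"butane": -1, "octane": 126, "hexadecane": 287}
for name, bp in crude.items():
    print(name, "->", classify(bp))  # butane -> gases, octane -> gasoline, ...
```

In a real tower the cuts overlap and are set by the refiner, but the principle is the same: each component condenses at the level of the tower whose temperature matches its boiling range.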
This hot-tube treatment vaporized the material and broke down 20 to 30 percent of the larger hydrocarbons into the hydrocarbons found in gasoline. Various tarry products were also produced, however, that reduced the quality of the gasoline that was obtained in this way.
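The breakdown of larger hydrocarbons described above conserves atoms: a large alkane splits into a smaller alkane plus an alkene. The bookkeeping sketch below uses the textbook example of hexadecane splitting into octane and octene; it is an illustration of the chemistry, not a claim about Burton's actual product mix.

```python
# Atom bookkeeping for a cracking reaction:
#   C(n)H(2n+2)  ->  C(k)H(2k+2) alkane  +  C(n-k)H(2(n-k)) alkene
# Carbon and hydrogen counts must balance on both sides.

def crack(alkane_carbons, split_at):
    """Split an alkane with n carbons into an alkane and an alkene."""
    n, k = alkane_carbons, split_at
    alkane = (k, 2 * k + 2)          # (carbons, hydrogens)
    alkene = (n - k, 2 * (n - k))
    # Conservation check: atoms in equal atoms out.
    assert alkane[0] + alkene[0] == n
    assert alkane[1] + alkene[1] == 2 * n + 2
    return alkane, alkene

# Hexadecane (C16H34) cracking into octane (C8H18) and octene (C8H16):
print(crack(16, 8))  # ((8, 18), (8, 16))
```

The smaller alkane falls into the gasoline range; the alkene by-products are part of why early cracked gasoline needed further refining.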


Burton’s process contributed to the development of modern petroleum refining, in which crude oil is separated, converted in the refinery, and purified into products ranging from asphalt, industrial fuel oil, diesel oil, bottled gas, gasoline, and jet fuel to lubricants and greases, waxes, plastics, medicines, detergents, solvents, insecticides, synthetic rubber, fertilizers, and synthetic fibers.

Next, the investigators attempted to work at a higher temperature by bubbling the starting material through molten lead. More gasoline was made in this way, but it was so contaminated with gummy material that it could not be used. Continued investigation showed, however, that moderate temperatures (between those used in the hot-tube experiments and that of molten lead) produced the best yield of useful gasoline.

The Burton group then had the idea of using high pressure to “keep starting materials still.” Although the theoretical basis for the use of high pressure was later shown to be incorrect, the new method worked quite well. In 1913, the Burton method was patented and put into use. The first cracked gasoline, called Motor Spirit, was not very popular, because it was yellowish and had a somewhat unpleasant odor. The addition of some minor refining procedures, however, soon made cracked gasoline indistinguishable from straight-run gasoline. Standard Oil of Indiana made huge profits from cracked gasoline over the next ten years. Ultimately, thermal cracking subjected the petroleum fractions that were utilized to temperatures between 550 and 750 degrees Celsius, under pressures between 250 and 750 pounds per square inch.

Impact

In addition to using thermal cracking to make gasoline for sale, Standard Oil of Indiana also profited by licensing the process for use by other gasoline producers. Soon, the method was used throughout the oil industry. By 1920, it had been perfected as much as it could be, and the gasoline yield from petroleum had been significantly increased.

The disadvantages of thermal cracking include a relatively low yield of gasoline (compared to those of other methods), the waste of hydrocarbons in fractions converted to tar and coke, and the relatively high cost of the process. A partial solution to these problems was found in “catalytic cracking”—the next logical step from the Burton method—in which petroleum fractions to be cracked are mixed with a catalyst (a substance that causes a chemical reaction to proceed more quickly without itself being consumed). The most common catalysts used in such cracking were minerals called “zeolites.” The wide use of catalytic cracking soon enabled gasoline producers to work at lower temperatures (450 to 550 degrees Celsius) and pressures (10 to 50 pounds per square inch). This decreased manufacturing costs, because catalytic cracking required relatively little energy, produced only small quantities of undesirable side products, and produced high-quality gasoline.

Various other methods of producing gasoline have been developed—among them catalytic reforming, hydrocracking, alkylation, and catalytic isomerization—and now about 60 percent of the petroleum starting material can be turned into gasoline. These methods, and others still to come, are expected to ensure that the world’s needs for gasoline will continue to be satisfied—as long as petroleum remains available.

See also Fuel cell; Gas-electric car; Geothermal power; Internal combustion engine; Oil-well drill bit; Solar thermal engine.


Further Reading

Gorman, Hugh S. Redefining Efficiency: Pollution Concerns, Regulatory Mechanisms, and Technological Change in the U.S. Petroleum Industry. Akron, Ohio: University of Akron Press, 2001.
Sung, Hsun-chang, Robert Roy White, and George Granger Brown. Thermal Cracking of Petroleum. Ann Arbor: University of Michigan, 1945.
William Meriam Burton: A Pioneer in Modern Petroleum Technology. Cambridge, Mass.: University Press, 1952.


Tidal power plant

The invention: A plant that converts natural ocean tidal forces into electrical power.

The people behind the invention:
Mariano di Jacopo detto Taccola (Mariano of Siena, 1381-1453), an Italian notary, artist, and engineer
Bernard Forest de Bélidor (1697 or 1698-1761), a French engineer
Franklin D. Roosevelt (1882-1945), president of the United States

Tidal Energy

Ocean tides have long been harnessed to perform useful work. Ancient Greeks, Romans, and medieval Europeans all left records and ruins of tidal mills, and Mariano di Jacopo included tidal power in his treatise De Ingeneis (1433; on engines). Some mills consisted of water wheels suspended in tidal currents, others lifted weights that powered machinery as they fell, and still others trapped the high tide to run a mill.

Bernard Forest de Bélidor’s Architecture hydraulique (1737; hydraulic architecture) is often cited as initiating the modern era of tidal power exploitation. Bélidor was an instructor in the French École d’Artillerie et du Génie (School of Artillery and Engineering). Industrial expansion between 1700 and 1800 led to the construction of many tidal mills. In these mills, waterwheels or simple turbines rotated shafts that drove machinery by means of gears or belts. They powered small enterprises located on the seashore.

Steam engines, however, soon began to replace tidal mills. Steam could be generated wherever it was needed, and steam mills were not dependent upon the tides or limited in their production capacity by the amount of tidal flow. Thus, tidal mills gradually were abandoned, although a few still operate in New England, Great Britain, France, and elsewhere.

Tidal power plant

/

771

Electric Power from Tides

Modern society requires tremendous amounts of electric energy generated by large power stations. This need was first met by using coal and by damming rivers; later, oil and nuclear power became important. Although small mechanical tidal mills are inadequate for modern needs, tidal power itself remains an attractive source of energy. Periodic alarms about coal or oil supplies and concern about the negative effects on the environment of using coal, oil, or nuclear energy continue to stimulate efforts to develop renewable energy sources with fewer negative effects. Every crisis—for example, the perceived European coal shortages in the early 1900’s, the oil shortages of the 1920’s and 1970’s, and growing anxiety about nuclear power—revives interest in tidal power.

In 1912, a tidal power plant was proposed at Busum, Germany. The English, in 1918 and more recently, promoted elaborate schemes for the Severn Estuary. In 1928, the French planned a plant at Aber-Wrach in Brittany. In 1935, under the leadership of Franklin Delano Roosevelt, the United States began construction of a tidal power plant at Passamaquoddy, Maine. These plants, however, were never built. All of them had to be located at sites where tides were extremely high, and such sites are often far from power users. So much electricity was lost in transmission that profitable quantities of power could not be sent where they were needed. Also, large tidal power stations were too expensive to compete with existing steam plants and river dams. In addition, turbines and generators capable of using the large volumes of slow-moving tidal water that reversed flow had not been invented. Finally, large tidal plants inevitably hampered navigation, fisheries, recreation, and other uses of the sea and shore.

French engineers, especially Robert Gibrat, the father of the La Rance project, have made the most progress in solving the problems of tidal power plants.
France, a highly industrialized country short of coal and petroleum, has searched intensely for alternative energy supplies. La Rance, completed in December, 1967, is the first full-scale tidal electric power plant in the world. The Chinese, however, have built more than a hundred small tidal electric stations


about the size of the old mechanical tidal mills, and the Canadians and the Russians have both operated plants of pilot-plant size.

La Rance, which was selected from more than twenty competing localities in France, is one of the few places in the world where the tides are extremely high. It also has a large reservoir located above a narrow constriction in the estuary. Finally, interference with navigation, fisheries, and recreational activities is minimal at La Rance.

Submersible "bulbs" containing generators and mounting propeller turbines were specially designed for the La Rance project. These turbines operate on both incoming and outgoing tides, and they can pump water either into or out of the reservoir. These features allow daily and seasonal changes in power generation to be "smoothed out" and let the turbines deliver electricity most economically. Many engineering problems had to be solved, however, before the dam could be built in the tidal estuary.

The La Rance plant produces 240 megawatts of electricity. Its twenty-four highly reliable turbine generator sets operate about 95 percent of the time. Output is coordinated with twenty-four other hydroelectric plants by means of a computer program. In this system, pump-storage stations use excess La Rance power during periods of low demand to pump water into elevated reservoirs. Later, during peak demand, this water is fed through a power plant, thus "saving" the excess generated at La Rance when it was not immediately needed. In this way, tidal energy, which must be used or lost as the tides continue to flow, can be stored.

Consequences

The operation of La Rance proved the practicality of tide-generated electricity. The equipment, engineering practices, and operating procedures invented for La Rance have been widely applied. Submersible, low-head, high-flow reversible generators of the La Rance type are now used in Austria, Switzerland, Sweden, Russia, Canada, the United States, and elsewhere.
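The scale of the resource behind a plant such as La Rance can be sketched with the standard formula for the potential energy of a tidal basin, E = ½ρgAh², where A is the basin's surface area and h the tidal range. The figures below are assumed round numbers of roughly La Rance's size, not data from this article, and a real plant recovers only a fraction of the theoretical total.

```python
# Rough estimate of the energy available from a tidal basin, using the
# standard potential-energy formula E = 1/2 * rho * g * A * h^2 for one
# emptying of a basin of area A over a tidal range h.
RHO_SEAWATER = 1025.0   # kg/m^3
G = 9.81                # m/s^2

def tidal_cycle_energy(area_m2: float, range_m: float) -> float:
    """Theoretical energy (joules) released per tidal cycle."""
    return 0.5 * RHO_SEAWATER * G * area_m2 * range_m ** 2

# Illustrative, assumed figures: a basin of about 22 km^2 and a mean
# tidal range of about 8 m, roughly the conditions at La Rance.
energy_j = tidal_cycle_energy(22e6, 8.0)
cycle_s = 12.42 * 3600              # one semidiurnal tidal cycle (~12.42 h)
avg_power_mw = energy_j / cycle_s / 1e6

print(f"Energy per cycle: {energy_j:.2e} J")
print(f"Theoretical average power: {avg_power_mw:.0f} MW")
```

The theoretical average works out to well over a hundred megawatts, consistent in magnitude with the plant's 240-megawatt rated capacity, though actual average output is lower because the turbines cannot capture the full head of every tide.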
Economic problems have prevented the building of more large tidal power plants. With technological advances, the inexorable depletion of oil and coal resources, and the increasing cost of nuclear power, tidal power may be used more widely in the future. Construction costs may be significantly lowered by using preconstructed power units and dam segments that are floated into place and submerged, making expensive dams unnecessary and reducing pumping costs.

See also: Compressed-air-accumulating power plant; Geothermal power; Nuclear power plant; Nuclear reactor; Solar thermal engine; Thermal cracking process.

Further Reading

Bernshtein, L. B. Tidal Power Plants. Seoul, Korea: Korea Ocean Research and Development Institute, 1996.
Boyle, Godfrey. Renewable Energy: Power for a Sustainable Future. Oxford: Oxford University Press, 1998.
Ross, David. Power from the Waves. New York: Oxford University Press, 1995.
Seymour, Richard J. Ocean Energy Recovery: The State of the Art. New York: American Society of Civil Engineers, 1992.


Touch-tone telephone

The invention: A push-button dialing system for telephones that replaced the earlier rotary-dial phone.

The person behind the invention:
Bell Labs, the research and development arm of the American Telephone and Telegraph Company

Dialing Systems

A person who wishes to make a telephone call must inform the telephone switching office which number he or she wishes to reach. A telephone call begins with the customer picking up the receiver and listening for a dial tone. The action of picking up the telephone closes a switch in the telephone, allowing electric current to flow between the telephone and the switching office. This signals the telephone office that the user is preparing to dial a number. To acknowledge its readiness to receive the digits of the desired number, the telephone office sends a dial tone to the user.

Two methods have been used to send telephone numbers to the telephone office: dial pulsing and touch-tone dialing. "Dial pulsing" is the method used by telephones that have rotary dials. In this method, the dial is turned until it stops, after which it is released and allowed to return to its resting position. As the dial returns to its resting position, the telephone breaks the current between the telephone and the switching office. The switching office counts the number of times that the current flow is interrupted, which indicates the digit that was dialed.

Introduction of Touch-tone Dialing

The dial-pulsing technique was particularly appropriate for the first electromechanical telephone switching offices, because the dial pulses actually moved mechanical switches in the switching office to set up the telephone connection. The introduction of touch-tone dialing into electromechanical systems was made possible by a special device that converted the touch-tones into rotary dial pulses that controlled the switches. At the American Telephone and Telegraph Company's Bell Labs, experimental studies explored the use of "multifrequency key pulsing" (that is, keys that emitted tones of various frequencies) by both operators and customers. Initially, plucked tuned reeds were proposed; these were replaced with "electronic transistor oscillators," which produced the required signals electronically.

The introduction of "crossbar switching" made dial-pulse signaling of the desired number obsolete. The dial pulses of the telephone were no longer needed to control the mechanical switching process at the switching office. When electronic control was introduced into switching offices, telephone numbers could be assigned by computer rather than set up mechanically. This meant that a single touch-tone receiver at the switching office could be shared by a large number of telephone customers.

Before 1963, telephone switching offices relied upon rotary dial pulses to move electromechanical switching elements. Touch-tone dialing was difficult to use in systems that were not computer controlled, such as the electromechanical step-by-step method. In about 1963, however, it became economically feasible to implement centralized computer control and touch-tone dialing in switching offices. Computerized switching offices use a central touch-tone receiver to detect dialed numbers, after which the receiver sends the number to a call processor so that a voice connection can be established.

Touch-tone dialing transmits two tones simultaneously to represent a digit. The transmitted tones are divided into two groups: a high-band group and a low-band group. For each digit that is dialed, one tone from the low-frequency (low-band) group and one tone from the high-frequency (high-band) group are transmitted.
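The two-tone scheme just described can be made concrete in a short sketch. The frequency grid below (rows at 697-941 Hz, columns at 1209-1477 Hz) is the standard touch-tone assignment; the Goertzel filter used here to measure tone strength is a common textbook detector, not necessarily the circuit used in Bell Labs receivers, and the sampling figures are illustrative assumptions.

```python
import math

# Standard touch-tone (DTMF) frequency grid: each key sends one low-band
# (row) tone plus one high-band (column) tone simultaneously.
LOW_BAND = [697.0, 770.0, 852.0, 941.0]   # Hz (rows)
HIGH_BAND = [1209.0, 1336.0, 1477.0]      # Hz (columns)
KEYS = ["123", "456", "789", "*0#"]

def dtmf_tones(key: str) -> tuple:
    """Return the (low, high) frequency pair for a keypad key."""
    for row, row_keys in enumerate(KEYS):
        col = row_keys.find(key)
        if col >= 0:
            return LOW_BAND[row], HIGH_BAND[col]
    raise ValueError(f"not a touch-tone key: {key!r}")

def synthesize(key: str, rate: int = 8000, n: int = 800) -> list:
    """Sum of the key's two sine tones: n samples at the given rate."""
    f_lo, f_hi = dtmf_tones(key)
    return [math.sin(2 * math.pi * f_lo * t / rate) +
            math.sin(2 * math.pi * f_hi * t / rate) for t in range(n)]

def goertzel_power(samples: list, freq: float, rate: int) -> float:
    """Signal power at one frequency (Goertzel algorithm)."""
    coeff = 2 * math.cos(2 * math.pi * freq / rate)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def detect(samples: list, rate: int = 8000) -> str:
    """Pick the strongest low-band and high-band tones; map to a key."""
    row = max(range(4), key=lambda i: goertzel_power(samples, LOW_BAND[i], rate))
    col = max(range(3), key=lambda i: goertzel_power(samples, HIGH_BAND[i], rate))
    return KEYS[row][col]
```

Running `detect(synthesize("5"))` recovers `"5"`: the detector simply picks the strongest tone in each band, mirroring the receiver's reliance on relative tone strength described in the article.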
The two frequencies of a tone are selected so that they are not too closely related harmonically. In addition, touch-tone receivers must be designed so that false digits cannot be generated when people are speaking into the telephone. For a call to be completed, the first digit dialed must be detected in the presence of a dial tone, and the receiver must not interpret


background noise or speech as valid digits. In order to avoid such misinterpretation, the touch-tone receiver uses both the relative and the absolute strength of the two simultaneous tones of the first digit dialed to determine what that digit is.

A system similar to the touch-tone system is used to send telephone numbers between telephone switching offices. This system, called "multifrequency signaling," also uses two tones to indicate a single digit, but the frequencies used are not the same as those used in the touch-tone system. Multifrequency signaling is currently being phased out; new computer-based systems are being introduced to replace it.

Impact

Touch-tone dialing has made new caller features available. The touch-tone system can be used not only to signal the desired number to the switching office but also to interact with voice-response systems. This means that touch-tone dialing can be used in conjunction with such devices as bank teller machines. A customer can also dial many more digits per second with a touch-tone telephone than with a rotary dial telephone.

Touch-tone dialing has not been implemented in Europe, and one reason may be that the economics of touch-tone dialing change as a function of technology. In the most modern electronic switching offices, rotary signaling can be performed at no additional cost, whereas the addition of touch-tone dialing requires a centralized touch-tone receiver at the switching office.

Touch-tone signaling was developed in an era of analog telephone switching offices; since that time, switching offices have become overwhelmingly digital. When the switching network becomes entirely digital, as will be the case when the integrated services digital network (ISDN) is implemented, touch-tone dialing will become unnecessary. In the future, ISDN telephone lines will use digital signaling methods exclusively.

See also: Cell phone; Rotary dial telephone; Telephone switching.


Further Reading

Coe, Lewis. The Telephone and Its Several Inventors: A History. Jefferson, N.C.: McFarland, 1995.
Young, Peter. Person to Person: The International Impact of the Telephone. Cambridge: Granta Editions, 1991.


Transistor

The invention: A miniature electronic device, comprising a tiny semiconductor and multiple electrical contacts, used in circuits as an amplifier, detector, or switch, that revolutionized electronics in the mid-twentieth century.

The people behind the invention:
William B. Shockley (1910-1989), an American physicist who led the Bell Laboratories team that produced the first transistors
Akio Morita (1921-1999), a Japanese physicist and engineer who was the cofounder of the Sony electronics company
Masaru Ibuka (1908-1997), a Japanese electrical engineer and businessman who cofounded Sony with Morita

The Birth of Sony

In 1952, a Japanese engineer visiting the United States learned that the Western Electric company was granting licenses to use its transistor technology. He was aware of the development of this device and thought that it might have some commercial applications. That engineer, Masaru Ibuka, told his business partner in Japan about the opportunity, and they decided to raise the $25,000 required to obtain a license. The following year, his partner, Akio Morita, traveled to New York City and concluded negotiations with Western Electric. This was a turning point in the history of the Sony company and in the electronics industry, for transistor technology was to open profitable new fields in home entertainment.

The origins of the Sony corporation lay in the ruins of postwar Japan. The Tokyo Telecommunications Company was incorporated in 1946 and manufactured a wide range of electrical equipment based on the existing vacuum tube technology. Morita and Ibuka had been involved in research and development of this technology during the war and intended to put it to use in the peacetime economy. In the United States and Europe, electrical engineers who had done the same sort of research founded companies to build advanced audio products such as high-performance amplifiers, but Morita


and Ibuka did not have the resources to make such sophisticated products and concentrated on simple items such as electric water heaters and small electric motors for record players.

In addition to their experience as electrical engineers, both men were avid music lovers, a result of their exposure to American-built phonographs and gramophones exported to Japan in the early twentieth century. They decided to combine their twin interests by devising innovative audio products and looked to the new field of magnetic recording as a likely area for exploitation. They had learned about tape recorders from technical journals and had seen them in use by the American occupation force. They developed a reel-to-reel tape recorder and introduced it in 1950. It was a large machine with vacuum tube amplifiers, so heavy that they transported it by truck. Although it worked well, they had a hard time selling it. Ibuka went to the United States in 1952 partly on a fact-finding mission and partly to get some ideas about marketing the tape recorder to schools and businesses. It was not seen as a consumer product.

Ibuka and Morita had read about the invention of the transistor in Western Electric's laboratories shortly after the war. John Bardeen and Walter H. Brattain had discovered that a semiconducting material could be used to amplify or control electric current. Their point contact transistor of 1948 was a crude laboratory apparatus that served as the basis for further research. The project was taken over by William B. Shockley, who had suggested the theory of the transistor effect. A new generation of transistors was devised; they were simpler and more efficient than the original. The junction transistors were the first to go into production.

Ongoing Research

Bell Laboratories had begun transistor research because Western Electric, one of its parent companies along with American Telephone and Telegraph, was interested in electronic amplification.
This was seen as a means to increase the strength of telephone signals traveling over long distances, a job carried out by vacuum tubes. The junction transistor was developed as an amplifier. Western Electric thought that the hearing aid was the only consumer


product that could be based on it and saw the transistor solely as a telecommunications technology.

The Japanese purchased the license with only the slightest understanding of the workings of semiconductors and despite the belief that transistors could not be used at the high frequencies associated with radio. The first task of Ibuka and Morita was to develop a high-frequency transistor. Once this was accomplished, in 1954, a method had to be found to manufacture it cheaply. Transistors were made from crystals, which had to be grown and doped with impurities to form different layers of conductivity. This was not an exact science, and Sony engineers found that the failure rate for high-frequency transistors was very high. This increased costs and put the entire project into doubt, because the case for adopting transistors rested on simplicity, reliability, and low cost.

The introduction of the first Sony transistor radio, the TR-55, in 1955 was the result of basic research combined with extensive industrial engineering. Morita admitted that its sound was poor, but because it was the only transistor radio in Japan, it sold well. These were not cheap products, nor were they particularly compact. The selling point was that they consumed much less battery power than the old portable radios. The TR-55 carried the brand name Sony, a relative of the Soni magnetic tape made by the company and a name influenced by the founders' interest in sound. Morita and Ibuka had already decided that the future of their company lay in international trade and wanted its name to be recognized all over the world. In 1957, they changed the company's name from Tokyo Telecommunications Engineering to Sony.

The first product intended for the export market was a small transistor radio. Ibuka was disappointed at the large size of the TR-55, because one of the advantages of the transistor over the vacuum tube was supposed to be smaller size.
He saw a miniature radio as a promising consumer product and gave his engineers the task of designing one small enough to fit into his shirt pocket. All elements of the radio had to be reduced in size: amplifier, transformer, capacitor, and loudspeaker. Like many other Japanese manufacturers, Sony bought many of the component parts of its products from small manufacturers, all of which had to be cajoled


into decreasing the size of their parts. Morita and Ibuka stated that the hardest task in developing this new product was negotiating with the subcontractors. Finally, the Type 63 pocket transistor radio, the "Transistor Six," was introduced in 1957.

Impact

When the transistor radio was introduced, the market for radios was considered to be saturated. People had rushed to buy radios when they were introduced in the 1920's, and by the time of the Great Depression, the majority of American households had one. Improvements had been made to the receiver, and more attractive radio/phonograph console sets had been introduced, but these developments did not add many new customers. The most that manufacturers could hope for was the replacement market, with a few additional sales as children moved out of their parents' homes and established new households.

The pocket radio created a new market. It could be taken anywhere and used at any time. Its portability was its major asset, and it became an indispensable part of the youth-oriented popular culture of the 1950's and 1960's. It provided an outlet for the crowded airwaves of commercial AM radio and was the means of bringing the new music of rock and roll to a mass audience.

As soon as Sony introduced the Transistor Six, it began to redesign it to reduce manufacturing cost. Subsequent transistor radios were smaller and cheaper. Sony sold them by the millions, and millions more were made by other companies under brand names such as "Somy" and "Sonny." By 1960, more than twelve million transistor radios had been sold.

The transistor radio was the product that established Sony as an international audio concern. Morita had resisted the temptation to make radios for other companies to sell under their names. Exports of Sony radios increased name recognition and established a bridgehead in the United States, the biggest market for electronic consumer products. Morita planned to follow the radio with other transistorized products.
William Shockley

William Shockley's reputation contains extremes. He helped invent one of the basic devices supporting modern technological society, the transistor. He also tried to revive one of the most infamous social theories, eugenics.

His parents, mining engineer William Hillman Shockley and surveyor May Bradford Shockley, were on assignment in England in 1910 when he was born. The family returned to Northern California when the younger William was three and schooled him at home until he was eight. He acquired an early interest in physics from a neighbor who taught at Stanford University. Shockley pursued that interest at the California Institute of Technology and the Massachusetts Institute of Technology, which awarded him a doctorate in 1936.

Shockley went to work for Bell Telephone Laboratories in the same year. While trying to design a vacuum tube that could amplify current, it occurred to him that solid-state components might work better than the fragile tubes. He experimented with the semiconductors germanium and silicon, but the materials available were too impure for his purpose. World War II interrupted the experiments, and he worked instead to improve radar and anti-submarine devices for the military.

Back at Bell Labs in 1945, Shockley teamed with theorist John Bardeen and experimentalist Walter Brattain. Two years later, they succeeded in making the first amplifier out of semiconductor materials and called it a transistor (short for "transfer resistor"). Its effect on the electronics industry was revolutionary, and the three shared the 1956 Nobel Prize in Physics for their achievement.

In the mid-1950's, Shockley left Bell Labs to start Shockley Transistor, then switched to academia in 1963, becoming Stanford University's Alexander M. Poniatoff Professor of Engineering and Applied Science. He grew interested in the relation between race and intellectual ability. Teaching himself psychology and genetics, he conceived the theory that Caucasians were inherently more intelligent than other races because of their genetic makeup. When he lectured on his brand of eugenics, he was denounced by the public as a racist and by scientists for shoddy thinking. Shockley retired in 1975 and died in 1989.

The television had challenged radio's position as the mechanical entertainer in the home. Like the radio, it stood in nearly every American living room and used the same vacuum tube amplification unit. The transistorized portable television set did for images what the transistor radio did for sound. Sony was the first to develop an all-transistor television, in 1959. At a time when the trend in television receivers was toward larger screens, Sony produced extremely small models with eight-inch screens. Ignoring the marketing experts who said that Americans would never buy such a product, Sony introduced these models into the United States in 1960 and found that there was a huge demand for them.

As in radio, the number of television stations on the air and broadcasts for the viewer to choose from grew. A personal television or radio gave the audience more choices. Instead of one machine in the family room, there were now several around the house. The transistorization of mechanical entertainers allowed each family member to choose his or her own entertainment.

Sony learned several important lessons from the success of the transistor radio and television. The first was that small size and low price could create new markets for electronic consumer products. The second was that constant innovation and cost reduction were essential to keep ahead of the numerous companies that produced cheaper copies of original Sony products.

In 1962, Sony introduced a tiny television receiver with a five-inch screen. In the 1970's and 1980's, it produced even smaller models, until it had a TV set that could sit in the palm of the hand, the Video Walkman. Sony's scientists had developed an entirely new television screen that worked on a new principle and gave better color resolution; the company was again able to blend the fruits of basic scientific research with innovative industrial engineering. The transistorized amplifier unit used in radio and television sets was applied to other products, including amplifiers for record players and tape recorders.
Japanese manufacturers were slow to take part in the boom in high-fidelity audio equipment that began in the United States in the 1950's. The leading manufacturers of high-quality audio components were small American companies built on the talents of one engineer, such as Avery Fisher or Henry Kloss. They sold expensive amplifiers and loudspeakers to audiophiles. The transistor reduced the size, complexity, and price of these components. The Japanese took the lead in devising complete audio units


based on transistorized integrated circuits, thus developing the basic home stereo. In the 1960's, companies such as Sony and Matsushita dominated the market for inexpensive home stereos. These were the basic radio/phonograph combination, with two detached speakers. The finely crafted wooden consoles that had been the standard for the home phonograph were replaced by small plastic boxes.

The Japanese were also quick to exploit the opportunities of the tape cassette. The Philips compact cassette was enthusiastically adopted by Japanese manufacturers and incorporated into portable tape recorders. This was another product with its ancestry in the transistor radio. As more of them were sold, the price dropped, encouraging more consumers to buy. The cassette player became as commonplace in American society in the 1970's as the transistor radio had been in the 1960's.

The Walkman

The transistor took another step in miniaturization in the Sony Walkman, a personal stereo sound system consisting of a cassette player and headphones. It was based on the same principles as the transistor radio and television. Sony again confounded marketing experts by creating a new market for a personal electronic entertainer. In the ten years following the introduction of the Walkman in 1979, Sony sold fifty million units worldwide, half of those in the United States. Millions of imitation products were sold by other companies.

Sony's acquisition of the Western Electric transistor technology was a turning point in the fortunes of that company and of Japanese manufacturers in general. Less than ten years after suffering defeat in a disastrous war, Japanese industry served notice that it had lost none of its engineering capabilities and innovative skills. The production of the transistor radio was a testament to the excellence of Japanese research and development.
Subsequent products proved that the Japanese had an uncanny sense of the potential market for consumer products based on transistor technology. The ability to incorporate solid-state electronics into innovative home entertainment products allowed Japanese manufacturers to dominate the


world market for electronic consumer products and to eliminate most of their American competitors. The little transistor radio was the vanguard of an invasion of new products unparalleled in economic history. Japanese companies such as Sony and Panasonic later established themselves at the leading edge of digital technology, the basis of a new generation of entertainment products. Instead of Japanese engineers scraping together the money to buy a license for an American technology, the great American companies went to Japan to license compact disc and other digital technologies.

See also: Cassette recording; Color television; FM radio; Radio; Television; Transistor radio; Videocassette recorder; Walkman cassette player.

Further Reading

Lyons, Nick. The Sony Vision. New York: Crown Publishers, 1976.
Marshall, David V. Akio Morita and Sony. Watford: Exley, 1995.
Morita, Akio, with Edwin M. Reingold and Mitsuko Shimomura. Made in Japan: Akio Morita and Sony. London: HarperCollins, 1994.
Reid, T. R. The Chip: How Two Americans Invented the Microchip and Launched a Revolution. New York: Simon and Schuster, 1984.
Riordan, Michael. Crystal Fire: The Invention of the Transistor and the Birth of the Information Age. New York: Norton, 1998.
Scott, Otto. The Creative Ordeal: The Story of Raytheon. New York: Atheneum, 1974.


Transistor radio

The invention: A miniature portable radio that used transistors and created a new mass market for electronic products.

The people behind the invention:
John Bardeen (1908-1991), an American physicist
Walter H. Brattain (1902-1987), an American physicist
William Shockley (1910-1989), an American physicist
Akio Morita (1921-1999), a Japanese physicist and engineer
Masaru Ibuka (1908-1997), a Japanese electrical engineer and industrialist

A Replacement for Vacuum Tubes

The invention of the first transistor by William Shockley, John Bardeen, and Walter H. Brattain of Bell Labs in 1947 was a scientific event of great importance. Its commercial importance at the time, however, was negligible. The commercial potential of the transistor lay in the possibility of using semiconductor materials to carry out the functions performed by vacuum tubes, the fragile and expensive tubes that were the electronic hearts of radios, sound amplifiers, and telephone systems. Transistors were smaller, more rugged, and less power-hungry than vacuum tubes. They did not suffer from overheating, and they offered an alternative to the unreliability and short life of vacuum tubes.

Bell Labs had begun the semiconductor research project in an effort to find a better means of electronic amplification, which was needed to increase the strength of telephone signals over long distances. The first commercial use of the transistor was therefore sought in speech amplification, and the small size of the device made it a perfect component for hearing aids. Engineers from the Raytheon Company, the leading manufacturer of hearing aids, were invited to Bell Labs to view the new transistor and to help assess the commercial potential of the technology. The first transistorized consumer product, the hearing aid, was soon on the market. The early models built by Raytheon used three junction-type transistors and cost more than two hundred dollars. They were small enough to go


directly into the ear or to be incorporated into eyeglasses.

The commercial application of semiconductors was aimed largely at replacing the control and amplification functions carried out by vacuum tubes. The perfect vehicle for this substitution was the radio set. Vacuum tubes were the most expensive part of a radio set and the most prone to break down. The early junction transistors operated best at low frequencies, so more research was needed to produce a commercial high-frequency transistor. Several of the licensees embarked on this quest, including the Radio Corporation of America (RCA), Texas Instruments, and the Tokyo Telecommunications Engineering Company of Japan.

Perfecting the Transistor

The Tokyo Telecommunications Engineering Company, formed in 1946, had produced a line of instruments and consumer products based on vacuum-tube technology. Its most successful product was a magnetic tape recorder. In 1952, one of the founders of the company, Masaru Ibuka, visited the United States to learn more about the use of tape recorders in schools and found out that Western Electric was preparing to license the transistor patent. With only the slightest understanding of the workings of semiconductors, Tokyo Telecommunications purchased a license in 1954 with the intention of using transistors in a radio set.

The first task facing the Japanese was to increase the frequency response of the transistor to make it suitable for radio use. Then a method of manufacturing transistors cheaply had to be found. At the time, junction transistors were made from slices of germanium crystal. Growing the crystal was not an exact science, nor was the process of "doping" it with impurities to form the different layers of conductivity that made semiconductors useful. The Japanese engineers found that the failure rate for high-frequency transistors was extremely high.
The yield of good transistors from one batch ran as low as 5 percent, which made them extremely expensive and put the whole project in doubt. The effort to replace vacuum tubes with components made of semiconductors was motivated by cost rather than performance; if transistors proved to be more expensive, then it was not worth using them.
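The economics of low yield are easy to make concrete: the cost of every failed device in a batch is carried by the few good ones. The batch figures below are hypothetical, since the article gives only the 5 percent yield.

```python
# Why low yield made early transistors expensive: effective cost per
# usable device = total batch cost / number of good devices.
def cost_per_good_unit(batch_cost: float, batch_size: int, yield_rate: float) -> float:
    """Effective cost of one usable transistor from a production batch."""
    good_units = batch_size * yield_rate
    if good_units < 1:
        raise ValueError("batch yields no usable units")
    return batch_cost / good_units

# Hypothetical numbers: a batch of 1,000 devices costing $500 to produce.
# At the reported 5 percent yield, each good unit effectively costs $10;
# at an 80 percent yield, it costs about $0.63.
low = cost_per_good_unit(500.0, 1000, 0.05)
high = cost_per_good_unit(500.0, 1000, 0.80)
print(f"5% yield:  ${low:.2f} per good transistor")
print(f"80% yield: ${high:.2f} per good transistor")
```

A sixteen-fold difference in yield thus translates directly into a sixteen-fold difference in unit cost, which is why a 5 percent yield threatened the whole project.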


Engineers from Tokyo Telecommunications again came to the United States to search for information about the production of transistors. In 1954, the first high-frequency transistor was produced in Japan. The success of Texas Instruments in producing the components for the first transistorized radio (introduced by the Regency Company in 1954) spurred the Japanese to greater efforts. Much of their engineering and research work was directed at the manufacture and quality control of transistors. In 1955, they introduced their transistor radio, the TR-55, which carried the brand name "Sony." The name was chosen because the executives of the company believed that the product would have international appeal and therefore needed a brand name that could be recognized easily and remembered in many languages. In 1957, the name of the entire company was changed to Sony.

Impact

Although Sony's transistor radios were successful in the marketplace, they were still relatively large and cumbersome. Ibuka saw a consumer market for a miniature radio and gave his engineers the task of designing a radio small enough to fit into a shirt pocket. The realization of this design, the "Transistor Six," was introduced in 1957. It was an immediate success. Sony sold the radios by the millions, and numerous imitations were also marketed under brand names such as "Somy" and "Sonny." The product became an indispensable part of popular culture of the late 1950's and 1960's; its low cost enabled the masses to enjoy radio wherever there were broadcasts.

The pocket-sized radio was the first of a line of electronic consumer products that brought technology into personal contact with the user. Sony was convinced that miniaturization did more than make products more portable; it established a one-on-one relationship between people and machines. Sony produced the first all-transistor television in 1960. Two years later, it began to market a miniature television in the United States.
The continual reduction in the size of Sony’s tape recorders reached a climax with the portable tape player introduced in the 1980’s. The Sony Walkman was a marketing triumph and a further reminder that Japanese companies led the way in the design and marketing of electronic products.


John Bardeen

The transistor reduced the size of electronic circuits and, at the same time, the amount of energy lost from them as heat. Superconduction gave rise to electronic circuits with practically no loss of energy at all. John Bardeen helped unlock the secrets of both.

Bardeen was born in 1908 in Madison, Wisconsin, where his mother was an artist and his father was a professor of anatomy at the University of Wisconsin. Bardeen attended the university, earning a bachelor’s degree in electrical engineering in 1928 and a master’s degree in geophysics in 1929. After working as a geophysicist, he entered Princeton University, studying with Eugene Wigner, the leading authority on solid-state physics, and received a doctorate in mathematics and physics in 1936. Bardeen taught at Harvard University and the University of Minnesota until World War II, when he moved to the Naval Ordnance Laboratory. Finding academic salaries too low to support his family after the war, he accepted a position at Bell Telephone Laboratories. There, with Walter Brattain, he turned William Shockley’s theory of semiconductors into a practical device—the transfer resistor, or transistor.

He returned to academia as a professor at the University of Illinois and began to investigate a long-standing mystery in physics, superconductivity, with a postdoctoral associate, Leon Cooper, and a graduate student, J. Robert Schrieffer. In 1956 Cooper made a key discovery: superconducting electrons travel in pairs. While Bardeen was in Stockholm, Sweden, collecting a share of the 1956 Nobel Prize in Physics for his work on transistors, Schrieffer worked out a mathematical analysis of the phenomenon. The theory that the three men published became known as BCS theory, from the first letters of their last names; besides explaining superconductors, it pointed toward a great deal of new technology and additional basic research.

The team won the 1972 Nobel Prize in Physics for BCS theory, making Bardeen the only person ever to win two Nobel Prizes in physics. He retired in 1975 and died sixteen years later.

See also Compact disc; FM radio; Radio; Radio crystal sets; Television; Transistor; Walkman cassette player.


Further Reading

Handy, Roger, Maureen Erbe, and Aileen Antonier. Made in Japan: Transistor Radios of the 1950s and 1960s. San Francisco: Chronicle Books, 1993.
Marshall, David V. Akio Morita and Sony. Watford: Exley, 1995.
Morita, Akio, with Edwin M. Reingold and Mitsuko Shimomura. Made in Japan: Akio Morita and Sony. London: HarperCollins, 1994.
Nathan, John. Sony: The Private Life. London: HarperCollinsBusiness, 2001.


Tuberculosis vaccine

The invention: A vaccine that uses an avirulent (nondisease-causing) strain of bovine tuberculosis bacilli and is safer than earlier vaccines.

The people behind the invention:
Albert Calmette (1863-1933), a French microbiologist
Camille Guérin (1872-1961), a French veterinarian and microbiologist
Robert Koch (1843-1910), a German physician and microbiologist

Isolating Bacteria

Tuberculosis, once called “consumption,” is a deadly, contagious disease caused by the bacterium Mycobacterium tuberculosis, first identified by the eminent German physician Robert Koch in 1882. The bacterium can be transmitted from person to person by physical contact or droplet infection (for example, sneezing). The condition eventually inflames and damages the lungs, causing difficulty in breathing and failure of the body to deliver sufficient oxygen to various tissues. It can spread to other body tissues, where further complications develop. Without treatment, the disease progresses, disabling and eventually killing the victim. Tuberculosis normally is treated with a combination of antibiotics and other drugs.

Koch developed his approach for identifying bacterial pathogens (disease producers) with simple equipment, primarily the microscope. Having taken blood samples from diseased animals, he would identify and isolate the bacteria he found in the blood. Each strain of bacteria would be injected into a healthy animal, which would then develop the disease caused by that particular strain. In 1890, Koch discovered that a chemical released from tubercular bacteria elicits a hypersensitive (allergic) reaction in individuals previously exposed to or suffering from tuberculosis. This chemical, called “tuberculin,” was isolated from culture extracts in which tubercular bacteria were being grown.


When small amounts of tuberculin are injected into a person subcutaneously (beneath the skin), a reddened, inflamed patch approximately the size of a quarter develops if the person has been exposed to or is suffering from tuberculosis. Injection of tuberculin into an uninfected person yields a negative response (that is, no inflammation). Tuberculin does not harm those being tested.

Tuberculosis’s Weaker Grandchildren

The first vaccine to prevent tuberculosis was developed in 1921 by two French microbiologists, Albert Calmette and Camille Guérin. Calmette was a student of the eminent French microbiologist Louis Pasteur at Pasteur’s Institute in Paris. Guérin was a veterinarian who joined Calmette’s laboratory in 1897. At Lille, Calmette and Guérin focused their research upon the microbiology of infectious diseases, especially tuberculosis. In 1906, they discovered that individuals who had been exposed to tuberculosis or who had mild infections were developing resistance to the disease. They found that resistance to tuberculosis was initiated by the body’s immune system. They also discovered that tubercular bacteria grown in culture over many generations become progressively weaker and avirulent, losing their ability to cause disease.

From 1906 through 1921, Calmette and Guérin cultured tubercle bacilli from cattle. With proper nutrients and temperature, bacteria can reproduce by fission (that is, one bacterium splits into two bacteria) in as little as thirty minutes. Calmette and Guérin cultivated these bacteria in a bile-derived food medium for thousands of generations over fifteen years, periodically testing the bacteria for virulence by injecting them into cattle. After many generations, the bacteria lost their virulence. Nevertheless, these weaker, or avirulent, bacteria still stimulated the animals’ immune systems to produce antibodies.
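The power of fission is easy to see in the arithmetic: each generation doubles the population. The short sketch below is an idealized illustration only (real serial culture, as the passage above makes clear, proceeds far more slowly than uninterrupted thirty-minute divisions):

```python
def population_after(generations, start=1):
    """Idealized binary fission: each generation doubles the count."""
    return start * 2 ** generations

# One bacterium dividing every thirty minutes passes through
# ten generations in five hours.
print(population_after(10))  # 1024
```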
Calmette and Guérin had successfully bred a strain of avirulent bacteria that could not cause tuberculosis in cows but could still stimulate immunity against the disease. There was considerable concern, however, over whether the avirulent strain was harmless to humans. Calmette and Guérin therefore continued cultivating weaker versions of the avirulent strain that retained antibody-stimulating capacity. By 1921, they had isolated an avirulent, antibody-stimulating strain that was harmless to humans, a strain they called “Bacillus Calmette-Guérin” (BCG). In 1922, they began BCG-vaccinating newborn children against tuberculosis at the Charité Hospital in Paris. The immunized children exhibited no ill effects from the BCG vaccination. Calmette and Guérin’s vaccine was so successful in controlling the spread of tuberculosis in France that it attained widespread use in Europe and Asia beginning in the 1930’s.

Impact

Most bacterial vaccines involve the use of antitoxin or heat- or chemically treated bacteria. BCG is one of the few vaccines that use specially bred live bacteria. Its use sparked some controversy in the United States and England, where the medical community questioned its effectiveness and postponed BCG immunization until the late 1950’s. Extensive testing of the vaccine was performed at the University of Illinois before it was adopted in the United States. Its effectiveness is questioned by some physicians to this day.

Some of the controversy stems from the fact that the avirulent, antibody-stimulating BCG vaccine interferes with the tuberculin skin test, which is designed to identify people suffering from tuberculosis so that they can be treated. A BCG-vaccinated person will have a positive tuberculin skin test similar to that of a tuberculosis sufferer. If a physician does not know that a patient has had a BCG vaccination, it will be presumed (incorrectly) that the patient has tuberculosis. Nevertheless, the BCG vaccine has been invaluable in curbing the worldwide spread of tuberculosis, although it has not eradicated the disease.

See also Antibacterial drugs; Birth control pill; Penicillin; Polio vaccine (Sabin); Polio vaccine (Salk); Salvarsan; Typhus vaccine; Yellow fever vaccine.


Further Reading

Daniel, Thomas M. Pioneers of Medicine and Their Impact on Tuberculosis. Rochester, N.Y.: University of Rochester Press, 2000.
DeJauregui, Ruth. 100 Medical Milestones That Shaped World History. San Mateo, Calif.: Bluewood Books, 1998.
Fry, William F. “Prince Hamlet and Professor Koch.” Perspectives in Biology and Medicine 40, no. 3 (Spring, 1997).
Lutwick, Larry I. New Vaccines and New Vaccine Technology. Philadelphia: Saunders, 1999.


Tungsten filament

The invention: The metal filament used in the incandescent light bulbs that have long provided most of the world’s electrical lighting.

The people behind the invention:
William David Coolidge (1873-1975), an American electrical engineer
Thomas Alva Edison (1847-1931), an American inventor

The Incandescent Light Bulb

The electric lamp developed along with the understanding of electricity in the latter half of the nineteenth century. In 1841, the first patent for an incandescent lamp was granted in Great Britain. A patent is a legal claim that protects the patent holder for a period of time from others who might try to copy the invention and make a profit from it. Although others tried to improve upon the incandescent lamp, real progress was not made until 1877, when Thomas Alva Edison, the famous inventor, became interested in developing a successful electric lamp. The Edison Electric Light Company was founded in 1878, and in 1892, it merged with other companies to form the General Electric Company.

Early electric lamps used platinum wire as a filament. Because platinum is expensive, alternative filament materials were sought. After testing many substances, Edison finally decided to use carbon as a filament material. Although carbon is fragile, making it difficult to manufacture filaments, it was the best choice available at the time.

The Manufacture of Ductile Tungsten

Edison and others had tested tungsten as a possible material for lamp filaments but discarded it as unsuitable. Tungsten is a hard, brittle metal that is difficult to shape and easy to break, but it possesses properties that are needed for lamp filaments. It has the highest melting point (3,410 degrees Celsius) of any known metal; therefore, it can be heated to a very high temperature, giving off a relatively large amount of radiation, without melting (as platinum does) or decomposing (as carbon does). The radiation it emits when heated is primarily visible light. Its electrical resistance is relatively high, so it requires little electric current to reach its operating temperature. It also has a high boiling point (about 5,900 degrees Celsius) and therefore does not tend to boil away, or vaporize, when heated. In addition, it is mechanically strong and resists breakage caused by mechanical shock.

William David Coolidge, an electrical engineer with the General Electric Company, was assigned in 1906 the task of transforming tungsten from its natural state into a form suitable for lamp filaments. The accepted procedure for producing fine metal wires was (and still is) to force a wire rod through successively smaller holes in a hard metal block until a wire of the proper diameter is achieved. The property that allows a metal to be drawn into a fine wire by this procedure is called “ductility.” Tungsten is not naturally ductile, and it was Coolidge’s assignment to make it into a ductile form. Over a period of five years, and after many failures, Coolidge and his workers achieved their goal. By 1911, General Electric was selling lamps that contained tungsten filaments.

Originally, Coolidge attempted to mix powdered tungsten with a suitable substance, form a paste, and squirt that paste through a die to form the wire. The paste-wire was then sintered (heated at a temperature slightly below its melting point) in an effort to fuse the powder into a solid mass. Because of its higher boiling point, the tungsten would remain after all the other components in the paste boiled away. At about 300 degrees Celsius, tungsten softens sufficiently to be hammered into an elongated form. Upon cooling, however, tungsten again becomes brittle, which prevents it from being shaped further into filaments.
It was suggested that impurities in the tungsten caused the brittleness, but specially purified tungsten worked no better than the unpurified form. Many metals can be reduced from rods to wires if the rods are passed through a series of rollers that are successively closer together. Some success was achieved with this method when the rollers were heated along with the metal, but it was still not possible to produce sufficiently fine wire. Next, Coolidge tried a procedure called “swaging,” in which a thick wire is repeatedly and rapidly struck by a series of rotating hammers as the wire is drawn past them. After numerous failures, a fine wire was successfully produced using this procedure. It was still too thick for lamp filaments, but it was ductile at room temperature. Microscopic examination of the wire revealed a change in the crystalline structure of tungsten as a result of the various treatments: the individual crystals had elongated, taking on a fiberlike appearance.

Now the wire could be drawn through a die to achieve the appropriate thickness. Again, the wire had to be heated, and if the temperature was too high, the tungsten reverted to a brittle state. The dies themselves were heated, and the reduction progressed in stages, each of which reduced the wire’s diameter by a thousandth of an inch. Finally, Coolidge had been successful. Pressed tungsten bars measuring 1/4 × 3/8 × 6 inches were hammered and rolled into rods 1/8 inch, or 125/1000 inch, in diameter. The unit 1/1000 inch is often called a “mil.” These rods were then swaged to approximately 30 mils and then passed through dies to achieve the filament size of 25 mils or smaller, depending on the power output of the lamp in which the filament was to be used. Tungsten wires of 1 mil or smaller are now readily available.

Impact

Ductile tungsten wire filaments are superior in several respects to platinum, carbon, or sintered tungsten filaments. Ductile filament lamps can withstand more mechanical shock without breaking, which means that they can be used in, for example, automobile headlights, in which jarring frequently occurs. Ductile wire can also be coiled into compact cylinders within the lamp bulb, which makes for a more concentrated source of light and easier focusing. Ductile tungsten filament lamps require less electricity than do carbon filament lamps, and they also last longer. Because the size of the filament wire can be carefully controlled, the light output from lamps of the same power rating is more reproducible.
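The mil unit used in these figures is simple to work with; a minimal sketch of the conversion (the diameters are the article’s own figures):

```python
MILS_PER_INCH = 1000  # 1 mil = 1/1000 inch

def inches_to_mils(inches):
    """Convert a diameter in inches to mils."""
    return inches * MILS_PER_INCH

# The 1/8-inch rods described above, expressed in mils;
# these were then swaged to about 30 mils and drawn to 25 mils or less.
print(inches_to_mils(1 / 8))  # 125.0
```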
One 60-watt bulb is therefore exactly like another in terms of light production. Improved production techniques have greatly reduced the cost of manufacturing ductile tungsten filaments and of light-bulb manufacturing in general. The modern world is heavily dependent upon this reliable, inexpensive light source, which turns darkness into daylight.

See also Fluorescent lighting; Memory metal; Steelmaking process.

Further Reading

Baldwin, Neil. Edison: Inventing the Century. Chicago: University of Chicago Press, 2001.
Cramer, Carol. Thomas Edison. San Diego, Calif.: Greenhaven Press, 2001.
Israel, Paul. Edison: A Life of Invention. New York: John Wiley, 1998.
Liebhafsky, H. A. William David Coolidge: A Centenarian and His Work. New York: Wiley, 1974.
Miller, John A. Yankee Scientist: William David Coolidge. Schenectady, N.Y.: Mohawk Development Service, 1963.


Tupperware

The invention: Trademarked food-storage products that changed the way Americans viewed plastic products and created a model for selling products in consumers’ homes.

The people behind the invention:
Earl S. Tupper (1907-1983), the founder of Tupperware
Brownie Wise, the creator of the vast home sales network for Tupperware
Morison Cousins (1934-2001), a designer hired by Tupperware to modernize its products in the early 1990’s

“The Wave of the Future”?

Relying on a belief that plastic was the wave of the future and wanting to improve on the newest refrigeration technology, Earl S. Tupper, who called himself “a ham inventor and Yankee trader,” created an empire of products that changed America’s kitchens. Tupper, a self-taught chemical engineer, began working at Du Pont in the 1930’s. This was a time of important developments in the field of polymers and the technology behind plastics. Wanting to experiment with this new material yet unable to purchase the needed supplies, Tupper went to his employer for help. Because of the limited availability of materials, major chemical companies had been receiving all the raw goods for plastic production. Although Du Pont would not part with raw materials, the company was willing to let Tupper have the slag.

Polyethylene slag was a black, rock-hard, malodorous waste product of oil refining. It was virtually unusable. Undaunted, Tupper developed methods to purify the slag. He then designed an injection molding machine to form bowls and other containers out of his “Poly-T”; Tupper did not want to call the substance plastic because of a public distrust of that material. In 1938, he founded the Tupper Plastics Company to pursue his dream. It was during those first years that he formulated the design for the famous Tupperware seal.


Refrigeration techniques had improved tremendously during the first part of the twentieth century. The iceboxes in use prior to the 1940’s were inconsistent in their interior conditions and were usually damp inside because of the melting of the ice. In addition, the metal, glass, or earthenware food storage containers used during the first half of the century did not seal tightly and allowed food to stay moist. Iceboxes allowed mixing of food odors, particularly evident with strong-smelling items such as onions and fish.

Electric Refrigerators

In contrast to iceboxes, the electric refrigerators available starting in the 1940’s maintained dry interiors and low temperatures. This change in environment resulted in food drying out and wilting. Tupper set out to alleviate this problem through his plastic containers. The key to Tupper’s solution was his containers’ seal: he took his design from paint can lids and inverted it. This tight seal created a partial vacuum that protected food from the dry refrigeration process and kept food odors sealed within containers. In 1942, Tupper bought his first manufacturing plant, in Farnumsville, Massachusetts. There he continued to improve on his designs.

In 1945, Tupper introduced Tupperware, selling it through hardware and department stores as well as through catalog sales. Tupperware products were made of flexible, translucent plastic. Available in frosted crystal and five pastel colors, the new containers were airtight and waterproof. In addition, they carried a lifetime warranty against chipping, cracking, peeling, and breaking in normal noncommercial use. Early supporters of Tupperware included the American Thermos Bottle Company, which purchased seven million nesting cups, and the Tek Corporation, which ordered fifty thousand tumblers to sell with toothbrushes. Even though he benefited from this type of corporate support, Tupper wanted his products to be for home use.
Marketing the new products proved to be difficult in the early years. Tupperware sat on hardware and department store shelves, and catalog sales were nearly nonexistent. The problem appeared to involve a basic distrust of plastic by consumers and an unfamiliarity with how to use the new products. The product did not come with instructions on how to seal the containers or descriptions of how the closed container protected the food within. Brownie Wise, an early direct seller and veteran distributor of Stanley Home Products, stated that it took her several days to understand the technology behind the seal and the now-famous Tupperware “burp,” the sound made when air leaves the container as it seals.

Wise and two other direct sellers, Tom Damigella and Harvey Hollenbush, found the niche for selling Tupperware for daily use—home sales. Wise approached Tupper with a home party sales strategy and detailed how it provided a relaxed atmosphere in which to learn about the products and thus lowered sales resistance. In April, 1951, Tupper took his product off store shelves and hired Wise to create a new direct selling system under the name of Tupperware Home Parties, Inc.

Impact

Home sales had already proved to be successful for the Fuller Brush Company and numerous encyclopedia publishers, yet Brownie Wise wanted to expand the possibilities. Her first step was to found a campus-like headquarters in Kissimmee, Florida. There, Tupper and a design department worked to develop new products, and Tupperware Home Parties, Inc., under Wise’s direction, worked to develop new incentives for Tupperware’s direct sellers, called hostesses.

Wise added spark to the notion of home demonstrations. “Parties,” as they were called, included games, recipes, giveaways, and other ideas designed to help housewives learn how to use Tupperware products. The marketing philosophy was to make parties appealing events at which women could get together while their children were in school. This fit into the suburban lifestyle of the 1950’s. These parties offered a nonthreatening means for home sales representatives to attract audiences for their demonstrations and gave guests a chance to meet and socialize with their neighbors. Often compared to the barbecue parties of the 1950’s, Tupperware parties were social, yet educational, affairs.
While guests ate lunch or snacked on desserts, the Tupperware hostess educated them about the technology behind the bowls and their seals and suggested a wide variety of uses for the products. For example, a party might include recipes for dinner parties, with information provided on how party leftovers could be stored efficiently and economically with Tupperware products.

While Tupperware products were changing the kitchens of America, they were also changing the women who sold them (almost all the hostesses were women). Tupperware sales offered employment for women at a time when society disapproved of women working outside the home. Being a hostess, however, was not a nine-to-five position. The job allowed women freedom to tailor their schedules to meet family needs. Employment offered more than the economic incentive of 35 percent of gross sales. Hostesses also learned new skills and developed self-esteem. An acclaimed mentoring program for new and advancing employees provided motivational training. Managers came only from the ranks of hostesses; moving up the corporate ladder meant spending time selling Tupperware at home parties. The opportunity to advance offered incentive. In addition, annual sales conventions were renowned for teaching new marketing strategies in fun-filled classes. These conventions also gave women an opportunity to network and establish contacts. These experiences proved to be invaluable as women entered the workforce in increasing numbers in later decades.

Expanding Home-Sales Business

The tremendous success of Tupperware’s marketing philosophy helped to set the stage for other companies to enter home sales. These companies used home-based parties to educate potential customers in familiar surroundings, in their own homes or in the homes of friends. The Mary Kay Cosmetics Company, founded in 1963, used beauty makeovers in the home party setting as its chief marketing tool. Discovery Toys, founded in 1978, encouraged guests to get on the floor and play with the toys demonstrated at its home parties. Both companies extended the socialization aspects found in Tupperware parties.
In addition to setting the standard for home sales, Tupperware is also credited with starting the plastic revolution. Early plastics were of poor quality and cracked or broke easily, which created distrust of plastic products among consumers. Earl Tupper’s demand for quality set the stage for the future of plastics. He started with high-quality resin and developed a process that kept the “Poly-T” from splitting. He then invented an injection molding machine that mass-produced his bowl and cup designs. His standards of quality from start to finish helped other companies expand into plastics. The 1950’s saw a wide variety of products appear in the improved material, including furniture and toys. This shift from wood, glass, and metal to plastic continued for decades.

Earl S. Tupper

Born in 1907, Earl Silas Tupper came from a family of go-getters. His mother, Lulu Clark Tupper, kept a boardinghouse and took in laundry, while his father, Earnest, ran a small farm and greenhouse in New Hampshire. The elder Tupper was also a small-time inventor, patenting a device for stretching out chickens to make cleaning them easier. Earl absorbed the family’s taste for invention and enterprise.

Fresh out of high school in 1925, Tupper vowed to turn himself into a millionaire by the time he was thirty. He started a landscaping and nursery business in 1928, but the Depression led his company, Tupper Tree, into bankruptcy in 1936. Tupper was undeterred. He hired on with Du Pont the next year. Du Pont taught him a great deal about the chemistry and manufacturing of plastics, but it did not give him scope to apply his ideas, so in 1938 he founded the Earl S. Tupper Company. He continued to work as a contractor for Du Pont to make the fledgling company profitable, and during World War II the company made plastic moldings for gas masks and Navy signal lamps.

Finally, in the 1940’s Tupper could devote himself to his dream—designing plastic food containers, cups, and such small household conveniences as cases for cigarette packs. Thanks to aggressive, innovative direct marketing, Tupper’s kitchenware, Tupperware, became synonymous with plastic containers during the 1950’s. In 1958 Tupper sold his company to Rexall for $16 million, having finally realized his youthful ambition to make himself wealthy through Yankee wit and hard work. He died in 1983.

Maintaining the position of Tupperware within the housewares market meant keeping current. As more Americans were able to purchase the newest refrigerators, Tupperware expanded to meet their needs. The company added new products, improved marketing strategies, and changed or updated designs. Over the years, Tupperware added baking items, toys, and home storage containers for such items as photographs, sewing materials, and holiday ornaments. The 1980’s and 1990’s brought microwaveable products. As women moved into the work force in great numbers, Tupperware moved with them. The company introduced lunchtime parties at the workplace and parties at daycare centers for busy working parents. Tupperware also started a fund-raising line, in special colors, that provided organizations with a means to bring in money while not necessitating full-fledged parties. New party themes developed around time-saving techniques and health concerns such as diet planning. Beginning in 1992, customers too busy to attend a party could call a toll-free number, request a catalog, and be put in contact with a “consultant,” as “hostesses” now were called.

Another marketing strategy developed out of a public push for environmentally conscious products. Tupperware consultants stressed the value of buying food in bulk to create less trash as well as to save money. To store these increased purchases, the company developed a new line for kitchen staples called Modular Mates. These stackable containers came in a wide variety of shapes and sizes to hold everything from cereal to flour to pasta. They were made of see-through plastic, allowing the user to see if the contents needed replenishing. Some consultants tailored parties around ideas to better organize kitchen cabinets using the new line. Another environmentally conscious product idea was the Tupperware lunch kit. These kits did away with the need for throwaway products such as paper plates, plastic storage bags, and aluminum foil.
Lunch kits marketed in other countries were developed to accommodate those countries’ particular needs. For example, Japanese designs included chopsticks, while Latin American styles were designed to hold tortillas.

Design Changes

Tupperware designs have been well received over the years. Early designs prompted a 1947 edition of House Beautiful to call the product “Fine Art for 39 cents.” Fifteen of Tupper’s earliest designs are housed in a permanent collection at the Museum of Modern Art in New York City. Other museums, such as the Metropolitan Museum of Art and the Brooklyn Museum, also house Tupperware designs. Tupperware established its own Museum of Historic Food Containers at its international headquarters in Florida. Despite this critical acclaim, the company faced a constant struggle to keep product lines competitive with more accessible products, such as those made by Rubbermaid, that could be found on the shelves of local grocery or department stores.

Some of the biggest design changes came with the hiring of Morison Cousins in the early 1990’s. Cousins, an accomplished designer, set out to modernize the Tupperware line. He sought to return to simple, traditional styles while bringing in time-saving aspects. He changed lid designs to make them easier to clean and rounded the bottoms of bowls so that every portion could be scooped out. Cousins also added thumb handles to bowls.

Backed by a knowledgeable sales force and a quality product, the company experienced tremendous growth. Tupperware sales reached $25 million in 1954. By 1958, the company had grown from seven distributorships to a vast system covering the United States and Canada. That same year, Brownie Wise left the company, and Tupper Plastics was sold to Rexall Drug Company for $9 million. Rexall Drug changed its name to Dart Industries, Inc., in 1969, then merged with Kraft, Inc., eleven years later to become Dart and Kraft, Inc. During this time of parent-company name changing, Tupperware continued to be an important subsidiary. Through the 1960’s and 1970’s, the company spread around the world, with sales in Western Europe, the Far East, and Latin America. In 1986, Dart and Kraft, Inc., split into Kraft, Inc., and Premark International, Inc., of which Dart (and therefore Tupperware) was a subsidiary.
Premark International included other home product companies such as West Bend, Precor, and Florida Tile. By the early 1990’s, annual sales of Tupperware products reached $1.1 billion. Manufacturing plants in Halls, Tennessee, and Hemingway, South Carolina, worked to meet the high demand for Tupperware products in more than fifty countries. Foreign sales accounted for almost 75 percent of the company’s business. By meeting the


needs of consumers and keeping current with design changes, new sales techniques, and new products, Tupperware was able to reach 90 percent of America’s homes.

See also Electric refrigerator; Food freezing; Freeze-drying; Microwave cooking; Plastic; Polystyrene; Pyrex glass; Teflon.

Further Reading
Brown, Patricia Leigh. “New Designs to Keep Tupperware Fresh.” New York Times (June 10, 1993).
Clarke, Alison J. Tupperware: The Promise of Plastic in 1950s America. Washington, D.C.: Smithsonian Institution Press, 1999.
Gershman, Michael. Getting It Right the Second Time. Reading, Mass.: Addison-Wesley, 1990.
Martin, Douglas. “Morison S. Cousins, Sixty-six, Designer, Dies; Revamped Tupperware’s Look with Flair.” New York Times (February 18, 2001).
Sussman, Vic. “I Was the Only Virgin at the Party.” Sales and Marketing Management 141 (September 1, 1989).


Turbojet

The invention: A jet engine with a turbine-driven compressor that uses its hot-gas exhaust to develop thrust.

The people behind the invention:
Henry Harley Arnold (1886-1950), a chief of staff of the U.S. Army Air Corps
Gerry Sayer, a chief test pilot for Gloster Aircraft Limited
Hans Pabst von Ohain (1911-1998), a German engineer
Sir Frank Whittle (1907-1996), an English Royal Air Force officer and engineer

Developments in Aircraft Design

On the morning of May 15, 1941, some eleven months after France had fallen to Adolf Hitler’s advancing German army, an experimental jet-propelled aircraft was successfully tested by pilot Gerry Sayer. The airplane had been developed in a little more than two years by the English company Gloster Aircraft under the supervision of Sir Frank Whittle, the inventor of England’s first jet engine.

Like the jet engine that powered it, the plane had a number of predecessors. In fact, the May, 1941, flight was not the first jet-powered test flight: That flight occurred on August 27, 1939, when a Heinkel aircraft powered by a jet engine developed by Hans Pabst von Ohain completed a successful test flight in Germany. During this period, Italian airplane builders were also engaged in jet aircraft testing, with lesser degrees of success.

Without the knowledge that had been gained from Whittle’s experience in experimental aviation, the test flight at the Royal Air Force’s Cranwell airfield might never have been possible. Whittle’s repeated efforts to develop turbojet propulsion engines had begun in 1928, when, as a twenty-one-year-old Royal Air Force (RAF) flight cadet at Cranwell Academy, he wrote a thesis entitled “Future Developments in Aircraft Design.” One of the principles of Whittle’s earliest research was that if aircraft were eventually to achieve very high speeds over long distances, they would have to fly at very high altitudes, benefiting from the reduced wind resistance encountered at such heights.

Whittle later stated that the speed he had in mind at that time was about 805 kilometers per hour, close to that of the first jet-powered aircraft. His earliest idea of the engines that would be necessary for such planes focused either on rocket propulsion (that is, “jets” that carry both their fuel and the oxygen needed to burn it entirely within the engine) or on gas turbines driving propellers at very high speeds. Later, it occurred to him that gas turbines could be used to provide forward thrust by what would become “ordinary” jet propulsion (that is, “thermal air” engines that take from the surrounding atmosphere the oxygen they need to ignite their fuel). Eventually, such ordinary jet engines would function according to one of four possible systems: the so-called athodyd, or continuous-firing duct; the pulsejet, or intermittent-firing duct; the turbojet, or gas-turbine jet; or the propjet, which uses a gas-turbine jet to rotate a conventional propeller at very high speeds.
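All of these “thermal air” engines generate thrust in the same basic way: by accelerating the air that passes through them. As a rough, illustrative sketch (the numbers below are assumed for illustration and do not come from this article), the net thrust of an idealized turbojet can be estimated from the momentum change of the airflow:

```python
def net_thrust(mass_flow_kg_s: float, exhaust_speed_m_s: float,
               flight_speed_m_s: float) -> float:
    """Idealized jet thrust: the momentum gained by the air each second,
    ignoring the added fuel mass and nozzle pressure terms."""
    return mass_flow_kg_s * (exhaust_speed_m_s - flight_speed_m_s)

# Illustrative figures only: 20 kg of air per second accelerated from a
# 150 m/s flight speed to a 600 m/s exhaust speed yields about 9,000 N.
print(net_thrust(20.0, 600.0, 150.0))
```

The same relation shows why exhaust velocity matters so much in Whittle’s four engine types: for a given airflow, thrust grows directly with how much the engine speeds the air up.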

Passing the Test

The aircraft that was to be used for the flight tests was completed by April, 1941. On April 7, tests were conducted on the ground at Gloster Aircraft’s landing strip at Brockworth by chief test pilot Sayer. At this point, all parties concerned tried to determine whether the jet engine’s capacity would be sufficient to push the aircraft forward with enough speed to make it airborne. Sayer dared to take the plane off the ground for short hops of between 183 and 273 meters, despite the technical staff’s warnings against trying to fly during the first ground tests.

On May 15, the first real test was conducted at Cranwell. During that test, Sayer flew the plane, now called the Pioneer, for seventeen minutes at altitudes exceeding 300 meters and at a conservative test speed exceeding 595 kilometers per hour, equivalent to the top speed then possible in the RAF’s most versatile fighter plane, the Spitfire.


Once it was clear that the tests undertaken at Cranwell were not only successful but also highly promising in terms of even better performance, a second, more extensive test was set for May 21, 1941. It was this later demonstration that caused the Ministry of Aircraft Production (MAP) to initiate the first steps to produce the Meteor jet fighter on a full industrial scale barely more than a year after the Cranwell test flight.

Impact

Since July, 1936, the Junkers engine and aircraft companies in Hitler’s Germany had been part of a new secret branch dedicated to the development of a turbojet-driven aircraft. In the same period, Junkers’ rival in the German aircraft industry, Heinkel, Inc., approached von Ohain, who was far enough along in his work on the turbojet principle to have patented a device very similar to Whittle’s in 1935. A later model of this jet engine would power a test aircraft in August, 1939.

In the meantime, the wider impact of the flight was the result of decisions made by General Henry Harley Arnold, chief of staff of the U.S. Army Air Corps. Even before learning of the successful flight in May, he made arrangements to have one of Whittle’s engines shipped to the United States to be used by General Electric Company as a model for U.S. production. The engine arrived in October, 1941, and within one year, a General Electric-built engine powered a Bell Aircraft plane, the XP-59A Airacomet, in its maiden flight.

The jet airplane was not perfected in time to have any significant impact on the outcome of World War II, but all of the wartime experimental jet aircraft developments that were either sparked by the 1941 flight or preceded it prepared the way for the research and development projects that would leave a permanent revolutionary mark on aviation history in the early 1950’s.

See also Airplane; Dirigible; Rocket; Stealth aircraft; Supersonic passenger plane; V-2 rocket.


Further Reading
Adams, Robert. “Smithsonian Horizons.” Smithsonian 18 (July, 1987).
Boyne, Walter J., Donald S. Lopez, and Anselm Franz. The Jet Age: Forty Years of Jet Aviation. Washington, D.C.: National Air and Space Museum, 1979.
Constant, Edward W. The Origins of the Turbojet Revolution. Baltimore: Johns Hopkins University Press, 1980.
Launius, Roger D. Innovation and the Development of Flight. College Station: Texas A&M University Press, 1999.


Typhus vaccine

The invention: The first effective vaccine against the virulent typhus disease.

The person behind the invention:
Hans Zinsser (1878-1940), an American bacteriologist and immunologist

Studying Diseases

As a bacteriologist and immunologist, Hans Zinsser was interested in how infectious diseases spread. During an outbreak of typhus in Serbia in 1915, he traveled with a Red Cross team so that he could study the disease. He made similar trips to the Soviet Union in 1923, Mexico in 1931, and China in 1938. His research showed that, as had been suspected, typhus was caused by the rickettsia, an organism that had been identified in 1916 by Henrique da Rocha-Lima. The organism was known to be carried by a louse or a rat flea and transmitted to humans through a bite. Poverty, dirt, and overcrowding led to environments that helped the typhus disease to spread.

The rickettsia is a microorganism that is rod-shaped or spherical. Within the insect’s body, it works its way into the cells that line the gut. Multiplying within this tissue, the rickettsia passes from the insect body with the feces. Since its internal cells are being destroyed, the insect dies within three weeks after it has been infected with the microorganism. As the infected flea or louse feeds on a human, it causes itching. When the bite is scratched, the skin may be opened, and the insect feces, carrying rickettsia, can then enter the body. Also, dried airborne feces can be inhaled.

Once inside the human, the rickettsia invades endothelial cells and causes an inflammation of the blood vessels. Cell death results, and this leads to tissue death. In a few days, the infected person may have a rash, a severe headache, a fever, dizziness, ringing in the ears, or deafness. Also, light may hurt the person’s eyes, and the thinking processes become foggy and mixed up. (The word “typhus” comes from a Greek word meaning “cloudy” or “misty.”) Without treatment, the victim dies within nine to eighteen days.

Medical science now recognizes three forms of typhus: the epidemic louse-borne, the Brill-Zinsser, and the murine (or rodent-related) form. The epidemic louse-borne (or “classical”) form is the most severe. The Brill-Zinsser (or “endemic”) form is similar but less severe. The murine form of typhus is also milder than the epidemic type. In 1898, a researcher named Brill studied typhus among immigrants in New York City; the form of typhus he found was called “Brill’s disease.” In the late 1920’s, Hermann Mooser proved that Brill’s disease was carried by the rat flea. When Zinsser began his work on typhus, he realized that what was known about the disease had never been properly organized. Zinsser and his coworkers, including Mooser and others, worked to identify the various types of typhus. In the 1930’s, Zinsser suggested that the typhus studied by Brill in New York City had actually included two types: the rodent-associated form and Brill’s disease. As a result of Zinsser’s effort to identify the types of typhus disease, it was renamed Brill-Zinsser disease.

Making a Vaccine

Zinsser’s studies had shown him that the disease-causing organism in typhus contained some kind of antigen, most likely a polysaccharide. In 1932, Zinsser would identify agglutinins, or antibodies, in the blood serum of patients who had the murine and classical forms of typhus. Zinsser believed that a vaccine could be developed to prevent the spread of typhus. He realized, however, that a large number of dead microorganisms was needed to help people develop an immunity. Zinsser and his colleagues set out to develop a method of growing organisms in large quantities in tissue culture. The infected tissue was used to inoculate large quantities of normal chick tissue, and this tissue was then grown in flasks.
In this way, Zinsser’s team was able to produce the quantities of microorganisms they needed.

The type of immunization that Zinsser developed (in 1930) is known as “active immunity.” The infecting organisms carry antigens, which stimulate the production of antibodies. The antigens can elicit an immune reaction even if the cell is weak or dead. “B” cells and macrophages, both of which are used in fighting disease organisms, recognize and respond to the antigen. The B cells produce antibodies that can destroy the invading organism directly or attract more macrophages to the area so that they can attack the organism. B cells also produce “memory cells,” which remain in the blood and trigger a quick second response if there is a later infection. Since the vaccine contains weakened or dead organisms, the person who is vaccinated may have a mild reaction but does not actually come down with the disease.

Impact

Typhus is still common in many parts of the world, especially where there is poverty and overcrowding. Classical typhus is quite rare; the last report of this type of typhus in the United States was in 1921. Endemic and murine typhus are more common. In the United States, where children are vaccinated against the disease, only about fifty cases are now reported each year. Antibiotics such as tetracycline and chloramphenicol are effective in treating the disease, so few infected people now die of the disease in areas where medical care is available.

The work of Zinsser and his colleagues was very important in stopping the spread of typhus. Zinsser’s classification of different types of the disease meant that it was better understood, and this led to the development of cures. The control of lice and rodents and improved cleanliness in living conditions helped bring typhus under control. Once Zinsser’s vaccine was available, even people who lived in crowded inner cities could be protected against the disease. Zinsser’s research in growing the rickettsia in tissue culture also inspired further work. Other researchers modified and improved his technique so that the use of tissue culture is now standard in laboratories.
See also Antibacterial drugs; Birth control pill; Penicillin; Polio vaccine (Sabin); Polio vaccine (Salk); Salvarsan; Tuberculosis vaccine; Yellow fever vaccine.


Further Reading
DeJauregui, Ruth. 100 Medical Milestones That Shaped World History. San Mateo, Calif.: Bluewood Books, 1998.
Gray, Michael W. “Rickettsia in Medicine and History.” Nature 396, no. 6707 (November, 1998).
Hoff, Brent H., Carter Smith, and Charles H. Calisher. Mapping Epidemics: A Historical Atlas of Disease. New York: Franklin Watts, 2000.


Ultracentrifuge

The invention: A super-high-velocity centrifuge designed to separate colloidal or submicroscopic substances, the ultracentrifuge was used to measure the molecular weight of proteins and proved that proteins are large molecules.

The people behind the invention:
Theodor Svedberg (1884-1971), a Swedish physical chemist and 1926 Nobel laureate in chemistry
Jesse W. Beams (1898-1977), an American physicist
Arne Tiselius (1902-1971), a Swedish physical biochemist and 1948 Nobel laureate in chemistry

Svedberg Studies Colloids

Theodor “The” Svedberg became the principal founder of molecular biology when he invented the ultracentrifuge and used it to examine proteins in the mid-1920’s. He began to study materials called “colloids” as a Swedish chemistry student at the University of Uppsala and continued to conduct experiments with colloidal systems when he joined the faculty in 1907.

A colloid is a kind of mixture in which very tiny particles of one substance are mixed uniformly with a dispersing medium (often water) and remain suspended indefinitely. These colloidal dispersions play an important role in many chemical and biological systems. The size of the colloid particles must fall within a certain range. The force of gravity will cause them to settle if they are too large. If they are too small, the properties of the mixture change, and a solution is formed. Some examples of colloidal systems include mayonnaise, soap foam, marshmallows, the mineral opal, fog, India ink, jelly, whipped cream, butter, paint, and milk. Svedberg wondered what such different materials could have in common. His early work helped to explain why colloids remain in suspension. Later, he developed the ultracentrifuge to measure the weight of colloid particles by causing them to settle in a controlled way.


Svedberg Builds an Ultracentrifuge

Svedberg was a successful chemistry professor at the University of Uppsala in Sweden when he had the idea that colloids could be made to separate from suspension by means of centrifugal force. Centrifugal force is caused by circular motion and acts on matter much as gravity does. A person can feel this force by tying a ball to a rope and whirling it rapidly in a circle. The pull on the rope becomes stronger as the ball moves faster in its circular orbit. A centrifuge works the same way: It is a device that spins balanced containers of substances very rapidly. Svedberg figured that it would take a centrifugal force thousands of times the force of gravity to cause colloid particles to settle. How fast they settle depends on their size and weight, so the ultracentrifuge can also provide a measure of these properties.

Centrifuges were already used to separate cream from whole milk and blood corpuscles from plasma, but these centrifuges were too slow to cause the separation of colloids. An ultracentrifuge, one that could spin samples much faster, was needed, and Svedberg made plans to build one. The opportunity came in 1923, when Svedberg spent eight months as visiting professor in the chemistry department of the University of Wisconsin at Madison and worked with J. Burton Nichols, one of the six graduate students assigned to assist him. Here, Svedberg announced encouraging results with an electrically driven centrifuge (not yet an ultracentrifuge) that attained a centrifugal force equal to about 150 times that of gravity. Svedberg returned to Sweden and, within a year, built a centrifuge capable of generating 7,000 times the force of gravity. He used it with Herman Rinde, a colleague at the University of Uppsala, to separate the suspended particles of colloidal gold. This was in 1924, which is generally accepted as the date of the first use of a true ultracentrifuge.
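The relation between spin speed, rotor radius, and the “times the force of gravity” figures quoted for these machines is simple circular-motion physics: the centrifugal acceleration is the square of the angular speed times the radius. A small illustrative sketch (the 5.6-centimeter rotor radius is an assumed figure for illustration, not a value from the text):

```python
import math

def relative_centrifugal_force(rpm: float, radius_m: float) -> float:
    """Centrifugal acceleration at the given radius for a rotor spinning
    at rpm, expressed as a multiple of standard gravity (9.81 m/s^2)."""
    omega = 2.0 * math.pi * rpm / 60.0  # angular speed in radians per second
    return omega ** 2 * radius_m / 9.81

# An oil-turbine machine running at more than 40,000 revolutions per
# minute, with an assumed rotor radius of about 5.6 cm, develops roughly
# the 100,000 times gravity described for Svedberg's later instruments.
print(relative_centrifugal_force(40_000, 0.056))
```

The squared dependence on spin speed is why modest increases in revolutions per minute yield dramatic gains in separating power.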
From 1925 to 1926, Svedberg raised the funds to build an even more powerful ultracentrifuge. It would be driven by an oil turbine, a machine capable of producing more than 40,000 revolutions per minute to generate a force 100,000 times that of gravity. Svedberg and Robin Fahraeus used the new ultracentrifuge to separate the protein hemoglobin from its colloidal suspension.

Together with fats and carbohydrates, proteins are one of the most abundant organic constituents of living organisms. No protein had been isolated in pure form before Svedberg began this study, and it was uncertain whether proteins consisted of molecules of a single compound or mixtures of different substances working together in biological systems. The colloid particles of Svedberg’s previous studies separated at different rates, some settling faster than others, showing that they had different sizes and weights. Colloid particles of the protein, however, separated together. The uniform separation observed for proteins, such as hemoglobin, demonstrated for the first time that each protein consists of identical well-defined molecules. More than one hundred proteins were studied by Svedberg and his coworkers, who extended their technique to carbohydrate polymers such as cellulose and starch.

Impact

Svedberg built more and more powerful centrifuges so that smaller and smaller molecules could be studied. In 1936, he built an ultracentrifuge that produced a centrifugal force of more than a half-million times the force of gravity. Jesse W. Beams was an American pioneer in ultracentrifuge design. He reduced the friction of an air-driven rotor by first housing it in a vacuum, in 1934, and later by supporting it with a magnetic field.

The ultracentrifuge was a central tool for providing a modern understanding of the molecular basis of living systems, and it is employed in thousands of laboratories for a variety of purposes. It is used to analyze the purity and the molecular properties of substances containing large molecules, from the natural products of the biosciences to the synthetic polymers of chemistry. The ultracentrifuge is also employed in medicine to analyze body fluids, and it is used in biology to isolate viruses and the components of fractured cells.

Svedberg, while at Wisconsin in 1923, invented a second, very different method to separate proteins in suspension using electric currents.
It is called “electrophoresis,” and it was later improved by his student, Arne Tiselius, for use in his famous study of the proteins in blood serum. The technique of electrophoresis is as widespread and important as is the ultracentrifuge.

See also Ultramicroscope; X-ray crystallography.


Further Reading
Lechner, M. D. Ultracentrifugation. New York: Springer, 1994.
Rickwood, David. Preparative Centrifugation: A Practical Approach. New York: IRL Press at Oxford University Press, 1992.
Schuster, Todd M. Modern Analytical Ultracentrifugation: Acquisition and Interpretation of Data for Biological and Synthetic Polymer Systems. Boston: Birkhäuser, 1994.
Svedberg, Theodor B., Kai Oluf Pedersen, and Johannes Henrik Bauer. The Ultracentrifuge. Oxford: Clarendon Press, 1940.


Ultramicroscope

The invention: A microscope characterized by high-intensity illumination for the study of exceptionally small objects, such as colloidal substances.

The people behind the invention:
Richard Zsigmondy (1865-1929), an Austrian-born German organic chemist who won the 1925 Nobel Prize in Chemistry
H. F. W. Siedentopf (1872-1940), a German physicist-optician
Max von Smoluchowski (1879-1961), a German organic chemist

Accidents of Alchemy

Richard Zsigmondy’s invention of the ultramicroscope grew out of his interest in colloidal substances. Colloids consist of tiny particles of a substance that are dispersed throughout a solution of another material or substance. Zsigmondy first became interested in colloids while working as an assistant to the physicist Adolf Kundt at the University of Berlin in 1892. Although originally trained as an organic chemist, in which discipline he took his Ph.D. at the University of Munich in 1890, Zsigmondy became particularly interested in colloidal substances containing fine particles of gold that produce lustrous colors when painted on porcelain. For this reason, he abandoned organic chemistry and devoted his career to the study of colloids.

Zsigmondy began intensive research into his new field of interest in 1893, when he returned to Austria to accept a post as lecturer at a technical school at Graz. Zsigmondy became especially interested in gold-ruby glass, the accidental invention of the seventeenth-century alchemist Johann Kunckel. Kunckel, while pursuing the alchemist’s pipe dream of transmuting base substances (such as lead) into gold, discovered instead a method of producing glass with a beautiful, deep red luster by suspending very fine particles of gold throughout the liquid glass before it was cooled. Zsigmondy also began studying a colloidal pigment called “purple of Cassius,” the discovery of another seventeenth-century alchemist, Andreas Cassius.


Zsigmondy soon discovered that purple of Cassius was a colloidal solution and not, as most chemists believed at the time, a chemical compound. This fact allowed him to develop techniques for glass and porcelain coloring with great commercial value, which led directly to his 1897 appointment to a research post with the Schott Glass Manufacturing Company in Jena, Germany. With the Schott Company, Zsigmondy concentrated on the commercial production of colored glass objects. His most notable achievement during this period was the invention of Jena milk glass, which is still prized by collectors throughout the world.

Brilliant Proof

While studying colloids, Zsigmondy devised experiments that proved that purple of Cassius was colloidal. When he published the results of his research in professional journals, however, they were not widely accepted by the scientific community. Other scientists were not able to replicate Zsigmondy’s experiments and consequently denounced them as flawed. The criticism of his work in technical literature stimulated Zsigmondy to make his greatest discovery, the ultramicroscope, which he developed to prove his theories regarding purple of Cassius.

The problem with proving the exact nature of purple of Cassius was that the scientific instruments available at the time were not sensitive enough for direct observation of the particles suspended in a colloidal substance. Using the facilities and assisted by the staff (especially H. F. W. Siedentopf, an expert in optical lens grinding) of the Zeiss Glass Manufacturing Company of Jena, Zsigmondy developed an ingenious device that permitted direct observation of individual colloidal particles. This device, which its developers named the “ultramicroscope,” made use of a principle that already existed.
Sometimes called “darkfield illumination,” this method consisted of shining a light (usually sunlight focused by mirrors) through the solution under the microscope at right angles to the observer, rather than shining the light directly from the observer into the solution. The resulting effect is similar to that obtained when a beam of sunlight is admitted to a closed room through a small window. If an observer stands back from and at right angles to such a beam, many dust particles suspended in the air will be observed that otherwise would not be visible. Zsigmondy’s device shines a very bright light through the substance or solution being studied. From the side, the microscope then focuses on the light shaft. This process enables the observer using the ultramicroscope to view colloidal particles that are ordinarily invisible even to the strongest conventional microscope. To a scientist viewing purple of Cassius, for example, colloidal gold particles as small as one ten-millionth of a millimeter in size become visible.

Richard Zsigmondy

Born in Vienna, Austria, in 1865, Richard Adolf Zsigmondy came from a talented, energetic family. His father, a celebrated dentist and inventor of medical equipment, inspired his children to study the sciences, while his mother urged them to spend time outdoors in strenuous exercise. Although his father died when Zsigmondy was fifteen, the teenager’s interest in chemistry was already firmly established. He read advanced chemistry textbooks and worked on experiments in his own home laboratory.

After taking his doctorate at the University of Munich and teaching in Berlin and Graz, Austria, he became an industrial chemist at the glassworks in Jena, Germany. However, pure research was his love, and he returned to it, working entirely on his own after 1900. In 1907 he received an appointment as professor and director of the Institute of Inorganic Chemistry at the University of Göttingen, one of the scientific centers of the world. There he accomplished much of his ground-breaking work on colloids and Brownian motion, despite the severe shortages that hampered him during the economic depression in Germany following World War I. His 1925 Nobel Prize in Chemistry, especially the substantial money award, helped him overcome his supply problems. He retired in early 1929 and died seven months later.

Impact

After Zsigmondy’s invention of the ultramicroscope in 1902, the University of Göttingen appointed him professor of inorganic chemistry and director of its Institute for Inorganic Chemistry. Using the ultramicroscope, Zsigmondy and his associates quickly proved that purple of Cassius is indeed a colloidal substance. That finding, however, was the least of the spectacular discoveries that resulted from Zsigmondy’s invention.

In the next decade, Zsigmondy and his associates found that color changes in colloidal gold solutions result from coagulation, that is, from changes in the size and number of gold particles in the solution caused by particles bonding together. Zsigmondy found that coagulation occurs when the negative electrical charge of the individual particles is removed by the addition of salts. Coagulation can be prevented or slowed by the addition of protective colloids. These observations also made possible the determination of the speed at which coagulation takes place, as well as the number of particles in the colloidal substance being studied. With the assistance of the organic chemist Max von Smoluchowski, Zsigmondy worked out a complete mathematical formula of colloidal coagulation that is valid not only for gold colloidal solutions but also for all other colloids. Colloidal substances include blood and milk, which both coagulate, thus giving Zsigmondy’s work relevance to the fields of medicine and agriculture. These observations and discoveries concerning colloids, in addition to the invention of the ultramicroscope, earned for Zsigmondy the 1925 Nobel Prize in Chemistry.

See also Scanning tunneling microscope; Ultracentrifuge; X-ray crystallography.

Further Reading
Zsigmondy, Richard, and Jerome Alexander. Colloids and the Ultramicroscope. New York: J. Wiley & Sons, 1909.
Zsigmondy, Richard, Ellwood Barker Spear, and John Foote Norton. The Chemistry of Colloids. New York: John Wiley & Sons, 1917.


Ultrasound

Ultrasound

The invention: A medically safe alternative to X-ray examination, ultrasound uses sound waves to detect fetal problems in pregnant women.

The people behind the invention:
Ian T. Donald (1910-1987), a British obstetrician
Paul Langévin (1872-1946), a French physicist
Marie Curie (1867-1946) and Pierre Curie (1859-1906), the French husband-and-wife team that researched and developed the field of radioactivity
Alice Stewart, a British researcher

An Underwater Beginning

In the early 1900's, two major events made it essential to develop an appropriate means for detecting unseen underwater objects. The first event was the Titanic disaster in 1912, which involved a largely submerged, unseen, and silent iceberg. This iceberg caused the sinking of the Titanic and resulted in the loss of many lives as well as valuable treasure. The second event was the threat to the Allied Powers from German U-boats during World War I (1914-1918). This threat persuaded the French and English Admiralties to form a joint committee in 1917. The Anti-Submarine Detection and Investigation Committee (ASDIC) found ways to counter the German naval developments.

Paul Langévin, a former colleague of Pierre Curie and Marie Curie, applied techniques developed in the Curies' laboratories in 1880 to formulate a crude ultrasonic system to detect submarines. These techniques used beams of sound waves of very high frequency that were highly focused and directional.

The advent of World War II (1939-1945) made necessary the development of faster electronic detection technology to improve the efforts of ultrasound researchers. Langévin's crude invention evolved into the sophisticated system called "sonar" (sound navigation ranging), which was important in the success of the Allied forces. Sonar was based on pulse echo principles and, like the system called "radar" (radio detecting and ranging), had military implications. This vital technology was classified as a military secret and was kept hidden until after the war.

Ian Donald

Ian Donald was born in Paisley, Scotland, in 1910 and educated in Edinburgh until he was twenty, when he moved to South Africa with his parents. He graduated with a bachelor of arts degree from Diocesan College, Cape Town, and then moved to London to study medicine, graduating from the University of London in 1937. During World War II he served as a medical officer in the Royal Air Force and received a medal for rescuing flyers from a burning airplane.

After the war he began his long teaching career in medicine, first at St. Thomas Hospital Medical School and then as the Regius Professor of Midwifery at Glasgow University. His specialties were obstetrics and gynecology. While at Glasgow he accomplished his pioneering work with diagnostic ultrasound technology, but he also championed laparoscopy, breast feeding, and the preservation of membranes during the delivery of babies. In addition to his teaching duties and medical practice he wrote a widely used textbook, oversaw the building of the Queen Mother's Hospital in Glasgow, and campaigned against England's 1967 Abortion Act.

His expertise with ultrasound came to his own rescue after he had cardiac surgery in the 1960's. He diagnosed himself as having internal bleeding from a broken blood vessel. The cardiologists taking care of him were skeptical until an ultrasound proved him right. Widely honored among physicians, he died in England in 1987.

An Alternative to X Rays

Ian Donald's interest in engineering and the principles of sound waves began when he was a schoolboy. Later, while he was in the British Royal Air Force, he maintained his enthusiasm by observing the development of the anti-U-boat warfare efforts. He went to medical school after World War II and began a career in obstetrics. By the early 1950's, Donald had embarked on a study of how to apply sonar technology in medicine.

He moved to Glasgow, Scotland, a major engineering center in Europe that presented a fertile environment for interdisciplinary research. There Donald collaborated with engineers and technicians in his medical ultrasound research. They used inanimate and tissue materials in many trials. Donald hoped to apply ultrasound technology to medicine, especially to gynecology and obstetrics, his specialty. His efforts led to new pathways and new discoveries.

He was interested in adapting a certain type of ultrasound technology method (used to probe metal structures and welds for cracks and flaws) to medicine. Kelvin Hughes, the engineering manufacturing company that produced the flaw detector apparatus, gave advice, expertise, and equipment to Donald and his associates, who were then able to devise water tanks with flexible latex bottoms. These were coated with a film of grease and placed into contact with the abdomens of pregnant women.

[Figure: Safe and not requiring surgery, ultrasonography has become the principal means for obtaining information about fetal structures. (Digital Stock)]

The use of diagnostic radiography (such as X rays) became controversial when it was evident that it caused potential leukemias


and other injuries to the fetus. It was realized from the earliest days of radiology that radiation could cause tumors, particularly of the skin. The aftereffects of radiological studies were recognized much later and confirmed by studies of atomic bomb survivors and of patients receiving therapeutic irradiation. The use of radiation in obstetrics posed several major threats to the developing fetus, most notably the production of tumors later in life, genetic damage, and developmental anomalies in the unborn fetus.

In 1958, bolstered by earlier clinical reports and animal research findings, Alice Stewart and her colleagues presented a major case study of more than thirteen hundred children in England and Wales who had died of cancer before the age of ten between 1953 and 1958. There was a 91 percent increase in leukemias in children who were exposed to intrauterine radiation, as well as a higher percentage of fetal death. Although controversial, this report led to a reduction in the exposure of pregnant women to X rays, with subsequent reductions in fetal abnormalities and death.

These reports came at a very opportune time for Donald: The development of ultrasonography would provide useful information about the unborn fetus without the adverse effects of radiation. Stewart's findings and Donald's experiments convinced others of the need for ultrasonography in obstetrics.

Consequences

Diagnostic ultrasound first gained clinical acceptance in obstetrics, and its major contributions have been in the assessment of fetal size and growth. In combination with amniocentesis (the study of fluid taken from the womb), ultrasound is an invaluable tool in operative procedures necessary to improve the outcomes of pregnancies. As can be expected, safety has been a concern, especially for a developing, vulnerable fetus that is exposed to high-frequency sound. Research has not been able to document any harmful effect of ultrasonography on the developing fetus.
The procedure produces neither heat nor cold. It has not been shown to produce any toxic or destructive effect on the auditory or balancing organs of the developing fetus. Chromosomal abnormalities have not been reported in any of the studies conducted.
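Diagnostic ultrasound inherits the pulse-echo arithmetic of sonar described earlier: a pulse travels out, reflects, and returns, so the distance to the reflector is half of speed times round-trip time. The sketch below is illustrative only; the tissue sound speed is a textbook approximation, not a figure from Donald's equipment.

```python
# Pulse-echo range calculation (illustrative values only).
SPEED_IN_TISSUE = 1540.0  # meters per second, approximate average for soft tissue

def echo_depth(round_trip_seconds: float, speed: float = SPEED_IN_TISSUE) -> float:
    """Return the one-way distance to a reflector.

    The pulse covers the distance twice (out and back), so the
    one-way depth is half of speed * time.
    """
    return speed * round_trip_seconds / 2.0

# An echo returning after 100 microseconds comes from a depth
# of roughly 7.7 centimeters.
depth_m = echo_depth(100e-6)
print(f"{depth_m * 100:.1f} cm")
```

The same formula, with a water sound speed of roughly 1,500 meters per second, governs the naval sonar ranging from which medical ultrasound descended.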


Ultrasonography, because it is safe and does not require surgery, has become the principal means for obtaining information about fetal structures. With this procedure, the contents of the uterus—as well as the internal structure of the placenta, fetus, and fetal organs—can be evaluated at any time during pregnancy. The use of ultrasonography remains a most valued tool in medicine, especially obstetrics, because of Donald's work.

See also Amniocentesis; Birth control pill; CAT scanner; Electrocardiogram; Electroencephalogram; Mammography; Nuclear magnetic resonance; Pap test; Sonar; Syphilis test; X-ray image intensifier.

Further Reading
Danforth, David N., and James R. Scott. Danforth's Obstetrics and Gynecology. 7th ed. Philadelphia: Lippincott, 1994.
DeJauregui, Ruth. 100 Medical Milestones That Shaped World History. San Mateo, Calif.: Bluewood Books, 1998.
Rozycki, Grace S. Surgeon-Performed Ultrasound: Its Use in Clinical Practice. Philadelphia: W. B. Saunders, 1998.
Wolbarst, Anthony B. Looking Within: How X-ray, CT, MRI, Ultrasound, and Other Medical Images Are Created, and How They Help Physicians Save Lives. Berkeley: University of California Press, 1999.


UNIVAC computer

The invention: The first commercially successful computer system.

The people behind the invention:
John Presper Eckert (1919-1995), an American electrical engineer
John W. Mauchly (1907-1980), an American physicist
John von Neumann (1903-1957), a Hungarian American mathematician
Howard Aiken (1900-1973), an American physicist
George Stibitz (1904-1995), a scientist at Bell Labs

The Origins of Computing

On March 31, 1951, the U.S. Census Bureau accepted delivery of the first Universal Automatic Computer (UNIVAC). This powerful electronic computer, far surpassing anything then available in technological features and capability, ushered in the first computer generation and pioneered the commercialization of what had previously been the domain of academia and the interest of the military. The fanfare that surrounded this historic occasion, however, masked the turbulence of the previous five years for the young upstart Eckert-Mauchly Computer Corporation (EMCC), which by this time was a wholly owned subsidiary of Remington Rand Corporation.

John Presper Eckert and John W. Mauchly met in the summer of 1941 at the University of Pennsylvania. A short time later, Mauchly, then a physics professor at Ursinus College, joined the Moore School of Engineering at the University of Pennsylvania and embarked on a crusade to convince others of the feasibility of creating electronic digital computers.

Up to this time, the only computers available were called "differential analyzers," which were used to solve complex mathematical equations known as "differential equations." These slow machines were good only for solving a relatively narrow range of mathematical problems.

Eckert and Mauchly landed a contract that eventually resulted in the development and construction of the world's first operational


general-purpose electronic computer, the Electronic Numerical Integrator and Calculator (ENIAC). This computer, used eventually by the Army for the calculation of ballistics tables, was deficient in many obvious areas, but this was caused by economic rather than engineering constraints. One major deficiency was the lack of automatic program control; the ENIAC did not have stored program memory. This was addressed in the development of the Electronic Discrete Variable Automatic Computer (EDVAC), the successor to the ENIAC.

Fighting the Establishment

A symbiotic relationship had developed between Eckert and Mauchly that worked to their advantage on technical matters. They worked well with each other, and this contributed to their success in spite of external obstacles. They both were interested in the commercial applications of computers and envisioned uses for these machines far beyond the narrow applications required by the military. This interest brought them into conflict with the administration at the Moore School of Engineering as well as with the noted mathematician John von Neumann, who "joined" the ENIAC/EDVAC development team in 1945. Von Neumann made significant contributions and added credibility to the Moore School group, which often had to fight against the conservative scientific establishment characterized by Howard Aiken at Harvard University and George Stibitz at Bell Labs.

Philosophical differences between von Neumann and Eckert and Mauchly, as well as patent issue disputes with the Moore School administration, eventually caused the resignation of Eckert and Mauchly on March 31, 1946. Eckert and Mauchly, along with some of their engineering colleagues at the University of Pennsylvania, formed the Electronic Control Company and proceeded to interest potential customers (including the Census Bureau) in an "EDVAC-type" machine. On May 24, 1947, the EDVAC-type machine became the UNIVAC.
This new computer would overcome the shortcomings of the ENIAC and the EDVAC (which was eventually completed by the Moore School in 1951). It would be a stored-program computer and would


allow input to and output from the computer via magnetic tape. The prior method of input/output used punched paper cards that were extremely slow compared to the speed at which data in the computer could be processed.

A series of poor business decisions and other unfortunate circumstances forced the newly renamed Eckert-Mauchly Computer Corporation to look for a buyer. They found one in Remington Rand in 1950. Remington Rand built tabulating equipment and was a competitor of International Business Machines Corporation (IBM). IBM was approached about buying EMCC, but the negotiations fell apart. EMCC became a division of Remington Rand and had access to the resources necessary to finish the UNIVAC.

Consequences

Eckert and Mauchly made a significant contribution to the advent of the computer age with the introduction of the UNIVAC I. The words "computer" and "UNIVAC" entered the popular vocabulary as synonyms. The efforts of these two visionaries were rewarded quickly as contracts started to pour in, taking IBM by surprise and propelling the inventors into the national spotlight.

This spotlight shone brightest, perhaps, on the eve of the national presidential election of 1952, which pitted war hero General Dwight D. Eisenhower against statesman Adlai Stevenson. At the suggestion of Remington Rand, CBS was invited to use UNIVAC to predict the outcome of the election. Millions of television viewers watched as CBS anchorman Walter Cronkite "asked" UNIVAC for its predictions. A program had been written to analyze the results of thousands of voting districts in the elections of 1944 and 1948. Based on only 7 percent of the votes coming in, UNIVAC had Eisenhower winning by a landslide, in contrast with all the prior human forecasts of a close election. Surprised by this answer and not willing to suffer the embarrassment of being wrong, the programmers quickly directed the program to provide an answer that was closer to the perceived situation.
The outcome of the election, however, matched UNIVAC’s original answer. This prompted CBS commentator Edward R. Murrow’s famous quote, “The trouble with machines is people.”


The development of the UNIVAC I produced many technical innovations. Primary among these is the use of magnetic tape for input and output. All machines that preceded the UNIVAC (with one exception) used either paper tape or cards for input and cards for output. These methods were very slow and created a bottleneck of information. The great advantage of magnetic tape was the ability to store the equivalent of thousands of cards of data on one 30-centimeter reel of tape. Another advantage was its speed.

See also Apple II computer; BINAC computer; Colossus computer; ENIAC computer; IBM Model 1401 computer; Personal computer; Supercomputer.

Further Reading
Metropolis, Nicholas, Jack Howlett, and Gian Carlo Rota. A History of Computing in the Twentieth Century: A Collection of Essays. New York: Academic Press, 1980.
Slater, Robert. Portraits in Silicon. Cambridge, Mass.: MIT Press, 1987.
Stern, Nancy B. From ENIAC to UNIVAC: An Appraisal of the Eckert-Mauchly Computers. Bedford, Mass.: Digital Press, 1981.


Vacuum cleaner

The invention: The first portable domestic vacuum cleaner successfully adapted to electricity, the original machine helped begin the electrification of domestic appliances in the early twentieth century.

The people behind the invention:
H. Cecil Booth (1871-1955), a British civil engineer
Melville R. Bissell (1843-1889), the inventor and marketer of the Bissell carpet sweeper in 1876
William Henry Hoover (1849-1932), an American industrialist
James Murray Spangler (1848-1915), an American inventor

From Brooms to Bissells

During most of the nineteenth century, the floors of homes were cleaned primarily with brooms. Carpets were periodically dragged out of the home by the boys and men of the family, stretched over rope lines or fences, and given a thorough beating to remove dust and dirt.

In the second half of the century, carpet sweepers, perhaps inspired by the success of street-sweeping machines, began to appear. Although there were many models, nearly all were based upon the idea of a revolving brush within an outer casing that moved on rollers or wheels when pushed by a long handle. Melville Bissell's sweeper, patented in 1876, featured a knob for adjusting the brushes to the surface. The Bissell Carpet Company, also formed in 1876, became the most successful maker of carpet sweepers and dominated the market well into the twentieth century.

Electric vacuum cleaners were not feasible until homes were wired for electricity and the small electric motor was invented. Thomas Edison's success with an incandescent lighting system in the 1880's and Nikola Tesla's invention of a small electric motor that was used in 1889 to drive a Westinghouse Electric Corporation fan opened the way for the application of electricity to household technologies.


Cleaning with Electricity

In 1901, H. Cecil Booth, a British civil engineer, observed a London demonstration of an American carpet cleaner that blew compressed air at the fabric. Booth was convinced that the process should be reversed so that dirt would be sucked out of the carpet. In developing this idea, Booth invented the first successful suction vacuum sweeper.

Booth's machines, which were powered by gasoline or electricity, worked without brushes. Dust was extracted by means of a suction action through flexible tubes with slot-shaped nozzles. Some machines were permanently installed in buildings that had wall sockets for the tubes in every room. Booth's British Vacuum Cleaner Company also employed horse-drawn mobile units from which white-uniformed men unrolled long tubes that they passed into buildings through windows and doors. His company's commercial triumph came when it cleaned Westminster Abbey for the coronation of Edward VII in 1902.

Booth's company also manufactured a 1904 domestic model that had a direct-current electric motor and a vacuum pump mounted on a wheeled carriage. Dust was sucked into the nozzle of a long tube and deposited into a metal container. Booth's vacuum cleaner used electricity from overhead light sockets.

The portable electric vacuum cleaner was invented in 1907 in the United States by James Murray Spangler. When Spangler was a janitor in a department store in Canton, Ohio, his asthmatic condition was worsened by the dust he raised with a large Bissell carpet sweeper. Spangler's modifications of the Bissell sweeper led to his own invention. On June 2, 1908, he received a patent for his Electric Suction Sweeper. The device consisted of a cylindrical brush in the front of the machine, a vertical-shaft electric motor above a fan in the main body, and a pillowcase attached to a broom handle behind the main body.
The brush dislodged the dirt, which was sucked into the pillowcase by the movement of air caused by a fan powered by the electric motor. Although Spangler’s initial attempt to manufacture and sell his machines failed, Spangler had, luckily for him, sold one of his machines to a cousin, Susan Troxel Hoover, the wife of William Henry Hoover.


The Hoover family was involved in the production of leather goods, with an emphasis on horse saddles and harnesses. William Henry Hoover, president of the Hoover Company, recognizing that the adoption of the automobile was having a serious impact on the family business, was open to investigating another area of production. In addition, Mrs. Hoover liked the Spangler machine that she had been using for a couple of months, and she encouraged her husband to enter into an agreement with Spangler.

An agreement made on August 5, 1908, allowed Spangler, as production manager, to manufacture his machine with a small work force in a section of Hoover's plant. As sales of vacuum cleaners increased, what began as a sideline for the Hoover Company became the company's main line of production.

Few American homes were wired for electricity when Spangler and Hoover joined forces; not until 1920 did 35 percent of American homes have electric power. In addition to this inauspicious fact, the first Spangler-Hoover machine, the Model O, carried the relatively high price of seventy-five dollars. Yet a full-page ad for the Model O in the December, 1908, issue of the Saturday Evening Post brought a deluge of requests. American women had heard of the excellent performance of commercial vacuum cleaners, and they hoped that the Hoover domestic model would do as well in the home.

Impact

As more and more homes in the United States and abroad became wired for electric lighting, a clean and accessible power source became available for household technologies. Whereas electric lighting was needed only in the evening, the electrification of household technologies made it necessary to use electricity during the day. The electrification of domestic technologies therefore matched the needs of the utility companies, which sought to maximize the use of their facilities. They became key promoters of electric appliances.
In the first decades of the twentieth century, many household technologies became electrified. In addition to fans and vacuum cleaners, clothes-washing machines, irons, toasters, dishwashing machines, refrigerators, and kitchen ranges were being powered by electricity.


The application of electricity to household technologies came as large numbers of women entered the work force. During and after World War I, women found new employment opportunities in industrial manufacturing, department stores, and offices. The employment of women outside the home continued to increase throughout the twentieth century. Electrical appliances provided the means by which families could maintain the same standards of living in the home while both parents worked outside the home.

It is significant that Bissell was motivated by an allergy to dust and Spangler by an asthmatic condition. The employment of the carpet sweeper, and especially the electric vacuum cleaner, not only made house cleaning more efficient and less physical but also led to a healthier home environment. Whereas sweeping with a broom tended only to move dust to a different location, the carpet sweeper and the electric vacuum cleaner removed the dirt from the house.

H. Cecil Booth

Although Hubert Cecil Booth (1871-1955), an English civil engineer, designed battleship engines, factories, and bridges, he was not above working on homier problems when they intrigued him. That happened in 1900 when he watched the demonstration of a device that used forced air to blow the dirt out of railway cars. It worked poorly, and the reason, it seemed to Booth, was that blowing just stirred up the dirt. Sucking it into a receptacle, he thought, would work better. He tested his idea by placing a wet cloth over furniture upholstery and sucking through it. The grime that collected on the side of the cloth facing the upholstery proved him right.

He built his first vacuum cleaner—a term that he coined—in 1901. It cleaned houses, but only with considerable effort. Measuring 54 inches by 42 inches by 10 inches, it had to be carried in a horse-driven van to the cleaning site. A team of workmen from Booth's Vacuum Cleaner Company then did the cleaning with hoses that reached inside the house through windows and doors. Moreover, the machine cost the equivalent of more than fifteen hundred dollars. It was beyond the finances and physical powers of home owners.

Booth marketed the first successful British one-person vacuum cleaner, the Trolley-Vac, in 1906. Weighing one hundred pounds, it was still difficult to wrestle into position, but it came with hoses and attachments that made possible the cleaning of different types of surfaces and hard-to-reach areas.

See also Disposable razor; Electric refrigerator; Microwave cooking; Robot (household); Washing machine.

Further Reading
Jailer-Chamberlain, Mildred. "This Is the Way We Cleaned Our Floors." Antiques & Collecting Magazine 101, no. 4 (June, 1996).
Kirkpatrick, David D. "The Ultimate Victory of Vacuum Cleaners." New York Times (April 14, 2001).
Shapiro, Laura. "Household Appliances." Newsweek 130, no. 24A (Winter, 1997/1998).


Vacuum tube

The invention: A sealed glass tube from which air and gas have been removed to permit electrons to move more freely, the vacuum tube was the heart of electronic systems until it was displaced by transistors.

The people behind the invention:
Sir John Ambrose Fleming (1849-1945), an English physicist and professor of electrical engineering
Thomas Alva Edison (1847-1931), an American inventor
Lee de Forest (1873-1961), an American scientist and inventor
Arthur Wehnelt (1871-1944), a German inventor

A Solution in Search of a Problem

The vacuum tube is a sealed tube or container from which almost all the air has been pumped out, thus creating a near vacuum. When the tube is in operation, currents of electricity are made to travel through it. The most widely used vacuum tubes are cathode-ray tubes (television picture tubes).

The most important discovery leading to the invention of the vacuum tube was the Edison effect by Thomas Alva Edison in 1884. While studying why the inner glass surface of light bulbs blackened, Edison inserted a metal plate near the filament of one of his light bulbs. He discovered that electricity would flow from the positive side of the filament to the plate, but not from the negative side to the plate. Edison offered no explanation for the effect.

Edison had, in fact, invented the first vacuum tube, which was later termed the diode; at that time there was no use for this device. Therefore, the discovery was not recognized for its true significance. A diode converts electricity that alternates in direction (alternating current) to electricity that flows in the same direction (direct current). Since Edison was more concerned with producing direct current in generators, and not household electric lamps, he essentially ignored this aspect of his discovery. Like many other inventions or discoveries that were ahead of their time—such as the laser—for a number of years, the Edison effect was "a solution in search of a problem."

The explanation for why this phenomenon occurred would not come until after the discovery of the electron in 1897 by Sir Joseph John Thomson, an English physicist. In retrospect, the Edison effect can be identified as one of the first observations of thermionic emission, the freeing up of electrons by the application of heat. Electrons were attracted to the positive charges and would collect on the positively charged plate, thus providing current; but they were repelled from the plate when it was made negative, meaning that no current was produced.

Since the diode permitted the electrical current to flow in only one direction, it was compared to a valve that allowed a liquid to flow in only one direction. This analogy is popular since the behavior of water has often been used as an analogy for electricity, and this is the reason that the term "valves" became popular for vacuum tubes.

Same Device, Different Application

Sir John Ambrose Fleming, acting as adviser to the Edison Electric Light Company, had studied the light bulb and the Edison effect starting in the early 1880's, before the days of radio. Many years later, he came up with an application for the Edison effect as a radio detector when he was a consultant for the Marconi Wireless Telegraph Company. Detectors (devices that conduct electricity in one direction only, just as the diode does, but at higher frequencies) were required to make the high-frequency radio waves audible by converting them from alternating current to direct current. Fleming was able to detect radio waves quite effectively by using the Edison effect. Fleming used essentially the same device that Edison had created, but for a different purpose. Fleming applied for a patent on his detector on November 16, 1904.
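The diode's one-way behavior can be mimicked numerically. The sketch below is an idealized model (a "diode" that passes positive voltage unchanged and blocks negative voltage entirely), not a simulation of Fleming's actual valve, but it shows why passing current in only one direction turns an alternating signal into one that never reverses.

```python
import math

def ideal_diode(voltage: float) -> float:
    """Pass current in one direction only: positive voltage gets
    through; reverse voltage is blocked entirely."""
    return voltage if voltage > 0 else 0.0

# One cycle of an alternating signal, sampled at 8 points.
alternating = [math.sin(2 * math.pi * i / 8) for i in range(8)]
rectified = [ideal_diode(v) for v in alternating]

# The input swings both positive and negative; the rectified output
# never goes negative, which is the sense in which the diode
# "converts" alternating current to direct current.
print(min(alternating) < 0 <= min(rectified))
```

Real tubes conduct imperfectly and smoothing circuitry is needed to flatten the pulsing output, but the one-way valve action is the essential idea.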
In 1906, Lee de Forest refined Fleming's invention by adding a zigzag piece of wire between the metal plate and the filament of the vacuum tube. The zigzag piece of wire was later replaced by a screen called a "grid." The grid allowed a small voltage to control a larger voltage between the filament and plate. It was the first complete vacuum tube and the first device ever constructed capable of amplifying a signal—that is, taking a small-voltage signal and making it much larger. He named it the "audion" and was granted a U.S. patent in 1907.

John Ambrose Fleming

John Ambrose Fleming had a remarkably long and fruitful scientific career. He was born in Lancaster, England, in 1849, the eldest son of a minister. When he was a boy, the family moved to London, which remained his home for the rest of his life. An outstanding student, Fleming matriculated at University College, London, graduating in 1870 with honors. Scholarships took him to other colleges until his skill with electrical experiments earned him a job as a lab instructor at Cambridge University in 1880. In 1885, he returned to University College, London, as professor of electrical technology. He taught there for the following forty-one years, occasionally taking time off to serve as a consultant for such electronics industry leaders as Thomas Edison and Guglielmo Marconi.

Fleming's passion was electricity and electronics, and he was sought after as a teacher with a knack for memorable explanations. For instance, he thought up the "right-hand" rule (also called Fleming's rule) to illustrate the relation of electromagnetic forces during induction: When the thumb, index finger, and middle finger of a human hand are held at right angles to one another so that the thumb points in the direction of motion through a magnetic field—which is indicated by the index finger—then the middle finger shows the direction of induced current.

During his extensive research, Fleming investigated transformers, high-voltage transmitters, electrical conduction, cryogenic electrical effects, radio, and television, and also invented the vacuum tube. Advanced age hardly slowed him down. He wrote three books and more than one hundred articles and remarried at eighty-four. He also delivered public lectures—to audiences at the Royal Institution and the Royal Society among other venues—until he was ninety. He died in 1945, ninety-five years old, having helped give birth to telecommunications.
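The amplifying action of the grid can be caricatured with a toy linear model in which the weak grid signal is simply multiplied by a gain factor. The gain value below is purely illustrative (a real audion was nonlinear and far more modest), but it captures what "taking a small-voltage signal and making it much larger" means.

```python
# Toy model of amplification: output = gain * input.
GAIN = 20.0  # illustrative amplification factor, not a measured value

def amplify(grid_signal, gain=GAIN):
    """Scale each sample of a weak input signal by the gain factor."""
    return [gain * v for v in grid_signal]

weak = [0.01, -0.02, 0.015]   # small voltage swings at the grid
strong = amplify(weak)        # much larger swings in the plate circuit
print(strong)
```

The crucial engineering point is that the energy for the larger output comes from the tube's power supply; the grid signal only steers it, which is why a faint radio signal could be boosted without being consumed.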


In 1907-1908, the American Navy carried radios equipped with de Forest's audion in its goodwill tour around the world. While useful as an amplifier of the weak radio signals, it was not useful at this point for the more powerful signals of the telephone. Other developments were made quickly as the importance of the emerging fields of radio and telephony was realized.

Impact

With many industrial laboratories working on vacuum tubes, improvements came quickly. For example, tantalum and tungsten filaments quickly replaced the early carbon filaments. In 1904, Arthur Wehnelt, a German inventor, discovered that if metals were coated with certain materials such as metal oxides, they emitted far more electrons at a given temperature. These materials enabled electrons to escape the surface of the metal oxides more easily. Thermionic emission and, therefore, tube efficiencies were greatly improved by this method.

Another important improvement in the vacuum tube came with the work of the American chemist Irving Langmuir of the General Electric Research Laboratory, starting in 1909, and Harold D. Arnold of Bell Telephone Laboratories. They used new devices such as the mercury diffusion pump to achieve higher vacuums. Working independently, Langmuir and Arnold discovered that very high vacuum used with higher voltages increased the power these tubes could handle from small fractions of a watt to hundreds of watts. The de Forest tube was now useful for the higher-power audio signals of the telephone. This resulted in the introduction of the first transamerican speech transmission in 1914, followed by the first transatlantic communication in 1915.

The invention of the transistor in 1948 by the American physicists William Shockley, Walter H. Brattain, and John Bardeen ultimately led to the downfall of the tube. With the exception of the cathode-ray tube, transistors could accomplish the jobs of nearly all vacuum tubes much more efficiently.
Also, the development of the integrated circuit allowed the creation of small, efficient, highly complex devices that would be impossible with radio tubes. By 1977, the major producers of the vacuum tube had stopped making it.


See also Color television; FM radio; Radar; Radio; Radio crystal sets; Television; Transistor; Transistor radio.

Further Reading
Baldwin, Neil. Edison: Inventing the Century. Chicago: University of Chicago Press, 2001.
Fleming, John Ambrose. Memories of a Scientific Life. London: Marshall, Morgan & Scott, 1934.
Hijiya, James A. Lee de Forest and the Fatherhood of Radio. Bethlehem, Pa.: Lehigh University Press, 1992.
Read, Oliver, and Walter L. Welch. From Tin Foil to Stereo: Evolution of the Phonograph. 2d ed. Indianapolis: H. W. Sams, 1976.


Vat dye

The invention: The culmination of centuries of efforts to mimic the brilliant colors displayed in nature in dyes that can be used in many products.

The people behind the invention:
Sir William Henry Perkin (1838-1907), an English student in Hofmann's laboratory
René Bohn (1862-1922), a synthetic organic chemist
Karl Heumann (1850-1894), a German chemist who taught Bohn
Roland Scholl (1865-1945), a Swiss chemist who established the correct structure of Bohn's dye
August Wilhelm von Hofmann (1818-1892), an organic chemist

Synthesizing the Compounds of Life

From prehistoric times until the mid-nineteenth century, all dyes were derived from natural sources, primarily plants. Among the most lasting of these dyes were the red and blue dyes derived from alizarin and indigo.

The process of making dyes took a great leap forward with the advent of modern organic chemistry in the early years of the nineteenth century. At the outset, this branch of chemistry, dealing with the compounds of the element carbon and associated with living matter, hardly existed, and synthesis of carbon compounds was not attempted. Considerable data had accumulated showing that organic, or living, matter was basically different from the compounds of the nonliving mineral world. It was widely believed that although one could work with various types of organic matter in physical ways and even analyze their composition, they could be produced only in a living organism.

Yet, in 1828, the German chemist Friedrich Wöhler found that it was possible to synthesize the organic compound urea from mineral compounds. As more chemists reported the successful preparation of compounds previously isolated only from plants or animals, the theory that organic compounds could be produced only in a living organism faded.


One field ripe for exploration was the chemistry of coal tar. Here, August Wilhelm von Hofmann was an active worker. He and his students made careful studies of this complex mixture. The high-quality stills they designed allowed for the isolation of pure samples of important compounds for further study. Of greater importance was the collection of able students Hofmann attracted. Among them was Sir William Henry Perkin, who is regarded as the founder of the dyestuffs industry.

In 1856, Perkin undertook the task of synthesizing quinine (a bitter crystalline alkaloid used in medicine) from a nitrogen-containing coal tar material called toluidine. Luck played a decisive role in the outcome of his experiment. The sticky compound Perkin obtained contained no quinine, so he decided to investigate the simpler related compound aniline. A small amount of the impurity toluidine in his aniline gave Perkin the first synthetic dye, Mauveine.

Searching for Structure

From this beginning, the great dye industries of Europe, particularly Germany, grew. Trial-and-error methods gave way to more systematic searches as the structural theory of organic chemistry was formulated. As the twentieth century began, great progress had been made, and German firms dominated the industry. Badische Anilin- und Soda-Fabrik (BASF) was incorporated at Ludwigshafen in 1865 and undertook extensive explorations of both alizarin and indigo. A chemist, René Bohn, had made important discoveries in 1888 that helped the company recover lost ground in the alizarin field. In 1901, he undertook the synthesis of a dye he hoped would combine the desirable attributes of both alizarin and indigo. As so often happens in science, nothing like the expected occurred. Bohn realized that the beautiful blue crystals that resulted from his synthesis represented a far more important product.
Not only was this the first synthetic vat dye, Indanthrene, ever prepared, but also, by studying the reaction at higher temperature, a useful yellow dye, Flavanthrone, could be produced.

William Henry Perkin

Born in England in 1838, William Henry Perkin saw a chemical experiment for the first time when he was a small boy. He found his calling there and then, much to the dismay of his father, who wanted him to be a builder and architect like himself. Perkin studied chemistry every chance he found as a teenager and was only seventeen when he won an appointment as the assistant to the German chemist August Wilhelm von Hofmann. A year later, while trying to synthesize quinine at Hofmann's suggestion, Perkin discovered a deep purple dye—now known as aniline purple or Mauveine, but popularly called mauve. In 1857 he opened a small dyeworks by the Grand Union Canal in West London, hoping to make his fortune by manufacturing the dye. He succeeded brilliantly. His ambitions were helped along royally when Queen Victoria wore a silk gown dyed with Mauveine to the Royal Exhibition of 1862. In 1869, he perfected a method for producing another new dye, alizarin, which is red. A wealthy man, he sold his business in 1874 when he was just thirty-six years old and devoted himself to research, which included isolation of the first synthetic perfume, coumarin, from coal tar. Perkin died in 1907, a year after receiving a knighthood, one of his many awards and honors for starting the artificial dye industry. His son William Henry Perkin, Jr. (1860-1927), also became a well-known researcher in organic chemistry.

The term vat dye describes a method of applying the dye, but it also serves to characterize the structure of the dye, because all currently useful vat dyes share a common unit. One fundamental problem in dyeing relates to the extent to which the dye is water-soluble. A beautifully colored molecule that is easily soluble in water might seem attractive, given the ease with which it binds with the fiber; however, this same solubility will lead to the dye's rapid loss in daily use. Vat dyes are designed to solve this problem by producing molecules that can be made water-soluble, but only during the dyeing, or vatting, process. This involves altering the chemical structure of the dye so that it retains its color throughout the life of the cloth. By 1907, Roland Scholl had shown unambiguously that the
chemical structure proposed by Bohn for Indanthrene was correct, and a major new area of theoretical and practical importance was opened for organic chemists.

Impact

Bohn's discovery led to the development of many new and useful dyes. The list of patents issued in his name fills several pages in Chemical Abstracts indexes. The true importance of this work is to be found in a consideration of all synthetic chemistry, which may perhaps be represented by this particular event. More than two hundred dyes related to Indanthrene are in commercial use. The colors represented by these substances are a rainbow making nature's finest hues available to all. Through the creativity of the chemist, the dozen or so natural dyes have been joined by more than seven thousand superior synthetic products.

Despite these desirable outcomes, there is doubt whether there is any real benefit to society from the development of new dyes, a doubt that stems from the need to husband limited natural resources. With so many urgent problems to be solved, it is not clear that the search for greater luxury is justified. If the field of dye synthesis reveals a single theme, however, it must be to expect the unexpected. Time after time, the search for one goal has led to something quite different—and useful.

See also Buna rubber; Color film; Neoprene.

Further Reading

Clark, Robin J. H., et al. "Indigo, Woad, and Tyrian Purple: Important Vat Dyes from Antiquity to the Present." Endeavour 17, no. 4 (December, 1993).
Farber, Eduard. The Evolution of Chemistry: A History of Its Ideas, Methods, and Materials. 2d ed. New York: Ronald Press, 1969.
Partington, J. R. A History of Chemistry. Staten Island, N.Y.: Martino, 1996.
Schatz, Paul F. "Anniversaries: 2001." Journal of Chemical Education 78, no. 1 (January, 2001).


Velcro

The invention: A material comprising millions of tiny hooks and loops that work together to create powerful and easy-to-use fasteners for a wide range of applications.

The person behind the invention:
Georges de Mestral (1904-1990), a Swiss engineer and inventor

From Cockleburs to Fasteners

Since prehistoric times, people have walked through weedy fields and arrived at home with cockleburs all over their clothing. In 1948, a Swiss engineer and inventor, Georges de Mestral, found his clothing full of cockleburs after walking in the Swiss Alps near Geneva. Wondering why cockleburs stuck to clothing, he began to examine them under a microscope. De Mestral's initial examination showed that each of the thousands of fibrous ends of the cockleburs was tipped with a tiny hook; it was the hooks that made the cockleburs stick to fabric.

This observation, combined with much subsequent work, led de Mestral to invent velcro, which was patented in 1957 in the form of two strips of nylon material. One of the strips contained millions of tiny hooks, while the other contained a similar number of tiny loops. When the two strips were pushed together, the hooks were inserted into the loops, joining the two strips of nylon very firmly. This design makes velcro extremely useful in fastening applications ranging from sneaker closures to the joining of heart valves during surgery.

Making Velcro Practical

Velcro is not the only invention credited to de Mestral, who also invented such items as a toy airplane and an asparagus peeler, but it was his greatest achievement. It is said that his idea for the material was partly the result of a problem his wife had with a jammed dress zipper just before an important social engagement. De Mestral's idea was to design a sort of locking tape that used the hook-and-loop principle that he had observed under the microscope. Such a
tape, he believed, would never jam. He also believed that the tape would do away with such annoyances as buttons that popped open unexpectedly and knots in shoelaces that refused to be untied.

The design of the material envisioned by de Mestral took seven years of painstaking effort. When it was finished, de Mestral named it "velcro" (a contraction of the French words velours, meaning velvet, and crochet, meaning hook), patented it, and opened a factory to manufacture it. Velcro's design required that de Mestral identify the optimal number of hooks and loops to be used. He eventually found that using approximately three hundred per square inch worked best. In addition, his studies showed that nylon was an excellent material for his purposes, although it had to be stiffened somewhat to work well. Much additional experimentation showed that the most effective way of producing the necessary stiffening was to subject the velcro to infrared light after manufacturing it.

Other researchers have demonstrated that velcrolike materials need not be made of nylon. For example, a new micromechanical velcrolike material ("microvelcro") that medical researchers believe will soon be used to hold together blood vessels after surgery is made of minute silicon loops and hooks. This material is thought to be superior to other materials for such applications because it will not be redissolved prematurely by the body. Other uses for microvelcro may be to hold together tiny electronic components in miniaturized computers without the use of glue or other adhesives. A major advantage of microvelcro in such situations is that it is resistant to changes of temperature as well as to most chemicals that destroy glue and other adhesives.

Georges de Mestral

Georges de Mestral got his idea for Velcro in part during a hunting trip on his estates and in part before an important formal social function. These contexts are evidence of the high standing in Swiss society held by de Mestral, an engineer and manufacturer. In fact, de Mestral, who was born in 1904, came from an illustrious line of noble landowners. Their prize possession was one of Switzerland's famous residences, the castle of Saint Saphorin on Morges. Built on the site of yet older fortifications, the castle was completed by François-Louis de Pesme in 1710. An enemy of King Louis XIV, de Pesme served in the military forces of Austria, Holland, and England, rising to the rank of lieutenant general, but he is best known for driving off a Turkish invasion fleet on the Danube in 1695. Other forebears include the diplomat Armand-François Louis de Mestral (1738-1805) and the inventor's father, Albert-Georges-Constantin de Mestral (1878-1966), an agricultural engineer. The castle passed to the father's four sons and eventually into the care of the inventor. It in turn was inherited by Georges de Mestral's sons Henri and François when he died in 1990 in Genolier, Switzerland.

Impact

In 1957, when velcro was patented, there were four main ways to hold things together: buttons, laces, snaps, and zippers (which had been invented by Chicagoan Whitcomb L. Judson in 1892). All these devices had drawbacks; zippers can jam, buttons can come open at embarrassing times, and shoelaces can form knots that are difficult to unfasten. Almost immediately after velcro was introduced, its use became widespread; velcro fasteners can be found on or in clothing, shoes, watchbands, wallets, backpacks, bookbags, motor vehicles, space suits, blood-pressure cuffs, and in many other places. There is even a "wall jumping" game in which a wall is covered with a well-supported piece of velcro. People who want to play put on jackets made of velcro and jump as high as they can. Wherever they land on the wall, the velcro will join together, making them stick.

Wall jumping, silly though it may be, demonstrates the tremendous holding power of velcro; a velcro jacket can keep a two-hundred-pound person suspended from a wall. This great strength is used in a more serious way in the design of the items used to anchor astronauts to space shuttles and to buckle on parachutes. In addition, velcro is washable, comes in many colors, and will not jam. No doubt many more uses for this innovative product will be found.

See also Artificial heart.


Further Reading

"George De Mestral: Inventor of Velcro Fastener." Los Angeles Times (February 13, 1990).
LaFavre Yorks, Cindy. "Hidden Helpers: Velcro Fasteners, Pull-On Loops and Other Extras Make Dressing Easier for People with Disabilities." Los Angeles Times (November 1, 1991).
Roberts, Royston M., and Jeanie Roberts. Lucky Science: Accidental Discoveries from Gravity to Velcro, with Experiments. New York: John Wiley, 1994.
Stone, Judith. "Stuck on Velcro!" Reader's Digest (September, 1988).
"Velcro-wrapped Armor Saves Lives in Bosnia." Design News 52, no. 7 (April 7, 1997).


Vending machine slug rejector

The invention: A device that separates real coins from counterfeits, the slug rejector made it possible for coin-operated vending machines to become an important marketing tool for many products.

The people behind the invention:
Thomas Adams, the founder of Adams Gum Company
Frederick C. Lynde, an Englishman awarded the first American patent on a vending machine
Nathaniel Leverone (1884-1969), a founder of the Automatic Canteen Company of America
Louis E. Leverone (1880-1957), a founder, with his brother, of the Automatic Canteen Company of America

The Growth of Vending Machines

One of the most imposing phenomena to occur in the United States economy following World War II was the growth of vending machines. Following the 1930's invention and perfection of the slug rejector, vending machines became commonplace as a means of marketing gum and candy. By the 1960's, almost every building had soft drink and coffee machines. Street corners featured machines that dispensed newspapers, and post offices even used vending machines to sell stamps. Occasionally someone fishing in the backwoods could find a vending machine next to a favorite fishing hole that would dispense a can of fishing worms upon deposit of the correct amount of money.

The primary advantage offered by vending machines is their convenience. Unlike people, machines can provide goods and services around the clock, with no charge for the "labor" of standing duty. The decade of the 1950's brought not only an increase in the number of vending machines but also an increase in the types of goods that were marketed through them. Before World War II, the major products had been cigarettes, candy, gum, and soft drinks. The 1950's brought far more products into the vending machine market.


The first recognized vending machine in history was invented in the third century b.c.e. by the mathematician Hero. This first machine was a coin-activated device that dispensed sacrificial water in an Egyptian temple. It was not until the year 1615 that another vending machine was recorded. In that year, snuff and tobacco vending boxes began appearing in English pubs and taverns. These tobacco boxes were less sophisticated machines than was Hero's, since they left much to the honesty of the customer. Insertion of a coin opened the box; once it was open, the customer could take out as much tobacco as desired.

One of the first United States patents on a vending machine was issued in 1886 to Frederick C. Lynde. That machine was used to vend postcards. If any one person can be considered the father of vending machines in the United States, it would probably be Thomas Adams, the founder of Adams Gum Company. Adams began the first successful vending operation in America in 1888, when he placed gum machines on train platforms in New York City.

Other early vending machines included scales (which vended a service rather than a product), photograph machines, strength testers, beer machines, and hot water vendors (to supply poor people who had no other source of hot water). These were followed, around 1900, by complete automatic restaurants in Germany, cigar vending machines in Chicago, perfume machines in Paris, and an automatic divorce machine in Utah.

Also around 1900 came the introduction of coin-operated gambling machines. These "slot machines" are differentiated from normal vending machines; the vending machine industry does not consider gambling machines to be a part of the vending industry, since they do not vend merchandise. The primary importance of the gambling machines was that they induced the industry to do research into slug rejection. Early machines allowed coins to be retrieved by the use of strings tied to them and accepted counterfeit lead coins, called slugs.
It was not until the 1930's that the slug rejector was perfected. Invention of the slug rejection device gave rise to the tremendous growth in the vending machine industry in the 1930's by giving vendors more confidence that they would be paid for their products or services.

Soft drink machines got their start just prior to the beginning of
the twentieth century. By 1906, improved models of these machines could dispense up to ten different flavors of soda pop. The drinks were dispensed into a drinking glass or tin cup that was placed near the machine (there was usually only one glass or cup to a machine, since paper cups had not been invented). Public health officials became concerned that everyone was drinking from the same cup. At that point, someone came up with the idea of setting a bucket of water next to the machine so that each customer could rinse off the cup before drinking from it. The year 1909 witnessed one of the monumental inventions in the history of vending machines, the pay toilet.

Impact

The 1930's witnessed improved vending machines. Slug rejectors were the most important introduction. In addition, change-making machines were introduced, and a few machines would even say "thank you" after a coin was deposited. These improved machines led many marketers to experiment with automatic vending. Coin-operated washing machines were one of the new applications of the 1930's. During the Depression, many appliance dealers attached coin metering devices to washing machines, allowing the user to accumulate money to make the monthly payments by using the appliance. This was a form of forced saving. It was not long before some enterprising appliance dealer got the idea of placing washing machines in apartment house basements. This idea was soon followed by stores full of coin-operated laundry machines, giving rise to a new kind of automatic vending business.

Following World War II, there was a surge of innovation in the vending machine industry. Much of that surge resulted from the discovery of vending machines by industrial management. Prior to the war, the managements of most factories had been merely tolerant of vending machines. Following the war, managers discovered that the machines could be an inexpensive means of keeping workers happy.
They became aware that worker productivity could be increased by access to candy bars or soft drinks. As a result, the demand for machines exceeded the supply offered by the industry during the late 1940’s.


Vending machines have had a surprising effect on the total retail sales of the U.S. economy. In 1946, sales through vending machines totaled $600 million. By 1960, that figure had increased to $2.5 billion; by 1970, it exceeded $6 billion. The decade of the 1950's began with individual machines that would dispense cigarettes, candy, gum, coffee, and soft drinks. By the end of that decade, it was much more common to see vending machines in groups. The combination of machines in a group could, in many cases, meet the requirements to assemble a complete meal.

Convenience is the key to the popularity of vending machines. Their ability to sell around the clock has probably been the major impetus to vending machine sales as opposed to more conventional marketing. Lower labor costs have also played a role in their popularity, and their location in areas of dense pedestrian traffic prompts impulse purchases.

Despite the advances made by the vending machine industry during the 1950's, there was still one major limitation to growth, to be solved during the early 1960's: vending machines were effectively limited to low-priced items, since the machines would accept nothing but coins. The inconvenience of inserting many coins kept machine operators from trying to market expensive items, as they expected consumer reluctance. The early 1960's witnessed the invention of vending machines that would accept and make change for $1, $5, and $10 bills. This invention paved the way for expansion into lines of grocery items and tickets.

The first use of vending machines to issue tickets was at an Illinois race track, where pari-mutuel tickets were dispensed upon deposit of $2. Penn Central Railroad was one of the first transportation companies to sell tickets by means of vending machines. These machines, used in high-traffic areas on the East Coast, permitted passengers to deal directly with a computer when buying reserved-seat train tickets.
The machines would accept $1 bills and $5 bills as well as coins.

Limitations to Vending Machines

There are limitations to the use of vending machines. Primary among these are mechanical failure and vandalism of machines. Another limitation often mentioned is that not every product can be


sold by machine. Several factors make some goods more vendable than others. National advertising and wide consumer acceptance help. A product must have a high turnover in order to justify the cost of a machine and the cost of servicing it. A third factor in measuring the potential success of an item is where it will be consumed or used. The most successful products are used within a short distance of the machine; consumers must be made willing to pay the usually higher prices of machine-bought products by the convenience of machine location.

The automatic vending of merchandise plays the largest role in the vending machine industry, but the vending of services also plays a part. The largest percentage of service vending comes from coin laundries. Other types of services are vended by weighing machines, parcel lockers, and pay toilets. By depositing a coin, a person can even get shoes shined. Some motel beds offer a "massage." Even the lowly parking meter is an example of a vending machine that dispenses services. Coin-operated photocopy machines account for a large portion of service vending.

A later advance in the vending machine industry is the use of credit. The cashless society began to make strides with vending machines as well as conventional vendors. As of the early 1990's, credit cards could be used to operate only a few types of vending machines, primarily those that dispense transportation tickets. Vending machines operated by banks dispense money upon deposit of a credit card. Credit-card gasoline pumps reduced labor requirements at gasoline stations, pushing the concept of self-service a step further. As credit card transactions become more common in general and as the cost of making them falls, use of credit cards for vending machines will increase.

Thousands of items have been marketed through vending machines, and firms must continue to evaluate the use of automatic retailing as a marketing channel.
Many products are not conducive to automatic vending, but before dismissing that option for a particular product, a marketer should consider the range of products sold through vending machines. The producers of Band-Aid flexible plastic bandages saw the possibilities in the vending field. The only product modification necessary was to put Band-Aids in a package the size of a candy bar, able to be sold from renovated candy machines.


The next problem was to determine areas where there would be a high turnover of Band-Aids. Bowling alleys were an obvious answer, since many bowlers suffered from abrasions on their fingers.

The United States is not alone in the development of vending machines; in fact, it is not as advanced as some nations of the world. In Japan, machines operated by credit cards have been used widely since the mid-1960's, and the range of products offered has been larger than in the United States. Western Europe is probably the most advanced area of the world in terms of vending machine technology. Germany of the early 1990's probably had the largest selection of vending machines of any European country. Many gasoline stations in Germany featured beer dispensing machines. In rural areas of the country, vending machines hung from utility poles. These rural machines provided candy and gum, among other products, to farmers who did not often travel into town.

Most vending machine business in Europe was done not in individual machines but in automated vending shops. The machines offered a creative solution to obstacles created by regulations and laws. Some countries had laws stating that conventional retail stores could not be open at night or on Sundays. To increase sales and satisfy consumer needs, stores built vending operations that could be used by customers during off hours. The machines, or combinations of them, often stocked a tremendous variety of items. At one German location, consumers could choose among nearly a thousand grocery items.

The Future

The future will see a broadening of product lines offered in vending machines as marketers come to recognize the opportunities that exist in automatic retailing. In the United States, vending machines of the early 1990's primarily dispensed products for immediate consumption. If labor costs increase, it will become economically feasible to sell more items from vending machines.
Grocery items and tickets offered the most potential for expansion. Vending machines offer convenience to the consumer. Virtually any company that produces for the retail market must consider vending machines as a marketing channel. Machines offer an alternative to conventional stores that cannot be ignored as the range of products offered through machines increases. Vending machines appear to be a permanent fixture and have only scratched the surface of the market.

Although machines have a long history, their popularization came from innovations of the 1930's, particularly the slug rejector. Marketing managers came to recognize that vending machine sales are more than a sideline. Increasingly, firms established separate departments to handle sales through vending machines. Successful companies make the best use of all channels of distribution, and vending machines have become an important marketing channel.

See also Geiger counter; Sonar; Radio interferometer.

Further Reading

Ho, Rodney. "Vending Machines Make Change—Now They Sell Movie Soundtracks, Underwear—Even Art." Wall Street Journal (July 7, 1999).
Rosen, Cheryl. "Vending Machines Get a High-Tech Makeover." Informationweek 822 (January 29, 2001).
Ryan, James. "In Vending Machine, Brains That Tell Good Money from Bad." New York Times (April 8, 1999).
Tagliabue, John. "Vending Machines Face an Upheaval of Change." New York Times (February 16, 1999).


Videocassette recorder

The invention: A device for recording and playing back movies and television programs, the videocassette recorder (VCR) revolutionized the home entertainment industry in the late 1970's.

The company behind the invention:
Philips Corporation, a Dutch company

Videotape Recording

Although television sets first came on the market before World War II, video recording on magnetic tape was not developed until the 1950's. Ampex marketed the first practical videotape recorder in 1956. Unlike television, which manufacturers aimed at retail consumers from its inception, videotape recording was never expected to be attractive to the individual consumer. The first videotape recorders were meant for use within the television industry.

Developed not long after the invention of magnetic tape recording of audio signals, the early videotape recorders were large machines that employed an open reel-to-reel tape drive similar to that of a conventional audiotape recorder. Recording and playback heads scanned the tape longitudinally (lengthwise). Because video signals span a much wider range of frequencies than audio signals do, this scanning technique meant that the amount of recording time available on one reel of tape was extremely limited. In addition, open reels were large and awkward, and the magnetic tape itself was quite expensive.

Still, within the limited application area of commercial television, videotape recording had its uses. It made it possible to play back recorded material immediately rather than having to wait for film to be processed in a laboratory. As television became more popular and production schedules became more hectic, with more material being produced in shorter and shorter periods of time, videotape solved some significant problems.


Helical Scanning Breakthrough

Engineers in the television industry continued to search for innovations and improvements in videotape recording following Ampex's marketing of the first practical videotape recorder in the 1950's. It took more than ten years, however, for the next major breakthrough to occur. The innovation that proved to be the key to reducing the size and awkwardness of video recording equipment came in 1967 with the invention by the Philips Corporation of helical scanning. All videocassette recorders eventually employed multiple-head helical scanning systems.

In a helical scanning system, the record and playback heads are attached to a spinning drum that rotates at exactly 1,800 revolutions per minute, or 30 revolutions per second. This is the number of video frames per second used in NTSC television broadcasts in the United States and Canada. The heads are mounted in pairs 180 degrees apart on the drum, and two fields on the tape are scanned for each revolution of the drum. Perhaps the easiest way to understand the helical scanning system is to visualize the spiral path followed by the stripes on a barber's pole: the tape wraps partway around the tilted drum, so each head sweeps a long diagonal track across the tape.

Helical scanning deviated sharply from designs based on audio recording systems. In an audiotape recorder, the tape passes over stationary playback and record heads; in a videocassette recorder, both the heads and the tape move. Helical scanning is, however, one of the few things that competing models and formats of videocassette recorders have in common. Different models employ different tape delivery systems and, in the case of competing formats such as Beta and VHS, there may be differences in the composition of the video signal to be recorded. Beta uses a 688-kilohertz (kHz) frequency, while VHS employs a frequency of 629 kHz. This difference is what allows Beta videocassette recorders (VCRs) to provide more lines of resolution and thus a superior picture quality; VHS provides 240 lines of resolution, while Beta has 400.
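The drum-timing relationships described above can be verified with a few lines of arithmetic. The sketch below is illustrative only; the constant names are this sketch's own, not terms from the original article:

```python
# NTSC helical-scan timing, following the figures given in the text.
DRUM_RPM = 1800          # drum rotation speed
FIELDS_PER_REV = 2       # two heads, 180 degrees apart, scan two fields per turn
FIELDS_PER_FRAME = 2     # NTSC interlaces two fields into one frame

revs_per_second = DRUM_RPM / 60                            # 30 revolutions per second
fields_per_second = revs_per_second * FIELDS_PER_REV       # 60 fields per second
frames_per_second = fields_per_second / FIELDS_PER_FRAME   # 30 frames per second

print(revs_per_second, fields_per_second, frames_per_second)  # 30.0 60.0 30.0
```

The drum speed is thus locked to the broadcast frame rate: one drum revolution per frame, one head sweep per field.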
(For this reason, it is perhaps unfortunate that the VHS format eventually dominated the market.)

In addition to helical scanning, Philips introduced another innovation: the videocassette. Existing videotape recorders employed a reel-to-reel tape drive, as do videocassettes, but videocassettes enclose the tape reels in a protective case. The case prevents the tape from being damaged in handling.

The first VCRs were large and awkward compared to later models. Industry analysts still thought that the commercial television and film industries would be the primary markets for VCRs. The first videocassettes employed wide—3/4-inch or 1-inch—videotapes, and the machines themselves were cumbersome. Although Philips introduced a VCR in 1970, it took until 1972 before the machines actually became available for purchase, and it would be another ten years before VCRs became common appliances in homes.

Consequences

Following the introduction of the VCR in 1970, the home entertainment industry changed radically. Although the industry did not originally anticipate that the VCR would have great commercial potential as a home entertainment device, it quickly became obvious that it did. By the late 1970's, the size of the cassette had been reduced, and the length of recording time available per cassette had been increased from one hour to six. VCRs became so widespread that advertisers on television became concerned with a phenomenon known as "time-shifting," which refers to viewers setting the VCR to record programs for later viewing. Jokes about the complexity of programming VCRs appeared in the popular culture, and an inability to cope with the VCR came to be seen as evidence of technological illiteracy.

Consumer demand for VCRs was so great that, by the late 1980's, compact portable video cameras became widely available. The same technology—helical scanning with multiple heads—was successfully miniaturized, and "camcorders" were developed that were not much larger than a paperback book. By the early 1990's, "reality television"—that is, television shows based on actual events—began relying on video recordings supplied by viewers rather than material produced by professionals.
The video recorder had completed a circle: It began as a tool intended for use in the television studio, and it returned there four decades later. Along the way, it had an effect no one could have predicted; passive viewers in the audience had evolved into active participants in the production process.


See also Cassette recording; Color television; Compact disc; Dolby noise reduction; Television; Walkman cassette player.

Further Reading

Gilder, George. Life After Television. New York: W. W. Norton, 1992.

Lardner, James. Fast Forward: Hollywood, the Japanese, and the Onslaught of the VCR. New York: W. W. Norton, 1987.

Luther, Arch C. Digital Video in the PC Environment. New York: McGraw-Hill, 1989.

Wasser, Frederick. Veni, Vidi, Video: The Hollywood Empire and the VCR. Austin: University of Texas Press, 2001.


Virtual machine

The invention: The first computer to swap storage space between its random access memory (RAM) and hard disk to create a larger "virtual" memory that enabled it to increase its power.

The people behind the invention:
International Business Machines (IBM) Corporation, an American data processing firm
Massachusetts Institute of Technology (MIT), an American university
Bell Labs, the research and development arm of the American Telephone and Telegraph Company

A Shortage of Memory

During the late 1950's and the 1960's, computers generally used two types of data storage areas. The first type, called "magnetic disk storage," was slow and large, but its storage space was relatively cheap and abundant. The second type, called "main memory" (also often called "random access memory," or RAM), was much faster. Computation and program execution occurred primarily in the "central processing unit" (CPU), which is the "brain" of the computer. The CPU accessed RAM as an area in which to perform intermediate computations, store data, and store program instructions.

To run programs, users went through a lengthy process. At that time, keyboards with monitors that allowed on-line editing and program storage were very rare. Instead, most users used typewriter-like devices to type their programs or text on paper cards. Holding decks of such cards, users waited in lines to use card readers. The cards were read and returned to the user, and the programs were scheduled to run later. Hours later or even overnight, the output of each program was printed in some predetermined order, after which all the outputs were placed in user bins. It might take as long as several days to make any program corrections that were necessary.


Because CPUs were expensive, many users had to share a single CPU. If a computer had a monitor that could be used for editing or could run more than one program at a time, more memory was required. RAM was extremely expensive, and even multimillion-dollar computers had small memories. In addition, this primitive RAM was extremely bulky.

Virtually Unlimited Memory

The solution to the problem of creating affordable, convenient memory came in a revolutionary reformulation of the relationship between main memory and disk space. Since disk space was large and cheap, it could be treated as an extended "scratch pad," or temporary-use area, for main memory. While a program ran, only small parts of it (called pages or segments), normally the parts in use at that moment, would be kept in the main memory. If only a few pages of each program were kept in memory at any time, more programs could coexist in memory. When pages lay idle, they would be sent from RAM to the disk, as newly requested pages were loaded from the disk to the RAM. Each user and program "thought" it had essentially unlimited memory (limited only by disk space), hence the term "virtual memory."

The system did, however, have its drawbacks. The swapping and paging processes reduced the speed at which the computer could process information. Coordinating these activities also required more circuitry. Integrating each program and the amount of virtual memory space it required was critical. To keep the system operating accurately, stably, and fairly among users, all computers have an "operating system." Operating systems that support virtual memory are more complex than the older varieties are.

Many years of research, design, simulations, and prototype testing were required to develop virtual memory. CPUs and operating systems were designed by large teams, not individuals. Therefore, the exact original discovery of virtual memory is difficult to trace. Many people contributed at each stage.
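The paging scheme described above lends itself to a small simulation. The following Python sketch is purely illustrative (the class, the least-recently-used eviction policy, and the frame count are assumptions for this example, not a reconstruction of the Atlas or IBM designs): a handful of RAM frames holds the recently used pages, and idle pages are swapped to a disk store when new pages are demanded.

```python
from collections import OrderedDict

class VirtualMemory:
    """Toy demand-paging model: a few RAM frames backed by a 'disk'."""

    def __init__(self, ram_frames):
        self.ram = OrderedDict()    # page -> contents; order tracks recency
        self.disk = {}              # evicted (idle) pages are swapped here
        self.ram_frames = ram_frames
        self.page_faults = 0

    def access(self, page):
        if page in self.ram:                    # page already resident
            self.ram.move_to_end(page)          # mark it as recently used
            return self.ram[page]
        self.page_faults += 1                   # page fault: must load it
        if len(self.ram) >= self.ram_frames:    # RAM full: evict a page
            victim, data = self.ram.popitem(last=False)  # least recently used
            self.disk[victim] = data            # swap the idle page to disk
        self.ram[page] = self.disk.pop(page, "page %d contents" % page)
        return self.ram[page]

vm = VirtualMemory(ram_frames=2)                # only two frames of real RAM
for page in [0, 1, 0, 2, 0, 1]:                 # program touches three pages
    vm.access(page)
print(vm.page_faults)                           # prints 4
```

Here the "hot" page 0 stays resident throughout, while pages 1 and 2 swap in and out, so only four of the six accesses cause page faults; the program behaves as if it had more memory than the two frames physically provide.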
The first rudimentary implementation of virtual memory concepts was on the Atlas computer, which was constructed in the early 1960’s in England, at the University of Manchester. It coupled RAM


with a device that read a magnetizable cylinder, or drum, which meant that it was a two-part storage system. In the late 1960's, the Massachusetts Institute of Technology (MIT), Bell Telephone Labs, and the General Electric Company (later Honeywell) jointly designed a high-level operating system called MULTICS, which had virtual memory.

During the 1960's, IBM worked on virtual memory, and the IBM 360 series supported the new memory system. With the evolution of engineering concepts such as circuit integration, IBM produced a new line of computers called the IBM 370 series. The IBM 370 supported several advances in hardware (equipment) and software (program instructions), including full virtual memory capabilities. It was a platform for a new and powerful "environment," or set of conditions, in which software could be run; IBM called this environment the VM/370.

The VM/370 went far beyond virtual memory, using virtual memory to create virtual machines. In a virtual machine environment, each user can select a separate and complete operating system. This means that separate copies of operating systems such as OS/360, CMS, DOS/360, and UNIX can all run in separate "compartments" on a single computer. In effect, each operating system has its own machine. Reliability and security were also increased. This was a major breakthrough, a second computer revolution.

Another measure of the significance of the IBM 370 was the commercial success and rapid, widespread distribution of the system. The large customer base for the older IBM 360 also appreciated the IBM 370's compatibility with that machine. The essentials of the IBM 370 virtual memory model were retained even in the 1990's generation of large, powerful mainframe computers. Furthermore, its success carried over to the design decisions of other computers in the 1970's.
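The "compartment" idea can be made concrete with a toy sketch. The Python below is an assumption-laden illustration (the Monitor and Compartment classes and the dictionary-as-memory model are invented for this example, not drawn from IBM's actual VM/370 design): a monitor boots each guest operating system into its own compartment with private state, so no guest can see another's memory.

```python
class Compartment:
    """One guest's private slice of the machine (illustrative only)."""

    def __init__(self, os_name):
        self.os_name = os_name
        self.memory = {}              # private virtual memory for this guest

class Monitor:
    """Boots guest operating systems into isolated compartments."""

    def __init__(self):
        self.compartments = []

    def boot(self, os_name):
        compartment = Compartment(os_name)
        self.compartments.append(compartment)
        return compartment

vm370 = Monitor()
cms = vm370.boot("CMS")               # each guest believes it owns a machine
dos = vm370.boot("DOS/360")
cms.memory["addr0"] = "CMS data"      # a write in one compartment...
print("addr0" in dos.memory)          # ...is invisible in another: prints False
```

The design point the sketch captures is isolation: because each compartment carries its own state, a failure or misbehavior in one guest cannot corrupt another, which is why reliability and security improved.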
The second-largest computer manufacturer, Digital Equipment Corporation (DEC), followed suit; its popular VAX minicomputers had virtual memory in the late 1970’s. The celebrated UNIX operating system also added virtual memory. IBM’s success had led to industry-wide acceptance.


Consequences

The impact of virtual memory extends beyond large computers and the 1970's. During the late 1970's and early 1980's, the computer world took a giant step backward. Small, single-user computers called personal computers (PCs) became very popular. Because they were single-user models and were relatively cheap, they were sold with weak CPUs and deplorable operating systems that did not support virtual memory. Only one program could run at a time. Larger and more powerful programs required more memory than was physically installed. These computers crashed often.

Virtual memory raises PC user productivity. With virtual memory space, during data transmissions or long calculations, users can simultaneously edit files if physical memory runs out. Most major PCs now have improved CPUs and operating systems, and these advances support virtual memory. Popular virtual memory systems such as OS/2, Windows/DOS, and MAC-OS are available. Even old virtual memory UNIX has been used in PCs. The concept of a virtual machine has been revived, in a weak form, on PCs that have dual operating systems (such as UNIX and DOS, OS/2 and DOS, and MAC and DOS combinations).

Most powerful programs benefit from virtual memory. Many dazzling graphics programs require massive RAM but run safely in virtual memory. Scientific visualization, high-speed animation, and virtual reality all benefit from it. Artificial intelligence and computer reasoning are also part of a "virtual" future.

See also Colossus computer; Differential analyzer; ENIAC computer; IBM Model 1401 computer; Personal computer; Robot (industrial); SAINT; Virtual reality.

Further Reading

Bashe, Charles J. IBM's Early Computers. Cambridge, Mass.: MIT Press, 1986.

Ceruzzi, Paul E. A History of Modern Computing. Cambridge, Mass.: MIT Press, 2000.


Chposky, James, and Ted Leonsis. Blue Magic: The People, Power, and Politics Behind the IBM Personal Computer. New York: Facts on File, 1988.

Seitz, Frederick, and Norman G. Einspruch. Electronic Genie: The Tangled History of Silicon. Urbana: University of Illinois Press, 1998.


Virtual reality

The invention: The creation of highly interactive, computer-based multimedia environments in which the user becomes a participant with the computer in a "virtually real" world.

The people behind the invention:
Ivan Sutherland (1938), an American computer scientist
Myron W. Krueger (1942), an American computer scientist
Fred P. Brooks (1931), an American computer scientist

Human/Computer Interface

In the early 1960's, the encounter between humans and computers was considered to be the central event of the time. The computer was evolving more rapidly than any technology in history; humans seemed not to be evolving at all. The "user interface" (the devices and language with which a person communicates with a computer) was a veneer that had been applied to the computer to make it slightly easier to use, but it seemed obvious that the ultimate interface would be connecting the human body and senses directly to the computer.

Against this background, Ivan Sutherland of the University of Utah identified the next logical step in the development of computer graphics. He implemented a head-mounted display that allowed a person to look around in a graphically created "room" simply by turning his or her head. Two small cathode-ray tubes, or CRTs (which are the basis of television screens and computer monitors), driven by vector graphics generators (mathematical image-creating devices) provided the appropriate view for each eye, and thus, stereo vision.

In the early 1970's, Fred P. Brooks of the University of North Carolina created a system that allowed a person to handle graphic objects by using a mechanical manipulator. When the user moved the physical manipulator, a graphic manipulator moved accordingly. If a graphic block was picked up, the user felt its weight and its resistance to his or her fingers closing around it.
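The stereo principle behind Sutherland's head-mounted display, a slightly different view for each eye, can be sketched with a little projection arithmetic. The Python below is an illustrative toy (the function, the screen distance, and the 64-millimeter interpupillary figure are assumptions for this example, not details of the original hardware): each eye projects the same three-dimensional point to a different horizontal screen position, and the difference between the two positions, called disparity, is what conveys depth.

```python
def screen_x(point_x, point_z, eye_offset, screen_distance=1.0):
    """Project a 3-D point onto the screen plane for one eye.

    eye_offset is half the distance between the eyes, signed per eye;
    point_z is the point's distance from the viewer (must exceed zero).
    """
    return (point_x - eye_offset) * screen_distance / point_z + eye_offset

ipd = 0.064                                  # assumed ~64 mm between the eyes
left = screen_x(0.0, 2.0, -ipd / 2)          # left-eye view of a point 2 m away
right = screen_x(0.0, 2.0, +ipd / 2)         # right-eye view of the same point
print(round(left - right, 4))                # nonzero disparity encodes depth
```

A point lying on the screen plane itself projects identically for both eyes, so its disparity is zero; in this convention, points farther behind the screen produce larger disparities, which is the cue the two CRTs exploit.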


A New Reality

Beginning in 1969, Myron W. Krueger of the University of Wisconsin created a series of interactive environments that emphasized unencumbered, full-body, multisensory participation in computer events. In one demonstration, a sensory floor detected participants' movements around a room. A symbol representing each participant moved through a projected graphic maze that changed in playful ways if participants tried to cheat. In another demonstration, participants could use the image of a finger to draw on the projection screen. In yet another, participants' views of a projected three-dimensional room changed appropriately as they moved around the physical space.

It was interesting that people naturally accepted these projected experiences as reality. They expected their bodies to influence graphic objects and were delighted when they did. They regarded their electronic images as extensions of themselves. What happened to their images also happened to them; they felt what touched their images. These observations led to the creation of the Videoplace, a graphic world that people could enter from different places to interact with each other and with graphic creatures.

Videoplace is an installation at the Connecticut Museum of Natural History in Storrs, Connecticut. Videoplace visitors in separate rooms can fingerpaint together, perform free-fall gymnastics, tickle each other, and experience additional interactive events. The computer combines and alters inputs from separate cameras trained on each person, each of whom responds in turn to the computer's output, playing games in the world created by Videoplace software. Since participants' live video images can be manipulated (moved, scaled, or rotated) in real time, the world that is created is not bound by the laws of physics. In fact, the result is a virtual reality in which new laws of cause and effect are created, and can be changed, from moment to moment.

Indeed, the term "virtual reality" describes the type of experience that can be created with Videoplace or with the technology invented by Ivan Sutherland. Virtual realities are part of certain ongoing trends. Most obvious are the trend from interaction to participation in computer events and the trend from passive to active art forms. In addition, artificial experiences are taking on increasing significance. Businesspersons like to talk about "doing it right the first time." This can now be done in many cases, not because fewer mistakes are being made by people but because those mistakes are being made in simulated environments.

Most important is that virtual realities provide means of expressing and experiencing, as well as new ways for people to interact. Entertainment uses of virtual reality will be as economically significant as more practical uses, since entertainment is the United States' number-two export. Vicarious experience through theater, novels, movies, and television represents a significant percentage of people's lives in developed countries. The addition of a radically new form of physically involving, interactive experience is a major cultural event that may shape human consciousness as much as earlier forms of experience have.

Ivan Sutherland

Ivan Sutherland was born in Hastings, Nebraska, in 1938. His father was an engineer, and from an early age Sutherland considered engineering his own destiny, too. He earned a bachelor's degree from the Carnegie Institute of Technology in 1959, a master's degree from the California Institute of Technology in 1960, and a doctorate from the Massachusetts Institute of Technology (MIT) in 1963. His adviser at MIT was Claude Shannon, creator of information theory, who directed Sutherland to find ways to simplify the interface between people and computers. Out of this research came Sketchpad, software that allowed people to draw designs on a computer terminal with a light pen, an early form of computer-assisted design (CAD).

The U.S. Defense Department's Advanced Research Projects Agency became interested in Sutherland's work and hired him to direct its Information Processing Techniques Office in 1964. In 1966 he left to become an associate professor of electrical engineering at Harvard University, moving to the University of Utah in 1968, and then to Caltech in 1975. During his academic career he developed the graphic interface for virtual reality, first announced in his ground-breaking 1968 article "A Head-Mounted Three-Dimensional Display."

In 1980 Sutherland left academia for industry. He already had business experience as cofounder of Evans & Sutherland in Salt Lake City. The new firm, Sutherland, Sproull, and Associates, which provided consulting services and venture capital, later became part of Sun Microsystems, Inc. Sutherland remained as a research fellow and vice president. A member of the National Academy of Engineering and the National Academy of Sciences, in 1988 Sutherland was awarded the A. M. Turing Award, the highest honor in information technology.

Consequences

Most religions offer their believers an escape from this world, but few technologies have been able to do likewise. Not so with virtual reality, the fledgling technology in which people explore a simulated three-dimensional environment generated by a computer. Using this technology, people can not only escape from this world but also design the world in which they want to live.

In most virtual reality systems, many of which are still experimental, one watches the scene, or alternative reality, through three-dimensional goggles in a headset. Sound and tactile sensations enhance the illusion of reality. Because of the wide variety of actual and potential applications of virtual reality, from three-dimensional video games and simulators to remotely operated "telepresence" systems for the nuclear and undersea industries, interest in the field is intense.

The term "virtual reality" describes the computer-generated simulation of reality with physical, tactile, and visual dimensions. The interactive technology is used by science and engineering researchers as well as by the entertainment industry, especially in the form of video games. Virtual reality systems can, for example, simulate a walk-through of a building in an architectural graphics program. Virtual reality technology in which the artificial world overlaps with reality will have major social and psychological implications.
See also Personal computer; Virtual machine.


Further Reading

Earnshaw, Rae A., M. A. Gigante, and H. Jones. Virtual Reality Systems. San Diego: Academic Press, 1993.

Moody, Fred. The Visionary Position: The Inside Story of the Digital Dreamers Who Are Making Virtual Reality a Reality. New York: Times Business, 1999.

Sutherland, Ivan Edward. Sketchpad: A Man-Machine Graphical Communication System. New York: Garland, 1980.


V-2 rocket

The invention: The first long-range, liquid-fueled rocket, the V-2 was developed by Germany to carry bombs during World War II.

The people behind the invention:
Wernher von Braun (1912-1977), the chief engineer and prime motivator of rocket research in Germany during the 1930's and 1940's
Walter Robert Dornberger (1895-1980), the former commander of the Peenemünde Rocket Research Institute
Ing. Fritz Gosslau, the head of the V-1 development team
Paul Schmidt, the designer of the impulse jet motor

The "Buzz Bomb"

On May 26, 1943, in the middle of World War II, key German military officials were briefed by two teams of scientists, one representing the air force and the other representing the army. Each team had launched its own experimental aerial war craft. The military chiefs were to decide which project merited further funding and development. Each experimental craft had both advantages and disadvantages, and each counterbalanced the other. Therefore, it was decided that both craft were to be developed. They were to become the V-1 and the V-2 aircraft.

The impulse jet motor used in the V-1 craft was designed by Munich engineer Paul Schmidt. On April 30, 1941, the motor had been used to assist power on a biplane trainer. The development team for the V-1 was headed by Ing. Fritz Gosslau; the aircraft was designed by Robert Lusser.

The V-1, or "buzz bomb," was capable of delivering a one-ton warhead payload. While still in a late developmental stage, it was launched, under Adolf Hitler's orders, to terrorize inhabited areas of London in retaliation for the damage that had been wreaked on Germany during the war. More than one hundred V-1's were launched daily between June 13 and early September, 1944. Because the V-1


flew in a straight line and at a constant speed, Allied aircraft were able to intercept it more easily than they could the V-2.

Two innovative systems made the V-1 unique: the drive operation and the guidance system. In the motor, oxygen entered the grid valves through many small flaps. Fuel oil was introduced, and the mixture of fuel and oxygen was ignited. After ignition, the expanded gases produced the reaction propulsion. When the expanded gases had vacated, the reduced internal pressure allowed the valve flaps to reopen, admitting more air for the next cycle.

The guidance system included a small propeller connected to a revolution counter that was preset based on the distance to the target. The number of propeller revolutions that it would take to reach the target was calculated before launch and punched into the counter. During flight, after the counter had measured off the selected number of revolutions, the aircraft's elevator flaps became activated, causing the craft to dive at the target. Understandably, the accuracy was not what the engineers had hoped.

Vengeance Weapon 2

According to the Treaty of Versailles (1919), German military forces were restricted to 100,000 men and a certain level of weaponry. The German military powers realized very early, however, that the treaty had neglected to restrict rocket-powered weaponry, which did not exist at the end of World War I (1914-1918).

Wernher von Braun was hired as chief engineer for developing the V-2 rocket. The V-2 had a lift-off thrust of 11,550.5 newtons and was propelled by the combustion of liquid oxygen and alcohol. The propellants were pumped into the combustion chamber by a steam-powered turbopump. The steam was generated by the decomposition of hydrogen peroxide, using sodium permanganate as a catalyst. One innovative feature of the V-2 that is still used was regenerative cooling, which used alcohol to cool the double-walled combustion chamber.
The guidance system included two phases: powered and ballistic. Four seconds after launch, a preprogrammed tilt to 17 degrees was begun, then acceleration was continued to achieve the desired trajectory. At the desired velocity, the engine power was cut off via


one of two systems. In the automatic system, a device shut off the engine at the velocity desired; this method, however, was inaccurate. The second system sent a radio signal to the rocket's receiver, which cut off the power. This was a far more accurate method, but the extra equipment required at the launch site was an attractive target for Allied bombers. This system was more often employed toward the end of the war.

Even the 907-kilogram warhead of the V-2 was a carefully tested device. The detonators had to be able to withstand 6 g's of force during lift-off and reentry, as well as the vibrations inherent in a rocket flight. Yet they also had to be sensitive enough to ignite the bomb upon impact and before the explosive became buried in the target and lost power through diffusion of force.

The V-2's first successful test was in October of 1942, but it continued to be developed until August of 1944. During the next eight months, more than three thousand V-2's were launched against England and the Continent, causing immense devastation and living up to its name: Vergeltungswaffe zwei (vengeance weapon 2). Unfortunately for Hitler's regime, the weapon that took fourteen years of research and testing to perfect entered the war too late to make an impact upon the outcome.

Impact

The V-1 and V-2 had a tremendous impact on the history and development of space technology. Even during the war, captured V-2's were studied by Allied scientists. American rocket scientists were especially interested in the technology, since they too were working to develop liquid-fueled rockets.

After the war, German military personnel were sent to the United States, where they signed contracts to work with the U.S. Army in a program known as "Operation Paperclip." Testing of the captured V-2's was undertaken at White Sands Missile Range near Alamogordo, New Mexico. The JB-2 Loon Navy jet-propelled bomb was developed following the study of the captured German craft.
The Soviet Union also benefited from captured V-2's and from the German V-2 factories that were dismantled following the war. With these resources, the Soviet Union developed its own rocket technology, which culminated in the launch of Sputnik 1, the world's first artificial satellite, on October 4, 1957. The United States was not far behind. It launched its first satellite, Explorer 1, on January 31, 1958. On April 12, 1961, the world's first human space traveler, Soviet cosmonaut Yuri A. Gagarin, was launched into Earth orbit.

See also Airplane; Cruise missile; Hydrogen bomb; Radar; Rocket; Stealth aircraft.

Further Reading

Bergaust, Erik. Wernher von Braun: The Authoritative and Definitive Biographical Profile of the Father of Modern Space Flight. Washington: National Space Institute, 1976.

De Maeseneer, Guido. Peenemünde: The Extraordinary Story of Hitler's Secret Weapons V-1 and V-2. Vancouver: AJ Publishing, 2001.

Piszkiewicz, Dennis. Wernher von Braun: The Man Who Sold the Moon. Westport, Conn.: Praeger, 1998.


Walkman cassette player

The invention: Inexpensive portable device for listening to stereo cassettes that was the most successful audio product of the 1980's and the forerunner of other portable electronic devices.

The people behind the invention:
Masaru Ibuka (1908-1997), a Japanese engineer who cofounded Sony
Akio Morita (1921-1999), a Japanese physicist and engineer, cofounder of Sony
Norio Ohga (1930), a Japanese opera singer and businessman who ran Sony's tape recorder division before becoming president of the company in 1982

Convergence of Two Technologies

The Sony Walkman was the result of the convergence of two technologies: the transistor, which enabled miniaturization of electronic components, and the compact cassette, a worldwide standard for magnetic recording tape. As the smallest tape player devised, the Walkman was based on a systems approach that made use of advances in several unrelated areas, including improved loudspeaker design and reduced battery size. The Sony company brought them together in an innovative product that found a mass market in a remarkably short time.

Tokyo Telecommunications Engineering, which became Sony, was one of many small entrepreneurial companies that made audio products in the years following World War II. It was formed in the ruins of Tokyo, Japan, in 1946, and got its start manufacturing components for inexpensive radios and record players. They were the ideal products for a company with some expertise in electrical engineering and a limited manufacturing capability.

Akio Morita and Masaru Ibuka formed Tokyo Telecommunications Engineering to make a variety of electrical testing devices and instruments, but their real interests were in sound, and they decided to concentrate on audio products. They introduced a reel-to-reel


tape recorder in 1946. Its success ensured that the company would remain in the audio field. The trade name of the magnetic tape they manufactured was "Soni"; this was the origin of the company's new name, adopted in 1957.

The 1953 acquisition of a license to use Bell Laboratories' transistor technology was a turning point in the fortunes of Sony, for it led the company to the highly popular transistor radio and started it along the path to reducing the size of consumer products. In the 1960's, Sony led the way to smaller and cheaper radios, tape recorders, and television sets, all using transistors instead of vacuum tubes.

The Consumer Market

The original marketing strategy for manufacturers of mechanical entertainment devices had been to put one into every home. This was the goal for Edison's phonograph, the player piano, the Victrola, and the radio receiver. Sony and other Japanese manufacturers found out that if a product were small enough and cheap enough, two or three might be purchased for home use, or even for outdoor use. This was the marketing lesson of the transistor radio. The unparalleled sales of transistor radios indicated that consumer durables intended for entertainment were not exclusively used in the home.

The appeal of the transistor radio was that it made entertainment portable. Sony applied this concept to televisions and tape recorders, developing small portable units powered by batteries. Sony was first to produce a "personal" television set, with a five-inch screen. To the surprise of many manufacturers who said there would never be a market for such a novelty item, it sold well. It was impossible to reduce tape recorders to the size of transistor radios because of the problems of handling very small reels of tape and the high power required to turn them. Portable tape recorders required several large flashlight batteries.
Although tape had the advantage of recording capability, it could not challenge the popularity of the microgroove 45-revolution-per-minute (rpm) disc because the tape player was much more difficult to operate. In the 1960's, several types of tape cartridge were introduced to overcome this problem, including the eight-track tape cartridge and the Philips compact cassette.

Sony and Matsushita were two of the leading Japanese manufacturers that quickly incorporated the compact cassette into their audio products, producing the first cassette players available in the United States. The portable cassette players of the 1960's and 1970's were based on the transistor radio concept: small loudspeaker, transistorized amplifier, and flashlight batteries, all enclosed in a plastic case. The size of transistorized components was being reduced constantly, and new types of batteries, notably the nickel-cadmium combination, offered higher power output in smaller sizes. The problem of reducing the size of the loudspeaker without serious deterioration of sound quality blocked the path to very small cassette players. Sony's engineers solved the problem with a very small loudspeaker device using plastic diaphragms and new, lighter materials for the magnets. These devices were incorporated into tiny stereo headphones that set new standards of fidelity.

The first "walkman" was made by Sony engineers for the personal use of Masaru Ibuka. He wanted to be able to listen to high-fidelity recorded sound wherever he went, and the tiny player was small enough to fit inside a pocket. Sony was experienced in reducing the size of machines. At the same time the walkman was being made, Sony engineers were struggling to produce a video recording cassette that was also small enough to fit into Ibuka's pocket.

Although the portable stereo was part of a long line of successful miniaturized consumer products, it was not immediately recognized as a commercial technology. There were already plenty of cassette players in home units, in automobiles, and in portable players. Marketing experts questioned the need for a tiny version. The board of directors of Sony had to be convinced by Morita that the new product had commercial potential. The Sony Soundabout portable cassette player was introduced to the market in 1979.

Impact

The Soundabout was initially treated as a novelty in the audio equipment industry.
At a price of $200, it could not be considered as a product for the mass market. Although it sold well in Japan, where people were used to listening to music on headphones, sales in the United States were not encouraging. Sony's engineers, working under the direction of Kozo Ohsone, reduced the size and cost of the machine. In 1981, the Walkman II was introduced. It was 25 percent smaller than the original version and had 50 percent fewer moving parts. Its price was considerably lower and continued to fall.

The Walkman opened a huge market for audio equipment that nobody knew existed. Sony had again confounded the marketing experts who doubted the appeal of a new consumer electronics product. It took about two years for Sony's Japanese competitors, including Matsushita, Toshiba, and Aiwa, to bring out portable personal stereos. Such was the popularity of the device that any miniature cassette player was called a "walkman," irrespective of the manufacturer. Sony kept ahead of the competition by constant innovation: Dolby noise reduction circuits were added in 1982, and a rechargeable battery feature was introduced in 1985. The machine became smaller, until it was barely larger than the audio cassette it played.

Sony developed a whole line of personal stereos. Waterproofed Walkmans were marketed to customers who wanted musical accompaniment to water sports. There were special models for tennis players and joggers. The line grew to encompass about forty different types of portable cassette players, priced from about $30 to $500 for a high-fidelity model.

In the ten years following the introduction of the Walkman, Sony sold fifty million units, including twenty-five million in the United States. Its competitors sold millions more. They were manufactured all over the Far East and came in a broad range of sizes and prices, with the cheapest models about $20. Increased competition in the portable tape player market continually forced down prices. Sony had to respond to the huge numbers of cheap copies by redesigning the Walkman to bring down its cost and by automating its production. The playing mechanism became part of the integrated circuit that provided amplification, allowing it to be manufactured as one unit.
Masaru Ibuka

Nicknamed “genius inventor” in college, Masaru Ibuka developed into a visionary corporate leader and business philosopher. Born in Nikko City, Japan, in 1908, he took a degree in engineering from Waseda University in 1933 and went to work at Photo-Chemical Laboratory, which developed movie film. Changing to naval research during World War II, he met Akio Morita, another engineer. After the war they opened an electronics shop together, calling it the Tokyo Telecommunications Engineering Corporation, and began experimenting with tape recorders. Their first model was a modest success, and the business grew under Ibuka, who was president and later chairman. He thought up a new, less daunting name for his company, Sony, in the 1950’s, when it rapidly became a leader in consumer electronics.

His goal was to make existing technology useful to people in everyday life. “He sowed the seeds of a deep conviction that our products must bring joy and fun to users,” one of his successors as president, Nobuyuki Idei, said in 1997. While American companies were studying military applications for the newly developed transistor in the 1950’s, Ibuka and Morita put it to use in an affordable transistor radio and then found ways to shrink its size and power it with batteries so that it could be taken anywhere. In a similar fashion, they made tape recorders and players (such as the Walkman), video players, compact disc players, and televisions ever cheaper, more reliable, and more efficiently designed. A hero in the Japanese business world, Ibuka retired as Sony chairman in 1976 but continued to help out as a consultant until his death in 1997.

The Walkman did more than revive sales of audio equipment in the sagging market of the late 1970’s. It stimulated demand for cassette tapes and helped make the compact cassette the worldwide standard for magnetic tape. At the time the Walkman was introduced, the major form of prerecorded sound was the vinyl microgroove record. In 1983, the ratio of vinyl to cassette sales was 3:2. By the end of the decade, the audio cassette was the best-selling format for recorded sound, outselling vinyl records and compact discs combined by a ratio of 2:1. The compatibility of the audio cassette used in personal players with the home stereo ensured that it would be the most popular tape recording medium. The market for portable personal players in the United States during the decade of the 1990’s was estimated to be more than


twenty million units each year. Sony accounted for half of the 1991 American market of fifteen million units selling at an average price of $50. It appeared that there would be more than one in every home. In some parts of Western Europe, there were more cassette players than people, reflecting the level of market penetration achieved by the Walkman.

The ubiquitous Walkman had a noticeable effect on the way that people listen to music. The sound from the headphones of a portable player is more intimate and immediate than the sound coming from the loudspeakers of a home stereo. The listener can hear a wider range of frequencies and more of the lower amplitudes of music, while the reverberation caused by sound bouncing off walls is reduced. The listening public has become accustomed to the Walkman sound and expects it to be duplicated on commercial recordings. Recording studios that once mixed their master recordings to suit the reproduction characteristics of car or transistor radios began to mix them for Walkman headphones. Personal stereos also enable the listener to experience more of the volume of recorded sound because it is injected directly into the ear.

The Walkman established a market for portable tape players that exerted an influence on all subsequent audio products. The introduction of the compact disc (CD) in 1983 marked a completely new technology of recording based on digital transformation of sound. It was jointly developed by the Sony and Philips companies. Despite the enormous technical difficulties of reducing the size of the laser reader and making it portable, Sony’s engineers devised the Discman portable compact disc player, which was unveiled in 1984. It followed the Walkman concept exactly and offered higher fidelity than the cassette tape version. The Discman sold for about $300 when it was introduced, but its price soon dropped to less than $100.
It did not achieve the volume of sales of the audio cassette version because fewer CDs than audio cassettes were in use. The slow acceptance of the compact disc hindered sales growth. The Discman could not match the portability of the Walkman because vibrations caused the laser reader to skip tracks.

In the competitive market for consumer electronics products, a company must innovate to survive. Sony had watched cheap competition erode the sales of many of its most successful products, particularly the transistor radio and personal television, and was committed to both product improvement and new entertainment technologies. It knew that the personal cassette player had a limited sales potential in the advanced industrial countries, especially after the introduction of digital recording in the 1980’s. It therefore sought new technology to apply to the Walkman concept.

Throughout the 1980’s, Sony and its many competitors searched for a new version of the Walkman. The next generation of personal players was likely to be based on digital recording. Sony introduced its digital audio tape (DAT) system in 1990. This used the same digital technology as the compact disc but came in tape form. It was incorporated into expensive home players; naturally, Sony engineered a portable version. The tiny DAT Walkman offered unsurpassed fidelity of reproduction, but its incompatibility with any other tape format and its high price limited its sales to professional musicians and recording engineers.

After the failure of DAT, Sony refocused its digital technology into a format more similar to the Walkman. Its Mini Disc (MD) used the same technology as the compact disc but had the advantage of a recording capability. The 2.5-inch disc was smaller than the CD, and the player was smaller than the Walkman. The play-only version fit in the palm of a hand. A special feature prevented the skipping of tracks that caused problems with the Discman. The Mini Disc followed the path blazed by the Walkman and represented the most advanced technology applied to personal stereo players. At a price of about $500 in 1993, it was still too expensive to compete in the audio cassette Walkman market, but the history of similar products suggested that rapid price reductions could be achieved even with a complex technology.

The Walkman had a powerful influence on the development of other digital and optical technologies.
The laser readers of compact disc players can access visual and textual information in addition to sound. Sony introduced the Data Discman, a handheld device that displayed text and pictures on a tiny screen. Several other manufacturers marketed electronic books. Whatever the shape of future entertainment and information technologies, the legacy of the Walkman will put a high premium on portability, small size, and the interaction of machine and user.


See also Cassette recording; Compact disc; Dolby noise reduction; Electronic synthesizer; Laser; Transistor; Videocassette recorder.

Further Reading
Bull, Michael. Sounding Out the City: Personal Stereos and the Management of Everyday Life. New York: Berg, 2000.
Lyons, Nick. The Sony Vision. New York: Crown Publishers, 1976.
Morita, Akio, with Edwin M. Reingold and Mitsuko Shimomura. Made in Japan: Akio Morita and Sony. London: HarperCollins, 1994.
Nathan, John. Sony: The Private Life. London: HarperCollins Business, 2001.
Schlender, Brenton R. “How Sony Keeps the Magic Going.” Fortune 125 (February 24, 1992).


Washing machine

The invention: Electrically powered machines that replaced hand-operated washing tubs and wringers, making the job of washing clothes much easier.

The people behind the invention:
O. B. Woodrow, a bank clerk who claimed to be the first to adapt electricity to a remodeled hand-operated washing machine
Alva J. Fisher (1862-1947), the founder of the Hurley Machine Company, who designed the Thor electric washing machine, claiming that it was the first successful electric washer
Howard Snyder, the mechanical genius of the Maytag Company

Hand Washing

Until the development of the electric washing machine in the twentieth century, washing clothes was a tiring and time-consuming process. With the development of the washboard, dirt was loosened by rubbing. Clothes and tubs had to be carried to the water, or the water had to be carried to the tubs and clothes. After washing and rinsing, clothes were hand-wrung, hang-dried, and ironed with heavy, heated irons.

In nineteenth century America, the laundering process became more arduous with the greater use of cotton fabrics. In addition, the invention of the sewing machine resulted in the mass production of inexpensive ready-to-wear cotton clothing. With more clothing, there was more washing.

One solution was hand-operated washing machines. The first American patent for a hand-operated washing machine was issued in 1805. By 1857, more than 140 patents had been issued; by 1880, between 4,000 and 5,000 patents had been granted. While most of these machines were never produced, they show how much the public wanted to find a mechanical means of washing clothes. Nearly all the early types prior to the Civil War (1861-1865) were modeled after the washboard.


Washing machines based upon the rubbing principle had two limitations: They washed only one item at a time, and the rubbing was hard on clothes. The major conceptual breakthrough was to move away from rubbing and to design machines that would clean by forcing water through a number of clothes at the same time. An early suction machine used a plunger to force water through clothes. Later electric machines would have between two and four suction cups, similar to plungers, attached to arms that went up and down and rotated on a vertical shaft. Another hand-operated washing machine rocked a tub back and forth on a frame. An electric motor was later substituted for the hand lever that rocked the tub. A third hand-operated washing machine was the dolly type. The dolly, which looked like an upside-down three-legged milking stool, was attached to the inside of the tub cover and was turned by a two-handled lever on top of the enclosed tub.

Machine Washing

The hand-operated machines that would later dominate the market as electric machines were the horizontal rotary cylinder and the underwater agitator types. In 1851, James King patented a machine of the first type that utilized two concentric half-full cylinders. Water in the outer cylinder was heated by a fire beneath it; a hand crank turned the perforated inner cylinder that contained clothing and soap. The inner-ribbed design of the rotating cylinder raised the clothes as the cylinder turned. Once the clothes reached the top of the cylinder, they dropped back down into the soapy water.

The first underwater agitator-type machine was patented in 1869. In this machine, four blades at the bottom of the tub were attached to a central vertical shaft that was turned by a hand crank on the outside. The agitation created by the blades washed the clothes by driving water through the fabric.
It was not until 1922, when Howard Snyder of the Maytag Company developed an underwater agitator with reversible motion, that this type of machine was able to compete with the other machines. Without reversible action, clothes would soon wrap around the blades and not be washed.


Claims for inventing the first electric washing machine came from O. B. Woodrow, who founded the Automatic Electric Washer Company, and Alva J. Fisher, who developed the Thor electric washing machine for the Hurley Machine Company. Both Woodrow and Fisher made their innovations in 1907 by adapting electric power to modified hand-operated, dolly-type machines. Since only 8 percent of American homes were wired for electricity in 1907, the early machines were advertised as adaptable to electric or gasoline power but could be hand-operated if the power source failed. Soon, electric power was being applied to the rotary cylinder, oscillating, and suction-type machines. In 1910, a number of companies introduced washing machines with attached wringers that could be operated by electricity. The introduction of automatic washers in 1937 meant that washing machines could change phases without the action of the operator.

Impact

By 1907, the year electricity was adapted to washing machines, electric power was already being used to operate fans, ranges, coffee percolators, flatirons, and sewing machines. By 1920, nearly 35 percent of American residences had been wired for electricity; by 1941, nearly 80 percent had been wired. The majority of American homes had washing machines by 1941; by 1958, this figure had risen to an estimated 90 percent.

The growth of electric appliances, especially washing machines, is directly related to the decline in the number of domestic servants in the United States. The development of the electric washing machine was, in part, a response to a decline in servants, especially laundresses. Also, rather than easing the work of laundresses with technology, American families replaced their laundresses with washing machines.

Commercial laundries were also affected by the growth of electric washing machines. At the end of the nineteenth century, they were in every major city and were used widely.
Observers noted that just as spinning, weaving, and baking had once been done in the home but had moved to commercial establishments, laundry work had now begun its move out of the home. After World


War II (1939-1945), however, although commercial laundries continued to grow, their business was centered more and more on institutional laundry, rather than residential laundry, which they had lost to the home washing machine.

Some scholars have argued that, on one hand, the return of laundry to the home resulted from marketing strategies that developed the image of the American woman as one who is home operating her appliances. On the other hand, it was probably because the electric washing machine made the task much easier that American women, still primarily responsible for the family laundry, were able to pursue careers outside the home.

See also Electric refrigerator; Microwave cooking; Robot (household); Vacuum cleaner; Vending machine slug rejector.

Further Reading
Ierley, Merritt. Comforts of Home: The American House and the Evolution of Modern Convenience. New York: C. Potter, 1999.
“Maytag Heritage Embraces Innovation, Dependable Products.” Machine Design 71, no. 18 (September, 1999).
Shapiro, Laura. “Household Appliances.” Newsweek 130, no. 24A (Winter, 1997/1998).


Weather satellite

The invention: A series of cloud-cover meteorological satellites that pioneered the reconnaissance of large-scale weather systems and led to vast improvements in weather forecasting.

The person behind the invention:
Harry Wexler (1911-1962), director of National Weather Bureau meteorological research

Cameras in Space

The first experimental weather satellite, Tiros 1, was launched from Cape Canaveral on April 1, 1960. Tiros’s orbit was angled to cover the area from Montreal, Canada, to Santa Cruz, Argentina, in the Western Hemisphere. Tiros completed an orbit every ninety-nine minutes and, when launched, was expected to survive at least three months in space, returning thousands of images of large-scale weather systems.

Tiros 1 was equipped with a pair of vidicon scanner television cameras, one equipped with a wide-angle lens and the other with a narrow-angle lens. Both cameras created pictures with five hundred lines per frame at a shutter speed of 1.5 milliseconds. Each television camera’s imaging data were stored on magnetic tape for downloading to ground stations when Tiros 1 was in range. The wide-angle lens provided a low-resolution view of an area covering 2,048 square kilometers. The narrow-angle lens had a resolution of half a kilometer within a viewing area of 205 square kilometers.

Tiros transmitted its data to ground stations, which displayed the data on television screens. Photographs of these displays were then made for permanent records. Tiros weather data were sent to the Naval Photographic Interpretation Center for detailed meteorological analysis. Next, the photographs were passed along to the National Weather Bureau for further study.

Tiros caused some controversy because it was able to image large areas of the communist world: the Soviet Union, Cuba, and Mongolia. The weather satellite’s imaging system was not, however, particularly useful as a spy satellite, and only large-scale surface features were visible in the images. Nevertheless, the National Aeronautics and Space Administration (NASA) skirted adverse international reactions by carefully scrutinizing Tiros’s images for evidence of sensitive surface features before releasing them publicly.

Hurricane off the coast of Florida photographed from space. (PhotoDisc)

A Startling Discovery

Tiros 1 was not in orbit very long before it made a significant and startling discovery. It was the first satellite to document that large storms have vortex patterns that resemble whirling pinwheels. Within its lifetime, Tiros photographed more than forty northern mid-latitude storm systems, and each one had a vortex at its center. These storms were in various stages of development and were between 800 and 1,600 kilometers in diameter. The storm vortex in most of these was located inside a 560-kilometer-diameter circle around the center of the storm’s low-pressure zone. Nevertheless, Tiros’s images did not reveal at what stage in a storm’s development the vortex pattern formed.
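The ninety-nine-minute orbital period quoted earlier can be checked against Kepler’s third law for a circular orbit. The short sketch below assumes an altitude of roughly 700 kilometers; that altitude is an illustrative assumption, not a figure given in this article:

```python
import math

# Circular-orbit period from Kepler's third law: T = 2*pi*sqrt(a^3 / mu).
# The ~700 km altitude used in the example call is an assumption for
# illustration; the article states only the ninety-nine-minute period.
MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6          # mean Earth radius, m

def orbital_period_minutes(altitude_m: float) -> float:
    """Period of a circular orbit at the given altitude, in minutes."""
    a = R_EARTH + altitude_m  # semi-major axis of a circular orbit
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60

print(round(orbital_period_minutes(700e3), 1))  # roughly 99 minutes
```

A low Earth orbit at that altitude works out to about ninety-nine minutes, consistent with the period reported for Tiros 1.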


This was typical of Tiros’s data. The satellite was truly an experiment, and, as is the case with most initial experiments, various new phenomena were uncovered but were not fully understood. The data showed clearly that weather systems could be investigated from orbit and that future weather satellites could be outfitted with sensors that would lead to better understanding of meteorology on a global scale.

Tiros 1 did suffer from a few difficulties during its lifetime in orbit. Low contrast in the television imaging system often made it difficult to distinguish between cloud cover and snow cover. The magnetic tape system for the high-resolution camera failed at an early stage. Also, Earth’s magnetic field tended to move Tiros 1 away from an advantageous Earth observation attitude. Experience with Tiros 1 led to improvements in later Tiros satellites and many other weather-related satellites.

Consequences

Prior to Tiros 1, weather monitoring required networks of ground-based instrumentation centers, airborne balloons, and instrumented aircraft. Brief high-altitude rocket flights provided limited coverage of cloud systems from above. Tiros 1 was the first step in the development of the permanent monitoring of weather systems. The resulting early detection and accurate tracking of hurricanes alone have resulted in savings in both property and human life.

As a result of the Tiros 1 experiment, meteorologists were not ready to discard ground-based and airborne weather systems in favor of satellites alone. Satellites alone could not provide data about pressure, humidity, and temperature, for example. Tiros 1 did, however, introduce weather satellites as a necessary supplement to ground-based and airborne systems for large-scale monitoring of weather systems and storms. Satellites could provide more reliable and expansive coverage at a far lower cost than a large contingent of aircraft.
Tiros 1, which was followed by nine similar spacecraft, paved the way for modern weather satellite systems.

See also Artificial satellite; Communications satellite; Cruise missile; Radio interferometer; Rocket.


Further Reading
Fishman, Jack, and Robert Kalish. The Weather Revolution: Innovations and Imminent Breakthroughs in Accurate Forecasting. New York: Plenum Press, 1994.
Kahl, Jonathan D. Weather Watch: Forecasting the Weather. Minneapolis, Minn.: Lerner, 1996.
Rao, Krishna P. Weather Satellites: Systems, Data, and Environmental Applications. Boston: American Meteorological Society, 1990.

Artist’s depiction of a weather satellite. (PhotoDisc)


Xerography

The invention: Process that makes identical copies of documents with a system of lenses, mirrors, electricity, chemicals that conduct electricity in bright light, and dry inks (toners) that fuse to paper by means of heat.

The people behind the invention:
Chester F. Carlson (1906-1968), an American inventor
Otto Kornei (b. 1903), a German physicist and engineer

Xerography, Xerography, Everywhere

The term xerography is derived from the Greek for “dry writing.” The process of xerography was invented by an American, Chester F. Carlson, who made the first xerographic copy of a document in 1938. Before the development of xerography, the preparation of copies of documents was often difficult and tedious. Most often, unclear carbon copies of typed documents were the only available medium of information transfer. The development of xerography led to the birth of the giant Xerox Corporation, and the term xerographic was soon shortened to Xerox.

The process of xerography makes identical copies of a document by using lens systems, mirrors, electricity, chemicals that conduct electricity in bright light (“semiconductors”), and dry inks called “toners” that are fused to copy paper by means of heat. The process makes it easy to produce identical copies of a document quickly and cheaply. In addition, xerography has led to huge advances in information transfer, the increased use of written documents, and rapid decision-making in all areas of society. Xeroxing can produce both color and black-and-white copies.

From the First Xerox Copy to Modern Photocopies

On October 22, 1938, after years of effort, Chester F. Carlson produced the first Xerox copy. Reportedly, his efforts grew out of his 1930’s job in the patent department of the New York firm P. R.


Mallory and Company. He was looking for a quick, inexpensive method for making copies of patent diagrams and other patent specifications. Much of Carlson’s original work was conducted in the kitchen of his New York City apartment or in a room behind a beauty parlor in Astoria, Long Island. It was in Astoria that Carlson, with the help of Otto Kornei, produced the first Xerox copy (of the inscription “10-22-38 Astoria”) on waxed paper. The first practical method of xerography used the element selenium, a substance that conducts electricity only when it is exposed to light.

The prototype Xerox copying machines were developed as a result of the often frustrating, nerve-wracking, fifteen-year collaboration of Carlson, scientists and engineers at the Battelle Memorial Institute in Columbus, Ohio, and the Haloid Company of Rochester, New York. The Haloid Company financed the effort after 1947, based on an evaluation made by an executive, John H. Dessauer. In return, the company obtained the right to manufacture and market Xerox machines. The company, which was originally a manufacturer of photographic paper, evolved into the giant Xerox Corporation. Carlson became very wealthy as a result of the royalties and dividends paid to him by the company.

Early xerographic machines operated in several stages. First, the document to be copied was positioned above a mirror so that its image, lit by a flash lamp and projected by a lens, was reflected onto a drum coated with electrically charged selenium. Wherever dark sections of the document’s image were reflected, the selenium coating retained its positive charge. Where the image was light, the charge of the selenium was lost, because of the photoactive properties of the selenium. Next, the drum was dusted with a thin layer of a negatively charged black powder called a “toner.” Toner particles stuck to positively charged dark areas of the drum and produced a visible image on the drum.
Then, Xerox copy paper, itself positively charged, was put in contact with the drum, where it picked up negatively charged toner. Finally, an infrared lamp heated the paper and the toner, fusing the toner to the paper and completing the copying process.

In ensuing years, the Xerox Corporation engineered many changes in the materials and mechanics of Xerox copiers. For example, the semiconductors and toners were changed, which increased both the quality of copies and the safety of the copying process. In addition, auxiliary lenses of varying focal length were added, along with other features, which made it possible to produce enlarged or reduced copies. Furthermore, modification of the mechanical and chemical properties of the components of the system made it possible to produce thousands of copies per hour, sort them, and staple them.

The next development was color Xerox copying. Color systems use the same process steps that the black-and-white systems use, but the document exposure and toning operations are repeated three times to yield the three overlaid colored layers (yellow, magenta, and cyan) that are used to produce multicolored images in any color printing process. To accomplish this, blue, green, and red filters are rotated in front of the copier’s lens system. This action produces three different semiconductor images on three separate rollers. Next, yellow, magenta, and cyan toners are used, each on its own roller, to yield three images. Finally, all three images are transferred to one sheet of paper, which is heated to produce the multicolored copy. The complex color procedure is slower and much more expensive than the black-and-white process.

Chester F. Carlson

The copying machine changed Chester Floyd Carlson’s life even before he invented it. While he was experimenting with photochemicals in his apartment, the building owner’s daughter came by to complain about the stench Carlson was creating. However, she found Carlson himself more compelling than her complaints and married him not long afterward. Soon Carlson transferred his laboratory to a room behind his mother-in-law’s beauty parlor, where he set aside ten dollars a month from his meager wages to spend on research.

Born in Seattle, Washington, in 1906, Carlson learned early to husband his resources, set his goals high, and never give up. Both his father and mother were sickly, and so after he was fourteen, Carlson was the family’s main breadwinner. His relentless drive and native intelligence got him through high school and into a community college, where an impressed teacher inspired him to go even further, into the California Institute of Technology. After he graduated, he worked for General Electric but lost his job during the layoffs caused by the Great Depression. In 1933 he hired on with P. R. Mallory Company, an electrical component manufacturer, which, although not interested in his invention, at least paid him enough in wages to keep going.

His thirteen-year crusade to invent a copier and then find a manufacturer to build it ended just as Carlson was nearly broke. In 1946 Haloid Corporation licensed the rights to Carlson’s copying machine, but even then the invention did not become an important part of American communications culture until the company marketed the Xerox 914 in 1960. The earnings for Xerox Corporation (as it was called after 1961) leapt from $33 million to more than $500 million in the next six years, and Carlson became enormously wealthy. He won the Inventor of the Year Award in 1964 and the Horatio Alger Award in 1966. Before he died in 1968, he remembered the hardships of his youth by donating $100 million to research organizations and charitable foundations.

Impact

The quick, inexpensive copying of documents is commonly performed worldwide. Memoranda that must be distributed to hundreds of business employees can now be copied in moments, whereas in the past such a process might have occupied typists for days and cost hundreds of dollars. Xerox copying also has the advantage that each copy is an exact replica of the original; no new errors can be introduced, as was the case when documents had to be retyped. Xerographic techniques are also used to reproduce X rays and many other types of medical and scientific data, and the facsimile (fax) machines that are now used to send documents from one place to another over telephone lines are a variation of the Xerox process.

All this convenience is not without some problems: The ease of photocopying has made it possible to reproduce copyrighted publications. Few students at libraries, for example, think twice about copying portions of books, since it is easy and inexpensive to do so.
However, doing so can be similar to stealing, according to the law. With the advent of color photocopying, an even more alarming problem has arisen: Thieves are now able to use this technology to create counterfeit money and checks. Researchers will soon find a way to make such important documents impossible to copy.


See also Fax machine; Instant photography; Laser-diode recording process.

Further Reading
Kelley, Neil D. “Xerography: The Greeks Had a Word for It.” Infosystems 24, no. 1 (January, 1977).
McClain, Dylan L. “Duplicate Efforts.” New York Times (November 30, 1998).
Mort, J. The Anatomy of Xerography: Its Invention and Evolution. Jefferson, N.C.: McFarland, 1989.


X-ray crystallography

The invention: Technique for using X rays to determine the crystal structures of many substances.

The people behind the invention:
Sir William Lawrence Bragg (1890-1971), the son of Sir William Henry Bragg and cowinner of the 1915 Nobel Prize in Physics
Sir William Henry Bragg (1862-1942), an English mathematician and physicist and cowinner of the 1915 Nobel Prize in Physics
Max von Laue (1879-1960), a German physicist who won the 1914 Nobel Prize in Physics
Wilhelm Conrad Röntgen (1845-1923), a German physicist who won the 1901 Nobel Prize in Physics
René-Just Haüy (1743-1822), a French mathematician and mineralogist
Auguste Bravais (1811-1863), a French physicist

The Elusive Crystal

A crystal is a body that is formed once a chemical substance has solidified. It is uniformly shaped, with angles and flat surfaces that form a network based on the internal structure of the crystal’s atoms. Determining what these internal crystal structures look like is the goal of the science of X-ray crystallography. To do this, it studies the precise arrangements into which the atoms are assembled.

Central to this study is the principle of X-ray diffraction. This technique involves the deliberate scattering of X rays as they are shot through a crystal, an act that interferes with their normal path of movement. The way in which the atoms are spaced and arranged in the crystal determines how these X rays are reflected off them while passing through the material. The light waves thus reflected form a telltale interference pattern. By studying this pattern, scientists can discover variations in the crystal structure.

The development of X-ray crystallography in the early twentieth century helped to answer two major scientific questions: What are X


rays? and What are crystals? It gave birth to a new technology for the identification and classification of crystalline substances.

From studies of large, natural crystals, chemists and geologists had established the elements of symmetry through which one could classify, describe, and distinguish various crystal shapes. René-Just Haüy, about a century before, had demonstrated that diverse shapes of crystals could be produced by the repetitive stacking of tiny solid cubes. Auguste Bravais later showed, through mathematics, that all crystal forms could be built from a repetitive stacking of three-dimensional arrangements of points (lattice points) into “space lattices,” but no one had ever been able to prove that matter really was arranged in space lattices. Scientists did not know if the tiny building blocks modeled by space lattices actually were solid matter throughout, like Haüy’s cubes, or if they were mostly empty space, with solid matter located only at the lattice points described by Bravais.

With the disclosure of the atomic model of Danish physicist Niels Bohr in 1913, determining the nature of the building blocks of crystals took on a special importance. If crystal structure could be shown to consist of atoms at lattice points, then the Bohr model would be supported, and science then could abandon the theory that matter was totally solid.

X Rays Explain Crystal Structure

In 1912, Max von Laue first used X rays to study crystalline matter. Laue had the idea that irradiating a crystal with X rays might cause diffraction. He tested this idea and found that X rays were scattered by the crystals in various directions, revealing on a photographic plate a pattern of spots that depended on the orientation and the symmetry of the crystal. The experiment confirmed in one stroke that crystals were not solid and that their matter consisted of atoms occupying lattice sites with substantial space in between.
Further, the atomic arrangements of crystals could serve to diffract light rays. Laue received the 1914 Nobel Prize in Physics for his discovery of the diffraction of X rays in crystals.
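The space-lattice idea of Haüy and Bravais that Laue's experiment vindicated can be stated concretely: a lattice is the set of points reached by whole-number multiples of three basis vectors, with empty space between the points. The short sketch below is a modern illustration, not part of the original article; it generates such a point set for the simplest case, a cubic lattice with an arbitrary assumed spacing a.

```python
from itertools import product

def cubic_lattice_points(a, n):
    """Points of a simple cubic space lattice: every point is
    (n1*a, n2*a, n3*a) for integer indices n1, n2, n3.
    Returns the n x n x n block with indices 0 <= ni < n."""
    return [(n1 * a, n2 * a, n3 * a)
            for n1, n2, n3 in product(range(n), repeat=3)]

# A 3 x 3 x 3 block of lattice sites with an assumed spacing a = 2.0
sites = cubic_lattice_points(2.0, 3)
print(len(sites))  # 27 sites, with empty space between them
```

A real crystal repeats a group of atoms at every such site; it is this periodic alternation of matter and empty space that makes diffraction possible.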



Sir William Henry Bragg and Sir William Lawrence Bragg

William Henry Bragg, senior member of one of the most illustrious father-son scientific teams in history, was born in Cumberland, England, in 1862. Talented at mathematics, he studied that field at Trinity College, Cambridge, and physics at the Cavendish Laboratory, then moved into a professorship at the University of Adelaide in Australia. Despite an underequipped laboratory, he proved that the atom is not a solid body, and his work with X rays attracted the attention of Ernest Rutherford in England, who helped him win a professorship at the University of Leeds in 1908. By then his eldest son, William Lawrence Bragg, was showing considerable scientific abilities of his own. Born in Adelaide in 1890, he also attended Trinity College, Cambridge, and performed research at the Cavendish. It was while there that father and son worked together to establish the specialty of X-ray crystallography. When they shared the 1915 Nobel Prize in Physics for their work, the son was only twenty-five years old—the youngest person ever to receive a Nobel Prize in any field.

The younger Bragg was also an artillery officer in France during World War I. Meanwhile, his father worked for the Royal Admiralty. The hydrophone he invented to detect submarines underwater earned him a knighthood in 1920. The father moved to University College, London, and became director of the Royal Institution. His popular lectures about the latest scientific developments made him famous among the public, while his elevation to president of the Royal Society in 1935 placed him among the most influential scientists in the world. He died in 1942.

The son taught at the University of Manchester in 1919 and then in 1938 became director of the National Physical Laboratory and professor of physics at the Cavendish.
Following the father’s example, he became an administrator and professor at the Royal Institution, where he also distinguished himself with his popular lectures. He encouraged research using X-ray crystallography, including the work that unlocked the structure of deoxyribonucleic acid (DNA). Knighted in 1941, he became a royal Companion of Honor in 1967. He died in 1971.


Still, the diffraction of X rays was not yet a proved scientific fact. Sir William Henry Bragg contributed the final proof by passing one of the diffracted beams through a gas and achieving ionization of the gas, the same effect that true X rays would have caused. He also used the spectrometer he built for this purpose to detect and measure specific wavelengths of X rays and to note which orientations of crystals produced the strongest reflections. He noted that X rays, like visible light, occupy a definite part of the electromagnetic spectrum. Yet most of Bragg's work focused on actually using X rays to deduce crystal structures.

Sir Lawrence Bragg was also deeply interested in this new phenomenon. In 1912, he had the idea that the pattern of spots was an indication that the X rays were being reflected from the planes of atoms in the crystal. If that were true, Laue pictures could be used to obtain information about the structures of crystals. Bragg developed an equation that described the angles at which X rays would be most effectively diffracted by a crystal. This was the start of the X-ray analysis of crystals.

Henry Bragg had at first used his spectrometer to try to determine whether X rays had a particulate nature. It soon became evident, however, that the device was a far more powerful way of analyzing crystals than the Laue photograph method had been. Not long afterward, father and son joined forces and founded the new science of X-ray crystallography.

By experimenting with this technique, Lawrence Bragg came to believe that if the lattice models of Bravais applied to actual crystals, a crystal structure could be viewed as being composed of atoms arranged in a pattern consisting of a few sets of flat, regularly spaced, parallel planes. Diffraction became the means by which the Braggs deduced the detailed structures of many crystals.
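The equation Lawrence Bragg developed is today known as Bragg's law. The article does not write it out, but in standard notation it reads:

```latex
% Bragg's law: X rays of wavelength \lambda reflected from parallel
% atomic planes a distance d apart reinforce one another only when
% the glancing angle \theta satisfies (n an integer, the "order"):
n\lambda = 2d\sin\theta
```

For example, with copper K-alpha radiation (wavelength about 1.54 angstroms) and planes 2.0 angstroms apart, the first-order reflection appears where sin(theta) = 1.54/4.0, a glancing angle of roughly 22.6 degrees. Measuring such angles for many reflections lets the plane spacings d, and hence the crystal structure, be worked back out.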
Based on these findings, they built three-dimensional scale models out of wire and spheres that made it possible for the nature of crystal structures to be visualized clearly even by nonscientists. Their results were published in the book X-Rays and Crystal Structure (1915).

Impact

The Braggs founded an entirely new discipline, X-ray crystallography, which continues to grow in scope and application. Of particular importance was the early discovery that atoms, rather than molecules, determine the nature of crystals. X-ray spectrometers of the type developed by the Braggs were used by other scientists to gain insights into the nature of the atom, particularly the innermost electron shells. The tool made possible the timely validation of some of Bohr's major concepts about the atom.

X-ray diffraction became a cornerstone of the science of mineralogy. The Braggs, chemists such as Linus Pauling, and a number of mineralogists used the tool to do pioneering work in deducing the structures of all major mineral groups. X-ray diffraction became the definitive method of identifying crystalline materials. Metallurgy progressed from a technology to a science as metallurgists became able, for the first time, to deduce the structural order of various alloys at the atomic level.

Diffracted X rays were applied in the field of biology, particularly at the Cavendish Laboratory under the direction of Lawrence Bragg. The tool proved to be essential for deducing the structures of hemoglobin, proteins, viruses, and eventually the double-helix structure of deoxyribonucleic acid (DNA).

See also Field ion microscope; Geiger counter; Holography; Mass spectrograph; Neutrino detector; Scanning tunneling microscope; Thermal cracking process; Ultramicroscope.

Further Reading
Achilladelis, Basil, and Mary Ellen Bowden. Structures of Life. Philadelphia: The Center, 1989.
Bragg, William Lawrence. The Development of X-Ray Analysis. New York: Hafner Press, 1975.
Thomas, John Meurig. "Architecture of the Invisible." Nature 364 (August 5, 1993).


X-ray image intensifier

The invention: A complex electronic device that increases the intensity of the light produced by X-ray beams exiting patients, thereby making it possible to read finer details.

The people behind the invention:
Wilhelm Conrad Röntgen (1845-1923), a German physicist
Thomas Alva Edison (1847-1931), an American inventor
W. Edward Chamberlain, an American physician
Thomson Electron Tubes, a French company

Radiologists Need Dark Adaptation

Thomas Alva Edison invented the fluoroscope in 1896, only one year after Wilhelm Conrad Röntgen's discovery of X rays. The primary function of the fluoroscope is to create images of the internal structures and fluids in the human body. During fluoroscopy, the radiologist who performs the procedure views a continuous image of the motion of the internal structures. Although much progress was made during the first half of the twentieth century in recording X-ray images on plates and film, fluoroscopy lagged behind.

In conventional fluoroscopy, a radiologist observed an image on a dim fluoroscopic screen. In the same way that it is more difficult to read a telephone book in dim illumination than in bright light, it is much harder to interpret a dim fluoroscopic image than a bright one. In the early years of fluoroscopy, the radiologist's eyes had to be accustomed to dim illumination for at least fifteen minutes before performing fluoroscopy. "Dark adaptation" was the process of wearing red goggles under normal illumination so that the amount of light entering the eye was reduced.

The human retina contains two kinds of light-sensitive elements: rods and cones. The dim light emitted by the screen of the fluoroscope, even under the best conditions, required the radiologist to see only with the rods, and vision is much less accurate in such circumstances. For normal rod-and-cone vision, the brightness of the screen might have to be increased a thousandfold. Such an increase was impossible; even if an X-ray tube could have been built that was capable of emitting a beam of sufficient intensity, its rays would have been fatal to the patient in less than a minute.

Fluoroscopy in an Undarkened Room

In a classic paper delivered at the December, 1941, meeting of the Radiological Society of North America, Dr. W. Edward Chamberlain of Temple University Medical School proposed applying to fluoroscopy the techniques of image amplification (also known as image intensification) that had already been adapted for use in the electron microscope and in television. The idea was not original with him. Four or five years earlier, Irving Langmuir of General Electric Company had applied for a patent for a device that would intensify a fluoroscopic image. "It is a little hard to understand the delay in the creation of a practical device," Chamberlain noted. "Perhaps what is needed is a realization by the physicists and the engineers of the great need for brighter fluoroscopic images and the great advantage to humanity which their arrival would entail." Chamberlain's brilliant analysis provided precisely that awareness.

World War II delayed the introduction of fluoroscopic image intensification, but during the 1950's, a number of image intensifiers based on the principles Chamberlain had outlined came on the market. The image-intensifier tube is a complex electronic device that receives the X-ray beam exiting the patient, converts it into light, and increases the intensity of that light. The tube is usually contained in a glass envelope that provides some structural support and maintains a vacuum. The X rays, after passing through the patient, impinge on the face of a screen and trigger the ejection of electrons, which are then speeded up and focused within the tube by means of electrical fields.
When the speeded-up electrons strike the phosphor at the output end of the tube, they trigger the emission of light photons that re-create the desired image, which is several thousand times brighter than is the case with the conventional fluoroscopic screen. The output of the image intensifier can be viewed in an


undarkened room without prior dark adaptation, thus saving the radiologist much valuable time. Moving pictures can be taken of the output phosphor of the intensifying tube or of the television receiver image, and they can be stored on motion picture film or on magnetic tape. This permanently records the changing image and makes it possible to reduce further the dose of radiation that a patient must receive. Instead of prolonging the radiation exposure while examining various parts of the image or checking for various factors, the radiologist can record a relatively short exposure and then rerun the motion picture film or tape as often as necessary to analyze the information that it contains.

The radiation dosage that is administered to the patient can be reduced to a tenth or even a hundredth of what it had been previously, and the same amount of diagnostic information or more can be obtained. The radiation dose that the radiologist receives is reduced to zero or almost zero. In addition, the combination of the brighter image and the lower radiation dosage administered to the patient has made it possible for radiologists to develop a number of important new diagnostic procedures that could not have been accomplished at all without image intensification.

Impact

The image intensifier that was developed by the French company Thomson Electron Tubes in 1959 had an input-phosphor diameter, or field, of four inches. Later on, image intensifiers with field sizes of up to twenty-two inches became available, making it possible to create images of much larger portions of the human anatomy.

The most important contribution made by image intensifiers was to increase fluoroscopic screen illumination to the level required for cone vision. These devices have made dark adaptation a thing of the past. They have also brought the television camera into the fluoroscopic room and opened up a whole new world of fluoroscopy.
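The brightness increase of "several thousand times" can be understood with the standard two-factor model from radiologic physics: total brightness gain is the flux gain (extra light photons produced per accelerated electron) multiplied by the minification gain (the same light concentrated from a large input screen onto a small output phosphor). The sketch below is an illustration only; the 9-inch input field, 1-inch output phosphor, and flux gain of 50 are assumed round numbers, not values from this article.

```python
def brightness_gain(input_diameter_in, output_diameter_in, flux_gain):
    """Estimate image-intensifier brightness gain as
    minification gain x flux gain.  Minification gain is the ratio
    of input to output screen areas, (d_in / d_out) ** 2."""
    minification_gain = (input_diameter_in / output_diameter_in) ** 2
    return minification_gain * flux_gain

# Assumed round numbers: 9-inch input field, 1-inch output phosphor,
# and 50 light photons at the output per accelerated electron
print(brightness_gain(9.0, 1.0, 50))  # 4050.0
```

Even with such modest assumed numbers, the gain lands in the "several thousand times brighter" range the text describes, which is what moved fluoroscopic viewing from rod vision to cone vision.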
See also Amniocentesis; CAT scanner; Electrocardiogram; Electroencephalogram; Mammography; Nuclear magnetic resonance; Ultrasound.


Further Reading
Glasser, Otto. Dr. W. C. Röntgen. 2d ed. Springfield, Ill.: Charles C. Thomas, 1972.
Isherwood, Ian, Adrian Thomas, and Peter Neil Temple Wells. The Invisible Light: One Hundred Years of Medical Radiology. Cambridge, Mass.: Blackwell Science, 1995.
Lewis, Ricki. "Radiation Continuing Concern with Fluoroscopy." FDA Consumer 27 (November, 1993).


Yellow fever vaccine

The invention: The first safe vaccine against the virulent yellow fever virus, which caused some of the deadliest epidemics of the nineteenth and early twentieth centuries.

The people behind the invention:
Max Theiler (1899-1972), a South African microbiologist
Wilbur Augustus Sawyer (1879-1951), an American physician
Hugh Smith (1902-1995), an American physician

A Yellow Flag

Yellow fever, caused by a virus and transmitted by mosquitoes, infects humans and monkeys. After the bite of the infecting mosquito, it takes several days before symptoms appear. The onset of symptoms is abrupt, with headache, nausea, and vomiting. Because the virus destroys liver cells, yellowing of the skin and eyes is common. Approximately 10 to 15 percent of patients die after exhibiting terrifying signs and symptoms. Death usually results from liver necrosis (decay) and liver shutdown. Those who survive recover completely and are left immune.

At the beginning of the twentieth century, there was no cure for yellow fever. The best that medical authorities could do was to quarantine the afflicted. Quarantine sites usually flew the warning yellow flag, which gave the disease its colloquial name, "yellow jack." After the Aëdes aegypti mosquito was clearly identified as the carrier of the disease in 1900, efforts were made to combat the disease by wiping out the mosquito. Most famous in these efforts were the American army surgeon Walter Reed and the Cuban physician Carlos J. Finlay. This strategy was successful in Panama and Cuba and made possible the construction of the Panama Canal.

Still, the yellow fever virus persisted in the tropics, and the opening of the Panama Canal increased the danger of its spreading aboard the ships using this new route. Moreover, the disease, which was thought to be limited to the jungles of South and Central America, had begun to spread around


the world to wherever the mosquito Aëdes aegypti could carry the virus. Mosquito larvae traveled well in casks of water aboard trading vessels and spread the disease to North America and Europe.

Immunization by Mutation

Max Theiler received his medical education in London. Following that, he completed a four-month course at the London School of Hygiene and Tropical Medicine, after which he was invited to come to the United States to work in the department of tropical medicine at Harvard University. While there, Theiler started working to identify the yellow fever organism. The first problem he faced was finding a suitable laboratory animal that could be infected with yellow fever. Until that time, the only animal successfully infected with yellow fever was the rhesus monkey, which was expensive and difficult to care for under laboratory conditions. Theiler succeeded in infecting laboratory mice with the disease by injecting the virus directly into their brains.

Laboratory work for investigators and assistants coming in contact with the yellow fever virus was extremely dangerous. At least six of the scientists at the Yellow Fever Laboratory at the Rockefeller Institute died of the disease, and many other workers were infected. In 1929, Theiler was infected with yellow fever; fortunately, the attack was so mild that he recovered quickly and resumed his work.

During one set of experiments, Theiler produced successive generations of the virus. First, he took virus from a monkey that had died of yellow fever and used it to infect a mouse. Next, he extracted the virus from that mouse and injected it into a second mouse, repeating the same procedure using a third mouse. All of them died of encephalitis (inflammation of the brain). The virus from the third mouse was then used to infect a monkey. Although the monkey showed signs of yellow fever, it recovered completely.
When Theiler passed the virus through more mice and then into the abdomen of another monkey, the monkey showed no symptoms of the disease. The results of these experiments were published by Theiler in the journal Science.


This article caught the attention of Wilbur Augustus Sawyer, director of the Yellow Fever Laboratory at the Rockefeller Foundation International Health Division in New York. Sawyer, who was working on a yellow fever vaccine, offered Theiler a job at the Rockefeller Foundation, which Theiler accepted. Theiler's mouse-adapted, "attenuated" virus was given to the laboratory workers, along with human immune serum, to protect them against the yellow fever virus. This type of vaccination, however, carried the risk of transferring other diseases, such as hepatitis, in the human serum.

In 1930, Theiler worked with Eugen Haagen, a German bacteriologist, at the Rockefeller Foundation. The strategy of the Rockefeller laboratory was a cautious, slow, and steady effort to culture a strain of the virus so mild as to be harmless to a human but strong enough to confer a long-lasting immunity. (To "culture" something—tissue cells, microorganisms, or other living matter—is to grow it in a specially prepared medium under laboratory conditions.) They started with a new strain of yellow fever harvested from a twenty-eight-year-old West African named Asibi; it was later known as the "Asibi strain." It was a highly virulent strain that in four to seven days killed almost all the monkeys that were infected with it. From time to time, Theiler or his assistant would test the culture on a monkey and note the speed with which it died.

It was not until April, 1936, that Hugh Smith, Theiler's assistant, called his attention to an odd development noted in the laboratory records of strain 17D. In its 176th culture, 17D had failed to kill the test mice. Some had been paralyzed, but even these eventually recovered. Two monkeys that had received a dose of 17D in their brains survived a mild attack of encephalitis, but those that had taken the infection in the abdomen showed no ill effects whatever. Oddly, subsequent subcultures of the strain killed monkeys and mice at the usual rate.
The only explanation possible was that a mutation had occurred unnoticed. The batch of strain 17D was tried over and over again on monkeys with no harmful effects. Instead, the animals were immunized effectively. Then it was tried on the laboratory staff, including Theiler and his wife, Lillian. The batch injected into humans had the same immunizing effect. Neither Theiler nor anyone else could explain how the mutation of the virus had come about. Attempts to duplicate the experiment, using the same Asibi virus, failed. Still, this was the first safe vaccine for yellow fever. In June, 1937, Theiler reported this crucial finding in the Journal of Experimental Medicine.

Impact

Following the discovery of the vaccine, Theiler's laboratory became a production plant for the 17D virus. Before World War II (1939-1945), more than one million vaccination doses were sent to Brazil and other South American countries. After the United States entered the war, eight million soldiers were given the vaccine before being shipped to tropical war zones. In all, approximately fifty million people were vaccinated in the war years.

Yet although the vaccine, combined with effective mosquito control, eradicated the disease from urban centers, yellow fever is still present in large regions of South and Central America and of Africa. The most severe outbreak of yellow fever ever known occurred from 1960 to 1962 in Ethiopia; out of one hundred thousand people infected, thirty thousand died. The 17D yellow fever vaccine prepared by Theiler in 1937 continues to be the only vaccine used by the World Health Organization, more than fifty years after its discovery. There is a continuous effort by that organization to prevent infection by immunizing the people living in tropical zones.

See also Antibacterial drugs; Penicillin; Polio vaccine (Sabin); Polio vaccine (Salk); Salvarsan; Tuberculosis vaccine; Typhus vaccine.

Further Reading
DeJauregui, Ruth. One Hundred Medical Milestones That Shaped World History. San Mateo, Calif.: Bluewood Books, 1998.
Delaporte, François. The History of Yellow Fever: An Essay on the Birth of Tropical Medicine. Cambridge, Mass.: MIT Press, 1991.
Theiler, Max, and Wilbur G. Downs. The Arthropod-borne Viruses of Vertebrates: An Account of the Rockefeller Foundation Virus Program, 1951-1970. New Haven, Conn.: Yale University Press, 1973.
Williams, Greer. Virus Hunters. London: Hutchinson, 1960.


Time Line

Date            Invention
c. 1900         Electrocardiogram
1900            Brownie camera
1900            Dirigible
1901            Artificial insemination
1901            Vat dye
1901-1904       Silicones
1902            Ultramicroscope
1903            Airplane
1903            Disposable razor
1903-1909       Laminated glass
1904            Alkaline storage battery
1904            Photoelectric cell
1904            Vacuum tube
1905            Blood transfusion
1905-1907       Plastic
1906            Gyrocompass
1906            Radio
1906-1911       Tungsten filament
1907            Autochrome plate
1908            Ammonia
1908            Geiger counter
1908            Interchangeable parts
1908            Oil-well drill bit
1908            Vacuum cleaner
1910            Radio crystal sets
1910            Salvarsan
1910            Washing machine
1910-1939       Electric refrigerator
1912            Color film
1912            Diesel locomotive
1912-1913       Solar thermal engine
1912-1914       Artificial kidney
1912-1915       X-ray crystallography


Date            Invention
1913            Assembly line
1913            Geothermal power
1913            Mammography
1913            Thermal cracking process
1915            Long-distance telephone
1915            Propeller-coordinated machine gun
1915            Pyrex glass
1915            Long-distance radiotelephony
1916-1922       Internal combustion engine
1917            Food freezing
1917            Sonar
1919            Mass spectrograph
1921            Tuberculosis vaccine
1923            Rotary dial telephone
1923            Television
1923 and 1951   Syphilis test
1924            Ultracentrifuge
1925-1930       Differential analyzer
1926            Buna rubber
1926            Rocket
1926            Talking motion pictures
1927            Heat pump
1928            Pap test
1929            Electric clock
1929            Electroencephalogram
1929            Iron lung
1930's          Contact lenses
1930's          Vending machine slug rejector
1930            Refrigerant gas
1930            Typhus vaccine
1930-1935       FM radio
1931            Cyclotron
1931            Electron microscope
1931            Neoprene
1932            Fuel cell
1932-1935       Antibacterial drugs


Date            Invention
1933-1954       Freeze-drying
1934            Bathysphere
1935            Nylon
1935            Radar
1935            Richter scale
1936            Fluorescent lighting
1937            Yellow fever vaccine
1938            Polystyrene
1938            Teflon
1938            Xerography
1940's          Carbon dating
1940            Color television
1940            Penicillin
1940-1955       Microwave cooking
1941            Polyester
1941            Touch-tone telephone
1941            Turbojet
1942            Infrared photography
1942-1950       Orlon
1943            Aqualung
1943            Colossus computer
1943            Nuclear reactor
1943-1946       ENIAC computer
1944            Mark I calculator
1944            V-2 rocket
1945            Atomic bomb
1945            Tupperware
1946            Cloud seeding
1946            Synchrocyclotron
1947            Holography
1948            Atomic clock
1948            Broadcaster guitar
1948            Instant photography
1948-1960       Bathyscaphe
1949            BINAC computer
1949            Community antenna television


Date            Invention
1950            Cyclamate
1950-1964       In vitro plant culture
1951            Breeder reactor
1951            UNIVAC computer
1951-1952       Hydrogen bomb
1952            Amniocentesis
1952            Hearing aid
1952            Polio vaccine (Salk)
1952            Reserpine
1952            Steelmaking process
1952-1956       Field ion microscope
1953            Artificial hormone
1953            Heart-lung machine
1953            Polyethylene
1953            Synthetic amino acid
1953            Transistor
1953-1959       Hovercraft
mid-1950's      Synthetic RNA
1954            Photovoltaic cell
1955            Radio interferometer
1955-1957       FORTRAN programming language
1956            Birth control pill
1957            Artificial satellite
1957            Nuclear power plant
1957            Polio vaccine (Sabin)
1957            Transistor radio
1957            Velcro
1957-1972       Pacemaker
1958            Ultrasound
1959            Atomic-powered ship
1959            COBOL computer language
1959            IBM Model 1401 computer
1959            X-ray image intensifier
1960's          Rice and wheat strains
1960's          Virtual machine
1960            Laser


Date            Invention
1960            Memory metal
1960            Telephone switching
1960            Weather satellite
1961            SAINT
1962            Communications satellite
1962            Laser eye surgery
1962            Robot (industrial)
1963            Cassette recording
1964            Bullet train
1964            Electronic synthesizer
1964-1965       BASIC programming language
1966            Tidal power plant
1967            Coronary artery bypass surgery
1967            Dolby noise reduction
1967            Neutrino detector
1967            Synthetic DNA
1969            Bubble memory
1969            The Internet
1969-1983       Optical disk
1970            Floppy disk
1970            Videocassette recorder
1970-1980       Virtual reality
1972            CAT scanner
1972            Pocket calculator
1975-1979       Laser-diode recording process
1975-1990       Fax machine
1976            Supercomputer
1976            Supersonic passenger plane
1976-1988       Stealth aircraft
1977            Apple II computer
1977            Fiber-optics
1977-1985       Cruise missile
1978            Cell phone
1978            Compressed-air-accumulating power plant
1978            Nuclear magnetic resonance
1978-1981       Scanning tunneling microscope


Date            Invention
1979            Artificial blood
1979            Walkman cassette player
1980's          CAD/CAM
1981            Personal computer
1982            Abortion pill
1982            Artificial heart
1982            Genetically engineered insulin
1982            Robot (household)
1983            Artificial chromosome
1983            Aspartame
1983            Compact disc
1983            Hard disk
1983            Laser vaporization
1985            Genetic "fingerprinting"
1985            Tevatron accelerator
1997            Cloning
2000            Gas-electric car


Topics by Category

Agriculture
Artificial insemination, Cloning, Cloud seeding, In vitro plant culture, Rice and wheat strains

Astronomy
Artificial satellite, Communications satellite, Neutrino detector, Radio interferometer, Weather satellite

Aviation and space
Airplane, Artificial satellite, Communications satellite, Dirigible, Radio interferometer, Rocket, Stealth aircraft, Turbojet, V-2 rocket, Weather satellite

Biology
Artificial chromosome, Artificial insemination, Cloning, Genetic "fingerprinting", In vitro plant culture, Synthetic amino acid, Synthetic DNA, Synthetic RNA, Ultracentrifuge

Chemistry
Ammonia, Fuel cell, Refrigerant gas, Silicones, Thermal cracking process, Ultracentrifuge, Ultramicroscope, Vat dye, X-ray crystallography

Communications
Cassette recording, Cell phone, Color television, Communications satellite, Community antenna television, Dolby noise reduction, Electronic synthesizer, Fax machine, Fiber-optics, FM radio, Hearing aid, Laser-diode recording process, Long-distance radiotelephony, Long-distance telephone, Radar, Radio, Radio crystal sets, Rotary dial telephone, Sonar, Talking motion pictures, Telephone switching, Television, Touch-tone telephone, Transistor radio, Vacuum tube, Videocassette recorder, Xerography

Computer science
Apple II computer, BASIC programming language, BINAC computer, Bubble memory, COBOL computer language, Colossus computer, Computer chips, Differential analyzer, ENIAC computer, Floppy disk, FORTRAN programming language, Hard disk, IBM Model 1401 computer, Internet, Mark I calculator, Optical disk, Personal computer, Pocket calculator, SAINT, Supercomputer, UNIVAC computer, Virtual machine, Virtual reality

Consumer products
Apple II computer, Aspartame, Birth control pill, Broadcaster guitar, Brownie camera, Cassette recording, Cell phone, Color film, Color television, Compact disc, Cyclamate, Disposable razor, Electric refrigerator, FM radio, Gas-electric car, Hearing aid, Instant photography, Internet, Nylon, Orlon, Personal computer, Pocket calculator, Polyester, Pyrex glass, Radio, Rotary dial telephone, Teflon, Television, Touch-tone telephone, Transistor radio, Tupperware, Vacuum cleaner, Velcro, Videocassette recorder, Walkman cassette player, Washing machine

Drugs and vaccines
Abortion pill, Antibacterial drugs, Artificial hormone, Birth control pill, Genetically engineered insulin, Penicillin, Polio vaccine (Sabin), Polio vaccine (Salk), Reserpine, Salvarsan, Tuberculosis vaccine, Typhus vaccine, Yellow fever vaccine

Earth science
Aqualung, Bathyscaphe, Bathysphere, Cloud seeding, Richter scale, X-ray crystallography

Electronics
Cassette recording, Cell phone, Color television, Communications satellite, Compact disc, Dolby noise reduction, Electronic synthesizer, Fax machine, Fiber-optics, FM radio, Hearing aid, Laser-diode recording process, Long-distance radiotelephony, Long-distance telephone, Radar, Radio, Radio crystal sets, Rotary dial telephone, Sonar, Telephone switching, Television, Touch-tone telephone, Transistor, Transistor radio, Vacuum tube, Videocassette recorder, Walkman cassette player, Xerography

Energy
Alkaline storage battery, Breeder reactor, Compressed-air-accumulating power plant, Fluorescent lighting, Fuel cell, Gas-electric car, Geothermal power, Heat pump, Nuclear power plant, Nuclear reactor, Oil-well drill bit, Photoelectric cell, Photovoltaic cell, Solar thermal engine, Tidal power plant, Vacuum tube

Engineering
Airplane, Assembly line, Bullet train, CAD/CAM, Differential analyzer, Dirigible, ENIAC computer, Gas-electric car, Internal combustion engine, Oil-well drill bit, Robot (household), Robot (industrial), Steelmaking process, Tidal power plant, Vacuum cleaner, Washing machine

Exploration
Aqualung, Bathyscaphe, Bathysphere, Carbon dating, Neutrino detector, Radar, Radio interferometer, Sonar

Food science
Aspartame, Cyclamate, Electric refrigerator, Food freezing, Freeze-drying, Genetically engineered insulin, In vitro plant culture, Microwave cooking, Polystyrene, Refrigerant gas, Rice and wheat strains, Teflon, Tupperware

Genetic engineering
Amniocentesis, Artificial chromosome, Artificial insemination, Cloning, Genetic "fingerprinting", Genetically engineered insulin, In vitro plant culture, Rice and wheat strains, Synthetic amino acid, Synthetic DNA, Synthetic RNA

Home products
Cell phone, Color television, Community antenna television, Disposable razor, Electric refrigerator, Fluorescent lighting, FM radio, Microwave cooking, Radio, Refrigerant gas, Robot (household), Rotary dial telephone, Television, Touch-tone telephone, Transistor radio, Tungsten filament, Tupperware, Vacuum cleaner, Videocassette recorder, Washing machine

Manufacturing
Assembly line, CAD/CAM, Interchangeable parts, Memory metal, Polystyrene, Steelmaking process

Materials
Buna rubber, Contact lenses, Disposable razor, Laminated glass, Memory metal, Neoprene, Nylon, Orlon, Plastic, Polyester, Polyethylene, Polystyrene, Pyrex glass, Silicones, Steelmaking process, Teflon, Tungsten filament, Velcro

Measurement and detection
Amniocentesis, Atomic clock, Carbon dating, CAT scanner, Cyclotron, Electric clock, Electrocardiogram, Electroencephalogram, Electron microscope, Geiger counter, Gyrocompass, Mass spectrograph, Neutrino detector, Radar, Radio interferometer, Richter scale, Scanning tunneling microscope, Sonar, Synchrocyclotron, Tevatron accelerator, Ultracentrifuge, Ultramicroscope, Vending machine slug rejector, X-ray crystallography

Medical procedures
Amniocentesis, Blood transfusion, CAT scanner, Cloning, Coronary artery bypass surgery, Electrocardiogram, Electroencephalogram, Heart-lung machine, Iron lung, Laser eye surgery, Laser vaporization, Mammography, Nuclear magnetic resonance, Pap test, Syphilis test, Ultrasound, X-ray image intensifier

Medicine
Abortion pill, Amniocentesis, Antibacterial drugs, Artificial blood, Artificial heart, Artificial hormone, Artificial kidney, Birth control pill, Blood transfusion, CAT scanner, Contact lenses, Coronary artery bypass surgery, Electrocardiogram, Electroencephalogram, Genetically engineered insulin, Hearing aid, Heart-lung machine, Iron lung, Laser eye surgery, Laser vaporization, Mammography, Nuclear magnetic resonance, Pacemaker, Pap test, Penicillin, Polio vaccine (Sabin), Polio vaccine (Salk), Reserpine, Salvarsan, Syphilis test, Tuberculosis vaccine, Typhus vaccine, Ultrasound, X-ray image intensifier, Yellow fever vaccine

Music
Broadcaster guitar, Cassette recording, Dolby noise reduction, Electronic synthesizer, FM radio, Radio, Transistor radio

Photography
Autochrome plate, Brownie camera, Color film, Electrocardiogram, Electron microscope, Fax machine, Holography, Infrared photography, Instant photography, Mammography, Mass spectrograph, Optical disk, Talking motion pictures, Weather satellite, Xerography, X-ray crystallography

Physics
Atomic bomb, Cyclotron, Electron microscope, Field ion microscope, Geiger counter, Holography, Hydrogen bomb, Laser, Mass spectrograph, Scanning tunneling microscope, Synchrocyclotron, Tevatron accelerator, X-ray crystallography

Synthetics
Artificial blood, Artificial chromosome, Artificial heart, Artificial hormone, Artificial insemination, Artificial kidney, Artificial satellite, Aspartame, Buna rubber, Cyclamate, Electronic synthesizer, Genetically engineered insulin, Neoprene, Synthetic amino acid, Synthetic DNA, Synthetic RNA, Vat dye

Transportation
Airplane, Atomic-powered ship, Bullet train, Diesel locomotive, Dirigible, Gas-electric car, Gyrocompass, Hovercraft, Internal combustion engine, Supersonic passenger plane, Turbojet

Weapons technology
Airplane, Atomic bomb, Cruise missile, Dirigible, Hydrogen bomb, Propeller-coordinated machine gun, Radar, Rocket, Sonar, Stealth aircraft, V-2 rocket


Index

Abbe, Ernst, 678
ABC. See American Broadcasting Company
Abel, John Jacob, 50, 58, 60
Abortion pill, 1-5
Adams, Ansel, 430
Adams, Thomas, 850
Advanced Research Projects Agency, 446-447
AHD. See Audio high density disc
Aiken, Howard H., 187, 417, 490, 828
Airplane, 6-10
Aldrin, Edwin, 8
Alferov, Zhores I., 320-321
Alkaline storage battery, 11-15
Ambrose, James, 167
American Broadcasting Company, 215
American Telephone and Telegraph Company, 741
Amery, Julian, 714
Amino acid, synthetic, 724-728
Ammonia, 16-19; and atomic clock, 81-82; as a refrigerant, 290-291, 345, 631, 746
Amniocentesis, 20-23
Anable, Gloria Hollister, 100
Anschütz-Kaempfe, Hermann, 382
Antibacterial drugs, 24-27
Antibiotics, 24-27, 47, 813; penicillin, 553-557, 676, 738
Apple II computer, 28-32
Appliances. See Electric clock; Electric refrigerator; Microwave cooking; Refrigerant gas; Robot (household); Vacuum cleaner; Washing machine
Aqualung, 33-37
Archaeology, 158-162
Archimedes, 687
Armstrong, Edwin H., 339
Armstrong, Neil, 8
Arnold, Harold D., 477
Arnold, Henry Harley, 807
ARPAnet, 447-448
Arsonval, Jacques Arsène d’, 351
Arteries and laser vaporization, 472-476
Artificial blood, 38-40
Artificial chromosome, 41-44
Artificial heart, 45-49
Artificial hormone, 50-53
Artificial insemination, 54-57
Artificial intelligence, 668, 671, 864
Artificial kidney, 58-62
Artificial satellite, 63-66
Artificial sweeteners, 67-70; Aspartame, 67-70; cyclamates, 248-251
ASCC. See Automatic Sequence Controlled Calculator
Aspartame, 67-70
Assembly line, 71-75, 197, 434, 436, 439
Aston, Francis William, 494, 496
Astronauts, 749, 848
AT&T. See American Telephone and Telegraph Company
Atanasoff, John Vincent, 312
Atomic bomb, 76-79, 84, 118-119, 255, 412, 414, 521, 525, 697, 721
Atomic clock, 80-83
Atomic Energy Commission, 119, 521, 523
Atomic force microscope, 681
Atomic mass, 494-497
Atomic-powered ship, 84, 86-87
Audiffren, Marcel, 289
Audio high density disc, 220
Audrieth, Ludwig Frederick, 67
Autochrome plate, 88-91
Automatic Sequence Controlled Calculator, 187
Automobiles; and assembly lines, 71, 75; and interchangeable parts, 434-441; and internal combustion engine, 442-445
Avery, Oswald T., 733
Aviation. See Airplane; Dirigible; Rockets; Stealth aircraft; Supersonic passenger plane; Turbojet
Babbage, Charles, 417
Backus, John, 347


Bacon, Francis Thomas, 355, 358
Baekeland, Leo Hendrik, 571
Baeyer, Adolf von, 571
Bahcall, John Norris, 511
Bain, Alexander, 316
Baker, William Oliver, 172, 174
Banting, Frederick G., 375
Baran, Paul, 446
Bardeen, John, 782, 786, 789
Barnay, Antoine, 663
Barton, Otis, 95, 100
BASIC computer language, 29-30, 92-94, 559
Bathyscaphe, 95-99
Bathysphere, 100-103
Batteries, 11, 227; alkaline storage, 11-15; and electric cars, 360, 363; and fuel cells, 356; and hearing aids, 390, 392; and pacemakers, 547; silicon solar, 569; and transistor radios, 780, 875, 878
Battery jars, 454, 607
Baulieu, Étienne-Émile, 1-2
Bavolek, Cecelia, 394
Bazooka, 659
BCS theory, 789
Beams, Jesse W., 815
Becquerel, Alexandre-Edmond, 562
Becquerel, Antoine-Henri, 365
Beebe, William, 95, 100
Bélanger, Alain, 1
Bell, Alexander Graham, 320-322, 390, 482-483, 663-665
Bell Telephone Laboratories, 101, 138-140, 172-173, 204-205, 217, 229-230, 323, 390-391, 482, 567, 614, 625, 678, 744, 752, 774-775, 778-779, 786, 829, 840, 861, 863, 876
Belzel, George, 558
Benedictus, Edouard, 454
Bennett, Frederick, 434
Bennett, W. R., 217
Berger, Hans, 298
Bergeron, Tor, 183
Berliner, Emil, 279
Berthelot, Marcellin Pierre, 597
Bessemer, Henry, 701, 704
Bessemer converter, 701-702, 704
Best, Charles H., 375
Bethe, Hans, 412, 720
Bevis, Douglas, 20
Billiard balls, 572-573
BINAC. See Binary Automatic Computer
Binary Automatic Computer, 104-107, 315, 330, 348
Binnig, Gerd, 678, 680
Birdseye, Clarence, 343
Birth control pill, 108-112
Bissell, Melville R., 832
Blodgett, Katherine Ann, 454
Blood plasma, 38
Blood transfusion, 113-117
Bobeck, Andrew H., 138-139
Bohn, René, 842
Bohr, Niels, 76, 520, 695
Bolton, Elmer Keiser, 507, 529
Booth, Andrew D., 330
Booth, H. Cecil, 832, 835
Borlaug, Norman E., 638, 643
Borsini, Fred, 151
Bothe, Walter, 367
Bragg, Lawrence, 896, 898
Bragg, William Henry, 896, 898
Brain, and nuclear magnetic resonance, 516, 519
Brattain, Walter H., 782, 786, 789
Braun, Wernher von, 63, 871
Bravais, Auguste, 896
Breast cancer, 486, 489
Breeder reactor, 118-121
Broadcaster guitar, 122-129
Broadcasting. See FM radio; Radio; Radio crystal sets; Television; Transistor radio
Broglie, Louis de, 302, 678
Brooks, Fred P., 866
Brownell, Frank A., 130
Brownie camera, 130-137
Bubble memory, 138-141
Buehler, William, 498
Bullet train, 142-145
Buna rubber, 146-150
Burks, Arthur Walter, 312
Burton, William M., 765
Busch, Adolphus, 259
Busch, Hans, 302, 679
Bush, Vannevar, 262, 264
CAD. See Computer-Aided Design
CAD/CAM, 151-157
Calculators; desktop, 232; digital, 490-493; electromechanical, 313; mechanical, 104; pocket, 576-580; punched-card, 104
California Institute of Technology, 646, 731, 782
Callus tissue, 421
Calmette, Albert, 791
Cameras; Brownie, 130-137; and film, 88-91, 192-195; and infrared film, 426; instant, 430-433; in space, 887-889; video, 165, 859; and virtual reality, 867; and X rays, 901-904. See also Photography
Campbell, Charles J., 468
Campbell, Keith H. S., 177
Cancer, 4, 324, 376; and cyclamates, 69, 249-250; and infrared photography, 428; and mammography, 486-489; therapy, 40; uterine, 549-552
Capek, Karel, 650, 654
Carbohydrates, 374
Carbon dating, 158-162
Carlson, Chester F., 891, 893
Carnot, Sadi, 398
Carothers, Wallace H., 507, 510, 529, 574, 589
Carrel, Alexis, 113
Carty, John J., 477, 484
Cary, Frank, 558
Cascariolo, Vincenzo, 335
Cassette recording, 163-166, 221, 223, 279, 538; and Dolby noise reduction, 282; and microcomputers, 386; and Sony Walkman, 788; and transistors, 784
CAT scanner, 167-171
Cathode-ray tubes, 170, 303, 315, 326, 564, 611; and television, 757-758, 760, 837
Caton, Richard, 298
CBS. See Columbia Broadcasting System
CD. See Compact disc
CDC. See Control Data Corporation
Cell phone, 172-176
Celluloid, 454, 571-573
Centrifuge, 815-818


Cerf, Vinton G., 446, 448
Chadwick, James, 367
Chain, Ernst Boris, 553
Chamberlain, W. Edward, 901
Chance, Ronald E., 374
Chandler, Robert F., Jr., 638
Chang, Min-Chueh, 108
Chanute, Octave, 6
Chapin, Daryl M., 567
Chardonnet, Hilaire de, 589
Chemotherapy, 24, 40, 676
Cho, Fujio, 360
Christian, Charlie, 122, 126
Chromosomes. See Artificial chromosome
Clark, Barney, 45
Clark, Dugold, 257
Clarke, Arthur C., 63, 204
Cloning, 177-182
Cloud seeding, 183-186
Coal tars, 593, 843
COBOL computer language, 92, 187-191, 350
Cockerell, Christopher Sydney, 407
Cohen, Robert Waley, 442
Coleman, William T., Jr., 714
Collins, Arnold Miller, 507
Color film, 192-195
Color photography, 88-91
Color television, 196-199
Colossus computer, 200-203
Columbia Broadcasting System, 196, 215, 830
Communications satellite, 204-207
Community antenna television, 208-216
Compaan, Klaas, 537
Compact disc, 217-224
Compressed-air-accumulating power plant, 225-228
Computer-Aided Design (CAD), 151-157
Computer chips, 140, 229-234
Computer languages, 154; ALGOL, 92-93; BASIC, 29-30, 92-94, 559; COBOL, 92, 187-191, 350; FORTRAN, 92-93, 189, 347-350
Computerized axial tomography, 167-171
Computers; and information storage, 104-107, 138-141, 165, 330-334, 386-389, 537-540; and Internet, 446-450. See also Apple II computer; Personal computers
Concorde, 714-719
Condamine, Charles de la, 146
Contact lenses, 235-239
Conti, Piero Ginori, 378
Contraception, 1-5, 108-112
Control Data Corporation, 709, 711
Cooking; microwave, 502-506; and Pyrex glass, 607, 609; and Teflon coating, 748-749
Coolidge, William David, 795
Cormack, Allan M., 167
Corning Glass Works, 323, 606-610
Coronary artery bypass surgery, 240-243
Cource, Geoffroy de, 714, 716
Cousins, Morison, 799
Cousteau, Jacques-Yves, 33, 35, 102
Cray, Seymour R., 709, 711-712
Crick, Francis, 41, 177, 729, 733
Crile, George Washington, 113
Critical mass, 77, 119, 521
Crookes, William, 365
CRT. See Cathode-ray tubes
Cruise missile, 244-247
Curie, Jacques, 692
Curie, Marie, 823
Curie, Pierre, 692, 695
Curtis, William C., 611-612
Cyclamate, 248-251
Cyclotron, 252-256
DAD. See Digital audio disc
Daimler, Gottlieb, 257
Dale, Henry Hallett, 50
Damadian, Raymond, 516
Datamath, 579
Davis, Raymond, Jr., 511
Deep-sea diving, 95-103
De Forest, Lee, 477-478, 480, 483, 837-838
Dekker, Wisse, 217
Deoxyribonucleic acid; characteristics, 733. See also DNA
Depp, Wallace Andrew, 751
Desert Storm, 699

Devol, George C., Jr., 654
DeVries, William Castle, 45-46
Diabetes, 51-52, 374-377
Dichlorodifluoromethane, 630-633
Diesel, Rudolf, 257-258
Diesel locomotive, 257-261
Differential analyzer, 262-266
Digital audio disc, 219
Dirigible, 267-271
Disposable razor, 272-278
Diving. See Aqualung
DNA, 41, 177; and artificial chromosomes, 41-44; and cloning, 177; and genetic “fingerprinting,” 370-373; recombinant, 41; synthetic, 729-732; and X-ray crystallography, 900. See also Deoxyribonucleic acid; Synthetic DNA
Dolby, Ray Milton, 279, 281
Dolby noise reduction, 279-283
Domagk, Gerhard, 24
Donald, Ian T., 823-824
Dornberger, Walter Robert, 871
Drew, Charles, 113, 115
Drinker, Philip, 451
Dulbecco, Renato, 581
Dunwoody, H. H., 621
Du Pont. See Du Pont de Nemours and Company
Du Pont de Nemours and Company, 77, 149, 248, 508-509, 529, 531, 542, 589-590, 746-748, 799, 803
Durfee, Benjamin M., 490
Durham, Eddie, 126
Durrer, Robert, 701
Dyes, 593; and acrylics, 543; and infrared radiation, 425, 428; and microorganism staining, 24-25; and photographic film, 192-194; poison, 674; and polyesters, 591; vat, 842-845
Earthquakes, measuring of, 645-649
Eastman, George, 130, 135
Eckert, John Presper, 104, 312, 828
Edison, Thomas Alva, 11, 335, 479, 616, 744, 839; and batteries, 12-14; and Edison effect, 837; and electric light, 795, 832; and fluoroscope, 901; and phonograph, 217, 279
Edison effect, 837-838
Edlefsen, Niels, 252
EDVAC. See Electronic Discrete Variable Automatic Computer
Effler, Donald B., 240
Ehrlich, Paul, 24, 673
Einstein, Albert, 82, 472, 497, 563, 695, 721
Einthoven, Willem, 293, 295
Eisenhower, Dwight D., 84, 415
Elastomers, 148, 507-510, 598
Electric clock, 284-288
Electric refrigerator, 289-292
Electricity, generation of, 79, 378, 569
Electrocardiogram, 293-297
Electroencephalogram, 298-301
Electrolyte detector, 479
Electron microscope, 302-306, 403, 902
Electron theory, 562-565
Electronic Discrete Variable Automatic Computer, 105-107, 314, 829
Electronic Numerical Integrator and Calculator, 105-106, 312-315, 347, 668, 829
Electronic synthesizer, 307-311
Eli Lilly Research Laboratories, 374-377
Elliott, Tom, 360
Elmquist, Rune, 545
Elster, Julius, 562, 564
Engelberger, Joseph F., 654
ENIAC. See Electronic Numerical Integrator and Calculator
Ericsson, John, 687
Espinosa, Chris, 28
Estridge, Philip D., 386
Evans, Oliver, 71
Ewan, Harold Irving, 625
“Excalibur,” 416
Eyeglasses; and contact lenses, 235-239; frames, 498, 500; and hearing aids, 391, 787
Fabrics; and dyes, 842-845; orlon, 541-544; polyester, 589-592; and washing machines, 883-886
Fahlberg, Constantin, 67
Fasteners, velcro, 846-849
Favaloro, Rene, 240
Fax machine, 316-319


FCC. See Federal Communications Commission
Federal Communications Commission; and cell phones, 173, 175; and communication satellites, 204; and FM radio, 341; and microwave cooking, 505; and television, 196-197, 208-210
Fefrenkiel, Richard H., 172
Feinbloom, William, 235, 237
Fender, Leo, 122
Ferguson, Charles Wesley, 158
Fermi, Enrico, 76, 84, 412, 520, 525
Fessenden, Reginald, 13, 477-480, 616-618
Fiber-optics, 320-324
Fick, Adolf Eugen, 235
Field ion microscope, 325-329, 679
FIM. See Field ion microscope
Finlay, Carlos J., 905
Fischer, Rudolf, 192
Fisher, Alva J., 883
Fleming, Alexander, 553, 555
Fleming, John Ambrose, 478, 621, 837, 839
Flick, J. B., 394
Floppy disk, 330-334
Florey, Baron, 553
Flosdorf, Earl W., 351
Flowers, Thomas H., 200
FLOW-MATIC, 187
Fluorescent lighting, 335-338
FM radio, 339-342
Fokker, Anthony Herman Gerard, 601, 603
Food; artificial sweeteners, 67-70, 248-251; freeze-drying, 351-354; freezing, 343-346; microwave cooking, 502-506; packaging, 598-599; and refrigeration, 289-292, 343-346, 630, 632; rice and wheat, 638-644; storage, 799-806
Food and Drug Administration, 45, 111, 375
Ford, Henry, 11, 71, 74, 257, 434
Forel, François-Alphonse, 645
Forest de Bélidor, Bernard, 770
FORTRAN programming language, 92-93, 189, 347-350
Foucault, Jean-Bernard-Léon, 382


Fox Network, 215
Francis, Thomas, Jr., 585
Freeze-drying, 351-354
Frerichs, Friedrick von, 673
Frisch, Otto Robert, 76, 520
Fuchs, Klaus Emil Julius, 412
Fuel cell, 355-359
Fuller, Calvin S., 567
Fulton, Robert, 335
Gabor, Dennis, 402, 404
Gagarin, Yuri A., 874
Gagnan, Émile, 33, 102
Gamow, George, 325, 412, 414, 720-721
Garcia, Celso-Ramon, 108
Garros, Roland, 601
Garwin, Richard L., 414
Gas-electric car, 360-364
Gates, Bill, 92, 94
Gaud, William S., 638
Gautheret, Roger, 421
GE. See General Electric Company
Geiger, Hans, 365, 367
Geiger counter, 365-369
Geissler, Heinrich, 335
Geitel, Hans Friedrich, 562, 564
General Electric Company, 101, 183-185, 219, 264, 290, 341, 356, 384, 440, 455, 477, 617, 683, 685, 795-796, 809, 840, 863, 893, 902
Genetic “fingerprinting,” 370-373
Genetically engineered insulin, 374-377
Geothermal power, 378-381
Gerhardt, Charles, 597
Gershon-Cohen, Jacob, 486
Gibbon, John H., Jr., 394
Gibbon, Mary Hopkinson, 394
Gillette, George, 272
Gillette, King Camp, 272, 276
Glass; coloring of, 819-820; fibers, 322-323, 591; food containers, 800; gold-ruby, 819; high-purity, 322; laminated, 454-458; Pyrex, 606-610
Glass fiber. See Fiber-optics
Goddard, Robert H., 63, 65, 658, 660, 662
Goldmark, Peter Carl, 196
Goldstine, Herman Heine, 312, 347
Goodman, Benny, 126
Goodyear, Charles, 146-147, 335
Gosslau, Ing Fritz, 871
Gould, R. Gordon, 472
Goulian, Mehran, 729
Graf Zeppelin, 271
Gray, Elisha, 663
Greaves, Ronald I. N., 351
Green Revolution, 638-639, 641-644
Grove, William Robert, 355
Groves, Leslie R., 76, 747
Grunberg-Manago, Marianne, 733
Guérin, Camille, 791
Guitar, electric, 122-129
Gutenberg, Beno, 645
Gyrocompass, 382-385
Haas, Georg, 59
Haber, Fritz, 16-19
Haberlandt, Gottlieb, 421
Hahn, Otto, 84, 520
Haldane, John Burdon Sanderson, 724
Haldane, T. G. N., 398
Hall, Charles, 335
Halliday, Don, 151
Hallwachs, Wilhelm, 562
Hamilton, Francis E., 490
Hammond, John, 126
Hanratty, Patrick, 151
Hard disk, 386-389
Hata, Sahachiro, 673
Haüy, René-Just, 896
Hayato, Ikeda, 142
Hayes, Arthur H., Jr., 67
Hazen, Harold L., 262
Health Company, 650
Hearing aid, 390-393
Heart; and pacemakers, 545-548. See also Artificial heart
Heart-lung machine, 394-397
Heat pump, 398-401
Heilborn, Jacob, 272
Henne, Albert, 630, 746
Hero (Greek mathematician), 851
Hero 1 robot, 650-653
Herschel, William, 425, 427
Hertz, Heinrich, 502, 621
Heumann, Karl, 842
Hewitt, Peter Cooper, 335
Hindenburg, 271
Hitler, Adolf, 414, 509, 807, 871
Hoff, Marcian Edward, Jr., 229
Hoffman, Frederick de, 413
Hoffmann, Erich, 676
Hofmann, August Wilhelm von, 593, 842, 844
Hollerith, Herman, 417
Holography, 402-406, 537
Homolka, Benno, 192
Honda Insight, 360
Hoover, Charles Wilson, Jr., 751
Hoover, William Henry, 832
Hopper, Grace Murray, 187-188
Hormones. See Artificial hormone
Hounsfield, Godfrey Newbold, 167, 169
House appliances. See Appliances
Houtz, Ray C., 541
Hovercraft, 407-411
Howe, Elias, 335
Hughes, Howard R., 533, 535
Hulst, Hendrik Christoffel van de, 625
Humphreys, Robert E., 765
Humulin, 374, 377
Hyatt, John Wesley, 571, 573
Hyde, James Franklin, 683
Hydrofoil, 665
Hydrogen bomb, 412-416
IBM. See International Business Machines
IBM Model 1401 computer, 417-420
Ibuka, Masaru, 778, 786, 875, 879
ICBM. See Intercontinental ballistic missiles
Idaho National Engineering Laboratory, 119, 521
Immelmann, Max, 601
Immunology. See Polio vaccine; Tuberculosis vaccine; Typhus vaccine; Yellow fever vaccine
In vitro plant culture, 108, 421-424
INEL. See Idaho National Engineering Laboratory
Infantile paralysis. See Polio
Infrared photography, 425-429
Instant photography, 430-433
Insulin, genetically engineered, 374-377


Intel Corporation, 153, 232, 234, 559
Interchangeable parts, 434-441
Intercontinental ballistic missiles, 63-64
Internal combustion engine, 442-445
International Business Machines, 31, 140, 187, 189, 313, 330-331, 333, 347-350, 386, 388, 395, 420, 490-493, 680-681, 830, 861-865; Model 1401 computer, 417-420; personal computers, 558-561
Internet, 446-450
Iron lung, 451-453
Isotopes, and atomic mass, 494
Ivanov, Ilya Ivanovich, 54
Ives, Frederick E., 90
Jansky, Karl, 614, 625
Jarvik, Robert, 45
Jarvik-7, 45, 49
The Jazz Singer, 742
Jeffreys, Alec, 370
Jenkins, Charles F., 756
Jet engines; and hovercraft, 408, 410; impulse, 871; and missiles, 244; supersonic, 714-719; turbo, 807-810
Jobs, Steven, 28, 30
Johnson, Irving S., 374
Johnson, Lyndon B., 206
Johnson, Reynold B., 330
Joliot, Frédéric, 76
Jolson, Al, 742
Jones, Amanda Theodosia, 343, 345
Joyce, John, 272
Judson, Walter E., 634
Judson, Whitcomb L., 847
Kahn, Reuben Leon, 737
Kamm, Oliver, 50
Kao, Charles K., 320
Kelvin, Lord, 398
Kemeny, John G., 92
Kettering, Charles F., 11, 630
Kidneys, 58, 62, 374; and blood, 39; and cyclamate, 248; problems, 634
Kilby, Jack St. Clair, 151, 229, 231, 576, 578
Kipping, Frederic Stanley, 683
Kitchenware. See Polystyrene; Pyrex glass; Teflon; Tupperware


Knoll, Max, 302
Kober, Theodor, 267
Koch, Robert, 791
Kolff, Willem Johan, 58
Kornberg, Arthur, 729
Kornei, Otto, 891
Korolev, Sergei P., 63-64
Kramer, Piet, 537
Krueger, Myron W., 866
Kruiff, George T. de, 537
Kunitsky, R. W., 54
Kurtz, Thomas E., 92
Lake, Clair D., 490
Laminated glass, 454-458
Land, Edwin Herbert, 430, 432
Langévin, Paul, 692, 695, 823
Langmuir, Irving, 183
Laser, 459-463
Laser-diode recording process, 464-467
Laser eye surgery, 468-472
Laser vaporization, 472-476
Laservision, 219, 465
Laue, Max von, 896
Lauterbur, Paul C., 516
Lawrence, Ernest Orlando, 252, 254, 720
Lawrence-Livermore National Laboratory, 416, 671
Leclanché, Georges, 355
Leeuwenhoek, Antoni van, 678
Leith, Emmett, 402
Leland, Henry M., 434, 437
Lengyel, Peter, 733
Lenses; camera, 130, 132-134; electromagnetic, 303; electron, 302-303; and fax machines, 317; and laser diodes, 465; microscope, 678; and optical disks, 539; Pyrex, 609; railroad lantern, 606; scleral, 235; television camera, 887; and xerography, 891-894. See also Contact lenses
Leonardo da Vinci, 235
Leverone, Louis E., 850
Leverone, Nathaniel, 850
ALGOL computer language, 92-93
Libby, Willard Frank, 158, 160
Lidwell, Mark, 545
Lincoln, Abraham, 320, 439
Lindbergh, Charles A., 661
Littleton, Jesse T., 606, 608
Livestock, artificial insemination of, 54-57
Livingston, M. Stanley, 252
Locke, Walter M., 244
Lockheed Corporation, 697
Long-distance radiotelephony, 477-481
Long-distance telephone, 482-485
Loosley, F. A., 701
Lumière, Auguste, 88-89
Lumière, Louis, 88-89
Lynde, Frederick C., 850-851
Lyons, Harold, 80
McCabe, B. C., 378
McCormick, Cyrus Hall, 335
McCormick, Katherine Dexter, 108
McCune, William J., 430
Machine guns, 601-605
McKay, Dean, 558
McKenna, Regis, 28
McKhann, Charles F., III, 451
McMillan, Edwin Mattison, 720
McWhir, J., 177
Magnetron, 504
Maiman, Theodore Harold, 320, 459, 468, 472
Mallory, Joseph, 432
Mammography, 486-489
Manhattan Project, 77-78, 412, 414, 525, 747-748
Mansfield, Peter, 516
Marconi, Guglielmo, 477, 616, 619, 621, 839
Mariano di Jacopo detto Taccola, 770
Mark I calculator, 490-493
Marrison, Warren Alvin, 284, 286
Marsden, Ernest, 367
Mass spectrograph, 494-497
Massachusetts Institute of Technology, 861
Mauchly, John W., 104, 312, 347, 828
Maxwell, James Clerk, 88, 502, 621
Meitner, Lise, 76, 520
Memory metal, 498-501
Mercalli, Giuseppe, 645
Merrill, John P., 61
Merryman, Jerry D., 576, 578
Mestral, Georges de, 846, 848
Metchnikoff, Élie, 673-674
Microprocessors, 94, 229-234, 287, 419, 538
Microscopes; atomic force, 681; electron, 302-306, 403, 902; field ion, 325-329, 679; scanning tunneling, 678-682; ultra-, 819-822
Microvelcro, 847
Microwave cooking, 502-506
Midgley, Thomas, Jr., 444, 630, 746
Miller, Bernard J., 394
Miller, Stanley Lloyd, 724
Millikan, Robert A., 646
Millikan, Robert Andrews, 722
Milunsky, Aubrey, 20
Missiles; cruise, 244-247; guided, 385; intercontinental, 63-64; Sidewinder, 698; Snark, 106. See also Rockets; V-2 rocket
Mixter, Samuel Jason, 113
Mobile Telephone Service, 172
Model T, 14, 71, 75, 439-440
Monitor, 687
Monocot plants, 422
Monomers, 148, 541, 590-591
Moog, Robert A., 307, 309
Moon; distance to, 462; and lasers, 459; and radar, 614; and radio signals, 614
Morel, Georges Michel, 421-423
Morganthaler, Ottmar, 335
Morita, Akio, 217, 222, 778, 786, 875
Morse, Samuel F. B., 320, 335
Morse code, 477, 616, 621
Motion picture sound, 741-745
Mouchout, Augustin, 687
Movies. See Talking motion pictures
Müller, Erwin Wilhelm, 325, 327, 679
Murray, Andrew W., 41
Murrow, Edward R., 830
Naito, Ryoichi, 38
National Broadcasting Company, 198, 215
National Geographic, 665


National Geographic Society, 665
National Radio Astronomy Observatory, 628
Natta, Giulio, 593
Nautilus, 84, 521
NBC. See National Broadcasting Company
Neoprene, 507-510
Neumann, John von, 92, 104, 312, 347, 710, 828
Neurophysiology, 298, 300
Neutrino detector, 511-515
Newman, Max H. A., 200
Newton, Isaac, 659
Nickerson, William Emery, 272
Nieuwland, Julius Arthur, 507
Nipkow, Paul Gottlieb, 756
Nirenberg, Marshall W., 733
Nitinol, 498-501
Nitrogen, 16
Nobécourt, P., 421
Nobel Prize winners, 174; Chemistry, 16, 18-19, 50, 52, 158, 160, 183, 455, 494, 496, 595, 720, 724, 819, 821-822; Physics, 229, 231, 252, 254-255, 302, 304, 321, 402, 404, 459, 520, 619, 678, 680, 782, 789, 896-898; Physiology or Medicine, 24, 41, 167, 169, 293, 295, 375, 553, 555, 581, 674, 676, 730, 733
Nordwestdeutsche Kraftwerke, 225
Northrop Corporation, 106, 697
Noyce, Robert, 151, 229
NSFnet, 447
Nuclear fission, 76, 84, 118-121, 185, 412, 520-528
Nuclear fusion, 78
Nuclear magnetic resonance, 516-519
Nuclear power plant, 520-524
Nuclear reactor, 118-121, 520-528
Nylon, 510, 529-532, 541, 574, 590; Helance, 591; and velcro, 846-847
Oak Ridge National Laboratory, 77, 525-528
Ochoa, Severo, 733
Ohain, Hans Pabst von, 807
Ohga, Norio, 875
Oil-well drilling, 345, 533-536
Oparin, Aleksandr Ivanovich, 724


Opel, John, 558
Ophthalmology, 468
Oppenheimer, J. Robert, 76, 325
Optical disk, 537-540
Orlon, 541-544
Ottens, Lou F., 537
Otto, Nikolaus, 257
Oxytocin, 50
Pacemaker, 545-548
Painter, William, 272
Paley, William S., 196
Pap test, 549-552
Papanicolaou, George N., 549
Parsons, Charles, 378
Parsons, Ed, 208
Particle accelerators, 252, 256, 720-723, 761-764
Paul, Les, 122
Pauli, Wolfgang, 511
PC. See Personal computers
PCM. See Pulse code modulation
Pearson, Gerald L., 567
Penicillin, 553-557
Peoples, John, 761
Perkin, William Henry, 842, 844
Perrin, Jean, 695
Persian Gulf War, 246, 698-699
Personal computers, 153, 558-561, 864; Apple, 28-32; and floppy disks, 332-333; and hard disks, 389; and Internet, 447, 449
Pfleumer, Fritz, 163
Philibert, Daniel, 1
Philips Corporation, 464, 857
Photocopying. See Xerography
Photoelectric cell, 562-566
Photography; film, 88-91, 192-195, 430-433. See also Cameras
Photovoltaic cell, 567-570
Piccard, Auguste, 36, 95, 97, 103
Piccard, Jacques, 95
Piccard, Jean-Félix, 97
Pickard, Greenleaf W., 621
Pierce, John R., 204
Pincus, Gregory, 108
Planck, Max, 563
Plastic, 571-575; Tupperware, 799-806
Plunkett, Roy J., 746, 748
Pocket calculator, 576-580
Polaroid camera, 170, 430-433
Polio, 451-453
Polio vaccine, 581-588
Polyacrylonitrile, 541-543
Polyester, 589-592
Polyethylene, 593-596
Polystyrene, 597-600
Porter, Steven, 272
Powers, Gary, 245
Pregnancy. See Abortion pill; Amniocentesis; Birth control pill; Ultrasound
Priestley, Joseph, 146
Propeller-coordinated machine gun, 601-605
Protein synthesis, 735
Prout, William, 494
Pulse code modulation, 217-220
Purcell, Edward Mills, 625
Purvis, Merton Brown, 751
Pye, David Randall, 442
Pyrex glass, 606-610
Quadrophonic sound, 221
Quantum theory, 563
Quartz crystals, 81, 284-288
Radar, 229, 265, 314, 391, 504, 611-612, 614-615; and sonar, 693, 824; and bathyscaphe, 96; and laser holography, 405; and stealth aircraft, 697-699
Radio, 616-620; FM, 339-342
Radio Corporation of America, 196-199, 210, 213, 219, 340-341, 464, 537, 618-619, 741, 758-759, 787
Radio crystal sets, 621-624
Radio frequency, 616; and cell phones, 172, 175; and crystal radio, 622; and microwave heating, 505
Radio interferometer, 625-629
Radioactivity, 720, 734; and barium, 76, 520; carbon dating, 158-162; and DNA, 371; and isotopes, 494, 497; measuring, 365-369; and neutrinos, 511-512
Radiotelephony, 477-481
Rainfall, induced, 183-186
RAM. See Random access memory

Random access memory, 140, 387, 559, 861-862, 864
Raytheon Company, 503, 505, 786
Razors, 272-278
RCA. See Radio Corporation of America
Reagan, Ronald, 415
Reber, Grote, 625
Recombinant DNA, 41
Recording; cassettes, 163-166, 538, 784, 788, 875-882; compact discs, 217-224; Dolby noise reduction, 279-283; laser-diodes, 464-467; sound, 741-742; video, 857-860
Reed, Walter, 905
Refrigerant gas, 630-633
Reichenbach, Henry M., 130
Rein, Herbert, 541
Remsen, Ira, 67
Reserpine, 634-637
Ribonucleic acid, 734. See also Synthetic RNA
Ricardo, Harry Ralph, 442
Rice and wheat strains, 638-644
Rice-Wray, Edris, 108
Richter, Charles F., 645-646
Richter scale, 645-649
Rickover, Hyman G., 520
Riffolt, Nils, 661
Ritchie, W. A., 177
Rizzo, Paul, 558
RNA, synthetic, 733-736
Robot, household, 650-653
Robot, industrial, 654-657
Rochow, Eugene G., 683, 685
Rock, John, 108
Rockets; and satellites, 63-66; design, 712; liquid-fuel-propelled, 658-662, 871-874. See also Missiles
Rogers, Howard G., 430
Rohrer, Heinrich, 678, 680
Röntgen, Wilhelm Conrad, 167, 365, 896, 901
Roosevelt, Franklin D., 264, 588, 770-771
Root, Elisha King, 71
Rosen, Charles, 362
Rosing, Boris von, 758
Rossi, Michele Stefano de, 645
Rotary cone drill bit, 533, 536
Rotary dial telephone, 663-667, 751, 774-776
Rotary engine, 362
Roux, Pierre-Paul-Émile, 673
Rubber, synthetic, 146-150, 507-510, 530, 593, 595
Ruska, Ernst, 302, 304, 678, 680
Russell, Archibald, 714
Rutherford, Ernest, 252, 365-368, 455, 494, 564, 720-721, 898
Ryle, Martin, 625
Sabin, Albert Bruce, 581, 583
Saccharin, 248
Sachs, Henry, 272
SAINT, 668-672
Salk, Jonas Edward, 581, 585-586
Salomon, Albert, 486
Salvarsan, 673-674, 676-677
Sanger, Margaret, 108, 110, 112
Sarnoff, David, 196-197, 210, 339-340, 758
Satellite, artificial, 63-66
Satre, Pierre, 714
Saulnier, Raymond, 601
Savannah, 85
Sawyer, Wilbur Augustus, 905
Sayer, Gerry, 807
Scanning tunneling microscope, 678-682
Schaefer, Vincent Joseph, 183
Schaudinn, Fritz, 673
Schawlow, Arthur L., 459
Schlatter, James M., 67
Schmidt, Paul, 871
Scholl, Roland, 842
Schönbein, Christian Friedrich, 571
Schrieffer, J. Robert, 789
SDI. See Strategic Defense Initiative
Selectavision, 219
Semiconductors, 139-140, 218, 229-234, 317, 464-466, 568, 786-787, 892; and calculators, 577, 579; defined, 229, 232, 891
Senning, Ake, 545
Serviss, Garrett P., 659
Seyewetz, Alphonse, 88
Shannon, Claude, 868
Sharp, Walter B., 533, 535
Sharpey-Schafer, Edward Albert, 50


Shaw, Louis, 451
Shaw, Ronald A., 407
Sheep, cloning of, 177-182
Shellac, 572
Shockley, William B., 229, 778, 782, 786, 789
Shroud of Turin, 161
Shugart, Alan, 330, 386
Shuman, Frank, 687
Sidewinder missile, 698
Siedentopf, H. F. W., 819
Siegrist, H., 192
Silicones, 683-686
Simon, Edward, 597
The Singing Fool, 742
Sinjou, Joop, 537
Sinsheimer, Robert L., 729
Sketchpad, 868
Slagle, James R., 668, 671
Sloan, David, 252
Smith, Hugh, 905
Smith, Robert, 427
Smouluchowski, Max von, 819
Snark missile, 106
Snyder, Howard, 883
Sogo, Shinji, 142
Solar energy, 567-568, 687-688, 690
Solar thermal engine, 687-691
Sonar, 692-696; and radar, 823
Sones, F. Mason, 240
Sony Corporation, 165, 218-224, 539, 778, 781, 783-785, 788-789, 875-881
Spaeth, Mary, 459, 461
Spallanzani, Lazzaro, 54
Spangler, James Murray, 832
Spencer, Percy L., 502, 504
Sperry, Elmer Ambrose, 382, 384
Sputnik, 63-66, 446, 874
“Star Wars” (Strategic Defense Initiative), 699
Staudinger, Hermann, 530
Stealth aircraft, 697-700
Steelmaking process, 701-708
Steenstrup, Christian, 289
Stewart, Alice, 823
Stewart, Edward J., 272
Stibitz, George, 828
Stine, Charles M. A., 529
STM. See Scanning tunneling microscope
Stockard, Charles, 549
Stokes, T. L., 394
Storax, 597
Strassmann, Fritz, 76
Strategic Defense Initiative, 416, 699
Strowger, Almon B., 751, 753
Styrene, 148-149, 597-598
Submarines; detection of, 692, 695, 823; navigation, 382-385; nuclear, 98, 521; weapons, 245-246
Sucaryl, 248
Suess, Theodor, 701
Sulfonamides, 24
Sullivan, Eugene G., 606
Sun, 514, 725; energy, 725; and nuclear fusion, 511, 515, 567, 687; and timekeeping, 80
Sun Power Company, 688
Supercomputer, 709-713
Supersonic passenger plane, 714-719
Surgery; and artificial blood, 39; and artificial heart, 46, 48; and blood transfusion, 113-117; and breast cancer, 486-487, 489; cardiac, 499; coronary artery bypass, 240-243; and heart-lung machine, 394-397; kidney-transplant, 61; laser eye, 468-472; laser vaporization, 472-476; transplantation, 61
Sutherland, Ivan, 866, 868
Sveda, Michael, 67, 248
Svedberg, Theodor, 815
Swarts, Frédéric, 630
Swinton, Alan A. Campbell, 756
Sydnes, William L., 558
Synchrocyclotron, 720-723
Synthetic amino acid, 724-728
Synthetic DNA, 729-732
Synthetic RNA, 733-736
Syphilis, 24, 554-556, 673, 676, 744; test, 737-740; treatment of, 673-674, 676-677
Szostak, Jack W., 41
Talking motion pictures, 741-745
Tarlton, Robert J., 208-209
Tassel, James Van, 576
Taylor, Frederick Winslow, 71
Taylor, William C., 606
Tee-Van, John, 100
Teflon, 746-750
Telecommunications Research Establishment, 201
Telegraphy, radio, 616
Telephone; cellular, 172-176; long-distance, 482-485; rotary dial, 663-667, 751, 774-776; touch-tone, 667, 774-777
Telephone switching, 751-755
Television, 756-760
Teller, Edward, 78, 412, 414, 416
Tesla, Nikola, 13, 832
Teutsch, Georges, 1
Tevatron accelerator, 761-764
Texas Instruments, 140, 153, 232-233, 419, 577-579, 787-788
Theiler, Max, 905
Thein, Swee Lay, 370
Thermal cracking process, 765-769
Thermionic valve, 564
Thomson Electron Tubes, 901
Thomson, Joseph John, 494, 496, 563-564, 838
Thornycroft, John Isaac, 407, 409
Thornycroft, Oliver, 442
Tidal power plant, 770-773
Tiros 1, 887-890
Tiselius, Arne, 815
Tokyo Telecommunications Engineering Company, 778, 780, 787, 875. See also Sony Corporation
Tomography, 168, 170
Topografiner, 679
Torpedo boat, 409
Touch-tone telephone, 667, 774-777
Townes, Charles Hard, 459
Townsend, John Sealy Edward, 365
Toyota Prius, 363
Transistor radio, 786-790
Transistors, 172, 229, 232, 390-391, 418-419, 778-788, 875-876; invention of, 840
Traut, Herbert, 549
Tressler, Donald K., 343
Truman, Harry S., 78
Tsiolkovsky, Konstantin, 63, 65
Tuberculosis vaccine, 791-794
Tungsten filament, 795-798

Tuohy, Kevin, 235
Tupper, Earl S., 799, 803
Tupperware, 799-806
Turbojet, 807-810
Turing, Alan Mathison, 104, 200, 668
Turner, Ted, 208, 211
Tuskegee Airmen, 612
Typhus vaccine, 811-814
U-2 spy plane, 245, 432
U-boats. See Submarines
Ulam, Stanislaw, 412, 414
Ultracentrifuge, 815-818
Ultramicroscope, 819-822
Ultrasound, 823-827
Unimate robots, 654-656
UNIVAC. See Universal Automatic Computer
Universal Automatic Computer, 106, 315, 331, 348, 711, 828-831
Upatnieks, Juris, 402
Urey, Harold Clayton, 724
Uterine cancer, 549, 552
V-2 rocket, 65, 244, 659, 662, 871-874
Vaccines. See Polio vaccine; Tuberculosis vaccine; Typhus vaccine; Yellow fever vaccine
Vacuum cleaner, 832-836
Vacuum tubes, 339, 837-841; and computers, 106, 201-202, 313-314; and radar, 391; and radio, 478, 623; and television, 783; thermionic valve, 564; and transistors, 229, 391, 778-780, 786-787, 876. See also Cathode-ray tubes
Vat dye, 842-845
VCR. See Videocassette recorder
Vectograph, 432
Veksler, Vladimir Iosifovich, 720
Velcro, 846-849
Vending machine slug rejector, 850-856
Videocassette recorder, 214, 218, 857-860; and laservision, 465
Videodisc, 219
Vigneaud, Vincent du, 50
Virtual machine, 861-865
Virtual reality, 866-870
Vitaphone, 742
Vogel, Orville A., 638, 643

Volta, Alessandro, 355
Vonnegut, Bernard, 183
Vulcanization of rubber, 146, 149
Wadati, Kiyoo, 645
Waldeyer, Wilhelm von, 673
Walker, William H., 130
Walkman cassette player, 165, 784, 788, 875-882
Waller, Augustus D., 293
Warner Bros., 741-745
Warner, Albert, 741, 744
Warner, Harry, 741, 744
Warner, Jack, 741, 744
Warner, Samuel, 741, 744
Warren, Stafford L., 487
Washing machine, electric, 883-886
Washington, George, 289
Wassermann, August von, 676, 737
Watson, James D., 41, 177, 729, 733
Watson, Thomas A., 482
Watson, Thomas J., 394, 558
Watson, Thomas J., Jr., 386
Watson-Watt, Robert, 611
Weather; and astronomy, 609; cloud seeding, 183-186; and rockets, 712
Weather satellite, 887-890
Wehnelt, Arthur, 837
Wells, H. G., 659
Westinghouse, George, 335
Westinghouse Company, 101, 440, 758-759, 832
Wexler, Harry, 887
Whinfield, John R., 589
Whitaker, Martin D., 525
White, Philip Cleaver, 421
White Sands Missile Range, 873
Whitney, Eli, 71, 335
Whittle, Frank, 807
Wichterle, Otto, 235
Wigginton, Randy, 28
Wigner, Eugene, 525, 789
Wilkins, Arnold F., 611

Wilkins, Maurice H. F., 733
Wilkins, Robert Wallace, 634
Williams, Charles Greville, 146
Wilmut, Ian, 177-178
Wilson, Robert Rathbun, 761
Wilson, Victoria, 370
Wise, Brownie, 799
Wolf, Fred, 289-290
Woodrow, O. B., 883
World War I, and nitrates, 18
World War II; and Aqualung, 36; atomic bomb, 84, 118, 521, 525-527, 697, 721; and computers, 92; spying, 34, 200, 202, 668; V-2 rocket, 65, 244, 659, 662, 871-874
Wouk, Victor, 360, 362
Wozniak, Stephen, 28, 30
Wright, Almroth, 555
Wright, Orville, 6-10, 658
Wright, Wilbur, 6-10, 335, 658
Wynn-Williams, C. E., 200
Xerography, 891-895
Xerox Corporation, 891-894
X-ray crystallography, 896-900
X-ray image intensifier, 901-904
X-ray mammography, 486-489
Yellow fever vaccine, 905-908
Yoshino, Hiroyuki, 360
Zaret, Milton M., 468
Zenith Radio Corporation, 209, 341
Zeppelin, Ferdinand von, 267-270
Ziegler, Karl, 593
Zinn, Walter Henry, 118
Zinsser, Hans, 811
Zippers, 846-847
Zoll, Paul Maurice, 545
Zsigmondy, Richard, 819, 821
Zweng, H. Christian, 468
Zworykin, Vladimir, 756, 758
