The Complete Guide to LIMS & Laboratory Informatics - 2015 Edition


Contents

1. About the Content of This Guide
   LIMSwiki mission
2. Introduction to Informatics in the Laboratory
   Information
   Informatics (academic field)
   Laboratory informatics
3. Informatics Across Several Industries
   Bioinformatics
   Chemical informatics
   Environmental informatics
   Geoinformatics
   Health informatics
4. All about LIMS and LIS
   Laboratory information management system
   LIMS feature
   Laboratory information system
   LIS feature
   LIMS and laboratory informatics questionnaire
5. More Laboratory Informatics Applications
   Electronic laboratory notebook
   Laboratory execution system
   Scientific data management system
   Chromatography data management system
6. Related Standards and Compliance
   21 CFR Part 11
   21 CFR Part 11/Audit guidelines and checklist
   40 CFR Part 3
   Good Automated Laboratory Practices
   Good Automated Manufacturing Practice
   Health Insurance Portability and Accountability Act
   Health Insurance Portability and Accountability Act/Audit guidelines and checklist
   Clinical Laboratory Improvement Amendments
   Health Level 7
   ISO 9000
   ISO/IEC 17025
   ISO/TS 16949
   The American Society of Crime Laboratory Directors/Laboratory Accreditation Board
   The NELAC Institute
7. Laboratory Informatics Resources
   LIMSWiki:LIMSforum and LIMS/LI forum posts
   Laboratory, Scientific, and Health Informatics Buyer's Guide
8. Key Laboratory Informatics Vendors
   LabLynx, Inc.
   LABVANTAGE Solutions, Inc.
   LabWare, Inc.
   STARLIMS Corporation
   Thermo Scientific
9. Laboratory Informatics Vendor Directory
   LIMS vendor
   ELN vendor
   CDMS vendor
   LIS vendor
   SDMS vendor
   Open-source laboratory informatics software

References
   Article Sources and Contributors
   Image Sources, Licenses and Contributors

Article Licenses
   License


1. About the Content of This Guide

LIMSwiki mission

The mission and goal of the 'Laboratory Informatics Encyclopedia', otherwise known as LIMSwiki, is to provide the laboratory community and LIMS community with an organized, documented, up-to-date, standardized body of knowledge (BoK) regarding all aspects of laboratory informatics, bioinformatics, and health informatics. The value and success of LIMSwiki (like any community wiki) is dependent upon the laboratory community contributing their vast knowledge through the creation of relevant articles and editing of existing articles where knowledge is absent.

The explosion of the laboratory and health informatics fields paired with the vast number of LIMS vendors and products — as well as LIMS' increased scope well beyond its original purpose of sample management to just about all facets of laboratory operations — has resulted in a wealth of information not easily comprehended. This potential confusion frustrates informatics customers and makes the already challenging task of successfully implementing and managing a LIMS, ELN, or other informatics software even more difficult. In that regard, the Laboratory Informatics Institute (which is responsible for this wiki) believes the community (including vendors, users, and consultants) will benefit from a common vocabulary and understanding to facilitate communication, comparison, and product integration, providing maximum value to buyers and maximum opportunity to vendors and consultants.


2. Introduction to Informatics in the Laboratory

Information

Information, in its most restricted technical sense, is a sequence of symbols that can be interpreted as a message, recorded as signs, or transmitted as signals. Conceptually, information is the message (utterance or expression) being conveyed. Therefore, in a general sense, information is "knowledge communicated or received concerning a particular fact or circumstance."

From the stance of information theory, information is taken as a sequence of symbols from an alphabet, say an input alphabet χ and an output alphabet ϒ. Information processing consists of an input-output function that maps any input sequence from χ into an output sequence from ϒ. The mapping may be probabilistic or deterministic, and it may have memory or be memoryless. Information cannot be predicted; it resolves uncertainty. The uncertainty of an event is measured by its probability of occurrence, and the amount of information conveyed is inversely related to that probability: the more uncertain an event, the more information is required to resolve the uncertainty of that event. The amount of information is measured in bits.

The concept that information is the message has different meanings in different contexts. Thus the concept of information becomes closely related to notions of constraint, communication, control, data, form, instruction, knowledge, meaning, understanding, stimulation, pattern, perception, representation, and entropy.
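Because the paragraph above ties the amount of information to the probability of an event, a short numerical illustration may help. The following sketch (not part of the original article) computes Shannon self-information, the standard information-theoretic measure of the information carried by an event of probability p, in bits:

```python
import math

def self_information_bits(p: float) -> float:
    """Shannon self-information, in bits, of an event with probability p."""
    if not 0 < p <= 1:
        raise ValueError("probability must be in (0, 1]")
    return -math.log2(p)

# A certain event resolves no uncertainty; rarer events carry more information.
print(self_information_bits(1.0))    # 0.0 bits (a certain event)
print(self_information_bits(0.5))    # 1.0 bit  (a fair coin flip)
print(self_information_bits(0.125))  # 3.0 bits (a 1-in-8 outcome)
```

This makes the inverse relationship concrete: each halving of an event's probability adds one bit of information.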

Variations of information

As sensory input

Often information can be viewed as a type of input to an organism or system. Some inputs are important to the function of the organism (for example, food) or to the system itself (energy) and are called causal inputs. Other inputs (information) are important only because they are associated with causal inputs and can be used to predict the occurrence of a causal input at a later time (and perhaps another place). Some information is important because of its association with other information, but eventually there must be a connection to a causal input.

In practice, information is usually carried by weak stimuli that must be detected by specialized sensory systems and amplified by energy inputs before they can be functional to the organism or system. For example, light is often a causal input to plants but provides information to animals. The colored light reflected from a flower is too weak to do much photosynthetic work. However, the visual system of the bee detects it, and the bee's nervous system uses the information to guide the bee to the flower, where the bee often finds nectar or pollen, causal inputs serving a nutritional function.

As representation and complexity

One theory says information is a concept that involves at least two related entities in order to make quantitative sense: a dimensionally defined category of objects "S" and any of its subsets "R". In essence, "R" is a representation of "S"; it conveys representational (and hence, conceptual) information about "S". The amount of information that "R" conveys about "S" is equivalent to the rate of change in the complexity of "S" whenever the objects in "R" are removed from "S". Under this theory, the universal scientific constructs of pattern, invariance, complexity, representation, and information are unified under a novel mathematical framework. Among other things, the framework aims to overcome the limitations of Shannon-Weaver information when attempting to characterize and measure subjective information.

As an influence which leads to a transformation

Information can also be defined as any type of pattern that influences the formation or transformation of other patterns. In this sense, there is no need for a conscious mind to perceive, much less appreciate, the pattern. Consider, for example, DNA. The sequence of nucleotides is a pattern that influences the formation and development of an organism without any need for a conscious mind. Systems theory at times seems to refer to information in this sense, assuming information does not necessarily involve any conscious mind, and patterns circulating (due to feedback) in the system can be called information. In other words, it can be said information in this sense is something potentially perceived as representation, though not created or presented for that purpose. For example, anthropologist and social scientist Gregory Bateson defined "information" as a "difference that makes a difference."

[Figure: Visual representation of the relationship between language, data/facts, information, and knowledge]

If, however, the premise of "influence" implies that information has been perceived by a conscious mind and also interpreted by it, the specific context associated with this interpretation may cause the transformation of the information into knowledge. Complex definitions of both "information" and "knowledge" make such semantic and logical analysis difficult, but the condition of "transformation" is an important point in the study of information as it relates to knowledge, especially in the business discipline of knowledge management. In this practice, tools and processes are used to assist a knowledge worker in performing research and making decisions, including steps such as:

• reviewing information in order to effectively derive value and meaning
• referencing metadata if any is available
• establishing a relevant context, often selecting from many possible contexts
• deriving new knowledge from the information
• making decisions or recommendations from the resulting knowledge

The Danish Dictionary of Information Terms suggests, however, that information only provides an answer to a posed question. Whether the answer provides knowledge depends on the informed person. Thus a generalized definition of the transformation concept could be "information represents the answer to a specific question."


As a property in physics

Information has a well-defined meaning in physics. In 2003 theoretical physicist J. D. Bekenstein claimed a growing trend in physics was to define the physical world as being made up of information itself. Examples of this include the phenomenon of quantum entanglement, where particles can interact without reference to their separation or the speed of light. Information itself cannot travel faster than light, even if the information is transmitted indirectly. This could lead to all attempts at physically observing a particle with an "entangled" relationship to another being slowed down, even though the particles are not connected in any way other than by the information they carry.

Another link is demonstrated by the Maxwell's demon thought experiment, which demonstrates a direct relationship between information and another physical property, entropy. A consequence is that destroying information is impossible without increasing the entropy of a system; in practical terms this often means generating heat.

As records

Records are specialized forms of information, produced consciously or as by-products of business activities or transactions and retained because of their value. Organizations value records as evidence of activity, but they may also be retained for their informational value. Sound records management ensures the integrity of records is preserved for as long as they are required.

The international standard on records management, ISO 15489, defines records as "information created, received, and maintained as evidence and information by an organization or person, in pursuance of legal obligations or in the transaction of business." The International Council on Archives (ICA) Committee on Electronic Records defined a record as "recorded information produced or received in the initiation, conduct, or completion of an institutional or individual activity and that comprises content, context, and structure sufficient to provide evidence of the activity."

Records may be maintained to retain corporate memory of the organization or to meet legal, fiscal, or accountability requirements imposed on the organization. In 2005 legal expert Anthony Willis elaborated on this view, stating the sound management of business records and information delivered "...six key requirements for good corporate governance ... transparency; accountability; due process; compliance; meeting statutory and common law requirements; and security of personal and corporate information."

Technologically mediated information

In 2011 scientists Martin Hilbert and Priscila López estimated the world's technological capacity to store information grew from 2.6 (optimally compressed) exabytes in 1986 — the informational equivalent of less than one 730-MB CD-ROM per person (539 MB per person) — to 295 (optimally compressed) exabytes in 2007, the informational equivalent of almost 61 CD-ROMs per person. Hilbert and López also stated the world's combined technological capacity to receive information through one-way broadcast networks was the informational equivalent of 174 newspapers per person per day in 2007, while the world's combined effective capacity to exchange information through two-way telecommunication networks was the informational equivalent of six newspapers per person per day.
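As an illustrative plausibility check on the per-person figures quoted above (the world population values below are rough assumptions added for this sketch, not figures from Hilbert and López):

```python
# Illustrative check of the per-person storage figures; population values
# are approximate assumptions, not taken from the source study.
EXABYTE = 10**18  # bytes
MEGABYTE = 10**6  # bytes

storage_1986 = 2.6 * EXABYTE       # optimally compressed, 1986
storage_2007 = 295 * EXABYTE       # optimally compressed, 2007
population_1986 = 4.9e9            # approximate world population, 1986
population_2007 = 6.6e9            # approximate world population, 2007

mb_per_person_1986 = storage_1986 / population_1986 / MEGABYTE
cdroms_per_person_2007 = storage_2007 / population_2007 / (730 * MEGABYTE)

print(f"{mb_per_person_1986:.0f} MB per person in 1986")           # ~531 MB
print(f"{cdroms_per_person_2007:.0f} CD-ROMs per person in 2007")  # ~61
```

Both results land close to the quoted 539 MB and 61 CD-ROM figures, consistent with populations of roughly five and six-and-a-half billion in those years.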


Information and semiotics

Scientists can also explain information in terms of signs and signal-sign systems. Signs themselves can be considered in terms of four interdependent levels, layers, or branches of semiotics: pragmatics, semantics, syntax, and empirics. These four layers serve to connect the social world with the physical or technical. The four branches of semiotics are described as such:

1. pragmatics: the purpose of communication - Pragmatics links the issue of signs with the context within which signs are used. The focus of pragmatics is on the intentions of living agents underlying communicative behavior. In other words, pragmatics links language to action.
2. semantics: the meaning of a message conveyed in a communicative act - Semantics considers the content of communication, the meaning of signs, and the association between signs and behavior. The study of semantics links symbols and their referents or concepts, particularly the way in which signs relate to human behavior.
3. syntax: the formalism used to represent a message - Syntax considers the form of communication in terms of the logic and grammar of sign systems. Syntax focuses on form rather than the content of signs and sign systems.
4. empirics: the signals used to carry a message - Empirics focuses on the physical characteristics of the medium of communication. Empirics is devoted to the study of communication channels and their characteristics, e.g., sound, light, electronic transmission, etc.

In 2008, lexicographer Sandro Nielsen discussed the relationship between semiotics and information in relation to dictionaries. He introduced the concept of lexicographic information costs, which refers to the efforts users of dictionaries need to make in order to, first, find the data sought and, second, understand the data so they can generate information.

Communication normally exists within the context of some social situation. The social situation sets the context for the intentions conveyed (pragmatics) and the form in which communication takes place. We express our intentions through a mutually understood collection of inter-related signs. Mutual understanding implies the agents involved understand the chosen language in terms of its agreed syntax (syntactics) and semantics. The sender codes the message in the language and sends the message as signals along some communication channel (empirics). The chosen communication channel has inherent properties that determine outcomes such as the speed with which communication can take place and over what distance.

Further reading

• Floridi, Luciano (2010). Information - A Very Short Introduction [1]. Oxford University Press. pp. 130. ISBN [2] 0199551375.
• Floridi, Luciano; Zalta, Edward N., ed. Semantic Conceptions of Information [3] (Spring 2013 ed.). Stanford University.
• Frieden, B. Roy (20 August 2012). "Fisher Information, a New Paradigm of Science" [4]. Optical Sciences Center, Univ. of Arizona.
• Von Baeyer, Hans Christian (2004). Information: The New Language of Science [5]. Harvard University Press. pp. 258. ISBN [2] 0674013875.
• Young, Paul (1987). The Nature of Information [6]. Praeger. pp. 192. ISBN [2] 0275926982.


External links

• Informationsordbogen.dk [7], the Danish Dictionary of Information Terms / Informationsordbogen

Notes

Some elements of this article are reused from the Wikipedia article [8].

References

[1] http://books.google.com/books?id=VupFqa3IJiUC
[2] http://en.wikipedia.org/wiki/International_Standard_Book_Number
[3] http://plato.stanford.edu/entries/information-semantic/
[4] http://fp.optics.arizona.edu/frieden/fisher_information.htm
[5] http://books.google.com/books?id=QpuZgAR8DJwC
[6] http://books.google.com/books?id=yX9QAAAAMAAJ
[7] http://www.informationsordbogen.dk
[8] http://en.wikipedia.org/wiki/Information

Informatics (academic field)

Informatics is the science of information, the practice of information processing, and the engineering of information systems. Informatics studies the structure, algorithms, behavior, and interactions of natural and artificial systems that store, process, access, and communicate information. It also develops its own conceptual and theoretical foundations and utilizes foundations developed in other fields. Since the advent of computers, individuals and organizations increasingly process information digitally. This has led to the study of informatics that has computational, cognitive, and social aspects, including study of the social impact of information technologies.

A computer used at China's 2002 National Olympiad in Informatics

While the field of informatics encompasses the study of systems that represent, process, and communicate information, the theory of computation, the discipline of theoretical computer science that evolved from the work of Alan Turing, studies the notion of a complex system regardless of whether information actually exists. Since both fields process information, there is some disagreement among scientists as to field hierarchy. For example, Arizona State University attempted to adopt a broader definition of informatics, one that even encompassed cognitive science, at the launch of its School of Computing and Informatics in September 2006. The confusion arises because information can easily be stored on a computer, and hence informatics could be considered the parent of computer science. However, the original notion of a computer was the name given to the act of computation, regardless of the existence of information or of a Von Neumann architecture. Humans are examples of computational systems, not information systems. Many fields, such as quantum computing theory, are studied in theoretical computer science but are not related to informatics. A practitioner of informatics may be called an informatician or an informaticist.


Etymology

In 1957 the German computer scientist Karl Steinbuch coined the word Informatik by publishing a paper called Informatik: Automatische Informationsverarbeitung ("Informatics: Automatic Information Processing"). The English term informatics is sometimes understood as meaning the same as computer science. However, the German word Informatik is the correct translation of the English phrase computer science. (The naming for computer science is derived from the concept of computation, which may or may not involve the existence of information. For example, quantum computation and digital logic do not involve information.)

The French term informatique was coined in 1962 by Philippe Dreyfus, together with various translations: informatics (English), also proposed independently and simultaneously by Walter F. Bauer and associates who co-founded Informatics Inc., and informatica (Italian, Spanish, Romanian, Portuguese, Dutch), referring to the application of computers to store and process information. The term was coined as a combination of "information" and "automatic" to describe the science of automating information interactions. The morphology — informat-ion + -ics — uses "the accepted form for names of sciences, as conics, linguistics, optics, or matters of practice, as economics, politics, tactics", and so, linguistically, the meaning extends easily to encompass both the science of information and the practice of information processing.

History

This new term was adopted across Western Europe and, except in English, developed a meaning roughly translated by the English "computer science" or "computing science." Mikhailov et al. advocated the Russian term informatika (1966) and the English informatics (1967) as names for the theory of scientific information, and they argued for a broader meaning, including study of the use of information technology in various communities and of the interaction of technology and human organizational structures:

   Informatics is the discipline of science which investigates the structure and properties (not specific content) of scientific information, as well as the regularities of scientific information activity, its theory, history, methodology and organization.

Usage has since modified this definition in three ways. First, the restriction to scientific information is removed, as in business informatics or legal informatics. Second, since most information is now digitally stored, computation is now central to informatics. Third, the representation, processing, and communication of information are added as objects of investigation, since they have been recognized as fundamental to any scientific account of information.

Taking information as the central focus of study distinguishes informatics, which includes the study of biological and social mechanisms of information processing, from computer science, where digital computation plays a distinguished central role. Similarly, in the study of representation and communication, informatics is indifferent to the substrate that carries information. For example, it encompasses the study of communication using gesture, speech, and language, as well as digital communications and networking.

The first example of a degree-level qualification in informatics occurred in 1982 when Plymouth Polytechnic (now the University of Plymouth) offered a four-year BSc (honours) degree in "Computing and Informatics," with an initial intake of only 35 students. The course still runs today, making it the longest available qualification in the subject.[citation needed]

In 1989, the first International Olympiad in Informatics (IOI) — a competition of the brightest informatics students around the world — was held in Bulgaria. The competition involved two days of intense competition, with up to four students selected from each participating country to attend and compete for the highest score on a variety of informatics problems.


Changing definitions

The definition of informatics has seen many variations across different institutions:

• The 2008 Research Assessment Exercise of the U.K. Funding Councils includes a new Computer Science and Informatics unit of assessment (UoA), the scope of which is described as follows: "The UoA includes the study of methods for acquiring, storing, processing, communicating and reasoning about information, and the role of interactivity in natural and artificial systems, through the implementation, organisation and use of computer hardware, software and other resources. The subjects are characterised by the rigorous application of analysis, experimentation and design."

• At the Indiana University School of Informatics and Computing in Indianapolis and Southeast, informatics is defined as "the art, science and human dimensions of information technology" and "the study and application of information technology to the arts, science and professions." These definitions are generally accepted in the United States and differ from British usage in omitting the study of natural computation.

• At the University of California, Irvine, informatics is defined thusly: "Informatics is based on recognizing that the design of this technology is not solely a technical matter, but must focus on the relationship between the technology and its use in real-world settings. That is, informatics designs solutions in context, and takes into account the social, cultural and organizational settings in which computing and information technology will be used."

• The University of Michigan, Ann Arbor, defines it as a "coupling [of] information with computing technology," adding: "Informatics provides solid grounding in computer programming, mathematics, and statistics, combined with study of the ethical and social science aspects of complex information systems. Informatics majors learn to critically analyze various approaches to processing information and develop skills to design, implement, and evaluate the next generation of information technology tools."

Applications of informatics

In the English-speaking world the term informatics was first widely used in the applied sense as "medical informatics," taken to include "the cognitive, information processing, and communication tasks of medical practice, education, and research, including information science and the technology to support these tasks." Many such compounds are now in use; they can be viewed as different areas of applied informatics.

In the 2000s, a major area of applied informatics is that of organizational informatics. Organizational informatics is fundamentally interested in the application of information, information systems, and ICT within organizations of various forms, including private sector, public sector, and voluntary sector organizations. As such, organizational informatics can be seen as a sub-category of social informatics and a super-category of business informatics. By 2004, the field of laboratory informatics — the specialized application of information technology to optimize and extend laboratory operations — began emerging as a more distinct area of applied informatics.


Contributing disciplines

• Computer science
• Communication studies
• Complex systems
• Didactics of informatics
• Information science
• Information theory
• Information technology

Further reading

• Gammack, John; Valerie Hobbs; Diarmuid Pigott (2011). The Book of Informatics [1] (1st Revised ed.). Cengage Learning. pp. 548. ISBN [2] 0170216004.
• Fourman, Michael (2002). Informatics: Informatics Research Report EDI-INF-RR-0139 [2] (PDF). University of Edinburgh. pp. 9.
• Bauer, Walter F. (1996). "Informatics and (et) Informatique" [3]. IEEE Annals of the History of Computing (Institute of Electrical and Electronics Engineers) 18 (2). Archived from the original [4] on 20 November 2010.

External links

• Council of European Professional Informatics Societies (CEPIS) [5]

References

[1] http://books.google.com/books?id=MOIW12eOvJsC
[2] http://www.inf.ed.ac.uk/publications/online/0139.pdf
[3] http://web.archive.org/web/20101120212846/http://www.softwarehistory.org/history/Bauer1.html
[4] http://www.softwarehistory.org/history/Bauer1.html
[5] http://www.cepis.org


Laboratory informatics

Laboratory informatics is the specialized application of information through a platform of instruments, software, and data management tools that allow scientific data to be captured, migrated, processed, and interpreted for immediate use, as well as stored, managed, and shared to support future research, development, and lab testing efforts while maximizing the efficiency of laboratory operations. The term "laboratory informatics" has been in use at least since the early 1980s and has expanded in meaning since then. Before the advent of computer technology, information management played an important role in laboratories and research efforts of all sorts. And while the process of information management continues to be important today, laboratory informatics tends to focus more on the technology associated with that information management process.

An Eppendorf thermal cycler as an example of a laboratory device that measures, processes, and sends information

The field itself is one which has seen significant growth as demand for fast and efficient electronic data exchange has boomed. A rapid series of technological developments have made laboratory equipment less static and more interactive, allowing large networks of integrated lab devices, computers, and telecommunications equipment to log, analyze, and distribute data. This has progressively enabled scientific research projects to move from a localized model to a more global model, one that allows "involved researchers to spend less time collecting data or waiting for information to arrive from another location, which in turn allows them to focus more on the work at hand and makes their research both faster and more efficient." This has led to laboratories requiring more robust and scalable data management systems to stay competitive. The rapid rate of change in the technological and environmental needs of researchers — coupled with growing competition — has led to the creation of conferences like the IQPC Forum on Laboratory Informatics to help directors, managers, and researchers better keep up with the industry.

Sub-elements in laboratory informatics

Laboratory informatics is often modeled as a central component or hub for other branching elements of the field. However, looking at the architecture in this fashion oversimplifies the field of laboratory informatics and risks giving the false appearance that branched elements of the field have greater importance than others. Instead, a multi-layered, non-hierarchical model of these elements that places an emphasis on an individual laboratory's identified business needs may be more appropriate. A cottage industry of businesses and consultants has developed from this philosophy, helping laboratories map their informatics needs to their corporate strategy.

Yet it's difficult to deny the existence of branching elements of laboratory informatics. Many scientific pursuits require a laboratory, from medicine to astrophysics. This has led to special "sub-applications" of informatics to more specialized laboratories. Genome informatics developed as genetics laboratories sought more efficient ways to manage the large amounts of data being acquired from experiments and research. As scientists continue their pursuit of unlocking the secrets of the brain, neuroinformatics and its associated technology has developed to aid those researchers in their endeavors. And as hydrologists tackle the issues of equitable and efficient use of water for many different purposes, hydroinformatics and computational hydraulics have emerged.


Technology of laboratory informatics

Important hardware and software systems that play a role in laboratory informatics include but are not limited to:

• Chromatography data management systems (CDMS)
• Electronic laboratory notebooks (ELN)
• Enterprise content management applications (ECM)
• Enterprise resource planning applications (ERP)
• Laboratory execution systems (LES)
• Laboratory information management systems (LIMS)
• Laboratory information systems (LIS)
• Manufacturing execution systems (MES)
• Process analytical technology (PAT)
• Scientific data management systems (SDMS)


3. Informatics Across Several Industries

Bioinformatics

Bioinformatics is the application of computer science and information technology to the field of biology, with a primary goal of understanding biological processes. What sets it apart from other approaches, however, is its focus on developing and applying computationally intensive techniques (e.g., pattern recognition, data mining, machine learning algorithms, and visualization) to achieve this goal. Major research efforts in the field include sequence alignment, gene finding, genome assembly, drug design, drug discovery, protein structure alignment, protein structure prediction, prediction of gene expression and protein–protein interactions, genome-wide association studies, and the modeling of evolution.

[Figure: Female laboratory technician sitting at a computer that displays a microarray; DNA microarray technology aids in gene expression analysis and other bioinformatics functions.]

The term "bioinformatics" was coined by Paulien Hogeweg and Ben Hesper in 1978 for "the study of informatic processes in biotic systems." Its primary use since at least the late 1980s has been in genomics and genetics, particularly in those areas of genomics involving large-scale DNA sequencing. However, rapid developments in genomic and molecular research and information technologies have combined to produce a tremendous amount of information related to molecular and other types of biology. Bioinformatics now entails the creation and advancement of databases, algorithms, and computational and statistical techniques and theory to solve formal and practical problems arising from the management and analysis of biological data. Common activities in bioinformatics include mapping and analyzing DNA and protein sequences, aligning different DNA and protein sequences to compare them, and creating and viewing 3-D models of protein structures.

History

Arguably one of the first "bioinformatics" projects — though the concept didn't yet exist — involved the 1965 creation and maintenance of a protein sequence database called the Atlas of Protein Sequence and Structure by Margaret O. Dayhoff, Richard V. Eck, and Robert S. Ledley. The work grew out of their "biochemical investigation of the relations between the structures and function of proteins and the theoretical attempt to decipher the genetic code." Six years later the Brookhaven National Laboratory and the Cambridge Crystallographic Data Centre jointly created the Protein Data Bank, intended as a public database of three-dimensional protein structures. The work at Brookhaven would go on to influence others in the field to contribute, with 23 structures contributed in 1976, breaking 5,000 by the end of 1996 and 40,000 in 2006.

The significant growth in contributions was fueled by several events, including: Peter Y. Chou and Gerald D. Fasman's 1974 creation (and later refinement) of a protein structure prediction algorithm; David J. Lipman and William R. Pearson's 1985 development (and later refinement) of FASTP (later FASTA), as well as Stephen Altschul and company's 1990 development and refinement of BLAST, both database sequence searching algorithms and programs; and the formal start of the Human Genome Project in 1990. A flurry of genome studies went on to produce unprecedented amounts of biological data, creating a sudden demand for rapid and efficient computational tools to manage and analyze the data. "The development of these computational tools depended on knowledge generated from a wide range of disciplines including mathematics, statistics, computer science, information technology, and molecular biology." The merger of these disciplines largely went on to form what is now known as bioinformatics.

Bioinformatics vs. computational biology

In order to study how normal cellular activities are altered in different disease states, biological data must be combined to form a comprehensive picture of these activities. Therefore, the field of bioinformatics has evolved such that the most pressing task now involves the analysis and interpretation of various types of data, including nucleotide and amino acid sequences, protein domains, and protein structures. However, the related field of computational biology differs slightly from bioinformatics. Jin Xiong, author of Essential Bioinformatics, describes the differences between the two as such:

   Bioinformatics is limited to sequence, structural, and functional analysis of genes and genomes and their corresponding products and is often considered computational molecular biology. However, computational biology encompasses all biological areas that involve computation. For example, mathematical modeling of ecosystems, population dynamics, application of the game theory in behavioral studies, and phylogenetic construction using fossil records all employ computational tools, but do not necessarily involve biological macromolecules.

Major research areas

Sequence analysis

Since the phage Φ-X174 was sequenced in 1977, the DNA sequences of thousands of organisms have been decoded and stored in databases. This sequence information is analyzed to determine genes that encode polypeptides (proteins), RNA genes, regulatory sequences, structural motifs, and repetitive sequences. A comparison of genes within a species or between different species can show similarities between protein functions, or relations between species (the use of molecular systematics to construct phylogenetic trees). With the growing amount of data, it long ago became impractical to analyze DNA sequences manually. Today, computer programs such as BLAST are used daily to search sequences from more than 260,000 organisms, containing over 190 billion nucleotides. These programs can compensate for mutations (exchanged, deleted, or inserted bases) in the DNA sequence, to identify sequences that are related, but not identical.

A variant of this sequence alignment is used in the sequencing process itself. The so-called shotgun sequencing technique — which was used, for example, by The Institute for Genomic Research to sequence the first bacterial genome, Haemophilus influenzae — does not produce entire chromosomes, but instead generates the sequences of many thousands of small DNA fragments (ranging from 35 to 900 nucleotides long, depending on the sequencing technology). The ends of these fragments overlap and, when aligned properly by a genome assembly program, can be used to reconstruct the complete genome, as sketched below. Shotgun sequencing yields sequence data quickly, but the task of assembling the fragments can be quite complicated for larger genomes. For a genome as large as the human genome, it may take many days of CPU time on large-memory, multiprocessor computers to assemble the fragments, and the resulting assembly will usually contain numerous gaps that have to be filled in later. Shotgun sequencing is the method of choice for virtually all genomes sequenced today, and genome assembly algorithms are a critical area of bioinformatics research.
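To make the fragment-overlap idea concrete, here is a toy greedy assembler (an illustrative sketch only; real assemblers for shotgun data must handle sequencing errors, repeats, and vastly larger scale):

```python
def overlap(a: str, b: str, min_len: int = 3) -> int:
    """Length of the longest suffix of a that matches a prefix of b."""
    for n in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def greedy_assemble(fragments: list[str]) -> str:
    """Repeatedly merge the pair of fragments with the largest overlap."""
    frags = list(fragments)
    while len(frags) > 1:
        best = (0, 0, 1)  # (overlap length, index of a, index of b)
        for i, a in enumerate(frags):
            for j, b in enumerate(frags):
                if i != j and overlap(a, b) > best[0]:
                    best = (overlap(a, b), i, j)
        n, i, j = best
        if n == 0:  # no overlaps remain; join the rest in arbitrary order
            return "".join(frags)
        merged = frags[i] + frags[j][n:]
        frags = [f for k, f in enumerate(frags) if k not in (i, j)] + [merged]
    return frags[0]

# Three overlapping "reads" reconstruct the original sequence.
print(greedy_assemble(["ATTAGACCTG", "CCTGCCGGAA", "AGACCTGCCG"]))
# -> ATTAGACCTGCCGGAA
```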

13

Another aspect of bioinformatics in sequence analysis is annotation, which involves computational gene finding to search for protein-coding genes, RNA genes, and other functional sequences within a genome. Not all of the nucleotides within a genome are part of genes. Within the genome of higher organisms, large parts of the DNA do not serve any obvious purpose. This so-called junk DNA may, however, contain unrecognized functional elements. Bioinformatics helps to bridge the gap between genome and proteome projects, as in the use of DNA sequences for protein identification.

Gene expression analysis

The expression of many genes can be determined by measuring mRNA levels with multiple techniques, including microarrays, expressed cDNA sequence tag (EST) sequencing, serial analysis of gene expression (SAGE) tag sequencing, massively parallel signature sequencing (MPSS), RNA-Seq (also known as whole transcriptome shotgun sequencing [WTSS]), or various applications of multiplexed in-situ hybridization. All of these techniques are extremely noise-prone and/or subject to bias in the biological measurement, and a major research area in computational biology involves developing statistical tools to separate signal from noise in high-throughput gene expression studies. Such studies are often used to determine the genes implicated in a disorder: one might compare microarray data from cancerous epithelial cells to data from non-cancerous cells to determine the transcripts that are up-regulated and down-regulated in a particular population of cancer cells.
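As a minimal illustration of the tumor-versus-normal comparison described above (the gene names and expression values below are invented for the sketch, not real measurements), a common first step is computing a log2 fold-change per gene:

```python
import math

# Hypothetical normalized expression intensities: (tumor, normal) per gene.
expression = {
    "GENE_A": (250.0, 31.0),
    "GENE_B": (12.0, 95.0),
    "GENE_C": (48.0, 51.0),
}

for gene, (tumor, normal) in expression.items():
    log2_fc = math.log2(tumor / normal)
    if log2_fc > 1:
        status = "up-regulated"
    elif log2_fc < -1:
        status = "down-regulated"
    else:
        status = "unchanged"
    print(f"{gene}: log2 fold-change = {log2_fc:+.2f} ({status})")
```

Real studies add the statistical machinery the text mentions (replicates, noise models, multiple-testing correction) on top of this basic comparison.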

Regulation analysis

Regulation is the complex orchestration of events starting with an extracellular signal such as a hormone and leading to an increase or decrease in the activity of one or more proteins. Bioinformatics techniques have been applied to explore various steps in this process. For example, promoter analysis involves the identification and study of sequence motifs in the DNA surrounding the coding region of a gene. These motifs influence the extent to which that region is transcribed into mRNA. Expression data can be used to infer gene regulation: one might compare microarray data from a wide variety of states of an organism to form hypotheses about the genes involved in each state. In a single-cell organism, one might compare stages of the cell cycle, along with various stress conditions (heat shock, starvation, etc.). One can then apply clustering algorithms to that expression data to determine which genes are co-expressed. For example, the upstream regions (promoters) of co-expressed genes can be searched for over-represented regulatory elements.
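The co-expression step above can be sketched with a simple correlation-based grouping (illustrative only; the profiles are invented, and real analyses apply k-means or hierarchical clustering to matrices of thousands of genes):

```python
from statistics import correlation  # Python 3.10+

# Hypothetical expression profiles across five conditions
# (e.g., cell-cycle stages or stress conditions).
profiles = {
    "GENE_A": [1.0, 2.1, 4.0, 2.0, 1.1],
    "GENE_B": [0.9, 2.0, 3.8, 2.2, 1.0],  # tracks GENE_A closely
    "GENE_C": [4.1, 2.9, 1.0, 3.0, 4.0],  # anti-correlated with GENE_A
}

reference = "GENE_A"
for gene, profile in profiles.items():
    if gene == reference:
        continue
    r = correlation(profiles[reference], profile)
    verdict = "co-expressed" if r > 0.9 else "not co-expressed"
    print(f"{gene} is {verdict} with {reference} (r = {r:.2f})")
```

Genes grouped this way become candidates for the promoter motif search the paragraph describes.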

Protein expression analysis

Protein microarrays and high-throughput mass spectrometry can provide a snapshot of the proteins present in a biological sample. Bioinformatics is very much involved in making sense of protein microarray and mass spectrometry data; the former approach faces similar problems as microarrays targeted at mRNA, while the latter involves the problem of matching large amounts of mass data against predicted masses from protein sequence databases, along with the complicated statistical analysis of samples where multiple but incomplete peptides from each protein are detected.

Cancer mutation analysis

In cancer, the genomes of affected cells are rearranged in complex or even unpredictable ways. Massive sequencing efforts are used to identify previously unknown point mutations in a variety of genes in cancer. Bioinformaticians continue to produce specialized automated systems to manage the sheer volume of sequence data produced, and they create new algorithms and software to compare the sequencing results to the growing collection of human genome sequences and germline polymorphisms. New physical detection technologies are employed, such as oligonucleotide microarrays to identify chromosomal gains and losses (called comparative genomic hybridization), and single-nucleotide polymorphism arrays to detect known point mutations. These detection methods simultaneously measure several hundred thousand sites throughout the genome, and when used in high-throughput to measure thousands of samples, generate terabytes of data per experiment. The data is often found to contain considerable variability, or noise, and thus hidden Markov model and change-point analysis methods are being developed to infer real copy number changes.
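A toy change-point detector gives a feel for the copy-number inference just described (a deliberately simplified sketch with invented probe values; production methods use HMMs or penalized segmentation across hundreds of thousands of probes):

```python
def best_changepoint(signal: list[float]) -> int:
    """Return the split index that minimizes within-segment squared error."""
    def sse(seg: list[float]) -> float:
        if not seg:
            return 0.0
        mean = sum(seg) / len(seg)
        return sum((x - mean) ** 2 for x in seg)

    best_i, best_cost = 1, float("inf")
    for i in range(1, len(signal)):
        cost = sse(signal[:i]) + sse(signal[i:])
        if cost < best_cost:
            best_i, best_cost = i, cost
    return best_i

# Hypothetical noisy log-ratio values: copy-neutral probes (~0.0)
# followed by a single-copy gain (~0.58).
probes = [0.05, -0.1, 0.02, 0.08, -0.04, 0.61, 0.55, 0.63, 0.52, 0.60]
print("change-point at probe index", best_changepoint(probes))  # -> 5
```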

Genome annotation

In the context of genomics, annotation is the process of marking the genes and other biological features in a DNA sequence. The first genome annotation software system was designed in 1995 by Dr. Owen White, who was part of the team at The Institute for Genomic Research that sequenced and analyzed the first genome of a free-living organism to be decoded, the bacterium Haemophilus influenzae. Dr. White built a software system to find the genes (places in the DNA sequence that encode a protein), the transfer RNA, and other features, and to make initial assignments of function to those genes. Most current genome annotation systems work similarly, but the programs available for analysis of genomic DNA are constantly changing and improving.

Comparative and computational genomics

The core of comparative genome analysis is the establishment of the correspondence between genes (orthology analysis) or other genomic features in different organisms. It is these intergenomic maps that make it possible to trace the evolutionary processes responsible for the divergence of two genomes. A multitude of evolutionary events acting at various organizational levels shape genome evolution. At the lowest level, point mutations affect individual nucleotides. At a higher level, large chromosomal segments undergo duplication, lateral transfer, inversion, transposition, deletion, and insertion. Ultimately, whole genomes are involved in processes of hybridization, polyploidization, and endosymbiosis, often leading to rapid speciation. The complexity of genome evolution poses many exciting challenges to developers of mathematical models and algorithms, who have recourse to a spectrum of algorithmic, statistical, and mathematical techniques. Examples range from exact, heuristic, fixed-parameter, and approximation algorithms for problems based on parsimony models to Markov chain Monte Carlo algorithms for Bayesian analysis of problems based on probabilistic models.

Biological systems modeling

Systems biology involves the use of computer simulations of cellular subsystems (such as the networks of metabolites and enzymes which comprise metabolism, signal transduction pathways, and gene regulatory networks) to both analyze and visualize the complex connections of these cellular processes. Artificial life or virtual evolution attempts to understand evolutionary processes via the computer simulation of simple (artificial) life forms.

Computational evolutionary biology

Evolutionary biology is the study of the origin and descent of species, as well as their change over time. Informatics has assisted evolutionary biologists in several key ways, enabling researchers to:

• trace the evolution of a large number of organisms by measuring changes in their DNA, rather than through physical taxonomy or physiological observations alone.
• compare entire genomes, which permits the study of more complex evolutionary events, such as gene duplication, horizontal gene transfer, and the prediction of factors important in bacterial speciation.
• build complex computational models of populations to predict the outcome of the system over time.
• track and share information on an increasingly large number of species and organisms.

The area of research within computer science that uses genetic algorithms is sometimes confused with computational evolutionary biology, but the two areas are not necessarily related.


Literature analysis

The sheer amount of published literature makes it virtually impossible to read every paper, resulting in disjointed subfields of research. Literature analysis aims to employ computational and statistical linguistics to mine this growing library of text resources. For example:

• abbreviation recognition - identifying the long form and abbreviation of biological terms
• named entity recognition - recognizing biological terms such as gene names (see the sketch after this list)
• protein-protein interaction - identifying from text which proteins interact with which other proteins

This area of research draws on statistics and computational linguistics and is substantially influenced by them.
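A trivial pattern-based tagger hints at what the named entity recognition task involves (purely illustrative; real biomedical NER relies on trained statistical models and curated lexicons rather than a regular expression):

```python
import re

# Illustrative heuristic: many human gene symbols are short, all-caps
# alphanumeric tokens (e.g., TP53, BRCA1). This naive rule over-matches
# badly in practice; it is only meant to show the shape of the task.
GENE_PATTERN = re.compile(r"\b[A-Z][A-Z0-9]{2,5}\b")

sentence = ("Mutations in TP53 and BRCA1 alter how the encoded proteins "
            "interact with MDM2 in the DNA damage response.")

candidates = GENE_PATTERN.findall(sentence)
print(candidates)  # ['TP53', 'BRCA1', 'MDM2', 'DNA'] - note the false positive 'DNA'
```

The false positive is the point: distinguishing gene names from ordinary all-caps terms is exactly why statistical methods dominate this area.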

Structural bioinformatic approaches

Prediction of protein structure

Protein structure prediction is another important application of bioinformatics. The amino acid sequence of a protein, the so-called primary structure, can be easily determined from the sequence of the gene that codes for it. In the vast majority of cases, this primary structure uniquely determines a structure in its native environment. (Of course, there are exceptions, such as the bovine spongiform encephalopathy [a.k.a. mad cow disease] prion.) Knowledge of this structure is vital in understanding the function of the protein. For lack of better terms, structural information is usually classified as one of secondary, tertiary, or quaternary structure. A viable general solution to such predictions remains an open problem. As of now, most efforts have been directed towards heuristics that work most of the time.

[Figure: The idealized evolution of gene lines is shown from a common ancestor in an ancestral population, descending to three populations labeled A, B, and C. There are two speciation events, each occurring at the junctions shown as an upside-down Y. There are also two gene-duplication events, depicted by a horizontal bar.]

One of the key ideas in bioinformatics is the notion of homology. In the genomic branch of bioinformatics, homology is used to predict the function of a gene: if the sequence of gene A, whose function is known, is homologous to the sequence of gene B, whose function is unknown, one could infer that B may share A's function. In the structural branch of bioinformatics, homology is used to determine which parts of a protein are important in structure formation and interaction with other proteins. In a technique called homology modeling, this information is used to predict the structure of a protein once the structure of a homologous protein is known. This currently remains the only way to predict protein structures reliably. One example of this is the similar protein homology between hemoglobin in humans and the hemoglobin in legumes (leghemoglobin). Both serve the same purpose of transporting oxygen in the organism. Though both of these proteins have completely different amino acid sequences, their protein structures are virtually identical, which reflects their near identical purposes.
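The homology inference described above can be caricatured in a few lines (an illustrative sketch with invented sequences; real pipelines use BLAST alignments and E-value thresholds rather than raw percent identity over a fixed-length fragment):

```python
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percent identity over an ungapped, equal-length alignment."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sketch assumes pre-aligned, equal-length sequences")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

# Hypothetical aligned protein fragments; gene A's function is known,
# gene B's is not.
gene_a = "MKTAYIAKQRQISFVKSHFSRQ"
gene_b = "MKTAYIAKQRQISFVKDHFSRQ"

identity = percent_identity(gene_a, gene_b)
if identity >= 40.0:  # arbitrary illustrative threshold
    print(f"{identity:.1f}% identity: gene B may share gene A's function")
```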


Molecular interaction

Efficient software is available today for studying interactions among proteins, ligands, and peptides. Types of interactions most often encountered in the field include protein–ligand (including drug), protein–protein, and protein–peptide. Molecular dynamic simulation of the movement of atoms about rotatable bonds is the fundamental principle behind the computational algorithms, termed docking algorithms, used for studying molecular interactions.

Docking algorithms

In the last two decades, tens of thousands of protein three-dimensional structures have been determined by X-ray crystallography and protein nuclear magnetic resonance spectroscopy (protein NMR). One central question for the biological scientist is whether it is practical to predict possible protein–protein interactions based only on these 3D shapes, without doing protein–protein interaction experiments. A variety of methods have been developed to tackle the protein–protein docking problem, though it seems that there is still much work to be done in this field.

Software and tools

Software tools for bioinformatics range from simple command-line tools to more complex graphical programs and standalone web services available from various bioinformatics companies or public institutions.

Open source bioinformatics software

Many free and open-source bioinformatics software tools have existed since the 1980s. The combination of a continued need for new algorithms for the analysis of emerging types of biological readouts, the potential for innovative in silico experiments, and freely available open code bases have helped to create opportunities for all research groups to contribute to both bioinformatics and the range of open-source software available, regardless of their funding arrangements. In order to maintain this tradition and create further opportunities, the non-profit Open Bioinformatics Foundation has supported the annual Bioinformatics Open Source Conference (BOSC) since 2000.

Web services in bioinformatics

SOAP- and REST-based interfaces have been developed for a wide variety of bioinformatics applications, allowing an application running on one computer in one part of the world to use algorithms, data, and computing resources on servers in other parts of the world. The main advantages derive from the fact that end users do not have to deal with software and database maintenance overheads. Basic bioinformatics services are classified by the European Bioinformatics Institute (EBI) into numerous categories, including ontologies, structures, gene expression, proteins, etc. The availability of these service-oriented bioinformatics resources demonstrates the applicability of web-based bioinformatics solutions, which range from a collection of standalone tools with a common data format under a single, standalone, or web-based interface to integrative, distributed, and extensible bioinformatics workflow management systems.
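In the REST style described above, a client simply issues HTTP requests and parses the response. The sketch below is hypothetical: the host, path, and response fields are invented for illustration and do not correspond to any actual EBI interface.

```python
import json
import urllib.request

# Hypothetical RESTful lookup of a protein record. The URL and the
# response structure here are invented for illustration only.
accession = "P69905"
url = f"https://bioservices.example.org/api/proteins/{accession}?format=json"

with urllib.request.urlopen(url) as response:
    record = json.load(response)

# The client never installs the underlying databases or algorithms;
# it only consumes the service's published interface.
print(record["accession"], record["sequence_length"])
```

This is the maintenance advantage the paragraph describes: the data and algorithms stay on the server, and the client needs nothing beyond an HTTP library.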


Further reading

• Jones, Neil C.; Pevzner, Pavel A. (2004). An Introduction to Bioinformatics Algorithms [1]. MIT Press. pp. 435. ISBN [2] 0262101068.
• Lesk, Arthur (2008). Introduction to Bioinformatics [2] (3rd ed.). OUP Oxford. pp. 474. ISBN [2] 0199208042.
• Xiong, Jin (2006). Essential Bioinformatics [3]. Cambridge University Press. pp. 339. ISBN [2] 113945062X.

External links

• The Bioinformatics Organization [4]
• Bioinformatics Without Borders [5]
• Open Bioinformatics Foundation [6]

Notes

Some elements of this article are reused from the Wikipedia article [7].

References

[1] http://books.google.com/books?id=p_qzpkNVcUwC
[2] http://books.google.com/books?id=et5qQgAACAAJ
[3] http://books.google.com/books?id=AFsu7_goA8kC
[4] http://www.bioinformatics.org/
[5] http://www.embnet.org/
[6] http://www.open-bio.org/
[7] http://en.wikipedia.org/wiki/Bioinformatics

Chemical informatics

Chemical informatics (more commonly known as chemoinformatics and cheminformatics) is the use of computer and informational techniques applied to a range of problems in the field of chemistry. While the field has existed in some form since roughly the 1990s, the rise of high-throughput screening (a scientific experimentation method primarily used in drug discovery) and combinatorial chemistry (a method of synthesizing a large number of compounds in a single process), as well as increases in computing power and data storage sizes, have increased interest in the field in the twenty-first century.

The Jmol open-source Java viewer for chemical 3D structures is an example of a software application that may be used in the field of chemical informatics.

Outside of pharmaceutical research, other applications of chemical informatics include the area of topology, chemical graph theory, and mining the chemical space. It can also be applied to data analysis for the paper, pulp, and dye industries.

History

The 1960s saw the introduction of databases for the storage and retrieval of chemical structures, as well as three-dimensional molecular modeling methods, laying the groundwork for future generations to improve computational methods of chemical and molecular analysis. The term "chemoinformatics" was defined by F.K. Brown in 1998 as such:

   Chemoinformatics is the mixing of those information resources to transform data into information and information into knowledge for the intended purpose of making better decisions faster in the area of drug lead identification and optimization.

Since then, both the "chem" and "chemo" spellings have been used. European academia settled on the term "chemoinformatics" for its 2006 Obernai research and teaching workshop. Other entities like the Journal of Cheminformatics and Slovak company Molinspiration have trended towards "cheminformatics."

Application

Storage and retrieval

The primary application of chemical informatics is in the storage and retrieval of both structured and unstructured information relating to chemical structures, molecular models, and other chemical data. Efficiently querying and retrieving that stored information extends into other realms of computer science like data mining and machine learning. Other forms of data querying include graph, molecule, sequence, and tree mining.

Representation

The in silico representation of chemical structures uses specialized formats such as the XML-based Chemical Markup Language or Simplified Molecular-Input Line-Entry System (SMILES) specifications. These representations are often used for storage in large chemical databases. While some formats are suited for visual representations in two or three dimensions, others are more suited for studying physical interactions, modeling, and docking studies.
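As one concrete example of a line-entry representation, a SMILES string can be parsed into a molecule object and re-emitted in canonical form. This sketch assumes the open-source RDKit toolkit is installed (RDKit is not mentioned in the original text; any cheminformatics toolkit with SMILES support would serve the same purpose):

```python
from rdkit import Chem
from rdkit.Chem import Descriptors

# "CCO" is the SMILES string for ethanol: two carbons and an oxygen,
# with hydrogens left implicit.
mol = Chem.MolFromSmiles("CCO")

print(mol.GetNumAtoms())       # 3 heavy atoms
print(Descriptors.MolWt(mol))  # ~46.07 g/mol
print(Chem.MolToSmiles(mol))   # canonical form, suitable as a database key
```

Canonicalization is the point of the last line: two chemists may write different but equivalent SMILES for the same molecule, and a canonical form lets a database treat them as one record.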

Virtual libraries

Stored chemical data can pertain to both real and virtual molecules. Virtual libraries of such molecules and compounds may be generated in various ways to explore chemical space and hypothesize novel compounds with desired properties. The Fragment Optimized Growth (FOG) algorithm, for example, was developed to "grow" novel classes of compounds like drugs, natural products, and diversity-oriented synthetic products from a training database of existing compounds.

Virtual screening

In contrast to high-throughput screening, virtual screening involves computationally screening in silico libraries of compounds, by means of various methods such as docking, to identify members likely to possess desired properties such as biological activity against a given target. In some cases, combinatorial chemistry is used in the development of the library to increase the efficiency of mining the chemical space. More commonly, a diverse library of small molecules or natural products is screened.
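Docking itself requires specialized engines, but a simpler stand-in conveys the flavor of virtual screening: filtering an in silico library on computed physicochemical properties. In the Python sketch below (RDKit assumed installed), the tiny "library" and the Lipinski-style cutoffs are invented for illustration.

```python
# Screen a toy in silico library by computed properties; compounds passing
# the simple drug-likeness cutoffs would move on to costlier methods.
from rdkit import Chem
from rdkit.Chem import Descriptors

library = ["CCO", "CC(=O)Oc1ccccc1C(=O)O", "c1ccccc1"]  # invented library

for smiles in library:
    mol = Chem.MolFromSmiles(smiles)
    mw = Descriptors.MolWt(mol)      # molecular weight
    logp = Descriptors.MolLogP(mol)  # estimated lipophilicity
    # keep candidates under simple Lipinski-style cutoffs
    if mw <= 500 and logp <= 5:
        print(f"{smiles}: MW={mw:.1f}, logP={logp:.2f} -> keep")
```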


Quantitative structure-activity relationship (QSAR)

This is the calculation of quantitative structure-activity relationship (QSAR) and quantitative structure-property relationship (QSPR) values, used to predict the activity of compounds from their structures. In this context there is also a strong relationship to chemometrics, the science of extracting information from chemical systems by data-driven means. Chemical expert systems are also relevant, since they represent parts of chemical knowledge as an in silico representation.
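As a toy illustration of the QSAR idea, the Python sketch below fits a linear model relating two descriptors to a measured activity by ordinary least squares. All numbers are invented, and real QSAR studies use far larger datasets and validated descriptors.

```python
# A toy QSAR model: predict activity from two computed descriptors.
import numpy as np

# rows: [molecular weight, logP] for four hypothetical compounds
X = np.array([[180.2, 1.2], [206.3, 2.1], [151.2, 0.9], [230.1, 2.8]])
y = np.array([6.1, 6.9, 5.8, 7.4])  # hypothetical pIC50 activity values

# add an intercept column and solve for the coefficients
A = np.hstack([X, np.ones((X.shape[0], 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

print("coefficients:", coef)
print("predicted activities:", A @ coef)
```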

External links

• Cambridge Healthtech Institute Cheminformatics/Chemoinformatics Glossary & Taxonomy [1]
• Indiana Cheminformatics Education Portal [2]
• The Blue Obelisk Project [3]
• The Chemical Structure Association Trust [4]
• The eCheminfo Network and Community of Practice [5]
• The UK-QSAR and ChemoInformatics Group [6]

Notes

This article reuses portions of content from the Wikipedia article [7].

References

[1] http://www.genomicglossaries.com/content/chemoinformatics_gloss.asp
[2] http://icep.wikispaces.com/
[3] http://www.blueobelisk.org/
[4] http://www.csa-trust.org
[5] http://www.echeminfo.com/
[6] http://www.ukqsar.org
[7] http://en.wikipedia.org/wiki/Cheminformatics


Environmental informatics

Environmental informatics (EI) is a developing field of science that applies information processing, management, and sharing strategies to the interdisciplinary field of environmental science. Applications include the integration of information and knowledge, the application of computational intelligence to environmental data, and the identification of the environmental impacts of information technology. EI helps scientists define information processing requirements, analyze real-world problems, and solve those problems using informatics methodologies and tools. As EI has continued to evolve, several other definitions have been offered over the years:

• "an emerging field centering around the development of standards and protocols, both technical and institutional, for sharing and integrating environmental data and information." - Biosphere Data Project, University of California - Berkeley, 2004
• the application of "[r]esearch and system development focusing on the environmental sciences relating to the creation, collection, storage, processing, modelling, interpretation, display and dissemination of data and information." - Natural Environment Research Council, 2014

Publicly available data sets and informatics tools like the open-source SAGA GIS [1] enable the creation of environmental models and images such as this.

History

Environmental informatics emerged around the late 1980s in Central Europe. For example, in 1986 Germany's Gesellschaft für Informatik (Society for Computer Science) created the technical committee Informatik im Umweltschutz (Computer Science in Environmental Protection), dedicated to "the whole spectrum of subjects related to informatics in environmental protection." The group is still active as of 2014, set to host its 28th International Conference on Informatics for Environmental Protection. Since Informatik im Umweltschutz's inception, other groups in Germany and other regions of the world have been created, including The International Environmetrics Society (TIES, founded in 1989) and the International Environmental Modelling and Software Society (iEMSs, founded in 2000), as well as conferences like the International Symposium on Environmental Software Systems (ISESS, founded in 1995).


Application

Environmental informatics can help tackle problems and tasks such as the following:

• the acquisition and application of remote sensing data from optical, thermal infrared, and microwave instruments targeting the atmosphere, vegetation, and the ocean
• the estimation of aerosol load in the atmosphere
• the gauging of the influence of trace gases, aerosol, and clouds on the weather and climate
• the analysis of geographical features for urban and regional development
• the modeling and assessment of ecological environments
• the development and optimization of mathematical algorithms for environmental modeling

Ecoinformatics

Closely related to EI is the concept of ecological informatics or "ecoinformatics," which essentially takes environmental informatics and adds the consideration of anthropogenic activity trends. Ecoinformatics aims to facilitate environmental research and management by developing ways to access, manage, and integrate databases of environmental information, as well as new algorithms that allow different environmental datasets to be combined to test ecological hypotheses.

Further reading

• Recknagel, Friedrich; Jørgensen, Sven Erik (ed.); Chon, T. S. (ed.) (2009). "Chapter 3: Ecological Informatics: Current Scope and Feature Areas" [2]. Handbook of Ecological Modelling and Informatics. WIT Press. pp. 41–47. ISBN 9781845642075.
• Voigt, Kristina (July 2008). "Environmental Informatics, Environmetrics, Chemoinformatics, Chemometrics: Integration or Separation!?" [3] (PDF). International Congress on Environmental Modelling and Software. Proceedings of the iEMSs Fourth Biennial Meeting. 3: 1594–1601. ISBN 9788476530740.

External links

• Data Observation Network for Earth [4] (DataONE)
• ecoinformatics.org - Online Resource for Managing Ecological Data and Information [5]
• Ecological Data Wiki [6]
• Ecological Informatics: An International Journal on Ecoinformatics and Computational Ecology [7]
• Frontiers in Environmental Science - Environmental Informatics [8]
• Informatik für Umweltschutz, Nachhaltige Entwicklung und Risikomanagement [9] (formerly Informatik im Umweltschutz)
• International Environmental Modelling and Software Society [10] (iEMSs)
• International Society for Environmental Information Sciences [11] (ISEIS)
• Journal of Environmental Informatics [12]
• The International Environmetrics Society [13] (TIES)


Notes

This article reuses a couple of elements from the Wikipedia article [14].

References

[1] http://saga-gis.org/en/index.html
[2] http://books.google.com/books?id=XzEKlNhnUHUC&pg=PA41
[3] http://www.iemss.org/iemss2008/uploads/Main/S18-01-Voigt_et_al-IEMSS2008.pdf
[4] http://www.dataone.org
[5] http://www.ecoinformatics.org
[6] http://www.ecologicaldata.org/
[7] http://www.journals.elsevier.com/ecological-informatics/
[8] http://www.frontiersin.org/Environmental_Informatics
[9] http://enviroinfo.eu/
[10] http://www.iemss.org/
[11] http://www.iseis.org/
[12] http://www.iseis.org/jei/
[13] http://www.environmetrics.org/
[14] http://en.wikipedia.org/wiki/Environmental_informatics

Geoinformatics

Geoinformatics is a multidisciplinary field of science that uses technologies supporting the processes of acquiring, analyzing, and visualizing geospatial data. The definition of the term "geoinformatics" varies greatly, however. For example, author G. Randy Keller, focusing on the internals of our planet, explained geoinformatics as the use of "data, software tools, and computational infrastructure ... to facilitate studies of the structure, dynamics, and evolution of the solid Earth through time, as well as the processes that act upon it and within it from the near surface to the core." Other definitions of geoinformatics dutifully extend its scope to the surface of the planet, causing more confusion as terms like "geomatics," "geographical information system," and "computational geography" are brought to the discussion from different regions around the world and are often used synonymously.

Geological scientists use geoinformatics tools to create 3D maps of not only Earth's surface but also, as in the case of astrogeology, the surface of other planets like Mars.

Senior lecturer Jiří Šíma of the University of West Bohemia in Pilsen attempts to compare "geomatics" and "geoinformatics" using ISO standards: "According to ISO Standard 19122 'geomatics is a discipline concerned with the collection, distribution, storage, analysis, processing, presentation of geographic data or geographic information.' Its range is perfectly described by activities of the Geomatics Canada: establishing and maintainace [sic] of national spatial reference system, preparing, publishing and distributing of state topographical maps, aeronautical charts, aerial photographs and gazetteers, surveys on state boundaries, property surveys on federal lands, maintainance [sic] of national bases of geographic data for the development of geographical information systems. There is no definition of geoinformatics in ISO Standards. One of the best was published by Dietmar Grünreich, president of the Federal Agency for Cartography and Geodesy in Frankfurt (Main): 'geoinformatics is a discipline concerned with theory of geospatial data modeling, their storage, management and processing as well as with development of geographical information systems and necessary information and communication technology.'"

Application

Geoinformatics can help tackle problems and tasks such as the following:

• the modeling and use of seismic data
• the construction and use of other geologically realistic 3-D models
• the production of high-quality paleogeographic maps
• the production of astrogeological 3D maps
• the measurement of Earth's gravity field
• the mitigation of hazards in volcanically active areas
• the planning and management of land use
• the reconstruction of architecture and archeological sites
• the creation of commercial maritime routes
• the management of natural resources

Informatics

Scientists practicing in the earth sciences increasingly rely on digital spatial data acquired from remotely sensed images, visualized and analyzed with geographical information systems (GIS). Other informatics tools include geospatial analysis and modeling software, geospatial databases, and wired and wireless networking technologies. As these types of systems and tools have become more readily available, a larger global initiative to use them for greater data integration and sharing has emerged. GEON, for example, is an open collaborative project for creating infrastructure for collecting 3D and 4D geospatial data. OneGeology is another global informatics initiative attempting to compile digital geological map data for all to use.

Further reading

• Sinha, A. Krishna, et al. (December 2010). "Geoinformatics: Transforming data to knowledge for geosciences" [1]. GSA Today 20 (12): 4–10. doi:10.1130/GSATG85A.1 [3].

External links

• GEON [4]
• International Cartographic Association [5] (ICA)
• International Society for Photogrammetry and Remote Sensing [6] (ISPRS)
• International Union of Geodesy and Geophysics [7] (IUGG)
• OneGeology [8]
• Open Geospatial Consortium [9] (OGC)


Notes

This article reuses a few elements from the Wikipedia article [10].

References

[1] http://www.geosociety.org/gsatoday/archive/20/12/article/i1052-5173-20-12-4.htm
[2] http://en.wikipedia.org/wiki/Digital_object_identifier
[3] http://dx.doi.org/10.1130%2FGSATG85A.1
[4] http://www.geongrid.org/
[5] http://www.icaci.org/
[6] http://www.isprs.org/
[7] http://www.iugg.org/
[8] http://www.onegeology.org/
[9] http://www.opengeospatial.org/
[10] http://en.wikipedia.org/wiki/Geoinformatics

Health informatics

Health informatics (also called health care informatics, healthcare informatics, medical informatics, nursing informatics, clinical informatics, or biomedical informatics) is a discipline at the intersection of information science, computer science, and health care. It deals with the resources, devices, and methods required to optimize the "collection, storage, retrieval, [and] communication ... of health-related data, information, and knowledge." Health informatics is applied to the areas of nursing, clinical care, dentistry, pharmacy, public health, occupational therapy, and biomedical research. Health informatics resources include not only computers but also clinical guidelines, formal medical terminologies, and information and communication systems. Early names for health informatics included medical information data processing, medical information science, medical informatics, medical computer science, and medical computing.

Health informatics helps manage, analyze, and integrate patient data from physician to specialist and beyond.

History

Worldwide use of technology in medicine began in the early 1950s with the rise of computers. In 1949, Gustav Wagner established the first professional organization for informatics in Germany. The prehistory, history, and future of medical information and health information technology have been discussed at length in the literature. Specialized university departments and informatics training programs began during the 1960s in France, Germany, Belgium, and The Netherlands. Medical informatics research units began to appear during the 1970s in Poland and in the U.S., with medical informatics conferences springing up as early as 1974. Since then, the development of high-quality health informatics research, education, and infrastructure has been a goal of the U.S. and the European Union. By the mid-2000s, work in the U.K. by the voluntary registration body the UK Council of Health Informatics Professions led to the creation of eight key constituencies within the domain of health informatics: information and communication technologies; health records; information management; knowledge management; health informatics service and project management; clinical informatics; education, training, and development; and research. Those constituencies — already based on U.K. National Health Service (NHS) standards — later found their way into the NHS' Health Informatics Career Framework in a slightly modified format. As of 2013[1], tens of datasets, publications, guidelines, specifications, meetings, conferences, and organizations around the world continue to shape what health informatics is today.

Health informatics in North America

Argentina

Since 1996, the International Medical Informatics Association's Latin America and the Caribbean regional group has sought to develop health informatics within the region, including Argentina's Asociación Argentina de Informática Médica (AAIM). Since 1997, the not-for-profit Buenos Aires Biomedical Informatics Group has represented the interests of a broad range of clinical and non-clinical professionals working within the health informatics sphere. The group strives to promote informatics technology and related content within the research and health administration spheres, especially those relating to the biomedical field.

Brazil

"In 1968 the Pan American Health Organization set up the Regional Library of Medicine and Health Sciences (BIREME) in the Paulista Medical School in São Paulo under an agreement with the Government of Brazil." The library also made possible access to the MEDLINE and MEDLARS systems, and it would eventually go on to become the "hub of the Latin American network of biomedical and health information." In 1986 the Brazilian Society of Health Informatics (Sociedade Brasileira de Informática em Saúde, or SBIS) was founded to better expand the use of informatics technology within the country. The same year saw the first Brazilian Congress of Health Informatics held, and the first Brazilian Journal of Health Informatics was published. Since 1996, the International Medical Informatics Association's Latin America and the Caribbean regional group has sought to develop health informatics within the region, including Brazil's SBIS.

Canada

Health informatics projects in Canada are implemented provincially, with different provinces creating different systems. A national, federally funded, not-for-profit organization called Canada Health Infoway was created in 2001 to foster the development and adoption of electronic health records across Canada. As of July 2013[1], there were 380 health informatics projects under way in Canadian hospitals, health-care facilities, pharmacies, and laboratories, with an investment value of $2.1 billion since the organization's inception. Provincial and territorial programs include the following:

• eHealth Ontario was created as an Ontario provincial government agency in September 2008. It has been plagued by delays, and its CEO was fired over a multi-million dollar contract scandal in 2009.
• Alberta Netcare Portal was created in 2006 by the Government of Alberta. The Netcare portal is used daily by thousands of clinicians. It provides access to demographic data, prescribed/dispensed drugs, known allergies/intolerances, immunizations, laboratory test results, diagnostic imaging reports, the diabetes registry, and other medical reports. Netcare interface capabilities are being included in electronic medical record products being funded by the provincial government.

United States

Even though the idea of using computers in medicine sprouted as technology advanced in the early twentieth century, it was not until the 1950s that informatics made a realistic impact in the United States. Robert Ledley led the charge in the 1950s with his early use of medical computation in his dental projects at the United States National Bureau of Standards. The National Library of Medicine began using the MEDLARS system by 1965 (with MEDLINE following in the early 1970s), and expert systems such as MYCIN and Internist-I were developed during the 1970s. Around this same time a flurry of activity occurred. At the University of Utah, Dr. Homer R. Warner, one of the fathers of medical informatics, was already offering graduate-level classes in medical computer applications. Meanwhile, Neil Pappalardo, Curtis Marble, and Robert Greenes were developing the Massachusetts General Hospital Utility Multi-Programming System (MUMPS) in Octo Barnett's Laboratory of Computer Science at Massachusetts General Hospital in Boston. Yet due to its advanced nature, fragmented use across multiple entities, and the inherent difficulty of extracting and analyzing data from the database, development of healthcare and laboratory systems on MUMPS was sporadic at best.

By the 1980s, however, the advent of Structured Query Language (SQL), relational database management systems (RDBMS), and Health Level 7 (HL7) allowed software developers to expand the functionality and interoperability of health informatics systems, including the application of business analytics and business intelligence techniques to clinical data. As of 2013[1], web-based and database-centric Internet applications of laboratory informatics software have further changed the way researchers and technicians interact with data, with web-driven data formatting technologies like Extensible Markup Language (XML) making interoperability of health and laboratory informatics software a much-needed reality. SaaS and cloud computing technologies have further changed how informatics systems are implemented in the U.S. and worldwide, while at the same time raising new questions about security and stability.
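The HL7 standards mentioned above define, among other things, a pipe-delimited message format (HL7 v2) for exchanging clinical results. The Python sketch below is purely illustrative: the message content is invented, and production systems rely on dedicated HL7 libraries rather than naive string splitting.

```python
# A toy illustration of HL7 v2 message structure: segments are separated by
# carriage returns and fields by "|". Message content is invented.
message = (
    "MSH|^~\\&|LIS|LAB|EHR|HOSP|20130401120000||ORU^R01|00001|P|2.3\r"
    "OBX|1|NM|GLU^Glucose||95|mg/dL|70-99|N|||F"
)

for segment in message.split("\r"):
    fields = segment.split("|")
    if fields[0] == "OBX":
        # field positions follow the HL7 v2 OBX segment definition
        print(f"{fields[3]}: {fields[5]} {fields[6]} (ref {fields[7]})")
```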

Health informatics in Europe

The European Union's Member States are committed to sharing their best practices and experiences to create a European eHealth Area, thereby improving access to and the quality of health care while stimulating growth in a promising new industrial sector. The associated European eHealth programs play a fundamental role in the European Union's strategy, and work on this initiative involves a collaborative approach among several parts of the Commission services. Additionally, the not-for-profit European Institute for Health Records (EuroRec) has promoted the use of high-quality electronic health record systems in the European Union since its foundation in late 2002.

epSOS (European Patients - Smart Open Services) represents another key European initiative to "build and evaluate a service infrastructure that demonstrates cross-border interoperability between electronic health record systems in Europe." Co-funded by the European Commission Competitiveness and Innovation Programme since 2008, the initiative (scheduled to finish on December 31, 2013) was devised with the vision of giving patients in Europe the opportunity to use cross-border electronic medical record services for healthcare-related activities in participating epSOS pilot countries.

In the United Kingdom

The U.K. health informatics community has long played a key role in international activity, joining Technical Committee Four (TC 4) of the International Federation of Information Processing in 1968, which eventually became the International Medical Informatics Association (IMIA) in 1979. In 1978, the Medical Specialist Group of the British Computer Society organized the first European Federation for Medical Informatics (EFMI) Medical Informatics Europe (MIE) conference in Cambridge. In 2002, the idea of a profession of health informatics across the U.K. was first implemented as the U.K. Council for Health Informatics Professions (UKCHIP), which has a formal Code of Professional Conduct and standards for expressing competences, used for entry, confirmation of fitness to practice, re-grading, and personal development. Consistent standards express the competences of health informatics professionals in both domain-specific and generic informatics professional areas; this consistency is intended to apply in operational care delivery organizations, academia, and among commercial service and solution providers. The broad history of health informatics in the U.K. has been captured in the 2008 book U.K. Health Computing: Recollections and Reflections by Glyn M. Hayes and Denise E. Barnett, which describes the early development of health informatics in the country as "unorganized and idiosyncratic."

England

In 2002 the National Health Service (NHS) in England contracted several vendors for a national health informatics system called the National Programme for IT ("NPfIT"). By 2010, however, the project was drastically behind schedule, forcing a wide consultation to be launched as part of a wider "Liberating the NHS" plan. "Following three reports on the National Programme by both the National Audit Office and this Committee, and a review by the Major Projects Authority, the Government announced in September 2011 that it would dismantle the National Programme but keep the component parts in place with separate management and accountability structures." The program was officially dismantled in September 2013, having been dubbed "one of the worst and most expensive contracting fiascos in the history of the public sector."

Scotland

In 1984, Scotland saw the implementation of the General Practice Administration System (GPASS), developed and controlled by NHS Scotland and provided free to all general practitioners in Scotland. However, an agreement was reached in 2008 to shut down the electronic system due to "a series of problems and critical reports." The system was formally shut down in August 2012, with all practices having moved to new systems called EMIS and INPS.

Health informatics in Asia and Oceania

In Asia, Australia, and New Zealand, the regional group called the Asia Pacific Association for Medical Informatics (APAMI) was established in 1993 and now consists of more than 15 member regions in the Asia Pacific region.

Australia

Founded in 2002, the Australasian College of Health Informatics (ACHI) is the professional association for health informatics in the Asia-Pacific region. It represents the interests of a broad range of clinical and non-clinical professionals working within the health informatics sphere through a commitment to quality, standards, and ethical practice. ACHI is a sponsor of the e-Journal for Health Informatics, an indexed and peer-reviewed professional journal. ACHI has also supported the Australian Health Informatics Education Council (AHIEC) since its founding in 2009.

Although there are a number of health informatics organizations in Australia, the Health Informatics Society of Australia (HISA) is regarded as the major umbrella group and is a member of the International Medical Informatics Association (IMIA). Nursing informaticians were the driving force behind the formation of HISA, which is now a company limited by guarantee of the members. The membership spans the informatics spectrum, from students to corporate affiliates. HISA has a number of branches (Queensland, New South Wales, Victoria, and Western Australia) as well as special interest groups such as nursing (NIA), pathology, aged and community care, industry, and medical imaging.

China

In Hong Kong, a computerized patient record system called the Clinical Management System (CMS) has been developed by the Hospital Authority since 1994. This system has been deployed at all the sites of the Authority (40 hospitals and 120 clinics) and is used by all 30,000 clinical staff on a daily basis, handling up to two million transactions daily. The comprehensive records of seven million patients are available online in the Electronic Patient Record (ePR), with data integrated from all sites. Since 2004, radiology image viewing has been added to the ePR, with radiography images from any HA site available as part of the ePR. The Hong Kong Hospital Authority has paid particular attention to the governance of clinical systems development, with input from hundreds of clinicians incorporated through a structured process. The health informatics section of the Hong Kong Hospital Authority maintains a close relationship with the information technology department and clinicians to develop healthcare systems for the organization, supporting the service of all public hospitals and clinics in the region.

The Hong Kong Society of Medical Informatics (HKSMI) was established in 1987 to promote the use of information technology in healthcare. The eHealth Consortium has been formed to bring together clinicians from both the private and public sectors, medical informatics professionals, and the IT industry to further promote IT in healthcare in Hong Kong.

New Zealand

Health informatics is taught at five New Zealand universities; the most mature and established program is at Otago, offered since the mid-1990s. Health Informatics New Zealand (HINZ) is the national organization that advocates for health informatics. HINZ organizes a conference every year and also publishes the online journal Healthcare Informatics Review Online.

Health informatics in the Middle East

Saudi Arabia

The Saudi Association for Health Information (SAHI) was established in 2006, working under the direct supervision of King Saud University for Health Sciences, to engage in public activities, develop theoretical and applicable knowledge, and provide scientific and applied studies.

Regulation and standards

The international standards on the subject are covered by ICS 35.240.80, of which ISO 27799:2008 is one of the core components.

In the United States

In 2004 the U.S. Department of Health and Human Services (HHS) formed the Office of the National Coordinator for Health Information Technology (ONCHIT). The mission of this office is the widespread adoption of interoperable electronic health records (EHRs) in the U.S. within 10 years.

The Certification Commission for Healthcare Information Technology (CCHIT), a private nonprofit group, was funded in 2005 by the U.S. Department of Health and Human Services to develop a set of standards for electronic health records (EHR) and supporting networks, and to certify vendors who meet them. In July 2006, CCHIT released its first list of 22 certified ambulatory EHR products, in two different announcements.

Clinical informatics

While health informatics and clinical informatics are often considered the same, some make a distinction between the two. The American Medical Informatics Association, for example, states clinical informatics is concerned with the use of information in health care by clinicians. By extension, clinical informaticians analyze, design, implement, and evaluate information and communication systems that enhance individual and population health outcomes, improve patient care, and strengthen the clinician-patient relationship. Clinical informaticians use their knowledge of patient care combined with their understanding of informatics concepts, methods, and health informatics tools to:

• assess information and knowledge needs of health care professionals and patients
• characterize, evaluate, and refine clinical processes
• develop, implement, and refine clinical decision support systems
• lead or participate in the procurement, customization, development, implementation, management, evaluation, and continuous improvement of clinical information systems

Clinicians collaborate with other health care and information technology professionals to develop health informatics tools which promote patient care that is safe, efficient, effective, timely, patient-centered, and equitable.

Further reading

• De Moor, Georges J. E.; McDonald, Clement J.; van Goor, J. M. Noothoven, ed. (1993). Progress in Standardization in Health Care Informatics [2]. IOS Press. pp. 215. ISBN 9051991142.
• Hovenga, Evelyn J. S., ed. (2010). Health Informatics: An Overview [3]. IOS Press. ISBN 1607500922.
• Hoyt, Robert E.; Bailey, Nora; Yoshihashi, Ann, ed. (2012). Health Informatics: Practical Guide For Healthcare And Information Technology Professionals [4]. Lulu Enterprises Incorporated. pp. 492. ISBN 1105437558.
• Smith, Jack (1999). Health Management Information Systems: A Handbook for Decision Makers [5] (2nd ed.). McGraw-Hill International. pp. 348. ISBN 0335205658.

Notes

Some elements of this article are reused from the Wikipedia article [6].

References

[1] http://www.limswiki.org/index.php?title=Health_informatics&action=edit
[2] http://books.google.com/books?id=DHzOJaNaOYkC
[3] http://books.google.com/books?id=eckD3fSrPagC
[4] http://books.google.com/books?id=6bqruAAACAAJ
[5] http://books.google.com/books?id=8YjlAAAAQBAJ
[6] http://en.wikipedia.org/wiki/Health_informatics


4. All about LIMS and LIS

Laboratory information management system

Sometimes referred to as a laboratory information system (LIS) or laboratory management system (LMS), a laboratory information management system (LIMS) is a software-based laboratory and information management system that offers a set of key features supporting a modern laboratory's operations. Those key features include — but are not limited to — workflow and data tracking support, flexible architecture, and smart data exchange interfaces, which fully "support its use in regulated environments." The features and uses of a LIMS have evolved over the years from simple sample tracking to an enterprise resource planning tool that manages multiple aspects of laboratory informatics.

Laboratories around the world depend on a LIMS to manage data, assign rights, manage inventory, and more.

Due to the rapid pace at which laboratories and their data management needs shift, the definition of LIMS has become somewhat controversial. As the needs of the modern laboratory vary widely from lab to lab, what is needed from a laboratory information management system also shifts. The end result: the definition of a LIMS will shift based on who you ask and what their vision of the modern lab is. Dr. Alan McLelland of the Institute of Biochemistry, Royal Infirmary, Glasgow highlighted this problem in the late 1990s by explaining how a LIMS is perceived by an analyst, a laboratory manager, an information systems manager, and an accountant, "all of them correct, but each of them limited by the users' own perceptions."

Historically the LIMS, LIS, and process development execution system (PDES) have all performed similar functions. The term "LIMS" has tended to be used to reference informatics systems targeted for environmental, research, or commercial analysis such as pharmaceutical or petrochemical work. "LIS" has tended to be used to reference laboratory informatics systems in the forensics and clinical markets, which often required special case management tools. The term "PDES" has generally applied to a wider scope, including, for example, virtual manufacturing techniques, while not necessarily integrating with laboratory equipment. In recent times LIMS functionality has spread even farther beyond its original purpose of sample management. Assay data management, data mining, data analysis, and electronic laboratory notebook (ELN) integration are all features that have been added to many LIMS, enabling the realization of translational medicine completely within a single software solution. Additionally, the distinction between a LIMS and a LIS has blurred, as many LIMS now also fully support comprehensive case-centric clinical data.


History of LIMS

Up until the late 1970s, the management of laboratory samples and the associated analysis and reporting were time-consuming manual processes often riddled with transcription errors. This gave some organizations impetus to streamline the collection of data and how it was reported. Custom in-house solutions were developed by a few individual laboratories, while some enterprising entities at the same time sought to develop a more commercial reporting solution in the form of special instrument-based systems. In 1982 the first generation of LIMS was introduced in the form of a single centralized minicomputer, which offered laboratories the first opportunity to utilize automated reporting tools. As interest in these early LIMS grew, industry leaders like Gerst Gibbon of the Federal Energy Technology Centre in Pittsburgh began planting the seeds through LIMS-related conferences. By 1988 the second-generation commercial offerings were tapping into relational databases to expand LIMS into more application-specific territory, and International LIMS Conferences were in full swing. As personal computers became more powerful and prominent, a third generation of LIMS emerged in the early 1990s. These new LIMS took advantage of the developing client/server architecture, allowing laboratories to implement better data processing and exchanges. By 1995 the client/server tools had developed to the point of allowing processing of data anywhere on the network. Web-enabled LIMS were introduced the following year, enabling researchers to extend operations outside the confines of the laboratory. From 1996 to 2002 additional functionality was included in LIMS, from wireless networking capabilities and georeferencing of samples to the adoption of XML standards and the development of Internet purchasing. As of 2012, some LIMS have added further characteristics that continue to shape how a LIMS is defined, including clinical functionality, electronic laboratory notebook (ELN) functionality, and a rise in the software as a service (SaaS) distribution model.

Technology

Laboratory information management operations

The LIMS is an evolving concept, with new features and functionality being added often. As laboratory demands change and technological progress continues, the functions of a LIMS will likely also change. Despite these changes, a LIMS tends to have a base set of functionality that defines it. That functionality can roughly be divided into five laboratory processing phases, with numerous software functions falling under each:

• the reception and log in of a sample and its associated customer data
• the assignment, scheduling, and tracking of the sample and the associated analytical workload
• the processing and quality control associated with the sample and the utilized equipment and inventory
• the storage of data associated with the sample analysis
• the inspection, approval, and compilation of the sample data for reporting and/or further analysis

There are several pieces of core functionality associated with these laboratory processing phases that tend to appear in most LIMS:


Sample management

The core function of a LIMS has traditionally been the management of samples. This typically is initiated when a sample is received in the laboratory, at which point the sample will be registered in the LIMS. This registration process may involve accessioning the sample and producing barcodes to affix to the sample container. Various other parameters, such as clinical or phenotypic information corresponding with the sample, are also often recorded. The LIMS then tracks chain of custody as well as sample location. Location tracking usually involves assigning the sample to a particular freezer location, often down to the granular level of shelf, rack, box, row, and column. Other event tracking, such as the freeze and thaw cycles a sample undergoes in the laboratory, may be required.

A lab worker matches blood samples to documents. With a LIMS, this sort of sample management is made more efficient.
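A minimal sketch of the kind of record behind such sample management, written in Python, appears below. The field names and the freezer/shelf/rack/box/row/column granularity follow the description above, but the model itself is illustrative and does not represent any particular vendor's schema.

```python
# An illustrative model of a LIMS sample record with location tracking and a
# simple chain-of-custody log; a sketch, not any vendor's actual schema.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class StorageLocation:
    freezer: str
    shelf: int
    rack: int
    box: int
    row: int
    column: int

@dataclass
class Sample:
    sample_id: str                        # often also encoded as a barcode
    received: datetime
    location: Optional[StorageLocation] = None
    custody_log: List[str] = field(default_factory=list)  # chain of custody

    def move_to(self, location: StorageLocation, operator: str) -> None:
        """Relocate the sample and record the event for the audit trail."""
        self.location = location
        self.custody_log.append(
            f"{datetime.now().isoformat()} moved by {operator} to {location}"
        )

# Example: register a sample and place it in a freezer location.
s = Sample("20110512_0001", datetime.now())
s.move_to(StorageLocation("FRZ-2", shelf=3, rack=1, box=4, row=2, column=7), "jdoe")
print(s.custody_log)
```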

Modern LIMS have implemented extensive configurability, as each laboratory's needs for tracking additional data points can vary widely. LIMS vendors cannot typically make assumptions about what these data tracking needs are, and therefore vendors must create LIMS that are adaptable to individual environments. LIMS users may also have regulatory concerns to comply with, such as CLIA, HIPAA, GLP, and FDA specifications, affecting certain aspects of sample management in a LIMS solution. One key to compliance with many of these standards is audit logging of all changes to LIMS data, and in some cases a full electronic signature system is required for rigorous tracking of field-level changes to LIMS data.

Instrument and application integration

Modern LIMS offer an increasing amount of integration with laboratory instruments and applications. A LIMS may create control files that are "fed" into an instrument and direct its operation on some physical item such as a sample tube or sample plate. The LIMS may then import instrument results files to extract data for quality control assessment of the operation on the sample. Access to the instrument data can sometimes be regulated based on chain of custody assignments or other security features if need be.

Modern LIMS products now also allow for the import and management of raw assay data results. Modern targeted assays such as qPCR and deep sequencing can produce tens of thousands of data points per sample. Furthermore, in the case of drug and diagnostic development, as many as 12 or more assays may be run for each sample. In order to track this data, a LIMS solution needs to be adaptable to many different assay formats at both the data layer and import creation layer, while maintaining a high level of overall performance. Some LIMS products address this by simply attaching assay data as BLOBs to samples, but this limits the utility of that data in data mining and downstream analysis.

Electronic data exchange

The exponentially growing volume of data created in laboratories, coupled with increased business demands and focus on profitability, has pushed LIMS vendors to increase attention to how their LIMS handles electronic data exchanges. Attention must be paid to how an instrument's input and output data is managed, how remote sample collection data is imported and exported, and how mobile technology integrates with the LIMS. The successful transfer of data files in Microsoft Excel and other formats, as well as the import and export of data to Oracle, SQL, and Microsoft Access databases, is a pivotal aspect of the modern LIMS. In fact, the transition "from proprietary databases to standardized database management systems such as Oracle ... and SQL" has arguably had one of the biggest impacts on how data is managed and exchanged in laboratories.

Additional functions

Aside from the key functions of sample management, instrument and application integration, and electronic data exchange, there are numerous additional operations that can be managed in a LIMS, including but not limited to:

• audit management: fully track and maintain an audit trail
• barcode handling: assign one or more data points to a barcode format; read and extract information from a barcode
• chain of custody: assign roles and groups that dictate access to specific data records and who is managing them
• compliance: follow regulatory standards that affect the laboratory
• customer relationship management: handle the demographic information and communications for associated clients
• document management: process and convert data to certain formats; manage how documents are distributed and accessed
• instrument calibration and maintenance: schedule important maintenance and calibration of lab instruments and keep detailed records of such activities
• inventory and equipment management: measure and record inventories of vital supplies and laboratory equipment
• manual and electronic data entry: provide fast and reliable interfaces for data to be entered by a human or electronic component
• method management: provide one location for all laboratory process and procedure (P&P) and methodology to be housed and managed
• personnel and workload management: organize work schedules, workload assignments, employee demographic information, and financial information
• quality assurance and control: gauge and control sample quality, data entry standards, and workflow
• reports: create and schedule reports in a specific format; schedule and distribute reports to designated parties
• time tracking: calculate and maintain processing and handling times on chemical reactions, workflows, and more
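As a concrete, hypothetical illustration of the instrument integration and electronic data exchange functions described above, the Python sketch below imports an instrument results file exported as CSV and groups the parsed results by sample ID. The column names are invented for this sketch; real instrument export formats vary by vendor.

```python
# Hypothetical instrument results import: parse a CSV export and group the
# rows by sample ID so they can be loaded into a LIMS. Columns are invented.
import csv

def import_results(path: str) -> dict:
    """Group parsed result rows by sample ID."""
    results: dict = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            results.setdefault(row["sample_id"], []).append(
                {"analyte": row["analyte"],
                 "result": float(row["result"]),
                 "units": row["units"]}
            )
    return results
```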


LIMS architecture and delivery methods

A LIMS has utilized many architectures and distribution models over the years. As technology has changed, how a LIMS is installed, managed, and utilized has also changed with it. The following represents architectures which have been utilized at one point or another:

Thick-client

A thick-client LIMS is a more traditional client/server architecture, with some of the system residing on the computer or workstation of the user (the client) and the rest on the server. The LIMS software is installed on the client computer, which does all of the data processing. Later it passes information to the server, which has the primary purpose of data storage. Most changes, upgrades, and other modifications happen on the client side. This was one of the first architectures implemented into a LIMS, having the advantage of providing higher processing speeds (because processing is done on the client and not the server) and slightly more security (as access to the server data is limited only to those with client software). Additionally, thick-client systems have also provided more interactivity and customization, though often at a greater learning curve. The disadvantages of client-side LIMS include the need for more robust client computers and more time-consuming upgrades, as well as a lack of base functionality through a web browser. The thick-client LIMS can become web-enabled through an add-on component.

Thin-client

A thin-client LIMS is a more modern architecture which offers full application functionality accessed through a device's web browser. The actual LIMS software resides on a server (host), which feeds and processes information without saving it to the user's hard disk. Any necessary changes, upgrades, and other modifications are handled by the entity hosting the server-side LIMS software, meaning all end-users see all changes made. To this end, a true thin-client LIMS will leave no "footprint" on the client's computer, and only the integrity of the web browser need be maintained by the user. The advantages of this system include significantly lower cost of ownership and fewer network and client-side maintenance expenses. However, this architecture has the disadvantage of requiring real-time server access, a need for increased network throughput, and slightly less functionality. A sort of hybrid architecture that incorporates the features of thin-client browser usage with a thick-client installation exists in the form of a web-based LIMS.

Some LIMS vendors are beginning to rent hosted, thin-client solutions as "software as a service" (SaaS). These solutions tend to be less configurable than on-premise solutions and are therefore considered for less demanding implementations such as laboratories with few users and limited sample processing volumes. Thin-client LIMS are also commonly sold with a maintenance, warranty, and support (MSW) agreement. Pricing levels are typically based on a percentage of the license fee, with a standard level of service for 10 concurrent users being approximately 10 hours of support and additional customer service, at a roughly $200 per hour rate. Though some may choose to opt out of an MSW after the first year, it's often more economical to continue the plan in order to receive updates to the LIMS, giving it a longer life span in the laboratory.

Web-enabled

A web-enabled LIMS architecture is essentially a thick-client architecture with an added web browser component. In this setup, the client-side software has additional functionality that allows users to interface with the software through their device's browser. This functionality is typically limited only to certain functions of the web client. The primary advantage of a web-enabled LIMS is that the end-user can access data both on the client side and the server side of the configuration. As in a thick-client architecture, updates in the software must be propagated to every client machine. However, the added disadvantages of requiring always-on access to the host server and the need for cross-platform functionality mean that additional overhead costs may arise.


Web-based

Arguably one of the most confusing architectures, the web-based LIMS architecture is a hybrid of the thick- and thin-client architectures. While much of the client-side work is done through a web browser, the LIMS also requires the additional support of Microsoft's .NET Framework technology installed on the client device. The end result is a process that appears to the end-user as a web application in a Microsoft-compatible web browser, but which runs thick-client-like processing in the background. In this case, the web-based architecture has the advantage of providing more functionality through a friendlier web interface. The disadvantages of this setup include greater sunk costs in system administration and support for Internet Explorer and .NET technologies, as well as reduced functionality on mobile platforms.

LIMS configurability

LIMS implementations are notorious for often being lengthy and costly. This is due in part to the diversity of requirements within each lab, but also to the inflexible nature of LIMS products for adapting to these widely varying requirements. Newer LIMS solutions are beginning to emerge that take advantage of modern techniques in software design that are inherently more configurable and adaptable — particularly at the data layer — than prior solutions. This means not only that implementations are much faster, but also that the costs are lower and the risk of obsolescence is minimized.

The distinction between a LIMS and a LIS

Up until recently, LIMS and LIS have exhibited a few key differences, making them noticeably separate entities:

1. A LIMS traditionally has been designed to process and report data related to batches of samples from biology labs, water treatment facilities, drug trials, and other entities that handle complex batches of data. A LIS has been designed primarily for processing and reporting data related to individual patients in a clinical setting.
2. A LIMS needs to satisfy good manufacturing practice (GMP) and meet the reporting and audit needs of the U.S. Food and Drug Administration and research scientists in many different industries. A LIS, however, must satisfy the reporting and auditing needs of hospital accreditation agencies, HIPAA, and other clinical medical practitioners.
3. A LIMS is most competitive in group-centric settings (dealing with "batches" and "samples") that often deal with mostly anonymous research-specific laboratory data, whereas a LIS is usually most competitive in patient-centric settings (dealing with "subjects" and "specimens") and clinical labs.

However, as of 2012 these distinctions have faded somewhat as some LIMS vendors have adopted the case-centric information management normally reserved for a LIS, blurring the lines between the two components further. Thermo Scientific's Clinical LIMS is a recent example of this merger of LIMS with LIS, with Dave Champagne, informatics vice president and general manager, stating: "Routine molecular diagnostics requires a convergence of the up-to-now separate systems that have managed work in the lab (the LIMS) and the clinic (the LIS). The industry is asking for, and the science is requiring, a single lab-centric solution that delivers patient-centric results." STARLIMS Corporation's STARLIMS product is another recent example of this LIMS/LIS merger. With the distinction between the two entities becoming less clear, discussions within the laboratory informatics community have raised the question of whether or not the two entities should be considered the same.


Standards covered by LIMS

A LIMS covers standards such as:

• Title 21 CFR Part 11 from the Food and Drug Administration (United States)
• ISO 17025
• ISO 15189
• Good laboratory practice

LIMS vendors

See the LIMS vendor page for a list of LIMS vendors past and present.

Further reading

• Gibbon, G.A. (1996). "A brief history of LIMS" [1] (PDF). Laboratory Automation and Information Management 32 (1): 1–5. doi:10.1016/1381-141X(95)00024-K [2].
• Wood, Simon (September 2007). "Comprehensive Laboratory Informatics: A Multilayer Approach" [3] (PDF). American Laboratory. p. 1.

References

[1] http://www.sciencedirect.com/science/article/pii/1381141X9500024K
[2] http://dx.doi.org/10.1016%2F1381-141X%2895%2900024-K
[3] http://www.starlims.com/Intl/AL-Wood-Reprint-9-07.pdf

LIMS feature

You can find a listing of all LIMS vendors — and by extension, the features their products offer — on the LIMS vendor page.

A LIMS feature is one or more pieces of functionality that appear within a laboratory information management system (LIMS). The LIMS is an evolving concept, with new features and abilities being introduced every year. As laboratory demands change and technological progress continues, the functions of a LIMS will also change. Yet like the automobile, the LIMS tends to have a base set of functionality that defines it. That functionality can roughly be divided into five laboratory processing phases, with numerous software functions falling under each:

• the reception and log in of a sample and its associated customer data
• the assignment, scheduling, and tracking of the sample and the associated analytical workload
• the processing and quality control associated with the sample and the utilized equipment and inventory
• the storage of data associated with the sample analysis
• the inspection, approval, and compilation of the sample data for reporting and/or further analysis

Of course, there are LIMS features that are difficult to categorize under any of these phases. Such features often contribute to the entire LIMS and how it's utilized. For example, multilingual support appears in LIMS like Assaynet Inc.'s LIMS2010 and Two Fold Software's Qualoupe LIMS, allowing users to interact with the LIMS in more than one language. Some functionality may also overlap several laboratory phases, making it difficult to firmly classify.

The features described below come from an analysis of freely available LIMS product information on vendor websites. An attempt was made to discover the features most utilized in vendors' LIMS products and collect information on those features for each LIMS. Not every possible feature is referenced here; some LIMS products fill specific niches, utilizing unique functionality to solve a specific problem. That said, keep in mind the categorization of features below is very loose. It may be viable to argue a feature belongs under a different section or multiple sections. For the purposes of organizing this information in an uncomplicated manner, however, some liberty has been taken in the categorizing of features.

Sample, inventory, and data management

Sample login and management

Sample accessioning and management is one of the core functions a modern LIMS is tasked with, whether it's in a manufacturing, water treatment, or pharmaceutical laboratory. As such, researchers who work in these types of labs are unable to complete their experiment-based goals without an effective method of managing samples. The process of sample management for experiments includes, but is not limited to:

• storing related sample information, including aliquot numbers, dates, and external links
• setting user alerts for sample status
• creating and documenting viewable sample container schemas with name and status
• assigning sample access rights
• assigning custom sample ID numbers based on a specification (see the sketch following this list)

Additional functionality that could potentially fall under this feature includes:

• utilizing a unique ID system
• barcoding of samples
• defining sample points and series
• creating data associations for samples, such as pedigree for sample/aliquot relationships or relationships based on experiment
• issuing sample receipts
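As a hypothetical illustration of "assigning custom sample ID numbers based on a specification," the short Python sketch below generates IDs from a date prefix plus a zero-padded sequence number. The pattern is invented; real LIMS typically let administrators configure such specifications.

```python
# An invented sample ID specification: <YYYYMMDD>_<zero-padded daily sequence>.
from datetime import date
from itertools import count

_counter = count(1)  # in-memory sequence; a real system would persist this

def next_sample_id(received: date) -> str:
    return f"{received.strftime('%Y%m%d')}_{next(_counter):04d}"

print(next_sample_id(date(2011, 5, 12)))  # e.g. "20110512_0001"
```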

Sample tracking

For most laboratory personnel, knowing that a sample has arrived at the lab isn't good enough; they need to know where it's located and what is being done with it. Enter the sample tracking feature. Without it, many problems arise. In the forensic world, for example, many samples are linked to a criminal investigation. In this case, misidentification, contamination, or duplication can become significant issues: a lost sample is essentially missing evidence, while a duplicated sample can be rendered useless as evidence. After sample reception and its initial handling procedures, many LIMS can then track sample location as well as chain of custody. Location tracking usually involves assigning the sample to a particular freezer location, often down to the granular level of shelf, rack, box, row, and column. The process of tracking a sample has become more streamlined with increasing support of 2-D barcode technology. While handwritten labels were once the norm, barcode support in a LIMS can now "tie together a vast amount of information, clearly relating each sample to a specific case." Other event tracking, such as the freeze and thaw cycles a sample undergoes in the laboratory, may also be required. As each laboratory's needs for tracking additional data points can vary widely, many modern LIMS have implemented extensive configurability to compensate for varying environments. The functionality of sample tracking strongly ties into the audit trail and chain of custody features of a LIMS.

Where's sample 20110512_122GJH? Sample tracking functionality will let you know which lab oven it's in.

Sample and result batching

What is batching? The United States Environmental Protection Agency (EPA) defines a batch as "a group of samples which behave similarly with respect to the sampling or testing procedures being employed and which are processed as a unit." This definition can be applied to many laboratories which handle large quantities of samples for some form of analysis or processing. A LIMS that has the ability to check in, link, and track groups of samples across one or multiple facilities is valuable to such laboratories. Additionally, batching the analysis results of multiple samples or groups of samples gives laboratories more flexibility with how they manage their data. Batching also offers the benefit of mimicking the production groups of samples while also capturing quality control data for the entire group.
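To make the batching idea concrete, the Python sketch below groups samples that share a test method into fixed-size processing batches. The method names and batch-size limit are invented for illustration.

```python
# Group samples by shared test method, then split each group into batches
# no larger than BATCH_SIZE. Sample IDs and methods are invented.
from itertools import groupby

samples = [("S1", "pH"), ("S2", "pH"), ("S3", "metals"), ("S4", "pH")]
BATCH_SIZE = 2

batches = []
for method, group in groupby(sorted(samples, key=lambda s: s[1]), key=lambda s: s[1]):
    ids = [sid for sid, _ in group]
    batches += [(method, ids[i:i + BATCH_SIZE]) for i in range(0, len(ids), BATCH_SIZE)]

print(batches)  # [('metals', ['S3']), ('pH', ['S1', 'S2']), ('pH', ['S4'])]
```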

Task and event scheduling

Within the context of a LIMS, the ability to schedule a task or event is a natural extension of how work was done in a laboratory before the advent of data management systems. Workloads are assigned to technicians, maintenance schedules are created and followed, and research deadlines must be observed. While these tasks have in the past been performed without LIMS, a modern data management system can now optimize those tasks and provide additional scheduling functionality to streamline the operation of a lab. Autoscribe Ltd., for example, offers a scheduling module for its LIMS that allows users to automatically schedule multiple jobs, data backups, alarms, and reports. Some LIMS like LabWare, Inc.'s LabWare LIMS offer multiple types of schedulers that match to the particular functions of a research laboratory. Additional functionality within this feature includes the ability to configure automated assignments of analysis requests, establish recurring events, and in most cases, create printable schedules. Examples of tasks and events that can feasibly be scheduled in a LIMS include:

• registration of received samples into the system
• production of reports
• creation and sending of e-mails and alerts
• maintenance of equipment
• assignment of workloads to personnel

Option for manual result entry

While many LIMS vendors tout the ability of their product to automate the entry of results into the LIMS database, the need for manual data entry of analysis results still exists. This feature is important to laboratories obtaining analysis results from multiple sources, including non-digital paper-based results and instruments that can't be connected to the LIMS. Additional functionality associated with this feature includes a customizable spell-check dictionary and the ability to add comments, notes, and narratives to almost anything in the LIMS.


Multiple data viewing methods

Laboratories produce data, and LIMS exist to help manage that data. Additionally, even before the existence of LIMS, scientists have had a corresponding need for visually representing data. Today a LIMS can not only collect and analyze data from samples, but it can also represent that data in reports, graphs, gradients, and spreadsheets. Depending on the LIMS, more than one way to visually represent the data may exist. Some laboratory information management systems take a very specialized approach to data views. For example, Biomatters Ltd.'s Geneious and Geneious Pro offer multiple methods of viewing complicated sequence analysis data, including 3-D structuring and representations of plasmid vectors.

Data and trend analysis

Sample experimentation and analysis plays an important part in laboratory informatics, helping laboratories make better sense of their experiments and reach valuable conclusions about them. While this important phase of laboratory work has often been done externally from the LIMS, it's now more common to see basic analysis tools being included. Some LIMS allow users to analyze sample data directly from the software. Such tools allow raw data to be imported directly to the LIMS, which then can store, process, and report information about it. Additionally, calculations and functions used in the analysis are typically definable and editable for further flexibility. As with the feature "multiple data viewing methods," data and trend analysis is also increasingly important in laboratories that have very specialized data management needs. While even in 2009 genetic scientists in large- and medium-sized sequencing and core centers were voicing concerns about "a lack of adequate vendor-supported software and laboratory information management systems (LIMS)," today data management options like the previously mentioned Geneious Pro are starting to emerge, offering the ability to perform specialized analytical tasks for the sequencing industry. As sample experimentation and data analysis are important parts of most if not all laboratories, such functionality — which has often come in the form of a separate application or analysis device — will likely continue to merge into LIMS and other data management solutions.

Data and equipment sharing

Aside from data storage and sample registration, a modern LIMS' major contribution to the laboratory is aiding in the sharing of experiment results, reports, and other data types with those who need it most. Rather than pieces of information becoming misplaced or forgotten in laboratory notebooks, the LIMS makes it easier to share sample test results and increase the efficiency of collaboration inside and outside the laboratory. Yet data is more than just sample test results; it also can come in the form of charts, reports, policy and procedure, and other documents. Additionally, the need for controlling who has access to those types of data is also an important consideration. As such, this feature is at least partially tied to other features like document management and configurable security.


Customizable fields and/or interface

As thorough as some user interface (UI) developers may be in adding relevant fields and interface options for LIMS end users, there are at times options that are either omitted or unanticipated. This has traditionally required the end user to contact the vendor and ask if the needed option(s) can be added in the next release. However, some modern LIMS vendors have responded instead by adding functionality that gives end users and/or LIMS administrators more control over the user interface. Aspects of the LIMS' user interface that are becoming more customizable by the end user include:

• system nomenclature
• equations used in calculations
• data and universal fields
• appearance of the interface and/or menus
• primary system language
• the LIMS source code, especially if in a non-proprietary format

Query capability

As was the case before the advent of databases and electronic data management solutions, today researchers must search through sample results, experiment notes, and other types of data to better draw conclusions about their research. Whereas this used to mean browsing through laboratory notebooks, Excel spreadsheets, or Access databases, now powerful query tools exist within data management tools like a LIMS. Not only can data be searched for based on name, number, or vendor, but LIMSs like eBioSys' eLab and Mountain States Consulting's MSC-LIMS allow for queries of attached metadata like user ID, project number, task number, sample type, location, and collection date. Finally, as LIMS continue to include both sample management and experimental data management functionality, queries become more powerful in general, as sample and experiment can now be matched together in one database. Query functionality often includes the ability to:

• search both transactional data and archived data tables
• search multiple databases via an application programming interface (API) or open database connectivity (ODBC) connection
• filter and sort data
• create ad-hoc queries

Advanced query tools allow researchers to better complete project objectives.
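Under the hood, most such queries reduce to parameterized database statements. Below is a hedged sketch using Python's standard sqlite3 module; the table schema and values are invented for illustration and do not reflect any particular vendor's data model.

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("""CREATE TABLE samples
                   (sample_id TEXT, project TEXT, sample_type TEXT,
                    location TEXT, collected TEXT)""")
    con.execute("INSERT INTO samples VALUES ('S-001', 'P42', 'water', 'F2', '2015-03-01')")

    # Filter and sort on attached metadata, as a LIMS query screen might.
    rows = con.execute(
        "SELECT sample_id, location FROM samples "
        "WHERE project = ? AND sample_type = ? ORDER BY collected",
        ("P42", "water")).fetchall()
    print(rows)  # [('S-001', 'F2')]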

Import data

Data can originate from numerous places in the laboratory. The ability to import that data into a LIMS can be beneficial, especially when an instrument can't be connected or an external client provides a data feed independent of the LIMS. Some LIMS like Bridge-Soft's QMS even allow users to cross-reference laboratory nomenclature from received data sources with the recipient's. And of course instrument interfacing allows for even more importation options. Additional data validation procedures may be applied to the imported data to guarantee information homogeneity. Additionally, some LIMS may allow for the import and integration of non-normalized legacy data tables with LIMS data tables into a single database.
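Cross-referencing nomenclature often amounts to mapping an external feed's field names onto the LIMS' own. The following sketch assumes a hypothetical mapping table; all field names are invented.

    # External column name -> LIMS field name (illustrative mapping)
    FIELD_MAP = {"SampID": "sample_id", "Anlyt": "analyte", "Coll_Dt": "collection_date"}

    def import_row(external_row: dict) -> dict:
        row = {FIELD_MAP.get(k, k): v for k, v in external_row.items()}
        # Minimal validation step toward information homogeneity.
        if "sample_id" not in row:
            raise ValueError("imported row lacks a sample identifier")
        return row

    print(import_row({"SampID": "S-001", "Anlyt": "lead", "Coll_Dt": "2015-03-01"}))
    # {'sample_id': 'S-001', 'analyte': 'lead', 'collection_date': '2015-03-01'}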


Internal file or data linking

This feature allows LIMS users to link together reports, protocols, sample results, and more, providing greater contextual clarity to projects. Examples include:

• linking a sample batch to a test or sample preparation methodology
• linking a test process to a particular customer
• linking a report to a sample batch
• linking a group of test results to a raw data file
• linking an image to a work order
• linking all lab results with the correct reporting test method

External file or data linking

This feature allows LIMS users to link together data and files in the LIMS with data, files, and customers outside the scope of the LIMS. Examples include:

• linking data files from chromatography equipment to synthesis data
• linking equipment ID with an external annotation database
• linking external standard operating procedure documents with an internal test specification

ELN support or integration

As a software replacement for more traditional paper laboratory notebooks, the electronic laboratory notebook (ELN) has been important to laboratory functions. Yet the lines between ELN and LIMS began to blur in the 2000s, with both types of software incorporating features from the other. The result today is that some LIMS either include traditional ELN functionality or link physical laboratory notebook references to data in the LIMS.

Export to MS Excel

While Microsoft Excel has long been used within the laboratory setting, a slow shift towards relational databases and LIMS occurred in the late 1990s and early 2000s. Additional concerns with the difficulties of Excel's validation and compliance with FDA 21 CFR Part 11 and other regulations have led many labs to turn to data management solutions that are easier to validate. Nevertheless, laboratories continue to use Excel in some fashion, and thus Excel integration or data exportation in Excel format is a real need for LIMS customers. LIMS with this feature allow raw, processed, or imported data to be exported in the Excel format for further analysis and dissemination.

Raw data management

While not described as a feature on most LIMS vendor websites, a few indicate that their LIMS is capable of managing (import, export, editing, etc.) data in its raw format for future analysis.

Data warehouse

A LIMS' data warehouse serves the important function of storing, extracting, and managing the data that laboratories produce for the purposes of analysis, reporting, and process management, typically separate from the primary storage database. Data warehouses also offer the benefit of speeding up queries, making queries and data mining more user-friendly, and smoothing out data gaps.


Deadline control

Deadline control is functionality within a LIMS that allows users to manage and be notified of events that occur within the laboratory. With this functionality users can also be notified of upcoming deadlines on anything from sample analysis to license renewal. Note that this functionality may also feasibly fall under the task and event scheduling or alarms features; as only a few vendors advertise deadline control as a distinct feature, most appear to consider it part of scheduling or alarms.

Production control

There are many types of businesses that produce goods, and in most cases there is a research laboratory involved at some point in the process. This is especially true in the pharmaceutical and chemical industries, where production measurements such as yield, volume, activity, and impurity are vital. As LIMSs have already recorded such information during tests and analysis, the addition of production control functionality seems natural. Some LIMS take a very active approach to this. For example, 2nd Sight Solutions' OhNo! features production control as major functionality for the synthesis of radiopharmaceuticals. Other LIMS may have less pronounced production functionality, while still offering the ability to track the production process in and out of the lab. And yet other LIMSs like dialog's diaLIMS offer robust production-based functionality, but as a module or add-on to the base LIMS software. The types of functionality that may fall under this feature include:

• recipe management
• consumable tracking
• batch traceability
• production planning
• enterprise resource planning

Project and/or task management

Project and task management within a LIMS typically involves the scheduling of tasks to workers and organizing associated tasks into a more cohesive unit for better tracking and management. While the functionality of task and event scheduling can also be found in project and task management, many LIMS include functionality beyond scheduling that warrants the addition of the project and/or task management feature. This functionality includes:

• job allocation and rescheduling
• instrument workload tracking
• time tracking
• pending workload verification
• priority setting
• project-based workflow management
• sample, batch, and document linking
• work list sharing
• recurring event management


Inventory management

Laboratories use a wide array of inventory, from reagents to glassware, from radiopharmaceuticals to laboratory baths. With that comes the need to know how much of each item is on hand and how frequently it is used. For this, most LIMS products now include some sort of inventory management functionality, allowing users to:

• register origin and demographics of incoming materials
• track used and in-use items via barcodes
• track inventory reduction based on usage and shipping out of the lab
• create alerts for when items reach a certain stock level (see the sketch after this list)
• calculate inventory cost and fluctuation
• manage transportation and routing
• manually increment/decrement items
• track location and usage of laboratory equipment
• assign storage locations
• track forensic evidence

LIMS can help laboratories keep track of their stock of reagents and even streamline reordering of them.

It should be noted that samples and electronic equipment may also be considered inventory, and thus there is likely some functionality crossover with the sample management and instrument management features.
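Stock-level alerting, for instance, can be a simple threshold check run on every inventory decrement. The item names and quantities below are invented.

    stock = {"reagent_x": 12}          # on-hand units (illustrative)
    reorder_level = {"reagent_x": 10}  # configured reorder threshold

    def consume(item, qty, notify=print):
        stock[item] -= qty
        if stock[item] <= reorder_level[item]:
            notify(f"reorder alert: {item} down to {stock[item]} units")

    consume("reagent_x", 3)  # prints: reorder alert: reagent_x down to 9 units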

Document creation and management

Standard operating procedures (SOPs), specifications, reports, graphs, images, and receipts are all collected and used in the average laboratory. With a LIMS already designed to manage and store sample and experiment data, it makes sense to include functionality to create, import, export, and manage other sorts of data files. As sample and experimental data can be indexed, queried, and linked, so too can document data. Functionality of a typical document management system includes the ability to:

• upload and index documents
• enforce version control
• provide full text search
• export to PDF or other relevant formats
• add documents as attachments

Case management

The laboratory information system (LIS) has played an important role in the case management tasks of patient-centric and clinical laboratories. However, some LIMS have gained case management functionality, effectively blurring the lines between LIS and LIMS. Self-proclaimed LIMS products have emerged in the clinical, public health, and veterinary industries, areas that have historically been served by LIS software. When also considering the fields of law enforcement and forensic science, case management has an increasingly important role in some LIMS. Functionality seen in the case management feature includes:

• case accessioning and assignment
• disease tracking
• trend analysis
• clinical history follow-up
• out-of-range result alerts
• document and result association
• evidence control
• study management

Workflow management

Workflow management is common in the laboratory, acting as a graphical representation of planned sequential steps to either fully or partially automate a process within the lab. Separate standards-based workflow management systems (in the form of a software component) have traditionally performed this task. However, in the 2000s LIMS vendors began incorporating workflow management functionality into their LIMS software, reducing the headaches that customization of a LIMS often brought. Modern commercial and open-source LIMS solutions often feature workflow management functionality, including:

• attribute definition of activities
• definition of inputs and outputs of activities
• assignment of documentation to activities
• setting of quality control limits
• dynamic modification of the workflow in case of future changes
• notification of changes to the workflow
• sending of user-defined messages during the process

Capturing workflow in the lab is becoming more commonplace for the LIMS.
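In its simplest form, a workflow is an ordered list of activities with attributes, inputs, outputs, and optional quality control limits, which the system walks through in sequence. The sketch below uses invented activity names and is only a toy model of the idea.

    # Ordered activities with attributes; all names are illustrative.
    workflow = [
        {"activity": "receive", "inputs": ["sample"], "outputs": ["logged sample"]},
        {"activity": "prepare", "inputs": ["logged sample"], "outputs": ["aliquot"]},
        {"activity": "analyze", "inputs": ["aliquot"], "outputs": ["result"],
         "qc_limits": (6.5, 8.5)},
    ]

    def run(steps, notify=print):
        for step in steps:
            notify(f"{step['activity']}: {step['inputs']} -> {step['outputs']}")
            if "qc_limits" in step:
                notify(f"  QC limits in force: {step['qc_limits']}")

    run(workflow)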

Specification management

Specification (spec) management is vital not only to the manufacturing and research industries but also to a host of other laboratories requiring precise measurements and infallible test methods. Just as the ASTM offers standards and specs for LIMS, so too do LIMS users have standards and specs for their laboratory. With spec management in place within the LIMS, laboratories can then:

• enforce standard operating procedures and business rules
• create specs down to a project or sample level
• validate recipes and procedures
• accept or reject sample batches
• document internal and external spec history

Note that some of the functionality of spec management may cross over into the realm of quality control and data validation.

Customer and supplier management

Unless a laboratory is conducting internalized independent research, in most cases it will do business with external entities such as contract labs, sample providers, equipment providers, and reagent suppliers. In some cases, even internal employees may be considered a customer, necessitating documentation of who is using the system and in what ways. For a veterinary lab, the customer may be the animal and handler. For a forensic lab the customer may be more complex: internal staff, university staff, police departments, and maintainers of nationwide crime databases may all at some point act as customers. In these cases, documenting these various points of contact and linking them to samples, equipment, and tests becomes vital. Managing demographics, complaints, correspondence, and history are all feasible with customer management functionality. This process is often made simpler through the use of a more context-neutral entity creation system, which allows for more flexible management of contacts. This feature may also be referred to as contact management, an address book module, or a customer service module.

Billing management

While the finances of a laboratory are important, they've typically been handled separately as a business process. However, some LIMS include additional functionality to make handling financial transactions and documentation of all sorts possible within the LIMS. In theory, such functionality brings the possibility of keeping more of a laboratory's data centrally located and queryable. This feature may include:

• payment processing
• expense reporting
• price quotes
• revenue management
• workload tracking of billable hours
• bill of materials
• grant management

Quality, security, and compliance

Regulatory compliance

The topic of whether or not a LIMS meets regulatory compliance is often a complex one. While Title 21 CFR Part 11 has arguably had the largest influence on an electronic data management system's compliance, other influential standards have shaped the way LIMS and other systems handle and store data. Other compliance-based codes, standards, and regulations include:

• ASTM
• ASCLD/LAB
• Classified data
• Freedom of information legislation (various)
• GALP and GAMP
• HIPAA
• Health Level 7
• ICD
• ISO/IEC 17025
• ISO 9000/9001
• ISO/TS 16949
• ODBC
• TNI and NELAP
• Title 40 CFR Part 3

With so many codes, standards, and regulations, LIMS consumers are advised to contact vendors with their user requirements and ask how the vendor's software meets and/or exceeds those requirements.


QA/QC functions

The quality management functions of a LIMS allow users to maintain a level of necessary quality across many of the functions in a laboratory. From running quality assurance tests to ensuring employed researchers are proficient at certain tasks, the QA/QC functionality of a LIMS is largely responsible for the output of consistent data and manufactured products in and out of the lab. Common functionality includes:

• single or batch QA/QC tests
• quality control charts and reports
• proficiency testing
• document management
• instrument maintenance
• data acceptance/rejection
• certificates of analysis (COA)
• data types defined by QC analysis

Performance evaluation

As document management becomes increasingly prevalent in LIMS, it only makes sense to also collate and store all the documentation associated with employee training and certification. Changes to laboratory techniques, scientific understanding, and business practices force researchers to learn, reevaluate, and demonstrate competency in order to maintain quality levels in the laboratory. Evaluations can frequently extend beyond staff members, however. Clinics, visit types, vendors, or test species can also be tracked and evaluated based on custom criteria. The performance evaluation functionality of a LIMS makes this possible. That functionality typically includes the ability to maintain training records and history, and also to link that training to a technique or piece of equipment. Afterwards, the staff member, vendor, etc. can be marked as competent or certified in the equipment, knowledge, or process. Periodic assessment of the training and its practical effectiveness can later be performed. Productivity of an entity or process can also be gauged over a certain date range based on tracked time, pre-determined milestones, or some other criteria.

Audit trail

As codes and regulations like Title 21 CFR Part 11 mandate that "computer systems (including hardware and software), controls, and attendant documentation" utilize electronic signatures and audit trails, LIMS developers must put serious thought into how their software handles audit trail functionality. The audit trail — documentation of the sequence of activities that have affected an action — must be thorough and seamlessly integrated into the software. Information recorded in the audit trail typically includes:

• case number
• transaction type
• amount and quantity prior to change
• user notes
• operator code
• time stamp
• location

Whether validating sample data or an entire LIMS, maintaining an audit trail is an important part of 21 CFR Part 11 compliance.
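Conceptually, an audit trail is an append-only log of such fields. The sketch below is illustrative only; a compliant implementation would write to a tamper-evident database with enforced electronic signatures rather than an in-memory list.

    import time

    audit_trail = []  # stand-in for a tamper-evident database table

    def record(operator, transaction, case_number, prior_value, note="", location="LAB1"):
        audit_trail.append({
            "operator": operator, "transaction": transaction,
            "case": case_number, "prior_value": prior_value,
            "note": note, "location": location,
            "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        })

    record("jdoe", "result_edit", "C-1001", prior_value="5.2", note="transcription fix")
    print(audit_trail[-1])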


Chain of custody

The chain of custody (COC) of an item is of varying importance, depending on the type of laboratory. A highly regulated laboratory that works under the Code of Federal Regulations or other guidelines makes tracking COC a vital part of its operations. This is especially true in forensic labs, which depend on continuous accountability of their evidence collection, retention, and disposal procedures. As with an audit trail, a laboratory depends on recorded information like user ID, time stamp, and location ID to maintain a robust and accurate COC. Barcodes, inventory management, and configurable security roles all play an important part in maintaining chain of custody.

Configurable roles and security

Many roles exist within the laboratory setting, each with its own set of responsibilities. And just as the role an individual plays within the laboratory may change, so may the responsibilities associated with each role. This sort of change necessitates a flexible and configurable security system, one that allows for the placement of individual LIMS users into standardized security roles which provide role-specific access to certain LIMS functionality. Additionally, as responsibilities change within roles, that same flexible configuration is necessary for assigning or restricting access to certain LIMS functionality for each existing or newly created role. Of course, roles aren't always assigned on an individual level. Often large groups of individuals may need to be assigned to roles, necessitating group assignments for security purposes. For example, a group of laboratory trainees may only be given read-only access to the sample login and sample tracking functionality of the system through a custom "Trainees" group role, while the head researcher of the lab may be given the "Administrator" role, which allows that individual to access most if not all of the LIMS' functionality.
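The trainee example above maps naturally onto a small role-to-permission table. This is a minimal sketch; the role names and permission strings are invented, and a real LIMS would store these assignments in its configuration database.

    ROLES = {
        "Trainees": {"sample_login:read", "sample_tracking:read"},
        "Administrator": {"*"},  # wildcard: full access
    }
    user_roles = {"asmith": "Trainees", "rhead": "Administrator"}

    def may(user, permission):
        grants = ROLES[user_roles[user]]
        return "*" in grants or permission in grants

    print(may("asmith", "sample_login:read"))   # True
    print(may("asmith", "sample_login:write"))  # False
    print(may("rhead", "sample_login:write"))   # True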

Data normalization

For the purposes of describing LIMS functionality, "data normalization" specifically refers to the process of ensuring incoming/imported data into the LIMS is standardized to the same format as existing LIMS data. Here's an example to better explain this issue. When a LIMS is initially configured, in most if not all cases a clear standard can be set for how logged samples and their associated measurements pre- and post-analysis are recorded in the system. Perhaps all temperatures will be recorded in Celsius to two decimal places. If temperature data imported from a spreadsheet or a lab instrument is not in this format, the LIMS can normalize the incoming data to match the standard already set for existing LIMS temperature data. This ensures consistency within the LIMS database and typically leads to better data validation efforts later on.

Note: Some LIMS developers may include data normalization functionality within what they may refer to as data validation functionality. The line between these two may be blurred or not exist at all.
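The temperature example can be made concrete in a few lines. The conversion rules below are the standard ones; the function name and unit labels are illustrative.

    def normalize_temperature(value: float, unit: str) -> float:
        """Return the reading in Celsius, rounded to two decimal places."""
        u = unit.upper()
        if u in ("F", "FAHRENHEIT"):
            value = (value - 32) * 5 / 9
        elif u in ("K", "KELVIN"):
            value = value - 273.15
        return round(value, 2)

    print(normalize_temperature(98.6, "F"))    # 37.0
    print(normalize_temperature(310.15, "K"))  # 37.0
    print(normalize_temperature(37.456, "C"))  # 37.46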

Data validation

For the purposes of describing LIMS functionality, "data validation" specifically refers to the process of ensuring existing data in the LIMS — either pre-analysis or post-analysis — sufficiently meets any number of standards or thresholds set for sample login, sample analysis, or any other data management process in the LIMS. This validation process may be completely automatic and system-based, or it may also include additional steps on the part of the user base utilizing additional LIMS functionality, including verification of standard operating procedures (SOPs), QC samples, and QA approval.

Note: This functionality shouldn't be confused with the process of validating the LIMS itself, which is an entirely different process, partially falling under regulatory compliance, that involves ensuring "the software is performing in a manner for which it was designed."
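Automatic, system-based validation is often just a threshold check against configured limits, with out-of-range results flagged for QA follow-up. The limits below are invented for illustration.

    LIMITS = {"pH": (6.5, 8.5), "lead_ug_L": (0.0, 15.0)}  # illustrative thresholds

    def validate(test, value):
        low, high = LIMITS[test]
        return "pass" if low <= value <= high else "flag for QA review"

    print(validate("pH", 7.2))        # pass
    print(validate("lead_ug_L", 22))  # flag for QA review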


Data encryption

The existence of this functionality in LIMS software generally indicates the LIMS has the ability to protect the integrity and authenticity of its housed data through the use of a variety of technologies that make data unreadable except to those possessing a key or right to unlock and read the data. This functionality is especially vital to the Web-enabled LIMS, which transfers information over the Internet in a client-server relationship. As a wide variety of encryption technologies exist, it's generally a good idea to consult with the developers of a LIMS to determine the strengths and weaknesses of their employed encryption methods.

Version control

Version control is a form of safeguard that helps preserve data integrity. This is typically done by creating a modifiable new version of a piece of information rather than allowing the original to be modified. Such versioning may be applied to a wide variety of digital information housed in the LIMS, including test methods, training certifications, instrument logs, specifications, and process and procedure (P&P) documentation. In LIMS like LabWare LIMS, reference data can also be versioned while retaining the original relationship between samples and test results, including the version of reference data current at the time lab testing is performed. Information tracked with such revisions includes attributes like user name, time the edit was made, and what exactly was edited. This also benefits those managing audit trails and chains of custody. Other LIMS may employ a different form of version control called file locking, which simply puts the affected information into a read-only mode for other users while someone is busy editing it. Another popular strategy is, rather than locking the file, to allow multiple people to edit a piece of information, later merging the various edits. Potential LIMS buyers may need to inquire with developers to determine what type of versioning scheme is used in the vendor's software.

Automatic data backup

The existence of this piece of functionality in a LIMS usually means information contained in one or more associated databases or data warehouses can be automatically preserved in an additional backup file. The save location for that file as well as the scheduled backup time is configurable, typically through the administrative module of the LIMS.

Environmental monitoring

Some LIMS like Core LIMS and Oracle Health Sciences LabPas allow users to monitor the environmental conditions of not only sample storage containers but also the entire laboratory itself. Attributes like humidity, air quality, and temperature may be monitored to ensure sample storage units and experiments maintain desired conditions. Alarms may be configurable to notify staff if a storage container's environmental attributes go beyond a certain threshold. Manufacturers utilizing a LIMS like NOVA-LIMS may also be able to employ more advanced environmental tracking features in the plant to guarantee a more consistent, higher-quality product.

The temperature of an open cryopreservation container may be monitored on a computer via a connection to a LIMS with environmental monitoring functionality.
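A threshold alarm of this sort can be sketched as a simple check on each incoming reading; the unit IDs, threshold, and notification hook below are all stand-ins for a real LIMS' configured alarms and e-mail/SMS integration.

    FREEZER_MAX_C = -70.0  # illustrative limit for an ultra-low temperature freezer

    def check_reading(unit_id, temp_c, notify=print):
        if temp_c > FREEZER_MAX_C:
            notify(f"ALARM: {unit_id} at {temp_c} C exceeds {FREEZER_MAX_C} C")

    check_reading("ULT-07", -62.4)  # triggers the alarm
    check_reading("ULT-08", -78.0)  # within limits; no message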


Reporting, barcoding, and printing

Custom reporting

Reporting is a vital part of a LIMS, as it allows users to gain a clearer picture of collected data and potential trends. At a minimum, a number of pre-configured report styles come standard with a LIMS. However, some LIMS are more flexible than others, offering the ability to customize reports in numerous ways. The most popular attributes of custom reporting include custom headers, custom information placement, charts, pivot tables, and multiple output formats.

Note: Some LIMS vendors will offer custom reporting as an option at added cost, depending on the level of customization required.

Report printing

Today's LIMS software almost universally offers the ability to print reports and other materials, so this feature may seem a bit redundant to list. Nonetheless, printer support is a feature worth confirming when considering a piece of LIMS software.

Label support

The label — typically affixed to a sample container — is a vital part of the sample tracking process. Identifying information such as sample number, batch number, and barcodes are printed on such labels to ensure optimized sample management and more precise sample data. As such, some LIMS allow users to design and print labels directly from the software.

Barcode support

Barcodes offer many advantages to laboratory techs handling samples, including more accurate data input, tighter sample/instrument associations, tighter sample/study associations, and more room for human-readable information on a label. Given such advantages, many LIMS developers have integrated barcode support into their laboratory information management systems, including support for symbologies like Code 128, Code 39, and Interleaved 2 of 5. Aside from printing options, a LIMS may also offer support for a variety of barcode readers.

The word "Wikipedia" encoded in Code 128 and Code 39

Barcode support and label support are typically found together in LIMS software, but not always, thus their separation into two features of a LIMS.
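As one concrete illustration, a Code 128 label image can be generated in a few lines of Python using the third-party python-barcode package (an assumption here; any equivalent library, or the LIMS' own label printing driver, serves the same purpose). Image output additionally requires Pillow.

    import barcode
    from barcode.writer import ImageWriter

    code128 = barcode.get_barcode_class("code128")
    label = code128("20110512_122GJH", writer=ImageWriter())
    label.save("sample_label")  # writes sample_label.png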


Export to PDF

A LIMS with this feature is able to collect and save information into the Portable Document Format (PDF).

Export to MS Word

A LIMS with this feature is able to collect and save information into a Microsoft Office Word format.

Export to HTML or XML

A LIMS with this feature is able to collect and save information into a HyperText Markup Language (HTML) and/or Extensible Markup Language (XML) format.

Fax integration

A LIMS with this feature is able to connect with a fax machine and send information to it via manual input, automatically, and/or at scheduled intervals.

Email integration

A LIMS with this feature is able to integrate with and use the electronic mail information exchange method to send reports, alerts, and more via manual input, automatically, and/or at scheduled intervals.

Base functionality

Administrator management

The administrator management tools of a LIMS allow designated staff to configure the LIMS optimally for the laboratory. Through the administrator management interface of a LIMS, other features may be accessed, like setting up user roles and scheduling automatic data backups. Like report printing, administrator management is nearly ubiquitous in LIMS software, generally considered a mandatory feature. However, for the purposes of being thorough, it's important to point out its existence.


Modular

This feature indicates that a LIMS has an intentional modular design, which separates some functionality into manageable components of the overall system. Generally speaking, a modular design allows for (1) the structured addition of new functionality to a LIMS and (2) the limiting of overall effects on the system design as new functionality is added.

Instrument interfacing and management

In laboratories there are instruments, and with those instruments come scientific measurements that produce data. It's therefore natural that a lab technician would want to connect those instruments to a laboratory information management system, which is already organizing and storing laboratory data. This sort of interfacing is typically handled with instrument-to-LIMS interfaces, which started out as merely data-transfer mechanisms. Later that interface mechanism became much more robust as a data management tool, though often at great expense with heavy involvement from third parties. Today, "many LIMS vendors can act as single source providers of the entire instrument interfacing solution," providing a cheaper and smoother solution to LIMS customers. The ability to calibrate and schedule maintenance for interfaced instruments may also be included in this category.

An entire room of gas chromatography instruments could potentially be connected to a LIMS via instrument interfacing.

Mobile device integration

While not incredibly common, a few LIMS developers are including support for mobile devices in their laboratory information management systems. LabCollector, for example, extends its LIMS' functionality to Pocket PC or Windows CE devices equipped with wireless barcode scanners, allowing users to read or collect sample information while on the move. Future Technologies' DNA LIMS, designed for labs performing DNA analysis, has its own mobile version for technicians who need access but can't be in the lab.

Alarms and/or alerts

Alarms and alerts are an integral part of a LIMS. They can be automatic or scheduled, and they can come in the form of an e-mail, a pop-up message, or a mobile text message. When the results for a sample analysis go out of range, an automatic warning message can appear on the screen of the technician responsible for the analysis. A scheduled alert can be e-mailed to a lab technician every month indicating a piece of laboratory equipment needs routine maintenance. If the LIMS is equipped with environmental monitoring, an alert can be sent in the form of an SMS text message to the head researcher if the temperature inside a freezer unexpectedly rises. All of these scenarios represent a tiny fraction of the possible implementations of alarms and alerts in a LIMS, highlighting how powerful (yet easy to take for granted) this feature is.


Work-related time tracking

This feature specifically refers to a LIMS' ability to track the amount of time an employee spends at work in general (for payroll purposes) or on more specific projects and tasks (as part of an employee work evaluation program).

Voice recognition system

A LIMS with this feature allows some functions of the LIMS (for example, accessing sample analysis results) to be accessed via voice commands.

External monitoring

This feature allows clients outside the laboratory to monitor the status of sample batches, test results, and more via an online Web portal or, less commonly, as activity alerts sent via e-mail, fax, or SMS.

Messaging

The messaging feature of a LIMS may refer to one (or both) of two things:

• a built-in instant messaging system that allows users to converse with each other through text messages in real time
• an SMS text messaging integration that allows the users or the LIMS itself to send messages or alerts to a user's mobile or smart phone

Instant messaging clients built into a LIMS often make it easier to collaborate.

Multilingual

If a LIMS is listed as multilingual, it's an indication the LIMS interface can be configured to display more than one language, depending on the preference a user or administrator chooses. Some LIMS interfaces can only be displayed in one of two languages (English or German, for example), while others come configured with support for dozens of languages.

Network-capable

This feature is perhaps archaic and/or obvious, but it is mentioned nonetheless. It's generally applied to a non-Web-based LIMS installed over a local or wide-area computer network, essentially indicating the LIMS is not an isolated application, but rather one that can interface with other instances of the LIMS or other networked instruments.

Web client or portal

A LIMS with a Web client or portal is either a Web-based LIMS (one that is not installed on every computer, but rather is hosted on a server and accessed via a Web browser) or a non-Web-based LIMS with an included portal to access it via the Internet.


Online or integrated help

This indicates a LIMS has help infrastructure integrated into the software, support documentation via the LIMS vendor's website, or both.

Software as a service delivery model

This indicates the software can be licensed and utilized via the software as a service (SaaS) delivery model.

Usage-based cost

While rare, some LIMS vendors allow potential clients to license and utilize the vendor's software under a usage-based cost model. An example of this model in use is Bytewize AB's O3 LimsXpress, which has a cost directly related to the number of samples processed each month.


Laboratory information system

A laboratory information system (LIS) is a software system that records, manages, and stores data for clinical laboratories. A LIS has traditionally been most adept at sending laboratory test orders to lab instruments, tracking those orders, and then recording the results, typically to a searchable database. The standard LIS has supported the operations of public health institutions (like hospitals and clinics) and their associated labs by managing and reporting critical data concerning "the status of infection, immunology, and care and treatment status of patients."

History of LIS

Hospitals and labs around the world depend on a laboratory information system to manage and report patient data and test results.

Advances in computational technology in the early 1960s led some to experiment with time and data management functions in the healthcare setting. The company Bolt Beranek and Newman and the Massachusetts General Hospital worked together to create a system that "included time-sharing and multiuser techniques that would later be essential to the implementation of the modern LIS." At around the same time, General Electric announced plans to program a hospital information system (HIS), though those plans eventually fell through. Aside from the Massachusetts General Hospital experiment, the idea of a software system capable of managing time and data management functions wasn't heavily explored until the late 1960s, primarily because of the lack of proper technology and of communication between providers and end users. The development of the Massachusetts General Hospital Utility Multi-Programming System (MUMPS) in the mid-'60s certainly helped, as it suddenly allowed for a multi-user interface and a hierarchical system for persistent storage of data. Yet due to its advanced nature, fragmented use across multiple entities, and inherent difficulty in extracting and analyzing data from the database, development of healthcare and laboratory systems on MUMPS was sporadic at best. By the 1980s, however, the advent of Structured Query Language (SQL), relational database management systems (RDBMS), and Health Level 7 (HL7) allowed software developers to expand the functionality and interoperability of the LIS, including the application of business analytics and business intelligence techniques to clinical data. Today, web-based and database-centric Internet applications of laboratory informatics software have changed the way researchers and technicians interact with data, with web-driven data formatting technologies like Extensible Markup Language (XML) making LIS and EMR interoperability a much-needed reality. SaaS and cloud computing technologies have further changed how the LIS is implemented, while at the same time raising new questions about security and stability.
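As a simplified illustration of HL7-style interoperability, an HL7 version 2 result message carries each observation in a pipe-delimited OBX segment. The segment below is a hand-written example rather than output from any real system, and production interfaces use a proper HL7 engine instead of ad-hoc string handling like this.

    obx = "OBX|1|NM|WBC^White Blood Cell Count||6.1|10*3/uL|4.0-11.0|N|||F"
    fields = obx.split("|")
    print(fields[3])             # WBC^White Blood Cell Count (what was measured)
    print(fields[5], fields[6])  # 6.1 10*3/uL (value and units)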

Common LIS functions

Functions that a LIS has historically performed include, but are not limited to:

• patient management, including admission date, admitting physician, ordering department, specimen type, etc.
• patient data tracking
• decision support, including comparisons of lab orders with their respective ICD-9 codes
• test ordering
• quality assurance
• workload and management reporting
• workflow management
• billing

Clinical vs. anatomic pathology LIS

The laboratory information system has been primarily segmented into two broad categories (though other variations exist): the clinical pathology and anatomic pathology LIS. In clinical pathology the chemical, hormonal, and biochemical components of body fluids are analyzed and interpreted to determine if a disease is present, while anatomic pathology tends to focus on the analysis and interpretation of a wide variety of tissue structures, from small slivers via biopsy to complete organs from a surgery or autopsy. These differences may appear to be small, but the differentiation in laboratory workflow of these two medical specialties has led to the creation of different functionalities within LISs. Specimen collection, receipt, and tracking; work distribution; and report generation may vary — sometimes significantly — between the two types of labs, requiring targeted functionality in the LIS. Other differences include:

• Specific dictionary-driven tests are found in clinical pathology environments but not so much in anatomic pathology environments.
• Ordered anatomic pathology tests typically require more information than clinical pathology tests.
• A single anatomic pathology order may comprise several tissues from several organs; clinical pathology orders usually do not.
• Anatomic pathology specimen collection may be a very procedural, multi-step process, while clinical pathology specimen collection is routinely simpler.


Differences between a LIS and LIMS

There is often confusion regarding the difference between a laboratory information system (LIS) and a laboratory information management system (LIMS). While the two laboratory informatics components are related, their purposes diverged early in their existences. Up until recently, LIMS and LIS have exhibited a few key differences:

1. A LIS has been designed primarily for processing and reporting data related to individual patients in a clinical setting. A LIMS has traditionally been designed to process and report data related to batches of samples from drug trials, water treatment facilities, and other entities that handle complex batches of data.
2. A LIS must satisfy the reporting and auditing needs of hospital accreditation agencies, HIPAA, and other clinical medical practitioners. A LIMS, however, needs to satisfy good manufacturing practice (GMP) and meet the reporting and audit needs of the U.S. Food and Drug Administration and research scientists in many different industries.
3. A LIS is usually most competitive in patient-centric settings (dealing with "subjects" and "specimens") and clinical labs, whereas a LIMS is most competitive in group-centric settings (dealing with "batches" and "samples") that often deal with mostly anonymous research-specific laboratory data.

However, as of 2011 these distinctions have faded somewhat as some LIMS vendors have adopted the case-centric information management normally reserved for a LIS, blurring the lines between the two components further.

LIS vendors

See the LIS vendor page for a list of LIS vendors past and present.

Further reading

• Henricks, Walter H. (9 October 2012). "LIS Basics: CP and AP LIS Design and Operations" [1] (PDF). Pathology Informatics 2012.
• Park, Seung Lyung; Pantanowitz, Liron; Sharma, Guarav; Parwani, Anil Vasdev (March 2012). "Anatomic Pathology Laboratory Information Systems: A Review" [2]. Advances in Anatomic Pathology 19 (2): 81–96. doi:10.1097/PAP.0b013e318248b787 [3]. (Alternate URL [4])

References

[1] http://www.pathinformatics.pitt.edu/sites/default/files/2012Powerpoints/01HenricksTues.pdf
[2] http://ebookbrowse.com/anatomic-pathology-laboratory-information-systems-a-review-slpark-et-all-adv-anat-pathol-2012-pdf-d344405134
[3] http://dx.doi.org/10.1097%2FPAP.0b013e318248b787
[4] https://docs.google.com/gview?url=http://bpa-pathology.com/uploads/file/docs/Anatomic+Pathology+Laboratory+Information+Systems+-+A+Review+-+SLPark+et+all.+-+Adv+Anat+Pathol+2012.pdf&chrome=true


LIS feature

You can find a listing of all LIS vendors — and by extension, the features their products offer — on the LIS vendor page.

A LIS feature is one or more pieces of functionality that appear within a laboratory information system (LIS). The LIS has traditionally been utilized in clinical, pathology, and medical research laboratories as well as numerous public health institutions. Yet as laboratory demands have changed and technological progress has continued, the functions of a LIS have also changed, with the distinction between a LIS and a laboratory information management system (LIMS) fading as some LIMS vendors have adopted the case-centric information management normally reserved for a LIS.

Thousands of hospital laboratories like this one benefit from the use of a laboratory information system.

Despite the blurring of distinction between a LIS and a LIMS, the LIS generally continues to feature the following:

• patient management, including admission date, admitting physician, ordering department, specimen type, etc.
• patient data tracking
• decision support, including comparisons of lab orders with their respective ICD codes
• quality assurance of ordered tests
• workload and management reporting

Of course, there are LIS features that are difficult to categorize or simply contribute to the whole of the LIS rather than add a function. For example, multilingual support allows users to interact with the LIS in more than one language. Some functionality may also overlap several research phases, making it difficult to firmly classify. The features described below come from an analysis of freely available LIS product information on vendor websites. An attempt was made to discover the features most utilized in vendors' LIS products and collect information on those features for each LIS. Not every possible feature is referenced here; some LIS products fill specific niches, utilizing unique functionality to solve a specific problem. That said, keep in mind the categorization of features below is very loose. It may be viable to argue a feature belongs under a different section or multiple sections. For the purposes of organizing this information in an uncomplicated manner, however, some liberty has been taken in the categorizing of features.


Experiment, patient, and data management

Sample login and management

Sample login and management — often referred to as accessioning or specimen management — is an important component of the clinical laboratory, whether it's a molecular pathology lab testing samples for disease indicators or a contract lab running pharmacokinetic and biomarker analysis on samples from a clinical trial. As such, researchers and technicians who work in these types of labs are unable to complete their tasks without an effective method of managing samples. The process of sample management and accessioning includes, but is not limited to:

• storing related sample information, including demographics, dates, and external links
• creating and documenting viewable sample container schemas with name and status
• assigning sample access rights
• assigning custom sample ID or accessioning numbers based on a specification
• applying additional processing to the sample before storage and/or analysis

Additional functionality that could potentially fall under this feature:

• barcoding or RFID tagging of samples
• defining sample points and series
• creating data associations for samples, such as pedigree for sample/aliquot relationships or relationships based on experiment
• issuing sample receipts

Sample tracking

For most laboratory personnel, knowing that a sample has arrived at the lab isn't good enough; they need to know where it's located and what is being done with it. Enter the sample tracking feature. Without it, many problems arise. In the forensic world, for example, many samples are linked to a criminal investigation. In this case, misidentification, contamination, or duplication can become significant issues: a lost sample is essentially missing evidence, while a duplicated sample can render it useless as evidence. After sample reception and its initial handling procedures, many LIS can then track sample location as well as chain of custody. Location tracking usually involves assigning the sample to a particular freezer, oven, or other location, often down to the granular level of shelf, rack, box, row, and column. The process of tracking a sample has become more streamlined with increasing support of 2-D barcode or radio-frequency identification (RFID) technology. While handwritten labels were once the norm, barcode and RFID support in a LIS can now "tie together a vast amount of information, clearly relating each sample to a specific case." Other event tracking, such as the freeze and thaw cycles a sample undergoes in the laboratory, may also be required. As each laboratory's needs for tracking additional data points can vary widely, many modern LIMS and LIS have implemented extensive configurability to compensate for varying environments. The functionality of sample tracking strongly ties into the audit trail and chain of custody features of a LIS.

Where's sample 20110512_122GJH? Sample tracking functionality will let you know which lab oven it's in.


Sample and result batching

What is batching? The United States Environmental Protection Agency (EPA) defines a batch as "a group of samples which behave similarly with respect to the sampling or testing procedures being employed and which are processed as a unit." This definition can be applied to many laboratories that handle large quantities of samples for some form of analysis or processing. A LIS that has the ability to check in, link, and track groups of samples across one or multiple facilities is valuable to such laboratories. Additionally, batching the analysis results of multiple samples or groups of samples gives laboratories more flexibility with how they manage their data. Batching also offers the benefit of mimicking the production groups of samples while also capturing quality control data for the entire group.

Task and event scheduling

Within the context of a LIS, the ability to schedule a task or event is a natural extension of how work was done in a laboratory before the advent of data management systems. Sample processing, data analysis, equipment maintenance, and case management follow-ups are assigned to technicians and other personnel. Outpatient scheduling is another aspect of some clinical atmospheres, better handled with computerized scheduling functionality. While these tasks have in the past been performed without the LIS, a modern data management system can now optimize those tasks and provide additional scheduling functionality to streamline the operation of a lab. Some LISs like Elekta AB's IntelliLab include a scheduling calendar for recurring test orders, rules-based orders, and pre-defined selection lists. Additional functionality within this feature group includes the ability to configure automated assignments of experiment requests, establish recurring events, and in most cases, create printable reports. Examples of tasks and events that can feasibly be scheduled in a LIS include:

• production of reports
• creation and sending of e-mails and alerts
• maintenance of equipment
• assignment of accessioning tasks to technicians
• scheduling of outpatient visits

Option for manual result entry

While many LIS vendors tout the ability of their product to automate the entry of sample analysis results into the LIS or other databases, the need for manual data entry of analysis results still exists. This feature is important to laboratories obtaining analysis results from multiple sources, including non-digital paper-based results and instruments that can't be connected to the LIS. Additional functionality associated with this feature includes a customizable spell-check dictionary and the ability to add comments, notes, and narratives to many of the data items in the LIS.

Multiple data viewing methods

Hospitals, physicians, and clinical research facilities produce reams of data, and the LIS exists to help organize and distribute that data to the necessary entities. Additionally, even before the existence of the LIS, scientists have had a corresponding need for visually representing that data for clearer analysis and hypothesis creation. Today a LIS can not only collect and analyze data, but it can also represent that data in reports, graphs, gradients, and spreadsheets. Depending on the LIS, more than one way to visually represent the data may exist. This category ties in with the custom templates and forms functionality apparent in some LIS, providing both custom and standardized ways to present information across a healthcare or medical research enterprise.


Configurable templates and forms

Similar to an electronic laboratory notebook (ELN), a template in a LIS is a functionality item which allows users to increase the productivity and quality of their work by allowing for the creation of a standardized analysis page, patient page, or reporting process across a healthcare or medical research enterprise. These templates allow researchers to maintain more consistent data representation for similar tasks in the LIS and save time by not needing to manually input common data outputs or recreate experiments. Templates and forms typically utilize a wide field library, and the data that is posted to those template fields can also be normalized to a specific standard. Types of templates that may be created include those for renal and blood pressure analysis, patient demographics, test ordering, and department-level reports.

Data and trend analysis

For public health centers and pharmaceutical research centers alike, data analysis plays an important role in their operations, helping clinicians and researchers make better sense of their collected data and reach valuable conclusions about them. While this important phase of laboratory work has often been done externally from the LIS, it's now more common to see basic analysis tools being included. Such tools allow raw data to be imported directly to the LIS, which then can store, process, and display it in a shareable form. Vendors may include data analysis functionality by simply including Microsoft Excel compatibility or providing advanced reporting tools, or they may take a more advanced approach by programming and including their own custom data and trend analysis tools in their informatics software. As sample analysis is increasingly an important part of most if not all laboratories, such functionality — which has often come in the form of a separate application or analysis device — will likely continue to merge into software like LIS, LIMS, and other laboratory informatics solutions.

Some LISs allow users to analyze patient test results or clinical research data with built-in software tools.

Data and equipment sharing Aside from data storage and sample registration, a modern LIS's major contribution to the laboratory is aiding in the sharing of test results, reports, and patient data with other entities across the clinical and research enterprise. Rather than pieces of information becoming misplaced or locked away in a physician's office or pathology lab, the LIS makes it easier to share test results and increases the efficiency of patient-doctor-lab collaboration in general. Yet data is more than just test results; it also can come in the form of charts, reports, policies and procedures, and other documents. Additionally, the need for controlling who has access to those types of data is also an important consideration. As such, this feature is at least partially tied to other features like document management and configurable security.

Data mining Data mining, in the field of computational science, involves "the process of discovering interesting and useful patterns and relationships in large volumes of data" and includes three computational steps: model-learning, model evaluation, and model usage. As informatics software allows both research and clinical laboratories to collect and manage increasing quantities of data, a corresponding demand for tools capable of modeling that data is appearing. For example, public health laboratories may wish to utilize data mining for statistical analysis and surveillance of populations for specific diseases. LIMSs like LabWare LIMS and LISs like Orchard Harvest are examples of laboratory informatics software which incorporate data mining and reporting tools.
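
The three computational steps named above can be illustrated with a short sketch. This example assumes the third-party scikit-learn library and uses invented toy data; it is not drawn from LabWare LIMS, Orchard Harvest, or any other product.

    # A minimal sketch of model-learning, model evaluation, and model usage,
    # assuming scikit-learn is installed.
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # Illustrative data: [age, test_value] -> disease present (1) or not (0)
    X = [[34, 1.1], [61, 2.9], [45, 1.4], [70, 3.3], [29, 0.9],
         [58, 2.7], [66, 3.0], [41, 1.2], [53, 2.5], [37, 1.0]]
    y = [0, 1, 0, 1, 0, 1, 1, 0, 1, 0]

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)

    model = LogisticRegression().fit(X_train, y_train)    # 1. model-learning
    print(accuracy_score(y_test, model.predict(X_test)))  # 2. model evaluation
    print(model.predict([[63, 3.1]]))                     # 3. model usage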


Customizable fields and/or interface As thorough as some user interface (UI) developers may be in adding relevant fields and interface options for end users, some options are inevitably omitted or unanticipated. This has traditionally required the end user to contact the vendor and ask if the needed option(s) can be added in the next release. However, many modern LIS vendors have responded by adding functionality that gives end users and/or LIS administrators more control over the user interface. Aspects of the LIS's user interface that are often customizable by the end user include:
• report interface and display
• patient profile display
• project and experiment display
Note that in many cases an interface may be customized through the use of templates and forms; as such, this functionality may be closely tied to the configurable templates and forms functionality.

Query capability As was the case before the advent of databases and electronic data management solutions, today researchers must search through test results, patient notes, and other types of data to better draw conclusions from experiments, diagnose patient illnesses, and plan pharmaceutical research activities. Whereas this used to mean browsing through laboratory notebooks, Excel spreadsheets, or Access databases, now powerful query tools exist within data management tools like the LIS and ELN. A flexible search algorithm can be implemented to allow users to search a dataset by patient name (full or partial) or by any accessioning number (see the sketch below), and more advanced query tools may be implemented to collate and search across multiple datasets. Query functionality often includes the ability to:
• search both transactional data and archived data tables
• search multiple databases via an application programming interface (API) or open database connectivity (ODBC) connection
• filter and sort data
• collate queried data for further analysis and visualization
• create ad-hoc queries
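
The sketch referenced above: a minimal flexible search in Python, assuming a SQLite database file named lis.db with a hypothetical results table (both names are inventions for illustration).

    import sqlite3

    conn = sqlite3.connect("lis.db")

    def search_results(term):
        """Search by partial patient name or exact accession number."""
        sql = """SELECT accession_no, patient_name, test_code, result
                 FROM results
                 WHERE patient_name LIKE ? OR accession_no = ?"""
        # Parameterized queries keep user-supplied terms out of the SQL itself.
        return conn.execute(sql, (f"%{term}%", term)).fetchall()

    for row in search_results("Smith"):
        print(row)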

Import data Data can originate from numerous places in the laboratory. The ability to import that data into a LIS can be beneficial, especially when an instrument can't be connected or external clients collaborating on a project need to submit relevant data. Of course instrument interfacing allows for even more importation options. Additional data validation procedures may be applied to the imported data to guarantee information homogeneity. For the LIS, one of the common sources of importing data is a separate electronic medical record (EMR) system, for collecting patient data and test orders.
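
The following sketch illustrates one way imported rows might be screened before being committed to the database. The CSV column names are hypothetical and chosen only for the example; a real import mapping depends on the instrument or EMR export format.

    import csv

    REQUIRED = {"accession_no", "test_code", "result"}  # hypothetical columns

    def import_results(path):
        """Import rows from a CSV export, rejecting incomplete records."""
        accepted, rejected = [], []
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                if REQUIRED <= row.keys() and all(row[c] for c in REQUIRED):
                    accepted.append(row)
                else:
                    rejected.append(row)   # hold for manual review
        return accepted, rejected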


Internal file or data linking This feature allows research collaborators using a LIS to link together sample batches, reports, protocols, results, and more, providing greater contextual clarity to projects and datasets. Examples include:
• linking a sample batch to a test or sample preparation methodology
• linking a test process to a particular experiment
• linking a report to a sample batch
• linking a group of experiment results to a raw data file
• linking multiple images to a patient record
• linking all experiment results with the correct reporting test methods


External file or data linking This feature allows research collaborators using a LIS to link together data and files housed in the database with data, files, and customers outside the LIS's domain. Examples include:
• linking to an external practice management or electronic medical record (EMR) system using an HL7-compliant interface
• linking one public health data source with others to pool demographic and medical information for better disease modeling
• linking to separate clinical trial laboratory data files within a report

ELN support or integration The functionality of a LIMS and an ELN began to blur in the 2000s, with both types of software incorporating features from the other. It has been more common to see a LIMS take on some sort of ELN support (or vice versa) than to see the same in a LIS. Though uncommon, some LISs may include some sort of integration or compatibility with an ELN, and thus this functionality is at least mentioned here.

Export to MS Excel While Microsoft Excel has long been used within the laboratory setting, a slow shift towards relational databases and LIMS occurred in the late 1990s and early 2000s. Additional concerns with the difficulties of Excel's validation and compliance with FDA 21 CFR Part 11 and other regulations have led many labs to turn to data management solutions that are easier to validate. Nevertheless, laboratories continue to use Excel in some fashion, and thus Excel integration or data exportation in Excel format is a real need for LIS customers. LISs with this feature allow raw, processed, or imported data to be exported in the Excel format for further analysis and dissemination elsewhere in the LIS or externally from it.
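
As a simple illustration of Excel export, the following sketch writes queried rows to a workbook. It assumes the third-party openpyxl package; the rows and file name are invented for the example.

    from openpyxl import Workbook

    # Hypothetical rows already queried from the LIS database
    rows = [("A-0001", "GLU", 95, "mg/dL"),
            ("A-0002", "GLU", 140, "mg/dL")]

    wb = Workbook()
    ws = wb.active
    ws.append(("Accession", "Test", "Result", "Units"))  # header row
    for row in rows:
        ws.append(row)
    wb.save("results_export.xlsx")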


Raw data management While not described as a feature on most LIS vendor websites, a few indicate their product is capable of managing (import, export, editing, etc.) data in its raw format for future analysis and dissemination.

Data warehouse A LIS's data warehouse serves the important function of storing, extracting, and managing the data that laboratories, physician offices, and other facilities produce for the purposes of analysis, reporting, and dissemination, typically separate from the primary storage database. Data warehouses also offer the benefit of speeding up queries, making queries and data mining more user-friendly, and smoothing out data gaps.

Project and/or task management Project and task management within a LIS typically involves the scheduling of tasks to technicians and organizing associated tasks into a more cohesive unit for better tracking and management. While the functionality of task and event scheduling can also be found in project and task management, many LISs include functionality beyond scheduling that warrants the addition of the project and/or task management feature. This functionality includes:
• job allocation and rescheduling
• instrument workload tracking
• pending workload verification
• project- and experiment-based workflow management
• sample, batch, and document linking
• work template sharing
• recurring event management

See also: Patient and case management

Test, experiment, and/or trial management Specimen or sample test management is a common component of a LIS, while experiment and research trial management functionality is a component of some LISs, often limited to those that are designed to help manage clinical trials. Test, experiment, and trial management can cover a wide variety of tasks, from setting up the design of a clinical trial to specimen task assignments, from ordering tests for patients to planning trial experiments. Note: this may also be referred to as "order management" with some vendors. It's worth noting this functionality category may seem broad in scope and include other functionality listed on this page, including workflow management and project and task management. Its inclusion when reviewing software functionality is primarily to indicate when a vendor or project team indicates the existence of specific test, experiment, or trial management tools in their software.

Inventory management Laboratories use a wide array of inventory, from reagents to glassware, from radiopharmaceuticals to laboratory baths. With that comes the need to know how much is on hand and how frequently it's used. For this, some LIS products (especially those for pathology labs) now offer limited or full-featured inventory management functionality, which may include the ability to:
• register the origin and demographics of incoming materials.
• track used and in-use items via barcodes or RFID tags.
• track inventory reduction based on usage and shipping out of the lab.
• create alerts for when items reach a certain stock level.
• calculate inventory cost and fluctuation.
• manage transportation and routing.
• manually increment or decrement items.
• track location and usage of laboratory equipment.
• track location and usage of reagents.
• assign storage locations.
• track forensic evidence.

It should be noted electronic equipment may also be considered inventory, and thus there is likely some functionality crossover with instrument management features.

Document and/or image management Standard operating procedures (SOPs), specifications, reports, graphs, images, and receipts are all collected and used in the average laboratory. With a LIS already designed to reference and store test and patient data of all types, it makes sense to include functionality to create, import, export, and manage other sorts of data files. As experimental data can be indexed, queried, and linked, so too can document data. Standard operating procedures, workflow diagrams, and business models can all be handled effectively with document management functionality, which typically includes the ability to:
• upload and index documents.
• add images and photos inline to a patient or case entry.
• enforce version control.
• provide full text searches.
• export to PDF, XML, or other relevant formats.
• add documents as attachments.

Patient and case management The laboratory information system (LIS) has played an important role in the case management tasks of patient-centric and clinical laboratories. LIS products have included patient or case management tools suitable for the clinical, public health, and veterinary industries, as well as the fields of law enforcement and forensic science. Functionality seen in the patient and case management feature includes:
• case accessioning and assignment
• disease tracking
• trend analysis
• clinical history follow-up
• out-of-range result alerts
• document and result association
• evidence control
• study management
• collating of patient data across multiple spectra


Workflow management Workflow management is common in the laboratory, acting as a graphical representation of planned sequential steps to either automate or clarify a process or experiment within the lab. Separate standards-based workflow management systems (in the form of a software component) have traditionally performed this task. However, in the 2000s vendors began incorporating workflow management functionality into their laboratory informatics software, reducing customization headaches in the process. Capturing workflow in the lab is becoming more commonplace for laboratory informatics products.

Modern commercial and open-source LIS solutions recognize clinical laboratory workflow often has its own share of requirements, requiring specific workflow management functionality (see the sketch after this list), including:
• managing the request cycle within a laboratory
• organizing and executing diagnostic testing
• managing specific chemistry- and biology-related procedures
• defining activity attributes
• managing automation tools to improve workflows
• re-routing samples based on changes to a process
• dynamically modifying workflow in case of future changes
• receiving notification of changes to the workflow
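
The sketch referenced above: a tiny state machine expressing a request cycle, written in plain Python. The state names and transitions are invented for illustration and would differ in any real LIS.

    # Illustrative transitions for a clinical request cycle; re-routing a
    # sample or changing a process means editing this table, not the code.
    TRANSITIONS = {
        "ordered":    ["collected", "cancelled"],
        "collected":  ["in_testing", "rejected"],
        "in_testing": ["resulted", "re_routed"],
        "re_routed":  ["in_testing"],
        "resulted":   ["verified"],
        "verified":   [],
    }

    def advance(current, target):
        """Move a sample to a new workflow state, refusing invalid jumps."""
        if target not in TRANSITIONS.get(current, []):
            raise ValueError(f"cannot move from {current!r} to {target!r}")
        return target

    state = "ordered"
    for step in ("collected", "in_testing", "resulted", "verified"):
        state = advance(state, step)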

Specification management Specification (spec) management is vital not only to the manufacturing and research industries but also to a host of other laboratories requiring precise measurements and reliable test methods. Just as the ASTM offers standards and specs for laboratory informatics software, so too do users have standards and specs for their laboratory. Spec management has primarily been seen in a manufacturing execution system (MES) or a LIMS, but occasionally a LIS may include such functionality. With spec management in place, laboratories can:
• enforce standard operating procedures and business rules.
• create specs down to a project or sample level.
• validate recipes and procedures.
• accept or reject sample batches.
• document internal and external spec history.

Note some of the functionality of spec management may cross over into the realm of quality control and data validation.

Customer, supplier, and physician management Unless a laboratory is conducting internalized independent research, in most cases it will do business with external entities such as contract labs, physician offices, equipment providers, and reagent suppliers. In some cases, even internal employees may be considered customers, necessitating documentation of who is using the system and in what ways. For a veterinary lab, the customer may be the animal and its handler. For a forensic lab the customer may be more complex: internal staff, university staff, police departments, and maintainers of nationwide crime databases may all at some point act as customers. In these cases, documenting these various points of contact and linking them to tests, equipment, and patients becomes vital. Managing demographics, complaints, correspondence, and history are all feasible with customer, supplier, and physician management functionality. This process is often made simpler through the use of a more context-neutral entity creation system, which allows for more flexible management of contacts. This feature may also be referred to as contact management, an address book module, or a customer service module.

Billing and revenue management While the finances of a laboratory are important, they've typically been handled separately as a business process. However, some LISs include additional functionality to make handling financial transactions and documentation of all sorts possible within the LIS. In theory, such functionality brings the possibility of keeping more of a laboratory's data centrally located and queryable. This feature may include:
• payment processing
• expense reporting
• price quotes
• revenue management
• workload tracking of billable hours
• bill of materials
• sales team and client management
• profitability analysis
• medical necessity checks

Quality, security, and compliance

Regulatory compliance The topic of whether or not a LIS meets regulatory compliance is often a complex one. While Title 21 CFR Part 11 has arguably had the largest influence on an electronic data management system's compliance, other influential standards have shaped the way laboratory informatics systems handle and store data. Other compliance-based codes, standards, and regulations include:
• ASTM
• ASCLD/LAB
• Classified data
• Freedom of information legislation (various)
• GALP and GAMP
• HIPAA
• Health Level 7
• ICD
• ISO/IEC 17025
• ISO 9000/9001
• ISO/TS 16949
• ODBC
• TNI and NELAP
• Title 40 CFR Part 3

With so many codes, standards, and regulations, LIS consumers are advised to contact vendors with their user requirements and ask how the vendor's software meets and/or exceeds those requirements.


QA/QC functions The quality management functions of a LIS allow users to maintain a necessary level of quality across many of the functions in a laboratory. Activities that quality assurance / quality control functionality allows for include:
• forcing random review of cases by a second pathologist before case verification
• receiving and processing QC results from laboratory analyzers
• creating user rules
• setting up custom alerts and flags for out-of-range results
• observing standard deviations in outcome research
• reviewing and signing off on data electronically
• delta checking

Performance evaluation As document and file management plays an important role in clinical and research laboratories, it only makes sense to collate and store all the associated data for future reference, including documentation relating to individual training and performance. Changes to laboratory techniques, scientific understanding, and business practices force lab technicians and researchers to learn, reevaluate, and demonstrate competency in order to maintain quality levels in the laboratory. Evaluations can frequently extend beyond staff members, however. Clinics, visit types, vendors, or test species can also be tracked and evaluated based on custom criteria. The performance evaluation functionality of a LIS makes this possible. That functionality typically includes the ability to maintain training records and history, and also to link that training to a technique or piece of equipment. Afterwards, the staff member, vendor, etc. can be marked as competent or certified in the equipment, knowledge, or process. Periodic assessment of the training and its practical effectiveness can later be performed. Productivity of an entity or process can also be gauged over a certain date range based on tracked time, pre-determined milestones, or some other criteria.

Audit trail As codes and regulations like Title 21 CFR Part 11 mandate that "computer systems (including hardware and software), controls, and attendant documentation" utilize electronic signatures and audit trails, LIS developers must put serious thought into how their software handles audit trail functionality. The audit trail — documentation of the sequence of activities that have affected an action — must be thorough and seamlessly integrated into the software. Whether validating an instrument's data or an entire LIS, maintaining an audit trail is an important part of 21 CFR Part 11 compliance. Information recorded in the audit trail typically includes (a minimal logging sketch follows this list):
• case number
• accessioning number
• transaction type
• amount and quantity prior to change
• operator code
• time stamp
• location
• user notes
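
The sketch referenced above: an append-only audit log in plain Python that captures the fields listed. The field names and sample values are invented; a real LIS would record this in a protected database table, not a flat file.

    import getpass
    import json
    import time

    def audit(log_path, case_no, accession_no, txn_type, prior_value,
              location, notes=""):
        """Append one immutable audit entry; existing entries are never edited."""
        entry = {
            "case_no": case_no,
            "accession_no": accession_no,
            "transaction_type": txn_type,
            "prior_value": prior_value,        # amount/quantity before the change
            "operator": getpass.getuser(),     # operator code
            "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
            "location": location,
            "notes": notes,                    # user notes
        }
        with open(log_path, "a") as f:         # append-only: no rewrites
            f.write(json.dumps(entry) + "\n")

    audit("audit.log", "C-77", "A-0001", "result_edit", "5.2",
          "main lab", "transcription fix")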

Chain of custody The chain of custody (COC) of an item is of varying importance, depending on the type of laboratory. A highly regulated laboratory that works under Code of Federal Regulation or other guidelines makes tracking COC a vital part of its operations. This is especially true in forensic labs, which depend on continuous accountability of their evidence collection, retention, and disposal procedures. As with an audit trail, a laboratory depends on recorded information like user ID, time stamp, and location ID to maintain a robust and accurate COC. Barcodes and RFID tags, inventory management, and configurable security roles all play an important part in maintaining chain of custody.

Configurable roles and security Many roles exist within the clinical and research setting, each with its own set of responsibilities. And just as the role an individual plays within the laboratory may change, so may the responsibilities associated with each role. This sort of change necessitates a flexible and configurable security system, one that allows for the placement of individual LIS users into standardized security roles which provide role-specific access to certain functionality. Additionally, as responsibilities change within roles, that same flexible configuration is necessary for assigning or restricting access to specific functionality for each existing or newly created role. Of course, roles aren't always assigned on an individual level. Often large groups of individuals may need to be assigned to roles, necessitating group assignments for security purposes. For example, a group of hospital laboratory trainees may not be given access to the inventory management functionality of the system through a custom "Trainees" group role, while the head of the lab may be given the "Administrator" role, which allows that individual to access a much broader spectrum of the LIS's functionality.

Data normalization For the purposes of describing LIS functionality, "data normalization" specifically refers to the process of ensuring incoming/imported data in the LIS is standardized to the same format as existing data. An example helps illustrate this. When a LIS is initially configured, in most if not all cases a clear standard can be set for how logged test results and their associated measurements pre- and post-analysis are recorded in the system. Perhaps all temperatures will be recorded in Celsius to three decimal places. If temperature data imported from a spreadsheet or a lab instrument is not in this format, the LIS can normalize the incoming data to match the standard already set for existing temperature data. This ensures consistency within the database and typically leads to better data validation efforts later on. Note: Some LIS developers may include data normalization functionality within what they may refer to as data validation functionality. The line between these two may be blurred or not exist at all.
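
The temperature example above translates directly into code. This minimal Python sketch normalizes incoming readings to Celsius at three decimal places; the unit codes are assumptions for the example.

    def normalize_temperature(value, unit):
        """Normalize an incoming temperature to Celsius, three decimal places."""
        celsius = {"C": lambda v: v,
                   "F": lambda v: (v - 32) * 5 / 9,
                   "K": lambda v: v - 273.15}[unit](float(value))
        return round(celsius, 3)

    print(normalize_temperature("98.6", "F"))   # -> 37.0
    print(normalize_temperature(310.15, "K"))   # -> 37.0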


Data validation For the purposes of describing LIS functionality, "data validation" specifically refers to the process of ensuring existing data in the LIS — either pre-analysis or post-analysis — sufficiently meets any number of standards or thresholds set for any given data management process. This validation process may be completely automatic and system-based, or it may also include additional steps on the part of the user base utilizing additional LIS functionality, including verification of standard operating procedures (SOPs), QC samples, and QA approval. Note: This functionality shouldn't be confused with the process of validating the application itself, which is an entirely different process partially falling under regulatory compliance and involves the process of ensuring "the software is performing in a manner for which it was designed."

In a LIS, data and data models can be forced through a validation process to remove errors and reconcile those data and models.
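
A minimal sketch of the automatic, system-based form of validation described above, checking results against reference ranges. The test codes and thresholds are invented for illustration.

    # Illustrative thresholds; real reference ranges depend on test and method.
    RANGES = {"GLU": (70, 110), "K": (3.5, 5.1)}  # test code -> (low, high)

    def validate(test_code, value):
        """Return a QC flag for a result: 'ok', 'low', 'high', or 'no-range'."""
        if test_code not in RANGES:
            return "no-range"      # no threshold defined; route to manual QA
        low, high = RANGES[test_code]
        if value < low:
            return "low"
        if value > high:
            return "high"
        return "ok"

    assert validate("GLU", 140) == "high"
    assert validate("K", 4.0) == "ok"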

Data encryption The existence of this functionality in a LIS indicates the software has the ability to protect the integrity and authenticity of its housed data through the use of a variety of technologies that make data unreadable except to those possessing a key, right, etc. to unlock and read the data. This functionality is especially vital to the web-enabled LIS, which transfers information over the Internet in a client-server relationship. As a wide variety of encryption technologies exist, it's generally a good idea to consult with the developers of a LIS to determine the strengths and weaknesses of their employed encryption methods.
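
For illustration only, here is a sketch of symmetric encryption at rest using the third-party Python cryptography package; a production LIS would also need key management and transport-layer security, neither of which is shown.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()    # must be stored and protected separately
    cipher = Fernet(key)

    token = cipher.encrypt(b"patient: P-1001, K: 4.0 mmol/L")
    print(cipher.decrypt(token))   # readable only by holders of the key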

Version control Version control is a form of safeguard that helps preserve data integrity and thus ties in with the topic of regulatory compliance. This is typically done by creating a modifiable new version of a piece of information rather than allowing the original to be modified. Such versioning may be applied to a wide variety of digital information housed in the LIS, including templates, training certifications, instrument logs, specifications, and process and procedure (P&P) documentation. Information tracked with such revisions includes attributes like user name, time the edit was made, and what exactly was edited. This also benefits those managing audit trails and chains of custody. Some LIS vendors may employ a different form of version control called file locking, which simply puts the affected information into a read-only mode for users while someone else is busy editing it. Another popular strategy is to, rather than locking the file, allow multiple people to edit a piece of information, later merging the various edits. Potential LIS buyers may need to inquire with developers to determine what type of versioning scheme is used in the vendor's software.
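
A copy-on-write sketch of the first strategy described above (each edit creates a new version; originals are never overwritten), in plain Python with invented document IDs:

    import time

    versions = {}  # doc_id -> list of {version, author, timestamp, body}

    def save(doc_id, author, body):
        """Record an edit as a brand-new version; prior versions stay intact."""
        history = versions.setdefault(doc_id, [])
        history.append({
            "version": len(history) + 1,
            "author": author,
            "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
            "body": body,
        })
        return history[-1]["version"]

    save("SOP-12", "jdoe", "Centrifuge at 3000 rpm for 10 minutes.")
    save("SOP-12", "asmith", "Centrifuge at 3500 rpm for 10 minutes.")
    print([v["version"] for v in versions["SOP-12"]])  # -> [1, 2]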


Automatic data backup The existence of this piece of functionality in a LIS usually means information contained in one or more associated databases or data warehouses can be automatically preserved in an additional backup file. The save location for that file as well as the scheduled backup time is configurable, typically through the administrative module of the software.
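
As an illustration, the sketch below uses SQLite's online backup API to write a timestamped backup file. The paths are placeholders, and a real LIS would read the save location and schedule from its administrative configuration rather than hard-coding them.

    import sqlite3
    import time

    def backup_database(src_path, backup_dir):
        """Copy the live database to a timestamped backup file."""
        stamp = time.strftime("%Y%m%d-%H%M%S")
        dest_path = f"{backup_dir}/lis-backup-{stamp}.db"
        src = sqlite3.connect(src_path)
        dst = sqlite3.connect(dest_path)
        with dst:
            src.backup(dst)   # consistent copy even while the database is in use
        src.close()
        dst.close()
        return dest_path

    backup_database("lis.db", "/var/backups")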

Environmental monitoring While not common, a few LISs allow users to monitor the environmental conditions of not only sample storage containers but also the entire laboratory itself. Attributes like humidity, air quality, and temperature may be monitored to ensure sample storage units and experiments maintain desired conditions. This monitoring may be done by treating the storage container as a device, which must be interfaced with the LIS. Alarms may be configurable to notify staff if a storage container's environmental attributes go beyond a certain threshold.

Reporting, barcoding, and printing

Custom reporting Reporting often provides useful information representation for gaining a clearer picture of collected data and potential trends. At a minimum, a number of pre-configured report templates typically come standard with a LIS. However, some systems are more flexible than others, offering the ability to customize reports in numerous ways. The most popular attributes of custom reporting include custom headers, custom information placement, charts, pivot tables, and multiple output formats. Note: Some LIS vendors may offer custom reporting as an option at an added cost, depending on the level of customization required.

Synoptic reporting Synoptic reporting is a specific type of reporting applicable to pathology and other associated laboratories. Synoptic reporting essentially involves a structured, pre-formatted "checklist" of clinically and morphologically relevant data elements (ideally passed to a relational database where they are efficiently organized, searched, and retrieved), with the intent of making reporting more efficient, uniform, and relevant to internal and external researchers. This style of reporting has the advantage of obviating the need for transcription services, reducing specimen turnaround time, and prioritizing the presentation of large amounts of diagnostic information. Some LISs, especially those oriented towards pathology, may include this specialized functionality. In some cases, a configurable template or form may be utilized to structure a report in a synoptic format, providing similar functionality to a separate synoptic reporting module.


Report printing Today's software almost universally offers the ability to print reports and other materials, so this feature may seem a bit redundant to list. Nonetheless, printer support is a feature worth confirming when considering a piece of laboratory informatics software.

Label support The label — typically affixed to a sample container or piece of equipment — is a vital part of many laboratory operations. Identifying information such as sample number, batch number, and barcodes are printed on such labels to optimize tracking the location of items in a lab. As such, numerous LISs allow users to design and print labels directly from the software.

Barcode and/or RFID support Barcodes offer many advantages to laboratory techs handling samples, including more accurate data input, tighter sample/instrument associations, tighter sample/study associations, and more room for human-readable information on a label. Given such advantages, many laboratory informatics developers have integrated barcode support into their software, including support for symbologies like Code 128, Code 39, and Interleaved 2 of 5. Aside from printing options, a LIS may also offer support for a variety of barcode readers.

Additionally, some LIS include the ability to handle radio-frequency identification (RFID) tags, which have several advantages over a more traditional label-based approach to accessioning.

Barcode support and label support are typically found together in LIS software, but not always, thus their separation into two features.
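
For a taste of how label barcode generation can look in code, here is a sketch assuming the third-party python-barcode package (with Pillow installed for image output); the accession number and file name are invented.

    import barcode
    from barcode.writer import ImageWriter

    accession_no = "A-0001-2015"                  # hypothetical accession number
    code = barcode.get("code128", accession_no, writer=ImageWriter())
    code.save("label_A-0001-2015")                # writes label_A-0001-2015.png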

Export to PDF A LIS with this feature is able to collect and save information into a Portable Document Format (PDF).

Export to MS Word A LIS with this feature is able to collect and save information into a Microsoft Office Word format.

Export to HTML or XML A LIS with this feature is able to collect and save information into a HyperText Markup Language (HTML) and/or Extensible Markup Language (XML) format.

Fax integration A LIS with this feature is able to connect with a fax machine and send information to it via manual input, automatically, and/or at scheduled intervals.


Email integration A LIS with this feature is able to integrate with and use the electronic mail information exchange method to send reports, alerts, and more manually, automatically, and/or at scheduled intervals.

Base functionality

Administrator management The administrator management tools of a LIS allow administrators to set up the software optimally for the facility and its projects. Through the administrator management interface, other features may be accessed, like setting up user roles and scheduling automatic data backups. Like report printing, administrator management is nearly ubiquitous in laboratory informatics software and generally considered a mandatory feature. However, for the purposes of being thorough, it's important to point out its existence.

Modular This feature indicates that a LIS has an intentional modular design, which separates some functionality into manageable components of the overall system. Generally speaking, a modular design allows for 1. the structured addition of new functionality to a LIS and 2. the limiting of overall effects on the system design as new functionality is added.

Instrument interfacing and management In laboratories there are instruments, and with those instruments come scientific measurements that produce data. It's therefore natural a researcher would want to connect those instruments to a laboratory information system, which is already organizing and storing laboratory data for hospitals and medical research facilities. This sort of interfacing is typically handled with instrument-to-software interfaces, which started out as merely data-transfer mechanisms. Later that interface mechanism became much more robust as a data management tool, though often at great expense with heavy involvement from third parties. Today, "vendors can act as single source providers of the entire instrument interfacing solution," providing a cheaper and smoother solution to laboratory informatics customers. In the clinical laboratory setting, a LIS vendor may have additional considerations to make, such as Health Level 7 (HL7) triggers, messages, and segments transported across communication interfaces.
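
To show roughly what those HL7 result messages look like on the wire, here is a deliberately simplified parsing sketch in plain Python. The message content is fabricated, and real interfaces would use a proper HL7 library and MLLP transport rather than bare string splitting.

    # Fabricated HL7 v2 ORU^R01 fragment: one MSH header and two OBX results
    message = (
        "MSH|^~\\&|ANALYZER|LAB|LIS|HOSP|20150323||ORU^R01|1|P|2.3\r"
        "OBX|1|NM|GLU^Glucose||95|mg/dL|70-110|N\r"
        "OBX|2|NM|K^Potassium||4.0|mmol/L|3.5-5.1|N\r"
    )

    for segment in message.strip("\r").split("\r"):
        fields = segment.split("|")
        if fields[0] == "OBX":              # one observation per OBX segment
            test, value, units = fields[3], fields[5], fields[6]
            print(test.split("^")[1], value, units)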

Mobile device integration While not ubiquitous by any means, LIS developers are increasingly including support for mobile devices in their software, usually in the form of a separate mobile version of the software or a web portal. Research and development labs, for example, can potentially put mobile technology to use in the laboratory for tasks such as remotely monitoring a lab or using mobile phone microscopy. Those uses aside, the relatively simple action of recording and reviewing laboratory research results while on the move or at a conference gives researchers flexibility, and LIS developers like McKesson are beginning to include that functionality.


Third-party software integration A few LIS vendors either incorporate third-party software into their product or they provide the means to integrate the LIS with other applications. The most typical integration involves simply communicating with common authoring tools like Microsoft Word, allowing users to work directly from the third-party application and then transferring the information to the LIS.

Alarms and/or alerts Alarms and alerts in a LIS can be automatic or scheduled, and they can come in the form of an e-mail, a pop-up message, or a mobile text message. For example, when a test result goes out-of-range, an automatic warning message can appear on the screen of the lab analyst responsible for the test. Another example: a scheduled alert can be e-mailed to a lab technician every month indicating a piece of laboratory equipment needs routine maintenance. Both scenarios represent a tiny fraction of the possible implementations of alarms and alerts in a LIS, highlighting how powerful (yet easy to take for granted) this feature is.
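
The first scenario above, reduced to a sketch: e-mailing an out-of-range alert with Python's standard smtplib. The mail host, addresses, and triggering condition are placeholders for whatever a real rule engine would supply.

    import smtplib
    from email.message import EmailMessage

    def send_alert(test_code, value, recipient, host="mail.example.org"):
        """E-mail an out-of-range alert to the responsible analyst."""
        msg = EmailMessage()
        msg["Subject"] = f"LIS alert: {test_code} result out of range"
        msg["From"] = "lis-alerts@example.org"
        msg["To"] = recipient
        msg.set_content(f"Result {value} for test {test_code} exceeded its limits.")
        with smtplib.SMTP(host) as server:    # placeholder mail host
            server.send_message(msg)

    # A rule engine would normally decide this; a literal check stands in here.
    if 140 > 110:
        send_alert("GLU", 140, "labtech@example.org")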

Work-related time tracking This feature specifically refers to a LIS's ability to track the amount of time an employee spends at work in general (for payroll purposes) or on more specific projects and tasks (as part of an employee work evaluation program). May also be referred to as "workload tracking."

Voice recognition system A LIS with this feature allows some functions of the software (for example, accessing test results) to be accessed via voice commands.

External monitoring This feature allows clients and/or collaborators outside the laboratory to monitor the status of experiments, test results, and more via an online web portal or, less commonly, as activity alerts sent via e-mail or SMS.

Messaging The messaging feature of a LIS may refer to one or both of two things:
• a built-in instant messaging system that allows users to converse with each other through text messages in real time
• an SMS text messaging integration that allows the users or the LIS itself to send messages or alerts to one or more users' mobile or smart phones

Commenting Clinical data collection and research collaboration require data sharing and communication tools to be most effective. One of the collaborative communication features of some LISs is commenting on test results, patient records, or study protocols.


Multilingual If a LIS is listed as multilingual, it's an indication the software interface can be configured to display more than one language depending on the preference a user or administrator chooses. Some LIS interfaces can only be displayed in one of two languages (English or German, for example), while others come configured with support for dozens of languages.

Network-capable This feature is perhaps archaic and/or obvious, but it is mentioned nonetheless. It's generally applied to a non-web-based LIS installed over a local or wide-area computer network, essentially indicating the LIS is not an isolated application, but rather one that can interface with other instances or other networked instruments.

Web client or portal A LIS with a web client or portal is either a web-based LIS (one that is not installed on every computer, but rather is hosted on a server and accessed via a web browser) or a non-web-based LIS with an included portal to access it via the Internet.

Online or integrated help This indicates a LIS has help infrastructure integrated into the software, support documentation via the vendor's website, or both.

Software as a service delivery model This indicates the software can be licensed and utilized via the software as a service (SaaS) delivery model.

Usage-based cost While rare, some software vendors allow potential clients to license and utilize the vendor's software under a usage-based cost model. An example of this model in use is Bytewize AB's O3 LimsXpress, which has a cost directly related to the number of samples processed each month.


LIMS and laboratory informatics questionnaire

The intention of this document is 1. to assist labs searching for a laboratory informatics product with identifying their system needs and 2. to help labs better determine if a specific vendor/product meets their requirements in the form of a request for information (RFI). The idea is to allow users to incorporate a standardized specifications sheet in their comparison of various LIMS and other laboratory informatics products. This questionnaire lists the extensive requirements of a LIMS as well as other laboratory informatics systems. (See below for more about this.) The questionnaire is organized such that sections 1.0 through 1.4 offer questions applicable to most any laboratory informatics system, be it a LIMS or an ELN. Section 1.5 covers functionality found specifically in software systems other than LIMS. This questionnaire is comprehensive and includes many items that do not apply to every lab. Additionally, some laboratories' requirements include a functionality item not common to other labs. Section 1.6 "Industry-specific" contains a selection of those industry-specific requirements and will continue to be amended over time. The last section, 1.7 "Custom functions," is designed for the vendor to insert any additional functionality that doesn't fall under the categories provided. When referencing a particular item for someone else, use the section number followed by the requirement letter, e.g. 1.4.2.f for "Does your system allow the administrator to create custom screens, applications, and reports? Please give details."

More about this questionnaire As noted above, this questionnaire was originally designed to cover aspects of a laboratory information management system (LIMS). However, a significant portion of it, if not most of it, could easily apply to other laboratory informatics systems like ELNs. As such, we took the approach of adding addenda (as seen in sections 1.5 and 1.6) that provide additional requirements unique to other systems and industries. If you're evaluating several industry-neutral LIMS, you likely don't need 1.5 and 1.6.

Requirement code and notes In responding to each requirement, the vendor must select a requirement code from the following:
• Y: Meets requirement in commercial off-the-shelf solution as delivered/configured (or vendor provides service)
• YC: Meets requirement only with customization (additional code, using a third-party application, etc.)
• N: Does not meet requirement
• I: Informational response only, N/A

The vendor should ideally enter a requirement code and a response for each functionality question.



1.0 Vendor information
• Company name
• Physical address
• Website
• LIMSwiki web page
• Contact name and title
• Contact e-mail
• Contact phone and fax
• Years in business

1.1 Vendor services
Request for information (vendor to enter a requirement code and response for each item):

a. Does the vendor offer an online demonstration and/or an on-site demonstration? b. Does the vendor provide a detailed project approach and plan that includes the project team, timeline, deliverables, and risk and issue management procedures? c. Does the vendor explain their overall project approach, acknowledgement of the deliverables, time/schedule constraints, and any other criteria for the project? d. Does the vendor provide reliable cost estimates and pricing schedules, including all products and services in the scope of work? e. Can the vendor detail the amount of time and staff that purchaser will have to provide for the implementation process? f. Can the vendor explain the maintenance and support offered during and after implementation, including times and methods of availability, issue escalation and management, etc.? Give details. g. Does the vendor provide a support schedule for the implementation process, including optional support levels and their function and availability? Give details. h. Does the vendor provide support during the "go-live" period between system validation/operational deployment and final acceptance/beginning of maintenance and support agreements? i. Does the vendor provide a gap analysis after initial system installation, identifying the deliverables or tasks remaining? j. Does the vendor provide a table linking each deliverable to the corresponding user requirement specification it fulfills? k. Does the vendor use a consistent training methodology for training new users? Give details. l. Does the vendor supply LIMS-specific training program curricula? m. Does the vendor provide user, administrator, developer, installation, and reference manuals? Give details. n. Does the vendor provide design qualification documentation? o. Does the vendor provide installation qualification documentation? p. Does the vendor provide operation qualification documentation? q. Does the vendor provide performance qualification documentation during implementation?



r. Does the vendor provide well-documented system upgrades that authorized users can independently install? s. Does the vendor provide source code for the system? t. Does the vendor provide an optional comprehensive set of test codes suitable for use by the purchasing facility?

1.2 Information technology
1.2.1 General IT
Request for information (vendor to enter a requirement code and response for each item):

a. Does your system operate with a web-based interface (hosted on a server and accessed via a web browser) or on a more traditional client-server architecture? If web-based, what technology does it support? b. Does your system contain a single database that supports multiple laboratory sites and departments? c. Does your system's database conform to the Open Database Connectivity Standard (ODBC)? d. Did you design your system so upgrades to the back-end database do not require extensive reconfiguration or effectively cripple the system? Please describe. e. Did you design your system to not be impacted by multiple users or failover processes? Please describe. f. Does your system apply security features to all system files? g. Does your system apply login security to all servers and workstations accessing it? h. Does your system provide a workstation and server authentication mechanism? i. Does your system apply Secured Socket Layer (SSL) encryption on the web client interface? j. Does your system encrypt client passwords in a database with support for multi-case and special characters? k. Does your system provide all secured users access to its data via the Internet, LAN, or direct modem connection? l. Does your system use TCP/IP as its network transport? m. Does your system contain an archive utility that doesn't require off-line mode? n. Does your system provide local backup and restore capability without support intervention? o. Does your system maintain the transactional history of system administrators? p. Does your system maintain an analyst communication log, accessible by the administrator? q. Does your system architecture facilitate the incorporation of new technology and interfaces?



1.2.2 Hardware environment
Request for information (vendor to enter a requirement code and response for each item):

a. Is your system compatible with a variety of hardware environments? Please describe how.
b. Can your system be utilized with a touch-screen?

1.2.3 Software environment
Request for information (vendor to enter a requirement code and response for each item):

a. Does your system utilize a non-proprietary database such as Oracle or Microsoft SQL Server? Please explain.
b. Is your system compatible with a variety of software environments? Please describe how.

1.3 Regulatory compliance and security
1.3.1 Regulatory compliance
Request for information (vendor to enter a requirement code and response for each item):

a. Does your system support 21 CFR Part 11 and 40 CFR Part 3 requirements, including login security, settable automatic logouts, periodic requirements for mandatory password changes, limits on reusability of passwords, and full electronic signature? Please explain in detail. b. Does your system support ISO/IEC 17025 requirements? Please explain how. c. Does your system support HIPAA requirements? Please explain how. d. Does your system support GALP and/or GAMP standards? Please explain how. e. Does your system support the standards of The NELAC Institute? Please explain how. f. Does your system meet government requirements for handling classified information and documents?



g. Does your system maintain audit and specification violation trails of all data manipulation — such as result and header information changes — as consistent with all applicable regulations and standards? Provide details. h. Does your system's audit log retain all data, prohibit any deletions, allow user comments, and allow reporting of contained information? i. Does your system provide additional persistent auditing capabilities, such as the audit of cancelled tests and scheduled system functions? If so, what? j. Does your system provide user-selectable NELAP-compliant internal chain of custody that tracks all samples and associated containers from the time they are collected until disposed of? Please explain how. k. Does your system provide the ability to insert/manage secure electronic and/or digital signatures? l. Does your system incorporate automatic date and time stamping of additions, changes, etc.?

1.3.2 Security
Request for information (vendor to enter a requirement code and response for each item):

a. Does your system allow system administrators and managers to configure multiple levels of user rights and security by site location, department, role, and/or specific function? Please explain the depth of this security. b. Does your system allow administrators to reset user passwords? c. Does your system enforce rules concerning password complexity, reuse, and expiration? If so, how? d. Does your system provide automatic logout based on keyboard or mouse inactivity? e. Does your system prompt users for a reason for database record changes? f. Does your system allow administrators to modify records, while also maintaining an audit trail of such actions? g. Does your system allow authorized personnel to review audit logs at will? h. Does your system allow authorized users to query and print chain of custody for items, cases, projects, and batches? i. Does your system allow supervisors to override chain of custody? j. Does your system automatically track when supervisors review critical result values? k. Does your system provide email notification of lockout, security access, and improper workstation access? l. Does your system allow multiple users to connect simultaneously to a contract lab? m. Does your system provide read-only access to contract laboratory results? n. Does your system prohibit issuing reports outside of qualified areas while also allowing reports to be viewed locally or remotely based on security application limits and/or sample ownership? If so, how?

1.4 General system functions


1.4.1 General functions
Request for information (vendor to enter a requirement code and response for each item):

a. Does your system offer non-LIMS trained personnel the ability to easily access system data via an intuitive, user-friendly Windows-type graphical user interface (GUI) which permits the display of data from sample points, projects, and user-defined queries, and can be configured to language, character set, and time zone needs? b. Does your system permit remote access for users, system admins, and support agents? c. Does your system allow for the use of navigation keys to freely move from field to field? d. Does your system allow data tables to be sorted? e. Can your system send on-screen output to a printer or file? If so, does it respect view-only statuses? f. Does your system provide single data entry, automatically populate other data fields, and remember pertinent and relevant data so it doesn't need to be re-entered, selected, or searched for? g. Does your system support multiple users entering data simultaneously? h. Does your system eliminate (or significantly reduce) redundant data entry and paper trails? If so, how? i. Does your system contain one or more spell-check dictionaries that allow authorized users to add, edit, or remove entries? j. Does your system provide full database keyword and field search capability, including the use of multiple search criteria? k. Does your system include the ability to search multiple databases, including those containing legacy data? l. Does your system interface with or import existing data from other systems and/or databases? m. Does your system cleanly convert migrated data to allow for reporting of historical sample collections? If so, how? n. Does your system provide data archival and retention functionality for both paper-based and electronic laboratory records? If so, what is your system strategy for maintaining the archives as technology changes? o. Does your system allow users to associate and store both sample- and non-sample-related objects such as pictures from microscopes, GCMS scans of peaks, PDF files, spreadsheets, or even raw data files from instrument runs for later processing? p. Does your system store more non-traditional information and objects like project- or sample-specific special information fields, user-defined fields, scanned chain of custodies, and digital photos of such items as sample events, bitmaps, movies, and .wav audio files? q. Does your system issue sequential numbers for chain of custody? r. Does your system's numbering scheme allow for sub-numbering while maintaining parent-child relationships? s. Does your system efficiently utilize standardized data input points and enhanced individual workload tracking? t. Does your system capture data from all laboratory processes, ensuring uniformity of statistical reporting and other electronic data shared with designated users of the data? u. Does your system link or embed standard operating procedures (SOPs) to/in other objects like analysis requests and test results? v. Does your system notify users of events like the scheduling, receipt, and completion of tasks? w. Does your system include the ability to set up alerts via email? x. Does your system have real-time messaging capabilities, including instant messaging to one or more users? y. Does your system support the use of a voice recognition system (for navigation or transcription) or have that functionality? z. Does your system offer integrated or online user help screens?


1.4.2 Configuration and customization
Request for information (vendor to enter a requirement code and response for each item):


a. Can your system be configured to meet the workflow of a laboratory without additional programming? Please explain how. b. Can your system easily and efficiently be modified to meet lab growth and changing business needs? Please explain how. c. Does your system include an application programming interface (API)? If so, what kind? If web, does it use Simple Object Access Protocol (SOAP) or representational state transfer (REST)? d. Can your system expand to accommodate a new discipline? If so, how? e. Can your system support customized screens with user-definable information specific to a customer, department, analysis, etc.? f. Does your system allow the administrator to create custom screens, applications, and reports? Please give details. g. Does the system allow a user to independently add fields without requiring reconfiguration of the system, even after routine upgrades and maintenance? h. Does your system allow a user to independently add universal fields on all samples logged into the system at any time during or after implementation, while neither voiding the warranty nor requiring vendor review at a later date? i. Does your system support the definition and maintenance of edit tables and lists? j. Does your system dynamically change captions (labels) on system fields? k. Does your system have dynamically configurable limit periods and notification hierarchy? l. Does your system allow for the integration of additional printers and scanners both locally and externally?

1.4.3 Receiving and scheduling
Request for information (vendor to enter a requirement code and response for each item):

a. Does your system track status and workflow of the accession throughout the laboratory lifecycle, from submission to final analysis, including receiving, diagnostic testing, diagnostic test result reporting, and billing? b. Does your system support barcoded specimen labeling and tracking? c. Does your system create and maintain a unique electronic accession record for each accession received? d. Does your system support standard-format digital picture and document upload and attachment to electronic accession records? e. Does your system support a user-configurable, spreadsheet-style, templated multi-sample (batch) login without requiring additional programming? f. Does your system support the modification of sample or sample batch information prior to actual multi-sample (batch) login? g. Does your system support ad-hoc samples not predefined in the sample point list during multi-sample (batch) login? h. Does your system create, save, and recall pre-login groups for routine samples to simplify recurring logins? i. Does your system streamline the login of recurring sampling projects?


j. Does your system automatically generate labels for recurring samples and sample groups? k. Does your system allow authorized users to generate user-definable or rules-based chain of custodies, worksheets, routing sheets, and custom labels upon sample login? l. Does your system provide a comprehensive view of all samples and projects in the system using a color-coded status view of the current and scheduled samples via user-configurable templates, all without requiring additional programming? m. Does your system include environmental monitoring (EM) functionality or integrate with an external EM product? n. Does your system prevent a sample from being placed in a report queue until approved? o. Does your system include comprehensive sample scheduling, tracking, and sample flow management? p. Does your system allow authorized users to accept, cancel, re-run, and override attributes of one or multiple tests for a given patient? q. Does your system allow authorized users to review the available test types in the system, including their reference range and units of measure? r. Does your system have a "miscellaneous" test code to allow a test undefined in the system to be ordered and billed? s. Does your system allow authorized users to schedule routine samples on an hourly, daily, weekly, or monthly basis, allowing them to be enabled and disabled as a group? t. Does your system generate an hourly, daily, weekly, or monthly sampling schedule from a schedule database? u. Does your system schedule and assign tasks based on available inventory and personnel? v. Does your system support automatic assignment and scheduling of analysis requests? w. Can your system receive accession/analysis request information from web-enabled forms? x. Can your system electronically receive and process collection and analysis request information and schedules from third parties? y. Does your system have an inter-lab transfer function? z. Can your system process automated uploading of field-derived sample collection data? aa. Does your system allow users to handle billable and non-billable tests on the same accession? ab. Does your system support tracking of shipping and receiving?

1.4.4 Analysis and data entry

a. Does your system support a variety of test protocols, each capable of storing test comments, tests required, and special information like GC-MS conditions or special objects associated with the test? Please give details.
b. Does your system provide normal data range values for diagnostic tests?
c. Does your system include default input values for diagnostic tests?
d. Does your system provide for a single test code requiring multiple analytes as targets?
e. Does your system limit test code authorization to only qualified personnel and maintain their certification(s) to run assigned tests?
f. Does your system support and qualify text-based tests?
g. Does your system support single-component tests such as pH, BOD, CD, etc.?


h. Does your system allow users to specify a single-component, multi-component, or narrative text test or group of tests, which represent all tests required?
i. Does your system permit user-generated and modifiable calculations (based on a formulaic language) to be applied to all tests? (A calculation-evaluation sketch follows this list.)
j. Does your system distinguish between routine and duplicate analysis?
k. Does your system provide an overview of all outstanding tests/analyses for better coordination of work schedules?
l. Does your system notify analysts of applicable safety hazards associated with a sample, reagent, or test before testing begins?
m. Does your system electronically transfer an item during testing from one functional area to another?
n. Does your system's user interface display visual indicators such as status icons to indicate a sample's status in the workflow?
o. Does your system allow file transfer of data from instruments via intelligent interfaces or multi-sample/multi-test ASCII files, with full on-screen review prior to database commitment?
p. Does your system permit manual data entry into an electronic worksheet of test measurements and results?
q. Does your system allow incorrectly entered data to be manually corrected?
r. Does your system provide colored visual indication of previously entered data as well as new data associated with a single sample when a result is entered, with the indicator changing color if the value is out of specification?
s. Does your system allow automated or semi-automated data insertion?
t. Does your system store non-narrative textual results in searchable fields?
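Item i above asks about user-definable calculations. A common safeguard is to evaluate stored formulas with an arithmetic-only interpreter rather than a general-purpose one; the Python sketch below shows that idea. The formula syntax is illustrative and does not represent any specific vendor's formula language.

import ast
import operator

_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv,
        ast.Pow: operator.pow, ast.USub: operator.neg}

def evaluate(formula, variables):
    # Evaluate an arithmetic formula over named test results, refusing
    # anything other than numbers, names, and basic operators.
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.Name):  # a named result, e.g. CONC or FLOW
            return variables[node.id]
        raise ValueError("Disallowed element in formula")
    return walk(ast.parse(formula, mode="eval"))

# Example: a loading calculation from concentration (mg/L) and flow (MGD).
print(evaluate("CONC * FLOW * 8.34", {"CONC": 12.5, "FLOW": 1.2}))  # ~125.1 lb/day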

1.4.5 Post-analysis and validation

a. Does your system update sample/item status when tests are completed?
b. Can your system automatically reorder a test or order additional tests if results don't meet lab-defined criteria?
c. Does your system read results from previously entered tests to calculate a final result and immediately display the calculated result?
d. Does your system allow authorized users to review all analytical results, including pricing, spec violations, history or trend analysis by analyte, and comments?
e. Can your system graphically display the results of one or more tests in a graph (normalized or otherwise) for the purpose of visualizing data or searching for possible trends?
f. Does your system allow on-screen review of the stored test result, diluted result with corrected method detection limits (MDLs), and qualifiers after running samples for multiple dilutions as in gas chromatography–mass spectrometry (GC-MS)?
g. Does your system display the standard operating procedure (SOP) associated with each test result to ensure proper techniques were used?
h. Does your system store test-related analysis comments with the test?
i. Does your system provide auto-commenting for common laboratory result comments?
j. Does your system provide for high-volume multi-component transfers of test results, with the ability to automatically match samples to data files in either a backlog mode or a designated file mode, to parse the data, and to review and commit the sample data?


k. Does your system's results validation process access all information about a sample or group of samples, including comments or special information about the sample?
l. Does your system's results validation process check each result against its individual sample location specifications (both warning and specification limits)? (A limit-check sketch follows this list.)
m. Does your system support validation at the analysis and sample level, while also prohibiting sample validation when analysis validation is incomplete?
n. Does your system use a menu-driven process for results validation?
o. Does your system provide secure electronic peer review of results?
p. Can your system clearly differentiate released preliminary data from fully validated results?
q. Does your system validate/approve data prior to being moved to the main database?
r. Does your system fully manage all aspects of laboratory quality control, including the reporting and charting of all quality control data captured in the lab? Please explain how.
s. Does your system provide a base for a quality assurance program, including proficiency testing, scheduled maintenance of equipment, etc.? Please explain how.
t. Does your system distinguish QA/QC duplicates from normal samples?
u. Does your system allow QA/QC tests to be easily created and associated with the primary analytical test?
v. Does your system allow manual entry of QA and QC data not captured as part of the system's regular processes?
w. Does your system calculate monthly QA/QC percentages for testing?
x. Does your system automatically flag out-of-range quality control limits?
y. Does your system check data files for specification and correct them for specific reporting and analyte limits and qualifiers like dilution factor, automatically assigning qualifiers based on project analyte limiting?
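Items l and x above concern automated limit checking. A minimal sketch of such a check, assuming each analyte carries both warning and specification limits, might look like the following; the record layout is illustrative only.

def check_result(value, warn_low, warn_high, spec_low, spec_high):
    # Return a qualifier flag for a numeric result against its limits.
    if value < spec_low or value > spec_high:
        return "SPEC"   # out of specification: hold for review, notify QA
    if value < warn_low or value > warn_high:
        return "WARN"   # inside spec but outside warning limits: watch the trend
    return "OK"

print(check_result(7.9, 6.5, 8.5, 6.0, 9.0))  # OK
print(check_result(9.2, 6.5, 8.5, 6.0, 9.0))  # SPEC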

1.4.6 Instruments

a. Does your system bilaterally interface with instruments and related software? If so, please provide details.
b. Can your system download data directly from laboratory instruments?
c. Does your system permit the defining and exporting of sequences to instruments?
d. Does your system track and report on laboratory equipment usage?
e. Does your system allow automatic or manual reservation/scheduling of laboratory instruments?
f. Does your system automatically (or manually allow an authorized user to) remove an instrument from potential use when it falls out of tolerance limit or requires scheduled calibration? (An availability-check sketch follows this list.)
g. Does your system provide a database of preventative maintenance, calibration, and repair records for laboratory equipment, preferably supported by standardized reporting?
h. Can your system schedule calibration, verification, and maintenance tasks in the worksheets or workflow process and make that schedule available for viewing?
i. Does your system allow users to create and edit instrument maintenance profiles?
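Item f above describes removing an out-of-calibration instrument from use. A minimal sketch of that availability test, assuming a simple calibration-interval record, follows; the fields are illustrative.

from datetime import date, timedelta

def instrument_available(last_calibrated, calibration_interval_days, today=None):
    # An instrument is usable only while its calibration is current.
    today = today or date.today()
    return today <= last_calibrated + timedelta(days=calibration_interval_days)

print(instrument_available(date(2015, 1, 15), 90, today=date(2015, 3, 23)))  # True
print(instrument_available(date(2014, 11, 1), 90, today=date(2015, 3, 23)))  # False: lock out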


1.4.7 External system interfaces


a. Does your system support a library of common and/or basic electronic data deliverable (EDD) formats? If so, which?
b. Can your system transfer data to and from another record management system? If so, how?
c. Does your system integrate with Microsoft Exchange services?
d. Can your system import data from and export data to Microsoft Word, Excel, and/or Access?
e. Can your system interface with non-Microsoft programs? If so, which?
f. Can your system interface with external billing systems? If so, how?
g. Can your system interface with enterprise resource planning (ERP) systems? If so, how?
h. Can your system interface with external contract or reference laboratories to electronically send or retrieve datasheets, analysis reports, and other related information?
i. Can your system exchange data with National Animal Identification System (NAIS) tracking systems?
j. Can your system generate and exchange data with other systems using Health Level 7 (HL7) standards? (An HL7 message sketch follows this list.)
k. Can your system leverage the application programming interface (API) of other systems to establish integration between systems?
l. Does your system provide a real-time interface for viewing live and stored data transactions and errors generated by interfaced instruments and systems?
m. Can your system transmit status changes of samples, inventory, equipment, etc. to an external system?
n. Can your system direct output from ad-hoc queries to a computer file for subsequent analysis by other software?
o. Does your system support the manual retransmission of data to interfaced systems?
p. Does your system support dockable mobile devices and handle information exchange between them and the system?
q. Does your system support the use of optical character recognition (OCR) software?
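To illustrate item j, the sketch below assembles a bare-bones HL7 version 2 result message (ORU^R01) from pipe-delimited segments. It is only a sketch: real HL7 interfaces require agreed conformance profiles, acknowledgment handling, and a transport such as MLLP, and all identifiers here are placeholders.

from datetime import datetime

def build_oru(sample_id, analyte_code, analyte_name, value, units):
    # Compose MSH (header), OBR (order), and OBX (observation) segments.
    ts = datetime.now().strftime("%Y%m%d%H%M%S")
    segments = [
        "MSH|^~\\&|LIMS|LAB|EHR|CLINIC|" + ts + "||ORU^R01|" + sample_id + "|P|2.5.1",
        "OBR|1|" + sample_id + "||" + analyte_code + "^" + analyte_name,
        "OBX|1|NM|" + analyte_code + "^" + analyte_name + "||" + value + "|" + units + "|||||F",
    ]
    return "\r".join(segments)  # HL7 v2 separates segments with carriage returns

print(build_oru("S-1001", "2345-7", "GLUCOSE", "95", "mg/dL"))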

1.4.8 Reporting

a. Does your system include a versatile report writer and forms generator that can generate reports from any data in tables? If so, please provide details.
b. Does your system include a custom graphic generator for forms?
c. Does your system interface with a third-party reporting application?
d. Does your system allow the development of custom templates for different types of reports?
e. Does your system maintain template versions and renditions, allowing management and tracking of the template over time?
f. Can your system generate template letters for semi-annual reports?
g. Does your system support report queries by fields/keys, status, completion, or other variables?


h. Does your system use Microsoft Office tools for formatting reports?
i. Does your system support multiple web browsers for viewing online reports? If so, list which ones.
j. Can your system generate, store, reproduce, and display laboratory, statistical, and inventory reports on demand, including narrative?
k. Does your system include several standard reports and query routines to access all samples with the pending status through a backlog report that includes the following criteria: all laboratory, department, analysis, submittal date, collection date, prep test complete, location, project, sample delivery group, and other user-selectable options?
l. Can your system indicate whether a report is preliminary, amended, corrected, or final while retaining revision history?
m. Does your system support both structured and synoptic reporting?
n. Can your system generate management and turn-around time reports and graphs?
o. Can your system generate customized final reports?
p. Can your system automatically generate laboratory reports of findings and other written documents?
q. Can your system automatically generate individual and aggregate workload and productivity reports on all operational and administrative activities?
r. Can your system automatically generate and transmit exception trails and exception reports for all entered and/or stored out-of-specification data?
s. Can your system generate a read-only progress report that allows for printed reports of sample status and data collected to date?
t. Does your system provide an ad-hoc web reporting interface to report on user-selected criteria?
u. Can your system automatically generate and update control charts?
v. Can your system generate QA/QC charts for all recovery, precision, and lab control samples via a full statistics package, including Levey-Jennings plots and Westgard multi-rule? (A Westgard-rule sketch follows this list.)
w. Does your system display history of previous results for an analyte's sample point in a tabular report, graphic trend chart, and statistical summary?
x. Can your system automatically generate and post periodic static summary reports on an internal web server?
y. Does your system transmit results in a variety of ways including fax, e-mail, print, and website in formats like RTF, PDF, HTML, XML, DOC, XLS, and TXT? Please explain.
z. Does your system electronically transmit results via final report only when all case reviews have been completed by the case coordinator?
aa. Does your system include a rules engine to determine the recipients of reports and other documents based on definable parameters?
ab. Does your system allow database access using user-friendly report writing and inquiry tools?
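Item v above references Levey-Jennings charting and the Westgard multi-rule. The following Python sketch applies two of the common rules (1-3s and 2-2s) against an established QC mean and standard deviation; a production QC package would implement the full rule set.

def westgard_flags(values, target_mean, target_sd):
    # z-score each QC value against the established (validated) mean and SD,
    # as plotted on a Levey-Jennings chart.
    z = [(v - target_mean) / target_sd for v in values]
    flags = []
    for i, zi in enumerate(z):
        if abs(zi) > 3:
            flags.append((i, "1-3s"))  # one point beyond +/-3 SD: reject run
        if i > 0 and ((z[i-1] > 2 and zi > 2) or (z[i-1] < -2 and zi < -2)):
            flags.append((i, "2-2s"))  # two consecutive points beyond the same +/-2 SD limit
    return flags

print(westgard_flags([100, 101, 99, 102, 98, 117, 100, 101], 100, 5))
# [(5, '1-3s')]: the sixth point sits 3.4 SD above the established mean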

1.4.9 Laboratory management

a. Does your system allow the creation, modification, and duplication of user profiles?
b. Does your system allow entry, maintenance, and administration of customers, suppliers, and other outside entities?
c. Does your system allow the creation, modification, and maintenance of user training records and associated training materials?
d. Does your system allow the management of information workflow, including notifications for requests and exigencies?
e. Does your system allow the management of documents like SOPs, MSDS, etc. to better ensure they are current and traceable?
f. Does your system allow the management and monitoring of resources by analyst, priority, analysis, and instrument?
g. Does your system allow authorized persons to select and assign tasks by analysts, work group, instrument, test, sample, and priority?
h. Does your system allow authorized persons to review unassigned work by discipline and by lab?
i. Does your system allow authorized persons to review pending work by analyst prior to assigning additional work?
j. Does your system manage and report on reference samples, reagents, and other inventory, including by department? If so, to what extent?
k. Does your system automatically warn specified users when inventory counts reach a definable threshold and either prompt for or process a reorder? (A reorder-check sketch follows this list.)
l. Does your system allow authorized users to monitor and report on reference and reagent creation, use, and expiration?
m. Does your system allow authorized users to search invoice information by invoice number, account number, accession, payment types, client, or requested diagnostic test(s)?
n. Does your system include performance assessment tracking?
o. Can your system receive, record, and maintain customer and employee feedback and apply tools to track the investigation, resolution, and success of any necessary corrective action?
p. Does your system monitor proficiency test assignment, completion, and casework qualification for analytical staff?
q. Does your system provide analysis tools to better support laboratory functions like resource planning, productivity projections, workload distribution, and work scheduling? Do those tools display information in a consolidated view, with the ability to drill down to more detailed data? Please explain.
r. Does your system calculate administrative and lab costs?
s. Does your system capture and maintain patient, submitter, supplier, and other client demographics and billing information for costing, invoicing, collecting, reporting, and other billing activities?
t. Does your system support multiple customer payment sources (e.g., grants)? Please explain the extent.
u. Does your system track the number of visits per specific industry?
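Item k above describes threshold-based inventory warnings; a minimal sketch of the reorder check, with invented item names and thresholds, follows.

def reorder_alerts(inventory, thresholds):
    # Return the items whose on-hand count has fallen to or below threshold.
    return [item for item, count in inventory.items()
            if count <= thresholds.get(item, 0)]

inventory = {"buffer_pH7": 3, "nitrate_reagent": 12, "tips_1000uL": 40}
thresholds = {"buffer_pH7": 5, "nitrate_reagent": 10, "tips_1000uL": 50}
print(reorder_alerts(inventory, thresholds))  # ['buffer_pH7', 'tips_1000uL']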


1.5 System-specific The system-specific addendum can be found here.

1.6 Industry-specific The industry-specific addendum can be found here.

1.7 Custom requirements
a.
b.
c.
d.
e.
f.
g.
h.
i.
j.



5. More Laboratory Informatics Applications

Electronic laboratory notebook
An electronic laboratory notebook (also known as electronic lab notebook or ELN) is a software program or package designed to replace more traditional paper laboratory notebooks. Laboratory notebooks in general are used by scientists and technicians to document, store, retrieve, and share fully electronic laboratory records in ways that meet all legal, regulatory, technical, and scientific requirements. A laboratory notebook is often maintained to be a legal document and may be used in a court of law as evidence. Similar to an inventor's notebook, the lab notebook is also often referred to in patent prosecution and intellectual property litigation. Modern electronic lab notebooks have the advantages of being easier to search, supporting collaboration among many users, and being capable of greater security than their paper counterparts.

[Image: Alexander Graham Bell's unpublished lab notebook, well before the invention of the ELN]

History of the ELN
While some credit Dr. Keith Caserta with the concept of an electronic version of the laboratory notebook, it's likely that others had similar early ideas on how to integrate computing into the process of laboratory note-taking. Significant discussion concerning the transition from a pen-and-paper laboratory notebook to an electronic format was already in full swing in the early 1990s. During the 206th National Meeting of the American Chemical Society in August 1993, an entire day of the conference was dedicated to talking about "electronic notebooks" and ELNs. "A tetherless electronic equivalent of the paper notebook would be welcomed by the working scientist," noted Virginia Polytechnic Institute's Dr. Raymond E. Dessy for the conference. Dessy had in the mid-1980s begun postulating on the idea of an electronic notebook, and by 1994 he provided one of the first working examples of an ELN. By 1997, a special interest group called the Collaborative Electronic Notebook Systems Association (CENSA) had formed. Supported by 11 major pharmaceutical and chemical companies, the consortium worked with scientific software and hardware vendors to facilitate the creation of an ELN that met the technical and regulatory needs of its members. The consortium at that time envisioned a collaborative ELN that "teams of scientists worldwide can use to reliably capture, manage, securely share, and permanently archive and retrieve all common data and records generated by research and development and testing labs." That same year, development of an enterprise-wide ELN at Kodak's research facilities in England was in full swing. The Kodak ELN was "implemented as a collection of Lotus Notes databases and applications," making it arguably one of the first enterprise ELN solutions in use at the time. In 1998 one of the first web-based versions of an ELN was introduced in the form of the University of Oregon's Virtual Notebook Environment (ViNE), "a platform-independent, web-based interface designed to support a range of scientific activities across distributed, heterogeneous computing platforms." This innovation would go on to inspire

vendors in the 2000s to develop web-based thin-client ELNs for laboratories everywhere. Yet it likely wasn't until the Electronic Signatures in Global and National Commerce Act (ESIGN) of June 2000 that the true legal implications of a fully electronic laboratory notebook became clear to the industry. If an ELN were to be responsible for providing validation during the patent process and be valid for other types of audits, a mechanism for authenticating the origin of the ideas would be necessary. The ESIGN act meant that electronic records could be authenticated and digital signatures made legally binding, lending further relevancy to ELNs. Instead of searching through notebooks and piles of documents, printing material, and submitting thousands of pages for an FDA audit, ELN users could suddenly collate and submit electronic records, saving time and headaches. Enthusiasm for ELNs began to pick up again in the early 2000s, with a strong case for further data integration into ELNs being made at the CENSA-supported International Quality & Productivity Center (IQPC) conference in London during September 2004. During that conference the push for stronger data integration was made, with the base premise that "ELNs would improve corporate strategy by allowing information to be used more intelligently with the help of decision-support software." By early 2007, industry-specific ELNs were pushing growth in the market: Scientific Computing World estimated that 83 percent of related organizations had declared interest in ELNs, with 43 percent of those organizations seriously considering an evaluation or purchase. Despite the beginnings of an economic downturn in the late 2000s, Atrium Research later estimated that the ELN's market potential was around $1.7 billion. During this time scientists and academics — traditionally slow to adopt technological change — were gradually warming up to the benefits of an electronic laboratory notebook. Academics in particular realized the problems the high postdoc turnover rate created in research laboratories. Postdocs would depart from the university, leaving PIs and directors scratching their heads on where the data ended up. ELNs changed that, allowing much more persistent data that can be found and referenced even after a postdoc departs. The movement towards ELN integration into other laboratory functions during the 2000s eventually led to the blurring of what an ELN actually is. In early 2007 Scientific Computing World reported that the definition of an ELN varied among scientists, with only 35 percent of them stating they were "clear about the difference between a LIMS and an ELN." Today it's possible to see in some vendors' offerings the formerly distinct entity that was the ELN now completely integrated into a LIMS.

Regulations and legal aspects
The laboratory accreditation criteria found in the ISO 17025 standard need to be considered for the protection and computer backup of electronic records[citation needed]. These criteria can be found specifically in clause 4.13.1.4 of the standard. Electronic lab notebooks used for development or research in regulated industries, such as medical devices or pharmaceuticals, are expected to comply with U.S. Food and Drug Administration (FDA) regulations related to software validation. The purpose of the regulations is to ensure the integrity of the entries in terms of time, authorship, and content. Unlike ELNs for patent protection, the FDA is not concerned with patent interference proceedings, but rather with avoidance of falsification. Typical provisions related to software validation are included in the medical device regulations at 21 CFR 820 (et seq.) and 21 CFR 11. Essentially, the requirements are that the software has been designed and implemented to be suitable for its intended purposes. Evidence to show that this is the case is often provided by a software requirements specification (SRS) that lays out the intended uses and needs that the ELN will meet. The SRS typically includes one or more testing protocols that, when followed, demonstrate that the ELN meets the requirements of the specification and that the requirements are satisfied under worst-case conditions. Security, audit trails, and the prevention of unauthorized changes without substantial collusion of otherwise independent personnel (i.e., those having no interest in the content of the ELN, such as independent quality unit personnel) are all fundamental to the ELN. Finally, one or more reports demonstrating the results of the testing in accordance with the predefined protocols are required prior to release of the ELN software for use. If the reports show that the software failed to satisfy any of the SRS requirements, then


corrective and preventive action (CAPA) must be undertaken. Such CAPA may extend from minor software revisions to changes in architecture or major revisions, and all CAPA activities must be documented.

Modern features of an ELN
ELNs are generally divided into two categories:
• A "specific" ELN contains features designed to work within specific applications, scientific instrumentation, or data types.
• A cross-disciplinary or "generic" ELN is designed to support access to all data and information that needs to be recorded in a lab notebook.
Among these two general categories are ELNs that capture two particular markets: individual researchers and group research teams. ELNs can be tailored to one or both types of markets, with both groups and individuals benefiting from the ELN's inherent ability to add structure to research records. Groups utilizing an ELN typically require two additional abilities: to share research data and communicate about their research. Modern features include, but are not limited to:
• importation of data which has already been captured elsewhere
• direct recording of data in various forms like text, images, and tables
• lending of structure to data and information through the use of preformatted or customizable templates which include a range of field types
• creation of links between records
• storage of fully searchable records in a secure database format
• inclusion of a messaging system for better collaboration
• a secure yet flexible environment to protect the integrity of both data and process while allowing for process changes
• generation of secure forms that accept laboratory data input via a computing device and/or laboratory equipment
• accommodation for networked or wireless communications
• a scheduling option for routine procedures such as equipment qualification and study-related timelines
• configurable qualification requirements

ELN vendors See the ELN vendor page for a list of ELN vendors past and present.

Further reading • LabCompliance News [1] • Taylor, Keith T. (2006). "The status of electronic laboratory notebooks for chemistry and biology" [2] (PDF). Current Opinion in Drug Discovery & Development 9 (3): 348–353. Retrieved 06 May 2011.

References
[1] http://www.labcompliance.com/
[2] http://www.symyx.com/products/pdfs/Electronic_laboratory_notebooks.pdf


Laboratory execution system
A laboratory execution system or LES is a "computer system employed in the laboratory at the analyst work level to aid in step enforcement for laboratory test method execution," according to the 2007 Annual Book of ASTM Standards. The general purpose of the LES is to direct the user to follow specific steps to ensure the rigidity of the test method and the process's end results, though alternate workflow routes may be applied in specific circumstances. The LES may encompass this functionality and more, including tasks like enforcing standard operating procedures (SOPs), validating calculations and instrument interfaces, and acquiring or importing procedural data from other systems into one common system. For some, an LES may be considered a sub-branch of an electronic laboratory notebook (ELN) specifically made for laboratories engaging in quality control and quality assurance applications, while others may consider it simply a separate set of functionality which may be found in an ELN or a LIMS. Some in the industry suggest the term "laboratory execution system" is a vendor-led morphing of the "method execution system," which was originally designed "to address the problem of standard operating procedure (SOP) compliance." In France, the English word "middleware" is used to refer to an LES.
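The step enforcement described above can be pictured as a small state machine that refuses out-of-order execution. The following Python sketch is a simplified illustration, not a description of any particular LES product, and the method steps shown are invented.

class MethodRun:
    def __init__(self, steps):
        self.steps = steps      # ordered SOP steps for the test method
        self.completed = 0      # index of the next step to perform

    def current_step(self):
        return self.steps[self.completed]

    def complete_step(self, step):
        # Refuse out-of-order execution: the core LES behavior.
        if step != self.current_step():
            raise RuntimeError("Step '" + step + "' attempted before '" +
                               self.current_step() + "' was completed")
        self.completed += 1

run = MethodRun(["weigh sample", "add extraction solvent",
                 "vortex 30 s", "read absorbance"])
run.complete_step("weigh sample")
run.complete_step("add extraction solvent")  # in-order steps succeed
# run.complete_step("read absorbance")       # would raise: step out of order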


Scientific data management system
A scientific data management system (SDMS) is a piece or package of software that acts as a document management system (DMS), capturing, cataloging, and archiving data generated by laboratory instruments (HPLC, mass spectrometry) and applications (LIMS, analytical applications, electronic laboratory notebooks) in a compliant manner. The SDMS also acts as a gatekeeper, serving platform-independent data to informatics applications and/or other consumers. As with many other laboratory informatics tools, the lines between a LIMS, an ELN, and an SDMS are at times blurred. However, there are some essential qualities of an SDMS that distinguish it from other informatics systems:

[Image: NIST tests standard interfaces for its lab equipment. SDMSs allow labs to integrate equipment data with other types of data.]

1. While a LIMS has traditionally been built to handle structured, mostly homogeneous data, an SDMS (and systems like it) is built to handle unstructured, mostly heterogeneous data.
2. An SDMS typically acts as a seamless "wrapper" for other data systems like LIMS and ELN in the laboratory, though sometimes the SDMS software is readily apparent.
3. An SDMS is designed primarily for data consolidation, knowledge management, and knowledge asset realization.[1]

An SDMS can be seen as one potential solution for handling unstructured data, which can make up nearly 75 percent of a research and development unit's data. This includes PDF files, images, instrument data, spreadsheets, and other forms of data rendered in many environments in the laboratory. Traditional SDMSs have focused on acting as a nearly invisible blanket or wrapper that integrates information from corporate offices (SOPs, safety documents, etc.) with data from lab devices and other data management tools, all to be indexed and searchable from a central


database. An SDMS also must be focused on increasing research productivity without sacrificing data sharing and collaboration efforts. Some of the things a standard SDMS may be asked to do include, but are not limited to[2]:
• retrieve worklists from LIMS and convert them to sequence files (the first of these is sketched below)
• interact real-time with simple and complex laboratory instruments
• analyze and create reports on laboratory instrument functions
• perform complex calculations and comparisons of two different sample groups
• monitor environmental conditions and react when base operating parameters are out of range
• act as an operational database that allows selective importation/exportation of ELN data
• manage workflows based on data imported into the SDMS
• validate other computer systems and software in the laboratory
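As a sketch of the first task above, the following Python fragment turns a simple worklist into an instrument sequence file. The CSV column layout is invented for illustration; real instruments each define their own sequence formats.

import csv
import sys

def worklist_to_sequence(worklist, out=sys.stdout):
    # Write worklist entries as sequence rows (vial position, sample, method).
    writer = csv.writer(out)
    writer.writerow(["Vial", "SampleName", "Method", "InjVolume_uL"])
    for position, entry in enumerate(worklist, start=1):
        writer.writerow([position, entry["sample_id"], entry["method"],
                         entry.get("inj_volume", 10)])

worklist_to_sequence([
    {"sample_id": "S-1001", "method": "EPA8260"},
    {"sample_id": "S-1002", "method": "EPA8260", "inj_volume": 5},
])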

SDMS vendors See the SDMS vendor page for a list of SDMS vendors past and present.

References
[1] Wood, Simon (2007). "Comprehensive Laboratory Informatics: A Multilayer Approach" (http://www.starlims.com/AL-Wood-Reprint-9-07.pdf), pp. 3.
[2] Heyward, Joseph E. II (2009). "Selection of a Scientific Data Management System (SDMS) Based on User Requirements" (https://scholarworks.iupui.edu/handle/1805/2000), pp. 1–5 (PDF).

Chromatography data management system
Sometimes referred to as a chromatography data system (CDS), a chromatography data management system (CDMS) is a set of dedicated data-collection tools that interface and/or integrate with a laboratory's chromatography equipment. A base CDMS will set up a desired methodology to be used by the chromatography equipment, acquire data from it, process the acquired data, store the information in a database, and interface with other laboratory informatics systems to import and export files and data.[1]

[Image: A liquid chromatography linear ion trap instrument as an example of a device that may be interfaced with a CDMS]

History of the CDMS
The first attempts to automate the analysis of chromatography data through electronics took place in the early 1970s. These analysis tools utilized microprocessor-based integrators, "dedicated devices for measuring chromatographic peaks and performing user-specified calculations" which also featured a printer plotter to output the results. Limited memory plagued those early systems, preventing more than one chromatogram from being stored at any one time. This became less of a problem for large labs with bigger budgets in the mid-70s, as expensive centralized data systems were installed, allowing greater data storage and sharing capabilities.

As computers shrank in size, the personal computer became a viable reality. In 1980 entrepreneur and Hewlett-Packard prodigy Dave Nelson saw the potential the personal computer could have on the field of analytical chemistry, joining with partner Harmon Brown to create Nelson Analytical Inc. That year they developed the first CDMS software for the personal computer, soon followed by Turbochrom, the first CDMS for MS Windows. This innovation quickly spread from analytical chemistry labs to the fields of environmental, forensic, and pharmaceutical sciences. At the same time chromatography minicomputers like Hewlett-Packard's 3350 LAS Lab Automation System and Perkin-Elmer's LIMS 2000 CLAS chromatography laboratory automation system were seeing increased utilization, featuring the data acquisition and processing of up to 32 or more simultaneous chromatographs. In the 1990s, more affordable higher-performance PCs — combined with tighter networking standards — allowed for networks of CDMSs, especially those installed on personal computers. By the late '90s, the CDMS commonly featured the ability to set up a methodology and analytical run information, control some instruments, acquire injection data, process the data in different ways, save the data, and transmit it to other systems like a LIMS. By 2008, CDMS functions were becoming more enhanced, driven by improvements in liquid chromatographs (LC) and gas chromatographs (GC). The new innovation of high-speed LC and GC instruments meant the potential for faster data generation, improved separation, and higher resolutions and sensitivities. While these next-generation machines would bring more processing power to chromatography labs, this also meant that vendors would have to improve CDMSs, specifically the analog-to-digital converter sampling rates. Some vendors were estimating at the time that data acquisition sampling rates on the order of 100 to 300 Hz would be needed to keep up with the new wave of speedier chromatography devices. Additional concerns of scalability and remote access were becoming important due to the expansion of pharmaceutical and chemical companies into parts of Latin America, South America, and the Far East.
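The sampling-rate concern is easy to quantify: the number of data points defining a peak is roughly the peak width multiplied by the acquisition rate, and on the order of 20 points is often cited as a minimum to faithfully reproduce peak shape. The following back-of-the-envelope Python sketch shows why fast LC and GC peaks push rates toward the 100 to 300 Hz figure quoted above.

def points_per_peak(peak_width_s, sampling_rate_hz):
    # Data points across a peak = peak width (s) x acquisition rate (Hz).
    return peak_width_s * sampling_rate_hz

# A conventional 10 s wide peak versus a 0.2 s wide fast-GC peak:
print(points_per_peak(10, 5))     # 50 points: 5 Hz is ample
print(points_per_peak(0.2, 5))    # 1 point: hopelessly undersampled
print(points_per_peak(0.2, 200))  # 40 points: hence the call for 100-300 Hz rates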

CDMS vendors See the CDMS vendor page for a list of CDMS vendors past and present.

References
[1] McDowall, R.D. (1999). "Chromatography Data Systems I: The Fundamentals" (http://www.21cfrpart11.com/files/library/compliance/cds_1.pdf) (PDF), pp. 1–2.


6. Related Standards and Compliance

21 CFR Part 11
The Title 21 Code of Federal Regulations Part 11 (21 CFR Part 11) provides compliance information regarding the U.S. Food and Drug Administration's (FDA) guidelines on electronic records and electronic signatures. Within this part, requirements are created to help ensure the security, integrity, and confidentiality of electronic records and to ensure electronic signatures are as legally binding as hand-written signatures. Practically speaking, Part 11 requires drug makers, medical device manufacturers, biotech and biologics companies, contract research organizations, and other FDA-regulated industries, with some specific exceptions, to implement controls, including audits, system validations, audit trails, electronic signatures, and documentation for closed and open software and systems involved in processing specific electronic data. This primarily includes data to be maintained by the FDA predicate rules and data used to demonstrate compliance to a predicate rule. A predicate rule is any requirement set forth in the Federal Food, Drug and Cosmetic Act, the Public Health Service Act, or any FDA regulation other than Part 11. The rule also applies to submissions made to the FDA in electronic format, but not to paper submissions by electronic methods, though paper submissions may eventually be prohibited by the FDA.

History
By the early 1990s, food and drug manufacturers approached the U.S. Food and Drug Administration (FDA) about the possibility of electronic submissions with electronic signatures. However, at that time the government did not allow for digital signatures. In July 1992, the FDA began soliciting comments about the process of using electronic signatures. In March 1997, the FDA issued Part 11 regulations which, in the words of the FDA, were "intended to permit the widest possible use of electronic technology, compatible with FDA's responsibility to protect the public health." Various keynote speeches by FDA insiders early in the 21st century (in addition to compliance guides and draft guidance documents), as well as strong efforts by the FDA to motivate industry to move to e-filing, forced many companies, such as Eli Lilly and Agilent Technologies, to rapidly change their methods and systems to adapt to the new standards. However, many entities expressed concerns about the Part 11 conditions, including concerns the regulations would "unnecessarily restrict" the use of technology, add significant compliance costs beyond what was intended, and stifle technological innovation while reducing public health benefit. In November 2002, the FDA released the guidance document "Guidance for Industry 21 CFR Part 11; Electronic Records; Electronic Signatures, Electronic Copies of Electronic Records" to the public for commenting. On February 3, 2003, the FDA withdrew that document, stating "we wanted to minimize industry time spent reviewing and commenting on the draft guidance when that draft guidance may no longer represent our approach under the [current good manufacturing practice] initiative," adding it would afterwards "intend to exercise enforcement discretion with regard to certain Part 11 requirements." Further guidance documents were withdrawn later that month, culminating in a final guidance document in August 2003 stating the government body would re-examine Part 11 and make necessary changes. However, the FDA reiterated that despite its retraction of the guidance documents, "21 CFR Part 11 is not going away, and neither is the agency's demand for electronic record integrity." The retraction of guidance and change in policy, however, led many IT members in the pharmaceutical and life sciences industry in late 2004 to name the lack of clear FDA guidelines about what is required for compliance as one of the key problems they face.

The FDA had indicated it would produce a revised version of Part 11 by the end of 2006, after its Third Annual FDA Information Management Summit had concluded. Those revisions never arrived, and little in the way of updates on the topic followed. On July 8, 2010, the FDA announced it would begin to audit facilities working with drugs "in an effort to evaluate industry's compliance and understanding of Part 11 in light of the enforcement discretion," leaving some to wonder if this was an indicator that the regulation and/or its guidance would finally see a revision.

Structure
The structure of Part 11 is as follows:
Subpart A — General Provisions
§ 11.1 Scope [1]
§ 11.2 Implementation [2]
§ 11.3 Definitions [3]
Subpart B — Electronic Records
§ 11.10 Controls for closed systems [4]
§ 11.30 Controls for open systems [5]
§ 11.50 Signature manifestations [6]
§ 11.70 Signature/record linking [7]
Subpart C — Electronic Signatures
§ 11.100 General requirements [8]
§ 11.200 Electronic signature components and controls [9]
§ 11.300 Controls for identification codes/passwords [10]

Subpart A This is essentially the preamble of the regulations, explaining to what and who the regulations apply as well as how they'll apply. Definitions of common terms appearing in the regulations can also be found here, including a clarification in the difference between a digital and electronic signature.

Subpart B This section covers the requirements applicable to electronic records and their management. Several requirements are addressed, including "how to ensure the authenticity, integrity, and, when appropriate, the confidentiality of electronic records"; what content a signature should contain; and how electronic records and their signatures should be linked. It also covers topics like system validation, data traceability, audit control, and version control.
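As an illustration of the record-integrity themes in Subpart B, the following Python sketch chains each audit trail entry to the previous one with a cryptographic hash so that any later alteration is detectable. This is one possible technique, not a mechanism prescribed by Part 11; the record fields are invented for the example.

import hashlib
import json
from datetime import datetime, timezone

def append_entry(trail, user, action, record_id):
    # Each entry records who did what, when, and the hash of the prior entry.
    prev_hash = trail[-1]["hash"] if trail else "0" * 64
    entry = {"utc": datetime.now(timezone.utc).isoformat(),
             "user": user, "action": action, "record_id": record_id,
             "prev": prev_hash}
    payload = json.dumps(entry, sort_keys=True).encode("utf-8")
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    trail.append(entry)
    return entry

def verify(trail):
    # Recompute each hash; any edited entry or broken link is detected.
    prev = "0" * 64
    for e in trail:
        body = {k: v for k, v in e.items() if k != "hash"}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode("utf-8")).hexdigest()
        if e["prev"] != prev or digest != e["hash"]:
            return False
        prev = e["hash"]
    return True

trail = []
append_entry(trail, "jdoe", "result.modified", "SMP-1001")
append_entry(trail, "asmith", "result.approved", "SMP-1001")
print(verify(trail))  # True until any entry is altered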

Subpart C This final section addresses the requirements specific to electronic signatures and their use. General requirements for electronic signatures, their components and controls, and password controls are all addressed. Additionally, this section addresses requirements for more advanced biometric-based signatures.

Audit guidelines and checklist For those auditing computer systems and IT environments for their compliance with 21 CFR Part 11 and other regulations, a set of guidelines and checklist items may be useful. Click the link above for the full set of guidelines and checklist items.


Further reading • "Electronic Code of Federal Regulations - Title 21: Food and Drugs - Part 11: Electronic Records; Electronic Signatures" [11]. U.S. Government Printing Office. • "CFR - Code of Federal Regulations - Title 21 - Part 11 Electronic Records; Electronic Signatures" [12]. U.S. Food and Drug Administration. • Huber, Ludwig (15 November 2012). "Tutorial: 21 CFR Part 11 - Electronic Records and Electronic Signatures" [13] . LabCompliance.

References
[1] http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfcfr/CFRSearch.cfm?fr=11.1
[2] http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfcfr/CFRSearch.cfm?fr=11.2
[3] http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfcfr/CFRSearch.cfm?fr=11.3
[4] http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfcfr/CFRSearch.cfm?fr=11.10
[5] http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfcfr/CFRSearch.cfm?fr=11.30
[6] http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfcfr/CFRSearch.cfm?fr=11.50
[7] http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfcfr/CFRSearch.cfm?fr=11.70
[8] http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfcfr/CFRSearch.cfm?fr=11.100
[9] http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfcfr/CFRSearch.cfm?fr=11.200
[10] http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfcfr/CFRSearch.cfm?fr=11.300
[11] http://www.ecfr.gov/cgi-bin/retrieveECFR?gp=&SID=04a3cb63d1d72ce40e56ee2e7513cca3&r=PART&n=21y1.0.1.1.8
[12] http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfcfr/cfrsearch.cfm?cfrpart=11
[13] http://www.labcompliance.com/tutorial/part11/

21 CFR Part 11/Audit guidelines and checklist
The following guidelines and checklist items provide a frame of reference for vendors and auditors to better determine potential compliance issues with Title 21 Code of Federal Regulations Part 11 and a variety of other regulatory guidelines. All items in the checklist for general IT controls should also be checked for individual systems, especially where those systems use different control measures (e.g., they have an independent authentication system). If this checklist is used by software vendors, then certain elements may or may not apply depending on the circumstances. For instance, validation is technically the responsibility of the entity acquiring the software. However, in the case of SaaS, a greater practical responsibility to validate the system may lie with the vendor. In all cases, the vendor should assume responsibility for ensuring that their software operates as intended within the targeted environments. Failure to do so may leave potential customers unwilling to acquire the system. References will be provided for each checklist item to indicate where the requirement comes from. These references are to the regulation itself, to Agency responses in the Final Rule, or to the guidance document "General Principles of Software Validation; Final Guidance for Industry and FDA Staff" (GPSV).

General IT
Following is a list of questions that either apply to the larger IT environment or to both the larger environment and individual systems. The auditor must be sure to evaluate both where necessary. For instance, an organization may have a robust password policy which is managed by a centralized identity management tool. This is important to evaluate in terms of general security around the systems in scope. At the same time, the specific system may or may not leverage the corporate IDM, and thus its identity management should be evaluated on its own merits.


Computer Systems Validation - 21 CFR 11.10(a)
• Does a defined computer system validation policy exist? - 21 CFR 11.10(a)
• Are all computer systems involved in activities covered by predicate regulations validated? - 21 CFR 11.10(a), 21 CFR 211.68(b), 21 CFR 820.30(g)
• Does the computer system validation cover the current deployed version of the system? - GPSV 4.7
• Validation Assessment
  • Does the software developer have a defined systems development life-cycle (SDLC)? - GPSV 4.4
  • Does the SDLC reflect a generally recognized life cycle approach? [1]
  • Is the SDLC followed? - GPSV 4.4
  • Is the software well documented from a design/development/implementation perspective? - GPSV 3.3
  • Is there evidence of design review activities (what this entails will depend on the nature of the SDLC - for instance, Agile methodologies will involve daily standup meetings, while a waterfall approach may reflect formal design review steps)? - GPSV 3.5
  • Does the level of validation coverage reflect the risk from system failure? - GPSV 6.1
  • Is there a sufficient level of independence in the validation/verification activities? - GPSV 4.9
  • Are sufficient resources and personnel provided for software development and validation? - 21 CFR 211.25(c), 21 CFR 820.25(a)
  • Are records maintained of defects and failures identified in the development process? - GPSV 5.2.6
  • For any software system, is there a set of approved requirements which drove the design (note: the name can vary based on the SDLC in use)? - GPSV 6.1
  • For iterative development approaches, are previous versions of deliverables (such as requirements lists) archived in some fashion? - GPSV 5.2.1
  • Is there an audit trail for modifications to system documentation? - 21 CFR 11.10(k)(2)
  • For commercial off-the-shelf (COTS) software, has the vendor been evaluated for its quality systems? - GPSV 6.3
  • Is there some form of traceability that permits tracking of test results and verification activities to specific requirements? - GPSV 5.2.2
  • Are adequate change control systems in place during the development and implementation processes? - GPSV 3.3
  • For each of the other elements of this checklist that apply directly to an electronic record system, has appropriate validation work been undertaken to establish that the system complies with the checklist item?

Identity Management Systems
• Do any identity management systems have minimum password complexity/strength requirements? Do these minimums seem reasonable? - 21 CFR Part 11 Final Rule Section 130 (A minimal complexity check is sketched after this list.)
• Do these ID systems have policies regarding password change frequency? - 21 CFR 11.300(b)
• Do identity management systems prevent the creation of duplicate user IDs? [2]
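A minimal sketch of the complexity check asked about in the first question follows; the thresholds are illustrative policy choices, since Part 11 itself sets no numeric password requirements.

import string

def meets_policy(password, min_length=8):
    # Require minimum length plus at least one character from each class.
    classes = [string.ascii_lowercase, string.ascii_uppercase,
               string.digits, string.punctuation]
    return (len(password) >= min_length and
            all(any(c in cls for c in password) for cls in classes))

print(meets_policy("lims2015"))   # False: no uppercase or punctuation
print(meets_policy("Lims#2015"))  # True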

Access Controls
• Do formal procedures exist governing user account creation for electronic records systems?
• Do formal procedures exist governing access to network and server resources that are used to operate electronic records systems?



Cloud Computing Policies[3]
• Are policies in place governing the selection and use of cloud vendors for electronic record systems?
• Do these policies include service level agreements (SLAs) regarding such things as uptime and support responsiveness?
• Are cloud vendors evaluated for security and compliance with appropriate regulations?
• Do policies governing record retention specifically apply to cloud vendors?
• Are systems for transmitting electronic records configured to do so in a secure manner? - 21 CFR 11.30

Training and Personnel
• Is an organizational chart available covering personnel involved in the design, development, administration, or use of electronic records systems?
• Are job descriptions available for these individuals, indicating their specific responsibilities regarding electronic record systems?
• Is there a defined training program around authentication practices? Electronic signatures? - 21 CFR 11.10(i)
• Are system administrators and developers trained in Part 11 and related regulations? - 21 CFR 11.10(i)
• Are users trained on the use of electronic records systems? - 21 CFR 11.10(i)

Change Control Systems
• Is there a formal change control system for modifications to the production electronic records system? - GPSV 5.2.7
• Does the change control system require an assessment of impact and risk, and require authorization before proceeding? - GPSV 6.1
• Is there a configuration management system in place such that the contents of each version of released software are archived and readily identifiable? - GPSV 5.2.1
• Is there a formal change control system for changes to requirements and design elements of the system during the development process? - GPSV 3.3
• Do change control systems in use require appropriate approvals as governed by the SDLC model in use?

Electronic Signature Certification
• If the organization is using electronic signatures, has it filed a certification with the FDA indicating so? - 21 CFR 11.100(c)

Records Retention Policy
• Does the organization have a records retention policy covering records per the predicate regulations? - 21 CFR 11.10(c)

System-specific

Fraud Detection
• Is the system designed to either prevent record alteration or make such alteration apparent? - 21 CFR 11.10(a)



Audit Trails
• Does the system maintain an audit trail that tracks changes to electronic records? - 21 CFR 11.10(e)
• Are the audit trail records time stamped? - 21 CFR 11.10(e)
• Are the audit trail records system generated, such that human intervention is not required? - 21 CFR 11.10(e)
• Are audit trail records secured such that they cannot be modified by users of the system? - 21 CFR 11.10(e)
• Is the audit trail data available for export (printing or electronic) to support agency review? - 21 CFR 11.10(e)

Access Controls
• Does the identity management system have minimum password complexity/strength requirements? Do these minimums seem reasonable? - 21 CFR Part 11 Final Rule Section 130
• Do these ID systems have policies regarding password change frequency? - 21 CFR 11.300(b)
• Do identity management systems prevent the creation of duplicate user IDs?

Open Systems Controls[4]
• Are records transmitted by the system sent in a secure manner, such that their authenticity, integrity, and confidentiality are ensured? - 21 CFR 11.30
• Is access to the system appropriately managed to prevent unauthorized external access?
• Has the system been evaluated for susceptibility to intrusion?
• Is there a system in place to evaluate current IT security threats that have been identified (by the National Cyber Awareness System via NIST, or other appropriate organization)?

Electronic Signatures
• Is the electronic signature system engineered in such a way as to ensure that signatures cannot be attached to other records and cannot be removed from the records they are attached to? - 21 CFR 11.70
• Is the system engineered such that applying someone else's signature to a file requires the collaboration of two or more individuals? (This is largely covered by the identity management controls.) - 21 CFR 11.200(a)(3)
• If a signature event only requires one signature element, is it only in the case of being part of a continuous period of system access? - 21 CFR 11.200(a)(1)(i)
• Are there suitable loss management procedures in place to address compromised passwords or lost/stolen authentication devices (such as RSA ID tokens)? - 21 CFR 11.300(c)
• Is the system designed to alert security and/or management in the event of an apparent attempt at unauthorized use of electronic signatures? Does the system automatically take steps to lock out users associated with these attempts? - 21 CFR 11.300(d)
• Is there a system for the periodic testing of tokens and cards to ensure that they are still operating as expected and have not been altered? If not, is there something in the nature of the tokens/cards that would render them unusable should alteration be attempted? - 21 CFR 11.300(e)
• Is there a password reset method that does not require system administrators to know a user's password? - 21 CFR Part 11 Final Rule Section 123
• Are user passwords suitably encrypted in any persistent data store, such that elucidating the original password would require extraordinary means? (A salted key-derivation sketch follows this list.)
• Are controls in place to ensure that password reset instructions are sent to the correct individual?
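One widely used way to keep stored passwords "suitably encrypted," as the eighth question puts it, is a salted, iterated key-derivation function, so that the original password cannot practically be recovered from the stored value. The parameters below are illustrative.

import hashlib
import hmac
import os

def hash_password(password, iterations=100000):
    # Derive a salted PBKDF2 digest for storage; never store the password itself.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return salt, iterations, digest

def check_password(password, salt, iterations, digest):
    # Recompute and compare in constant time against the stored digest.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return hmac.compare_digest(candidate, digest)

salt, n, digest = hash_password("Lims#2015")
print(check_password("Lims#2015", salt, n, digest))  # True
print(check_password("wrong", salt, n, digest))      # False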



Export of Records for Agency Review
• Does the system support exporting records in a format that is readable by the agency? - 21 CFR 11.10(b)
• If the agency hasn't been specifically consulted with regard to acceptable formats, does the system support export into common formats such as XML or JSON?

Records Retention Support
• Does the system have sufficient controls to ensure that the records stored within it will be available throughout the period specified in the records retention policy? - 21 CFR 11.10(c)

Process Controls
• Does the system have a mechanism to establish differing levels of authority to perform tasks in the system? - 21 CFR 11.10(g)
• Does the system have a mechanism for preventing steps being taken out of sequence (e.g., signing a record before data has been entered, or releasing a record before the review step was completed)? - 21 CFR 11.10(f)

Reference material
21 CFR Part 11
Subpart A — General Provisions
§ 11.1 Scope [1]
§ 11.2 Implementation [2]
§ 11.3 Definitions [3]
Subpart B — Electronic Records
§ 11.10 Controls for closed systems [4]
§ 11.30 Controls for open systems [5]
§ 11.50 Signature manifestations [6]
§ 11.70 Signature/record linking [7]
Subpart C — Electronic Signatures
§ 11.100 General requirements [8]
§ 11.200 Electronic signature components and controls [9]
§ 11.300 Controls for identification codes/passwords [10]

General Principles of Software Validation
The full name of this FDA guidance document is "General Principles of Software Validation; Final Guidance for Industry and FDA Staff," referenced here as "GPSV."
Section 1. Purpose [5]
Section 2. Scope [6]
2.1. Applicability
2.2. Audience
2.3. The Least Burdensome Approach
2.4. Regulatory Requirements for Software Validation
2.5. Quality System Regulation vs Pre-Market Submissions


21 CFR Part 11/Audit guidelines and checklist Section 3. Context for Software Validation [7] 3.1. Definitions and Terminology 3.1.1 Requirements and Specifications 3.1.2 Verification and Validation 3.1.3 IQ/OQ/PQ 3.2. Software Development as Part of System Design 3.3. Software is Different from Hardware 3.4. Benefits of Software Validation 3.5 Design Review Section 4. Principles of Software Validation [8] 4.1. Requirements 4.2. Defect Prevention 4.3. Time and Effort 4.4. Software Life Cycle 4.5. Plans 4.6. Procedures 4.7. Software Validation After a Change 4.8. Validation Coverage 4.9. Independence of Review 4.10. Flexibility and Responsibility Section 5. Activities and Tasks [9] 5.1. Software Life Cycle Activities 5.2. Typical Tasks Supporting Validation 5.2.1. Quality Planning 5.2.2. Requirements 5.2.3. Design 5.2.4. Construction or Coding 5.2.5. Testing by the Software Developer 5.2.6. User Site Testing 5.2.7. Maintenance and Software Changes Section 6. Validation of Automated Process Equipment and Quality System Software [10] 6.1. How Much Validation Evidence Is Needed? 6.2. Defined User Requirements 6.3. Validation of Off-the-Shelf Software and Automated Equipment


Others
• 21 CFR Part 211 [11]: Current Good Manufacturing Practice for Finished Pharmaceuticals
• 21 CFR Part 820 [12]: Quality System Regulation

References and footnotes
[1] While the Agency specifically does not recommend an SDLC, and rightfully so, established SDLC approaches become established typically due to the quality of product that comes from them. An SDLC that is either unique or a blend of disparate approaches may merit additional attention on the part of the auditor.
[2] Although the regulation only specifies that identification codes in combination with passwords must be unique, since passwords are typically stored in encrypted format, there is no practical way to do this outside of ensuring that user IDs are unique.
[3] In general there was little anticipation when Part 11 was drafted that such a thing as the cloud would come to exist. These checklist items, therefore, are reasonable extensions of requirements for in-house systems.
[4] The field of IT security has exploded in recent years with a number of high-profile breaches. At the time of the writing of Part 11, the internet was much safer in this regard than it is today. The auditor should focus significant effort on security around all systems, but especially open systems.
[5] http://www.fda.gov/medicaldevices/deviceregulationandguidance/guidancedocuments/ucm085281.htm#_Toc517237928
[6] http://www.fda.gov/medicaldevices/deviceregulationandguidance/guidancedocuments/ucm085281.htm#_Toc517237929
[7] http://www.fda.gov/medicaldevices/deviceregulationandguidance/guidancedocuments/ucm085281.htm#_Toc517237935
[8] http://www.fda.gov/medicaldevices/deviceregulationandguidance/guidancedocuments/ucm085281.htm#_Toc517237944
[9] http://www.fda.gov/medicaldevices/deviceregulationandguidance/guidancedocuments/ucm085281.htm#_Toc517237955
[10] http://www.fda.gov/medicaldevices/deviceregulationandguidance/guidancedocuments/ucm085281.htm#_Toc517237965
[11] http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfcfr/cfrsearch.cfm?cfrpart=211
[12] http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfcfr/cfrsearch.cfm?cfrpart=820

40 CFR Part 3

The Title 40 Code of Federal Regulations Part 3 (40 CFR Part 3) — sometimes referred to as the Cross-Media Electronic Reporting Rule (CROMERR) — provides for electronic reporting (in lieu of a paper document) to the U.S. Environmental Protection Agency (EPA). Within this part, requirements are created to ensure that electronic reporting to the EPA satisfies federal or authorized program reporting requirements, including those requiring an electronic signature.

History

On August 31, 2001, the EPA "published a notice of proposed rulemaking, announcing the goal of making electronic reporting and electronic recordkeeping available under EPA regulatory programs." However, the EPA had been working on plans related to such a proposal (referred to as Cross-Media Electronic Reporting) well before that, stating in its review of its final rule that the process actually "reflects more than ten years of interaction with stakeholders," including "electronic reporting pilot projects conducted with state agency partners, including the States of Pennsylvania, New York, Arizona, and several others." This also involved collaboration with more than half of U.S. states in May 1997 on the State Electronic Commerce/Electronic Data Interchange Steering Committee (SEES) and a series of conferences starting in 1999 to acquire stakeholders' thoughts. Public commenting closed on February 27, 2002, with the EPA receiving 184 collections of written comments. The EPA made additional adjustments to the proposal, which culminated in a final version of CROMERR that was codified into Title 40 as Part 3 on October 13, 2005 and made effective January 11, 2006. On December 24, 2008, minor adjustments were made to CROMERR that extended compliance dates for existing systems making the transition to electronic filing to the EPA.



Structure

The structure of Part 3 is as follows:

Subpart A — General Provisions
§ 3.1 Who does this part apply to? [1]
§ 3.2 How does this part provide for electronic reporting? [2]
§ 3.3 What definitions are applicable to this part? [3]
§ 3.4 How does this part affect enforcement and compliance provisions of Title 40? [4]

Subpart B — Electronic Reporting to EPA
§ 3.10 What are the requirements for electronic reporting to EPA? [5]
§ 3.20 How will EPA provide notice of changes to the Central Data Exchange? [6]

Subpart C — [Reserved]

Subpart D — Electronic Reporting Under EPA-Authorized State, Tribe, and Local Programs
§ 3.1000 How does a state, tribe, or local government revise or modify its authorized program to allow electronic reporting? [7]
§ 3.2000 What are the requirements authorized state, tribe, and local programs' reporting systems must meet? [8]

Appendix 1 to Part 3: Priority Reports [9]

Subpart A

This is essentially the preamble of the regulations, explaining what and whom the regulations apply to, as well as how they'll apply. Definitions of common terms appearing in the regulations can also be found here, including a description of electronic signature devices.

Subpart B

This section covers the requirements applicable to electronic record formats and their submission to the EPA's Central Data Exchange (CDX) or other related EPA systems. It also provides guidelines on how the EPA will notify CDX users of hardware and software changes that affect transmission.

Subpart C

Subpart C is blank, "reserved for future EPA electronic recordkeeping requirements."

Subpart D

This final section provides in-depth requirements for revising state, local, and tribal government programs for electronic submissions, as well as outlining the reporting system requirements. In particular, it lays out a list of requirements for data generated from electronic document receiving systems, including security, audit trail, quality control, and electronic signatures.



Central Data Exchange

The EPA's Central Data Exchange (CDX) is used by EPA offices, local and state governments, private industries, and Indian tribes required to submit environmental data related to more than 60 programs in the United States, including the Greenhouse Gas Reporting Program, the RadNet program, and the Verify engine and vehicle compliance program. The EPA touts CDX as an important component of its operations as well as of meeting 40 CFR Part 3 compliance, claiming reductions in reporting burdens, cost, and data transfer times, as well as increases in data quality and compliance. As of mid-February 2015, the CDX had more than 296,000 registered users submitting data to 63 data flows, with 10 additional data flows in development.

Further reading

• CROMERR Fact Sheet [10] (PDF)
• "Cross-Media Electronic Reporting" [11]. Federal Register. OFR/GPO. 13 October 2005.
• "CROMERR 101: Fundamentals for States, Tribes, and Local Governments" [12] (PDF). U.S. Environmental Protection Agency.

References

[1] http://www.ecfr.gov/cgi-bin/text-idx?c=ecfr&sid=5ff3a0efed913ef8fae9e225869688a2&rgn=div5&view=text&node=40:1.0.1.1.3&idno=40#se40.1.3_11
[2] http://www.ecfr.gov/cgi-bin/text-idx?c=ecfr&sid=5ff3a0efed913ef8fae9e225869688a2&rgn=div5&view=text&node=40:1.0.1.1.3&idno=40#se40.1.3_12
[3] http://www.ecfr.gov/cgi-bin/text-idx?c=ecfr&sid=5ff3a0efed913ef8fae9e225869688a2&rgn=div5&view=text&node=40:1.0.1.1.3&idno=40#se40.1.3_13
[4] http://www.ecfr.gov/cgi-bin/text-idx?c=ecfr&sid=5ff3a0efed913ef8fae9e225869688a2&rgn=div5&view=text&node=40:1.0.1.1.3&idno=40#se40.1.3_14
[5] http://www.ecfr.gov/cgi-bin/text-idx?c=ecfr&sid=5ff3a0efed913ef8fae9e225869688a2&rgn=div5&view=text&node=40:1.0.1.1.3&idno=40#se40.1.3_110
[6] http://www.ecfr.gov/cgi-bin/text-idx?c=ecfr&sid=5ff3a0efed913ef8fae9e225869688a2&rgn=div5&view=text&node=40:1.0.1.1.3&idno=40#se40.1.3_120
[7] http://www.ecfr.gov/cgi-bin/text-idx?c=ecfr&sid=5ff3a0efed913ef8fae9e225869688a2&rgn=div5&view=text&node=40:1.0.1.1.3&idno=40#se40.1.3_11000
[8] http://www.ecfr.gov/cgi-bin/text-idx?c=ecfr&sid=5ff3a0efed913ef8fae9e225869688a2&rgn=div5&view=text&node=40:1.0.1.1.3&idno=40#se40.1.3_12000
[9] http://www.ecfr.gov/cgi-bin/text-idx?c=ecfr&sid=5ff3a0efed913ef8fae9e225869688a2&rgn=div5&view=text&node=40:1.0.1.1.3&idno=40#ap40.1.3_12000.1
[10] http://www.epa.gov/CROMERR/documents/cromerr_fact_sheet.pdf
[11] https://www.federalregister.gov/articles/2005/10/13/05-19601/cross-media-electronic-reporting
[12] http://www.epa.gov/cromerr/training/cromerr101/cromerr_course_summary.pdf



Good Automated Laboratory Practices

The Good Automated Laboratory Practices (GALP) was a U.S. EPA-based conglomeration of "regulations, policies, and guidance documents establishing a uniform set of procedures to ensure the reliability and credibility of laboratory data." GALP is considered an expired policy by the EPA, though the true expiration date is not known.

History

Work on GALP was first started by the EPA in 1989 as an extension of its pre-existing good laboratory practice (GLP) requirements to what it saw as a state of increasing automation in laboratories. An additional revision was released by the EPA in 1995. The GALP's creation was based on six principles:

1. The system must provide a method of assuring the integrity of all entered data.
2. The formulas and decision algorithms employed by the system must be accurate and appropriate.
3. An audit trail that tracks data entry and modifications to the responsible individual is a critical element in the control process.
4. A consistent and appropriate change-control procedure capable of tracking the system operation and application software is a critical element in the control process.
5. Control of even the most carefully designed and implemented system will be thwarted if appropriate user procedures are not followed.
6. Consistent control of a system requires the development of alternative plans for system failure, disaster recovery, and unauthorized access.

Impact

GALP had a variable regulatory impact on laboratories and organizations early on. Pharmaceutical and biologics laboratories could practically ignore the regulations, while the U.S. Food and Drug Administration (FDA) and EPA used GALP as a key guiding factor in their standards and contract renewal considerations. The Department of Energy and Superfund programs tightly followed GALP's standards, while others viewed GALP as something to be loosely interpreted.

References



Good Automated Manufacturing Practice

Good Automated Manufacturing Practice (GAMP) is both a technical subcommittee of the International Society for Pharmaceutical Engineering (ISPE) and a set of guidelines for manufacturers and users of automated systems in the pharmaceutical industry. One of the core principles of GAMP is that quality cannot be tested into a batch of product but must be built into each stage of the manufacturing process. As a result, GAMP covers all aspects of production, from the raw materials, facility, and equipment to the training and hygiene of staff.

The area of automated pharmaceutical manufacturing is influenced in part by GAMP and its associated guidelines.

GAMP is largely about automated system validation. In October 2014, Irish tech company Dataworks Ltd. described it as such:

It is a formal process of thorough documentation, testing, and logical process steps that validate clients' required specifications. The process begins with a user requirements specification for the machine, from which a functional requirement and a design specification are created. These documents then form the basis for the traceability matrix and for the formal testing of internal acceptance, factory acceptance, and site acceptance. Categorising software is used to support the approach to validation based on the difficulty and individuality of the computerised system.
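The traceability matrix mentioned in the quotation links each user requirement forward to the specifications and tests that cover it. As a rough illustration (the requirement IDs, document names, and structure below are invented for the example, not taken from the GAMP guide), such a matrix can be modeled as a simple mapping:

    # Minimal sketch of a validation traceability matrix: each user requirement
    # (URS item) is traced to the design elements and test cases that cover it.
    # All IDs and contents here are hypothetical, for illustration only.
    traceability_matrix = {
        "URS-001 Record batch temperature every 60 s": {
            "functional_spec": ["FS-010"],
            "design_spec": ["DS-021", "DS-022"],
            "tests": ["FAT-007", "SAT-003"],  # factory / site acceptance tests
        },
        "URS-002 Reject batch outside 2-8 degrees C": {
            "functional_spec": ["FS-011"],
            "design_spec": ["DS-023"],
            "tests": ["FAT-008"],
        },
    }

    # A requirement with no linked tests is a validation gap worth flagging.
    untested = [req for req, links in traceability_matrix.items() if not links["tests"]]
    print("Requirements lacking test coverage:", untested or "none")

The value of keeping the matrix in a structured form is that coverage gaps can be detected mechanically rather than by manual review of the documents.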

History

GAMP's origins can be traced to the United Kingdom in 1988, when software developers David Forrest and Colin Jones, through their company FJ Systems, developed real-time control and production information management control systems for pharmaceutical manufacturers. They worked with ICI Pharmaceuticals' Tony Margetts on the problem of validating systems that were increasingly becoming more software-based than mechanical- and electrical-based. This culminated in a five-page document called VMAN I, mapping the older installation qualification (IQ), operational qualification (OQ), and performance qualification (PQ) phases of equipment validation to a more modern software validation lifecycle. A second version was created upon additional feedback.

GAMP itself was eventually founded in 1991 (with the previously mentioned Margetts as chairman of the editorial board) to deal with the evolving U.S. Food and Drug Administration expectations for Good Manufacturing Practice (GMP) compliance of manufacturing and related systems. GAMP published its first draft guidance in February 1994, with version 1.0 arriving in March 1995. Soon afterwards, the organization entered into a partnership with ISPE, formally becoming part of ISPE in 2000. GAMP 4 was released a year later, followed by GAMP 5 in 2008. GAMP has enjoyed the support of numerous regulatory authorities over the years, spanning the United States, Europe, and


Japan, and is now a recognized good practice worldwide.

Publications

GAMP 5

ISPE has published a series of good practice guides for the industry on several topics involved in drug manufacturing. The most well-known is The Good Automated Manufacturing Practice (GAMP) Guide for Validation of Automated Systems in Pharmaceutical Manufacture. The last major revision (GAMP 5) was released in February 2008. The guidance generally states that pharmaceutical computer systems should be built with several key ideas in mind:

1. Make product and process understanding clear.
2. Approach the life cycle from the standpoint of a quality management system.
3. Make life cycle activities scalable.
4. Ensure quality risk management is science-based.
5. Leverage supplier involvement into the system.

System categorization

Software systems borne from these principles can be categorized into one of four GAMP 5 categories. These classifications act as built-in risk and difficulty assessments that support different validation approaches:

Category 1: Infrastructure software - This includes "established or commercially available layered software" and "infrastructure software tools" that are themselves validated from within rather than from the infrastructure.

Category 3: Non-configured products - This includes "software that is used as installed" and potentially "software that is configurable (category 4) but is used either unconfigured or with the standard defaults provided by the software supplier."

Category 4: Configured products - This includes products where "the user has the means and knowledge to change the functionality of the device in a way that changes the results outputted by the device. As a direct consequence, this triggers increased validation effort."

Category 5: Custom applications - This includes any "application, module, user-defined program, or macro" that has been written in-house or by a third party and that "needs to be specified, version controlled, built, and tested (including integration testing with the commercial application, as applicable) as a minimum to ensure the quality of the software."

Note: "Category 2: Firmware" was removed from GAMP with revision five.

Other guides

As of February 2015, the ISPE had published 13 guides, the latest of which, A Risk-Based Approach to Regulated Mobile Applications, was released in October 2014.

External links

• International Society for Pharmaceutical Engineering (ISPE) website [1]
• GAMP 4 guide [2] (PDF)
• GAMP 5 guide [3]



References

[1] http://www.ispe.org
[2] http://www.ssfa.it/allegati/GDL_GIQAR_GCP_GampGuidelineMilano06.pdf
[3] http://www.slideshare.net/ProPharmaGroup/overview-of-computerized-systems-compliance-using-the-gamp-5-guide

Health Insurance Portability and Accountability Act

The Health Insurance Portability and Accountability Act of 1996 (HIPAA) was enacted by the United States Congress and signed by President Bill Clinton in 1996. Its intended purpose was "to improve portability and continuity of health insurance coverage in the group and individual markets; to combat waste, fraud, and abuse in health insurance and health care delivery; to promote the use of medical savings accounts; to improve access to long-term care services and coverage; [and] to simplify the administration of health insurance."

History

Initial

In 1994, U.S. President Bill Clinton attempted to overhaul the national health care system but didn't receive the support he needed. In 1995, Senators Nancy Kassebaum (R-KS) and Edward Kennedy (D-MA) introduced a comparatively pared-down proposal called the Health Insurance Reform Act of 1995 (S 1028), later referred to informally as the Kassebaum/Kennedy Bill. The proposal called for health insurance portability for employees, medical savings accounts, increased deductibility of health insurance for the self-employed, and tax breaks for long-term care insurance. The legislation successfully made it out of the Senate Labor and Human Resources Committee on August 2, 1995, only to be stalled "because of opposition from conservative senators who shared industry concerns over the group-to-individual portability provisions."

With desire to get some sort of health care reform legislation passed, Clinton referenced the stalled bill in his January 1996 State of the Union address on several occasions. Though some feared the ploy by Clinton would ultimately sink the bill, it instead resulted in bipartisan cooperation so that no one side could take credit for the bill. On February 7, 1996, the two parties agreed to further discuss the legislation in the House and Senate. This resulted in several events: the House of Representatives created an alternative bill (HR 3103) that drew on characteristics of S 1028, passing it on March 28, while the Senate passed a version of the original S 1028 on April 23 but without controversial attachments like medical savings accounts. However, differences between the House and Senate bills caused problems. "The House bill, for example, included provisions allowing for medical savings accounts, a limit on monetary damages in medical malpractice lawsuits and a reduction in states' authority to regulate health insurance purchasing pools created by small businesses." Additionally, a provision on mental health coverage was found in the Senate bill that was omitted from the House version. It took several weeks of debating to make concessions on these topics.


A Republican-led compromise was offered on June 10; however, debate raged on. It wasn't until a July 25 compromise between Kennedy and Ways and Means Committee Chairman Bill Archer (R-TX) on medical savings accounts that momentum shifted. Provisions on mental illness and medical malpractice were eventually dropped from the proposal on July 31, with the House and Senate agreeing on the final version on August 1 and August 2, respectively. On August 21, 1996, the legislation was signed into law by President Clinton and codified as Public Law 104-191, the Health Insurance Portability and Accountability Act of 1996 (HIPAA).

Amendments

The administrative simplification provisions in HIPAA meant more work had to be done in regard to the legislation. The U.S. Department of Health and Human Services (HHS) began work on the HIPAA Privacy Rule in 1999, "which set out detailed regulations regarding the types of uses and disclosures of personally identifiable health information that are permitted by the covered entities." However, large volumes of comments and Executive branch changes in 2000 slowed the process down. Several more years of corrections and requests for comments followed, culminating in the release of the Final Rule on August 14, 2002 as 45 CFR Part 160 and Subparts A and E of Part 164. Most health plans were expected to be in compliance by April 14, 2003, though some exceptions existed.

Despite the Privacy Rule, many still argued that the legislation wasn't sufficient to prevent mishandling of personal health information and that it was impeding research. These concerns, mixed with few incidents of enforcement in the first few years after the 2003 compliance date, prompted additional review by the HHS. On February 16, 2006, HHS issued the Final Rule regarding HIPAA enforcement, to be effective March 16, 2006. Additional updates to the enforcement rule came with the Health Information Technology for Economic and Clinical Health (HITECH) Act, enacted on February 17, 2009. The Act added "several provisions that strengthen the civil and criminal enforcement of the HIPAA rules" by adding categories of violations and tier levels of penalty amounts. HIPAA and the HITECH statutes were further revised in January 2013 (effective March 26, 2013) "to strengthen the privacy and security protection for individuals' health information," update the Breach Notification Rule, "strengthen the privacy protections for genetic information," and revise other portions of HIPAA rules "to improve their workability and effectiveness."

Structure

HIPAA is divided into five titles, each with their own subtitles:

Title I: Health Care Access, Portability, and Renewability
Subtitle A - Group Market Rules
Subtitle B - Individual Market Rules
Subtitle C - General and Miscellaneous Provisions

Title II: Preventing Health Care Fraud and Abuse; Administrative Simplification; Medical Liability Reform
Subtitle A - Fraud and Abuse Control Program
Subtitle B - Revisions to Current Sanctions for Fraud and Abuse
Subtitle C - Data Collection
Subtitle D - Civil Monetary Penalties
Subtitle E - Revisions to Criminal Law
Subtitle F - Administrative Simplification
Subtitle G - Duplication and Coordination of Medicare-Related Plans

Title III: Tax-Related Health Provisions
Subtitle A - Medical Savings Accounts


Subtitle B - Increase in Deduction for Health Insurance Costs of Self-Employed Individuals
Subtitle C - Long-Term Care Services and Contracts
Subtitle D - Treatment of Accelerated Death Benefits
Subtitle E - State Insurance Pools
Subtitle F - Organizations Subject to Section 833
Subtitle G - IRA Distributions to the Unemployed
Subtitle H - Organ and Tissue Donation Information Included With Income Tax Refund Payments

Title IV: Application and Enforcement of Group Health Plan Requirements
Subtitle A - Application and Enforcement of Group Health Plan Requirements
Subtitle B - Clarification of Certain Continuation Coverage Requirements

Title V: Revenue Offsets
Subtitle A - Company-Owned Life Insurance
Subtitle B - Treatment of Individuals Who Lose United States Citizenship
Subtitle C - Repeal of Financial Institution Transition Rule to Interest Allocation Rules

Description

Title I of HIPAA contains three subtitles that protect health insurance coverage for workers and their families when they change or lose their jobs.

Title II of HIPAA contains seven subtitles. One of the most important for expanding HIPAA is Subtitle F, the Administrative Simplification (AS) provisions, requiring the establishment of national standards for electronic health care transactions and national identifiers for providers, health insurance plans, and employers. Title II also addresses the security and privacy of health data, with the intent of improving the efficiency and effectiveness of the nation's health care system by encouraging the widespread use of electronic data interchange in the U.S. health care system.

Title III of HIPAA modifies the Internal Revenue Code (IRC) to revise available tax deductions for health insurance, clarify how pre-tax money could be applied to health payments, and regulate long-term care services and how they're contracted. Other tax-related issues like IRA distribution and organ donor tax refund payments are covered by this title, in total spread out over eight subtitles.

Title IV of HIPAA modifies both the IRC and the Public Health Service Act (PHSA) to describe requirements for and enforcement of how group health plans could legally manage and cover patients' pre-existing conditions as well as their continuation of coverage. This information is supplied over two subtitles.

Title V of HIPAA contains three subtitles that amend the IRC concerning miscellaneous issues such as interest deductions on loans related to company-owned life insurance, how individuals who lose their U.S. citizenship shall be treated tax-wise, and the removal of certain limitations on interest allocation.

Enforcement

On February 16, 2006, HHS issued the Final Rule regarding HIPAA enforcement. It became effective on March 16, 2006. The Enforcement Rule set civil money penalties for violating HIPAA rules and established procedures for investigations and hearings for HIPAA violations. Before the Enforcement Rule, the deterrent effects of the legislation seemed negligible, with few prosecutions for violations. Enforcement operations were ratcheted up further with the passage of the Health Information Technology for Economic and Clinical Health Act (HITECH) in 2009, which greatly increased the financial penalties that could be applied to entities in non-compliance.

By the end of 2014, the U.S. Department of Health and Human Services (HHS) reported investigating 106,522 HIPAA complaints against national pharmacy chains, major health care centers, insurance groups, hospital chains,


and other small providers since April 2003. The HHS reported 23,314 of those cases had been resolved by requiring changes in privacy practice or by corrective action. In 10,566 cases, investigation found that HIPAA had been followed correctly. Another 68,412 cases were found to be ineligible for enforcement because, for example, a violation occurred before HIPAA became effective, a case was withdrawn by the pursuer, or an activity did not actually violate the rules. According to the HHS, the most commonly investigated compliance issues, in order of frequency, have been:

1. incorrectly used or revealed protected health information (PHI);
2. insufficient protection mechanisms for PHI;
3. insufficient mechanisms for patients to access their PHI;
4. insufficient administrative protections and tools for managing electronic PHI; and
5. usage and disclosure of more PHI than minimally necessary.

The HHS also stated the entities most likely to be responsible for infractions, in order of frequency, have been:

1. private practices;
2. general hospitals;
3. outpatient facilities;
4. pharmacies; and
5. health plans (group health plans and health insurance issuers).

Assessed impact

The enactment of HIPAA caused major changes in the way physicians and medical centers operate. The complex legalities and potentially stiff penalties associated with HIPAA, as well as the increase in paperwork and the cost of its implementation, were causes for concern among physicians and medical centers. Many of those concerns were expressed in an August 2006 paper published in the journal Annals of Internal Medicine. It mentioned a University of Michigan study that demonstrated how the implementation of the HIPAA Privacy Rule resulted in a drop from 96 percent to 34 percent in the proportion of follow-up surveys completed by study patients being followed after a heart attack.

By 2013, views on the impact of HIPAA were mixed. Leon Rodriguez, director of the HHS' Office for Civil Rights, said of HIPAA:

Whereas many thought HIPAA would "bankrupt" healthcare, shut down research, and otherwise paralyze the industry, instead the industry has learned the benefits of the transaction and code set standards through the ease of electronic transactions. And the balance of the [HIPAA] Privacy and Security protections have paved the way to real benefits for consumers through greater access to quality care.

In an article for the Houston Chronicle, writer and business consultant Lisa Dorward stated the following regarding patients requesting personal health information:

Direct cost to patients is minimal; health care institutions can charge the patient only for copying and postage costs for delivery of the documents. On the other hand, costs to health care providers are high and can strain already overburdened budgets. Some clinics and hospitals have had to reconstruct or remodel existing registration areas to comply with HIPAA's privacy regulations.

Writing for the Loyola Consumer Law Review, attorney and legal writer Anna Colvert wrote:

Generally, HIPAA is considered a step in the right direction regarding patient privacy, and it has resulted in more descriptive and detailed privacy policies; however, it has not improved the online privacy practices of these organizations. While HIPAA is a solid foundation in protecting patients' healthcare information, there is more work to be done..."


A May 2013 Computerworld article reported on a survey conducted by the Ponemon Institute that found 51 percent of respondents believed "HIPAA compliance requirements can be a barrier to providing effective patient care" and 59 percent "cited the complexity of HIPAA requirements as a major barrier to modernizing the healthcare system."

Audit guidelines and checklist

For those auditing computer systems and IT environments for their compliance with the Health Insurance Portability and Accountability Act and other regulations, a set of guidelines and checklist items may be useful. Click the link above for the full set of guidelines and checklist items as they relate to HIPAA.

Further reading • "Public Law 104 - 191 - Health Insurance Portability and Accountability Act of 1996" [1]. U.S. Government Publishing Office. • "S. 1028 (104th): Health Insurance Reform Act of 1995" [2]. GovTrack.us. Civic Impulse, LLC. • "Bill Makes Health Insurance ‘Portable’" [3]. CQ Almanac 1996 52: 6-28–6-39. 1997.

References

[1] http://www.gpo.gov/fdsys/pkg/PLAW-104publ191/content-detail.html
[2] https://www.govtrack.us/congress/bills/104/s1028
[3] http://library.cqpress.com/cqalmanac/document.php?id=cqal96-1092479

Health Insurance Portability and Accountability Act/Audit guidelines and checklist

The following guidelines and checklist items provide a frame of reference for vendors and auditors to better determine potential compliance issues with the Health Insurance Portability and Accountability Act and a variety of other regulatory guidelines. The checklist is focused largely on computerized systems that house Protected Health Information (PHI) under the HIPAA regulations. However, since the computerized system exists as part of a complete operation, even when it is hosted by a cloud provider, the checklist covers the majority of the regulation. This notion of the requirements of the entire regulation applying even to cloud companies is particularly underscored by the HITECH modifications to the HIPAA regulations, under which business associates are now entirely responsible for adherence to the HIPAA privacy regulations and not merely on a contractual basis.

Administrative safeguards

Security Management Process

• Does a detailed risk assessment exist regarding potential vulnerabilities to the confidentiality, integrity, and availability of PHI?
• Does the assessment identify actions to mitigate certain risks? Have these actions been taken, or have plans been generated to take these actions?
• Does a policy exist specifying sanctions to be taken against employees who fail to comply with security policies and procedures?
• Is there a system in place for regular review of system activity, including things such as audit logs and incident reports?


Assigned security responsibility

• Is there a formally identified individual who is responsible for developing and implementing security policies?
• Has this individual, or the individual's direct reports, developed and implemented security policies?
• Collect evidence of security policies being implemented (group policy reports for the AD server, for instance).

Workforce security and Information Access Management

• Do procedures exist governing access to PHI by employees?
• Are employees who should not have access to PHI prevented from accessing it?
• If employees are permitted to access systems that contain PHI, but are not permitted to access PHI, does the system have suitable controls to prevent that access?
• View system accesses by both individuals who have access to PHI and those who don't, and evaluate potential areas of weakness in the security measures.
• Do processes exist for authorizing access to PHI? Do these processes seem reasonable?
• Are employees who have access to PHI supervised appropriately? Do their supervisors have adequate training and understanding regarding the treatment of PHI?
• Are adequate procedures in place governing the termination of employees with access to PHI?
• Do these procedures include appropriately timed termination of accounts (i.e., in the case of involuntary termination, is the account terminated before the employee might have the opportunity to cause harm)?
• For voluntary terminations, are procedures in place that require the supervisor to evaluate the need for continued access to PHI prior to the departure of the employee in question?
• Is there a clear requirement for communication with system administrators and IT staff regarding affected accounts?
• If a health clearinghouse is part of a larger organization, confirm that adequate controls exist that prevent the larger organization from accessing PHI.
• Do the PHI access procedures apply to the IT/IS organization? That is, is access to PHI only allowed for IT/IS employees with a legitimate business reason to access that data? Are IT/IS employees adequately trained in the HIPAA regulations and internal policies and procedures regarding PHI?

Security Awareness and Training

• Is there a formal and documented training program for employees who deal with PHI?
• Are employees provided training on principles of security?
• Are there procedures in place for addressing malicious software, including its detection and reporting? Are employees prevented from accessing remote sites that are at high risk for containing malicious software?
• Is there a system for ensuring that security protection software (in particular, anti-virus programs and firewalls) is updated periodically?
• For outward-facing applications, is there a process by which security flaws in components (such as Java) are identified and fixed?
• For systems that provide access to PHI, do they track log-ins, and in particular failed logins? (A minimal lockout-tracking sketch follows this list.)
• Does the system lock out users after a specified number of failed logins?
• Are system administrators notified if such an event occurs?
• Is there evidence that administrators respond to such events in an appropriate manner?
• Are there policies governing password complexity, change frequency, and reuse? Are the policies consistent with current "standards" within the industry?
• Are employees trained to maintain strict secrecy regarding their passwords?
• Are there procedures mandating that IT may not request passwords from users?
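To make the failed-login items above concrete, here is a minimal sketch of tracking failures, locking the account, and notifying an administrator. The threshold, lockout window, and notification hook are illustrative policy choices, not values prescribed by HIPAA.

    import time

    MAX_FAILED_ATTEMPTS = 5    # assumed policy value; HIPAA does not mandate a number
    LOCKOUT_SECONDS = 15 * 60  # assumed lockout duration

    failed_attempts = {}       # user_id -> list of failure timestamps
    locked_until = {}          # user_id -> epoch time the lock expires

    def notify_admin(user_id):
        # Stand-in for a real alerting mechanism (email, SIEM event, etc.).
        print(f"ALERT: account '{user_id}' locked after repeated failed logins")

    def record_failed_login(user_id):
        now = time.time()
        attempts = failed_attempts.setdefault(user_id, [])
        attempts.append(now)
        # Keep only failures within the rolling window.
        attempts[:] = [t for t in attempts if now - t < LOCKOUT_SECONDS]
        if len(attempts) >= MAX_FAILED_ATTEMPTS:
            locked_until[user_id] = now + LOCKOUT_SECONDS
            notify_admin(user_id)

    def is_locked(user_id):
        return time.time() < locked_until.get(user_id, 0)

An auditor reviewing a real system would look for this behavior plus evidence that the resulting alerts are actually reviewed and acted upon.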


Security Incident Procedures

• Are procedures in place for responding to security incidents?
• Is there evidence that these procedures are being followed (review any logs/files regarding actions taken in response to security incidents)?

Contingency Plan

• Does the organization have a comprehensive disaster preparedness/business continuity plan?
• Does the plan include a backup and recovery procedure for all system data?
• Does the plan adequately address how operations can be continued under various scenarios?
• Does the plan include procedures for testing the various elements of the plan to ensure they are still valid?
• Does the plan address the criticality of the various systems in its design?

Evaluation

• Is a periodic re-evaluation of security standards undertaken?
• Does the re-evaluation take into account changes in the current state of IT security and the environment of threats facing secured systems, as well as the current state of the regulations?

Business Associate Agreements

• If components of the system are held outside the direct control of the company, such that PHI will be outside of the direct control of the company, do sufficient agreements exist to guarantee that the party responsible for handling the PHI will adhere to the requirements of the regulation?
• Are these agreements in such a form that they qualify as a contract or equivalent?

Physical safeguards

Facility Access Controls

• Is the facility containing the system (this includes electronic access points that connect to the system in a "non-secure" manner) sufficiently protected from unauthorized access?
• Is access to application and database servers further restricted to only those personnel who are authorized to directly interact with those elements of the system (i.e., system administrators)?
• Is there a system that limits access to facilities and areas within facilities to authorized personnel? Does this system implement a mechanism for confirming the identity of individuals accessing the facility (e.g., through an electronic key access system)?
• Does this system apply to visitors as well?
• Is access to systems used for testing and revision of software similarly restricted? Evaluate the access restrictions to tools that could be used to modify and deploy the software. Ensure that these access restrictions are addressed via SOP.


Workstation Use

• Do procedures exist which govern the class of workstation that can be used to access PHI?

Workstation Security

• Are workstations that are used to access PHI appropriately restricted?
• If workstations can directly interact with PHI without additional controls, are the workstations secured in appropriately restricted areas?

Device and Media Controls

• Are procedures in place governing the use and removal of hardware and storage media used to house PHI?
• Do the procedures seem reasonable?
• Do procedures exist regarding the disposal of media and devices used to store PHI?
• Are records maintained that account for the movement of such media, and who moved it?

Technical safeguards

Access Control

• Do systems with access to PHI have a robust authentication process for gaining access?
• Do these systems require that all users have a unique ID?
• Are password assignment, change, recovery, and related processes designed in such a way as to ensure that the user gaining access to PHI is who they say they are?
• Is there a mechanism for gaining access to necessary PHI in the event of an emergency? Is this mechanism designed such that its invocation during non-emergencies would not be achievable in a non-obvious way?
• Does the system automatically log off users after a defined period of inactivity?
• Does the system maintain PHI in an encrypted state?

Audit Controls

• Do systems used for PHI maintain audit trails which record, in a secure manner, all activities within the system? Are the audit trails reviewed periodically?

Integrity

• Are policies and procedures in place to ensure that PHI has not been altered or destroyed in an unauthorized manner?
• Are electronic mechanisms employed to corroborate that PHI has not been altered or destroyed in an unauthorized manner? (A minimal integrity-check sketch follows this list.)
• If PHI is transmitted outside of the responsible entity (i.e., via the internet), is the data transmitted in such a way as to prevent unauthorized access (via SSL or similar protocols)?
• Are security certificates on servers involved in managing PHI current and authenticated by a recognized third-party certifying organization?
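One common electronic mechanism for the integrity items above is a keyed hash (HMAC) computed over each record at write time and re-checked at read time. The sketch below is illustrative only, not a mechanism mandated by HIPAA; the record format and key handling are assumptions, and a production system would keep the key in a dedicated secrets store.

    import hmac
    import hashlib

    SECRET_KEY = b"replace-with-key-from-a-secrets-store"  # assumption: key managed externally

    def seal_record(record_bytes: bytes) -> str:
        """Compute an HMAC-SHA256 tag to store alongside the PHI record."""
        return hmac.new(SECRET_KEY, record_bytes, hashlib.sha256).hexdigest()

    def verify_record(record_bytes: bytes, stored_tag: str) -> bool:
        """Recompute the tag and compare in constant time; False means tampering."""
        expected = seal_record(record_bytes)
        return hmac.compare_digest(expected, stored_tag)

    # Usage: seal at write time, verify at read time.
    record = b'{"patient_id": "12345", "result": "negative"}'
    tag = seal_record(record)
    assert verify_record(record, tag)
    assert not verify_record(record + b"tampered", tag)

Because verification fails on any change to the record bytes, this kind of check gives an auditor positive evidence that unauthorized alteration would be detected.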



Organizational requirements

Business associate contracts

• Are business associates required contractually to adhere to the regulations with regard to PHI they maintain?
• Do business associate agreements exist with third-party data/application hosting services?
• Do business associate agreements extend, contractually, to agents/subcontractors?
• Is it clear within the terms of the business associate agreements that the business associate must immediately report any breaches or incidents?
• Is it clear within the terms of the business associate agreements that the relationship can be terminated if the associate fails to comply with the requirements of the regulations?
• Do records exist of audits and other reviews of business associates? If breaches or violations of the regulation have occurred, have appropriate actions been taken, up to and including termination of the agreement?

Documentation requirements

Documentation

• Are the procedures required by the regulations maintained in written (or alternatively electronic, but signed) form?
• Are actions and activities which are required to be documented maintained in written form (or electronic alternatives)?
• Is there a retention policy regarding the policies and procedures? Does the policy require that such documents be maintained for at least six years after either the date of their creation or their effective date (whichever is later)?
• Does a review system exist for these policies and procedures to ensure that they are current?

Clinical Laboratory Improvement Amendments

The Clinical Laboratory Improvement Amendments (CLIA) of 1988 is a United States federal statute and regulatory standards program that applies to all clinical laboratory testing performed on humans in the United States, except clinical trials and basic research.

History

On December 5, 1967, the U.S. enacted Public Law 90-174, which included in Section 5 the "Clinical Laboratories Improvement Act of 1967." CLIA '67 set regulations on the licensing of clinical laboratories and the movement of samples in and out of them across state lines. Laboratories would be eligible for a full, partial, or exempt CLIA '67 license, depending on the laboratory's conducted tests.

However, by the mid-1980s the relevancy of CLIA '67 to a vastly changed procedural and technological clinical laboratory landscape began to be questioned. The Office of the Assistant Secretary for Health for Planning and Evaluation (ASPE) of the U.S. Department of Health and Human Services commissioned a study to assess the effectiveness of federal regulations affecting clinical laboratories and their goal of protecting the public health. On April 8, 1986, the Final Report on Assessment of Clinical Laboratory Regulations by Michael L. Kenney and Don P. Greenberg was submitted to the ASPE. The analysis found that many federal regulations were technically obsolescent and many possibly operationally unnecessary as a result of changing laboratory technology and changed federal reimbursement policies. Among the changes recommended by the HHS-funded analysis were: (a) the regulatory classification system based upon physical location of laboratories is no longer appropriate


and should be replaced with a classification system reflecting laboratory functions; (b) a single, uniform set of federal regulations should be developed that covers all civilian laboratories receiving federal reimbursement or operating in interstate commerce; (c) a revised federal regulatory system should emphasize measures of performance such as personnel and inspection requirements; and (d) clinical laboratory regulations should be based upon objective data to the maximum extent possible.

On August 5, 1988, a new set of proposed regulations was put forth by the Health Care Financing Administration as Medicare, Medicaid and CLIA Programs; Revision of the Clinical Laboratory Regulations for the Medicare, Medicaid, and Clinical Laboratories Improvement Act of 1967 Programs. The proposal aspired "to remove outdated, obsolete and redundant requirements, make provision for new technologies, place increased reliance on outcome measures of performance, and emphasize the responsibilities and duties of personnel rather than the formal credentialing requirements and detailed personnel standards in existing regulations." This ultimately led to the proposal becoming law on October 31, 1988 under Public Law 100-578 as the Clinical Laboratory Improvement Amendments of 1988.

Regulations for implementing CLIA continued to be developed afterwards, with the Department of Health and Human Services considering thousands of comments to the proposed regulations. The final regulations were published February 28, 1992, set to be effective on September 1 of the same year. The new CLIA '88 put into place regulations concerning test complexity, certification, proficiency testing, patient test management, personnel requirements, quality assurance, and other processes in the clinical laboratory. However, phase-in effective dates were extended on several occasions afterwards: on December 6, 1994 in the Federal Register (59 FR 62606), May 12, 1997 in the Federal Register (62 FR 25855), October 14, 1998 in the Federal Register (63 FR 55031), and December 29, 2000 in the Federal Register (65 FR 82941).

On January 24, 2003, the Centers for Medicare and Medicaid Services submitted their final rule (68 FR 3639), effective April 24, 2003, affecting QC requirements for laboratories and qualification requirements for lab directors. The final rule also made revisions to 42 CFR 493, including the renaming, reorganizing, and consolidation of similar requirements into one section, the deletion of duplicate requirements, and the rewording of the requirements to better clarify their original intent. It also addressed requirements regarding the entire testing process, making those requirements better correlate with the workflow of a lab specimen in the laboratory, from acquisition to reporting of results, including the subdivision of testing into pre-analytic, analytic, and post-analytic phases.

CLIA program

The CLIA program sets standards and issues certificates for clinical laboratory testing. CLIA defines a clinical laboratory as any facility which performs laboratory testing on specimens derived from humans for the purpose of providing information for:

• diagnosis, prevention, or treatment of disease or impairment.
• health assessments.

The CLIA program is designed to ensure the accuracy, reliability, and timeliness of test results regardless of where the test was performed. Each specific laboratory system, assay, and examination is graded for level of complexity by assigning scores of "1," "2," or "3" for each of seven criteria (a sketch of the scoring arithmetic appears at the end of this section). A test scored as a "1" is the lowest level of complexity, while a test scored "3" indicates the highest level. A score of "2" is assigned when the characteristics for a particular test are ranked primarily between low- and high-level in description. The seven criteria for categorization are:

1. Knowledge
2. Training and experience
3. Reagents and materials preparation
4. Characteristics of operational steps


5. Calibration, quality control, and proficiency testing materials
6. Test system troubleshooting and equipment maintenance
7. Interpretation and judgment

The Centers for Medicare and Medicaid Services (CMS) has the primary responsibility for the operation of the CLIA program. Within CMS, the program is implemented by the Center for Medicaid and State Operations, Survey and Certification Group, and the Division of Laboratory Services. The CLIA program is funded by user fees collected from over 244,000 laboratories, most located in the United States.
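The article does not state how the seven scores combine into a final category. Under the commonly cited CMS convention (background knowledge, not quoted in this article), the seven scores are summed, and a total of 12 or less indicates moderate complexity while a higher total indicates high complexity. A minimal sketch under that assumption:

    CRITERIA = [
        "Knowledge",
        "Training and experience",
        "Reagents and materials preparation",
        "Characteristics of operational steps",
        "Calibration, quality control, and proficiency testing materials",
        "Test system troubleshooting and equipment maintenance",
        "Interpretation and judgment",
    ]

    def categorize(scores: dict) -> str:
        """Sum the seven criterion scores (each 1-3) and categorize the test."""
        if set(scores) != set(CRITERIA):
            raise ValueError("A score is required for each of the seven criteria")
        if not all(s in (1, 2, 3) for s in scores.values()):
            raise ValueError("Each criterion is scored 1, 2, or 3")
        total = sum(scores.values())
        # Assumed CMS convention: total of 12 or less -> moderate complexity.
        return "moderate complexity" if total <= 12 else "high complexity"

    # Hypothetical example: mostly low-complexity characteristics.
    example = dict.fromkeys(CRITERIA, 1)
    example["Interpretation and judgment"] = 3
    print(categorize(example))  # -> "moderate complexity" (total = 9)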

CLIA waived tests

Under CLIA, tests and test systems that meet risk, error, and complexity requirements are issued a CLIA certificate of waiver. In its 2014 document Administrative Procedures for CLIA Categorization - Guidance for Industry and Food and Drug Administration Staff, the U.S. Food and Drug Administration (FDA) advises its staff that a medical testing device originally rated moderately complex could receive a waiver "if the device is simple to use and the sponsor demonstrates in studies conducted at the intended use sites that the test is accurate and poses an insignificant risk of erroneous results." While a waived test is deemed to have an acceptably low level of risk, the Centers for Disease Control and Prevention (CDC) reminds administrators and recipients of such tests that no test is 100 percent safe:

Although CLIA requires that waived tests must be simple and have a low risk for erroneous results, this does not mean that waived tests are completely error-proof. Errors can occur anywhere in the testing process, particularly when the manufacturer's instructions are not followed and when testing personnel are not familiar with all aspects of the test system. Some waived tests have potential for serious health impacts if performed incorrectly... To decrease the risk of erroneous results, the test needs to be performed correctly, by trained personnel and in an environment where good laboratory practices are followed.

In November 2007, the CLIA waiver provisions were revised by the United States Congress to make it clear that tests approved by the FDA for home use automatically qualify for CLIA waiver.

List of tests

A list of tests categorized by the FDA as waived since 2000 can be found at the FDA website [1]. As of February 17, 2015, the list included 6,669 separate test devices.

Further reading

• Kenney, Michael L. (February 1987). "Quality Assurance in Changing Times: Proposals for Reform and Research in the Clinical Laboratory Field" [2] (PDF). Clinical Chemistry 33 (2): 328–336. PMID [3] 3542302 [4].

External links

• 42 CFR 493 at the U.S. Government Printing Office [5]
• CLIA Law & Regulations at CDC [6]
• Chronology of CLIA Related Documents in the Federal Register & Code of Federal Regulations [7]



Notes

A couple of elements of this article are reused from the Wikipedia article [8].

References

[1] http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfClia/testswaived.cfm
[2] http://www.clinchem.org/content/33/2/328.full.pdf
[3] http://en.wikipedia.org/wiki/PubMed_Identifier
[4] http://www.ncbi.nlm.nih.gov/pubmed/3542302
[5] http://www.gpo.gov/fdsys/granule/CFR-2011-title42-vol5/CFR-2011-title42-vol5-part493/content-detail.html
[6] http://wwwn.cdc.gov/clia/Regulatory/default.aspx
[7] http://wwwn.cdc.gov/CLIA/Regulatory/Chronology.aspx
[8] http://en.wikipedia.org/wiki/Clinical_Laboratory_Improvement_Amendments

Health Level 7

Health Level Seven (HL7) is an international non-profit volunteer-based organization involved with the development of international health care informatics interoperability standards. The HL7 community consists of health care experts and information scientists collaborating to create standards for the exchange, management, and integration of electronic health care information. The term "HL7" is also used to refer to some of the specific standards created by the organization (e.g., HL7 v2.x, v3.0, HL7 RIM).

The Reference Information Model (RIM) is an important component of the HL7 v3.0 standard and is based on XML.

HL7 and its members provide a framework (and related standards) for the exchange, integration, sharing, and retrieval of electronic health information. v2.x of the standards, which support clinical practice and the management, delivery, and evaluation of health services, are the most commonly used in the world.

History

The International Organization for Standardization (ISO) got involved with standardizing network exchanges of data between computers around 1979, creating the Open Systems Interconnect (OSI) standards model. These formal OSI standards ranged across seven levels, from OSI Level 1 (the physical layer, e.g., communication over coaxial cable) to OSI Level 7 (the application layer, e.g., communication between clinical software). By 1981, researchers at the University of California, San Francisco had created a proprietary protocol that, unbeknownst to them at the time, fit under the OSI Level 7 model. The protocol was developed for clinical purposes such that "computers exchanged several core messages, including the synchronization of patient admission-discharge-transfer information, orders from clinical areas, and the display of textual results to the clinical areas." By 1985, Simborg Systems (which developed hospital information systems) sought to have a non-proprietary protocol created because "standardization efforts at the time was either fragmented, in a different direction or with a


different scope." This led to a push to create a new standards organization, with initial meetings occurring at the end of March 1987. The meetings produced the term "HL7" and prompted a non-profit organization to be created, eventually known as Health Level Seven International. Version 1.0 of the HL7 specification was released in October 1987. The direction of HL7 was largely led by Simborg Systems; however, with greater practical use seen in furthering the protocol and non-profit, the first non-Simborg Systems chairperson, Ed Hammond, took the reins in 1989. By June 1990, Version 2.1 was published and included mechanisms for results reporting and billing.

By the early- to mid-1990s, news of HL7 was beginning to spread to international clinical sectors, particularly parts of Europe (including the Netherlands, Germany, and the United Kingdom) as well as Canada, Japan, and Australia. In June 1994, the American National Standards Institute (ANSI) awarded Health Level Seven International status as an accredited standards developer. Version 2.2 became an official ANSI standard in February 1996. HL7 had roughly 1,700 members from various health care industries around the globe by the late 1990s.

Version 3.0 of the HL7 standard was released in late 2005, which internationalized it and made it more consistent and precise. Where the 2.x standards eventually received wide adoption for their flexibility and available implementation options, the 3.0 standards, in contrast, departed from that flexibility in order to be more "definite and testable, and provide the ability to certify vendors' conformance." In 2009, Corepoint Health reported that most HL7 messaging was occurring using 2.3 and 2.3.1 models, with 3.0-based messages representing only a tiny fraction of all interfaces; in 2012, Corepoint Health's Rob Brull estimated that more than 90 percent of all healthcare systems were still utilizing 2.x models. That trend continued, with several experts proclaiming the standard to be more or less a failure.

In early 2012, HL7 announced the HL7 FHIR (Fast Healthcare Interoperability Resources) initiative, which would utilize the best aspects of both the 2.x and 3.0 standards, optimally resulting in a standard that is 20 percent the size of 3.0 but still meets the operational requirements of 80 percent of systems using the standard. FHIR is being built on RESTful web services and provides modular, extensible "resources" to provide some flexibility but within a more fixed framework. In December 2014, HL7 announced the Argonaut Project, meant "to hasten current FHIR development efforts in order to create practical and focused guidelines and profiles for FHIR by the spring of 2015."

Standards

In total, HL7 develops conceptual standards (e.g., HL7 RIM), document standards (e.g., HL7 CDA), application standards (e.g., HL7 CCOW), and messaging standards (e.g., HL7 v2.x and v3.0). Messaging standards are particularly important because they define how information is packaged and communicated from one party to another. Such standards set the language, structure, and data types required for seamless integration from one system to another. (A minimal v2.x message sketch follows below.) Business use of the HL7 standards requires a paid organizational membership in HL7, Inc. HL7 members can access standards for free, and non-members can buy the standards from HL7 or ANSI.

HL7 v2.x and 3.0 are the primary standards from the organization. They provide a framework for data exchange among clinical and healthcare systems in an ideal format. The 2.x standards are flexible, with several implementation options, loosely geared towards "clinical interface specialists" working to move clinical data in the application space. The 3.0 standards are designed to be more fixed, precise, and international, geared towards governments and end users of clinical applications. While HL7 v2.x and 3.0 are the primary standards, a few other important standards and components are associated with HL7, as detailed below.
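To make the idea of a v2.x message concrete, here is a minimal sketch that builds and splits an ADT-style (admission/discharge/transfer) message. The field contents are invented for illustration, and real messages carry many more fields and segments; the point is only the pipe-delimited, segment-per-line structure the standard defines.

    # A minimal, hypothetical HL7 v2.x-style message: one segment per line
    # (carriage-return separated), fields separated by "|". Contents invented.
    message = "\r".join([
        "MSH|^~\\&|LAB_LIS|GENERAL_HOSP|LIMS|CENTRAL_LAB|20150301120000||ADT^A01|MSG00001|P|2.3",
        "PID|1||123456^^^GENERAL_HOSP||DOE^JANE||19700101|F",
        "PV1|1|I|WARD1^101^A",
    ])

    # Parse segments and show the first few fields of each.
    for segment in message.split("\r"):
        fields = segment.split("|")
        print(fields[0], "->", fields[1:4])

The segment IDs (MSH for the message header, PID for patient identification, PV1 for patient visit) and the flat delimiter scheme are exactly what gave v2.x its flexibility, and also why implementations vary so widely from site to site.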



Reference Information Model (RIM)

The Reference Information Model (RIM) is an important component of the HL7 Version 3 standard. RIM expresses the data content needed in a specific clinical or administrative context and provides an explicit representation of the semantic and lexical connections that exist between the information carried in the fields of HL7 messages. The standard is accepted as official by the ISO as ISO/HL7 21731; the original was approved in 2006, with a revised version appearing in 2014.

Clinical Document Architecture (CDA)

The Clinical Document Architecture (CDA) is an XML-based markup standard intended to specify the encoding, structure, and semantics of clinical documents for exchange. The standard is accepted as official by the ISO as ISO/HL7 27932; the most current version comes from 2009.

Clinical Context Object Workgroup (CCOW)

The Clinical Context Object Workgroup (CCOW) family of standards is designed to enable disparate applications to share user context and patient context in real-time, particularly at the user-interface level. CCOW implementations typically require a CCOW vault system to manage user security between applications. The primary standard under CCOW is the Context Management Specifications (CCOW), which "serves as the basis for ensuring secure and consistent access to patient information from heterogeneous sources." This standard is accepted as official by ANSI as ANSI/HL7 CMS V1.6.

Fast Healthcare Interoperability Resources (FHIR)

The Fast Healthcare Interoperability Resources (FHIR) standard was announced in 2012 and has been in development since. FHIR is being built on RESTful web services and provides modular, extensible "resources" to provide some flexibility but within a more fixed framework (a minimal request sketch follows the list of principles below). The fundamental principles of FHIR are:

• prioritize implementers as the target user of the standard;
• provide a flexible framework for interoperability;
• limit complexity to where it's most needed;
• keep conformance requirements minimal but also provide varying degrees of rigor;
• leverage open source development principles;
• make the standard available without cost;
• support multiple exchange architectures;
• leverage common web technologies;
• make the standard forward and backward compatible; and
• design, publish, and implement associated specifications using widely available tools.
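To illustrate the RESTful style, the sketch below reads a single Patient resource over HTTP using only the Python standard library. The server URL and patient ID are placeholders rather than a real endpoint, and the media type shown reflects the DSTU-era drafts current when this guide was written.

```python
# A minimal sketch of a FHIR RESTful "read" interaction. In FHIR,
# reading a resource is an HTTP GET on [base]/[resource type]/[id],
# returning the resource as JSON or XML.
import json
import urllib.request

base = "https://fhir.example.org/base"   # hypothetical FHIR endpoint

req = urllib.request.Request(
    base + "/Patient/123",
    headers={"Accept": "application/json+fhir"},  # DSTU-era FHIR JSON media type
)
with urllib.request.urlopen(req) as resp:
    patient = json.load(resp)

# Every FHIR resource carries its type; the other fields vary by resource.
print(patient["resourceType"])   # -> "Patient"
```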


Further reading

• Introduction to HL7 Standards [1]
• Spronk, René (05 September 2014). "The Early History of Health Level 7" [2]. Ringholm BV.

External links

• Health Level 7 International [3]
• HL7 Wiki [4]
• HL7 FHIR [5]

References

[1] http://www.hl7.org/implement/standards/index.cfm
[2] http://www.ringholm.com/docs/the_early_history_of_health_level_7_HL7.htm
[3] http://www.hl7.org/
[4] http://wiki.hl7.org/index.php?title=Main_Page
[5] http://wiki.hl7.org/index.php?title=FHIR

ISO 9000

ISO 9000 is a family of standards related to quality management systems, designed to help organizations ensure that they meet the needs of customers and other stakeholders. The standards are published by the International Organization for Standardization (ISO) and are available through national standards bodies. ISO 9000 deals with the fundamentals of quality management systems, including the eight management principles on which the family of standards is based.

A General Motors assembly plant advertises its ISO 9001 certification.

ISO 9001 deals with the requirements that organizations wishing to meet the standard have to fulfill. Third-party certification bodies provide independent confirmation that organizations meet the requirements of ISO 9001. Over a million organizations worldwide are independently certified, making ISO 9001 one of the most widely used management tools in the world today. Despite this widespread use, however, the ISO certification process has been criticized as wasteful and not useful for all organizations.


History

The ISO 9000 family of standards was originally built on several British standards developed in the early 1970s: BS 9000, BS 5179, and BS 5750. These quality assurance standards were initially related to the electronics manufacturing industry and set guidelines on managing supply-side quality through auditing and contractual documentation. However, the history of ISO 9000 can be traced back even further to the publication of the United States Department of Defense MIL-Q-9858 standard in 1959. MIL-Q-9858 was revised into the NATO AQAP series of standards in 1969, which in turn were revised into the BS 5179 series of guidance standards published in 1974, and finally revised into the BS 5750 series of requirements standards in 1979.

As the idea of company certification of meeting a certain level of quality became more attractive, the push for a more rigorous international standard (primarily led by the British Standards Institute [BSI]) resulted in the creation of the ISO 9000 family in 1987. Originally based on BS 5750, the ISO 9000 family started out with three quality management models and a set of guidelines for following them:

• ISO 9001:1987 Model for quality assurance in design, development, production, installation, and servicing
• ISO 9002:1987 Model for quality assurance in production, installation, and servicing
• ISO 9003:1987 Model for quality assurance in final inspection and test
• ISO 9004.1:1987 Quality management and quality system elements - Part 1: Guidelines

Changes to ISO 9000

In 1994, the ISO 9000 standards were updated to place focus on the importance of quality control and preventative action and to emphasize the need for the documentation of procedures. In 2000, ISO 9001, 9002, and 9003 were combined into ISO 9001:2000, with a major shift in focus towards quality management versus quality control, as well as a focus on process management, "the monitoring and optimizing of a company's tasks and activities, instead of just inspecting the final product." It directed manufacturers to carefully examine client requirements in order to design and improve processes and improve customer satisfaction. The 2008 changes to ISO 9001 were minimal, clarifying and simplifying language while making it more consistent with other standards. The ISO 9004 guidelines document was updated in 2009 "to promote a sustainable business approach" that focused on all stakeholders.

An updated version of ISO 9001 is expected at the end of 2015 if the ISO members vote favorably in the second quarter of 2015. With the revision, the scope of the standard will not change; an essential change, however, will affect the structure. The new ISO 9001:2015 will follow the so-called high-level structure. This, along with the uniform use of core texts and terms, will enable an identical structure for all management systems.

Adoption of the standard

The global adoption of ISO 9001 may be attributable to a number of factors. Many major purchasers require their suppliers to hold ISO 9001 certification. In addition to several stakeholders' benefits, a number of studies have identified significant financial benefits for organizations certified to ISO 9001. Examples include:

1. In 2002, Heras et al. found superior return on assets compared to otherwise similar organizations without certification and demonstrated that this was statistically significant and not a function of organization size.
2. A 2003 study of 146 Singapore-based companies by Chow-Chua et al. found improved financial performance, though with the caveat "that while certification leads to better overall financial performance, non‐listed certified firms experience better documentation procedures, higher perceived quality of products or services, and more effective communication among employees than listed certified firms."
3. That same year, Rajan and Tamimi showed that ISO 9001 certification resulted in superior stock market performance and suggested that shareholders were richly rewarded for investing in the certified companies.
4. In 2005, Corbett et al. showed similar superior performance, stating that "three years after certification, the certified firms do display strongly significant abnormal performance under all control-group specifications."
5. That same year, Sharma linked increases in "operating efficiency, growth in sales, and overall financial performance" with ISO 9000 certification.
6. Naveha and Marcus claimed in 2007 that manufacturers in the U.S. automotive industry that implemented ISO 9001 saw superior operational performance soon after.
7. A 2011 survey from The British Assessment Bureau showed that 44 percent of their certified clients had won new business due to becoming certified.

While the connection between superior financial performance and ISO 9001 may be seen from the examples cited, there remains no proof of direct causation, though longitudinal studies such as those of Corbett et al. may suggest it. Other researchers such as Heras et al. have suggested that while there is some evidence of this, the improvement is partly driven by a tendency for better-performing companies to seek ISO 9001 certification.

Criticisms of the standard

A common criticism of the ISO 9000 family of standards is the amount of money, time, and paperwork required for registration. In 2003, writing for Quality Magazine, engineer Scott Dalgleish emphasized that "[u]nder ISO, every quality system enhancement triggers enormous documentation changes that make quality managers question whether the benefits of the change are worth the effort." In a piece for Inc. magazine in 2005, journalist Stephanie Clifford told the story of Delaware North Companies, which spent nearly 18 months and $115,000 just to certify its guest services management division. Others have chosen not to adopt the standard because of the perceived risks and the uncertainty of not knowing whether there are direct relationships to improved quality, as well as doubts about what kind and how many resources will be needed. Other perceived risks include how much certification will cost, increased bureaucratic processes, and risk of a poor company image if the certification process fails.

Critics like John Seddon, a leading global authority on the service industry, claim ISO 9001 promotes specification, control, and procedures rather than understanding and improvement. Others like business improvement specialist Jim Wade have argued that ISO 9001 is effective as a guideline, but that promoting it as a standard "helps to mislead companies into thinking that certification means better quality, ... [undermining] the need for an organization to set its own quality standards." In short, Wade argues that reliance on the specifications of ISO 9001 does not guarantee a successful quality system. The standard has been seen as especially prone to failure when a company is interested in certification before quality. Certifications have in fact often been based on customer contractual requirements rather than a desire to actually improve quality. "If you just want the certificate on the wall, chances are you will create a paper system that doesn't have much to do with the way you actually run your business," said ISO's Roger Frost in 2001. Certification by an independent auditor is often seen as the problem area, and according to Barnes, it "has become a vehicle to increase consulting services."


Further reading

• Cochran, Craig (2008). ISO 9001 in Plain English [1]. Paton Professional. pp. 178. ISBN 9781932828207.

External links

• ISO 9000 [2] at the International Organization for Standardization

Notes

This article reuses a few elements from the Wikipedia article [3].

References

[1] https://books.google.com/books?id=-GplCM5xTYYC
[2] http://www.iso.org/iso/iso_9000
[3] http://en.wikipedia.org/wiki/ISO_9000

ISO/IEC 17025

ISO/IEC 17025 is an International Organization for Standardization (ISO) standard used by testing and calibration laboratories to provide a basis for accreditation of laboratory quality systems. There are many commonalities with the ISO 9000 family of standards, but ISO/IEC 17025 adds the concept of competence to the equation, applying directly to those organizations that produce testing and calibration results.

History

Even military testing and calibration labs like the Navy Standards Laboratory (WPP) opt to get ISO/IEC 17025 certified.

ISO/IEC 17025 was originally known as ISO/IEC Guide 25, first released in 1978, with subsequent editions following in 1982 and 1990. Guide 25 was created with the belief that "third party certification systems [for laboratories] should, to the extent possible, be based on internationally agreed standards and procedures." In the mid- to late 1990s, an update to Guide 25 was required. However, the ISO decided to convert the guide into a standard and introduce tight compatibility with ISO 9001, which was also being revised, such that ISO 9001 would be treated as a master standard and the next evolution of Guide 25 as a standard specifically applied to testing and calibration laboratories. ISO/IEC 17025:1999 was issued by the ISO in late 1999 and was internationally adopted in 2000. A second release was made on May 12, 2005, after it was agreed that the wording needed to be more closely aligned with the 2000 version of ISO 9001. The most significant changes introduced greater emphasis on the responsibilities of senior management, as well as explicit requirements for continual improvement of the management system itself, particularly communication with the customer. ISO/IEC 17025:1999 became defunct in May 2007.


The standard

The ISO/IEC 17025 standard comprises five elements: scope, normative references, terms and definitions, management requirements, and technical requirements. Two annexes are also included. The management and technical requirements are the most important sections: the management requirements section details the operation and effectiveness of the quality management system within the laboratory, while the technical requirements section details the factors which determine the correctness and reliability of the tests and calibrations performed in the laboratory. The standard is organized as follows:

Scope

The scope of the standard is described over six points. It states what type of testing and calibration is covered; who it's applicable to; the purpose of the standard; what's not covered; and how it relates to ISO 9001.

Normative references

This section states that both ISO/IEC 17000 and the International Vocabulary of Metrology (VIM) are vital to applying the standard.

Terms and definitions

This section simply states that relevant terms found in the standard can be defined via ISO/IEC 17000 and VIM.

Management requirements

The requirements for the operational effectiveness of a laboratory's quality management system are outlined in this section. The requirements are broken down into 15 subsections:

4.1 Organization
4.2 Management system
4.3 Document control
4.4 Review of requests, tenders and contracts
4.5 Subcontracting of tests and calibrations
4.6 Purchasing services and supplies
4.7 Service to the customer
4.8 Complaints
4.9 Control of nonconforming testing and/or calibration work
4.10 Improvement
4.11 Corrective action
4.12 Preventive action
4.13 Control of records
4.14 Internal audits
4.15 Management reviews


Technical requirements

The requirements for staff competence, methodologies, equipment testing and calibration, and test methods are outlined in this section. The requirements are broken down into 10 subsections:

5.1 General
5.2 Personnel
5.3 Accommodation and environmental conditions
5.4 Test and calibration methods and method validation
5.5 Equipment
5.6 Measurement traceability
5.7 Sampling
5.8 Handling of test and calibration items
5.9 Assuring the quality of the test and calibration results
5.10 Reporting the results

Annexes

Two annexes and a bibliography are included. Annex A: Nominal cross-references to ISO 9001:2000 provides links between this standard and ISO 9001, important as this standard includes requirements not covered in ISO 9001. Annex B: Guidelines for establishing applications for specific fields gives accreditation seekers explanations of specific requirements to better complete their applications.

Accreditation

Laboratories use ISO/IEC 17025 to implement a quality system aimed at improving their ability to consistently produce valid results. It is also the basis for accreditation from an accreditation body. Since the standard is about competence, accreditation is simply formal recognition of a demonstration of that competence. A prerequisite for a laboratory to become accredited is to have a documented quality management system. The usual contents of the quality manual follow the outline of the ISO/IEC 17025 standard.

National accreditation bodies are primarily responsible for accrediting laboratories to ISO/IEC 17025. Laboratories can use either a domestic organization or some other internationally recognized body in cases where the domestic organization "has either no international recognition or where it lacks recognition in parts of the world relevant to the laboratory’s operations." Laboratories typically select a range of common and frequently used methodologies that can readily benefit from, and demonstrate, the comprehensive quality system those methodologies run under.


Further reading • "Complying with ISO 17025" [1] (PDF). United Nations Industrial Development Organization. October 2009. pp. 106.

External links

• ISO 17025:2005 [2]
• ISO 17025:2005 [3] on the ISO Online Browsing Platform

References

[1] http://www.unido.org/fileadmin/user_media/Publications/Pub_fr/Complying_with_ISO_17025_A_practical_guidebook.pdf
[2] http://www.iso.org/iso/catalogue_detail.htm?csnumber=39883
[3] https://www.iso.org/obp/ui/#iso:std:iso-iec:17025:ed-2:v1:en

ISO/TS 16949

ISO/TS 16949 is an International Organization for Standardization (ISO) technical specification for the development of a quality management system, specifically for the development, production, and, when relevant, installation and servicing of automotive-related products. The standard provides for continual improvement of these processes, emphasizing defect prevention and the reduction of variation and waste in the supply chain. It is based on the ISO 9001 standard.

Manufacturers of automotive parts supplied to automakers typically must get ISO/TS 16949 certified to remain competitive.

History

ISO/TS 16949 is based on DaimlerChrysler, Ford, and General Motors' QS-9000 quality system standards as well as the ISO 9000 family of standards. In June 1988, at the ASQ Automotive Division conference, a group of parts suppliers suggested to the attending vice presidents the need for a set of quality assessment standards separate from the ISO 9000 standards, which had been introduced only a year earlier. At that time suppliers noted that ISO 9000 "lacked some elements in current automotive industry documents, such as business plans, customer satisfaction, continuous improvement, manufacturing capabilities, and much of the advanced quality planning content." The QS-9000 manual — based on content from ISO 9001 — was eventually released in August 1994, followed by a second edition in February 1995, which caught on worldwide with other original equipment manufacturers (OEMs). A few months later, at a European QS-9000 implementation meeting, representatives for the U.S. automakers learned that similar efforts had already been underway in the forms of "VDA 6.1 in Germany, AVSQ in Italy, and EAQF in France." A desire to further unify these disparate standards was expressed, resulting in the creation of the International Automotive Task Force (IATF).

The ISO Technical Committee (TC) 176, responsible for quality management and assurance standards, took notice and, not wanting to fragment the ISO 9000 standards into sector-specific branches, attempted to convince the IATF to adopt ISO 9000. However, after several meetings, TC 176 agreed the family of standards was not comprehensive enough for the automotive industry and vowed to include updates in the next version. Though the technical committee worked with the IATF, their needs were different enough that the automotive-specific changes would not make it into the upcoming 2000 iteration. By November 1997, the two groups agreed on using the ISO technical report as a tool for the requirements, which would be based on ISO 9001:1994. By the time the first draft document was created in the fall of 1998, a new type of ISO document had become available: the Technical Specification (TS). The IATF agreed to this format, and in November 1998, ISO/TS 16949 was initially approved as the first ISO Technical Specification, with a second official printing arriving in March 1999.

In March 2002, a revised ISO/TS 16949:2002 was released to align with changes to ISO 9001, putting more focus on how "to improve effectiveness and efficiency of the entire process instead of a narrow focus on mere compliance with standards." The current version is ISO/TS 16949:2009. It was released in July 2009 and draws on ISO 9001:2008, "emphasizing defect prevention and the reduction of variation and waste in the supply chain."

The standard

ISO/TS 16949 applies to the design, development, production and, when relevant, installation and servicing of automotive-related products. The requirements are intended to be applied throughout the supply chain, with vehicle assembly plants being encouraged to seek ISO/TS 16949 certification so as to improve system and process quality, increase customer satisfaction, identify problems and risks in the production process and supply chain, and take preventive measures to ensure effectiveness. The technical specification is organized as follows:

Introduction

This section introduces the perceived importance of quality management systems as well as adopting a process-based approach to their development and implementation. It also addresses the specification's relationship to the ISO 9001, 9004, and 14001 standards.

Scope

The scope and application of the standard is described as defining "the quality management system requirements for the design and development, production and, when relevant, installation and service of automotive-related products."

Normative references

This section states that the definitions in ISO 9000:2005 are vital to applying the specification.

Terms and definitions

Additional terms like "control plan," "error proofing," and "laboratory scope" are defined in this section.

Quality management system

The requirements for the operational effectiveness of a manufacturer's quality management system are outlined in this section. The requirements are broken down into two subsections:

4.1 General requirements
4.2 Documentation requirements


Management responsibility

This section outlines the managerial responsibilities associated with designing and implementing a quality management system. These responsibilities are broken into six subsections:

5.1 Management commitment
5.2 Customer focus
5.3 Quality policy
5.4 Planning
5.5 Responsibility, authority and communication
5.6 Management review

Resource management

This section outlines the requirements for managing the various resources needed to develop and maintain a quality management system as well as improve its effectiveness. These requirements are broken into four subsections:

6.1 Provision of resources
6.2 Human resources
6.3 Infrastructure
6.4 Work environment

Product realization

The requirements for managing the aspects of a quality management system that directly affect how products are designed, produced, and shipped are covered in this section, which spans six subsections:

7.1 Planning of product realization
7.2 Customer-related processes
7.3 Design and development
7.4 Purchasing
7.5 Production and service provision
7.6 Control of monitoring and measuring equipment

Measurement, analysis and improvement

The requirements of this section address how products created through the quality management system should conform and be continually assessed for improvement. This section is divided into five subsections:

8.1 General
8.2 Monitoring and measurement
8.3 Control of nonconforming product
8.4 Analysis of data
8.5 Improvement


Annex A

Annex A: Control plan "shows the correspondence between ISO 9001:2008 and ISO 14001:2004."

Certification

Manufacturers get ISO/TS 16949 certified based on the certification rules issued by the International Automotive Task Force (IATF). Those certification rules changed in April 2014, "intended to strengthen the value of the certification as seen by the customers of the scheme, i.e. the automotive OEMs who receive the products that are produced by the suppliers certified to the scheme." The new rules place extra emphasis on customer-measured performance as well as audit planning, including additional controls on site extensions, ring fencing, and nonconformity management. In March 2014, standards institute BSI outlined all the changes to the certification process in its document Presentation by BSI on the main changes to the IATF ISO/TS 16949 certification scheme. Certifications last three years, and according to the new rules, the first recertification audit should be completed within exactly three years of the initial Stage 2 audit.

Further reading

• Lomas, Frank (14 March 2014). "Presentation by BSI on the main changes to the IATF ISO/TS 16949 certification scheme" [1] (PDF). The British Standards Institution.

External links

• ISO/TS 16949 standards at the International Organization for Standardization [2]

References

[1] http://www.bsigroup.com/LocalFiles/en-US/Documents/TS16949changespresentations.pdf
[2] http://www.iso.org/iso/catalogue_detail?csnumber=52844


The American Society of Crime Laboratory Directors/Laboratory Accreditation Board

The American Society of Crime Laboratory Directors/Laboratory Accreditation Board (ASCLD/LAB) is a Missouri-based not-for-profit that "offers voluntary accreditation to public and private crime laboratories" around the world. Laboratories wishing to become accredited must go through a proficiency testing program as part of the accreditation process.

More than 400 crime labs around the world have chosen to get accredited by the ASCLD/LAB.

The main objectives of the ASCLD/LAB are:

1. To improve the quality of laboratory services provided to the criminal justice system.
2. To adopt, develop and maintain criteria which may be used by a laboratory to assess its level of performance and to strengthen its operation.
3. To provide an independent, impartial, and objective system by which laboratories can benefit from a total operational review.
4. To offer to the general public and to users of laboratory services a means of identifying those laboratories which have demonstrated that they meet established standards.

History

The American Society of Crime Laboratory Directors (ASCLD) was officially founded in the fall of 1974. Around the same time, a national examination of forensic science laboratories began, culminating in 1977 with the revelation that many mistakes were being made in those labs. The problematic statistics that came out of that research partially drove the ASCLD to create the Laboratory Accreditation Board (LAB) in the summer of 1981. The ASCLD/LAB eventually incorporated as a non-profit corporation in Missouri on February 4, 1988. By June 1992, the organization had accredited 128 laboratories, including its first international laboratory, located in Adelaide, Australia. By the spring of 2014 that total was 403.

Accreditation

ASCLD/LAB accredits forensic laboratories and certain forensic breath alcohol calibration programs to help a laboratory "demonstrate that its technical operations and overall management system meet ISO/IEC 17025:2005 requirements and applicable ASCLD/LAB-International supplemental requirements." Application review, on-site assessments, quality review, and, if necessary, corrective action resolutions are conducted before the final review and accreditation decision. The International accreditation is typically good for four years as long as the lab remains compliant and maintains obligations such as notification of significant changes to primary policies, resources, organization, and legal ownership.


Compliance

After acceptance, ASCLD/LAB uses its Annual Accreditation Audit Report, proficiency testing reports, and laboratory visits to monitor a crime lab's compliance with the body's accreditation standards. In the unusual case of a laboratory failing to comply with those standards, the ASCLD may choose to place the lab on probation. Examples of such probationary action include the Nassau County, New York crime lab in 2007 and 2010 and the El Paso Police Department in 2011. Once on probation, the affected lab must satisfy certain conditions before being able to again operate and eventually be removed from the probationary period, including but not limited to submitting lab analyses for external technical reviews. In extreme cases of non-compliance, the ASCLD can also choose to suspend the lab for a period of time or even revoke the lab's accreditation.

External links

• ASCLD [1]
• ASCLD/LAB [2]

References

[1] http://www.ascld.org/
[2] http://www.ascld-lab.org/

The NELAC Institute

The NELAC Institute (TNI) is a non-profit organization dedicated to promoting "the generation of environmental data of known and documented quality through an open, inclusive, and transparent process that is responsive to the needs of the community." The founders' long-term motivation behind the creation of TNI was to enact a "uniform, rigorous, and robust" nationwide environmental laboratory and monitoring accreditation program. This accreditation program exists today in the form of the National Environmental Laboratory Accreditation Program (NELAP).

History

The NELAC Institute (TNI) was formed on November 6, 2006 as a collaboration between the National Environmental Laboratory Accreditation Conference (NELAC) and the Institute for National Environmental Laboratory Accreditation (INELA) with "the vision that all entities generating environmental data in the United States be accredited to a national standard." With the original 2003 NELAC Standard and 2005 updates to ISO/IEC 17025 as their guide, TNI continued work on revising the NELAC Standard, culminating in the release of its new standards in July 2011. The revisions both made ISO/IEC 17025 adherence mandatory and added analysis requirements for "five new contaminants and lower limits for existing contaminants."

National Environmental Laboratory Accreditation Program

NELAP is an accreditation program targeted at environmental laboratories. The laboratory and accreditation body standards for the program are modeled after sections of ISO/IEC 17025 and ISO/IEC 17011. Accreditation for NELAP is performed by U.S. state governmental agencies that wish to participate; scope, laboratory types accepted, and fees are all controlled by the state agency. As of February 2015, 14 agencies in 13 states are organized as NELAP accreditation bodies. Those states are Florida, Illinois, Kansas, Louisiana, Minnesota, New Hampshire, New Jersey, New York, Oregon, Pennsylvania, Texas, Utah, and Virginia. California used to also participate but withdrew from the program on January 31, 2014.

Environmental laboratories seeking NELAP accreditation are directed to go to their state's accreditation body. If the lab's state doesn't have such a body, the lab is still able to select an out-of-state body to do the accreditation for it. This sort of "reciprocal certification" has the disadvantage that the accredited lab will only be certified to test a certain subset of analytes and contaminants, specifically those chosen for coverage by the state's accreditation body. This requires labs to carefully select which state agency to go through. Once the application stage begins, labs go through a series of performance evaluation studies and on-site auditing, as well as pay the necessary fees. Each state may vary its audit requirements for labs maintaining accreditation.

External links

• The NELAC Institute [2]
• National Environmental Laboratory Accreditation Program [3]
• TNI LAMS [4]

References

[2] http://www.nelac-institute.org/index.php
[3] http://www.nelac-institute.org/newnelap.php
[4] http://lams.nelac-institute.org/


7. Laboratory Informatics Resources

LIMSWiki:LIMSforum and LIMS/LI forum posts

This page was created to collect most of the commented and active threads from the LIMSforum and LIMS/LI user groups hosted on LinkedIn. (The threshold is three or more comments.) The LIMSforum's goal is to "connect, share, and learn about laboratory, scientific, and healthcare informatics," while LIMS/LI is more focused on laboratory informatics.

Note: This page is updated once a month, typically at the beginning of the month. We run two months behind current to allow posts to accumulate comments first.

Commented and Active Posts from LIMSforum and LIMS/LI

Post # | Title | Author | Year | Month | Comments | Likes
711 | Suggestions for simple sample-tracking software? [1] | Paul-Michael Agapow | 2015 | 01 | 37 | 7
710 | New site for LIMSforum.com. Can I get you test the group discussion functionality? [2] | John Jones | 2015 | 01 | 33 | 2
709 | When looking for a LIMS professional, should you choose a LIMS Consulting firm or a LIMS Staffing firm? What's the difference? [3] | John Jones | 2015 | 01 | 21 | 8
708 | Which makes up the largest share of a LIMS solution: Labor or Software? [4] | John Jones | 2015 | 01 | 21 | 6
707 | LIMS Implementation: "Big Bang" or "Phased Approach" | Howard Rosenberg | 2015 | 01 | 16 | 10
706 | Hello, Which are the top 5 vendors for instrument integration for 21 CFR Part 11 compliant Laboratories. Looking for vendors for instrument integration to existing LIMS system. [6] | Mithun Kale | 2015 | 01 | 14 | 1
705 | Looking for lower cost (