Phoenix WinNonlin 6.3 Examples Guide


Examples Guide
Phoenix® WinNonlin® 6.3
Phoenix® Connect 1.3

Phoenix® WinNonlin® 6.3, Phoenix® Connect 1.3, and Phoenix® NLME 1.2 copyright ©2005-2012, Certara, L.P. All rights reserved. This software and the accompanying documentation are owned by Certara, L.P. Pharsight is authorized to distribute and sublicense the material contained herein with the express written permission of Certara, L.P. The software and the accompanying documentation may be used only as authorized in the license agreement controlling such use. No part of this software or the accompanying documentation may be reproduced, transmitted, or translated, in any form or by any means, electronic, mechanical, manual, optical, or otherwise, except as expressly provided by the license agreement or with the prior written permission of Certara, L.P.

This product may contain the following software that is provided to Certara, L.P. under license: Actuate™ Formula One® copyright 1993-2003 Actuate Corporation. All rights reserved. Dundas Chart for ASP.NET enterprise edition 5.5.1.1700 (with custom code changes) copyright 2009 Dundas Data Visualization and others. All rights reserved. Tab Pro ActiveX 2.0.0.45 copyright 1996-1998, FarPoint Technologies, Inc. All rights reserved. Sentinel RMS 8.1.1 copyright 2006 SafeNet, Inc. All rights reserved. Microsoft® XML Parser version 3.0 copyright 1998-2005 Microsoft Corporation. All rights reserved.

Certara, L.P. has agreement with the following software to use and redistribute licenses: Syncfusion Essential Studio Enterprise 6.302.0.30 copyright 2001-2009 Syncfusion Inc. All rights reserved. Minimal Gnu for Windows (MinGW, http://mingw.org/), copyright 2007 Free Software Foundation, Inc.

This product may also contain the following royalty free software: DotNetbar 1.0.0.24030 (with custom code changes) copyright 1996-2009 DevComponents LLC. All rights reserved. Xceed zip Library 2.0.116.0 copyright 2009 Xceed Software Inc. All rights reserved. IMSL® copyright 1970-2008 Visual Numerics, Inc. All rights reserved.

Information in the documentation is subject to change without notice and does not represent a commitment on the part of Pharsight Corporation or Certara, L.P. The documentation contains information proprietary to Certara, L.P. and is for use by Pharsight Corporation, and its affiliates' and designates' customers only. Use of the information contained in the documentation for any purpose other than that for which it is intended is not authorized.

NONE OF PHARSIGHT CORPORATION, CERTARA, L.P., NOR ANY OF THE CONTRIBUTORS TO THIS DOCUMENT MAKES ANY REPRESENTATION OR WARRANTY, NOR SHALL ANY WARRANTY BE IMPLIED, AS TO THE COMPLETENESS, ACCURACY, OR USEFULNESS OF THE INFORMATION CONTAINED IN THIS DOCUMENT, NOR DO THEY ASSUME ANY RESPONSIBILITY FOR LIABILITY OR DAMAGE OF ANY KIND WHICH MAY RESULT FROM THE USE OF SUCH INFORMATION.

Destination Control Statement

All technical data contained in the documentation are subject to the export control laws of the United States of America. Disclosure to nationals of other countries may violate such laws. It is the reader's responsibility to determine the applicable regulations and to comply with them.

United States Government Rights

This software and accompanying documentation constitute “commercial computer software” and “commercial computer software documentation” as such terms are used in 48 CFR 12.212 (Sept 1995). United States Government end users acquire the Software under the following terms: (i) for acquisition by or on behalf of civilian agencies, consistent with the policy set forth in 48 CFR 12.212 (Sept 1995); or (ii) for acquisition by or on behalf of units of the Department of Defense, consistent with the policies set forth in 48 CFR 227.7202-1 (June 1995) and 227.7202-3 (June 1995). The manufacturer is Pharsight Corporation, 1699 South Hanley Road, Suite 200, St. Louis, MO 63144.

Trademarks

AutoPilot, Drug Model Explorer (DMX), Pharsight Knowledgebase Server (PKS), PKS Reporter, Pharsight, Phoenix, Phoenix Connect, Phoenix NLME, Phoenix WinNonlin, IVIVC Toolkit, Trial Simulator, WinNonlin, and WinNonMix are trademarks or registered trademarks of Certara, L.P. and are licensed to Pharsight Corporation as provided above. NONMEM is a registered trademark of ICON Development Solutions. SPLUS is a registered trademark of Insightful Corporation. SAS and all other SAS Institute Inc. product or service names are registered trademarks or trademarks of SAS Institute Inc. in the USA and other countries. Sentinel RMS is a trademark of SafeNet, Inc. Microsoft, MS, the Internet Explorer logo, MS-DOS, the Office logo, Microsoft Word, Microsoft Excel, Microsoft PowerPoint, Windows, Windows 2000, Windows XP, Windows Vista, the Windows logo, the Windows Start logo, and the XL design (the Microsoft Excel logo) are trademarks or registered trademarks of Microsoft Corporation. Pentium and Pentium III are trademarks or registered trademarks of Intel Corporation. Adobe, Acrobat, Acrobat Reader, and the Adobe PDF logo are registered trademarks of Adobe Systems Incorporated. All other brand or product names mentioned in this documentation are trademarks or registered trademarks of their respective companies or organizations.

Additional third party software acknowledgements

Software for Locally-Weighted Regression
The authors of this software are Cleveland, Grosse, and Shyu. Copyright © 1989, 1992 by AT&T. Permission to use, copy, modify, and distribute this software for any purpose without fee is hereby granted, provided that this entire notice is included in all copies of any software which is or includes a copy or modification of this software and in all copies of the supporting documentation for such software. This software is being provided “as is”, without any express or implied warranty. In particular, neither the authors nor AT&T make any representation or warranty of any kind concerning the merchantability of this software or its fitness for any particular purpose.

LAPACK
Copyright © 1992-2007 The University of Tennessee. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer listed in this license in the documentation and/or other materials provided with the distribution. Neither the name of the copyright holders nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. This software is provided by the copyright holders and contributors “as is” and any express or implied warranties, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose are disclaimed. In no event shall the copyright owner or contributors be liable for any direct, indirect, incidental, special, exemplary, or consequential damages (including, but not limited to, procurement of substitute goods or services; loss of use, data, or profits; or business interruption) however caused and on any theory of liability, whether in contract, strict liability, or tort (including negligence or otherwise) arising in any way out of the use of this software, even if advised of the possibility of such damage.

NLog
Copyright © 2004-2006 Jaroslaw Kowalski. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. Neither the name of Jaroslaw Kowalski nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. This software is provided by the copyright holders and contributors “as is” and any express or implied warranties, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose are disclaimed. In no event shall the copyright owner or contributors be liable for any direct, indirect, incidental, special, exemplary, or consequential damages (including, but not limited to, procurement of substitute goods or services; loss of use, data, or profits; or business interruption) however caused and on any theory of liability, whether in contract, strict liability, or tort (including negligence or otherwise) arising in any way out of the use of this software, even if advised of the possibility of such damage.

Pharsight Corporation 1699 S. Hanley Road, St. Louis, MO 63144 USA Telephone: +1-919-859-6868 • Fax: +1-919-859-6871 www.pharsight.com • [email protected]

Contents

Chapter 1

Analyzing Multiple Profiles . . . 1
    Preparing the data . . . 1
    Reviewing profile plots . . . 2
    Noncompartmental analysis . . . 5
        NCA model variables . . . 5
        Dosing regimen . . . 6
        Model options . . . 9
    Results . . . 9
    Summarizing the output . . . 11
    Exporting results to Microsoft Word . . . 14

Chapter 2

Plots . . . 17
    Error bar plots . . . 17
        Set up the data . . . 17
        Descriptive statistics . . . 19
        Plot the mean +/- standard deviation using relative error bars . . . 21
        Plot the median, minimum and maximum using absolute error bars . . . 23
    Overlay multiple plots . . . 25
        Overlaying variables from the same data set . . . 26
        Overlaying variables from multiple data sets . . . 28

Chapter 3

Noncompartmental Analysis . . . 33
    A noncompartmental analysis of three profiles . . . 33
        The data . . . 33
        The model . . . 35
        Results . . . 41
        Descriptive statistics . . . 45
    Noncompartmental analysis with exclusions, computing partial areas . . . 46
        Model settings . . . 47
        Results . . . 49
    Additional NCA examples . . . 51
        NCA_PD.pmo . . . 52
        SparseSamplingChaioYeh.pmo . . . 53
        Urine.pmo . . . 53

Chapter 4

Workflows and Templates . . . 55
    Create a workflow . . . 57
    Create the formulation data set for the bioequivalence model . . . 68
    Create and add a template . . . 71

Chapter 5

Pharmacokinetic Modeling . . . 79
    Exploring the data . . . 79
        Plot the time and concentration data . . . 80
    Set up the model . . . 81
        Dosing regimen . . . 81
        Initial parameter estimates . . . 83
    Run the model and view the results . . . 84
        Saving the project and the results . . . 89

Chapter 6

The Phoenix Toolbox . . . 95
    Semicompartmental modeling . . . 95
        Set up semicompartmental modeling . . . 98
        Output . . . 99
        Pharmacodynamic modeling . . . 102
        Results . . . 104
    Nonparametric superposition . . . 105
        Results . . . 106
        Output for effect-site concentrations . . . 108
        Steady-state effect computation . . . 108
    Crossover design . . . 112
        Data stacked in one column . . . 112
        Data in separate columns . . . 114
    Deconvolution . . . 116
        Absolute bioavailability . . . 117
        Dissolution . . . 119


Chapter 7

Linear Mixed Effects Modeling . . . 123
    Comparing treatment groups . . . 123
        The model . . . 123
        Results . . . 126
    An illustration of variance structures . . . 127
        The model . . . 127
        Results . . . 129
        Re-execute the model with new data . . . 130

Chapter 8

The IVIVC Workflow . . . 133
    Setting up the data . . . 133
    Selecting and smoothing the dissolution data . . . 135
    Fitting the unit impulse response and estimating absorption . . . 138
    Developing and validating the IVIVC model . . . 140
    Predicting PK . . . 141

Chapter 9

Tables . . . 145
    Final Parameters table . . . 145
        Table Type 3 . . . 145
        Select and format summary statistics . . . 148
    Joining raw data and modeling output . . . 150
        Recreating WinNonlin's table template 9 in Phoenix . . . 151
        Summary statistics . . . 154
    Using custom tables . . . 156

Chapter 10

Simulation and Study Design . . . 159
    Using Phoenix as an aid in designing experiments . . . 159
        Comparison of two designs . . . 159
        The data set . . . 160
        Insert and map the PK model . . . 161
        Enter the dosing data . . . 162
        Model parameters and simulation . . . 163
        Enter the initial estimates . . . 163
        Results . . . 164
        Designing the sampling plan . . . 164

Chapter 11

Bioequivalence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .167


    Average bioequivalence . . . 167
        Calculating average bioequivalence . . . 167
        Results . . . 170
    A replicated crossover design . . . 170
        Calculating average bioequivalence . . . 171
        Results . . . 173
    Individual and population bioequivalence . . . 174
        The population/individual model . . . 175
        The model . . . 175
        Results . . . 177
        Comparing average bioequivalence . . . 177

Chapter 12

Transformations . . . 181
    Computing ratios . . . 181
        Create the project and import the data . . . 181
        Merge the two data sets . . . 182
        Calculate F (fraction of oral dose absorbed) . . . 184
        Calculate descriptive statistics . . . 185
    Creating a baseline-adjusted variable . . . 186
        Import the data set . . . 186
        Compute the change from baseline using a column transform . . . 186

Chapter 13

Modeling Examples . . . 189
    Load, view, and run the example models . . . 189
    Pharmacokinetic model . . . 190
    Pharmacokinetic model with multiple doses . . . 190
    Probit analysis: maximum likelihood estimation of potency . . . 190
    Logit regression (bioassay) . . . 191
    Survival analysis . . . 192
    System of two differential equations with data for both compartments . . . 193
    System of two differential equations with data on one compartment . . . 193
    Multiple linear regression . . . 193
    Cumulative areas under the curve . . . 194
    Mitscherlich nonlinear model . . . 194
    Four parameter logistic model . . . 195
    Linear regression . . . 195
    Indirect response model . . . 195
    Ke0 link model . . . 196
    Pharmacokinetic/pharmacodynamic link model . . . 196


Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197


Chapter 1

Analyzing Multiple Profiles

A start-to-finish example using noncompartmental analysis

This example demonstrates the general steps to summarize a data set using noncompartmental analysis. The data set contains time-concentration profiles from a two-period crossover study with six subjects. See Chapter 3 on page 33 for additional examples of noncompartmental analysis.

Preparing the data

Data for noncompartmental analyses can include one or more sort variables. Sort variables have discrete values that identify time-concentration profiles to be analyzed individually.

Input data sets should be stacked (long and skinny) rather than unstacked (short and wide). Stacking simply means moving information stored in column headings into the rows. For example, matrix data such as plasma or urine can be placed in one row, and all associated data are arranged in rows beside the matrix data. This means that all measurements appear in a single column, with one or more additional columns flagging which data belong to which matrix. The data for one matrix must be listed first, then all the data for the other matrix. For noncompartmental analysis data, this means that time (the independent variable) and concentration (the dependent variable) data for all individuals should occupy only one column each, with one or more additional columns (sort variables) used to identify individual profiles.

The study data for this example are contained in Profiles.CSV, which is located in the Phoenix examples directory. This crossover study includes two sort variables: Subject (subject identifiers) and Form (formulation). There are six subjects, each of whom was tested with two formulations, for a total of twelve profiles.
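If a study arrives in unstacked (wide) form, with one concentration column per subject, it can be stacked before import using any data tool. The following is a minimal sketch in Python with pandas; the wide-format column names and values are illustrative and are not taken from Profiles.CSV.

    # Stack a wide data set (one concentration column per subject) into the
    # long form Phoenix expects: one Time column, one Conc column, and a
    # Subject column identifying each profile.
    import pandas as pd

    wide = pd.DataFrame({
        "Time":      [0, 1, 2, 4],            # hr
        "Subject_1": [0.0, 8.2, 6.1, 3.0],    # ng/mL
        "Subject_2": [0.0, 7.5, 5.9, 2.8],
    })

    long = wide.melt(id_vars="Time", var_name="Subject", value_name="Conc")
    long["Subject"] = long["Subject"].str.replace("Subject_", "")
    long = long.sort_values(["Subject", "Time"]).reset_index(drop=True)
    print(long)   # one row per observation, profiles identified by Subject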


Start Phoenix and create a new project:

1. In the Windows Start menu, select All Programs > Pharsight > Phoenix > Phoenix to start Phoenix.
2. Select File > New Project to create a new project. A new project is created in the Object Browser.
3. Name the new project Multiple Profiles.

Import the data:

1. Select File > Import or click the Import button. The Open File(s) dialog is displayed.
2. Navigate to the Phoenix examples subdirectory, which by default is located at C:\Program Files\Pharsight\Phoenix\application\Examples.
3. Select Profiles.CSV and click Open. The Worksheet Import Options dialog is displayed. The dialog is used to assign options for how the data are imported and presented.
4. Select the Has units row check box.
5. Click Finish. The data set is added to the project's Data folder. A data set in CSV (Comma Separated Values) format is added to the Data folder as a worksheet.
6. View the data set by selecting it in the Data folder. The worksheet is displayed in the Grid tab, which is located in the right viewing panel.

Note: To view the worksheet in its own window, select the worksheet and double-click or press ENTER. The data set is displayed in a worksheet window.

Reviewing profile plots

Before analyzing the data, examine a plot of each profile to confirm the model and scan for outlying data points.

Insert the XY Plot object:

1. Select the workflow object in the Object Browser and then select Insert > Plotting > XY Plot.

Note: The XY Plot object can also be added by right-clicking the workflow object and selecting New > Plotting > XY Plot. Any object can be added by selecting New in the workflow object menu.

The XY Plot object is added to the workflow in the Object Browser.
   » Objects automatically open in the right viewing panel when they are inserted into a workflow.
   » Each object's default view is the Setup tab, which contains all the steps necessary to set up an object.
   » To view the plot in its own window, double-click the XY Plot object in the Object Browser or select the plot object and press ENTER. The XY Plot window is displayed.
   » The same set of instructions can be used to set up an operational object if it is displayed in the right viewing panel or in its own window.

2. Map the data set Profiles as the input source for the XY Plot object:
   • Use the pointer to drag the Profiles worksheet from the Data folder to the XY Data Mappings panel.
   OR
   • In the XY Plot XY Data Mappings panel click the Select source button to open the Select Object dialog.
   • Click the (+) signs beside Multiple Profiles > Data to expand the menu tree.
   • Select the Profiles worksheet and click Select.
   The Profiles data set is mapped to the XY Plot.

3. Use the option buttons in the XY Data Mappings panel to map the data types to the following contexts:
   • Map Subject to the Group context.
   • Map Form to the Lattice Conditions Page (Sort) context.
   • Map Time to the X context.
   • Map Conc to the Y context.

Set plot options:

The plot display options are located in the XY Plot's Options tab. Expand items in the Options menu tree by clicking the (+) signs.
   • Select Plot > Title. In the Title field type Plotting Multiple Profiles.
   • Accept all other default entries for the XY Plot options.

Execute the plot:

1. Click the Execute button. The Results are displayed on the Results tab.
2. If the XY Plot is opened in its own window, close the window. Return to the window at any time by selecting the XY Plot object in the Object Browser and double-clicking or pressing ENTER.

XY Plot (Form = Capsule)

Noncompartmental analysis

The noncompartmental analysis (NCA) plasma model 200 (extravascular dosing) is suitable for this data. All subjects had a dose of 100 ng at time 0 for each formulation. All profiles use uniform weighting, and allow Phoenix to select the terminal elimination phase. The linear trapezoidal method with linear interpolation is used to compute the areas under the curve.
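The linear trapezoidal method approximates the area under the curve by summing the trapezoids formed by successive observations, and linear interpolation supplies a concentration at any time that falls between observations (used, for example, when partial areas are requested). The sketch below illustrates the arithmetic only; it is not Phoenix's implementation, and the time and concentration values are made up.

    # Linear trapezoidal AUC for one profile, plus linear interpolation of a
    # concentration at an intermediate time point.
    def auc_linear_trapezoidal(times, concs):
        """Area under the curve from the first to the last observation."""
        return sum(
            0.5 * (concs[i] + concs[i + 1]) * (times[i + 1] - times[i])
            for i in range(len(times) - 1)
        )

    def interpolate_linear(times, concs, t):
        """Concentration at time t, linearly interpolated between observations."""
        for i in range(len(times) - 1):
            if times[i] <= t <= times[i + 1]:
                frac = (t - times[i]) / (times[i + 1] - times[i])
                return concs[i] + frac * (concs[i + 1] - concs[i])
        raise ValueError("t is outside the observed time range")

    times = [0, 0.5, 1, 2, 4, 8, 12]              # hr
    concs = [0.0, 4.2, 6.8, 5.9, 3.6, 1.4, 0.5]   # ng/mL
    print(auc_linear_trapezoidal(times, concs))   # total area
    print(interpolate_linear(times, concs, 3.0))  # concentration at t = 3 hr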

Insert the NCA object:

1. Select the workflow object in the Object Browser and then select Insert > NCA and Toolbox > NCA. The NCA object is added to the workflow object in the Object Browser.

Note: To view the object in its own window, double-click the NCA object or select the NCA object and press ENTER. The NCA window is displayed.

2. Map the data set Profiles as the input source for the NCA object:
   • Use the pointer to drag the Profiles worksheet from the Data folder to the Main Mappings panel.
   OR
   • In the NCA Main Mappings panel click the Select source button to open the Select Object dialog.
   • Click the (+) signs beside Multiple Profiles > Data to expand the menu tree.
   • Select the Profiles worksheet and click Select.
   The Profiles data set is mapped to the NCA object.

NCA model variables

Map the model variables:

Use the option buttons in the Main Mappings panel to map the data types to the following contexts:
   • Map Subject to the Sort context.
   • Map Form to the Sort context.
   • Map Time to the Time context.
   • Map Conc to the Concentration context.

Dosing regimen

In this example one dose of 100 ng was administered at time 0 for each subject and formulation.
   • Dosing options are located in the Dose Options area in the Options tab.
   • Extravascular is selected by default in the Type menu. Do not change this setting.

Enter the dosing data:

There are two ways to enter dosing data: Enter the dosing data manually or Create a dosing worksheet.

Enter the dosing data manually

1. Select Dosing in the NCA object's Setup list. The Dosing panel is displayed.
2. Select the Use internal Worksheet check box. The Dosing sorts dialog is displayed. The Dosing sorts dialog prompts the user to select the sort variables to use to create the internal dosing worksheet.
3. Click OK to accept the default sort variables.
4. In the first cell under Dose, type 100.
5. In the first cell under Time of Dose, type 0.
6. Do not enter any values in the Tau column.
7. Use the pointer to select the first cells under Dose and Time of Dose. The selected cells are highlighted.
8. Place the pointer over the black square on the lower right side of the selection. When the pointer changes shape, the drag and fill feature can be used.
9. Press the left mouse button and drag the selection down to fill the Dose and Time of Dose cells beside each subject and formulation.
10. In the Dose Options area in the Options tab, type ng in the Unit field.
11. Go to Model options on page 9.

Create a dosing worksheet

1. Right-click the Data folder in the Object Browser and select New > Worksheet.
2. Name the new worksheet NCA Dosing Data. The new worksheet is automatically displayed in the Grid tab.

Note: To view the worksheet in its own window, select the worksheet and double-click or press ENTER. The data set is displayed in a worksheet window.

The Columns tab is located underneath the Grid tab. The Columns tab is used to add columns to a worksheet.

3. Click the Add button underneath the Columns box. The New Column Properties dialog is displayed.
   » Use the New Column Properties dialog to define the data type and the name of a new column.
   » The Numeric option button is selected by default. Do not change this setting.
4. In the Column Name field type Subject and click OK.
   » A new column is displayed in the Columns box and in the Grid tab. Single-click a column header in the Columns box to rename it.
5. In the first cell under Subject, type 1 for subject 1 and press ENTER. Repeat for subjects 2 through 6.
6. Click the Add button underneath the Columns box.
7. In the Column Name field type Dose and click OK.
8. In the Unit field for the Dose column type ng.
9. In the first cell under Dose, type 100.
10. Add a final Numeric column and name it Time_of_Dose.

Note: Newly created columns do not support empty spaces in the column names. Phoenix can import column names with spaces, but it does not allow users to create column names with spaces.

11. In the first cell under Time_of_Dose, type 0.
12. Use the drag and fill feature (see step 8 under Enter the dosing data manually) to fill the rest of the dosing data worksheet by highlighting the first two cells underneath Dose and Time_of_Dose and dragging the selection down.
13. The finished worksheet looks like this:
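Each subject has a single 100 ng dose at time 0, so the worksheet holds one row per subject:

    Subject    Dose (ng)    Time_of_Dose
    1          100          0
    2          100          0
    3          100          0
    4          100          0
    5          100          0
    6          100          0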

14. If the NCA Dosing Data worksheet is opened in its own window, close the window. Return to the window at any time by selecting the worksheet in the Data folder and double-clicking or pressing ENTER.

Map the NCA Dosing Data worksheet to the Dosing panel:

1. Select the NCA object in the Object Browser.
2. Select Dosing in the Setup list.
3. Map the NCA Dosing Data worksheet to the Dosing panel in one of two ways:
   • Use the pointer to drag the NCA Dosing Data worksheet from the Data folder to the Dosing panel.
   • Click the Select source button in the Dosing panel to select the worksheet and map it to the Dosing panel.
4. Use the option buttons in the Dosing panel to map Subject to Sort, Dose to Dose, and Time_of_Dose to Time of Dose.

CAUTION: Mapping a worksheet to the Dosing panel overrides the Unit settings in the Dose Options area. If a worksheet is mapped to the Dosing panel make sure that the appropriate units are added to the Dose column in the worksheet.

Model options

Use the Options tab to specify settings for the NCA model options. The Options tab is located underneath the Setup tab.

Model options:

1. The default setting for Model Type is Plasma (200-202). Do not change this setting.

Note: The exact plasma model type (200, 201, or 202) is determined by the dose type.

2. The default setting for Calculation Method is Linear Trapezoidal Linear Interpolation. Do not change this setting.
3. In the Titles field type Processing Multiple Profiles with Model 200.

At this point all of the necessary mappings and options have been specified.

Run the analysis:

1. Click the Execute button. The results are displayed on the Results tab.
2. If the NCA object is opened in its own window, close the window. Return to the window at any time by selecting the NCA object in the Object Browser and double-clicking or pressing ENTER.

Results

The Text Output, the Output Data worksheets, and the Observed Y and Predicted Y vs X plots are located on the Results page.

The worksheet output for noncompartmental analysis includes the following worksheets: Dosing Used, Exclusions, Final Parameters, Final Parameters Pivoted, Partial Areas, Plot Titles, Slopes Settings, and Summary Table.

The Final Parameters worksheets are presented in two formats, each using a different data layout. The Final Parameters Pivoted worksheet puts each output parameter in a separate column. The Final Parameters worksheet presents all parameter estimates in one column, with another column used to identify the parameters.

Note that in each Results worksheet the sort variables Subject and Form are included as columns in the data grid, and the output is presented for each level of the sort variables. In addition, each output parameter with units has the units in the column header, except for the Final Parameters worksheet, which places units in their own column beside the Parameter column.

The Core output provides a summary of model settings and all output data included in the workbook output.

The plots show observed versus predicted data for each subject. Each plot is displayed on its own page.

Change the number of plots displayed per page:

1. Select the Observed Y and Predicted Y vs X plot in the Results tab and double-click it. The plot is opened in its own window.
2. Select Plot in the Options menu tree. The lattice controls are located on the Content tab.
3. Clear the Bind Lattice to Data check box.
4. Click the up and down arrows in the Lattice Rows box to change the number of rows used to display plots.
5. Click the up and down arrows in the Lattice Columns box to change the number of columns used to display plots.
   » Phoenix can display a maximum of 15 latticed rows and 15 latticed columns.
   » Phoenix cannot display more than 200 charts per page.
   » The number of plots that can be displayed per page depends on the monitor size and resolution.
   » If too many plots are placed on one page, the axes labels, legends, and other plot information can be difficult to read.
6. Close the Observed Y and Predicted Y vs X window. Return to the window at any time by selecting the plot in the Results tab and double-clicking it.

Summarizing the output

Phoenix's Descriptive Stats object is used to summarize several of the output parameters in the Final Parameters Pivoted worksheet. The Descriptive Stats object generates separate statistics for each formulation.

Summarize the Final Parameters results:

1. Select the workflow in the Object Browser and then select Insert > NCA and Toolbox > Descriptive Stats. The Descriptive Stats object is added to the workflow in the Object Browser.
2. Map the NCA Final Parameters Pivoted worksheet as the input source for the Descriptive Stats object:
   • In the Descriptive Stats Main Mappings panel click the Select Source button to open the Select Object dialog.
   • Select the Final Parameters Pivoted worksheet and click Select.
   OR
   • Select the workflow. The workflow's Diagram tab is displayed in the right viewing panel.

Note: To view a workflow in its own window, double-click the workflow or select the workflow and press ENTER. The workflow window is displayed.

   Each operational object in a workflow is represented in the Diagram tab.
   • Click the chevron buttons to expand the NCA and Descriptive Stats symbols.
   Each object symbol contains a complete list of all input and output sources.
   • Click the (+) symbol beside the NCA Results.
   • Click the (+) symbol beside the Descriptive Stats Inputs.
   • Drag the NCA Final Parameters Pivoted worksheet to the Descriptive Stats Main input. The Final Parameters Pivoted worksheet is mapped to the Descriptive Stats object. A line is displayed that represents the mapping between the NCA and Descriptive Stats objects.

Diagram tab mappings

3. Select the Descriptive Stats object in the Object Browser.
4. Use the option buttons in the Main Mappings panel to map the data types to the following contexts:
   • Map Form to the Sort context.
   • Map Tmax, Cmax, and AUCall to the Summary context.
   • Leave all other data types mapped to None.

Set the Descriptive Stats options and execute the object:

Descriptive Stats options are accessible in the Options tab, which is located underneath the Setup tab.

1. Select the Confidence Interval check box. The default setting for the Confidence Interval is 95%. Do not change this setting.
2. Select the Number of SD check box. The default setting for the number of standard deviations is 1. Do not change this setting.
3. Click the Execute button. The Results are displayed on the Results tab.

This example summarizes AUCall, the area under the curve through the last measured value, Cmax, the maximal concentration of drug in the blood, and Tmax, the time at maximal concentration. A portion of the output is shown below. Descriptive Stats Statistics output:


4. If the Descriptive Stats object is opened in its own window, close the window.
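The three summarized parameters can also be checked directly from a single profile's raw data. Below is a minimal NumPy sketch using made-up values and the textbook definitions given above; Phoenix's NCA engine applies additional rules, so results on real data may differ in detail.

    # Cmax, Tmax, and AUCall (linear trapezoidal area through the last
    # observation) for one illustrative profile.
    import numpy as np

    time = np.array([0, 0.5, 1, 2, 4, 8, 12])             # hr
    conc = np.array([0.0, 4.2, 6.8, 5.9, 3.6, 1.4, 0.5])  # ng/mL

    cmax = conc.max()               # maximal observed concentration
    tmax = time[conc.argmax()]      # time of the maximal concentration
    auc_all = np.trapz(conc, time)  # area through the last observation

    print(f"Cmax = {cmax} ng/mL, Tmax = {tmax} hr, AUCall = {auc_all:.2f} hr*ng/mL")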

Exporting results to Microsoft Word

The results of any operational object can be exported to a Microsoft Word document. This example shows how to format plot output and export it to Microsoft Word. By default plots are exported at a resolution of 1024 by 768 pixels.

1. Select File > Word Export. The Word Export dialog is displayed.
2. Clear the Multiple Profiles check box to deselect all objects in the project.

Note: Expand items in the Word Export menu tree by clicking the (+) signs.

3. Click the (+) signs beside Workflow > NCA > Results to expand the menu tree.
4. Select the Observed Y and Predicted Y vs X check box.
5. Select the Summary Table check box.
6. Click the Options button.
7. Select the Landscape option button in the Orientation area in the Document tab.
8. Clear the Add source line to objects check box.
9. Click Finished.
10. Click the Export button. Phoenix creates a new Microsoft Word document and exports the selected objects into the document.
11. Save the Word file and exit Microsoft Word.

Note: It is not necessary to keep a project open after completing each chapter. This project is not required when working in the next chapter. To close a project right-click the project and select Close Project.


Chapter 2

Plots

Creating plots, using error bars on XY plots, and plotting multiple columns on a single plot

This chapter provides an overview of Phoenix's plotting capabilities through two types of plot examples:
   » Error bar plots below demonstrates the use of absolute and relative error bars on XY plots.
   » Overlay multiple plots on page 25 plots multiple variables per plot.

Error bar plots

This example uses summary statistics and error bars to create two XY plots:
   » Plot the mean +/- standard deviation using relative error bars on page 21.
   » Plot the median, minimum and maximum using absolute error bars on page 23.

Set up the data

Create a new project:

1. Select File > New Project to create a new project. A new project is created in the Object Browser.
2. Name the new project Plots.

Import the data set:

1. Select File > Import or click the Import button. The Open File(s) dialog is displayed.
2. Navigate to the Phoenix examples subdirectory, which by default is located at C:\Program Files\Pharsight\Phoenix\application\Examples.
3. Select Bguide2.dat and click Open. The Worksheet Import Options dialog is displayed. The dialog is used to assign options for how the data are imported and presented.
4. Click Finish. The data set is added to the project's Data folder.
5. View the data set by selecting it in the Data folder. The worksheet is displayed in the Grid tab, which is located in the right viewing panel.

Note: To view the worksheet in its own window, select the worksheet and double-click or press ENTER. The data set is displayed in a worksheet window.

Use the Units Builder to add units to a column:

Units must be added to the time and concentration columns before the data set can be used to create plots.

1. Select Bguide2 in the Data folder. The worksheet is displayed in the Grid tab. The Columns tab is located underneath the Grid tab. The Columns tab is used to edit columns in a worksheet.
2. Select the Time column header in the Columns box.
3. Click the Unit Builder button. The Units Builder dialog is displayed.
4. Select hour [hr] in the Time menu.
5. Click the Add button beside the Time menu. After clicking the Add button the selected units are displayed in the New Units field.

Note: Units can also be typed directly into the New Units field.

6. Click OK to assign the units to the column.
7. Select the Conc column header in the Columns box.
8. Click the Unit Builder button.
9. Select nano [n] in the Mass prefix menu.
10. Select gram [g] in the Mass unit menu.
11. Click the Add button beside the Mass menus.
12. Click the / operator button in the Add operator area.
13. Select milli [m] in the Volume prefix menu.
14. Select liter [L] in the Volume unit menu.
15. Click the Add button beside the Volume menus. Click OK.

Note: Units added to ASCII data sets can be preserved when the data sets are exported to disk or a database. Phoenix adds the units to a row beneath the column headers. When importing a .dat or .csv file with units, select the Has units row check box in the File Options area in the Worksheet Import Options dialog.

16. If the Bguide2 worksheet is opened in its own window, close the window. Return to the window at any time by selecting the Bguide2 worksheet in the Data folder and double-clicking or pressing ENTER.

Descriptive statistics

To create error data for the error bars, this example computes means and standard deviations for the concentration data at each time point.
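The same kind of summary can be reproduced outside Phoenix by grouping on the time column and computing the mean, the standard deviation, and a t-based 95% confidence interval (mean plus or minus t(0.975, n-1) times SD divided by the square root of n). The small data set in the sketch below stands in for the Bguide2 worksheet, and Phoenix's exact confidence-interval computation may differ in detail.

    # Mean, SD, and a t-based 95% confidence interval of Conc at each Time.
    import pandas as pd
    from scipy import stats

    data = pd.DataFrame({
        "Subject": [1, 2, 3, 1, 2, 3],
        "Time":    [1, 1, 1, 2, 2, 2],              # hr
        "Conc":    [5.1, 4.7, 5.6, 3.9, 3.5, 4.2],  # ng/mL
    })

    summary = data.groupby("Time")["Conc"].agg(N="count", Mean="mean", SD="std")
    half_width = stats.t.ppf(0.975, summary["N"] - 1) * summary["SD"] / summary["N"] ** 0.5
    summary["CI 95% Lower Mean"] = summary["Mean"] - half_width
    summary["CI 95% Upper Mean"] = summary["Mean"] + half_width
    print(summary)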


Compute summary statistics:

1. Select the workflow object in the Object Browser and then select Insert > NCA and Toolbox > Descriptive Stats.

Note: The Descriptive Stats object can also be added by right-clicking the workflow and selecting New > NCA and Toolbox > Descriptive Stats. Any object can be added by selecting New in the workflow menu.

The Descriptive Stats object is added to the workflow in the Object Browser.
   » Objects automatically open in the right viewing panel when they are inserted into a workflow.
   » Each object's default view is the Setup tab, which contains all the steps necessary to set up an object.
   » To view the object in its own window, double-click the Descriptive Stats object or select it and press ENTER. The Descriptive Stats window is displayed.
   » The same set of instructions can be used to set up an object if it is displayed in the right viewing panel or in its own window.

2. Map the data set Bguide2 as the input source for the Descriptive Stats object:
   • Use the pointer to drag the Bguide2 worksheet from the Data folder to the Main Mappings panel.
   OR
   • In the Descriptive Stats Main Mappings panel click the Select source button to open the Select Source dialog.
   • Select Bguide2 and click OK.
   The Bguide2 data set is mapped to the Descriptive Stats object.

3. Use the option buttons in the Main Mappings panel to map the data types to the following contexts:
   • Leave Sex mapped to None.
   • Leave Subject mapped to None.
   • Map Time to the Sort context.
   • Map Conc to the Summary context.

Descriptive Stats options are accessible in the Options tab, which is located underneath the Setup tab.

4. Select the Confidence Interval check box. The default setting for the Confidence Interval is 95%. Do not change this setting.
5. Select the Number of SD check box. The default setting for the number of standard deviations is 1. Do not change this setting.
6. Click the Execute button. The results are displayed on the Results tab.
7. If the Descriptive Stats object is opened in its own window, close the window. Return to the window at any time by selecting the Descriptive Stats object in the Object Browser and double-clicking or pressing ENTER.

Plot the mean +/- standard deviation using relative error bars

1. Select the workflow object in the Object Browser and then select Insert > Plotting > XY Plot. The XY Plot object is added to the workflow in the Object Browser.
2. Map the Descriptive Stats Statistics worksheet as the input source for the XY Plot object:
   • In the XY Plot XY Data Mappings panel click the Select Source button to open the Select Source dialog.
   • Select the Descriptive Stats Statistics worksheet and click OK.
   OR
   • Select the workflow. The workflow Diagram tab is displayed in the right viewing panel. Each operational object in a workflow is represented in the Diagram tab.
   • Click the chevron buttons to expand the Descriptive Stats and XY Plot objects.
   Each operational object in the Diagram tab contains a complete list of all input and output sources.
   • Click the (+) symbol beside the Descriptive Stats Results.
   • Click the (+) symbol beside the XY Plot Inputs.
   • Drag the Descriptive Stats Statistics worksheet to the XY Plot's XY Data input. The Statistics worksheet is mapped to the XY Plot object. A line is displayed that represents the mapping between the Descriptive Stats and XY Plot objects.

Diagram tab mappings

Note: To view the workflow in its own window, double-click the workflow or select the workflow and press ENTER. The workflow window is displayed.

3. Select the XY Plot object in the Object Browser.
4. Use the option buttons in the XY Data Mappings panel to map the data types to the following contexts:
   • Map Time to the X context.
   • Map Mean to the Y context.
   • Map SD to the Lower and Upper Error Bars.
   • Leave all other data types mapped to None.

Set plot options:

The plot display options are located in the XY Plot's Options tab.

1. Select Plot > Title. In the Title field type: Mean +/- Standard Deviation.
2. Select Graphs > Mean vs Time > Error Bars.
3. User is selected by default in the Error Calculation Type menu. Do not change this setting.
4. Relative is selected by default in the User Calculation Type menu. Do not change this setting. Using the Relative User Calculation Type causes Phoenix to add and subtract the errors from the mean.
5. Click the Execute button. The results are displayed on the Results tab.

Mean +/- Standard Deviation plot
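Outside Phoenix, the relative behavior corresponds to symmetric error bars whose half-length (here the SD) is added to and subtracted from the mean. A minimal matplotlib sketch with illustrative values standing in for the Statistics worksheet:

    # Relative error bars: the SD is added to and subtracted from the mean.
    import matplotlib.pyplot as plt

    time = [0.5, 1, 2, 4, 8, 12]             # hr
    mean = [4.1, 6.5, 5.8, 3.7, 1.5, 0.6]    # ng/mL
    sd   = [0.6, 0.9, 0.8, 0.5, 0.3, 0.2]

    plt.errorbar(time, mean, yerr=sd, fmt="o-", capsize=3)
    plt.xlabel("Time (hr)")
    plt.ylabel("Conc (ng/mL)")
    plt.title("Mean +/- Standard Deviation")
    plt.show()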

Plot the median, minimum and maximum using absolute error bars

1. Select the workflow in the Object Browser and then select Insert > Plotting > XY Plot. The XY Plot object is added to the workflow in the Object Browser.

Note: When multiple objects of the same type are added to a workflow they are numbered sequentially. For example, the second XY Plot object added to this workflow is called XY Plot 1.

2. Map the Descriptive Stats Statistics worksheet as the input source for the XY Plot 1 object:
   • In the XY Plot 1 XY Data Mappings panel click the Select Source button to open the Select Source dialog.
   • Select the Descriptive Stats Statistics worksheet and click OK.
   OR
   • Select the workflow. The workflow Diagram tab is displayed in the right viewing panel. Each operational object in a workflow is represented in the Diagram tab.
   • Click the chevron buttons to expand the Descriptive Stats and XY Plot 1 symbols.
   Each operational object in the Diagram tab contains a complete list of all input and output sources.
   • Click the (+) symbol beside the Descriptive Stats Results.
   • Click the (+) symbol beside the XY Plot 1 Inputs.
   • Drag the Descriptive Stats Statistics worksheet to the XY Plot 1 XY Data input. The Statistics worksheet is mapped to the XY Plot 1 object. A line is displayed that represents the mapping between the Descriptive Stats and XY Plot 1 objects.

3. Select the XY Plot 1 object in the Object Browser.
4. Use the option buttons in the XY Data Mappings panel to map the data types to the following contexts:
   • Map Time to the X context.
   • Map Median to the Y context.
   • Map Min to the Lower Error Bar.
   • Map Max to the Upper Error Bar.
   • Leave all other data types mapped to None.

Set plot options:

The plot display options are located in the XY Plot 1's Options tab.

1. Select Plot > Title. In the Title field type Minimum, Median, and Maximum Concentrations.
2. Select Graphs > Median vs Time > Error Bars.
3. In the User Calculation Type menu select Absolute. Using the Absolute User Calculation Type instructs Phoenix to plot the Min and Max values on the Y axis, rather than to add and subtract them from the Median.
4. Click the Execute button. The results are displayed on the Results tab.

Minimum, Median, and Maximum Concentrations plot
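In matplotlib terms, absolute error values correspond to asymmetric error bars whose ends sit at the minimum and maximum themselves, so the bar lengths are the distances from the median down to the minimum and up to the maximum (illustrative values again):

    # Absolute error bars: Min and Max become the bar ends, so the bar
    # lengths are (median - min) and (max - median).
    import matplotlib.pyplot as plt

    time   = [0.5, 1, 2, 4, 8, 12]            # hr
    median = [4.0, 6.4, 5.7, 3.6, 1.4, 0.5]   # ng/mL
    cmin   = [3.1, 5.2, 4.6, 2.9, 1.0, 0.3]
    cmax   = [5.0, 7.8, 6.9, 4.4, 1.9, 0.8]

    lower = [m - lo for m, lo in zip(median, cmin)]
    upper = [hi - m for m, hi in zip(median, cmax)]
    plt.errorbar(time, median, yerr=[lower, upper], fmt="o-", capsize=3)
    plt.xlabel("Time (hr)")
    plt.ylabel("Conc (ng/mL)")
    plt.title("Minimum, Median, and Maximum Concentrations")
    plt.show()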

Overlay multiple plots

Overlaid plots can display data for more than one variable on one set of axes. They can be created from separate columns in the same data set, columns in different data sets, or both.
   • Overlaying variables from the same data set on page 26 overlays data from separate columns in the same workbook.
   • Overlaying variables from multiple data sets on page 28 overlays variables from different workbooks.

Note: It is also possible to create similar plots without using the overlay feature, if the data are stored in one column, by assigning a variable or parameter to the Group context.

Overlaying variables from the same data set

This example plots the summary statistics for concentration at each time interval.

Plot summary statistics for concentration at each time point:

1. Select the workflow in the Object Browser and then select Insert > Plotting > XY Plot.

Note: The XY Plot object can also be added by right-clicking the workflow and selecting New > Plotting > XY Plot.

The XY Plot object is added to the workflow in the Object Browser. The new XY Plot object is called XY Plot 2.

2. Map the Descriptive Stats Statistics worksheet as the input source for the XY Plot 2 object:
   • In the XY Plot 2 XY Data Mappings panel click the Select Source button to open the Select Source dialog.
   • Select the Descriptive Stats Statistics worksheet and click OK.
   OR
   • Select the workflow. The workflow Diagram tab is displayed in the right viewing panel. Each operational object in a workflow is represented in the Diagram tab.
   • Click the chevron buttons to expand the Descriptive Stats and XY Plot 2 symbols.
   Each operational object in the Diagram tab contains a complete list of all input and output sources.
   • Click the (+) symbol beside the Descriptive Stats Results.
   • Click the (+) symbol beside the XY Plot 2 Inputs.
   • Drag the Descriptive Stats Statistics worksheet to the XY Plot 2 XY Data input. The Statistics worksheet is mapped to the XY Plot 2 object. A line is displayed that represents the mapping between the Descriptive Stats and XY Plot 2 objects.

3. Select the XY Plot 2 object in the Object Browser.
4. Use the option buttons in the XY Data Mappings panel to map the data types to the following contexts:
   • Map Time to the X context.
   • Map CI 95% Lower Mean to the Y context.
   • Map CI 95% Upper Mean to the Y2 context.
   • Leave all other data types mapped to None.

Set plot options:

The plot display options are located in the XY Plot 2's Options tab. Expand items in the Options menu tree by clicking the (+) signs.

1. Select Plot > Title. In the Title field type Overlay Charts. Press ENTER to move to the next line and type: Example 1.
2. Select Axes > Y. Select the Label tab. In the Label field type Confidence Interval.
3. Click the Execute button. The results are displayed on the Results tab.

Overlaying variables from multiple data sets

This example will plot the Observed and Predicted concentrations from a model fitting by using the output from Bg1.pmo.

Import the data sets and the PK model:

1. Select File > Import or click the Import button. The Open File(s) dialog is displayed.
2. Navigate to the Phoenix legacy WinNonlin examples subdirectory, which by default is located at C:\Program Files\Pharsight\Phoenix\application\Examples\Legacy WinNonlin.
3. Select Bg1.pmo and click Open. The Data Import Wizard is displayed. The wizard is used to assign options for how the data are imported and presented.
4. Click Finish. The data set is added to the project's Data folder.

Note: PMO files can only be loaded on 32-bit operating systems or using the Phoenix32.exe on 64-bit operating systems.

A file in PMO (Pharsight Model Object) format is added to the Data folder as one or more workbook objects. A .pmo file also adds one or more operational objects to the workflow. The Bg1.pmo file adds:
   • A data set in workbook form (bg1).
   • Dosing worksheet (Bg1_sources).
   • A PK Model object named Bg1.

5. Click the (+) symbols beside bg1 and Bg1_sources in the Data folder to view the data sets' worksheets.

Run the PK Model:

1. Select the PK Model object Bg1 in the Object Browser. The PK Model's Setup tab is displayed in the right viewing panel.

CAUTION: Models saved in PMO format contain all the necessary data mappings and option settings. Do not change these settings.

2. Select items in the PK Model object's Setup tab list to examine the model's data mappings and option settings. The imported PK Model object uses PK Model 3, which is a one compartment model with 1st order absorption.
3. Click the Execute button. The results are displayed on the Results tab.

There are now two data sets available: the source data set bg1 and the PK Model’s Summary Table results. Both data sets are used to create the overlay plot.

Add the first XY Plot plot:

1. Select the workflow in the Object Browser and then select Insert > Plotting > XY Plot. The XY Plot object is added to the workflow in the Object Browser.

2. Map the data set bg1 as the input source for the XY Plot 3 object:

• Use the pointer to drag the bg1 Sheet1 worksheet from the Data folder to the XY Data Mappings panel.

OR

• In the XY Plot 3 XY Data Mappings panel click the Select Source button to open the Select Object dialog.

• Select the bg1 Sheet1 worksheet and click Select.

The bg1 data set is mapped to the XY Plot 3 object.

3. Use the option buttons in the XY Data Mappings panel to map the data types to the following contexts:

Leave Subject mapped to None.



Map Time to the X context.



Map Conc to the Y context.

Add the second XY Plot plot: 1. Select Plot in the Options tab menu tree. Select the Graphs tab. 2. Click the Add button. A second XY plot is added to the XY Plot 3 object.


A second XY Plot input named XY 1 Data is added to the Setup list. 3. Map the PK Model Summary Table worksheet as the input source for the second XY Plot: •

In the XY Plot 3 XY 1 Data Mappings panel click the Select Source button to open the Select Object dialog.



Select the Bg1 Summary Table worksheet and click Select. OR



Select the workflow. The workflow Diagram tab is displayed in the right viewing panel. Each operational object in a workflow is represented in the Diagram tab.



Click the chevron buttons to expand the Bg1 and XY Plot 3 symbols.

Each operational object in the Diagram tab contains a complete list of all input and output sources. •

Click the (+) symbol beside the Bg1 Results.



Click the (+) symbol beside the XY Plot 3 Inputs.



Drag the Bg1 Summary Table worksheet to the XY Plot 3 XY 1 Data input. The Summary Table worksheet is mapped to the second XY Plot. A line is displayed that represents the mapping between the Bg1 and XY Plot 3 objects.

4. Use the option buttons in the XY 1 Data Mappings panel to map the data types to the following contexts:




Map Time to the X context.



Map Predicted to the Y context.



Leave all other data types mapped to None.


Note: To delete a plot, select Plot in the Options menu tree and select the Graphs tab. Select the plot to be deleted and click Remove.

Set graph options: The graph display options are located in the XY Plot's Options tab. Expand items in the Options menu tree by clicking the (+) signs.

1. Select Plot > Title. In the Title field type: Overlay Charts. Press ENTER to move to the next line and type: Example using two data sets.

2. Select Graphs > Conc vs Time. Select the Appearance tab.

3. In the Appearance tab, use the Marker Color menu to change the marker color to red.

4. Click the Execute button. Both XY Plot graphs are displayed together in the Results tab.

Note: It is not necessary to keep a project open after completing each chapter. This project is not required when working in the next chapter. To close a project right-click the project and select Close Project.


Chapter 3

Noncompartmental Analysis
Computing areas, slopes, and moments

This chapter includes the following examples of noncompartmental analysis:

» A noncompartmental analysis of three profiles on page 33.
» Noncompartmental analysis with exclusions, computing partial areas on page 46.
» Additional NCA examples on page 51: NCA for sparse sampling and drug effect data.

A noncompartmental analysis of three profiles

Suppose a researcher has obtained time and concentration data following oral administration of a test compound to three subjects, and wants to perform noncompartmental analysis and summarize the results.

The data

Data for this example are located in the Phoenix examples directory, which by default is located at C:\Program Files\Pharsight\Phoenix\application\Examples.

Create a new project: 1. Select File > New Project to create a new project. A new project is created in the Object Browser. 2. Name the new project NCA.


Import the data set: 1. Select File > Import or click the Import button. The Open File(s) dialog is displayed.

2. Navigate to the Phoenix examples subdirectory, which by default is located at

C:\Program Files\Pharsight\Phoenix\application\Examples. 3. Select Bguide1.dat and click Open. The Worksheet Import Options dialog is displayed. The dialog is used to assign options for how the data are imported and presented. 4. Click Finish. The data set is added to the project’s Data folder. 5. View the data set by selecting it in the Data folder. The worksheet is displayed in the Grid tab, which is located in the right viewing panel.

Note: To view a worksheet in its own window, select the worksheet and double-click it or press ENTER. The worksheet is displayed in its own window.

Use the Units Builder to add units to a column: Units must be added to the time and concentration columns before the data set can be used in a noncompartmental analysis. 1. Select Bguide1 in the Data folder. The worksheet is displayed in the Grid tab in the right viewing panel. The Columns tab is located underneath the Grid tab. The Columns tab is used to edit columns in a worksheet. 2. Select the Time column header in the Columns box. 3. Click the Unit Builder button. The Units Builder dialog is displayed. 4. Select hour [hr] in the Time menu. 5. Click the Add button beside the Time menu. After clicking the Add button the selected units are displayed in the New Units field.

Note: Units can also be typed directly into the New Units field. 6. Click OK to assign the units to the column. 7. Select the Conc column header in the Columns box. 8. Click the Unit Builder button.


9. Select nano [n] in the Mass prefix menu. 10. Select gram [g] in the Mass unit menu. 11. Click the Add button beside the Mass menus. 12. Click the / operator button in the Add operator area. 13. Select milli [m] in the Volume prefix menu. 14. Select liter [L] in the Volume unit menu. 15. Click the Add button beside the Volume menus. Click OK.

Note: Units added to ASCII data sets can be preserved if the data sets are saved in .dat or .csv file formats. Phoenix adds the units to a row below the column headers. To import a .dat or .csv file with units, select the Has units row check box in the File Options area in the Worksheet Import Options dialog.

The model

Noncompartmental analysis for extravascular dosing is available as Model 200 in the Phoenix model library. Phoenix displays the model type (Plasma, Urine, or Drug Effect) in the Options tab of an NCA object.

Note: The exact model used is determined by the dose type. Extravascular Input uses Model 200, IV-Bolus Input uses Model 201, and Constant Infusion uses Model 202.

Insert the NCA object: 1. Select the workflow in the Object Browser and then select Insert > NCA and Toolbox > NCA. The NCA object is added to the workflow in the Object Browser. Objects automatically open in the right viewing panel when they are inserted in a workflow. Each object’s default view is the Setup tab, which contains all the steps necessary to set up an object. 2. Map the data set Bguide1 as the input source for the NCA object: •

Use the pointer to drag the Bguide1 worksheet from the Data folder to the NCA object’s Main Mappings panel. OR

• In the NCA Main Mappings panel click the Select Source button to open the Select Object dialog.

• Select Bguide1 and click Select.

The Bguide1 data set is mapped to the NCA object.

3. Use the option buttons in the Main Mappings panel to map the data types to the following contexts:

Map Subject to the Sort context.



Map Time to the Time context.



Map Conc to the Concentration context.

Dosing regimen

In this example one dose of 55 mg was administered at time 0. Dosing options are located in the Dose Options area in the Options tab.

• Extravascular is selected by default in the Type menu. Do not change this setting.

Enter the dosing data: There are two ways to enter dosing data: Enter the dosing data manually or Create a dosing worksheet.

Enter the dosing data manually

1. Select Dosing in the NCA object's Setup list. The Dosing panel is displayed. 2. Select the Use internal Worksheet check box. The Dosing sorts dialog is displayed. The Dosing sorts dialog prompts the user to select the sort variables to use to create the internal dosing worksheet.


Dosing sorts dialog

3. Click OK to accept the default sort variable. 4. In the first cell under Dose type 55. 5. In the first cell under Time of Dose type 0. 6. Do not enter any values in the Tau column. 7. Use the pointer to select the first cells under Dose and Time of Dose. The selected cells are highlighted. 8. Place the pointer over the black square on the lower right side of the selection. The pointer changes shape to signify that the drag and fill feature can be used. 9. Press the left mouse button and drag the selection down to fill the Dose and Time of Dose cells beside each subject. 10. In the Dose Options area in the Options tab, type mg in the Unit field.

Dose Options area

11. Go to Terminal elimination phase on page 39.

Create a dosing worksheet

1. Right-click the Data folder in the Object Browser and select New > Worksheet. 2. Name the new worksheet NCA Dosing Data.


The new worksheet is automatically displayed in the Grid tab. The Columns tab is located underneath the Grid tab. The Columns tab is used to add columns to a worksheet. 3. Click the Add button underneath the Columns box. The New Column Properties dialog is displayed. •

Use the New Column Properties dialog to define the data type and the name of a new column.

4. Select the Text option button. 5. In the Column Name field type Subject and click OK. •

A new column is displayed in the Columns box and in the Grid tab. Single-click a column header in the Columns box to rename it.

6. In the first cell under Subject, type DW for subject DW and press ENTER. Repeat for subjects GS and RH. 7. Click the Add button underneath the Columns box. 8. The Numeric option button is selected by default. Do not change this setting. 9. In the Column Name field type Dose and click OK. 10. In the Unit field for the Dose column type mg. 11. In the first cell under Dose type 55. 12. Add a final Numeric column and name it Time_of_Dose.

Note: Newly created columns do not support empty spaces in the column names. Phoenix can import column names with spaces, but it will not allow users to create column names with spaces. 13. In the first cell under Time_of_Dose type 0. 14. Use the drag and fill feature to fill the rest of the dosing data worksheet by highlighting the first two cells underneath Dose and Time_of_Dose and dragging the selection down. 15. The finished worksheet looks like this:


Map the NCA Dosing Data worksheet to the Dosing panel: 1. Select the NCA object in the Object Browser. 2. Select Dosing in the Setup list. 3. Map the NCA Dosing Data worksheet to the Dosing panel in one of two ways: •

Use the pointer to drag the NCA Dosing Data worksheet from the Data folder to the Dosing panel.



Click the Select source button in the Dosing panel to select the worksheet and map it to the Dosing panel.

4. Use the option buttons in the Dosing panel to map Subject to Sort, Dose to Dose, and Time_of_Dose to Time of Dose. CAUTION: Mapping a worksheet to the Dosing panel overrides the Unit settings in the Dose Options area. If a worksheet is mapped to the Dosing panel make sure that the appropriate units are added to the Dose column in the worksheet.

Terminal elimination phase

Phoenix attempts to estimate the rate constant, Lambda Z, associated with the terminal elimination phase. Although Phoenix is capable of selecting the times to be used in the estimation of Lambda Z, this example provides Phoenix with the time range.

Specify the times to be included: There are two ways to specify the times to be included. 1. Select Slopes Selector in the Setup list. 2. Select Time Range in the Lambda Z Calculation Method menu. 3. In the Start field type 8. 4. In the End field type 24. 5. Make the same changes for the other subjects by selecting the Subject=GS tab and the Subject=RH tab and entering the same Start and End time values. OR 1. Select Slopes in the Setup list. 2. In the first cell under Start Time type 8. 3. In the first cell under End Time type 24. 4. Do not type any values into the Exclusions column.


5. Use the drag and fill feature to fill the rest of the Slopes worksheet by highlighting the first two cells under Start Time and End Time and dragging the selection down for all subjects. 6. Select Slopes Selector in the Setup list. Note that Time Range is selected in the Lambda Z Calculation Method menu. The Start and End times have been specified for each subject. A line is displayed on each graph that shows the Lambda Z time range.

In this example no points are excluded from the specified Lambda Z time range. The example Noncompartmental analysis with exclusions, computing partial areas on page 46 demonstrates Lambda Z exclusions.
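Under the hood, the Lambda Z estimate is the negative slope of a log-linear regression of concentration on time over the selected range, with uniform weighting. The following Python sketch is illustrative only and is not Phoenix code; the profile values and the lambda_z helper are hypothetical.

import math

def lambda_z(times, concs, t_start, t_end, exclude=()):
    """Log-linear regression of concentration on time over [t_start, t_end],
    with uniform weighting. Points with non-positive concentrations or with
    times listed in `exclude` are dropped. Returns (lambda_z, half_life)."""
    pts = [(t, c) for t, c in zip(times, concs)
           if t_start <= t <= t_end and c > 0 and t not in exclude]
    xs = [t for t, _ in pts]
    ys = [math.log(c) for _, c in pts]
    n = len(pts)
    x_bar, y_bar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
             / sum((x - x_bar) ** 2 for x in xs))
    lz = -slope
    return lz, math.log(2) / lz

# Hypothetical single-subject profile (hr, ng/mL), regression range 8-24 hr
t = [0, 0.5, 1, 2, 4, 6, 8, 12, 16, 24]
c = [0, 1.2, 2.9, 4.1, 3.6, 3.0, 2.4, 1.5, 0.9, 0.35]
lz, t_half = lambda_z(t, c, 8, 24)
print(f"lambda_z = {lz:.4f} 1/hr, terminal half-life = {t_half:.2f} hr")

The exclude argument mirrors the Exclusions column used in the second NCA example; here it is left empty.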

Therapeutic response

The next step is to define a target concentration range to enable calculation of the time and area located above, below, and within that range.

Note: See Noncompartmental analysis with exclusions, computing partial areas on page 46 for an NCA example that includes computation of partial areas under the curve.

Specify the therapeutic response options: 1. Select Therapeutic Response in the Setup list. 2. Select the Use internal Worksheet check box. The Therapeutic Response sorts dialog is displayed. The dialog prompts the user to select the sort variables to use to create the internal therapeutic response worksheet. 3. Click OK to accept the default sort variable. 4. Select Therapeutic Response in the Setup list. 5. In the Lower cell for each subject type 2. 6. In the Upper cell for each subject type 4.
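The time within the therapeutic window (reported later as TimeDur) can be thought of as the total time the interpolated concentration curve spends between the Lower and Upper bounds. The Python sketch below assumes straight-line interpolation between observations, a simplification of the interpolation Phoenix actually applies; the profile values are hypothetical.

def time_in_window(times, concs, lo, hi):
    """Time spent with concentration inside [lo, hi], assuming the
    concentration varies linearly between successive observations."""
    total = 0.0
    for (t1, c1), (t2, c2) in zip(zip(times, concs), zip(times[1:], concs[1:])):
        if c1 == c2:
            total += (t2 - t1) if lo <= c1 <= hi else 0.0
            continue
        # times at which the linear segment crosses the window bounds
        def t_at(c):
            return t1 + (c - c1) * (t2 - t1) / (c2 - c1)
        a, b = sorted((t_at(lo), t_at(hi)))
        start, end = max(t1, a), min(t2, b)
        total += max(0.0, end - start)
    return total

# Hypothetical profile; therapeutic window of 2-4 as entered above
t = [0, 1, 2, 4, 8, 12, 24]
c = [0, 1.5, 3.0, 4.5, 3.5, 2.2, 0.8]
print(f"TimeDur ~ {time_in_window(t, c, 2.0, 4.0):.2f} hr")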


Units

The next step in setting options is to specify preferred output units. The independent variable, dependent variable, and dosing regimen must have units before preferred output units can be set.

Set preferred units: 1. Select Units in the Setup list. The Units worksheet lists both the Default units and the Preferred units for each parameter. 2. Select the cell in the Preferred column for Volume (Vz, Vz/F, Vss). The new preferred unit is L (liter). 3. In the Preferred cell for Volume type L.

NCA Model options

Four methods are available for computing the area under the curve. The default method is the linear trapezoidal rule with linear interpolation. This example uses the Linear Log Trapezoidal method: linear trapezoidal rule up to Tmax, and log trapezoidal rule for the remainder of the curve.

Specify the NCA model options: Use the Options tab to specify settings for the NCA model options. The Options tab is located underneath the Setup tab. 1. Select Linear Log Trapezoidal in the Calculation Method menu. 2. In the Titles field type Example of Noncompartmental Analysis.
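The rule just described can be sketched in a few lines of Python. This is an illustration of the Linear Log Trapezoidal idea rather than Phoenix's implementation; it assumes a fall-back to the linear rule wherever a log trapezoid is undefined (zero or equal concentrations), and the profile values are made up.

import math

def auc_linear_log(times, concs):
    """Linear trapezoidal rule up to Tmax, log trapezoidal rule afterwards.
    Falls back to the linear rule for any post-Tmax segment where the log
    trapezoid is undefined (zero or equal concentrations)."""
    tmax = times[concs.index(max(concs))]
    auc = 0.0
    for (t1, c1), (t2, c2) in zip(zip(times, concs), zip(times[1:], concs[1:])):
        if t2 <= tmax or c1 <= 0 or c2 <= 0 or c1 == c2:
            auc += (t2 - t1) * (c1 + c2) / 2.0                 # linear trapezoid
        else:
            auc += (t2 - t1) * (c1 - c2) / math.log(c1 / c2)   # log trapezoid
    return auc

# Hypothetical profile (hr, ng/mL)
t = [0, 0.5, 1, 2, 4, 8, 12, 24]
c = [0, 2.1, 3.8, 4.6, 3.9, 2.3, 1.4, 0.4]
print(f"AUClast ~ {auc_linear_log(t, c):.2f} hr*ng/mL")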

Results

At this point, all of the necessary commands have been specified. This example includes text, worksheet, and plot output.

Run the analysis:

• Click the Execute button. The results are displayed on the Results tab.


NCA Text Output

The NCA object’s Core output text file contains user settings, a brief summary table, and final parameters output for each subject.

Core output

Subject=DW
Date: 4/10/2009 Time: 17:01:42
Example of Noncompartmental Analysis
WINNONLIN NONCOMPARTMENTAL ANALYSIS PROGRAM 6.3.0.326
Core Version 04Jun2007

Settings
--------
Model: Plasma Data, Extravascular Administration
Number of nonmissing observations: 16
Dose time: 0.00
Dose amount: 55.00
Calculation method: Linear/Log Trapezoidal
Weighting for lambda_z calculations: Uniform weighting
Lambda_z method: User-specified lambda_z range, Log regression
User's lambda_z bounds: 8.00, 24.00
Lower bound for therapeutic window: 2.00
Upper bound for therapeutic window: 4.00

NCA worksheet output

The NCA object’s worksheet output contains summary tables of the results.

Dosing Used: The dosing regimen specified in the Dosing panel.

Exclusions: Any excluded data points specified in the Slopes panel.

Final Parameters: Estimates of the final parameters for each level of the sort variable (each subject for this example), including times and areas above (“TimeHgh”), in (“TimeDur”), and below (“TimeLow”) the therapeutic response (AUCHgh, AUCLow, etc.). Parameter names that include “INF” are extrapolated to infinity using the estimated Lambda Z.

Final Parameters Pivoted: The same as Final Parameters, but with one parameter per column, in order to conveniently perform further analysis on individual parameters.

Partial Areas: Lists start and end times used to define the partial areas under the curve.

Plot Titles: The title of each graph in the output.

Slopes Settings: The settings the user specified for the Terminal elimination phase.

Summary Table: The sort variables, X variable, points included in the regression for Lambda Z (noted with *), Y variable, predicted Y for the regression, residual for the regression, area under the curve (AUC), area under the moment curve (AUMC), and the weight used for the regression.
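For the parameters extrapolated to infinity, the standard NCA relationship is AUCINF = AUClast + Clast/Lambda Z. A one-line Python sketch (the numeric values are hypothetical, not worksheet output):

def auc_inf(auc_last, c_last, lz):
    """Standard NCA extrapolation: AUCINF = AUClast + Clast / lambda_z."""
    return auc_last + c_last / lz

print(auc_inf(auc_last=31.6, c_last=0.4, lz=0.12))  # hypothetical values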

The Final Parameters and the Summary Table results are shown below:

Final Parameters

This subject’s concentrations were within the theoretical therapeutic range for just over 13.8 hours, as reflected in the parameter TimeDur.


Summary Table

NCA plot output

The NCA object’s plot output displays Observed Y and Predicted Y vs X graphs for each subject.


Descriptive statistics

At this point, it is convenient to summarize the results of the noncompartmental analysis using a Descriptive Stats object. This example summarizes parameter estimates across subjects.

Summarize the Final Parameters results: 1. Select the workflow in the Object Browser and then select Insert > NCA and Toolbox > Descriptive Stats. The Descriptive Stats object is added to the workflow in the Object Browser. 2. Map the NCA Final Parameters worksheet as the input source for the Descriptive Stats object: •

In the Descriptive Stats Main Mappings panel click the Select Source button to open the Select Object dialog.



Select the NCA Final Parameters worksheet and click Select. OR



Select the workflow. The workflow Diagram tab is displayed in the right viewing panel. Each operational object in a workflow is represented in the Diagram tab.



Click the Down Arrows buttons to expand the NCA and Descriptive Stats symbols.

Each operational object in the Diagram tab contains a complete list of all input and output sources. •

Click the (+) symbol beside the NCA Results.



Click the (+) symbol beside the Descriptive Stats Inputs.



Drag the NCA Final Parameters worksheet to the Descriptive Stats Main input. The Final Parameters worksheet is mapped to the Descriptive Stats object. A line is displayed that represents the mapping between the NCA and Descriptive Stats objects.

3. Use the option buttons in the Main Mappings panel to map the data types to the following contexts: •

Leave Subject mapped to None.



Map Parameter to the Sort context.



Leave Units mapped to None.


Map Estimate to the Summary context.

Note: Mapping Parameter to Sort computes statistics on the parameter estimates and mapping Estimate to Summary computes one statistic per parameter.

Descriptive Stats options are accessible in the Options tab, which is located underneath the Setup tab. 4. Select the Confidence Interval check box. The default setting for the Confidence Interval is 95%. Do not change this setting. 5. Select the Number of SD check box. The default setting for the number of standard deviations is 1. Do not change this setting. 6. Click the Execute button. The results are displayed on the Results tab.

The three subjects spent an average of 13.6 hours within the therapeutic concentration range, as shown by the parameter TimeDur.
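The Confidence Interval option adds a t-based interval for the mean of each summarized parameter. The sketch below shows the arithmetic; it assumes SciPy is available for the t quantile, and the three TimeDur values are hypothetical rather than taken from the worksheet output.

from statistics import mean, stdev
from scipy import stats  # assumed available; used only for the t quantile

def describe(values, conf=0.95):
    """Mean, SD, and a two-sided t-based confidence interval of the mean,
    roughly what the Confidence Interval option adds to the Statistics worksheet."""
    n = len(values)
    m, sd = mean(values), stdev(values)
    se = sd / n ** 0.5
    t_crit = stats.t.ppf(0.5 + conf / 2, df=n - 1)
    return m, sd, (m - t_crit * se, m + t_crit * se)

# Hypothetical TimeDur estimates for the three subjects
m, sd, ci = describe([13.8, 13.1, 13.9])
print(f"mean={m:.2f}, SD={sd:.2f}, 95% CI=({ci[0]:.2f}, {ci[1]:.2f})")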

Noncompartmental analysis with exclusions, computing partial areas

This example demonstrates the exclusion of points in the terminal elimination phase and computation of partial areas under the curve in the Phoenix NCA object. This example uses time-concentration data for a single subject. The data are provided in NCA2.csv, which is located in the Phoenix examples directory.

Import the data set: 1. Select File > Import or click the Import button. The Open File(s) dialog is displayed.

2. Navigate to the Phoenix examples directory, which by default is located at

C:\Program Files\Pharsight\Phoenix\application\Examples. 3. Select NCA2.csv and click Open. The Worksheet Import Options dialog is displayed. The dialog is used to assign options for how the data are imported and presented. 4. Select the Has units row check box. 5. Click Finish. The data set is added to the project’s Data folder. A data set in CSV (Comma Separated Value) format is added to the Data folder as a worksheet.


6. View the data set by selecting it in the Data folder. Select the worksheet to display it in the Grid tab.

Model settings

Noncompartmental analysis for extravascular dosing is available as model 200 in Phoenix’s noncompartmental analysis object. Phoenix always displays the model type in the NCA object’s Options tab.

Note: The exact model used is determined by the dose type. Extravascular Input uses Model 200, IV-Bolus Input uses Model 201, and Constant Infusion uses Model 202.

Insert the NCA object: 1. Select the workflow in the Object Browser and then select Insert > NCA and Toolbox > NCA. The NCA object is added to the workflow in the Object Browser.

Note: When multiple objects of the same type are added to a workflow they are numbered sequentially. For example, the second NCA object added to this workflow is called NCA 1. 2. Map the data set NCA2 as the input source for the NCA 1 object: •

Use the pointer to drag the NCA2 worksheet from the Data folder to the Main Mappings panel. OR



In the NCA 1 Main Mappings panel click the Select Source button to open the Select Object dialog.

Select the NCA2 worksheet and click Select.

The NCA2 data set is mapped to the NCA 1 object.

3. Use the option buttons in the Main Mappings panel to map the data types to the following contexts:

Map Time to the Time context.



Map Conc to the Concentration context.


Dosing regimen

In this example one dose of 70 mg was administered at time 0.

Enter the dosing data: 1. Select Dosing in the NCA 1 object's Setup list. The Dosing panel is displayed. 2. Select the Use internal Worksheet check box. 3. In the first cell in the Dose column type 70. 4. In the first cell in the Time of Dose column type 0. 5. Do not enter any values in the Tau column. Dosing options are located in the Dose Options area in the Options tab. 6. Extravascular is selected by default in the Type menu. Do not change this setting. 7. In the Unit field type mg.

Terminal elimination phase

Phoenix attempts to estimate the rate constant Lambda Z associated with the terminal elimination phase. Although Phoenix is capable of selecting the times to be used in the estimation of Lambda Z, this example provides Phoenix with the time range.

Specify the times to be included in calculation of Lambda Z: 1. Select Slopes in the NCA 1 object's Setup list. The Slopes panel is displayed. 2. In the first cell in the Start Time column type 0.33. 3. In the first cell in the End Time column type 2.5. 4. Exclude the data point at 1.5 by typing 1.5 in the first cell in the Exclusions column. 5. Select the NCA object's Slopes Selector tab. The Start and End times and the Exclusion are marked on the graph display.


Partial areas

Partial areas under the curve are computed for 0 to 3.0 hours and 1.25 to 2.5 hours.

Specify the AUCs (areas under a curve) to be calculated: 1. Select Partial Areas in the NCA 1 object's Setup list. The Partial Areas panel is displayed. 2. Select the Use internal Worksheet check box. 3. In the Options tab, select 2 in the Max # of Partial Areas menu. 4. In the first cell in the Start Time column type 0 and in the first cell in the End Time column type 3. 5. In the second cell in the Start Time column type 1.25 and in the second cell in the End Time column type 2.5.
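A partial area is an AUC restricted to a time window, with concentrations interpolated at window boundaries that fall between observations. The Python sketch below uses linear interpolation and linear trapezoids throughout for simplicity (Phoenix applies the calculation method selected in the Options tab); the profile and the partial_auc helper are illustrative only.

def partial_auc(times, concs, t_start, t_end):
    """Linear-trapezoid partial AUC between t_start and t_end, interpolating
    concentrations at boundaries that fall between observations."""
    def conc_at(t):
        for (t1, c1), (t2, c2) in zip(zip(times, concs), zip(times[1:], concs[1:])):
            if t1 <= t <= t2:
                return c1 + (c2 - c1) * (t - t1) / (t2 - t1)
        raise ValueError("time outside the observed range")

    # boundary points plus every observation strictly inside the window
    pts = sorted({t_start, t_end} | {t for t in times if t_start < t < t_end})
    cs = [conc_at(t) for t in pts]
    return sum((t2 - t1) * (c1 + c2) / 2.0
               for t1, t2, c1, c2 in zip(pts, pts[1:], cs, cs[1:]))

# Hypothetical profile; partial areas as specified above
t = [0, 0.33, 0.67, 1, 1.5, 2, 2.5, 3, 4, 6]
c = [0, 1.1, 2.4, 3.0, 2.6, 2.1, 1.7, 1.4, 0.9, 0.4]
print(partial_auc(t, c, 0, 3.0), partial_auc(t, c, 1.25, 2.5))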

Model options

This example includes titles in the graph output and uses the Linear Log Trapezoidal method to calculate areas under the curve.

Set model options: Use the Options tab to specify settings for the NCA model options. 1. The default setting for Model Type is Plasma (200-202). Do not change this setting.

Note: The exact model type (200, 201, or 202) is determined by the dose type. 2. Select Linear Log Trapezoidal in the Calculation Method menu. 3. In the Titles field type A Second NCA Example.

Results

All necessary settings are complete.

• Click the Execute button. The results are displayed on the Results tab.


NCA worksheet output

Selections from the worksheet output are displayed below.

Final Parameters

Note the partial area estimates in the last two rows of the Final Parameters (last two rows not pictured).

Summary Table

Note that values used in the calculation of Lambda Z are marked with an asterisk in the column Lambda_z_Incl. The data point corresponding to time 1.5, which was excluded manually, is not marked with an asterisk. The observation at time 2.0, with a value of 0, was automatically excluded.

NCA plot output

The excluded data point is marked on the Observed Y and Predicted Y vs X plot of observed and predicted data.

Additional NCA examples

The additional NCA example Pharsight Model Object (*.pmo) files included in the Phoenix examples subdirectory contain examples of noncompartmental analysis for drug effect data, sparse data, and urine concentrations. The file names are NCA_PD.pmo, SparseSamplingChaioYeh.pmo, and Urine.pmo.

Note: Importing PMO files is only supported on 32-bit systems or by running Phoenix32.exe in 64-bit systems.

Load, view, and run an example Pharsight Model Object: 1. Select File > Import or click the Import button. The Open File(s) dialog is displayed.

2. Navigate to the Phoenix Legacy WinNonlin examples directory, which by default is located at C:\Program Files\Pharsight\Phoenix\application\Examples\Legacy WinNonlin.


3. Select one of the model files listed above and click Open. The Data Import Wizard is displayed. 4. Click Finish. The data set is added to the project’s Data folder. A file in PMO (Pharsight Model Object) format is added to the Data folder as one or more workbook objects. A .pmo file also adds one or more operational objects to the workflow. Each model file adds: •

Source data set in worksheet form



Data sets in worksheet form for Dosing, Partial Areas, and Therapeutic Response are created if used by the model



An NCA model object.

Note: View the data sets by selecting them in the Data folder and double-clicking them or pressing ENTER. The data sets are displayed in separate worksheet windows. 5. Double-click the imported NCA models or select the NCA model and press ENTER to open each model in its own window. Models saved in .pmo format contain all the necessary data mappings and option settings. 6. Click the Execute button to run the model and examine its output. The results are displayed on the Results tab. Each model file is described below:

NCA_PD.pmo

This example demonstrates noncompartmental analysis of pharmacodynamic data. It uses NCA model 220 to summarize cortisol concentrations from a single subject exposed to a two-hour, stepwise elevation in adrenocorticotropic hormone from time 60 to time 120. The data are derived from Urquhart and Li (1969) [A] and also appear, with a different model, as example PD11 in Gabrielsson and Weiner (2000) [B].

A. Urquhart and Li (1969). Dynamic testing and modeling of adrenocortical secretory function. Ann. New York Acad. Sci. 156:756.
B. Gabrielsson and Weiner (2000). Pharmacokinetic and Pharmacodynamic Data Analysis: Concepts and Applications. 3rd edition. Swedish Pharmaceutical Press, Stockholm, Sweden.


SparseSamplingChaioYeh.pmo

This example contains the data and model from a classic literature example first introduced by Chiao Yeh [A]. It uses NCA model 200 with sparse sampling computations to summarize time-concentration data from nine subjects, each of whom provided three measurements at varying times.

Urine.pmo

This example uses NCA model 210 to analyze urine concentrations and volumes for a single subject.

Note: It is not necessary to keep a project open after completing each chapter. This project is not required when working in the next chapter. To close a project right-click the project and select Close Project.

A. Yeh (1990). Estimation and Significant Tests of Area Under the Curve Derived from Incomplete Blood Sampling. ASA Proceedings of the Biopharmaceutical Section 74-81.


Chapter 4

Workflows and Templates
Creating and reusing a project

The purpose of the example is to show Phoenix’s ability to create and reuse workflows. This example shows users how to create a workflow to perform an analysis, save the workflow as a template, and use the template to complete the same analysis using different data.

This example assumes that a drug company wants to create a generic form of a popular drug. The company wants to test two formulations of a compound in order to decide which formulation is closest in bioequivalence to the name brand drug. In this example users will create a workflow to test the first formulation, save the workflow as a template, and reuse the template to test the second formulation. Data for this study were created using Pharsight’s Trial Simulator™.

Create the project: 1. Select File > New Project to create a new project. A new project is created in the Object Browser. 2. Name the new project Templates.

Import the data sets: Load the following three files from the Phoenix examples directory.

» GenericForm1.xls
» GenericForm2.xls
» Pococuranitol.xls

Note: Select multiple files at once in the Open File(s) dialog by pressing the CTRL key and using the pointer to select the files.


1. Select File > Import or click the Import button. The Open File(s) dialog is displayed.

2. Navigate to the Phoenix examples directory, which by default is located at C:\Program Files\Pharsight\Phoenix\application\Examples.

3. Select GenericForm1.xls and click Open. The Data Import Wizard is displayed. The wizard is used to assign options for how the data are imported and presented.

4. Click the Forward Arrows button to advance through the Worksheet Preview screens.

5. Select the Has units row check box for each worksheet except the History worksheet. 6. Click Finish. The data set is added to the project’s Data folder. 7. Repeat steps 1. and 2. 8. Select GenericForm2.xls and click Open. The Data Import Wizard is displayed. 9. Click the Forward Arrows button to advance through the Worksheet Preview screens. 10. Select the Has units row check box for each worksheet except the History worksheet. 11. Click Finish. 12. Repeat steps 1. and 2. 13. Select Pococuranitol.xls and click Open. The Data Import Wizard is displayed. 14. Click the Forward Arrows button to advance through the Worksheet Preview screens. 15. Select the Has units row check box for each worksheet except the History worksheet. 16. Click Finish. Data sets in XLS (Microsoft Excel Workbook) format are added to the Data folder as workbooks. View the data sets by selecting them in the Data folder. Click the (+) sign beside the data set name to view the worksheet. Select the worksheet to display it in the Grid tab.


Create a workflow

A Workflow object is the part of the project that is used to contain and manage operational objects, similar to how the Data folder is used to contain and manage data sets. Workflows can be set up to perform complex operational procedures using operational objects. A Workflow can be saved as a template file, which allows users to create complex operational procedures once, and reuse and share them multiple times. The advantage of a template is that it saves the configuration settings in each of the operational objects it contains. However, templates do not save mappings to external data sets. This means that a template can be created that can be easily reused with multiple data sets.

Rename the workflow object: •

Rename the workflow object Parallel BE.

Any operational object, workbook, or worksheet can be renamed using one of the following three methods: •

Right-click the object and select Rename.



Single-click the object to make its name editable.

• Select the object and press F2 to make its name editable.

Parallel BE workflow

Insert the BQL objects: BQL is an acronym for Below Quantifiable Limit. BQL rules are used to exclude values in a data set that are too low to be useful in an analysis. 1. Select Parallel BE in the Object Browser and then select Insert > Data > BQL. 2. Rename the BQL object BQL Brand. 3. Insert a second BQL object into the workflow.


4. Rename the BQL object BQL Generic.

Create and map the BQL rule set: BQL rule sets are created and stored in the BQL Rules folder. 1. Right-click the BQL Rules folder and select New > Rule Set. The Rule Set options are displayed in the right viewing panel. 2. Type ERR under Nonnumeric Code. 3. Type 0 (zero) under Unconditional Substitution. 4. Select the Use When < LLOQ check box. 5. Select the Use Static LLOQ Value check box. 6. Type 0.01 in the LLOQ Value field.

Map the BQL Rule Set to the BQL objects: 1. Select the Parallel BE workflow in the Object Browser. The workflow's Diagram tab is displayed in the right viewing panel. 2. Click the chevron buttons to expand BQL Brand and BQL Generic. 3. Click the (+) symbols to expand Rule Sets for both BQL objects. 4. Use the pointer to drag the Rule Set from the BQL Rules folder to the BQL Rule Set input for both BQL Brand and BQL Generic.

Mapping a BQL rule set

» The Rule Set icons change from gray to color, indicating that the Rule Set has been mapped to the BQL objects.
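The rule set configured above amounts to a simple per-value substitution: the nonnumeric code and any concentration below the static LLOQ are replaced by the substitution value. A minimal Python sketch of that logic (illustrative only; the raw values are hypothetical):

LLOQ = 0.01          # static LLOQ value from the rule set
NONNUMERIC = "ERR"   # nonnumeric code
SUBSTITUTE = 0.0     # unconditional substitution value

def apply_bql(value):
    """Return the value a rule set like the one above would write to the
    output column: the substitution value for the nonnumeric code or any
    concentration below the LLOQ, otherwise the original concentration."""
    if value == NONNUMERIC:
        return SUBSTITUTE
    value = float(value)
    return SUBSTITUTE if value < LLOQ else value

# Hypothetical raw concentration column
raw = [0.004, "ERR", 0.85, 2.3, 0.009, 1.1]
print([apply_bql(v) for v in raw])   # -> [0.0, 0.0, 0.85, 2.3, 0.0, 1.1]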


5. Select the Pococuranitol workbook in the Data folder. Expand the workbook by clicking the (+) sign. 6. Select BQL Brand in the Parallel BE workflow. BQL Brand’s Setup tab is displayed in the right viewing panel. 7. Use the pointer to drag the Pococuranitol_data worksheet from the Data folder to BQL Brand’s Main Mappings panel. 8. Use the option buttons in the Main Mappings panel to map the data types to the following contexts: •

Map Subject to the Sort context.



Map Time to the Time context.



Map Concentration to the Concentration context.



Leave all other data types mapped to None.

9. In the Output Column Name field type BrandConc_gt_0_01.

10. Click the Execute button. The results are displayed on the Results tab.

11. Select the GenericForm1 workbook in the Data folder. Expand the workbook by clicking the (+) sign. 12. Select BQL Generic in the Parallel BE workflow. BQL Generic’s Setup tab is displayed in the right viewing panel. 13. Use the pointer to drag the GenericForm1_data worksheet from the Data folder to the Main Mappings panel. 14. Use the option buttons in the Main Mappings panel to map the data types to the following contexts: •

Map Subject to the Sort context.



Map Time to the Time context.



Map Concentration to the Concentration context.



Leave all other data types mapped to None.

15. In the Output Column Name field type GenericConc_gt_0_01.

16. Click the Execute button. The results are displayed on the Results tab.

Insert a Descriptive Stats object into the Parallel BE workflow: 1. Select BQL Brand’s Results tab. 2. Right-click the Output worksheet and select Send To > NCA and Toolbox > Descriptive Stats.


A Descriptive Stats object is inserted into the Parallel BE workflow. The columns in the Output worksheet are automatically mapped to the Descriptive Stats object’s Main Mappings panel. 3. Rename the Descriptive Stats object Descriptive Stats Brand. 4. Use the option buttons in the Main Mappings panel to map the data types to the following contexts: •

Leave Subject mapped to None.



Map Time to the Sort context.



Map BrandConc_gt_0_01 to the Summary context.

5. In the Options tab, select the Confidence Interval check box. 6. Select the Number of SD check box. •

Leave both Confidence Interval and Number of SD set at their default values.

7. Click the Execute button. The results are displayed on the Results tab.


Insert a second Descriptive Stats object into the Parallel BE workflow: 1. Select BQL Generic’s Results tab. 2. Right-click the Output worksheet and select Send To > NCA and Toolbox > Descriptive Stats. A Descriptive Stats object is inserted into the Parallel BE workflow. The columns in the Output worksheet are automatically mapped to the Descriptive Stats object’s Main Mappings panel. 3. Rename the Descriptive Stats object Descriptive Stats Generic. 4. Use the option buttons in the Main Mappings panel to map the data types to the following contexts: •

Leave Subject mapped to None.



Map Time to the Sort context.



Map GenericConc_gt_0_01 to the Summary context.

5. In the Options tab, select the Confidence Interval check box. 6. Select the Number of SD check box. •

Leave both Confidence Interval and Number of SD set at their default values.

7. Click the Execute button. The results are displayed on the Results tab.

Plot the concentration values over time:

1. Select the Parallel BE workflow in the Object Browser and then select Insert > Plotting > XY Plot. An XY Plot object is added to the Parallel BE workflow. The XY Plot object’s Setup tab is displayed in the right viewing panel.

2. Rename the XY Plot object XY Plot Brand vs Generic.

3. Map the Output results worksheet from the BQL Brand object as the input source for the XY Plot object:

• In the XY Plot’s XY Data Mappings panel click the Select Source button to open the Select Object dialog.

• Select BQL Brand’s Output worksheet and click Select.


BQL Brand’s Output worksheet is mapped to the XY Plot object. 4. Use the option buttons in the XY Data Mappings panel to map the data types to the following contexts: •

Map Subject to the Group context.



Map Time to the X context.



Map BrandConc_gt_0_01 to the Y context.

Add a second graph to the XY Plot object: 1. Select Plot in the Options tab menu tree. Select the Graphs tab. 2. Click the Add button. A second XY plot is added to the XY Plot object.

A second XY Plot input named XY 1 Data is added to the Setup list.


3. Map the Output results worksheet from the BQL Generic object as the input source for the second graph: •

In the XY Plot’s XY 1 Data Mappings panel click the Select Source button to open the Select Object dialog.



Select the BQL Generic’s Output worksheet and click Select.

BQL Generic’s Output worksheet is mapped to the second graph. 4. Use the option buttons in the XY 1 Data Mappings panel to map the data types to the following contexts: •

Map Subject to the Group context.



Map Time to the X context.



Map GenericConc_gt_0_01 to the Y context.

5. Click the Execute button. The results are displayed on the Results tab.

Examine the output. The plot shows that the concentration values peak at very different times for the two formulations.

Plot the mean concentration values over time:

1. Select Parallel BE in the Object Browser and then select Insert > Plotting > XY Plot. An XY Plot object is added to the Parallel BE workflow. The XY Plot object’s Setup tab is displayed in the right viewing panel.

2. Rename the XY Plot object XY Plot Mean Brand vs Generic.

3. Map the Statistics results worksheet from the Descriptive Stats Brand object as the input source for the XY Plot object:

• In the XY Plot’s XY Data Mappings panel click the Select Source button to open the Select Object dialog.

• Select the Descriptive Stats Brand Statistics worksheet and click Select.


Descriptive Stats Brand’s Statistics worksheet is mapped to the XY Plot object. 4. Use the option buttons in the XY Data Mappings panel to map the data types to the following contexts: •

Map Time to the X context.



Map Mean to the Y context.



Map SD to the Lower Error Bars and Upper Error Bars contexts.



Leave all other data types mapped to None.

Add a second graph to the XY Plot object: 1. Select Plot in the Options tab menu tree. Select the Graphs tab. 2. Click the Add button. A second XY plot is added to the XY Plot object. A second XY Plot input named XY 1 Data is added to the Setup list. 3. Map the Statistics results worksheet from the Descriptive Stats Generic object as the input source for the second graph: •

In the XY Plot’s XY 1 Data Mappings panel click the Select Source button to open the Select Object dialog.



Select the Descriptive Stats Generic Statistics worksheet and click Select.

Descriptive Stats Generic’s Statistics worksheet is mapped to the second graph. 4. Use the option buttons in the XY 1 Data Mappings panel to map the data types to the following contexts:




Map Time to the X context.



Map Mean to the Y context.



Map SD to the Lower Error Bars and Upper Error Bars contexts.


Leave all other data types mapped to None.

5. Click the Execute button. The results are displayed on the Results tab.

Examine the output. The plot shows that the mean concentration values peak at very different times for the two formulations. The plot does not place the points close to the axes, the overlaid plots are hard to differentiate, and the plot does not have a title. Use the Options tab to change the X and Y axes ranges, change the plot colors, and add a title. 1. In the Options tab, select Plot > Title. •

In the title field, type Brand and Generic Mean Concentration.

2. In the Options tab, select Axes > X. •

In the Range area, select the Custom option button.



In the Minimum field, enter 0.



Leave the Maximum field set to 60.

3. In the Options tab, select Axes > Y. •

In the Range area, select the Custom option button.



In the Minimum field, enter 0.



Leave the Maximum field set to 30.

4. Select Graphs > Mean vs Time (the first one in the list). 5. Select the Appearance tab. •

In the Marker Color menu, select Red.



In the Marker Border Color menu, select Red.



In the Line Color menu, select Red.

The Descriptive Stats Brand plot is now highlighted in Red. The plot is automatically updated to reflect the new axes ranges, plot colors, and the title. The XY Plot object does not have to be re-executed.


The final plot looks like this:

Set up brand noncompartmental analysis:

Insert an NCA object into the Parallel BE workflow 1. Select BQL Brand’s Results tab. 2. Right-click the Output worksheet and select Send To > NCA and Toolbox > NCA. An NCA object is inserted into the Parallel BE workflow. The columns in the Output worksheet are automatically mapped to the NCA object’s Main Mappings panel. 3. Rename the NCA object NCA Brand. 4. Use the option buttons in the Main Mappings panel to map the data types to the following contexts:




Map Subject to the Sort context.



Map Time to the Time context.



Map BrandConc_gt_0_01 to the Concentration context.


Dosing regimen

In this example one dose of 50 mg was administered at time 0. The Pococuranitol workbook contains a dosing data worksheet named Pococuranitol_dose. 1. Select Dosing in the NCA Setup tab. Map the Pococuranitol_dose worksheet as the input source for NCA Brand’s Dosing panel.

Note: If the Pococuranitol_dose worksheet is not viewable in the Object Browser, expand the Pococuranitol workbook in the Data folder by clicking the (+) sign.

2. Use the pointer to drag the Pococuranitol_dose worksheet from the Data folder to NCA Brand’s Dosing Mappings panel. The Pococuranitol_dose worksheet is mapped to NCA Brand’s Dosing Mappings panel.

3. Use the option buttons in the Dosing Mappings panel to map Subject to Sort, Dose to Dose, and Time_of_Dose to Time of Dose.

4. Click the Execute button. The results are displayed on the Results tab.

Set up generic noncompartmental analysis:

Insert a second NCA object into the Parallel BE workflow 1. Select BQL Generic’s Results tab. 2. Right-click the Output worksheet and select Send To > NCA and Toolbox > NCA. An NCA object is inserted into the Parallel BE workflow. The columns in the Output worksheet are automatically mapped to the NCA object’s Main Mappings panel. 3. Rename the NCA object NCA Generic. 4. Use the option buttons in the Main Mappings panel to map the data types to the following contexts: •

Map Subject to the Sort context.



Map Time to the Time context.



Map GenericConc_gt_0_01 to the Concentration context.


Dosing regimen

In this example one dose of 50 mg was administered at time 0. The GenericForm1 workbook contains a dosing data worksheet named GenericForm1_dose. 1. Select Dosing in the Setup tab. Map the GenericForm1_dose worksheet as the input source for NCA Generic’s Dosing panel.

Note: If the GenericForm1_dose worksheet is not viewable in the Object Browser, expand the GenericForm1 workbook in the Data folder by clicking the (+) sign.

2. Use the pointer to drag the GenericForm1_dose worksheet from the Data folder to NCA Generic’s Dosing Mappings panel. The GenericForm1_dose worksheet is mapped to NCA Generic’s Dosing Mappings panel.

3. Use the option buttons in the Dosing Mappings panel to map Subject to Sort, Dose to Dose, and Time_of_Dose to Time of Dose.

4. Click the Execute button. The results are displayed on the Results tab.

Create the formulation data set for the bioequivalence model

Combine the NCA output: Combine the Final Parameters Pivoted worksheets from both NCA objects and use the combined output in a bioequivalence model. The new column created by the Append Worksheets object will contain the formulation information for the bioequivalence model. 1. Select Parallel BE in the Object Browser and then select Insert > Data > Append Worksheets. An Append Worksheets object is added to the Parallel BE workflow. The Append Worksheets object’s Setup tab is displayed in the right viewing panel. 2. Map the Final Parameters Pivoted results worksheet from the NCA Brand object as the input source for the Append Worksheets object:




In the Append Worksheets object’s Worksheet 1 Mappings panel click the Select Source button to open the Select Object dialog.



Select NCA Brand’s Final Parameters Pivoted worksheet and click Select.


NCA Brand’s Final Parameters Pivoted worksheet is mapped to the Append Worksheets object. 3. Use the option buttons in the Worksheet 1 Mappings panel to map the data types to the following contexts: •

Map Subject to the Source Column context.



Map Cmax to the Source Column context.



Map AUClast to the Source Column context.



Leave all other data types mapped to None.

4. Map the Final Parameters Pivoted results worksheet from the NCA Generic object as the second input source for the Append Worksheets object: •

In the Append Worksheets object’s Worksheet 2 Mappings panel click the Select Source button to open the Select Object dialog.



Select NCA Generic’s Final Parameters Pivoted worksheet and click Select.

NCA Generic’s Final Parameters Pivoted worksheet is mapped to the Append Worksheets object. 5. Use the option buttons in the Worksheet 2 Mappings panel to map the data types to the following contexts: •

Map Subject to the Source Column context.



Map Cmax to the Source Column context.



Map AUClast to the Source Column context.



Leave all other data types mapped to None.


6. Leave the options in the Options tab set to their default settings.

7. Click the Execute button. The results are displayed on the Results tab.

Edit the Append Worksheets output: 1. Right-click the Result worksheet in the Results tab and select Copy to Data Folder. The worksheet is copied to the Data folder and renamed Result from Append Worksheets. 2. Select the Result from Append Worksheets worksheet in the Data folder. 3. Rename the worksheet Parallel BE Input 1. The Columns tab is located underneath the right viewing panel. The Columns tab is used to edit columns in a worksheet. 4. Select the Source column header in the Columns box. 5. Click the column header once to make it editable. 6. Rename the column header to Formulation. 7. In the Formulation column rename NCA Brand to Brand. 8. Select the first cell in the Formulation column and type Brand. 9. Use the drag and fill feature to change all NCA Brand cells to Brand.

Note: To use the drag and fill feature, place the pointer over the black square on the lower right side of the selected cell. The pointer changes shape to signify that the drag and fill feature can be used.

10. In the Formulation column rename NCA Generic to Generic 1. 11. Use the drag and fill feature to change all NCA Generic cells to Generic 1.

Set up the first bioequivalence model: Insert a Bioequivalence object into the Parallel BE workflow. 1. Select Parallel BE in the Object Browser and then select Insert > NCA and Toolbox > Bioequivalence. Map the Parallel BE Input 1 worksheet as the input source for the Bioequivalence object. 2. Use the pointer to drag the Parallel BE Input 1 worksheet from the Data folder to the Bioequivalence object’s Main Mappings panel. The Parallel BE Input 1 worksheet is mapped to the Bioequivalence object.


3. In the Model tab select the Parallel/Other option button to set the model to a parallel bioequivalence model. •

Leave Reference Formulation set to Brand.

4. Use the option buttons in the Main Mappings panel to map the data types to the following contexts: •

Map Formulation to the Formulation context.



Leave Subject mapped to None.



Map Cmax to the Dependent context.



Map AUClast to the Dependent context.

5. Click the Execute button. The results are displayed on the Results tab.

Examine the output.
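Conceptually, a parallel-design average bioequivalence assessment compares ln-transformed Cmax and AUClast between the two formulation groups and reports a 90% confidence interval for the test/reference ratio of geometric means. The Python sketch below uses a pooled-variance two-sample t interval as a simplified stand-in for the Bioequivalence object's underlying model (which may differ in its details); SciPy is assumed to be available for the t quantile, and the values are hypothetical.

import math
from statistics import mean, variance
from scipy import stats  # assumed available; used only for the t quantile

def parallel_be_ci(test, ref, conf=0.90):
    """Ratio of geometric means and its two-sided CI for a parallel design,
    using a pooled-variance two-sample t interval on ln-transformed values."""
    lt, lr = [math.log(x) for x in test], [math.log(x) for x in ref]
    n1, n2 = len(lt), len(lr)
    diff = mean(lt) - mean(lr)
    sp2 = ((n1 - 1) * variance(lt) + (n2 - 1) * variance(lr)) / (n1 + n2 - 2)
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))
    t_crit = stats.t.ppf(0.5 + conf / 2, df=n1 + n2 - 2)
    lo, hi = diff - t_crit * se, diff + t_crit * se
    return math.exp(diff), (math.exp(lo), math.exp(hi))

# Hypothetical Cmax values (generic vs brand); 80-125% is the usual acceptance range
ratio, ci = parallel_be_ci([21.0, 19.4, 24.1, 22.5], [20.2, 18.9, 23.0, 21.7])
print(f"GMR={ratio:.3f}, 90% CI=({ci[0]:.3f}, {ci[1]:.3f})")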

Create and add a template

In Phoenix templates any data mappings that are internal to the workflow are retained. For example, all expected output from the BQL objects, such as subject, time, and concentration, are retained in the NCA objects’ Main Mappings panel. Mappings that are external to the template, such as the BQL rules and the data sets used with the BQL objects, are not retained.

Create the template: 1. Select the Parallel BE workflow. 2. Click the Create Template button in the Object Browser toolbar to save the workflow as a template. The Save Object dialog is displayed. The default file type is Excel. Change this in the Save as type: menu to Phoenix Template (*.wnlt). The default file name is Parallel BE. Do not change this name. 3. Save the template in the Pharsight Projects Directory, which by default is located at C:\Documents and Settings\\My Documents\Pharsight Projects or C:\Users\\Documents\Pharsight Projects. 4. Ensure Save As type is Phoenix Template (*.wnlt) and click OK to save the Parallel BE workflow as a Phoenix Template (.wnlt) file. The workflow is saved as a template file in the Pharsight Projects directory and named Parallel BE.wnlt.


Add the Parallel BE template to the project: 1. Select File > Import or click the Import button. The Open File(s) dialog is displayed.

2. Navigate to the Pharsight Projects Directory. 3. Select Parallel BE.wnlt and click Open. The template is added to the project as a second workflow. The second workflow is nested below the first one. The new workflow is named Parallel BE 1. 4. Select Parallel BE 1 in the Object Browser. 5. Rename the workflow object Parallel BE 2.

Note: All references to workflows and operational objects in this section of the example refer to the Parallel BE 2 workflow and the operational objects it contains.

Map the BQL Rule Set to the BQL objects: 1. Select the Parallel BE 2 workflow object in the Object Browser. The workflow object's Diagram tab is displayed in the right viewing panel. 2. Click the chevron buttons to expand BQL Brand and BQL Generic.

3. Click the (+) symbols to expand Rule Sets for both BQL objects. 4. Use the pointer to drag the Rule Set from the BQL Rules folder to the Rule Set input for both BQL Brand and BQL Generic. 5. Select BQL Brand in the Parallel BE 2 workflow. BQL Brand’s Setup tab is displayed in the right viewing panel. 6. Use the pointer to drag the Pococuranitol_data worksheet from the Data folder to BQL Brand’s Main Mappings panel. 7. Use the option buttons in the Main Mappings panel to map the data types to the following contexts: •

Map Subject to the Sort context.



Map Time to the Time context.



Map Concentration to the Concentration context.



Leave all other data types mapped to None.

8. The template has retained the Output Column Name BrandConc_gt_0_01. Do not change this name.

9. Click the Execute button. The results are displayed on the Results tab.


10. Select BQL Generic in the Parallel BE 2 workflow. BQL Generic’s Setup tab is displayed in the right viewing panel. 11. Use the pointer to drag the GenericForm2_data worksheet from the Data folder to the Main Mappings panel. 12. Use the option buttons in the Main Mappings panel to map the data types to the following contexts: •

Map Subject to the Sort context.



Map Time to the Time context.



Map Concentration to the Concentration context.



Leave all other data types mapped to None.

13. The template has retained the Output Column Name GenericConc_gt_0_01. Do not change this name.

14. Click the Execute button. The results are displayed on the Results tab.

Set up brand descriptive statistics: 1. Select Descriptive Stats Brand in the Object Browser to display its Setup tab in the right viewing panel. •

Descriptive Stats Brand was created using a template, so all mappings in the Main Mappings panel and all options selected in the Options tab are carried over from the original object.



Because Descriptive Stats Brand used BQL Brand’s output as its input, and because BQL Brand has been executed in the second workflow, there is no need to make any changes to the Descriptive Stats object.

2. Click the Execute button. The results are displayed on the Results tab.

Set up generic descriptive statistics: 1. Select Descriptive Stats Generic in the Object Browser to display its Setup tab in the right viewing panel. •

Because Descriptive Stats Generic used BQL Generic’s output as its input, and because BQL Generic has been executed in the second workflow, there is no need to make any changes to the Descriptive Stats object.

2. Click the Execute button. The results are displayed on the Results tab.


Plot the concentration values over time: 1. Select XY Plot Brand vs Generic in the Object Browser to display its Setup tab in the right viewing panel. •

XY Plot Brand vs Generic was created using a template, so all mappings in the XY Data and XY 1 Data Mappings panels and all options selected in the Options tab are carried over from the original object.



Because XY Plot Brand vs Generic used BQL Brand’s and BQL Generic’s output as its input, and because both BQL objects have been executed in the second workflow, there is no need to make any changes to the XY Plot object.

2. Click the Execute

button. The results are displayed on the Results tab.

Examine the output. The plot shows that the concentration values peak at very different times in the two different study groups. The second generic formulation lasts longer and has much higher concentration values than the brand name formulation.

Plot the mean concentration values over time:
1. Select XY Plot Mean Brand vs Generic in the Object Browser to display its Setup tab in the right viewing panel.
   • XY Plot Mean Brand vs Generic was created using a template, so all mappings in the XY Data and XY 1 Data Mappings panels and all options selected in the Options tab are carried over from the original object.
   • Because XY Plot Mean Brand vs Generic used Descriptive Stats Brand's and Descriptive Stats Generic's output as its input, and because both Descriptive Stats objects have been executed in the second workflow, there is no need to make any changes to the XY Plot object.
2. Click the Execute button. The results are displayed on the Results tab.

Examine the output. The plot shows that the mean concentration values peak at very different times in the two different study groups.

Set up brand noncompartmental analysis:
1. Select NCA Brand in the Object Browser to display its Setup tab in the right viewing panel.
   • NCA Brand was created using a template, so all mappings in the Main Mappings panel and all options selected in the Options tab are carried over from the original object.
   • Because NCA Brand used BQL Brand's output as its input, and because BQL Brand has been executed in the second workflow, only a few changes need to be made to the NCA object.
   • Because the dosing data came from a data source external to the template, the Dosing tab needs to be re-mapped.

Dosing regimen
In this example one dose of 50 mg was administered at time 0. The Pococuranitol workbook contains a dosing data worksheet named Pococuranitol_dose.
1. Select Dosing in the Setup list. Map the Pococuranitol_dose worksheet as the input source for NCA Brand's Dosing panel.
2. Use the pointer to drag the Pococuranitol_dose worksheet from the Data folder to NCA Brand's Dosing Mappings panel. The Pococuranitol_dose worksheet is mapped to NCA Brand's Dosing Mappings panel.
3. Use the option buttons in the Dosing Mappings panel to map Subject to Sort, Dose to Dose, and Time_of_Dose to Time of Dose.
4. Click the Execute button. The results are displayed on the Results tab.

Set up generic noncompartmental analysis:
1. Select NCA Generic in the Object Browser to display its Setup tab in the right viewing panel.
   • Because NCA Generic used BQL Generic's output as its input, and because BQL Generic has been executed in the second workflow, only a few changes need to be made to the NCA object.
   • Because the dosing data came from a data source external to the template, the Dosing tab needs to be re-mapped.

Dosing regimen
In this example one dose of 50 mg was administered at time 0. The GenericForm2 workbook contains a dosing data worksheet named GenericForm2_dose.
1. Select Dosing in the Setup list. Map the GenericForm2_dose worksheet as the input source for NCA Generic's Dosing panel.
2. Use the pointer to drag the GenericForm2_dose worksheet from the Data folder to NCA Generic's Dosing Mappings panel. The GenericForm2_dose worksheet is mapped to NCA Generic's Dosing Mappings panel.
3. Use the option buttons in the Dosing Mappings panel to map Subject to Sort, Dose to Dose, and Time_of_Dose to Time of Dose.
4. Click the Execute button. The results are displayed on the Results tab.

Combine the NCA output:
Combine the Final Parameters Pivoted worksheets from both NCA objects and use the combined output in the bioequivalence model. The new column created by the Append Worksheets object will contain the formulation information for the bioequivalence model.
1. Select Append Worksheets in the Object Browser to display its Setup tab in the right viewing panel.
   • Append Worksheets was created using a template, so all mappings in the Worksheet 1 and Worksheet 2 Mappings panel are carried over from the original object.
   • Because Append Worksheets used NCA Brand's and NCA Generic's output as its input, and because both NCA objects have been executed in the second workflow, no changes need to be made to the Append Worksheets object.
2. Click the Execute button. The results are displayed on the Results tab.

Edit the Append Worksheets output:
1. Right-click the Result worksheet in the Results tab and select Copy to Data Folder. The worksheet is copied to the Data folder and renamed Result from Append Worksheets.
2. Select the Result from Append Worksheets worksheet in the Data folder.
3. Rename the worksheet Parallel BE Input 2. The Columns tab is located underneath the right viewing panel. The Columns tab is used to edit columns in a worksheet.
4. Select the Source column header in the Columns box.
5. Click the column header once to make it editable.
6. Rename the column header to Formulation.
7. In the Formulation column rename NCA Brand to Brand.
8. Select the first cell in the Formulation column and type Brand.
9. Use the drag and fill feature to change all NCA Brand cells to Brand.
10. In the Formulation column rename NCA Generic to Generic 2.
11. Use the drag and fill feature to change all NCA Generic cells to Generic 2.

Set up the second bioequivalence model:
1. Select Bioequivalence in the Object Browser to display its Setup tab in the right viewing panel. Map the Parallel BE Input 2 worksheet as the input source for the Bioequivalence object.
2. Use the pointer to drag the Parallel BE Input 2 worksheet from the Data folder to the Main Mappings panel. The Parallel BE Input 2 worksheet is mapped to the Bioequivalence object.
   • The Bioequivalence object was created using a template, so all mappings in the Main Mappings panel and all options selected in the Model tab are carried over from the original object.
   • In the Model tab the Parallel/Other option button is selected, and the Reference Formulation is set to Brand.
3. Click the Execute button. The results are displayed on the Results tab.

Examine the output.

Note: It is not necessary to keep a project open after completing each chapter. This project is not required when working in the next chapter. To close a project right-click the project and select Close Project.


Chapter 5

Pharmacokinetic Modeling Creating and saving PK models in Phoenix

Suppose that a researcher has obtained concentration data from one subject after oral administration of a compound, and now wishes to fit a pharmacokinetic (PK) model to the data.

Exploring the data

Create a new project:
1. Select File > New Project to create a new project. A new project is created in the Object Browser.
2. Name the new project PK Model.

Import the data set:
1. Select File > Import or click the Import button. The Open File(s) dialog is displayed.
2. Navigate to the Phoenix examples subdirectory, which by default is located at C:\Program Files\Pharsight\Phoenix\application\Examples.
3. Select study1.CSV and click Open. The Worksheet Import Options dialog is displayed. The dialog is used to assign options for how the data are imported and presented.
4. Select the Has units row check box.
5. Click Finish. The data set is added to the project's Data folder. A data set in CSV (Comma Separated Value) format is added to the Data folder as a worksheet.

6. View the data set by selecting it in the Data folder. Select the worksheet to display it in the Grid tab, which is located in the right viewing panel.

Plot the time and concentration data:
1. Select the workflow in the Object Browser and then select Insert > Plotting > XY Plot.

Note: The XY Plot object can also be added by right-clicking the workflow and selecting New > Plotting > XY Plot. Any object can be added by selecting New in the workflow menu.

The XY Plot object is added to the workflow in the Object Browser.
2. Map the data set study1 as the input source for the XY Plot object:
   • Use the pointer to drag the study1 worksheet from the Data folder to the XY Data Mappings panel.
   OR
   • In the XY Plot XY Data Mappings panel click the Select source button to open the Select Object dialog.
   • Select the study1 worksheet and click Select.
   The study1 data set is mapped to the XY Plot object.
3. Use the option buttons in the XY Data Mappings panel to map the data types to the following contexts:
   • Leave Subject mapped to None.
   • Map Time to the X context.
   • Map Conc to the Y context.
4. Click the Execute button. The results are displayed on the Results tab.

Change the plot to semi-logarithmic:
The plot display options are located in the XY Plot's Options tab.
1. Select Axes > Y.
2. Select the Logarithmic option button in the Scale area. Leave the logarithmic base set to 10. The XY Plot is automatically updated to reflect the scale change.

Set up the model

The plot suggests that the system might be adequately modeled by a one-compartment model with first-order absorption. This model is available as Model 3 in the pharmacokinetic models included in Phoenix.

Begin modeling:
1. Select the workflow in the Object Browser and then select Insert > WNL 5 Classic Modeling > PK Model. The PK Model object is added to the workflow in the Object Browser.
2. Map the data set study1 as the input source for the PK Model object:

   • Use the pointer to drag the study1 worksheet from the Data folder to the PK Model object's Main Mappings panel.
   OR
   • In the PK Model Main Mappings panel click the Select source button to open the Select Object dialog.
   • Select the study1 worksheet and click Select.
   The study1 data set is mapped to the PK Model object.
3. Use the option buttons in the Main Mappings panel to map the data types to the following contexts:
   • Map Subject to the Sort context.
   • Map Time to the Time context.
   • Map Conc to the Concentration context.
Use the Model Selection tab to specify which PK model Phoenix uses in the analysis. The Model Selection tab is located underneath the Setup tab.
4. Select the Number 3 model check box in the Model Selection tab.

Dosing regimen

Note: Entering the units for dosing data makes it possible to view and adjust units for the model parameters.

In this example a single dose of 2 micrograms was administered at time 0. The dosing values are:
   – Number of Doses = 1
   – Dose = 2 ug
   – Time = 0

Enter the dosing data:
1. Select the PK Model's Dosing panel.
2. Select the Use internal Worksheet check box. The Select sorts dialog is displayed. The Select sorts dialog prompts a user to select the sort variables to use to create the internal dosing worksheet.
3. Click OK to accept the default sort variable.
4. In the cell under Time type 0.
5. In the cell under Dose type 2.
Use the Weighting/Dosing Options tab to specify settings for the PK Model dosing options. The Weighting/Dosing Options tab is located underneath the Setup tab. Dosing options are located in the Dosing area in the Weighting/Dosing Options tab.

6. In the Unit field type ug.

Initial parameter estimates

All model estimation procedures benefit from initial estimates of the parameters. While Phoenix can compute initial parameter estimates using curve stripping, this example will provide user values for the initial parameter estimates.

Enter initial parameter estimates:
1. Select the Parameter Options tab, which is located underneath the Setup tab.
2. Select the User Supplied Initial Parameter Values option button.
   • The WinNonlin Bounds option button is selected by default. Do not change this setting.
3. Select the PK Model's Initial Estimates panel.
4. Select the Use internal Worksheet check box. The Select sorts dialog is displayed. The Select sorts dialog prompts a user to select the sort variables to use to create the internal dosing worksheet.
5. Click OK to accept the default sort variable.
6. Enter the following initial values:
   • V_F = 0.25
   • K01 = 1.81
   • K10 = 0.23
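Before running the model, it may help to see the shape of Model 3 written out. The snippet below is only an illustration, not Phoenix's implementation: it evaluates the standard one-compartment, first-order absorption equation at the initial estimates above and the 2 ug dose, and it assumes K01 is not equal to K10 and that there is no lag time.

```python
import numpy as np

def conc_one_cpt_oral(t, dose, v_f, k01, k10):
    """Concentration for a one-compartment model with first-order absorption:
    absorption rate K01, elimination rate K10, V_F = V/F.
    Standard closed form, valid only when k01 != k10."""
    return (dose * k01) / (v_f * (k01 - k10)) * (np.exp(-k10 * t) - np.exp(-k01 * t))

# Illustrative values taken from this example: a 2 ug dose and the
# user-supplied initial estimates (not the fitted parameters).
t = np.linspace(0, 24, 9)   # hypothetical time grid, in the data's time units
print(conc_one_cpt_oral(t, dose=2.0, v_f=0.25, k01=1.81, k10=0.23))
```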

Run the model and view the results

At this point, all of the necessary commands and options have been specified.
   • Click the Execute button. The results are displayed in the Results tab.

The Results tab contains three types of model output:
   • Output Data (worksheets)
   • Plots
   • Text Output

Output Data (worksheets)

The PK worksheet output contains the following types of results:

PK Model worksheet contents

Condition Numbers: Rank and condition number of the matrix of partial derivatives for each iteration. The matrix is of full rank, since Rank is equal to the number of parameters. If the Rank were less than three, that would indicate that there was not enough information in the data to estimate all three parameters. The condition value is the square root of the ratio of the largest to the smallest eigenvalue.
Correlation Matrix: Correlation matrix for the parameters.
Diagnostics: The following diagnostics are provided: corrected sum of squared observations (CSS), weighted corrected sum of squared observations (WCSS), sum of squared residuals (SSR), weighted sum of squared residuals (WSSR), estimate of residual standard deviation (S) and degrees of freedom (DF), the correlation between observed Y and predicted Y, the weighted correlation, and two measures of goodness of fit: the Akaike Information Criterion (AIC) and Schwartz Bayesian Criterion (SBC).
Dosing Used: Dose amounts and dosing times.
Eigenvalues: Eigenvalues for each level of the sort variables.
Final Parameters: Parameter names, estimates, standard error of the estimates, CV%, univariate confidence intervals, and planar confidence intervals.
Final Parameters Pivoted: Parameter names, estimates, standard error of the estimates, CV%, univariate confidence intervals, and planar confidence intervals, stacked by parameter.
Initial Estimates: Parameter names, initial values, and lower and upper bounds.
Minimization Process: Iteration number, weighted sum of squares, and value for each parameter.
Partial Derivatives: Values of the partial derivatives at each time point for each function being fit. In this case, one function, predicting plasma concentration.
Predicted Data: Time and predicted Y for the number of time points selected in the Model Options PK Settings. Partial data shown below.
Secondary Parameters: Secondary parameter name, estimate, standard error of the estimate, and CV%.
Secondary Parameters Pivoted: Secondary parameter name, estimate, standard error of the estimate, and CV%, stacked by parameter.
Stacked Partial Derivatives: Values of the partial derivatives at each time point for each function being fit. In this case, one function, predicting plasma concentration, with all the parameters in one column.
Summary Table: Summary of observed and predicted data and residuals. For PK/PD link models the Summary table would also include CP and Ce; for indirect response models, CP.
User Defined Settings: User-defined PK model settings.
Variance-Covariance Matrix: Variance-covariance matrix for the parameters.

Plots

The PK model output includes six plots:
   • Observed Y and Predicted Y vs X
   • Partial Derivatives Plot
   • Predicted Y vs Observed Y
   • Predicted Y vs X
   • Residual Y vs Predicted Y
   • Residual Y vs X

Text Output

The Core output text file contains all model settings and output in plain text format.

   WINNONLIN NONLINEAR ESTIMATION PROGRAM
   Core Version 16Nov2010

   Listing of input commands
   MODEL 3
   NVAR 3
   NPOI 1000
   XNUM 2
   YNUM 3
   NCON 3
   CONS 1,2,0
   METH 2 'Gauss-Newton (Levenberg and Hartley)
   ITER 50
   INIT 0.25,1.81,0.23
   MISS '.'
   DATA 'WINNLIN.DAT'
   BEGI

   The following default parameter boundaries were generated.

   Parameter    Lower Bound    Upper Bound
   V_F          0.000          2.500
   K01          0.000          18.10
   K10          0.000          2.300

Saving the project and the results

Projects and their results can be saved in several ways. Projects can be saved as a project file. Projects can also be loaded into the Pharsight Knowledgebase Server (PKS) as a new study. A Phoenix Connect license is required for this functionality.

Save the project as a file:
1. Select File > Save Project. The Save Project dialog is displayed.
2. Select a directory in the Save in menu or accept the default directory.
3. Type a name in the File name field or accept the default name and click Save.
4. The project is saved as a Phoenix Project (.phxproj) file.

Save the project into the PKS:
1. In the PKS menu select Create Study. The Create Study dialog is displayed.
2. Click the Connect button.
3. Enter the user name and password.
4. In the Study Name field type PK Model Example.
5. In the Description field type Saving a project in PKS.
6. Select the Study Data tab.
7. Click the Browse Projects button to select a data source.
8. Select the study1 worksheet and click Select.
9. Click the Map Study Data button.

10. Use the pointer to drag Subject from the Source Column to the Subject Identifiers box.
11. In the Study Mapping dialog, select the Default Data Collection Point tab.
12. Use the pointer to drag Time from the Source Column to the Relative_Nominal_Time and Relative_Actual_Time fields.
13. Click OK.
14. Select the Samples tab.
15. Drag Conc to the Samples list.
16. Click the Save Map button.

The Save As dialog is displayed. This allows users to save the study mapping selections to a .map file.
17. In the Save in menu select the Pharsight Projects Directory, which by default is located at C:\Documents and Settings\\My Documents\Pharsight Projects.
18. In the File name field type PK Example and click Save.
19. Click OK in the Create Study dialog. The PKS Save dialog is displayed.
20. In the Audit Reason field type Save project.
21. Enter the password in the Password field.
22. Click OK. The PKS Process Manager is displayed. The process manager shows the status of PKS jobs.
23. When the process is complete, click Close in the PKS Process Manager. The project is now saved as a study in the PKS.

Note: It is not necessary to keep a project open after completing each chapter. This project is not required when working in the next chapter. To close a project right-click the project and select Close Project.


Chapter 6

The Phoenix Toolbox Nonparametric superposition, semicompartmental modeling, and deconvolution

The Toolbox contains ten types of analysis and model objects. Examples for four of the model objects are provided under the following headings:
   » Semicompartmental modeling on page 95
   » Nonparametric superposition on page 105
   » Crossover design on page 112
   » Deconvolution on page 116

Semicompartmental modeling

The examples of semicompartmental modeling and nonparametric superposition use the data set in the file PK.CSV, which is located in the Phoenix examples directory. The data are from an early Phase I PK/PD trial. Quick input is sought for the design of a seven day multiple dose study. However, the profiles are irregular, and it is not easy to apply a compartmental modeling approach to the data.

Create a new project:
1. Select File > New Project to create a new project. A new project is created in the Object Browser.
2. Name the new project Toolbox.

Import the data set:
1. Select File > Import or click the Import button. The Open File(s) dialog is displayed.
2. Navigate to the Phoenix examples directory, which by default is located at C:\Program Files\Pharsight\Phoenix\application\Examples.

3. Select the file PK.CSV and click Open. The Worksheet Import Options dialog is displayed. The dialog is used to assign options for how the data are imported and presented.
4. Select the Has units row check box.
5. Click Finish. The data set is added to the project's Data folder. A data set in CSV (Comma Separated Value) format is added to the Data folder as a worksheet.
6. View the data set by selecting it in the Data folder. Select the worksheet to display it in the Grid tab, which is located in the right viewing panel. The braces in the Effect column header indicate that the units are nonstandard and will be carried throughout the analysis, but they cannot be used in unit conversions.

The first step in evaluating the data is an exploration of time-concentration and concentration-effect plots.

Plot the data:
1. Select the workflow in the Object Browser and then select Insert > Plotting > XY Plot.

Note: The XY Plot object can also be added by right-clicking the workflow and selecting New > Plotting > XY Plot. Any object can be added by selecting New in the workflow menu.

The XY Plot object is added to the workflow in the Object Browser. Objects automatically open in the right viewing panel when they are inserted in a workflow. Each object's default view is the Setup tab, which contains all the steps necessary to set up an object.
2. Map the data set PK as the input source for the XY Plot object:
   • Use the pointer to drag the PK worksheet from the Data folder to the XY Data Mappings panel.
   OR
   • In the XY Plot XY Data Mappings panel click the Select source button to open the Select Object dialog.
   • Select the PK worksheet and click Select.
   The PK data set is mapped to the XY Plot object.


3. Use the option buttons in the XY Data Mappings panel to map the data types to the following contexts:
   • Map Subject to the Group context.
   • Map Time to the X context.
   • Map Conc to the Y context.
   • Leave Effect mapped to None.
4. Click the Execute button. The results are displayed on the Results tab.

The plot indicates that compartmental modeling might be problematic. The data are highly variable.

5. Re-map the input data by selecting the XY Plot object's XY Data Mappings panel.
6. Use the option buttons to map the data types to the following contexts:
   • Leave Subject mapped to the Group context.
   • Map Time to None.
   • Map Conc to the X context.
   • Map Effect to the Y context.
The graph display options are located in the XY Plot's Options tab.
7. Select Graphs > Effect vs Conc in the Options menu tree. Clear the Sort X Values check box. Clearing the Sort X Values check box tells Phoenix to not sort the data set by ascending concentration values before creating the XY plot.
8. Click the Execute button. The results are displayed on the Results tab.

Notice the hysteresis in the plot. Semicompartmental modeling supports calculation of effect-site concentrations based on Ke0. In this example, pre-clinical studies indicated that the Ke0 is between 0.2 and 0.3 per hour in rats and dogs.
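Before setting up the object, it may help to see what Ke0 does. The sketch below is only an illustration, not Phoenix's semicompartmental algorithm: it numerically integrates dCe/dt = Ke0*(Cp - Ce) over a made-up plasma profile, assuming linear interpolation of Cp between sampling times.

```python
import numpy as np

def effect_site_conc(times, cp, ke0):
    """Approximate effect-site concentrations by Euler integration of
    dCe/dt = ke0 * (Cp - Ce), linearly interpolating Cp on a fine grid.
    Illustrative only; Phoenix uses its own semicompartmental method."""
    grid = np.linspace(times[0], times[-1], 2000)
    cp_interp = np.interp(grid, times, cp)
    dt = grid[1] - grid[0]
    ce = np.zeros_like(grid)
    for i in range(1, len(grid)):
        ce[i] = ce[i - 1] + dt * ke0 * (cp_interp[i - 1] - ce[i - 1])
    # Return Ce at the original sampling times
    return np.interp(times, grid, ce)

# Hypothetical plasma profile; Ke0 = 0.25 per hour as used in this example.
t = np.array([0.0, 0.5, 1, 2, 4, 8, 12, 24])
cp = np.array([0.0, 4.1, 6.5, 5.8, 4.0, 2.2, 1.1, 0.2])
print(effect_site_conc(t, cp, ke0=0.25))
```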

Set up semicompartmental modeling

This example estimates effect-site concentrations using semicompartmental modeling:
1. Select the workflow in the Object Browser and then select Insert > NCA and Toolbox > Semicompartmental Modeling. The SemiCompartmental object is added to the workflow in the Object Browser.
2. Map the data set PK as the input source for the SemiCompartmental object:
   • Use the pointer to drag the PK worksheet from the Data folder to the Main Mappings panel.
   OR
   • In the SemiCompartmental Main Mappings panel click the Select source button to open the Select Object dialog.
   • Select the PK worksheet and click Select.
   The PK data set is mapped to the SemiCompartmental object.
3. Use the option buttons in the Main Mappings panel to map the data types to the following contexts:
   • Map Subject to the Sort context.
   • Map Time to the Time context.
   • Map Conc to the Concentration context.
   • Map Effect to the Effect context.
Use the Options tab to specify settings for the SemiCompartmental model options. The Options tab is located underneath the Setup tab.
4. Type 0.25 in the Ke0 field.
5. Click the Execute button. The results are displayed on the Results tab.

Output

The SemiCompartmental model provides both workbook and graph output.

Worksheet output
The Results worksheet shows the calculated concentration of the drug in the effect compartment, Ce, at each Time in the input data set, along with the input Conc and Effect data, for each subject.

Plot output
The SemiCompartmental object's Results tab includes the following four plots for each subject:
   • Effect-compartment concentration (Ce) over time (Ce vs Time)
   • Concentration over time (Cp vs Time)
   • Effect as a function of Ce (Effect vs Ce)
   • Effect over concentration (Effect vs Cp)

Based on the plots, PD model 103, an Inhibitory Effect E0 (formerly Emax) model, is appropriate for modeling the relationship between the concentration in the effect compartment (Ce) and effect.

Note: E0, the effect at time zero, is a new final parameter that takes the place of the final parameter Emax which was used in previous WinNonlin PD models.
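For reference, the effect function used by model 103, as shown later in this chapter on the Model Selection tab, is E = E0*(1 - (Ce/(Ce + IC50))), so the predicted effect equals E0 when the effect-site concentration is zero and declines toward zero as Ce rises well past IC50.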

Pharmacodynamic modeling

Insert the Inhibitory Effect (E0) Emax PD model:
1. Select the workflow in the Object Browser and then select Insert > WNL 5 Classic Modeling > PD Model. The PD Model object is added to the workflow in the Object Browser.
2. Map the SemiCompartmental Results worksheet as the input source for the PD Model object:
   • In the PD Model Main Mappings panel click the Select Source button to open the Select Object dialog.
   • Select the SemiCompartmental Results worksheet and click Select.
   OR
   • Select the workflow. The workflow Diagram tab is displayed in the right viewing panel. Each operational object in a workflow is represented in the Diagram tab.
   • Click the chevron buttons to expand the SemiCompartmental and PD Model symbols. Each object symbol contains a complete list of all input and output sources.
   • Click the (+) symbol beside the SemiCompartmental Results.
   • Click the (+) symbol beside the PD Model Inputs.
   • Drag the SemiCompartmental Results worksheet to the PD Model Main input.
   The Results worksheet is mapped to the PD Model object. A line is displayed that represents the mapping between the SemiCompartmental and PD Model objects.


Model drug effect as a function of effect-site concentrations:
1. In the PD Model object, use the option buttons in the Main Mappings panel to map the data types to the following contexts:
   • Map Subject to the Sort context.
   • Leave Time mapped to None.
   • Leave Conc mapped to None.
   • Map Ce to the X variable context.
   • Map Effect to the Y variable context.
Use the Model Selection tab to specify which PD model Phoenix uses in the analysis. The Model Selection tab is located underneath the Setup tab.
2. In the Model Selection tab, select the model Number 103 check box. The default model parameter options are used. Phoenix generates initial parameter values and parameter bounds. To view the parameter option settings select the Parameter Options tab, which is located underneath the Setup tab.
3. Click the Execute button. The results are displayed on the Results tab.

Phoenix analyzes each subject separately and includes all time points per subject.

Results

Worksheet
The Final Parameters output provides estimates for E0 and IC50 for each subject. These are used later to predict steady-state effect values.

Plot
The Observed Y and Predicted Y vs X plot illustrates the fit of PD model 103 to the effect data when Ce from SemiCompartmental modeling is used as the measure of exposure. The Observed Y and Predicted Y vs X plot for the first subject is shown below.

The other plots address model fit.


Nonparametric superposition

This section uses the NonParametric Superposition object to predict plasma concentrations and effect-site concentrations at steady-state based on single-dose data. This feature allows for predictions on data that are otherwise difficult to model. This example uses the output from the semicompartmental modeling example, detailed under Semicompartmental modeling on page 95.

Estimate steady-state plasma concentrations using nonparametric superposition:
1. Select the workflow in the Object Browser and then select Insert > NCA and Toolbox > NonParametric Superposition. The NonParametric Superposition object is added to the workflow in the Object Browser.
2. Map the SemiCompartmental Results worksheet as the input source for the NonParametric Superposition object:
   • In the NonParametric Main Mappings panel click the Select Source button to open the Select Object dialog.
   • Select the SemiCompartmental Results worksheet and click Select.
   OR
   • Select the workflow. The workflow Diagram tab is displayed in the right viewing panel. Each operational object in a workflow is represented in the Diagram tab.
   • Click the chevron buttons to expand the SemiCompartmental and NonParametric symbols. Each object symbol contains a complete list of all input and output sources.
   • Click the (+) symbol beside the SemiCompartmental Results.
   • Click the (+) symbol beside the NonParametric Inputs.
   • Drag the SemiCompartmental Results worksheet to the NonParametric Main input.
   The Results worksheet is mapped to the NonParametric object. A line is displayed that represents the mapping between the SemiCompartmental and NonParametric objects.


3. Use the option buttons in the NonParametric object's Main Mappings panel to map the data types to the following contexts:
   • Map Subject to the Sort context.
   • Map Time to the Time context.
   • Map Conc to the Concentration context.
   • Leave Ce mapped to None.
   • Leave Effect mapped to None.
Use the Options tab to specify settings for the NonParametric model options. The Options tab is located underneath the Setup tab.
4. In the Options tab, type 50 in the Loading dose field.
5. In the Maintenance dose field type 50.
6. In the Tau (dosing interval) field type 4.
7. Click the Execute button. The results are displayed on the Results tab.
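Conceptually, nonparametric superposition builds the steady-state prediction by summing time-shifted copies of the single-dose profile, using the terminal slope (Lambda Z) to extrapolate beyond the last observation. A simplified statement of the idea, not the exact Phoenix algorithm, for dosing every $\tau$ time units is:

$$C_{ss}(t) \;\approx\; \sum_{k=0}^{\infty} C_{\text{single}}(t + k\tau), \qquad 0 \le t < \tau,$$

where $C_{\text{single}}$ is extrapolated past the last sampling time as $C_{\text{last}}\, e^{-\lambda_z (t - t_{\text{last}})}$, so the tail contributions form a geometric series with ratio $e^{-\lambda_z \tau}$. In this example $\tau = 4$, and the loading and maintenance doses are both set to 50.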

Results

Worksheet
The NonParametric worksheet results provide predicted steady-state plasma concentrations and Lambda Z and half-life estimates.

Graph
The graph output shows predicted steady-state concentrations over time for each subject. The first subject's graph is shown below.

Estimate steady-state effect-site concentrations using nonparametric superposition:
1. Use the option buttons in the NonParametric object's Main Mappings panel to re-map the data types to the following contexts:
   • Leave Subject mapped to the Sort context.
   • Leave Time mapped to the Time context.
   • Map Conc to None.
   • Map Ce to the Concentration context.
   • Leave Effect mapped to None.
2. Select the NonParametric object's Terminal Phase panel.
3. Select the Use internal Worksheet check box.
4. In the Start column for the first subject, type 4. In the End column, type 8. Repeat for the other two subjects.
5. Click the Execute button. The results are displayed on the Results tab.

Output for effect-site concentrations

The new NonParametric worksheet results provide predicted effect-site concentrations at steady-state and Lambda Z and half-life estimates.

The plot output shows predicted effect site concentrations at steady-state over time for each subject. The first subject’s graph is shown below.

Now it is possible to compute the steady-state effect from the predicted steady-state concentrations at the effect site.

Steady-state effect computation

Skip this section and proceed to Crossover design on page 112 if Microsoft Excel XP to 2007 is not installed on the same machine as Phoenix.

Compute steady-state effects:
1. Select the PD Model object in the Object Browser.
   • The sample graph for PD model 103 is displayed in the Model Selection tab. Note that the effect formula for model 103 is E = E0*(1-(C/(C+IC50))).
2. Select the NonParametric object in the Object Browser. Select the NonParametric object's Results tab.
3. Right-click the Concentrations (effect site concentrations) worksheet and select Copy to Data Folder. The Concentrations worksheet is added to the project's Data folder and renamed Concentrations from NonParametric.
4. Select Concentrations from NonParametric in the Data folder. The worksheet is displayed in the Grid tab in the right viewing panel. The Columns tab is located underneath the Grid tab. The Columns tab is used to add columns to a worksheet or edit existing columns.
5. Click the Add button underneath the Columns box. The New Column Properties dialog is displayed.
   • Use the New Column Properties dialog to define the data type and the name of a new column.
6. The Numeric option button is selected by default. Do not change this setting.
7. In the Column Name field type Effect and click OK.
8. The new column is displayed in the Columns box and in the Grid tab.
9. Use the Down Arrow button beside the Columns box to move the Effect column header to the bottom of the Columns list.
10. Right-click Concentrations from NonParametric and select Edit in Excel. Phoenix displays a message warning users that changes made in Excel are not recorded in Phoenix.

11. Click OK. The worksheet is opened in Excel.
12. In the Concentrations from NonParametric worksheet enter the PD model 103 effect formula in the Effect column for each subject at time zero. Use the E0 and IC50 values from the PD Model object's Final Parameters worksheet.
13. Select the cell in the Effect column at time zero for the first subject, JDW.
14. Type the effect formula shown below in the Effect column cell at time 0 (zero) for subject JDW.
   = 102.93*(1-(C3/(C3+0.09)))   (for subject JDW)
15. Repeat for the second and third subjects, LEJ and SCC.
   = 100.17*(1-(C103/(C103+0.09)))   (for subject LEJ)
   = 100.45*(1-(C203/(C203+0.08)))   (for subject SCC)
16. After the Effect value formula is set up at time zero for each subject, copy the formula to the other time points for each subject.

Excel XP and 2003 users:

   • In Excel, select File > Save. Click Save in the Save As dialog. Because of the way Phoenix handles its interactions with Excel, users cannot use the Save As option in Excel to save the worksheet with a different name or to a different location. The Save option must be used.
   • Close Excel and click Yes to save the worksheet. The Apply Changes message is displayed.
   • Click Yes to apply the changes. An entry is written in the worksheet's History tab noting that it was edited in Excel. The Save Excel Formulas message is displayed.
   • Click Yes to save formulas. The worksheet is no longer editable in Phoenix, but it can be edited in Excel.
      – The worksheet can still be used with operational objects.
   The changes are applied to the worksheet in Phoenix.

Excel 2007 and 2010 users:
   • Click the Office button and select Save.
   • Close Excel. Be sure to save the worksheet before closing Excel, or all changes are lost. The Apply Changes message is displayed.
   • Click Yes to apply the changes. An entry is written in the worksheet's History tab noting that it was edited in Excel. The Save Excel Formulas message is displayed.
   • Click Yes to save formulas. The worksheet is no longer editable in Phoenix, but it can be edited in Excel.
      – The worksheet can still be used with operational objects.
   The changes are applied to the worksheet in Phoenix.

The Concentrations from NonParametric worksheet now has Effect values derived from the equations used in the Excel edit. Once the steady-state effects and concentrations are generated, it is possible to use the modified Concentrations from NonParametric worksheet to plot Time vs. Effect for each subject by mapping the worksheet to an XY Plot object (a scripted alternative to the Excel edit is sketched after these steps):
   • Insert a new XY Plot object. Map Concentrations from NonParametric to the plot object.
   • Map Subject to Group, Time to X, and Effect to Y. Leave Ce mapped to None.
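For users who prefer not to round-trip through Excel, the same Effect column can be computed outside Phoenix and re-imported. The sketch below is an illustration only: it assumes the effect-site concentrations have been exported to a hypothetical CSV file named ce_steady_state.csv with Subject, Time, and Ce columns, and it uses the per-subject E0 and IC50 values quoted in the Excel formulas above.

```python
import pandas as pd

# Per-subject parameter estimates, taken from the Excel formulas above.
params = {
    "JDW": {"E0": 102.93, "IC50": 0.09},
    "LEJ": {"E0": 100.17, "IC50": 0.09},
    "SCC": {"E0": 100.45, "IC50": 0.08},
}

# Hypothetical export of the Concentrations worksheet (Subject, Time, Ce).
df = pd.read_csv("ce_steady_state.csv")

def model_103_effect(row):
    """PD model 103: E = E0 * (1 - Ce / (Ce + IC50))."""
    p = params[row["Subject"]]
    return p["E0"] * (1 - row["Ce"] / (row["Ce"] + p["IC50"]))

df["Effect"] = df.apply(model_103_effect, axis=1)
df.to_csv("ce_with_effect.csv", index=False)  # re-import this file into Phoenix
```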

Crossover design

Crossover design supports two data formats: data for both treatments stacked in one column, or each treatment placed in a separate column. An example of each follows.
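To clarify the two layouts, the sketch below converts a stacked table into the separate-columns form with pandas. The column names follow the mappings used later in this section (TREATMENT, SUBJECT, PARAMETER, ESTIMATE, SEQUENCE); the small example frame is made up for illustration and is not the contents of stacked.CSV.

```python
import pandas as pd

# A made-up stacked layout: one row per subject/treatment observation.
stacked = pd.DataFrame({
    "SEQUENCE":  ["GH", "GH", "HG", "HG"],
    "SUBJECT":   [1, 1, 2, 2],
    "TREATMENT": ["G", "H", "G", "H"],
    "PARAMETER": ["AUC", "AUC", "AUC", "AUC"],
    "ESTIMATE":  [100.0, 95.0, 110.0, 120.0],
})

# Separate layout: one column per treatment (trt_G, trt_H), one row per subject.
separate = (
    stacked.pivot_table(index=["SEQUENCE", "SUBJECT", "PARAMETER"],
                        columns="TREATMENT", values="ESTIMATE")
           .add_prefix("trt_")
           .reset_index()
)
print(separate)
```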

Data stacked in one column

For this type of data, all the data for one treatment must be displayed in the first rows, followed by all the data for the other treatment.
1. Select File > Import or click the Import button. The Open File(s) dialog is displayed.
2. Navigate to the Phoenix examples directory, which by default is located at C:\Program Files\Pharsight\Phoenix\application\Examples.
3. Select stacked.CSV and click Open. The Worksheet Import Options dialog is displayed. The dialog is used to assign options for how the data are imported and presented.
4. Click Finish. The data set is added to the project's Data folder. A data set in CSV (Comma Separated Value) format is added to the Data folder as a worksheet.
5. View the data set by selecting it in the Data folder. Select the worksheet to display it in the Grid tab.

Insert a Crossover object:
1. Select the workflow in the Object Browser and then select Insert > NCA and Toolbox > Crossover. The Crossover object is added to the workflow in the Object Browser.
2. Map the data set stacked as the input source for the Crossover object:
   • Use the pointer to drag the stacked worksheet from the Data folder to the Crossover object's Main Mappings panel.
   OR
   • In the Crossover Main Mappings panel click the Select source button to open the Select Object dialog.
   • Select the stacked worksheet and click Select.
   The stacked data set is mapped to the Crossover object.
3. Use the option buttons in the Main Mappings panel to map the data types to the following contexts:
   • Map TREATMENT to the Treatment context.
   • Map SUBJECT to the Subject context.
   • Map PARAMETER to the Sort context.
   • Map ESTIMATE to the Response context.
   • Leave PERIOD mapped to None.
   • Map SEQUENCE to the Sequence context.
4. Click the Execute button. The results are displayed on the Results tab.


The Crossover object computes confidence intervals for treatment medians and median difference between treatments, the results of which are displayed in the Confidence Intervals worksheet. The Crossover object also estimates the relevance of direct, residual, and period effects as well as treatment and residual effects. These results are displayed in the Effects worksheet.

Data in separate columns

1. Select File > Import or click the Import button. The Open File(s) dialog is displayed.
2. Navigate to the Phoenix examples directory, which by default is located at C:\Program Files\Pharsight\Phoenix\application\Examples.
3. Select separate.CSV and click Open.
4. Click Finish. The data set is added to the project's Data folder.

Insert a Crossover object:
1. Select the workflow in the Object Browser and then select Insert > NCA and Toolbox > Crossover. The Crossover object is added to the workflow in the Object Browser.

Note: When multiple objects of the same type are added to a workflow they are numbered sequentially. For example, the second Crossover object added to this workflow is called Crossover 1.


2. Map the data set separate as the input source for the Crossover 1 object:
   • Use the pointer to drag the separate worksheet from the Data folder to the Main Mappings panel.
   OR
   • In the Crossover 1 Main Mappings panel click the Select source button to open the Select Object dialog.
   • Select the separate worksheet and click Select.
   The separate data set is mapped to the Crossover 1 object.
The treatment data layout must be specified before the data can be mapped to the contexts for the Crossover 1 object. Use the Options tab to specify settings for the Crossover model options. The Options tab is located underneath the Setup tab.
3. Select Separate in the Treatment Data Layout menu.
4. Use the option buttons in the Main Mappings panel to map the data types to the following contexts:
   • Map Sequence to the Sequence context.
   • Map Subject to the Subject context.
   • Map trt_G to the Test Treatment context.
   • Map trt_H to the Reference Treatment context.
5. Click the Execute button. The results are displayed on the Results tab.

The Confidence Intervals worksheet contains treatment medians, median differences between treatments, and confidence intervals for those estimates.

The Effects worksheet provides statistics for direct, residual, and period effects, as well as the effect of treatment and residual simultaneously.


Deconvolution

Perhaps the most common application of deconvolution is in the evaluation of drug release and drug absorption from orally administered drug formulations. In this case, the bioavailability is evaluated if the reference input is a vascular drug input. Similarly, gastro-intestinal release is evaluated if the reference is an oral solution (oral bolus input). Both are included here.

This example uses the data set M3tablet.dat, which is located in the Phoenix examples directory, by default C:\Program Files\Pharsight\Phoenix\application\Examples. The analysis objectives are to estimate the following for a tablet formulation:
1. Absolute bioavailability and the rate and cumulative extent of absorption over time.
2. In vivo dissolution and the rate and cumulative extent of release over time.

Absolute bioavailability: To estimate the absolute bioavailability, the mean unit impulse response parameters A and alpha have already been estimated from concentration-time data following instantaneous input (IV bolus) for three subjects, using PK model 1. The data in M3tablet.dat includes those parameter estimates and plasma drug concentrations following oral administration of a tablet formulation. This example shows how to estimate the rate at which the drug reaches the systemic circulation, using deconvolution.

Dissolution: To estimate the in vivo dissolution from the tablet formulation, the mean unit impulse response parameters A and alpha have already been estimated from concentration-time data following instantaneous input into the gastrointestinal tract by administration of a solution, using PK model 3. The steps below show how to use deconvolution to estimate the rate at which the drug dissolves.
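As background for both analyses, deconvolution treats the observed concentration as the convolution of an unknown input rate f(t) with a unit impulse response; with the exponential impulse response supplied on the Exp Terms panel, a schematic statement of the relationship is:

$$c(t) = \int_0^{t} c_\delta(t-u)\, f(u)\, du, \qquad c_\delta(t) = \sum_j A_j\, e^{-\alpha_j t}.$$

Deconvolution estimates f(t), the absorption or in vivo release rate, together with its cumulative amount over time, from the observed c(t) and the supplied A and alpha values.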


Absolute bioavailability

Evaluate absolute bioavailability:
1. Select File > Import or click the Import button. The Open File(s) dialog is displayed.
2. Navigate to the Phoenix examples directory, which by default is located at C:\Program Files\Pharsight\Phoenix\application\Examples.
3. Select M3tablet.dat and click Open. The Worksheet Import Options dialog is displayed. The dialog is used to assign options for how the data are imported and presented.
4. Click Finish. The data set is added to the project's Data folder.
5. View the data set by selecting it in the Data folder.

Start the evaluation:
1. Select the workflow in the Object Browser and then select Insert > NCA and Toolbox > Deconvolution. The Deconvolution object is added to the workflow in the Object Browser.
2. Map the data set M3tablet as the input source for the Deconvolution object:

   • Use the pointer to drag the M3tablet worksheet from the Data folder to the Deconvolution object's Main Mappings panel.
   OR
   • In the Deconvolution Main Mappings panel click the Select source button to open the Select Object dialog.
   • Select M3tablet and click Select.
   The M3tablet data set is mapped to the Deconvolution object.
3. Use the option buttons in the Main Mappings panel to map the data types to the following contexts:
   • Map subject to the Sort context.
   • Map time to the Time context.
   • Map conc to the Concentration context.
   • Leave all the other data types mapped to None.
4. Select Exp Terms in the Setup list.
5. Select the Use internal Worksheet check box.
6. In the Value column type 100 for each A1 cell in the Parameter column.
7. In the Value column type 0.98 for each Alpha1 cell in the Parameter column.

Note: Type 100 and 0.98 in the first two cells underneath Value. Highlight the cells and drag the selection down to fill the Value column. There are no dose amounts for this example. The calculated fractional input approaches a value of 1 rather than being adjusted for dose amount.

8. Click the Execute button. The results are displayed on the Results tab.

Phoenix generates worksheets and plots for the output. Partial results for subject 1 are displayed below.

Worksheet output: Values worksheet

Plot output: Cumulative Rates plot

Dissolution

For the rest of this example an oral solution (a.k.a. oral bolus) is used to estimate the unit impulse response. In this case, the deconvolution result should be interpreted as an in vivo dissolution profile, not as an absorption profile. The oral impulse response function should have the property of the initial value being equal to 0, which implies that the sum of the A's must be zero. The alphas should all still be positive, but at least one of the A's will be negative.

Evaluate dissolution:
1. Select the workflow in the Object Browser and then select Insert > NCA and Toolbox > Deconvolution. The Deconvolution object is added to the workflow in the Object Browser.

Note: When multiple objects of the same type are added to a project they are numbered sequentially. For example, the second Deconvolution object added to this project is called Deconvolution 1.

2. Map the data set M3tablet as the input source for the Deconvolution 1 object:
   • Use the pointer to drag the M3tablet worksheet from the Data folder to the Deconvolution 1 object's Main Mappings panel.
   OR
   • In the Deconvolution 1 Main Mappings panel click the Select source button to open the Select Object dialog.
   • Select M3tablet and click Select.
   The M3tablet data set is mapped to the Deconvolution 1 object.
3. Use the option buttons in the Main Mappings panel to map the data types to the following contexts:
   • Map subject to the Sort context.
   • Map time to the Time context.
   • Map conc to the Concentration context.
   • Leave all the other data types mapped to None.
Use the Options tab to specify settings for the Deconvolution model options. The Options tab is located underneath the Setup tab.
4. Select 2 in the Exponential Terms menu.
5. Select Exp Terms in the Setup list.
6. Select the Use internal Worksheet check box.
7. In the Value column type -110 for each A1 cell in the Parameter column.
8. In the Value column type 3.8 for each Alpha1 cell in the Parameter column.
9. In the Value column type 110 for each A2 cell in the Parameter column.
10. In the Value column type 0.10 for each Alpha2 cell in the Parameter column.

Note: After the values have been entered for subject 1, highlight all four cells for subject 1 and drag the selection down to copy the A and Alpha values to subjects 2 and 3.

11. Click the Execute button. The results are displayed on the Results tab.

Phoenix generates the new worksheet and graphs. Results for subject 1 are displayed below:

Worksheet output: Values worksheet

Plot output: Cumulative Rates plot

Note: It is not necessary to keep a project open after completing each chapter. This project is not required when working in the next chapter. To close a project right-click the project and select Close Project.


Chapter 7

Linear Mixed Effects Modeling Analyzing treatment effects and data variance

Two examples of linear mixed effects modeling are provided:
   » Comparing treatment groups on page 123 analyzes a randomized study.
   » An illustration of variance structures on page 127 is an example of assay validation.

Comparing treatment groups

This example uses the Linear Mixed Effects (LinMix) capability in Phoenix to test for differences among treatment groups in a parallel study. Twenty-eight subjects were randomly assigned to four treatment groups. One observation of drug effect was measured from each subject for a total of 7 observations per treatment. If statistically significant differences are observed between treatments, then the estimates, with confidence intervals, are desired.

The model

The model for these data is as follows:

   y_ij = μ + τ_i + ε_ij

where:
   i = treatment index, 1, 2, 3, 4
   j = subject index within treatment, 1, 2, ..., 7
   y_ij = observation value for treatment i, subject j
   μ = overall mean
   τ_i = effect of treatment i
   ε_ij = random error term for observation y_ij

Create a new project:
1. Select File > New Project to create a new project. A new project is created in the Object Browser.
2. Name the new project LinMix.

Import the linear mixed effects model data set:
1. Select File > Import or click the Import button. The Open File(s) dialog is displayed.

2. Navigate to the Phoenix examples directory, which by default is located at C:\Program Files\Pharsight\Phoenix\application\Examples.
3. Select OneWayData.CSV and click Open. The Worksheet Import Options dialog is displayed. The dialog is used to assign options for how the data are imported and presented.
4. Click Finish. The data set is added to the project's Data folder.
5. View the data set by selecting it in the Data folder. The worksheet is displayed in the Grid tab, which is located in the right viewing panel.

Insert the Linear Model:
1. Select the workflow in the Object Browser and then select Insert > NCA and Toolbox > Linear Mixed Effects. The Linear Mixed Effects object is added to the workflow in the Object Browser.
2. Map the data set OneWayData as the input source for the Linear Mixed Effects Model object:
   • Use the pointer to drag the OneWayData worksheet from the Data folder to the Linear Mixed Effects Model object's Main Mappings panel.
   OR
   • In the Linear Mixed Effects Model Main Mappings panel click the Select source button to open the Select Object dialog.
   • Select OneWayData and click Select.
   The OneWayData data set is mapped to the Linear Mixed Effects Model object.
3. Use the option buttons in the Main Mappings panel to map the data types to the following contexts:
   • Map Treatment to the Classification context.
   • Map Response to the Dependent context.
Use the Fixed Effects tab, which is located underneath the Setup tab, to set up the model specification.
4. Drag Treatment from the Classification box to the Model Specification box, or type Treatment in the Model Specification box.
5. None is selected by default in the Dependent Variables Transformation menu. Do not change this setting.
6. Select the Least Squares Means tab.
7. Drag Treatment from the Fixed Effects Model Classifiable Terms box to the Least Squares Means box.
8. Click the Execute button. The results are displayed in the Results tab.

Results

Diagnostics worksheet

Final fixed parameters

This model is over-parameterized. There are five parameters (μ, τ1, τ2, τ3, τ4), but there are only 4 means. The last parameter is removed from the model and is not estimated, resulting in output of "Not estimable" for the Placebo group. When that happens, each of the other τ's represents the difference between that treatment mean and the mean of the omitted (last) treatment. Note that the difference between the LSM for the high dose group and the LSM for placebo equals the τ estimate for the high dose group. The parameter μ is then the mean of the omitted treatment group, the placebo group in this case.

Least squares means

On the Least Squares Means (LSM) tab, Estimate, for balanced data, is the average of the observations within each treatment group. Also listed are the standard error of each mean, the p-value for the hypothesis that the true mean equals zero, and a confidence interval.

Partial tests

In this case, the partial tests have the same value as the sequential test. This is always true for balanced data sets. For unbalanced data, these results can differ. See the Phoenix WinNonlin User's Guide for details.


Sequential tests

The p-value is shown as 0.1358, indicating that differences among treatment groups were not statistically significant.
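As an optional cross-check outside Phoenix, the same one-way fixed-effects analysis of OneWayData.CSV can be sketched with pandas and statsmodels. This is an illustration only: the column names follow the mappings above, and statsmodels drops the first level of Treatment by default while the Phoenix output above drops the last, so individual coefficients are labeled differently; the overall test for a treatment effect does not depend on that coding.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# OneWayData.CSV with Treatment (classification) and Response (dependent).
df = pd.read_csv("OneWayData.CSV")

fit = smf.ols("Response ~ C(Treatment)", data=df).fit()
print(anova_lm(fit))   # overall test for a treatment effect
print(fit.params)      # intercept plus treatment contrasts vs. the reference level
```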

An illustration of variance structures

This analysis is concerned with the precision components of assay validation to estimate contributions due to assay variation, assay-to-assay variation, and analyst-to-analyst variation. A single QC sample was prepared containing theoretically 65 ng/mL of analyte. Five analysts were recruited for this study. Each analyst ran 5 aliquots of the sample on 4 assay runs. The data are available in the data set AssayVal1.CSV in the Phoenix examples directory.

The model

Import the linear mixed effects model data set:
1. Select File > Import or click the Import button. The Open File(s) dialog is displayed.
2. Navigate to the Phoenix examples directory, which by default is located at C:\Program Files\Pharsight\Phoenix\application\Examples.
3. Select AssayVal1.CSV and click Open. The Worksheet Import Options dialog is displayed. The dialog is used to assign options for how the data are imported and presented.
4. Click Finish. The data set is added to the project's Data folder. Units must be added to the Determination column before the data set can be used in a Linear Mixed Effects model.
5. Select AssayVal1 in the Data folder. The worksheet is displayed in the Grid tab in the right viewing panel.
   • Use the Columns tab to modify columns in a worksheet. The Columns tab is located underneath the right viewing panel.
6. Select the Determination column header in the Columns box.
7. In the Unit field type ng/mL.

Insert the Linear Model:
1. Select the workflow in the Object Browser and then select Insert > NCA and Toolbox > Linear Mixed Effects. The Linear Mixed Effects Model object is added to the workflow in the Object Browser.

Note: When multiple objects of the same type are added to a project they are numbered sequentially. For example, the second Linear Mixed Effects Model object added to this project is called Linear Mixed Effects Model 1.

2. Map the data set AssayVal1 as the input source for the Linear Mixed Effects Model 1 object:
   • Use the pointer to drag the AssayVal1 worksheet from the Data folder to the Linear Mixed Effects Model 1 object's Main Mappings panel.
   OR
   • In the Linear Mixed Effects Model 1 Main Mappings panel click the Select source button to open the Select Object dialog.
   • Select AssayVal1 and click Select.
   The AssayVal1 data set is mapped to the Linear Mixed Effects Model 1 object.
3. Use the option buttons in the Main Mappings panel to map the data types to the following contexts:
   • Map Analyst to the Classification context.
   • Map Assay to the Classification context.
   • Map Determination to the Dependent context.
4. Select the Variance Structure tab.
5. Drag Analyst from the Classification Variables box to the Random Effects field in the Random 1 tab, or type Analyst in the Random Effects field.
6. Click the Add Random button to add another Random effect.
7. Drag Assay from the Classification Variables box to the Random Effects field in the Random 2 tab, or type Assay in the Random Effects field.
8. Select the Estimates tab.
9. Select the Intercept Coefficient check box and type 1 in the Intercept Coefficient field.

10. Click the Execute button. The results are displayed in the Results tab.

Results
Statistical accuracy values are located in the Estimates worksheet.

The mean response is the intercept, which is estimated at 70.6 ng/mL with a 95% confidence interval of 65.97 to 75.28 ng/mL. Since the theoretical analyte concentration of 65 ng/mL falls outside this confidence interval, one can conclude that the bias is statistically significant; the method has a bias of approximately 5 ng/mL. Select the Final Variance Parameters worksheet to view precision estimates and variance components.

Dependent        Units    Parameter         Estimate
Determination    ng/mL    Var(Analyst)_11   7.807236
Determination    ng/mL    Var(Assay)_21     1.86966
Determination    ng/mL    Var(Residual)     9.9133

Based on these results, most of the variation is coming from analyst-to-analyst variation and from within-assay variation. Assay-to-assay noise is quite small. The units on the variances are (ng/mL)².
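To see how these estimates translate into relative contributions, the variance components reported in the Final Variance Parameters worksheet can be expressed as percentages of the total variance. The short sketch below is not Phoenix output; it is a minimal Python calculation using only the estimates from the table above.

```python
# Variance components from the Final Variance Parameters worksheet (units: (ng/mL)^2)
components = {
    "Analyst": 7.807236,   # analyst-to-analyst variation
    "Assay": 1.86966,      # assay-to-assay variation
    "Residual": 9.9133,    # within-assay (replicate) variation
}

total = sum(components.values())
for name, value in components.items():
    print(f"{name:8s} {value:9.4f}  ({100 * value / total:5.1f}% of total variance)")

# Intermediate precision (total SD) in ng/mL is the square root of the summed components.
print(f"Total SD: {total ** 0.5:.2f} ng/mL")
```

Running this shows that the analyst and residual components each account for roughly 40-50% of the total variance, while the assay component accounts for about 10%, which is the pattern described above.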


Re-execute the model with new data
Now fit the same model to the data in the data set AV3.CSV.

Re-run the model with a different data set:
1. Repeat steps 1 through 7 under Import the linear mixed effects model data set: on page 127 to import the data set AV3.CSV and add units to the Determination column.
2. In the Workflow, right-click Linear Mixed Effects 1 and select Copy.
3. Right-click the Workflow object and select Paste. A new Linear Mixed Effects object named Copy of Linear Mixed Effects 1 is added to the Workflow. The LinMix object copy contains the same settings as the original object.
4. Map the AV3 data set to Copy of Linear Mixed Effects 1. Do not change the data mappings in the Main Mappings panel.
5. Click the Execute button. The results are displayed in the Results tab.

The Final Variance Parameters worksheet contains the following variance components:

Dependent        Units    Parameter         Estimate
Determination    ng/mL    Var(Analyst)_11   -3.880848
Determination    ng/mL    Var(Assay)_21     26.42101
Determination    ng/mL    Var(Residual)     9.91625

This table indicates that the estimated analyst-to-analyst variance is negative. Since variances cannot be negative, it is customary to replace the value with 0. A negative variance component indicates that the corresponding term should be removed from the model: the contribution from that term is minimal compared to the contributions of the other terms and cannot be distinguished from the residual term. From the variance components it is clear that the largest contribution to noise in the method is from run-to-run variation. Within-run variation also contributes to the noise. There is very little variation among analysts, indicating that the method is robust. The Linear Mixed Effects object warns the user about negative final variances.
• In the Results tab, select the Warnings and Errors text file.


The text file states: “Warning 11094: Negative final variance component. Consider omitting this VC structure.” Problems associated with a linear mixed effects model are written to this file during execution.
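As a rough illustration of the customary handling described above (truncating the negative component at zero, which is equivalent to dropping the Analyst term), the remaining contributions can be summarized with a few lines of Python. This is not a Phoenix feature; it simply reuses the estimates from the AV3 Final Variance Parameters worksheet.

```python
# Estimates from the Final Variance Parameters worksheet for AV3 (units: (ng/mL)^2)
raw = {"Analyst": -3.880848, "Assay": 26.42101, "Residual": 9.91625}

# Truncate negative variance components at zero, as is customary.
truncated = {name: max(0.0, value) for name, value in raw.items()}

total = sum(truncated.values())
for name, value in truncated.items():
    print(f"{name:8s} {value:9.5f}  ({100 * value / total:5.1f}% of total variance)")
```

With the Analyst component set to zero, run-to-run (Assay) variation accounts for roughly three quarters of the total variance and within-run (Residual) variation for the rest, consistent with the interpretation above.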

Note: It is not necessary to keep a project open after completing each chapter. This project is not required when working in the next chapter. To close a project right-click the project and select Close Project.


Chapter 8

The IVIVC Workflow
Evaluation of in vitro in vivo correlations for formulation development

This chapter steps through the full process of using the Phoenix IVIVC workflow to generate an in vitro in vivo correlation model and apply it to predict PK profiles from dissolution data for a new formulation. The example is divided into the following tasks:
• Setting up the data on page 133
• Selecting and smoothing the dissolution data on page 135
• Fitting the unit impulse response and estimating absorption on page 138
• Developing and validating the IVIVC model on page 140
• Predicting PK on page 141

Note: Phoenix IVIVC functionality requires purchase and installation of a special Phoenix IVIVC license in addition to the core Phoenix license.

Setting up the data
The complete IVIVC example requires three data sets, which are included in the Phoenix examples directory, which by default is located at C:\Program Files\Pharsight\Phoenix\application\Examples.
1. IVIVC_Diss.csv: in vitro fraction dissolved over time for 5 formulations.
2. IVIVC_Vivo_Subj.csv: time-concentration profiles for individual subjects given the same formulations as in IVIVC_Diss.csv.
3. IVIVC_Test.csv: in vitro fraction dissolved over time for a new, test formulation, for use in predicting PK data.

Import the data sets for the IVIVC project:
Load the following three files from the Phoenix examples directory:
» IVIVC_Diss.csv
» IVIVC_Test.csv
» IVIVC_Vivo_Subj.csv

Note: Select multiple files at once in the Open File dialog by pressing the CTRL key and using the mouse pointer to select the files.

1. Select File > Import or click the Import button. The Open File dialog is displayed.
2. Navigate to the Phoenix examples directory, which by default is located at C:\Program Files\Pharsight\Phoenix\application\Examples.
3. Select IVIVC_Diss.csv and click Open. The Worksheet Import Options dialog is displayed. The dialog is used to assign options for how the data are imported and presented.
4. Select the Has units row check box in the Worksheet Import Options dialog.
5. Click Finish. The data set is added to the project's Data folder.
6. Repeat steps 1 and 2.
7. Select IVIVC_Test.csv and click Open. The Worksheet Import Options dialog is displayed.
8. Select the Has units row check box in the Worksheet Import Options dialog and click Finish.
9. Repeat steps 1 and 2.
10. Select IVIVC_Vivo_Subj.csv and click Open. The Worksheet Import Options dialog is displayed.
11. Select the Has units row check box in the Worksheet Import Options dialog and click Finish.
Data sets in CSV (Comma Separated Values) format are added to the Data folder as worksheet objects.
12. View the data sets by selecting them in the Data folder. Each worksheet is displayed in the Grid tab.


Selecting and smoothing the dissolution data
The IVIVC object's InVitro Data panel, InVitro Formulation panel, and InVitro tab (located underneath the Setup tab) include settings to identify the dissolution data to be used in fitting an in vitro in vivo correlation, and settings to smooth the dissolution data.

Insert the IVIVC workflow and identify and smooth the dissolution data:
1. Select the project in the Object Browser and then select Insert > IVIVC > IVIVC. The IVIVC object is added to the project in the Object Browser.
2. Map the data set IVIVC_Diss as the input source for the IVIVC object's InVitro Data panel:
   • Use the mouse pointer to drag the IVIVC_Diss data set from the Data folder to the IVIVC object's InVitro Data Mappings panel.
   or
   • In the IVIVC InVitro Data Mappings panel click the Select source button to open the Select Object dialog.
   • Select IVIVC_Diss and click Select.
   The IVIVC_Diss data set is mapped to the IVIVC object's InVitro Data panel.
3. Use the option buttons in the InVitro Data Mappings panel to map the data types to the following contexts:
   • Map Time to the InVitro Time context.
   • Map Formulation to the InVitro Formulation context.
   • Map Fdiss to the InVitro Dissolution context.

Note: Mapping the Formulation Partitioning data set enables users to perform dissolution data partitioning, which identifies the formulations that are used for IVIVC fitting and testing.

4. Use the option buttons in the InVitro Formulation Mappings panel to map the data types to the following contexts:
   • Map CR01, CR02, and CR04 to the Internal context. These formulations are used to fit the IVIVC.
   • Map CR03 to the External context. It is used to validate the IVIVC in the Correlation tab.
   • Leave Targ mapped to None. The target formulation provides the comparator for predictions made in the Prediction tab.

Fit the dissolution data:
1. In the InVitro tab, which is located underneath the Setup tab, select the Weibull option button under Dissolution Model. (A sketch of a typical Weibull dissolution function follows this list.)
2. Select the IVIVC object's InVitro Estimates panel.
3. Select Fixed in the Fixed or Estimated column to set the initial value for the FINF (fraction absorbed extrapolated to time infinity) parameter for formulation CR01.
4. Enter 1 in the Initial column for the FINF parameter for formulation CR01.
5. Select all the cells under the Initial and Fixed columns for formulation CR01.
6. Place the mouse pointer over the black square on the lower right side of the selection. The pointer changes shape, signifying that the drag and fill feature can be used.
7. Press the left mouse button and drag the selection down to fill the Initial and Fixed cells for each formulation.
8. In the InVitro tab click the Fit Dissolution Data button. The Fit Dissolution Data button fits the Weibull model and generates smoothed data for each formulation. When the model fit and data smoothing are complete Phoenix displays a confirmation message.
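Phoenix's exact Weibull parameterization is documented in the user's guide. As a rough, hypothetical illustration of the kind of function being fitted here, the sketch below fits one common Weibull dissolution form, with FINF fixed at 1 as in this example, to made-up dissolution points using SciPy. The time points, fraction-dissolved values, and parameter names (MDT, b) are invented for illustration only and are not taken from the example data sets.

```python
import numpy as np
from scipy.optimize import curve_fit

FINF = 1.0  # fraction dissolved at time infinity, fixed at 1 as in this example

def weibull_diss(t, mdt, b):
    # One common Weibull dissolution form (an assumption, not necessarily
    # Phoenix's exact parameterization): F(t) = FINF * (1 - exp(-(t/MDT)^b))
    return FINF * (1.0 - np.exp(-(t / mdt) ** b))

# Hypothetical dissolution profile for a single formulation (hours, fraction dissolved).
time = np.array([0.5, 1, 2, 4, 6, 8, 12], dtype=float)
fdiss = np.array([0.08, 0.18, 0.37, 0.63, 0.78, 0.87, 0.95])

(mdt_hat, b_hat), _ = curve_fit(weibull_diss, time, fdiss, p0=[4.0, 1.0])
print(f"MDT = {mdt_hat:.2f} h, b = {b_hat:.2f}")

# "Smoothed" dissolution values on a fine grid, analogous to what Fit Dissolution Data produces.
grid = np.linspace(0, 12, 25)
smoothed = weibull_diss(grid, mdt_hat, b_hat)
```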


Fitting the unit impulse response and estimating absorption
The IVIVC object's InVivo Data panel and InVivo tab support identification of in vivo PK data, support fitting of the unit impulse response (UIR) function, and provide an estimation of the fraction of drug absorbed over time based on the UIR and PK data.

Select the PK data and the PK dosing data:
1. Select the IVIVC object's InVivo Data panel.
2. Map the data set IVIVC_Vivo_Subj as the input source for the IVIVC object's InVivo Data panel:
   • Use the mouse pointer to drag the IVIVC_Vivo_Subj worksheet from the Data folder to the IVIVC object's InVivo Data Mappings panel.
   or
   • In the IVIVC InVivo Data Mappings panel click the Select source button to open the Select Object dialog.
   • Select the IVIVC_Vivo_Subj worksheet and click Select.
   The IVIVC_Vivo_Subj data set is mapped to the IVIVC object's InVivo Data panel.
3. Use the option buttons in the InVivo Data Mappings panel to map the data types to the following contexts:
   • Map Time to the Independent context.
   • Map Subj to the Sort context.
   • Map Form to the InVivo Formulation context.
   • Map Cp to the Values context.
4. Select the IVIVC object's InVivo Dosing panel.
5. Enter 1 in the Dose column for each formulation.
   • Enter 1 in the Dose column for formulation CR01 and use the drag and fill feature to enter the dosing data.
Look at the Status Panel, which is located underneath the Setup tab. Note that the top three squares in the panel are now green, indicating that those steps have been completed.

Fit the UIR, generate absorption data, and set the formulation information:
1. Select the InVivo tab, which is located underneath the Setup tab.
2. 3 is selected by default in the Maximum number of UIR exponentials menu. Do not change this setting.
3. Select IV in the Reference Formulation menu.
4. Click the Generate UIR button to fit the model and generate predicted data for each subject. When the model fit and data generation are complete Phoenix displays a confirmation message.
5. Click the Deconvolve button. Phoenix deconvolves the PK subject data with the newly fitted UIRs to estimate the fraction of the drug absorbed over time for each subject. When the deconvolution is complete Phoenix displays a confirmation message. (A conceptual sketch of the convolution relationship underlying this step follows.)
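Conceptually, deconvolution inverts the convolution relationship between the absorption input and the unit impulse response. The sketch below illustrates only the forward direction, convolving a hypothetical absorption rate with a hypothetical mono-exponential UIR to produce a concentration profile; it is not Phoenix's deconvolution algorithm, and every value in it is invented for illustration.

```python
import numpy as np

dt = 0.25                                   # time step (h)
t = np.arange(0, 24, dt)

uir = np.exp(-0.2 * t)                      # hypothetical unit impulse response (per unit dose)
absorption_rate = 3.0 * np.exp(-1.0 * t)    # hypothetical absorption rate (amount/h)

# Concentration is the convolution of the absorption rate with the UIR:
# C(t) = integral of absorption_rate(tau) * uir(t - tau) d tau
conc = np.convolve(absorption_rate, uir)[: len(t)] * dt

# Deconvolution (what Phoenix does after fitting the UIR) works in the opposite
# direction: given conc and uir, it recovers the absorption input over time.
```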

Select the Status tab, which is located underneath the Setup tab, to see the status of each step of the IVIVC workflow. If a step fails the Status tab displays information concerning why the step failed.

Developing and validating the IVIVC model
Now that smoothed dissolution data and estimated absorption data are available they can be used to fit and test a correlation model.

Generate and validate the IVIVC:
1. Select the Correlation tab, which is located underneath the Setup tab.
2. Select the Fabs=AbsScale*Diss(Tscale*Tvivo) option button. (A sketch of this time- and scale-adjusted correlation form follows the validation description below.)
3. Click the Build Correlation button. Phoenix fits the model to the dissolution and absorption data and generates parameters and predicted data. When the correlation is complete Phoenix displays a confirmation message.
   » The Results tab displays the Correlation Step worksheets, plots, and text output.
4. In the Correlation tab, select Linear_Trapezoidal_Linear_Interpolation in the Calculation Method menu.
5. Click the Validate Correlation button.


Phoenix performs a noncompartmental analysis on the predicted and observed PK data, averages the AUC and Cmax for each formulation, and displays the percentage of error and the ratio of the predicted to observed values as measures of prediction error. When the correlation validation is complete Phoenix displays a confirmation message.
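As a rough illustration of the correlation form selected in step 2 and of percent prediction error as a validation measure, the sketch below applies hypothetical AbsScale and Tscale values to a dissolution profile via interpolation and compares hypothetical observed and predicted exposure metrics. All numbers are invented; Phoenix estimates AbsScale and Tscale itself and reports its own prediction-error statistics.

```python
import numpy as np

# Hypothetical smoothed dissolution profile for one formulation.
t_vitro = np.array([0, 1, 2, 4, 8, 12, 24], dtype=float)
fdiss = np.array([0.0, 0.15, 0.30, 0.55, 0.80, 0.92, 1.0])

# Correlation of the form Fabs = AbsScale * Diss(Tscale * Tvivo),
# with made-up parameter values for illustration.
abs_scale, t_scale = 0.95, 1.2
t_vivo = np.array([0, 1, 2, 4, 8, 12, 24], dtype=float)
fabs_pred = abs_scale * np.interp(t_scale * t_vivo, t_vitro, fdiss)

# Percent prediction error for an exposure metric (hypothetical values):
auc_obs, auc_pred = 105.0, 98.0
pe_auc = 100.0 * (auc_obs - auc_pred) / auc_obs
print(f"Predicted Fabs at 4 h: {fabs_pred[3]:.2f}")
print(f"%PE for AUC: {pe_auc:.1f}%")
```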

Predicting PK
Once an acceptable IVIVC model is generated Phoenix can use it to predict PK data based on dissolution data for new formulations.

Predict PK profiles for the test formulation:
1. Select the IVIVC object's Prediction Data panel.
2. Map the data set IVIVC_Test as the input source for the IVIVC object's Prediction Data panel:
   • Use the mouse pointer to drag the IVIVC_Test worksheet from the Data folder to the IVIVC object's Prediction Data Mappings panel.
   or
   • In the IVIVC Prediction Data Mappings panel click the Select source button to open the Select Object dialog.
   • Select IVIVC_Test and click Select.
   The IVIVC_Test data set is mapped to the IVIVC object's Prediction Data panel.
3. Use the option buttons in the Prediction Data Mappings panel to map the data types to the following contexts:
   • Map Time to the Time context.
   • Map Formulation to the Formulation context.
   • Map Fdiss to the Dissolution context.
Now identify which formulations will be used for IVIVC fit and testing.
4. Select the IVIVC object's Prediction Dosing panel.

Note: If an internal worksheet relies on internal data sources, such as output from part of the IVIVC workflow, then the worksheet might not be displayed.

5. Click the Rebuild button to create the internal worksheet.
6. Enter 1 in the Dose column.

Set up the dissolution model:
Fit the dissolution data to a Weibull model with the fraction absorbed extrapolated to time infinity (Finf) fixed at a value of 1.
1. Select the Prediction tab, which is located underneath the Setup tab.
2. Select the Weibull option button to choose the Weibull dissolution model.
3. Select the IVIVC object's Prediction Estimates panel.
4. Select Fixed in the Fixed or Estimated column to set the initial value for the FINF parameter.
5. Enter 1 in the Initial column for the FINF parameter.
6. In the Prediction tab, select Targ in the Target Formulation menu.
7. Click the Fit Dissolution Data button in the Prediction tab to fit the model and generate smoothed data. When the model fit and data generation are complete Phoenix displays a confirmation message box.
8. Click the Predict PK button in the Prediction tab to generate predicted PK data for each subject that exists in the original dissolution and PK data sets. When PK data prediction is complete Phoenix displays a confirmation message box.

Phoenix uses the IVIVC model to predict absorption for each subject and then convolves that with the UIRs from the target formulation to generate PK data for each subject. Phoenix then performs noncompartmental analysis on the predicted data and compares the results to those for the target formulation selected in the InVivo Data panel. The output, shown on the Results tab, gives the prediction error versus the target formulation.

The IVIVC workflow is complete when Predict PK in the Status Panel is green. The results are displayed on the Results tab.

CAUTION: It is not necessary to click the Execute button. Because the IVIVC object is a series of workflows, clicking the Execute button will only re-execute all the steps that have been completed, and will not produce any new output.


Chapter 9

Tables
Creating report-ready tables

Three examples of table usage are provided:
» Final Parameters table on page 145 creates a report-ready table of the final parameter estimates generated in Chapter 1.
» Joining raw data and modeling output on page 150 combines data from two workbooks into a single table, and shows how to recreate WinNonlin 5.2.1's table template 9 in Phoenix.
» Using custom tables on page 156 shows how to use one of the custom table types provided with Phoenix.

Final Parameters table
This example uses the Table object to create a report-ready table of the final parameter estimates created under Analyzing Multiple Profiles on page 1. That example computed PK parameters for six subjects for both Tablet and Capsule formulations. It used NCA model 200 and the Final Parameters worksheet. The input data for this example are located in the Phoenix examples directory. This example will create a table using the parameters Cmax, Tmax, AUCall, and AUClast.

Table Type 3
Data for this example are provided in the Phoenix examples directory, which by default is located at C:\Program Files\Pharsight\Phoenix\application\Examples.

Create a new project:
1. Select File > New Project to create a new project. A new project is created in the Object Browser.
2. Name the new project Tables.

Import the data set:
1. Select File > Import or click the Import button. The Open File(s) dialog is displayed.
2. Navigate to the Phoenix examples directory, which by default is located at C:\Program Files\Pharsight\Phoenix\application\Examples.
3. Select Profiles Output.xls and click Open. The Data Import Wizard is displayed. The wizard is used to assign options for how the data are imported and presented.
4. Click the Forward Arrows button.
5. For Final Parameters Pivoted, select the Has units row check box and click the Forward Arrows button.
6. Click the Forward Arrows button.
7. For Dosing Used, select the Has units row check box and click the Forward Arrows button.
8. Click the Forward Arrows button twice.
9. For Summary Table, select the Has units row check box and click the Forward Arrows button.
10. Click the Forward Arrows button and click Finish. The NCA results workbook is added to the project's Data folder.
The file Profiles Output.xls adds the following worksheets to the Data folder:
– Final Parameters
– Final Parameters Pivoted
– Exclusions
– Dosing Used
– Plot Titles
– Summary Table
– Settings
– History


11. View the worksheets by selecting them in the Data folder. Click the (+) sign beside Profiles Output to view the worksheets. Select a worksheet to display it in the Grid tab, which is located in the right viewing panel.

Create the Table:
1. Select the workflow in the Object Browser and then select Insert > Table > Table. The Table object is added to the workflow in the Object Browser.

Note: The Table object can also be added by right-clicking the workflow and selecting New > Table > Table. Any object can be added by selecting New in the workflow menu.

2. Map the Profiles Output Final Parameters Pivoted worksheet as the input source for the Table object:
   • Use the pointer to drag the Profiles Output Final Parameters Pivoted worksheet from the Data folder to the Table object's Main Mappings panel.
   OR
   • In the Table Main Mappings panel click the Select source button to open the Select Object dialog.
   • Select the Final Parameters Pivoted worksheet and click Select.
   The Final Parameters Pivoted worksheet is mapped to the Table object.
Use the Options tab to specify which table type the Table object uses. The Options tab is located underneath the Setup tab.
3. Select Table 3 - Column Detail and Summary by Row Stratification in the Table Type menu.
4. Use the option buttons in the Main Mappings panel to map the data types to the following contexts:
   • Map Subject to the Row ID context. Table type 3 sorts by the row variable within values of the stratification row variable. Table type 3 does not compute summary statistics for each level of the row variable.
   • Map Form to the Stratification Row context. Table type 3 sorts the data by stratification row variable value and computes summary statistics for each value.
   • Map Tmax, Cmax, AUClast, and AUCall to the Data context. These variables provide data for the body of the table.

Note: Select the Table Preview panel in the Setup list to view an example of the final table output.

Select and format summary statistics:
1. Select the Statistics tab, which is located underneath the Setup tab.
2. Select the check boxes in the Display column to select the following summary statistics:
   • N
   • Mean
   • SE
   • Min
   • Median
   • Max
3. Select the Options tab.

Note: The Precision/Alignment menu item in the Options tab allows users to set the number of decimal places or significant figures for each mapped column header and the selected summary statistics, as well as the alignment of the output within each column.

4. Select Precision/Alignment in the Options menu tree. Click the (+) sign beside Precision/Alignment to view the mapped column headers and click the (+) sign beside Statistics to view the selected summary statistics.
5. Select Precision/Alignment > Subject in the Options menu tree.
6. Select 0 in the Value menu.
7. Select Captions in the Options menu tree.
8. In the Caption field type Table 1.
9. Click the Add button.
10. In the Caption field type Pharmacokinetic Parameters.
11. Click the Add button.
12. Select the Column/Sort Order tab, which is located underneath the Setup tab.


The columns in the final table are displayed in the order set here. Use the up arrow and down arrow buttons to change the order in which the Data columns are displayed.
• Select Row Stratification to view study parameter(s) mapped to Row Stratification.
• Select Row ID to view study parameter(s) mapped to Row ID.
• Select Data to view study parameter(s) mapped to Data.
13. Select Data in the Column/Sort Order menu.
14. Select Cmax in the Column Order list and click the up arrow button.
15. Select AUCall in the Column Order list and click the up arrow button.
The Table object's Style tab is used to change the table display options. The Style tab is located underneath the Setup tab. Select different items in the Style menu tree to change the font, font size, font color, and alignment. Style selections are not necessary for this example.
16. Click the Execute button. The results are displayed on the Results tab.

Table type 3 results

Joining raw data and modeling output
This example shows how to reproduce table template 9 that is used in WinNonlin 5.3 and earlier. Phoenix does not have a specific table type for this template. The main difference between table template 9 and the other table templates in WinNonlin is that two data sets are joined to create the final output. The Phoenix Table object only works with one data set at a time, so to produce a table similar to table template 9 it is necessary to use Phoenix's Join Worksheets object prior to creating the table. In this example the two data sets are joined by the Sort variables in both data sets, and the Default table type in Phoenix is used to recreate table template 9.
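For readers who think in terms of data frames, the join performed by the Join Worksheets object is conceptually the same as a key-based merge. The sketch below is a rough pandas analogue, assuming the two files can be read as plain tables with the Subject and Form columns described below (the delimiter of the .dat file is guessed automatically); it is not part of the Phoenix workflow.

```python
import pandas as pd

# clayton.CSV: raw time-concentration data; clayton_pk.dat: Final Parameters output.
raw = pd.read_csv("clayton.CSV")                                # Subject, Form, Hour, Conc
pk = pd.read_csv("clayton_pk.dat", sep=None, engine="python")   # Subject, Form, Tmax, Cmax, AUClast

# Join the worksheets on the Sort variables, keeping the raw rows and
# repeating each subject's PK parameters alongside them.
joined = raw.merge(pk[["Subject", "Form", "Tmax", "Cmax", "AUClast"]],
                   on=["Subject", "Form"], how="left")
print(joined.head())
```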


Recreating WinNonlin's table template 9 in Phoenix

Import the data sets:
This example uses two data sets, clayton.CSV and clayton_pk.dat. clayton.CSV contains time and concentration data for two formulations. clayton_pk.dat contains the Final Parameters output from a noncompartmental analysis.

Note: Select multiple files at once in the Open File(s) dialog by pressing the CTRL key and using the mouse pointer to select the files.

1. Select File > Import or click the Import button. The Open File(s) dialog is displayed.
2. Navigate to the Phoenix examples directory, which by default is located at C:\Program Files\Pharsight\Phoenix\application\Examples.
3. Select clayton.CSV and click Open. The Worksheet Import Options dialog is displayed. The dialog is used to assign options for how the data are imported and presented.
4. Select the Has units row check box.
5. Click Finish. The data set is added to the project's Data folder.
6. Select File > Import or click the Import button.
7. Select clayton_pk.dat and click Open. The Worksheet Import Options dialog is displayed. The correct import options are automatically assigned to the data set. Do not change these settings.
8. Click Finish. The data set is added to the project's Data folder.

Merge the data:
1. Select the workflow in the Object Browser and then select Insert > Data > Join Worksheets. The Join Worksheets object is added to the workflow in the Object Browser.
2. Map the data set clayton as an input source for the Join Worksheets object:
   • Use the pointer to drag the clayton worksheet from the Data folder to the Join Worksheets object's Worksheet 1 Mappings panel.
   OR
   • In the Join Worksheets Worksheet 1 Mappings panel click the Select source button to open the Select Object dialog.
   • Select the clayton worksheet and click Select.
3. Repeat Step 2 to map clayton_pk to the Join Worksheets object's Worksheet 2 Mappings panel. The clayton and clayton_pk data sets are mapped to the Join Worksheets object.
4. Use the option buttons in the Worksheet 1 Mappings panel to map the data types to the following contexts:
   • Map Subject to the Sort context.
   • Map Form to the Sort context.
   • Map Hour to the Source Column context.
   • Map Conc to the Source Column context.
5. Use the option buttons in the Worksheet 2 Mappings panel to map the data types to the following contexts:
   • Map Subject to the Sort context.
   • Map Form to the Sort context.
   • Map Tmax to the Source Column context.
   • Map Cmax to the Source Column context.
   • Map AUClast to the Source Column context.
6. Click the Execute button. The results are displayed on the Results tab.
7. Copy the joined worksheet to the Data folder:
   • In the Join Worksheets object's Results tab, right-click the Result worksheet and select Copy to Data Folder.
   The Result worksheet is added to the project's Data folder and renamed Result from Join Worksheets.

Create the table:
1. Select the workflow in the Object Browser and then select Insert > Table > Table. The Table object is added to the workflow in the Object Browser.


Note: When multiple objects of the same type are added to a workflow they are numbered sequentially. For example, the second Table object added to this workflow is called Table 1.

2. Map the joined data set Result from Join Worksheets as the input source for the Table 1 object:
   • Use the pointer to drag the Result from Join Worksheets worksheet from the Data folder to the Table 1 object's Main Mappings panel.
   OR
   • In the Table 1 Main Mappings panel click the Select source button to open the Select Object dialog.
   • Select the Result from Join Worksheets worksheet and click Select.
   The Result from Join Worksheets data set is mapped to the Table 1 object.
3. Use the option buttons in the Main Mappings panel to map the data types to the following contexts:
   • Map Subject to the Row ID context.
   • Map Form to the Stratification Row context.
   • Map Hour to the Stratification Column context.
   • Map Conc to the Data context.
   • Map Tmax, Cmax, and AUClast to the Dependency context.

Default table type mappings


Summary statistics
This table includes the following summary statistics and formatting options.

Select summary statistics:
1. Select the Statistics tab, which is located underneath the Setup tab.
2. Select the check boxes in the Display column to select the following summary statistics:
   • N
   • Mean
   • SE

Format the table:
Use the Options tab to specify output and formatting options for the Default table type. The Options tab is located underneath the Setup tab.
1. Select Table in the Options menu tree.
2. Select the Page Break on Row Stratification check box.
3. Click the (+) sign beside Precision/Alignment to view the mapped column headers and click the (+) sign beside Statistics to view the selected summary statistics.
4. Select Precision/Alignment > Hour in the Options menu tree.
5. Select 1 in the Value menu.
6. Select Precision/Alignment > Subject in the Options menu tree.
7. Select 0 in the Value menu.
8. Select Captions in the Options menu tree.
9. In the Caption field type Table 2.
10. Click the Add button.
11. In the Caption field type Raw Data and Pharmacokinetic Parameters.
12. Click the Add button.
13. Click the Execute button. The results are displayed on the Results tab.

Default table type Formulation c results

Default table type Formulation t results

Using custom tables
The custom table types are included to provide additional reporting options. In custom table types all output formatting, statistics, styles, sorting, and other options are pre-defined through an XML file and style sheets. Once a custom table is defined, users do not need to make any further options selections. The only possible selections users can make are mapping the input data types to the mapping contexts in the custom table.

Import the data set:
This example uses a data set that contains demographic data for a population used in a bioequivalence study.
1. Select File > Import or click the Import button. The Open File(s) dialog is displayed.
2. Navigate to the Phoenix examples directory, which by default is located at C:\Program Files\Pharsight\Phoenix\application\Examples.
3. Select Bioequivalence Demographics.dat and click Open. The Worksheet Import Options dialog is displayed. The dialog is used to assign options for how the data are imported and presented.
4. Click Finish. The data set is added to the project's Data folder.
5. Select the Bioequivalence Demographics worksheet in the Data folder to display it in the Grid tab.

Add a table object using the Send To menu:
1. In the Data folder, right-click Bioequivalence Demographics and select Send To > Table > Table. A Table object is added to the workflow in the Object Browser and the data in the Bioequivalence Demographics data set is automatically mapped to the Table 2 object. Using the Send To menu option automatically maps the data in the selected data set to the object selected in the Send To menu.

Note: When multiple objects of the same type are added to a workflow they are numbered sequentially. For example, the third Table object added to this workflow is called Table 2.

2. Select the Custom Tables tab in the Table 2 object.
3. In the Select Custom Table menu, select Bioequivalence Demographics. All other tabs are removed from the Table object user interface when a custom table is selected. This is because the custom table type contains preconfigured table options. The study data types are automatically mapped to the appropriate mapping contexts.
4. Click the Execute button. The results are displayed on the Results tab.

Bioequivalence Demographics table results

Note: It is not necessary to keep a project open after completing each chapter. This project is not required when working in the next chapter. To close a project right-click the project and select Close Project.


Chapter 10

Simulation and Study Design
Using Phoenix's library of PK models

Using Phoenix as an aid in designing experiments
Considerable research has been done in the area of optimal designs for linear models. Most methods involve computation of the variance-covariance matrix. The "optimal" design is usually one in which replicate samples are taken at a limited number of combinations of experimental conditions. Unfortunately, these methods are of little or no value when designing experiments involving nonlinear models, for a number of reasons, including:
» It can be difficult or, in the case of a pharmacokinetic study, impossible to obtain replicate observations.
» The primary interest often is not in the model parameters themselves but in some functions of the model parameters, such as AUC, t1/2, etc.
When Phoenix performs a simulation, the output includes information on how precisely the parameters in the model can be estimated for specified values of the independent variables, such as time.

Comparison of two designs
Assume that a study is being planned and that the data produced by this study should be consistent with Phoenix PK model 3. Assume also that the parameter values should be approximately:

V_F    10
K01    3
K10    0.05

and that one of the following study designs, or sets of sampling times, will be used:

0, 1.5, 3, 6, 9, 12, 15, 18, and 24 hours
or
0, 0.5, 1, 2, 4, 8, 12, 24, and 36 hours.

Simulation can be used to determine which set of sampling times would produce the more precise estimates of the model parameters. This example will use Phoenix to simulate the model with each set of sampling times, and compare the variance inflation factors for the two simulations.

Create a new project:
1. Select File > New Project to create a new project. A new project is created in the Object Browser.
2. Name the new project Study Design.

The data set
First create a data set with the following column headers and data:

Group   Times
1       0
1       1.5
1       3
1       6
1       9
1       12
1       15
1       18
1       24
2       0
2       0.5
2       1
2       2
2       4
2       8
2       12
2       24
2       36

Create the data set:
1. Right-click the Data folder in the Object Browser and select New > Worksheet.
2. Name the new worksheet Example Data. The new worksheet is automatically displayed in the Grid tab, which is located in the right viewing panel. The Columns tab is located underneath the Grid tab. The Columns tab is used to add columns to a worksheet.
3. Click the Add button underneath the Columns box. The New Column Properties dialog is displayed. The New Column Properties dialog is used to define the data type and the name of a new column.
4. The Numeric option button is selected by default. Do not change this setting.
5. In the Column Name field type Group and click OK. A new column is displayed in the Columns box and in the Grid tab.
6. In the first cell under Group, type 1 and press ENTER. Repeat for cells 2 through 9.
7. In cells 10 through 18 type 2 in the Group column.
8. Click the Add button underneath the Columns box.
9. In the Column Name field type Times. Leave the data type set to Numeric and click OK.
10. Type the values from the Times column of the table above into the cells of the Times column. The finished worksheet looks like the table above.
   • Users can also import the data set Example Data.csv from the Phoenix examples directory, which by default is located at C:\Program Files\Pharsight\Phoenix\application\Examples.

Insert and map the PK model:
1. Select the workflow in the Object Browser and then select Insert > WNL5 Classic Modeling > PK Model. The PK Model object is added to the workflow in the Object Browser.
2. Map the data set Example Data as the input source for the PK Model object:

   • Use the pointer to drag the Example Data worksheet from the Data folder to the PK Model object's Main Mappings panel.
   OR
   • In the PK Model Main Mappings panel click the Select source button to open the Select Object dialog.
   • Select the Example Data worksheet and click Select.
   The Example Data data set is mapped to the PK Model object.
3. Use the option buttons in the Main Mappings panel to map the data types to the following contexts:
   • Map Group to the Sort context.
   • Map Times to the Time context.
4. Use the Model Selection tab to specify which PK model Phoenix uses in the analysis. The Model Selection tab is located underneath the Setup tab.
5. Select the Number 3 model check box in the Options tab.
6. Select the Simulation check box in the Options tab.
7. In the Y Units field, type ng/mL.

Enter the dosing data:
1. Select the PK Model's Dosing panel.
2. Select the Use internal Worksheet check box. The Select sorts dialog is displayed. The Select sorts dialog prompts a user to select the sort variables to use to create the internal dosing worksheet.
3. Click OK to accept the default sort variable.
4. In the Time column type 0 for both groups.
5. In the Dose column type 100 for both groups.

Note: The number of rows in the Group column corresponds to the number of doses received. For example, if group 1 had 10 doses, there would be 10 rows of dosing information for group 1. In Phoenix this grouping of data is referred to as stacking data.

6. Select the Weighting/Dosing Options tab to specify settings for the PK Model dosing options. Dosing options are located in the Dosing area of the Weighting/Dosing Options tab.
7. In the Unit field type mg.

Model parameters and simulation
Parameter values must be specified for simulations.
1. Select the Parameter Options tab, which is located underneath the Setup tab.
   • The User Supplied Initial Parameter Values option button is selected by default. This setting cannot be changed.
   • The Do Not Use Bounds option button is selected by default. This setting cannot be changed.
Selecting the Simulation check box makes the parameter calculation and boundary selection options unavailable. If the Simulation check box is selected then users must supply initial parameter values, and parameter boundaries are not used.

Enter the initial estimates:
1. Select the PK Model's Initial Estimates panel.
2. Select the Use internal Worksheet check box. The Select sorts dialog is displayed. The Select sorts dialog prompts a user to select the sort variables to use to create the internal worksheet.
3. Click OK to accept the default sort variable.
4. Enter the following initial values for each group:
   • V_F = 10
   • K01 = 3
   • K10 = 0.05
All the settings are complete and the model can be executed.
5. Click the Execute button. The results are displayed on the Results tab.

Results
The variance inflation factors (VIF) for each dosing scheme (groups 1 and 2) are located in the Final Parameters worksheet in the PK worksheet results, and are summarized in the following table.

Parameter   Estimate   Group 1 VIF   Group 2 VIF
V_F         10         0.779         0.657
K01         3          68.48         1.176
K10         0.05       0             0

In practice it is useful to vary the values of V_F, K01, and K10 and repeat the simulations to determine if the first set of sampling times consistently yields less precise estimates than the second set.

Designing the sampling plan
Note that for the parameters V_F and K10, the estimated variances would be approximately 15% lower using the second set of times, while the difference is much more dramatic for the parameter K01. These sets of variance inflation factors indicate that the second set of sampling times would provide tighter estimates of the model parameters. The partial derivatives plots for this model explain this result. The locations at which the partial derivative plots reach a maximum or a minimum indicate the times at which the model is most sensitive to changes in the model parameters, so one approach to designing experiments is to sample where the model is most sensitive to changes in the model parameters. (A rough numerical sketch of this idea follows.)
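The same idea can be explored outside Phoenix. The sketch below is a rough, generic illustration, not Phoenix's VIF calculation: it assumes the standard one-compartment, first-order absorption concentration function for model 3, forms the matrix of numerical partial derivatives with respect to V_F, K01, and K10 at each candidate set of sampling times, and compares the resulting parameter variances via the diagonal of the inverse of J'J (which is proportional to the parameter variances under a constant-variance error assumption).

```python
import numpy as np

def conc(t, v_f, k01, k10, dose=100.0):
    # Assumed one-compartment model with first-order absorption:
    # C(t) = Dose*K01 / (V_F*(K01 - K10)) * (exp(-K10*t) - exp(-K01*t))
    return dose * k01 / (v_f * (k01 - k10)) * (np.exp(-k10 * t) - np.exp(-k01 * t))

def relative_variances(times, theta=(10.0, 3.0, 0.05), h=1e-5):
    # Numerical Jacobian of the predictions with respect to (V_F, K01, K10).
    theta = np.asarray(theta, dtype=float)
    J = np.empty((len(times), len(theta)))
    for j in range(len(theta)):
        up, dn = theta.copy(), theta.copy()
        up[j] += h * theta[j]
        dn[j] -= h * theta[j]
        J[:, j] = (conc(times, *up) - conc(times, *dn)) / (2 * h * theta[j])
    # Diagonal of (J'J)^-1 is proportional to the parameter variances for this design.
    return np.diag(np.linalg.inv(J.T @ J))

design1 = np.array([1.5, 3, 6, 9, 12, 15, 18, 24], dtype=float)
design2 = np.array([0.5, 1, 2, 4, 8, 12, 24, 36], dtype=float)

for name, d in [("Design 1", design1), ("Design 2", design2)]:
    print(name, np.round(relative_variances(d), 6))
```

Comparing the two printed rows element by element shows the same qualitative pattern as the Phoenix VIFs: the second design yields noticeably smaller variances, especially for K01.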

Partial Derivatives plot Group 1

Partial Derivatives plot Group 2

Note that in the first plot of the partial derivatives the model is most sensitive to changes in K10 at about 20 hours. Both sampling schemes include times near 20 hours, so the two sets of sampling times are nearly equivalent in the precision with which K10 would be estimated.

For both V_F and K01 the model is most sensitive to changes very early, at about 0.35 hours for K01 and about 1.4 hours for V_F. The first set of sampling times includes no post-zero points until 1.5 hours, well past the time of greatest sensitivity to K01. Even the second set of times could be improved if samples could be taken earlier than 0.5 hours.

• This same technique could be used for other models in Phoenix or for user-defined models.

Note: It is not necessary to keep a project open after completing each chapter. This project is not required when working in the next chapter. To close a project right-click the project and select Close Project.


Chapter 11

Bioequivalence
Comparing drug exposure with different formulations

Three bioequivalence examples provide illustrations of analysis for different study designs and bioequivalence methods.
» Average bioequivalence on page 167 analyzes a 2x2 crossover study.
» A replicated crossover design on page 170 computes average bioequivalence for a replicated crossover study.
» Individual and population bioequivalence on page 174 explores different methods of evaluating bioequivalence.

Average bioequivalence
The objective of this study is to compare a newly developed tablet formulation to the capsule formulation that was being used in Phase II studies. Both had a label claim of 25 mg per dosing unit. A 2x2 crossover design was chosen for this study. Twenty subjects were randomly assigned to one of two sequence groups. Within each sequence group, each subject took both formulations, with a washout period between. Drug concentrations in plasma were measured, and the AUClast (area under a curve computed to the last observation) was calculated.

Calculating average bioequivalence
Data for this example are provided in the Phoenix examples directory, which by default is located at C:\Program Files\Pharsight\Phoenix\application\Examples. The data set used is Data 2x2.CSV.

Create a new project:
1. Select File > New Project to create a new project. A new project is created in the Object Browser.
2. Name the new project Bioequivalence.

Import the data set:
1. Select File > Import or click the Import button. The Open File(s) dialog is displayed.
2. Navigate to the Phoenix examples directory, which by default is located at C:\Program Files\Pharsight\Phoenix\application\Examples.
3. Select Data 2x2.CSV and click Open. The Worksheet Import Options dialog is displayed. The dialog is used to assign options for how the data are imported and presented.
4. Click Finish. The data set is added to the project's Data folder. A data set in CSV (Comma Separated Value) format is added to the Data folder as a worksheet.
5. Select the Data 2x2 worksheet in the Data folder to view it in the Grid tab.

Begin bioequivalence:
1. Select the workflow in the Object Browser and then select Insert > NCA and Toolbox > Bioequivalence. The Bioequivalence object is added to the workflow in the Object Browser.

Note: The default settings for a new Bioequivalence model are Crossover as the type of study and Average as the type of bioequivalence.

2. Map the data set Data 2x2 as the input source for the Bioequivalence object:
   • Use the pointer to drag the Data 2x2 worksheet from the Data folder to the Main Mappings panel.
   OR
   • In the Bioequivalence Main Mappings panel click the Select source button to open the Select Object dialog.
   • Select the Data 2x2 worksheet and click Select.
   The Data 2x2 data set is mapped to the Bioequivalence Model object.

3. Use the option buttons in the Main Mappings panel to map the data types to the following contexts:
   • Map AUClast to the Dependent context.
   The following data types are automatically mapped to contexts when the data set is mapped to the Bioequivalence model. If they are not, use the option buttons in the Main Mappings panel to map the data types to the appropriate contexts.
   • Sequence is mapped to the Sequence context.
   • Subject is mapped to the Subject context.
   • Period is mapped to the Period context.
   • Formulation is mapped to the Formulation context.

Set up the model:
Use the Model tab to specify settings for Bioequivalence model options. The Model tab is located underneath the Setup tab.
1. Make sure that Crossover is selected as the Type of study, Average is selected as the Type of Bioequivalence, and Capsule is selected as the Reference Formulation.
2. Select the Fixed Effects tab, which is located underneath the Setup tab.
   • Sequence+Formulation+Period is automatically selected as the default Model Specification. Do not change this setting.
   • Ln(x) is automatically selected in the Dependent Variables Transformation menu. Do not change this setting.
3. Select the Variance Structure tab, which is located underneath the Setup tab. The random effects are already specified in the Variance Structure tab. If they are not, complete the following steps to specify the random variance structure. Otherwise, proceed to step 4.
   • Drag Subject from the Classification Variables box to the Random Effects field in the Random 1 tab, or type Subject in the Random Effects field.
   • Click the left parens button or type ( in the Random Effects field.
   • Drag Sequence from the Classification Variables box to the Random Effects field in the Random 1 tab, or type Sequence in the Random Effects field.
   • Click the right parens button or type ) in the Random Effects field.

4. Click the Execute button. The results are displayed on the Results tab.

Results
The Average Bioequivalence worksheet indicates that the difference in ln(AUClast) between formulations is 0.046±0.073. The 90% confidence interval for the ratio is 92.216 to 118.780. Since the confidence interval is completely contained between 80 and 125, one can conclude that the formulations are bioequivalent. Because the data are balanced, the sequential and partial tests are identical. In the tests Sequence is statistically significant, but no other factor is.

Sequential Tests worksheet

• Select any cell with a numerical value in the Bioequivalence worksheet output and look in the value display bar above to see the full precision of 15 decimal places.
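The relationship between the log-scale difference and the reported ratio interval can be reproduced with a few lines of arithmetic. The sketch below assumes that 0.073 is the standard error of the difference and that the interval uses a t quantile with 18 degrees of freedom (a reasonable assumption for a balanced 2x2 crossover with 20 subjects); these assumptions are for illustration only and are not read from the Phoenix output.

```python
from math import exp
from scipy.stats import t

diff, se, df = 0.046, 0.073, 18          # ln(AUClast) difference, assumed SE and df
tcrit = t.ppf(0.95, df)                   # a two-sided 90% CI uses the 95th percentile

ratio = 100 * exp(diff)
lower = 100 * exp(diff - tcrit * se)
upper = 100 * exp(diff + tcrit * se)
print(f"Ratio {ratio:.1f}%, 90% CI {lower:.1f}% to {upper:.1f}%")

# Average bioequivalence is concluded when the whole interval lies within 80% to 125%.
print("Bioequivalent:", 80 <= lower and upper <= 125)
```

Under these assumptions the computed interval matches the reported 92.2% to 118.8% to within rounding.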

A replicated crossover design
The objective of this study is to compare a newly developed tablet formulation to a capsule formulation that was used in Phase II studies. Both formulations have the same label claim per dosing unit. An RTRT/TRTR replicated crossover design was chosen for this study. Twenty subjects were randomly assigned to one of two sequence groups. Concentrations of the drug were measured in plasma, and the AUClast (area under the time-concentration curve, computed to the last observation) was calculated.

Calculating average bioequivalence
Import the data set:
1. Select File > Import or click the Import button. The Open File(s) dialog is displayed.
2. Navigate to the Phoenix examples directory, which by default is located at C:\Program Files\Pharsight\Phoenix\application\Examples.
3. Select Data 2x4.CSV and click Open. The Worksheet Import Options dialog is displayed. The dialog is used to assign options for how the data are imported and presented.
4. Click Finish. The data set is added to the project's Data folder. A data set in CSV (Comma Separated Value) format is added to the Data folder as a worksheet.
5. View the data set by selecting it in the Data folder. The worksheet is displayed in the Grid tab.

Begin bioequivalence:
1. Select the workflow in the Object Browser and then select Insert > NCA and Toolbox > Bioequivalence. The Bioequivalence object is added to the workflow in the Object Browser.

Note: When multiple objects of the same type are added to a workflow they are numbered sequentially. For example, the second Bioequivalence object added to this workflow is called Bioequivalence 1.

2. Map the data set Data 2x4 as the input source for the Bioequivalence 1 object:
   • Use the pointer to drag the Data 2x4 worksheet from the Data folder to the Main Mappings panel.
   OR
   • In the Bioequivalence 1 Main Mappings panel click the Select source button to open the Select Object dialog.
   • Select the Data 2x4 worksheet and click Select.
   The Data 2x4 data set is mapped to the Bioequivalence 1 object.

3. Use the option buttons in the Main Mappings panel to map the data types to the following contexts:
   • Map AUClast to the Dependent context.
   The following data types are automatically mapped to contexts when the data set is mapped to the Bioequivalence model. If they are not, use the option buttons in the Main Mappings panel to map the data types to the appropriate contexts.
   • Sequence is mapped to the Sequence context.
   • Subject is mapped to the Subject context.
   • Period is mapped to the Period context.
   • Formulation is mapped to the Formulation context.

Set up the model:
Use the Model tab to specify settings for Bioequivalence model options. The Model tab is located underneath the Setup tab.
1. Make sure that Crossover is selected as the Type of study, Average is selected as the Type of Bioequivalence, and Capsule is selected as the Reference Formulation.
2. Select the Fixed Effects tab, which is located underneath the Setup tab.
   • Sequence+Formulation+Period is automatically selected as the default Model Specification. Do not change this setting.

Note: Phoenix has automatically selected a model specification and classification variables based on the model for replicated crossovers established in the U.S. FDA Guidance for Industry - Statistical Approaches to Establishing Bioequivalence (January 2001).

   • Ln(x) is automatically selected in the Dependent Variables Transformation menu. Do not change this setting.
3. Select the Variance Structure tab, which is located underneath the Setup tab. The random and repeated effects are already specified in the Variance Structure tab. If they are not, use the following steps to specify the variance structure. Otherwise, proceed to step 6.
4. Select the Variance Structure's Random 1 tab.
   • Formulation is automatically selected in the Random Effects field. Do not change this setting.
   • Subject is automatically selected in the Variance Blocking Variables (Subject) field. Do not change this setting.
   • Banded No-Diagonal Factor Analytic(f) is automatically selected in the Type menu. Do not change this setting.
   • 2 is automatically entered in the Number of factors (f) = field. Do not change this setting.
Notice that the default variance structure for a replicated crossover design is substantially different from and more complex than that for the 2x2 crossover design. As a result, the model fitting is more difficult as well.
5. Select the Variance Structure's Repeated tab.
   • Period is automatically selected in the Repeated Specification field. Do not change this setting.
   • Subject is automatically selected in the Variance Blocking Variables (Subject) field. Do not change this setting.
   • Formulation is automatically selected in the Group field. Do not change this setting.
   • Variance Components is automatically selected in the Type menu. Do not change this setting.
A user can expect that about 50% of data sets analyzed will produce a non-positive definite G matrix. This does not imply that the model-fitting is invalid, but only that a user must be careful not to over-interpret the variance estimates. The confidence interval on the formulation difference will still have the expected statistical properties.
6. Click the Execute button. The results are displayed on the Results tab.

Results
The Bioequivalence analysis just failed to show bioequivalence: the 90% confidence interval runs from 91.612 (lower) to 125.772 (upper), so the upper limit falls outside the 80 to 125 acceptance range. Because the data are balanced, the sequential and partial tests are identical.

Sequential Tests worksheet

Partial Tests worksheet

Individual and population bioequivalence
Phoenix can handle a wide variety of model designs suitable for assessing individual and population bioequivalence, including:

TRTR/RTRT/TRRT/RTTR
TT/RR/TR/RT
TRT/RTR/TRR/RTT
TRRTT/RTTRR
TRR/RTR/RRT
RTR/TRT
TRR/RTT/TRT/RTR/TTR/RRT
TRRR/RTTT
TTRR/RRTT/TRRT/RTTR/TRRR/RTTT

where T=Test formulation and R=Reference formulation.

Note: Each sequence must contain the same number of periods. For each period, each subject must have one measurement.

The Getting Started Guide shows results for an RTR/TRT design, which is recommended in the U.S. FDA individual and population bioequivalence guidelines. This example demonstrates an analysis of a TT/RR/TR/RT design.

The population/individual model
Import the data set:
1. Select File > Import or click the Import button. The Open File(s) dialog is displayed.
2. Navigate to the Phoenix examples directory, which by default is located at C:\Program Files\Pharsight\Phoenix\application\Examples.
3. Select TT RR RT TR.DAT and click Open. The Worksheet Import Options dialog is displayed. The dialog is used to assign options for how the data are imported and presented.
4. Click Finish. The data set is added to the project's Data folder. A data set in DAT (ASCII data) format is added to the Data folder as a worksheet.

The model
Note that the number of subjects is not the same in each sequence group:

Sequence   N
TT         4
RR         4
TR         4
RT         5

Begin bioequivalence:
1. Select the workflow in the Object Browser and then select Insert > NCA and Toolbox > Bioequivalence. The Bioequivalence object is added to the workflow in the Object Browser.
2. Map the data set TT RR RT TR as the input source for the Bioequivalence 2 object:

   • Use the pointer to drag the TT RR RT TR worksheet from the Data folder to the Main Mappings panel.
   OR
   • In the Bioequivalence Model 2 Main Mappings panel click the Select source button to open the Select Object dialog.
   • Select the TT RR RT TR worksheet and click Select.
   The TT RR RT TR data set is mapped to the Bioequivalence 2 object.
3. In the Model tab, select the Population/Individual option button in the Type of Bioequivalence area. The mapping contexts in the Main Mappings panel are automatically updated.
4. Use the option buttons in the Main Mappings panel to map the data types to the following contexts:
   • Map AUC to the Dependent context.
   The following data types are automatically mapped to contexts when the data set is mapped to the Bioequivalence model. If they are not, use the option buttons in the Main Mappings panel to map the data types to the appropriate contexts.
   • Sequence is mapped to the Sequence context.
   • Subject is mapped to the Subject context.
   • Period is mapped to the Period context.
   • Formulation is mapped to the Formulation context.

Set up the model:
1. Use the Model tab to specify settings for Bioequivalence model options.
   • Crossover is automatically selected in the Type of study area. Crossover studies are the only permitted type for Population/Individual bioequivalence analysis.
   • Select Population/Individual as the Type of Bioequivalence.
   • R is automatically selected in the Reference Value menu. Do not change this setting.
2. Select the Fixed Effects tab, which is located underneath the Setup tab.
   • Ln(x) is automatically selected in the Dependent Variables Transformation menu. Do not change this setting. The values will be log-transformed before the analysis.
3. Select the Options tab, which is located underneath the Setup tab.
4. In the Confidence Level field type 95 to set the confidence level to 95%. The default bioequivalence options reflect the recommendations in the U.S. FDA (2001) guidelines on individual and population bioequivalence.
5. Click the Execute button. The results are displayed on the Results tab.


Results

Partial Population/Individual results worksheet:

Statistic          Value     Upper_CI   Conclusion
Difference(Delta)  -0.013
Ratio(%Ref)        98.735               BE shown for ratio test
SigmaR             0.345
SigmaWR            0.05
Ref_Pop_eta        -0.229    0.014      Pop. BE not shown for refnc-scaling CI test
Const_Pop_eta      -0.091    0.101      Pop. BE not shown for const-scaling CI test
Mixed_Pop_eta      -0.229    0.014      Pop. BE not shown for mixed-scaling CI test
Ref_Indiv_eta      0.001     0.043      Indiv. BE not shown for refnc-scaling CI test
Const_Indiv_eta    -0.093    -0.05      Indiv. BE shown for const-scaling CI test
Mixed_Indiv_eta    -0.093    -0.05      Indiv. BE shown for mixed-scaling CI test

Inspect the results for mixed scaling. For population bioequivalence the upper confidence limit is 0.014 > 0, and therefore population BE has not been shown. For individual bioequivalence the upper confidence limit is –0.05 < 0, and so individual BE has been shown.
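The conclusions in the worksheet follow a simple decision rule: each eta statistic is a linearized bioequivalence criterion, and BE is concluded when the upper confidence bound of eta is below zero. A minimal sketch of that rule, using only the Upper_CI values reported in the worksheet above (the eta statistics and the scaling logic are Phoenix outputs and are not recomputed here):

# Decision rule applied to the Upper_CI values from the worksheet above.
# BE (population or individual) is shown when the upper confidence bound
# of the linearized criterion (eta) is below zero.

def be_shown(upper_ci_of_eta: float) -> bool:
    """Return True when the upper confidence bound of eta is < 0."""
    return upper_ci_of_eta < 0.0

results = {
    "Mixed_Pop_eta": 0.014,    # population BE, mixed scaling
    "Mixed_Indiv_eta": -0.05,  # individual BE, mixed scaling
}

for statistic, upper_ci in results.items():
    verdict = "shown" if be_shown(upper_ci) else "not shown"
    print(f"{statistic}: upper CI = {upper_ci} -> BE {verdict}")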

Comparing average bioequivalence

Re-analyze the data for average bioequivalence:

1. This example is the second part of the Individual and population bioequivalence section that starts on page 174.

2. This example uses the data set TT RR RT TR from The population/individual model example on page 175.

3. Repeat steps 1 and 2 under Begin bioequivalence on page 175 to insert another bioequivalence model and to map the data types to the appropriate contexts.

Set up the model:

Use the Model tab to specify settings for Bioequivalence model options. The Model tab is located underneath the Setup tab.

1. Make sure that Crossover is selected as the Type of study, Average is selected as the Type of Bioequivalence, and R is selected as the Reference Formulation.

2. Select the Fixed Effects tab, which is located underneath the Setup tab.

   •  Sequence+Formulation+Period is automatically selected as the default Model Specification. Do not change this setting.
   •  Ln(x) is automatically selected in the Dependent Variables Transformation menu. Do not change this setting.

3. Select the Variance Structure tab, which is located underneath the Setup tab.

4. Select the Variance Structure’s Random 1 tab.

5. In the Random 1 tab, Formulation is in the Random Effects field. If not, drag Formulation from the Classification Variables list or type Formulation in the Random Effects field.

6. Subject is in the Variance Blocking Variables (Subject) field. If not, drag Subject from the Classification Variables list or type Subject in the Variance Blocking Variables (Subject) field.

7. Make sure Banded No-Diagonal Factor Analytic(f) is selected in the Type menu.

8. Make sure 2 is in the Number of factors (f) = field. If it is not, type 2 in the field.

9. Select the Variance Structure’s Repeated tab.

10. Period is in the Repeated Specification field. If not, drag Period from the Classification Variables list or type Period in the Repeated Specification field.

11. Subject is in the Variance Blocking Variables (Subject) field. If not, drag Subject from the Classification Variables list or type Subject in the Variance Blocking Variables (Subject) field.

12. Formulation is in the Group field. If not, drag Formulation from the Classification Variables list or type Formulation in the Group field.

13. Click the Execute button. The results are displayed on the Results tab.

Using the FDA model for average bioequivalence on replicated crossover designs resulted in a 90% confidence interval for the ratio of average AUC with a lower limit of 87.277% and an upper limit of 99.715%. Because this interval falls within the standard 80% to 125% acceptance limits, a user can also conclude that average bioequivalence is achieved. This is not always the case: data can pass individual BE and fail average BE, and data can also pass average BE and fail individual BE.
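Because the analysis is run on ln-transformed AUC, the confidence limits for the Test minus Reference difference on the log scale map directly to the percent ratio limits quoted above. A minimal sketch of that back-transformation and of the usual average BE check; the 80% to 125% acceptance range is the conventional limit and is stated here as an assumption rather than read from the Phoenix output:

import math

# Ratio confidence limits reported by the average BE run above (percent of reference).
lower_pct, upper_pct = 87.277, 99.715

# Equivalent limits for the Test - Reference difference on the ln(AUC) scale,
# since ratio = exp(difference).
lower_ln = math.log(lower_pct / 100.0)
upper_ln = math.log(upper_pct / 100.0)
print(f"ln-scale difference CI: ({lower_ln:.4f}, {upper_ln:.4f})")

# Conventional average BE acceptance range (assumed here): 80% to 125%.
average_be = 80.0 <= lower_pct and upper_pct <= 125.0
print("Average BE concluded" if average_be else "Average BE not concluded")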

Note: It is not necessary to keep a project open after completing each chapter. This project is not required when working in the next chapter. To close a project right-click the project and select Close Project.


Chapter 12

Transformations
Computing ratios and baseline adjustments

This example demonstrates some frequently used calculations, including computation of:

» Fraction of drug absorbed using IV and oral data.
» Metabolite to parent drug ratios.
» Change from baseline to be used in AUC calculations.

Computing ratios

Phoenix can compute derived parameters as ratios of modeling output parameters. This functionality has been designed specifically for the computation of F (fraction of oral dose absorbed) and for the calculation of metabolite to parent drug ratios. This example will demonstrate the computation of F. Metabolite to parent drug ratios would be calculated in the same manner. Two data sets are needed for this example: one with AUC from IV data for 24 subjects, and another with AUC from oral data from the same 24 subjects. These data sets are provided in the Phoenix example subdirectory as IV.csv and Oral.csv. The example opens these two data sets, merges them into one data set, computes F for each subject, then computes summary statistics for F.
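As an illustration of what the workflow below computes, here is a minimal pandas sketch of the same calculation: read the two files, join them by subject, form the AUC ratio, and summarize it. The Subject and AUC column names are assumed from the mapping steps that follow, and the _IV/_Oral suffixes are chosen for readability (Phoenix names the joined columns AUC_1 and AUC_2); the exact layout of IV.csv and Oral.csv should be checked against the files themselves.

import pandas as pd

# Assumed layout: each file has Subject, Form, and AUC columns (per the mappings below).
iv = pd.read_csv("IV.csv")
oral = pd.read_csv("Oral.csv")

# Join the two worksheets on Subject (the Join Worksheets step below).
merged = iv.merge(oral, on="Subject", suffixes=("_IV", "_Oral"))

# F = AUC(oral) / AUC(IV) for each subject (the x/y column transformation below).
merged["Fraction"] = merged["AUC_Oral"] / merged["AUC_IV"]

# Summary statistics for F (the Descriptive Stats step below).
print(merged["Fraction"].describe())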

Create the project and import the data

1. Select File > New Project to create a new project. A new project is created in the Object Browser.

2. Name the new project Transformations.


Import the data sets:

Note: Select multiple files at once in the Open File(s) dialog by pressing the CTRL key and using the pointer to select the files.

1. Select File > Import or click the Import button. The Open File(s) dialog is displayed.

2. Navigate to the Phoenix examples directory, which by default is located at C:\Program Files\Pharsight\Phoenix\application\Examples.

3. Select IV.csv and click Open. The Worksheet Import Options dialog is displayed. The dialog is used to assign options for how the data are imported and presented.

4. Click Finish. The data set is added to the project’s Data folder.

5. Import the data set Oral.csv and click Open.

6. Click Finish. The data set is added to the project’s Data folder. Data sets in CSV (Comma Separated Value) format are added to the Data folder as worksheets.

7. View the data sets by selecting them in the Data folder to display them in the Grid tab, which is located in the right viewing panel.

Merge the two data sets

1. Select the workflow in the Object Browser and then select Insert > Data > Join Worksheets. The Join Worksheets object is added to the workflow in the Object Browser.

2. Map the data set IV as an input source for the Join Worksheets object:

   •  Use the pointer to drag the IV worksheet from the Data folder to the Join Worksheets object’s Worksheet 1 Mappings panel.
   OR
   •  In the Join Worksheets Worksheet 1 Mappings panel click the Select source button to open the Select Object dialog.
   •  Select the IV worksheet and click Select.

3. Repeat Step 2 to map Oral to the Join Worksheets object’s Worksheet 2 Mappings panel. The IV and Oral data sets are mapped to the Join Worksheets object.

4. Use the option buttons in the Worksheet 1 Mappings panel to map the data types to the following contexts:

   •  Map Subject to the Sort context.
   •  Map Form to the Source Column context.
   •  Map AUC to the Source Column context.

5. Use the option buttons in the Worksheet 2 Mappings panel to map the data types to the following contexts:

   •  Map Subject to the Sort context.
   •  Map Form to the Source Column context.
   •  Map AUC to the Source Column context.

6. Click the Execute button. The results are displayed on the Results tab.

   Join Worksheets Result worksheet

7. Copy the joined worksheet to the Data folder.

   •  Right-click the Join Worksheets object’s Result worksheet and select Copy to Data Folder.

The Result worksheet is added to the project’s Data folder and renamed Result from Join Worksheets.

Calculate F (fraction of oral dose absorbed):

1. Select the workflow in the Object Browser and then select Insert > Data > Data Wizard. The Data Wizard object is added to the workflow in the Object Browser.

2. On the Options tab, in the Action menu, select Transformation.

3. Click the Add button on the Options tab.

4. Map the data set Result from Join Worksheets as an input source for the Data Wizard (Step 1: Transformation):

   •  Use the pointer to drag the Result from Join Worksheets worksheet from the Data folder to the Column Transformation object’s Main Mappings panel.
   OR
   •  In the Column Transformation Main Mappings panel click the Select source button to open the Select Object dialog.
   •  Select the Result from Join Worksheets worksheet and click Select.

   The Result from Join Worksheets data set is mapped to the Column Transformation object.

5. Arithmetic is automatically selected in the Transformation Type menu. Do not change this setting.

6. Select x/y in the Transformation menu.

7. In the New Column Name field type Fraction.

8. Use the option buttons in the Main Mappings panel to map the data types to the following contexts:

   •  Map AUC_1 to the Y Column context.
   •  Map AUC_2 to the X Column context.

9. Click the Execute button. The results are displayed on the Results tab.

10. Copy the transformed worksheet to the Data folder.

11. Right-click the Column Transformation object’s Result worksheet and select Copy to Data Folder. The Result worksheet is added to the project’s Data folder and renamed Result from Data Wizard.


Calculate descriptive statistics

1. Select the workflow in the Object Browser and then select Insert > NCA and Toolbox > Descriptive Stats. The Descriptive Stats object is added to the workflow in the Object Browser.

2. Map the data set Result from Data Wizard as the input source for the Descriptive Stats object:

   •  Use the pointer to drag the Result from Data Wizard worksheet from the Data folder to the Main Mappings panel.
   OR
   •  In the Descriptive Stats Main Mappings panel click the Select source button to open the Select Object dialog.
   •  Select Result from Data Wizard and click Select.

   The Result from Data Wizard data set is mapped to the Descriptive Stats object.

3. Use the option buttons in the Main Mappings panel to map the data types to the following contexts:

   •  Map Fraction to the Summary context.
   •  Leave all other data types mapped to None.

Use the Options tab to specify settings for the Descriptive Stats options. The Options tab is located underneath the Setup tab.

4. Select the Confidence Interval check box. The default setting for the Confidence Interval is 95%. Do not change this setting.

5. Select the Number of SD check box. The default setting for the number of standard deviations is 1. Do not change this setting.

6. Click the Execute button. The results are displayed on the Results tab.

The Statistics worksheet results are shown below:


Creating a baseline-adjusted variable

In many cases it is useful to fit a model to a variable with some endogenous or baseline level, for example, blood pressure or estrogen levels. The calculation of PK parameters for such variables would generally be done on the baseline-adjusted observation values. This example will compute the change from baseline in a response variable, creating an analysis-ready new column of data.
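For reference, a minimal sketch of the same change-from-baseline calculation outside Phoenix. It assumes the data have Time and Conc columns (per the mappings below) plus a Subject column for grouping, and it takes the earliest observation per subject as the baseline; whether Phoenix's Baseline transformation handles multiple baseline records or missing time-zero rows differently should be checked in the product documentation.

import pandas as pd

# Assumed columns: Subject, Time, Conc.
data = pd.read_csv("endogenous.csv")  # hypothetical CSV export of endogenous.dat

# Baseline = observation at the earliest time for each subject (typically time 0).
data = data.sort_values(["Subject", "Time"])
baseline = data.groupby("Subject")["Conc"].transform("first")

# Change from baseline, analogous to the Change column created below.
data["Change"] = data["Conc"] - baseline
print(data.head())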

Import the data set

1. Select File > Import or click the Import button. The Open File(s) dialog is displayed.

2. Navigate to the Phoenix examples directory, which by default is located at C:\Program Files\Pharsight\Phoenix\application\Examples.

3. Select endogenous.dat and click Open. The Worksheet Import Options dialog is displayed. The dialog is used to assign options for how the data are imported and presented.

4. Click Finish. The data set is added to the project’s Data folder. A data set in DAT (ASCII data) format is added to the Data folder as a worksheet.

5. View the data set by selecting it in the Data folder to display it in the Grid tab.

Compute the change from baseline using a column transform

1. Select the workflow in the Object Browser and then select Insert > Data > Data Wizard. The Data Wizard object is added to the workflow in the Object Browser.

Note: When multiple objects of the same type are added to a workflow they are numbered sequentially. For example, the second Data Wizard object added to this workflow is called Data Wizard 1.

2. On the Options tab of the Data Wizard 1 object, select Transformation as the type of Action.

3. Click the Add button on the Options tab.

4. Map the data set endogenous as an input source for the Data Wizard 1 object:

   •  Use the pointer to drag the endogenous worksheet from the Data folder to the Data Wizard 1 object’s Main Mappings panel.
   OR
   •  In the Data Wizard 1 Main Mappings panel click the Select source button to open the Select Object dialog.
   •  Select the endogenous worksheet and click Select.

   The endogenous data set is mapped to the Data Wizard 1 object.

5. Select Baseline in the Transformation Type menu on the Options tab.

6. Select Change from Baseline in the Transformation menu.

7. In the New Column Name field type Change.

8. Use the option buttons in the Main Mappings panel to map the data types to the following contexts:

   •  Map Time to the Time context.
   •  Map Conc to the X Column context.

9. Click the Execute button. The results are displayed on the Results tab.


Column Transformation Result worksheet

The Change column is added to the data set and is ready for use in modeling.

Note: It is not necessary to keep a project open after completing each chapter. This project is not required when working in the next chapter. To close a project right-click the project and select Close Project.


Chapter 13

Modeling Examples
Pharmacokinetic, linear regression, survival, logit and other example models

The following examples use Pharsight model object files Exp1.pmo through Exp15.pmo, which are installed in the Phoenix examples subdirectory. Each model object file contains a data set and a PK, PD, PKPD, Indirect Response, or an ASCII model object. Each imported model object contains the appropriate default mappings and settings needed to run the model.

Note: These examples can only be run on 32-bit machines or on a 64-bit machine using the Phoenix32.exe.

Load, view, and run the example models

Load the Model_Examples project:

1. Select File > Load Project.

2. Navigate to Phoenix’s Examples directory, which by default is located at C:\Program Files\Pharsight\Phoenix\application\Examples.

3. Select Model_Examples.phxproj and click Open. The project is loaded in the Object Browser.

Each example model has the following items associated with it:

•  A data set in workbook form.
•  Many have data sets in workbook form for dosing.
•  A PK Model object.

4. Click the (+) symbols beside the workbooks in the Data folder to view the data sets.


View and execute the models:

1. View each model by selecting the model object in the Object Browser.

2. Select items in a model object's Setup list to view each model's mappings.

3. Select the various tabs in the Options tab row to view each model's settings.

4. Click the Execute button to execute each model. The results are displayed on the Results tab.

Pharmacokinetic model

Model Exp1

In this example a data set was fit to PK model 13 in the pharmacokinetic model library. Four constants are required for model 13: the stripping dose associated with the parameter estimates, the number of doses, the dose, and the time of dosing. This example uses weighted least squares (1/observed Y). Phoenix determines initial estimates via curve stripping and then generates bounds for the parameters.

Pharmacokinetic model with multiple doses

Model Exp2

In this example a data set obtained following multiple dosing is fit to model 13 in the pharmacokinetic model library, which is a two compartment open model. This model has five parameters: A, B, K01, Alpha, and Beta. This example uses the user-supplied initial values 20, 5, 3, 2, and 0.05. Phoenix generates bounds for the parameters.

Probit analysis: maximum likelihood estimation of potency

Model Exp3

This example demonstrates how to use the NORMIT and WTNORM functions to perform a probit regression (parallel line bioassay or quantal bioassay) analysis. Note that a probit is a normit plus five. There are several interesting features used in this example:

1. The transform capability was used to create the response variable.

2. The logarithm of the relative potency is estimated as a secondary parameter.

3. Maximum likelihood estimates were obtained by iteratively reweighting and turning off the halvings and convergence criteria. Therefore, instead of iterating until the residual sum of squares is minimized, the program adjusts the parameters until the partial derivatives with respect to the parameters in the model are zero. This will normally occur after a few iterations.

4. Since there is no σ² in a problem such as this, the variances for the maximum likelihood estimates are obtained by setting s² = 1 (MEANSQUARE = 1).

5. The following modeling options are used:

   – Method 3 is selected, which is recommended for Maximum Likelihood estimation (MLE) and iterative reweighting problems.
   – Convergence Criterion is set to 0. This turns off convergence checks for MLE.
   – Iterations are set to 10. Estimates should converge after a few iterations.
   – Meansquare is set to 1. Sigma squared is 1 for MLE.

For further reading regarding use of nonlinear least squares to obtain maximum likelihood estimates, refer to Jennrich, R.I. and Moore, R.H. (1975), Maximum Likelihood Estimation by Means of Nonlinear Least Squares, Proceedings of the American Statistical Association, 57-65.
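As a rough illustration of the estimation idea only (not of Phoenix's NORMIT/WTNORM machinery), the sketch below fits a two-parameter probit dose-response curve by direct maximization of the binomial likelihood on hypothetical quantal data; Phoenix instead reaches the maximum likelihood solution by iteratively reweighted least squares with the options listed above. The data values, starting values, and optimizer choice are all assumptions made for the sketch.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical quantal bioassay data: dose, number tested, number responding.
log_dose = np.log(np.array([1.0, 2.0, 4.0, 8.0, 16.0]))
n = np.array([20, 20, 20, 20, 20])
y = np.array([2, 5, 10, 16, 19])

def neg_log_likelihood(params):
    alpha, beta = params
    p = norm.cdf(alpha + beta * log_dose)   # probit (normit) response curve
    p = np.clip(p, 1e-10, 1 - 1e-10)        # guard against log(0)
    return -np.sum(y * np.log(p) + (n - y) * np.log(1 - p))

fit = minimize(neg_log_likelihood, x0=[0.0, 1.0], method="Nelder-Mead")
alpha_hat, beta_hat = fit.x
print("alpha, beta:", alpha_hat, beta_hat)
print("log ED50:", -alpha_hat / beta_hat)   # log dose giving 50% response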

Logit regression (bioassay)

Model Exp4

The following data were obtained in a toxicological experiment:

Dose (mg)   # Dosed (n)   # Died (Y)
300         50            15 (30%)
1000        20            9 (45%)
3300        26            19 (73%)
10000       12            12 (100%)

In this example, assume that the distribution of Y is binomial with mean = np, variance = npq, and q = 1 - p, where

p = \frac{\exp(\alpha + \beta X)}{1 + \exp(\alpha + \beta X)}, \quad X = \log_e(\text{dose})

Maximum likelihood estimates of α and β for this model are obtained via iteratively reweighted least squares. This is done by fitting the mean function (np) to the Y data with weight (npq)^{-1}. The modeling commands needed to fit this model to the data are included in an ASCII model file. Note that the loge LD50 and loge LD001 are also estimated as secondary parameters. Note that this model is really a linear logit model in that:

\log\left(\frac{p}{1 - p}\right) = \alpha + \beta X

Note: For this type of problem, the final value of the residual sum of squares is the Chi-square statistic for testing heterogeneity of the model. If this example is run, a user obtains X² (heterogeneity) = 2.02957, with 4 - 2 = 2 degrees of freedom (the number of data points minus the number of parameters that were estimated). For a more in-depth discussion of the use of nonlinear least squares for maximum likelihood estimation, see Jennrich and Moore (1975).

In the Engine Settings tab the Convergence Criteria is set to 0 to turn off the halving and convergence checks. Meansquare is set to 1 (the residual mean square is redefined to be 1.00) in order to estimate the standard errors of a, b and the secondary parameters.
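A minimal sketch of the same iteratively reweighted least squares fit, using the toxicology data from the table above. The parameterization (α and β on log dose) follows the equations in this section, but the starting values and fixed iteration count are choices made for the sketch rather than Phoenix settings.

import numpy as np

# Data from the table above.
dose = np.array([300.0, 1000.0, 3300.0, 10000.0])
n = np.array([50.0, 20.0, 26.0, 12.0])
y = np.array([15.0, 9.0, 19.0, 12.0])

X = np.column_stack([np.ones_like(dose), np.log(dose)])  # columns: intercept, log(dose)
beta = np.zeros(2)                                        # starting values for alpha, beta

# Iteratively reweighted least squares for the binomial logit model:
# fit the mean n*p to Y with weight 1/(n*p*q), as described above.
for _ in range(25):
    eta = X @ beta
    p = 1.0 / (1.0 + np.exp(-eta))
    q = 1.0 - p
    w = n * p * q                                # working weights (n p q)
    z = eta + (y - n * p) / w                    # working response
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * z))

alpha_hat, beta_hat = beta
p_hat = 1.0 / (1.0 + np.exp(-(X @ beta)))
chi_sq = np.sum((y - n * p_hat) ** 2 / (n * p_hat * (1.0 - p_hat)))  # heterogeneity X^2
print("alpha, beta:", alpha_hat, beta_hat)
print("loge LD50:", -alpha_hat / beta_hat)
print("heterogeneity chi-square:", chi_sq)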

Survival analysis

Model Exp5

This is another maximum likelihood example and is very similar to Example 4. It is included to show that models arising in a variety of disciplines, such as case survival or reliability analysis, can be fit by nonlinear least squares. The dosing constant is defined in this model as the denominator for the proportions, that is, N.

In the Engine Settings tab the Convergence Criteria is set to 0 to turn off the halving and convergence checks. Meansquare is set to 1 (redefines the residual mean square to be 1.00) in order to estimate the standard errors of the primary and secondary parameters.


System of two differential equations with data for both compartments

Model Exp6

Consider the following model, where K12 and K20 are first-order rate constants. This model may be described by the following system of two differential equations:

\frac{dZ_1}{dt} = -K_{12} Z_1          (compartment 1)

\frac{dZ_2}{dt} = K_{12} Z_1 - K_{20} Z_2          (compartment 2)

with initial conditions Z1 = D and Z2 = 0. In addition to obtaining estimates of K12 and K20, it is also desirable to estimate D and the half-lives of K12 and K20. A sample solution for this example is given here. Note that for this example the model is defined as an ASCII file. Data corresponding to both compartments are available. Column C in the data set for this example contains a function variable, which defines the separate functions.
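A minimal sketch of simulating this system outside Phoenix, with hypothetical values for K12, K20, and D (the example itself estimates these from the data rather than assuming them):

import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameter values for illustration only.
K12, K20, D = 0.5, 0.2, 100.0

def rhs(t, z):
    z1, z2 = z
    return [-K12 * z1,              # compartment 1: dZ1/dt = -K12*Z1
            K12 * z1 - K20 * z2]    # compartment 2: dZ2/dt = K12*Z1 - K20*Z2

sol = solve_ivp(rhs, t_span=(0.0, 24.0), y0=[D, 0.0],
                t_eval=np.linspace(0.0, 24.0, 25))

# Half-lives associated with the two rate constants.
print("t1/2 (K12):", np.log(2) / K12)
print("t1/2 (K20):", np.log(2) / K20)
print(sol.y[:, :5])   # first few simulated values for Z1 and Z2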

System of two differential equations with data on one compartment

Model Exp7

The model for this example is identical to that for example 6. However, in this example, it is assumed that data are available only for compartment two.

Multiple linear regression

Model Exp8

Linear regression models are a subset of nonlinear regression models; consequently, linear models can also be fit using Phoenix. To illustrate this, a sample data set taken from “Analyzing Experimental Data by Regression” by Allen and Cady was analyzed. Note that linear models can always be written as:

Y = B_0 + \sum_{i=1}^{n} X_i B_i

This example is also interesting in that the model was initially defined in such a way to permit several different models to be fit to the data. In the ASCII model panel, note that the number of parameters to be estimated is defined in CONS in order to make the model specification as general as possible. Note also the use of a DO loop in the model text.

Note: When using Phoenix to fit a linear regression model use arbitrary initial values and use the Do Not Use Bounds option. Use the Gauss-Newton minimization method with the Levenberg and Hartley modification. The dosing constant for this example is the number of terms to be fit in the regression.
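A minimal sketch of the point being made here: a linear model fit by a general nonlinear least squares routine with arbitrary starting values. The data below are hypothetical, not the Allen and Cady data set, and the optimizer is a generic choice rather than the Gauss-Newton engine Phoenix uses.

import numpy as np
from scipy.optimize import least_squares

# Hypothetical data: two predictors and a response.
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0], [5.0, 5.0]])
y = np.array([7.1, 6.0, 13.2, 12.1, 17.0])

def residuals(b):
    # Linear model Y = B0 + X1*B1 + X2*B2 treated as a "nonlinear" model function.
    return y - (b[0] + X @ b[1:])

# Arbitrary starting values, as the note above recommends for linear models.
fit = least_squares(residuals, x0=[0.0, 0.0, 0.0])
print("B0, B1, B2:", fit.x)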

Cumulative areas under the curve

Model Exp9

This example uses the TRANSFORM block of commands to output cumulative area under the curve values calculated by trapezoidal rule. It computes cumulative urine excretion then fits it to a one compartment model. The use of the LAG function is demonstrated.
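A minimal sketch of the cumulative trapezoidal-rule calculation itself; the time and concentration values are hypothetical, and the LAG usage and the one compartment fit in Exp9 are not reproduced here.

import numpy as np

# Hypothetical time and observation data.
t = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 12.0])
c = np.array([0.0, 4.0, 6.0, 5.0, 2.5, 1.0])

# Cumulative area under the curve by the trapezoidal rule.
increments = 0.5 * (c[1:] + c[:-1]) * np.diff(t)
cumulative_auc = np.concatenate(([0.0], np.cumsum(increments)))
print(cumulative_auc)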

Mitscherlich nonlinear model

Model Exp10

In this example a data set is fit to the Mitscherlich model. The data were taken from Allen and Cady (1982). Fitting data to this model involves the estimation of three parameters: b1, b2, and γ.

y = b_1 - b_2 \exp(-\gamma x)
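A minimal sketch of fitting this three-parameter model by nonlinear least squares; the (x, y) values and starting estimates are hypothetical, not the Allen and Cady data, and the rate parameter is written as gamma to match the equation above.

import numpy as np
from scipy.optimize import curve_fit

def mitscherlich(x, b1, b2, gamma):
    # y = b1 - b2 * exp(-gamma * x)
    return b1 - b2 * np.exp(-gamma * x)

# Hypothetical (x, y) data for illustration only.
x = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])
y = np.array([2.1, 5.8, 8.0, 10.5, 11.4, 11.8])

params, _ = curve_fit(mitscherlich, x, y, p0=[12.0, 10.0, 0.5])
print("b1, b2, gamma:", params)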


Four parameter logistic model

Model Exp11

This example illustrates how to fit a data set to a general four parameter logistic function. The function is often used to fit radioimmunoassay data. The function, when graphed, depicts a sigmoidal (S-shaped) curve. The four parameters represent the lower and upper asymptotes, the ED50, and a measure of the steepness of the slope. For further details see DeLean, Munson and Rodbard (1978). The model, with parameters a, b, c and d, is as follows:

y = \frac{a - d}{1 + (x/c)^b} + d
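A minimal sketch of fitting the four parameter logistic function by nonlinear least squares; the concentration and response values are hypothetical, and the starting values are rough guesses from those data rather than Phoenix defaults.

import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    # y = (a - d) / (1 + (x / c)**b) + d
    return (a - d) / (1.0 + (x / c) ** b) + d

# Hypothetical concentration (x) and response (y) data.
x = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
y = np.array([98.0, 95.0, 85.0, 60.0, 30.0, 12.0, 5.0])

# a: upper asymptote, d: lower asymptote, c: ED50, b: slope factor.
params, _ = curve_fit(four_pl, x, y, p0=[100.0, 1.0, 3.0, 0.0])
print("a, b, c, d:", params)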

Linear regression

Model Exp12

This example is based on Applied Regression Analysis by Draper & Smith, Wiley, 2nd ed., pp. 204-205.

Note: When doing linear regression in Phoenix, enter arbitrary initial estimates and select Do Not Use Bounds. Use the Gauss-Newton minimization method with the Levenberg and Hartley modification.

Indirect response model

Model Exp13

This example is PD8 from the textbook Pharmacokinetic and Pharmacodynamic Data Analysis: Concepts and Applications by Gabrielsson and Weiner, 2nd edition. Stockholm: Swedish Pharmaceutical Press, 1997. It uses an indirect response model, linking Phoenix pharmacokinetic model 11 to indirect response model 54. The PK data were fit in a separate run and are linked to the Indirect Response model. This can be done via the PKVAL command when using an ASCII model or via the Link parameters panel. Model 11 is a two compartment micro constant model with extravascular input.


Ke0 link model

Model Exp14

This example is PD10 from the textbook Pharmacokinetic and Pharmacodynamic Data Analysis: Concepts and Applications by Gabrielsson and Weiner, 2nd edition. Stockholm: Swedish Pharmaceutical Press, 1997. It uses an effect compartment PK/PD link model. The drug was administered intravenously, and a one compartment model is assumed (PK Model 1). The PD data are fit to a simple Emax model (PD Model 101). The pharmacokinetic data were fit in a separate run and are linked to the pharmacodynamic model.

Pharmacokinetic/pharmacodynamic link model

Model Exp15

Rather than fitting the PK data to a PK model, an effect compartment is fitted and Ke0 is estimated using the observed Cp data. Therefore it is a type of nonparametric model. The collapsed Ce values are then used to model the PD data. The example also illustrates how to mix differential equations and integrated functions. This approach was proposed by Dr. Wayne Colburn.


Index

A
Absolute bioavailability, 116
AUC partial areas, 46

B
Bioavailability
   absolute, 116
Bioequivalence
   output, 170

C
Change from baseline, 186
Crossover design, 112
   output, 114
Custom tables, 156

D
Data set exclusions, 46
Deconvolution, 116
Dissolution, 116

E
Effect
   Ke0, 98
   NCA for effect data, 51
Excluding data, 46

F
Fit the dissolution data, 136
Fit the UIR, generate absorption data and set the formulation information, 139

G
Generate and validate the IVIVC, 140

H
Hysteresis, 98

I
Import the data sets for the IVIVC project, 134
in vitro in vivo correlations, 133
Insert the IVIVC workflow and identify and smooth the dissolution data, 135

K
Ke0
   semicompartmental modeling, 98

M
Model options
   Pharmacokinetic, 84
Modeling
   pharmacodynamic, 102

N
NCA_PD.pmo, 52
Noncompartmental analysis, 46
   urine data, 53
Nonparametric superposition, 105
   computing steady-state effect, 108

O
Overlay, 25

P
Pharmacodynamic modeling, 102
Pharmacokinetic modeling
   model options, 84
Phoenix IVIVC, 133
Predict PK profiles for the test formulation, 141

S
Select the PK data and the PK dosing data, 138
Semicompartmental modeling, 95, 98
   chart output, 100
   workbook output, 99
Set up the dissolution model, 142
Sparse data example, 53
SparseSamplingChaioYeh.pmo, 53
Steady-state
   computing effect value, 108
Summary statistics, 148

T
Tables
   final parameters, 145
   joining raw data and results, 150
   summary statistics, 148
   template 3, 145
   template 9, 150
The IVIVC Workflow, 133

U
Units
   braces around, 96
Urine.pmo, 53
