QG Course Manual January 2008 Version 5.1


Applied Geostatistics
for Geologists & Mining Engineers

John Vann

[Cover figure: scatter plot of 'true' block grade Z_V (Y-axis) against estimated block grade Z*_V (X-axis), showing the Y = X line, the regression of Y|X, the means of both variables, a cut off z_c on each axis, and the resulting quadrants I to IV.]

QG Quantitative Group

Our Skills On Your Team www.qgroup.net.au

Copyright © 2001, 2003, 2004, 2005, 2007, 2008

This document is copyright. All rights are reserved. No part of this document may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior written permission of Quantitative Geoscience Pty Ltd (trading as 'Quantitative Group' and 'QG').

Preliminary Edition September 2001
Second Edition January 2003
Third Edition January 2004
Third Edition with minor revisions June 2005, December 2005
Fourth Edition February 2007
Fourth Edition with minor revisions January 2008

Additional, inexpensive, bound copies of this manual can be obtained by contacting QG.
PO Box 1304, Fremantle, Western Australia 6959
[email protected]
tel +61 (0) 8 9433 3511
fax +61 (0) 8 9433 3611

QG Quantitative Group ABN 30 095 494 947

Our Skills On Your Team www.qgroup.net.au

Contents

COPYRIGHT  I
CONTENTS  1

1: INTRODUCTION  7
ACKNOWLEDGEMENTS  8
PREREQUISITES  8

2: RESOURCE ESTIMATION CONCEPTS  9
DECISION MAKING AND RESOURCE ESTIMATION  9
SAMPLE QUALITY  10
GEOLOGY  11
OTHER FACTORS  11
ESTIMATION AT DIFFERENT STAGES OF A PROJECT  11
First Evaluation (Reconnaissance)  11
FIRST SYSTEMATIC SAMPLING  12
Precision and Accuracy  12
Sample Type  13
Resource Classification  14
Recoverable Resources at the Early Stage  14
INFILL SAMPLING  15
LOCAL ESTIMATION OF IN SITU RESOURCES  15
ESTIMATION OF RECOVERABLE RESOURCES  17
"Support" of the Selection Units  18
LEVEL OF INFORMATION  21
ESTIMATION OF RESERVES  23
SOME 'CLASSICAL' RESOURCE ESTIMATION TECHNIQUES  23
Polygonal Methods  23
'Sectional' Methods  24
Triangular Methods  26
Inverse Distance Weighting (IDW) Methods  26
KRIGING  28
SYSTEMATIC RESOURCE ESTIMATION PRACTICE  28
FORMALISE ESTIMATION PROCEDURES  28
ALLOCATE RESPONSIBILITIES  29
DOCUMENT DECISION MAKING STEPS CAREFULLY  29
GEOLOGICAL MODELS  30
UPDATE AND REVISE ESTIMATION MODELS  30
'BLACK BOX' APPROACHES  31
A FINAL WARNING  31

3: STATISTICS  33
GEOSTATISTICS AND STATISTICS  33
SOME PRELIMINARY DEFINITIONS  34
UNIVERSE  34
POPULATION  34
SAMPLING UNIT  34
SUPPORT  34
NOTIONS OF PROBABILITY  35
EVENTS  35
PROBABILITY  35
MULTIPLICATION RULE AND INDEPENDENCE  35
CONDITIONAL PROBABILITY  36
RANDOM VARIABLES AND DISTRIBUTIONS  37
RANDOM VARIABLES  37
THE CUMULATIVE DISTRIBUTION FUNCTION  37
THE HISTOGRAM  39
How many Classes?  39
MOMENTS AND EXPECTED VALUE  40
EXPECTED VALUE  40
MOMENTS  40
The Mean (and the Median)  41
The Variance  42
Properties of the Variance  43
Measuring Dispersion  43
Standard Deviation  43
Coefficient of Variation  44
Other Moments  44
Skewness  44
Kurtosis  45
THE BOX PLOT  46
COVARIANCE AND CORRELATION  46
LINEAR REGRESSION  47
STATISTICAL TESTS  49
T-TESTS  51
MANN-WHITNEY TEST  51
WILCOXON MATCHED PAIRS TEST  51
COMMON DISTRIBUTIONS  51
GAUSSIAN (NORMAL) DISTRIBUTION  51
LOGNORMAL DISTRIBUTION  52
DEFINITION  52
Testing for Lognormality/Normality  53
Probability Plotting  53
Q-Q Plot  53
Chi-square 'Goodness of Fit' Test  54
Three Parameter Lognormal Distribution  54
Sichel's t-Estimator  55

4: SAMPLING  56
WHAT IS THE OBJECTIVE OF SAMPLING?  56
EQUIPROBABLE SAMPLING  57
A SIMPLE STATEMENT OF THE PROBLEM  57
TYPES OF SAMPLING IN OPEN PIT GRADE CONTROL  58
TESTING DIFFERENT APPROACHES TO SAMPLING  59
BLAST HOLES (BH)  59
General Characteristics of BH Samples  60
Approaches to BH Sampling  61
Other Considerations  62
REVERSE CIRCULATION DRILLING  62
General Characteristics of RC Samples  63
Approaches to Sampling RC  64
Automation of Sampling  64
Rules of a Good Riffle Splitter  65
DITCH WITCH SAMPLING  66
CHANNEL SAMPLING FOR OPEN PIT MINING  66
UNDERGROUND GRADE CONTROL SAMPLING  67
FACE SAMPLING  68
DRILLING METHODS  68
OTHER CONSIDERATIONS  68
The Role of Geostatistics  69
PRACTICAL INTRODUCTION TO SAMPLING THEORY  70
COMPONENTS OF THE TOTAL SAMPLING ERROR  70
GY'S THEORY OF FUNDAMENTAL SAMPLING ERROR  71
A Simplification  73
MORE ABOUT THE LIBERATION FACTOR  73
FRANÇOIS-BONGARÇON'S MODIFIED SAMPLING THEORY  74
EXPERIMENTAL CALIBRATION  76
PRACTICAL IMPLEMENTATION  77
Sample Nomograms  77
EXAMPLE CALCULATIONS FOR A SAMPLING PROTOCOL  79
SAMPLING PRACTICE FOR GRADE CONTROL  81

5: SPATIAL VARIATION  83
'RANDOMNESS' AND OREBODIES  83
DETERMINISTIC APPROACHES  83
Trend Surfaces  84
PROBABILISTIC MODELS  85
'RANDOMNESS'  86
Coins and Dice  87
THE GEOSTATISTICAL APPROACH  87
THE DUAL ASPECTS OF REGIONALISED VARIABLES  88
REGIONALISED VARIABLES: CONCEPTUAL BACKGROUND  88
RANDOM FUNCTIONS  88
STATIONARITY  90
STRICT STATIONARITY  90
WEAK OR 2ND ORDER STATIONARITY  90
THE INTRINSIC HYPOTHESIS  90
THE STATIONARITY DECISION  91
THE VARIOGRAM  92
DEFINITION OF THE VARIOGRAM  92
MAIN FEATURES OF THE VARIOGRAM  93
Range and 'Zone of Influence'  93
Behaviour Near the Origin  95
Highly Continuous Behaviour ('Extreme Continuity')  96
Moderately Continuous Behaviour  96
Discontinuous Behaviour  96
Random Behaviour  97
ANISOTROPY  97
Geometric Anisotropy  97
Zonal Anisotropy  98
PRESENCE OF A DRIFT  99
PROPORTIONAL EFFECT  101
NESTED STRUCTURES  101
HOLE EFFECT  102
PERIODICITY  103

6: VARIOGRAPHY  104
THE SCIENCE AND "ART" OF VARIOGRAPHY  104
THE AIMS OF STRUCTURAL ANALYSIS  104
PRACTICAL ASPECTS OF A STRUCTURAL ANALYSIS  105
Preliminary Steps  105
Data Validation  105
Getting a Feel for the Data  105
Classical Statistics  107
HOW TO COMPUTE A VARIOGRAM  109
1-D: ALONG A LINE  109
2-D: IN A PLANE  110
3-D  111
ADDITIVITY  112
AN EXAMPLE  112
MODELS FOR VARIOGRAMS  114
NOT ANY MODEL WILL DO!  114
Admissible Linear Combinations  114
FROM A PRACTICAL VIEWPOINT  115
SOME COMMON MODELS  115
THE SPHERICAL MODEL  116
POWER MODEL  117
EXPONENTIAL MODEL  118
GAUSSIAN MODEL  118
CUBIC MODEL  119
MODELS FOR NUGGET EFFECT  120
APPARENT VS. REAL NUGGET EFFECT  121
Integration of 'Microstructures'  121
Isotropy and the Nugget Effect  122
Sampling Error and the Nugget Effect  123
Locational Error  123
COMBINING MODELS  123
ANISOTROPIC MODELS  124
GEOMETRIC ANISOTROPY  125
An Example  126
ZONAL ANISOTROPY  127
WHY NOT AUTOMATED FITTING?  128
SYSTEMATIC VARIOGRAM INTERPRETATION  128
TEN KEY STEPS WHEN LOOKING AT A VARIOGRAM  128
1. The number of pairs for each lag in the experimental variogram  129
2. Smoothness of the experimental variogram  129
3. Shape near the origin  131
4. Discontinuity at the origin—nugget effect  131
5. Is there a sill?—transitional phenomena  131
6. Assess the range  132
7. Can we see a drift?  132
8. Hole effect  133
9. Nested models  133
10. Anisotropy  133
'UNCOOPERATIVE' OR 'TROUBLESOME' VARIOGRAMS  134
CALCULATION OF THE EXPERIMENTAL VARIOGRAM  134
Theoretical Reasons  134
Definition of Stationarity  134
Geographically Distinct Populations  134
Intermixed Populations  135
HOW TO DETERMINE APPROPRIATE VARIOGRAM CALCULATION PARAMETERS  135
Lag Selection  135
Tolerances  135
Missing Values  135
EXTREME VALUES  136
OTHER APPROACHES TO CALCULATING VARIOGRAMS  136
ALTERNATIVE ESTIMATORS OF THE VARIOGRAM  136
ROBUST ESTIMATORS  136
RELATIVE VARIOGRAMS  137
Local Relative Variogram  137
General Relative Variogram  138
Pair-wise Relative Variogram  138
Sigma² i-j Relative Variogram  138
Some General Comments About Relative Variograms  139
VARIOGRAPHY OF TRANSFORMS  139
Logarithmic Transformation  140
Gaussian Transform  141
Indicator Transforms  142
A CASE STUDY OF VARIOGRAPHY  142
THE DATA  143
Advantages of Consistent Spacing  143
Histogram  144
Proportional Effect  145
VARIOGRAMS  146
Possible Non-Stationarity?  147
RELATIVE VARIOGRAMS  147
Pair-Wise Relative Variogram  147
Variograms of the Logarithmic Transform  148
EXTREME VALUES & VARIOGRAPHY  149
The Implications of the Transform  149
THE MODEL FITTED  150
Log Variograms and Relative Variograms  150
The Relative Nugget Effect and Non-Linear Transformations  151
Ranges  151
ANISOTROPY  151
Again: Possible Non-Stationarity?  152
VARIOGRAMS OF THE INDICATOR TRANSFORM  152
Why Use Indicators?  152
Selecting the Cut Off  153
Short Range Structures  154
SUMMARY OF VARIOGRAPHY  155
Variograms  155
Relative Variograms  155
Log Variograms  155
Indicator Variograms  155
CHARACTERISATION OF SPATIAL GRADE DISTRIBUTION  155
GEOLOGICAL FACTORS  156
Comparison of Geology and Variography  156

7: SUPPORT  157
WHAT IS 'SUPPORT'?  157
"DISPERSION" AS A FUNCTION OF SUPPORT  158
SUPPORT EFFECT  158
An Example  158
HOW GEOSTATISTICS CAN HELP  163
THE IMPACT FOR MINING APPLICATIONS  163
VARIANCES OF DISPERSION WITHIN A VOLUME  164
VARIANCE OF A POINT WITHIN V  164
VARIANCE OF v WITHIN V  164
KRIGE'S RELATIONSHIP  165
CHANGE OF SUPPORT—REGULARISATION  166
REGULARISATION OF THE VARIOGRAM  166
RETURNING TO OUR EXAMPLE  167

8: ESTIMATION ERROR  170
WHAT IS 'EXTENSION VARIANCE'?  170
EXTENSION VARIANCE AND ESTIMATION VARIANCE  171
THE FORMULA FOR EXTENSION VARIANCE  171
Factors Affecting the Extension Variance  173
OTHER PROPERTIES OF EXTENSION VARIANCE  174
EXTENSION VARIANCE & DISPERSION VARIANCE  174
PRACTICALITIES  176
Combination of Elementary Extension Variances  176
An Important Assumption  176
Geometry of Mineralisation  177
SAMPLING PATTERNS  178
RANDOM PATTERN  178
RANDOM STRATIFIED GRID (RSG)  179
REGULAR GRID  179

9: KRIGING  182
THE PROBLEM OF RESOURCE ESTIMATION  182
WHAT DO WE WANT FROM AN ESTIMATOR?  183
WHY KRIGING?  184
BLUE—BEST LINEAR UNBIASED ESTIMATOR  184
HOW KRIGING WORKS  185
KRIGING MADE SIMPLE?  186
THE ADVANTAGES OF A PROBABILISTIC FRAMEWORK  187
KRIGING EQUATIONS  188
Choosing the 'Best' Weights  188
The Unbiased Condition  189
Minimising the Error Variance  190
TERMS IN THE KRIGING EQUATIONS  190
The Lagrange Parameter  191
PROPERTIES OF KRIGING  194
EXACT INTERPOLATION  194
UNIQUE SOLUTION  195
KRIGING SYSTEMS DO NOT DEPEND ON THE DATA VALUES  195
COMBINING KRIGING ESTIMATES  195
INFLUENCE OF THE NUGGET EFFECT ON KRIGING WEIGHTS  195
Screen Effect  195
The Case of Low Nugget Effect, High Continuity  196
The Case of High Nugget Effect, Low Continuity  196
SIMPLE KRIGING  196
KRIGING PRACTICE  196
KRIGING NEIGHBOURHOOD ANALYSIS  197
HOW TO LOOK AT THE RESULTS OF A KRIGING  198
Make maps of the estimates  198
Check the location of very high and very low estimates  199
Look carefully at estimates near the margins of the deposit  199
Examine the estimates in the context of geology  199
Look at the kriging variance in relation to sampling spacing  199
Look at the regression slope in relation to sampling spacing  199
Examine the estimates for poorly sampled or unsampled areas  199
THE PRACTICE OF KRIGING IN OPERATING MINES  199
GRADE CONTROL  199
Why Kriging?  199
Geology First!  200
Sampling  200
The Variogram as a Tool  200
Block Estimation  200
KRIGING TECHNIQUE  201
UPPER CUTS  201

10: NON-LINEAR ESTIMATION  202
SOMETIMES, LINEAR ESTIMATION ISN'T ENOUGH  202
WHAT IS A 'LINEAR INTERPOLATOR'?  203
THE GENERAL IDEA  203
THE EXAMPLE OF IDW  204
ORDINARY KRIGING  204
NON-LINEAR  204
NON-LINEAR INTERPOLATORS  205
LIMITATIONS OF LINEAR INTERPOLATORS  205
AVAILABLE METHODS  206
SUPPORT EFFECT  207
DEFINITION  207
THE NECESSITY FOR CHANGE OF SUPPORT  207
RECOVERABLE RESOURCES  208
A SUMMARY OF MAIN NON-LINEAR METHODS  208
INDICATORS  208
INDICATOR KRIGING  209
MULTIPLE INDICATOR KRIGING  209
MEDIAN INDICATOR KRIGING  210
PROBABILITY KRIGING  210
INDICATOR COKRIGING AND DISJUNCTIVE KRIGING  211
RESIDUAL INDICATOR KRIGING  211
ISOFACTORIAL DISJUNCTIVE KRIGING  212
UNIFORM CONDITIONING  213
LOGNORMAL KRIGING  213
MULTIGAUSSIAN KRIGING  213
CONCLUSIONS & RECOMMENDATIONS  214

Chapter 1: Introduction

"Real knowledge is to know the extent of one's ignorance."
Confucius

I arrived at geostatistics like most practitioners – through necessity. As an exploration geologist, I had the task of estimating a resource on a newly discovered gold deposit. I had a 'gut feeling' that there were significant uncertainties involved, but no matter how many variations of polygonal shapes I tried to construct, it was clear to me that I was grossly over-simplifying things. Worse still, I could not make a connection between the methods I was using and the geological character of the grade distribution I was faced with.

Recognition of the incompleteness of our knowledge of a deposit (and thus recognition of the uncertainty in the problem) is the primary motivation for geostatistics. With geostatistical tools, we can incorporate uncertainty into our modelling approaches. Although geostatistical tools are now available within many computer packages and used widely in the mining industry, in my experience few practitioners feel truly comfortable with these methods. The range of textbooks available is either overly mathematical (fine for geostatisticians, but not so user-friendly for people at the mine) or too simplified. Unfortunately, some of the best texts now available do not deal extensively with geological and mining problems.

This manual evolved from a set of notes prepared for a short course in Applied Mining Geostatistics, which has now been presented on about 150 occasions in 9 countries, to about 1,500 participants, since 1994. These participants were mainly mine geologists, exploration geologists and mining engineers who were interested in gaining a sound conceptual background to enable application of geostatistical tools to real problems in the mining industry. A sprinkling of surveyors, managers, chemists, metallurgists, computer scientists and mathematicians has also attended. But this course was always, from inception, specifically targeted at professional geologists and mining engineers faced with the practicalities of resource estimation.

Because the course has been run many times there has been gradual change in the manner and order of presentation. This needed to be reflected in the accompanying course manual. In particular, materials on stationarity and estimation


were revised, and some topics (kriging neighbourhood analysis, introductory non-linear materials) were added or greatly expanded.

Acknowledgements

I'm very much indebted to all those who have attended this course to date. Teaching always benefits from a critical audience. This manual, in various incarnations, benefited from discussions with our professional colleagues: Daniel Guibal, Henri Sanguinetti, Michael Humphreys, Olivier Bertoli and Tony Wesson. Dominique François-Bongarçon kindly reviewed sections of the previous manual and his constructive comments have been put to good use during this revision. Although modified, some sections and specific examples still owe debts to previous courses authored by Daniel Guibal, Margaret Armstrong and Pierre Delfiner. Colleagues and teachers originally shaped my own understanding of geostatistics: in particular, Peter Dowd, Alan Royle, Henri Sans, Pedro Carrasco, Olivier Bertoli and, most importantly, Daniel Guibal. Finally, I thank the many clients (geologists and engineers) who have worked together with us to try and solve real problems, and thus motivated the development of clear explanations of geostatistical concepts.

Prerequisites

Beyond basic numeracy and some mining and geological vocabulary, there were no prerequisites for the short course. Concepts and skills are stressed. I believe it is a fallacy that good application of geostatistics requires the user to memorise and understand reams of impenetrable formulae. However, it is dangerous to estimate resources and reserves using any methodology without understanding the underlying assumptions and mechanics of the technique. Consequently, some mathematics is unavoidable, but there is nothing that should unduly panic a science or engineering graduate. A reference list is included at the end of these notes so that you can take your interest in geostatistics further, if you wish.

John Vann
Fremantle, February 2007



Chapter 2: Resource Estimation Concepts

"The difficulties raised by the estimation problem are not of a mathematical nature. We encounter them at two extreme levels: at the initial stage of conceptualisation, i.e. of model choice, and at the final stage of practical application."
Georges Matheron, "Estimating and Choosing", 1989

Decision Making and Resource Estimation

Estimating mineral resources from drill hole data is an activity that is fraught with difficulty. Most classical statisticians would regard the data for any ore reserve estimate as dangerously inadequate. This would often apply even in cases where the geologist felt that the deposit had been 'over-drilled'.

The data for resource estimation are always fragmentary in nature. We have samples separated by distances that are very large in comparison to the sample dimensions. However, the information we have increases with time, as more samples are collected and geological knowledge improves. During the life of a project, there are generally several stages, each of which corresponds to a different level of knowledge of the mineralisation. At the conclusion of each drilling campaign, three decisions are possible:

1. Stop everything, if we consider that the mineralisation cannot be mined economically under current conditions.

2. Mine the deposit immediately, if we assess that this will be profitable.

3. Begin a new phase of exploration, if the deposit is still poorly known after the previous phase, or if we consider the economics to be marginal.


Because mining investments are generally large, the economic consequences of making this choice are very important. Therefore, it is crucial that we evaluate the mineralisation and its potential very carefully. In particular, it is critical that we make the most of all the existing information at any decision step.

An Example: Just how much data did we collect?

Resource evaluation is therefore a process, not an event. Resources will always be estimated sequentially, and in general with more information as the project progresses. Geostatistics provides a consistent conceptual framework for resource evaluation that makes maximal use of available information. It has already been mentioned that the amount of information available increases with time. Even so, the amount of data employed for the final resource estimation prior to commencement of mining still constitutes extremely scarce information.

For example: a large base metal deposit is drilled on 100m x 100m centres by 38 mm (BQ) cores, of which half is crushed for analysis. This is not an uncommon spacing for a huge porphyry copper deposit, for example. The density of sampling, expressed as a proportion of the total volume of the deposit, is around 1.5 x 10^-8. Even with 10m x 10m drilling, which constitutes a very close pattern and would rarely be achieved prior to the grade control stage, the density would still be very low: around 1.5 x 10^-6. This is less than one tonne of sampling per million tonnes of the deposit, and by any statistical standards is a very small sample (in fact a sample representing only 0.0000015% of the deposit)! Performing this calculation (tonnes of sample vs. tonnes of mineralisation) is recommended for any geologist or engineer dealing with a resource estimate!
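A quick back-of-envelope check of such a ratio, as a minimal sketch in Python. The grid spacing and core diameter follow the example above; the assumptions of vertical holes, full core recovery and uniform coverage are mine, so the result should be read as an order-of-magnitude figure only.

    import math

    grid_x = grid_y = 100.0    # drill spacing in metres, as in the example
    core_diam = 0.038          # BQ core diameter in metres, as quoted
    assay_fraction = 0.5       # half the core is crushed for analysis

    # Volume assayed per vertical metre of one grid cell vs. rock volume
    core_area = math.pi * (core_diam / 2.0) ** 2
    density = assay_fraction * core_area / (grid_x * grid_y)

    print(f"volumetric sampling density: {density:.1e}")
    print(f"tonnes assayed per Mt of rock: {density * 1e6:.3f} t")

With these (generous) assumptions the density is of the same order as the figure quoted above; tightening the grid to 10m x 10m multiplies it by 100, still a vanishingly small fraction of the deposit.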

Sample Quality

The statistical problem of very small (volumetric) sampling is not the only one we face—what is the quality of our sampling?

The recovery might be poor in some places. Different drilling techniques deliver samples of different quality. Handling of samples may affect data reliability. Samples below the water table will be less reliable when using any percussion drilling technique (including RC). Biases may occur in both sample splitting and analyses. The aliquot for assay is invariably very much smaller than the ½ core or RC split, so the correct reduction of samples in the lab is a critical problem. All these factors will degrade the representivity of a sample. The degree of degradation can be dramatic. For further details of sampling, refer to Dominique François-Bongarçon's papers or to Pitard (1990).


Geology

It hardly needs to be said that quality geological information is vital. More than this, it is essential that the model used in the resource assessment stage is based on the best possible interpretation of the available data, both assays and qualitative observation. The geological model needs to be geometrically consistent and to adequately capture the key geometric factors that influence the distribution of potentially economic grades.

Models: A scientific model is judged by how well it works as a predictive tool, not by aesthetics.

In general, the more complex the geology is, the more important its role in resource assessment will be. The geometry of the mineralisation is often the main determining factor when estimating tonnage. A clear distinction needs to be made between those factors of interest in developing genetic and exploration models and those affecting the distribution of ore at a scale of interest to the mining engineer or mine geologist.

Other Factors

The most important remaining factor is the mining method. There are a number of aspects to the consideration of mining factors when estimating a mineral resource or reserve. These have a critical impact upon estimation and they are considered later in some detail.

Estimation at Different Stages of a Project

Figure 1.1 Clustering: the contrast between an un-clustered and a clustered sampling pattern, in 2D.

First Evaluation (Reconnaissance)

This is the initial stage of a potential mining project:

• Information is essentially qualitative and geological.

• Few or sometimes no samples.

• Any samples existing tend to be preferentially located, and therefore clustered.

The issue of clustering deserves some discussion. Clustering of values in our data set is an almost universal source of bias in resource data sets. Figure 1.1 illustrates the contrast between a clustered and 'un-clustered' pattern, in 2D. Note that the clustering of samples in high grade areas of mineralisation is common. The impact of clustering in most resource data sets is therefore to bias the mean grade high (i.e. overstate the average grade).
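The manual returns to corrections for this later; purely as an illustration of the bias itself, the minimal sketch below uses invented data (a broad first-pass pattern plus a tight cluster of follow-up holes in a high grade pocket) and applies simple cell declustering, where each sample is weighted by the inverse of the number of samples sharing its cell. The 100 m cell size is an arbitrary choice.

    import numpy as np

    rng = np.random.default_rng(7)

    # 50 broadly spaced holes, plus 50 clustered follow-up holes in a pocket
    bg_xy = np.column_stack([rng.uniform(0, 1000, 50), rng.uniform(0, 1000, 50)])
    cl_xy = np.column_stack([rng.normal(200, 30, 50), rng.normal(300, 30, 50)])
    xy = np.vstack([bg_xy, cl_xy])
    grade = np.concatenate([rng.lognormal(0.0, 0.6, 50),    # background grades
                            rng.lognormal(1.2, 0.6, 50)])   # high grade pocket

    # Cell declustering: weight = 1 / (number of samples in the same cell)
    cell = 100.0
    ij = np.floor(xy / cell).astype(int)
    _, inv, counts = np.unique(ij, axis=0, return_inverse=True, return_counts=True)
    weights = 1.0 / counts[inv]

    print(f"naive mean:       {grade.mean():.2f}")
    print(f"declustered mean: {np.average(grade, weights=weights):.2f}")

The declustered mean sits well below the naive mean because the clustered high grade holes share, rather than multiply, their influence.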

Early estimation: At very early stages, block modelling may be an inappropriate tool.

Any mineral resources evaluation at this stage will necessarily be unreliable. The main objective is to see, on a comparative basis, whether further work is warranted. Decisions will be made on geological and other specific technical qualitative grounds. Local block estimation will be of very little help: this is essentially a first geological appraisal. However, the variogram (which we will become very familiar with during this course) can be an excellent diagnostic tool even at the earliest stage.

First Systematic Sampling

Once we have located mineralisation, a systematic sampling of the zone of interest is generally undertaken.

The aim is to get a first quantitative estimation of global in situ resources. This includes:

• The best possible definition of the limits of the mineralisation (geometry).

• Estimation of the global mean grade.

• Estimation of the global in situ tonnage.

Both qualitative (geological) and quantitative (assay) data are employed. At this global level, there is no problem of choosing an estimator: provided that the sampling is truly systematic, there is no risk of estimation bias¹.

Precision and Accuracy

Precision is a measure employed in traditional statistics to quantify the reproducibility of an estimate (or observation). Precision is proportional to the variance of the errors (or, equivalently, to the standard deviation of the errors). As such, the conceptual equivalent in resource estimation is the estimation variance, which is discussed at length in Chapter 8. It is possible to have a precise estimate that has a poor accuracy: in such a case, the measurements are closely reproduced, but biased. Accuracy measures how close, on average, an estimate is to reality. Accuracy can thus be defined as a relative absence of bias.

¹ This is not true, of course, if the data we employ are biased because of drilling, sampling or assaying problems!


No estimation methodology can adequately counter biased sampling or assaying: Garbage in—Garbage out.

Note that the precision of the estimate is an important consideration. It depends upon:

• The number and type of samples (i.e. the sampling grid geometry).

• The regularity and continuity of the mineralisation (i.e. the variability of the in situ grades).

It can be easily established from geostatistical theory that a geologist or engineer who feels that a regular grid results in a more precise estimate is quite right, and we will tackle this subject in more detail later on. At the stage of estimation of resources with the first systematic data, there are thus several important factors to consider.

Figure 1.2 Schematic recall of precision and accuracy concepts (four cases: precise and accurate; imprecise and accurate; imprecise and inaccurate; precise and inaccurate).

Sample Type

The type of sample is defined by the physical means used to obtain it: drilling vs. channelling, or drill core vs. percussion chips, for example. The representivity of the sample type is critically important. It is essential to consider the volume of the samples, their geometry and orientation: in short, what geostatisticians call the support of the samples.


The concept of sample support is central to geostatistics. In fact, it represents one of the most important contributions of geostatistical theory to practical resource estimation. We will consider this topic in detail in Chapter 5.

Resource Classification

Resources are generally classified, and any classification system is in part establishing a nomenclature relating to their precision. Categorisation nomenclature varies between countries, although in the past few years there has been some measure of convergence between North American, British and Australasian systems. The JORC² Code is widely regarded as a model. A classical distinction employed, for example, is the familiar one between Measured, Indicated and Inferred resources.

The JORC Code: The JORC Code is not designed to give guidance on how to estimate; rather, it deals with how to report.

The JORC definitions do not give guidance on several important factors that influence mineral deposit estimation. The JORC definitions are very much framed in terms of the amount of exploratory samplings and openings and a qualitative assessment of 'continuity', rather than in terms of quantifiable continuity of mineralisation grade. The subject of geostatistics addresses the problem of quantifying grade continuity. The continuity of mineralisation may be quite distinct from continuity of its geometry ("geological continuity"). This is an important distinction: a quartz reef may be clearly "continuous" from hole to hole at a drill spacing of 100m x 50m, but the associated gold grades may be totally uncorrelatable. This distinction is recognised in the 1999 edition of the JORC Code.

Recoverable Resources at the Early Stage

The selective mining unit (SMU) is the smallest mineable unit upon which ore-waste selection can be made. The size of the SMU defines the selectivity of the mining operation. Because ore recovery in an operational mine is a function of the size and geometry of the SMU, it is important that estimation of recoverable resources takes the intended SMU characteristics into account. Geostatistics provides techniques that make it possible to make global estimation of recoverable resources even at this early stage, for example, by the Discrete Gaussian Model (see Vann and Sans, 1995). This is a technique to estimate the global recoverable resource, and it can be applied as soon as we can reasonably define the variogram, histogram and geometry of the mineralisation. However, at this early stage, estimating local recoverable resources, i.e. determining the proportion of SMUs selected as ore—at a specified cut off grade—for local panels³, is generally not feasible. Because of this, more closely spaced infill drilling is generally required.

² Joint Ore Reserves Committee of the AusIMM, AIG and MCA.

³ When discussing local versus global resources we are using terms that are in general use by geostatisticians. Local resources estimate the grade of individual blocks or "panels" in the deposit. Global resources, on the other hand, are estimated for the entire deposit, or quite large zones, even though, at early stages, we may not always be able to locate them in space. For example we may estimate a single grade/tonnage—or make several estimates of grade and tonnage at different cut off grades—for the whole deposit.


Infill Sampling

This is the stage where we have drilled the deposit more closely and therefore:

1. We have more samples, so we have a better definition of the grade histogram.

2. We can thus better define the spatial distribution of grade.

3. We usually have improved geological models and thus better definition of global tonnage.

This stage of estimation is usually critical. It may be repeated several times, each time obtaining more sampling, if required. We will use the output to make mine designs and run technical and financial feasibility calculations. It is difficult to estimate resources at this stage and the consequences of mistakes are high. It is rare for a mineral deposit to be completely extracted as ore during a mining operation, for two main reasons:

1. Technical: these relate to accessibility of ore grade material.

2. Economic: we must generally define some material as ore and the remainder to be waste. In other words, we make a selection of material to be processed as ore and material to be directed to the waste dumps.

Because we will generally only recover as ore a proportion of the in situ mineralisation, we must define two corresponding types of estimation of resources:

• In situ resources: characterisation of the rich and poor zones of the deposit with no account of the selective mining method to be employed.

• Recoverable resources: characterisation of the resources as a function of the selectivity of the mining method and economic criteria.

Local Estimation of In Situ Resources

Unlike global estimation of in situ resources, for local estimation of in situ resources the determination of the particular type of estimator to be used is an important decision.

Many different local estimators can be used, each providing estimates of blocks or panels of ore that are built from the local sample information. We examine a few of these in the next section. An important characteristic of an estimate is that it should be unbiased. An unbiased estimator does not cause systematic over or under estimation of resources. We also want our estimator to make the "best" use of existing information, and in this sense, we are seeking the "best" estimate. Of course, we will have to give a precise meaning to "best" in such a context.

We use estimates of a mineral resource for economic assessments, so it seems natural that we would require some characterisation of the quality of estimation. Are some areas better estimated than others? Which zones need more drilling? On which factors does the quality (or precision) of our estimate depend? Here are some of the more important factors:

• Firstly, it is intuitive that the regularity of the mineralisation is a critical factor in the reliability or quality of our local resource estimates. Given the same amount of sample information, a more continuous mineralisation will allow better local estimation results than an erratic mineralisation.

• Secondly, the sampling pattern used is an important factor.

If we consider the two sampling geometries shown in figures 1.3 and 1.4 it seems sensible that the first case (figure 1.3) should allow a better (more precise) estimate of the block than the second case. The sampling pattern (sometimes referred to as the "sampling geometry") has a strong influence on the quality of local estimation.

Quantitative Group

Short Course Manual

CH 2 – RESOURCE ESTIMATION

We will better estimate a block by a single sample if that sample is within the block. More particularly, the optimal position for a sample would logically be in the centre of the block to be estimated (and this is borne out by geostatistical theory, as we will see later). Furthermore, it is intuitive that an 'even spread' of samples around the block will lead to better estimation than a situation where all available data is clustered (see figure 1.4).

Figure 1.3 Two possible sampling geometries

Figure 1.4 Two more possible sampling geometries

• Finally, the geometry of the block or panel to be estimated plays a role in the estimation quality. This includes the relative dimensions of the block with respect to the sample spacing.

If we take into account the above factors it is possible to assess the quality of any estimator and thus select the one that will best meet our quality criteria.

Estimation of Recoverable Resources

To follow the JORC guidelines, here we use the term recoverable resources in preference to the more usual geostatistical usage recoverable reserves, because the former term has no implication of economic or technical feasibility. To date, the


JORC code does not explicitly deal with the concept of recoverability (in the sense used by geostatisticians). The recoverable resources are, as stated earlier, a function of the selective mining unit (SMU) we employ. Recoverable resources are also affected by the cut off grade we assume and other technical parameters (e.g. minimum mining width). In an open pit situation there are clear technical constraints, in the sense that before we can extract a given block as ore, we must mine all the blocks above it. There are many factors involved in determining the recoverability of resources, but the most important two, as far as resource estimation is concerned, are introduced below:

1. The support of the selection units.

2. The level of information.

"Support" of the Selection Units Support Effect Support effect is a term used by geostatisticians to describe influence of support on the statistics of samples or other volumes.

Support is a term used by geostatisticians to describe the size, geometry and orientation of the selection unit. The smaller the selection unit, the better able we are to discriminate ore from waste, but the higher our mining costs will be. Figure 1.5 shows the influence of the selection support on the histogram of grades. Note that V represents a larger support than v. For example,V could be a 10m x 10m x 10m SMU andv could be a smaller block, say 5m x 5m x 5m.

Figure 1.5 The influence of "support"

There are several important things to note about the two histograms shown in figure 1.5:

• The global mean m for both distributions is the same. The mean grades of large blocks and small blocks are identical.

• The histogram of the smaller support is more dispersed, i.e. it is more spread out along the X-axis (which measures grade). This means there are more high grades and more low grades measured on smaller supports in comparison to large supports. The dispersion is measured by the variance, denoted σ² in the illustration. This is hardly surprising, because larger supports represent groups of smaller supports, and thus averaging of smaller supports. Extreme grades therefore tend to be smoothed out when we consider larger supports. We expect a higher proportion of the samples to have intermediate grades when considering larger support.

• If we apply the same cut off grade (z_c) to both histograms, and if this cut off is above the mean, then there is more metal above cut off for the smaller support. Note that the proportion is the area under the curve. Again, this makes intuitive sense, because using smaller supports allows us to avoid diluting the higher grade material with unavoidable lower grade material. This is directly related to the concept of "selectivity"—selecting smaller mining units (SMUs) results in us extracting a higher proportion of the in situ metal.

• However, if we apply a cut off (z_c) that is less than the mean to both histograms, the situation is reversed: on samples we define more waste than exists on block support. (Both cut off situations are illustrated numerically in the sketch following this list.)
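A minimal simulation of the two cut off situations, assuming two lognormal histograms that share the mean m = 1 but have different dispersions, standing in for small and large supports (all figures invented):

    import numpy as np

    rng = np.random.default_rng(3)

    def lognormal_fixed_mean(mean, sd, n):
        """Lognormal sample with a specified mean and standard deviation."""
        s2 = np.log(1.0 + (sd / mean) ** 2)
        return rng.lognormal(np.log(mean) - s2 / 2.0, np.sqrt(s2), n)

    m = 1.0
    small = lognormal_fixed_mean(m, 0.9, 500_000)   # more dispersed: small support
    large = lognormal_fixed_mean(m, 0.5, 500_000)   # same mean, less spread

    for zc in (1.5, 0.5):                           # cut off above, then below, m
        for name, x in (("small support", small), ("large support", large)):
            ore = x >= zc
            print(f"zc={zc}  {name}: ore fraction={ore.mean():.1%}  "
                  f"metal above cut off per tonne={x[ore].sum() / x.size:.3f}")

At z_c = 1.5 (above the mean) the small support carries more metal above cut off; at z_c = 0.5 (below the mean) it classifies a larger fraction as waste, exactly the reversal described above.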

The physical significance of support is completely familiar to any geologist who has composited drill hole data, for example taking a file of 1m samples and producing a file of 2m composites. The following example shows the impact of such compositing (often referred to as regularisation by geostatisticians) on statistics.


AU (1M): 3.40  2.10  2.00  1.00  1.30  1.90  12.20  3.70  5.10  2.30  3.20  2.10  3.00  2.00  6.00  1.10
AU (2M): 2.75  1.50  1.60  7.95  3.70  2.65  2.50  3.55

            AU (1M)   AU (2M)
mean          3.28      3.28
variance      7.56      4.19
std dev       2.75      2.05
min           1.00      1.50
max          12.20      7.95
range        11.20      6.45

Table 1.1 1m sample set and corresponding 2m composites

Figure 1.6 Smoothing impact of compositing (down-hole plot of the 1m samples against the 2m composites).


Level of Information

Most mines employ sampling information from production blast holes—or other specific close-spaced grade control drilling/sampling—to allow ore-waste classification. The intention of selective mining is to truck to the mill only those blocks that have average grade greater than a cut off (ore) and regard the remaining blocks as waste. However, because the grades upon which we base this selection are estimates, they are subject to smoothing and error—it is unavoidable that some misclassification will occur, i.e.

• we will send some waste blocks to the mill (because their estimated grades indicate that they are ore), and

• we will send some ore blocks to the waste dumps (because their estimated grades indicate that they are waste).

Both of these misclassifications decrease the mean grade of the recovered ore and thus reduce the profit of the operation. No matter how closely we grade control sample, our information level cannot be perfect and thus misclassification is unavoidable. Sending ore to waste and waste to the mill is unavoidable because we make allocation decisions on the basis of estimates, not reality. It is therefore important that any strategy for selective mining aims at optimal ore recovery, in the sense that it should minimise the amount of misclassification of ore and waste. The information effect refers to the relationship between the amount and spacing of sampling available at the time of ore-waste classification and the number and magnitude of misclassification errors when making ore-waste allocation.

Information Effect: The information effect concerns our lack of information at the time when we must discriminate between ore and waste blocks.

The information effect concerns our lack of information at the time when we must discriminate between ore and waste blocks. We will only have estimates for the block grades instead of the real or "true" grades. It should be clearly understood that estimates are always smoothed relative to reality. If we could select on the basis of true grades we would make no allocation errors: we would correctly classify each block. In essence, the problem is that we select on the basis of smoothed estimates, but we feed the mill with the true (unsmoothed) grades. We can illustrate the information effect using a scatter diagram of true grades (Y-axis) and estimated grades (X-axis) as shown in figure 1.7. Ideally, each block estimate will be equal to the corresponding true grade, and all the points will plot on the line Y=X. Because of the information effect, in practice, this is never the case, and the points will plot as a "cloud", represented here as an ellipse. The area of the plot can be divided into four quadrants depending upon the classification (or misclassification) of blocks as ore or waste:


(I). The true grade of the block is above cut off, but we estimate the grade to be below cut off. We therefore send ore to the waste dump.

(II). The true grade is above cut off, and we estimate the grade to be above cut off. In this case we correctly classify the block and send profitable ore to the mill.

(III). The true grade is below cut off, but we estimate the grade to be above cut off. We therefore send unpayable waste to our mill.

(IV). The true grade is below cut off, and we estimate the grade to be below cut off. We thus correctly allocate these blocks to the waste dump.

Figure 1.7 Information effect (see discussion in text)

Conditional Bias: Lack of 'perfect' or 'exhaustive' information implies that correlation between estimates and true block grades will be imperfect. This in turn implies overstatement of high grades and understatement of low grades, on average.

Clearly we wish to minimise I and III and maximise II and IV. It also follows that we wish to have an estimator that results in a scatter plot with the long axis of the ellipse at approximately 45°. This is because any deviation from this will result in increasing conditional bias. We also want to make this ellipse as "thin" as possible, to reduce the number of allocation errors. However, it is important to understand that this ellipse cannot ever thin out to a line (i.e. Y=X), because for this to happen we would need to know the exact true grade of every location in the mineralisation! The issue of conditional bias is also seen in figure 1.7. This figure shows a case where the estimate is globally unbiased (i.e. the mean of the estimates is equal to the mean of the true grades). The expected true grade of blocks that have a given estimated grade can be plotted for a range of estimated grades. If we draw a curve through the resulting points, we obtain the conditional expectation shown on figure 1.7.
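A small simulation makes the quadrants and the conditional bias concrete. Everything here is invented: lognormal 'true' block grades, and a globally unbiased estimate built as truth plus independent error (much as a polygonal estimate behaves):

    import numpy as np

    rng = np.random.default_rng(42)
    n = 200_000
    true = rng.lognormal(0.0, 0.7, n)            # hypothetical true block grades
    est = true + rng.normal(0.0, 0.8, n)         # globally unbiased, but noisy

    zc = 1.5 * true.mean()                       # a cut off above the mean
    for label, mask in (
        ("I   ore sent to waste ", (true >= zc) & (est < zc)),
        ("II  ore sent to mill  ", (true >= zc) & (est >= zc)),
        ("III waste sent to mill", (true < zc) & (est >= zc)),
        ("IV  waste to dump     ", (true < zc) & (est < zc)),
    ):
        print(f"{label}: {mask.mean():.1%}")

    # Conditional bias: the blocks we call 'ore' assay lower than predicted,
    # even though the estimator has no global bias.
    milled = est >= zc
    print(f"milled blocks: estimated mean={est[milled].mean():.2f}  "
          f"true mean={true[milled].mean():.2f}")

Selecting on the estimate guarantees that the milled parcels' true grade falls short of their estimated grade; this is the conditional bias sketched by the regression line in figure 1.7.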


In reality, we never know the true grades of blocks: the ultimate information we have are our smoothed estimates. Therefore, we want our estimates to be as conditionally unbiased as possible; in other words, we wish the conditional expectation curve in figure 1.7 to deviate as little as possible from the 45° bisector. However, the regression line of Z_V on Z*_V will never be at 45° if there is any degree of scatter in the plot. We will note below that polygonal estimators are always highly conditionally biased.

Estimation of Reserves

Resources that are estimated with a "sufficient" degree of confidence (and the JORC code loosely defines this in terms of sampling density and types of sampling) may be classified as reserves if, and only if:

• A study of technical and economic feasibility (including mine planning and processing feasibility) has been completed, and

• The reserve is stated in terms of mineable tonnage and grade.

Clearly the second factor (and, in part, the technical requirement of the first factor) implies that a recoverable resource be estimated as the basis for an ore reserve. Recoverable reserves account for the support effect in addition to other technical factors (dilution, mining methods, constraints) and any economic considerations.

Some 'Classical' Resource Estimation Techniques

A common characteristic of all the methods considered below is that these estimators are all linear combinations of the data.

Polygonal Methods

Polygonal methods have the longest history of usage for mining estimation problems. Each sample is located at the centre of a polygon defined by the bisectors of segments determined by sample pairs (see figure 1.8). The mean grade of each polygon is estimated by the grade of the central sample. This estimator has the advantage of being quite simple to build manually. In addition, given a sampling pattern that is not clustered, the polygonal method will result in an unbiased estimate of the global resources. However, as far as local estimation is concerned, the polygonal method is very poor, because:

1. It does not take into account the spatial correlation of the mineralisation.

2. It does not use any data other than the centrally located sample.

3. It generally results in severe conditional bias.

Figure 1.8 The idea of polygonal estimation

Polygonal Method: This method is inadvisable for most practical mining situations because it effectively maximises conditional bias.

In particular, it should be noted that polygonal estimators are heavily conditionally biased when used to estimate recoverable resources; as a matter of fact, the histogram of the estimates is identical to that of the samples. The support effect may have quite a marked impact on the grade above a given cut off, but it is not accounted for at all. This is one of the reasons that most polygonal estimates (especially for gold) often require heavy cutting of sample grades.
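To see why, here is the polygonal rule reduced to its essentials, with invented data: every grid node takes the grade of its nearest sample, which is exactly the polygon-of-influence assignment.

    import numpy as np

    rng = np.random.default_rng(0)
    samples_xy = rng.uniform(0, 500, size=(40, 2))   # hypothetical collars
    samples_z = rng.lognormal(0.5, 0.8, size=40)     # hypothetical grades

    # Grid of points to estimate; each takes its nearest sample's grade
    gx, gy = np.meshgrid(np.arange(5, 500, 10), np.arange(5, 500, 10))
    nodes = np.column_stack([gx.ravel(), gy.ravel()])
    d2 = ((nodes[:, None, :] - samples_xy[None, :, :]) ** 2).sum(axis=2)
    est = samples_z[d2.argmin(axis=1)]

    # The estimates can only take the 40 sample values: no smoothing occurs,
    # so the estimate histogram is as dispersed as the sample histogram.
    print("distinct estimate values:", np.unique(est).size, "of", samples_z.size)
    print(f"sample variance={samples_z.var():.3f}  estimate variance={est.var():.3f}")

No support correction takes place: apply a cut off to these 'block' estimates and you are really applying it to raw samples.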

'Sectional' Methods

This type of estimator should only be used for global resources, although results are sometimes (inadvisably) reported section by section. Figure 1.9 illustrates the basic methodology. Sectional methods represent a variation on the idea of polygonal estimation.


Figure 1.9 "Sectional method"

The mean grades g_i of the drill samples in intersections of mineralisation are weighted by their intersection thicknesses t_i and assigned to an area defined on section. This area A is measured (traditionally with a planimeter, these days by a computer program). The grade of the section is calculated by the weighted average:

z^* = \frac{\sum_{i=1}^{N} g_i \, t_i}{\sum_{i=1}^{N} t_i}

The area of the mineralisation on this section is then projected half way to the next section to obtain a mineralisation volume. Using a manual technique this volume may be derived by simple orthogonal projection. Using computerised methods, some more sophisticated means (wireframing) may define the volume.

Sectional Method: Each section is treated independently in this method. Would you do that when interpreting the geology?

This method suffers from most of the problems inherent in the polygonal method described above, and is only applicable for global estimation of in situ resources, for the same reasons. Local estimation is not really possible, even at the level of sections. In particular, no account of support or spatial correlation can be made in a local sense.


Triangular Methods

These methods, rarely if ever used today, were the ancestors of inverse distance weighting. There are (or were) two variants:

1. The mean grade of a triangle defined by the three corner samples is estimated by the average of these three grades. When used for estimation of local in situ resources, this method has the same drawbacks as the previous one.

2. A second approach is to estimate small blocks. Any block within the triangle is estimated by a linear combination of the corner holes. The weights used are often inversely proportional to the distance of the sample to the block. Again, this method (although easily computerised) does not take into account important factors influencing the estimation.

Inverse Distance Weighting (IDW) Methods

Inverse distance weighting methods are more modern than the preceding techniques, and became quite widespread with the introduction of computers. The mineralisation to be estimated is divided into blocks, generally of equal size. The mean grade of a block is estimated by a weighted linear combination of nearby samples (see figure 1.10).

Figure 1.10 The idea of interpolation

The weighting factors give greater weight to closer samples, using the formula:


\lambda_i = \frac{1/d_i^2}{\sum_{j=1}^{N} 1/d_j^2}

where d_i represents the distance of sample i to the centre of the block being estimated. Only samples within a given "zone of influence" are utilised in the estimation. The method can take into account the spatial correlation of grades, albeit in a rough manner, if it is implemented using powers other than two, i.e.

\lambda_i = \frac{1/d_i^\alpha}{\sum_{j=1}^{N} 1/d_j^\alpha}

where α is any chosen power. For example, to reduce the weight given to more distant samples we may choose α = 3 or more.

IDW Method: A big step forward from polygonal approaches, but not without pitfalls.

The IDW method is a generalisation of the triangular method (the second variant we discussed) and is easily computerised. Most mine planning software can implement IDW. Although it is a decided improvement upon polygonal methods, it still does not account for the known correlations between grades. The method relies on an arbitrary model for the spatial structure of mineralisation. In particular, the reasons for choosing any particular power α are not clear. The choice may be:

• Intuitive — often a very poor choice, especially if the stated aim is to "reduce the influence of grades" in a situation of poor grade continuity.

• Against production data.

• By cross-validation, or

• By comparing results to a better estimator (e.g. kriging).

The "classical" methods do not rely upon the spatial structure of the data: this is their main drawback. Implementations of IDW can be made that incorporate anisotropy. Calculation of variograms is often made to determine the ratios of anisotropy. In such a case most of the work has been done towards a geostatistical estimate, however the IDW estimator (even with anisotropy accounted for) does not correctly model the distribution of grade. We will discuss this in more detail during the course. A further problem with IDW is that samples at the centroid of a block have a distance of zero, leading to mathematical failure of the method unless some ad hoc translation of data is used. In any case, samples near the centroid tend to get most of the weight (regardless of the power used).


Note that classical or traditional methods may be "fine tuned" once we are mining. However, there is no way to get "optimal" estimates from these techniques prior to exploitation.

Kriging

In the early 1960's, Georges Matheron of the Paris School of Mines (Matheron, 1962, 1963a, 1963b) developed a general solution to the problem of local estimation that built upon an empirical solution developed by the South African mining engineer D.G. Krige. To honour Krige's pioneering contribution in this field (Krige, 1951), Matheron named the new technique he developed kriging.

Kriging: Kriging is mechanically much like IDW. What differs is the way that the weights are derived. The kriging weights are not arbitrary: they are based on data correlations.

Kriging is a way of assigning the weights λ_i such that they reflect the spatial variability of the grades themselves. This estimator will also weight a sample according to its position relative to the block we are estimating. Furthermore, kriging assigns the weights in a way that can be shown to be mathematically optimal. Finally, kriging allows us to state the average error incurred in estimating a panel of defined geometry in a given deposit, using a particular arrangement of samples.
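The kriging system itself is derived in Chapter 9; purely as a preview, here is a minimal ordinary (point) kriging sketch. The spherical variogram model, its parameters and the data are all invented for illustration.

    import numpy as np

    def gamma(h, c0=0.2, c=0.8, a=100.0):
        """Nugget-plus-spherical variogram model (assumed parameters)."""
        h = np.asarray(h, dtype=float)
        sph = np.where(h < a, c * (1.5 * h / a - 0.5 * (h / a) ** 3), c)
        return np.where(h > 0.0, c0 + sph, 0.0)

    def ordinary_kriging(target, xy, z):
        n = z.size
        d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
        # Bordered system: weights forced to sum to 1 via Lagrange parameter mu
        A = np.ones((n + 1, n + 1))
        A[:n, :n] = gamma(d)
        A[n, n] = 0.0
        b = np.ones(n + 1)
        b[:n] = gamma(np.linalg.norm(xy - target, axis=1))
        sol = np.linalg.solve(A, b)
        w, mu = sol[:n], sol[n]
        estimate = w @ z
        krige_var = b[:n] @ w + mu           # estimation (kriging) variance
        return estimate, krige_var, w

    xy = np.array([[0.0, 0.0], [50.0, 10.0], [30.0, 60.0], [90.0, 90.0]])
    z = np.array([1.2, 2.0, 1.6, 0.7])
    est, kv, w = ordinary_kriging(np.array([40.0, 40.0]), xy, z)
    print(f"estimate={est:.3f}  kriging variance={kv:.3f}  weights={np.round(w, 3)}")

Note that the weights depend only on the data configuration and the variogram, not on the grade values themselves; that is what makes the 'average error' statement in the paragraph above possible.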

Kriging is a statistical method, i.e. it is built upon the ideas of probability theory. In fact, kriging is a type of distance-weighted estimator where the distance employed is a measure of spatial correlation ("variogram distance") rather than conventional ("Euclidean") distance. We will now quickly recall some basic statistics and probability, followed by an introduction to the idea of regionalised variables. Then we will tackle the problem of calculating and modelling variograms. After considering a few very practical (non-estimation) uses of variograms, we will finally return to the technique of kriging.

Systematic Resource Estimation Practice

Our aims in resource estimation do not end at "getting the best estimate" from a statistical or numerical point of view, important though that aim is. It is critical that we also formalise the estimation process, allocate responsibilities clearly, document the estimation adequately and take the geology into proper account.

Documentation: Quite apart from the professionalism of documenting the job, there are other good practical and legal reasons for competent documentation.

Formalise Estimation Procedures

Making an estimate is a process, not an event. Estimation is a dynamic series of steps and we may wish to repeat certain steps, or incorporate new observations. To make the process of "revisiting" our estimates easier, formalisation is required. A good resource estimate needs to be thoroughly documented so that it can be repeated. Of course, a well-documented procedure also facilitates efficient resource audit and easier transitions if project staff change.


The resource evaluation procedure starts with data collection, geological interpretation and validation. From this early stage, the procedure used should be documented and formalised:

Step    |                     | Test/Control
Step 1  | Collect samples     | Check representivity...
Step 2  | Sample preparation  | Quality Control
Step 3  | Assay               | Quality Control
Step 4  | Hole survey         | Quality Control etc...

For major resource delineation programs, standards, control procedures etc. should be implemented and documented from the earliest stages of the project. A review of quality control in geochemical sampling programs is given by Thompson (1984). The best starting point for assessing sampling practices is Dominique François-Bongarçon's papers. Formalisation and documentation should be implemented for each step of the data acquisition and resource estimation process.

Allocate Responsibilities

At each step, responsibilities need to be clear: who sites holes? who monitors assay quality control? etc.

Estimation begins with data collection, data validation and critical examination of the geological model. If there are important mining constraints to be accounted for early in this process, then a mining engineer must be 'brought on board' early on. Experience shows that team approaches are generally superior to lone efforts. Several people, in particular if their specialisations are different, tend to create a constructively critical environment that results in more objective decision making.

Document Decision Making Steps Carefully

Estimation of mineral resources involves many decision making steps. A geologist or mining engineer revisiting the estimate should be easily able to answer such questions as:

• Which holes were used in the estimate?

• If some holes were not used, why?

• What are the justifications of the key features in the geological interpretation? Why was this particular model used? Are there rational alternatives?

• If populations were split (e.g. oxide versus primary, or 'northern end' versus 'southern end'), why?

• If different lithologies were combined in the estimate, why?

• If different sampling techniques (e.g. RC and DDH) were treated in the same way, how is this justified? If not, how were the differences quantified and accounted for?

• If repeat assays were available, were these used, and if so, how?

• If grades were cut, how and why?

• If a particular estimation methodology was selected, why? etc.

For each deposit the specifics will be different, but the general scheme will still apply.

Geology: Lack of good geological modelling is a detriment to any estimate. However, good geology is not enough; we must also estimate sensibly within geological boundaries.

Geological Models

Note that all geological interpretations constitute models, whether this is explicit or not. The type of model used in estimation will be most reliant, in general, on the larger scale features that impact upon the spatial distribution of mineralisation. Genetic geological models and exploration models will often be a superset of the model required for resource estimation.

The critical features of the geological model used for resource estimation generally relate to geometry: stratigraphic contacts, folding, location of faults and discontinuities, identification of vein orientations etc. Knowledge of a genetic link between assayed elements (Au and As for example, or Pb and Ag) may be a useful part of the model. We will see later that one of the most important (if not the most important) decisions made in the estimation process is that of ‘stationarity’. We will rigorously define stationarity in subsequent chapters, but for the moment we can summarise the concept by: ‘stationarity decisions involve sub-setting (or re-grouping) data such that the resultant data sets may be legitimately employed for statistical analysis and estimation’. The geological model is a primary input to stationarity decision-making, and may also be influenced by that decision-making. The practical consequence is that we must build geological models for the purposes of grade estimation by paying attention not only to the geology, but also to the grade distribution.

Update and Revise Estimation Models

As more sampling or geological information becomes available it is often necessary to update and revise our estimation. At these times, it may be possible to improve the estimation algorithm or the way in which we incorporate geological features into our model. The potential financial advantages (i.e. increased profits) that may result from optimal estimation can be significant. Staying with an estimation procedure that is not performing well, or simply applying "mine-call" factors as a


band-aid solution will probably cost the mine, i.e. result in lost profits. During the life of a mine several estimation procedures are often used with the aim of constant improvement.

Black-Box

Input data; have little or no understanding of process; get dubious or uninterpretable results.

‘Black Box’ Approaches

The whole point of formalisation, objectification and revisiting is to avoid so-called "black box" approaches. A ‘black box’ approach is one in which the assays are shoved into a computerised (or, for that matter, manual) effectively fully automatic resource estimation procedure that no-one involved really understands. Classic answers to the question “why are you estimating this (or that) way?” from people

using ‘black box’ approaches include: 

•  This is the way it was done for the feasibility study and we're ‘locked in’.

•  We have a policy from head office that this procedure be followed on all deposits.

•  This is the only technique our software is set up to do.

•  The ore reserves system was set up by (insert name of long-departed geologist/engineer) and we are unsure of how to go about changing anything.

If assumptions are challenged and decisions understood all the way through the process, a "black box" approach is not possible.

A Final Warning

The three biggest single causes of serious error in resource estimation, in our experience, are:

Unchecked data. Monstrous errors can lurk in databases for years and they may not be obvious. Frequently the authors have dealt with data that we were assured was ‘checked’, only to locate both trivial and serious problems. Simple keying errors or coordinate shifts can result in seriously erroneous estimations. The number one priority when setting up a resource estimation procedure is to instigate rigorous data checking and database quality systems (a minimal sketch of such checks follows this list). Different versions of the same database also coexist at some mines. Database integrity policy is easily followed once instituted.

Poor geological understanding or control. A poor geological model (i.e. one which does not allow adequate characterisation of geometry) is an obvious example of potential disaster. Again, it should be emphasised that the detail of the model needs to be aimed at characterising the distribution of mineralisation at scales with mining engineering significance. It is also clear that failing to utilise such a model intelligently can cause serious trouble: we all know stories of ore blocks being interpolated well beyond inferred mineralisation limits (or into mid air...).


Critical errors in interpolation. For example, use of an interpolation method that assumes a high degree of spatial correlation when the data do not confirm this, or other mis-specification of the model for grade variability.
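The kind of automated data checking referred to in the first point above is straightforward to begin. The following minimal sketch (in Python, using the pandas library; the column names HOLEID, X, Y and Z are hypothetical) flags a few common database problems: duplicate hole identifiers, missing coordinates and gross coordinate outliers. It illustrates the idea only and is no substitute for a full database integrity system.

    import pandas as pd

    def basic_collar_checks(collars: pd.DataFrame) -> list:
        """Flag some common drillhole database problems (illustrative only)."""
        problems = []
        # Duplicate hole identifiers often betray merged or re-keyed databases.
        dups = collars[collars.duplicated("HOLEID", keep=False)]
        if not dups.empty:
            problems.append("duplicate HOLEIDs: %s" % sorted(dups["HOLEID"].unique()))
        # Missing coordinates make a sample unusable for spatial estimation.
        for col in ("X", "Y", "Z"):
            n_missing = int(collars[col].isna().sum())
            if n_missing:
                problems.append("%d missing values in %s" % (n_missing, col))
        # A huge coordinate spread can indicate keying errors or grid shifts.
        for col in ("X", "Y"):
            spread = collars[col].max() - collars[col].min()
            if spread > 100_000:
                problems.append("suspicious spread of %.0f in %s" % (spread, col))
        return problems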


Chapter 3

3: Statistics

“Three statisticians go deer hunting with bows and arrows. They spot a big buck and take aim. One shoots and his arrow flies off ten feet to the left. The second shoots and his arrow goes ten feet to the right. The third statistician jumps up and down yelling, "We got him! We got him!”

Bill Butz, quoted by Diana McLellan in Reader’s Digest, June 1995

Geostatistics and Statistics

Geostatistics is a branch of applied statistics4 dealing with phenomena that fluctuate in space (Olea, 1991). As such it is built upon a foundation of probability theory and statistics. Consequently there is some unavoidable material that needs to be appreciated before we get going on geostatistics per se.

One aim of this chapter is to provide a vocabulary of terms and an understanding of some basic concepts of probability and statistics that are required later. This chapter is also intended to refresh any statistics you have (which may be rusty) and provide you with some useful tools to look at our data. The material is presented in enough detail that this chapter will provide reminders of the basics of probability and statistics for later revisions.

4 Note—you may still see American authors defining geostatistics as "the application of statistical methods to geological data"; however, this definition is obsolete and the specific definition given here is now universally accepted.


Some Preliminary Definitions

Universe

The universe is the entire mass or volume of material that we are interested in as a source for our data (and all possible data). In mining geostatistics this is generally the mineral deposit at hand, although it could be a zone (or some other subset) of a mineral deposit or even a group of deposits. It could also be a stratigraphic horizon, an exploration lease, etc. As such it may have clear, sharply defined boundaries or be imprecise (the margins of a poorly known body of mineralisation, for example).

Population

The population is the set of all possible elements we can obtain from a defined universe. As such, the definition of a given population is closely linked to the specification of the sampling unit. For example, if our universe is a particular gold deposit, then the following populations might be of interest:

•  The set ‘all the possible 1m RC samples in the deposit’.

•  The set ‘all the possible 25m x 10m x 5m resource panels in the deposit’.

•  The set ‘all the possible 5m x 5m x 5m selective mining units in the deposit’.

So, we can define many populations from a given universe. It is important to clearly define both the universe and the population we are considering in any statistical or geostatistical study. Although this may seem quite obvious, it is not uncommon to see reports in which vague references to ‘the samples’ are made, without such a definition.

Sampling Unit

Each individual measurement or observation of a given population is a sampling unit. For example: 1/2 core samples, 1/4 core samples, RC grab samples, RC split samples, blast hole samples, etc. Again, it is essential to carefully define and document this, especially as the precision with which the resulting sample represents the sampled material is directly linked to this definition (see next section).

Support

Unlike classical statistics, where there is a ‘natural sampling unit’, there are many possible sample supports in geostatistical (spatial) problems.

Support

A fundamental concept in geostatistics, introduced in the previous chapter, is that of support. The support of a sample is defined by its size, shape and orientation. Unlike classical statistics, where there is a ‘natural sampling unit’ (a person, a tree, a light bulb...), there are many possible sample supports in problems involving sampling for chemical assay, etc. The support is thus very important, because some statistics (especially the variance) are closely linked to the support selected. Examples of support definition are:


•  A vertical 1.5 m long 1/2 HQ triple tube core sample.

•  A vertical 2 m long 1/2 HQ triple tube core sample.

•  A 2 m long, 5 kg horizontal channel sample across a face.

•  An 8-pipe 10 kg sample taken from blast hole (BH) cuttings.

•  A 5m x 5m x 3m grade control-scale mining block.

•  A 25m x 25m x 10m resource estimation-scale block.

Notions of Probability

There is a voluminous literature on probability theory, only a small portion of which is necessary for practical geostatistics. A good introduction to statistical and probabilistic concepts might be gained from Davis (1986). A few basic notions are required for this course, and are probably useful in mining applications generally, and we summarise them here.

Events

An ‘event’ is a collection of sample points that defines an outcome to which we might assign a probability. Events may be transformed or combined, and can be viewed as sets. The notation and algebra of sets is thus commonly employed in probability theory.

Probability

To every event A we may assign some number Pr(A) called ‘the probability of the event A’. Probabilities are measures of the likelihood of propositions or events. For example, the event may be ‘the average grade of the material in a given stockpile is greater than 1.5 g/t Au’. The scale for probabilities ranges from 0 (the event is impossible) to 1 (the event is a certainty), i.e.

0 \le \Pr(A) \le 1

Because \Pr(\Omega) = 1 and \Pr(\emptyset) = 0, it follows that the probability of the complement of an event \bar{A}, i.e. that the event does not occur, is:

\Pr(\bar{A}) = 1 - \Pr(A)

Multiplication Rule and Independence

A fundamental notion in probability is that of independence. Two events A and B are independent if the probability that both A and B occur is the product of their respective probabilities, i.e.


\Pr(A \cap B) = \Pr(A) \times \Pr(B) \quad \text{for } A, B \text{ independent}

The event ‘both A and B occur’ is called acompound eventand is also denoted:

The event ‘both A and B occur’ is called a compound event and is also denoted:

\Pr(AB) = \Pr(A \cap B) = \Pr(A) \times \Pr(B)

The way in which we calculate probabilities of compound events is defined by whether or not the events can be considered as independent or dependent. The classic example is the successive tossing of a coin: we flip a fair coin twice and the probability of the outcome Heads, Heads is:

\Pr(\text{Heads}) \times \Pr(\text{Heads}) = \tfrac{1}{2} \times \tfrac{1}{2} = \tfrac{1}{4}

The intuitive idea of independence of two events A and B is that knowing ‘A has occurred’ conveys no information about the event B. It is often assumed in statistics, for example, that measurement errors are independent.

Conditional Probability

If A and B are two events, the probability that B occurs given that A has already occurred is denoted:

\Pr(B \mid A)

and is called ‘the conditional probability of B given A’. The notion of conditional probability is closely linked to that of independence. If the occurrence or non-occurrence of A does not help us to make statements about B then, as we have already said, we can state that ‘A and B are independent events’. In this case we can evaluate the conditional probability very easily:

\Pr(B \mid A) = \Pr(B) \quad \text{for } A, B \text{ independent}

If the occurrence or non-occurrence of A does help us to make statements about B then we say that ‘A and B are dependent events’. In this case we may still be able to assign a numerical value to the conditional probability. Conditional probability has an important role in resource estimation statistics and geostatistics. Examples of conditional probability in mining applications might relate to cut-off grades. For example:

•  ‘The probability that the estimation error for a mining block is ±1% (B) given that we consider a block that has an estimated grade greater than 2.5% Cu (A)’.


•  ‘The probability that we select a block that has a true grade below 1.2 g/t Au as ore (B) given that we select a block that has an estimated grade greater than or equal to 1.2 g/t Au (A)’.

As we have just said, the way in which we calculate probabilities of compound events is defined by whether or not the events can be considered as independent or dependent. In particular:

\Pr(A \cap B) = \Pr(AB) = \Pr(A) \times \Pr(B) \quad \text{for } A, B \text{ independent}

and

\Pr(A \cap B) = \Pr(AB) = \Pr(A) \times \Pr(B \mid A) \quad \text{for } A, B \text{ dependent}
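These rules are easy to check numerically. A minimal sketch in Python, simulating the fair-coin example above: the observed frequency of the compound event Heads, Heads should approach 1/2 × 1/2 = 1/4.

    import random

    random.seed(1)
    n = 100_000
    both_heads = 0
    for _ in range(n):
        first = random.random() < 0.5   # 'Heads' on the first toss, Pr = 1/2
        second = random.random() < 0.5  # independent second toss, Pr = 1/2
        if first and second:
            both_heads += 1

    # For independent events, Pr(A and B) = Pr(A) x Pr(B) = 0.25.
    print(both_heads / n)  # prints a value close to 0.25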

Random Variables and Distributions

The idea of a random variable is central to both statistics and geostatistics.

Random Variable

A random variable is a function whose domain is a sample space and whose range is some set of real numbers. Grades in a deposit can easily be conceived of as a random variable.

Random Variables

A random variable (RV), which we usually denote by a capital letter X (etc.), is a variable that takes on numerical values, usually denoted by lower case letters:

\{ x_1, x_2, x_3, \ldots, x_n \}

according to the outcome of an experiment to which we have assigned relevant probabilities. More strictly, a random variable is a function whose domain is a sample space and whose range is some set of real numbers. For example, if the experiment is a toss of a coin we might assign the score 1 to the outcome ‘Heads’ and 0 to the outcome ‘Tails’. We then have a random variable X that can assume the following values:

X = \begin{cases} 1 & \text{with probability } 1/2 \\ 0 & \text{with probability } 1/2 \end{cases}

So, a random variable is just a function that takes on certain numerical values with given probabilities. The RV X may take on values (also referred to as realisations) that are members of a possible set of values \{ x_1, x_2, x_3, \ldots, x_n \}. The word random in this usage does not suggest that the numeric values that the RV takes on are distributed randomly. It is implied that the RV assumes particular numeric values with a certain probability and that this probability can be estimated from a frequency distribution constructed from a sufficiently large random sample of outcomes.

The Cumulative Distribution Function

Any random variable X is defined by its cumulative distribution function:


F_X(x) = \Pr[X \le x], \quad -\infty < x < +\infty

Power models with λ > 1.0 infer highly continuous behaviour not normally seen for grade variables.

For λ greater than 1.0 the power models have a concave shape at the origin, indicating very continuous behaviour at short distances. This kind of behaviour is not frequently seen in mining applications. In fact, for λ = 2.0 we have a parabola, with smooth differentiable behaviour (unheard of in mining applications). Consequently, caution should be used in setting the parameter λ if one decides to employ this model.

Exponential Model

The exponential model (see figure 6.6) is of the form:

\gamma(h) = C \left[ 1 - e^{-h/a} \right]

where C is the asymptote shown in figure 6.6. Note that a here is not the range (as defined for the spherical model, for example) but is a distance parameter controlling the spatial extent of the function. The exponential model looks quite similar at first glance to the spherical model and it is used to model transitional phenomena. Since the exponential variogram asymptotically approaches the sill, the ‘practical’ or ‘effective’ range is conventionally defined to be 3a (the variogram is at 95% of the sill at this distance). Although sometimes applicable for mining data, this model finds more use in non-mining applications, for example soil chemistry (Webster and Oliver, 1990).

Gaussian Model


The Gaussian model (figure 6.7) is sometimes implemented in geostatistical software. The form is:

\gamma(h) = C \left[ 1 - e^{-h^2/a^2} \right]

where, as for the exponential model, C is the asymptote (shown in figure 6.7). Note that, again, a is not the range but a distance parameter controlling the spatial extent of the function. The practical range for the Gaussian model is 1.73a (again, this is the distance at which the model reaches 95% of the value of the sill).

Warning! The Gaussian model represents extremely continuous behaviour at the origin. In practice, such a variogram is implausible for any grade variable.

The Gaussian model represents extremely continuous behaviour at the origin. In practice, such a variogram is implausible for any grade variable. Its only possible application is for very smooth, continuous variables like topography. Some geological surfaces might be reasonably modelled by a Gaussian variogram. If available, the cubic model (see below) might be more suited, in general. However, it is critical to know that this model, if used for kriging, should always be combined with a small nugget effect (say a few percent of the sill) to avoid numerical instabilities in the kriging.

Cubic Model

The cubic model (figure 6.8) is defined:

\gamma(h) = C \left[ 7\frac{h^2}{a^2} - \frac{35}{4}\frac{h^3}{a^3} + \frac{7}{2}\frac{h^5}{a^5} - \frac{3}{4}\frac{h^7}{a^7} \right] \quad \text{for } h \le a

\gamma(h) = C \quad \text{for } h > a

The cubic model is smooth at the origin, in the manner of the Gaussian model, but not so smooth. Its overall shape is reminiscent of the spherical model, and if an experimental variogram presents very continuous behaviour, it is generally preferred to the Gaussian model. It is available only in specialised geostatistical packages such as Isatis.


Figure 6.7 Gaussian Model

Figure 6.8 Cubic Model (Spherical for Comparison)
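To make the functional forms above concrete, here is a minimal sketch in Python (the parameter values in the demonstration are arbitrary). Each function takes the lag distance h, the sill C and the distance parameter a as defined in the text.

    import numpy as np

    def spherical(h, c, a):
        """Spherical model: linear near the origin; sill c reached at range a."""
        hc = np.minimum(np.abs(h), a)
        return c * (1.5 * hc / a - 0.5 * (hc / a) ** 3)

    def exponential(h, c, a):
        """Exponential model: asymptotic to sill c; practical range 3a."""
        return c * (1.0 - np.exp(-np.abs(h) / a))

    def gaussian(h, c, a):
        """Gaussian model: parabolic at the origin; practical range 1.73a.
        As warned in the text, combine with a small nugget if used for kriging."""
        return c * (1.0 - np.exp(-(np.asarray(h) / a) ** 2))

    h = np.array([0.0, 10.0, 30.0, 90.0])
    print(spherical(h, c=1.0, a=30.0))    # reaches the sill exactly at h = 30
    print(exponential(h, c=1.0, a=10.0))  # ~0.95 of the sill at h = 3a = 30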

Models for Nugget Effect

In the previous chapter we defined the nugget effect as the discontinuity of the variogram at the origin. We gave the nugget effect a simple, physical interpretation: if measurements are taken at two very close points x and x+h, the expected squared difference

E\left[ \{ Z(x+h) - Z(x) \}^2 \right] \rightarrow C_0

where C₀ is some value greater than zero. The value C₀ is the nugget effect. While the variogram is defined as being equal to zero at zero distance, in practice discontinuities are usually observed for mining data. Note that the nugget effect implies that values have very short scale fluctuation. To model the nugget effect we require a function for the variogram that assumes the value zero at the origin and a constant value for all distances greater than this. The equivalent model for covariance has a value 1 at the origin and 0 elsewhere. To achieve this we use a mathematical function that has these properties, called the Dirac function δ(h). The resulting model for nugget effect is:

\gamma(h) = C_0 \{ 1 - \delta(h) \} = C_0 \quad \text{for } h \ne 0

\gamma(h) = 0 \quad \text{for } h = 0

Note that, strictly speaking, such a model is only possible for variables that have discrete values. For mineralisation occurring as nuggets or grains that are small in comparison to the sample support this is not too problematic. However, some geological variables (for example depth to a geological surface) are clearly continuous. Even with such variables, we do often need the addition of a nugget effect model to obtain a satisfactory fit for the variogram. We discuss several reasons why the addition of a nugget effect is necessary for these variables below (widely spaced data, sampling errors, locational errors). It's apparent that things are somewhat complicated when it comes to modelling the nugget effect, so we will discuss a few of the issues involved in some detail.

Apparent vs. Real Nugget Effect

The nugget effect has the units of variance, and is sometimes called the nugget variance. The nugget effect is, in fact, that proportion of the variance that cannot be explained by the regionalised component of the regionalised variable (see previous chapter). Note that the case of complete absence of spatial correlation is referred to as a pure nugget effect. In reality, mineralisation rarely exhibits this type of behaviour because, although grade variables (especially precious metal grades) are usually expected to have some natural ‘chaotic’ behaviour at very short scales, there are other contributions to the nugget effect.

Integration of ‘Microstructures’

One of the main difficulties here is that there is a minimum distance below which the variogram is unknown, i.e. at distances smaller than the first point of the variogram (at lag 1). The usual procedure is to extrapolate from the first point to locate the intercept with the ordinate (Y) axis. If the extrapolation results in a positive Y intercept, this is taken as evidence of the presence of a nugget effect of magnitude C₀.

On the other hand, if the curve goes through the origin, we surmise that there is no nugget effect (or a negligible nugget effect). There is a risk of being wrong in both cases (see figure 6.9).

1. We may obtain closer spaced sampling information that reveals our variogram does in fact tend to zero at short lags. In this case an ‘unsuspected’ short range structure may exist. Our apparent ‘nugget effect’ is actually due to the data being too sparsely or widely spaced.

2. Alternatively, the observed variogram points lead us to extrapolate the variogram to the origin whereas, in fact, the variogram ‘flattens’ at lags shorter than the available data. This type of behaviour is typical of the contribution of locational errors to the nugget effect.

Figure 6.9 Apparent nugget effect and missed nugget effect.

While the geostatistician may be able to hypothesise about the short-scale behaviour of the variogram by using some a priori knowledge, in general it is better to have close spaced information available.

Close-spaced Sampling

The down hole variogram is invaluable in estimation of the nugget effect.

The shortest inter-sample distances limit our resolution of the variogram. Without such information, short-scale structures cannot be resolved. In such a case, any short-range nested structures will be unresolved and appear as nugget effect. Geostatisticians refer to this incorporation of short structures into the apparent nugget as ‘integration of microstructures’. In a sense, this is always unavoidable, because even with exhaustive sampling of the mineralisation, we cannot resolve short-range structure at the scale of our samples (say, cores) or less. In summary, when modelling the nugget effect one should be aware that closer spaced sampling can often reduce the nugget effect.

Isotropy and the Nugget Effect

Isotropy of Co

The nugget effect is isotropic: i.e. it has a single value for a variogram model, even if the spatial components of that model are anisotropic.

The most closely sampled direction in mining situations is usually the ‘down the hole’ direction. Since the nugget effect is strictly defined for very small distances δh → 0, it is independent of direction. This has important practical implications:

1. In general, the most closely sampled direction should be used to reliably determine the nugget effect.

2. It is incorrect to model the nugget effect with different values in different directions. The nugget is not allowed to be anisotropic!

Sampling Error and the Nugget Effect

Often we can know the variance of the errors associated with sampling. This is a large subject in itself, and the reader is referred to publications by François-Bongarçon (1991, 1992, 1993, 1996), François-Bongarçon and Gy (2001), Pitard (1990a, 1990b) and, in particular, for a discussion of the linkage between Gy's sampling theory and the nugget effect, to the paper of Ingamells (1981). The importance of minimising sampling errors is quite obvious in any case, but it is clear that our variography will be affected by the contribution of sampling errors: the ‘human nugget effect’.

Locational Error and the Nugget Effect

The nugget effect in essence captures the ‘noise’ in the data. This may come from database errors as well as from the inherent ‘nuggetty nature’ of the grade.

Locational error occurs when a sample that is associated in our database with the point x (where this location may be in one, two or three dimensions) was actually measured at some different location x+u. Again, this is a measurement contribution to the nugget variance. In this case, instead of studying a variable Z(x), we actually study:

Z_1(x) = Z(x + u)

If this measurement error is constant (for example, our grid is wrongly located 10m to the east) there is no impact on the nugget effect. However, if this error is not constant (even if it is systematic, but variable) we will add to the nugget effect.
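The effect of a variable locational error is easy to demonstrate by simulation. In the minimal sketch below (all parameters arbitrary), a smooth one-dimensional variable is recorded against regular locations but actually ‘measured’ at jittered ones; the jitter appears as an inflated variogram value at the shortest lag, i.e. as apparent nugget effect.

    import numpy as np

    rng = np.random.default_rng(0)

    x = np.arange(0.0, 500.0)      # nominal sample locations, 1 m apart
    z_true = np.sin(x / 20.0)      # a smooth 'true' variable

    # Record each value against x, but 'measure' it at x + u, where u is a
    # variable (non-constant) locational error.
    u = rng.normal(0.0, 2.0, size=x.size)
    z_measured = np.sin((x + u) / 20.0)

    def gamma_lag1(values):
        """Experimental variogram at lag 1: half the mean squared difference
        of neighbouring values."""
        d = values[1:] - values[:-1]
        return 0.5 * np.mean(d ** 2)

    print(gamma_lag1(z_true))      # small: the variable is smooth
    print(gamma_lag1(z_measured))  # larger: the jitter acts as nugget effect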

Combining Models

All the models we have introduced here describe simple curves or straight lines. The ‘real life’ experimental variograms that are encountered in mining (and other) applications will often appear more complicated, and none of the models listed above will seem to be appropriate: a more elaborate shape may be necessary to fit the experimental curve.

Fitting models to such variograms is best done using combinations of two (or more) of the above models. This is allowable, because any linear combination of authorised models is, itself, authorised. The models are simply added together:

\gamma(h) = \gamma_1(h) + \gamma_2(h) + \gamma_3(h) + \ldots

A common addition is a spherical model plus a nugget effect model; this is usually summarised as:

\gamma(h) = C_0 + C \left[ \frac{3}{2}\frac{h}{a} - \frac{1}{2}\frac{h^3}{a^3} \right] \quad \text{for } h \le a

\gamma(h) = C_0 + C \quad \text{for } h > a

with C₀ indicating the nugget effect model.


For geological data, especially grade or geochemical data, it is common to see two or more recognisable spatial components in the variogram, generally called ‘nested’ structures (see previous chapter). Figure 6.10 shows an example of this type of variogram. Note that we refer to the spherical models as Sph1 and Sph2. In general, a nugget effect plus 2 or perhaps 3 spherical structures will suffice. The addition of more than 3 models (plus nugget) doesn't usually result in much difference to the shape. The nested model in figure 6.10 would be written as follows:

\gamma(h) = C_0 + C_1 \left[ \frac{3}{2}\frac{h}{a_1} - \frac{1}{2}\frac{h^3}{a_1^3} \right] + C_2 \left[ \frac{3}{2}\frac{h}{a_2} - \frac{1}{2}\frac{h^3}{a_2^3} \right] \quad \text{for } h \le a_1

\gamma(h) = C_0 + C_1 + C_2 \left[ \frac{3}{2}\frac{h}{a_2} - \frac{1}{2}\frac{h^3}{a_2^3} \right] \quad \text{for } a_1 < h \le a_2

\gamma(h) = C_0 + C_1 + C_2 \quad \text{for } h > a_2

Figure 6.10 Example of combining models – nested spherical models with nugget effect.
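As a minimal numerical sketch of the nested expression above (the parameter values below are hypothetical):

    import numpy as np

    def sph(h, c, a):
        """Spherical structure with sill c and range a."""
        hc = np.minimum(np.abs(h), a)
        return c * (1.5 * hc / a - 0.5 * (hc / a) ** 3)

    def nested(h, c0, c1, a1, c2, a2):
        """Nugget plus two nested spherical structures, for h > 0:
        gamma(h) = C0 + Sph1(h) + Sph2(h)."""
        return c0 + sph(h, c1, a1) + sph(h, c2, a2)

    # Hypothetical parameters: nugget 0.2; Sph1 (sill 0.5, range 20 m);
    # Sph2 (sill 0.3, range 150 m). Beyond 150 m the total sill is 1.0.
    for h in (5.0, 20.0, 150.0, 300.0):
        print(h, round(float(nested(h, 0.2, 0.5, 20.0, 0.3, 150.0)), 3))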

Anisotropic Models

Geologic Process

The nature of most mineral deposits is that some anisotropic structural or sedimentary (or other) control is evident. Thus we expect anisotropy of the variogram in most cases.

All the models discussed so far describe isotropic situations. In such a case, the variogram is only dependent upon the modulus or absolute value of h, i.e. only upon |h|, and not upon direction.


As we discussed in the previous chapter, there are many geological situations where we expect anisotropy, i.e. where we expect the variation to be different depending upon the direction in which we calculate the variogram. We introduced, previously, the two types of anisotropy:

1. Geometric anisotropy

2. Zonal anisotropy

Geometric Anisotropy

Geometric anisotropy (also called ‘elliptical anisotropy’) can be corrected by a simple linear or affine transformation of the coordinates. In the case of geometric anisotropy, for a transitional variogram, the sill of the variograms in each direction is the same; only the range is different. In the case of a linear variogram it is the slope that is directionally dependent.

We can plot the range (or slope, in the case of a linear variogram) as a function of direction. For geometric anisotropy, the plot approximates an ellipse (in 2D; an ellipsoid in 3D); a simple change of coordinates transforms this ellipse into a circle, eliminating the anisotropy. It's important to understand that, when calculating the experimental variogram, we choose at least four directions. This is because choosing only two directions may not detect a geometric anisotropy, even if one is present. Fitting the model for geometric anisotropy in simple cases presents few difficulties. Firstly, we calculate the experimental variogram in at least four directions in the plane, plus in the vertical or down-hole direction(s). We determine which directions have the longest and shortest ranges (these may not correspond to the axes of the information grid, so care must be taken). For the two dimensional case there will be two principal directions at right angles to each other. In three dimensions there will be three mutually perpendicular principal directions. Fit an anisotropic model to the experimental variograms in these principal directions as follows:

1. Fit the model to the experimental variogram for the ‘best-defined’ principal direction.

2. Choose the next clearest experimental variogram in one of the remaining principal directions. Divide the range of each spatial structure in your variogram model (i.e. the model for the direction fitted in the first step, above) by a factor. Choose these factors such that the model is ‘compressed’ (dividing by factors greater than 1.0) or ‘stretched’ (dividing by factors less than 1.0) in order to obtain a model that fits in the second principal direction.

3. Repeat the second step for the remaining principal direction, if you are working in three dimensions.
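In computational terms, the affine correction for geometric anisotropy is just a rescaling of coordinates before distances are computed. A minimal 2D sketch in Python, assuming the principal directions are already aligned with the coordinate axes, and using the dividing convention described below:

    import numpy as np

    def anisotropic_distance(p, q, ratio_y):
        """Distance after the affine transform that corrects geometric
        anisotropy: the y-separation is multiplied by the anisotropy ratio,
        turning the elliptical range into a circle."""
        dx = q[0] - p[0]
        dy = (q[1] - p[1]) * ratio_y
        return float(np.hypot(dx, dy))

    # If the fitted range is 100 m along x but only 50 m along y, the ratio
    # is 2.0: a point 50 m away along y behaves like one 100 m away along x.
    print(anisotropic_distance((0.0, 0.0), (0.0, 50.0), ratio_y=2.0))  # 100.0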


In general the convention is to specify anisotropy factors or anisotropy ratios as numbers that we divide the ranges by. If the range of a structure is 100m in the first direction we fit, and needs to be 50m in the second, then the anisotropy ratio is 2.0. Conversely, if the range of the structure is 40m in the first direction we fit, and needs to be 80m in the second, then the anisotropy ratio is 0.5. For nested models, the anisotropy can be complicated. For example, the ratios might be different for each component structure. Note, however, that some software may use a multiplying convention.

An Example

The anisotropy ratios are usually specified for each direction, so that the reader of a report knows which direction is the reference direction. For example, for figure 6.18 in the structural analysis case study (at the end of this chapter) we will specify the variogram model (in 3D) as shown in table 6.1:

Table 6.1 Model for Variogram (ranges expressed using anisotropy ratios)

            Nugget    Sph1     Sph2
Sill        0.026     0.017    0.034
Range       0         20       150
X-anis      1.00      1.50     5.00
Y-anis      1.00      1.00     1.00
Z-anis      1.00      1.50     4.00


In this case the range of the longest structure (north-south, or Y-direction) was fitted first. In addition to a nugget effect model (range 0), two nested spherical models, denoted Sph1 and Sph2, were fitted with ranges of 20m and 150m respectively. The anisotropy ratios then give us the ranges in the other directions, by division:

Table 6.2 Model for Variogram (ranges in metres in brackets)

            Nugget        Sph1            Sph2
Sill        0.026         0.017           0.034
Range       0             20              150
X-anis      1.00 (0m)     1.50 (13.3m)    5.00 (30m)
Y-anis      1.00 (0m)     1.00 (20m)      1.00 (150m)
Z-anis      1.00 (0m)     1.50 (13.3m)    4.00 (37.5m)

Note that the nugget effect model has no range and is isotropic, by definition. The anisotropy is different for the short vs. long structures: for the short structure the resulting model is close to isotropic, whereas for the long structure, there is strong anisotropy. We discuss this model in more detail in the case study, later in this chapter. For unbounded variogram models (e.g. power models, including linear models) the affine transformation to model geometric anisotropy is applied by changing the gradient of the model. In the case of a linear model, this is simply the slope.
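The division convention is easily verified; this minimal sketch reproduces the directional ranges of table 6.2 from the reference ranges and ratios of table 6.1 (recall that some software uses a multiplying convention instead):

    # Reference (Y-direction) ranges and anisotropy ratios from table 6.1.
    ranges = {"Sph1": 20.0, "Sph2": 150.0}
    ratios = {
        "X": {"Sph1": 1.50, "Sph2": 5.00},
        "Y": {"Sph1": 1.00, "Sph2": 1.00},
        "Z": {"Sph1": 1.50, "Sph2": 4.00},
    }

    for direction, by_structure in ratios.items():
        for structure, ratio in by_structure.items():
            # Dividing convention: directional range = reference range / ratio.
            print(direction, structure, round(ranges[structure] / ratio, 1))
    # Prints 13.3 and 30.0 for X, 20.0 and 150.0 for Y, 13.3 and 37.5 for Z,
    # matching table 6.2.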

Practicalities… Zonal anisotropy cannot be modelled in most general mining packages. Even where it can be modelled, the model often can't be used for kriging.

Zonal Anisotropy

More complicated types of anisotropy exist in some deposits, for example where distinct zonation of high and low values exists. In this case the variability in the direction parallel to the direction of zonation might be significantly lower than in the direction perpendicular to zonation. This concept is usually called zonal anisotropy. Zonal anisotropy cannot be modelled in most general mining packages. Even where it can be modelled, the model often can't be used for kriging. Modelling of zonal anisotropy is beyond the scope of this course. The interested reader is referred to Journel and Huijbregts (1978, pp. 266-272) for a case study.


Why Not Automated Fitting?

A common question is: “why not fit the variogram automatically?” Geostatisticians have had access to automated variogram modelling software for years (weighted least squares, etc.). However, most experienced geostatisticians will fit models manually.

Black boxes

Software can't be expected to mimic skilled variogram modelling. An intelligent operator with site-specific geological knowledge is required.

There are two main reasons for this. Firstly, the modelling must take into account the fact that the most important parts of the variogram to fit accurately are the near-origin lags. Often, the first point is under-sampled, so some trade-off between the number of pairs contributing to the lag (reliability) and the need for the model to respect the near-origin points is required. In many cases this is relatively easy to do subjectively, but difficult to do algorithmically. Secondly, our problem is compounded when we are faced with noisy (or poorly defined) experimental variograms, where considerable external input to the modelling process is necessary. For example, we might employ a priori geological knowledge to adjust the range of a structure. Software can't be expected to mimic this. An intelligent operator with site-specific geological knowledge is required. This point emphasises that geostatistics is not suited to black box usage: if you have access to software that makes an automatic fit, treat this as a ‘first guess’ and then adjust the model intelligently to obtain a satisfactory final fit.

Systematic Variogram Interpretation

Some geostatistical textbooks may leave the reader with the impression that fitting variograms is a fairly easy task. In many cases, especially when dealing with precious metals data, this is far from true. Having a systematic approach is important when first setting out to perform variographic analysis. We'll tackle seriously troublesome variography later, but for the moment we consider here the usual procedure for modelling variograms.

Skill & Experience

Fitting variogram models is as much a ‘craft’ as a science, in the sense that it isn't an activity that can be completely reduced to a formula.

In fact, fitting variogram models is as much a ‘craft’ as a science, in the sense that it isn't an activity that can be completely reduced to a formula. Repeated experience fitting variograms increases the practitioner's skill. However, we present some guidelines intended to help you with variogram modelling. Obviously, not all the following comments will apply to every experimental variogram you'll encounter. However, it's especially useful when first dealing with structural analysis to have a few ‘pointers’.

Ten Key Steps When Looking at a Variogram

We consider key points to look for when examining experimental variography. Always remember that the experimental variogram is an estimate of the ‘underlying’ variogram. As such some irregularity is generally expected, due to statistical fluctuation.


1. The number of pairs for each lag in the experimental variogram.

The number of pairs contributing to the first lag (the first point of the experimental variogram) can often be quite low. This will depend upon the exact sampling pattern and search used. Consequently, the first point may not be very representative. Some software annotates the experimental variogram with the number of pairs used. So long as a listing is available, we can check to see that points at short lags are reliable. Although rules are difficult to apply strictly, fewer than 30 pairs is likely to be unreliable in any mining situation. Note that the number of pairs should be considered in proportion to the size of the data set: for an experimental variogram with 2,000 pairs at lag No. 2, a lag No. 1 with only 100 pairs might be considered dubious. See figure 6.11a. Similarly, at distal lags, the number of pairs decreases. It's easy to see why: with increasing distance there comes a point where only samples at the very edges of our area can be used. It can be shown theoretically that the variogram can become dangerously unreliable at lags beyond 1/3 of the maximum sample separation. See figure 6.11b.

2. Smoothness of the experimental variogram

The smoothness of the experimental variograms in figures 6.11a and 6.11b can be contrasted to the more erratic behaviour in figures 6.11c and 6.11d. Many factors can contribute to erratic variograms, and we'll discuss these in detail in the section on troublesome variograms below. However, two types of erratic variography can be distinguished. In figure 6.11c, the experimental variogram is saw-toothed in a regular up-and-down fashion. This may indicate poor selection of lags, or possible inclusion/exclusion of a very high value. In any case, the structure is still visible. If we exclude other sources of irregularity, this variogram might be modelled in an ‘averaged’ way, as shown in the figure. Some of the techniques discussed below in the section on troublesome variograms might result in a ‘cleaner’ variogram. On the other hand, figure 6.11d shows a noisy variogram with no obvious structuring, nor is the kind of ‘saw-toothing’ behaviour seen in figure 6.11c clearly evident. In this case we have to resort to some kind of robust variography (relative variograms) or transformation (logs, etc., see further, below).


Figure 6.11 Common ‘real life’ experimental variogram features


3. Shape near the origin

It's critical to assess the shape near the origin correctly. As we've already said, the first points are sometimes suspicious or unrepresentative. In mining applications, especially for grade variables, the shape at the origin is nearly always linear. This is one reason that the spherical model is so popular.

Important Note: If the experimental variogram suggests a parabolic shape near the origin (like the Gaussian model introduced above) be very cautious.

If the experimental variogram suggests a parabolic shape near the origin (like the Gaussian model introduced above) be very cautious. This will nearly always be a statistical feature when dealing with grades. Resist the temptation to fit Gaussian models! The consequences for kriging are quite profound: the Gaussian model represents extraordinarily continuous, short-scale behaviour of a type not seen for mineral grades. In the case of smooth topographic variables (depth to a geological surface, the water table, vein width, etc.) caution is still advisable. If you are convinced that a topographic variable is Gaussian, then always fit the model with a nugget effect (in order to avoid instability in subsequent kriging). In many cases, a cubic model is preferred, but this model is not generally available in mining software. The slope of the variogram near the origin is a critical factor in subsequent kriging. Greater weight is generally given to the experimental points closest to the origin when assessing this slope (given that these points have a reasonable number of pairs contributing). Note that the slope is relative to the ratio of the range to the proportion of nugget effect.

4. Discontinuity at the origin—nugget effect

Along with the shape and slope at the origin, the proportion of nugget effect is a critical factor in modelling the variogram. Most grade variables have some nugget effect. The proportion of nugget effect relative to the sill is often called the relative nugget effect ε, and is measured as a ratio to the sill:

\varepsilon = \frac{C_0}{C_0 + C}
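For example, for the three-structure model listed in table 6.1 later in this chapter (nugget 0.026, structure sills 0.017 and 0.034), the relative nugget effect works out as

\varepsilon = \frac{0.026}{0.026 + 0.017 + 0.034} \approx 0.34

i.e. roughly a third of the total sill.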

The relative nugget effect is often expressed as a percentage.

Co is Isotropic

Because of this, in mining we will generally use the down hole direction to set the nugget effect and then use this value for each of the other directions.

The nugget effect is the same in any direction (being defined at very small distances relative to the sample spacing). Because of this, in mining we will generally use the down hole direction to set the nugget effect and then use this value for each of the other directions. Note also that the relative nugget effect is dependent upon compositing length: for a given spatial grade distribution, as we use longer composites, ε is reduced. We discuss this phenomenon (related to the ‘support effect’) in more detail in the next chapter.

5. Is there a sill?—transitional phenomena


Answering this question is sometimes not as easy as you might expect. For example, take figure 6.11e. Here we have an example of an experimental variogram (I) that clearly has a sill. However, variogram II seems to continue to rise. We may have a linear (or unbounded) variogram but, equally, we may not yet have reached the range of a transitional model; for example, we may have been restricting the zone upon which we calculate the variogram too severely. Then again, it is possible we have a drift (see further below). Note that, so long as the shape of the function we choose fits the experimental data well, especially near the origin, the difference between choosing linear or spherical (with a very long range) is negligible. If the sill level is not clearly defined (for example figure 6.11c) then we often use the ‘average’ level of fluctuation. If this corresponds to the variance of the data (as it should in the stationary case), our confidence is increased. Although the sill should coincide with the overall variance (in conditions of stationarity), this is not always the case, for example because of the presence of long-range trends in the data. Note that the level of the sill for the longest structures in a nested model has little impact upon kriging weights, so in most cases fixing it with great precision is not necessary.

6. Assess the range

If we do have a transitional model, then we need to assess the range. In general, the range is assessed visually, as the distance at which the experimental variogram stabilises at a sill. In many cases, the range is fairly clear, especially for experimental variograms that closely approximate a spherical scheme. In other cases, it may not be so easy.

Slope

The first few reliable points of the experimental variogram, not those close to the range, should control the slope.

Firstly, bear in mind that there are some mechanisms for specifying range inherent in the functional forms of the model we choose. In particular, the linear extrapolation of the slope at the origin should cross the sill at 2/3 of the range for a spherical variogram. If the sill is clear, this rule of thumb can be quite helpful. In this case, the first few reliable points of the experimental variogram, not those close to the range, should control the slope. Careful definition of the ranges of shorter structures (when more than one regionalised structure is evident) is very important.

7. Can we see a drift?

Drift is not so easy to detect in many mining situations. Firstly, at lags beyond about 1/3 of the maximum available sample separation, theory indicates that the variogram becomes increasingly unreliable. So a continuously rising experimental variogram, such as that shown in figure 6.11f, may be quite misleading. Again, look at the representivity of the pairs. Assessment of a drift should also be made in conjunction with examination of a posting of the data, or a contour map. Look for trends that might clearly be responsible.


Some software will print out, for each lag, the number of pairs, the mean of the pairs and the ‘drift’, this being the mean value of the pairs for that lag. Of course, a systematic increase in this statistic for distal lags is still only significant if we have sufficient pairs.

Impact of Drift

Even where there is a drift, the shape of the variogram at shorter lags is usually the critical factor in the results of any subsequent kriging.

Even where a convincing case can be made for a drift at larger lags, this may have little impact on subsequent kriging. This is because, as we have repeated regularly so far, the shape of the variogram at shorter lags is the critical factor in the results of any subsequent kriging. In most mining situations, modelling of drift is not required. If it is required, there are techniques available, but these are beyond the scope of this course (see Journel and Huijbregts, 1978, p. 313).

8. Hole effect

A hole effect appears as a bump on the variogram. As stated in the previous chapter, most apparent "hole effects" are, in fact, an artefact of the sampling used, lack of pairs etc. Although hole effect models exist, they are beyond the scope of this course and their use is not common.

9. Nested models

Given an interpretable experimental variogram we will usually need to model more than one structure. In the simple case, we assess the nugget effect and then fit a single, say spherical, model for the structured component. In many cases, mining data present more than one range. Clear inflections in the experimental data indicate the ranges of nested spherical models. We see several examples of this in the case study at the end of this chapter. Generally, where several models are nested, fitting the model with the shortest range will prove critical (from the point of view of subsequent kriging).

10. Anisotropy

It is essential that the experimental variogram be calculated in at least four directions in the plane, and at geologically sensible orientations in the third dimension, in order to detect anisotropies. The procedure for fitting an anisotropic model was discussed in the preceding part of this chapter. In the absence of any detected anisotropy, an isotropic model can be fitted. Fitting of zonal anisotropy is beyond the scope of this course.


‘Uncooperative’ or ‘Troublesome’ Variograms18

18 i.e. the usual variety for mining examples…

If the experimental variograms encountered in practical situations were as well behaved as those often given as textbook examples, this section would be unnecessary! In fact, the reader is unlikely to avoid ‘horror variograms’ like the one shown in figure 6.11d for very long (if they are working in a gold mine, sooner rather than later). We will approach the subject of uncooperative or troublesome variograms via two different angles:

1. Can we improve variography by choosing different calculation parameters?

2. Can we deal with the problem by some approach more sophisticated than calculation of the traditional grade variogram?

Calculation of the Experimental Variogram

We examine here a number of factors we should check first when confronted by a dreadful-looking experimental variogram. The initial experimental variograms are often highly erratic, and time, effort and thought are required to establish why this is so. The paper of Armstrong (1984) gives some excellent examples (some of which are discussed here).

Theoretical Reasons

The experimental variogram is an estimate of the spatial structure. As such, it is often highly variable for large values of h. Various geostatisticians have demonstrated that, when we have a very large number of closely spaced data (for example from blast hole drilling or a simulation), subsets of this data can have widely varying histograms and variography.

Definition of Stationarity

Stationarity

The stationarity decision is probably the most important decision in a geostatistical study.

Sometimes the reason for poor variography is that we are calculating the variogram from two mixed populations that possess differing statistical characteristics. In an extreme case this will show up as a bimodal histogram, but this is certainly not always the case (Armstrong, 1984). Since the variogram assumes intrinsic stationarity, mixed populations can impact severely on the experimental variogram. Where possible, our variogram should therefore be calculated for a single statistical population.

GEOGRAPHICALLY DISTINCT POPULATIONS

If the populations are geographically distinct, i.e. they can be outlined on maps of the deposit, then our problem is to define the boundaries of our zones. This is partly iterative, in that the variogram is one aspect of the evidence we will use to split or lump geological zones. Combining zones with quite different geostatistical


characteristics can result in poorly defined and sometimes uninterpretable variography. Alternatively, if we split the zones into too many categories, we may end up with too few data in each zone, and statistical fluctuations may overwhelm the underlying spatial structure. Obviously, some experience and trial-and-error is involved.

INTERMIXED POPULATIONS

The problem may be more intractable. For example, if the two populations are related to intimately intermingled, but statistically contrasting, lithologies, then the only means to resolve the problem may be more detailed sampling (i.e. closing up the drilling further). Another example of intermixed populations might be the mixing of two or more different drilling campaigns. It is often the case that older campaigns have smaller diameter drilling, poorer sample preparation etc. The result of this is artificially higher variance for the older drilling campaign. This may be revealed by higher sills, shorter ranges and, in extreme cases, apparent pure nugget behaviour of the variogram.

How to Determine Appropriate Variogram Calculation Parameters

The parameters relating to search tolerances and lag selection are sometimes very sensitive.

Lag Selection

A strongly saw-toothed experimental variogram, as shown in figure 6.11c, is a warning that we may have poorly specified the lag increment. If the data spacing is irregular, the basic lag interval to choose may not be immediately obvious, and we may get a situation where successive lags include larger, then smaller, numbers of pairs in a cyclic fashion. The lags with fewer pairs will be less robust to extreme values, and tend to have, on average, higher values of γ(h).

Tolerances

In particular, selection of the lag and angular tolerances can sometimes have a drastic impact on the variogram. If the variogram looks bad, try larger or smaller tolerances. In doing so we are, in a sense, varying the smoothing of the data in order to lessen the impact of including or excluding particular pairs in a given lag. The angular tolerance can be especially sensitive to this type of effect.

Missing Values

Most programs allow specification of a minimum value to consider in calculation of the variogram. It is common to flag missing values with a negative number, say −1. If we do not test for these values the impact on the experimental variogram can be quite drastic, since we are adding artificial (often randomly located) ‘noise’.
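As a minimal guard (assuming, as above, that missing assays are flagged with −1):

    import numpy as np

    grades = np.array([0.8, -1.0, 1.2, 0.4, -1.0, 2.1])  # -1 flags a missing assay
    valid = grades >= 0.0
    # Apply the same mask to the coordinates so samples stay paired.
    print(grades[valid])  # only real assays enter the variogram calculation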


‘Outliers’

Extreme values disproportionately impact on the experimental variogram because the variogram is calculated from squared differences.

Extreme Values

We will discuss some approaches to modelling variograms with extreme values below (e.g. log variograms, relative variograms). However, one particular case is that where there is a single, very large value in a data set mostly comprised of very small values. The variogram may be severely impacted by such a value; refer to the study by Rivoirard (1987a).

Note that, since the richest values often determine the economics of a deposit, cutting them (or removing them) should be a last resort if we are taking a scientific approach.

Other Approaches to Calculating Variograms

In addition to the traditional experimental variogram of grades:

\hat{\gamma}(h) = \frac{1}{2N(h)} \sum_{i=1}^{N(h)} \left[ Z(x_i + h) - Z(x_i) \right]^2
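A direct (if naive) sketch of this estimator in Python, for regularly spaced one-dimensional data, is given below; a production implementation would add direction, lag and angular tolerances as discussed above. Reporting N(h) alongside each point supports the reliability checks described earlier.

    import numpy as np

    def experimental_variogram(z, max_lag):
        """Traditional estimator for regularly spaced 1D data:
        gamma(h) = (1 / 2N(h)) * sum of squared differences at lag h.
        Returns arrays of lags, variogram values and pair counts."""
        lags, gammas, counts = [], [], []
        for lag in range(1, max_lag + 1):
            d = z[lag:] - z[:-lag]
            lags.append(lag)
            gammas.append(0.5 * np.mean(d ** 2))
            counts.append(d.size)  # always report N(h): few pairs = unreliable
        return np.array(lags), np.array(gammas), np.array(counts)

    z = np.cumsum(np.random.default_rng(1).normal(size=200))  # toy 1D data
    lags, gam, n = experimental_variogram(z, max_lag=20)
    print(np.column_stack([lags, gam, n])[:3])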

There are a number of other approaches to calculating experimental variograms. Such approaches fall into two broad categories:

1. ‘Robust’ estimators of the variogram (e.g. relative variograms).

2. Variography of transforms (e.g. logarithmic and Gaussian variography, indicators, etc.).

Alternative Estimators of the Variogram

‘Seeing the underlying structure’

The underlying structure may be masked in an erratic experimental variogram. Alternative approaches to variography may significantly help with this.

Alternatives to the calculation of the experimental variogram may perform better in the presence of extreme values, and in particular for very skewed data. In each case, the aim of these alternative estimators is to produce a clearer display of the underlying spatial structure. This structure may be masked in an erratic experimental variogram.

Robust Estimators

Firstly, it should be noted that, in addition to relative variograms (which are ‘robust’ in certain cases, as we shall see), ‘robust’ variogram estimators have been proposed by Cressie and Hawkins (1980) and several other authors. These estimators were developed theoretically and intended as alternatives to calculation of the traditional variogram. They are rarely (if ever) implemented in mining software. David (1988) gives a review of some alternative estimators.

The traditional variogram and the variations noted below are probably as good as any of these alternatives if calculated with intelligence and modelled with experience. Consequently we will not consider these specialised estimators in any detail.


Relative Variograms

The most commonly encountered ‘robust’ variograms are relative variograms. There are several different types of relative variogram and it pays to determine exactly which one your software is implementing. Relative variograms have been used for structural analysis and for kriging since the 1970s. They were especially promoted by Michel David and his students at Montreal (David, 1977, 1988).

Proportional Effect

The aim of relative variograms is to compensate for the proportional effect.

The aim of relative variograms is to compensate for the proportional effect. Recall that a proportional effect exists when there is a relationship between the local mean and the corresponding local variance. A proportional effect is the norm with lognormally distributed data and is common in deposits exhibiting skewed (but not necessarily lognormal) histograms. Since proportional effects are often seen in gold deposits, the use of relative variography can sometimes provide better resolution of the underlying structure when dealing with gold data.

Local Relative Variogram

The local relative variogram is a historical way of accounting for the dependence of γ(h) on the local mean. It is rarely used today, because the main motivation of this approach was the limited computer memory of the 1970s. In this approach, we define regions and treat the data within each region separately, i.e. as separate populations. If we observe that the shapes of the variograms for each of our sub-regions are similar (only the magnitude or sills differing from region to region) then we can define a single local relative variogram γ_LR(h). This single relative variogram must then be scaled by the local mean to obtain the local variogram (Isaaks and Srivastava, 1989):

\gamma_{LR}(h) = \frac{ \sum_{i=1}^{n} N_i(h)\, \gamma_i(h) / m_i^2 }{ \sum_{i=1}^{n} N_i(h) }

where the γ_i(h), i.e. γ_1(h), γ_2(h), ..., γ_n(h), are the variograms for the n local regions defined, and m_1, m_2, ..., m_n and N_1(h), N_2(h), ..., N_n(h) are the corresponding local means and numbers of sample pairs from each region. The above equation thus scales each local variogram by the square of the local mean, then combines them in a weighted average (weighting by the number of sample pairs upon which each local variogram is defined). The resulting local relative variogram accounts for a linear-type proportional effect where the local variance is proportional to the square of the local mean.

Proportional Effect (2) If the proportional effect is not linear, relative variograms lose effectiveness.


If the proportional effect is not linear, an appropriate alternative scaling factor would need to be built into the above expression. It is evident that this approach to local variograms can be computationally heavy (depending upon n) and also that the component local variograms, from which we build γLR(h), are based upon smaller numbers of samples than the overall data set, thus reducing the statistical reliability of the resulting combined relative variogram. In fact, γLR(h) may be little better than γ(h), depending on how many sub-populations we are required to define. This approach to relative variography is consequently not common.

General Relative Variogram

The general relative variogram γGR(h) is a more common relative variogram. It does not require the definition of sub-populations, overcoming one of the main difficulties with the approach taken when using the local relative variogram. For the general relative variogram we calculate (for each lag):

$$\gamma_{GR}(h) = \frac{\gamma(h)}{\{m(h)\}^2}$$

where γ(h) is simply the traditional experimental variogram, and m(h) is the mean of all the data values used to estimate γ(h) for the lag h being considered. The program used to calculate the experimental variogram can be easily modified to calculate m(h) for each lag, so the general relative variogram is easily implemented from a computational point of view.
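
As an illustration, here is a hedged Python sketch of the general relative variogram for a regularly spaced 1-D transect; real implementations must also handle directions, tolerances and irregular spacing, which are omitted here:

```python
import numpy as np

def general_relative_variogram(z, max_lag):
    """Traditional and general relative variogram for a regular 1-D transect.

    z is a 1-D array of grades at a constant spacing; lags are in units of
    that spacing. A teaching sketch only.
    """
    z = np.asarray(z, dtype=float)
    gamma, gamma_gr = [], []
    for h in range(1, max_lag + 1):
        head, tail = z[h:], z[:-h]                   # all pairs separated by lag h
        g = 0.5 * np.mean((head - tail) ** 2)        # traditional semivariogram
        m = np.mean(np.concatenate([head, tail]))    # mean of data used at this lag
        gamma.append(g)
        gamma_gr.append(g / m**2)                    # divide by squared lag mean
    return np.array(gamma), np.array(gamma_gr)
```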

Pair-wise Relative Variogram

The general relative variogram γGR(h) employs the squared mean of all the data contributing to a given lag. In contrast to this, the pair-wise relative variogram γPR(h) also uses the square of a mean, but the adjustment is made for each pair {Z(xi), Z(xj)} considered. Again, this adjustment serves to reduce the impact of very large values on the calculation of the variogram. The correction made is:

$$\gamma_{PR}(h) \;=\; \frac{1}{2N(h)} \sum \frac{\{Z(x_i) - Z(x_j)\}^2}{\left\{\dfrac{Z(x_i) + Z(x_j)}{2}\right\}^2}$$

A note of caution raised by Isaaks and Srivastava (1989) concerning this type of relative variogram is that when the two data forming the pair both have zero value (or close to it) their mean is zero, and we divide by zero in the standardisation. This means that γPR(h) becomes equal to infinity. To avoid this, zero values are set to a small positive value.
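
A minimal sketch of the pair-wise correction for the same simplified 1-D setting, including the flooring of zero values just described (the floor value here is an arbitrary illustrative choice):

```python
import numpy as np

def pairwise_relative_variogram(z, max_lag, floor=1e-3):
    """Pair-wise relative variogram for a regular 1-D transect.

    Each squared difference is divided by the squared mean of its own pair.
    'floor' replaces zero (or near-zero) values before standardising, as the
    text recommends, to avoid division by zero. Illustrative only.
    """
    z = np.maximum(np.asarray(z, dtype=float), floor)  # set zeros to a small positive value
    out = []
    for h in range(1, max_lag + 1):
        head, tail = z[h:], z[:-h]
        pair_mean = 0.5 * (head + tail)
        out.append(np.mean((head - tail) ** 2 / pair_mean ** 2) / 2.0)
    return np.array(out)
```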

Sigma i-j Relative Variogram

We may also correct the variogram by the variance of the data contributing to a given lag. This correction can result in considerable 'cleaning up' of noisy variograms. The sill is re-set to 1.0.
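
A sketch of this lag-variance standardisation, again for the simplified 1-D case; note how dividing by the variance of the data in each lag re-sets the sill to about 1.0:

```python
import numpy as np

def sigma_ij_relative_variogram(z, max_lag):
    """Variogram standardised by the variance of the data at each lag.

    Dividing gamma(h) by the lag variance re-sets the sill to roughly 1.0,
    the 'Sigma i-j' idea described above. Illustrative sketch only.
    """
    z = np.asarray(z, dtype=float)
    out = []
    for h in range(1, max_lag + 1):
        head, tail = z[h:], z[:-h]
        g = 0.5 * np.mean((head - tail) ** 2)
        s2 = np.var(np.concatenate([head, tail]))  # variance of data in this lag
        out.append(g / s2)
    return np.array(out)
```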

Some General Comments About Relative Variograms

The theoretical foundation of relative variograms is not well understood; however, they have proved very useful in practice. Kriging with a relative variogram, or performing extension or dispersion variance calculations (see subsequent chapters), is to be approached cautiously. In particular, the general relative variogram (which is probably the most common relative variogram considered) is an estimator, and it often overestimates the underlying or 'true' relative variogram (David, 1988). In this case, dispersion, extension and kriging variances obtained from it will also be incorrectly estimated. However, there is no real difficulty in kriging with a general relative variogram, so long as we remember that the variances have been rescaled by the square of the mean (e.g. the kriging variance is now the 'relative kriging variance').

Warning!
Kriging directly with a pair-wise relative variogram is problematic. The variances are rescaled in a non-linear manner, and the relative nugget effect is usually understated.

However, kriging directly with a pair-wise relative variogram is problematic. The variances are now rescaled in a non-linear manner, and the relative nugget effect is usually understated (sometimes by a large margin). Note also that the structures observed in relative variograms can be very helpful in determining ranges to choose when fitting a model to the conventional experimental variogram. In the case of gold deposits and other mineralisation with skewed distributions, the relative variography is usually interesting to calculate (and not too time consuming) as part of the overall spatial data analysis and structural modelling step.

Variography of Transforms

Before considering a few common transformations, we should be clear about the implications of some of these approaches. In particular, the user should understand that:

• When we employ a transformation that applies to all the grades, for example taking logarithms, we generally alter the variances of the different structures. This means that we cannot directly determine the relative nugget effect, or the contribution of a short-range versus a longer-range spatial structure, by direct examination of the variogram based on transformed values. We will require a back-transform (and this involves assumptions).

• The ranges of structures are generally unaltered by such transformations. The reason is clear: the distance at which, say, the logarithms of the grades become uncorrelated is the same as the distance at which the grades themselves become uncorrelated.

Some common transformations are:


Logarithmic Transformation

Taking the logarithm of each sample value prior to calculating the variogram can result in markedly better variography. Note that there is no inference here of lognormality. This step is just a convenient deskewing transform that helps us to see the ranges of the structures we are trying to detect. Because the taking of logs drastically reduces the relative magnitude of extreme values, it reduces the influence of a small proportion of very high assays on the experimental variogram. As previously stated, the range is the distance at which sample pairs cease to exhibit correlation. Taking the logarithm of all the samples does not alter the distance at which correlation is observed to cease. However, we may see the range more clearly in the log variogram in cases where the experimental variogram of the 'raw' values is very noisy and thus difficult to interpret.

Warning!
Relationships between log and normal variograms presume lognormality.

Note also that in the case of a lognormal distribution, the relative variogram, traditional variogram and the log variogram are theoretically equivalent (i.e. there are relationships to convert parameters from one type of variogram to another – see David's 1988 book). In general, we will try these different approaches as part of our assessment of 'difficult' variography and use the information that is gleaned to improve the model we finally select. In the case of a lognormal distribution, such a variogram can subsequently be used for lognormal kriging (although this method is inadvisable when the distribution deviates much from strict lognormality). There are a few preliminary steps to be careful of when dealing with logarithmic variography. First, zero and negative values must be carefully corrected to very small positive values prior to taking logs! Secondly, there is the problem of very small values. If we have very small values in our data, then taking logs will result in some quite large negative logarithms. The squared differences that we use to calculate the variogram can then become very large. The end result is that we may get a masking of the underlying structure, or worse, structural artefacts due to these small values. Rivoirard (1987) gives an excellent case study (for a uranium deposit) where many grade values were small or below detection, resulting in difficult variography for even the log values. In this case, he opted to calculate the variogram of a new variable:

$$\log(x + \alpha)$$

where α is a constant value added to every datum. This results in the differences in logs being drastically reduced, greatly improving the resolution of spatial structure. Rivoirard suggests a value for α that is close to the mean, or median, of the data set. The value α is equivalent to the additive constant for a three-parameter lognormal distribution.
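
In code, the transform amounts to one line; the sketch below assumes NumPy and treats the choice of α (here taken near the median, following Rivoirard's suggestion) as a user decision:

```python
import numpy as np

def log_transform(z, alpha):
    """Deskewing transform z' = log(alpha + z).

    A value of alpha close to the median (or mean) damps the large negative
    logarithms that near-zero grades would otherwise produce. Sketch only.
    """
    return np.log(alpha + np.asarray(z, dtype=float))

# e.g. choose alpha near the median of the data:
# z_log = log_transform(grades, alpha=np.median(grades))
```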


Gaussian Transform

Normalisation
A Gaussian transform is simply a transformation of data to a normal distribution: a standard statistical tool.

A Gaussian transform (or 'anamorphosis') is a transformation of the data that results in a normal histogram. In the case of a lognormal distribution, taking the logarithms results in a Gaussian (or normal) distribution. Therefore, for a lognormal distribution, taking logarithms is a Gaussian transform. In the general case, a Gaussian transform can be made for any unimodal distribution. Again, no inference of normality is made: the Gaussian transform is simply a convenient deskewing to allow us to see obscured spatial structure—it can't create structure that isn't there! In fact, transforms like Gaussian and log can be viewed as data filters. The usual Gaussian transform results in the data values having the histogram of a standard normal distribution, i.e. with a mean of 0 and a variance of 1.0. Consequently, the sill of the variogram of Gaussian-transformed data will be at 1.0. Journel and Huijbregts (1978) and Hohn (1988) give full details on Gaussian transformations. There are two ways to do this: first, graphically (figure 6.12) and secondly by Hermite polynomial expansion. The latter method is equivalent to the first, but more mathematically useful. The details of Hermitian Gaussian transforms are beyond the scope of this course. In summary, the Hermite polynomials are a convenient series of functions that, when added together, can approximate most functional shapes. The transform is:

$$\sum_{n=1}^{N} \frac{\psi_n}{n!}\, H_n(y)$$

Figure 6.12 Graphical Gaussian transformation (‘anamorphosis’) after Journel and Huijbregts (1978)

Note that, under certain assumptions, the variances of each structure in a model for a logarithmic or Gaussian variogram can be related to the variances in the traditional variogram, making these models useful for determining the nugget effect and sills as well as the ranges. In the case of the Gaussian transform, this relationship is given by Guibal (1987) as:

$$\gamma_Z(h) = \sum_{n=1}^{N} \frac{\psi_n^2}{n!}\left[\,1 - \{1 - \gamma_Y(h)\}^n\,\right]$$

where γZ(h) is the variogram in terms of the Z (or raw) values, and γY(h) is the variogram of the Gaussian-transformed values. If the Gaussian transform is available, it is generally preferred to using logs and usually performs better. This is because the log transform will not generally result in a Gaussian distribution—this will occur only if the data are strictly lognormally distributed (and this is very rare).
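
For reference, a rank-based ('graphical') Gaussian transform along the lines of figure 6.12 can be sketched as follows; this simple version assumes SciPy is available and ignores ties and declustering weights, which production anamorphosis routines must handle:

```python
import numpy as np
from scipy.stats import norm

def normal_scores(z):
    """Rank-based ('graphical') Gaussian transform of a data set.

    Each value is mapped to the standard-normal quantile of its cumulative
    frequency, mimicking the graphical anamorphosis of figure 6.12.
    """
    z = np.asarray(z, dtype=float)
    ranks = np.argsort(np.argsort(z))   # 0 .. n-1, rank of each value
    p = (ranks + 0.5) / len(z)          # plotting-position cumulative frequencies
    return norm.ppf(p)                  # standard normal quantiles
```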

Indicator Transforms

Indicators
Indicator transformations are non-linear. Thus we cannot krige untransformed grades using indicator variograms, including the 'median indicator' variogram.

The use of indicators is a different strategy for performing structural analysis with a view to characterising the spatial distribution of grades. In this case, the transformed distribution is binary, and so by definition does not contain extreme values. Furthermore, the indicator variogram for a specified cut off zc is physically interpretable as characterising the spatial continuity of samples with grades exceeding zc. A good survey of the indicator approach can be found in the papers of Andre Journel (e.g. 1983, 1987, 1989). An indicator random variable I(x, zc) is defined, at a location x, for the cut off zc, as the binary or step function that assumes the value 0 or 1 under the following conditions:

$$I(x, z_c) = \begin{cases} 0 & \text{if } Z(x) \le z_c \\ 1 & \text{if } Z(x) > z_c \end{cases}$$
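
A minimal sketch of the transform and the resulting indicator variogram, again for regularly spaced 1-D data (cut-off and lag handling deliberately simplified):

```python
import numpy as np

def indicator_variogram(z, cutoff, max_lag):
    """Indicator transform followed by an ordinary experimental variogram.

    The indicator is 1 where the grade exceeds the cut-off and 0 otherwise,
    so any variogram routine can process it unchanged. 1-D sketch only.
    """
    ind = (np.asarray(z, dtype=float) > cutoff).astype(float)  # I(x, zc)
    gamma = []
    for h in range(1, max_lag + 1):
        gamma.append(0.5 * np.mean((ind[h:] - ind[:-h]) ** 2))
    return np.array(gamma)
```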

After transforming the data, the indicator variogram can be calculated easily by any program written to calculate an experimental variogram: an indicator variogram is simply the variogram of the indicators. In addition to its uses for indicator kriging (IK), probability kriging (PK) and allied techniques, the indicator variogram can be useful when making a structural analysis to determine the average dimensions of mineralised pods at different cut offs, for example. We now consider the application of some of these techniques for structural analysis of a gold deposit.

A Case Study of Variography

The following study (Vann, 1993) was designed to characterise the spatial distribution of the known primary ore in an open pit gold mine. The aim of the study was to guide subsequent analysis of future exploration drilling strategies and estimation methodologies. However, the overall approach gives an example of the process of making a structural analysis.


The Data

Grade control at the mine was performed by kriging. The kriging was based on gold fire assay results from sampling of vertical, 5 m deep blast holes (BH). The raw BH data is the most exhaustive grade information available for the mine.

The initial step of the study was to flag data within the zone of geological interest. BH holes were mostly drilled on 3 x 8 m spacings but some areas were drilled at 3 x 4 m. Holes that were not on the 3 x 8 m pattern were excluded. This step is a 'declustering' and was necessary to avoid preferential (or clustered) sampling, which may affect statistics and variography.

Advantages of Consistent Spacing

Another reason that a consistent sampling pattern is desirable is that it enables us to construct a block model in which, on average, each cell contains only one BH sample. This is useful if meaningful conditional statistics are to be calculated and also has considerable advantages from a viewpoint of computational efficiency when calculating variograms.

Figure 6.13 Example of sample locations (declustered)


Histogram

The histogram is given in figure 6.14.

Figure 6.14 Histogram of the BH gold assays (declustered)

Notes:

• The coefficient of variation (CV) or 'relative standard deviation' exceeds 2.0.
• The histogram is asymmetrical, with a clear positive skewness, i.e. it is skewed to the right, with a 'tail' of high values.
• The mean exceeds the median by approximately 1.0 g/t.
• The data include some values that are 'extreme' in the sense that they are very high (e.g. the 115 g/t assay).

The above observations indicate that it will be difficult to perform estimations for this mineralisation. The presence of a small percentage of high grades, implied by the skewed distribution, is often a forewarning of noisy grade variograms. A few very high values can have a large influence on an experimental variogram (Rivoirard, 1987a).


High CV A high CV (say > 2.0) is an indication that variography will be difficult.


Deposits with CV's well in excess of 1.0 (gold and uranium deposits, for example) are often difficult from the point of view of variography and estimation (Isaaks and Srivastava, 1989).

Proportional Effect

Previously we learned that a proportional effect is present when the local variance is related to the local mean. Lognormal distributions always exhibit a proportional effect, for example. Figure 6.15 shows plots of variance (s²) versus squared mean (m²) for columns, rows and benches in the block model. Plotting standard deviation (s) versus mean (m) is, of course, equivalent. A proportional effect is present since the variance of the grades in a column/row/bench systematically increases with the mean grade of that column/row/bench.

Figure 6.15 Plots of proportional effects
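
A sketch of the kind of check plotted in figure 6.15: group the grades (here by bench, though columns and rows work the same way), compute local means and variances, and look for a systematic relationship. The grouping variable and function name are illustrative:

```python
import numpy as np

def proportional_effect(bench_ids, grades):
    """Local mean and local variance per group (here, per bench).

    Returns (m, s2) arrays; a roughly linear trend of s2 against m**2
    (equivalently, s against m) indicates a proportional effect.
    """
    bench_ids = np.asarray(bench_ids)
    grades = np.asarray(grades, dtype=float)
    m, s2 = [], []
    for b in np.unique(bench_ids):
        vals = grades[bench_ids == b]
        if len(vals) > 1:
            m.append(vals.mean())
            s2.append(vals.var())
    return np.array(m), np.array(s2)

# plot s2 against m**2 (or s against m) to look for the relationship
```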


We note that the proportional effect observed is not strictly linear, with some suggestion of at least a component of the proportional effect being quadratic in nature.

Variograms

The variogram enables us to characterise the spatial distribution (or continuity) of mineralisation for interpretative purposes, in addition to providing a structural function for kriging. The aim of variography in this study was to attempt to 'characterise' the mineralisation, especially with respect to the distribution of high grade material.

Figure 6.16 Experimental variograms

The BH composites, being at the nodes of a block model, are regularly spaced: at 8 metre intervals in the north-south (Y) direction, 3 metre intervals in the east-west (X) direction and 5 metre intervals in the vertical (Z) direction. Calculation of the variogram is straightforward for regularly spaced data (as discussed earlier). The definition of appropriate lag spacings is simple in such a case: lags are chosen that correspond to the dimensions of a unit cell in the block model. Figure 6.16 shows directional experimental variograms calculated for the zone of interest. The variance (s²) of the BH data is indicated on figure 6.16 (and subsequent figures) by a dashed line. For the sake of clarity, only the experimental variograms calculated for the principal directions of the block model are shown. The intermediate directions (NW-SE and NE-SW), although calculated in each case, cluttered these plots unnecessarily and are therefore not shown.


The experimental variogram suggests a high relative nugget effect (ε); i.e. the experimental plots, if extrapolated to the γ(h) axis, would intersect at a level close to that of the variance of BH samples (a priori variance = 12.3). The variogram in the EW direction reveals a single spatial structure with an apparent range of about 15-20 metres. The fall in γ(h) at lags beyond 40 m in the EW direction is mainly due to the progressively smaller number of pairs employed to calculate γ(h) at these lags. Recall that an experimental variogram cannot be considered strictly reliable for distances beyond one third of the maximum available lag spacing.

Possible Non-Stationarity?

The variogram in the vertical direction is structured but does not reach a sill. Points beyond the 30 m plotted in figure 6.16 continue to rise, but are based, as with the distal lags of the EW variogram, on too few sample pairs to be regarded as reliable. It was known that an increasing trend in grade existed for the lowermost benches of the zone of interest, and this possible non-stationarity might provide an explanation for the observed variogram behaviour in the vertical direction. The variogram in the NS direction rises up to a distance of about 50-60 metres and then stabilises at about the level of the a priori variance.

Relative Variograms

Given the presence of proportional effects, discussed above, relative variograms would be expected to be more structured than the 'naïve' variogram (i.e. as calculated above).

Pair-Wise Relative Variogram

There are a number of different ways of calculating relative variograms (see David, 1988, pp. 42-49 and Isaaks and Srivastava, 1989, pp. 163-170 for details). The use of relative variograms was pioneered by David, and although their theoretical applicability has been debated, they can be very useful tools to reveal structuring in data sets where variography is influenced by very high values. The particular relative variogram employed here is a pair-wise relative variogram γPR(h) standardised by lag variance, introduced as the 'Sigma i-j' relative variogram in the previous section of this chapter.

Relative Variograms
In the case of a linear proportional effect, a relative variogram should 'work' well.

• The experimental relative variogram still suggests a high relative nugget effect (ε), though possibly lower than the 'raw' variogram.
• In the EW direction there is a short-scale structure with an apparent range of about 15-20 metres. Also evident is a possible longer-scale structure with a range of 30-40 m or so.
• The fall in γ(h) at lags beyond 40 m seen in the variogram is not observed in the relative variogram.


• The relative variogram in the vertical direction also rises without reaching a sill.

Figure 6.17 Relative Variogram (Sigma i-j)

• The relative variogram in the NS direction has an inflection at the second lag that is suggestive of a short-range structure. This inflection is only subtly evident in the traditional variogram, and is better revealed by the γPR(h) plot. The longer-range structure seen in the variogram is also clearer in the relative variogram, with an apparent range of perhaps 100 metres.

Variograms of the Logarithmic Transform

The stated purpose of this variography was to characterise the spatial distribution of mineralisation, especially the average shape and dimensions of high grade 'pods' of ore. In this context, then, it is the ranges, and more specifically the anisotropies, from the variography that interest us. Given the skewed nature of the BH histogram, it was decided that these would be better assessed by employing transformations of the data.

Extreme values
Removing (or cutting) the extreme values in order to 'improve' the variography is dangerous indeed!

Extreme Values & Variography

Highly skewed histograms containing 'extreme' valued observations [19] are usually an early warning of poor variography of the 'raw' grades (in this case, untransformed BH composites). Extreme values are the richest samples, and in the case of gold deposits can make the orebody economic. To remove (or cut) the extreme values in order to 'improve' the variography is dangerous indeed!

To deal with the problem of describing the spatial continuity of strongly skewed distributions that include extreme values, one alternative is to employ relative variograms; another strategy is to use some transform of the original values. For example, rather than simply calculate variograms of the original data values, variograms of their logarithms may be calculated.

The Implications of the Transform

We repeat that the use of a logarithmic transform does not imply any assumption of underlying lognormality of the distribution. In any event, few mineral deposits have truly lognormal distributions. The logarithmic transform is simply a convenient deskewing transformation that reduces the adverse effects of very high values on the variography. Another common approach is to transform the distribution to that of a normal distribution, i.e. the Gaussian transformation. The log transform is used here for reasons of simplicity, not for any theoretical reason. In fact, since the BH data are more skewed than lognormal, a Gaussian transform may have performed even better. By reducing the skewness of the distribution using any such transform, more structured, interpretable variography can often be obtained. The particular deskewing transform employed here is a variation on the simple logarithmic transform:

$$z'(x) = \log\{a + z(x)\}$$

This form of log transform endeavours to avoid amplifying small differences between low values by adding a translation constant a to each observation prior to taking logarithms. The value of a is equivalent to the additive constant for the three-parameter lognormal distribution. It should be specified bearing in mind the order of magnitude of the values themselves (Rivoirard, 1987a). In this case a value of 1.0, falling between the median and mean values of the BH data, was added.

[19] The term outlier is used in classical statistics to imply that data are outside some limits, beyond which values are considered 'uncharacteristic' (see Velleman and Hoaglin, 1981, p. 67). Because of this, the term outlier should be used with care when dealing with populations of gold assays. The term extreme value is preferred. Closer spaced observations may place such extreme values in context with surrounding values, so terms like erratic value should also be used with care. As used here, the label 'extreme' simply means a high-valued observation that has an undue influence upon experimental variography. An extreme value may be an observational or other error, and consequently, such high values should always be examined carefully. If such values are legitimate, they make a disproportionate contribution to the metal content of the deposit and it is inadvisable to employ arbitrary 'cuts'.


The variogram γL(h) of the transformed data set is then simply calculated. The resultant directional experimental log variograms are shown in figure 6.18.

Figure 6.18 Variogram of logarithmic transform

The Model Fitted

A nested spherical model was fitted. This model is of the form:

$$\gamma(h) = C_0 + C_1 \cdot \mathrm{Sph}_{a_1}(h) + C_2 \cdot \mathrm{Sph}_{a_2}(h)$$

where Sph denotes the spherical variogram model and C0, Ci and ai represent the nugget, sills and ranges respectively. The parameters of the model are tabulated on figure 6.18.
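
A sketch of this nested spherical model as code; the parameter names mirror the equation, and the specific values used in the study would be read from the table on figure 6.18:

```python
import numpy as np

def spherical(h, a):
    """Unit-sill spherical model: 1.5(h/a) - 0.5(h/a)^3 up to range a, then 1."""
    r = np.minimum(np.asarray(h, dtype=float) / a, 1.0)
    return 1.5 * r - 0.5 * r ** 3

def nested_spherical(h, c0, c1, a1, c2, a2):
    """gamma(h) = C0 + C1*Sph_a1(h) + C2*Sph_a2(h); gamma(0) = 0 by definition."""
    h = np.asarray(h, dtype=float)
    g = c0 + c1 * spherical(h, a1) + c2 * spherical(h, a2)
    return np.where(h > 0, g, 0.0)
```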

Log Variograms and Relative Variograms

The logarithmic variogram of the BH data is certainly more clearly structured than the relative variogram. David (1988) notes that:


“a logarithmic (semi-)variogram γL(h) usually looks better than a relative (semi-)variogram γPR(h) computed on the same data, hence it is easier to fit a model to the logarithmic (semi-)variogram”.

Note that a relation exists for converting the model derived from the log variogram to that of the relative variogram, and that the two are equivalent in the case of lognormality (see David, 1988 for details). The range for a variogram or relative variogram should be the same as that for a log variogram on the same data. This is because if two values are independent of each other, then so are their logarithms. The better-defined, continuous structure revealed in the log variogram is a result of reducing the influence of extreme values by employing a deskewing transform. This underlying structure is virtually masked in the "naive" variogram, and not even revealed very well in the relative variogram.

The Relative Nugget Effect and Non-Linear Transformations

Relative Nugget
Unlike the range, the relative nugget effect changes when a non-linear transform is employed.

The apparent relative nugget effect (ε) is much lower on the log variogram. Warning: unlike the range, the ratio C0/C changes when a non-linear transform is employed (like taking logarithms)! It can be shown from theory that this ratio is always higher in the relative variogram than it is in the logarithmic variogram. The larger relative nugget effect in the variogram and relative variogram compared to the log variogram is explained by stronger short-scale variation in grade when considering real grades as opposed to log-transformed grades (David, 1988). The use here of an additive constant in the log transform enhances the effect, further decreasing ε.

Ranges

The model fitted has a NS range of 150 m, although the experimental variogram in this direction levels out at about 110 m before a slight rise at a higher level at about 140 m. It seems reasonable to say that the long-scale structure in the NS direction has a range of between 110-150 m. The important short-scale NS structure observed previously in the relative variogram is also clear here, with a range of 20 m.

Anisotropy

There is pronounced anisotropy, with the EW log variogram presenting much shorter ranges than those observed in the NS direction [20]. However, the experimental log variogram in the EW direction has an undulating form: it starts to level out at 15 m or so, only to rise sharply again to stabilise at the level of the sill.

A more subtle behaviour of this type is also apparent in the NESW direction (at about 50-60 m). A possible explanation is that the mineralisation is more strongly heterogeneous in these directions; in other words, we are dealing with non-stationary behaviour.

[20] The anisotropy here is specified by anisotropy ratios. The model in the NS direction (the longest range in all cases) is assigned anisotropy ratios (Y-anis in the figures) of 1.0 for Sph1 and Sph2. The spherical structures in the other directions must be divided by their anisotropy ratios (X-anis and Z-anis) to obtain the ranges discussed in the text.


The long range in the EW direction is estimated to be 30 m while the short-scale structure has a range of 13-15 m. The subject of the "poddy" nature of the mineralisation is discussed further below. The experimental log variogram in the vertical direction is incomplete: there are not enough lags in this direction to define it properly. There is no way around this problem; it is a limitation of the available data. The model finally fitted assumes a nested spherical structure with a short range of approximately 13-15 m (i.e. the same as for the EW direction) and a longer range of about 40 m (i.e. slightly longer than in the EW direction). Because of the availability of only 10 lags (ten benches, a distance of 50 m) for the experimental variogram in the vertical direction, specification of the long range is particularly uncertain. The experimental log variograms for the NESW & NWSE directions have intermediate ranges. This is consistent with the long-axis of the anisotropy ellipse (in the horizontal plane) being oriented NS and the short-axis EW. The ranges of the log variogram are thus consistent with an overall, or large-scale, control on the mineralisation on a scale of 100-150 m NS and 30 m EW, i.e. pronounced anisotropy is evident. Any large-scale control on mineralisation geometry in the vertical direction cannot be determined with the available data. Short-scale structures have ranges of about 15-20 m in the NS, EW and vertical directions; i.e. the short-scale structuring is effectively isotropic.

Again: Possible Non-Stationarity?

It is worth noting that the log variogram in the vertical direction does not rise above the level of the overall variance in the manner it did for the conventional variogram and the relative variogram. The smoothing transform of taking logs has eliminated this artefact, suggesting that it was caused, in part, by a small number of pairs containing extreme values in the vertical direction, yielding much greater average squared differences at given lags than in other directions. At greater lags, there are always fewer pairs, and the impact of a single high value will become more pronounced as the number of pairs falls. The behaviour of the experimental variogram in the vertical direction implies that extreme values occur near the top or bottom of the zone of interest. In fact, the two richest samples (115 and 99 g/t) occur on the lowermost two benches of the zone of interest. The trend for increasing grade on the lowermost benches of the zone of interest is real, but is probably exaggerated by a few very high grades.

Variograms of the Indicator Transform

For an indicator transform, each sample is assigned a value of 1 or 0 depending upon whether or not it exceeds a specified cut-off, zc.

Why Use Indicators?

Extreme Values
The indicator-transformed distribution is binary, and so—by definition—does not contain extreme values.

The use of indicators is a different strategy for performing structural analysis with a view to characterising the spatial distribution of grades. In this case, the transformed distribution is binary, and so—by definition—does not contain extreme values. Furthermore, the indicator variogram for a specified cut off zc should be physically interpretable as characterising the spatial continuity of samples with grades exceeding zc.

SELECTING THE CUT OFF

Indicator variograms were calculated for the indicator I(zc = 3.0). This cut off was selected after producing and examining bench-by-bench 1:500 scale hand contouring of raw BH data for the entire zone of interest (not reproduced here). The data for these plans were generated by computer and then broadly contoured 'by eye' at several cut offs. This suggested coherence of mineralised pods at cut offs up to about 5 g/t, but destructuring [21] above this cut off. The 3 g/t cut off showed the most coherent outlining of higher grades. Lower cut offs were used to extend the analysis. In this case, we will only present a single cut off. Figure 6.19 shows the I(zc = 3.0) variogram for the principal directions of the block model, i.e. the EW, NS and vertical directions. The experimental indicator variograms for the intermediate directions (NWSE and NESW) are not shown, again for the sake of clarity—they fall between the NS variogram and the shorter-range EW and vertical variograms, as was the case for the log variogram.

[21] Destructuring of high grades is a phenomenon described by Matheron (1982) in which the indicator variograms for progressively higher cut offs tend towards a pure nugget effect model.


Figure 6.19 Variogram of indicator for cut off = 3.0 g/t

An anisotropic, nested spherical model with nugget effect is once again fitted. Points to note:

• The relative nugget effect is, again, lower than for the variogram and relative variogram.
• The long range in the NS direction is about 80 m and the short range is 20 m.
• The model fitted for the EW and vertical directions is identical for the I(zc = 3.0) variogram. Both directions present short-scale structure with a range of about 6-7 m and a long range of 20 m.

SHORT RANGE STRUCTURES

The short-range structures in the NS, EW and vertical directions strongly suggest the presence of coherent +3g/t mineralised pods, of no overall preferred orientation, with average dimensions of about 15-20m. The long-range structure in the NS direction probably reflects the overall geometry of the mineralised zone.


Overall, the indicator variography for I(zc = 3.0) reinforces the picture obtained from the log variogram, above.

Summary of Variography

The aim of this variography was to attempt to 'characterise' primary mineralisation, especially with respect to the distribution of high-grade material.

Variograms

The variogram of the raw or untransformed BH data is only poorly structured. From the point of view of characterising the shape of high-grade pods of mineralisation it is not very useful. A large relative nugget effect (ε) was observed. The influence of a small proportion of extreme-valued data on the variogram was pronounced.

Relative Variograms

The pair-wise relative 'sigma i-j' variogram gives a clearer picture of the distribution of grade in the zone of interest. This is no doubt due to the presence of a distinct proportional effect. In such situations relative variograms are more 'robust' and usually perform better than 'naïve' or 'raw' variograms. A discernible anisotropy was observed in the relative variogram.

Log Variograms

Variography for a log transform gave the clearest picture of the structuring of grade. In addition to clearly revealing anisotropy, the log transform enabled the fitting of a nested spherical model. We would expect a similar picture to emerge from a structural analysis based on a normal (Gaussian) transform.

Indicator Variograms

The indicator I(3.0) transform was chosen on the basis of grade maps. At a 3.0 g/t cut off there is clear structure in the indicator variogram. This is physically interpretable as summarising the spatial continuity of mineralisation above this grade.

Characterisation of Spatial Grade Distribution

The variography thus gave us the following picture of the spatial distribution (or character) of gold grade:

• Long-range structures, corresponding to overall control of mineralisation, have ranges of 100-150 m NS and 30 m EW, i.e. pronounced anisotropy.
• Short-range structures, corresponding to high-grade pods, have ranges of 15-20 m in both NS and EW directions. Indicator variography supports this interpretation of isotropy for +3 g/t pods.
• Vertical ranges are at least 15 m. Only 50 m of vertical BH data was available; beyond about one third of this distance (17 m) interpretation of variography is tentative.


Geological Factors

Geological interpretation is a vital and parallel step to variographic and statistical analysis. The overall model should be obtained as a result of performing and comparing all three analyses.

As an adjunct to the variography and exploratory data analysis performed for this study, a comprehensive and complete sectional, level-plan and long-sectional geological interpretation of the zone of interest was made. An attempt was made to integrate grade control data with diamond drilling information and in-pit geological bench mapping. The interpretation produced was thus based on all the available BH, DDH and mapping data within the zone of interest. This interpretation was largely made possible by computerised production of coloured BH grade plots in cross-section, plan and long-section. Importantly, pit mapping and DDH information revealed only a partial picture of the mineralisation: BH data was essential to allow detailed characterisation of high-grade pods.

Comparison of Geology and Variography

This step is compulsory! In our case, geological interpretation compares well with variography, showing major NW-trending structures at 100-150m (NS) spacing and NE-trending structures at ~30m (EW) spacing, giving a physical explanation for the observed variography. High grade pods average about 20 m x 20 m in plan and show no clear-cut overall anisotropy, although some individual pods may be elongated NS, EW or obliquely.


Chapter 7

Support

“This problem of the discrepancy between the support of our samples and the intended support of our estimates is one of the most difficult we face in estimation.”
Ed Isaaks and Mohan Srivastava, “An Introduction to Applied Geostatistics”, 1989

What is ‘Support’?

Definition The basic volume upon which a grade (or other spatial variable) is defined or measured is called the support of that variable.

Often a regionalised variable (ReV) is defined on a surface or volume rather than at a point. While it may be sensible to consider an elevation to be defined at a point, we usually consider grades to be associated with a volume. The basic volume upon which a ReV is defined or measured is called the support of the ReV. Complete specification of support includes the shape, size and orientation of the volume (Olea, 1991). If we consider the same phenomenon (say gold grades) with different support (say 1 m cores versus 2 m cores) then we are considering two different ReV's. These two ReV's have different support, and this implies different structural (or variographic) character. Grades defined on RC chip samples, HQ cores, underground channel samples, and mining blocks will thus be distinctly different in character. So, the important question arises: “how can we relate ReV's defined on different supports?” Another way to phrase this is: “knowing the grades of cores, what can we say about the grades of blocks?” We will consider the answer to this important question in two stages. Firstly, we consider the dispersion as a function of support.


"Dispersion" as a Function of Support Grades measured on a small support, say core samples v, can be much richer or poorer than grades of the same mineralisation that have been measured on larger supports, say mining blocksV=5x5x5m. Statistically, we say that grades on sample support are moredispersed than grades on block support. Support Effect In general, grades on smaller supports are more dispersed than grades on larger

supports. Although theglobal mean grade on different supports at zero cut-off should be identical, the variance of smaller supports will be higher. ‘Support effect’ is this influence of the support on the distribution of grades. An Example

We will consider the idea of dispersion via an example originally given by Delfiner (1979). The data are porosity measurements made on a thin section of sandstone, but they could be viewed as grades or any other additive attribute: the principles involved will not change. It may seem strange to work at such a small scale, but this allows us to obtain exhaustive data—not usually accessible in most geostatistical applications. The sandstone thin section was divided into 324 contiguous square areas, each square having sides 800 microns long (1 micron = 10⁻⁶ metres). Table 7.1 shows the original data. Porosity values were then averaged by groups of 4 (2x2 blocks); groups of 9 (3x3 blocks); and groups of 36 (6x6 blocks). The results of these averaging steps are given in tables 7.2, 7.3 and 7.4. Each table represents the same area.
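
The regrouping used to build tables 7.2-7.4 is easy to sketch in Python (NumPy assumed); the snippet below averages a point-support grid into b x b blocks and can be used to verify that the mean is essentially unchanged while the dispersion falls as the support grows:

```python
import numpy as np

def block_average(values, b):
    """Average a 2-D grid of point-support values into b x b blocks.

    Reproduces the kind of regrouping used for tables 7.2-7.4 (2x2, 3x3
    and 6x6 averages). The grid dimensions must be multiples of b.
    """
    v = np.asarray(values, dtype=float)
    ny, nx = v.shape
    return v.reshape(ny // b, b, nx // b, b).mean(axis=(1, 3))

# The mean is unchanged by averaging, but the variance drops as the
# support grows -- the 'support effect':
# for b in (1, 2, 3, 6):
#     blocks = block_average(porosity_grid, b)   # porosity_grid: 18 x 18 array
#     print(b, blocks.mean(), blocks.var())
```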


Table 7.1 Porosity Data - Original Measured Values

[18 x 18 grid of the 324 original porosity measurements, one value per 800-micron square.]

Table 7.2 Porosities of 2x2 Blocks

18.63  22.93  16.50  19.29  24.86  19.20  19.41  25.84  24.48
23.89  18.94  23.04  24.52  27.63  24.56  21.09  18.68  21.27
23.27  22.47  22.48  26.58  22.07  25.62  26.79  25.84  22.64
21.56  25.31  27.49  26.27  26.60  25.15  25.54  23.86  27.08
22.44  23.87  23.81  21.07  21.14  27.25  24.65  23.81  24.54
20.54  21.86  20.40  25.54  20.98  25.32  22.54  20.68  23.15
20.78  20.73  21.86  25.41  22.44  22.68  20.22  20.96  21.99
21.27  20.56  22.02  25.60  24.10  22.61  21.64  21.12  21.36
21.73  19.32  21.83  25.09  25.43  22.14  19.05  16.67  20.08

Table 7.3 Porosities of 3x3 Blocks

20.44  19.48  21.75  21.64  20.32  23.25
23.02  22.70  26.02  25.85  25.80  22.41
20.90  26.22  24.39  26.74  24.93  27.50
21.89  23.11  24.56  23.13  21.68  21.82
20.16  21.23  25.31  22.70  20.82  22.92
22.79  20.31  25.12  22.64  18.50  19.13


Table 7.4 Porosities of 6x6 Blocks

21.35  23.81  22.95
23.03  24.70  23.98
21.12  23.95  20.34

Note that we observe very high (>30) and very low (<10) values among the original measurements, whereas the block-averaged values become progressively less dispersed as the support increases, while the overall mean remains essentially unchanged.