ETD Collection
Permanent URI for this collection: https://wiredspace.wits.ac.za/handle/10539/104
Item
Quantification of sampling uncertainties at grade control decision points for a platinum mine (2017)
Woollam, Mandi

The mining industry has been challenged with rising costs and lower mined ore grades, squeezing profit margins. Furthermore, with decreasing ore grades being mined, the PGE grade of interest is edging closer to the limits of the available analytical techniques. This places more pressure on maximising the precision and accuracy of assay data, and the quality of sampling has therefore become more important. The quality of sampling is affected by every contributing effect in the sampling chain. Grade control sampling assay data are used routinely, in conjunction with the geology of the ore, to assign a destination for the ore (either a stockpile or the mill) based on its grade classification for the platinum mine. The grade of a particular mining block is classified using four key elements (4E). This grade determination is often found to differ from that generated by the exploration sampling process. These differences between assay data obtained using different sampling techniques are due to errors in sampling, and these errors were investigated in depth in this study. In order to understand the possible errors in the assignment of grade to a mining block, the errors at the classification thresholds (referred to as grade control decision points, GCDP) were investigated. These errors comprise both random and systematic components, which were determined independently in this study. Components that generate these errors include the heterogeneity of the ore, the act of taking a sample, sample preparation, and the analytical technique used to analyse the sample. The random error associated with the assignment of grade to a particular mining block is highly dependent on the analytical technique, the grade of the ore, and the number of assays used.
Applying the central limit theorem when calculating the average grade of a block (with n = 30 samples analysed in duplicate) reduces the random error by a factor of 7.75. Although this factor reduces the random error sufficiently (< 10 % at all grades) to enhance confidence, the large errors (>> 10 %) attached to each of the thirty individual samples (at grades below 1.7 g/t 4E) still present a risk of incorrect classification of ore below 1.7 g/t 4E. The effects of systematic errors are additive and, unlike random error, cannot be reduced by averaging more assays. These errors have their origin in the "design" of the sampling protocol. This study highlighted that, in general, the grade calculated for a mine area will be lower when calculated from grade control assay data than when calculated from exploration assay data. This systematic error was also found to change in magnitude and sign depending on the 4E grade of the ore. The grade at which mining is viable (the pay-limit of the mine) is defined in the 2015 annual report as 2.5 g/t 4E. This study quantified the smallest systematic error (< ± 2 %) between the two sampling techniques at ore grades equal to 1.7 g/t 4E. Both these grades (2.5 and 1.7 g/t) are significant in that they are the upper and lower grade limits for the very low grade ore (VLGO) category. Ore assigned to grade categories higher than 2.5 g/t 4E will be marked as destined for the milling process. It is at this grade that the error should be minimised so as to minimise financial loss; such loss occurs when ore that should have been processed is stockpiled, or when ore is diluted with sub-standard grades that should rather have been stockpiled. This study quantified the errors associated with grade control into all its contributing parts; the largest error (-8 %) was assigned to the sample preparation component of the sampling protocol.
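The reduction factor quoted above follows from averaging independent assays: the standard error of a mean of k assays falls by a factor of √k, and 30 samples analysed in duplicate give k = 60 assays. A minimal sketch (the 50 % single-assay relative error used below is an illustrative assumption, not a figure from the thesis):

```python
import math

def mean_error_reduction(n_samples: int, replicates: int) -> float:
    """Factor by which averaging k = n_samples * replicates independent
    assays reduces the random (standard) error, per the central limit theorem."""
    return math.sqrt(n_samples * replicates)

factor = mean_error_reduction(30, 2)
print(round(factor, 2))  # ~7.75, matching the factor quoted in the abstract

# Hypothetical 50 % relative random error on a single low-grade assay:
single_assay_error = 50.0
block_error = single_assay_error / factor
print(round(block_error, 1))  # averaging brings it below the 10 % threshold
```

Note that this √k reduction applies only to the random component; the systematic errors discussed next are unaffected by averaging.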
All other components were quantified as an average error at the mine's different ore grades. Sampling design (quality assurance) components that present an opportunity to reduce the systematic errors were identified as:
• Increase the sub-sampling sample mass
• Re-design the sub-sampling sample cups
• Protect the integrity of the analytical sample through the introduction of a third sample cup to obtain the sample required for geological logging
• Optimise the drilling speed of the reverse circulation drill
• Optimise the rotational speed of the rota-port cone splitter
• Centre the cone splitter perfectly below the falling sample stream
• Increase the analytical sample mass (to a maximum)
• Modify sample preparation (eliminating the crushing stage)
• Minimise loss of fines during sampling and sample preparation
Additional quality control (QC) activities were recommended; these will reveal "out of (statistical) control" sampling components. The cost of incorrect classification of ore is difficult to quantify, but when mining tonnages are considered it is certain to be significant. Using the average realised basket price (ZAR per Pt oz) for 2016, the value of ore (at 2.5 g/t 4E) from a three-hundred-tonne truck is calculated to be around R287,666. Thus, if a truck is incorrectly assigned to the stockpile instead of processing, this value will be lost. Resources invested in the resolution of these systematic errors will be resources well spent. Management should be commended for investing in heterogeneity and twin-hole test work, both of which provide information that quantifies and identifies sampling errors. Measuring these errors is the first step towards their reduction or elimination.
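The truck-value figure above can be reproduced with simple arithmetic: tonnes × grade gives grams of contained 4E, converted to troy ounces and priced at the realised basket price. The price used below (≈ R11,930/oz) is an assumption back-calculated from the quoted R287,666, not a figure stated in the abstract:

```python
TROY_OZ_G = 31.1035  # grams per troy ounce

def ore_value_zar(tonnes: float, grade_g_per_t: float, price_zar_per_oz: float) -> float:
    """Contained 4E value of a truckload: tonnes * grade (g/t) gives grams
    of 4E, converted to troy ounces and multiplied by the basket price."""
    grams_4e = tonnes * grade_g_per_t
    return grams_4e / TROY_OZ_G * price_zar_per_oz

# Assumed 2016 basket price, back-calculated from the abstract's R287,666
# for a 300 t truck at 2.5 g/t 4E (750 g contained 4E, ~24.1 oz):
price = 287_666 * TROY_OZ_G / (300 * 2.5)
value = ore_value_zar(300, 2.5, price)
print(round(value))  # recovers ~R287,666: the loss if the truck is wrongly stockpiled
```

This makes the scale of the misclassification cost concrete: a single wrongly routed truckload forfeits roughly the full contained-metal value.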