Validation in Thermal Analysis – A Detailed Review
On Demand Webinar

Validation in thermal analysis demonstrates fitness for purpose of a proposed analytical method

Validation in thermal analysis

In analytical laboratories, most analyses are nowadays performed using computerized measurement systems. A validation process can be used to demonstrate fitness for purpose of the system before starting the experiments.
The seminar presents the basic concepts step by step. The focus is on in-house method validation. The nine validation steps are described in detail using suitable examples.

In this Webinar, we will discuss the basic principles of validation, from equipment qualification and computerized system validation through to analytical method validation.

34:30 min
English

Validation in thermal analysis deals with the following types of questions:
“How can the user be sure that the result is correct?”
“Is the temperature exact?”
“What is the precision?”
“Does the method used produce accurate results?”
“What happens if the sample mass differs from analysis to analysis?”

What is meant by validation of a laboratory analysis?
According to ISO 17025, Validation is the confirmation by objective evidence that the requirements for a specific intended use are fulfilled. In other words, validation demonstrates fitness for purpose.  
Generally, in laboratory analysis, the analyst follows a well-defined method, a Standard Operating Procedure – or SOP. The SOP describes in detail how to measure a particular sample using a computerized system.

A computerized system is not only the equipment and its related software. It also includes network components such as hubs, routers, cables, switches and bridges. It may include peripheral devices such as printers or plotters.
Trained staff who follow written SOPs as well as the instrument operating manual are also considered as being part of the computerized system.
From a validation point of view, one distinguishes between method validation and computerized system validation.

Different approaches to validation
Validation in thermal analysis deals with two types of analytical method validation:

  • In-house methods validation
  • Standard methods via interlaboratory studies

Interlaboratory studies provide data on the repeatability and comparability of results, and thereby allow you to determine the uncertainty of measurement of a method. In-house methods aim to quantify the parameters that are relevant to and characterize the performance of a method.


Slide 0: Validation in Thermal Analysis


Ladies and Gentlemen
Welcome to this seminar on validation in thermal analysis.

Slide 1: Topics


Many questions can arise when a laboratory reports the result of a thermal analysis measurement, for example:
“How can the user be sure that the result is correct?”
“Is the temperature exact?”
“What is the precision?”
“Does the method used produce accurate results?”
“What happens if the sample mass differs from analysis to analysis?”

In this seminar, I want to discuss how validation can help to answer these questions.

Slide 2: What is Validation


So, what exactly is Validation and what does it mean?

According to ISO 17025, Validation is the confirmation by objective evidence that the requirements for a specific intended use are fulfilled. In other words, validation demonstrates fitness for purpose.  
Generally, in laboratory analysis, the analyst follows a well-defined method, a Standard Operating Procedure – or SOP as it is commonly called. The SOP describes in detail how to measure a particular sample using a computerized system.
A computerized system is not only the equipment and its related software. It also includes network components such as hubs, routers, cables, switches and bridges. It may include peripheral devices such as printers or plotters.
Trained staff who follow written SOPs as well as the instrument operating manual are also considered as being part of the computerized system.
From a validation point of view, one distinguishes between method validation and computerized system validation.

Slide 3: What is Computerized System Validation?


I would like to start by explaining Computerized System Validation or CSV for short.

As the slide shows, CSV is the documented evidence that a computerized system operates in accordance with pre-determined specifications.
The pre-determined specifications are written in the User Requirement Specification or URS. It documents the functional parameters of the instrument and software that the user requires. The term DQ, or Design Qualification, is also used.

After the user has purchased the system that best suits his or her requirements, the IQ, OQ, and PQ are performed.
The IQ, or Installation Qualification, is the documented evidence that the installed system matches the vendor specifications.

Once the IQ has been completed and approved by the user, the OQ, or Operation Qualification is performed. This documents the evidence that the system operates as specified by the vendor.

Finally, the PQ, or Performance Qualification, is carried out. The PQ is the documented evidence that the system performs according to the URS. The PQ is the last step before the instrument is ready for use.

Change-control is part of the on-going validation process. It consists of periodic reviews to ensure that the system remains fit for purpose and controlled, for example when changes are made to the system.

Slide 4: What is Analytical Method Validation?


Once the system has been qualified, it can be used to develop, validate and perform analytical methods.

Analytical Method Validation, AMV, establishes that the performance characteristics of a method meet the specifications related to its intended use.
This implies that the analyst knows why the method is required and what the acceptable values of the key parameters are.

I now want to discuss the relationship and differences between Computerized System Validation and Analytical Method Validation.

Slide 5: CSV and AMV / Relationships and Responsibilities


The three yellow boxes shown in the lower half of the diagram represent the foundation blocks necessary to ensure that the system has been designed, built and maintained correctly. All three are the responsibility of the vendor.

Once this part has been successfully completed, the laboratory or end-user becomes responsible for Analytical Instrument Qualification and Computerized System Validation, CSV. This is shown by the upper three green boxes.

The two green boxes in the second line represent the CSV, namely the qualification of the equipment and software validation. This is the responsibility of the user. However, most vendors offer this service to laboratories because vendors know their products best.
CSV also forms the basis of the last remaining box: the Analytical Methods Validation, AMV. Once the system is released, each method can be validated within the range for which the instrument is qualified.

Slide 6: CSV Limitations


There are a number of problems associated with Computerized System Validation. For example, software cannot be completely validated. This was demonstrated by the work of Boehm, who describes the simple program flow segment shown in the slide. The number of conditional pathways, and hence possible tests, for the segment was calculated to be 10^20 (ten to the power of 20). It would take more than three times the geological age of the Earth to validate this program segment.

Most analytical software is far more complex than this simple program, so guidelines have been published in order to achieve CSV in practice.

Slide 7: Software Categories in GAMP


One of these guidelines is Good Automated Manufacturing Practice, GAMP.
This provides guidance to help with the validation of a computerized system. This slide shows the classification of software into different categories.

Systems used to control the instruments, acquire and process data and report the final result are classified in Categories 3, 4 and 5. As we progress through the categories, validation becomes more specific.

The METTLER TOLEDO STARe software is classified as Category 3, which describes off-the-shelf products used for business purposes. Its validation consists of:
installation testing, such as system access;
system functionality, such as user policy; and
documentation, such as manuals from the supplier.

According to the definition of validation, one has to demonstrate that the software operates as expected. However, validated software does not imply that the software is error-free; software support therefore remains the responsibility of the vendor.

Slide 8: Analytical Method Validation (AMV)


Let’s now focus on Analytical Method Validation, AMV.

Method validation is essential in order to meet accreditation and legal requirements when working in regulated environments. It demonstrates that the analytical method is fit for its intended use and gives correct results. It also provides objective evidence for defense against challenges, for example “due care” in product liability cases.


Methods have to be validated

  • during their development,
  • if you want to prove the equivalence of two methods, or
  • before using them in routine operation.


Furthermore, a method must be re-validated

  • if changes are made to the equipment,
  • if the analyst changes, or
  • if the method is adapted to a new problem.


Validation should not be regarded as a one-off activity, but rather as an on-going process linked to the life cycle of the analytical method.

Slide 9: Analytical Method Validation (AMV)


This slide shows the two main types of methods, namely standard and non-standard methods.

The rules of validation state that methods described in a standard test method such as ASTM, DIN or ISO do not need to be validated. However, the standard method should indicate all the necessary aspects of validation. This is a general problem with standard test methods: they are considered to be validated, but very often they are not qualified in terms of performance parameters.
For example, ASTM E928-03 states a repeatability and reproducibility of 0.19 and 0.72 mol% respectively, but the trueness is not specified. The responsibility remains with users to ensure that the validation is sufficiently complete to meet their needs and they will still need to verify that the performance characteristics can be met in their own laboratories.

Modified standard methods and methods that are not described in a standard test method therefore need to be validated.

There are two possibilities to validate methods:
One possibility is to participate in an interlaboratory study, also known as a round-robin study.
The second possibility is in-house method validation.

I will explain these two approaches in the next set of slides.

Slide 10: Interlaboratory studies in Thermal Analysis


Interlaboratory studies provide data on the repeatability and comparability of results, and thereby allow you to determine the uncertainty of measurement of a method.

In order to obtain good comparability, homogeneous stable materials are first prepared by the organizer. The materials are distributed to the participants and measured under defined conditions in their laboratories. Experience shows that although different laboratories use the same method, they often produce significantly different results. The statistical data obtained helps you to interpret and assess differing results, and, if necessary allows you to introduce appropriate corrective measures. Participants can check the reliability of the data generated in their laboratories and, in turn, check their own proficiency.

Interlaboratory studies are useful for the comparative validation of new methods that are intended to replace older standard methods. There are several kinds of interlaboratory studies, namely for

  • improving methods;
  • checking a particular procedure;
  • certifying materials; and
  • determining laboratory competence.

Finally, the organizer collects and evaluates the measured data from the different participants. They remain anonymous and are informed about their performance afterward.

The performance parameters obtained are discussed in the next slide.

Slide 11: Interlaboratory Studies in Thermal Analysis


The most important factors that lead to deviations between results are

  • the operator,
  • the equipment and its calibration, and
  • the ambient conditions during the tests, such as temperature, humidity or light.


If the operator, equipment, and ambient conditions are identical during the measurements, the conditions under which the measurements are performed are known as repeatability conditions.
If the operator, equipment, and ambient conditions are different, they are referred to as reproducibility conditions.

In interlaboratory studies, the deviations between the results from different laboratories are quantified using the repeatability standard deviation, sr, and the reproducibility standard deviation, sR.

Besides the different standard deviations, the corresponding repeatability and reproducibility limits are also used for assessing interlaboratory studies, as shown in the slide.

The repeatability limit, r, is equal to 2.8∙sr and

the reproducibility limit, R, is equal to 2.8∙sR.
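The limit calculations are simple enough to sketch in a few lines of Python; the standard-deviation values below are illustrative, not taken from any particular study (the factor 2.8 is approximately 1.96 times the square root of 2, covering 95% of the expected difference between two results):

```python
def repeatability_limit(s_r: float) -> float:
    """Repeatability limit r = 2.8 * s_r."""
    return 2.8 * s_r

def reproducibility_limit(s_R: float) -> float:
    """Reproducibility limit R = 2.8 * s_R."""
    return 2.8 * s_R

# Illustrative standard deviations for a temperature measurement (in K)
r = repeatability_limit(0.5)    # maximum expected difference under repeatability conditions
R = reproducibility_limit(1.2)  # maximum expected difference under reproducibility conditions
print(f"r = {r:.2f} K, R = {R:.2f} K")
```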

Slide 12: Interlaboratory studies in Thermal Analysis


Interlaboratory studies allow participants to compare their performance with that of other laboratories and to check the reliability of their results. The diagram on the left shows an example of such a comparison, in this case an interlaboratory study of the measurement of the glass transition temperature of a material.

Data can also be compared using the z-score which is defined as the laboratory mean value minus the interlaboratory mean value divided by the reproducibility standard deviation. The z-score shows whether a laboratory tends to obtain values that are too high or too low in a measurement method.
A z-score with an absolute value of less than 2 is considered satisfactory; a value between 2 and 3 gives cause for concern; a value above 3 is unsatisfactory.
In this example, laboratories 30 and 38 determined values for the glass transition temperature that are too high.
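The z-score calculation and the assessment thresholds can be sketched as follows; the laboratory and interlaboratory values are hypothetical, not the data from the slide:

```python
def z_score(lab_mean: float, interlab_mean: float, s_R: float) -> float:
    """z = (laboratory mean - interlaboratory mean) / reproducibility std dev."""
    return (lab_mean - interlab_mean) / s_R

def assess(z: float) -> str:
    """Classify a z-score according to the thresholds on the slide."""
    a = abs(z)
    if a < 2:
        return "satisfactory"
    elif a <= 3:
        return "cause for concern"
    return "unsatisfactory"

# Hypothetical glass transition temperatures (in degrees C)
z = z_score(lab_mean=106.5, interlab_mean=104.0, s_R=1.0)
print(z, assess(z))  # 2.5 -> cause for concern
```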

Slide 13: In-House Method Validation


This slide illustrates the second approach to validate a method, namely in-house method validation.

How is this done in practice? The process is shown schematically in the diagram.

The starting point is always a draft Standard Operating Procedure based on well-defined analytical requirements. The draft SOP describes the preparatory measures before carrying out the experiment, the sample preparation, the temperature program and the evaluation.

The extent of the validation must be documented in a validation plan before carrying out the experiments. The next step of the validation is to demonstrate the fitness for purpose of the method. Can the analytical requirements be achieved?

Usually, various shortcomings are discovered during the validation process. Appropriate modifications are then made to the draft SOP, which is then subjected to re-validation. If the requirements are met, a validated SOP is obtained according to which the routine work can be started.

Slide 14: In-House Method: the Validation Puzzle


The first step in validation concerns the quantitative determination of the parameters that are relevant to and characterize the performance of a method.

The puzzle shown in this slide presents an overview of possible parameters.

The parameters indicate whether a method can be used to solve a particular analytical problem. They also form the basis for defining control limits and other characteristic values that can be used to verify that a procedure is under control whenever it is used.

Validation should be as comprehensive as necessary to fulfill the requirements. It is not necessary to specify all the parameters, but only those that are relevant. It is the responsibility of the analyst to identify them.

Now let me explain some of the parameters that are important in thermal analysis.

Slide 15: Uncertainty of Measurement


The uncertainty of measurement is a quantitative measure of the quality of the particular measurement result. It allows the user to estimate the reliability of results. In simple terms, the uncertainty of measurement is the range of values within which the value of the quantity being measured, the so-called measurand, is expected to lie with a stated level of confidence.

The concept of the uncertainty of measurement applies not only to the uncertainty of the final result, but also to the uncertainties associated with the sample, sample preparation, environmental influences, experimental parameters, the analyst, and the evaluation of the measured data.
Uncertainty is not the same as error because the “true” value must be known to estimate the error, which is defined as the difference between the “true value” and the result.

Although the term uncertainty of measurement appears to have become widely accepted, it is important to note that most analytical procedures refer to the uncertainty of result rather than the uncertainty of measurement.

The next slide shows a general procedure for estimating uncertainties using an example.

Slide 16: Example: Uncertainty of the Enthalpy of Fusion


The slide shows a so-called cause-and-effect or fishbone diagram. It is a simple analysis tool for determining the factors that contribute to a particular problem. It subdivides the possible causes leading to an overall effect into main and secondary factors.
As an example, let’s consider the determination of the enthalpy of fusion by DSC. The main influence factors are:
the sample preparation; the instrument; the method; and the evaluation.

The individual sources are identified and their uncertainties evaluated:
The uncertainty of the sample mass is estimated to be ±0.3%;
the thermal contact with the crucible ±0.5%;
the calibration of the instrument ±1.5%; and
the integration ±3%.
Other influences such as putting the sample into the crucible, the heating rate and the gas flow are assumed to be negligible.

The estimated uncertainties, expressed in terms of standard deviations, are used to calculate the combined uncertainty which corresponds to the square root of the sum of the squares of the individual uncertainties. In this example, the combined uncertainty of the enthalpy of fusion is ±3.40%.
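This root-sum-of-squares combination can be reproduced in a few lines of Python using the component values from the slide:

```python
from math import sqrt

def combined_uncertainty(*components: float) -> float:
    """Square root of the sum of the squares of independent uncertainty components."""
    return sqrt(sum(u * u for u in components))

# Components from the fishbone example (all in percent):
# sample mass 0.3, thermal contact 0.5, calibration 1.5, integration 3.0
u_c = combined_uncertainty(0.3, 0.5, 1.5, 3.0)
print(f"combined uncertainty = \u00b1{u_c:.2f}%")  # \u00b13.40%
```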

Slide 17: Statistics


Statistical analysis is frequently used to verify performance parameters and interpret the results in order to prove the fitness for purpose of the method.
Two types of statistics are used:

  • Descriptive statistics. This is used to describe, organize, summarize, or visually display data such as the mean, standard deviation, graphs, variance, and
  • Inferential statistics. This is used to make predictions and decisions based on probability such as the confidence interval;  t-test, comparison of means; F-test, homogeneity of variances; Dixon-test, research of outliers, and so on.
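As a small illustration of the descriptive side, Python's standard statistics module computes these quantities directly; the measurement series below is hypothetical:

```python
import statistics

# Hypothetical enthalpy-of-fusion results for one measurement series (J/g)
series = [28.4, 28.7, 28.5, 28.9, 28.6, 28.5]

mean = statistics.mean(series)      # central value of the series
s = statistics.stdev(series)        # sample standard deviation
var = statistics.variance(series)   # sample variance (= s squared)
rsd = 100 * s / mean                # relative standard deviation in percent

print(f"mean = {mean:.2f} J/g, s = {s:.3f}, variance = {var:.4f}, RSD = {rsd:.2f}%")
```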

Slide 18: Selectivity and Specificity


The selectivity of a method is the extent to which a particular analyte can be determined in a complex matrix without interference from other constituents, or in other words, the extent to which the determination of a substance remains unaffected by the presence of other substances.
A method is specific when it is completely selective for a particular analyte or class of analyte. The terms selectivity and specificity are often used synonymously.

In practice, the selectivity is often determined indirectly: if the linearity, detection and quantification limit, precision and trueness are good in all matrices, the selectivity of the method is assumed to be adequate.

The question of selectivity or specificity plays a rather secondary role in the validation of thermal analysis methods.

Slide 19: Precision, Bias and Accuracy


Precision, bias and trueness have to do with the distribution of a set of measurement results.
These terms can be illustrated using a so-called dartboard presentation shown on the right hand side of this slide. The true value is assumed to be in the center. Only dartboard A shows good precision, good trueness and good accuracy.

Precision is a parameter that describes the closeness of agreement between independent results obtained under defined conditions. It describes the scatter or spread of the individual measurement due to random errors around their mean value. It is the random measurement error.

Trueness is equivalent to the absence of bias or a systematic measurement error. It is a parameter that describes the closeness of agreement between the mean value obtained from a large number of results and an accepted reference value.

Accuracy is a parameter that describes the closeness of agreement between an individual result and the accepted reference value. It is not an actual performance parameter, but a general term to describe precision and trueness.

Slide 20: Precision Levels


Precision can be specified at four different levels as shown in the slide:

  • System precision: This refers to the variability of the system itself. In thermal analysis, it can be quantified by measuring a stable sample several times without making any changes.
  • Repeatability precision: This expresses the variability under the same operating conditions over a short time interval. In addition to system precision, it includes contributions from sample preparation such as weighing, sealing the crucible, and so on.
  • Intermediate precision: This includes the influence of additional random effects such as the operator, equipment, and different days of analysis. It characterizes the precision that can be achieved in a laboratory over a longer time interval. In practice, several series of measurements will be made with the same sample but performed by several operators, and, if applicable, using different instruments.
  • Reproducibility precision: This is obtained by varying further factors between laboratories and is particularly important in the assessment of standard methods or if the method is performed at different locations.

Slide 21: Example to Illustrate Precision


The slide shows an experiment to illustrate precision. This is based on the example discussed in Slide 16, which describes and quantifies the factors influencing the uncertainty of measurement of the enthalpy of fusion by DSC.

Three series of six samples were prepared; each series was measured on a different day. The final series was measured by a second analyst. This enables the intermediate precision to be determined.

The descriptive statistics are shown in the upper part of this slide. The table shows the individual results, mean, standard deviation and variance of the three series. The graph shows the dispersion of individual results around the mean.

The lower table shows the inferential statistics: F-experimental is lower than F-critical with two and fifteen degrees of freedom at the 95% confidence level. This demonstrates that the variances are not statistically different. Since the P-value is greater than the significance level of 5%, the means are statistically equal.
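The comparison described here appears to be a one-way ANOVA, which is what yields the two and fifteen degrees of freedom for three series of six results. A stdlib-only Python sketch with hypothetical data:

```python
import statistics

def one_way_anova_F(groups):
    """Return (F, df_between, df_within) for a one-way ANOVA over several series."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = statistics.mean(x for g in groups for x in g)
    # Between-group sum of squares: spread of the group means around the grand mean
    ss_between = sum(len(g) * (statistics.mean(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: spread of individual results around their group mean
    ss_within = sum(sum((x - statistics.mean(g)) ** 2 for x in g) for g in groups)
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w

# Three hypothetical series of six enthalpy results each (J/g)
g1 = [28.4, 28.7, 28.5, 28.9, 28.6, 28.5]
g2 = [28.7, 28.9, 28.5, 28.8, 28.6, 28.7]
g3 = [28.5, 28.6, 28.7, 28.4, 28.8, 28.6]
F, df_b, df_w = one_way_anova_F([g1, g2, g3])
print(F, df_b, df_w)  # compare F with F-critical(2, 15) = 3.68 at the 95% level
```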

Slide 22: Detection and Quantitation Limits


The limit of detection, LOD, is the lowest amount or concentration of an analyte that can be detected, that is, distinguished from the noise, with adequate statistical confidence, that is, with reasonable certainty.
The limit of quantitation, LOQ, is the lowest amount or concentration of an analyte that can be determined with a given trueness and precision, that is, with acceptable uncertainty.  

The detection limit is usually determined by multiplying the noise by a factor of 3 to 4.

The quantitation limit is calculated by multiplying the noise by a factor of 10 to 20.

In thermal analysis, the noise can be calculated as the root mean square or RMS.

The diagram on the right hand side shows the verification of LOD and the LOQ in the validation of the enthalpy of fusion by DSC.
The noise was first determined. The amounts of substance for the limits of detection and quantitation were then calculated. These values were then verified experimentally.
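The noise and limit calculations can be sketched as follows; the baseline values are hypothetical, and the multipliers 3 and 10 are one choice from the ranges given above:

```python
from math import sqrt
import statistics

def rms_noise(baseline):
    """RMS of a blank baseline segment after removing its mean."""
    m = statistics.mean(baseline)
    return sqrt(sum((y - m) ** 2 for y in baseline) / len(baseline))

# Hypothetical baseline signal (e.g. heat flow in mW)
baseline = [0.01, -0.02, 0.015, -0.005, 0.02, -0.015, 0.0, 0.01]
noise = rms_noise(baseline)

lod = 3 * noise   # limit of detection: 3 to 4 times the noise (factor 3 chosen here)
loq = 10 * noise  # limit of quantitation: 10 to 20 times the noise (factor 10 chosen here)
print(f"noise = {noise:.4f}, LOD = {lod:.4f}, LOQ = {loq:.4f}")
```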

Slide 23: Robustness / Ruggedness


A method is said to be robust or rugged if small deviations from the experimental conditions have no or only a negligible influence on the results.

A typical procedure to investigate robustness/ruggedness is to:

  • identify parameters that could have an impact on the results,
  • then systematically vary the parameters in order to quantify their influence,
  • and finally, to specify their maximum permissible tolerances in routine operation.


In the validation example of the determination of the enthalpy of fusion by DSC, it was found that small variations of the heating rate and sample mass did not influence the results.
In addition, two different types of baseline integration were applied to the same series of results as a robustness test. The lowest part of the slide shows the results evaluated using line and spline baselines.
At first sight, the results look similar. However, a t-test was then used to compare the means at the 95% confidence level. The t-experimental is higher than the t-critical at ten degrees of freedom. This indicates that the means are statistically different. Based on this information, one can conclude that the type of baseline used influences the results and therefore needs to be specified in the SOP.
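The comparison described here is a two-sample t-test with pooled variance, which is what gives the ten degrees of freedom for two series of six results. A sketch with hypothetical data:

```python
import statistics
from math import sqrt

def pooled_t(a, b):
    """Two-sample t statistic with pooled variance; df = len(a) + len(b) - 2."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * statistics.variance(a) + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    t = (statistics.mean(a) - statistics.mean(b)) / sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2

# Hypothetical enthalpies evaluated with line vs. spline baselines (J/g)
line = [28.4, 28.7, 28.5, 28.9, 28.6, 28.5]
spline = [28.9, 29.1, 28.8, 29.3, 29.0, 28.9]
t, df = pooled_t(line, spline)
print(abs(t), df)  # compare |t| with t-critical(10) = 2.23 at the 95% level
```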

Slide 24: Declaration of Fitness for Purpose


If the analytical requirements and target values are met, then the method is validated by the declaration of fitness for purpose.
If the target values are not met, then more analytical development work is needed before writing a new draft SOP after which the validation process should be repeated.

In our enthalpy of fusion example, from a statistical point of view, the method has met the requirements and therefore can be considered as validated. Also, as we have seen during the validation, the baseline type influences the results and has to be specified in the validated SOP.

Slide 25: Summary


This slide summarizes the important points regarding validation.

Validation includes not only the method but also the computerized system.

Method validation is not a one-off activity but an on-going process; changes in the system or the method necessitate re-validation.

Analytical method validation is required to produce meaningful data. Two approaches are possible: interlaboratory studies, or in-house validation.

Performance parameters can vary with the application and have to be assessed during the validation.

Finally, a method is declared validated by its statement of fitness for purpose.

Slide 26: For more information


More information about validation in thermal analysis can be downloaded from METTLER TOLEDO Internet pages.

A number of UserCom articles related to validation are available. METTLER TOLEDO publishes articles on thermal analysis and applications from different fields twice a year in its well-known UserCom customer magazine. You can download back issues of thermal analysis UserCom as PDFs from the Internet.

You can also download information about webinars, application handbooks or of a more general nature from the other Internet addresses shown on this slide.

Slide 27: Thank You


This concludes my presentation on validation in thermal analysis.

Thank you very much for your interest and attention.

