Every pharmaceutical manufacturer has to comply with the requirements of current Good Manufacturing Practices (cGMP). To verify that quality standards are being met, there has to be a systematic approach by which data is collected and studied to confirm that processes operate as intended. This systematic approach is called Validation.
Validating a process provides a high level of assurance that batches produced by that process will be uniform and meet pre-determined quality requirements. Thus, validation confirms that a given process has been developed correctly and operates within defined controls. This, in turn, assures that quality products are consistently produced, reducing the chances of rejected batches and the need for reworking. In other words, a validated process offers significant cost savings compared to a process run without validation.
Worldwide, validation is now considered an integral part of Good Manufacturing Practices. A manufacturer who wishes to get approval to manufacture drugs or to introduce new drug products into the market must comply with validation requirements as specified by regulatory bodies.
It is important to remember that validation is not a one-off exercise; it is part of an ongoing effort to ensure that quality products are consistently produced.
History of Validation
In the mid-1970s, several issues were encountered with the sterility of large volume parenterals. In response, two FDA officials, Bud Loftus and Ted Byers, proposed the concept of validation to prevent such quality issues. Initially, validation activities centered on the processes involved in this category of products; later, the idea spread to other areas of the pharmaceutical industry. Validation was thus a concept pioneered by the US FDA, although there was no definition or mention of it in the regulations until 1978.
Definition
Each of the regulatory bodies has defined validation in different words. Some of the important definitions include:
European Commission, 1991:
“Validation is the act of proving, by GMPs, that any process leads to the expected results.”
In 2000, this definition was modified to read as:
“Validation is documented evidence that the process, operated within established parameters, can perform effectively and reproducibly to produce a medicinal product meeting its predetermined specifications and quality attributes.”
US FDA Definition: “Process validation is establishing documented evidence which provides a high degree of assurance that a specified process will consistently produce a product meeting its pre-determined specifications and quality characteristics.”
ICH Definition: “Process Validation is the means of ensuring and providing documentary evidence that processes within their specified design parameters are capable of repeatedly and reliably producing a finished product of the required quality.”
WHO Definition: “Validation is the documented act of proving that any procedure, process, equipment, material, activity or system leads to the expected result.”
Scope of Validation
Pharmaceutical manufacturers have to make sure their validation program covers all the important areas of pharmaceutical processing. The major areas include:
- Equipment validation (also called qualification).
- Facilities and utility validation (water system, air handling unit, compressed gas system, computer systems).
- Process validation.
- Cleaning validation.
- Analytical method validation.
- Instrument calibration.
Validation needs to be carried out for any new equipment, premises, utilities, systems, procedures, or processes. It must also be performed when a major change occurs in any of these. Validation is different from in-process testing: the latter only helps monitor that a process runs as expected, whereas validation aims at demonstrating that a given process is suitable for routine use because it consistently yields a product of the desired quality. Accordingly, validation activities focus on the most critical aspects of processes, and these are identified through a risk assessment approach.
Advantages of Validation
- Optimized processes.
- Assured quality of products.
- Reduced cost of maintaining quality.
- Increased output.
- Reduced complaints, rejections, batch failure, mix-ups, and cross-contamination.
- Faster scale-up from pilot level to the manufacturing level.
- Better compliance with regulatory requirements.
Types of Validation
Validation can be done at different stages of the process. Accordingly, there are three main types of validation as follows:
1. Prospective Validation – done before the process commences
2. Concurrent Validation – done as the process is going on
3. Retrospective Validation – done on the already completed process
Prospective Validation
It is defined as establishing documented evidence that a given system does what it purports to do based on a previously determined protocol. This type of validation is generally carried out before the start of a new process of manufacture. It must be done on a minimum of three consecutive batches of the product.
To carry out this validation, each step of the proposed process is evaluated to determine which parameters are critical to the quality of the finished product. With this information, experiments are designed and documented in an authorized protocol.
The prospective validation protocol must cover the evaluation of all the equipment, facilities, utilities, and analytical test procedures that will be used in the production of the new product. Only after data on the critical process parameters has been obtained will it be possible to prepare the Master Batch Records.
Using such a well-defined process, a series of product batches must be produced. The number of batch runs must be sufficient to allow the collection of data for evaluation. Generally, three consecutive batch runs are considered sufficient for complete validation of the process; in practice, however, more than three runs may be required to arrive at sufficiently reliable data.
During a validation run, the batch size must be kept the same as that intended for regular industrial-scale production. If the validation batch products are intended for sale, care must be taken to produce the batches under conditions that comply fully with cGMP (current Good Manufacturing Practices). Moreover, such batches may be sold only after it has been verified that the validation exercise gave a satisfactory outcome and the batches have passed all quality requirements and been authorized for marketing.
When the validation batches are being processed, samples should be drawn at frequent intervals and tests should be performed at different stages of the production process; all results must be documented thoroughly. Final products in their final packs must also be tested for comprehensive data collection.
When deciding on the validation strategy, it is advisable to obtain data using different lots of active ingredients and major additives. Batches manufactured during different shifts, and using the different facilities and equipment intended for commercial production, must be evaluated. Readings must be taken over a wide operating range for the most critical operations, and all data obtained must be analyzed exhaustively.
Once the data generated has been reviewed, guidelines can be prepared regarding the level of monitoring necessary as a part of in-process controls during regular production. All such guidelines should be made a part of the Batch Manufacturing Record and Batch Packing Record. If necessary, they must also be added to the relevant Standard Operating Procedures (SOPs).
Prospective validation data is also to be used to determine limits, frequencies of testing, and actions to be taken in situations when the limits are exceeded.
Information in Prospective Validation Protocol
- Brief description of the process to be validated.
- Summary of the critical manufacturing steps to be studied.
- List of facilities and equipment to be used including monitoring/recording/measuring instruments/equipment and their calibration status.
- Analytical test methods to be used and their validation status.
- In-process controls proposed and their acceptance criteria.
- Sampling plan and procedures.
- Methods to record results and evaluate the data obtained.
- Specifications for finished product acceptance.
- Additional tests to be performed and their acceptance criteria and validation status.
- Proposed timeframe for the validation process.
- Functions and responsibilities in the validation program.
Concurrent Validation
Concurrent validation involves monitoring of the critical processing and testing steps at the in-process stage. It is almost the same as prospective validation except that the manufacturer will sell the products manufactured during the validation run, provided they meet all the pre-determined quality requirements. There must be documents maintained that show the justification for a concurrent validation, and due approval of the decision by authorized persons. Documentation for concurrent validation is the same as that for prospective validation.
Retrospective Validation
Retrospective validation is defined as establishing documented evidence that a system performs as purported, by reviewing the historical data collected during the manufacturing and testing stages. This validation is done for products that have already been distributed; it is therefore acceptable only for processes that are well-established and have been stable over many years of production. Retrospective validation is unsuitable where there has been any recent change in the product composition, the processing steps, or the equipment used in the manufacture and testing of the product.
A specific protocol must be prepared first, outlining how the retrospective validation will be carried out. Historical data is collected from batch manufacturing and packaging records, equipment logbooks, process control charts, personnel change records, finished product testing data, and stability test results. After historical data collection and review, results must be reported, along with a conclusion and recommendations, if any.
Batches for retrospective validation must be selected so as to represent all the batches made during the period chosen for review. The number of batches included must be sufficient to prove the consistency of the process; generally, data is collected from anywhere between 10 and 30 consecutive batches. If fewer batches are used, the reason must be justified and documented. Any batches that did not meet specifications during the review period must also be included. In some cases, samples retained after distribution may be tested to obtain the necessary data.
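To make "proving the consistency of the process" concrete, the sketch below shows one simple way historical assay results could be summarized, using a process capability index (Cpk) against the specification limits. This is an illustrative outline only; the batch values, limits, and use of Cpk are hypothetical assumptions, not regulatory requirements.

```python
import statistics

# Hypothetical assay results (% of label claim) from consecutive historical batches
assays = [99.1, 100.4, 98.7, 99.8, 101.2, 100.1, 99.5, 100.9, 99.3, 100.6]
lsl, usl = 95.0, 105.0  # illustrative specification limits

mean = statistics.mean(assays)
sd = statistics.stdev(assays)  # sample standard deviation

# Cpk: distance from the mean to the nearest specification limit, in units of 3*sigma
cpk = min(usl - mean, mean - lsl) / (3 * sd)
print(f"mean = {mean:.2f}%, sd = {sd:.3f}, Cpk = {cpk:.2f}")
```

A Cpk comfortably above 1 would suggest the historical process stayed well within its limits; a low Cpk would flag the process for closer investigation before it is declared validated retrospectively.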
Elements to be considered for retrospective validation:
- Batches manufactured during the defined period (e.g., the last 10 consecutive batches).
- Number of batches released per year.
- Batch size and strength.
- Master manufacturing and packaging records.
- Latest specifications for APIs and finished products.
- Process deviations list.
- Corrective actions list.
- Records of manufacturing document changes/revisions.
- Stability testing data for several batches.
Revalidation
During the normal course of operations, it may become necessary to introduce changes in a process to improve quality. Occasionally, new equipment or instruments may be installed, or there may be a change in the utility systems. Whenever such changes are introduced, it is vital to prove that they do not adversely affect the process or the product quality. Collecting such evidence is described as revalidation. The documentation and other requirements for revalidation match those of prospective validation.
Often, due to wear and tear over time, there may be a drift from normal operating conditions. This makes it important for manufacturers to schedule periodic revalidation of their systems, equipment, facilities, and processes to confirm that they continue to perform as expected and meet the prescribed quality requirements.
Changes that Necessitate Revalidation
1. Change in raw materials (especially physical properties such as particle size, moisture content, density, or viscosity, which tend to affect product or process quality).
2. Change in the vendor from whom APIs and other raw materials are procured.
3. Change in the primary container or other packaging material.
4. Process changes (such as drying temperature, mixing time, batch size, etc.).
5. Substitution of equipment with a new type of equipment (replacing equipment with a new model of the same type does not require revalidation, but the qualification steps of DQ, IQ, OQ, and PQ must be performed and documented).
6. Any change in the facility/premises/plant.
If a decision is taken to not perform revalidation trials despite a change in the process/equipment, the reason for this decision must be explained and documented.
Validation Master Plan
Definition: A Validation Master Plan (VMP) is defined as a document that provides information about the company’s validation program. This document must contain details of validation to be done, and the timeframes for the studies to be performed. There must be clear statements regarding who is responsible for each part of the validation program.
The WHO guidelines define VMP as “A high-level document that establishes an umbrella validation plan for the entire project and summarizes the manufacturer’s overall philosophy and approach.”
It is important to note the situations in which the words ‘validation’ and ‘qualification’ are to be used: when a system or piece of equipment is the focus of the exercise, it is known as ‘qualification’; when a process is the focus of the exercise, it is known as ‘validation’.
For example, one may qualify a spray dryer, and validate a drying process; similarly, an autoclave is qualified but the sterilization process is validated.
Purpose of VMP: The main purpose of the VMP is to give a comprehensive overview of the complete validation operation, how it has been organized, what it will cover, and the validation plan.
- It helps management understand how much time will be required, which personnel will be involved, and what expenses are likely to be incurred.
- It informs members of the validation team about their jobs and responsibilities.
- It helps inspectors/auditors to understand the company’s approach to validation activities.
Who should write the VMP: The best VMP is a result of a team-writing effort because it ensures a representation of the perspectives of different departments involved in the operations. When people from diverse areas of the operation are involved, it is more likely that all possible angles of approaching the VMP are covered. A VMP must be as long as required to convey all the necessary information to ensure a successful validation program.
Elements of a Good VMP: Validation Master Plans must contain the following information at the very least:
- Company’s validation policy.
- Organizational structure.
- List of items to be validated.
- A brief outline of systems, equipment, facilities, and processes to be validated.
- Formats for documenting protocols and test reports.
- Planning and scheduling of validation activities.
- Change control procedure.
- Training requirements for validation team.
- Details of persons responsible for each stage of validation – preparing the plan, drawing up protocols and standard operating procedures (SOPs), performing the actual validation work, preparing and controlling reports and documents, approving validation protocols and reports at every stage, and maintaining a system for tracking validation.
Contents of a VMP:
1. Title page with document number and version information, and authorization in the form of approval signatures.
2. Table of contents listing out critical areas of the VMP.
3. Glossary to define technical terms and abbreviations.
4. Plan for validation – details of the process steps, critical parts of the process that impact product quality and what is to be validated, when, where, how, and why.
5. Management’s approach to validation.
6. Scope of the validation – what will be covered under validation (and, equally, what will not be covered).
7. Roles and responsibilities of the different departments (validation team, manufacturing department, engineering department, Quality Assurance department, etc.) for each of the activities involved.
8. Services to be outsourced to outside vendors.
9. Deviation management – how to document, investigate and deal with deviations that may be encountered.
10. Change control procedures.
11. Risk management policy.
12. Training of personnel.
13. Validation matrix that outlines the validation required throughout the manufacturing facility in the order of most to least critical.
14. References – documents that guide the validation process.
Analytical Method Validation
When a raw material, in-process material, or finished product is tested using certain analytical methods, it is important to confirm that the analytical methods themselves produce reliable results. This is ensured by performing validation of analytical methods.
Regulatory requirements necessitate that the test methods used by a company show sufficient accuracy, specificity, sensitivity, and reproducibility. Moreover, modern cGMP guidelines require that quality is not merely tested but built into the product from the very first steps. It naturally follows that not just the manufacturing steps, but also the analytical methods used for testing products, must be designed with certain quality attributes.
Definition: Analytical method validation is defined as the process of establishing, through laboratory studies, that the procedure’s performance characteristics meet the requirements for its intended use.
Analytical method validation is not a one-time activity. Methods need to be revalidated regularly to ensure they remain suitable for analyzing the materials currently in use. Any change in equipment, instrumentation, or premises may also call for revalidation of the analytical method.
Steps in Analytical Method Validation
1. Planning analytical method validation.
2. Writing the protocol and getting it approved.
3. Executing the approved protocol.
4. Analyzing validation data obtained.
5. Reporting results of the validation.
6. Finalizing the analytical method procedure based on validation results.
Contents of Analytical Method Validation Protocol
1. Objective.
2. Parameters to be evaluated.
3. Acceptance criteria for the above parameters.
4. Experiment details.
5. Analytical procedure in draft form.
6. Procedure to deal with errors/deviations.
7. Methods to be used for data analysis.
Analytical Method Validation Parameters/ Characteristics
The analytical performance parameters that must be a part of validation programs include the following:
- Accuracy
- Precision
- Specificity
- Detection limit
- Quantitation limit
- Linearity
- Range
- Robustness
Accuracy: The International Council for Harmonisation (ICH) definition states that “Accuracy of an analytical procedure is the closeness of agreement between the values that are accepted either as conventional true values or an accepted reference value and the value found.”
For a drug substance, accuracy is determined by applying the analytical method to an analyte whose purity is known, such as a reference standard.
For drug products, accuracy is determined by applying the analytical method to mixtures containing drug components along with a known amount of analyte that has been added, within the operating range of the method.
According to ICH guidelines, a minimum of nine determinations must be performed over a minimum of three concentration levels that cover the specified range.
Accuracy is generally reported in terms of the percent recovery (by the assay) of the known amount of analyte added into the sample. It may also be reported in terms of the difference between the accepted true value and the mean, along with the confidence intervals.
Generally, the accuracy of recovery for drug substances must be between 99 – 101%. For drug products, the values may range between 98 – 102%. Any accuracy of recovery data that deviates from this range must be investigated in detail.
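As a simple illustration of the recovery calculation described above, the sketch below computes percent recovery for spiked samples; the spike amounts and results are hypothetical:

```python
# Hypothetical spike-recovery data: (amount added, amount found), in the same units
spikes = [(8.0, 7.96), (10.0, 10.05), (12.0, 11.90)]

recoveries = [found / added * 100 for added, found in spikes]
mean_recovery = sum(recoveries) / len(recoveries)

print([f"{r:.1f}%" for r in recoveries], f"mean = {mean_recovery:.1f}%")
# For a drug product, a mean recovery of roughly 98-102% would typically be acceptable.
```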
Precision: Precision is defined as the degree of closeness of a series of measurements obtained using multiple samples of the same substance under specified conditions.
Precision may be studied as three characteristics – repeatability, intermediate precision, and reproducibility.
Repeatability measures precision under the same conditions over a short time duration. This is done using normal operating conditions and the same equipment as usually used for the given analytical method. ICH guidelines prescribe that at least nine determinations should be run over the range specified for the procedure. Values to be reported include standard deviation, coefficient of variation (relative standard deviation), and confidence interval.
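The sketch below illustrates how these repeatability statistics might be computed from replicate results; the nine values are hypothetical, and the t-multiplier is the standard two-sided 95% value for eight degrees of freedom:

```python
import math
import statistics

# Hypothetical replicate assay results from a repeatability study (n = 9)
replicates = [100.2, 99.8, 100.5, 99.9, 100.1, 100.4, 99.7, 100.0, 100.3]

mean = statistics.mean(replicates)
sd = statistics.stdev(replicates)   # sample standard deviation
rsd = sd / mean * 100               # relative standard deviation, %

t = 2.306                           # two-sided 95% t-value for 8 degrees of freedom
half_width = t * sd / math.sqrt(len(replicates))

print(f"mean = {mean:.2f}, SD = {sd:.3f}, %RSD = {rsd:.2f}")
print(f"95% CI for the mean: ({mean - half_width:.2f}, {mean + half_width:.2f})")
```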
Intermediate precision refers to variation occurring within the same testing laboratory. It includes a study of day-to-day variation, equipment variation, and analyst variation.
Reproducibility gives information about the precision of measurements between laboratories. To validate reproducibility, the same study must be performed using the same experimental design and same sample lot at the different laboratories.
Specificity: ICH definition of specificity is “The ability to assess unequivocally, an analyte, in the presence of other components that are expected to be present”.
A test method is called specific if it can discriminate the compound of interest from other closely related compounds that may be present in the same sample. Samples containing the analyte must show positive results; samples without the analyte must show a negative result. Also, when closely related compounds are tested, the test method must not show a positive result.
Detection Limit: Detection limit (DL) is defined as the “lowest amount of analyte present in a sample that can be detected but not necessarily quantitated under the stated experimental conditions.” DL is generally expressed in terms of analyte concentration in the sample (as parts per million, or percentage). DL may be established visually, using signal-to-noise ratios, or using data from the standard deviation and slope of the calibration curve.
Quantitation Limit: Quantitation limit (QL) is defined as the lowest level of an analyte that can be quantitatively measured under the given experimental conditions. This parameter is generally useful to assay analytes present in very low levels – for example, degradation products or impurities. QL may also be defined as the concentration of a related substance in the sample that produces a signal-to-noise ratio of 10:1. QL for a method is influenced by two important factors – the accuracy in sample preparation and sensitivity of the detector used.
QL may be evaluated by the visual method, signal-to-noise ratio method, and the calibration curve method. Once QL has been determined, it must be further validated by carrying out accuracy and precision measurements at this level.
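When DL and QL are derived from the calibration curve, the commonly cited ICH formulas are DL = 3.3σ/S and QL = 10σ/S, where σ is the standard deviation of the response and S is the slope of the calibration curve. The sketch below simply applies these formulas; the σ and slope values (and units) are hypothetical:

```python
def detection_and_quantitation_limits(sigma: float, slope: float) -> tuple[float, float]:
    """Calibration-curve approach: DL = 3.3*sigma/S and QL = 10*sigma/S,
    where sigma is the standard deviation of the response and S is the slope."""
    return 3.3 * sigma / slope, 10.0 * sigma / slope

# Hypothetical values: residual SD of 0.012 absorbance units, slope of 0.02 AU per mcg/mL
dl, ql = detection_and_quantitation_limits(sigma=0.012, slope=0.02)
print(f"DL = {dl:.2f} mcg/mL, QL = {ql:.2f} mcg/mL")
```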
Linearity: As per ICH guidelines, linearity is defined as “the ability (within a particular range) to obtain test results of variable data (such as the area under the curve, or absorbance) which are directly proportional to the concentration of the analyte in the sample.” Analyte quantitation may be done using variables such as peak height, peak area, or the ratio of peak heights/areas of analyte to the internal standard.
Linearity may be determined by two methods. The first one involves directly weighing different quantities of the standard to prepare solutions of different concentrations. The second and more popular approach is to prepare high concentration stock solutions and then dilute them to lower concentrations.
Linearity is accepted if the coefficient of determination is found to be greater than or equal to 0.997. ICH guidelines also require reporting of the slope, y-intercept, and residual sum of squares.
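The sketch below shows how these reported linearity statistics might be calculated from a calibration series; the concentrations and responses are hypothetical (the regression helpers require Python 3.10+):

```python
import statistics

# Hypothetical calibration data: concentration (mcg/mL) vs. instrument response
conc = [10.0, 20.0, 30.0, 40.0, 50.0]
resp = [0.201, 0.405, 0.598, 0.802, 0.999]

fit = statistics.linear_regression(conc, resp)   # least-squares slope and intercept
r = statistics.correlation(conc, resp)           # Pearson correlation coefficient
r_squared = r ** 2                               # coefficient of determination

# Residual sum of squares around the fitted line
rss = sum((y - (fit.slope * x + fit.intercept)) ** 2 for x, y in zip(conc, resp))

print(f"slope = {fit.slope:.5f}, intercept = {fit.intercept:.5f}")
print(f"R^2 = {r_squared:.5f}, residual sum of squares = {rss:.2e}")
# Linearity would be accepted here if the coefficient of determination is >= 0.997.
```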
Range: Range is defined as the interval between lower and upper concentrations of analyte in the sample for an analytical procedure that is demonstrated to possess a suitable level of accuracy, precision, and linearity. Assays must generally have a range of 80 – 120% of nominal concentration. Content uniformity tests must have a range of 70 – 130% of the nominal concentration.
Robustness: It is defined as the capability of an analytical method to remain unaffected by small but deliberate variations in the method parameters. This characteristic indicates how reliable a given analytical method is during normal usage conditions.
Finalizing the Analytical Procedure:
Following a successful analytical method validation, the final analytical procedure must be established and documented. The minimum information to be provided in this document includes:
1. Rationale for the procedure and capabilities of the method. If the method is a revised one, the advantages of the revision must be described.
2. Complete details of the analytical procedure to allow the method to be replicated by anyone reading it. All important instructions and parameters must be mentioned here, along with formulae for the calculation of results.
3. List of impurities that are permitted, and their limits for impurity assays.
4. Validation data.
5. Revision History.
6. Signatures of authorized personnel (author of the procedure, reviewer, management, and QA representatives).
Analytical Method Revalidation
Validated methods need to be revalidated in the following situations:
1. When a significant change has been introduced in the process of synthesizing the drug substance (API).
2. When a new impurity is encountered, changing the specificity profile of the method.
3. If changes are made in excipient composition which can introduce new impurities.
4. When there is a change in major equipment or change of API supplier that may change the degradation profile of the API.
Validation is one of the most important concepts in the area of drug development and manufacturing. By ensuring consistent and reliable processes, validation helps guarantee that products are manufactured with the desired quality attributes every time a process is run. Thus, it plays a crucial role in achieving the objective of QA: that quality is designed and built into the product instead of being merely tested at the final stages.
Calibration of pH meter:
This uses the two-point calibration method, which is performed using two buffers of known pH: one is a pH 7.0 standard buffer and the other is either an acidic or an alkaline buffer of known pH. It is important to make sure that all buffers are at the same temperature before beginning the calibration, because buffer pH varies with temperature.
1. Switch on the pH meter, and wait for enough time for it to warm up (as per information in the instrument’s operating manual).
2. Remove the electrode from its storage solution, rinse with distilled water and blot dry using a piece of tissue paper. Avoid rubbing the electrode while drying to prevent damage to the sensitive membrane that surrounds it.
3. Place the electrode tip into a buffer solution of pH 7.00 and press the “Measure” or “Calibrate” button, and wait for the display to stabilize.
4. Adjust the calibration button to make the display read 7 if required.
5. Remove the electrode from the buffer solution, rinse with distilled water and blot dry using fresh tissue paper.
6. Place the electrode into a buffer solution of either pH 4.01 or pH 9.20 and wait for the display to stabilize.
7. Adjust the calibration button to make the display read the pH as 4.01 or 9.20 depending on which buffer you have used.
8. Remove the electrode from the buffer, clean with distilled water, and blot dry with tissue paper.
9. Place the dried electrode back into the storage solution.
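Internally, a two-point calibration amounts to fitting a straight line between the electrode’s millivolt readings in the two buffers; the meter then converts any later reading to pH along that line. The sketch below shows the underlying arithmetic with hypothetical readings (real meters do this automatically and also apply temperature compensation):

```python
# Hypothetical electrode readings (mV) in the two calibration buffers
e7, ph_7 = 2.0, 7.00     # reading in the pH 7.00 buffer
e4, ph_4 = 172.0, 4.01   # reading in the pH 4.01 buffer

slope = (e4 - e7) / (ph_7 - ph_4)   # mV per pH unit (positive for a glass electrode)
slope_pct = slope / 59.16 * 100     # vs. the theoretical Nernstian slope at 25 deg C

def ph_from_mv(e_mv: float) -> float:
    """Convert an electrode reading to pH using the two-point calibration line."""
    return ph_7 - (e_mv - e7) / slope

print(f"electrode slope = {slope:.1f} mV/pH ({slope_pct:.0f}% of theoretical)")
print(f"a sample reading 95 mV corresponds to pH {ph_from_mv(95.0):.2f}")
```

A slope efficiency of roughly 95 – 105% of the theoretical value is a commonly used indication that the electrode is healthy, though the exact acceptance criterion should come from the laboratory’s own SOP.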
Qualification of UV-Visible Spectrophotometer:
The UV-Visible spectrophotometer is an instrument that is used to measure the absorbance of solutions over the ultraviolet and visible ranges of the electromagnetic spectrum, generally between 200 – 800 nanometres.
Qualification may be defined as the act of proving and documenting that given equipment or process or utility is correctly installed, working properly, and is consistently producing the expected results.
Qualification of the UV-Visible spectrophotometer involves the following steps:
1. Design qualification: The type and make of the instrument to be purchased must be chosen carefully depending on the specific requirements of the type of samples that will need to be measured.
2. Installation qualification:
(a) Install the instrument in a room that is maintained at a temperature between 15 – 25°C and relative humidity of 45 – 80%.
(b) The installation site must be away from dust, corrosive liquids or gases, and direct sunlight.
(c) Remove the instrument from its packing material, and place it on a surface that does not vibrate.
(d) Install the computer near the instrument, or load the software into an existing computer.
(e) Check and record the parameters specified for installation qualification in the instrument’s documentation at this stage.
3. Operational qualification:
(a) Connect the instrument to the mains and switch it on.
(b) Follow the operating manual instructions and check for the following parameters: Wavelength accuracy, linearity of absorbance, resolution, wavelength reproducibility, photometric accuracy, stray light, photometric noise, baseline flatness, and photometric stability.
(c) Measure the values obtained for each of the above parameters and check whether they meet the specified acceptance criteria. Record these values. (An illustrative wavelength-accuracy check is sketched after these qualification steps.)
4. Performance qualification:
(a) Define the performance criteria that are most important to routine operations.
(b) Define the acceptance criteria for these performance criteria selected.
(c) Determine the values for the chosen criteria and check if they meet the acceptable limits set.
(d) Decide on the frequency of regular calibration and performance qualification for routine use of the instrument.
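As one illustration of the operational-qualification checks listed in step 3, wavelength accuracy is typically verified by comparing measured peak positions against a certified reference material (such as a holmium oxide filter). The sketch below shows only the pass/fail comparison; the reference wavelengths and the ±1 nm tolerance are placeholders, not pharmacopoeial values:

```python
# Placeholder certified reference peaks (nm) and wavelengths measured on the instrument
reference_nm = [279.3, 361.0, 453.6, 536.4]
measured_nm = [279.6, 360.8, 453.2, 536.9]
tolerance_nm = 1.0  # placeholder acceptance criterion

for ref, meas in zip(reference_nm, measured_nm):
    deviation = meas - ref
    status = "PASS" if abs(deviation) <= tolerance_nm else "FAIL"
    print(f"ref {ref:6.1f} nm | measured {meas:6.1f} nm | "
          f"deviation {deviation:+.1f} nm | {status}")
```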