Designing Degradation Experiments Using a Log-logistic Distribution

Assessing the reliability of a product is very important to improve the product's quality and to gain the trust of customers. Degradation experiments are usually used to assess the reliability of highly reliable products, which are not expected to fail under traditional life tests. Several decision variables, such as the sample size, the inspection frequency, and the termination time, have a direct influence on the experimental cost and the estimation precision of lifetime information. This paper deals with the optimal design of a degradation experiment where the degradation rate follows a log-logistic distribution. Under the constraint that the total experimental cost does not exceed a predetermined budget, the optimal decision variables are obtained by minimizing the mean squared error of the estimated 100p-th percentile of the lifetime distribution of the product. A simulation study and a real example of drug potency data are provided to illustrate the proposed method.


Introduction
Reliability is an important characteristic of a product's quality. Therefore, assessing the reliability of a product is very important to improve the product's quality and to gain the trust of customers. However, for highly reliable products that are designed to operate without failure for several years, assessing lifetimes using traditional life tests that record only time-to-failure is a difficult task, since no failures are expected to occur over a reasonable amount of time. In such cases, if there exist quality characteristics whose degradation over time is related to failure, then collecting degradation data can be useful in assessing the product's lifetime information. Lu and Meeker (1993) discussed statistical methods for using degradation measures to estimate a time-to-failure distribution for a large class of degradation models. In their book about reliability, Meeker and Escobar (1998, Chapter 13) provided various approaches that can be used to assess reliability information based on degradation data.
To conduct a degradation experiment, the experimenter has to pre-specify a critical level of degradation, obtain measurements of degradation at different fixed times, and define that failure occurs when the degradation reaches that level. On the other hand, degradation experiments are expensive, so it is essential to plan them carefully by determining the optimal setting of the three decision variables (i.e. the sample size, the inspection frequency, and the termination time of the experiment) that have a direct influence on the estimation precision and the total experimental cost. The total cost of a degradation experiment can be divided into three parts: the cost of tested units, the cost of operating the experiment, and the cost of inspections (Wu and Chang, 2002).
Increasing the sample size and the inspection frequency usually improves the precision of the estimates, but it also increases the total cost of the experiment. Therefore, an optimal degradation experiment design is one that achieves a tradeoff between estimation precision and budget.
There is an extensive literature on the design of degradation experiments, which shows the importance of the degradation rate distribution of products. Weibull and log-normal distributions have been used to describe degradation rates of highly reliable products. For example, Yu and Tseng (1999) studied the optimal setting of the decision variables for a degradation experiment with a linearized degradation model where the degradation rate follows a log-normal distribution. Their study was conducted under the constraint that the total experimental cost does not exceed a predetermined budget and used the criterion of minimizing the variance of the estimated 100p-th percentile of a product's lifetime distribution. Later, Yu and Tseng (2004) investigated the optimal combination of the decision variables for a degradation experiment, but this time with a different degradation rate, assumed to follow a reciprocal Weibull distribution. They also highlighted the importance of choosing the correct degradation rate distribution for the optimal test plan and showed that an incorrect choice may lead to serious bias.
For very highly reliable products, the degradation of the quality characteristic is usually very slow, and thus it is impossible to obtain a precise estimate within a reasonable amount of testing time. In such cases, we can conduct an accelerated degradation test (ADT) by using higher stresses to accelerate degradation at each run of the experiment (Meeker and Escobar, 1998). Yu (2002a) addressed the problem of designing an ADT to obtain an efficient interval estimate of the mean time to failure, determining the optimal decision variables by minimizing the expected width of its confidence interval, assuming that the degradation rate follows a log-normal distribution. Similarly, but using the criterion of minimizing the mean squared error of the estimated 100p-th percentile of the product's lifetime distribution, Yu (2003, 2002b) designed accelerated degradation experiments with log-normal and reciprocal Weibull degradation rates, respectively. For more details see Al-Haj Ebrahem, Alodat, and Arman (2009a) and Al-Haj Ebrahem, Eidous, and Kamil (2009b).
Recently, the log-logistic distribution has been found to be suitable for describing the degradation rate of highly reliable products (Chiodo and Mazzanti, 2004). In addition, it is often used to analyze lifetime data, especially for events whose rate increases initially and decreases later.
In most practical problems, interest focuses on the 100p-th percentile q_p of the time-to-failure distribution. A commonly used criterion to measure the estimation precision of the 100p-th percentile of the lifetime distribution is its mean squared error (MSE). Therefore, the optimal values of the decision variables are those that minimize the mean squared error of the estimated 100p-th percentile under the constraint that the total cost does not exceed a predetermined budget.
The primary objective of this paper is to determine the optimal setting of the decision variables that affect the experimental cost and the estimation precision of the 100p-th percentile of the lifetime distribution of products whose degradation rates follow the log-logistic distribution. This is achieved by minimizing the mean squared error of the estimated 100p-th percentile under the constraint that the total experimental cost does not exceed a predetermined budget.
The rest of this paper is organized as follows. Section 2 addresses the optimization problem and describes the degradation model and its assumptions. Section 3 explains the framework for solving the optimization problem. Section 4 discusses the results of a simulation study to illustrate the proposed method. A real example of drug potency data is provided in Section 5. Conclusions of this study are presented in Section 6.

The Optimization Problem
In order to understand the problem, we first define the decision variables. Let n be the number of tested units. For each unit, let f denote the interval between measurements (the inspection frequency) and let l be the number of inspections. Then the termination time is (l − 1)f. In designing the optimal degradation experiment, the experimenter faces the following decision problems:
• How many devices (n) should be tested?
• How to determine the appropriate inspection frequency (f )?
• How many measurements (l) should be taken to collect the data?
• What is the appropriate termination time for the experiment?
Thus, the optimal degradation design problem consists of finding (f, l, n) that minimizes the mean squared error of the estimated 100p-th percentile. However, the determination of (f, l, n) is restricted by the budget of the experiment, say C_b. Hence, the optimal design problem can be formulated as

min_{(f,l,n)} MSE(q̂_p(f, l, n))   subject to   TC(f, l, n) ≤ C_b ,   (1)

where q̂_p(f, l, n) is an estimator of q_p, TC(f, l, n) is the total cost of conducting a degradation experiment, and C_b denotes the budget of the experiment.

Degradation Model and Assumptions
Consider the following degradation model

y_ij = g(t_j, β_i) + ε_ij ,   i = 1, . . ., n ,   j = 1, . . ., l ,   (2)

where y_ij is the observed degradation measurement of the i-th unit at time t_j, g(t_j, β_i) = β_i t_j is the actual (linear) degradation path, β_i > 0 is a random effect that varies from unit to unit, ε_ij is the random measurement error term, n is the number of tested units, and l is the total number of inspections for each unit. Assume that a degradation experiment is conducted under the following conditions:
1. There are n experimental units randomly selected for conducting a degradation experiment in a specific homogeneous environment (e.g. the same temperature, pressure, humidity, etc.).
2. The measurements are taken every f units of time (e.g. f hours or f days), until time t_l = (l − 1)f, where l is the number of inspections.
3. The error term ε_ij follows a normal distribution with mean zero and variance σ²_ε.
4. The random effect β_i is assumed to follow a log-logistic distribution with scale parameter a and shape parameter b (denoted by β_i ∼ Log-logistic(a, b)). Note that log β_i then follows the logistic distribution with location parameter m and scale parameter u, where m = log a and u = 1/b (denoted by log β_i ∼ Logistic(m, u)).

The product's lifetime T is defined as the time when the actual degradation path g crosses the critical level D. Thus, from the model in (2), T can be expressed as

T = D/β_i .   (3)

Since β_i ∼ Log-logistic(a, b), it can easily be shown that T also follows a log-logistic distribution with scale parameter D/a and shape parameter b (i.e. T ∼ Log-logistic(D/a, b)). The 100p-th percentile q_p of this log-logistic distribution can be expressed as

q_p = (D/a) (p/(1 − p))^{1/b} .   (4)

In addition, we can rewrite the 100p-th percentile in terms of the logistic distribution parameters m and u by taking the logarithm on both sides, as follows:

log q_p = log D − m + u log(p/(1 − p)) .   (5)

The Optimal Design

The following subsections illustrate a framework for solving the optimization problem.
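As a minimal illustration, the sampling scheme above can be simulated directly, assuming the linear degradation path g(t_j, β_i) = β_i t_j; the parameter values below are arbitrary and not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_loglogistic(a, b, size, rng):
    """Inverse-CDF sampling from F(x) = 1 / (1 + (x/a)^(-b))."""
    u = rng.uniform(size=size)
    return a * (u / (1.0 - u)) ** (1.0 / b)

def simulate_paths(n, l, f, a, b, sigma_eps, rng):
    """Degradation measurements for n units at times t_j = 0, f, ..., (l-1)f."""
    t = f * np.arange(l)                           # inspection times
    beta = sample_loglogistic(a, b, n, rng)        # unit-specific rates
    eps = rng.normal(0.0, sigma_eps, size=(n, l))  # measurement errors
    y = beta[:, None] * t[None, :] + eps           # model (2): y_ij = beta_i t_j + eps_ij
    return t, beta, y

t, beta, y = simulate_paths(n=5, l=6, f=2.0, a=np.exp(2.0), b=1.0,
                            sigma_eps=np.sqrt(2.5), rng=rng)
print(y.shape)  # (5, 6)
```

Each row of `y` is one unit's observed degradation path, measured every `f` time units.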

Estimation of q_p
Based on the observations y_ij, j = 1, . . ., l, the least squares estimator (LSE) β̂_i of β_i, conditional on β_i, can be obtained by minimizing

S(β_i) = Σ_{j=1}^{l} (y_ij − β_i t_j)² .

Clearly, the LSE of β_i is given by

β̂_i = Σ_{j=1}^{l} t_j y_ij / Σ_{j=1}^{l} t_j² .   (6)

Thus we have

β̂_i = β_i + Σ_{j=1}^{l} t_j ε_ij / Σ_{j=1}^{l} t_j² ,

so that, conditional on β_i, E(β̂_i) = β_i and Var(β̂_i) = σ²_ε / Σ_{j=1}^{l} t_j². In addition, σ²_ε can be estimated as

σ̂²_ε = (1/n) Σ_{i=1}^{n} σ̂²_{ε_i} ,   where   σ̂²_{ε_i} = (1/(l − 1)) Σ_{j=1}^{l} (y_ij − β̂_i t_j)² .   (7)

Now we need an approximation of the unconditional distributions of β̂_i and log β̂_i, in order to use them in the estimation of m and u. By Chebychev's inequality,

Pr(|β̂_i − β_i| ≥ ε | β_i) ≤ σ²_ε / (ε² Σ_{j=1}^{l} t_j²) .

From the above inequality and the definition of convergence in probability, we can show that β̂_i converges in probability to β_i when Σ_{j=1}^{l} t_j² goes to infinity, i.e.
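The least squares step can be sketched as follows, assuming the linear path y_ij = β_i t_j + ε_ij, with β̂_i = Σ_j t_j y_ij / Σ_j t_j² and a per-unit residual variance estimate with l − 1 degrees of freedom pooled across units (the pooling scheme is an assumption for illustration).

```python
import numpy as np

def lse_beta(t, y):
    """t: (l,) inspection times; y: (n, l) degradation measurements.
    Returns the per-unit LSEs beta_hat and a pooled estimate of sigma_eps^2."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    beta_hat = (y * t).sum(axis=1) / (t ** 2).sum()     # eq. for the LSE
    resid = y - beta_hat[:, None] * t[None, :]          # fitted residuals
    sigma2_i = (resid ** 2).sum(axis=1) / (len(t) - 1)  # per-unit variance
    return beta_hat, sigma2_i.mean()                    # pool across units

t = np.array([0.0, 2.0, 4.0, 6.0])
y = np.array([[0.0, 2.1, 3.9, 6.2],
              [0.1, 1.0, 2.1, 2.9]])
beta_hat, sigma2 = lse_beta(t, y)
print(beta_hat)  # approximately [1.02, 0.50]
```

Taking the logarithm of `beta_hat` then gives the data used in the logistic maximum likelihood step.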
Austrian Journal of Statistics, Vol. 41 (2012), No. 4, 311-324

plim β̂_i = β_i   as Σ_{j=1}^{l} t_j² → ∞ .

Moreover, since log β̂_i is a continuous function of β̂_i, we can also conclude that

plim log β̂_i = log β_i .

According to the above result, it can be concluded that the asymptotic distribution of log β̂_i is the logistic distribution with location parameter m and scale parameter u (i.e. log β̂_i ∼ Logistic(m, u)). Now, in order to find the maximum likelihood estimators of m and u, let x_i = log β̂_i, i = 1, . . ., n, with logistic density

f(x_i; m, u) = exp(−(x_i − m)/u) / { u [1 + exp(−(x_i − m)/u)]² } ,

and the log-likelihood function is

ℓ(m, u) = −n log u − Σ_{i=1}^{n} (x_i − m)/u − 2 Σ_{i=1}^{n} log[1 + exp(−(x_i − m)/u)] .

To find the maximum likelihood estimator (MLE) (m̂, û), we take the first partial derivatives of the log-likelihood with respect to m and u, set each derivative to zero, and solve the resulting two equations simultaneously, i.e., with z_i = (x_i − m)/u,

∂ℓ/∂m = (1/u) Σ_{i=1}^{n} [exp(z_i) − 1] / [exp(z_i) + 1] = 0 ,   (11)

∂ℓ/∂u = (1/u) { −n + Σ_{i=1}^{n} z_i [exp(z_i) − 1] / [exp(z_i) + 1] } = 0 .   (12)
The estimators m̂ and û can be obtained numerically by finding the roots of equations (11) and (12). Moreover, by substituting these two estimators into equation (5), an estimator of q_p is

q̂_p = exp( log D − m̂ + û log(p/(1 − p)) ) .
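The numerical step can be sketched with a simple routine that maximizes the logistic log-likelihood of x_i = log β̂_i over (m, u); the crude shrinking pattern search below is only an illustration (the paper solves the score equations, and any reliable optimizer would do).

```python
import numpy as np

def logistic_nll(x, m, u):
    """Negative log-likelihood of Logistic(m, u) at data x (numerically stable)."""
    z = (x - m) / u
    return len(x) * np.log(u) + np.sum(z) + 2.0 * np.sum(np.logaddexp(0.0, -z))

def fit_logistic(x, n_rounds=40):
    """Shrinking pattern search for the MLE (m_hat, u_hat)."""
    x = np.asarray(x, dtype=float)
    m = np.median(x)                                 # robust starting location
    u = max(np.std(x) * np.sqrt(3.0) / np.pi, 1e-6)  # moment-based starting scale
    step_m, step_u = u, u / 2.0
    for _ in range(n_rounds):
        best = (logistic_nll(x, m, u), m, u)
        for dm in (-step_m, 0.0, step_m):
            for du in (-step_u, 0.0, step_u):
                uu = u + du
                if uu <= 0.0:
                    continue
                val = logistic_nll(x, m + dm, uu)
                if val < best[0]:
                    best = (val, m + dm, uu)
        _, m, u = best
        step_m *= 0.7                                # refine the search grid
        step_u *= 0.7
    return m, u

rng = np.random.default_rng(1)
x = rng.logistic(loc=2.0, scale=1.0, size=2000)      # synthetic log-rates
m_hat, u_hat = fit_logistic(x)
print(round(m_hat, 2), round(u_hat, 2))              # close to (2.0, 1.0)
```

With 2000 synthetic observations the recovered (m̂, û) land close to the generating values, as expected from consistency of the MLE.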

The Sampling Distribution of (m̂, û)
Since m̂ and û are the MLEs of m and u, we can use the asymptotic properties of MLEs to find their sampling distribution, that is,

(m̂, û) ∼ AN( (m, u), Σ(m, u) ) ,

where Σ(m, u) denotes the asymptotic variance-covariance matrix of m̂ and û, which can be obtained as the inverse of the Fisher information matrix, Σ(m, u) = I⁻¹(m, u).
The Fisher information matrix I(m, u) of the logistic distribution can be expressed as

I(m, u) = (n/u²) diag( 1/3 , (π² + 3)/9 ) .

Hence, the sampling distribution of m̂ and û is given by

m̂ ∼ AN( m, 3u²/n ) ,   û ∼ AN( u, 9u²/((π² + 3)n) ) ,

with asymptotic covariance zero.
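A short numerical check of the asymptotic variances implied by the logistic Fisher information, I(m, u) = (n/u²) diag(1/3, (π² + 3)/9), a standard result for the logistic location-scale family (the off-diagonal terms vanish by symmetry):

```python
import math

def asymptotic_cov(u, n):
    """Asymptotic variances of (m_hat, u_hat) from inverting I(m, u)."""
    var_m = 3.0 * u ** 2 / n                          # Var(m_hat) = 3 u^2 / n
    var_u = 9.0 * u ** 2 / ((math.pi ** 2 + 3.0) * n) # Var(u_hat) = 9 u^2 / ((pi^2+3) n)
    return var_m, var_u

var_m, var_u = asymptotic_cov(u=1.0, n=100)
print(var_m)  # 0.03
print(round(var_u, 4))
```

Note how both variances shrink at rate 1/n, so the sample size n directly drives the precision of the percentile estimate derived from (m̂, û).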
In a real situation the experiment is only conducted up to time t_l. Thus, the parameters (m, u) should be calibrated by using the conditional expectation technique (Yu and Tseng, 2004). The expressions for the calibrated parameters, denoted (m_l, u_l), are derived in the Appendix.

The Distribution of q̂_p
We can use the calibrated parameters to find the asymptotic distribution of log q̂_p, which in turn leads to the distribution of q̂_p. Since m̂ and û are asymptotically jointly normal, any linear combination of them is asymptotically normal, too. Therefore, from

log q̂_p = log D − m̂ + û log(p/(1 − p)) ,

we obtain log q̂_p ∼ N(µ, v), where

µ = log D − m_l + u_l log(p/(1 − p)) ,
v = Var(m̂) + [log(p/(1 − p))]² Var(û) − 2 log(p/(1 − p)) Cov(m̂, û) .

Now, since log q̂_p ∼ N(µ, v), it follows that q̂_p has a log-normal distribution with location parameter µ and scale parameter v (i.e. q̂_p ∼ Log-normal(µ, v)).
It is easy to show that the mean and variance of q̂_p are, respectively,

E(q̂_p) = e^{µ + v/2}   and   Var(q̂_p) = (e^v − 1) e^{2µ + v} .
Thus, the mean squared error of the estimated 100p-th percentile can be expressed as

MSE(q̂_p(f, l, n)) = Var(q̂_p) + [E(q̂_p) − q_p]² = (e^v − 1) e^{2µ + v} + ( e^{µ + v/2} − q_p )² .
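The variance-plus-squared-bias decomposition of the MSE can be evaluated directly from the log-normal moments; the inputs below are illustrative numbers only.

```python
import math

def mse_qp(mu, v, q_p):
    """MSE of q_hat_p when log(q_hat_p) ~ N(mu, v), so q_hat_p is log-normal:
    MSE = Var(q_hat_p) + bias^2 = (e^v - 1) e^(2 mu + v) + (e^(mu + v/2) - q_p)^2."""
    mean = math.exp(mu + v / 2.0)                     # E(q_hat_p)
    var = (math.exp(v) - 1.0) * math.exp(2.0 * mu + v)  # Var(q_hat_p)
    return var + (mean - q_p) ** 2

# Illustrative inputs: mu = 1, v = 0.04, true percentile q_p = e.
print(round(mse_qp(mu=1.0, v=0.04, q_p=math.e), 4))
```

This is the objective that the design search minimizes over (f, l, n), since v depends on the plan through n and the inspection times.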

The Cost Function TC(f, l, n)
The total cost of a degradation experiment consists of three main parts (Wu and Chang, 2002):
1. Cost of tested units. Let C_it denote the cost of one unit. Then the total sample cost is nC_it.
2. Cost of inspections, which includes the cost of using inspection equipment and material. Let C_mea be the cost of one inspection on one unit; then the total inspection cost is nlC_mea.
3. Cost of operating the experiment, which consists of the salaries of operators and the cost of utilities and depreciation of equipment. It depends on the termination time (l − 1)f. Let C_op be the cost of operation in the interval between two inspections. Then (l − 1)fC_op is the total operation cost.
Thus, the total experimental cost can be obtained by combining the above subtotals in one equation:

TC(f, l, n) = nC_it + nlC_mea + (l − 1)fC_op .

According to the above results, we can rewrite the optimization problem (1) as

min_{(f,l,n)} MSE(q̂_p(f, l, n))   subject to   nC_it + nlC_mea + (l − 1)fC_op ≤ C_b .

It is obvious that this optimization problem cannot be solved in closed form, since both the objective function and the constraint are nonlinear functions of f, l, and n. Therefore, in order to find the optimal solution, we use the algorithm proposed by Wu and Chang (2002).
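The structure of the constrained search can be illustrated with an exhaustive grid over (f, l, n); this sketch does not reproduce the Wu-Chang algorithm, and the placeholder MSE function used in the demo (decreasing in n and in the test horizon) is an assumption for illustration only.

```python
def total_cost(f, l, n, C_it, C_op, C_mea):
    """TC(f, l, n) = n*C_it + n*l*C_mea + (l-1)*f*C_op."""
    return n * C_it + n * l * C_mea + (l - 1) * f * C_op

def optimal_design(mse, C_b, C_it, C_op, C_mea,
                   f_grid=range(1, 25), l_grid=range(2, 25), n_grid=range(2, 50)):
    """Enumerate feasible (f, l, n) under the budget and keep the minimum-MSE plan."""
    best = None
    for f in f_grid:
        for l in l_grid:
            for n in n_grid:
                if total_cost(f, l, n, C_it, C_op, C_mea) > C_b:
                    continue  # infeasible: exceeds the budget
                val = mse(f, l, n)
                if best is None or val < best[0]:
                    best = (val, f, l, n)
    return best

def demo_mse(f, l, n):
    # Placeholder precision criterion (an assumption, not the paper's MSE).
    return 1.0 / n + 1.0 / (1.0 + (l - 1) * f)

best = optimal_design(demo_mse, C_b=1000, C_it=30, C_op=18, C_mea=8)
print(best[1:])  # (f*, l*, n*) under the placeholder MSE
```

Plugging in the actual MSE(q̂_p(f, l, n)) expression in place of `demo_mse` turns this into the design search described above.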

Simulation Study
A simulation study is performed to investigate and analyze the proposed method. Assume that (m, u, σ²_ε) = (2.0, 1.0, 2.5), set p = 0.1 and D = 100, and apply the algorithm proposed by Wu and Chang (2002) to obtain the optimal setting of the decision variables and the corresponding minimum MSE(q̂_p(f, l, n)).
Changes in the values of C_b, C_op, C_mea, and C_it affect the determination of the optimal degradation design. Thus, we examined the sensitivity of the decision variables (f, l, n) to changes in the cost elements. The results are presented in Table 1. From these results we can draw the following conclusions:
1. The sample size n is not affected by changes in C_op and C_mea, but it is highly affected by changes in C_it and C_b. In addition, n increases with C_b and decreases with C_it.
2. The number of inspections l is slightly affected by changes in C_op and highly affected by changes in the values of C_b, C_mea, and C_it. It is obvious that l decreases with both C_b and C_mea.

Application to Real Data
One of the formally documented procedures in degradation analysis concerns the determination of the shelf lives of drugs (Chao, 1999). All drugs are specifically labeled to be used before a certain date. How are these expiration dates determined?
Developing a new drug involves performing a stability study to determine the drug's shelf life (or lifetime). The expiration dating period or shelf life of a drug is defined as the length of time it takes for the drug's potency to decrease to a particular level of its original potency. Potency refers to the amount or dose of a drug required to produce a given effect.
In the pharmaceutical industry, drug products are usually manufactured in different batches. The Food and Drug Administration (FDA) requires testing of at least three batches, preferably more, depending on economic considerations.

Estimation of σ²_ε, m, and u
Based on the observations (t_j, w_ij), j = 1, . . ., 4, and by using equations (6) and (7), the least squares estimators β̂_i and σ̂²_{ε_i} can be obtained, respectively. Figure 3 shows the logistic probability plot for log β̂_i, i = 1, . . ., 24. The linear pattern of the plot indicates that it is reasonable to assume that β_i follows the log-logistic distribution.

Optimal Test Plan Based on the Reduction of Drug's Potency Data
As mentioned before, the shelf life of a drug is the time it takes for the drug's potency to decrease to 95% of its original stated potency, i.e. it is the time when the reduction in potency reaches 5%. Thus, in this case the critical degradation level is D = 5. Moreover, assume that the cost elements needed for conducting this test are as follows: C_it = 30, C_op = 18, C_mea = 8, and C_b = 1000. Finally, take p = 0.1 and let time be measured in months. The optimal setting of the decision variables and the corresponding minimum MSE(q̂_p(f, l, n)) are listed in Table 2: the optimal inspection interval is f* = 8 months, the optimal number of batches is n* = 6, the optimal number of inspections is l* = 5, and the optimal termination time of the test is 32 months. In other words, in order to estimate the 10th percentile of the drug's lifetime distribution, that is, the time at which 10% of the drug batches will have lost 5% of their original potency and are considered expired, we need to test 6 batches over 32 months, measuring their drug potency every 8 months.
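As a quick arithmetic check, the reported optimal plan indeed respects the stated budget under the cost function TC(f, l, n) = nC_it + nlC_mea + (l − 1)fC_op:

```python
# Optimal plan and cost elements as reported above.
f, l, n = 8, 5, 6
C_it, C_op, C_mea, C_b = 30, 18, 8, 1000

# TC = 6*30 + 6*5*8 + 4*8*18 = 180 + 240 + 576 = 996 <= 1000.
tc = n * C_it + n * l * C_mea + (l - 1) * f * C_op
print(tc, tc <= C_b)  # 996 True
```

The plan uses 996 of the available 1000 cost units, so the budget constraint is nearly binding at the optimum.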

Concluding Remarks
In this paper, we obtained the optimal design of a degradation experiment where the degradation rate follows the log-logistic distribution. In particular, we derived the optimal setting of the decision variables (the sample size, the inspection frequency, and the termination time) by using the criterion of minimizing the mean squared error of the estimated 100p-th percentile of the product's lifetime, subject to the constraint that the total cost does not exceed a predetermined budget. Some concluding remarks are given as follows:
1. Selecting the optimal combination of the sample size, the inspection frequency, and the number of inspections at minimal cost is the key to optimal degradation experiment design.
2. Among the decision variables f, l, and n, the sample size n has the most important influence on the value of the mean squared error. Therefore, if we wish to reduce the mean squared error, we should increase the number of test units. In other words, if we want to increase the precision, we have to raise the budget of the experiment or reduce the cost of tested units.
3. The budget of the experiment has a strong influence on most of the decision variables. Thus, if we raise the budget of the experiment, the number of units will increase, but the number of inspections and the mean squared error will decrease.

Figure 2: Plot of the reduction of drug potency over time t in months.

Table 1: Optimal values of f, l, n, and the termination time (l − 1)f under various values of the budget C_b, the operation cost C_op, the inspection cost C_mea, and the unit cost C_it; for each setting the table reports f*, l*, n*, (l* − 1)f*, and MSE(q̂_p(f*, l*, n*)).

3. The inspection frequency f is sensitive to all cost elements. It generally increases with C_b and C_mea, but decreases with C_op.
4. The termination time is affected by changes in all cost elements, and it decreases with C_op and C_it.
5. The value of the mean squared error is highly affected by the sample size.

Table 2: Optimal setting of the decision variables for the drug's potency data.