Adaptive Allocation Designs for Normal and Binary Treatment Responses

A class of adaptive designs is formulated in two stages for clinical trials to favour the better performing treatment for further allocation in an efficient way. The first stage randomizes subjects to each treatment arm with equal probability and performs a test of equality of treatment effects. The resulting p value and the available estimate of a treatment difference measure are then used to assign the incoming second stage subjects. Considering binary and normal responses, several exact and asymptotic properties of the proposed allocation are thoroughly examined and compared with those of existing allocation designs.


Introduction
The primary objective of any clinical trial is to determine the efficacy of the competing treatments on the basis of the responses of the participants of the trial. Before the trial is actually conducted, knowledge about the performance of the treatments is absent. In the absence of any information about treatment superiority, the usual practice is to consider each treatment equally important and assign an equal number of subjects to each treatment arm. But such an allocation strategy is often criticised because the best and the worst treatments receive the same number of subjects. Clinical trials involve human beings, and therefore the ethical goal in such trials is to provide the best possible care for the subjects. Thus, adaptively designed allocation (in particular, response adaptive allocation) is the natural choice for its ability to skew the assignment towards the treatment doing better using the available data. Two stage allocation is one of the popular data dependent designs, where the first stage data play a crucial role in determining the sampling fraction of the second stage. The basic goal is naturally to assign a greater fraction of subjects to the treatment doing better. One of the simplest, and perhaps the earliest, instances of two stage allocation in a clinical trial can be found in Colton (1963), where, for the assignment of n recruited subjects, 2m subjects, m on each treatment arm, are used in the first stage and the treatment doing better is selected for the assignment of the remaining (n − 2m) subjects. In a further work, Coad (1992) studied the inferential properties of the procedure, such as the exact bias and variance of the estimated treatment difference. Specifically, assuming that a higher response is desirable, Coad considered the treatment producing the higher observed mean response in the first stage as better performing and suggested assigning the second stage subjects exclusively to this treatment. However, treatment assignments for the second stage subjects then become perfectly predictable, and the procedure hence lacks randomization (Atkinson and Biswas (2014), Antognini and Giovagnoli (2015)). Moreover, the better performing treatment after the first stage is decided through an examination of the sign of the estimated treatment difference, ignoring the further information contained in its magnitude. Consequently, large as well as small positive values of the difference are given the same importance. Bandyopadhyay and Bhattacharya (2006) and Bandyopadhyay and Bhattacharya (2007) identified these issues and, incorporating randomization in a convenient way, developed an ethical allocation after sequential determination of the first stage allocation fraction. In further works, Bandyopadhyay, Biswas, and Bhattacharya (2009) and Bandyopadhyay, Biswas, and Bhattacharya (2010) used optimum design theory to develop ethical allocations for survival outcomes incorporating randomization in the second stage, where a prefixed allocation function is estimated on the basis of the first stage data and/or the incoming patients' covariates. In a recent work, Bhattacharya and Shome (2015) developed a two stage allocation procedure using the sufficient statistics based on the first stage data in a convenient way. To be specific, they used a decreasing function of the p value of a test of equality of treatment effects based on the first stage data to set the allocation probability of each subject of the second stage. The second stage allocation was fully randomized, but the randomization probability remained the same throughout the second stage. The present work develops an adaptive allocation procedure in two stages where, apart from being a function of the p value of a test of equality of treatment effects based on the first stage data, the second stage randomization probabilities are continuously updated after each response of the second stage assignments. The allocation procedure together with the related properties are
discussed in Section 2, with emphasis on the asymptotic properties. Considering binary and continuous treatment outcomes, the performance of the proposed procedure is compared with some relevant competitors in Section 3. Redesigning a real clinical trial using the proposed methodology is also presented in Section 3. Finally, Section 4 concludes with a discussion of a few relevant issues.

The general allocation function
Consider a clinical trial with two competing treatments (say, treatments A and B) and sequentially arriving subjects. A total of n subjects are to be assigned in two stages to accomplish certain objectives. The first stage assigns a total of 2m subjects to treatments A and B with equal probability (i.e. 1/2) to get an initial idea about the treatment performances. We suggest assigning the (i+1)th subject of the second stage (i ≥ 0) to treatment A with probability π_mi, where π_mi depends on the first stage data together with the allocation and response data of the i assigned subjects of the second stage. For a worthwhile determination of π_mi, we assume that a higher response is desirable and consider a statistic T_m based on the first stage data for an evaluation of the null hypothesis of equality of treatment effects, with p_m as the corresponding p value. Also suppose that ∆̂_mi, i ≥ 1, is a strongly consistent sequence of estimates of a treatment effect measure ∆ (to be specified later), based on the first stage data and the allocation and response information available so far. Use of the currently available estimate of the treatment effect for the allocation of subjects was considered in Bandyopadhyay and Biswas (2001) with the goal of favouring the better performing treatment. But assigning more subjects to the better performing treatment makes the allocation unbalanced and hence causes, in general, a significant loss in power. Therefore, as a trade off between favouring the better performing treatment and the loss of statistical precision, we suggest using the p value based on the first stage together with the available estimate of the treatment effect measure. Thus a sensible choice of π_mi should be a function of both p_m and ∆̂_mi, that is, π_mi = φ(p_m, ∆̂_mi) for some function φ. The most important task is, therefore, to suggest meaningful choices of φ following a thorough investigation of their properties. First of all, we note that the lower the value of p_m, the stronger is the
evidence against H_0, and hence the more important the second stage data become for further allocation. Thus it seems reasonable to assume that π_mi is non-increasing in p_m for fixed ∆̂_mi. Again, for any i, ∆̂_m,i+1 ≥ ∆̂_m,i indicates higher superiority of treatment A at stage i+1 than at stage i, and hence an increased probability of assignment to treatment A at stage i+1 is preferable. This naturally suggests that a desirable choice of φ must be non-decreasing in the estimates of the treatment effect measure for fixed p_m. However, the treatment effect measure is, in general, unbounded, and hence we use a reasonable monotonic and bounded function of the estimated treatment effect. Since we need to maintain a reasonable ordering in the allocation probability, the most obvious choice of such a function is the distribution function of a random variable. Thus, we need a function φ(x, y) ∈ [0, 1], defined for x ∈ [0, 1] and y ∈ [0, 1], such that φ is non-increasing in x for fixed y and non-decreasing in y for fixed x. We provide below three such choices of φ considering different requirements. For a meaningful development, assume that µ_k is the effect measuring parameter for treatment k, k = A, B, and consider testing H_0: µ_A = µ_B against the one sided alternative H: µ_A > µ_B. If we denote the p value related to testing H_0 against H by p_m, then an allocation probability for an incoming second stage patient could be

φ(p_m, ∆̂_mi) = 1 − p_m. (1)

The above choice of φ is decreasing in p_m and provides the allocation design of Bhattacharya and Shome (2015), where the allocation probability is not updated after the first stage and hence is independent of any ∆̂_m,i.
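As a concrete illustration, the first-stage test and choice (1) can be sketched in a few lines of Python. The function names are ours, and a standard-normal approximation to the null distribution of Welch's statistic is assumed for simplicity:

```python
import math

def welch_p_value(xa, xb):
    """One-sided p value for H0: mu_A = mu_B against H: mu_A > mu_B,
    computed from Welch's t statistic on the first-stage responses.
    A standard-normal approximation to the null distribution is used
    here for simplicity."""
    ma, mb = len(xa), len(xb)
    mean_a = sum(xa) / ma
    mean_b = sum(xb) / mb
    var_a = sum((x - mean_a) ** 2 for x in xa) / (ma - 1)
    var_b = sum((x - mean_b) ** 2 for x in xb) / (mb - 1)
    t = (mean_a - mean_b) / math.sqrt(var_a / ma + var_b / mb)
    return 1 - 0.5 * (1 + math.erf(t / math.sqrt(2)))  # P(Z >= t)

def phi_choice_1(p_m):
    """Choice (1): probability of assigning treatment A, fixed at
    1 - p_m for every second-stage subject."""
    return 1 - p_m
```

For instance, a small first-stage p value such as 0.05 yields a second-stage assignment probability of 0.95 for treatment A.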
With an aim to provide an allocation depending on both the p value p_m and the sequence of updated estimates ∆̂_m,i, i ≥ 1, we suggest the following allocation probability, where G is a distribution function and w_1m is a function of p_m controlling the influence of the first stage outcome on the second stage allocation. It is easy to observe that a constant w_1m outweighs the importance of the first stage, and hence a suitable function of p_m is appropriate. The choice (2) depends on ∆̂_mi and meets all the necessary requirements provided w_1m is an increasing function of p_m. However, the authors' choice for w_1m is (p_m + 1)/2, a linear function of p_m with positive slope. Finally, considering G as a distribution function and w_2m, a convenient function of p_m, we suggest the following second stage allocation probability. Now, for a meaningful determination of w_2m, consider testing H_0: µ_A = µ_B against the one sided alternative H′: µ_A < µ_B. If p_m is the p value, defined earlier, associated with testing H_0 against H: µ_A > µ_B, then 1 − p_m can be regarded as the p value based on the same data for testing H_0 against H′: µ_A < µ_B. Thus, for a chosen significance level 100α%, a value of p_m less than α indicates strong support for µ_A > µ_B, whereas a value of 1 − p_m less than α gives strong support in favour of µ_A < µ_B. Combining, we observe that the data give strong support in favour of µ_A > µ_B or µ_A < µ_B according as p_m < α or p_m > 1 − α; otherwise the data give evidence in favour of µ_A = µ_B, that is, equality of treatment effects. Therefore, for the assignment to treatment A, we must put a higher weight for p_m < α and a lower weight for p_m > 1 − α. Moreover, for p_m lying between α and 1 − α, equal weights for the two treatments are reasonable. This naturally suggests a step function choice for w_2m. But such a choice does not always ensure unpredictability of treatment assignments and hence leads to selection bias (Rosenberger and Lachin (2002)). Thus, considering the fact that the
lower values of p_m and 1 − p_m indicate stronger evidence against the null hypothesis, we suggest the following choice (3) and use it extensively for further development.
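The three-level weighting described above can be sketched as follows. The endpoint weights `high` and `low` are illustrative assumptions (the text replaces this step function by a smoother choice precisely because its determinism invites selection bias):

```python
def step_weight(p_m, alpha=0.10, high=1.0, low=0.0):
    """Three-level weight for treatment A implied by the discussion:
    strong evidence for mu_A > mu_B (p_m < alpha) gets the high weight,
    strong evidence for mu_A < mu_B (p_m > 1 - alpha) gets the low
    weight, and the middle region gets equal weight 1/2.  The values
    of `high` and `low` are illustrative, not taken from the paper."""
    if p_m < alpha:
        return high
    if p_m > 1 - alpha:
        return low
    return 0.5
```

With the extreme weights at 0 and 1, the assignment is fully predictable whenever the first-stage evidence is strong, which is exactly the objection raised in the text.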

Response driven choices of G and ∆
Now we come to the most crucial part of the development, that is, deciding an appropriate function G together with the form of the treatment effect measure ∆. Although any distribution function could be used as G, such a choice induces arbitrariness. Consequently, we closely examine the meaning of G(∆̂_m,i) and observe that G(∆̂_m,i) is nothing but the estimated proportion of skewing to treatment A for the next (i.e. (i+1)th) subject, that is, a measure of the magnitude by which treatment A is better than treatment B. If X_k denotes the continuous outcome measure for treatment k, then the natural amount of skewing is simply P(X_A > X_B), and hence an estimate of it can be used as G(∆̂_m,i). For example, if the outcome for treatment k is normal with mean µ_k and variance σ_k², then P(X_A > X_B) = Φ((µ_A − µ_B)/√(σ_A² + σ_B²)), which suggests taking G = Φ, the cumulative distribution function of a standard normal variable, and ∆ = (µ_A − µ_B)/√(σ_A² + σ_B²). As another example, we get G(x) = x/(1+x), x ≥ 0, and ∆ = µ_A/µ_B when the response to treatment k is exponential with mean µ_k. It is interesting to note that if the treatment effect measure is a difference measure (e.g. normal responses), G is symmetric about the origin, but for a ratio based treatment effect measure (e.g. exponential responses), the corresponding choice of G has its median at unity. Motivated by the above discussion, one may be interested in using either P(X_A > X_B) or P(X_A ≥ X_B) as the skewing proportion when X_k has a Bernoulli(µ_k) distribution. For independent Bernoulli responses X_A and X_B, it is easy to obtain P(X_A > X_B) = µ_A(1 − µ_B) and P(X_A ≥ X_B) = µ_A + (1 − µ_A)(1 − µ_B). Now a close examination reveals that P(X_A > X_B) can be less than 50% even for µ_A > µ_B, while P(X_A ≥ X_B) can exceed 50% even for µ_A < µ_B; in particular, neither equals 50% under µ_A = µ_B. Consequently, neither P(X_A > X_B) nor P(X_A ≥ X_B) qualifies as a reasonable skewing proportion. However, as an alternative, if we consider the average (1/2){P(X_A > X_B) + P(X_A ≥ X_B)}, then we get the simplified expression 1/2 + (µ_A − µ_B)/2.
Naturally, the simplified expression can be looked upon as G(∆), where ∆ = µ_A − µ_B and G is the distribution function of a uniform random variable over [−1, 1]. Such an expression satisfies the requirements of a skewing proportion and hence can be used as the skewing proportion for binary responses. It is interesting to note that the resulting expression is nothing but the ridit (Agresti (2010)) P(X_A > X_B) + (1/2)P(X_A = X_B) based on binary responses. Thus, depending on the response distribution, we get unique choices of G and the form of the treatment effect measure ∆.
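A minimal sketch of these response-driven choices of G, with plug-in arguments supplied by the caller (function names are ours):

```python
import math

def normal_skewing(mean_a, mean_b, var_a, var_b):
    """Normal arms: P(X_A > X_B) = Phi((mu_A - mu_B)/sqrt(s_A^2 + s_B^2)),
    i.e. G = Phi applied to the standardized difference."""
    d = (mean_a - mean_b) / math.sqrt(var_a + var_b)
    return 0.5 * (1 + math.erf(d / math.sqrt(2)))

def exponential_skewing(mu_a, mu_b):
    """Exponential arms with means mu_a, mu_b: P(X_A > X_B) =
    mu_a/(mu_a + mu_b), i.e. G(x) = x/(1 + x) applied to the ratio."""
    ratio = mu_a / mu_b
    return ratio / (1 + ratio)

def binary_skewing(mu_a, mu_b):
    """Binary arms: average of P(X_A > X_B) = mu_a(1 - mu_b) and
    P(X_A >= X_B) = mu_a + (1 - mu_a)(1 - mu_b), which simplifies to
    1/2 + (mu_a - mu_b)/2, i.e. G uniform on [-1, 1] applied to the
    difference -- the ridit of the text."""
    p_gt = mu_a * (1 - mu_b)
    p_ge = mu_a + (1 - mu_a) * (1 - mu_b)
    return 0.5 * (p_gt + p_ge)
```

At µ_A = µ_B the binary average equals exactly 1/2, which is what qualifies it as a skewing proportion while the two tail probabilities individually fail.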

The allocation in practice
With the allocation indicator δ_i (= 1 if treatment A is assigned to the ith subject, and = 0 otherwise), the allocation design can be described by the conditional probability, where F_i is the information contained in the response and allocation data up to and including those of the ith patient, i ≥ 1, and φ, p_m and ∆̂_m,i are as defined before. If n assignments are made following the above allocation procedure, then the observed number of subjects assigned to treatment A is simply N_An = Σ_{i=1}^{n} δ_i = n − N_Bn, where the δ_i are i.i.d. Bernoulli variables with success probability 1/2 for i ≤ 2m. But exact properties of N_An/n are not easy to express in tractable forms, and therefore we proceed to examine the properties in large samples. For such an assessment, we assume m = [nθ], 0 < θ < 1/2, where [x] denotes the greatest integer in x, so that m → ∞ as n → ∞. The limiting value of the observed allocation proportion to treatment A, N_An/n, can be found in Result A.2 of the Appendix.
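A hedged simulation sketch of the two-stage scheme, using the non-adaptive choice (1) for the second stage for concreteness; the adaptive choices would update the probability after each second-stage response, and the normal-approximation p value is an assumption of this sketch:

```python
import math
import random

def two_stage_trial(n, m, mu_a, mu_b, sd_a, sd_b, seed=0):
    """Two-stage allocation sketch: 2m subjects split equally in stage
    one, then each of the remaining n - 2m subjects is assigned to
    treatment A with probability 1 - p_m (choice (1)).  Returns the
    number of subjects on A and the first-stage p value."""
    rng = random.Random(seed)
    xa = [rng.gauss(mu_a, sd_a) for _ in range(m)]
    xb = [rng.gauss(mu_b, sd_b) for _ in range(m)]
    mean_a, mean_b = sum(xa) / m, sum(xb) / m
    var_a = sum((x - mean_a) ** 2 for x in xa) / (m - 1)
    var_b = sum((x - mean_b) ** 2 for x in xb) / (m - 1)
    # Welch statistic with a normal approximation to its null law
    t = (mean_a - mean_b) / math.sqrt(var_a / m + var_b / m)
    p_m = 1 - 0.5 * (1 + math.erf(t / math.sqrt(2)))
    n_a = m                              # stage-one assignments to A
    for _ in range(n - 2 * m):
        if rng.random() < 1 - p_m:       # delta_i = 1: assign A
            n_a += 1
    return n_a, p_m
```

With µ_A well above µ_B the second-stage probability approaches 1, so N_An/n approaches 1 − θ, in line with Result A.2 of the Appendix.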

Exploring the performance measures
Since a reasonable allocation design aims to balance the ethical and statistical needs, it becomes important to examine both the ethical and the inferential aspects. The use of the adaptive probability in the second stage is intended to assign more subjects to the better treatment; skewing through G(∆) moves the allocation away from balance, and hence a loss in statistical power is expected. On the other hand, equal randomization at the first stage reduces the power loss. Since the proposed allocation designs are conducted in two stages comprising equal and adaptive randomization, a balance between ethics and statistical precision can be anticipated. For an assessment of the proposed designs from both viewpoints (i.e. ethics and precision), we use the following measures:
• the expected number of allocations to treatment A, E(N_An) (denoted by EN_A), along with its standard deviation,
• the power of a test of the equality of treatment effects, and
• the error rate.
Considering normal and binary treatment responses, the above measures are computed for the proposed and competing allocation designs. In subsequent parts of this section, we refer to the designs (1), (2) and (3) as D3, D1 and D2, respectively.

Normal responses
Suppose the response of patients to treatment k has a normal distribution with mean µ_k and variance σ_k², k = A, B. Then the hypothesis of equality of treatment effects can be expressed as H_0: µ_A = µ_B. If T_m is a statistic based on the first stage data relevant to the testing problem, with T_m^obs as its observed value, then p_m can be expressed as P(T_m ≥ T_m^obs | H_0). For normal responses with unknown and unequal variances, the natural choice of T_m is Welch's t statistic (Lehmann and Romano (2005)), defined by T_m = (X̄_Am − X̄_Bm)/√(s²_Am/m + s²_Bm/m), where X̄_km (s²_km) denotes the mean (variance) of the observed responses for the first stage patients assigned to treatment k. As indicated earlier, we have the choices G = Φ and ∆̂_mi = (μ̂_Ai − μ̂_Bi)/√(σ̂²_Ai + σ̂²_Bi), where μ̂_ki (σ̂²_ki) is the estimated value of µ_k (σ_k²) after i responses are observed. For a meaningful comparative evaluation of the performance, we consider the following competitors.
• A Two Stage Competitor: As a two stage competitor, we consider the design of Bhattacharya and Shome (2015), where m subjects are assigned to each treatment arm in the first stage and, based on these data, the p value p_m of a relevant test of treatment equality is calculated. Each incoming subject of the second stage is then assigned to treatment A with probability 1 − p_m. Thus the procedure corresponds to the choice (1). Suppose N_An denotes the observed number of subjects assigned to treatment A out of n assignments. Then, under (4), as m → ∞, the observed allocation proportion to treatment A (i.e. N_An/n) approaches the almost sure limit 1 − θ or θ according as µ_A − µ_B > 0 or µ_A − µ_B < 0 (see Result A.2 of the Appendix and Bhattacharya and Shome (2015) for details). However, under µ_A = µ_B, p_m has a uniform distribution over (0, 1), and hence in such a case N_An/n converges in distribution to a uniform random variable over the interval (θ, 1 − θ). Thus the limiting proportion of allocation depends on the sign of the treatment difference µ_A − µ_B, except under equality.
• Neyman Allocation: We also consider a randomized version of Neyman allocation assuming (σ_A, σ_B) unknown. The allocation probability to treatment k for the (i+1)th subject is σ̂_ki/(σ̂_Ai + σ̂_Bi), k = A, B, where σ̂_ki is the estimate (preferably, the maximum likelihood estimate) of σ_k based on the first i allocation and response data (Melfi, Page, and Geraldes (2001)). The use of sequentially updated maximum likelihood estimators of σ_k, k = A, B, results in a sequential maximum likelihood (SML) procedure (Hu and Rosenberger (2006)) targeting Neyman allocation. It is well known that, under normality of responses, if σ_k, k = A, B, are known and the trial size is kept fixed (Chapter 2, Rosenberger and Lachin (2002)), such an allocation maximizes the power of the Wald test for testing the equality of treatment effects. Thus Neyman allocation is optimal in such a situation. We, therefore, use this randomized version as a standard against which the inferential aspects of the designs can be assessed.
• Bandyopadhyay and Biswas (2001) Allocation: For a fair comparison, we also add the allocation of Bandyopadhyay and Biswas (2001) (referred to as BB) to the list of competitors. For such an allocation, the (i+1)th subject is allocated to treatment A with probability Φ((μ̂_Ai − μ̂_Bi)/T), where T (> 0) is a tuning parameter and μ̂_ki is the estimate of µ_k based on the first i allocation and response data. Naturally, the allocation probabilities are updated on the basis of the available estimates of the treatment effects. Now, for a valid assessment, we fix n = 120 and consider testing H_0: µ_A = µ_B against H_1: µ_A > µ_B. Specifically, we take µ_A = µ_B = 1 under the null hypothesis and vary µ_A at regular intervals from 1.0 under the alternative. Naturally, treatment A is the better treatment for the assumed configuration. For each configuration of (µ_A, µ_B), we conduct a simulation study with 25,000 repetitions and compute the expected number of allocations to the better treatment (i.e. treatment A) and the power, considering different choices of m and different combinations of (σ_A, σ_B). The test statistic for the power computation is Welch's t statistic with unequal sample sizes, defined by T_n = (X̄_An − X̄_Bn)/√(s²_An/N_An + s²_Bn/N_Bn), where X̄_kn and s²_kn are calculated on the basis of the responses from the two stages. Again, response adaptive randomization, in general, affects the type I error rate, and such a rate depends on the convergence rate of the allocation proportions (Yi and Wang (2015)). Since Welch's test is only approximate and asymptotically standard normal under the null hypothesis, we also report the attained significance levels (denoted by Error rate), where the nominal significance level is set at 5%. All these are provided in Table 1. Boldface figures indicate the powers for the proposed allocation, and those within [ ] are the error rates.
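The assignment probabilities of the two response adaptive competitors can be sketched as follows, with plug-in estimates passed in by the caller (function names are ours):

```python
import math

def bb_probability(mean_a_hat, mean_b_hat, T=1.0):
    """Bandyopadhyay and Biswas (2001): assign the (i+1)th subject to
    treatment A with probability Phi((muhat_A - muhat_B)/T), where
    T > 0 is the tuning parameter."""
    z = (mean_a_hat - mean_b_hat) / T
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal cdf

def neyman_probability(sd_a_hat, sd_b_hat):
    """Randomized Neyman target for treatment A:
    sigmahat_A / (sigmahat_A + sigmahat_B)."""
    return sd_a_hat / (sd_a_hat + sd_b_hat)
```

As the text notes, the Neyman probability depends only on the estimated variabilities, so it favours the noisier arm regardless of which treatment is performing better.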
Remarks. Before analyzing the numerical figures of Table 1, we provide a comparative picture of the second stage allocation probabilities for the two stage designs D1, D2 and D3. For the comparison, we fix µ_A = 0.25, µ_B = 0, α = 0.10, consider m = 25 and m = 30, and vary (σ_A, σ_B). For each configuration, the second stage allocation probabilities (φ) are simulated for each allocation design at the sample sizes n = 2m+1, 2m+2, ... and plotted against the sample size n. All these can be found in Figures 1 and 2. We observe that φ(1) remains fixed at a higher value, while φ(2) and φ(3) vary steadily at some lower values. This is expected, as φ(1) does not take into account the second stage information and hence converges to a deterministic design. On the other hand, φ(2) and φ(3) consider the second stage information in addition to the first stage information and vary reasonably. However, for all the designs, the second stage allocation probabilities become stable, of course with varying rates, as the sample size increases.

Now we look at the performance measures computed in Table 1. As indicated earlier, the performance assessment includes both the ethical (i.e. assigning a larger fraction to the better treatment) and inferential perspectives (i.e. detecting a departure in treatment effectiveness with high probability). First of all, consider the allocation designs D1, D2 and D3. For these designs, the EN_A figures increase with increasing µ_A − µ_B irrespective of the choice of m and (σ_A, σ_B). In every situation, a higher number of patients is assigned to the better treatment (i.e. treatment A, in this case). But the EN_A figures corresponding to m = 25 are, in general, lower than those with m = 15. This is natural, because a large m forces most of the assignments through equal randomization and hence reduces the ethical allocation in the second stage. On the other hand, a lower m allows more allocation in the second stage, which in turn ensures more assignments to the treatment doing better. Next, we consider the response adaptive competitors, the BB and Neyman allocations. Being purely response adaptive procedures, the BB and Neyman allocation procedures do not depend on m. But to start the allocation we require initial estimates; hence, we assign two subjects to each treatment arm, estimate the parameters, and start adaptation from the fifth patient onwards. The allocation figures corresponding to BB are considerably lower than those corresponding to D1, D2 and D3. Neyman allocation, however, assigns more subjects to the treatment with the higher response variability without regard to treatment effectiveness, and the expected allocation numbers remain stable at a value close to nσ_A/(σ_A + σ_B). Moreover, apart from minor exceptions, Neyman allocation provides the highest statistical power. It is well known that, for known but unequal response variabilities, Neyman allocation minimizes the trial size for fixed statistical precision (i.e. power), and the power figures of Table 1 are in good agreement
with this even for unknown variabilities. However, the BB allocation is seen to be less sensitive to small departures in treatment effectiveness and hence provides a lower number of allocations to the better treatment, together with higher statistical power, as compared to D1, D2 and D3. This is consistent with the fact that unbalanced allocation, in general, causes a loss in power. Thus the proposed allocations (i.e. D1 and D2) assign a larger number of subjects to the better treatment with a little loss in statistical power and hence meet the specifications of a sensible allocation design. Again, from the figures of Table 1, we find that the attained significance level for all the allocation designs is slightly higher than the nominal level of 5%. Apart from minor exceptions, the difference from the nominal level is higher for smaller m than for the higher choices of m. Therefore, adaptive allocation, in general, maintains a slightly higher significance level than the nominal one for moderate sample sizes. Next, we investigate the behaviour of the power for each allocation design as the sample size increases. We set θ = 1/3, 1/4 and compute the powers for testing H_0: µ_A = µ_B = 1 against H_1: µ_A > 1, µ_B = 1 for each allocation design. For brevity, we provide plots of the power considering different combinations of (σ_A, σ_B). The relevant plots are found in Figures 3 and 4. It is easy to observe that, for all the allocation designs, the power figures increase with the sample size. However, the rate of increase is not the same for all the allocation designs. Specifically, if we consider Figure 3, we find that the convergence rates of the power are more or less the same for σ_A ≥ σ_B. But for σ_A < σ_B, D2 provides the highest and D3 the lowest figures. The powers for D3 are also the lowest for σ_A = σ_B. This is natural, as D3 provides more skewing towards the better treatment and suffers a significant loss in power. Figure 4 mimics the same phenomena, except that the power figures are lower than those for
θ = 1/3, which is expected, as a lower θ implies more allocation to the better treatment and hence results in a loss of power. After considering the comparative picture discussed above, it is natural to set out some recommendations for the practitioner. For all the proposed allocation designs and the competitors, the performance varies considerably with the configuration of (σ_A, σ_B). In particular, the performance measures are encouraging for σ_A ≥ σ_B with respect to all the criteria. However, if we consider all the combinations of (σ_A, σ_B), the allocation design D2 is seen to produce

Binary responses
Suppose the response variable for treatment k is binary with success probability µ_k, so that the hypothesis of equality of treatment effects is simply H_0: µ_A = µ_B. We consider the estimated difference measure T_m = √m(S_Am/m − S_Bm/m) as the relevant statistic, where S_km is the number of successes under treatment k in the first stage. Now, for continuous responses, P(T_m ≥ T_m^obs | H_0) and P(T_m > T_m^obs | H_0) are the same, and hence either of these can be used as the p value. But for discrete distributions the difference between the two is consequential, and hence, as a compromise, we use their average, called the mid-p value (see Agresti (2013) for details), for our purpose. That is, we suggest using (1/2)P(T_m ≥ T_m^obs | H_0) + (1/2)P(T_m > T_m^obs | H_0), the mid-p value based on the first stage response and allocation data. Now, under the null hypothesis, S_Am and S_Bm are independently and identically distributed as Binomial(m, µ), where µ is the common but unspecified success probability. Writing z = √m T_m^obs, we find that P(T_m ≥ T_m^obs | H_0) is the same as a_z = P(S_Am − S_Bm ≥ z | H_0), with z an integer in {−m, ..., m}. Now a_z involves the unknown µ, and hence we suggest replacing µ by its maximum likelihood estimate, that is, the pooled success proportion of the first stage. We use the estimated value of (1/2)a_z + (1/2)a_{z+1} as our p_m for binary responses. We now consider the following competitors for a relative assessment of the proposed allocation designs.
• Neyman Allocation: As in Section 3.1, we consider a randomized version of Neyman allocation for binary responses, where the (i+1)th subject is assigned to treatment k with probability √(μ̂_ki(1 − μ̂_ki)) / {√(μ̂_Ai(1 − μ̂_Ai)) + √(μ̂_Bi(1 − μ̂_Bi))}, k = A, B, where μ̂_ki is the maximum likelihood estimate of µ_k based on the first i allocation and response data.
• Randomised-play-the-winner (RPW) Allocation: RPW is a randomized allocation procedure for binary responses, developed by Wei and Durham (1978). The allocation uses an urn containing two types of balls representing the two treatments. For the allocation of an incoming subject, a ball is drawn with replacement and the corresponding treatment is assigned. If the response is a success, an additional β balls of the same type are added to the urn; for a failure, β balls of the opposite type are added. If the urn initially contains α balls of each type, the procedure is termed RPW(α, β). The rationale behind the allocation is to increase the chance of assigning the treatment doing better.
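The RPW(α, β) urn can be sketched as follows, with response probabilities supplied purely for simulation purposes (function name ours):

```python
import random

def rpw_allocation(n, alpha, beta, mu_a, mu_b, seed=0):
    """RPW(alpha, beta) urn of Wei and Durham (1978): draw a ball with
    replacement to choose the arm; a success adds beta balls of the
    same type, a failure adds beta balls of the opposite type.
    Returns the number of subjects assigned to treatment A."""
    rng = random.Random(seed)
    urn = {"A": alpha, "B": alpha}       # alpha balls of each type
    n_a = 0
    for _ in range(n):
        p_a = urn["A"] / (urn["A"] + urn["B"])
        arm = "A" if rng.random() < p_a else "B"
        n_a += arm == "A"
        success = rng.random() < (mu_a if arm == "A" else mu_b)
        rewarded = arm if success else ("B" if arm == "A" else "A")
        urn[rewarded] += beta            # reinforce the favoured arm
    return n_a
```

Each assignment adds exactly β balls, so after n subjects the urn holds 2α + nβ balls; successes tilt the composition, and hence future draws, towards the treatment doing better.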
For the computation of the performance measures, we consider n = 120 and the allocation designs D1, D2, RPW and Neyman. Specifically, considering treatment A as the better treatment, we conduct a simulation study with 25,000 repetitions for different m and calculate the relevant measures. As indicated earlier, the unspecified common success probability is estimated by the pooled success proportion based on the first stage data. For the calculation of the power, we use a statistic similar to T_m, except that m is replaced by n and S_km by S_kn. All these are reported in Table 2. Boldface figures indicate the powers for the proposed allocation, and those within [ ] are the error rates.
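The mid-p value described above admits a direct sketch: build the null distribution of S_Am − S_Bm at the pooled estimate by convolution and average the two tail probabilities (function name ours):

```python
from math import comb

def mid_p_value(s_a, s_b, m, mu_hat):
    """Mid-p value for T_m via the difference D = S_Am - S_Bm, with the
    common success probability replaced by the pooled estimate mu_hat.
    Returns (1/2)P(D >= z) + (1/2)P(D >= z + 1) with z = s_a - s_b."""
    pmf = [comb(m, k) * mu_hat**k * (1 - mu_hat)**(m - k)
           for k in range(m + 1)]
    # null distribution of D by convolution of two independent binomials
    diff = {}
    for ka in range(m + 1):
        for kb in range(m + 1):
            diff[ka - kb] = diff.get(ka - kb, 0.0) + pmf[ka] * pmf[kb]
    z = s_a - s_b
    upper_tail = lambda c: sum(p for d, p in diff.items() if d >= c)
    return 0.5 * upper_tail(z) + 0.5 * upper_tail(z + 1)
```

Note that P(D ≥ z + 1) = P(D > z), so the returned value is exactly the average of the "≥" and ">" tail probabilities; by symmetry it equals 1/2 when the two arms show identical success counts.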
Remarks. From the performance measures of Table 2, it is easily observed that a larger fraction of subjects is assigned to the better treatment. For the redesigned trial, we examine the expected allocation proportion (EAP) to the better treatment (i.e. Fluoxetine) for varying trial size n. For brevity, we provide only a plot (see Figure 6) of the EAP to Fluoxetine for different values of n under the different allocation schemes. The nature of the EAP for varying n indicates the usefulness of the proposed data dependent allocations (i.e. D1 and D2) over the competitors. However, the EAP for D3 is always higher than those corresponding to D1 and D2. This is quite natural: the first stage data provide strong evidence of the superiority of Fluoxetine, and being a purely two stage design, D3 relies solely on the first stage for the second stage allocation and hence, on average, assigns more subjects to Fluoxetine. But D1 and D2 take into account both the first stage and the available second stage data for further allocations in the second stage and hence are a little conservative in favouring Fluoxetine (i.e. the better treatment). For the response adaptive competitors (i.e. BB and Neyman), we use the available data of 42 assignments to obtain the starting estimates and start adaptation from the 43rd patient onwards. The BB allocation maintains an allocation proportion close to that of D2, but the EAP for the Neyman allocation decreases from 50%, as the better treatment has the lower variability. However, the final response of the actual trial was binary and, considering the shortened REML group, we get 39 responses (excluding the misclassified observations), of which 19 were from the Fluoxetine arm and the remaining 20 from the Placebo arm. The numbers of successes were 11 and 7 for Fluoxetine and Placebo, respectively, and hence we estimate the corresponding success rates as 11/19 and 7/20. Considering these as the first stage data with m = 19, we, as earlier, carry out a simulation study considering different allocation designs to obtain the EAP figures for Fluoxetine for different values of n. For the calculation of the mid-p value, we use the
pooled proportion of successes (11 + 7)/(19 + 20) = 18/39 as an estimate of µ. However, we provide only a plot (see Figure 7) of the EAP figures for Fluoxetine for different values of n. As with the continuous responses, the plot shows the increasing nature of the EAP with n. As earlier, we use the summary measures based on the 39 assignments to obtain initial estimates for the response adaptive competitors (i.e. RPW(1,1) and Neyman) and start adaptation from the 40th patient onwards. The EAP for RPW(1,1) is observed to be higher than that for D3 but lower than those for D1 and D2. However, the EAP for the Neyman allocation decreases initially and then becomes stable at the limiting proportion of 51%. These findings indicate the advantage of data dependent allocation over a fixed allocation design and hence make the proposed allocations one of the practitioner's natural choices.

Concluding remarks
Adaptive allocation designs for normal and binary responses, proposed in the current work, show promising results relative to the existing competitors. The combination of equiprobability and adaptive randomization makes the proposed designs beneficial from the viewpoints of both the clinician and the statistician. However, some subjectivity remains in the selection of the first stage sample size m. A smaller choice diminishes the importance of the first stage, whereas a higher choice reduces the ethical allocation. Therefore, an optimum choice of m is required. Since the proposed allocation is an attempt to assign most of the subjects to the better performing treatment, one can search for an optimal m which maximizes the expected number of allocations to the better treatment for fixed trial size n. It is easy to observe that, for fixed (µ_A, µ_B) with µ_A > µ_B, the better treatment is A and the maximum value of EN_A is m + (n − 2m) = n − m. Hence, m = 0 maximizes the ethical benefit but simultaneously causes a loss in power. Therefore, as a compromise, we suggest maximizing the expected number of allocations to the better treatment subject to a fixed precision measure. However, such a choice is not immediate, and we intend to work further on this issue. Again, the allocation can easily be modified for responses such as exponential or Weibull. But these distributions are often used as lifetime distributions in situations where, in addition, some kind of censoring is present. Naturally, the corresponding development requires a rigorous treatment. Although the present development is based on two treatments, the presence of multiple treatments is also possible. However, the corresponding development is not straightforward and depends on a number of issues. We intend to explore these aspects in further work.

Table 2: Performance evaluation for binary responses with n = 120