Adaptive trial designs: a review of barriers and opportunities


Material Information

Title:
Adaptive trial designs: a review of barriers and opportunities
Physical Description:
Mixed Material
Language:
English
Creator:
Kairalla, John A.
Coffey, Christopher S.
Thomann, Mitchell A.
Muller, Keith E.
Publisher:
BioMed Central (Trials)
Publication Date:
2012
Notes

Abstract:
Adaptive designs allow planned modifications based on data accumulating within a study. The promise of greater flexibility and efficiency stimulates increasing interest in adaptive designs from clinical, academic, and regulatory parties. When adaptive designs are used properly, efficiencies can include a smaller sample size, a more efficient treatment development process, and an increased chance of correctly answering the clinical question of interest. However, improper adaptations can lead to biased studies. A broad definition of adaptive designs allows for countless variations, which creates confusion as to the statistical validity and practical feasibility of many designs. Determining properties of a particular adaptive design requires careful consideration of the scientific context and statistical assumptions. We first review several adaptive designs that garner the most current interest. We focus on the design principles and research issues that lead to particular designs being appealing or unappealing in particular applications. We separately discuss exploratory and confirmatory stage designs in order to account for the differences in regulatory concerns. We include adaptive seamless designs, which combine stages in a unified approach. We also highlight a number of applied areas, such as comparative effectiveness research, that would benefit from the use of adaptive designs. Finally, we describe a number of current barriers and provide initial suggestions for overcoming them in order to promote wider use of appropriate adaptive designs. Given the breadth of the coverage all mathematical and most implementation details are omitted for the sake of brevity. However, the interested reader will find that we provide current references to focused reviews and original theoretical sources which lead to details of the current state of the art in theory and practice. 
Keywords: Adaptive designs, Flexible designs, Group sequential, Internal pilot, Power, Sample size re-estimation, Comparative effectiveness research, Small clinical trials
General Note:
Publication of this article was funded in part by the University of Florida Open-Access Publishing Fund. In addition, requestors receiving funding through the UFOAP project are expected to submit a post-review, final draft of the article to UF's institutional repository, IR@UF, (www.uflib.ufl.edu/UFir) at the time of funding. The Institutional Repository at the University of Florida shares the work of the University of Florida community, with research, news, outreach, and educational materials.
General Note:
Kairalla et al. Trials 2012, 13:145 http://www.trialsjournal.com/content/13/1/145; Pages 1-9
General Note:
doi:10.1186/1745-6215-13-145 Cite this article as: Kairalla et al.: Adaptive trial designs: a review of barriers and opportunities. Trials 2012 13:145

Record Information

Source Institution:
University of Florida
Holding Location:
University of Florida
Rights Management:
All rights reserved by the source institution.
System ID:
AA00013921:00001




Full Text
Adaptive trial designs: a review of barriers and opportunities

John A Kairalla (corresponding author, johnkair@ufl.edu) [1], Christopher S Coffey (christopher-coffey@uiowa.edu) [2], Mitchell A Thomann (mitchell-thomann@uiowa.edu) [2], Keith E Muller (kmuller@ufl.edu) [3]

[1] Department of Biostatistics, University of Florida, PO Box 117450, Gainesville, FL, 32611-7450, USA
[2] Department of Biostatistics, University of Iowa, 2400 University Capitol Centre, Iowa City, IA, 52240-4034, USA
[3] Department of Health Outcomes and Policy, University of Florida, PO Box 100177, Gainesville, FL, 32610-0177, USA

Trials 2012, 13(1):145 (ISSN 1745-6215). http://www.trialsjournal.com/content/13/1/145
doi:10.1186/1745-6215-13-145; PubMed ID 22917111
Received 16 February 2012; accepted 8 August 2012; published 23 August 2012
© 2012 Kairalla et al.; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Keywords: Adaptive designs; Flexible designs; Group sequential; Internal pilot; Power; Sample size re-estimation; Comparative effectiveness research; Small clinical trials

Abstract

Adaptive designs allow planned modifications based on data accumulating within a study. The promise of greater flexibility and efficiency stimulates increasing interest in adaptive designs from clinical, academic, and regulatory parties. When adaptive designs are used properly, efficiencies can include a smaller sample size, a more efficient treatment development process, and an increased chance of correctly answering the clinical question of interest. However, improper adaptations can lead to biased studies. A broad definition of adaptive designs allows for countless variations, which creates confusion as to the statistical validity and practical feasibility of many designs. Determining properties of a particular adaptive design requires careful consideration of the scientific context and statistical assumptions. We first review several adaptive designs that garner the most current interest. We focus on the design principles and research issues that lead to particular designs being appealing or unappealing in particular applications. We separately discuss exploratory and confirmatory stage designs in order to account for the differences in regulatory concerns. We include adaptive seamless designs, which combine stages in a unified approach. We also highlight a number of applied areas, such as comparative effectiveness research, that would benefit from the use of adaptive designs. Finally, we describe a number of current barriers and provide initial suggestions for overcoming them in order to promote wider use of appropriate adaptive designs. Given the breadth of the coverage all mathematical and most implementation details are omitted for the sake of brevity. However, the interested reader will find that we provide current references to focused reviews and original theoretical sources which lead to details of the current state of the art in theory and practice.
Review
Introduction
In traditional clinical trials, key elements such as primary endpoint, clinically meaningful treatment difference, and measure of variability are pre-specified during planning in order to design the study. Investigators then collect all data and perform analyses. The success of the study depends on the accuracy of the original assumptions. Adaptive Designs (ADs) give one way to address uncertainty about choices made during planning. ADs allow a review of accumulating information during a trial to possibly modify trial characteristics
[1]. The flexibility can translate into more efficient therapy development by reducing trial size. The flexibility also increases the chance of a ‘successful’ trial that answers the question of interest (finding a significant effect if one exists or stopping the trial as early as possible if no effect exists).
ADs have received a great deal of attention in the statistical, pharmaceutical, and regulatory fields
[1-8]. The rapid proliferation of interest and inconsistent use of terminology has created confusion and controversy about similarities and differences among the various techniques. Even the definition of an ‘adaptive design’ is a source of confusion. Fortunately, two recent publications have reduced the confusion. An AD working group was formed in 2005 in order to ‘foster and facilitate wider usage and regulatory acceptance of ADs and to enhance clinical development, through fact-based evaluation of the benefits and challenges associated with these designs’
[2]. The group was originally sponsored by the Pharmaceutical Research and Manufacturers of America (PhRMA) and is currently sponsored by the Drug Information Association. The group defined an AD as ‘a clinical study design that uses accumulating data to decide how to modify aspects of the study as it continues, without undermining the validity and integrity of the trial.’ The group also stressed that the changes should not be ad hoc, but ‘by design.’ Finally, the group emphasized that ADs are not a solution for inadequate planning, but are meant to enhance study efficiency while maintaining validity and integrity. Subsequently, the US Food and Drug Administration (FDA) released a draft version of the “Guidance for Industry: Adaptive Design Clinical Trials for Drugs and Biologics”
[3]. The document defined an AD as ‘a study that includes a prospectively planned opportunity for modification of one or more specified aspects of the study design and hypotheses based on analysis of data (usually interim data) from subjects in the study.’ Both groups supported the notion that changes are based on pre-specified decision rules. However, the FDA defined ADs more generally by interpreting as ‘prospective’ any adaptations planned ‘before data were examined in an unblinded manner by any personnel involved in planning the revision’
[3]. Since different individuals become unblinded (that is, ‘unmasked’) at different points in a trial, we believe the FDA draft guidance document left the door open to some gray areas that merit further discussion. Both groups made it clear that the most valid ADs follow the principle of ‘adaptive by design’ since that is the only way to ensure that the integrity and validity of the trial are not compromised by the adaptations.
It is important to differentiate between ADs and what others have referred to as flexible designs
[1,9]. The difference was perhaps best described by Brannath et al., who state that ‘Many designs have been suggested which incorporate adaptivity, however, are in no means flexible, since the rule of how the interim data determine the design of the second part of the trial is assumed to be completely specified in advance’
[9]. Thus, a flexible design describes a more general type of study design that incorporates both planned and unplanned features (Figure 1). There is general agreement that the implementation of flexible designs cannot be haphazard but must preserve validity and integrity (for example, by controlling the type I error rate). While attractive, we believe that this flexibility opens a trial to potential criticism from outside observers and regulators. Furthermore, we believe that many of the concerns could be eliminated by giving more thought to potential adaptations during the planning stages of a trial. Correspondingly, for this review, we adopt a definition similar to that of the AD working group and of the FDA and focus only on ADs that use information from within-trial accumulating data to make changes based on preplanned rules.
Figure 1. Summary of different types of adaptive designs for clinical trials.
As Figure 1 demonstrates, even the constrained definition of AD allows a wide range of possible adaptations, some more acceptable than others. The designs allow updates to the maximum sample size, study duration, treatment group allocation, dosing, number of treatment arms, or study endpoints. For each type of adaptation, researchers must ensure that the type I error rate is controlled, the trial has a high probability of answering the research question of interest, and equipoise is maintained
[10]. New analytic results with properly designed simulations [11] are often needed to meet the restrictions. The approach reinforces the importance of ‘adaptive by design’ because the adaptation rules must be clearly specified in advance in order to properly design the simulations.
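To make the role of design simulations concrete, the sketch below estimates the type I error rate of a pre-specified rule by simulating trials under the null hypothesis. The two-look rule, sample size, and thresholds are our own illustrative example (a naive design known to inflate the error rate), not taken from any cited source.

```python
import random
import statistics

def simulate_type1(adaptive_test, n_sims, rng):
    """Estimate the type I error rate of a fully pre-specified adaptive
    rule by simulating trials under the null. This is only possible when
    the complete adaptation rule is fixed in advance ('adaptive by
    design'), so that adaptive_test can encode it."""
    return sum(adaptive_test(rng) for _ in range(n_sims)) / n_sims

def naive_two_look_test(rng, n=50, z_crit=1.96):
    """Hypothetical rule: test at n/2 and again at n with the same
    unadjusted critical value. One-sample z-test, true mean 0, SD 1."""
    data = [rng.gauss(0, 1) for _ in range(n)]
    for look in (n // 2, n):
        if abs(statistics.mean(data[:look])) * look ** 0.5 > z_crit:
            return True
    return False

rng = random.Random(13)
rate = simulate_type1(naive_two_look_test, 4000, rng)
# rate lands noticeably above the nominal 0.05, revealing the inflation
```

Simulations like this, run over a grid of plausible scenarios, are how the operating characteristics of a proposed AD are typically documented for reviewers.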
Despite their suggested promise, current acceptance and use of ADs in clinical trials are not aligned with the attention given to ADs in the literature. In order to justify the use of ADs, more work is needed to clarify which designs are appropriate, and what needs to be done to ensure successful implementation. In the remainder of the paper we summarize specific AD types used in clinical research and address current concerns with the use of the designs. There are too many possible ADs to cover all of them in a brief review. We begin with learning stage designs. Next, we describe confirmatory stage designs. We then discuss adaptive seamless designs that seek to integrate multiple stages of clinical research into a single study. Next we explore applied areas that would benefit from ADs. Finally, we describe some barriers to the implementation of ADs and suggest how they can be resolved in order to make appropriate ADs practical.
Learning-stage adaptive designs
Overview
In general, AD methods are accepted more in the learning (exploratory) stages of clinical trials
[3,4]. Early in the clinical development process, ADs allow researchers to learn and optimize based on accruing information related to dosing, exposure, differential participant response, response modifiers, or biomarker responses [3]. The low impact of exploratory studies on regulatory approval means less emphasis on control of type I errors (avoiding false positives), and more emphasis on control of type II errors (avoiding false negatives). Early learning phase designs in areas with potentially toxic treatments (for example, cancer or some neurological diseases) seek to determine the maximum tolerated dose (MTD): the highest dose at which less than some specified percentage of treated participants (such as 33 or 50 percent) have dose-related toxicities. An accurate determination of the MTD is critical since it will likely be used as the maximum dose in future clinical development. If the dose is too low, a potentially useful drug could be missed. If the dose is too high, participants in future studies could be put at risk. After the MTD has been determined, the next step is typically to choose a dose (less than or equal to the MTD) most likely to affect the clinical outcome of interest. Since the issues are very different for these two phases of the learning stage, we briefly summarize each below.
Early learning stage (toxicity dose)
Although a number of methods have been proposed for phase I MTD determination, by far the most prevalent is the traditional 3 + 3 method originally developed for, and primarily used in, oncology trials
[12,13]. In this rule-based method, toxicity is defined as a binary event and participants are treated in groups of three, starting with an initial low dose. The algorithm then iterates, moving dose levels up or down depending on the number of toxicities observed. The MTD is identified from the data; for example, the highest dose studied with less than 1/3 toxicities (that is, zero or one dose-limiting toxicity out of six participants). This method is straightforward and convenient in that it requires no modeling and very few assumptions. However, the method has been criticized for not producing a good estimate [14]. Several adaptive dose-response methods have advantages over the traditional method. A popular design is the Bayesian adaptive model-based approach called the continual reassessment method (CRM) [14]. By more effectively estimating the MTD along with a dose-response curve, the CRM tends to quickly accelerate participants to doses around the MTD. Fewer participants are treated at ineffective doses and the design is less likely to over-estimate or under-estimate the true MTD compared to the 3 + 3 method [14]. Safety concerns about the original CRM led to several improvements [15,16]. The CRM has utility in any area where finding the MTD is needed. However, to date, it has primarily been used in cancer [17] and stroke [18,19] research trials.
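As a concrete sketch of the rule-based iteration described above, the following simulation implements one common variant of the 3 + 3 algorithm. The dose levels and true toxicity probabilities are hypothetical, and published variants differ in details such as de-escalation rules.

```python
import random

def three_plus_three(tox_probs, rng):
    """Simulate one 3 + 3 trial. tox_probs holds hypothetical true
    toxicity probabilities per dose level. Returns the index of the
    declared MTD, or -1 if the lowest dose is already too toxic."""
    dose = 0
    while True:
        # Treat a first cohort of three at the current dose.
        tox = sum(rng.random() < tox_probs[dose] for _ in range(3))
        if tox == 1:
            # 1/3 toxicities: expand to six with a second cohort.
            tox += sum(rng.random() < tox_probs[dose] for _ in range(3))
        if tox >= 2:
            # Two or more toxicities: this dose exceeds the MTD.
            return dose - 1
        # 0/3, or at most 1/6: escalate, or stop at the top dose studied.
        if dose + 1 == len(tox_probs):
            return dose
        dose += 1

rng = random.Random(2012)
declared = [three_plus_three([0.05, 0.15, 0.33, 0.55], rng)
            for _ in range(2000)]
```

Running many simulated trials under assumed toxicity curves, as above, is the standard way to compare the 3 + 3 design's operating characteristics against model-based alternatives such as the CRM.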
Late learning stage exploratory (efficacy dose)
ADs for later exploratory development are not as well-developed as for earlier work. Consequently, PhRMA created a separate adaptive dose response working group to explore the issue and make recommendations
[20]. Among the group’s conclusions were that dose response (DR) is more easily detected than estimated, that typical sample sizes in dose-ranging studies are inadequate for DR estimation, and that adaptive dose-ranging methods clearly improve DR detection and estimation. The group also noted the advantages of design-focused adaptive methods. The group favored a general adaptive dose allocation approach using Bayesian modeling to identify an appropriate dose for each new participant based on previous responses [21], as employed in the Acute Stroke Therapy by Inhibition of Neutrophils (ASTIN) study [22]. Unfortunately, complex simulations (or new analytic development) and software are needed in order to control the operating characteristics and employ the methods. The development of well-documented and user-friendly software is vital for future use. We believe that access to dependable and easy-to-use software will make ADs more common in the exploratory stages of trials.
Confirmatory adaptive designs
Overview
From the FDA’s current perspective, some designs are considered ‘well understood,’ while others are not
[3]. Accordingly, scrutiny of a protocol will vary depending on the type of design proposed. The FDA generally accepts study designs that base adaptations on masked (aggregate) data [3]. For example, a study could change recruitment criteria based on accruing aggregate baseline measurements. Group sequential (GS) designs are also deemed ‘well understood’ by the FDA. GS designs allow stopping a trial early if it becomes clear that a treatment is superior or inferior. Thus, GS methods meet our definition of an AD and are by far the most widely used ADs in modern confirmatory clinical research. They have been extensively described elsewhere [23] and will not be discussed further.
Some designs are ‘less well understood,’ from the FDA perspective
[3]. It is important to note that such methods are not automatically prohibited by the FDA. Rather, there is a higher bar for justifying the use of less well-understood designs. Proving lack of bias and advantageous operating characteristics requires extensive planning and validation. Debate continues concerning the usefulness and validity of confirmatory ADs in this category. Examples include adaptive randomization, enrichment designs, and sample size re-estimation (although some subtypes are classified as ‘well understood’). We briefly mention each below.
Adaptive randomization
Traditional randomization fixes constant allocation probabilities in advance. Adaptive randomization methods vary the allocation of subjects to treatment groups based on accruing trial information
[1,24,25]. There are two basic types: covariate adaptive and response adaptive randomization. Each is briefly described below.
With a sufficient sample size, a traditional randomization process will balance the distribution of all known and unknown covariates at the end of a study. This is, in fact, one of the major benefits of randomization. However, this process does not ensure that the covariates are balanced at all times during the conduct of the trial. Covariate adaptive randomization provides a higher probability of having treatment group balanced covariates during the study by allowing the allocation probabilities to change as a function of the current distribution of covariates. Methods exist forcing optimum balance deterministically (for example, minimization), with fixed (unequal) probability, and with dynamic allocation probabilities
[26]. A number of examples of methods and practice can be found in the literature (for example, [27,28]).
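To illustrate how allocation probabilities can depend on the current covariate distribution, here is a minimal minimization sketch in the style of Pocock and Simon. The marginal-imbalance score and the biased-coin probability p_best are illustrative choices, not a prescribed standard.

```python
import random

def minimization_assign(new_pt, assigned, arms=("A", "B"), p_best=0.8, rng=None):
    """Covariate adaptive allocation by minimization: choose the arm that
    minimizes total marginal covariate imbalance, using a biased coin
    (probability p_best) to retain some randomness.

    new_pt:   dict of factor -> level for the incoming participant.
    assigned: list of (arm, covariate dict) for enrolled participants."""
    rng = rng or random.Random()
    imbalance = {}
    for arm in arms:
        score = 0
        for factor, level in new_pt.items():
            # Per-arm counts of participants sharing this level, with the
            # new participant hypothetically added to `arm`.
            counts = {a: sum(1 for a_i, cov in assigned
                             if a_i == a and cov.get(factor) == level)
                      for a in arms}
            counts[arm] += 1
            score += max(counts.values()) - min(counts.values())
        imbalance[arm] = score
    lo = min(imbalance.values())
    best = [a for a in arms if imbalance[a] == lo]
    if len(best) == len(arms):
        return rng.choice(list(arms))   # complete tie: plain randomization
    if rng.random() < p_best:
        return rng.choice(best)         # usually pick a minimizing arm
    return rng.choice([a for a in arms if a not in best])

# Hypothetical sequential enrollment on a single stratification factor.
rng = random.Random(7)
enrolled = []
for pt in ({"sex": "F"}, {"sex": "F"}, {"sex": "M"}, {"sex": "F"}):
    arm = minimization_assign(pt, enrolled, rng=rng)
    enrolled.append((arm, pt))
```

Setting p_best = 1 gives deterministic minimization; values below 1 trade some balance for unpredictability, which many methodologists prefer.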
Alternatively, response adaptive randomization uses observed treatment outcomes from preceding participants to change allocation probabilities. The strategy can fulfill the ethical desire to increase the likelihood of giving an individual the best-known treatment at the time of randomization. Use is not widespread, but examples can be found
[29-32]. Although attractive, response adaptive randomization schemes have administrative complexities and may create ethical dilemmas [7,33]. One complication is that enrolling later in the study increases the chance of receiving the superior treatment since the randomization probability will have increased for the better treatment. Thus, bias can be created if sicker patients enroll earlier and healthier ones decide to wait until later to enroll [5]. Furthermore, the actual advantages may be negligible since the analysis, type I error rate control, and sample size calculations become more complicated due to the need to account for adaptive randomization [34-36]. Proponents of response-adaptive randomization designs defend their efficiency and usefulness while continuing to address criticisms with new methods and simulation results [25]. However, according to the FDA draft guidance, ‘Adaptive randomization should be used cautiously in adequate and well-controlled studies, as the analysis is not as easily interpretable as when fixed randomization probabilities are used’ [3].
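A classic example of a response adaptive rule is the randomized play-the-winner urn. The sketch below, with hypothetical success probabilities, shows how allocation drifts toward the better-performing arm as outcomes accrue.

```python
import random

def rpw_trial(p_success, n, rng):
    """Randomized play-the-winner urn, a simple response adaptive rule.
    Draw an arm with probability proportional to its balls in the urn; a
    success adds a ball for that arm, a failure adds one for the other.
    p_success: hypothetical true success probabilities for arms 0 and 1."""
    balls = [1, 1]
    counts = [0, 0]
    for _ in range(n):
        arm = 0 if rng.random() < balls[0] / sum(balls) else 1
        counts[arm] += 1
        if rng.random() < p_success[arm]:
            balls[arm] += 1          # reward the arm that just succeeded
        else:
            balls[1 - arm] += 1      # a failure shifts allocation away
    return counts

rng = random.Random(42)
counts = rpw_trial((0.3, 0.7), 200, rng)
# counts[1] tends to exceed counts[0], since arm 1 succeeds more often
```

Note how the drift itself illustrates the enrollment-timing concern discussed above: participants randomized late face allocation probabilities that already reflect earlier outcomes.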
Enrichment designs
Enrichment of a study population refers to ensuring that participants in a trial are likely to demonstrate an effect from treatment, if one exists
[37]. For example, there is benefit to enrolling participants lacking comorbidities, with a risk factor of interest (such as high blood pressure), and likely to be compliant. An extension known as the adaptive enrichment design fulfills the desire to target therapies to patients who can benefit the most from the treatment [38,39]. In such designs, a trial initially considers a broad population. The first study period reveals the participant groups most likely to benefit from the test agent (discovery phase). Subgroup members are then randomized to receive either the active agent or control (validation phase). Power for the chosen subgroups is increased due to the increased sample size in the subgroups, while non-promising groups are discarded. Adaptive enrichment designs have been praised for their ability to identify patient groups and undiluted effect sizes that can aid in the design and efficiency of replication studies [39]. An appealing area for adaptive enrichment is pharmacogenetic research, where it could allow for isolation of the one or two genetic marker subgroups that are predictive of treatment response. The approach can increase efficiency when identifiable genetic subgroups have increased treatment benefit [40]. Additionally, some studies have used adaptive enrichment to identify a subset most likely to respond to treatment [41]. However, adaptive enrichment designs have been criticized as having unfavorable operating characteristics in real-world confirmatory research. Disadvantages include increases in complexity, biased treatment effect estimates, lack of generalizability, and lack of information in excluded groups [7]. We believe that adaptive enrichment designs currently have the greatest value in late learning stage designs.
Sample size re-estimation
Choosing a fixed sample size is complicated by the need to choose a clinically meaningful treatment effect and to specify values for nuisance parameters such as the variance, overall event rate, or accrual rate. Inaccurate estimates of the parameters lead to an underpowered or overpowered study, both of which have negative consequences. Sample size re-estimation (SSR) designs allow the parameter estimates to be updated during an ongoing trial, and then used to adjust the sample size accordingly
[42].
Historically, a great deal of controversy surrounding ADs has centered on SSR based on observed treatment effects
[43-45]. The methods are defended for use in specific contexts, such as using a small amount of initial funding to seek promising results [46]. The authors of the FDA draft guidance document, in listing the design as ‘less well understood,’ noted the potential for inefficiency, an increased type I error rate, difficulties in interpretation, and magnification of treatment effect bias [3]. A major concern with this type of SSR design is the potential to convey treatment effect information from decisions made using treatment-arm specific data at interim time points. A clever investigator with knowledge of the SSR procedure and the decision made after viewing the data could possibly back-calculate an absolute treatment effect. It should be noted that concerns about gaining some knowledge based on an action (or inaction) exist when using any treatment-arm specific data, including GS methods. Nevertheless, the clinical trials community now routinely uses GS methods without major concern since the conveyed information is usually minimal.
Other types of SSR have stimulated less controversy. For example, internal pilots (IPs) are two stage designs with no interim testing, but with interim SSR based only on first stage nuisance parameter estimates
[47]. Moderate to large sample sizes imply minimal type I error rate inflation with unadjusted tests in a range of settings [4,48,49]. IP designs can be used in large randomized controlled trials to re-assess key nuisance parameters and make appropriate modifications with little cost to the type I error rate. In contrast, small IP trials can have an inflated type I error rate and therefore require adjustments for bias [50-52]. Since IP designs do not include interim testing or effect size based SSR, there generally are not the same concerns about indirectly conveying an absolute treatment effect, though Proschan showed that it is possible if a researcher has knowledge of both the IP procedure and access to the blinded data [48]. Consequently, some observers believe that, from a regulatory standpoint, IP methods that keep group allocation masked may be preferred whenever possible. Accordingly, masked methods for IPs have been proposed [53,54] and are classified as ‘well understood’ in the FDA draft guidance document [3]. However, unmasked IP procedures may be appropriate provided that steps are taken to minimize the number of people with access to the data or to the group allocation. Whether blinded or not, if an IP design is implemented in a setting where non-objective parties do not have access to accumulating raw data, the sample size changes will give no information concerning effect trends of interest. Thus, we believe that this setting has fewer risks, and we therefore encourage more use of SSR based on nuisance parameters in future phase II and III trials.
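For the variance-driven case, the interim re-computation itself is simple. The sketch below uses the standard normal-approximation sample size formula for a two-group comparison; the convention of never shrinking below the planned n is one common choice, not a requirement of IP designs, and the numbers are illustrative.

```python
import math
from statistics import NormalDist

def ip_reestimate_n(interim_sd, delta, n_planned, alpha=0.05, power=0.9):
    """Internal pilot sample size re-estimation from the nuisance
    parameter alone: recompute the per-group n for a two-sample
    comparison (normal approximation), keeping the planned clinically
    meaningful difference delta fixed."""
    z = NormalDist().inv_cdf
    n_new = math.ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2
                      * interim_sd ** 2 / delta ** 2)
    # Never shrink below the originally planned size (a common, though
    # not universal, convention).
    return max(n_planned, n_new)

# Planning with SD 10 and difference 5 gives 85 per group at 90% power;
# an interim SD estimate of 12 raises the per-group target to 122.
n_adjusted = ip_reestimate_n(interim_sd=12.0, delta=5.0, n_planned=85)
```

Because the update depends only on the variance estimate, a masked (blinded) interim SD can be substituted for interim_sd without changing the calculation.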
Adaptive seamless designs
A seamless design combines exploratory and confirmatory phases into a single trial. As a type of two-stage design, seamless designs can increase overall efficiency by reducing the lead time (‘white space’) between phases. Information from participants enrolled in the first stage is used to inform the second stage. An adaptive seamless design proceeds in the same manner, but uses data from participants enrolled in both stages in the final analysis. Previous authors have paid the most attention to a seamless transition between phase IIb (learning) and phase III (confirming)
[1,55-58]. Seamless designs also seem appealing in early development (phase I/IIa). The approach allows for a more efficient utilization of sample size and resources versus conducting completely separate studies. However, since data from the learning phase inform decisions for the second phase, using the data in the final analysis raises concerns about bias and error rate inflation. As an example, consider the Coenzyme Q10 in Amyotrophic Lateral Sclerosis (QALS) study: an adaptive, two-stage, randomized controlled phase I/IIa trial to compare decline in Amyotrophic Lateral Sclerosis (ALS) Functional Rating Scale score [59]. The first phase used a selection design [60] to choose one of two doses (1800 mg or 2500 mg). The second phase then compared the selected dose to placebo using a futility design [61]. Because the second phase dose was selected as ‘best’ in the first phase, there is a positive bias carried forward. Correspondingly, if the final test does not account for the bias, the overall type I error rate may be increased. The QALS investigators performed a series of studies to determine a bias correction and incorporated it into the final test statistic [62]. The scenario is common since seamless designs require special statistical methods and extra planning to account for the potential bias. In general, the potential benefits must be weighed against the additional effort required to ensure a valid test at the end of the study.
Applied areas that would benefit from adaptive designs
Combinations of group sequential and sample size re-estimation
Combining the power benefits of an IP design and the early stopping sample size advantages of GS designs has great appeal. Asymptotically correct information-based monitoring approaches for simultaneous use of GS and IP methods in large clinical trials have been proposed
[63]. The approach can give power and expected sample size benefits over fixed sample methods in small samples, but may inflate the type I error rate [64]. Kairalla et al. [65] provided a practical solution; however, more work is needed in the area.
Rare diseases and small trials
Planning a small clinical trial, particularly for a rare disease, presents several challenges. Any trial should examine an important research question, use a rigorous and sensitive methodology to address the question, and minimize risks to participants. Choosing a feasible study design to accomplish all of the goals in a small trial can be a formidable challenge. Small trials exhibit more variability than larger trials, which implies that standard designs may lead to trials with power adequate only for large effects. The setting makes ADs particularly appealing. However, it is important to be clear about what an AD can and cannot do in the rare disease setting. Most importantly, an AD cannot make a drug more effective. One of the biggest benefits of an AD is quite the opposite: identifying ineffective treatments earlier. Doing so will minimize the resources allocated to studying an ineffective treatment and allow re-distributing resources to more promising treatments. Although ADs cannot ‘change the answer’ regarding the effectiveness of a particular treatment, they can increase the efficiency in finding an answer.
Comparative effectiveness trials
Comparative effectiveness (CE) trials compare two or more treatments
[66] that have already been shown to be efficacious. Unique issues found in CE trials make ADs attractive in this area. For one, the concept of a ‘minimum clinically meaningful effect’ in the population has diminished meaning in a CE trial. Assuming roughly equal costs and side effects, a range of values may be identified, with the upper limit being the largest reasonable effect and the lower limit being the smallest effect deemed sizable enough to change practice in the study context. Unfortunately, since detecting smaller effects requires larger sample sizes, for practical reasons researchers may feel the need to power CE trials for effects on the upper end of the spectrum. A potential AD could have two stages, with the first powered to detect the larger reasonable effect size. At the conclusion of the first stage, one of three decisions might be reached: 1) declare efficacy (one treatment best); 2) declare futility (unlikely to show a difference between treatments); or 3) if evidence suggests a smaller effect might exist, proceed with a second stage powered to detect the smaller effect. Another issue is that available variability estimates are probably too low, since the estimates were likely obtained from highly controlled efficacy trials. If true, using the estimates to power a CE trial may lead to an underpowered study. Thus, variance-based SSR could be built into the prior example to address the uncertainty. We believe ADs have promise in CE trials and that future research is warranted.
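The three-way decision at the end of stage 1 can be written down directly. The thresholds below are purely illustrative placeholders; a real design must calibrate them, jointly with the stage 2 test, to control the overall type I error rate.

```python
def ce_stage1_decision(z, z_eff=2.5, z_fut=0.5):
    """Stage 1 decision rule for the two-stage comparative effectiveness
    design sketched above. z is the interim z-statistic for the treatment
    difference; z_eff and z_fut are hypothetical boundary values."""
    if z >= z_eff:
        return "efficacy"   # one treatment declared best
    if z <= z_fut:
        return "futility"   # unlikely to show a difference
    return "continue"       # run stage 2, powered for the smaller effect
```

A variance-based SSR step, as discussed above, could be layered onto the "continue" branch so that stage 2 is sized using the interim variability estimate rather than the efficacy-trial value.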
Applications in other research settings
Currently, ADs are considered most often in the context of clinical trials. However, the ability to modify incorrect initial assumptions would have value in many other settings. Importantly, since regulatory issues may not exist in many research settings, we believe that ADs may actually be much easier to implement. For example, laboratory research involving animals could use an AD to re-assess key parameters and determine whether more animals are needed to achieve high power. As another example, an observational study requires assumptions about the distribution of the population that will be enrolled. Any discrepancy between the hypothesized and actual distribution of the enrolled population will affect the power of the study. Although extensions of the IP design to the observational setting have been considered [67], more work is needed.
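The re-assessment idea mentioned for both the animal and observational examples reduces to a short calculation: re-solve the standard sample size formula with an interim estimate of the nuisance parameter in place of the planning value. A minimal sketch, assuming a two-arm comparison of normal means (a real internal pilot design would also adjust the final test to bound the type I error rate):

```python
import math
from statistics import NormalDist

def ssr_per_arm(sd_interim, delta, alpha=0.05, power=0.90):
    """Per-arm sample size for a two-arm comparison of means, recomputed
    from an interim (internal pilot) estimate of the standard deviation.
    Normal-approximation formula; alpha is two-sided."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value
    z_b = NormalDist().inv_cdf(power)           # power quantile
    n = 2 * ((z_a + z_b) * sd_interim / delta) ** 2
    return math.ceil(n)
```

For example, with a planning SD of 1.0 and a meaningful difference of 0.5, the formula gives 85 per arm at 90% power; an interim SD estimate of 1.3 would raise that to 143.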
Barriers to implementing adaptive designs
Even though additional methodological development is needed in ADs, appropriate statistical methods exist to support a much greater use of ADs than currently seen. We believe logistical issues and regulatory concerns, rather than statistical issues, currently limit AD use. The majority of research on ADs has been driven by drug development within the pharmaceutical industry. While many basic principles remain the same regardless of the funding environment, some specific challenges differ when considering the use of ADs for trials funded by the National Institutes of Health (NIH) or foundations. For example, traditional funding mechanisms lack the required flexibility to account for sample size modifications after initiation of a trial. There is also a general sense of confusion and lack of understanding about the distinction between acceptable and unacceptable adaptations. If the reviewers do not understand the important distinctions, a valid AD might not pass through peer review. An NIH and private foundation funded workshop on ‘Scientific Advances in Adaptive Clinical Trial Designs’ was held in November 2009, as a first attempt to address the challenges
[68]. Participants included representatives from research institutions, regulatory bodies, patient advocacy groups, non-profit organizations, professional associations, and pharmaceutical companies. The participants stressed that the use of ADs may require a different way of thinking about the structure and conduct of Data and Safety Monitoring Boards (DSMBs). They also agreed that there is a great need for further education and communication regarding the strengths and weaknesses of various types of ADs. For example, researchers should be encouraged to publish manuscripts describing experiences (both positive and negative) associated with completed trials that used an AD. Similarly, a stronger emphasis on a statistical background for NIH reviewers and DSMB members seems necessary.
While communication among parties can go a long way towards increasing the use and understanding of ADs, more work is needed to develop infrastructure to support AD trials. Study infrastructure is one area where industry is clearly ahead of grant-funded research. As an example, justifying the properties of ADs often requires extensive planning through computations or simulations. Researchers must find a way to fund the creation of extensive calculations for a hypothetical study. The issue is exacerbated by the fact that the planning is generally required prior to submitting a grant application for funding. Many pharmaceutical companies are developing in-house teams primarily responsible for conducting such simulations. Greater barriers exist for implementing the same type of infrastructure within publicly funded environments, particularly given the challenges associated with the current limited and highly competitive federal budget.
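To see why such planning simulations are unavoidable, consider the simplest possible check: estimating by Monte Carlo the true type I error rate of a naive plan that tests once at the halfway point and again at the end, both at the nominal two-sided 5% level. The design and numbers are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def naive_two_look_trial(n_per_look=100, crit=1.96):
    """Simulate one trial under the null hypothesis with an unadjusted
    interim test: 'reject' if |z| > crit at either the halfway look or
    the final analysis."""
    x = rng.normal(0.0, 1.0, 2 * n_per_look)          # true effect is zero
    z_interim = x[:n_per_look].mean() * np.sqrt(n_per_look)
    z_final = x.mean() * np.sqrt(2 * n_per_look)
    return abs(z_interim) > crit or abs(z_final) > crit

reps = 20000
type1 = sum(naive_two_look_trial() for _ in range(reps)) / reps
# type1 comes out near 0.08, well above the nominal 0.05: the extra look
# inflates the error rate, so adjusted (e.g. group sequential) bounds are needed.
```

Even this toy computation takes design-specific code and a defensible number of replicates; for realistic ADs with several interacting adaptations, the programming and computing burden grows accordingly, which is the funding problem described above.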
In our opinion, the most important way to ensure a high chance of conducting a successful AD trial is to have a high level of infrastructure (efficient data management, thorough understanding of AD issues, etcetera) in place. A low complexity AD (for example, an IP or GS design) conducted in a high infrastructure environment currently provides the best chance for success. However, a low infrastructure environment might still be able to conduct a low complexity AD successfully with some extra effort. The same chance of success is not present when implementing a high complexity AD (for example, an adaptive seamless II/III design, or a combination of different adaptations). With a complex design, a high level of infrastructure is needed in order to successfully conduct the trial. The QALS study, a complex two-stage seamless design described earlier, is a good example of a study with high infrastructure and high adaptivity
[62]. The QALS study was a success, requiring only 185 participants to establish that the cost and effort of undertaking a phase III trial would not be worthwhile. However, the trial was successful only because all parties involved (researchers, sponsor, DSMB members, etcetera) clearly understood the intricacies of the AD being used. A breakdown in understanding for any stakeholder could have severely damaged the study. A high complexity AD with low infrastructure is likely doomed to fail. Unfortunately, the scenario is currently a common one, owing to the desire to use complex ADs without the necessary high level of infrastructure. One solution would be to consider only simple ADs. However, since researchers are mainly interested in obtaining the efficiency advantages of more complex adaptations, we believe that the only way to increase the chances for success in the future is to first improve the existing infrastructure. As previously stated, many companies have begun the process. However, we believe that NIH should also offer more recognition and funding for planning clinical trials that might benefit from adaptations.
Although infrastructure characteristics often limit rates of adaptation, a number of steps have been taken to address this concern, especially in the neurosciences. One ongoing example is the NIH and FDA supported 'Accelerating Drug and Device Evaluation through Innovative Clinical Trial Design' project [69]. The participants are studying the development and acceptance of a wide range of adaptive designs within the existing infrastructure of the National Institute of Neurological Disorders and Stroke (NINDS)-supported Neurological Emergencies Treatment Trials (NETT) network
[70]. The goal is to incorporate the resulting designs into future network grant submissions. Another example is the creation of the NINDS-funded Network for Excellence in Neuroscience Clinical Trials (NeuroNEXT)
[71]. The goal of the network is to provide infrastructure supporting phase II studies in neuroscience, including the conduct of studies in rare neurological diseases. The long-term objective of the network is to rapidly and efficiently translate advances in neuroscience into treatments for individuals with neurologic disorders. The infrastructure is intended to serve as a model that can be replicated across a number of studies and diseases. The development of rich infrastructures such as NeuroNEXT greatly increases the feasibility of using more novel trial designs, including ADs. Additional infrastructure with flexibility is needed in other disease areas to advance the use of ADs, particularly in the publicly funded environment.
Conclusions
A general overview of the main design classes provides the basis for discussing how to correctly implement ADs. We agree with Vandemeulebroecke
[72] that discussion concerning ADs should center on five main points: feasibility, validity, integrity, efficiency, and flexibility. We recommend systematically addressing each of the concerns through the development of better methodology, infrastructure, and software. Successful adoption of ADs also requires systematic changes to clinical research policies. We believe that the barriers can be overcome to move appropriate ADs into common clinical practice.
Abbreviations
AD: Adaptive design; ALS: Amyotrophic Lateral Sclerosis; ASTIN: Acute Stroke Therapy by Inhibition of Neutrophils; CE: Comparative effectiveness; CRM: Continual reassessment method; DSMB: Data and Safety Monitoring Board; DR: Dose response; FDA: US Food and Drug Administration; GS: Group sequential; IP: Internal pilot; MTD: Maximum tolerated dose; NETT: Neurological Emergencies Treatment Trials; NeuroNEXT: Network for Excellence in Neuroscience Clinical Trials; NIH: National Institutes of Health; NINDS: National Institute of Neurological Disorders and Stroke; PhRMA: Pharmaceutical Research and Manufacturers of America; QALS: Coenzyme Q10 in ALS; SSR: Sample size re-estimation.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors contributed significantly to the overall design of the paper. JAK wrote the initial draft and worked on revisions. CSC conceived of the paper and worked on revisions. MAT conducted literature reviews and worked on revisions. KEM contributed to the overall focus and content and helped revise the manuscript. All authors read and approved the final manuscript.
Acknowledgements
We gratefully acknowledge the advice and assistance of our colleague Dr. Ronald Shorr at the University of Florida and the Malcom Randall VA Medical Center. We would also like to thank the reviewers for helpful suggestions on an earlier version of this manuscript that greatly improved the quality of the work.
All authors are supported in part by a supplement to the NIH/NCRR Clinical and Translational Science Award to the University of Florida, NCRR 3UL1RR029890-03S1. Additional support for JAK included NINR 1 R01 AG039495-01. Additional support for CSC included NINDS U01-NS077352, NINDS U01-NS077108, NINDS U01-NS038529, and NHLBI R01-HL091843-04. Additional support for KEM included NIDDK R01-DK072398, NIDCR U54-DE019261, NIDCR R01-DE020832-01A1, NHLBI R01-HL091005, NIAAA R01-AA016549, and NIDA R01-DA031017.
References
1. Chow S, Chang M. Adaptive Design Methods in Clinical Trials. Boca Raton: Chapman & Hall/CRC; 2007.
2. Gallo P, Chuang-Stein C, Dragalin V, Gaydos B, Krams M, Pinheiro J. Adaptive designs in clinical drug development: an executive summary of the PhRMA working group. J Biopharm Stat 2006;16:275-283.
3. U.S. Food and Drug Administration. Draft Guidance for Industry: Adaptive Design Clinical Trials for Drugs and Biologics. http://www.fda.gov/downloads/DrugsGuidanceComplianceRegulatoryInformation/Guidances/UCM201790.pdf
4. Coffey CS, Kairalla JA. Adaptive clinical trials: progress and challenges. Drugs R D 2008;9:229-242.
5. Chow SC, Corey R. Benefits, challenges and obstacles of adaptive clinical trial designs. Orphanet J Rare Dis 2011;6:79.
6. Bretz F, Koenig F, Brannath W, Glimm E, Posch M. Adaptive designs for confirmatory clinical trials. Stat Med 2009;28:1181-1217.
7. Emerson SS, Fleming TR. Adaptive methods: telling "the rest of the story". J Biopharm Stat 2010;20:1150-1165.
8. Coffey CS. Adaptive design across stages of therapeutic development. In: Ravina B, Cummings J, McDermott M, Poole RM, editors. Clinical Trials in Neurology: Design, Conduct, & Analysis. Cambridge: Cambridge University Press; 2012:91-100.
9. Brannath W, Koenig F, Bauer P. Multiplicity and flexibility in clinical trials. Pharm Stat 2007;6:205-216.
10. Dragalin V. Adaptive designs: terminology and classification. Drug Inf J 2006;40:425-435.
11. Burton A, Altman DG, Royston P, Holder RL. The design of simulation studies in medical statistics. Stat Med 2006;25:4279-4292.
12. Storer BE. Design and analysis of phase I clinical trials. Biometrics 1989;45:925-937.
13. Le Tourneau C, Lee JJ, Siu LL. Dose escalation methods in phase I cancer clinical trials. J Natl Cancer Inst 2009;101:708-720.
14. O'Quigley J, Pepe M, Fisher L. Continual reassessment method: a practical design for phase 1 clinical trials in cancer. Biometrics 1990;46:33-48.
15. Cheung K, Kaufmann P. Efficiency perspectives on adaptive designs in stroke clinical trials. Stroke 2011;42:2990-2994.
16. Garrett-Mayer E. The continual reassessment method for dose-finding studies: a tutorial. Clin Trials 2006;3:57-71.
17. Tevaarwerk A, Wilding G, Eickhoff J, et al. Phase I study of continuous MKC-1 in patients with advanced or metastatic solid malignancies using the modified time-to-event continual reassessment method (TITE-CRM) dose escalation design. Invest New Drugs 2011;30:1039-1045.
18. Elkind MSV, Sacco RL, MacArthur RB, et al. High-dose lovastatin for acute ischemic stroke: results of the phase I dose escalation neuroprotection with statin therapy for acute recovery trial (NeuSTART). Cerebrovasc Dis 2009;28:266-275.
19. Selim M, Yeatts S, Goldstein JN, et al. Safety and tolerability of deferoxamine mesylate in patients with acute intracerebral hemorrhage. Stroke 2011;42:3067-3074.
20. Bornkamp B, Bretz F, Dmitrienko A, et al. Innovative approaches for designing and analyzing adaptive dose-ranging trials. J Biopharm Stat 2007;17:965-995.
21. Berry DA, Mueller P, Grieve AP, Smith M. Bayesian designs for dose-ranging drug trials. In: Gatsonis C, Kass RE, Carlin B, et al., editors. Case Studies in Bayesian Statistics, Vol. 5. New York: Springer; 2002:99-181.
22. Krams M, Lees KR, Hacke W, Grieve AP, Orgogozo J, Ford GA. ASTIN: an adaptive dose-response study of UK-279,276 in acute ischemic stroke. Stroke 2003;34:2543-2549.
23. Jennison C, Turnbull BW. Group Sequential Methods. Boca Raton: Chapman & Hall/CRC; 2000.
24. Zhang L, Rosenburger W. Adaptive randomization in clinical trials. In: Hinkelmann K, editor. Design and Analysis of Experiments, Special Designs and Applications, Volume 3. Hoboken: John Wiley & Sons; 2012:251-282.
25. Rosenberger WF, Sverdlov O, Hu F. Adaptive randomization for clinical trials. J Biopharm Stat 2012;22:719-736.
26. Rosenberger WF, Sverdlov O. Handling covariates in the design of clinical trials. Stat Sci 2008;23:404-419.
27. Antognini AB, Zagoraiou M. The covariate-adaptive biased coin design for balancing clinical trials in the presence of prognostic factors. Biometrika 2011;98:519-535.
28. Jensen RK, Leboeuf-Yde C, Wedderkopp N, Sorensen JS, Manniche C. Rest versus exercise as treatment for patients with low back pain and Modic changes: a randomized controlled trial. BMC Med 2012;10:22.
29. Bartlett RH, Roloff DW, Cornell RG, Andrews AF, Dillon PW, Zwischenberger JB. Extracorporeal circulation in neonatal respiratory failure: a prospective randomized study. Pediatrics 1985;76:479-487.
30. Eitner F, Ackermann D, Hilgers RD, Floege J. Supportive versus immunosuppressive therapy of progressive IgA nephropathy (STOP) IgAN trial: rationale and study protocol. J Nephrol 2008;21:284-289.
31. Fiore LD, Brophy M, Ferguson RE, et al. A point-of-care clinical trial comparing insulin administered using a sliding scale versus a weight-based regimen. Clin Trials 2011;8:183-195.
32. Yuan Y, Huang X, Liu S. A Bayesian response-adaptive covariate-balanced randomization design with application to a leukemia clinical trial. Stat Med 2011;30:1218-1229.
33. Fardipour P, Littman G, Burns DD, et al. Planning and executing response-adaptive learn-phase clinical trials: 1. The process. Drug Inf J 2009;43:713-723.
34. Gu X, Lee JJ. A simulation study for comparing testing statistics in response-adaptive randomization. BMC Med Res Methodol 2010;10:48.
35. Wang SJ. The bias issue under the complete null with response adaptive randomization: commentary on "Adaptive and model-based dose-ranging trials: quantitative evaluation and recommendation". Stat Biopharm Res 2010;2:458-461.
36. Korn EL, Freidlin B. Outcome-adaptive randomization: is it useful? J Clin Oncol 2011;29:771-776.
37. Temple R. Enrichment of clinical study populations. Clin Pharmacol Ther 2010;88:774-778.
38. Freidlin B, Simon R. Evaluation of randomized discontinuation design. J Clin Oncol 2005;23:5094-5098.
39. Wang SJ, Hung HMJ, O'Neill RT. Adaptive patient enrichment designs in therapeutic trials. Biometrical J 2009;51:358-374.
40. Van der Baan FH, Knol MJ, Klungel OH, Egberts ACG, Grobbee DE, Roes KCB. Potential of adaptive clinical trial designs in pharmacogenetic research. Pharmacogenomics 2012;13:571-578.
41. Ho TW, Pearlman E, Lewis D, et al. Efficacy and tolerability of rizatriptan in pediatric migraineurs: results from a randomized, double-blind, placebo-controlled trial using a novel adaptive enrichment design. Cephalalgia 2012;32:760-765.
42. Proschan MA. Sample size re-estimation in clinical trials. Biometrical J 2009;51:348-357.
43. Cui L, Hung HMJ, Wang S. Modification of sample size in group sequential clinical trials. Biometrics 1999;55:853-857.
44. Tsiatis AA, Mehta C. On the inefficiency of the adaptive design for monitoring clinical trials. Biometrika 2003;90:367-378.
45. Jennison C, Turnbull BW. Adaptive and nonadaptive group sequential tests. Stat Med 2006;25:917-932.
46. Mehta C, Pocock SJ. Adaptive increase in sample size when interim results are promising: a practical guide with examples. Stat Med 2011;30:3267-3284.
47. Wittes J, Brittain E. The role of internal pilot studies in increasing the efficiency of clinical trials. Stat Med 1990;9:65-72.
48. Proschan MA. Two-stage sample size re-estimation based on a nuisance parameter: a review. J Biopharm Stat 2005;15:559-574.
49. Friede T, Kieser M. Sample size recalculation in internal pilot study designs: a review. Biometrical J 2006;48:537-555.
50. Kieser M, Friede T. Re-calculating the sample size in internal pilot study designs with control of the type I error rate. Stat Med 2000;19:901-911.
51. Coffey CS, Muller KE. Controlling test size while gaining the benefits of an internal pilot design. Biometrics 2001;57:625-631.
52. Coffey CS, Kairalla JA, Muller KE. Practical methods for bounding type I error rate with an internal pilot design. Comm Stat Theory Methods 2007;36:2143-2157.
53. Gould AL, Shih W. Sample size re-estimation without unblinding for normally distributed outcomes with unknown variance. Comm Stat Theory Methods 1992;21:2833-2853.
54. Friede T, Kieser M. Blinded sample size recalculation for clinical trials with normal data and baseline adjusted analysis. Pharm Stat 2011;10:8-13.
55. Maca J, Bhattacharya S, Dragalin V, Gallo P, Krams M. Adaptive seamless phase II/III designs: background, operational aspects and examples. Drug Inf J 2006;40:463-473.
56. Stallard N, Todd S. Seamless phase II/III designs. Stat Methods Med Res 2011;20:623-634.
57. Korn EL, Freidlin B, Abrams JS, Halabi S. Design issues in randomized phase II/III trials. J Clin Oncol 2012;30:667-671.
58. Conroy T, Desseigne F, Ychou M, et al. FOLFIRINOX versus gemcitabine for metastatic pancreatic cancer. N Engl J Med 2011;364:1817-1825.
59. Kaufmann P, Thompson JLP, Levy G, et al. Phase II trial of CoQ10 for ALS finds insufficient evidence to justify phase III. Ann Neurol 2009;66:235-244.
60. Levin B. Selection and futility designs. In: Ravina B, Cummings J, McDermott M, Poole RM, editors. Clinical Trials in Neurology: Design, Conduct, & Analysis. Cambridge: Cambridge University Press; 2012:78-90.
61. Ravina B, Palesch Y. The phase II futility clinical trial design. Prog Neurother Neuropsych 2007;2:27-38.
62. Levy G, Kaufmann P, Buchsbaum R, et al. A two-stage design for a phase II clinical trial of coenzyme Q10 in ALS. Neurology 2006;66:660-663.
63. Tsiatis AA. Information-based monitoring of clinical trials. Stat Med 2006;25:3236-3244.
64. Kairalla JA, Muller KE, Coffey CS. Combining an internal pilot with an interim analysis for single degree of freedom tests. Comm Stat Theory Methods 2010;39:3717-3738.
65. Kairalla JA, Coffey CS, Muller KE. Achieving the benefits of both an internal pilot and interim analysis in large and small samples. JSM Proceedings 2010:5239-5252.
66. Tunis SR, Benner J, McClellan M. Comparative effectiveness research: policy context, methods development and research infrastructure. Stat Med 2010;29:1963-1976.
67. Gurka MJ, Coffey CS, Gurka KK. Internal pilots for observational studies. Biometrical J 2010;52:590-603.
68. Scientific Advances in Adaptive Clinical Trial Designs Workshop Planning Committee. Scientific Advances in Adaptive Clinical Trial Designs Workshop Summary; 2010. www.palladianpartners.com/adaptivedesigns/summary
69. Accelerating Drug and Device Evaluation through Innovative Clinical Trial Design. http://www2.med.umich.edu/prmc/media/newsroom/details.cfm?ID=1753
70. Neurological Emergencies Treatment Trials. http://www.nett.umich.edu
71. NeuroNEXT: accelerating drug development in neurology. Lancet Neurol 2012;11:119.
72. Vandemeulebroecke M. Group sequential and adaptive designs: a review of basic concepts and points of discussion. Biometrical J 2008;50:541-557.



REVIEW Open Access

Adaptive trial designs: a review of barriers and opportunities

John A Kairalla1*, Christopher S Coffey2, Mitchell A Thomann2 and Keith E Muller3

*Correspondence: johnkair@ufl.edu. 1Department of Biostatistics, University of Florida, PO Box 117450, Gainesville, FL 32611-7450, USA. Full list of author information is available at the end of the article.

Kairalla et al. Trials 2012, 13:145. http://www.trialsjournal.com/content/13/1/145

© 2012 Kairalla et al.; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Adaptive designs allow planned modifications based on data accumulating within a study. The promise of greater flexibility and efficiency stimulates increasing interest in adaptive designs from clinical, academic, and regulatory parties. When adaptive designs are used properly, efficiencies can include a smaller sample size, a more efficient treatment development process, and an increased chance of correctly answering the clinical question of interest. However, improper adaptations can lead to biased studies. A broad definition of adaptive designs allows for countless variations, which creates confusion as to the statistical validity and practical feasibility of many designs. Determining properties of a particular adaptive design requires careful consideration of the scientific context and statistical assumptions. We first review several adaptive designs that garner the most current interest. We focus on the design principles and research issues that lead to particular designs being appealing or unappealing in particular applications. We separately discuss exploratory and confirmatory stage designs in order to account for the differences in regulatory concerns. We include adaptive seamless designs, which combine stages in a unified approach. We also highlight a number of applied areas, such as comparative effectiveness research, that would benefit from the use of adaptive designs. Finally, we describe a number of current barriers and provide initial suggestions for overcoming them in order to promote wider use of appropriate adaptive designs. Given the breadth of the coverage all mathematical and most implementation details are omitted for the sake of brevity. However, the interested reader will find that we provide current references to focused reviews and original theoretical sources which lead to details of the current state of the art in theory and practice.

Keywords: Adaptive designs, Flexible designs, Group sequential, Internal pilot, Power, Sample size re-estimation, Comparative effectiveness research, Small clinical trials

Review

Introduction

In traditional clinical trials, key elements such as primary endpoint, clinically meaningful treatment difference, and measure of variability are pre-specified during planning in order to design the study. Investigators then collect all data and perform analyses. The success of the study depends on the accuracy of the original assumptions. Adaptive Designs (ADs) give one way to address uncertainty about choices made during planning. ADs allow a review of accumulating information during a trial to possibly modify trial characteristics [1]. The flexibility can translate into more efficient therapy development by reducing trial size. The flexibility also increases the chance of a 'successful' trial that answers the question of interest (finding a significant effect if one exists or stopping the trial as early as possible if no effect exists).

ADs have received a great deal of attention in the statistical, pharmaceutical, and regulatory fields [1-8]. The rapid proliferation of interest and inconsistent use of terminology has created confusion and controversy about similarities and differences among the various techniques.

Even the definition of an 'adaptive design' is a source of confusion. Fortunately, two recent publications have reduced the confusion. An AD working group was formed in 2005 in order to 'foster and facilitate wider usage and regulatory acceptance of ADs and to enhance clinical development, through fact-based evaluation of the benefits and challenges associated with these designs' [2]. The group was originally sponsored by the Pharmaceutical

Research and Manufacturers of America (PhRMA) and is currently sponsored by the Drug Information Association. The group defined an AD as 'a clinical study design that uses accumulating data to decide how to modify aspects of the study as it continues, without undermining the validity and integrity of the trial.' The group also stressed that the changes should not be ad hoc, but 'by design.' Finally, the group emphasized that ADs are not a solution for inadequate planning, but are meant to enhance study efficiency while maintaining validity and integrity. Subsequently, the US Food and Drug Administration (FDA) released a draft version of the "Guidance for Industry: Adaptive Design Clinical Trials for Drugs and Biologics" [3]. The document defined an AD as 'a study that includes a prospectively planned opportunity for modification of one or more specified aspects of the study design and hypotheses based on analysis of data (usually interim data) from subjects in the study.' Both groups supported the notion that changes are based on pre-specified decision rules. However, the FDA defined ADs more generally by interpreting as 'prospective' any adaptations planned 'before data were examined in an unblinded manner by any personnel involved in planning the revision' [3]. Since different individuals become unblinded (that is, 'unmasked') at different points in a trial, we believe the FDA draft guidance document left open doors to some gray areas that merit further discussion. Both groups made it clear that the most valid ADs follow the principle of 'adaptive by design' since that is the only way to ensure that the integrity and validity of the trial are not compromised by the adaptations.

It is important to differentiate between ADs and what others have referred to as flexible designs [1,9]. The difference was perhaps best described by Brannath et al. who state that 'Many designs have been suggested which incorporate adaptivity, however, are in no means flexible, since the rule of how the interim data determine the design of the second part of the trial is assumed to be completely specified in advance' [9]. Thus, a flexible design describes a more general type of study design that incorporates both planned and unplanned features (Figure 1).

There is general agreement that the implementation of flexible designs cannot be haphazard but must preserve validity and integrity (for example, by controlling type I error rate). While attractive, we believe that this flexibility opens a trial to potential criticism from outside observers and regulators. Furthermore, we believe that many of the concerns could be eliminated by giving more thought to potential adaptations during the planning stages of a trial. Correspondingly, for this review, we adopt a definition similar to that of the AD working group and of the FDA and focus only on ADs that use information from within-trial accumulating data to make changes based on pre-planned rules.

As Figure 1 demonstrates, even the constrained definition of AD allows a wide range of possible adaptations, some more acceptable than others. The designs allow updates to the maximum sample size, study duration, treatment group allocation, dosing, number of treatment arms, or study endpoints. For each type of adaptation, researchers must ensure that the type I error rate is controlled, the trial has a high probability of answering the research question of interest, and equipoise is maintained [10]. New analytic results with properly designed simulations [11] are often needed to meet the restrictions. The approach reinforces the importance of 'adaptive by design' because the adaptation rules must be clearly specified in advance in order to properly design the simulations.

Despite their suggested promise, current acceptance and use of ADs in clinical trials are not aligned with the attention given to ADs in the literature. In order to justify the use of ADs, more work is needed to clarify which designs are appropriate, and what needs to be done to ensure successful implementation. In the remainder of the paper we summarize specific AD types used in clinical research and address current concerns with the use of the designs. There are too many possible ADs to cover all of them in a brief review. We begin with learning stage designs. Next, we describe confirmatory stage designs. We then discuss adaptive seamless designs that seek to integrate multiple stages of clinical research into a single study. Next we explore applied areas that would benefit from ADs. Finally, we describe some barriers to the implementation of ADs and suggest how they can be resolved in order to make appropriate ADs practical.

Learning-stage adaptive designs

Overview

In general, AD methods are accepted more in the learning (exploratory) stages of clinical trials [3,4]. Early in the clinical development process ADs allow researchers to learn and optimize based on accruing information related to dosing, exposure, differential participant response, response modifiers, or biomarker responses [3]. The low impact of exploratory studies on regulatory approval means less emphasis on control of type I errors, and more emphasis on control of type II errors (avoiding false negatives). Early learning phase designs in areas with potentially toxic treatments (for example, cancer or some neurological diseases) seek to determine the maximum tolerated dose (MTD), the highest dose for less than some percent of treated participants (such as 33 or 50 percent) having dose-related toxicities. An accurate determination of the MTD is critical since it will likely be used as the maximum dose in future clinical development. If the dose is too low, a potentially useful drug could be missed. If the dose is too high, participants in

future studies could be put at risk. After the MTD has been determined, the next step is typically to choose a dose (less than or equal to the MTD) most likely to affect the clinical outcome of interest. Since the issues are very different for these two phases of the learning stage, we briefly summarize each below.

Early learning stage (toxicity dose)

Although a number of methods have been proposed for phase I MTD determination, by far the most prevalent is the traditional 3+3 method originally developed for, and primarily used in, oncology trials [12,13]. In this rule-based method, toxicity is defined as a binary event and participants are treated in groups of three, starting with an initial low dose. The algorithm then iterates, moving dose levels up or down depending on the number of toxicities observed. The MTD is identified from the data; for example, the highest dose studied with less than 1/3 toxicities (that is, zero or one dose-limiting toxicity out of six participants). This method is straightforward and convenient in that it requires no modeling and very few assumptions. However, the method has been criticized for not producing a good estimate [14]. Several adaptive dose-response methods have advantages over the traditional method. A popular design is the Bayesian adaptive model-based approach called the continual reassessment method (CRM) [14]. By more effectively estimating the MTD along with a dose-response curve, the CRM tends to quickly accelerate participants to doses around the MTD. Fewer participants are treated at ineffective doses and the design is less likely to over-estimate or under-estimate the true MTD compared to the 3+3 method [14]. Safety concerns about the original CRM led to several improvements [15,16]. The CRM has utility in any area where finding the MTD is needed. However, to date, it has primarily been used in cancer [17] and stroke [18,19] research trials.

Late learning stage exploratory (efficacy dose)

ADs for later exploratory development are not as well developed as for earlier work. Consequently, PhRMA created a separate adaptive dose response working group to explore the issue and make recommendations [20].
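The rule-based iteration of the 3+3 method described above can be sketched as a short simulation. This is a simplified rendering (real protocols add details such as confirming the MTD in six participants), and the dose-level toxicity probabilities used in the example are hypothetical:

```python
import random

random.seed(7)

def three_plus_three(tox_probs):
    """Simplified 3+3 dose escalation over a list of true (hypothetical)
    per-dose toxicity probabilities. Returns the index of the declared MTD,
    or -1 if even the lowest dose appears too toxic."""
    level = 0
    while True:
        # Treat a cohort of three at the current dose level.
        tox = sum(random.random() < tox_probs[level] for _ in range(3))
        if tox == 1:
            # Ambiguous result: expand the cohort to six participants.
            tox += sum(random.random() < tox_probs[level] for _ in range(3))
            if tox <= 1:
                tox = 0            # 0-1 of 6 toxic: acceptable, escalate
            else:
                return level - 1   # 2+ of 6 toxic: MTD is the prior level
        if tox == 0:
            if level == len(tox_probs) - 1:
                return level       # highest dose tested without excess toxicity
            level += 1             # escalate
        else:
            return level - 1       # 2+ of 3 toxic: MTD is the prior level

# Example: five dose levels with toxicity rising from 5% to 55%.
mtd = three_plus_three([0.05, 0.10, 0.20, 0.35, 0.55])
```

Running the sketch many times over an assumed toxicity curve shows how variable the declared MTD is, which is exactly the kind of evidence behind the criticism that the 3+3 rule does not produce a good estimate.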
Among the group's conclusions were that dose response (DR) is more easily detected than estimated, typical sample sizes in dose-ranging studies are inadequate for DR estimation, and adaptive dose-ranging methods clearly improve DR detection and estimation. The group also noted the advantages of design-focused adaptive methods. The group favored a general adaptive dose allocation approach using Bayesian modeling to identify an appropriate dose for each new participant based on previous responses [21], as employed in the Acute Stroke Therapy by Inhibition of Neutrophils (ASTIN) study [22]. Unfortunately, complex simulations (or new analytic development) and software are needed in order to control the operating characteristics and employ the methods. The development of well documented and user-friendly software is vital for future use. We believe that access to dependable and easy-to-use software will make ADs more common in the exploratory stages of trials.

Confirmatory adaptive designs

Overview

From the FDA's current perspective, some designs are considered 'well understood,' while others are not [3]. Accordingly, scrutiny of a protocol will vary depending on the type of design proposed. The FDA generally accepts study designs that base adaptations on masked (aggregate) data [3]. For example, a study could change recruitment criteria based on accruing aggregate baseline measurements. Group sequential (GS) designs are also deemed 'well understood' by the FDA. GS designs allow stopping a trial early if it becomes clear that a treatment is superior or inferior. Thus, GS methods meet our definition of an AD and are by far the most widely used ADs in modern confirmatory clinical research. They have been extensively described elsewhere [23] and will not be discussed further.

[Figure 1: Summary of different types of adaptive designs for clinical trials. The diagram distinguishes planned adaptive designs from unplanned flexible designs (studies with unknown properties), and groups ADs by phase: learning phase (adaptive dose response for toxicity and for efficacy), combined/seamless phase (phase I/IIa and phase IIb/III), and confirmatory phase (group sequential, sample size re-estimation, combinations of GS and SSR, and adaptive randomization).]

Some designs are 'less well understood' from the FDA perspective [3]. It is important to note that such methods are not automatically prohibited by the FDA. Rather, there is a higher bar for justifying the use of less well understood designs. Proving lack of bias and advantageous operating characteristics requires extensive planning and validation. Debate continues concerning the usefulness and validity of confirmatory ADs in this category. Examples include adaptive randomization, enrichment designs, and sample size re-estimation (although some subtypes are classified as 'well understood'). We briefly mention each below.

Adaptive randomization

Traditional randomization fixes constant allocation probabilities in advance. Adaptive randomization methods vary the allocation of subjects to treatment groups based on accruing trial information [1,24,25]. There are two basic types: covariate and response adaptive randomization. Each is briefly described immediately below.

With a sufficient sample size, a traditional randomization process will balance the distribution of all known and unknown covariates at the end of a study. This is, in fact, one of the major benefits of randomization. However, this process does not ensure that the covariates are balanced at all times during the conduct of the trial.
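A small simulation illustrates the interim-imbalance point just made. All numbers here are invented for illustration: with 40 participants enrolled and a binary covariate carried by half the population, fixed 1:1 randomization fairly often leaves the two arms noticeably unequal on that covariate at an interim look.

```python
import random

def interim_imbalance(n_interim, prevalence, rng):
    """Absolute difference between arms in the number of covariate-positive
    participants after n_interim enrollments under simple 1:1 randomization."""
    counts = {"A": 0, "B": 0}
    for _ in range(n_interim):
        arm = rng.choice(("A", "B"))        # fixed 50/50 allocation
        if rng.random() < prevalence:       # participant carries the covariate
            counts[arm] += 1
    return abs(counts["A"] - counts["B"])

rng = random.Random(13)
sims = [interim_imbalance(n_interim=40, prevalence=0.5, rng=rng)
        for _ in range(2000)]
# Fraction of interim looks where the arms differ by four or more
# covariate-positive participants out of only 40 enrolled:
frac_imbalanced = sum(d >= 4 for d in sims) / len(sims)
```

Imbalances of this kind wash out as the trial grows, which is exactly why covariate adaptive methods target balance *during* the trial rather than at its end.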
Covariate adaptive randomization provides a higher probability of having covariates balanced across treatment groups during the study by allowing the allocation probabilities to change as a function of the current distribution of covariates. Methods exist for forcing optimum balance deterministically (for example, minimization), with fixed (unequal) probability, and with dynamic allocation probabilities [26]. A number of examples of methods and practice can be found in the literature (for example, [27,28]).

Alternatively, response adaptive randomization uses observed treatment outcomes from preceding participants to change allocation probabilities. The strategy can fulfill the ethical desire to increase the likelihood of giving an individual the best-known treatment at the time of randomization. Use is not widespread, but examples can be found [29-32]. Although attractive, response adaptive randomization schemes have administrative complexities and may create ethical dilemmas [7,33]. One complication is that enrolling later in the study increases the chance of receiving the superior treatment, since the randomization probability will have increased for the better treatment. Thus, bias can be created if sicker patients enroll earlier and healthier ones decide to wait until later to enroll [5]. Furthermore, the actual advantages may be negligible since the analysis, type I error rate control, and sample size calculations become more complicated due to the need to account for adaptive randomization [34-36]. Proponents of response adaptive randomization designs defend their efficiency and usefulness while continuing to address criticisms with new methods and simulation results [25]. However, according to the FDA draft guidance, 'Adaptive randomization should be used cautiously in adequate and well-controlled studies, as the analysis is not as easily interpretable as when fixed randomization probabilities are used' [3].

Enrichment designs

Enrichment of a study population refers to ensuring that participants in a trial are likely to demonstrate an effect from treatment, if one exists [37]. For example, there is benefit to enrolling participants lacking comorbidities, with a risk factor of interest (such as high blood pressure), and likely to be compliant. An extension known as adaptive enrichment designs fulfills the desire to target therapies to the patients who can benefit the most from the treatment [38,39]. In such designs, a trial initially considers a broad population. The first study period reveals the participant groups most likely to benefit from the test agent (discovery phase). Subgroup members are then randomized to receive either the active agent or control (validation phase). Power for the chosen subgroups is increased due to the increased sample size in the subgroups, while non-promising groups are discarded. Adaptive enrichment designs have been praised for their ability to identify patient groups and undiluted effect sizes that can aid in the design and efficiency of replication studies [39]. An appealing area for adaptive enrichment is pharmacogenetic research, where it could allow for isolation of the one or two genetic marker subgroups that are predictive of treatment response. The approach can increase efficiency when identifiable genetic subgroups have increased treatment benefit [40]. Additionally, some studies have used an adaptive enrichment design to identify a subset most likely to respond to treatment [41]. However, adaptive enrichment designs have been criticized as having unfavorable operating characteristics in real-world confirmatory research. Disadvantages include increases in complexity, biased treatment effect estimates, lack of generalizability, and lack of information in excluded groups [7]. We believe that adaptive enrichment designs currently have greatest value in late learning stage designs.
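The response adaptive randomization idea discussed above is easiest to see in its simplest classical instance, the randomized play-the-winner urn. The following is a purely illustrative sketch: the two-arm RPW(1, 1) urn scheme with invented success probabilities, not a design taken from any of the cited trials.

```python
import random

def play_the_winner(n, p_success, rng):
    """Randomized play-the-winner urn, RPW(1, 1), for two arms.

    p_success: dict mapping arm -> true success probability (simulation input).
    Returns the number of participants allocated to each arm.
    """
    urn = ["A", "B"]                 # one ball per arm to start
    allocated = {"A": 0, "B": 0}
    for _ in range(n):
        arm = rng.choice(urn)        # draw a ball to assign the next participant
        allocated[arm] += 1
        if rng.random() < p_success[arm]:
            urn.append(arm)                         # success: reinforce this arm
        else:
            urn.append("B" if arm == "A" else "A")  # failure: reinforce the other
    return allocated

rng = random.Random(7)
alloc = play_the_winner(n=200, p_success={"A": 0.7, "B": 0.4}, rng=rng)
# The better-performing arm tends to accumulate balls and hence participants.
```

The drift of allocation probability toward the better arm is precisely what creates both the ethical appeal of the approach and the later-enrollment bias concern noted above.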
Sample size re-estimation

Choosing a fixed sample size is complicated by the need to choose a clinically meaningful treatment effect and to specify values for nuisance parameters such as the variance, overall event rate, or accrual rate. Inaccurate estimates of the parameters lead to an underpowered or overpowered study, both of which have negative consequences. Sample size re-estimation (SSR) designs allow the parameter estimates to be updated during an ongoing trial, and then used to adjust the sample size accordingly [42].

Historically, a great deal of the controversy surrounding ADs has centered on SSR based on observed treatment effects [43-45]. The methods are defended for use in specific contexts, such as using a small amount of initial funding to seek promising results [46]. The authors of the FDA draft guidance document, in listing the design as 'less well understood,' noted the potential for inefficiency, an increased type I error rate, difficulties in interpretation, and magnification of treatment effect bias [3]. A major concern with this type of SSR design is the potential to convey treatment effect information through decisions made using treatment-arm specific data at interim time points. A clever investigator with knowledge of the SSR procedure and the decision made after viewing the data could possibly back-calculate an absolute treatment effect. It should be noted that concerns about gaining some knowledge based on an action (or inaction) exist when using any treatment-arm specific data, including GS methods. Nevertheless, the clinical trials community now routinely uses GS methods without major concern, since the conveyed information is usually minimal.
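To make the mechanics of effect-based SSR concrete, the following purely illustrative sketch computes conditional power for a one-sided, one-sample z-test with known variance, treating the interim estimate as the true standardized effect, and then enlarges the second stage until a target conditional power is reached. All thresholds and sample sizes are invented, and the sketch deliberately omits the type I error and bias adjustments discussed above.

```python
from math import sqrt
from statistics import NormalDist

Z = NormalDist()  # standard normal distribution

def conditional_power(z1, n1, n2, theta, alpha=0.025):
    """P(final one-sided z-test rejects | interim z-statistic z1), with n1
    observations so far, n2 still to come, and assumed true standardized
    effect theta (one-sample test, unit variance)."""
    z_crit = Z.inv_cdf(1 - alpha)
    # Stage 2 sum needed for the final statistic to cross the critical value:
    needed = z_crit * sqrt(n1 + n2) - z1 * sqrt(n1)
    # The stage 2 sum is Normal(theta * n2, n2):
    return 1 - Z.cdf((needed - theta * n2) / sqrt(n2))

def re_estimate_n2(z1, n1, n2_planned, target=0.8, n2_max=500):
    """Enlarge stage 2 until conditional power, evaluated at the interim
    effect estimate, reaches the target (capped at n2_max)."""
    theta_hat = z1 / sqrt(n1)      # interim estimate of the standardized effect
    if theta_hat <= 0:
        return n2_planned          # no favorable trend: keep the original plan
    n2 = n2_planned
    while n2 < n2_max and conditional_power(z1, n1, n2, theta_hat) < target:
        n2 += 1
    return n2

# A moderately promising interim result triggers a larger second stage:
new_n2 = re_estimate_n2(z1=1.2, n1=50, n2_planned=50)
```

Because the decision rule depends on the interim effect, an observer who knows the rule and sees the revised sample size can back-calculate the interim statistic, which is exactly the information-leakage concern raised above.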
Other types of SSR have stimulated less controversy. For example, internal pilots (IPs) are two-stage designs with no interim testing, but with interim SSR based only on first stage nuisance parameter estimates [47]. Moderate to large sample sizes imply minimal type I error rate inflation with unadjusted tests in a range of settings [4,48,49]. IP designs can be used in large randomized controlled trials to re-assess key nuisance parameters and make appropriate modifications with little cost to the type I error rate. In contrast, small IP trials can have an inflated type I error rate and therefore require adjustments for bias [50-52]. Since IP designs do not include interim testing or effect size based SSR, there generally are not the same concerns about indirectly conveying an absolute treatment effect, though Proschan showed that it is possible if a researcher has knowledge of both the IP procedure and access to the blinded data [48]. Consequently, some observers believe that, from a regulatory standpoint, IP methods that keep group allocation masked may be preferred whenever possible. Accordingly, masked methods for IPs have been proposed [53,54] and are classified as 'well understood' in the FDA Draft Guidance document [3]. However, unmasked IP procedures may be appropriate provided that steps are taken to minimize the number of people with access to the data or to the group allocation. Whether blinded or not, if an IP design is implemented in a setting where non-objective parties do not have access to accumulating raw data, the sample size changes will give no information concerning the effect trends of interest. Thus, we believe that this setting has fewer risks, and we therefore encourage more use of SSR based on nuisance parameters in future phase II and III trials.

Adaptive seamless designs

A seamless design combines exploratory and confirmatory phases into a single trial. As a type of two-stage design, seamless designs can increase overall efficiency by reducing the lead time ('white space') between phases. Information from participants enrolled in the first stage is used to inform the second stage. An adaptive seamless design proceeds in the same manner, but uses data from participants enrolled in both stages in the final analysis.
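As a purely illustrative sketch of the nuisance-parameter approach described above (all data and design values invented), an internal pilot for a two-arm comparison of normal outcomes can update only the common variance estimate from first-stage data and plug it into the standard two-sample formula.

```python
from math import ceil
from statistics import NormalDist, variance

Z = NormalDist()

def per_group_n(sigma2, delta, alpha=0.05, power=0.9):
    """Standard per-group size for a two-sample z-test detecting a mean
    difference delta when the common variance is sigma2."""
    z_total = Z.inv_cdf(1 - alpha / 2) + Z.inv_cdf(power)
    return ceil(2 * z_total ** 2 * sigma2 / delta ** 2)

def internal_pilot_n(pilot_a, pilot_b, delta, n_planned, alpha=0.05, power=0.9):
    """Internal pilot SSR: pool the first-stage sample variances, recompute
    the per-group size, and (one common convention) never shrink below the
    originally planned size."""
    df_a, df_b = len(pilot_a) - 1, len(pilot_b) - 1
    pooled_s2 = (df_a * variance(pilot_a) + df_b * variance(pilot_b)) / (df_a + df_b)
    return max(n_planned, per_group_n(pooled_s2, delta, alpha, power))

# Planned assuming variance 1; the (made-up) pilot data are more variable,
# so the re-estimated size grows:
pilot_a = [0.1, 1.9, -0.6, 2.4, 0.8, -1.2, 1.5, 0.3]
pilot_b = [1.0, -0.4, 2.2, 0.6, -1.1, 1.8, 0.2, 0.9]
n_final = internal_pilot_n(pilot_a, pilot_b, delta=0.5,
                           n_planned=per_group_n(1.0, 0.5))
```

Because only a variance, not a treatment effect, drives the change, little effect information is conveyed: the property that makes nuisance-parameter SSR comparatively uncontroversial.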
Previous authors have paid the most attention to a seamless transition between phase IIb (learning) and phase III (confirming) [1,55-58]. Seamless designs also seem appealing in early development (phase I/IIa). The approach allows for a more efficient utilization of sample size and resources versus conducting completely separate studies. However, since data from the learning phase inform decisions for the second phase, using the data in the final analysis raises concerns about bias and error rate inflation. As an example, consider the Coenzyme Q10 in Amyotrophic Lateral Sclerosis (QALS) study: an adaptive, two-stage, randomized controlled phase I/IIa trial to compare decline in Amyotrophic Lateral Sclerosis (ALS) Functional Rating Scale score [59]. The first phase used a selection design [60] to choose one of two doses (1800 mg or 2500 mg). The second phase then compared the selected dose to placebo using a futility design [61]. Because the second phase dose was selected as 'best' in the first phase, there is a positive bias carried forward. Correspondingly, if the final test does not account for the bias, the overall type I error rate may be increased. The QALS investigators performed a series of studies to determine a bias correction and incorporated it into the final test statistic [62]. This scenario is common, since seamless designs require special statistical methods and extra planning to account for the potential bias. In general, the potential benefits must be weighed against the additional effort required to ensure a valid test at the end of the study.

Applied areas that would benefit from adaptive designs

Combinations of group sequential and sample size re-estimation

Combining the power benefits of an IP design and the early stopping sample size advantages of GS designs has great appeal. Asymptotically correct information-based monitoring approaches for simultaneous use of GS and IP methods in large clinical trials have been proposed [63]. The approach can give power and expected sample size benefits over fixed sample methods in small samples, but may inflate the type I error rate [64]. Kairalla et al. [65] provided a practical solution; however, more work is needed in this area.

Rare diseases and small trials

Planning a small clinical trial, particularly for a rare disease, presents several challenges. Any trial should examine an important research question, use a rigorous and sensitive methodology to address the question, and minimize risks to participants. Choosing a feasible study design to accomplish all of these goals in a small trial can be a formidable challenge. Small trials exhibit more variability than larger trials, which implies that standard designs may lead to trials with power adequate only for large effects. This setting makes ADs particularly appealing. However, it is important to be clear about what an AD can and cannot do in the rare disease setting. Most importantly, an AD cannot make a drug more effective.
One of the biggest benefits of an AD is quite the opposite: identifying ineffective treatments earlier. Doing so will minimize the resources allocated to studying an ineffective treatment and allow re-distributing resources to more promising treatments. Although ADs cannot 'change the answer' regarding the effectiveness of a particular treatment, they can increase the efficiency in finding an answer.

Comparative effectiveness trials

Comparative effectiveness (CE) trials compare two or more treatments [66] that have already been shown to be efficacious. Unique issues found in CE trials make ADs attractive in this area. For one, the concept of a 'minimum clinically meaningful effect' in the population has a diminished meaning in a CE trial. Assuming roughly equal costs and side effects, a range of values may be identified, with the upper limit being the largest reasonable effect and the lower limit the smallest effect deemed sizable enough to change practice in the study context. Unfortunately, since detecting smaller effects requires larger sample sizes, for practical reasons researchers may feel the need to power CE trials for effects on the upper end of the spectrum. A potential AD could have two stages, with the first powered to detect the larger reasonable effect size. At the conclusion of the first stage, one of three decisions might be reached: 1) declare efficacy (one treatment best); 2) declare futility (unlikely to show a difference between treatments); or 3) if evidence suggests a smaller effect might exist, proceed with a second stage powered to detect the smaller effect. Another issue is that available variability estimates are probably too low, since the estimates were likely obtained from highly controlled efficacy trials. If true, using the estimates to power a CE trial may lead to an underpowered study. Thus, variance-based SSR could be built into the prior example to address the uncertainty. We believe ADs have promise in CE trials and that future research is warranted.

Applications in other research settings

Currently, ADs are considered most often in the context of clinical trials. However, the ability to modify incorrect initial assumptions would have value in many other settings. Importantly, since regulatory issues may not exist in many research settings, we believe that ADs may actually be much easier to implement there. For example, laboratory research involving animals could use an AD to re-assess key parameters and determine whether more animals are needed to achieve high power. As another example, an observational study requires assumptions about the distribution of the population that will be enrolled. Any discrepancy between the hypothesized and actual distribution of the enrolled population will affect the power of the study. Although extensions of the IP design to the observational setting have been considered [67], more work is needed.

Barriers to implementing adaptive designs

Even though additional methodological development is needed in ADs, appropriate statistical methods exist to support a much greater use of ADs than currently seen. We believe logistical issues and regulatory concerns, rather than statistical issues, currently limit AD use. The majority of research on ADs has been driven by drug development within the pharmaceutical industry. While many basic principles remain the same regardless of the funding environment, some specific challenges differ when considering the use of ADs for trials funded by the National Institutes of Health (NIH) or foundations. For example, traditional funding mechanisms lack the flexibility required to account for sample size modifications after initiation of a trial. There is also a general sense of confusion and lack of understanding about the distinction between acceptable and unacceptable adaptations. If reviewers do not understand the important distinctions, a valid AD might not pass through peer review. An NIH and private foundation funded workshop on 'Scientific Advances in Adaptive Clinical Trial Designs' was held in November 2009 as a first attempt to address these challenges [68]. Participants included representatives from research institutions, regulatory bodies, patient advocacy groups, non-profit organizations, professional associations, and pharmaceutical companies. The participants stressed that the use of ADs may require a different way of thinking about the structure and conduct of Data and Safety Monitoring Boards (DSMBs). Also, they agreed that there is a great need for further education and communication regarding the strengths and weaknesses of various types of ADs. For example, researchers should be encouraged to publish manuscripts describing experiences (both positive and negative) associated with completed trials that used an AD. Similarly, a stronger emphasis on statistical background for NIH reviewers and DSMB members seems necessary.

While communication among parties can go a long way towards increasing the use and understanding of ADs, more work is needed to develop infrastructure to support AD trials. Study infrastructure is one area where industry is clearly ahead of grant-funded research. As an example, justifying the properties of ADs often requires extensive planning through computations or simulations. Researchers must find a way to fund the creation of extensive calculations for a hypothetical study. The issue is exacerbated by the fact that the planning is generally required prior to submitting a grant application for funding. Many pharmaceutical companies are developing in-house teams primarily responsible for conducting such simulations. Greater barriers exist for implementing the same type of infrastructure within publicly funded environments, particularly given the challenges associated with the current limited and highly competitive federal budget.

In our opinion, the most important way to ensure a high chance of conducting a successful AD trial is to have a high level of infrastructure (efficient data management, thorough understanding of AD issues, et cetera) in place. A low complexity AD (for example, an IP or GS design) conducted in a high infrastructure environment currently provides the best chance for success. However, a low infrastructure environment might also be able to successfully conduct a low complexity AD with a little extra effort. The same chance of success is not present if one is trying to implement a high complexity AD (for example, an adaptive seamless II/III design, or a combination of different adaptations).
With a complex design, a high level of infrastructure is needed in order to successfully conduct the trial. The QALS study, the complex two-stage seamless design described earlier, is a good example of a study with high infrastructure and high adaptivity [62]. The QALS study was a success, requiring only 185 participants to establish that the cost and effort of undertaking a phase III trial would not be worthwhile. However, the trial was successful only because all parties involved (researchers, sponsor, DSMB members, et cetera) clearly understood the intricacies of the AD being used. A breakdown in understanding for any stakeholder could have severely damaged the study. A high complexity AD with low infrastructure is likely doomed to fail. Unfortunately, this scenario is currently a common one, due to the desire to use complex adaptive designs without the necessary high level of infrastructure required for success. One solution would be to only consider simple ADs. However, since researchers are mainly interested in obtaining the efficiency and advantages of more complex adaptations, we believe that the only way to increase the chances for success in the future is to first improve the existing infrastructure. As previously stated, many companies have begun this process. However, we believe that NIH should also offer more recognition and funding for planning clinical trials that might benefit from adaptations.

Although infrastructure characteristics often limit rates of adaptation, a number of steps have been taken to address this concern, especially in the neurosciences. One ongoing example is the NIH and FDA supported 'Accelerating Drug and Device Evaluation through Innovative Clinical Trial Design' project [69]. The participants are studying the development and acceptance of a wide range of adaptive designs within the existing infrastructure of the National Institute of Neurological Disorders and Stroke (NINDS)-supported Neurological Emergencies Treatment Trials (NETT) network [70]. The goal is to incorporate the resulting designs into future network grant submissions.
Another example is the creation of the NINDS-funded Network for Excellence in Neuroscience Clinical Trials (NeuroNEXT) [71]. The goal of the network is to provide infrastructure supporting phase II studies in neuroscience, including the conduct of studies in rare neurological diseases. The long-term objective of the network is to rapidly and efficiently translate advances in neuroscience into treatments for individuals with neurologic disorders. The infrastructure is intended to serve as a model that can be replicated across a number of studies and diseases. The development of rich infrastructures such as NeuroNEXT greatly increases the feasibility of using more novel trial designs, including ADs. Additional infrastructure with flexibility is needed in other disease areas to advance the use of ADs, particularly in the publicly funded environment.

Conclusions

A general overview of the main design classes provides the basis for discussing how to correctly implement ADs. We agree with Vandemeulebroecke [72] that discussion concerning ADs should center on five main points: feasibility, validity, integrity, efficiency, and flexibility. We recommend systematically addressing each of these concerns through the development of better methodology, infrastructure, and software. Successful adoption of ADs also requires systematic changes to clinical research policies. We believe that the barriers can be overcome to move appropriate ADs into common clinical practice.

Abbreviations

AD: Adaptive design; ALS: Amyotrophic Lateral Sclerosis; ASTIN: Acute Stroke Therapy by Inhibition of Neutrophils; CE: Comparative effectiveness; CRM: Continual reassessment method; DR: Dose response; DSMB: Data and Safety Monitoring Board; FDA: US Food and Drug Administration; GS: Group sequential; IP: Internal pilot; MTD: Maximum tolerated dose; NETT: Neurological Emergencies Treatment Trials; NeuroNEXT: Network for Excellence in Neuroscience Clinical Trials; NIH: National Institutes of Health; NINDS: National Institute of Neurological Disorders and Stroke; PhRMA: Pharmaceutical Research and Manufacturers of America; QALS: Coenzyme Q10 in ALS; SSR: Sample size re-estimation.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

All authors contributed significantly to the overall design of the paper. JAK wrote the initial draft and worked on revisions. CSC conceived of the paper and worked on revisions. MAT conducted literature reviews and worked on revisions. KEM contributed to the overall focus and content and helped revise the manuscript. All authors read and approved the final manuscript.

Acknowledgements

We gratefully acknowledge the advice and assistance of our colleague Dr. Ronald Shorr at the University of Florida and the Malcom Randall VA Medical Center. We would also like to thank the reviewers for helpful suggestions on an earlier version of this manuscript that greatly improved the quality of the work.

All authors are supported in part by a supplement to the NIH/NCRR Clinical and Translational Science Award to the University of Florida, NCRR 3UL1RR029890-03S1. Additional support for JAK included NINR 1R01 AG039495-01. Additional support for CSC included NINDS U01-NS077352, NINDS U01-NS077108, NINDS U01-NS038529, and NHLBI R01-HL091843-04. Additional support for KEM included NIDDK R01-DK072398, NIDCR U54-DE019261, NIDCR R01-DE020832-01A1, NHLBI R01-HL091005, NIAAA R01-AA016549, and NIDA R01-DA031017.

Author details

1 Department of Biostatistics, University of Florida, PO Box 117450, Gainesville, FL 32611-7450, USA. 2 Department of Biostatistics, University of Iowa, 2400 University Capitol Centre, Iowa City, IA 52240-4034, USA. 3 Department of Health Outcomes and Policy, University of Florida, PO Box 100177, Gainesville, FL 32610-0177, USA.
Received: 16 February 2012. Accepted: 8 August 2012. Published: 23 August 2012.

References

1. Chow S, Chang M: Adaptive Design Methods in Clinical Trials. Boca Raton: Chapman & Hall/CRC; 2007.
2. Gallo P, Chuang-Stein C, Dragalin V, Gaydos B, Krams M, Pinheiro J: Adaptive designs in clinical drug development: an executive summary of the PhRMA working group. J Biopharm Stat 2006, 16:275-283.
3. U.S. Food and Drug Administration: Draft Guidance for Industry: adaptive design clinical trials for drugs and biologics. http://www.fda.gov/downloads/DrugsGuidanceComplianceRegulatoryInformation/Guidances/UCM201790.pdf.
4. Coffey CS, Kairalla JA: Adaptive clinical trials: progress and challenges. Drugs R D 2008, 9:229-242.
5. Chow SC, Corey R: Benefits, challenges and obstacles of adaptive clinical trial designs. Orph J Rare Dis 2011, 6:79.
6. Bretz F, Koenig F, Brannath W, Glimm E, Posch M: Adaptive designs for confirmatory clinical trials. Stat Med 2009, 28:1181-1217.
7. Emerson SS, Fleming TR: Adaptive methods: telling "the rest of the story". J Biopharm Stat 2010, 20:1150-1165.
8. Coffey CS: Adaptive design across stages of therapeutic development. In Clinical Trials in Neurology: Design, Conduct, & Analysis. Edited by Ravina B, Cummings J, McDermott M, Poole RM. Cambridge: Cambridge University Press; 2012:91-100.
9. Brannath W, Koenig F, Bauer P: Multiplicity and flexibility in clinical trials. Pharm Stat 2007, 6:205-216.
10. Dragalin V: Adaptive designs: terminology and classification. Drug Inf J 2006, 40:425-435.
11. Burton A, Altman DG, Royston P, Holder RL: The design of simulation studies in medical statistics. Stat Med 2006, 24:4279-4292.
12. Storer BE: Design and analysis of phase I clinical trials. Biometrics 1989, 45:925-937.
13. Tourneau CL, Lee JJ, Siu LL: Dose escalation methods in phase I cancer clinical trials. J Natl Cancer I 2009, 101:708-720.
14. O'Quigley J, Pepe M, Fisher L: Continual reassessment method: a practical design for phase I clinical trials in cancer. Biometrics 1990, 46:33-48.
15. Cheung K, Kaufmann P: Efficiency perspectives on adaptive designs in stroke clinical trials. Stroke 2011, 42:2990-2994.
16. Garrett-Mayer E: The continual reassessment method for dose-finding studies: a tutorial. Clin Trials 2006, 3:57-71.
17. Tevaarwerk A, Wilding G, Eickhoff J, Chappell R, Sidor C, Arnott J, Bailey H, Schelman W, Liu G: Phase I study of continuous MKC-1 in patients with advanced or metastatic solid malignancies using the modified Time-to-Event Continual Reassessment Method (TITE-CRM) dose escalation design. Invest New Drugs 2011, 30:1039-1045.
18. Elkind MSV, Sacco RL, MacArthur RB, Peerschke E, Neils G, Andrews H, Stillman J, Corporan T, Leifer D, Liu R, Cheung K: High-dose lovastatin for acute ischemic stroke: results of the phase I dose escalation neuroprotection with statin therapy for acute recovery trial (NeuSTART). Cerebrovasc Dis 2009, 28:266-275.
19. Selim M, Yeatts S, Goldstein JN, Gomes J, Greenberg S, Morgenstern LB, Schlaug G, Torbey M, Waldman B, Xi G, Palesch Y: Safety and tolerability of Deferoxamine Mesylate in patients with acute intracerebral hemorrhage. Stroke 2011, 42:3067-3074.
20. Bornkamp B, Bretz F, Dmitrienko A, Enas G, Gaydos B, Hsu C, Konig F, Krams M, Liu Q, Neuenschwander B, Parke T, Pinheiro J, Roy A, Sax R, Shen F: Innovative approaches for designing and analyzing adaptive dose-ranging trials. J Biopharm Stat 2007, 17:965-995.
21. Berry DA, Mueller P, Grieve AP, Smith M: Bayesian designs for dose-ranging drug trials. In Case Studies in Bayesian Statistics, Vol. 5. Edited by Gatsonis C, Kass RE, Carlin B, Carriquiry A, Gelman A, Verdinelli I, West M. New York: Springer; 2002:99-181.
22. Krams M, Lees KR, Hacke W, Grieve AP, Orgogozo J, Ford GA: ASTIN: an adaptive dose-response study of UK-279,276 in acute ischemic stroke. Stroke 2003, 34:2543-2549.
23. Jennison C, Turnbull BW: Group Sequential Methods. Boca Raton: Chapman & Hall/CRC; 2000.
24. Zhang L, Rosenburger W: Adaptive randomization in clinical trials. In Design and Analysis of Experiments, Special Designs and Applications. Volume 3. Edited by Hinkelmann K. Hoboken: John Wiley & Sons; 2012:251-282.
25. Rosenberger WF, Sverdlov O, Hu F: Adaptive randomization for clinical trials. J Biopharm Stat 2012, 22:719-736.
26. Rosenberger WF, Sverdlov O: Handling covariates in the design of clinical trials. Stat Sci 2008, 23:404-419.
27. Antognini AB, Zagoraiou M: The covariate-adaptive biased coin design for balancing clinical trials in the presence of prognostic factors. Biometrika 2011, 98:519-535.
28. Jensen RK, Leboeuf-Yde C, Wedderkopp N, Sorensen JS, Minniche C: Rest versus exercise as treatment for patients with low back pain and Modic changes. A randomized controlled trial. BMC Med 2012, 10:22-35.
29. Bartlett RH, Roloff DW, Cornell RG, Andrews AF, Dillon PW, Zwischenberger JB: Extracorporeal circulation in neonatal respiratory failure: a prospective randomized study. Pediatrics 1985, 76:479-487.
30. Eitner F, Ackermann D, Hilgers RD, Floege J: Supportive versus immunosuppressive therapy of progressive IgA nephropathy (STOP) IgAN trial: rationale and study protocol. J Nephrol 2008, 21:284-289.
31. Fiore LD, Brophy M, Ferguson RE, D'Avolio L, Hermos JA, Lew RA, Doros G, Conrad CH, O'Neil JA Jr, Sabin TP, Kaufman J, Swartz SL, Lawler E, Liang MH, Gaziano JM, Lavori PW: A point-of-care clinical trial comparing insulin administered using a sliding scale versus a weight-based regimen. Clin Trials 2011, 8:183-195.
32. Yuan Y, Huang X, Liu S: A Bayesian response-adaptive covariate-balanced randomization design with application to a leukemia clinical trial. Stat Med 2011, 30:1218-1229.
33. Fardipour P, Littman G, Burns DD, Dragalin V, Padmanabhan SK, Parke T, Perevozskaya I, Reinold K, Sharma A, Krams M: Planning and executing response-adaptive learn-phase clinical trials: 1. The process. Drug Inf J 2009, 43:713-723.
34. Gu X, Lee JJ: A simulation study for comparing testing statistics in response-adaptive randomization. BMC Med Res Methodol 2010, 10:48-62.
35. Wang SJ: The bias issue under the complete null with response adaptive randomization: commentary on "Adaptive and model-based dose-ranging trials: quantitative evaluation and recommendation". Stat Biopharm Res 2012, 2:458-461.
36. Korn EL, Freidlin B: Outcome-adaptive randomization: is it useful? J Clin Oncol 2011, 29:771-776.
37. Temple R: Enrichment of clinical study populations. Clin Pharmacol Ther 2010, 88:774-778.
38. Freidlin B, Simon R: Evaluation of randomized discontinuation design. J Clin Oncol 2005, 23:5094-5098.
39. Wang SJ, Hung HMJ, O'Neill RT: Adaptive patient enrichment designs in therapeutic trials. Biometrical J 2009, 51:358-374.
40. Van der Baan FH, Knol MJ, Klungel OH, Egberts ACG, Grobbee DE, Roes KCB: Potential of adaptive clinical trial designs in pharmacogenetic research. Pharmacogenomics 2012, 13:571-578.
41. Ho TW, Pearlman E, Lewis D, Hamalainen M, Connor K, Michelson D, Zhang Y, Assaid C, Mozley LH, Strickler N, Bachman R, Mahoney E, Lines C, Hewitt DJ: Efficacy and tolerability of rizatriptan in pediatric migraineurs: results from a randomized, double-blind, placebo-controlled trial using a novel adaptive enrichment design. Cephalalgia 2012, 32:760-765.
42. Proschan MA: Sample size re-estimation in clinical trials. Biometrical J 2009, 51:348-357.
43. Cui L, Hung HMJ, Wang S: Modification of sample size in group sequential clinical trials. Biometrics 1999, 55:853-857.
44. Tsiatis AA, Mehta C: On the inefficiency of the adaptive design for monitoring clinical trials. Biometrika 2003, 90:367-378.
45. Jennison C, Turnbull BW: Adaptive and nonadaptive group sequential tests. Stat Med 2006, 25:917-932.
46. Mehta C, Pocock SJ: Adaptive increase in sample size when interim results are promising: a practical guide with examples. Stat Med 2011, 30:3267-3284.
47. Wittes J, Brittain E: The role of internal pilot studies in increasing the efficiency of clinical trials. Stat Med 1990, 9:65-72.
48. Proschan MA: Two-stage sample size re-estimation based on a nuisance parameter: a review. J Biopharm Stat 2005, 15:559-574.
49. Friede T, Kieser M: Sample size recalculation in internal pilot study designs: a review. Biometrical J 2006, 4:537-555.
50. Kieser M, Friede T: Re-calculating the sample size in internal pilot study designs with control of the type I error rate. Stat Med 2000, 19:901-911.
51. Coffey CS, Muller KE: Controlling test size while gaining the benefits of an internal pilot design. Biometrics 2001, 57:625-631.
52. Coffey CS, Kairalla JA, Muller KE: Practical methods for bounding type I error rate with an internal pilot design. Comm Stat Theory Methods 2007, 36:2143-2157.
53. Gould AL, Shih W: Sample size re-estimation without unblinding for normally distributed outcomes with unknown variance. Comm Stat Theory Methods 1992, 21:2833-2853.
54. Friede T, Kieser M: Blinded sample size recalculation for clinical trials with normal data and baseline adjusted analysis. Pharm Stat 2011, 10:8-13.
55. Maca J, Bhattacharya S, Dragalin V, Gallo P, Krams M: Adaptive seamless phase II/III designs: background, operational aspects and examples. Drug Inf J 2006, 40:463-473.
56. Stallard N, Todd S: Seamless phase II/III designs. Stat Methods Med Res 2010, 20:623-634.
57. Korn EL, Freidlin B, Abrams JS, Halabi S: Design issues in randomized phase II/III trials. J Clin Oncol 2012, 30:667-671.
58. Conroy T, Desseigne F, Ychou M, Bouche O, Guimbaud R, Becouarn Y, Adenis A, Raoul J, Gourgou-Bourgade S, Fouchardiere C, Bennouna J, Bachet J, Khemissa-Akouz F, Pere-Verge D, Delbaldo C, Assenat E, Chauffert B, Michel R, Montot-Grillot C, Ducreux M: FOLFIRINOX versus gemcitabine for metastatic pancreatic cancer. N Engl J Med 2011, 364:1817-1825.
59. Kaufmann P, Thompson JLP, Levy G, Buchsbaum R, Shefner J, Krivickas LS, Katz J, Rollins Y, Barohn RJ, Jackson CE, Tiryaki E, Lomen-Hoerth C, Armon C, Tandan R, Rudnicki SA, Rezania K, Sufit R, Pestronk A, Novella SP, Heiman-Patterson T, Kasarskis EJ, Pioro EP, Montes J, Arbing R, Vecchio D, Barsdorf A, Mitsumoto H, Levin B: Phase II trial of CoQ10 for ALS finds insufficient evidence to justify phase III. Ann Neurol 2009, 66:235-244.
60. Levin B: Selection and futility designs. In Clinical Trials in Neurology: Design, Conduct, & Analysis. Edited by Ravina B, Cummings J, McDermott M, Poole RM. Cambridge: Cambridge University Press; 2012:78-90.
61. Ravina B, Palesch Y: The phase II futility clinical trial design. Prog Neurother Neuropsych 2007, 2:27-38.
62. Levy G, Kaufmann P, Buchsbaum R, Montes J, Barsdorf A, Arbing R, Battista V, Zhou X, Mitsumoto H, Levin B, Thompson JLP: A two-stage design for a phase II clinical trial of coenzyme Q10 in ALS. Neurology 2006, 66:660-663.
63. Tsiatis AA: Information based monitoring of clinical trials. Stat Med 2006, 25:3236-3244.
64. Kairalla JA, Muller KE, Coffey CS: Combining an internal pilot with an interim analysis for single degree of freedom tests. Comm Stat Theory Methods 2010, 39:3717-3738.
65. Kairalla JA, Coffey CS, Muller KE: Achieving the benefits of both an internal pilot and interim analysis in large and small samples. JSM Proceedings 2010:5239-5252.
66. Tunis SR, Benner J, McClellan M: Comparative effectiveness research: policy context, methods development and research infrastructure. Stat Med 2010, 29:1963-1976.
67. Gurka MJ, Coffey CS, Gurka KK: Internal pilots for observational studies. Biometrical J 2010, 5:590-603.
68. Scientific Advances in Adaptive Clinical Trial Designs Workshop Planning Committee: Scientific Advances in Adaptive Clinical Trial Designs Workshop Summary. 2010. www.palladianpartners.com/adaptivedesigns/summary.
69. Accelerating Drug and Device Evaluation through Innovative Clinical Trial Design. http://www2.med.umich.edu/prmc/media/newsroom/details.cfm?ID=1753.
70. Neurological Emergencies Treatment Trials. http://www.nett.umich.edu.
71. The Lancet Neurology: NeuroNEXT: accelerating drug development in neurology. Lancet Neurol 2012, 11:119.
72. Vandemeulebroecke M: Group sequential and adaptive designs - a review of basic concepts and points of discussion. Biometrical J 2008, 50:541-557.

doi:10.1186/1745-6215-13-145
Cite this article as: Kairalla et al.: Adaptive trial designs: a review of barriers and opportunities. Trials 2012, 13:145.