W 21: Utilizing Simulations to Enhance Randomization Methodology Decision Making
Almac Clinical Technologies, United States
This poster will illustrate, via a case study, how simulations can be an effective tool in evaluating study design decisions by investigating expected treatment balance resulting from different randomization methodologies and associated parameterization.
ORAL PRESENTATION SCHEDULED: Session 2B at 12:10 - 12:20 PM
Configurable SAS simulation programs can be readily adapted to individual protocols to explore the design properties (expected treatment balance) of various randomization methods (stratified blocked randomization, minimization, etc.) and their associated parameters (block size, biased-coin probability, etc.).
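The two candidate assignment mechanisms can be sketched in a few lines. The poster's programs are written in SAS; the following Python sketch only conveys the same logic, and the arm labels, the 0.8 biased-coin probability, and the tie-breaking rule are illustrative assumptions rather than the study's actual parameters:

```python
import random

def permuted_block_sequence(n_subjects, block=("A", "A", "B"), rng=random):
    """Assignments from shuffled 2:1 permuted blocks (one such list per stratum)."""
    seq = []
    while len(seq) < n_subjects:
        b = list(block)
        rng.shuffle(b)   # randomize order within each block of 3 (2 A's, 1 B)
        seq.extend(b)
    return seq[:n_subjects]

def minimization_assign(counts, target=(2, 1), p_biased=0.8, rng=random):
    """Minimization with biased-coin assignment toward the arm that would
    best restore the 2:1 target ratio; updates counts in place."""
    # Hypothetical imbalance (relative to the 2:1 target) if the next
    # subject were assigned to A vs. to B
    imb_a = abs((counts["A"] + 1) / target[0] - counts["B"] / target[1])
    imb_b = abs(counts["A"] / target[0] - (counts["B"] + 1) / target[1])
    if imb_a < imb_b:
        preferred = "A"
    elif imb_b < imb_a:
        preferred = "B"
    else:
        # tie: fall back to the 2:1 allocation ratio
        preferred = "A" if rng.random() < 2 / 3 else "B"
    other = "B" if preferred == "A" else "A"
    arm = preferred if rng.random() < p_biased else other
    counts[arm] += 1
    return arm
```

The biased coin keeps minimization from becoming deterministic (and hence predictable) while still steering each assignment toward the target ratio.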
Design: 2,000 randomized subjects; 2:1 treatment allocation ratio; 3 stratification factors: Gender (male/female), Disease Severity (high/low), and Age (<=10; >10 and <18; >=18); with/without additionally stratifying by 200 Sites (25 high-, 50 medium-, 50 medium-low-, and 75 low-enrolling).
Scenario 1: to evaluate expected treatment balance when utilizing a stratified blocked randomization vs. minimization with biased-coin assignment.
Scenario 2: based on a stratified blocked randomization methodology without stratifying by Site, determine the minimum number of subjects each site must randomize to ensure, with high probability, that at least 1 subject is randomized to each treatment group.
Simulations: 100,000 trial runs were generated for the above scenarios, based on real-life assumptions for factor-level distributions (including Site), via configurable SAS programs; outputs show the expected treatment balance (P[Imb <= X], X = 0, 1, 2, 3, 4, etc.) at the Study, Site, and Strata levels.
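A minimal version of such a simulation can be sketched as follows. This is a Python re-sketch of the SAS programs' idea, not the actual programs; the stratum probabilities are placeholders, and |nA - 2*nB| is assumed as the integer imbalance measure for a 2:1 ratio (the poster does not give its exact definition of Imb):

```python
import random
from collections import Counter

def simulate_trial(n_subjects, strata_probs, rng):
    """One trial: each stratum keeps its own 2:1 permuted-block list;
    subjects arrive in randomly drawn strata; returns study-level (nA, nB)."""
    pending = {s: [] for s in strata_probs}   # unused assignments per stratum
    strata = list(strata_probs)
    weights = list(strata_probs.values())
    counts = Counter()
    for _ in range(n_subjects):
        s = rng.choices(strata, weights)[0]
        if not pending[s]:
            block = ["A", "A", "B"]           # one 2:1 block of size 3
            rng.shuffle(block)
            pending[s] = block
        counts[pending[s].pop()] += 1
    return counts["A"], counts["B"]

def imbalance_distribution(n_trials, n_subjects, strata_probs, seed=0):
    """Estimate P[Imb <= X] over many trials, with Imb = |nA - 2*nB|
    (zero when the 2:1 ratio is met exactly)."""
    rng = random.Random(seed)
    imbs = []
    for _ in range(n_trials):
        na, nb = simulate_trial(n_subjects, strata_probs, rng)
        imbs.append(abs(na - 2 * nb))
    return {x: sum(i <= x for i in imbs) / n_trials for x in range(0, 9, 2)}
```

With a single stratum and n divisible by the block size, every block completes and Imb = 0 in every run; adding strata (or Sites) multiplies the number of potentially incomplete blocks, which is exactly the balance loss the poster's Scenario 1 quantifies.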
Scenario 1 (Results): simulations demonstrated that including Site as a stratification factor under the stratified blocked methodology was not viable, as treatment balance at the Study and Factor levels was compromised. Even under the minimization methodology, Site balance was not controlled well enough to justify the loss of treatment balance (and hence power) at the Site and Factor levels.
Scenario 2 (Results): theoretical probabilities were calculated at the Site level for the number of subjects required for both treatment arms to be represented; simulations were then used to validate these theoretical probabilities under varying underlying Site distributions.
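The Site-level calculation can be illustrated under a simplifying assumption: if assignments at a site behave approximately as independent draws with P(A) = 2/3 (plausible when a site's subjects fall into many separate strata lists), then P(both arms represented among n subjects) = 1 - (2/3)^n - (1/3)^n. A hypothetical sketch, with a 0.99 probability target chosen purely for illustration, plus the kind of Monte Carlo check the poster describes:

```python
import random

def p_both_arms(n, p_a=2/3):
    """P(both arms seen) among n subjects, assuming i.i.d. assignments
    with P(A) = p_a: 1 minus P(all A) minus P(all B)."""
    return 1.0 - p_a**n - (1.0 - p_a)**n

def min_site_size(target=0.99, p_a=2/3):
    """Smallest n with P(both arms represented) >= target."""
    n = 2
    while p_both_arms(n, p_a) < target:
        n += 1
    return n

def simulate_p_both(n, p_a=2/3, trials=20000, seed=0):
    """Monte Carlo validation of the closed-form probability."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        arms = {rng.random() < p_a for _ in range(n)}  # set of arms observed
        hits += (len(arms) == 2)
    return hits / trials
```

Varying the simulation's underlying assignment process (e.g., drawing from per-stratum block lists instead of i.i.d. coins) then shows how far the closed-form answer holds up, which is the validation step described above.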
Based on the simulation results, a stratified blocked randomization design stratified by the 3 factors only (excluding Site) was selected; Sites were permitted to participate only if they could guarantee enrolling the minimum number of subjects identified via the simulations, keeping the probability low that either treatment group would be unrepresented at a Site.
Treatment balance in a clinical trial is critical: if balance is not achieved, the treatment effect can be difficult to identify. The components of the randomization design that can impact treatment balance should therefore be carefully considered upfront, while the protocol is being drafted. Determining the optimal randomization design via SAS simulations is an effective way to identify how best to maintain treatment balance for the study, as shown in the above Case Study. Further, the Case Study simulations identified the minimum number of subjects a site must randomize for at least 1 subject to be expected in each treatment arm, which assisted the clinical study team with site selection.
While this Case Study focused on specific scenarios, simulations can inform numerous randomization design decisions, such as which stratification factors to include, which randomization methodology to use, and how to set that methodology's parameters. Additional real-life expectations, such as sample distributions, can be incorporated to improve the accuracy of the treatment balance results. Enrollment concerns can also be assessed by including drop-out rates, which may help identify the need for replacement randomization. Because the SAS simulation programs can be adapted to the specific needs of a study, they are an effective tool for making informed randomization methodology design decisions.
Additional authors: Jennifer Ross / Graham Nicholls