

PP11-80: Statistical Concerns For Cut Point Determination in Immunogenicity Studies





Poster Presenter

Meiyu Shen
• Expert Mathematical Statistician, Office of Translational Sciences, CDER, FDA, United States

Objectives

The purpose of this project is to discuss statistical concerns for cut point determination in immunogenicity studies.

Method

The IID approach treats all observations as independent; the AVE approach averages the results from each donor across runs. We propose to calculate the cut point (CP) from repeated-measurement data using a random effects model (REM). We compare these three methods for immunogenicity screening assay cut point determination.
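To make the three definitions concrete, here is a minimal Python sketch, assuming a balanced validation design (n donors × m runs), a normal-theory screening cut point of the form mean + 1.645 × SD, and a method-of-moments (one-way ANOVA) estimate of the variance components for REM. The function name and the way REM combines the variance components are illustrative assumptions, not the poster's exact procedure.

```python
import numpy as np

Z95 = 1.645  # 95th percentile of the standard normal


def cut_points(y):
    """y: (n_donors, m_runs) array of assay readouts from the validation study.

    Returns IID, AVE, and REM cut points of the form mean + 1.645 * SD.
    The REM variance combination below is an illustrative assumption.
    """
    n, m = y.shape
    grand_mean = y.mean()
    donor_means = y.mean(axis=1)

    # IID: pool all n*m observations as if they were independent.
    cp_iid = y.mean() + Z95 * y.std(ddof=1)

    # AVE: average each donor across runs, then use the n donor means.
    cp_ave = donor_means.mean() + Z95 * donor_means.std(ddof=1)

    # REM: one-way random-effects (donor) decomposition for a balanced design,
    # estimated by method of moments from the expected mean squares.
    msb = m * np.sum((donor_means - grand_mean) ** 2) / (n - 1)    # between-donor MS
    msw = np.sum((y - donor_means[:, None]) ** 2) / (n * (m - 1))  # within-donor MS
    sigma2_rep = msw                        # replicate / run-to-run error variance
    sigma2_bio = max((msb - msw) / m, 0.0)  # biological (donor-to-donor) variance
    # Assumed combination: include both components for a single future measurement.
    cp_rem = grand_mean + Z95 * np.sqrt(sigma2_bio + sigma2_rep)

    return cp_iid, cp_ave, cp_rem
```

Because the donor means have variance \sigma_{bio}^2 + \sigma_{rep}^2/m, the AVE cut point tends to be smaller than the IID cut point whenever the replicate error is non-negligible, which is consistent with the ordering discussed in the Results and Conclusion.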

Results

We first illustrated the problem of estimating the CP with the IID approach while ignoring the data correlation structure. Simulation results show that the CP calculated from observations of n subjects repeatedly measured in m runs (Case 1) is larger than that calculated from observations of nm subjects each measured once (Case 2). The ratio of the Case 1 CP to the Case 2 CP depends on the ratio of the biological variability to the repeated-measurement error, \sigma_{bio}/\sigma_{rep}. When \sigma_{bio} \le \sigma_{rep}, the mean of the 95% lower confidence limit (LCL) for Q_{95} in Case 1 is larger than Q_{95}, the true quantile. In addition, the coverage of the 95% LCL for Q_{95} in Case 1 is significantly smaller than that in Case 2. Therefore, the CP can be over-estimated by naively assuming that the measurements are i.i.d.

We then compared the three approaches, IID, AVE, and REM, for fixed and floating CP estimation with respect to the mean of the CPs, the empirical coverage of the CP, and the mean of the empirical false positive rates (FPRs). Although replicate measurement error exists in the validation study, where each subject is measured repeatedly by several different analysts, this error does not exist in the immunogenicity study, where each subject is measured only once by one analyst. This difference in design between the validation study and the immunogenicity study raises the question: should replicate variability be included in the CP calculation? We also have concerns about the independence assumption for the data used to determine a fixed CP, since the data are correlated between runs due to multiple measurements of the same subject. Similarly, for a floating CP, the data are correlated between runs due to multiple measurements of the same subject and are also correlated within a run. Note that the rationale for using a floating CP is that measurements of subjects are positively correlated with the negative control.
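A minimal Monte Carlo sketch of the Case 1 versus Case 2 comparison follows, assuming normally distributed data and analyzing both designs with the same naive i.i.d. normal-theory lower confidence limit for Q_{95} (a one-sided tolerance bound based on the noncentral t distribution with N = nm assumed-independent observations). The sample sizes, variance settings, and function names are illustrative assumptions; the point is only that the empirical coverage of the naive LCL can be checked against the true Q_{95} under each design.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2024)


def lcl_q95(x, conf=0.95):
    """Naive one-sided lower confidence limit for the 95th percentile,
    treating the observations in x as i.i.d. normal (normal tolerance bound)."""
    N = len(x)
    z95 = stats.norm.ppf(0.95)
    # k such that P(mean + k*s <= Q95) = conf under i.i.d. normal sampling
    k = stats.nct.ppf(1 - conf, df=N - 1, nc=z95 * np.sqrt(N)) / np.sqrt(N)
    return x.mean() + k * x.std(ddof=1)


def simulate(n=50, m=3, sigma_bio=0.5, sigma_rep=1.0, n_sim=2000):
    """Compare Case 1 (n subjects x m correlated runs) with Case 2 (n*m
    independent subjects) when both are analyzed with the naive i.i.d. LCL."""
    true_q95 = stats.norm.ppf(0.95) * np.sqrt(sigma_bio**2 + sigma_rep**2)
    cover = {"case1": 0, "case2": 0}
    for _ in range(n_sim):
        bio = rng.normal(0.0, sigma_bio, size=(n, 1))  # subject (donor) effects
        case1 = (bio + rng.normal(0.0, sigma_rep, size=(n, m))).ravel()
        case2 = rng.normal(0.0, np.sqrt(sigma_bio**2 + sigma_rep**2), size=n * m)
        cover["case1"] += lcl_q95(case1) <= true_q95
        cover["case2"] += lcl_q95(case2) <= true_q95
    return {k: v / n_sim for k, v in cover.items()}  # empirical coverage


print(simulate())  # Case 2 coverage should be near 0.95; Case 1 is typically lower
```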

Conclusion

Our comparisons illustrate that the IID approach is flawed, since its CP can be larger than Q_{95} when the biological variability is less than the replicate measurement variability. The mean of the false positive rates (FPRs) over 10,000 simulations is 2% to 3% when the CP is determined by IID. AVE is the most conservative method, since its mean CP over 10,000 simulations is the smallest and its mean FPR is the largest. The performance of REM lies in the middle. Based on the simulations, IID is flawed because it ignores the data correlation structure and the nature of the replicates. AVE may lose some variability information, but it provides some protection against a poor assay in which the replicate measurement variability is larger than or similar to the biological variability. REM models the inter-subject correlation within a run, the intra-subject correlation (replicate variability) between runs, and the measurement error. The CP of REM lies between those of IID and AVE; as a result, the FPR of REM also lies between those of IID and AVE.
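For reference, a minimal sketch of how an empirical FPR can be computed for a fixed CP is shown below, assuming drug-naive in-study subjects each measured once and a normally distributed readout. The parameters and function name are illustrative, and the choice of which variance components drive a single in-study measurement is exactly the design question raised above.

```python
import numpy as np


def empirical_fpr(cut_point, sigma_in_study, mu=0.0, n_subjects=10_000, seed=0):
    """Fraction of drug-naive in-study subjects (each measured once) whose
    readout exceeds a fixed cut point; the target for a 95th-percentile CP is 5%.

    sigma_in_study: SD of a single in-study readout.  Which variance components
    it should contain (biological only, or biological plus measurement error)
    is the design question discussed above; both choices can be tried.
    """
    rng = np.random.default_rng(seed)
    y = rng.normal(mu, sigma_in_study, size=n_subjects)
    return float(np.mean(y > cut_point))
```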
