DOI: 10.4172/2155-6180.1000101
In epidemiological and clinical research, investigators often want to estimate the direct effect of a treatment on an outcome, that is, the effect not transmitted through intermediate variables. Even if the total effect is unconfounded, the direct effect is not identified when unmeasured variables affect both the intermediate and outcome variables. This article focuses on the principal stratum direct effect (PSDE) of a randomized treatment, defined as the difference between expectations of potential outcomes within latent subgroups of subjects for whom the intermediate variable would be constant regardless of the randomized treatment assignment. Unfortunately, the PSDE cannot generally be estimated without bias unless untestable conditions hold, even if monotonicity is assumed. Thus, we propose bounds and a simple method of sensitivity analysis for the PSDE under a monotonicity assumption. To develop them, we introduce sensitivity parameters, defined as the difference in potential outcomes, at the same value of the intermediate variable, between subjects assigned to the treatment group and those assigned to the control group. Investigators can use the proposed method without complex computer programming. The method is illustrated using a randomized trial for coronary heart disease.
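To make the bounding idea concrete, the following sketch computes trimming-style bounds (in the spirit of Zhang–Rubin bounds, not necessarily the authors' exact procedure) on the PSDE in the "always" stratum under monotonicity. All numbers are hypothetical.

```python
import numpy as np

# Hypothetical illustration of trimming bounds for the PSDE in the
# stratum with S(0) = S(1) = 1, assuming monotonicity S(1) >= S(0).
# Under monotonicity, control-arm subjects with S = 1 belong entirely
# to this stratum, so E[Y(0) | always] is identified.
y0_s1 = np.array([3.0, 4.0, 5.0, 4.0])   # control outcomes with S = 1
p_s1_z0 = 0.40                            # P(S = 1 | Z = 0)

# Treated-arm subjects with S = 1 mix the "always" stratum with
# subjects whose S was changed by treatment, so E[Y(1) | always]
# is only partially identified.
y1_s1 = np.array([2.0, 3.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0])
p_s1_z1 = 0.80                            # P(S = 1 | Z = 1)

# Share of the treated S = 1 group belonging to the "always" stratum.
p = p_s1_z0 / p_s1_z1                     # = 0.5

# Bound E[Y(1) | always] by the means of the smallest and largest
# p-fractions of the treated S = 1 outcomes.
y_sorted = np.sort(y1_s1)
k = int(round(p * len(y_sorted)))         # number of outcomes kept
lower_e_y1 = y_sorted[:k].mean()          # worst case
upper_e_y1 = y_sorted[-k:].mean()         # best case

e_y0 = y0_s1.mean()
psde_lower = lower_e_y1 - e_y0
psde_upper = upper_e_y1 - e_y0
print(psde_lower, psde_upper)             # 0.0 4.5
```

A sensitivity analysis in the spirit of the abstract would replace the extreme trimming by a chosen value of the sensitivity parameter, narrowing the interval as the parameter is pinned down.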
Yanmin Li, Adam Shchy and Jianguo Sun
DOI: 10.4172/2155-6180.1000102
Current status data occur in many studies; in such data, each subject is observed only once [3,10]. Furthermore, the distributions of observation times may differ for subjects in different treatment groups. This paper focuses on current status recurrent event data, which concern the occurrence rates of certain recurrent events such as disease infections, and discusses nonparametric comparison of several treatment groups. For this problem, two new test procedures are proposed, and a simulation study shows that they are more efficient than the existing ones. An illustrative example on lung tumors is provided.
Brian Neelon and A. James O’Malley
DOI: 10.4172/2155-6180.1000103
We illustrate how power prior distributions can be used to incorporate historical data into a Bayesian analysis when there is uncertainty about the similarity between the current and historical studies. We compare common specifications of the power prior and explore whether it is preferable to condition on the power parameter, a0, or to treat it as a random variable with a prior distribution of its own. We show that there are two natural ways of formulating the power prior for random a0. The first approach excludes the historical data in all but extreme cases and may therefore be of limited practical use. The second approach, called the normalized power prior (NPP), provides a measure of congruence between the current and historical data, so that the historical data are downweighted more substantially as the studies diverge. While this is an intuitively appealing property, our experience suggests that in real-world problems involving large datasets and models with several parameters, the NPP may lead to considerably more downweighting than desired. We therefore advise practitioners to consider whether such attenuation is desirable, or whether it is more appropriate to assign a0 a fixed value based on expert opinion about the relevance of the historical data to the current analysis. We also extend the power prior to hierarchical regression models that allow covariate effects to differ across studies. We apply these methods to a pair of studies designed to improve delivery of care in pediatric clinics.
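To show what conditioning on a fixed a0 amounts to, here is a minimal conjugate sketch for a normal mean with known variance; all numbers are hypothetical, and the flat initial prior is an assumption made for simplicity.

```python
import math

# Power-prior update for a normal mean theta with known variance sigma2,
# with the power parameter a0 held FIXED. Raising the historical
# likelihood to the power a0 (with a flat initial prior) gives the prior
# theta ~ N(ybar0, sigma2 / (a0 * n0)): the historical study behaves as
# if it had only a0 * n0 observations.
sigma2 = 4.0                    # known outcome variance
n0, ybar0 = 50, 10.0            # historical study: size and sample mean
n, ybar = 30, 12.0              # current study: size and sample mean
a0 = 0.5                        # discount: historical data count half

# Combining with the current-data likelihood yields a normal posterior
# whose mean is a precision-weighted average of the two sample means.
post_prec = (a0 * n0 + n) / sigma2
post_mean = (a0 * n0 * ybar0 + n * ybar) / (a0 * n0 + n)
post_sd = math.sqrt(1.0 / post_prec)
print(post_mean)                # (25*10 + 30*12) / 55 ~ 11.09
```

Setting a0 = 0 discards the historical study entirely, while a0 = 1 pools the two studies as if they were one; the NPP instead places a prior on a0 and normalizes the powered likelihood over theta before updating.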