Yu Ryan Yue and Xiao-Feng Wang
DOI: 10.4172/2155-6180.1000e128
A powerful modelling tool for spatial data is the framework of Gaussian Markov random fields (GMRFs), which are discrete-domain Gaussian random fields equipped with a Markov property. GMRFs combine analytical results for the Gaussian distribution with Markov properties, which allows the development of computationally efficient algorithms. Here we briefly review popular spatial GMRFs, show how to construct them, and outline recent developments and possible future work.
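The Markov property shows up concretely as sparsity of the precision matrix: non-neighbouring sites have a zero precision entry, i.e. they are conditionally independent given the rest. A minimal sketch of this (ours, not from the editorial), assuming a first-order random-walk (RW1) GMRF on a line of sites:

```python
# Sketch (illustrative, not from the editorial): build the sparse precision
# matrix Q of a first-order random-walk (RW1) GMRF on n sites. Q[i][j] = 0
# unless i and j are neighbours -- the Markov property that enables fast
# sparse-matrix algorithms.

def rw1_precision(n, kappa=1.0):
    """Precision matrix of an RW1 Gaussian Markov random field."""
    Q = [[0.0] * n for _ in range(n)]
    for i in range(n - 1):
        # each increment x[i+1] - x[i] ~ N(0, 1/kappa) contributes
        # kappa * (x[i+1] - x[i])^2 to the quadratic form x' Q x
        Q[i][i] += kappa
        Q[i + 1][i + 1] += kappa
        Q[i][i + 1] -= kappa
        Q[i + 1][i] -= kappa
    return Q

Q = rw1_precision(5)
print(Q[0][4])   # 0.0 -- non-neighbours: conditional independence
print(Q[2])      # [0.0, -1.0, 2.0, -1.0, 0.0] -- tridiagonal row
```

The tridiagonal structure is what fast Cholesky-based computations exploit; denser neighbourhood graphs give correspondingly wider (but still sparse) bands.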
Roberto Gomeni and Navin Goyal
DOI: 10.4172/2155-6180.1000187
Abstract
Background: Uncontrolled placebo response has been one of the major culprits in the failure of randomized clinical trials for depression. A major drawback associated with a high placebo response is an increased noise-to-signal ratio, which in most cases prevents the detection of a treatment effect. The aim of this work was to propose an adaptive randomization study design based on band-pass filtering and to evaluate this approach against traditional study designs.
Results: Clinical trial simulations demonstrated that the adaptive randomization approach consistently outperformed conventional study designs in improving the signal-to-noise ratio. The improvement correlated directly with the level of placebo response and the variability in response across centers. The proposed strategy does not require any unblinding of data.
Conclusions: The use of the adaptive randomization design provides a novel methodological approach for signal detection in clinical trials where placebo represents a known confounding factor. The improvement in signal detection was directly proportional to the level of placebo response and to the degree of heterogeneity across recruitment centers, and was achieved with a reduced sample size compared to the traditional study design. These findings support the use of the band-pass filtering approach in an adaptive randomization design as an efficient way to minimize the impact of uncontrolled placebo response and to provide rational go/no-go decision criteria in the development of new medicines for psychiatric disorders.
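One way to picture center-level band-pass filtering is as an eligibility rule on pooled, blinded response. The toy sketch below is our illustration only: the band limits, the response model, and the filtering rule are assumptions, not the authors' specification.

```python
# Hypothetical toy sketch of band-pass filtering on blinded centre-level
# response (the mechanism, band, and parameters here are our assumptions,
# not the paper's design). Centres whose pooled mean change score falls
# outside a prespecified band are excluded from further randomization.
import random, statistics

random.seed(1)
BAND = (-12.0, -2.0)          # hypothetical acceptable range of mean change

def centre_mean_change(placebo_resp, drug_effect=-3.0, n=20):
    """Pooled (blinded) mean change score across both arms of one centre."""
    scores = [random.gauss(placebo_resp, 4.0) for _ in range(n // 2)]
    scores += [random.gauss(placebo_resp + drug_effect, 4.0)
               for _ in range(n // 2)]
    return statistics.mean(scores)

# true centre-specific placebo responses vary widely across 30 centres
centres = {c: random.gauss(-7.0, 5.0) for c in range(30)}
kept = [c for c, p in centres.items()
        if BAND[0] <= centre_mean_change(p) <= BAND[1]]
print(len(kept), "of 30 centres remain eligible")
```

Because the filter uses only pooled (both-arm) summaries, no treatment codes are revealed, consistent with the no-unblinding point above.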
Yunlong Xie and Dale L Zimmerman
DOI: 10.4172/2155-6180.1000188
Three standard hypothesis testing procedures based on likelihood functions exist: the likelihood ratio, score, and Wald tests. For unstructured antedependence models for categorical longitudinal data, Xie and Zimmerman derived the likelihood ratio test for the order of antedependence as well as likelihood ratio tests for time-invariance of transition probabilities and strict stationarity. In this article, we derive score tests (of Pearson’s chi-square form) and Wald tests for all the same purposes. Via simulation, we show that for testing the order of antedependence, a modified likelihood ratio test performs best if the sample is of size 50 or smaller, but otherwise the score test is superior. The Wald test is markedly inferior to both. We also show that the likelihood ratio and score tests for time-invariant transition probabilities and strict stationarity perform about equally well. The methods are applied to data from a longitudinal study of labor force participation of married women, indicating that these data are third-order antedependent with time-invariant transition probabilities of this order.
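The Pearson chi-square form of a test for time-invariant transition probabilities can be sketched for the simplest case, a binary first-order chain. This is an illustration of the general form only; the counts, pooling scheme, and degrees of freedom below are our assumptions, not the authors' exact statistic for unstructured antedependence models.

```python
# Illustrative Pearson chi-square test for time-invariance of the transition
# probabilities of a binary first-order Markov chain (a sketch of the score
# test's form, not the authors' derivation). counts[t][i][j] is the number
# of subjects moving from state i at occasion t to state j at occasion t+1.

def chi2_time_invariance(counts):
    T = len(counts)              # number of transition occasions
    S = len(counts[0])           # number of states
    # pooled transition counts under H0: p[i][j] constant over time
    pooled = [[sum(counts[t][i][j] for t in range(T)) for j in range(S)]
              for i in range(S)]
    row_tot = [sum(pooled[i]) for i in range(S)]
    chi2 = 0.0
    for t in range(T):
        for i in range(S):
            n_ti = sum(counts[t][i])
            for j in range(S):
                expect = n_ti * pooled[i][j] / row_tot[i]
                if expect > 0:
                    chi2 += (counts[t][i][j] - expect) ** 2 / expect
    df = (T - 1) * S * (S - 1)
    return chi2, df

counts = [[[40, 10], [15, 35]],     # transitions at occasion 1
          [[38, 12], [12, 38]]]     # transitions at occasion 2
chi2, df = chi2_time_invariance(counts)
print(round(chi2, 3), df)
```

A small chi-square relative to its degrees of freedom, as here, is consistent with transition probabilities that do not change over time.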
Qingzhao Yu, Ying Fan and Xiaocheng Wu
DOI: 10.4172/2155-6180.1000189
Mediation refers to the effect transmitted by mediators that intervene in the relationship between an exposure and a response variable. Mediation analysis has been broadly studied in many fields. However, it remains a challenge for researchers to differentiate the individual effects of multiple mediators. This paper proposes general definitions of mediation effects that are consistent for all types (categorical or continuous) of response, exposure, or mediation variables. With these definitions, multiple mediators can be considered simultaneously, and the indirect effects carried by individual mediators can be separated from the total effect. Moreover, the derived mediation analysis can be performed with general predictive models. For linear predictive models with continuous mediators, we show that the proposed method is equivalent to the conventional coefficients product method. We also establish the relationship between the proposed definitions of direct or indirect effect and the natural direct or indirect effect for binary exposure variables. The proposed method is demonstrated by both simulations and a real example examining racial disparities in three-year survival rates for female breast cancer patients in Louisiana.
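For a linear model with a single continuous mediator, the conventional coefficients-product method that the proposed definitions reduce to can be illustrated with a small simulation. The data, variable names, and effect sizes below are ours, for illustration only.

```python
# Sketch of the conventional coefficients-product method (the special case
# the paper's general definitions reduce to for linear models with a
# continuous mediator). Illustrative simulation; all values are ours.
import random

random.seed(0)
n = 5000
a_true, b_true, c_true = 0.8, 0.5, 0.3   # X->M, M->Y, direct X->Y

X = [random.gauss(0, 1) for _ in range(n)]
M = [a_true * x + random.gauss(0, 1) for x in X]
Y = [c_true * x + b_true * m + random.gauss(0, 1) for x, m in zip(X, M)]

def slope(x, y):
    """Simple OLS slope of y on x (both centred)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

a_hat = slope(X, M)                       # exposure -> mediator path
total = slope(X, Y)                       # total effect of X on Y
# coefficient of M in Y ~ X + M via Frisch-Waugh: regress the X-residuals
# of Y on the X-residuals of M
M_res = [m - a_hat * x for m, x in zip(M, X)]
Y_res = [y - total * x for y, x in zip(Y, X)]
b_hat = slope(M_res, Y_res)
indirect = a_hat * b_hat                  # coefficients product: a * b
direct = total - indirect                 # total effect decomposes
print(round(indirect, 2), round(direct, 2))
```

With these true values the indirect effect a*b is 0.40 and the direct effect is 0.30, and the estimates land close to them; the decomposition total = direct + indirect is what generalizes in the paper to multiple mediators and non-linear predictive models.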
Xian Liu
DOI: 10.4172/2155-6180.1000191
In survival analysis, researchers often encounter multivariate survival time data, in which failure times are correlated even in the presence of model covariates. It is argued that because observations are clustered by unobserved heterogeneity, the application of standard survival models can result in biased parameter estimates and erroneous model-based predictions. In this article, the author describes and compares four methods for handling unobserved heterogeneity in survival analysis: the Andersen-Gill approach, the robust sandwich variance estimator, the hazard model with individual frailty, and the retransformation method. An empirical analysis provides strong evidence that even in the presence of strong unobserved heterogeneity, a standard survival model can yield parameter estimates and a likelihood ratio statistic as robust as those from a corresponding model with an additional random-effects parameter. When predicting the survival function, however, a standard model applied to multivariate survival time data can result in serious prediction bias. The retransformation method effectively derives an adjustment factor for correctly predicting the survival function.
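The clustering mechanism behind a shared frailty model can be shown with a small simulation: members of a cluster share one multiplicative frailty on the hazard, which induces positive dependence among their failure times. This sketch illustrates the mechanism only, not the paper's estimators; the gamma parameterization and Kendall's tau formula are standard for the shared gamma frailty model.

```python
# Sketch of shared gamma frailty inducing within-cluster correlation in
# survival times (illustrates the clustering mechanism discussed above, not
# the paper's estimation methods). Each cluster shares a frailty
# Z ~ Gamma(shape=1/theta, scale=theta), so E[Z] = 1 and Var[Z] = theta,
# and each member's hazard is multiplied by Z.
import random

random.seed(42)
theta = 1.0          # frailty variance; Kendall's tau = theta / (theta + 2)
base_rate = 0.1      # baseline exponential hazard

pairs = []
for _ in range(4000):
    z = random.gammavariate(1.0 / theta, theta)      # shared frailty, mean 1
    t1 = random.expovariate(base_rate * z)
    t2 = random.expovariate(base_rate * z)
    pairs.append((t1, t2))

# Kendall's tau of the paired times as a crude dependence check
conc = disc = 0
sample = pairs[:300]
for i in range(len(sample)):
    for j in range(i + 1, len(sample)):
        s = (sample[i][0] - sample[j][0]) * (sample[i][1] - sample[j][1])
        conc += s > 0
        disc += s < 0
tau = (conc - disc) / (conc + disc)
print(round(tau, 2))        # theory for theta = 1: tau = 1/3
```

A standard model fitted to such data treats t1 and t2 as independent, which is exactly the situation in which the abstract reports serious bias in predicted survival functions.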
Dongmei Liu, Giovanni Parmigiani and Brian Caffo
DOI: 10.4172/2155-6180.1000192
Screening for changes in gene expression across biological conditions using high throughput technologies is now common in biology. In this paper we present a broad Bayesian multilevel framework for developing computationally fast shrinkage-based screening tools for this purpose. Our scheme makes it easy to adapt the choice of statistics to the goals of the analysis and to the genomic distributions of signal and noise. We empirically investigate the extent to which these shrinkage-based statistics improve performance, and the situations in which such improvements are larger. Our evaluation uses both extensive simulations and controlled biological experiments. The experimental data include a so-called spike-in experiment, in which the target biological signal is known, and a two-sample experiment, which illustrates the typical conditions in which the methods are applied. Our results emphasize two important practical concerns that are not receiving sufficient attention in applied work in this area. First, while shrinkage strategies based on multilevel models are able to improve selection performance, they require careful verification of the assumptions on the relationship between signal and noise. Incorrect specification of this relationship can negatively affect a selection procedure. Because this inter-gene relationship is generally identifiable in genomic experiments, we suggest a simple diagnostic plot to assist model checking. Second, no statistic performs optimally across two common categories of experimental goals: selecting genes with large changes, and selecting genes with reliably measured changes. Therefore, careful consideration of analysis goals is critical in the choice of the approach taken.
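A minimal example of the kind of shrinkage-based screening statistic such a framework produces is a variance-moderated t, in which a pooled constant is added to each gene's standard error so that genes with accidentally tiny variances do not dominate the ranking. The specific form below (a SAM-style fudge constant set to the median standard error) is our illustration, not the authors' estimator.

```python
# Sketch of a shrinkage-based screening statistic: a moderated t in which
# each gene's standard error is stabilised by a pooled constant s0 (a
# SAM-style illustration; the paper's Bayesian multilevel statistics differ).
import random, statistics

random.seed(7)
G, n = 200, 4                       # genes, replicates per gene

def gene_stat(diffs, s0):
    m = statistics.mean(diffs)
    s = statistics.stdev(diffs) / len(diffs) ** 0.5
    return m / (s + s0)             # shrinkage: floors tiny variances

# simulate per-gene paired log-ratios; gene 0 is the only truly changed gene
data = [[random.gauss(2.0 if g == 0 else 0.0, 1.0) for _ in range(n)]
        for g in range(G)]
s0 = statistics.median(statistics.stdev(d) / n ** 0.5 for d in data)
scores = [abs(gene_stat(d, s0)) for d in data]
ranked = sorted(range(G), key=lambda g: -scores[g])
print("rank of spiked gene:", ranked.index(0))
```

This also makes the abstract's second point concrete: m / (s + s0) targets reliably measured changes, whereas ranking on the raw mean m would target large changes; the two rankings generally differ.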
Sam Efromovich and Ekaterina Smirnova
DOI: 10.4172/2155-6180.1000193
The paper describes the theory, methods, and application of statistical analysis of large-p-small-n cross-correlation matrices arising in fMRI studies of neuroplasticity, the ability of the brain to reorganize neural pathways in response to new experience and learning. Traditionally such studies average images over large areas in the right and left hemispheres and then find a single cross-correlation function. It is proposed instead to conduct the analysis at a voxel-to-voxel level, which immediately yields large cross-correlation matrices. Furthermore, the matrices have the interesting property of containing both sparse and dense rows and columns. The main steps in solving the problem are: (i) treat the observations available for a single voxel as a nonparametric regression; (ii) apply a wavelet transform and work with the empirical wavelet coefficients; (iii) develop the theory and methods of adaptive simultaneous confidence intervals and adaptive rate-minimax thresholding estimation for the matrices. The developed methods are illustrated via analysis of fMRI experiments, and the results allow us not only to conclude that during fMRI experiments there is a change in cross-correlation between the left and right hemispheres (a fact well known in the literature), but also to enrich our understanding of how neural pathways are activated and then remain activated over time at a single voxel-to-voxel level.
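Steps (ii) and (iii) can be sketched in their simplest form: a one-level Haar transform of a noisy voxel series followed by hard thresholding of the detail coefficients at the universal level sigma*sqrt(2 log n). This toy version is ours; the paper's estimators are adaptive and rate-minimax, which a fixed universal threshold is not.

```python
# Toy sketch of steps (ii)-(iii): one-level Haar transform of a noisy voxel
# time series and hard thresholding of the detail coefficients at the
# universal threshold sigma * sqrt(2 log n). Illustration only; the paper's
# adaptive rate-minimax thresholding is more refined than this.
import math, random

random.seed(3)
n = 64
signal = [1.0 if 16 <= i < 48 else 0.0 for i in range(n)]   # true block signal
sigma = 0.3
y = [s + random.gauss(0, sigma) for s in signal]            # noisy observations

# one-level Haar: pairwise averages (smooth) and differences (detail)
smooth = [(y[2 * i] + y[2 * i + 1]) / math.sqrt(2) for i in range(n // 2)]
detail = [(y[2 * i] - y[2 * i + 1]) / math.sqrt(2) for i in range(n // 2)]

lam = sigma * math.sqrt(2 * math.log(n))          # universal threshold
detail_ht = [d if abs(d) > lam else 0.0 for d in detail]    # hard threshold

# inverse Haar transform from smooth + thresholded detail
rec = []
for s, d in zip(smooth, detail_ht):
    rec += [(s + d) / math.sqrt(2), (s - d) / math.sqrt(2)]

mse_noisy = sum((a - b) ** 2 for a, b in zip(y, signal)) / n
mse_rec = sum((a - b) ** 2 for a, b in zip(rec, signal)) / n
print(mse_rec < mse_noisy)   # thresholding should reduce estimation error
```

In the voxel-to-voxel setting, the same thresholding idea is what lets the method cope with matrices whose rows and columns are a mix of sparse and dense.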