This vignette is intended for users who are already familiar with the basic condiments workflow described in the first vignette. Here, we show how to modify the default parameters for the first two steps of the workflow.
# For analysis
library(condiments)
library(slingshot)
set.seed(21)
We rely on the same toy dataset as in the first vignette.
data("toy_dataset", package = "condiments")
df <- toy_dataset$sd
rd <- as.matrix(df[, c("Dim1", "Dim2")])
sds <- slingshot(rd, df$cl)
By default, the topologyTest function requires only two inputs: the sds object and the condition labels. To limit the run time of this vignette, we also change the number of permutations used to generate trajectories under the null by setting the rep argument to 10 instead of the default 100. As a result, the test statistics may be more variable.
top_res <- topologyTest(sds = sds, conditions = df$conditions, rep = 10)
## Generating permuted trajectories
## Running KS-mean test
knitr::kable(top_res)
method | thresh | statistic | p.value |
---|---|---|---|
KS_mean | 0.01 | 0 | 1 |
The topologyTest function can be relatively slow on large datasets. Moreover, when changing the method used to test the null hypothesis that a common trajectory should be fitted, the first step of generating rep permuted trajectories under the null is identical. Therefore, we allow users to specify more than one method and more than one value of the threshold. Here, we use both the Kolmogorov-Smirnov test (Smirnov 1939) and the classifier test (Lopez-Paz and Oquab 2016).
top_res <- topologyTest(sds = sds, conditions = df$conditions, rep = 10,
methods = c("KS_mean", "Classifier"),
threshs = c(0, .01, .05, .1))
## Generating permuted trajectories
## Running KS-mean test
## Running Classifier test
knitr::kable(top_res)
method | thresh | statistic | p.value |
---|---|---|---|
KS_mean | 0 | 0.0070000 | 1.0000000 |
KS_mean | 0.01 | 0.0000000 | 1.0000000 |
KS_mean | 0.05 | 0.0000000 | 1.0000000 |
KS_mean | 0.1 | 0.0000000 | 1.0000000 |
Classifier | 0 | 0.4150000 | 0.9999821 |
Classifier | 0.01 | 0.3800000 | 1.0000000 |
Classifier | 0.05 | 0.3333333 | 1.0000000 |
Classifier | 0.1 | 0.2833333 | 1.0000000 |
To see all available methods, consult the help page of topologyTest and look at the methods argument.
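For example, from an interactive R session:
?topologyTest
# or equivalently
help("topologyTest", package = "condiments")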
For all methods but the KS test, additional parameters can be specified using a method-specific argument: args_classifier, args_wass or args_mmd. See the help file of the corresponding test for more information on those parameters. For example, since the default test based on the Wasserstein distance and a permutation test is quite slow, we can pass a fast argument.
top_res <- topologyTest(sds = sds, conditions = df$conditions, rep = 10,
methods = "wasserstein_permutation",
args_wass = list(fast = TRUE, S = 100, iterations = 10^2))
## Generating permuted trajectories
## Running wassertsein permutation test
knitr::kable(top_res)
method | thresh | statistic | p.value |
---|---|---|---|
wasserstein_permutation | NA | 1.356887 | 0.85 |
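Similarly, the classifier-based test could in principle be tuned through args_classifier. The sketch below (not evaluated here) mirrors the progressionTest call shown later in this vignette and assumes the classifier test accepts a caret-style method argument such as a random forest:
# illustrative sketch: pass classifier-specific arguments to topologyTest
top_res_rf <- topologyTest(sds = sds, conditions = df$conditions, rep = 10,
                           methods = "Classifier",
                           args_classifier = list(method = "rf"))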
For now, only the first part of topologyTest, generating the permuted trajectories, has been designed for parallelisation, using the BiocParallel package. For example, to run with 4 workers, you can use the following commands:
library(BiocParallel)
BPPARAM <- bpparam()
BPPARAM$progressbar <- TRUE
BPPARAM$workers <- 4
top_res <- topologyTest(sds = sds, conditions = df$conditions, rep = 100,
parallel = TRUE, BPPARAM = BPPARAM)
knitr::kable(top_res)
The tests in the second step of the workflow are much less compute-intensive, so there is no parallelisation. However, the other changes introduced in the previous section are still possible:
prog_res <- progressionTest(sds, conditions = df$conditions)
knitr::kable(prog_res)
lineage | statistic | p.value |
---|---|---|
All | 5.506366 | 0 |
dif_res <- fateSelectionTest(sds, conditions = df$conditions)
## note: only 1 unique complexity parameters in default grid. Truncating the grid to 1 .
##
## note: only 1 unique complexity parameters in default grid. Truncating the grid to 1 .
knitr::kable(dif_res)
pair | statistic | p.value |
---|---|---|
1vs2 | 0.6611712 | 8e-07 |
prog_res <- progressionTest(sds, conditions = df$conditions, method = "Classifier")
knitr::kable(prog_res)
lineage | statistic | p.value |
---|---|---|
All | 0.6026126 | 0.0012246 |
dif_res <- fateSelectionTest(sds, conditions = df$conditions, thresh = .05)
## note: only 1 unique complexity parameters in default grid. Truncating the grid to 1 .
##
## note: only 1 unique complexity parameters in default grid. Truncating the grid to 1 .
knitr::kable(dif_res)
pair | statistic | p.value |
---|---|---|
1vs2 | 0.6301802 | 6.01e-05 |
prog_res <- progressionTest(sds, conditions = df$conditions, method = "Classifier",
args_classifier = list(method = "rf"))
## Warning in randomForest.default(x, y, mtry = param$mtry, ...): invalid mtry:
## reset to within valid range
## note: only 1 unique complexity parameters in default grid. Truncating the grid to 1 .
knitr::kable(prog_res)
lineage | statistic | p.value |
---|---|---|
All | 0.5890991 | 0.0043539 |
dif_res <- fateSelectionTest(sds, conditions = df$conditions)
## note: only 1 unique complexity parameters in default grid. Truncating the grid to 1 .
##
## note: only 1 unique complexity parameters in default grid. Truncating the grid to 1 .
knitr::kable(dif_res)
pair | statistic | p.value |
---|---|---|
1vs2 | 0.6341441 | 3.41e-05 |
For all of the above procedures, it is important to note that we are making multiple comparisons. The p-values we obtain from these tests should be corrected for multiple testing, especially for trajectories with a large number of lineages.
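As a minimal sketch of such a correction, assuming the p-values of interest have been collected from the test results (for example the p.value column returned by progressionTest on a trajectory with several lineages), a Benjamini-Hochberg adjustment could look like:
# hypothetical example: adjust the collected p-values for multiple testing
p.adjust(prog_res$p.value, method = "BH")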
That said, trajectory inference is often one of the last computational methods in a very long analysis pipeline (generally including gene-level quantification, gene filtering / feature selection, and dimensionality reduction). Hence, we strongly discourage the reader from putting too much faith in any p-value that comes out of this analysis. Such values may be useful suggestions, indicating particular features or cells for follow-up study, but should generally not be treated as meaningful statistical quantities.
If some commands or parameters are still unclear after going through this vignette, do not hesitate to open an issue on the condiments GitHub repository.
sessionInfo()
## R version 4.3.1 (2023-06-16)
## Platform: x86_64-pc-linux-gnu (64-bit)
## Running under: Ubuntu 22.04.3 LTS
##
## Matrix products: default
## BLAS: /home/biocbuild/bbs-3.18-bioc/R/lib/libRblas.so
## LAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.10.0
##
## locale:
## [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
## [3] LC_TIME=en_GB LC_COLLATE=C
## [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
## [7] LC_PAPER=en_US.UTF-8 LC_NAME=C
## [9] LC_ADDRESS=C LC_TELEPHONE=C
## [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
##
## time zone: America/New_York
## tzcode source: system (glibc)
##
## attached base packages:
## [1] stats4 stats graphics grDevices utils datasets methods
## [8] base
##
## other attached packages:
## [1] caret_6.0-94 lattice_0.22-5
## [3] viridis_0.6.4 viridisLite_0.4.2
## [5] RColorBrewer_1.1-3 ggplot2_3.4.4
## [7] tidyr_1.3.0 dplyr_1.1.3
## [9] slingshot_2.10.0 TrajectoryUtils_1.10.0
## [11] SingleCellExperiment_1.24.0 SummarizedExperiment_1.32.0
## [13] Biobase_2.62.0 GenomicRanges_1.54.0
## [15] GenomeInfoDb_1.38.0 IRanges_2.36.0
## [17] S4Vectors_0.40.0 BiocGenerics_0.48.0
## [19] MatrixGenerics_1.14.0 matrixStats_1.0.0
## [21] princurve_2.1.6 condiments_1.10.0
## [23] knitr_1.44
##
## loaded via a namespace (and not attached):
## [1] jsonlite_1.8.7 magrittr_2.0.3
## [3] ggbeeswarm_0.7.2 spatstat.utils_3.0-4
## [5] farver_2.1.1 rmarkdown_2.25
## [7] zlibbioc_1.48.0 vctrs_0.6.4
## [9] spatstat.explore_3.2-5 DelayedMatrixStats_1.24.0
## [11] RCurl_1.98-1.12 htmltools_0.5.6.1
## [13] S4Arrays_1.2.0 BiocNeighbors_1.20.0
## [15] SparseArray_1.2.0 pROC_1.18.4
## [17] sass_0.4.7 parallelly_1.36.0
## [19] bslib_0.5.1 plyr_1.8.9
## [21] lubridate_1.9.3 cachem_1.0.8
## [23] igraph_1.5.1 lifecycle_1.0.3
## [25] iterators_1.0.14 pkgconfig_2.0.3
## [27] rsvd_1.0.5 Matrix_1.6-1.1
## [29] R6_2.5.1 fastmap_1.1.1
## [31] GenomeInfoDbData_1.2.11 future_1.33.0
## [33] digest_0.6.33 colorspace_2.1-0
## [35] tensor_1.5 scater_1.30.0
## [37] irlba_2.3.5.1 beachmat_2.18.0
## [39] spatstat.linnet_3.1-1 labeling_0.4.3
## [41] randomForest_4.7-1.1 fansi_1.0.5
## [43] spatstat.sparse_3.0-3 timechange_0.2.0
## [45] polyclip_1.10-6 abind_1.4-5
## [47] mgcv_1.9-0 compiler_4.3.1
## [49] rngtools_1.5.2 proxy_0.4-27
## [51] withr_2.5.1 doParallel_1.0.17
## [53] BiocParallel_1.36.0 spatstat.model_3.2-8
## [55] MASS_7.3-60 lava_1.7.2.1
## [57] DelayedArray_0.28.0 ModelMetrics_1.2.2.2
## [59] tools_4.3.1 vipor_0.4.5
## [61] beeswarm_0.4.0 future.apply_1.11.0
## [63] nnet_7.3-19 goftest_1.2-3
## [65] glue_1.6.2 nlme_3.1-163
## [67] grid_4.3.1 reshape2_1.4.4
## [69] generics_0.1.3 recipes_1.0.8
## [71] gtable_0.3.4 spatstat.data_3.0-3
## [73] class_7.3-22 data.table_1.14.8
## [75] ScaledMatrix_1.10.0 BiocSingular_1.18.0
## [77] utf8_1.2.4 XVector_0.42.0
## [79] spatstat.geom_3.2-7 ggrepel_0.9.4
## [81] RANN_2.6.1 foreach_1.5.2
## [83] pillar_1.9.0 stringr_1.5.0
## [85] limma_3.58.0 Ecume_0.9.1
## [87] splines_4.3.1 survival_3.5-7
## [89] deldir_1.0-9 tidyselect_1.2.0
## [91] scuttle_1.12.0 pbapply_1.7-2
## [93] transport_0.14-6 gridExtra_2.3
## [95] xfun_0.40 statmod_1.5.0
## [97] hardhat_1.3.0 distinct_1.14.0
## [99] timeDate_4022.108 stringi_1.7.12
## [101] yaml_2.3.7 evaluate_0.22
## [103] codetools_0.2-19 kernlab_0.9-32
## [105] spatstat_3.0-6 tibble_3.2.1
## [107] cli_3.6.1 rpart_4.1.21
## [109] munsell_0.5.0 jquerylib_0.1.4
## [111] Rcpp_1.0.11 globals_0.16.2
## [113] spatstat.random_3.2-1 parallel_4.3.1
## [115] gower_1.0.1 doRNG_1.8.6
## [117] sparseMatrixStats_1.14.0 bitops_1.0-7
## [119] listenv_0.9.0 ipred_0.9-14
## [121] scales_1.2.1 prodlim_2023.08.28
## [123] e1071_1.7-13 purrr_1.0.2
## [125] crayon_1.5.2 rlang_1.1.1
Lopez-Paz, David, and Maxime Oquab. 2016. “Revisiting Classifier Two-Sample Tests.” arXiv, October, 1–15. http://arxiv.org/abs/1610.06545.
Smirnov, Nikolai V. 1939. “On the Estimation of the Discrepancy Between Empirical Curves of Distribution for Two Independent Samples.” Bull. Math. Univ. Moscou 2 (2): 3–14.