---
title: "VSClust workflow"
author: "Veit Schwämmle"
package: vsclust
abstract: >
  Here, we describe the workflow to run variance-sensitive clustering on data
  from a quantitative omics experiment. In principle, this can be any
  multi-dimensional data set containing quantitative and optionally replicated
  values. This vignette is distributed under a CC BY-SA license.
vignette: >
  %\VignetteIndexEntry{VSClust workflow}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteKeywords{Genomics, Transcriptomics, Proteomics, Metabolomics, Clustering, Quantitative, Statistics, Functional analysis}
  %\VignetteEncoding{UTF-8}
output:
  BiocStyle::html_document:
    toc_float: true
---

```{r style, echo = FALSE, results = 'asis'}
BiocStyle::markdown()
```

```{r env, message = FALSE, warning = FALSE, echo = FALSE}
require(clusterProfiler)
require(matrixStats)
```

# Introduction

Clustering is a method to identify common patterns in high-dimensional data. These can be, for example, genes or proteins with similar quantitative changes, thus providing insight into the affected biological pathways. Despite the large number of clustering algorithms, they usually do not account for feature variance, i.e. the uncertainty of the measurements across the different experimental conditions. VSClust determines the characteristic patterns in high-dimensional data while accounting for the feature variance that is given through replicated measurements.

Here, we present an example script to run the full clustering analysis using the `vsclust` library. The same can be done by running the Shiny app (e.g. via its docker image or on <http://computproteomics.bmb.sdu.dk>) or the corresponding command-line script. For the source code, see <https://bitbucket.com/veitveit/vsclust>.

# Installation and additional packages

Use the common Bioconductor commands for installation:

```{r eval=FALSE}
# Only needed in case you have not installed vsclust yet
if (!require("BiocManager", quietly = TRUE))
    install.packages("BiocManager")
BiocManager::install("vsclust")
library(vsclust)
```

The full functionality of this vignette can be obtained by additionally installing and loading the packages `matrixStats` and `clusterProfiler`.

# Initialization

Here, we define the different parameters for the example data set `protein_expressions`. In the command-line version of VSClust ("runVSClust.R"), they can be given via a yaml file.

__Comments:__

A. Data sets with different numbers of replicates per condition need to be adapted to contain the same number of columns per condition. This can be done by either removing excess replicates or adding empty columns, as sketched below.

B. We assume the input data to be of the following format: A1, B1, C1, ..., A2, B2, C2, ..., where letters denote sample type and numbers are the different replicates.

C. If you prefer to estimate feature variance differently, use averages and add an estimate for the standard deviation as the last column. You will need to set the last option of `PrepareForVSClust` to `FALSE`.

D. If you don't have replicates, use the same format as in C. and set the standard deviations to 1.
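The following sketch illustrates comments A and B on a small, made-up matrix (`mydat` is a hypothetical example, not part of the package): a condition measured with fewer replicates is padded with an empty column and the columns are rearranged into the expected layout. The chunk is not evaluated.

```{r eval=FALSE}
## Not run: a small, made-up example of preparing an input matrix.
## Here, `mydat` holds three conditions (A, B, C) with three replicates each,
## except that condition C was measured with only two replicates (comment A).
mydat <- matrix(rnorm(80), nrow = 10,
                dimnames = list(paste0("feature", 1:10),
                                c("A1", "A2", "A3", "B1", "B2", "B3",
                                  "C1", "C2")))

## Pad the missing third replicate of condition C with an empty column
mydat <- cbind(mydat, C3 = NA)

## Reorder the columns into the layout described in comment B:
## replicate 1 of all conditions, then replicate 2, then replicate 3
mydat <- mydat[, c("A1", "B1", "C1", "A2", "B2", "C2", "A3", "B3", "C3")]
```

With this layout, `NumReps` would be 3 and `NumCond` would be 3 for the parameters defined below.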
```{r}
library(vsclust)

#### Input parameters, only read when no parameter file was provided
## All principal parameters for running VSClust can be defined as in the
## shinyapp at computproteomics.bmb.sdu.dk/Apps/VSClust

# Name of the study
Experiment <- "ProtExample"

# Number of replicates/samples per experimental condition (sample type)
NumReps <- 3

# Number of different experimental conditions (e.g. time points or sample
# types)
NumCond <- 4

# Paired or unpaired statistical tests when carrying out LIMMA for
# statistical testing
isPaired <- FALSE

# Number of threads to accelerate the calculation (use 1 if in doubt)
cores <- 1

# If 0 (default), then automatically estimate the cluster number for the
# vsclust run from the Minimum Centroid Distance
PreSetNumClustVSClust <- 0

# If 0 (default), then automatically estimate the cluster number for the
# original fuzzy c-means from the Minimum Centroid Distance
PreSetNumClustStand <- 0

# Maximum number of clusters when estimating the number of clusters. Higher
# numbers can drastically extend the computation time.
maxClust <- 10
```

# Statistics and data preprocessing

First, we load the example proteomics data set and carry out statistical testing of all conditions versus the first, based on the LIMMA moderated t-test. The data consist of mice fed with four different diets (high fat, TTA, fish oil and TTA+fish oil). Learn more about the data set with `?protein_expressions`.

This will calculate the false discovery rates for the differentially regulated features (pairwise comparisons versus the first, "high fat", condition) and, most importantly, their expected individual variances, to be used in the variance-sensitive clustering. These variances can also be uploaded separately via a last column containing them as individual standard deviations.

The `PrepareForVSClust` function also creates a PCA plot to assess variability and to control whether the samples have been loaded correctly (replicated samples should form groups). After estimating the standard deviations, the matrix consists of the averaged quantitative feature values and a last column with the standard deviations of the features.

```{r fig.width = 12}
data(protein_expressions)
dat <- protein_expressions

#### Run statistical analysis and estimation of individual variances
statOut <- PrepareForVSClust(dat, NumReps, NumCond, isPaired, TRUE)

dat <- statOut$dat
Sds <- dat[, ncol(dat)]
cat(paste("Features:", nrow(dat), "\nMissing values:",
          sum(is.na(dat)), "\nMedian standard deviations:",
          round(median(Sds, na.rm = TRUE), digits = 3)))

## Write output into file
write.csv(statOut$statFileOut,
          paste(Experiment, "statFileOut.csv", sep = ""))
```

# Estimation of cluster number

There is no simple way to find the optimal number of clusters in a data set. To obtain this number, we run the clustering for different cluster numbers and evaluate them via so-called validity indices, which provide information about suitable cluster numbers. VSClust mainly uses the "Minimum Centroid Distance", i.e. the shortest distance between any two of the centroids. Alternatively, one can inspect the Xie-Beni index.

The output of `estimClustNum` contains the suggestion for the number of clusters. We further visualize the outcome.

```{r fig.width = 12}
#### Estimate the number of clusters, testing up to maxClust clusters
ClustInd <- estimClustNum(dat, maxClust = maxClust, scaling = "standardize",
                          cores = cores)

#### Use the estimated cluster numbers or the preset ones
if (PreSetNumClustVSClust == 0)
    PreSetNumClustVSClust <- optimalClustNum(ClustInd)
if (PreSetNumClustStand == 0)
    PreSetNumClustStand <- optimalClustNum(ClustInd, method = "FCM")

#### Visualize the validity indices
estimClust.plot(ClustInd)
```
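If you prefer to choose the cluster number yourself, for example at a large decay of the Minimum Centroid Distance or at a low Xie-Beni index, you can inspect the validity indices returned by `estimClustNum` directly and overwrite the automatic estimate before running the final clustering. The following sketch is not evaluated; the exact column layout of `ClustInd` may differ between `vsclust` versions, so check its column names first.

```{r eval=FALSE}
## Not run: inspect the validity indices for all tested cluster numbers
## (the exact column names of ClustInd depend on the vsclust version)
colnames(ClustInd)
ClustInd

## Overwrite the automatically estimated cluster numbers with your own choice
## (the value 6 is only an example)
PreSetNumClustVSClust <- 6
PreSetNumClustStand <- 6
```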
# Run final clustering

Now we run the clustering again with the optimal parameters from the estimation. One can also take alternative numbers of clusters, corresponding to large decays in the Minimum Centroid Distance or low values of the Xie-Beni index.

First, we carry out the variance-sensitive method.

```{r}
#### Run the variance-sensitive clustering (VSClust)
ClustOut <- runClustWrapper(dat, PreSetNumClustVSClust, NULL, VSClust = TRUE,
                            scaling = "standardize", cores = cores)
Bestcl <- ClustOut$Bestcl
VSClust_cl <- Bestcl
#ClustOut$p

## Write clustering results (VSClust)
write.csv(data.frame(cluster = Bestcl$cluster,
                     ClustOut$outFileClust,
                     isClusterMember = rowMaxs(Bestcl$membership) > 0.5,
                     maxMembership = rowMaxs(Bestcl$membership),
                     Bestcl$membership),
          paste(Experiment, "FCMVarMResults", Sys.Date(), ".csv", sep = ""))

## Write coordinates of cluster centroids
write.csv(Bestcl$centers,
          paste(Experiment, "FCMVarMResultsCentroids", Sys.Date(), ".csv",
                sep = ""))
```

We see that most of the differences are between the TTA diets and the rest. This shows that the TTA fatty acids have a strong impact on the organism. Cluster three shows the proteins that are commonly less abundant in mice fed with fish oil and thus are related to biological processes affected by this particular diet.

For comparison, this is the clustering using standard fuzzy c-means on the means over the replicates.

```{r}
#### Run standard fuzzy c-means clustering
ClustOut <- runClustWrapper(dat, PreSetNumClustStand, NULL, VSClust = FALSE,
                            scaling = "standardize", cores = cores)
Bestcl <- ClustOut$Bestcl

## Write clustering results (standard fcm)
write.csv(data.frame(cluster = Bestcl$cluster,
                     ClustOut$outFileClust,
                     isClusterMember = rowMaxs(Bestcl$membership) > 0.5,
                     maxMembership = rowMaxs(Bestcl$membership),
                     Bestcl$membership),
          paste(Experiment, "FCMResults", Sys.Date(), ".csv", sep = ""))

## Write coordinates of cluster centroids
write.csv(Bestcl$centers,
          paste(Experiment, "FCMResultsCentroids", Sys.Date(), ".csv",
                sep = ""))
```

Here, the clusters look rather similar. VSClust performs best for larger numbers of different experimental conditions (one finds major improvements for $D>6$). For a 4-dimensional data set, the algorithm mostly filters out features with very high variance, making them unsuitable for assignment to a particular cluster.

This analysis is then followed by evaluating the features (here proteins) of each cluster for their biological relevance. This can be done by functional analysis with, e.g., the `clusterProfiler` package; an illustrative sketch of such an analysis is given below the session information.

```{r}
sessionInfo()
```
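As mentioned above, the following sketch outlines how such a functional analysis could look for one cluster of the variance-sensitive results. It is not evaluated and makes several assumptions that need to be adapted to your data: the cluster vector returned by `runClustWrapper` is assumed to be named by the protein identifiers of `protein_expressions`, these identifiers are assumed to be mouse UniProt accessions, and the annotation package `org.Mm.eg.db` is assumed to be installed. Adjust `keyType` and `OrgDb` to your own identifiers.

```{r eval=FALSE}
## Not run: sketch of a GO over-representation analysis of one cluster.
## Assumptions (adapt to your data): feature names are mouse UniProt
## accessions and org.Mm.eg.db is installed.
library(clusterProfiler)
library(org.Mm.eg.db)

## Proteins assigned to cluster 3 with a membership value above 0.5,
## mirroring the isClusterMember column written to the results files above
cluster3 <- names(VSClust_cl$cluster)[VSClust_cl$cluster == 3 &
                                        rowMaxs(VSClust_cl$membership) > 0.5]

## Enrichment of GO biological processes in cluster 3, using all clustered
## proteins as background
ego <- enrichGO(gene          = cluster3,
                universe      = names(VSClust_cl$cluster),
                OrgDb         = org.Mm.eg.db,
                keyType       = "UNIPROT",
                ont           = "BP",
                pAdjustMethod = "BH")
head(as.data.frame(ego))
```

If your feature names use a different identifier type, `clusterProfiler::bitr` can be used to convert them to a supported key type first.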