%\VignetteIndexEntry{Analyzing RNA-Seq data with the "DESeq2" package}
%\VignettePackage{DESeq2}
%\VignetteEngine{knitr::knitr}

% To compile this document
% library('knitr'); rm(list=ls()); knit('DESeq2.Rnw')

\documentclass[12pt]{article}

\newcommand{\deseqtwo}{\textit{DESeq2}}
\newcommand{\lowtilde}{\raise.17ex\hbox{$\scriptstyle\mathtt{\sim}$}}

<<knitrOptions, echo=FALSE, results="hide">>=
library("knitr")
opts_chunk$set(tidy=FALSE,dev="png",fig.show="hide",
               fig.width=4,fig.height=4.5,
               message=FALSE)
@

<<style, echo=FALSE, results="asis">>=
BiocStyle::latex()
@

<<loadDESeq2, echo=FALSE>>=
library("DESeq2")
@

\author{Michael Love$^{1*}$, Simon Anders$^{2}$, Wolfgang Huber$^{2}$ \\[1em]
\small{$^{1}$ Department of Biostatistics, Dana Farber Cancer Institute and} \\
\small{Harvard School of Public Health, Boston, US;} \\
\small{$^{2}$ European Molecular Biology Laboratory (EMBL), Heidelberg, Germany} \\
\small{\texttt{$^*$michaelisaiahlove (at) gmail.com}}}

\title{Differential analysis of count data -- the DESeq2 package}

\begin{document}
\maketitle

\begin{abstract}
A basic task in the analysis of count data from RNA-Seq is the detection of
differentially expressed genes. The count data are presented as a table which
reports, for each sample, the number of sequence fragments that have been
assigned to each gene. Analogous data also arise for other assay types,
including comparative ChIP-Seq, HiC, shRNA screening, and mass spectrometry.
An important analysis question is the quantification and statistical
inference of systematic changes between conditions, as compared to
within-condition variability. The package \deseqtwo{} provides methods to
test for differential expression by use of negative binomial generalized
linear models; the estimates of dispersion and logarithmic fold changes
incorporate data-driven prior distributions\footnote{Other \Bioconductor{}
packages with similar aims are \Biocpkg{edgeR}, \Biocpkg{baySeq},
\Biocpkg{DSS} and \Biocpkg{limma}.}. This vignette explains the use of the
package and demonstrates typical workflows. An RNA-Seq workflow on the
Bioconductor website,
\url{http://www.bioconductor.org/help/workflows/rnaseqGene/} (formerly the
Beginner's Guide PDF), covers similar material to this vignette but at a
slower pace, including the generation of count matrices from FASTQ files.

\vspace{1em}

\textbf{DESeq2 version:} \Sexpr{packageVersion("DESeq2")}

\vspace{1em}

\begin{center}
\begin{tabular}{ | l | }
\hline
If you use \deseqtwo{} in published research, please cite: \\
\\
M. I. Love, W. Huber, S. Anders: \textbf{Moderated estimation of} \\
\textbf{fold change and dispersion for RNA-seq data with DESeq2}. \\
\emph{Genome Biology} 2014, \textbf{15}:550. \\
\url{http://dx.doi.org/10.1186/s13059-014-0550-8} \\
\hline
\end{tabular}
\end{center}

\end{abstract}

<<options, results="hide", echo=FALSE>>=
options(digits=3, width=80, prompt=" ", continue=" ")
@

\newpage

\tableofcontents

\newpage

\section{Standard workflow}

\subsection{Quick start}

Here we show the most basic steps for a differential expression analysis.
These steps assume you have a \Rclass{SummarizedExperiment} object
\Robject{se} with a column \Robject{condition} in \Robject{colData(se)}.

<<quickStart, eval=FALSE>>=
dds <- DESeqDataSet(se = se, design = ~ condition)
dds <- DESeq(dds)
res <- results(dds)
@

\subsection{Input data} \label{sec:prep}

\subsubsection{Why raw counts?}

As input, the \deseqtwo{} package expects count data as obtained, e.\,g.,
from RNA-Seq or another high-throughput sequencing experiment, in the form
of a matrix of integer values.
The value in the $i$-th row and the $j$-th column of the matrix tells how
many reads have been mapped to gene $i$ in sample $j$. Analogously, for
other types of assays, the rows of the matrix might correspond e.\,g.\ to
binding regions (with ChIP-Seq) or peptide sequences (with quantitative
mass spectrometry).

The count values must be raw counts of sequencing reads. This is important
for \deseqtwo{}'s statistical model \cite{Love2014} to hold, as only the
actual counts allow assessing the measurement precision correctly. Hence,
please do not supply other quantities, such as (rounded) normalized counts,
or counts of covered base pairs -- this will only lead to nonsensical
results.

\subsubsection{\Rclass{SummarizedExperiment} input} \label{sec:sumExpInput}

The class used by the \deseqtwo{} package to store the read counts is
\Rclass{DESeqDataSet} which extends the \Rclass{SummarizedExperiment} class
of the \Biocpkg{GenomicRanges} package. This facilitates preparation steps
and also downstream exploration of results. For counting aligned reads in
genes, the \Rfunction{summarizeOverlaps} function of
\Biocpkg{GenomicAlignments} with \Robject{mode="Union"} is encouraged,
resulting in a \Rclass{SummarizedExperiment} object.

An example of the steps to produce a \Rclass{SummarizedExperiment} can be
found in an RNA-Seq workflow on the Bioconductor website:
\url{http://www.bioconductor.org/help/workflows/rnaseqGene/} and in the
vignette for the data package \Biocexptpkg{airway}. Here we load the
\Rclass{SummarizedExperiment} from that package in order to build a
\Rclass{DESeqDataSet}.

<<loadSumExp>>=
library("airway")
data("airway")
se <- airway
@

A \Rclass{DESeqDataSet} object must have an associated design formula. The
design formula expresses the variables which will be used in modeling. The
formula should be a tilde ($\sim$) followed by the variables with plus
signs between them (it will be coerced into a \Rclass{formula} if it is not
already). An intercept is included, representing the base mean of counts.
The design can be changed later; however, all differential analysis steps
should then be repeated, as the design formula is used to estimate the
dispersions and to estimate the log2 fold changes of the model.

The constructor function below shows the generation of a
\Rclass{DESeqDataSet} from a \Rclass{SummarizedExperiment} \Robject{se}.

\emph{Note}: In order to benefit from the default settings of the package,
you should put the variable of interest at the end of the formula and make
sure the control level is the first level.

<<sumExpInput>>=
library("DESeq2")
ddsSE <- DESeqDataSet(se, design = ~ cell + dex)
ddsSE
@

\subsubsection{Count matrix input}

Alternatively, the function \Rfunction{DESeqDataSetFromMatrix} can be used
if you already have a matrix of read counts prepared. For this function you
should provide the counts matrix, the column information as a
\Rclass{DataFrame} or \Rclass{data.frame} and the design formula. First, we
load the \Robject{pasillaGenes} data object, in order to extract a count
matrix and phenotypic data.
<>= library("pasilla") library("Biobase") data("pasillaGenes") countData <- counts(pasillaGenes) colData <- pData(pasillaGenes)[,c("condition","type")] @ Now that we have a matrix of counts and the column information, we can construct a \Rclass{DESeqDataSet}: <>= dds <- DESeqDataSetFromMatrix(countData = countData, colData = colData, design = ~ condition) dds @ If you have additional feature data, it can be added to the \Rclass{DESeqDataSet} by adding to the metadata columns of a newly constructed object. (Here we add redundant data for demonstration, as the gene names are already the rownames of the \Robject{dds}.) <>= featureData <- data.frame(gene=rownames(pasillaGenes)) (mcols(dds) <- DataFrame(mcols(dds), featureData)) @ \subsubsection{\textit{HTSeq} input} You can use the function \Rfunction{DESeqDataSetFromHTSeqCount} if you have \texttt{htseq-count} from the \textit{HTSeq} python package\footnote{available from \url{http://www-huber.embl.de/users/anders/HTSeq}, described in \cite{Anders:2014:htseq}}. For an example of using the python scripts, see the \Biocexptpkg{pasilla} data package. First you will want to specify a variable which points to the directory in which the \textit{HTSeq} output files are located. <>= directory <- "/path/to/your/files/" @ However, for demonstration purposes only, the following line of code points to the directory for the demo \textit{HTSeq} output files packages for the \Biocexptpkg{pasilla} package. <>= directory <- system.file("extdata", package="pasilla", mustWork=TRUE) @ We specify which files to read in using \Rfunction{list.files}, and select those files which contain the string \Rcode{"treated"} using \Rfunction{grep}. The \Rfunction{sub} function is used to chop up the sample filename to obtain the condition status, or you might alternatively read in a phenotypic table using \Rfunction{read.table}. <>= sampleFiles <- grep("treated",list.files(directory),value=TRUE) sampleCondition <- sub("(.*treated).*","\\1",sampleFiles) sampleTable <- data.frame(sampleName = sampleFiles, fileName = sampleFiles, condition = sampleCondition) ddsHTSeq <- DESeqDataSetFromHTSeqCount(sampleTable = sampleTable, directory = directory, design= ~ condition) ddsHTSeq @ \subsubsection{Note on factor levels} \label{sec:factorLevels} In the three examples above, we applied the function \Rfunction{factor} to the column of interest in \Robject{colData}, supplying a character vector of levels. It is important to supply levels (otherwise the levels are chosen in alphabetical order) and to put the ``control'' or ``untreated'' level as the first element ("base level"), so that the log2 fold changes produced by default will be the expected comparison against the base level. An \R{} function for easily changing the base level is \Rfunction{relevel}. An example of setting the base level of a factor with \Rfunction{relevel} is: <>= dds$condition <- relevel(dds$condition, "untreated") @ In addition, when subsetting the columns of a \Rclass{DESeqDataSet}, i.e., when removing certain samples from the analysis, it is possible that all the samples for one or more levels of a variable in the design formula are removed. 
In this case, the \Rfunction{droplevels} function can be used to remove
those levels which do not have samples in the current \Rclass{DESeqDataSet}:

<<droplevels>>=
dds$condition <- droplevels(dds$condition)
@

\subsubsection{Collapsing technical replicates}

\deseqtwo{} provides a function \Rfunction{collapseReplicates} which can
assist in combining the counts from technical replicates into single
columns. See the manual page for an example of the use of
\Rfunction{collapseReplicates}.

\subsubsection{About the pasilla dataset}

We continue with the \Biocexptpkg{pasilla} data constructed from the count
matrix method above. This data set is from an experiment on
\emph{Drosophila melanogaster} cell cultures which investigated the effect
of RNAi knock-down of the splicing factor \emph{pasilla} \cite{Brooks2010}.
The detailed transcript of the production of the \Biocexptpkg{pasilla} data
is provided in the vignette of the data package \Biocexptpkg{pasilla}.

\subsection{Differential expression analysis} \label{sec:de}

The standard differential expression analysis steps are wrapped into a
single function, \Rfunction{DESeq}. The steps of this function are
described in Section~\ref{sec:glm} and in the manual page for
\Robject{?DESeq}. The individual sub-functions which are called by
\Rfunction{DESeq} are still available, described in
Section~\ref{sec:steps}.

Results tables are generated using the function \Rfunction{results}, which
extracts a results table with log2 fold changes, $p$ values and adjusted
$p$ values. With no arguments to \Rfunction{results}, the results will be
for the last variable in the design formula, and if this is a factor, the
comparison will be the last level of this variable over the first level.

<<deseq>>=
dds <- DESeq(dds)
res <- results(dds)
@

These steps should take less than 30 seconds for most analyses. For
experiments with many samples (e.g. 100 samples), one can take advantage of
parallelized computation. Both of the above functions have an argument
\Robject{parallel} which if set to \Robject{TRUE} can be used to distribute
computation across cores specified by the \Rfunction{register} function of
\Biocpkg{BiocParallel}. For example, the following chunk (not evaluated
here) would register 4 cores, and then the two functions above, with
\Robject{parallel=TRUE}, would split computation over these cores.

<<parallel, eval=FALSE>>=
library("BiocParallel")
register(MulticoreParam(4))
@

We can order our results table by the smallest adjusted $p$ value:

<<resOrder>>=
resOrdered <- res[order(res$padj),]
head(resOrdered)
@

We can summarize some basic tallies using the \Rfunction{summary} function.

<<sumRes>>=
summary(res)
@

The \Rfunction{results} function contains a number of arguments to
customize the results table which is generated. Note that the
\Rfunction{results} function automatically performs independent filtering
based on the mean of counts for each gene, optimizing the number of genes
which will have an adjusted $p$ value below a given threshold. This will be
discussed further in Section~\ref{sec:autoFilt}.

If a multi-factor design is used, or if the variable in the design formula
has more than two levels, the \Robject{contrast} argument of
\Rfunction{results} can be used to extract different comparisons from the
\Rclass{DESeqDataSet} returned by \Rfunction{DESeq}. Multi-factor designs
are discussed further in Section~\ref{sec:multifactor}, and the use of the
\Robject{contrast} argument is discussed in Section~\ref{sec:contrasts}.
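As an illustration with the \Biocexptpkg{pasilla} data, where
\Robject{condition} has the levels \Robject{untreated} and
\Robject{treated}, the default comparison could be requested explicitly
via \Robject{contrast}; a minimal sketch (not evaluated here):

<<resultsContrastSketch, eval=FALSE>>=
# explicitly request the treated vs untreated comparison
res <- results(dds, contrast=c("condition","treated","untreated"))
@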
For advanced users, note that all the values calculated by the \deseqtwo{}
package are stored in the \Rclass{DESeqDataSet} object, and access to these
values is discussed in Section~\ref{sec:access}.

\subsection{Exploring and exporting results}

\subsubsection{MA-plot}

\begin{figure}
\centering
\includegraphics[width=.45\textwidth]{figure/MANoPrior-1}
\includegraphics[width=.45\textwidth]{figure/MA-1}
\caption{
  \textbf{MA-plot.}
  These plots show the log2 fold changes from the treatment over the mean
  of normalized counts, i.e. the average of counts normalized by size
  factors. The left plot shows the ``unshrunken'' log2 fold changes, while
  the right plot, produced by the code above, shows the shrinkage of log2
  fold changes resulting from the incorporation of a zero-centered normal
  prior. The shrinkage is greater for the log2 fold change estimates from
  genes with low counts and high dispersion, as can be seen by the
  narrowing of spread of the leftmost points in the right plot.}
\label{fig:MA}
\end{figure}

In \deseqtwo{}, the function \Rfunction{plotMA} shows the log2 fold changes
attributable to a given variable over the mean of normalized counts. Points
will be colored red if the adjusted $p$ value is less than 0.1. Points
which fall out of the window are plotted as open triangles pointing either
up or down.

<<MA>>=
plotMA(res, main="DESeq2", ylim=c(-2,2))
@

The MA-plot of log2 fold changes returned by \deseqtwo{} allows us to see
how the shrinkage of fold changes works for genes with low counts. You can
still obtain results tables which include the ``unshrunken'' log2 fold
changes (for a simple comparison, the ratio of the mean normalized counts
in the two groups). A column \Robject{lfcMLE} with the unshrunken maximum
likelihood estimate (MLE) for the log2 fold change will be added with an
additional argument to \Rfunction{results}:

<<resMLE>>=
resMLE <- results(dds, addMLE=TRUE)
head(resMLE, 4)
@

<<MANoPrior>>=
df <- data.frame(resMLE$baseMean, resMLE$lfcMLE,
                 ifelse(is.na(res$padj), FALSE, res$padj < .1))
plotMA(df, main=expression(unshrunken~log[2]~fold~changes), ylim=c(-2,2))
@

After calling \Rfunction{plotMA}, one can use the function
\Rfunction{identify} to interactively detect the row number of individual
genes by clicking on the plot.

<<identify, eval=FALSE>>=
identify(res$baseMean, res$log2FoldChange)
@

\subsubsection{Plot counts}

It can also be useful to examine the counts of reads for a single gene
across the groups. A simple function for making this plot is
\Rfunction{plotCounts}, which normalizes counts by sequencing depth and
adds a pseudocount of $\frac{1}{2}$ to allow for log scale plotting. The
counts are grouped by the variables in \Robject{intgroup}, where more than
one variable can be specified. Here we specify the gene which had the
smallest $p$ value from the results table created above. You can select the
gene to plot by rowname or by numeric index.

<<plotCounts>>=
plotCounts(dds, gene=which.min(res$padj), intgroup="condition")
@

For customized plotting, an argument \Robject{returnData} specifies that
the function should only return a \Rclass{data.frame} for plotting with
\Rfunction{ggplot}.
<<plotCountsAdv>>=
d <- plotCounts(dds, gene=which.min(res$padj), intgroup="condition",
                returnData=TRUE)
library("ggplot2")
ggplot(d, aes(x=condition, y=count)) +
  geom_point(position=position_jitter(w=0.1,h=0)) +
  scale_y_log10(breaks=c(25,100,400))
@

\begin{figure}
\centering
\includegraphics[width=.45\textwidth]{figure/plotCounts-1}
\includegraphics[width=.45\textwidth]{figure/plotCountsAdv-1}
\caption{
  \textbf{Plot of counts for one gene.}
  The plot of normalized counts (plus a pseudocount of $\frac{1}{2}$)
  either made using the \Rfunction{plotCounts} function (left) or using
  another plotting library (right, using \CRANpkg{ggplot2}).}
\label{fig:plotcounts}
\end{figure}

\subsubsection{More information on results columns} \label{sec:moreInfo}

Information about which variables and tests were used can be found by
calling the function \Rfunction{mcols} on the results object.

<<metadata>>=
mcols(res)$description
@

For a particular gene, a log2 fold change of $-1$ for
\Robject{condition treated vs untreated} means that the treatment induces a
change in observed expression level of $2^{-1} = 0.5$ compared to the
untreated condition. If the variable of interest is continuous-valued, then
the reported log2 fold change is per unit of change of that variable.

\emph{Note:} some values in the results table can be set to \texttt{NA},
for one of the following reasons:

\begin{enumerate}
\item If within a row, all samples have zero counts, the \Robject{baseMean}
  column will be zero, and the log2 fold change estimates, $p$ value and
  adjusted $p$ value will all be set to \texttt{NA}.
\item If a row contains a sample with an extreme count outlier then the $p$
  value and adjusted $p$ value are set to \texttt{NA}. These outlier counts
  are detected by Cook's distance. Customization of this outlier filtering,
  and a description of the functionality for replacement of outlier counts
  and refitting, can be found in Section~\ref{sec:outlierApproach}.
\item If a row is filtered by automatic independent filtering, based on low
  mean normalized count, then only the adjusted $p$ value is set to
  \texttt{NA}. Independent filtering is described, and its customization
  discussed, in Section~\ref{sec:autoFilt}.
\end{enumerate}

The column of \Robject{log2FoldChange} for the default workflow will
contain shrunken estimates of fold change as mentioned above. It is
possible to add a column to the results table -- without rerunning the
analysis -- which contains the unshrunken, or maximum likelihood estimates
(MLE), log2 fold changes. This will add the column \Robject{lfcMLE}
directly after \Robject{log2FoldChange}.

<<addMLE>>=
head(results(dds, addMLE=TRUE),4)
@

\subsubsection{Exporting results to HTML or CSV files}

An HTML report of the results with plots and sortable/filterable columns
can be exported using the \Biocpkg{ReportingTools} package on a
\Rclass{DESeqDataSet} that has been processed by the \Rfunction{DESeq}
function. For a code example, see the ``RNA-seq differential expression''
vignette at the \Biocpkg{ReportingTools} page, or the manual page for the
\Rfunction{publish} method for the \Rclass{DESeqDataSet} class.

A plain-text file of the results can be exported using the base \R{}
functions \Rfunction{write.csv} or \Rfunction{write.table}. We suggest
using a descriptive file name indicating the variable and levels which were
tested.
<<export, eval=FALSE>>=
write.csv(as.data.frame(resOrdered),
          file="condition_treated_results.csv")
@

Exporting only the results which pass an adjusted $p$ value threshold can
be accomplished with the \Rfunction{subset} function, followed by the
\Rfunction{write.csv} function.

<<subset>>=
resSig <- subset(resOrdered, padj < 0.1)
resSig
@

\subsection{Multi-factor designs} \label{sec:multifactor}

Experiments with more than one factor influencing the counts can be
analyzed using a model formula that includes the additional variables. The
data in the \Biocexptpkg{pasilla} package have a condition of interest (the
column \Robject{condition}), as well as information on the type of
sequencing which was performed (the column \Robject{type}), as we can see
below:

<<multifactor>>=
colData(dds)
@

We create a copy of the \Rclass{DESeqDataSet}, so that we can rerun the
analysis using a multi-factor design.

<<copyMultifactor>>=
ddsMF <- dds
@

We can account for the different types of sequencing, and get a clearer
picture of the differences attributable to the treatment. As
\Robject{condition} is the variable of interest, we put it at the end of
the formula. Thus the \Rfunction{results} function will by default pull the
\Robject{condition} results unless \Robject{contrast} or \Robject{name}
arguments are specified. Then we can re-run \Rfunction{DESeq}:

<<replaceDesign>>=
design(ddsMF) <- formula(~ type + condition)
ddsMF <- DESeq(ddsMF)
@

Again, we access the results using the \Rfunction{results} function.

<<multiResults>>=
resMF <- results(ddsMF)
head(resMF)
@

It is also possible to retrieve the log2 fold changes, $p$ values and
adjusted $p$ values of the \Robject{type} variable. The \Robject{contrast}
argument of the function \Rfunction{results} takes a character vector of
length three: the name of the variable, the name of the factor level for
the numerator of the log2 ratio, and the name of the factor level for the
denominator. The \Robject{contrast} argument can also take other forms, as
described in the help page for \Rfunction{results} and in
Section~\ref{sec:contrasts}.

<<multiTypeResults>>=
resMFType <- results(ddsMF, contrast=c("type","single-read","paired-end"))
head(resMFType)
@

If the variable is continuous or an interaction term (see
Section~\ref{sec:interaction}) then the results can be extracted using the
\Robject{name} argument to \Rfunction{results}, where the name is one of
the elements returned by \Robject{resultsNames(dds)}.

\newpage

%---------------------------------------------------
\section{Data transformations and visualization} \label{sec:transf}
%---------------------------------------------------

\subsection{Count data transformations}
%---------------------------------------------------

In order to test for differential expression, we operate on raw counts and
use discrete distributions as described in the previous
Section~\ref{sec:de}. However for other downstream analyses -- e.g. for
visualization or clustering -- it might be useful to work with transformed
versions of the count data.

Perhaps the most obvious choice of transformation is the logarithm. Since
count values for a gene can be zero in some conditions (and non-zero in
others), some advocate the use of \emph{pseudocounts}, i.\,e.\
transformations of the form

\begin{equation}\label{eq:shiftedlog}
  y = \log_2(n + 1)\quad\mbox{or more generally,}\quad
  y = \log_2(n + n_0),
\end{equation}

where $n$ represents the count values and $n_0$ is a positive constant.

In this section, we discuss two alternative approaches that offer more
theoretical justification and a rational way of choosing the parameter
equivalent to $n_0$ above.
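For reference, the shifted logarithm of (\ref{eq:shiftedlog}) can be
computed directly from the counts normalized by size factors; a minimal
sketch, using a pseudocount of $n_0 = 1$ (not evaluated here):

<<shiftedLogSketch, eval=FALSE>>=
# shifted logarithm of normalized counts, with pseudocount of 1
shiftedLog <- log2(counts(dds, normalized=TRUE) + 1)
@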
The first, the \emph{regularized logarithm} or \emph{rlog}, incorporates a
prior on the sample differences \cite{Love2014}, and the other uses the
concept of variance stabilizing transformations (VST)
\cite{Tibshirani1988,sagmb2003,Anders:2010:GB}. Both transformations
produce transformed data on the $\log_2$ scale which has been normalized
with respect to library size.

The point of these two transformations, the \emph{rlog} and the VST, is to
remove the dependence of the variance on the mean, particularly the high
variance of the logarithm of count data when the mean is low. Both
\emph{rlog} and VST use the experiment-wide trend of variance over mean, in
order to transform the data to remove the experiment-wide trend. Note that
we do not require or desire that all the genes have \emph{exactly} the same
variance after transformation. Indeed, in Figure~\ref{figure/vsd2-1} below,
you will see that after the transformations the genes with the same mean do
not have exactly the same standard deviations, but that the experiment-wide
trend has flattened. It is those genes with row variance above the trend
which will allow us to cluster samples into interesting groups.

\subsubsection{Blind dispersion estimation}

The two functions, \Rfunction{rlog} and
\Rfunction{varianceStabilizingTransformation}, have an argument
\Robject{blind}, for whether the transformation should be blind to the
sample information specified by the design formula. When \Robject{blind}
equals \Robject{TRUE} (the default), the functions will re-estimate the
dispersions using only an intercept (design formula $\sim 1$). This setting
should be used in order to compare samples in a manner wholly unbiased by
the information about experimental groups, for example to perform sample QA
(quality assurance) as demonstrated below.

However, blind dispersion estimation is not the appropriate choice if one
expects that many or the majority of genes (rows) will have large
differences in counts which are explainable by the experimental design, and
one wishes to transform the data for downstream analysis. In this case,
using blind dispersion estimation will lead to large estimates of
dispersion, as it attributes differences due to experimental design as
unwanted ``noise'', and shrinks the transformed values towards each other.
By setting \Robject{blind} to \Robject{FALSE}, the dispersions already
estimated will be used to perform transformations, or if not present, they
will be estimated using the current design formula. Note that only the
fitted dispersion estimates from the mean-dispersion trend line are used in
the transformation. So setting \Robject{blind} to \Robject{FALSE} is still
mostly unbiased by the information about the samples.

\subsubsection{Extracting transformed values}

The two functions return \Rclass{SummarizedExperiment} objects, as the data
are no longer counts. The \Rfunction{assay} function is used to extract the
matrix of transformed values:

<<rlogAndVST>>=
rld <- rlog(dds)
vsd <- varianceStabilizingTransformation(dds)
rlogMat <- assay(rld)
vstMat <- assay(vsd)
@

Note that if you have many samples, and the \Rfunction{rlog} function is
taking too long, there is an argument \Robject{fast=TRUE}, which will
perform an approximation of the rlog: instead of shrinking the samples
independently, the function will find the optimal amount of shrinkage for
each gene given the mean counts. The optimization is performed on the same
likelihood of the data as the original rlog. The speed-up for a dataset
with 100 samples is around 15x.
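A sketch of this call (not evaluated here, and only relevant for large
numbers of samples):

<<rlogFastSketch, eval=FALSE>>=
# approximate rlog: per-gene shrinkage given the mean counts
rldFast <- rlog(dds, fast=TRUE)
@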
\subsubsection{Regularized log transformation}

The function \Rfunction{rlog}, which stands for \emph{regularized log},
transforms the original count data to the log2 scale by fitting a model
with a term for each sample and a prior distribution on the coefficients
which is estimated from the data. This is the same kind of shrinkage
(sometimes referred to as regularization, or moderation) of log fold
changes used by the \Rfunction{DESeq} and \Rfunction{nbinomWaldTest}, as
seen in Figure \ref{fig:MA}. The resulting data contains elements defined
as:

$$ \log_2(q_{ij}) = \beta_{i0} + \beta_{ij} $$

where $q_{ij}$ is a parameter proportional to the expected true
concentration of fragments for gene $i$ and sample $j$ (see
Section~\ref{sec:glm}), $\beta_{i0}$ is an intercept which does not undergo
shrinkage, and $\beta_{ij}$ is the sample-specific effect which is shrunk
toward zero based on the dispersion-mean trend over the entire dataset. The
trend typically captures high dispersions for low counts, and therefore
these genes exhibit higher shrinkage from the \Rfunction{rlog}.

Note that, as $q_{ij}$ represents the part of the mean value $\mu_{ij}$
after the size factor $s_j$ has been divided out, it is clear that the rlog
transformation inherently accounts for differences in sequencing depth.
Without priors, this design matrix would lead to a non-unique solution;
however, the addition of a prior on non-intercept betas allows for a unique
solution to be found. The regularized log transformation is preferable to
the variance stabilizing transformation if the size factors vary widely.

\subsubsection{Variance stabilizing transformation}

Above, we used a parametric fit for the dispersion. In this case, the
closed-form expression for the variance stabilizing transformation is used
by \Rfunction{varianceStabilizingTransformation}, which is derived in the
file \texttt{vst.pdf}, that is distributed in the package alongside this
vignette. If a local fit is used (option \Robject{fitType="locfit"} to
\Rfunction{estimateDispersions}) a numerical integration is used instead.

The resulting variance stabilizing transformation is shown in Figure
\ref{figure/vsd1-1}. The code that produces the figure is hidden from this
vignette for the sake of brevity, but can be seen in the \texttt{.Rnw} or
\texttt{.R} source file. Note that the vertical axis in such plots is the
square root of the variance over all samples, so it includes the variance
due to the experimental conditions. While a flat curve of the square root
of variance over the mean may seem like the goal of such transformations,
this may be unreasonable in the case of datasets with many true differences
due to the experimental conditions.

<<vsd1, echo=FALSE>>=
px <- counts(dds)[,1] / sizeFactors(dds)[1]
ord <- order(px)
ord <- ord[px[ord] < 150]
ord <- ord[seq(1, length(ord), length=50)]
last <- ord[length(ord)]
vstcol <- c("blue", "black")
matplot(px[ord],
        cbind(assay(vsd)[, 1], log2(px))[ord, ],
        type="l", lty=1, col=vstcol, xlab="n", ylab="f(n)")
legend("bottomright",
       legend = c(
         expression("variance stabilizing transformation"),
         expression(log[2](n/s[1]))),
       fill=vstcol)
@

\incfig{figure/vsd1-1}{.49\textwidth}{VST and log2.}{
  Graphs of the variance stabilizing transformation for sample 1, in blue,
  and of the transformation $f(n) = \log_2(n/s_1)$, in black. $n$ are the
  counts and $s_1$ is the size factor for the first sample.
}

\subsubsection{Effects of transformations on the variance}

\incfig{figure/vsd2-1}{\textwidth}{Standard deviation over mean.}{
  Per-gene standard deviation (taken across samples), against the rank of
  the mean, for the shifted logarithm $\log_2(n+1)$ (left), the regularized
  log transformation (center) and the variance stabilizing transformation
  (right).
}

Figure~\ref{figure/vsd2-1} plots the standard deviation of the transformed
data, across samples, against the mean, using the shifted logarithm
transformation (\ref{eq:shiftedlog}), the regularized log transformation
and the variance stabilizing transformation. The shifted logarithm has
elevated standard deviation in the lower count range, and the regularized
log to a lesser extent, while for the variance stabilized data the standard
deviation is roughly constant along the whole dynamic range.

<<vsd2>>=
library("vsn")
par(mfrow=c(1,3))
notAllZero <- (rowSums(counts(dds))>0)
meanSdPlot(log2(counts(dds,normalized=TRUE)[notAllZero,] + 1))
meanSdPlot(assay(rld[notAllZero,]))
meanSdPlot(assay(vsd[notAllZero,]))
@

%---------------------------------------------------------------
\subsection{Data quality assessment by sample clustering and
  visualization}\label{sec:quality}
%---------------------------------------------------------------

Data quality assessment and quality control (i.\,e.\ the removal of
insufficiently good data) are essential steps of any data analysis. These
steps should typically be performed very early in the analysis of a new
data set, preceding or in parallel to the differential expression testing.

We define the term \emph{quality} as \emph{fitness for
purpose}\footnote{\url{http://en.wikipedia.org/wiki/Quality_\%28business\%29}}.
Our purpose is the detection of differentially expressed genes, and we are
looking in particular for samples whose experimental treatment suffered
from an abnormality that renders the data points obtained from these
particular samples detrimental to our purpose.

\subsubsection{Heatmap of the count matrix}\label{sec:hmc}

To explore a count matrix, it is often instructive to look at it as a
heatmap. Below we show how to produce such a heatmap from the raw and
transformed data.

<<heatmapSelect>>=
library("RColorBrewer")
library("gplots")
select <- order(rowMeans(counts(dds,normalized=TRUE)),decreasing=TRUE)[1:30]
hmcol <- colorRampPalette(brewer.pal(9, "GnBu"))(100)
@

<<figHeatmap2a>>=
heatmap.2(counts(dds,normalized=TRUE)[select,], col = hmcol,
          Rowv = FALSE, Colv = FALSE, scale="none",
          dendrogram="none", trace="none", margin=c(10,6))
@

<<figHeatmap2b>>=
heatmap.2(assay(rld)[select,], col = hmcol,
          Rowv = FALSE, Colv = FALSE, scale="none",
          dendrogram="none", trace="none", margin=c(10, 6))
@

<<figHeatmap2c>>=
heatmap.2(assay(vsd)[select,], col = hmcol,
          Rowv = FALSE, Colv = FALSE, scale="none",
          dendrogram="none", trace="none", margin=c(10, 6))
@

\begin{figure}
\centering
\includegraphics[width=.32\textwidth]{figure/figHeatmap2a-1}
\includegraphics[width=.32\textwidth]{figure/figHeatmap2b-1}
\includegraphics[width=.32\textwidth]{figure/figHeatmap2c-1}
\caption{Heatmaps showing the expression data of the
  \Sexpr{length(select)} most highly expressed genes. The data are raw
  counts (left), from the regularized log transformation (center) and from
  the variance stabilizing transformation (right).}
\label{fig:heatmap2}
\end{figure}

\incfig{figure/figHeatmapSamples-1}{.6\textwidth}{Sample-to-sample
  distances.}{ Heatmap showing the Euclidean distances between the samples
  as calculated from the regularized log transformation.
}

\subsubsection{Heatmap of the sample-to-sample distances}\label{sec:dists}

Another use of the transformed data is sample clustering. Here, we apply
the \Rfunction{dist} function to the transpose of the transformed count
matrix to get sample-to-sample distances. We could alternatively use the
variance stabilizing transformation here.

<<sampleClust>>=
distsRL <- dist(t(assay(rld)))
@

A heatmap of this distance matrix gives us an overview of the similarities
and dissimilarities between samples (Figure
\ref{figure/figHeatmapSamples-1}): We have to provide a hierarchical
clustering \Robject{hc} to the \Rfunction{heatmap.2} function based on the
sample distances, or else the \Rfunction{heatmap.2} function would
calculate a clustering based on the distances between the rows/columns of
the distance matrix.

<<figHeatmapSamples>>=
mat <- as.matrix(distsRL)
rownames(mat) <- colnames(mat) <- with(colData(dds),
                                       paste(condition, type, sep=" : "))
hc <- hclust(distsRL)
heatmap.2(mat, Rowv=as.dendrogram(hc),
          symm=TRUE, trace="none",
          col = rev(hmcol), margin=c(13, 13))
@

\subsubsection{Principal component plot of the samples}\label{sec:pca}

Related to the distance matrix of Section~\ref{sec:dists} is the PCA plot
of the samples, which we obtain as follows (Figure \ref{figure/figPCA-1}).

<<figPCA>>=
plotPCA(rld, intgroup=c("condition", "type"))
@

\incfig{figure/figPCA-1}{.7\textwidth}{PCA plot.}{
  PCA plot. The \Sexpr{ncol(rld)} samples shown in the 2D plane spanned by
  their first two principal components. This type of plot is useful for
  visualizing the overall effect of experimental covariates and batch
  effects.
}

It is also possible to customize the PCA plot using the \Rfunction{ggplot}
function.

<<figPCA2>>=
data <- plotPCA(rld, intgroup=c("condition", "type"), returnData=TRUE)
percentVar <- round(100 * attr(data, "percentVar"))
ggplot(data, aes(PC1, PC2, color=condition, shape=type)) +
  geom_point(size=3) +
  xlab(paste0("PC1: ",percentVar[1],"% variance")) +
  ylab(paste0("PC2: ",percentVar[2],"% variance"))
@

\incfig{figure/figPCA2-1}{.7\textwidth}{PCA plot.}{
  PCA plot customized using the \CRANpkg{ggplot2} library.
}

\newpage

%--------------------------------------------------
\section{Variations to the standard workflow}
%--------------------------------------------------

\subsection{Wald test individual steps} \label{sec:steps}

The function \Rfunction{DESeq} runs the following functions in order:

<<WaldTest, eval=FALSE>>=
dds <- estimateSizeFactors(dds)
dds <- estimateDispersions(dds)
dds <- nbinomWaldTest(dds)
@

\subsection{Contrasts} \label{sec:contrasts}

A contrast is a linear combination of estimated log2 fold changes, which
can be used to test if differences between groups are equal to zero. The
simplest use case for contrasts is an experimental design containing a
factor with three levels, say A, B and C. Contrasts enable the user to
generate results for all 3 possible differences: log2 fold change of B vs
A, of C vs A, and of C vs B (the other three possible pairs will simply
have $-1 \times$ the log2 fold changes of these three).

In order to fit models with ``shrunken'' log2 fold changes in a manner
which is independent of the choice of base level, \deseqtwo{} uses
``expanded model matrices'', described further in
Section~\ref{sec:expanded}. The expanded model matrices include a
coefficient for each level of the factors in addition to an intercept. The
\Robject{contrast} argument of the \Rfunction{results} function is again
used to extract test results of log2 fold changes of interest.
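For instance, for a hypothetical factor \Robject{group} with levels A, B
and C, the three comparisons could be extracted as follows (a sketch,
assuming \Robject{group} appears in the design formula; not evaluated
here):

<<threeLevelContrasts, eval=FALSE>>=
# hypothetical factor "group" with levels "A", "B" and "C"
resBvsA <- results(dds, contrast=c("group","B","A"))
resCvsA <- results(dds, contrast=c("group","C","A"))
resCvsB <- results(dds, contrast=c("group","C","B"))
@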
Log2 fold changes can also be added and subtracted by providing a
\Robject{list} to the \Robject{contrast} argument with two elements: the
names of the log2 fold changes to add, and the names of the log2 fold
changes to subtract. Alternatively, a numeric vector of the length of
\Robject{resultsNames(dds)} can be provided, for manually specifying the
linear combination of terms. Demonstrations of the use of contrasts for
various designs can be found in the examples section of the help page for
the \Rfunction{results} function. The formula that is used to generate the
contrasts can be found in Section~\ref{sec:ctrstTheory}.

\subsection{Interactions} \label{sec:interaction}

Interaction terms can be added to the design formula, in order to test, for
example, if the log2 fold change attributable to a given condition is
different based on a second variable, e.g.\ whether the treatment effect of
a drug differs based on another grouping variable like sex or species.
Interactions are specified in an R formula using a colon, \Robject{:},
between the two variable names. Demonstrations of extracting results from
an interaction model are shown in the examples section of the help page for
\Rfunction{results}.

\subsection{Time-series experiments}

There are a number of ways to analyze time-series experiments, depending on
the biological question of interest. In order to test for any differences
over multiple time points, one can use a design including the time factor,
and then test using the likelihood ratio test as described in
Section~\ref{sec:LRT}, where the time factor is removed in the reduced
formula. For a control and treatment time series, one can use a design
formula containing the condition factor, the time factor, and the
interaction of the two. In this case, using the likelihood ratio test with
a reduced model which does not contain the interaction terms will test
whether the condition induces a change in gene expression at any time point
after the base-level time point (time 0). An example of the latter analysis
is provided in an RNA-Seq workflow on the Bioconductor website:
\url{http://www.bioconductor.org/help/workflows/rnaseqGene/}.

\subsection{Likelihood ratio test} \label{sec:LRT}

\deseqtwo{} offers two kinds of hypothesis tests: the Wald test, where we
use the estimated standard error of a log2 fold change to test if it is
equal to zero, and the likelihood ratio test (LRT). The LRT examines two
models for the counts, a \emph{full} model with a certain number of terms
and a \emph{reduced} model, in which some of the terms of the \emph{full}
model are removed. The test determines if the increased likelihood of the
data using the extra terms in the \emph{full} model is more than expected
if those extra terms are truly zero.

The LRT is therefore useful for testing multiple terms at once, for example
testing 3 or more levels of a factor at once, or all interactions between
two variables. The LRT for count data is conceptually similar to an
analysis of variance (ANOVA) calculation in linear regression, except that
in the case of the Negative Binomial GLM, we use an analysis of deviance
(ANODEV), where the \emph{deviance} captures the difference in likelihood
between a full and a reduced model.

The likelihood ratio test can be specified using the \Robject{test}
argument to \Rfunction{DESeq}, which substitutes \Rfunction{nbinomWaldTest}
with \Rfunction{nbinomLRT}. In this case, the user needs to provide a
reduced formula, e.g.
one in which a number of terms from \Robject{design(dds)} are removed. The
degrees of freedom for the test is obtained from the difference between the
number of parameters in the two models.

\subsection{Approach to count outliers} \label{sec:outlierApproach}

RNA-Seq data sometimes contain isolated instances of very large counts that
are apparently unrelated to the experimental or study design, and which may
be considered outliers. There are many reasons why outliers can arise,
including rare technical or experimental artifacts, read mapping problems
in the case of genetically differing samples, and genuine, but rare
biological events. In many cases, users appear primarily interested in
genes that show a consistent behavior, and this is the reason why by
default, genes that are affected by such outliers are set aside by
\deseqtwo{}, or if there are sufficient samples, outlier counts are
replaced for model fitting. These two behaviors are described below.

The \Rfunction{DESeq} function (and
\Rfunction{nbinomWaldTest}/\Rfunction{nbinomLRT} functions) calculates, for
every gene and for every sample, a diagnostic test for outliers called
\emph{Cook's distance}. Cook's distance is a measure of how much a single
sample is influencing the fitted coefficients for a gene, and a large value
of Cook's distance is intended to indicate an outlier count. The Cook's
distances are stored as a matrix available in
\Robject{assays(dds)[["cooks"]]}.

The \Rfunction{results} function automatically flags genes which contain a
Cook's distance above a cutoff for samples which have 3 or more replicates.
The $p$ values and adjusted $p$ values for these genes are set to
\Robject{NA}. At least 3 replicates are required for flagging, as it is
difficult to judge which sample might be an outlier with only 2 replicates.

With many degrees of freedom -- i.\,e., many more samples than number of
parameters to be estimated -- it is undesirable to remove entire genes from
the analysis just because their data include a single count outlier. When
there are 7 or more replicates for a given sample, the \Rfunction{DESeq}
function will automatically replace counts with large Cook's distance with
the trimmed mean over all samples, scaled up by the size factor or
normalization factor for that sample. This approach is conservative; it
will not lead to false positives, as it replaces the outlier value with the
value predicted by the null hypothesis.

The default Cook's distance cutoff for the two behaviors described above
depends on the sample size and number of parameters to be estimated. The
default is to use the $99\%$ quantile of the $F(p,m-p)$ distribution (with
$p$ the number of parameters including the intercept and $m$ the number of
samples). The default for gene flagging can be modified using the
\Robject{cooksCutoff} argument to the \Rfunction{results} function. The
gene flagging functionality can be disabled by setting
\Robject{cooksCutoff} to \Robject{FALSE} or \Robject{Inf}. The automatic
outlier replacement performed by \Rfunction{DESeq} can be disabled by
setting the \Robject{minReplicatesForReplace} argument to \Robject{Inf}.
\Rfunction{DESeq} automatically replaces outliers if there are sufficient
replicates and a row contains a count with very high Cook's distance.
\Rfunction{DESeq} preserves the original counts in \Robject{counts(dds)},
saving the replacement counts as a matrix named \Robject{replaceCounts} in
\Robject{assays(dds)}.
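A sketch of these two customizations (not evaluated here):

<<outlierCustomSketch, eval=FALSE>>=
# turn off flagging of genes with outlier counts
res <- results(dds, cooksCutoff=FALSE)
# turn off automatic replacement of outlier counts
dds <- DESeq(dds, minReplicatesForReplace=Inf)
@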
\subsection{Dispersion plot and fitting alternatives}

Plotting the dispersion estimates is a useful diagnostic. The dispersion
plot in Figure \ref{figure/dispFit-1} is typical, with the final estimates
shrunk from the gene-wise estimates towards the fitted estimates. Some
gene-wise estimates are flagged as outliers and not shrunk towards the
fitted value (this outlier detection is described in the man page for
\Rfunction{estimateDispersionsMAP}). The amount of shrinkage can be more or
less than seen here, depending on the sample size, the number of
coefficients, the row mean and the variability of the gene-wise estimates.

<<dispFit>>=
plotDispEsts(dds)
@

\incfig{figure/dispFit-1}{.5\textwidth}{Dispersion plot.}{
  The dispersion estimate plot shows the gene-wise estimates (black), the
  fitted values (red), and the final maximum \textit{a posteriori}
  estimates used in testing (blue).
}

\subsubsection{Local or mean dispersion fit}

A local smoothed dispersion fit is automatically substituted in the case
that the parametric curve doesn't fit the observed dispersion-mean
relationship. This can be prespecified by providing the argument
\Robject{fitType="local"} to either \Rfunction{DESeq} or
\Rfunction{estimateDispersions}. Additionally, using the mean of gene-wise
dispersion estimates as the fitted value can be specified by providing the
argument \Robject{fitType="mean"}.

\subsubsection{Supply a custom dispersion fit}

Any fitted values can be provided during dispersion estimation, using the
lower-level functions described in the manual page for
\Rfunction{estimateDispersionsGeneEst}. In the code chunk below, we use the
gene-wise estimates which were already calculated and saved in the metadata
column \Robject{dispGeneEst}. Then we calculate the median value of the
dispersion estimates above a threshold, and save these values as the fitted
dispersions, using the replacement function for
\Rfunction{dispersionFunction}. In the last line, the function
\Rfunction{estimateDispersionsMAP} uses the fitted dispersions to generate
maximum \textit{a posteriori} (MAP) estimates of dispersion.

<<dispFitCustom>>=
ddsCustom <- dds
useForMedian <- mcols(ddsCustom)$dispGeneEst > 1e-7
medianDisp <- median(mcols(ddsCustom)$dispGeneEst[useForMedian],
                     na.rm=TRUE)
dispersionFunction(ddsCustom) <- function(mu) medianDisp
ddsCustom <- estimateDispersionsMAP(ddsCustom)
@

\subsection{Independent filtering of results}\label{sec:autoFilt}

The \Rfunction{results} function of the \deseqtwo{} package performs
independent filtering by default using the mean of normalized counts as a
filter statistic. A threshold on the filter statistic is found which
optimizes the number of adjusted $p$ values lower than a significance level
\Robject{alpha} (we use the standard variable name for significance level,
though it is unrelated to the dispersion parameter $\alpha$). The theory
behind independent filtering is discussed in greater detail in
Section~\ref{sec:indepfilt}. The adjusted $p$ values for the genes which do
not pass the filter threshold are set to \Robject{NA}.

The independent filtering is performed using the \Rfunction{filtered\_p}
function of the \Biocpkg{genefilter} package, and all of the arguments of
\Rfunction{filtered\_p} can be passed to the \Rfunction{results} function.
The filter threshold value and the number of rejections at each quantile of
the filter statistic are available as attributes of the object returned by
\Rfunction{results}.
For example, we can visualize the optimization by plotting the
\Robject{filterNumRej} attribute of the results object, as seen in Figure
\ref{figure/filtByMean-1}. Note that if the maximum number of rejections is
very small such that the line of rejections over filter threshold appears
noisy, the expected false discovery rate might not be held exactly for this
small set.

<<filtByMean>>=
attr(res,"filterThreshold")
plot(attr(res,"filterNumRej"),type="b",
     ylab="number of rejections")
@

\incfig{figure/filtByMean-1}{.5\textwidth}{Independent filtering.}{
  The \Rfunction{results} function maximizes the number of rejections
  (adjusted $p$ value less than a significance level), over theta, the
  quantiles of a filtering statistic (in this case, the mean of normalized
  counts).
}

Independent filtering can be turned off by setting
\Robject{independentFiltering} to \Robject{FALSE}.

<<noFilt>>=
resNoFilt <- results(dds, independentFiltering=FALSE)
addmargins(table(filtering=(res$padj < .1),
                 noFiltering=(resNoFilt$padj < .1)))
@

\subsection{Tests of log2 fold change above or below a threshold}

It is also possible to provide thresholds for constructing Wald tests of
significance. Two arguments to the \Rfunction{results} function allow for
threshold-based Wald tests: \Robject{lfcThreshold}, which takes a
non-negative numeric threshold value, and \Robject{altHypothesis}, which
specifies the kind of test. Note that the \textit{alternative hypothesis}
is specified by the user, i.e. those genes which the user is interested in
finding, and the test provides $p$ values for the null hypothesis, the
complement of the set defined by the alternative. The
\Robject{altHypothesis} argument can take one of the following four values,
where $\beta$ is the log2 fold change specified by the \Robject{name}
argument:

\begin{itemize}
\item \Robject{greaterAbs} - $|\beta| > \textrm{lfcThreshold}$ - tests are
  two-tailed
\item \Robject{lessAbs} - $|\beta| < \textrm{lfcThreshold}$ - $p$ values
  are the maximum of the upper and lower tests
\item \Robject{greater} - $\beta > \textrm{lfcThreshold}$
\item \Robject{less} - $\beta < -\textrm{lfcThreshold}$
\end{itemize}

The test \Robject{altHypothesis="lessAbs"} requires that the user have run
\Rfunction{DESeq} with the argument \Robject{betaPrior=FALSE}. To
understand the reason for this requirement, consider that during hypothesis
testing, the null hypothesis is favored unless the data provide strong
evidence to reject the null. For this test, including a zero-centered prior
on log fold change would favor the alternative hypothesis, shrinking log
fold changes toward zero. Removing the prior on log fold changes for tests
of small log fold change allows for detection of only those genes where the
data alone provides evidence against the null.

The four possible values of \Robject{altHypothesis} are demonstrated in the
following code and visually by MA-plots in Figure~\ref{figure/lfcThresh-1}.
First we run \Rfunction{DESeq} and specify \Robject{betaPrior=FALSE} in
order to demonstrate \Robject{altHypothesis="lessAbs"}.

<<ddsNoPrior>>=
ddsNoPrior <- DESeq(dds, betaPrior=FALSE)
@

In order to produce results tables for the following tests, the same
arguments (except \Robject{ylim}) would be provided to the
\Rfunction{results} function.
<<lfcThresh>>=
par(mfrow=c(2,2),mar=c(2,2,1,1))
yl <- c(-2.5,2.5)

resGA <- results(dds, lfcThreshold=.5, altHypothesis="greaterAbs")
resLA <- results(ddsNoPrior, lfcThreshold=.5, altHypothesis="lessAbs")
resG <- results(dds, lfcThreshold=.5, altHypothesis="greater")
resL <- results(dds, lfcThreshold=.5, altHypothesis="less")

plotMA(resGA, ylim=yl)
abline(h=c(-.5,.5),col="dodgerblue",lwd=2)
plotMA(resLA, ylim=yl)
abline(h=c(-.5,.5),col="dodgerblue",lwd=2)
plotMA(resG, ylim=yl)
abline(h=.5,col="dodgerblue",lwd=2)
plotMA(resL, ylim=yl)
abline(h=-.5,col="dodgerblue",lwd=2)
@

\incfig{figure/lfcThresh-1}{.5\textwidth}{MA-plots of tests of log2 fold
  change with respect to a threshold value.}{ Going left to right across
  rows, the tests are for \Robject{altHypothesis = "greaterAbs"},
  \Robject{"lessAbs"}, \Robject{"greater"}, and \Robject{"less"}.
}

\subsection{Access to all calculated values}\label{sec:access}

All row-wise calculated values (intermediate dispersion calculations,
coefficients, standard errors, etc.) are stored in the \Rclass{DESeqDataSet}
object, e.g. \Robject{dds} in this vignette. These values are accessible by
calling \Rfunction{mcols} on \Robject{dds}. Descriptions of the columns are
accessible by two calls to \Rfunction{mcols}.

<<mcols>>=
mcols(dds,use.names=TRUE)[1:4,1:4]
# here using substr() only for display purposes
substr(names(mcols(dds)),1,10)
mcols(mcols(dds), use.names=TRUE)[1:4,]
@

The mean values $\mu_{ij} = s_j q_{ij}$ and the Cook's distances for each
gene and sample are stored as matrices in the assays slot:

<<muAndCooks>>=
head(assays(dds)[["mu"]])
head(assays(dds)[["cooks"]])
@

The dispersions $\alpha_i$ can be accessed with the \Rfunction{dispersions}
function.

<<dispersions>>=
head(dispersions(dds))
# which is the same as
head(mcols(dds)$dispersion)
@

The size factors $s_j$ are accessible via \Rfunction{sizeFactors}:

<<sizefactors>>=
sizeFactors(dds)
@

For advanced users, we also include a convenience function \Rfunction{coef}
for extracting the matrix of coefficients $[\beta_{ir}]$ for all genes $i$
and parameters $r$, as in the formula in Section~\ref{sec:glm}. This
function can also return a matrix of standard errors, see \Robject{?coef}.
The columns of this matrix correspond to the effects returned by
\Rfunction{resultsNames}. Note that the \Rfunction{results} function is
best for building results tables with $p$ values and adjusted $p$ values.

<<coef>>=
head(coef(dds))
@

The beta prior variance $\sigma_r^2$ is stored as an attribute of the
\Rclass{DESeqDataSet}:

<<betaPriorVar>>=
attr(dds, "betaPriorVar")
@

The dispersion prior variance $\sigma_d^2$ is stored as an attribute of the
dispersion function:

<<dispPriorVar>>=
dispersionFunction(dds)
attr(dispersionFunction(dds), "dispPriorVar")
@

\subsection{Sample-/gene-dependent normalization factors}
\label{sec:normfactors}

In some experiments, there might be gene-dependent biases which vary across
samples. For instance, GC-content bias or length bias might vary across
samples coming from different labs or processed at different times. We use
the terms ``normalization factors'' for a gene $\times$ sample matrix, and
``size factors'' for a single number per sample. Incorporating
normalization factors, the mean parameter $\mu_{ij}$ from
Section~\ref{sec:glm} becomes:

$$ \mu_{ij} = NF_{ij} q_{ij} $$

with normalization factor matrix $NF$ having the same dimensions as the
counts matrix $K$. This matrix can be incorporated as shown below.
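For illustration only, below we construct a hypothetical normalization
factor matrix filled with random values; in practice such a matrix would
come from a method which estimates the biases, such as the \Biocpkg{cqn}
and \Biocpkg{EDASeq} approaches discussed at the end of this section.

<<normFactorsHypothetical, eval=FALSE>>=
# hypothetical matrix, purely for illustration, with the
# same dimensions as the count matrix
normFactors <- matrix(runif(nrow(dds)*ncol(dds), 0.5, 1.5),
                      nrow=nrow(dds), ncol=ncol(dds))
@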
We recommend providing a matrix with row-wise geometric means of $1$, so
that the mean of normalized counts for a gene is close to the mean of the
unnormalized counts. This can be accomplished by dividing out the current
row geometric means.

<<normFactors, eval=FALSE>>=
normFactors <- normFactors / exp(rowMeans(log(normFactors)))
normalizationFactors(dds) <- normFactors
@

These steps then replace \Rfunction{estimateSizeFactors} in the steps
described in Section~\ref{sec:steps}. Normalization factors, if present,
will always be used in the place of size factors.

The methods provided by the \Biocpkg{cqn} or \Biocpkg{EDASeq} packages can
help correct for GC or length biases. They both describe in their vignettes
how to create matrices which can be used by \deseqtwo{}. From the formula
above, we see that normalization factors should be on the scale of the
counts, like size factors, and unlike offsets which are typically on the
scale of the predictors (i.e. the logarithmic scale for the negative
binomial GLM). At the time of writing, the transformation from the matrices
provided by these packages should be:

<<offsetTransform, eval=FALSE>>=
cqnOffset <- cqnObject$glm.offset
cqnNormFactors <- exp(cqnOffset)
EDASeqNormFactors <- exp(-1 * EDASeqOffset)
@

\newpage

%--------------------------------------------------
\section{Theory behind DESeq2}
%--------------------------------------------------

\subsection{The DESeq2 model} \label{sec:glm}

The \deseqtwo{} model and all the steps taken in the software are described
in detail in our pre-print \cite{Love2014}, and we include the formula and
descriptions in this section as well. The differential expression analysis
in \deseqtwo{} uses a generalized linear model of the form:

$$ K_{ij} \sim \textrm{NB}(\mu_{ij}, \alpha_i) $$
$$ \mu_{ij} = s_j q_{ij} $$
$$ \log_2(q_{ij}) = x_{j.} \beta_i $$

where counts $K_{ij}$ for gene $i$, sample $j$ are modeled using a negative
binomial distribution with fitted mean $\mu_{ij}$ and a gene-specific
dispersion parameter $\alpha_i$. The fitted mean is composed of a
sample-specific size factor $s_j$\footnote{The model can be generalized to
use sample- \textbf{and} gene-dependent normalization factors, see
Section~\ref{sec:normfactors}.} and a parameter $q_{ij}$ proportional to
the expected true concentration of fragments for gene $i$ and sample $j$.
The coefficients $\beta_i$ give the log2 fold changes for gene $i$ for each
column of the model matrix $X$. By default these log2 fold changes are the
maximum \emph{a posteriori} estimates after incorporating a zero-centered
Normal prior -- in the software referred to as a $\beta$-prior -- hence
\deseqtwo{} provides ``moderated'' log2 fold change estimates. Dispersions
are estimated using expected mean values from the maximum likelihood
estimate of log2 fold changes, and optimizing the Cox-Reid adjusted profile
likelihood, as first implemented for RNA-Seq data in \Biocpkg{edgeR}
\cite{CR,edgeR_GLM}.
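As a concrete illustration of the rows $x_{j.}$ of the model matrix $X$,
the standard (non-expanded) model matrix for the multi-factor design used
earlier can be inspected directly; a sketch (not evaluated here):

<<modelMatrixSketch, eval=FALSE>>=
# rows are samples; columns correspond to the coefficients beta_i
model.matrix(design(ddsMF), data=as.data.frame(colData(ddsMF)))
@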
The steps performed by the \Rfunction{DESeq} function are documented
in its manual page; briefly, they are:
\begin{enumerate}
\item estimation of size factors $s_j$ by \Rfunction{estimateSizeFactors}
\item estimation of dispersion $\alpha_i$ by \Rfunction{estimateDispersions}
\item negative binomial GLM fitting for $\beta_i$ and Wald statistics
  by \Rfunction{nbinomWaldTest}
\end{enumerate}
For access to all the values calculated during these steps, see
Section~\ref{sec:access}.

\subsection{Changes compared to the \Biocpkg{DESeq} package}

The main changes in the package \deseqtwo{}, compared to the (older)
version \Biocpkg{DESeq}, are as follows:

\begin{itemize}
\item \Rclass{SummarizedExperiment} is used as the superclass for
  storage of input data, intermediate calculations and results.
\item Maximum \textit{a posteriori} estimation of GLM coefficients
  incorporating a zero-centered Normal prior with variance estimated
  from data (equivalent to Tikhonov/ridge regularization). This
  adjustment has little effect on genes with high counts, yet it helps
  to moderate the otherwise large variance in log2 fold change
  estimates for genes with low counts or highly variable counts.
\item Maximum \textit{a posteriori} estimation of dispersion replaces
  the \Robject{sharingMode} options \Robject{fit-only} or
  \Robject{maximum} of the previous version of the package. This is
  similar to the dispersion estimation methods of \Biocpkg{DSS}
  \cite{Wu2012New}.
\item All estimation and inference is based on the generalized linear
  model, which includes the two condition case (previously the
  \textit{exact test} was used).
\item The Wald test for significance of GLM coefficients is provided
  as the default inference method, with the likelihood ratio test of
  the previous version still available.
\item It is possible to provide a matrix of sample-/gene-dependent
  normalization factors (Section \ref{sec:normfactors}).
\item Automatic independent filtering on the mean of normalized counts
  (Section \ref{sec:indepfilt}).
\item Automatic outlier detection and handling (Section \ref{sec:cooks}).
\end{itemize}

\subsection{Count outlier detection} \label{sec:cooks}

\deseqtwo{} relies on the negative binomial distribution to make
estimates and perform statistical inference on differences. While the
negative binomial is versatile in having a mean and dispersion
parameter, extreme counts in individual samples might not fit well to
this distribution. For this reason, we perform automatic detection of
count outliers. We use Cook's distance, which is a measure of how much
the fitted coefficients would change if an individual sample were
removed \cite{Cook1977Detection}. For more on the implementation of
Cook's distance see Section~\ref{sec:outlierApproach} and the manual
page for the \Rfunction{results} function. Below we plot the maximum
value of Cook's distance for each row over the rank of the test
statistic to justify its use as a filtering criterion.

<>=
W <- res$stat
maxCooks <- apply(assays(dds)[["cooks"]],1,max)
idx <- !is.na(W)
plot(rank(W[idx]), maxCooks[idx], xlab="rank of Wald statistic",
     ylab="maximum Cook's distance per gene",
     ylim=c(0,5), cex=.4, col=rgb(0,0,0,.3))
m <- ncol(dds)
p <- 3
abline(h=qf(.99, p, m - p))
@
\incfig{figure/cooksPlot-1}{.5\textwidth}{Cook's distance.}{
  Plot of the maximum Cook's distance per gene over the rank of the
  Wald statistics for the condition. The two regions with small Cook's
  distances are genes with a single count in one sample.
  The horizontal line is the default cutoff used for 7 samples and 3
  estimated parameters.
}

\subsection{Contrasts} \label{sec:ctrstTheory}

Contrasts can be calculated for a \Rclass{DESeqDataSet} object for
which the GLM coefficients have already been fit using the Wald test
steps (\Rfunction{DESeq} with \texttt{test="Wald"} or using
\Rfunction{nbinomWaldTest}). The vector of coefficients $\beta$ is
left multiplied by the contrast vector $c$ to form the numerator of
the test statistic. The denominator is formed by multiplying the
covariance matrix $\Sigma$ for the coefficients on either side by the
contrast vector $c$. The square root of this product is an estimate of
the standard error for the contrast. The contrast statistic $W$ is
then compared to a Normal distribution, as are the other Wald
statistics in the \deseqtwo{} package.

$$ W = \frac{c^t \beta}{\sqrt{c^t \Sigma c}} $$

\subsection{Expanded model matrices} \label{sec:expanded}

As mentioned in Section~\ref{sec:contrasts}, \deseqtwo{} uses
``expanded model matrices'' with the log2 fold change prior, in order
to produce log2 fold change estimates and test results which are
independent of the choice of base level. These model matrices differ
from the standard model matrices, in that they have an indicator
column (and therefore a coefficient) for each level of factors in the
design formula in addition to an intercept. Expanded model matrices
are not used without the log2 fold change prior, or in the case of
designs with 2 level factors and an interaction term. These matrices
are therefore not full rank, but a coefficient vector $\beta_i$ can
still be found due to the zero-centered prior on non-intercept
coefficients. The prior variance for the log2 fold changes is
calculated by first generating maximum likelihood estimates for a
standard model matrix. The prior variance for each level of a factor
is then set as the average of the mean squared maximum likelihood
estimates for each level and every possible contrast, such that this
prior value will be base level independent. The \Robject{contrast}
argument of the \Rfunction{results} function is again used in order to
generate comparisons of interest.

%--------------------------------------------------
\subsection{Independent filtering and multiple testing}
\label{sec:indepfilt}
\subsubsection{Filtering criteria} \label{sec:filtbycount}
%--------------------------------------------------

The goal of independent filtering is to filter out those tests from
the procedure that have little or no chance of showing significant
evidence, without even looking at their test statistic. Typically,
this results in increased detection power at the same experiment-wide
type I error. Here, we measure experiment-wide type I error in terms
of the false discovery rate. A good choice for a filtering criterion
is one that
\begin{enumerate}
\item\label{it:indp} is statistically independent from the test
  statistic under the null hypothesis,
\item\label{it:corr} is correlated with the test statistic under the
  alternative, and
\item\label{it:joint} does not notably change the dependence
  structure -- if there is any -- between the tests that pass the
  filter, compared to the dependence structure between the tests
  before filtering.
\end{enumerate}
The benefit from filtering relies on property \ref{it:corr}, and we
will explore it further in Section~\ref{sec:whyitworks}. Its
statistical validity relies on property \ref{it:indp} -- which is
simple to formally prove for many combinations of filter criteria with
test statistics -- and property \ref{it:joint}, which is less easy to
establish theoretically from first principles, but rarely a problem in
practice. We refer to \cite{Bourgon:2010:PNAS} for further discussion
of this topic.
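As a small illustration of property \ref{it:indp} -- here with
simulated Normal data rather than counts, an assumption made only for
this sketch -- the overall mean of each row carries essentially no
information about the test statistic under the null:

<>=
# Sketch of property 1 with simulated Normal null data (a toy
# assumption, not RNA-Seq counts): the row mean is essentially
# uncorrelated with the two-sample t statistic under the null.
set.seed(1)
x <- matrix(rnorm(1000 * 6), ncol=6)
tstat <- apply(x, 1, function(z) t.test(z[1:3], z[4:6])$statistic)
cor(rowMeans(x), tstat)
@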
A simple filtering criterion readily available in the results object
is the mean of normalized counts irrespective of biological condition
(Figure \ref{figure/indFilt-1}). Genes with very low counts are
unlikely to show significant differences, typically due to high
dispersion. For example, we can plot the $-\log_{10}$ $p$ values from
all genes over the normalized mean counts.

<>=
plot(res$baseMean+1, -log10(res$pvalue),
     log="x", xlab="mean of normalized counts",
     ylab=expression(-log[10](pvalue)),
     ylim=c(0,30), cex=.4, col=rgb(0,0,0,.3))
@
\incfig{figure/indFilt-1}{.5\textwidth}{Mean counts as a filter
  statistic.}{
  The mean of normalized counts provides an independent statistic for
  filtering the tests. It is independent because the information about
  the variables in the design formula is not used. By filtering out
  genes which fall on the left side of the plot, the majority of the
  low $p$ values are kept.
}

%--------------------------------------------------
\subsubsection{Why does it work?}\label{sec:whyitworks}
%--------------------------------------------------

Consider the $p$ value histogram in Figure
\ref{figure/fighistindepfilt-1}. It shows how the filtering
ameliorates the multiple testing problem -- and thus the severity of a
multiple testing adjustment -- by removing a background set of
hypotheses whose $p$ values are distributed more or less uniformly in
$[0,1]$.

<>=
use <- res$baseMean > attr(res,"filterThreshold")
table(use)
h1 <- hist(res$pvalue[!use], breaks=0:50/50, plot=FALSE)
h2 <- hist(res$pvalue[use], breaks=0:50/50, plot=FALSE)
colori <- c(`do not pass`="khaki", `pass`="powderblue")
@

<>=
barplot(height = rbind(h1$counts, h2$counts), beside = FALSE,
        col = colori, space = 0, main = "", ylab="frequency")
text(x = c(0, length(h1$counts)), y = 0, label = paste(c(0,1)),
     adj = c(0.5,1.7), xpd=NA)
legend("topright", fill=rev(colori), legend=rev(names(colori)))
@
\incfig{figure/fighistindepfilt-1}{.5\textwidth}{Histogram of $p$
  values for all tests.}{
  The area shaded in blue indicates the subset of those that pass the
  filtering, the area in khaki those that do not pass.
}

%---------------------------------------------------
\subsubsection{Diagnostic plots for multiple testing}
%---------------------------------------------------

The Benjamini-Hochberg multiple testing adjustment procedure
\cite{BH:1995} has a simple graphical illustration, which we produce
in the following code chunk. Its result is shown in the left panel of
Figure \ref{fig:multtest}.

<>=
resFilt <- res[use & !is.na(res$pvalue),]
orderInPlot <- order(resFilt$pvalue)
showInPlot <- (resFilt$pvalue[orderInPlot] <= 0.08)
alpha <- 0.1
@

<>=
plot(seq(along=which(showInPlot)),
     resFilt$pvalue[orderInPlot][showInPlot],
     pch=".", xlab = expression(rank(p[i])), ylab=expression(p[i]))
abline(a=0, b=alpha/length(resFilt$pvalue), col="red3", lwd=2)
@

<>=
whichBH <- which(resFilt$pvalue[orderInPlot] <=
                 alpha * seq(along=resFilt$pvalue)/length(resFilt$pvalue))
whichBH <- seq_len(max(whichBH))
## Test some assertions:
## - whichBH is a contiguous set of integers from 1 to length(whichBH)
## - the genes selected by this graphical method coincide with those
## from p.adjust (i.e.
padjFilt)
stopifnot(length(whichBH)>0,
          identical(whichBH, seq(along=whichBH)),
          resFilt$padj[orderInPlot][ whichBH] <= alpha,
          resFilt$padj[orderInPlot][-whichBH] > alpha)
@

Schweder and Spj\o{}tvoll \cite{SchwederSpjotvoll1982} suggested a
diagnostic plot of the observed $p$-values which permits estimation of
the fraction of true null hypotheses. For a series of hypothesis tests
$H_1, \ldots, H_m$ with $p$-values $p_i$, they suggested plotting
\begin{equation}
  \left( 1-p_i, N(p_i) \right) \mbox{ for } i \in 1, \ldots, m,
\end{equation}
where $N(p)$ is the number of $p$-values greater than $p$. An
application of this diagnostic plot to \Robject{resFilt\$pvalue} is
shown in the right panel of Figure \ref{fig:multtest}. When all null
hypotheses are true, the $p$-values are each uniformly distributed in
$[0,1]$. Consequently, the cumulative distribution function of
$(p_1, \ldots, p_m)$ is expected to be close to the line $F(t)=t$. By
symmetry, the same applies to $(1 - p_1, \ldots, 1 - p_m)$. When
(without loss of generality) the first $m_0$ null hypotheses are true
and the other $m-m_0$ are false, the cumulative distribution function
of $(1-p_1, \ldots, 1-p_{m_0})$ is again expected to be close to the
line $F_0(t)=t$. The cumulative distribution function of
$(1-p_{m_0+1}, \ldots, 1-p_{m})$, on the other hand, is expected to be
close to a function $F_1(t)$ which stays below $F_0$ but shows a steep
increase towards 1 as $t$ approaches $1$. In practice, we do not know
which of the null hypotheses are true, so we can only observe a
mixture whose cumulative distribution function is expected to be close
to
\begin{equation}
  F(t) = \frac{m_0}{m} F_0(t) + \frac{m-m_0}{m} F_1(t).
\end{equation}
Such a situation is shown in the right panel of Figure
\ref{fig:multtest}. If $F_1(t)/F_0(t)$ is small for small $t$, then
the mixture fraction $\frac{m_0}{m}$ can be estimated by fitting a
line to the left-hand portion of the plot, and then noting its height
on the right. Such a fit is shown by the red line in the right panel
of Figure \ref{fig:multtest}.

<>=
j <- round(length(resFilt$pvalue)*c(1, .66))
px <- (1-resFilt$pvalue[orderInPlot[j]])
py <- ((length(resFilt$pvalue)-1):0)[j]
slope <- diff(py)/diff(px)
@

<>=
plot(1-resFilt$pvalue[orderInPlot],
     (length(resFilt$pvalue)-1):0, pch=".",
     xlab=expression(1-p[i]), ylab=expression(N(p[i])))
abline(a=0, b=slope, col="red3", lwd=2)
@

\begin{figure}[ht]
\centering
\includegraphics[width=.49\textwidth]{figure/sortedP-1}
\includegraphics[width=.49\textwidth]{figure/SchwederSpjotvoll-1}
\caption{\emph{Left:} illustration of the Benjamini-Hochberg multiple
  testing adjustment procedure \cite{BH:1995}. The black line shows
  the $p$-values ($y$-axis) versus their rank ($x$-axis), starting
  with the smallest $p$-value from the left, then the second smallest,
  and so on. Only the first \Sexpr{sum(showInPlot)} $p$-values are
  shown. The red line is a straight line with slope $\alpha/n$, where
  $n=\Sexpr{length(resFilt[["pvalue"]])}$ is the number of tests, and
  $\alpha=\Sexpr{alpha}$ is a target false discovery rate (FDR). FDR
  is controlled at the value $\alpha$ if the genes are selected that
  lie to the left of the rightmost intersection between the red and
  black lines: here, this results in \Sexpr{length(whichBH)} genes.
  \emph{Right:} Schweder and Spj\o{}tvoll plot, as described in the
  text. For both of these plots, the $p$-values
  \Robject{resFilt\$pvalue} from Section~\ref{sec:filtbycount} were
  used as a starting point.
  Analogously, one can produce these types of plots for any set of
  $p$-values, for instance those from the previous sections.}
\label{fig:multtest}
\end{figure}

\section{Frequently asked questions}

\subsection{How can I get support for DESeq2?}

We welcome questions about our software, and want to ensure that we
eliminate issues if and when they appear. We have a few requests to
optimize the process:

\begin{itemize}
\item all questions should take place on the Bioconductor support
  site: \url{https://support.bioconductor.org}, which serves as a
  repository of questions and answers. This helps to save the
  developers' time in responding to similar questions. Make sure to
  tag your post with ``deseq2''. In addition, it is often very helpful
  to describe the aim of your experiment.
\item before posting, first search the Bioconductor support site
  mentioned above for past threads which might have answered your
  question.
\item if you have a question about the behavior of a function, read
  the sections of the manual page for this function by typing a
  question mark and the function name, e.g. \Robject{?results}. We
  spend a lot of time documenting individual functions and the exact
  steps that the software is performing.
\item include all of your R code, especially the creation of the
  \Rclass{DESeqDataSet} and the design formula. Include complete
  warning or error messages, and conclude your message with the full
  output of \Robject{sessionInfo()}.
\item if possible, include the output of
  \Robject{as.data.frame(colData(dds))}, so that we can have a sense
  of the experimental setup. If this contains confidential
  information, you can replace the levels of those factors using
  \Rfunction{levels()}.
\end{itemize}

\subsection{Why are some $p$ values set to \texttt{NA}?}

See the details in Section~\ref{sec:moreInfo}.

\subsection{How can I get unfiltered DESeq results?}

Users can obtain unfiltered GLM results, i.e. without outlier removal
or independent filtering, with the following call:

<>=
dds <- DESeq(dds, minReplicatesForReplace=Inf)
res <- results(dds, cooksCutoff=FALSE, independentFiltering=FALSE)
@

In this case, the only $p$ values set to \Robject{NA} are those from
genes with all counts equal to zero.

\subsection{How do I use the variance stabilized or rlog transformed
  data for differential testing?}

The variance stabilizing and rlog transformations are provided for
applications other than differential testing, for example clustering
of samples or other machine learning applications. For differential
testing we recommend the \Rfunction{DESeq} function applied to raw
counts as outlined in Section~\ref{sec:de}.

\subsection{Can I use DESeq2 to analyze paired samples?}

Yes, you should use a multi-factor design which includes the sample
information as a term in the design formula. This will account for
differences between the samples while estimating the effect due to the
condition. The condition of interest should go at the end of the
design formula. See Section~\ref{sec:multifactor}.

\subsection{Can I run DESeq2 to contrast the levels of 100 groups?}

\deseqtwo{} will work with any kind of design specified using the R
formula. We encourage users to consider exploratory data analysis such
as principal components analysis as described in
Section~\ref{sec:pca}, rather than performing statistical testing of
all combinations of dozens of groups. Regarding the speed of fitting
very large models, note that each additional level of a factor in the
design formula adds another parameter to the GLM which is fit by
\deseqtwo{}. Users might consider first removing genes with very few
reads, e.g.\ genes with a row sum of 1, as this will speed up the
fitting procedure, as in the sketch below.
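For example, a minimal sketch (the cutoff of a row sum of 1 is
illustrative, not a strict rule):

<>=
# A minimal sketch of pre-filtering: keep genes with more than a
# single read across all samples before running DESeq().
dds <- dds[ rowSums(counts(dds)) > 1, ]
@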
\subsection{Can I use DESeq2 to analyze a dataset without replicates?}

If a \Rclass{DESeqDataSet} is provided with an experimental design
without replicates, a message is printed stating that the samples will
be treated as replicates for estimation of dispersion. More details
can be found in the manual page for \Rfunction{DESeq}.

\subsection{How can I include a continuous covariate in the design
  formula?}

Continuous covariates can be included in the design formula in the
same manner as factor covariates. Continuous covariates might make
sense in certain experiments, where a constant fold change might be
expected for each unit of the covariate. However, in many cases, more
meaningful results can be obtained by cutting continuous covariates
into a factor defined over a small number of bins (e.g. 3-5). In this
way, the average effect of each group is controlled for, regardless of
the trend over the continuous covariates. In R, \Rclass{numeric}
vectors can be converted into \Rclass{factor} objects using the
function \Rfunction{cut}.

\subsection{What are the exact steps performed by \Rfunction{DESeq()}?}

See the manual page for \Rfunction{DESeq}, which links to the
subfunctions which are called in order, where complete details are
listed.

\section{Acknowledgments}

We have benefited in the development of \deseqtwo{} from the help and
feedback of many individuals, including but not limited to: the
Bioconductor Core Team, Alejandro Reyes, Andrzej Oles, Aleksandra
Pekowska, Felix Klein, Vince Carey, Devon Ryan, Steve Lianoglou,
Jessica Larson, Christina Chaivorapol, Pan Du, Richard Bourgon, Willem
Talloen, Elin Videvall, Hanneke van Deutekom, Todd Burwell, Jesse
Rowley, Igor Dolgalev.

\section{Session Info}

<>=
toLatex(sessionInfo())
@

<>=
options(prompt="> ", continue="+ ")
@

\bibliography{library}

\end{document}