--- title: "Importing transcript abundance datasets with tximport" author: "Michael I. Love, Charlotte Soneson, Mark D. Robinson" date: "`r Sys.Date()`" package: "`r packageVersion('tximport')`" output: rmarkdown::html_document: highlight: pygments toc: true fig_width: 5 bibliography: library.bib vignette: > %\VignetteIndexEntry{Importing transcript abundance datasets with tximport} %\VignetteEngine{knitr::rmarkdown} --- Import and summarize transcript-level abundance estimates for transcript- and gene-level analysis with Bioconductor packages, such as *edgeR*, *DESeq2*, and *limma-voom*. The motivation and methods for the functions provided by the *tximport* package are described in the following article [@Soneson2015]: > Charlotte Soneson, Michael I. Love, Mark D. Robinson (2015): > Differential analyses for RNA-seq: transcript-level estimates > improve gene-level inferences. *F1000Research* > http://dx.doi.org/10.12688/f1000research.7563.1 In particular, the *tximport* pipeline offers the following benefits: (i) this approach corrects for potential changes in gene length across samples (e.g. from differential isoform usage) [@Trapnell2013Differential], (ii) some of the upstream quantification methods (*Salmon*, *Sailfish*, *kallisto*) are substantially faster and require less memory and disk usage compared to alignment-based methods that require creation and storage of BAM files, and (iii) it is possible to avoid discarding those fragments that can align to multiple genes with homologous sequence, thus increasing sensitivity [@Robert2015Errors]. ```{r, echo=FALSE} library(knitr) opts_chunk$set(tidy=TRUE,message=FALSE) ``` ## Import transcript-level estimates We begin by locating some prepared files that contain transcript abundance estimates for six samples, from the *tximportData* package. The *tximport* pipeline will be nearly identical for various quantification tools, usually only requiring one change the `type` argument. We begin with quantification files generated by the *Salmon* software, and later show the use of *tximport* with any of: * *Salmon* [@Patro2016Salmon] * *Sailfish* [@Patro2014Sailfish] * *kallisto* [@Bray2016Near] * *RSEM* [@Li2011RSEM] First, we locate the directory containing the files. (Here we use `system.file` to locate the package directory, but for a typical use, we would just provide a path, e.g. `"/path/to/dir"`.) ```{r} library(tximportData) dir <- system.file("extdata", package="tximportData") list.files(dir) ``` Next, we create a named vector pointing to the quantification files. We will create a vector of filenames first by reading in a table that contains the sample IDs, and then combining this with `dir` and `"quant.sf"`. ```{r} samples <- read.table(file.path(dir,"samples.txt"), header=TRUE) samples files <- file.path(dir, "salmon", samples$run, "quant.sf") names(files) <- paste0("sample",1:6) all(file.exists(files)) ``` Transcripts need to be associated with gene IDs for gene-level summarization. If that information is present in the files, we can skip this step. For Salmon, Sailfish, and kallisto the files only provide the transcript ID. We first make a data.frame called `tx2gene` with two columns: 1) transcript ID and 2) gene ID. The column names do not matter but this column order must be used. The transcript ID must be the same one used in the abundance files. Creating this `tx2gene` data.frame can be accomplished from a *TxDb* object and the `select` function from the *AnnotationDbi* package. 
Here we read in a pre-constructed `tx2gene` table:

```{r}
tx2gene <- read.csv(file.path(dir, "tx2gene.csv"))
head(tx2gene)
```

The *tximport* package has a single function for importing transcript-level estimates. The `type` argument is used to specify what software was used for estimation ("kallisto", "salmon", "sailfish", and "rsem" are implemented). A simple list with matrices, "abundance", "counts", and "length", is returned, where the transcript-level information is summarized to the gene level. The "length" matrix can be used to generate an offset matrix for downstream gene-level differential analysis of count matrices, as shown below.

**Note**: While *tximport* works without any dependencies, it is significantly faster to read in files using the *readr* package. If *tximport* detects that *readr* is installed, then it will use the `readr::read_tsv` function by default. A change from version 1.2 to 1.4 is that the reader is no longer specified by the user, but chosen automatically based on the availability of the *readr* package. Advanced users can still customize the import of files using the `importer` argument.

```{r}
library(tximport)
library(readr)
txi <- tximport(files, type="salmon", tx2gene=tx2gene)
names(txi)
head(txi$counts)
```

We could alternatively generate counts from abundances, using the argument `countsFromAbundance`: either scaled up to library size (`"scaledTPM"`), or additionally scaled using the average transcript length, averaged over samples, and then to library size (`"lengthScaledTPM"`). Using either of these approaches, the counts are not correlated with length, and so the length matrix should not be provided as an offset for downstream analysis packages. For more details on these approaches, see the article listed under `citation("tximport")`.
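For example, reusing the `files` and `tx2gene` objects from above, length-scaled counts could be generated as sketched here (the object name `txi.ls` is only illustrative):

```{r eval=FALSE}
# counts generated from abundances, scaled using the average transcript length
# over samples and the library size; do not also supply the length matrix as an
# offset to downstream packages when using this approach
txi.ls <- tximport(files, type="salmon", tx2gene=tx2gene,
                   countsFromAbundance="lengthScaledTPM")
head(txi.ls$counts)
```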
We can avoid gene-level summarization by setting `txOut=TRUE`, giving the original transcript-level estimates as a list of matrices.

```{r}
txi.tx <- tximport(files, type="salmon", txOut=TRUE, tx2gene=tx2gene)
```

These matrices can then be summarized using the function `summarizeToGene`, which gives the same list of matrices as using `txOut=FALSE` (the default) in the first `tximport` call.

```{r}
txi.sum <- summarizeToGene(txi.tx, tx2gene)
all.equal(txi$counts, txi.sum$counts)
```

## Salmon / Sailfish

Salmon or Sailfish `quant.sf` files can be imported by setting type to `"salmon"` or `"sailfish"`.

```{r}
files <- file.path(dir, "salmon", samples$run, "quant.sf")
names(files) <- paste0("sample", 1:6)
txi.salmon <- tximport(files, type="salmon", tx2gene=tx2gene)
head(txi.salmon$counts)
```

```{r}
files <- file.path(dir, "sailfish", samples$run, "quant.sf")
names(files) <- paste0("sample", 1:6)
txi.sailfish <- tximport(files, type="sailfish", tx2gene=tx2gene)
head(txi.sailfish$counts)
```

*Note*: for previous versions of Salmon or Sailfish, in which the `quant.sf` files start with comment lines, it is recommended to specify the `importer` argument as a function that skips the comment lines and reads from the header line onward. For example:

```{r eval=FALSE}
txi <- tximport("quant.sf", type="none", txOut=TRUE, txIdCol="Name",
                abundanceCol="TPM", countsCol="NumReads", lengthCol="Length",
                importer=function(x) read_tsv(x, skip=8))
```

## Salmon / Sailfish with inferential replicates

If inferential replicates (Gibbs or bootstrap samples) are present in the expected locations relative to the `quant.sf` file, and `txOut=TRUE`, *tximport* will import these as well. *tximport* will not summarize inferential replicate information to the gene level. Here we demonstrate using Salmon, run with only 5 Gibbs replicates (usually more Gibbs samples would be useful for estimating variability).

```{r}
files <- file.path(dir, "salmon_gibbs", samples$run, "quant.sf")
names(files) <- paste0("sample", 1:6)
txi.inf.rep <- tximport(files, type="salmon", txOut=TRUE)
names(txi.inf.rep)
names(txi.inf.rep$infReps)
dim(txi.inf.rep$infReps$sample1)
```

The *tximport* arguments `varReduce` and `dropInfReps` can be used to summarize the inferential replicates into a single variance per transcript and per sample, or to not import inferential replicates at all, respectively.
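A sketch of these two options, reusing the `files` vector pointing to the `salmon_gibbs` quantifications (the object names are only illustrative):

```{r eval=FALSE}
# summarize the inferential replicates to a single variance
# per transcript and per sample
txi.var <- tximport(files, type="salmon", txOut=TRUE, varReduce=TRUE)
names(txi.var)

# or skip importing the inferential replicates entirely
txi.noinf <- tximport(files, type="salmon", txOut=TRUE, dropInfReps=TRUE)
```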
## kallisto

kallisto `abundance.h5` files can be imported by setting type to `"kallisto"`. Note that this requires that you have the Bioconductor package [rhdf5](http://bioconductor.org/packages/rhdf5) installed. (Here we only demonstrate reading in transcript-level information.)

```{r}
files <- file.path(dir, "kallisto_boot", samples$run, "abundance.h5")
names(files) <- paste0("sample", 1:6)
txi.kallisto <- tximport(files, type="kallisto", txOut=TRUE)
head(txi.kallisto$counts)
```

## kallisto with inferential replicates

Because the `kallisto_boot` directory also contains inferential replicate information, and because `txOut=TRUE` was set, the inferential replicates were imported as well. As with Salmon, inferential replicate information will not be summarized to the gene level.

```{r}
names(txi.kallisto)
names(txi.kallisto$infReps)
dim(txi.kallisto$infReps$sample1)
```

## kallisto with TSV files

kallisto `abundance.tsv` files can be imported as well, but this is typically slower than the approach above.

```{r}
files <- file.path(dir, "kallisto", samples$run, "abundance.tsv")
names(files) <- paste0("sample", 1:6)
txi.kallisto.tsv <- tximport(files, type="kallisto", tx2gene=tx2gene)
head(txi.kallisto.tsv$counts)
```

## RSEM

RSEM `sample.genes.results` files can be imported by setting type to `"rsem"`.

```{r}
files <- file.path(dir, "rsem", samples$run, paste0(samples$run, ".genes.results"))
names(files) <- paste0("sample", 1:6)
txi.rsem <- tximport(files, type="rsem")
head(txi.rsem$counts)
```

## Use with downstream Bioconductor differential expression packages

**Note**: there are two suggested ways of importing estimates for use with gene-level differential expression methods. The first method, which we show below for *edgeR* and for *DESeq2*, is to use the estimated counts from the quantification tools, and additionally to use the transcript-level abundance estimates to calculate an offset that corrects for changes to the average transcript length across samples. The code examples below accomplish these steps for you, keeping track of the appropriate matrices and calculating these offsets. The second method is to use the `tximport` argument `countsFromAbundance="lengthScaledTPM"` or `"scaledTPM"`, and then to use the count matrix `txi$counts` directly, as you would a regular count matrix with these software packages.

## edgeR

An example of creating a `DGEList` for use with *edgeR* [@Robinson2010]:

```{r, results="hide", messages=FALSE}
library(edgeR)
```

```{r}
cts <- txi$counts
normMat <- txi$length
normMat <- normMat / exp(rowMeans(log(normMat)))
o <- log(calcNormFactors(cts/normMat)) + log(colSums(cts/normMat))
y <- DGEList(cts)
y$offset <- t(t(log(normMat)) + o)
# y is now ready for the dispersion estimation functions
# see edgeR User's Guide
```

## DESeq2

An example of creating a `DESeqDataSet` for use with *DESeq2* [@Love2014]:

```{r, results="hide", messages=FALSE}
library(DESeq2)
```

The user should make sure the rownames of `sampleTable` align with the colnames of `txi$counts`, if there are colnames. The best practice is to read `sampleTable` from a CSV file, and to construct `files` from a column of `sampleTable`, as was shown in the *tximport* examples above.

```{r}
sampleTable <- data.frame(condition=factor(rep(c("A","B"), each=3)))
rownames(sampleTable) <- colnames(txi$counts)
```

```{r}
dds <- DESeqDataSetFromTximport(txi, sampleTable, ~ condition)
# dds is now ready for DESeq()
# see DESeq2 vignette
```

## limma-voom

An example of creating a data object for use with *limma-voom* [@Law2014]. Because limma-voom does not use the offset matrix stored in `y$offset`, we recommend using the scaled counts generated from abundances, either `"scaledTPM"` or `"lengthScaledTPM"`:

```{r}
files <- file.path(dir, "salmon", samples$run, "quant.sf")
names(files) <- paste0("sample", 1:6)
txi <- tximport(files, type="salmon", tx2gene=tx2gene,
                countsFromAbundance="lengthScaledTPM")
library(limma)
y <- DGEList(txi$counts)
y <- calcNormFactors(y)
design <- model.matrix(~ condition, data=sampleTable)
v <- voom(y, design)
# v is now ready for lmFit()
# see limma User's Guide
```

## Acknowledgments

The development of *tximport* has benefited from contributions and suggestions from: [Rob Patro](https://twitter.com/nomad421) (inferential replicates import), [Andrew Parker Morgan](https://github.com/andrewparkermorgan) (RHDF5 support), [Ryan C. Thompson](https://github.com/DarwinAwardWinner) (RHDF5 support), [Matt Shirley](https://twitter.com/mdshw5) (ignoreTxVersion), [Stephen Turner](https://twitter.com/genetics_blog), [Richard Smith-Unna](https://twitter.com/blahah404), [Rory Kirchner](https://twitter.com/RoryKirchner), and [Martin Morgan](https://twitter.com/mt_morgan).

## Session info

```{r}
sessionInfo()
```

## References