%\VignetteIndexEntry{Overview Vignette}
%\VignetteDepends{rjson, ggplot2, limma, edgeR, DESeq2, GOstats, GO.db, annotate, pheatmap}
%\VignetteKeywords{compute cluster, pipeline, reports}
%\VignettePackage{systemPipeR}

% LaTeX compile
% Sweave("systemPipeR.Rnw"); system("pdflatex systemPipeR.tex; bibtex systemPipeR; pdflatex systemPipeR.tex; pdflatex systemPipeR.tex")
% echo 'Sweave("systemPipeR.Rnw")' | R --slave; echo 'Stangle("systemPipeR.Rnw")' | R --slave; pdflatex systemPipeR.tex; bibtex systemPipeR; pdflatex systemPipeR.tex

\documentclass{article}
<<style, eval=TRUE, echo=FALSE, results=tex>>=
BiocStyle::latex(use.unsrturl=FALSE)
@
\usepackage[authoryear,round]{natbib}
\bibliographystyle{plainnat}
\def\bibsection{\section{References}}
\usepackage{graphicx}
\usepackage{color}
\usepackage{hyperref}
\usepackage{url}
\usepackage{float}
%\newcommand{\comment}[1]{}
%\newcommand{\Rfunction}[1]{{\texttt{#1}}}
%\newcommand{\Robject}[1]{{\texttt{#1}}}
%\newcommand{\Rpackage}[1]{{\textit{#1}}}
%\newcommand{\Rmethod}[1]{{\texttt{#1}}}
\newcommand{\Rfunarg}[1]{{\texttt{#1}}}
%\newcommand{\Rclass}[1]{{\textit{#1}}}

% Define header and footer area with fancyhdr package (see: http://www.ctan.org/tex-archive/macros/latex/contrib/fancyhdr/fancyhdr.pdf)
\usepackage{fancyhdr}
\pagestyle{fancy}
\fancyhead{}
\fancyfoot{}
\rhead{\nouppercase{\leftmark}}
\lhead{\textit{systemPipeR Manual}}
\rfoot{\thepage}

<<echo=FALSE>>=
options(width=95)
unlink("test.db")
@
%\parindent 0in

\begin{document}
\title{\Rpackage{systemPipeR}: NGS workflow and report generation environment}
\author{Thomas Girke \\
 Email contact: thomas.girke@ucr.edu}
\maketitle

\section{Introduction}
\Rpackage{systemPipeR} provides utilities for building \textit{end-to-end} analysis workflows with automated report generation for next-generation sequencing (NGS) applications such as RNA-Seq, ChIP-Seq, VAR-Seq and many others \citep{Girke2014-oy}. An important feature is support for running command-line software, such as NGS aligners, on both single machines and compute clusters. This includes both interactive job submissions and batch submissions to the queuing systems of clusters. For instance, \Rpackage{systemPipeR} can be used with most command-line aligners such as \Robject{BWA} \citep{Li2013-oy, Li2009-oc}, \Robject{TopHat 2} \citep{Kim2013-vg} and \Robject{Bowtie 2} \citep{Langmead2012-bs}, as well as the R-based NGS aligner \Rpackage{Rsubread} \citep{Liao2013-bn}. Efficient handling of complex sample sets and experimental designs is facilitated by a well-defined sample annotation infrastructure, which improves the reproducibility and user-friendliness of many typical analysis workflows in the NGS area \citep{Lawrence2013-kt}.

A central concept for designing workflows within the \Rpackage{systemPipeR} environment is the use of sample management containers called \Robject{SYSargs}. Instances of this S4 object class are constructed by the \Rfunction{systemArgs} function from two simple tabular files: a \Robject{targets} file and a \Robject{param} file. The latter is optional for workflow steps lacking command-line software. Typically, a \Robject{SYSargs} instance stores all sample-level inputs as well as the paths to the corresponding outputs generated by command-line or R-based software producing sample-level output files, such as read preprocessors (trimmed/filtered FASTQ files), aligners (SAM/BAM files), variant callers (VCF/BCF files) or peak callers (BED/WIG files). Each sample-level input/output file operation uses its own \Robject{SYSargs} instance. The outpaths of one \Robject{SYSargs} instance usually define the sample inputs for the next \Robject{SYSargs} instance. This connectivity is established by writing the outpaths with the \Rfunction{writeTargetsout} function to a new targets file that serves as input to the next \Rfunction{systemArgs} call. By chaining several \Robject{SYSargs} steps together one can construct complex workflows involving many sample-level input/output file operations with any combination of command-line or R-based software.
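\noindent The following minimal sketch illustrates this chaining pattern with the read trimming and alignment steps covered in the Workflow section below; the \Robject{param} and \Robject{targets} file names are the ones used there and serve only as illustration.
<<chaining_sketch, eval=FALSE>>=
## Minimal sketch of chaining two workflow steps via a new targets file
args <- systemArgs(sysma="trim.param", mytargets="targets.txt") # Step 1: read trimming
preprocessReads(args=args, Fct="trimLRPatterns(Rpattern='GCCCGGGTAA', subject=fq)", 
                batchsize=100000, overwrite=TRUE, compress=TRUE)
writeTargetsout(x=args, file="targets_trim.txt") # Outpaths of step 1 become inputs of step 2
args <- systemArgs(sysma="tophat.param", mytargets="targets_trim.txt") # Step 2: alignment
@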
The intended way of running \Rpackage{systemPipeR} workflows is via \Robject{*.Rnw} or \Robject{*.Rmd} files, which can be executed either line-wise in interactive mode or with a single command from R or the command-line using a \Robject{Makefile}. This way comprehensive and reproducible analysis reports in PDF or HTML format can be generated in a fully automated manner. Templates for setting up custom project reports are provided as \Robject{*.Rnw} files in the \Robject{vignettes} subdirectory of this package. The corresponding PDFs of these report templates are linked here: \href{https://github.com/tgirke/systemPipeR/blob/master/vignettes/systemPipeRNAseq.pdf?raw=true}{systemPipeRNAseq}, \href{https://github.com/tgirke/systemPipeR/blob/master/vignettes/systemPipeChIPseq.pdf?raw=true}{systemPipeChIPseq} and \href{https://github.com/tgirke/systemPipeR/blob/master/vignettes/systemPipeVARseq.pdf?raw=true}{systemPipeVARseq}. To work with \Robject{*.Rnw} or \Robject{*.Rmd} files efficiently, basic knowledge of \href{https://www.stat.uni-muenchen.de/~leisch/Sweave/}{Sweave} or \href{http://yihui.name/knitr/}{knitr} and \href{http://www.latex-project.org/}{LaTeX} or \href{http://rmarkdown.rstudio.com/}{Markdown} is required.
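\noindent As a minimal sketch, one of the report templates can be rendered non-interactively with a single call from R, assuming the template file resides in the current working directory:
<<render_sketch, eval=FALSE>>=
## Sketch: build the PDF report of the RNA-Seq template in one step
Sweave("systemPipeRNAseq.Rnw")
system("pdflatex systemPipeRNAseq.tex; bibtex systemPipeRNAseq; pdflatex systemPipeRNAseq.tex")
@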
\tableofcontents

\section{Getting Started}

\subsection{Installation}
The R software for running \Rpackage{systemPipeR} can be downloaded from CRAN (\url{http://cran.at.r-project.org/}). The \Rpackage{systemPipeR} package can be installed from R with the \Rfunction{biocLite} install command.
<<install, eval=FALSE>>=
source("http://bioconductor.org/biocLite.R") # Sources the biocLite.R installation script
biocLite("systemPipeR") # Installs the package
@

\subsection{Loading the Package and Documentation}
<<documentation, eval=FALSE>>=
library("systemPipeR") # Loads the package
library(help="systemPipeR") # Lists all functions and classes
vignette("systemPipeR") # Opens this PDF manual from R
@

\subsection{Sample FASTQ Files}
The mini sample FASTQ files used by this overview vignette as well as the associated workflow reporting vignettes can be downloaded from \href{http://biocluster.ucr.edu/~tgirke/projects/systemPipeR_test_data.zip}{\textcolor{blue}{here}}. The chosen data set \href{http://www.ncbi.nlm.nih.gov/sra/?term=SRP010938}{\textcolor{blue}{SRP010938}} contains 18 paired-end (PE) read sets from \textit{Arabidopsis thaliana} \citep{Howard2013-fq}. To minimize processing time during testing, each FASTQ file has been subsetted to 90,000--100,000 randomly sampled PE reads that map to the first 100,000 nucleotides of each chromosome of the \textit{A. thaliana} genome. The corresponding reference genome sequence (FASTA) and its GFF annotation files (provided in the same download) have been truncated accordingly. This way the entire test sample data set requires less than 200 MB of storage space. PE reads were chosen for this test data set to provide flexibility, because they can be used for testing analysis routines requiring either single-end (SE) reads or PE reads.
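\noindent For convenience, the test data can also be downloaded and unpacked from within R; a minimal sketch, assuming write permission in the current working directory:
<<download_testdata, eval=FALSE>>=
## Sketch: download and unpack the test data from within R
download.file("http://biocluster.ucr.edu/~tgirke/projects/systemPipeR_test_data.zip", 
              "systemPipeR_test_data.zip")
unzip("systemPipeR_test_data.zip")
@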
\section{Structure of \Robject{targets} file}
The \Robject{targets} file defines all input files (\textit{e.g.} FASTQ, BAM, BCF) and sample comparisons of an analysis workflow. The following shows the format of a sample \Robject{targets} file provided by this package. In a \Robject{targets} file with a single type of input files, here FASTQ files of single-end (SE) reads, the first three columns are mandatory, including their column names, while \Robject{targets} files for FASTQ files of PE reads require four mandatory columns. All subsequent columns are optional and any number of additional columns can be added as needed.
<<targetsSE, eval=TRUE>>=
library(systemPipeR)
targetspath <- system.file("extdata", "targets.txt", package="systemPipeR")
read.delim(targetspath, comment.char = "#")
@
\noindent Structure of \Robject{targets} file for paired-end (PE) samples.
<<targetsPE, eval=TRUE>>=
targetspath <- system.file("extdata", "targetsPE.txt", package="systemPipeR")
read.delim(targetspath, comment.char = "#")[1:2,1:6]
@
\noindent Sample comparisons are defined in the header lines of the \Robject{targets} file starting with '\texttt{\# }'. The function \Rfunction{readComp} imports the comparisons and stores them in a \Robject{list}. Alternatively, \Rfunction{readComp} can obtain the comparison information from the corresponding \Robject{SYSargs} object (see below). Note, the header lines are optional in \Robject{targets} files. They are mainly useful for controlling comparative analyses according to certain biological expectations, such as simple pairwise comparisons in RNA-Seq experiments.
<<comparisons, eval=TRUE>>=
readComp(file=targetspath, format="vector", delim="-")
@
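\noindent Once a \Robject{SYSargs} object has been constructed (see next section), the same comparison information can be retrieved from it directly; a brief sketch, assuming the \Robject{args} object created below:
<<comparisons_sysargs, eval=FALSE>>=
## Sketch: import the sample comparisons from a SYSargs object instead of a file
readComp(args, format="vector", delim="-")
@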
\section{Structure of \Robject{param} file and \Robject{SYSargs} container}
\noindent The \Robject{param} file defines the parameters of the command-line software. The following shows the format of a sample \Robject{param} file provided by this package.
<<param, eval=TRUE>>=
parampath <- system.file("extdata", "tophat.param", package="systemPipeR")
read.delim(parampath, comment.char = "#")
@
\noindent The \Rfunction{systemArgs} function imports the definitions of both the \Robject{param} file and the \Robject{targets} file, and stores all relevant information as a \Robject{SYSargs} object. To run the pipeline without command-line software, one can omit the \Robject{param} file by assigning \Rfunarg{NULL} to the \Rfunarg{sysma} argument of \Rfunction{systemArgs}. In addition, one can start the \Rpackage{systemPipeR} workflow with pregenerated BAM files by providing a targets file where the \Robject{FileName} column gives the paths to the BAM files, again with \Rfunarg{sysma=NULL}.
<<sysargs, eval=TRUE>>=
args <- systemArgs(sysma=parampath, mytargets=targetspath)
args
@
\noindent Several accessor functions are available that are named after the slot names of the \Robject{SYSargs} object class.
<<sysargs_accessors, eval=TRUE>>=
names(args)
modules(args)
cores(args)
outpaths(args)[1]
sysargs(args)[1]
@
\noindent The content of the \Robject{param} file can be returned as a JSON object as follows (requires the \Rpackage{rjson} package).
<<json, eval=TRUE>>=
systemArgs(sysma=parampath, mytargets=targetspath, type="json")
@
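\noindent A brief sketch of the two special cases mentioned above; \Robject{targets\_bam.txt} stands for a hypothetical targets file whose \Robject{FileName} column lists the paths to pregenerated BAM files:
<<sysargs_noparam, eval=FALSE>>=
## Sketch: construct a SYSargs instance without command-line software (no param file)
args <- systemArgs(sysma=NULL, mytargets=targetspath)
## Sketch: start the workflow from pregenerated BAM files referenced in a
## hypothetical targets_bam.txt
args <- systemArgs(sysma=NULL, mytargets="targets_bam.txt")
@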
\section{Workflow}

\subsection{Define environment settings and samples}
Load package:
<<workflow_load, eval=TRUE>>=
library(systemPipeR)
@
\noindent Construct \Robject{SYSargs} object from \Robject{param} and \Robject{targets} files.
<<construct_sysargs, eval=FALSE>>=
args <- systemArgs(sysma="trim.param", mytargets="targets.txt")
@

\subsection{Read Preprocessing}
The function \Rfunction{preprocessReads} allows applying predefined or custom read preprocessing functions to all FASTQ files referenced in a \Robject{SYSargs} container, such as quality filtering or adaptor trimming routines. The paths to the resulting output FASTQ files are stored in the \Robject{outpaths} slot of the \Robject{SYSargs} object. Internally, \Rfunction{preprocessReads} uses the \Rfunction{FastqStreamer} function from the \Rpackage{ShortRead} package to stream through large FASTQ files in a memory-efficient manner. The following example performs adaptor trimming with the \Rfunction{trimLRPatterns} function from the \Rpackage{Biostrings} package. After the trimming step a new targets file is generated (here \Robject{targets\_trim.txt}) containing the paths to the trimmed FASTQ files. The new targets file can be used for the next workflow step with an updated \Robject{SYSargs} instance, \textit{e.g.} running the NGS alignments using the trimmed FASTQ files.
<<preprocessing, eval=FALSE>>=
preprocessReads(args=args, Fct="trimLRPatterns(Rpattern='GCCCGGGTAA', subject=fq)", 
                batchsize=100000, overwrite=TRUE, compress=TRUE)
writeTargetsout(x=args, file="targets_trim.txt")
@
The following example shows how one can design a custom read preprocessing function using utilities provided by the \Rpackage{ShortRead} package, and then run it in batch mode with the \Rfunction{preprocessReads} function.
<<custom_preprocessing, eval=FALSE>>=
filterFct <- function(fq) {
    filter1 <- nFilter(threshold=1) # Keeps only reads without Ns
    filter2 <- polynFilter(threshold=20, nuc=c("A","T","G","C")) # Removes low complexity reads
    filter <- compose(filter1, filter2)
    fq[filter(fq)]
}
preprocessReads(args=args, Fct="filterFct(fq)", batchsize=100000)
@

\subsection{FASTQ quality report}
The following \Rfunction{seeFastq} and \Rfunction{seeFastqPlot} functions generate and plot a series of useful quality statistics for a set of FASTQ files, including per cycle quality box plots, base proportions, base-level quality trends, relative k-mer diversity, length and occurrence distribution of reads, number of reads above quality cutoffs and mean quality distribution.
<<fastq_report, eval=FALSE>>=
fqlist <- seeFastq(fastq=infile1(args), batchsize=10000, klength=8)
pdf("./results/fastqReport.pdf", height=18, width=4*length(fqlist))
seeFastqPlot(fqlist)
dev.off()
@
\begin{figure}[H]
  \centering
  \includegraphics[width=18cm]{fastqReport.pdf}
  \caption{QC report for 18 FASTQ files.}
  \label{fig:fastqreport}
\end{figure}

\subsection{Alignment with Tophat 2}
Build Bowtie 2 index.
<<bowtie_index, eval=FALSE>>=
args <- systemArgs(sysma="tophat.param", mytargets="targets.txt")
moduleload(modules(args)) # Skip if module system is not available
system("bowtie2-build ./data/tair10.fasta ./data/tair10.fasta")
@
\noindent Execute \Robject{SYSargs} on a single machine without submitting to a queuing system of a compute cluster. This way the input FASTQ files will be processed sequentially. If available, multiple CPU cores can be used for processing each file. The number of CPU cores (here 4) to use for each process is defined in the \Robject{*.param} file. With \Rfunction{cores(args)} one can return this value from the \Robject{SYSargs} object. Note, if a module system is not installed or used, then the corresponding \Robject{*.param} file needs to be edited accordingly, by either providing an empty field in the line(s) starting with \Robject{module} or by deleting these lines.
<<run_tophat, eval=FALSE>>=
bampaths <- runCommandline(args=args)
@
\noindent Alternatively, the computation can be greatly accelerated by processing many files in parallel using several compute nodes of a cluster, where a scheduling/queuing system is used for load balancing.
To avoid over-subscription of CPU cores on the compute nodes, the value from \Rfunction{cores(args)} is passed on to the submission command, here \Rfunarg{nodes} in the \Rfunction{resources} list object. The number of independent parallel cluster processes is defined under the \Rfunarg{Njobs} argument. The following example runs 18 processes in parallel, each using 4 CPU cores. If the resources available on a cluster allow running all 18 processes at the same time, the shown sample submission will utilize a total of 72 CPU cores. Note, \Rfunction{clusterRun} can be used with most queueing systems, as it is based on utilities from the \Rpackage{BatchJobs} package, which supports the use of template files (\Rfunarg{*.tmpl}) for defining the run parameters of different schedulers. To run the following code, one needs both a conf file (see \Robject{.BatchJobs.R} samples \href{https://code.google.com/p/batchjobs/wiki/DortmundUsage}{{\textcolor{blue}{here}}}) and a template file (see \Rfunarg{*.tmpl} samples \href{https://github.com/tudo-r/BatchJobs/tree/master/examples}{{\textcolor{blue}{here}}}) for the queueing system available on the user's system. The following example uses the sample conf and template files for the Torque scheduler provided by this package.
<<cluster_run, eval=FALSE>>=
file.copy(system.file("extdata", ".BatchJobs.R", package="systemPipeR"), ".")
file.copy(system.file("extdata", "torque.tmpl", package="systemPipeR"), ".")
resources <- list(walltime="20:00:00", nodes=paste0("1:ppn=", cores(args)), memory="10gb")
reg <- clusterRun(args, conffile=".BatchJobs.R", template="torque.tmpl", Njobs=18, runid="01", 
                  resourceList=resources)
@
\noindent Useful commands for monitoring the progress of submitted jobs.
<<cluster_monitor, eval=FALSE>>=
showStatus(reg)
file.exists(outpaths(args))
sapply(1:length(args), function(x) loadResult(reg, x)) # Works after job completion
@

\subsection{Read and alignment count stats}
\noindent Generate a table of read and alignment counts for all samples.
<<align_stats, eval=FALSE>>=
read_statsDF <- alignStats(args)
write.table(read_statsDF, "results/alignStats.xls", row.names=FALSE, quote=FALSE, sep="\t")
@
\noindent The following shows the first four lines of the sample alignment stats file provided by the \Rpackage{systemPipeR} package. For simplicity the number of PE reads is multiplied here by 2 to approximate proper alignment frequencies where each read in a pair is counted.
<<align_stats_view, eval=TRUE>>=
read.table(system.file("extdata", "alignStats.xls", package="systemPipeR"), header=TRUE)[1:4,]
@

\subsection{Create symbolic links for viewing BAM files in IGV}
The genome browser IGV supports reading of indexed/sorted BAM files via web URLs. This way the creation of unnecessary copies of these large files can be avoided. To enable this approach, an HTML directory with http access needs to be available in the user account (\textit{e.g.} \Rfunarg{\textasciitilde/public\_html}) of a system. If this is not the case, then the BAM files need to be moved or copied to the system where IGV runs. In the following, \Rfunarg{htmldir} defines the path to the HTML directory with http access where the symbolic links to the BAM files will be stored. The corresponding URLs will be written to a text file specified under the \Rfunarg{urlfile} argument.
<<symbolic_links, eval=FALSE>>=
symLink2bam(sysargs=args, htmldir=c("~/.html/", "somedir/"), 
            urlbase="http://myserver.edu/~username/", urlfile="IGVurl.txt")
@

\subsection{Alternative NGS Aligners}

\subsubsection{Alignment with Bowtie 2 (\textit{e.g.} for miRNA profiling)}
The following example runs Bowtie 2 as a single process without submitting it to a cluster.
<<bowtie2, eval=FALSE>>=
args <- systemArgs(sysma="bowtieSE.param", mytargets="targets.txt")
moduleload(modules(args)) # Skip if module system is not available
bampaths <- runCommandline(args=args)
@
\noindent Alternatively, submit the job to the compute nodes.
<<bowtie2_cluster, eval=FALSE>>=
qsubargs <- getQsubargs(queue="batch", cores=cores(args), memory="mem=10gb", time="walltime=20:00:00")
(joblist <- qsubRun(args=args, qsubargs=qsubargs, Nqsubs=18, package="systemPipeR"))
@

\subsubsection{Alignment with BWA-MEM (\textit{e.g.} for VAR-Seq)}
The following example runs BWA-MEM as a single process without submitting it to a cluster.
<<bwa, eval=FALSE>>=
args <- systemArgs(sysma="bwa.param", mytargets="targets.txt")
moduleload(modules(args)) # Skip if module system is not available
system("bwa index -a bwtsw ./data/tair10.fasta") # Indexes reference genome
bampaths <- runCommandline(args=args)
@

\subsubsection{Alignment with Rsubread (\textit{e.g.} for RNA-Seq)}
The following example shows how one can use the R-based aligner \Rpackage{Rsubread} within the \Rpackage{systemPipeR} environment; the same approach applies to other R-based functions that read from input files and write to output files.
<<rsubread, eval=FALSE>>=
library(Rsubread)
args <- systemArgs(sysma="rsubread.param", mytargets="targets.txt")
buildindex(basename=reference(args), reference=reference(args)) # Build indexed reference genome
align(index=reference(args), readfile1=infile1(args), input_format="FASTQ", 
      output_file=outfile1(args), output_format="SAM", nthreads=8, indels=1, TH1=2)
for(i in seq_along(outfile1(args))) asBam(file=outfile1(args)[i], 
    destination=gsub(".sam", "", outfile1(args)[i]), overwrite=TRUE, indexDestination=TRUE)
@

\subsection{Read counting for mRNA profiling experiments}
Create \Robject{txdb} (needs to be done only once).
<<make_txdb, eval=FALSE>>=
library(GenomicFeatures)
txdb <- makeTranscriptDbFromGFF(file="data/tair10.gff", format="gff", dataSource="TAIR", 
                                species="A. thaliana")
saveDb(txdb, file="./data/tair10.sqlite")
@
\noindent Read counting with \Rfunction{summarizeOverlaps} in parallel mode with multiple cores.
<<read_counting_mrna, eval=FALSE>>=
library(BiocParallel)
txdb <- loadDb("./data/tair10.sqlite")
eByg <- exonsBy(txdb, by="gene")
bfl <- BamFileList(outpaths(args), yieldSize=50000, index=character())
multicoreParam <- MulticoreParam(workers=4); register(multicoreParam); registered()
counteByg <- bplapply(bfl, function(x) summarizeOverlaps(eByg, x, mode="Union", 
             ignore.strand=TRUE, inter.feature=TRUE, singleEnd=TRUE)) 
# Note: for strand-specific RNA-Seq set 'ignore.strand=FALSE' and for PE data set 'singleEnd=FALSE'
countDFeByg <- sapply(seq_along(counteByg), function(x) assays(counteByg[[x]])$counts)
rownames(countDFeByg) <- names(rowData(counteByg[[1]])); colnames(countDFeByg) <- names(bfl)
rpkmDFeByg <- apply(countDFeByg, 2, function(x) returnRPKM(counts=x, ranges=eByg))
write.table(countDFeByg, "results/countDFeByg.xls", col.names=NA, quote=FALSE, sep="\t")
write.table(rpkmDFeByg, "results/rpkmDFeByg.xls", col.names=NA, quote=FALSE, sep="\t")
@

\subsection{Read counting for miRNA profiling experiments}
Download miRNA genes from miRBase.
<<read_counting_mirna, eval=FALSE>>=
library(rtracklayer) # Provides import.gff
system("wget ftp://mirbase.org/pub/mirbase/19/genomes/My_species.gff3 -P ./data/") # Replace My_species with the GFF3 file of the organism of interest
gff <- import.gff("./data/My_species.gff3", asRangedData=FALSE)
gff <- split(gff, elementMetadata(gff)$ID)
bams <- names(bampaths); names(bams) <- targets$SampleName # 'bampaths' and 'targets' are from the alignment steps above
bfl <- BamFileList(bams, yieldSize=50000, index=character())
countDFmiR <- summarizeOverlaps(gff, bfl, mode="Union", ignore.strand=FALSE, inter.feature=FALSE) 
# Note: inter.feature=FALSE is important since pre- and mature miRNA ranges overlap
rpkmDFmiR <- apply(assays(countDFmiR)$counts, 2, function(x) returnRPKM(counts=x, ranges=gff))
write.table(assays(countDFmiR)$counts, "results/countDFmiR.xls", col.names=NA, quote=FALSE, sep="\t")
write.table(rpkmDFmiR, "results/rpkmDFmiR.xls", col.names=NA, quote=FALSE, sep="\t")
@

\subsection{Correlation analysis of samples}
The following computes the sample-wise Spearman correlation coefficients from the RPKM normalized expression values. After transformation to a distance matrix, hierarchical clustering is performed with the \Rfunction{hclust} function and the result is plotted as a dendrogram (\href{run:./results/sample_tree.pdf}{sample\_tree.pdf}).
<<sample_tree, eval=TRUE, fig=TRUE, include=FALSE>>=
library(ape)
rpkmDFeBygpath <- system.file("extdata", "rpkmDFeByg.xls", package="systemPipeR")
rpkmDFeByg <- read.table(rpkmDFeBygpath, check.names=FALSE)
rpkmDFeByg <- rpkmDFeByg[rowMeans(rpkmDFeByg) > 50,]
d <- cor(rpkmDFeByg, method="spearman")
hc <- hclust(as.dist(1-d))
plot.phylo(as.phylo(hc), type="p", edge.col="blue", edge.width=2, show.node.label=TRUE, 
           no.margin=TRUE)
@
\begin{figure}[H]
  \centering
  \includegraphics[width=6cm]{systemPipeR-sample_tree.pdf}
  \caption{Correlation dendrogram of samples.}
  \label{fig:sample_tree}
\end{figure}

\subsection{DEG analysis with \Rpackage{edgeR}}
The following \Rfunction{run\_edgeR} function is a convenience wrapper for identifying differentially expressed genes (DEGs) in batch mode with \Rpackage{edgeR}'s GLM method \citep{Robinson2010-uk} for any number of pairwise sample comparisons specified under the \Rfunarg{cmp} argument. Users are strongly encouraged to consult the \href{http://www.bioconductor.org/packages/devel/bioc/vignettes/edgeR/inst/doc/edgeRUsersGuide.pdf}{\Rpackage{edgeR} vignette} for more detailed information on this topic and how to properly run \Rpackage{edgeR} on data sets with more complex experimental designs.
<<edger_setup, eval=TRUE>>=
targets <- read.delim(targetspath, comment.char="#")
cmp <- readComp(file=targetspath, format="matrix", delim="-")
cmp[[1]]
@
<<edger_run, eval=TRUE>>=
countDFeBygpath <- system.file("extdata", "countDFeByg.xls", package="systemPipeR")
countDFeByg <- read.delim(countDFeBygpath, row.names=1)
edgeDF <- run_edgeR(countDF=countDFeByg, targets=targets, cmp=cmp[[1]], independent=FALSE, mdsplot="")
@
Filter and plot DEG results for up- and down-regulated genes. Because of the small size of the toy data set used by this vignette, the \Rfunarg{FDR} value has been set to a relatively high threshold (here 10\%). More commonly used \Rfunarg{FDR} cutoffs are 1\% or 5\%.
<<DEGcounts, eval=TRUE, fig=TRUE, include=FALSE>>=
DEG_list <- filterDEGs(degDF=edgeDF, filter=c(Fold=2, FDR=10))
@
\begin{figure}[H]
  \centering
  \includegraphics[width=10cm]{systemPipeR-DEGcounts.pdf}
  \caption{Up- and down-regulated DEGs identified with \Rpackage{edgeR}.}
  \label{fig:DEGcounts}
\end{figure}
<<deg_summary, eval=TRUE>>=
names(DEG_list)
DEG_list$Summary[1:4,]
@

\subsection{DEG analysis with \Rpackage{DESeq2}}
The following \Rfunction{run\_DESeq2} function is a convenience wrapper for identifying DEGs in batch mode with \Rpackage{DESeq2} \citep{Love2014-sh} for any number of pairwise sample comparisons specified under the \Rfunarg{cmp} argument. Users are strongly encouraged to consult the \href{http://www.bioconductor.org/packages/devel/bioc/vignettes/DESeq2/inst/doc/DESeq2.pdf}{\Rpackage{DESeq2} vignette} for more detailed information on this topic and how to properly run \Rpackage{DESeq2} on data sets with more complex experimental designs.
<<deseq2_run, eval=TRUE>>=
degseqDF <- run_DESeq2(countDF=countDFeByg, targets=targets, cmp=cmp[[1]], independent=FALSE)
@
Filter and plot DEG results for up- and down-regulated genes.
<<DEGcounts2, eval=TRUE, fig=TRUE, include=FALSE>>=
DEG_list2 <- filterDEGs(degDF=degseqDF, filter=c(Fold=2, FDR=10))
@
\begin{figure}[H]
  \centering
  \includegraphics[width=10cm]{systemPipeR-DEGcounts2.pdf}
  \caption{Up- and down-regulated DEGs identified with \Rpackage{DESeq2}.}
  \label{fig:DEGcounts2}
\end{figure}

\subsection{Venn Diagrams}
The function \Rfunction{overLapper} can compute Venn intersects for large numbers of sample sets (up to 20 or more) and \Rfunction{vennPlot} can plot 2-5 way Venn diagrams. A useful feature is the possibility to combine the counts from several Venn comparisons with the same number of sample sets in a single Venn diagram (here for 4 up and down DEG sets).
<<vennplot, eval=TRUE, fig=TRUE, include=FALSE>>=
vennsetup <- overLapper(DEG_list$Up[6:9], type="vennsets")
vennsetdown <- overLapper(DEG_list$Down[6:9], type="vennsets")
vennPlot(list(vennsetup, vennsetdown), mymain="", mysub="", colmode=2, ccol=c("blue", "red"))
@
\begin{figure}[H]
  \centering
  \includegraphics[width=14cm]{systemPipeR-vennplot.pdf}
  \caption{Venn Diagram for 4 Up and Down DEG Sets.}
  \label{fig:vennplot}
\end{figure}

\subsection{GO term enrichment analysis of DEGs}

\subsubsection{Obtain gene-to-GO mappings}
The following shows how to obtain gene-to-GO mappings from \Rpackage{biomaRt} (here for \textit{A. thaliana}) and how to organize them for the downstream GO term enrichment analysis. Alternatively, the gene-to-GO mappings can be obtained for many organisms from Bioconductor's \Robject{*.db} genome annotation packages or GO annotation files provided by various genome databases. For each annotation this relatively slow preprocessing step needs to be performed only once. Subsequently, the preprocessed data can be loaded with the \Rfunction{load} function as shown in the next subsection.
<<get_go_biomart, eval=FALSE>>=
library("biomaRt")
listMarts() # To choose BioMart database
m <- useMart("ENSEMBL_MART_PLANT"); listDatasets(m)
m <- useMart("ENSEMBL_MART_PLANT", dataset="athaliana_eg_gene")
listAttributes(m) # Choose data types you want to download
go <- getBM(attributes=c("go_accession", "tair_locus", "go_namespace_1003"), mart=m)
go <- go[go[,3]!="",]; go[,3] <- as.character(go[,3])
dir.create("./data/GO")
write.table(go, "data/GO/GOannotationsBiomart_mod.txt", quote=FALSE, row.names=FALSE, 
            col.names=FALSE, sep="\t")
catdb <- makeCATdb(myfile="data/GO/GOannotationsBiomart_mod.txt", lib=NULL, org="", 
                   colno=c(1,2,3), idconv=NULL)
save(catdb, file="data/GO/catdb.RData")
@

\subsubsection{Batch GO term enrichment analysis}
Apply the enrichment analysis to the DEG sets obtained in the above differential expression analysis. Note, in the following example the \Rfunarg{FDR} filter is set to an unreasonably high value, simply because of the small size of the toy data set used in this vignette. Batch enrichment analysis of many gene sets is performed with the \Rfunction{GOCluster\_Report} function. When \Rfunarg{method="all"}, it returns all GO terms passing the p-value cutoff specified under the \Rfunarg{cutoff} argument. When \Rfunarg{method="slim"}, it returns only the GO terms specified under the \Rfunarg{myslimv} argument. The given example shows how one can obtain such a GO slim vector from BioMart for a specific organism.
<<go_enrichment, eval=FALSE>>=
load("data/GO/catdb.RData")
DEG_list <- filterDEGs(degDF=edgeDF, filter=c(Fold=2, FDR=50), plot=FALSE)
up_down <- DEG_list$UporDown; names(up_down) <- paste(names(up_down), "_up_down", sep="")
up <- DEG_list$Up; names(up) <- paste(names(up), "_up", sep="")
down <- DEG_list$Down; names(down) <- paste(names(down), "_down", sep="")
DEGlist <- c(up_down, up, down)
DEGlist <- DEGlist[sapply(DEGlist, length) > 0]
BatchResult <- GOCluster_Report(catdb=catdb, setlist=DEGlist, method="all", id_type="gene", 
               CLSZ=2, cutoff=0.9, gocats=c("MF", "BP", "CC"), recordSpecGO=NULL)
library("biomaRt"); m <- useMart("ENSEMBL_MART_PLANT", dataset="athaliana_eg_gene")
goslimvec <- as.character(getBM(attributes=c("goslim_goa_accession"), mart=m)[,1])
BatchResultslim <- GOCluster_Report(catdb=catdb, setlist=DEGlist, method="slim", id_type="gene", 
                   myslimv=goslimvec, CLSZ=10, cutoff=0.01, gocats=c("MF", "BP", "CC"), 
                   recordSpecGO=NULL)
@

\subsubsection{Plot batch GO term results}
The \Robject{data.frame} generated by \Rfunction{GOCluster\_Report} can be plotted with the \Rfunction{goBarplot} function. Because of the variable size of the sample sets, it may not always be desirable to show the results from different DEG sets in the same bar plot. Plotting single sample sets is achieved by subsetting the input data frame as shown in the first line of the following example.
<<go_plot, eval=FALSE>>=
gos <- BatchResultslim[grep("M6-V6_up_down", BatchResultslim$CLID), ] # Subsets a single sample set
gos <- BatchResultslim # Uses all sample sets for the plots below
pdf("GOslimbarplotMF.pdf", height=8, width=10); goBarplot(gos, gocat="MF"); dev.off()
goBarplot(gos, gocat="BP")
goBarplot(gos, gocat="CC")
@
\begin{figure}[H]
  \centering
  \includegraphics[width=20cm]{GOslimbarplotMF.pdf}
  \caption{GO Slim Barplot for MF Ontology.}
  \label{fig:GOMF}
\end{figure}

\subsection{Clustering and heat maps}
The following example performs hierarchical clustering on the RPKM normalized expression matrix subsetted by the DEGs identified in the above differential expression analysis. It uses a Pearson correlation-based distance measure and complete linkage for cluster joining.
<<heatmap, eval=TRUE>>=
library(pheatmap)
geneids <- unique(as.character(unlist(DEG_list[[1]])))
y <- rpkmDFeByg[geneids, ]
pdf("heatmap1.pdf")
pheatmap(y, scale="row", clustering_distance_rows="correlation", 
         clustering_distance_cols="correlation")
dev.off()
@
\begin{figure}[H]
  \centering
  \includegraphics[width=12cm]{heatmap1.pdf}
  \caption{Heat map with hierarchical clustering dendrograms of DEGs.}
  \label{fig:heatmap}
\end{figure}

\section{Version Information}
<<sessionInfo, results=tex>>=
toLatex(sessionInfo())
@

\section{Funding}
This software was developed with funding from the Agriculture and Food Research Initiative of the National Institute of Food and Agriculture of the USDA (\href{http://www.reeis.usda.gov/web/crisprojectpages/0224573-reducing-losses-to-potato-and-tomato-late-blight-by-monitoring-pathogen-populations-improved-resistant-plants-education-and-extension.html}{{\textcolor{blue}{2011-68004-30154}}}), the National Science Foundation (\href{http://www.nsf.gov/awardsearch/showAward?AWD_ID=1021969}{{\textcolor{blue}{MCB-1021969}}}) and the National Institutes of Health/National Institute of Allergy and Infectious Diseases (5R01 AI036959).

\bibliography{bibtex}
\end{document}