%\VignetteIndexEntry{matter: Rapid prototyping with data on disk}
%\VignetteKeyword{Infrastructure}
\documentclass[a4paper]{article}
\usepackage{caption}
\usepackage{subcaption}

<<echo=FALSE, results=tex>>=
BiocStyle::latex()
@

\title{\Rpackage{matter}: Rapid prototyping with data on disk}

\author{Kylie A. Bemis}

\begin{document}

\SweaveOpts{concordance=TRUE}

\maketitle

\tableofcontents

\section{Introduction}

\Rpackage{matter} is designed for rapid prototyping of new statistical methods when working with larger-than-memory datasets on disk. Unlike the related packages \Rpackage{bigmemory} \cite{bigmemory} and \Rpackage{ff} \cite{ff}, which also work with file-backed larger-than-memory datasets, \Rpackage{matter} aims to offer strict control over memory and maximum flexibility with on-disk data structures, so it can be easily adapted to domain-specific file formats, including user-customized file formats.

The vignettes of this package are organized as follows:

\begin{itemize}

\item ``Rapid prototyping with data on disk'': This is the main vignette describing \Rpackage{matter} and its general use, design, and extensibility. It walks through basic data manipulation and data structures, and demonstrates an example S4 class extending \Rpackage{matter}.

\item ``Supplementary 1 - Simulations and comparative benchmarks'': This supplementary vignette re-works the simulated statistical analysis examples from this vignette using \Rpackage{bigmemory} and \Rpackage{ff}, and provides some basic benchmarks and comparisons of the three packages.

\item ``Supplementary 2 - 3D mass spectrometry imaging case study'': This supplementary vignette demonstrates using \Rpackage{matter} for principal components analysis on large experimental data, along with in-depth comparisons with \Rpackage{bigmemory} and \Rpackage{ff} on real data.

\end{itemize}

\section{Installation}

\Rpackage{matter} can be installed from Bioconductor using the following commands in \R{}.

<<eval=FALSE>>=
install.packages("BiocManager")
BiocManager::install("matter")
@

\section{Basic use and data manipulation}

<<echo=FALSE, results=hide>>=
options(width=72)
library(matter)
data(matter_ex)
@

\Rpackage{matter} matrices and vectors can be initialized similarly to ordinary \R{} matrices. When no file is given, a new temporary file is created in the default temporary file directory, which will be cleaned up later by either \R{} or the operating system.

Here, we initialize a \Rpackage{matter} matrix with 10 rows and 5 columns. The resulting object is a subclass of the \Robject{matter} class, and stores file metadata that gives the location of the data on disk. In many cases, it can be treated as an ordinary \R{} matrix.

<<>>=
x <- matter_mat(data=1:50, nrow=10, ncol=5, datamode="double")
x
x[]
@

As seen above, this is a small toy example in which the in-memory metadata actually takes up more space than the data stored on disk. For much larger datasets, the in-memory metadata will be a small fraction of the total size of the dataset on disk.

\Rpackage{matter} matrices and vectors can be indexed into like ordinary \R{} matrices and vectors.

<<>>=
x[1:4,]
x[,3:4]
@
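Subsetting with \verb|[| reads the requested elements from disk and returns them as an ordinary \R{} object, while the \Robject{matter} object itself holds only metadata. A quick sketch of one way to check this for yourself, using base \R{}'s \Robject{object.size} and \Robject{file.size} together with the \Robject{paths} accessor introduced below:

<<eval=FALSE>>=
object.size(x)      # in-memory size of the metadata only
file.size(paths(x)) # on-disk size of the backing file (in bytes)
object.size(x[])    # all elements pulled into memory as an R matrix
@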
We can assign names to \Robject{matter\_vec} vectors, and row and column names to \Robject{matter\_mat} matrices.

<<>>=
rownames(x) <- 1:10
colnames(x) <- letters[1:5]
x[]
@

\Rpackage{matter} provides methods for calculating summary statistics for its vectors and matrices, including some methods that do not exist in base \R{}, such as \Robject{colVars}, which uses a memory-efficient running variance calculation that is accurate for large floating-point datasets \cite{Welford:1962tw}.

<<>>=
colSums(x)
colSums(x[])
colVars(x)
apply(x, 2, var)
@

One of the major advantages of the flexibility of \Rpackage{matter} is being able to treat data from multiple files as a single dataset. This is particularly useful when analyzing data from a domain where each sample in an experiment generates large files, such as high-resolution, high-throughput mass spectrometry imaging.

Below, we create a second matrix, and show that its data is stored in a separate file. We then combine the matrices, and the result can be treated as a single matrix, despite originating from multiple files. Combining the matrices does not create new data or change the existing data on disk.

<<>>=
y <- matter_mat(data=51:100, nrow=10, ncol=5, datamode="double")
paths(x)
paths(y)
z <- cbind(x, y)
z
z[]
@

Note that matrices in \Rpackage{matter} are stored in either a column-major or a row-major format. The default is to use the column-major format, as \R{} does. Column-major matrices are optimized for fast column access, and assume that each column is stored contiguously or mostly-contiguously on disk. Conversely, row-major matrices are optimized for fast row access, and make the same assumption for rows.

Because \Rpackage{matter} supports both column-major and row-major formats, transposing a matrix is a trivial operation that only changes the matrix metadata, and doesn't touch the data on disk.

<<>>=
t(x)
rbind(t(x), t(y))
@

Note that this is equivalent to \verb|t(cbind(x, y))|.

Below, we inspect the metadata associated with the different columns of \Robject{x} using the \verb|atomdata| method.

<<>>=
atomdata(x)
@

This shows the ``atoms'' that compose the \Robject{matter} object. An atom in \Rpackage{matter} is a single contiguous segment of a file on disk. In this case, each atom corresponds to a different column. Note that each atom has a byte offset and an extent (i.e., length) associated with it.

Now we show how to create a \Robject{matter} object from a pre-existing file. We will first create vectors corresponding to the second column of \Robject{x} and the third column of \Robject{y}.

<<>>=
x2 <- matter_vec(offset=80, extent=10, paths=paths(x), datamode="double")
y3 <- matter_vec(offset=160, extent=10, paths=paths(y), datamode="double")
cbind(x2, y3)[]
cbind(x[,2], y[,3])
@

We can even combine multiple on-disk vectors together before binding them all into a matrix.

<<>>=
z <- cbind(c(x2, y3), c(y3, x2))
atomdata(z)
z[]
@

This is a quick and easy way to build a dataset from many files, or even many segments of many files. Even if the resulting matrix would fit into memory, using \Rpackage{matter} can be a tidy, efficient way of reading complex binary data from multiple files into \R{}.
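The byte offsets above follow directly from the on-disk layout: with doubles stored in 8 bytes each, the second column of a 10-row column-major matrix begins after $1 \times 10 \times 8 = 80$ bytes, and the third column after $2 \times 10 \times 8 = 160$ bytes. The sketch below generalizes this arithmetic; the file \verb|data.bin| and the helper \verb|attach_column| are hypothetical and shown for illustration only.

<<eval=FALSE>>=
# hypothetical helper: attach column j of an n-row column-major
# matrix of doubles stored in a pre-existing file "data.bin"
attach_column <- function(j, n, path="data.bin") {
    matter_vec(paths=path, offset=(j - 1) * n * 8, extent=n,
        datamode="double")
}
@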
Lastly, it is straightforward to coerce common base \R{} types to their \Robject{matter} object equivalents using \verb|as.matter|. This includes raw, logical, integer, numeric, and character vectors, integer and numeric matrices, and data frames.

<<>>=
v1 <- 1:10
v2 <- as.matter(v1)
v2
v2[]
m1 <- diag(3)
m2 <- as.matter(m1)
m2
m2[]
s1 <- letters[1:10]
s2 <- as.matter(s1)
s2
s2[]
df1 <- data.frame(a=v1, b=s1, stringsAsFactors=FALSE)
df2 <- as.matter(df1)
df2
df2[]
@

\section{Linear regression for on-disk datasets}

\Rpackage{matter} is designed to provide a statistical computing environment for larger-than-memory datasets on disk. To facilitate this, \Rpackage{matter} provides a method for fitting linear models to \Robject{matter} matrices through the \Rpackage{biglm} package \cite{biglm}. \Rpackage{matter} provides a wrapper for \Rpackage{biglm}'s \Robject{bigglm} function that works with \Robject{matter\_mat} matrices, which we demonstrate below.

First, we simulate some data appropriate for linear regression.

<<>>=
set.seed(81216)
n <- 1.5e7
p <- 9

b <- runif(p)
names(b) <- paste0("x", 1:p)

data <- matter_mat(nrow=n, ncol=p + 1, datamode="double")

colnames(data) <- c(names(b), "y")

data[,p + 1] <- rnorm(n)
for ( i in 1:p ) {
    xi <- rnorm(n)
    data[,i] <- xi
    data[,p + 1] <- data[,p + 1] + xi * b[i]
}

data

head(data)
@

This creates a 1.2 GB dataset on disk, but only about 12 KB of metadata is stored in memory.

Now we calculate some statistical summaries using \Rpackage{matter}'s \Robject{apply} method for \Robject{matter\_mat} matrices.

<<>>=
apply(data, 2, mean)

apply(data, 2, var)
@

We could also have used \Robject{colMeans} and \Robject{colVars}, which are specialized to be faster and more memory-efficient.

Now we fit the linear model to the data using the \Robject{bigglm} method for \Robject{matter\_mat} matrices. Note that it requires a formula, and (unfortunately) it does not allow \verb|y ~ .|, so all variables must be stated explicitly.

<<eval=FALSE>>=
profmem({
    fm <- as.formula(paste0("y ~ ", paste0(names(b), collapse=" + ")))
    bigglm.out <- bigglm(fm, data=data, chunksize=10000)
})
@

<<echo=FALSE>>=
fm <- as.formula(paste0("y ~ ", paste0(names(b), collapse=" + ")))
bigglm.out <- bigglm(fm, data=data, chunksize=10000)
@

<<>>=
summary(bigglm.out)

cbind(coef(bigglm.out)[-1], b)
@

On a 2012 retina MacBook Pro with a 2.6 GHz Intel CPU, 16 GB RAM, and a 500 GB SSD, fitting the linear model takes ~40 seconds and uses an additional ~400 MB of memory overhead. The maximum amount of memory used while fitting the model was only ~650 MB, for the 1.2 GB dataset. This memory usage can be controlled further by using the \verb|chunksize| argument in \verb|bigglm| or by specifying a different \verb|chunksize| for the \Robject{matter} object.
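For example, a smaller chunk size trades some speed for a lower memory ceiling. A minimal sketch, reusing the \Robject{fm} and \Robject{data} objects from above:

<<eval=FALSE>>=
# inspect the chunk size used for chunk-wise processing
chunksize(data)
# re-fit with a smaller chunk size to lower the memory ceiling
bigglm.out <- bigglm(fm, data=data, chunksize=1000)
@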
\section{Principal components analysis for on-disk datasets}

Because \Rpackage{matter} provides basic linear algebra between on-disk \Robject{matter\_mat} matrices and in-memory \R{} matrices, it opens up the possibility of using many iterative statistical methods that operate on only small portions of the data at a time.

For example, \Robject{matter\_mat} matrices are compatible with the \Rpackage{irlba} package, which performs efficient, bounded-memory singular value decomposition (SVD) of matrices, and which can therefore be used for efficient principal components analysis (PCA) of large datasets \cite{irlba}.

For convenience, \Rpackage{matter} provides a \verb|prcomp| method for performing PCA on \Robject{matter\_mat} matrices using the \verb|irlba| method from the \Rpackage{irlba} package, as demonstrated below.

First, we simulate some data appropriate for principal components analysis.

<<>>=
set.seed(81216)
n <- 1.5e6
p <- 100

data <- matter_mat(nrow=n, ncol=p, datamode="double")

for ( i in 1:10 )
    data[,i] <- (1:n)/n + rnorm(n)
for ( i in 11:20 )
    data[,i] <- (n:1)/n + rnorm(n)
for ( i in 21:p )
    data[,i] <- rnorm(n)

data
@

This again creates a 1.2 GB dataset on disk, but only about 12 KB of metadata is stored in memory. More metadata is stored compared to the previous example, because the matrix has more columns, and it is stored as a column-major \Robject{matter\_matc} matrix with independent metadata for each column.

Note that, in the simulated data, only the first twenty variables show systematic variation, with the first ten variables varying distinctly from the next ten variables.

First we calculate the variance for each column.

<<var-plot, eval=FALSE>>=
var.out <- colVars(data)
plot(var.out, type='h', ylab="Variance")
@

This takes only 7 seconds and uses less than 30 KB of additional memory. The maximum amount of memory used while calculating the variance for all columns of the 1.2 GB dataset is only 27 MB.

Note that the \Robject{irlba} function has an optional argument \verb|mult| which allows specification of a custom matrix multiplication method, for use with packages such as \Rpackage{bigmemory} and \Rpackage{ff}. This is especially useful since it allows a \verb|transpose=TRUE| argument, so that the identity \verb|t(t(B) %*% A)| can be used in place of \verb|t(A) %*% B| when transposition is an expensive operation. However, this is not necessary for \Rpackage{matter}, since transposition is a trivial operation for \Robject{matter\_mat} matrices.

Implicit scaling and centering is supported in \Rpackage{matter}, so the data can be scaled and centered without changing the data on disk, and without the additional overhead of storing a scaled version of the data (see the short sketch at the end of this section).

Unlike the default \verb|prcomp| method for ordinary \R{} matrices, this version supports an argument \Robject{n} for specifying the number of principal components to calculate.

<<eval=FALSE>>=
profmem({
    prcomp.out <- prcomp(data, n=2, center=FALSE, scale.=FALSE)
})
@

<<echo=FALSE>>=
prcomp.out <- prcomp(data, n=2, center=FALSE, scale.=FALSE)
@

On a 2012 retina MacBook Pro with a 2.6 GHz Intel CPU, 16 GB RAM, and a 500 GB SSD, calculating the first two principal components takes ~100 seconds and uses an additional ~450 MB of memory overhead. The maximum amount of memory used during the computation was only ~700 MB, for the 1.2 GB dataset.

Now we plot the first two principal components.

<<pc1-plot, eval=FALSE>>=
plot(prcomp.out$rotation[,1], type='h', ylab="PC 1")
@

<<pc2-plot, eval=FALSE>>=
plot(prcomp.out$rotation[,2], type='h', ylab="PC 2")
@

\setkeys{Gin}{width=\textwidth}
\begin{figure}[h]
\centering
\begin{subfigure}{.2\textwidth}
\centering
<<fig=TRUE, echo=FALSE>>=
<<var-plot>>
@
\caption{\small Sample variance}
\label{fig:var}
\end{subfigure}
\begin{subfigure}{.2\textwidth}
\centering
<<fig=TRUE, echo=FALSE>>=
<<pc1-plot>>
@
\caption{\small PC1 loadings}
\label{fig:pc1}
\end{subfigure}
\begin{subfigure}{.2\textwidth}
\centering
<<fig=TRUE, echo=FALSE>>=
<<pc2-plot>>
@
\caption{\small PC2 loadings}
\label{fig:pc2}
\end{subfigure}
\caption{\small Principal components analysis for on-disk dataset.}
\end{figure}

As shown in the plots of the first and second principal components, the first PC shows that most of the variation in the data occurs in the first twenty variables, while the second PC distinguishes the first ten variables from the next ten variables.

<<echo=FALSE, eval=FALSE>>=
save(bigglm.out, prcomp.out,
    file="~/Documents/Developer/Projects/matter/data/matter_ex.rda")
@
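As referenced above, centering and scaling are applied implicitly as the data is read, so a standardized PCA requires no extra copy of the data on disk. A minimal sketch, reusing the \Robject{data} matrix from this section:

<<eval=FALSE>>=
# centered and scaled PCA; the data on disk is never modified
prcomp.out2 <- prcomp(data, n=2, center=TRUE, scale.=TRUE)
@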
\section{Design and implementation}

The \Rpackage{matter} package is designed with several goals in mind. Like the \Rpackage{bigmemory} and \Rpackage{ff} packages, it seeks to make statistical methods scalable to larger-than-memory datasets by utilizing data on disk. Unlike those packages, it seeks to make domain-specific file formats (such as Analyze 7.5 and imzML for MS imaging experiments) accessible from disk directly, without additional file conversion. It seeks to have a minimal memory footprint, and to require minimal developer effort to use, while maintaining computational efficiency wherever possible.

\subsection{S4 classes}

\Rpackage{matter} utilizes S4 classes to implement on-disk matrices in such a way that they can be seamlessly accessed as if they were ordinary \R{} matrices. These are the \Robject{atoms} class and the \Robject{matter} class. The \Robject{atoms} class is not exported to the user, who only interacts with the \Robject{matter} class and \Robject{matter} objects to create and manipulate on-disk matrices.

\subsubsection{\Robject{atoms}: contiguous sectors of data on disk}

By analogy to \R{}'s notion of ``atomic'' vectors, the \Robject{atoms} class uses the notion of contiguous ``atomic'' sectors of disk. Each ``atom'' in an \Robject{atoms} object gives the location of one block of contiguous data on disk, as denoted by a file path, a byte offset from the beginning of the file, a data type, and the number of data elements (i.e., the length) of the atom. An \Robject{atoms} object may consist of many atoms from multiple files and multiple locations on disk, while ultimately representing a single vector, or a single row or column of a matrix. The atoms may be arranged into groups, corresponding to rows or columns of a matrix, margins of an array, elements of a list, etc.

Structure:
\begin{itemize}
\item \Robject{natoms}: the number of atoms
\item \Robject{ngroups}: the number of groups
\item \Robject{group\_id}: which group each atom belongs to
\item \Robject{source\_id}: the IDs of the file paths where each atom is located
\item \Robject{datamode}: the type of data (short, int, long, float, double) for each atom
\item \Robject{offset}: each atom's byte offset from the beginning of the file
\item \Robject{extent}: the length of each atom
\item \Robject{index\_offset}: the cumulative index of the first element of each atom
\item \Robject{index\_extent}: the cumulative one-past-the-end index of each atom
\end{itemize}

The \Robject{atoms} class has a C++ backend in the \Robject{Atoms} C++ class.
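The \Robject{index\_offset} and \Robject{index\_extent} fields are what allow a single object to span several files: they map each atom's file-local location (\Robject{offset}, \Robject{extent}) to its position within the whole dataset. For example, this can be inspected for the combined vector built from two files earlier in this vignette:

<<eval=FALSE>>=
# offset/extent locate each atom within its file, while
# index_offset/index_extent locate it within the whole vector
atomdata(c(x2, y3))
@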
\subsubsection{\Robject{matter}: vectors and matrices stored on disk}

A \Robject{matter} object is made of one or more \Robject{atoms} objects, and represents a vector or matrix. It includes additional metadata such as dimensions and row names or column names.

Structure:
\begin{itemize}
\item \Robject{data}: one or more \Robject{atoms} objects
\item \Robject{datamode}: the type of data (integer, numeric) for the represented vector or matrix
\item \Robject{paths}: the paths to the files used by the \Robject{atoms} objects
\item \Robject{filemode}: whether the files should be opened for read/write or read-only
\item \Robject{chunksize}: how large the chunks should be for calculations that operate on chunks of the dataset
\item \Robject{length}: the total length of the dataset
\item \Robject{dim}: the extent of each dimension (for a matrix)
\item \Robject{names}: the names of the data elements (e.g., for a vector)
\item \Robject{dimnames}: the names of the dimensions (e.g., for a matrix)
\item \Robject{ops}: delayed operations registered to the object
\end{itemize}

A \Robject{matter\_vec} vector contains a single \Robject{atoms} object that represents all of the atoms of the vector. The \Robject{matter\_mat} matrix class has two subtypes for column-major (\Robject{matter\_matc}) and row-major (\Robject{matter\_matr}) matrices. A column-major \Robject{matter\_matc} matrix has one \Robject{atoms} object for each column, while a row-major \Robject{matter\_matr} matrix has one \Robject{atoms} object for each row.

The \Robject{matter} class has a C++ backend in the \Robject{Matter} C++ class.

\subsection{C++ classes}

\Rpackage{matter} utilizes a C++ backend to access the data on disk and transform it into the appropriate representation in \R{}. Although these classes correspond to S4 classes in \R{}, and are responsible for most of the computational work, all of the required metadata is stored in the S4 classes in \R{} and is simply read by the C++ classes. This means that \Rpackage{matter} never depends on external pointers, which makes it trivial to share \Robject{matter} vectors and \Robject{matter} matrices between \R{} sessions that have access to the same filesystem.

\subsubsection{\Robject{Atoms}: contiguous sectors of data on disk}

The \Robject{Atoms} C++ class is responsible for reading and writing the data on disk, based on its metadata. For computational efficiency, it prefers sequential reads over random reads/writes whenever possible, while minimizing the total number of atomic reads/writes on disk.

\subsubsection{\Robject{Matter}: vectors and matrices stored on disk}

The \Robject{Matter} C++ class is responsible for transforming the data read by the \Robject{Atoms} class into a format appropriate for \R{}. This may include re-arranging contiguous data that has been read sequentially into a different order, either due to the inherent organization of the dataset, or as requested by the user in \R{}.

\subsubsection{\Robject{MatterIterator}: iterate over virtual disk objects}

The \Robject{MatterIterator} C++ class acts similarly to an iterator, and allows buffered iteration over a \Robject{Matter} object. It can iterate over the whole dataset (for both vectors and matrices), over a single column for column-major matrices, or over a single row for row-major matrices.

A \Robject{MatterIterator} object will load portions of the dataset (as many elements as the \Robject{chunksize} at once) into memory, and then free that portion of the data and load a new chunk, as necessary. This buffering is handled automatically by the class, and code can treat it as a regular iterator. This allows seamless and simple iteration over \Robject{Matter} objects while maintaining strict control over the memory footprint of the calculation. The same buffered-chunk pattern can be sketched at the \R{} level, as shown below.
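As an illustration only (the helper \verb|chunked_sum| is hypothetical, not part of the \Rpackage{matter} API), the following sketch mimics at the \R{} level what \Robject{MatterIterator} does in C++, using ordinary subsetting of a \Robject{matter\_vec} vector \Robject{v}: each pass loads at most one chunk of elements into memory.

<<eval=FALSE>>=
# illustration: buffered chunk-wise sum over a matter_vec 'v'
chunked_sum <- function(v) {
    total <- 0
    i <- 1
    while ( i <= length(v) ) {
        j <- min(i + chunksize(v) - 1, length(v))
        total <- total + sum(v[i:j]) # reads only this chunk from disk
        i <- j + 1
    }
    total
}
@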
\section{Extending with new S4 classes}

The \Rpackage{matter} package is intended to be extensible to any uncompressed, open-source format. For example, the \Rpackage{Cardinal} package uses \Rpackage{matter} to attach Analyze 7.5 and imzML datasets, which are popular open-source data formats in mass spectrometry imaging.

The flexibility of \Rpackage{matter} in specifying custom file formats with a high degree of control distinguishes it from other packages for working with large, on-disk datasets in \R{}.

In this section, we demonstrate how one could create a custom S4 class for genomics sequencing data. We begin by creating a small toy example of a FASTQ file with only two sample reads from a larger library \cite{SRR001666}, and writing them to disk.

<<>>=
seqs <- c("@SRR001666.1 071112_SLXA-EAS1_s_7:5:1:817:345 length=72",
"GGGTGATGGCCGCTGCCGATGGCGTCAAATCCCACCAAGTTACCCTTAACAACTTAAGGGTTTTCAAATAGA",
"+SRR001666.1 071112_SLXA-EAS1_s_7:5:1:817:345 length=72",
"IIIIIIIIIIIIIIIIIIIIIIIIIIIIII9IG9ICIIIIIIIIIIIIIIIIIIIIDIIIIIII>IIIIII/",
"@SRR001666.2 071112_SLXA-EAS1_s_7:5:1:801:338 length=72",
"GTTCAGGGATACGACGTTTGTATTTTAAGAATCTGAAGCAGAAGTCGATGATAATACGCGTCGTTTTATCAT",
"+SRR001666.2 071112_SLXA-EAS1_s_7:5:1:801:338 length=72",
"IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII6IBIIIIIIIIIIIIIIIIIIIIIIIGII>IIIII-I)8I")
file <- tempfile()
writeLines(seqs, file)
@

We create a new S4 class called \Robject{Fastq}, with slots for a sequence identifier, the raw sequence of letters, and the quality values for the sequence, all of which will be objects of the \Robject{matter\_str} class for on-disk strings. We also create generic functions and S4 methods to access these slots. Note that if we were creating this class for a package, it would be wiser to import the existing generic functions from the packages \Rpackage{Biostrings} and \Rpackage{ShortRead}. For the purpose of demonstration, we define them here.

<<>>=
setClass("Fastq", slots=c(
    id = "matter_str",
    sread = "matter_str",
    quality = "matter_str"))

setGeneric("id", function(x) standardGeneric("id"))
setGeneric("sread", function(object, ...) standardGeneric("sread"))
setGeneric("quality", function(object, ...) standardGeneric("quality"))

setMethod("id", "Fastq", function(x) x@id)
setMethod("sread", "Fastq", function(object) object@sread)
setMethod("quality", "Fastq", function(object) object@quality)
@

Now we write a function for constructing the new class. First we attach the file as a flat \Robject{matter\_vec} raw byte vector, and calculate the byte offsets of the newlines in the file. This is done by calling \verb|which(raw == charToRaw('\n'))|, which requires parsing the whole vector to perform the elementwise comparisons. Note that \Rpackage{matter} will automatically perform this operation in chunks in efficient C++ code. For a large dataset, this means that only \verb|chunksize(raw)| data elements are ever loaded into memory at once.
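To make the indexing step concrete, here is that first part of the computation in isolation (a sketch, assuming the toy \Robject{file} created above):

<<eval=FALSE>>=
length <- file.info(file)$size
raw <- matter_vec(paths=file, length=length, datamode="raw")
newlines <- which(raw == charToRaw('\n')) # chunk-wise comparison
head(newlines)
@

The complete constructor below extends this indexing to pick out the identifier, sequence, and quality lines of each record.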
<<>>=
attachFastq <- function(file) {
    length <- file.info(file)$size
    raw <- matter_vec(paths=file, length=length, datamode="raw")
    newlines <- which(raw == charToRaw('\n')) # parses the file in chunks
    if ( newlines[length(newlines)] == length )
        newlines <- newlines[-length(newlines)]
    byte_start <- c(0L, newlines)
    byte_end <- c(newlines, length) - 1L # don't include the '\n'
    line_offset <- byte_start
    line_extent <- byte_end - byte_start
    id <- matter_str(paths=file,
        offset=1L + line_offset[c(TRUE,FALSE,FALSE,FALSE)], # skip the '@'
        extent=line_extent[c(TRUE,FALSE,FALSE,FALSE)] - 1L) # adjust for '@'
    sread <- matter_str(paths=file,
        offset=line_offset[c(FALSE,TRUE,FALSE,FALSE)],
        extent=line_extent[c(FALSE,TRUE,FALSE,FALSE)])
    quality <- matter_str(paths=file,
        offset=line_offset[c(FALSE,FALSE,FALSE,TRUE)],
        extent=line_extent[c(FALSE,FALSE,FALSE,TRUE)])
    new("Fastq", id=id, sread=sread, quality=quality)
}
@

Now we can call our \verb|attachFastq| function to parse the file and create an object of our new class \Robject{Fastq}.

<<>>=
fq <- attachFastq(file)
fq
id(fq)[1]
id(fq)[2]
sread(fq)[1]
sread(fq)[2]
quality(fq)[1]
quality(fq)[2]
@

Although this example used only a small toy data file, the same class and parsing function could be applied to much larger files. Very large files would take some time to parse and index the newlines, but our \Robject{Fastq} class would remain memory-efficient.

\section{Session info}

<<echo=FALSE, results=tex>>=
toLatex(sessionInfo())
@

%\bibliographystyle{unsrt}
\bibliography{matter}

\end{document}