---
output: html_document
bibliography: ref.bib
---
# Normalization, redux {#more-norm}
## Overview
[Basic Chapter 2](http://bioconductor.org/books/3.20/OSCA.basic/normalization.html#normalization) introduced the principles and methodology for scaling normalization of scRNA-seq data.
This chapter provides commentary on a few miscellaneous theoretical aspects,
including the motivation for the pseudo-count, the use and benefits of downsampling instead of scaling, and some discussion of alternative transformations.
## Scaling and the pseudo-count
When log-transforming, `logNormCounts()` will add a pseudo-count to avoid undefined values at zero.
Larger pseudo-counts will shrink the log-fold changes between cells towards zero for low-abundance genes, meaning that downstream high-dimensional analyses will be driven more by differences in expression for high-abundance genes.
Conversely, smaller pseudo-counts will increase the relative contribution of low-abundance genes.
Common practice is to use a pseudo-count of 1, for the simple pragmatic reason that it preserves sparsity in the original matrix (i.e., zeroes in the input remain zeroes after transformation).
This works well in all but the most pathological scenarios [@lun2018overcoming].
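To see this effect directly, we can compare log-normalized values computed with different pseudo-counts on the same data.
The minimal sketch below uses simulated counts and assumes that `logNormCounts()` exposes the pseudo-count via its `pseudo.count=` argument (with `name=` used to keep both sets of values).

``` r
library(scuttle)
library(SingleCellExperiment)
set.seed(100)

# Simulating a small dataset: 100 genes x 50 cells with gene-specific means.
means <- 2^runif(100, -2, 6)
counts <- matrix(rpois(100 * 50, lambda=means), nrow=100)
sce <- SingleCellExperiment(list(counts=counts))

# Log-normalizing with the default pseudo-count of 1 and a larger value of 5.
sce <- logNormCounts(sce, pseudo.count=1, name="logcounts1")
sce <- logNormCounts(sce, pseudo.count=5, name="logcounts5")

# The larger pseudo-count shrinks the variation in log-values across cells,
# most strongly for the low-abundance genes.
shrinkage <- apply(assay(sce, "logcounts5"), 1, var) /
    apply(assay(sce, "logcounts1"), 1, var)
plot(rowMeans(counts), shrinkage, log="x",
    xlab="Mean count", ylab="Variance ratio (pseudo-count of 5 vs 1)")
```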
An interesting subtlety of `logNormCounts()` is that it will center the size factors at unity, if they were not already.
This puts the normalized expression values on roughly the same scale as the original counts for easier interpretation.
For example, Figure \@ref(fig:zeisel-demo-snap25) shows that interneurons have a median _Snap25_ log-expression of 5-6;
this roughly translates to an original count of 30-60 UMIs per cell, which gives us some confidence that the gene is actually expressed.
This relationship to the original data would be less obvious - or indeed, lost altogether - if the centering were not performed.
(\#fig:zeisel-demo-snap25)Distribution of log-expression values for _Snap25_ in each cell type of the Zeisel brain dataset.
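To make the effect of centering concrete, the following sketch simulates cells with different coverage and scales the library size factors to have a mean of 1 before normalizing.
(The explicit division by `mean(sf)` is only to make the centering step visible; `librarySizeFactors()` should already return values centered at unity.)

``` r
library(scuttle)
set.seed(100)

# Simulating 200 genes x 20 cells, where the second group of cells
# has 4-fold higher coverage than the first.
lambda <- outer(2^runif(200, 0, 5), rep(c(1, 4), each=10))
counts <- matrix(rpois(length(lambda), lambda=lambda), nrow=200)

# Library size factors, centered so that their mean is exactly 1.
sf <- librarySizeFactors(counts)
sf <- sf / mean(sf)

# Dividing by centered factors removes the coverage differences
# while keeping the values on roughly the original count scale.
normalized <- t(t(counts) / sf)
summary(colSums(counts))
summary(colSums(normalized))
```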
Centering also allows us to interpret a pseudo-count of 1 as an extra read or UMI for each gene.
In practical terms, this means that the shrinkage effect of the pseudo-count diminishes as read/UMI counts increase.
As a result, any estimates of log-fold changes in expression (e.g., from differences in the log-values between groups of cells) become increasingly accurate with deeper coverage.
Conversely, at lower counts, stronger shrinkage avoids inflated differences due to sampling noise, which might otherwise mask interesting features in downstream analyses like clustering.
In some sense, the pseudo-count aims to protect later analyses from the lack of information at low counts while trying to minimize its own effect at high counts.
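A quick back-of-the-envelope calculation makes this concrete: with a pseudo-count of 1, a true two-fold difference is noticeably shrunk at low counts but almost fully recovered once coverage is high.

``` r
# A true two-fold difference observed at low and at high coverage.
low <- c(a=4, b=2)
high <- c(a=400, b=200)

# Log-fold changes after adding a pseudo-count of 1 to each count.
log2((low["a"] + 1) / (low["b"] + 1))   # ~0.74, shrunk from the true value of 1.
log2((high["a"] + 1) / (high["b"] + 1)) # ~1.00, shrinkage is negligible.
```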
For comparison, consider the situation where we applied a constant pseudo-count to some count-per-million-like measure.
It is easy to see that the accuracy of the subsequent log-fold changes would never improve regardless of how much additional sequencing was performed;
scaling to a constant library size of a million means that the pseudo-count will have the same effect for all datasets.
This is ironic given that the whole intention of sequencing more deeply is to improve quantification of these differences between cell subpopulations.
The same criticism applies to popular metrics like the "counts-per-10K" used in, e.g., *[Seurat](https://CRAN.R-project.org/package=Seurat)*.
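A hypothetical worked example illustrates the problem: for a low-abundance gene, sequencing ten times more deeply changes the counts but not the counts-per-million, so the shrinkage from a constant pseudo-count never goes away.

``` r
# A low-abundance gene with a true two-fold difference,
# observed before and after 10-fold deeper sequencing.
shallow <- c(a=2, b=1)
shallow.libsize <- 1e6
deep <- c(a=20, b=10)
deep.libsize <- 1e7

# The counts-per-million are identical in both cases...
cpm.shallow <- shallow / shallow.libsize * 1e6
cpm.deep <- deep / deep.libsize * 1e6

# ...so the shrinkage of the log-fold change never diminishes.
log2((cpm.shallow["a"] + 1) / (cpm.shallow["b"] + 1)) # ~0.58
log2((cpm.deep["a"] + 1) / (cpm.deep["b"] + 1))       # ~0.58

# In contrast, adding the pseudo-count to the raw counts allows the estimate
# to approach the true value of 1 as coverage increases.
log2((deep["a"] + 1) / (deep["b"] + 1)) # ~0.93
```
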
## Downsampling instead of scaling
In rare cases, direct scaling of the counts is not appropriate due to the effect described by @lun2018overcoming.
Briefly, this is caused by the fact that the mean of the log-normalized counts is not the same as the log-transformed mean of the normalized counts.
The difference between them depends on the mean and variance of the original counts, such that there is a systematic trend in the mean of the log-counts with respect to the count size.
This typically manifests as trajectories that are strongly correlated with library size even after library size normalization, as shown in Figure \@ref(fig:cellbench-lognorm-fail) for synthetic scRNA-seq data generated with a pool-and-split approach [@tian2019benchmarking].
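Before turning to the real data, a small simulation illustrates the artifact: two populations with identical expression but different coverage end up with systematically different means on the log-scale, even after scaling the counts to the same normalized mean.

``` r
set.seed(100)

# Counts for the same gene in low- and high-coverage cells (10-fold difference),
# scaled so that the normalized means are identical.
small <- rpois(10000, lambda=0.5) / 0.5
large <- rpois(10000, lambda=5) / 5

# The means of the normalized counts agree...
mean(small)
mean(large)

# ...but the means of the log-normalized counts do not, because the mean of the
# log-values is not the same as the log of the mean.
mean(log2(small + 1))
mean(log2(large + 1))
```
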
``` r
# TODO: move to scRNAseq.
library(BiocFileCache)
bfc <- BiocFileCache(ask=FALSE)
qcdata <- bfcrpath(bfc, "https://github.com/LuyiTian/CellBench_data/blob/master/data/mRNAmix_qc.RData?raw=true")

# Loading the pool-and-split SingleCellExperiment from the cached RData file.
env <- new.env()
load(qcdata, envir=env)
sce.8qc <- env$sce8_qc

# Library size normalization and log-transformation.
library(scater)
sce.8qc <- logNormCounts(sce.8qc)
sce.8qc <- runPCA(sce.8qc)

# Plotting the first two PCs, colored by mixing ratio or by library size factor.
gridExtra::grid.arrange(
    plotPCA(sce.8qc, colour_by=I(factor(sce.8qc$mix))),
    plotPCA(sce.8qc, colour_by=I(librarySizeFactors(sce.8qc))),
    ncol=2
)
```
(\#fig:cellbench-lognorm-fail)PCA plot of all pool-and-split libraries in the SORT-seq CellBench data, computed from the log-normalized expression values with library size-derived size factors. Each point represents a library and is colored by the mixing ratio used to construct it (left) or by the size factor (right).
As the problem arises from differences in the sizes of the counts, the most straightforward solution is to downsample the counts of the high-coverage cells to match those of low-coverage cells.
Setting `downsample=TRUE` in `logNormCounts()` uses the size factors to determine the amount of downsampling required for each cell to reach the 1st percentile of size factors.
(The small minority of cells with even smaller size factors are simply scaled up.
We do not attempt to downsample to the smallest size factor, as this would cause an excessive loss of information if a single aberrant cell had a very low size factor.)
We can see that this eliminates the library size factor-associated trajectories from the first two PCs, improving resolution of the known differences based on mixing ratios (Figure \@ref(fig:cellbench-lognorm-downsample)).
The log-transformation is still necessary but no longer introduces a shift in the means when the sizes of the counts are similar across cells.
``` r
# Repeating the normalization with downsampling to equalize coverage across cells.
sce.8qc2 <- logNormCounts(sce.8qc, downsample=TRUE)
sce.8qc2 <- runPCA(sce.8qc2)

# Plotting the first two PCs again, colored by mixing ratio or by size factor.
gridExtra::grid.arrange(
    plotPCA(sce.8qc2, colour_by=I(factor(sce.8qc2$mix))),
    plotPCA(sce.8qc2, colour_by=I(librarySizeFactors(sce.8qc2))),
    ncol=2
)
```
(\#fig:cellbench-lognorm-downsample)PCA plot of pool-and-split libraries in the SORT-seq CellBench data, computed from the log-transformed counts after downsampling in proportion to the library size factors. Each point represents a library and is colored by the mixing ratio used to construct it (left) or by the size factor (right).
While downsampling is an expedient solution, it is statistically inefficient as it needs to increase the noise of high-coverage cells in order to avoid differences with low-coverage cells.
It is also slower than simple scaling.
Thus, we would only recommend using this approach after an initial analysis with scaled counts reveals suspicious trajectories that are strongly correlated with the size factors.
In such cases, it is a simple matter to re-normalize by downsampling to determine whether the trajectory is an artifact of the log-transformation.
## Comments on other transformations
Of course, the log-transformation is not the only possible transformation.
Another somewhat common choice is the square root, motivated by the fact that it is the variance stabilizing transformation for Poisson-distributed counts.
This assumes that counts are actually Poisson-distributed, which is true enough from the perspective of sequencing noise in UMI counts but ignores biological overdispersion.
One may also see the inverse hyperbolic sine (a.k.a., arcsinh) transformation used on occasion, which is very similar to the log-transformation when considering non-negative values.
The main practical difference for scRNA-seq applications is a larger initial jump from zero to non-zero values.
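The sketch below compares the three transformations on small counts; note the larger jump from zero to one for the arcsinh relative to the (natural) log with a pseudo-count of 1, and the roughly constant variance of the square root once the Poisson mean is not too small.

``` r
# Comparing the transformations on small counts.
x <- 0:10
data.frame(
    count   = x,
    log1p   = log1p(x),  # natural log with a pseudo-count of 1
    sqrt    = sqrt(x),
    arcsinh = asinh(x)   # log-like for large x, with a bigger jump from 0 to 1
)

# The square root approximately stabilizes the variance of Poisson counts,
# provided the mean is not too close to zero.
set.seed(100)
sapply(c(5, 20, 100), function(m) var(sqrt(rpois(1e5, m))))
```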
Alternatively, we may use more sophisticated approaches for variance stabilizing transformations in genomics data, e.g., *[DESeq2](https://bioconductor.org/packages/3.20/DESeq2)* or *[sctransform](https://CRAN.R-project.org/package=sctransform)*.
These aim to remove the mean-variance trend more effectively than the simpler transformations mentioned above, though it is debatable whether this is actually desirable.
For low-coverage scRNA-seq data, there will always be a mean-variance trend under any transformation, for the simple reason that the variance must be zero when the mean count is zero.
These methods also face the challenge of removing the mean-variance trend while preserving the interesting component of variation, i.e., the log-fold changes between subpopulations;
this may or may not be done adequately, depending on the aggressiveness of the algorithm.
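To see why some mean-variance trend is unavoidable at low coverage, the sketch below transforms Poisson counts over a range of small means; for both the log- and square-root transformations, the variance necessarily rises from zero as the mean count increases from zero.

``` r
# For low-coverage data, the variance of the transformed counts
# necessarily rises from zero as the mean count increases from zero.
set.seed(100)
means <- c(0.01, 0.1, 0.5, 1, 2)
sapply(means, function(m) {
    x <- rpois(1e5, m)
    c(log=var(log2(x + 1)), sqrt=var(sqrt(x)))
})
```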
In practice, the log-transformation is a good default choice due to its simplicity and interpretability, and is what we will be using for all downstream analyses.
## Normalization versus batch correction
It is worth noting the difference between normalization and batch correction ([Multi-sample Chapter 1](http://bioconductor.org/books/3.20/OSCA.multisample/integrating-datasets.html#integrating-datasets)).
Normalization typically refers to removal of technical biases between cells, while batch correction involves removal of both technical biases and biological differences between batches.
Technical biases are relatively simple and straightforward to remove, whereas biological differences between batches can be highly unpredictable.
On the other hand, batch correction algorithms can share information between cells in the same batch, as all cells in the same batch are assumed to be subject to the same batch effect,
whereas most normalization strategies tend to operate on a cell-by-cell basis with less information sharing.
The key point here is that normalization and batch correction are different tasks that involve different assumptions and generally require different computational methods
(though some packages aim to perform both steps at once, e.g., *[zinbwave](https://bioconductor.org/packages/3.20/zinbwave)*).
Thus, it is important to distinguish between "normalized" and "batch-corrected" data, as these usually refer to different stages of processing.
Of course, these processes are not mutually exclusive, and most workflows will perform normalization _within_ each batch followed by correction _between_ batches.
Interested readers are directed to [Multi-sample Chapter 1](http://bioconductor.org/books/3.20/OSCA.multisample/integrating-datasets.html#integrating-datasets) for more details.
## Session Info {-}