% \VignetteEngine{knitr::knitr}
% \VignetteIndexEntry{TPP_introduction_1D}
% \VignettePackage{TPP}
\documentclass[10pt,a4paper,twoside]{article}
\usepackage[utf8]{inputenc}
\usepackage{graphicx}
\usepackage{forloop}
%\pagestyle{empty}
<>=
BiocStyle::latex()
@
\bioctitle[TPP]{Introduction to the \Rpackage{TPP} package for analyzing Thermal Proteome Profiling data: Temperature range (TR) or compound concentration range (CCR) experiments}
\author{Dorothee Childs, Nils Kurzawa\\
European Molecular Biology Laboratory (EMBL),\\
Heidelberg, Germany\\
\texttt{dorothee.childs@embl.de}}
\date{\Rpackage{TPP} version \Sexpr{packageDescription("TPP")$Version}}
\begin{document}
<>=
knitr::opts_chunk$set(concordance=TRUE,
                      eval = TRUE,
                      cache = TRUE,
                      resize.width="0.45\\textwidth",
                      fig.align='center',
                      tidy = FALSE,
                      message=FALSE)
@
\maketitle
\begin{abstract}
Detecting the binding partners of a drug is one of the biggest challenges in drug research. Thermal Proteome Profiling (TPP) addresses this question by combining the cellular thermal shift assay concept \cite{molina2013} with mass spectrometry-based proteome-wide protein quantitation \cite{savitski2014tracking}. Thereby, drug-target interactions can be inferred from changes in the thermal stability of a protein upon drug binding, or upon downstream cellular regulatory events, in an unbiased manner.

The analysis of TPP experiments requires several data analytic and statistical modeling steps \cite{franken2015}. The package \Rpackage{TPP} facilitates this process by providing executable workflows that conduct all necessary steps. This vignette explains the use of the package. For details about the statistical methods, please refer to the papers \cite{savitski2014tracking, franken2015}.
%\footnote{An example workflow that reproduces the results of paper \cite{franken2015} is described in the accompanying Vignette: "TPP analysis workflow: \textit{Thermal proteome profiling for unbiased identification of drug targets and detection of downstream effectors}".}.
\end{abstract}

\tableofcontents

\section{Installation}
To install the package, type the following commands into the \R{} console
<>=
if (!requireNamespace("BiocManager", quietly=TRUE)){
  install.packages("BiocManager")
}
BiocManager::install("TPP")
@
The installed package can be loaded by
<>=
library("TPP")
@

\subsection{Special note for Windows users}
The \Rpackage{TPP} package uses the \Rpackage{openxlsx} package to produce Excel output \cite{openxlsx}. \Rpackage{openxlsx} requires a zip application to be installed on your system and to be included in the path. On Windows, such a zip application is not installed by default, but is available, for example, via \href{http://cran.r-project.org/bin/windows/Rtools/}{Rtools}. Without a zip application, you can still use the \Rpackage{TPP} package and access its results via the data frames produced by the main functions.

\subsection{TPP-TR and TPP-CCR analysis}
The \Rpackage{TPP} package performs two analysis workflows:
\begin{enumerate}
\item \textbf{Analysis of temperature range (TR) experiments:} TPP-TR experiments combine the cellular thermal shift assay (CETSA) approach with high-throughput mass spectrometry (MS). They provide protein abundance measurements at increasing temperatures for different treatment conditions. The data analysis comprises cross-experiment normalization, melting curve fitting, and the statistical evaluation of the estimated melting points in order to detect shifts induced by drug binding.
\item \textbf{Analysis of compound concentration range (CCR) experiments:} TPP-CCR experiments combine the isothermal dose-response (ITDR) approach with high-throughput MS. The CCR workflow of the package performs median normalization, fits dose response curves, and determines the pEC50 values for proteins showing dose-dependent changes in thermal stability upon drug treatment.
\end{enumerate}
The following sections describe both functionalities in detail.

\section{Analyzing TPP-TR experiments}
\subsection{Overview}
The function \Rfunction{analyzeTPPTR} executes the whole workflow from data import through normalization and curve fitting to statistical analysis. Nevertheless, all of these steps can be invoked separately by the user. The corresponding functions can be recognized by the prefix \Rfunction{tpptr}. Here, we first show how to start the whole analysis using \Rfunction{analyzeTPPTR}. Afterwards, we demonstrate how to carry out single steps individually.

Before you can start your analysis, you need to specify information about your experiments:
\begin{itemize}
\item The mandatory information comprises a unique experiment name, as well as the isobaric labels and corresponding temperature values for each experiment.
\item Optionally, you can also specify a condition for each experiment (treatment or vehicle), as well as an arbitrary number of comparisons. Comparisons are pairs of experiments whose melting points will be compared in order to detect significant shifts.
\end{itemize}
The package retrieves this information from a configuration table that you need to specify before starting the analysis. This table can either be a data frame that you define in your R session, or a spreadsheet in .xlsx or .csv format. In a similar manner, the measurements themselves can either be provided as a list of data frames, or imported directly from files during runtime.

We demonstrate the functionality of the package using the dataset \texttt{hdacTR\_smallExample}. It contains an illustrative subset of a larger dataset which was obtained by TPP-TR experiments on K562 cells treated with the histone deacetylase (HDAC) inhibitor panobinostat in the treatment groups and with vehicle in the control groups. The experiments were performed for two conditions (vehicle and treatment), with two biological replicates each.

The raw MS data were processed with the Python package \Rpackage{isobarQuant}, which provides protein fold changes relative to the protein abundance at the lowest temperature as input for the TPP package \cite{franken2015}. Each text file produced by \Rpackage{isobarQuant} contains, among others, the following information:
\begin{itemize}
\item The gene symbol per protein (column `gene\_name').
\item The relative concentrations per protein, already normalized to the lowest temperature (indicated by the prefix `rel\_fc\_', followed by the respective isobaric label).
\item Quality control columns `qupm' (quantified unique peptide matches), which describes the number of unique peptide sequences in a protein group with reporter ions used in protein quantification, and `qssm' (quantified spectrum-sequence matches), which describes the number of spectrum-to-sequence matches (peptides) in a protein group with reporter ions used in protein quantification.
\end{itemize}
More details about the \Rpackage{isobarQuant} output format can be found in the \href{https://www.nature.com/article-assets/npg/nprot/journal/v10/n10/extref/nprot.2015.101-S1.pdf}{supplementary manual} of the software (starting on page 7 of the document located at the provided link).

First, we load the data:
<>=
data("hdacTR_smallExample")
ls()
@
This command loads two objects:
\begin{enumerate}
\item \texttt{hdacTR\_data}: a list of data frames that contain the measurements to be analyzed,
\item \texttt{hdacTR\_config}: a configuration table with details about each experiment.
\end{enumerate}

\subsection{The configuration table}
\texttt{hdacTR\_config} is an example of a configuration table in data frame format. We also provide a .xlsx version of this table. It is stored in the folder \texttt{example\_data/TR\_example\_data} in your package installation path. You can locate the \texttt{example\_data} folder on your system by typing
<>=
system.file('example_data', package = 'TPP')
@
You can use both versions as a template for your own analysis. Let's take a closer look at the content of the configuration table we just loaded:
<>=
print(hdacTR_config)
@
It contains the following columns:
\begin{itemize}
\item Experiment: a unique name for each experiment.
\item Condition: the experimental condition (Vehicle or Treatment).
\item Comparisons: the comparisons to be performed.
\item Label columns: each isobaric label names a column that contains the temperature this label corresponds to in the individual experiments.
\end{itemize}
An additional \texttt{Path} column must be added to the table if the data should be imported from files instead of data frames.

\subsection{The data tables}
\texttt{hdacTR\_data} is a list of data frames containing the measurements for each experimental condition and replicate:
<>=
summary(hdacTR_data)
@
They contain between \Sexpr{min(sapply(hdacTR_data, nrow))} and \Sexpr{max(sapply(hdacTR_data, nrow))} proteins each:
<>=
data.frame(Proteins = sapply(hdacTR_data, nrow))
@
Each of the four data frames in \texttt{hdacTR\_data} stores protein measurements in a row-wise manner. For illustration, let's look at some example rows of the first vehicle group.
%For a better overview, an illustrative subset of \Sexpr{length(c(4:13, 25:34))} out \Sexpr{ncol(hdacTR_data[["Vehicle_1"]])} columns is shown:
<>=
hdacVehicle1 <- hdacTR_data[["Vehicle_1"]]
head(hdacVehicle1)
@
The columns can be grouped into three categories:
\begin{itemize}
\item a column with a protein identifier, called \texttt{gene\_name} in the current dataset,
\item the ten fold change columns, which all start with the prefix \texttt{rel\_fc\_}, followed by the isobaric labels \texttt{126} to \texttt{131L},
\item other columns that contain additional information. In the given example, the columns \texttt{qssm} and \texttt{qupm} were produced by the Python package \Rpackage{isobarQuant} when analyzing the raw MS data. This metadata will be included in the package's output table. Additionally, it can be filtered according to pre-specified quality criteria for the construction of the normalization set. The original results of the \Rpackage{isobarQuant} package contain more columns of this type. They are omitted here to keep the size of the example data within reasonable limits.
\end{itemize}

\subsection{Starting the whole workflow by \Rfunction{analyzeTPPTR}}
The default settings of the \Rpackage{TPP} package are configured to work with the output of the Python package \Rpackage{isobarQuant}, but you can adjust them for your own data, if desired. When analyzing data from \Rpackage{isobarQuant}, all you need to provide is:
\begin{itemize}
\item the configuration table,
\item the experimental data, either as a list of data frames, or as tab-delimited \texttt{.txt} files,
\item a desired output location, for example
<>=
resultPath = file.path(getwd(), 'Panobinostat_Vignette_Example')
@
\end{itemize}
If you want to use data from sources other than \Rpackage{isobarQuant}, see section \ref{sec:TR_with_nonIsobQuant_Data} for instructions.

If you import the data directly from tab-delimited \texttt{.txt} files, please make sure that the entries are not encapsulated by quotes (for example, \texttt{"entry1"} or \texttt{'entry2'}). All quotes will be ignored during data import in order to robustly handle entries containing single quotes (for example, protein annotations such as \texttt{5'} or \texttt{3'}).

By default, plots of the fitted melting curves are produced and stored in pdf format for each protein during runtime, and we highly recommend doing this when you analyze your own data. However, producing plots for all \Sexpr{length(unique(unlist(sapply(hdacTR_data, function(d) d$gene_name))))} proteins in our dataset can be time consuming and would slow down the execution of the current example. Thus, we first disable plotting by setting the argument \Rcode{plotCurves=FALSE}. Afterwards, we will produce plots for individual proteins of interest. Note that, in practice, you will only be able to examine the results in an unbiased manner if you allow the production of all plots.

We start the workflow by typing
<>=
TRresults <- analyzeTPPTR(configTable = hdacTR_config,
                          methods = "meltcurvefit",
                          data = hdacTR_data,
                          nCores = 2,
                          resultPath = resultPath,
                          plotCurves = FALSE)
@
This performs the melting curve fitting procedure in parallel on a maximum of two CPUs (a requirement for package vignettes). If the \texttt{nCores} argument is not specified, fitting is performed by default on the maximum number of CPUs available on your device.

\Rfunction{analyzeTPPTR} produces a table that summarizes the results for each protein. It is returned as a data frame and exported to an Excel spreadsheet at the specified output location. It contains the following information for each experiment:
\begin{itemize}
\item normalized fold changes,
\item melting curve parameters,
\item statistical test results,
\item quality checks on the curve parameters and p-values,
\item additional columns from the original input data.
\end{itemize}
The quality of the result for each protein is determined by four filters. Currently, these criteria are checked only when the experimental setup includes exactly two replicates:
\begin{center}
\begin{tabular}{p{10cm} | p{6cm}}
\textbf{Filter} & \textbf{Column name in result table} \\
\hline
\textbf{1.} Is the minimum slope in each of the control vs. treatment experiments $< -0.06$? & \texttt{minSlopes\_less\_than\_0.06} \\
\hline
\textbf{2.} Are both melting point differences in the control vs. treatment experiments greater than the melting point difference between the two untreated controls? & \texttt{meltP\_diffs\_T\_vs\_V\_greater\_V1\_vs\_V2} \\
\hline
\textbf{3.} Is one of the p-values for the two replicate experiments $< 0.05$ and the other one $< 0.1$?
& \texttt{min\_pVals\_less\_0.05\_and\_max\_pVals\_less\_0.1} \\
\hline
\textbf{4.} Do the melting point shifts in the two control vs. treatment experiments have the same sign (i.e.\ the protein was either stabilized or destabilized in both cases)? & \texttt{meltP\_diffs\_have\_same\_sign} \\
\end{tabular}
\end{center}
The current example revealed \Sexpr{sum(TRresults$fulfills_all_4_requirements==TRUE, na.rm=TRUE)} out of \Sexpr{nrow(TRresults)} proteins that fulfilled all four requirements:
<>=
tr_targets <- subset(TRresults, fulfills_all_4_requirements)$Protein_ID
print(tr_targets)
@
\Sexpr{length(grep("HDAC", tr_targets))} of the detected proteins belong to the HDAC family. Because panobinostat is known to act as an HDAC inhibitor, we select them for further investigation.
<>=
hdac_targets <- grep("HDAC", tr_targets, value=TRUE)
print(hdac_targets)
@
We next investigate these proteins by estimating their melting curves for the different treatment conditions. However, we can only reproduce the same curves as before if the data are normalized by the same procedure. Although we only want to fit and plot melting curves for a few proteins, the normalization therefore needs to incorporate all proteins in order to obtain the same normalization coefficients as before. The following section explains how to invoke these and other steps of the workflow independently of each other.

\subsection{Starting individual steps of the workflow}
\subsubsection{Data import}
Currently, the \Rpackage{TPP} package stores the data in \Rclass{ExpressionSet}s, and so we convert the data that we have into the needed format. An advantage of the \Rclass{ExpressionSet} container is its consistent and standardized handling of metadata for the rows and columns of the data matrix. This ability is useful for the given data, because it enables the annotation of each fold change column by temperature values as well as the corresponding isobaric labels. Furthermore, each protein can be annotated with several additional properties which can be used for normalization or processing of the package output.

The function \Rfunction{tpptrImport} imports the data and converts it into \Rclass{ExpressionSet}s:
<>=
trData <- tpptrImport(configTable = hdacTR_config, data = hdacTR_data)
@
The resulting object \Robject{trData} is a list of \Rclass{ExpressionSet}s, one for each experimental condition and replicate. Going back to the example data shown above (vehicle group 1), the corresponding object looks as follows:
<>=
trData[["Vehicle_1"]]
@
Each \Rclass{ExpressionSet} $S_i$ contains the fold change measurements (accessible by \Rfunction{exprs}($S_i$)), column annotation for isobaric labels and temperatures (accessible by \Rfunction{phenoData}($S_i$)), additional measurements obtained for each protein (accessible by \Rfunction{featureData}($S_i$)), and the protein names (accessible by \Rfunction{featureNames}($S_i$)).

\subsubsection{Data normalization}
Whether normalization needs to be performed and which method is best suited depends on the experiment. Currently, the \Rpackage{TPP} package offers the normalization procedure described by Savitski et al.\ (2014) \cite{savitski2014tracking}. It comprises the following steps:
\begin{enumerate}
\item In each experiment, filter proteins according to predefined quality criteria.
\item Among the remaining proteins, identify those that were quantified in \emph{all} experiments (jointP).
\item In each experiment, extract the proteins belonging to jointP.
Subselect those proteins that pass the predefined fold change filters.
\item Select the biggest remaining set among all experiments (normP).
\item For each experiment, compute the median fold changes over the proteins in normP and fit a sigmoidal melting curve through the medians.
\item Use the melting curve with the best $R^2$ value to normalize all proteins in each experiment.
\end{enumerate}
The function \Rfunction{tpptrNormalize} performs all described steps. It requires a list of filtering criteria for the construction of the normalization set. We distinguish between conditions on fold changes and on additional data columns. The function \Rfunction{tpptrDefaultNormReqs} returns an example object with default criteria for both categories:
<>=
print(tpptrDefaultNormReqs())
@
By default, \Rfunction{tpptrNormalize} applies the filtering criteria returned by \Rfunction{tpptrDefaultNormReqs}. If you want to normalize a dataset in which the column indicating measurement quality has a different name than 'qssm', you have to change the column name and threshold accordingly. Because our example data were produced by \Rpackage{isobarQuant}, we can use the default settings here. We normalize the imported data as follows:
<>=
normResults <- tpptrNormalize(data=trData)
trDataNormalized <- normResults[["normData"]]
@

\subsubsection{Melting curve fitting}
Next we fit and plot melting curves for the detected HDAC targets. We first select the corresponding rows from the imported data. The data are stored as \Rclass{ExpressionSet} objects from the \Rpackage{Biobase} package, which provides a range of functions to access and manipulate them, for example \Rfunction{featureNames}, \Rfunction{pData}, and \Rfunction{featureData}. In order to use them outside of the \Rpackage{TPP} package namespace, we call them with the explicit \texttt{Biobase::} prefix:
<>=
trDataHDAC <- lapply(trDataNormalized,
                     function(d) d[Biobase::featureNames(d) %in% hdac_targets,])
@
We fit melting curves for these proteins using the function \Rfunction{tpptrCurveFit}:
<>=
trDataHDAC <- tpptrCurveFit(data = trDataHDAC, resultPath = resultPath, nCores = 1)
@
The melting curve parameters are now stored within the \texttt{featureData} of the \Rclass{ExpressionSet}s. For example, the melting curves estimated for the first vehicle group have the following parameters:
<>=
Biobase::pData(Biobase::featureData(trDataHDAC[["Vehicle_1"]]))[,1:5]
@
The melting curve plots were stored in the subdirectory \texttt{Melting\_Curves} of \texttt{resultPath}. You can browse this directory and inspect the melting curves and their parameters. The short sketch below additionally shows how the estimated melting points can be collected across experiments directly from these objects.
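The following optional chunk is a minimal sketch and is not evaluated during vignette building. It relies only on the \Rpackage{Biobase} accessors introduced above; the parameter column name \texttt{meltPoint} is an assumption and should be checked against the \Rfunction{pData} output shown before running it.
<<meltPointSketch, eval=FALSE>>=
## Hedged sketch: collect the estimated melting points of the HDAC targets
## across all four experiments into one matrix (proteins x experiments).
## The parameter column name "meltPoint" is an assumption -- compare with the
## pData output above for the exact names used by your package version.
meltPointMat <- sapply(trDataHDAC, function(d) {
  Biobase::pData(Biobase::featureData(d))[, "meltPoint"]
})
rownames(meltPointMat) <- Biobase::featureNames(trDataHDAC[["Vehicle_1"]])
print(round(meltPointMat, 2))
## Differences between matching treatment and vehicle columns then give the
## melting point shifts that are evaluated statistically in the next section.
@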
In the following, you can see the plots that were placed in the \texttt{Melting\_Curves} directory for the \Sexpr{length(hdac_targets)} detected targets:
% <>=
% ## attempt to make import dynamic, currently not working
% for (target in hdac_targets){
%   cat("\\includegraphics[width=0.5\\textwidth]{",resultPath,"/Melting_Curves/meltCurve_",target,"}", sep = "")
% }
% @
%
\begin{center}
\includegraphics[width=0.5\textwidth]{\Sexpr{resultPath}/Melting_Curves/meltCurve_\Sexpr{hdac_targets[1]}}
\includegraphics[width=0.5\textwidth]{\Sexpr{resultPath}/Melting_Curves/meltCurve_\Sexpr{hdac_targets[2]}}
\includegraphics[width=0.5\textwidth]{\Sexpr{resultPath}/Melting_Curves/meltCurve_\Sexpr{hdac_targets[3]}}
\end{center}

\subsubsection{Significance assessment of melting point shifts}
Similar to the normalization explained earlier, significance assessment of melting point shifts has to be performed on the whole dataset due to the binning procedure used for p-value computation. For the given dataset, we have already analyzed all curve parameters by the function \Rfunction{analyzeTPPTR}. Here we show how you can start this procedure independently of the other steps. This can be useful when you only need to re-compute the p-values (for example with a different binning parameter) without repeating the runtime-intensive curve fitting step.

Melting curve parameter analysis is performed by the function \Rfunction{tpptrAnalyzeMeltingCurves}. It requires a list of \Rclass{ExpressionSet}s with melting curve parameters stored in the \texttt{featureData}. To avoid runtime-intensive repetitions of the curve fitting procedure, \Rfunction{analyzeTPPTR} saved these objects as an intermediate result after curve fitting in the subdirectory \texttt{/dataObj}. We can access them by the command:
<>=
load(file.path(resultPath, "dataObj", "fittedData.RData"), verbose=TRUE)
@
This loads the object \Robject{trDataFitted}, which is a list of \Rclass{ExpressionSet}s in which the melting curve parameters have already been stored in the \texttt{featureData} by \Rfunction{tpptrCurveFit}. Now we start the curve parameter evaluation, this time applying less rigorous filters on the quality of the fitted curves (defined by the minimum $R^2$) and their long-term melting behavior (constrained by the maximum plateau parameter):
<>=
minR2New <- 0.5 # instead of 0.8
maxPlateauNew <- 0.7 # instead of 0.3
newFilters <- list(minR2 = minR2New,
                   maxPlateau = maxPlateauNew)
TRresultsNew <- tpptrAnalyzeMeltingCurves(data = trDataFitted,
                                          pValFilter = newFilters)
@
We can then compare the outcome to the results that we previously obtained with \texttt{minR2 = 0.8} and \texttt{maxPlateau = 0.3}:
<>=
tr_targetsNew <- subset(TRresultsNew, fulfills_all_4_requirements)$Protein_ID
targetsGained <- setdiff(tr_targetsNew, tr_targets)
targetsLost <- setdiff(tr_targets, tr_targetsNew)
print(targetsGained)
print(targetsLost)
@
We observe that relaxing the filters before p-value calculation leads to the detection of \Sexpr{length(targetsGained)} new target protein\Sexpr{ifelse(length(as.character(targetsGained))==1, "", "s")}, while \Sexpr{length(as.character(targetsLost))} of the previous targets \Sexpr{ifelse(length(as.character(targetsLost))==1, "is", "are")} now omitted. This illustrates the trade-off we face when defining the filters: on the one hand, relaxing the filters enables more proteins, and hence more true targets, to be considered for testing. On the other hand, including poor fits increases the variability in the data and reduces the power during hypothesis testing. The short sketch below makes this trade-off visible on the current results.
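The following optional chunk is not evaluated during vignette building. It assumes that the per-experiment $R^2$ values in \texttt{TRresultsNew} are stored in columns starting with the prefix \texttt{R\_sq\_}; please verify this against \texttt{colnames(TRresultsNew)} before running it.
<<filterTradeoffSketch, eval=FALSE>>=
## Hedged sketch: count how many fitted curves per experiment would pass the
## original vs. the relaxed R^2 threshold. The "R_sq_" column prefix is an
## assumption -- check colnames(TRresultsNew) for the exact naming scheme.
r2Cols <- grep("^R_sq_", colnames(TRresultsNew), value = TRUE)
sapply(TRresultsNew[, r2Cols, drop = FALSE], function(r2) {
  c(passOriginal = sum(r2 >= 0.8,      na.rm = TRUE),
    passRelaxed  = sum(r2 >= minR2New, na.rm = TRUE))
})
@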
This trade-off motivated the implementation of an alternative spline-based approach for fitting and testing ("non-parametric analysis of response curves", NPARC), which was newly introduced in version 3.0.0 of the \Rpackage{TPP} package. For more information, please check the vignette \texttt{NPARC\_analysis\_of\_TPP\_TR\_data}:
<>=
browseVignettes("TPP")
@

\subsubsection{Output table}
Finally, we export the new results to an Excel spreadsheet. Because this process can be time consuming for a large dataset, we only export the detected potential targets:
<>=
tppExport(tab = TRresultsNew,
          file = file.path(resultPath, "targets_newFilters.xlsx"))
@

\subsection{Analyzing data not produced by the accompanying \Rpackage{isobarQuant} package}\label{sec:TR_with_nonIsobQuant_Data}
\subsubsection{Specifying customized column names for data import}
By default, \Rfunction{analyzeTPPTR} looks for a protein ID column named \texttt{gene\_name}, and a quality control column named \texttt{qupm} to assist in the decision between proteins with the same identifier. If these columns have different names in your own dataset, you have to define the new names using the arguments \texttt{idVar} and \texttt{qualColName}. Similarly, the argument \texttt{fcStr} has to be set to the new prefix of the fold change columns.

\subsubsection{Specifying customized filtering criteria for normalization}
You can set the filtering criteria for normalization set construction by modifying the supplied default settings. Remember to adjust the fold change column numbers in case you have more or fewer than ten fold changes per experiment.
<>=
trNewReqs <- tpptrDefaultNormReqs()
print(trNewReqs)
trNewReqs$otherRequirements[1,"colName"] <- "mycolName"
trNewReqs$fcRequirements[,"fcColumn"] <- c(6,8,9)
print(trNewReqs)
@

\subsection{Specifying the experiments to compare}
You can specify an arbitrary number of comparisons in the configuration table. For each comparison, you add a separate column whose name starts with the prefix 'Comparison'. In each comparison column, exactly two experiments are marked by alphanumeric entries (in our example, we used 'x'); all other entries remain empty. If conditions are specified in the 'Condition' column, comparisons between melting points will always be performed in the direction $Tm_{Treatment} - Tm_{Vehicle}$.

\section{Analyzing TPP-CCR experiments}
First, we load the data:
<>=
data("hdacCCR_smallExample")
@
This command loads two objects: \texttt{hdacCCR\_config}, a configuration table with one row per replicate, and \texttt{hdacCCR\_data}, a list of two data frames that contain the measurements of the two TPP-CCR replicates to be analyzed.

\subsection{Starting the whole workflow by \Rfunction{analyzeTPPCCR}}
We start the workflow for replicate 1 by typing
<>=
CCRresults <- analyzeTPPCCR(configTable = hdacCCR_config[1,],
                            data = hdacCCR_data[[1]],
                            resultPath = resultPath,
                            plotCurves = FALSE,
                            nCores = 2)
@
The following proteins passed the criteria of displaying a clear response to the treatment and enabling curve fitting with $R^2 > 0.8$:
<>=
ccr_targets <- subset(CCRresults, passed_filter_Panobinostat_1)$Protein_ID
print(ccr_targets)
@
\Sexpr{length(grep("HDAC", ccr_targets))} of the selected proteins belong to the HDAC family. Because panobinostat is known to act as an HDAC inhibitor, we select them for further investigation.
<>=
hdac_targets <- grep("HDAC", ccr_targets, value = TRUE)
print(hdac_targets)
@
The following section explains how to invoke the individual steps of the workflow separately. Before that, the short sketch below takes a quick look at the estimated pEC50 values of the detected targets.
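This optional chunk is not evaluated during vignette building. The column name \texttt{pEC50\_Panobinostat\_1} is an assumption derived from the experiment name that also appears in the \texttt{passed\_filter\_Panobinostat\_1} column; check \texttt{colnames(CCRresults)} for the exact name before running it.
<<pEC50Sketch, eval=FALSE>>=
## Hedged sketch: inspect the estimated pEC50 values of the detected HDAC
## targets from the whole-workflow run above. The column name
## "pEC50_Panobinostat_1" is an assumption based on the naming scheme of the
## "passed_filter_Panobinostat_1" column -- check colnames(CCRresults).
subset(CCRresults, Protein_ID %in% hdac_targets,
       select = c("Protein_ID", "pEC50_Panobinostat_1"))
@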
\subsection{Starting individual steps of the workflow}
\subsubsection{Data import}
The function \Rfunction{tppccrImport} imports the data and converts it into an \Rclass{ExpressionSet}:
<>=
ccrData <- tppccrImport(configTable = hdacCCR_config[1,], data = hdacCCR_data[[1]])
@
\subsubsection{Data normalization}
Currently, the \Rpackage{TPP} package offers normalization by fold change medians for TPP-CCR experiments. We normalize the imported data by
<>=
ccrDataNormalized <- tppccrNormalize(data = ccrData)
@
\subsubsection{Data transformation}
We next have to specify the type of response for each protein, and transform the data accordingly:
<>=
ccrDataTransformed <- tppccrTransform(data = ccrDataNormalized)[[1]]
@
\subsubsection{Dose response curve fitting}
Next we fit and plot dose response curves for the detected HDAC targets. We first select the corresponding rows from the imported data:
<>=
ccrDataHDAC <- ccrDataTransformed[match(hdac_targets, Biobase::featureNames(ccrDataTransformed)),]
@
We fit dose response curves for these proteins using the function \Rfunction{tppccrCurveFit} and plot them with \Rfunction{tppccrPlotCurves}:
<>=
ccrDataFittedHDAC <- tppccrCurveFit(data=list(Panobinostat_1 = ccrDataHDAC), nCores = 1)
tppccrPlotCurves(ccrDataFittedHDAC, resultPath = resultPath, nCores = 1)
@
The function \Rfunction{tppccrResultTable} produces a table that contains the dose response curve parameters and additional information about each protein:
<>=
ccrResultsHDAC <- tppccrResultTable(ccrDataFittedHDAC)
print(ccrResultsHDAC[,c(1, 22:25)])
@
The dose response curve plots were stored in the subdirectory \texttt{DoseResponse\_Curves} of \texttt{resultPath}. You can browse this directory and inspect the fits and dose response curve parameters. In the following, you can see the plots that were placed in this directory for the \Sexpr{length(hdac_targets)} detected targets:
\begin{center}
\includegraphics[width=0.5\textwidth]{\Sexpr{resultPath}/DoseResponse_Curves/drCurve_\Sexpr{hdac_targets[1]}}
\includegraphics[width=0.5\textwidth]{\Sexpr{resultPath}/DoseResponse_Curves/drCurve_\Sexpr{hdac_targets[2]}}
\includegraphics[width=0.5\textwidth]{\Sexpr{resultPath}/DoseResponse_Curves/drCurve_\Sexpr{hdac_targets[3]}}
\includegraphics[width=0.5\textwidth]{\Sexpr{resultPath}/DoseResponse_Curves/drCurve_\Sexpr{hdac_targets[4]}}
\end{center}
% \subsection{Package output (CCR part)}
% Returned objects: data frame, spreadsheet in .xlsx format.
%
% The spreadsheet contains the following types of columns:
% \begin{itemize}
% \item \texttt{normalized\_to\_lowest\_conc}: Fold changes normalized to the values at the lowest concentration. This ensures that the transformed values always range from $0$ to $1$. If the value at the lowest concentration is $0$, a small constant ($1e-15$) is added to prevent division by zero.
% \end{itemize}

\clearpage
\bibliography{TPP_references}

\end{document}