In this article, we walk through an end-to-end Affymetrix microarray differential expression workflow using Bioconductor packages. This workflow is directly applicable to current “Gene” type arrays, e.g. the HuGene or MoGene arrays, but can easily be adapted to similar platforms. The data analyzed here is a typical clinical microarray data set that compares inflamed and non-inflamed colon tissue in two disease subtypes. For each disease, the differential gene expression between inflamed and non-inflamed colon tissue was analyzed. We will start from the raw data CEL files, show how to import them into a Bioconductor ExpressionSet, perform quality control and normalization, and finally carry out differential gene expression (DE) analysis, followed by some enrichment analysis.
In this article we introduce a complete workflow for a typical (Affymetrix) microarray analysis. Data import, preprocessing, differential expression and enrichment analysis are discussed. The data set used
[1] is from a paper studying the differences in gene expression in inflamed and non-inflamed tissue. Fourteen patients suffering from ulcerative colitis (UC) and 15 patients with Crohn’s disease (CD) were tested, and from each patient, inflamed and non-inflamed colonic mucosa tissue was obtained via a biopsy. This is a typical clinical data set consisting of 58 arrays in total. Our aim is to analyze differential expression (DE) between the tissues. Our results show a substantial overlap with the results of the original paper.
Workflow package installation
The workflow is wrapped in a package called
maEndToEnd. The
maEndToEnd package can currently be obtained from GitHub and is available via the current development version of Bioconductor (3.8) (see here:
http://bioconductor.org/packages/devel/workflows/html/maEndToEnd.html).
Workflow package installation from Bioconductor
You can install the package via the
biocLite function. Currently, the workflow is available in the development version of Bioconductor (3.8), which will become the release version in October 2018. For details on how to use this version of Bioconductor, see:
http://bioconductor.org/developers/how-to/useDevel/
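The installation call itself is not shown above; as a sketch, installing the workflow under Bioconductor 3.8 with the then-current biocLite mechanism could look like the following (note that newer Bioconductor versions use BiocManager::install() instead, an assumption you should check against your installed version):

```r
# Bioconductor <= 3.8: install via biocLite; switch to the devel branch first.
source("https://bioconductor.org/biocLite.R")
useDevel()              # use the development version of Bioconductor (3.8)
biocLite("maEndToEnd")  # install the workflow package
```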
Workflow package installation from Github
Workflow package import
Once the workflow package has been successfully installed, we can use a call to
library() in order to load it. This will also load all the other packages necessary to run the workflow.
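As a minimal illustration, this import is a single call (the packages named in the comment are examples of attached dependencies, not an exhaustive list):

```r
# Attaches the workflow package together with its dependencies,
# e.g. Biobase, oligo and limma.
library(maEndToEnd)
```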
List of packages required for the workflow
Below, you find a list of packages that are required by the workflow. Some helper/styling packages have been commented out here, as they are not strictly necessary to execute the workflow.
Downloading the raw data from ArrayExpress
The first step of the analysis is to download the raw data CEL files. These files are produced by the array scanner software and contain the measured probe intensities. The data we use have been deposited at
ArrayExpress and have the accession code
E-MTAB-2967. We will store these files in the directory
raw_data_dir, which defaults to a temporary directory. Each ArrayExpress data set has a landing page summarizing the data set, and we use the
getAE function from the
ArrayExpress Bioconductor package to obtain the ftp links to the raw data files (
Data from Palmieri et al. on ArrayExpress). With the code below, we download the raw data (also including annotation data) from
ArrayExpress
[2] by using the
getAE-function. The data are saved in the
raw_data_dir created above. The names of the downloaded files are returned as a list. We will now have a closer look at the data we downloaded from ArrayExpress.
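The download step described above could be sketched as follows; raw_data_dir and the accession are from the text, and the call mirrors the described getAE usage, but treat it as a sketch:

```r
library(ArrayExpress)

# Directory for the raw data; defaults to a temporary directory here
raw_data_dir <- tempdir()

# Download CEL files plus SDRF/IDF annotation for E-MTAB-2967;
# getAE returns a list with the names of the downloaded files.
anno_AE <- getAE("E-MTAB-2967", path = raw_data_dir, type = "raw")
```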
Background information on the data
Information stored in ArrayExpress
Each dataset at ArrayExpress is stored according to the MAGE-TAB (MicroArray Gene Expression Tabular)
specifications as a collection of tables bundled with the raw data. The MAGE-TAB format specifies up to five
different types of files:

- Investigation Description Format (IDF)
- Array Design Format (ADF)
- Sample and Data Relationship Format (SDRF)
- raw data files
- processed data files

Other than the raw data files, the IDF and the SDRF file are important for us. The IDF file contains top-level information about the experiment, including title, description, submitter contact details and protocols. The SDRF file contains essential information on the experimental samples, e.g. the experimental group(s) they belong to. Before we move on to the actual raw data import, we will briefly introduce the
ExpressionSet class contained in the
Biobase package. It is commonly used to store microarray data in Bioconductor.
Bioconductor ExpressionSets
Genomic data can be very complex, usually consisting of a number of different components, e.g. information on the experimental samples, annotation of the genomic features measured, as well as the experimental data itself. In Bioconductor, the approach is taken that these components should be stored in a single structure to easily manage the data. The
Biobase package contains standardized data structures to represent genomic data. The
ExpressionSet class is designed to combine several different sources of information (i.e. as contained in the various MAGE-TAB files) into a single convenient structure. An ExpressionSet can be manipulated (e.g., subsetted, copied), and is the input to or output of many Bioconductor functions. The data in an ExpressionSet consist of:

- assayData: expression data from microarray experiments, with microarray probes in rows and sample identifiers in columns
- metadata
  - phenoData: a description of the samples in the experiment, with sample identifiers in rows and description elements in columns; holds the content of the SDRF file
  - featureData: metadata about the features on the chip or technology used for the experiment, with the same rows as assayData by default and freely assignable columns
  - further annotations for the features, for example gene annotations from biomedical databases (annotation)
- experimentData: a flexible structure to describe the experiment

The ExpressionSet class coordinates all of these data, so that one does not have to worry about the details. However, one should keep in mind that the row names of the
phenoData have to match the column names of the assay data, while the row names of the assay data have to match the row names of the
featureData. This is illustrated in
Figure 1.
Figure 1.
Structure of Bioconductor’s ExpressionSet class.
You can use the functions
pData and
fData to extract the sample and feature annotation, respectively, from an
ExpressionSet. The function
exprs will return the expression data itself as a matrix.
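In code, the three accessors look like this; eset is a placeholder name for any ExpressionSet:

```r
library(Biobase)

sample_info  <- pData(eset)   # sample annotation (rows = samples)
feature_info <- fData(eset)   # feature annotation (rows = probes)
expr_matrix  <- exprs(eset)   # expression matrix (probes x samples)

# The consistency constraint illustrated in Figure 1:
stopifnot(identical(rownames(pData(eset)), colnames(exprs(eset))))
```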
Import of annotation data and microarray expression data as “ExpressionSet”
We import the SDRF file with the
read.delim function from the raw data folder in order to obtain the sample annotation. The sample names are given in the column Array.Data.File of the SDRF data table and will be used as row names for the SDRF file. We turn the SDRF table into an
AnnotatedDataFrame from the
Biobase package, which we will need later to create an
ExpressionSet for our data
[3]. We now create the ExpressionSet object
raw_data, which contains the array data, the pheno data (from the SDRF file) as well as the information on the chip annotation package used. The analysis of Affymetrix arrays starts with CEL files. These are the result of processing the raw image files using the Affymetrix software and contain estimated probe intensity values. Each CEL file additionally contains some metadata, such as a chip identifier. We use the function
read.celfiles from the
oligo package
[4] to import the files. This automatically creates an ExpressionSet, fills the section “array data” with the data from the CEL files and uses the correct chip annotation package, in this case
pd.hugene.1.0.st.v1 (the chip type is also stored in the .CEL files). Furthermore, we specified our
AnnotatedDataFrame
“SDRF” created earlier from the SDRF file as
phenoData. Thus, we had to make sure to import the CEL files in the order that corresponds to the SDRF table — to enforce this, we used the column
Array.Data.File of the
SDRF table as the
filenames argument. Finally, we checked whether the object created is valid (e.g. sample names match between the different tables). We now have a first look at the raw data. The
pData function of the Biobase package directly accesses the phenoData in the ExpressionSet
raw_data. With the
head() function, we can view the first six lines of the table. We have a look at the columns included and retain only those columns that are related to the experimental factors of interest. The columns of interest for us are the following:

- identifiers of the individuals, i.e. the columns “Source.Name” and “Characteristics.individual.”
- disease of the individual, i.e. “Factor.Value.disease.”
- mucosa type, i.e. “Factor.Value.phenotype.”

We now subselect the corresponding columns:
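The import steps described in this section can be sketched as below; the SDRF file name is inferred from the accession and should be checked against the downloaded files:

```r
library(Biobase)
library(oligo)

# Read the SDRF table and use the CEL file names as row names
sdrf_location <- file.path(raw_data_dir, "E-MTAB-2967.sdrf.txt")
SDRF <- read.delim(sdrf_location)
rownames(SDRF) <- SDRF$Array.Data.File
SDRF <- AnnotatedDataFrame(SDRF)

# Import the CEL files in SDRF order, so that phenoData and assay data match
raw_data <- oligo::read.celfiles(
  filenames = file.path(raw_data_dir, SDRF$Array.Data.File),
  verbose = FALSE,
  phenoData = SDRF)
stopifnot(validObject(raw_data))

# Retain only the columns related to the experimental factors of interest
pData(raw_data) <- pData(raw_data)[, c("Source.Name",
                                       "Characteristics.individual.",
                                       "Factor.Value.disease.",
                                       "Factor.Value.phenotype.")]
```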
Quality control of the raw data
The first step after the initial data import is the quality control of the data. Here we check for outliers and try to see whether the data clusters as expected, e.g. by the experimental conditions. The expression intensity values are in the assayData sub-object “exprs” and can be accessed by the
exprs(raw_data) function. The rows represent the microarray probes, i.e. the single DNA locations on the chip, while each column represents one microarray, i.e. a sample of inflamed or non-inflamed tissue of a patient. For quality control, we take the log2 of
Biobase::exprs(raw_data), as expression data is commonly analyzed on a logarithmic scale. We then perform a principal component analysis (PCA) and plot it (
Figure 2). Every point in the plot represents one sample, with the colour indicating the mucosa type (inflamed vs non-inflamed) and the shape indicating the disease (UC or CD).
Figure 2.
PCA plot of the log–transformed raw expression data.
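A sketch of the PCA computation and plot behind Figure 2; the ggplot2 aesthetics chosen here are one possible way to encode phenotype and disease:

```r
library(Biobase)
library(ggplot2)

exp_raw <- log2(Biobase::exprs(raw_data))
PCA_raw <- prcomp(t(exp_raw), scale. = FALSE)  # samples as observations

dataGG <- data.frame(PC1 = PCA_raw$x[, 1],
                     PC2 = PCA_raw$x[, 2],
                     Disease   = pData(raw_data)$Factor.Value.disease.,
                     Phenotype = pData(raw_data)$Factor.Value.phenotype.)

ggplot(dataGG, aes(PC1, PC2)) +
  geom_point(aes(shape = Disease, colour = Phenotype)) +
  ggtitle("PCA plot of the log-transformed raw expression data")
```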
The PCA plot (
Figure 2, performed on the log-intensity scale) of the raw data shows that the first principal component differentiates between the diseases. This means that the disease type is a major driver of gene expression differences. This might hinder our analysis, as we want to analyze the differential expression between inflamed and non-inflamed tissue, independently of the disease a person suffers from. We also represent the probe intensities via a boxplot, with one box per individual microarray (
Figure 3). Note that the
oligo::boxplot function, i.e. the boxplot function of the oligo package, can take an ExpressionSet as argument. It accesses the expression data and performs a log2-transformation by default. We can therefore use
raw_data as argument here.
Figure 3.
Intensity boxplots of the log2–transformed raw data.
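The boxplot of Figure 3 can be produced directly from the ExpressionSet, for example:

```r
# oligo::boxplot log2-transforms the intensities by default,
# so raw_data can be passed as-is; one box per array.
oligo::boxplot(raw_data, target = "core",
               main = "Boxplot of log2-intensities for the raw data")
```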
When looking at the boxplot (
Figure 3), we see that the intensity distributions of the individual arrays are quite different, indicating the need for an appropriate normalization, which we will discuss next. Until now, we have only performed very basic quality control; more elaborate quality control plots are available in the arrayQualityMetrics package
[5]. The package produces an html report containing the quality control plots, together with a description of their aims and an identification of possible outliers. We do not discuss this tool in detail here, but simply provide the code below, which creates a report for our raw data.
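A sketch of the report-generating call; the intgroup columns are the factor columns used above and may differ in your annotation:

```r
library(arrayQualityMetrics)

# Writes an html QC report with outlier detection to outdir
arrayQualityMetrics(expressionset = raw_data,
                    outdir = tempdir(),
                    force = TRUE,
                    do.logtransform = TRUE,
                    intgroup = c("Factor.Value.disease.",
                                 "Factor.Value.phenotype."))
```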
Background adjustment, calibration, summarization and annotation
Background adjustment
After the initial import and quality assessment, the next step in processing of microarray data is background adjustment. This is essential because a proportion of the measured probe intensities are due to non-specific hybridization and the noise in the optical detection system. Therefore, observed intensities need to be adjusted to give accurate measurements of specific hybridization.
Across-array normalization (calibration)
Normalization across arrays is needed to make measurements from different array hybridizations comparable, since many obscuring sources of variation affect them. These include different efficiencies of reverse transcription, labeling or hybridization reactions, physical problems with the arrays, reagent batch effects, and laboratory conditions.
Summarization
After normalization, summarization is necessary because on the Affymetrix platform, transcripts are represented by multiple probes, that is, multiple locations on the array. For each gene, the background-adjusted and normalized intensities of all probes need to be summarized into one quantity that estimates an amount proportional to the amount of RNA transcript. After the summarization step, the summarized data can be annotated with various information, e.g. gene symbols and ENSEMBL gene identifiers. There is an annotation database available from Bioconductor for our platform, namely the package
hugene10sttranscriptcluster.db. You can view its content like this:

Additional information is available from the reference manual of the package. Essentially, the package provides a mapping from the transcript cluster identifiers to the various annotation data.
Old and new “probesets” of Affymetrix microarrays
Traditionally, Affymetrix arrays (the so-called 3’ IVT arrays) were probeset based: a certain fixed group of probes was part of a probeset that represented a certain gene or transcript (note, however, that a gene can be represented by multiple probesets). The more recent “Gene” and “Exon” Affymetrix arrays are exon based, and hence there are two levels of summarization to get to the gene level. The “probeset” summarization leads to the exon level; the gene/transcript level is given by “transcript clusters”. Hence, the appropriate annotation package for our chip type is called
hugene10sttranscriptcluster.db. “Gene” arrays were created as affordable versions of the “Exon” arrays by only taking the “good” probes from the Exon array. Initially, on the Exon array, at least four probes were part of one exon. With the thinned-out “Gene” array, many probesets are made up of three or fewer probes. This is visualized in
Figure 4: Single probesets are indicated by single colours; probesets representing one gene are indicated by a colour shade: e.g., all yellow probes belong to one Exon, and all yellow, orange and red probesets belong to one gene:
Figure 4.
Visualization of the difference between "Exon" type array (left) and "Gene" type array (right).
On the left side, we see plenty of probes for each exon/probeset (i.e. each colour): therefore, a summarization on the probeset/exon level makes sense. On the “Gene” type array, however, only a small proportion of the original probes per probeset is included. Thus, a summarization on the probeset/exon level is not recommended for “Gene” arrays, but nonetheless possible by using the
hugene10stprobeset.db annotation package. Note furthermore that designated match/mismatch probes are no longer present on “Gene” and “Exon” type chips. The mismatch probe was initially intended as a base level for background correction, but hasn’t prevailed due to more elaborate background correction techniques that do not require a mismatch probe.
One-step preprocessing in oligo
The
oligo package allows us to perform background correction, normalization and summarization in one single step, using a deconvolution method for background correction, quantile normalization and the RMA (robust multichip average) algorithm for summarization. This series of steps as a whole is commonly referred to as the RMA algorithm, although strictly speaking RMA is merely a summarization method
[6–
8].
Relative Log Expression data quality analysis
Before calibrating and evaluating the data, we want to perform another quality control procedure, namely Relative Log Expression (RLE), as described in the article by Gandolfo
et al.
[9]. To this end, we first perform an RMA without prior normalization:

Further details on the RMA algorithm will be provided after the RLE analysis, when the “full” RMA is carried out, including normalization. The RLE is performed by calculating the median log2 intensity of every transcript across all arrays. We do this by calculating the row medians of
exprs(palmieri_eset), as the transcripts are represented by the rows and the single microarrays by the columns. Note that we do not have to apply the log2 manually, as the output data of the RMA function are on the log2 scale by default. We then subtract this transcript median intensity from every transcript intensity via the
sweep function. We then reshape the data into a format that we can use to create a boxplot for each array, as before. Note that the y-axis now displays, for each microarray, the deviation of expression intensity from the median expression of the respective single transcripts across arrays. Boxes with a larger extension therefore indicate an unusually high deviation from the median for a lot of transcripts, suggesting that these arrays are different from most of the others in some way. Boxes that are shifted in the y-direction indicate a systematically higher or lower expression of the majority of transcripts in comparison to most of the other arrays. This could be caused by quality issues or batch effects. Therefore, if the shape or median of a given box varies too much from the bulk, the array should be inspected and potentially removed. By inspecting the boxplot in
Figure 5, five arrays could be considered as outliers: 2826_I, 2826_II, 3262_II, 3302_II and 3332_II are negatively y-shifted.
Figure 5.
Boxplot for the RLE values.
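The RLE computation described above can be sketched as:

```r
library(Biobase)
library(oligo)

# RMA without across-array normalization, for the RLE diagnostic only
palmieri_eset <- oligo::rma(raw_data, target = "core", normalize = FALSE)

# Median log2 intensity of every transcript across all arrays
# (rma output is already on the log2 scale)
row_medians <- Biobase::rowMedians(as.matrix(Biobase::exprs(palmieri_eset)))

# Subtract each transcript's median from its intensity on every array
RLE_data <- sweep(Biobase::exprs(palmieri_eset), 1, row_medians)
```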
We will keep these samples in mind for heatmap cluster analysis later on in the workflow. Arrays that are confirmed to be outliers by heatmap analysis could be removed for subsequent analysis.
RMA calibration of the data
Now, we can apply the full RMA algorithm to our data in order to background-correct, normalize and summarize:

The parameter
target defines the degree of summarization, the default option of which is “core”, using transcript clusters containing “safely” annotated genes. Other options for
target include “extended” and “full”. For summarization on the exon level (not recommended for Gene arrays), one can use “probeset” as the target option. Although other methods for background correction and normalization exist, RMA is usually a good default choice. RMA shares information across arrays and uses the versatile quantile normalization method that will make the array intensity distributions match. However, it is preferable to apply it only after outliers have been removed. The quantile normalization algorithm used by RMA works by replacing values by the average of identically ranked (within a single chip) values across arrays. A more detailed description can be found on the
Wikipedia page. An alternative to quantile normalization would be the
vsn algorithm, which performs background correction and normalization by robustly shifting and scaling intensity values within arrays before log-transforming them. This is less “severe” than quantile normalization
[10].
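The full calibration is then a single call; a sketch:

```r
# Background correction, quantile normalization and RMA summarization
# at the transcript-cluster ("core") level in one step
palmieri_eset_norm <- oligo::rma(raw_data, target = "core")
```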
Some mathematical background on normalization (calibration) and background correction
A generic model for the value of the intensity Y of a single probe on a microarray is given by

Y = B + α · S

where B is a random quantity due to background noise, usually composed of optical effects and non-specific binding, α is a gain factor, and S is the amount of measured specific binding. The signal S is considered a random variable as well and accounts for measurement error and probe effects. The measurement error is typically assumed to be multiplicative, so we can write:

log(S) = θ + φ + ε

Here θ represents the logarithm of the true abundance, φ is a probe-specific effect, and ε accounts for the nonspecific error. This is the additive-multiplicative-error model for microarray data used by RMA and also by the vsn algorithm
[10]. The algorithms differ in the way that B is removed and an estimate of θ is obtained.
Quality assessment of the calibrated data
We now produce a clustering heatmap and another PCA plot using the calibrated data. First, we perform a PCA analysis of the calibrated data, analogously to the one on the raw data:

In comparison to the first PCA analysis before RMA (
Figure 2), we see that the first principal component now separates the tissue types (
Figure 6). This indicates that now differential expression between the tissue types is the dominant source of variation. Note that the second principal component separates the diseases.
Figure 6.
PCA plot of the calibrated, summarized data.
We want to plot a heatmap of the sample-to-sample distances, with the sample names as row names. Also, we want to see how well the samples cluster by phenotype (inflamed and non-inflamed tissue) and disease (UC and CD), respectively. We use row annotation for that: these features get a colour code and are displayed to the left of each row. In order to map the sample-to-sample distances, we first compute the distances using the
dist function. We need to transpose the expression values, since the function computes the distances between the rows (i.e. genes in our case) by default. The default distance is the Euclidean one; however, this can be changed, and we choose the Manhattan distance here (it uses absolute distances along rectangular paths instead of squared distances of the direct path), as it is more robust. We set the diagonal of the distance matrix to
NA in order to increase the contrast of the colour coding. The diagonal entries do not contain information, since the distance of a sample to itself is always zero. On the heatmap plot (
Figure 7) we also see that the samples do not cluster strongly by tissue, confirming the impression from the PCA plot (
Figure 6) that the separation between the tissues is not perfect. The yellow stripes in the heatmap might correspond to outliers that could potentially be removed: the ones that could be flagged here are 2826_II, 3262_II, 3271_I, 2978_II and 3332_II. Of these, 2826_II, 3262_II and 3332_II were found to be outliers in both the RLE and the heatmap analysis and could therefore potentially be removed; however, in order to stay as close as possible to the original paper, we continue with the complete set of samples. Note again that more elaborate metrics to identify and remove outliers are provided by the arrayQualityMetrics
package.
Figure 7.
Heatmap of the sample-to-sample distances.
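A sketch of the distance computation and heatmap, here using the pheatmap package:

```r
library(Biobase)
library(pheatmap)

# Manhattan distances between samples (transpose: dist works on rows)
dists <- as.matrix(dist(t(Biobase::exprs(palmieri_eset_norm)),
                        method = "manhattan"))
diag(dists) <- NA  # blank the zero diagonal to improve colour contrast

# Row annotation: phenotype and disease per sample
ann <- data.frame(
  Phenotype = pData(palmieri_eset_norm)$Factor.Value.phenotype.,
  Disease   = pData(palmieri_eset_norm)$Factor.Value.disease.)
rownames(ann) <- rownames(dists)

pheatmap(dists, annotation_row = ann,
         main = "Sample-to-sample distances (Manhattan)")
```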
Filtering based on intensity
We now filter out lowly expressed genes. Microarray data commonly show a large number of probes in the background intensity range. These probes also do not change much across arrays; hence, they combine low variance with low intensity. Thus, they could end up being detected as differentially expressed, although they are barely above the “detection” limit and not very informative in general. We will perform a “soft” intensity-based filtering here, since this is recommended by the limma
[11,
12] user guide (a package we will use below for the differential expression analysis). However, note that a variance-based filter might exclude a similar set of probes in practice. For intensity-based filtering, we calculate the row-wise medians from the expression data, as they represent the transcript medians, and assign them to
palmieri_medians. From this we create a histogram:

In the histogram of the gene-wise medians (
Figure 8), we can clearly see an enrichment of low medians on the left hand side. These represent the genes we want to filter. In order to infer a cutoff from the data, we inspect the histogram: We visually set a cutoff line
man_threshold to the left of the histogram peak in order not to exclude too many genes. In our example, we choose a threshold of 4. We plot the same histogram as before and add the threshold line with the
abline() function (
Figure 9):
Figure 8.
Histogram of the median intensities per gene.
Figure 9.
Histogram of the median intensities per gene with manual intensity filtering threshold (red line).
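The histogram with the threshold line (Figures 8 and 9) can be sketched as:

```r
library(Biobase)

# Transcript-wise medians of the normalized, log2-scale intensities
palmieri_medians <- Biobase::rowMedians(Biobase::exprs(palmieri_eset_norm))

man_threshold <- 4  # manually chosen cutoff, left of the histogram peak

hist(palmieri_medians, breaks = 100, freq = FALSE,
     main = "Histogram of the median intensities per gene",
     xlab = "Median intensities")
abline(v = man_threshold, col = "red", lwd = 2)
```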
Transcripts that do not have intensities larger than the threshold in at least as many arrays as the smallest experimental group are excluded. In order to do so, we first have to get a list with the number of samples (= arrays) (
no_of_samples) in the experimental groups:

We now filter out all transcripts that do not have intensities greater than the threshold in at least as many arrays as the smallest experimental group (14), which we define as
samples_cutoff. A function
idx_man_threshold is then applied to each row, i.e. to each transcript across all arrays. It evaluates whether the number of arrays in which the intensity passes the threshold (
sum(x > man_threshold)) is at least as large as
samples_cutoff, and returns TRUE or FALSE for each row, i.e. each transcript. We then create a table of
idx_man_threshold to summarize the results and get an overview of how many genes are filtered out. In the last step, we subset our expression set to
palmieri_manfiltered and keep the TRUE elements of
idx_man_threshold.
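These filtering steps can be sketched as follows; the group definition via pasted factor columns is one possible way to obtain the group sizes:

```r
library(Biobase)

# Number of arrays per experimental group; the smallest group (14) is the cutoff
groups <- paste0(pData(palmieri_eset_norm)$Factor.Value.disease., "_",
                 pData(palmieri_eset_norm)$Factor.Value.phenotype.)
no_of_samples <- table(groups)
samples_cutoff <- min(no_of_samples)

# TRUE for transcripts exceeding the threshold in enough arrays
idx_man_threshold <- apply(Biobase::exprs(palmieri_eset_norm), 1,
                           function(x) sum(x > man_threshold) >= samples_cutoff)
table(idx_man_threshold)

# Keep only the transcripts that pass the filter
palmieri_manfiltered <- subset(palmieri_eset_norm, idx_man_threshold)
```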
Annotation of the transcript clusters
Before we continue with the linear models for microarrays and differential expression, we first add “feature data”, i.e. annotation information, to the transcript cluster identifiers stored in the featureData of our ExpressionSet:

We used the function
select from the AnnotationDbi package
to query the gene symbols and associated short descriptions for the transcript clusters. For each cluster, we added the gene symbol (
SYMBOL) and a short description of the gene the cluster represents (
GENENAME). In a second step, we filtered out the probes that do not map to a gene, i.e. that do not have a gene symbol assigned.
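A sketch of the annotation query and the subsequent NA filtering:

```r
library(hugene10sttranscriptcluster.db)

# Map transcript-cluster IDs to gene symbols and descriptions
anno_palmieri <- AnnotationDbi::select(hugene10sttranscriptcluster.db,
                                       keys = featureNames(palmieri_manfiltered),
                                       columns = c("SYMBOL", "GENENAME"),
                                       keytype = "PROBEID")

# Drop probes without an assigned gene symbol
anno_palmieri <- subset(anno_palmieri, !is.na(SYMBOL))
```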
Removing multiple mappings
Many transcript-cluster identifiers will map to multiple gene symbols, i.e. they can’t be unambiguously assigned. We compute a summary table in the code below to see how many there are:

First, we grouped
anno_palmieri by their PROBEID; that way, the subsequent operations are not carried through for each single row, but for each group, i.e. each PROBEID. We then summarized the groups and indicate the number of different genes assigned to a transcript cluster in the column
no_of_matches. Finally, we filtered for PROBEIDs with multiple matches, i.e.
no_of_matches > 1. With
dim(probe_stats), we could see how many probes have been mapped to multiple genes. We have close to 2000 transcript clusters that map to multiple gene symbols. It is difficult to decide which mapping is “correct”; therefore, we exclude these transcript clusters. We want to remove the probe IDs that match the ones in
probe_stats, as those are the probes with multiple mappings. We assign these IDs to the variable
ids_to_exclude. Then, we generate
palmieri_final, an expression set without the
ids_to_exclude. As we have just excluded probe IDs from the assay data, we now have to exclude them from the feature data anno_palmieri as well:

Recall that
fData enables us to access the feature data of an expression set. Until now, no feature data whatsoever is stored in the
fData(palmieri_final); only the row names are present, which by default are the row names of the assay data, i.e. the PROBEIDs of the transcripts. Therefore, we generate a column PROBEID in
fData(palmieri_final) and assign the row names of
fData(palmieri_final) to it:

Then, we left-join
fData(palmieri_final) with
anno_palmieri, which already contains the columns “SYMBOL” and “GENENAME”. A left-join keeps the rows and columns of the first argument and adds the corresponding column entries of the second argument:

By left-joining with
anno_palmieri, we thus add the “SYMBOL” and “GENENAME” columns from
anno_palmieri for only the PROBEIDs that are in
fData(palmieri_final), and thus get the feature data for the filtered probes.
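The whole multiple-mapping removal can be sketched with dplyr as:

```r
library(dplyr)
library(Biobase)

# Count, per transcript cluster, how many distinct gene symbols it maps to
anno_grouped    <- group_by(anno_palmieri, PROBEID)
anno_summarized <- summarize(anno_grouped, no_of_matches = n_distinct(SYMBOL))
probe_stats     <- filter(anno_summarized, no_of_matches > 1)
dim(probe_stats)  # number of ambiguous transcript clusters

# Exclude ambiguous probes from the expression set
ids_to_exclude <- featureNames(palmieri_manfiltered) %in% probe_stats$PROBEID
palmieri_final <- subset(palmieri_manfiltered, !ids_to_exclude)

# Rebuild the feature data: PROBEID column, then left-join the annotation
fData(palmieri_final)$PROBEID <- rownames(fData(palmieri_final))
fData(palmieri_final) <- left_join(fData(palmieri_final), anno_palmieri,
                                   by = "PROBEID")
rownames(fData(palmieri_final)) <- fData(palmieri_final)$PROBEID
```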
Building custom annotations
Alternatively, one can re-map the probes of the array to a current annotation. A workflow to do this for Illumina arrays is given in Arloth
et al.
[13]. Essentially, the individual probe sequences are re-aligned to an in-silico “exome” that consists of all annotated transcript exons. In any case, the
pdInfoBuilder package
can be used to build custom annotation packages for use with
oligo. In order to do this, PGF/CLF files (called “Library files” on the Affymetrix website) as well as the probeset annotations are required. The probesets typically represent small stretches of the genome (such as a single exon), and multiple probesets are then used to form a transcript cluster. The CLF file contains information about the location of individual probes on the array. The PGF file contains the individual probe sequences and shows which probeset they belong to. Finally, the probeset annotation .csv contains information about which probesets are used in which transcript cluster. Commonly, multiple probesets are used in one transcript cluster, and some probesets are contained in multiple transcript clusters.
Linear models
In order to analyse which genes are differentially expressed between inflamed and non-inflamed tissue, we have to fit a linear model to our expression data. Linear models are the “workhorse” for the analysis of experimental data. They can be used to analyse almost arbitrarily complex designs; however, they also take a while to learn and understand, and a thorough description is beyond the scope of this workflow. Mike Love’s and Rafael Irizarry’s
genomics class
[14] is a very good resource, especially the section on
interactions and contrasts. It might also be helpful to learn some linear algebra to better understand the concepts here. The Khan Academy offers helpful (and free)
online courses.
Linear models for microarrays
We now apply linear models to microarrays. Specifically, we discuss how to use the
limma package for differential expression analysis. The package is designed to analyze complex experiments involving comparisons between many experimental groups simultaneously, while remaining reasonably easy to use for simple experiments. The main idea is to fit a linear model to the expression data of each gene. Empirical Bayes and other methods are used to borrow information across genes for the residual variance estimation, leading to “moderated”
t-statistics, and stabilizing the analysis for experiments with just a small number of arrays
[12]. Conceptually, the final per-gene variance is a mix of a prior variance and the per-gene variance. Typical experimental designs are discussed in chapter 9 of the limma
“User Guide”, which can be found on the Bioconductor landing page of
limma. In the following, we use appropriate design and contrast matrices for our linear models and fit a linear model to each gene separately.
A linear model for the data
For the subsequent linear modelling of the data, we introduce the abbreviations “UC” and “CD” for the disease types, and “non_infl.” and “infl.” for the phenotypes, respectively. The original paper is interested in changes in transcription that occur between inflamed and adjacent non-inflamed mucosal areas of the colon. This is studied in both inflammatory bowel disease types. For building our linear model, we have to think about which experimental variables we want to consider. As we want to find differential expression between non-inflamed and inflamed tissue, in principle those are the only two variables we would have to consider. However, since we have two arrays per individual patient, we have a “Paired Samples” design (see section 9.4 of the limma
user guide). This means that the samples might be biased by the person they come from. Whenever a feature in an experimental setup is expected to have a systematic influence on the result, blocking factors on these features should be introduced.Thus, the first factor we need is a blocking factor for the individuals that will absorb differences in expression between them. Therefore, we block on patients, which means that the patient IDs become variables of the linear model.Then we create factors that give us the grouping for the tissue types (non-inflamed and inflamed).Finally, we create two design matrices, one for each of the two diseases as we will analyze them separately in order to follow the analysis strategy of the original paper closely (one could also fit a joint model to the complete data set; however, the two diseases might show very different characteristics so that a joint fit might not be appropriate).We can inspect the design matrices:In the design matrix, the rows represent the patient array, and the columns are the variables we include in our linear model. The variables correspond to our blocking factors: there are two for non-inflamed and inflamed tissue, respectively, and one for each patient. “
i_UC2424” for example is the blocking variable of patient 2424; UC stands for the disease the patient is suffering from. For example, the first two rows of the design matrix
design_palmieri_CD correspond to the two arrays for individual “164”.The design matrix entries are 0 or 1 and thus tell us which variables are “active” for which sample:It can be seen as a switch that turns the single variables “on” (with a 1 at the corresponding position) or “off” (with a 0 at the corresponding position) for each row, i. e. each patient array. If we take a closer look at the single rows, we see that for each sample, there are two “ones” assigned: one to one of the variables “nI” or “I” corresponding to the tissue the array came from, and one to the corresponding patient-specific blocking variable.Note that in the linear model, individual 164 serves as the baseline for all other individuals and thus isn’t included in the sample variables.Note that instead of blocking on individuals, it would also be possible to use a “mixed model” approach with the
duplicateCorrelation() function from the
package. It has advantages over the “fixed patient effect model” presented here in terms of applicability to more complicated experimental designs, where we want to perform comparisons both within and between the patients (e.g. comparing between the two diseases; “split-plot-designs”).More information on it can be found in the
limma User’s Guide (section 17.3.6). However, the above explained is more intuitive and is therefore used here.Before heading on to find all differentially expressed genes for both diseases, we will first have a look at how this works in principle for one gene. We will fit the linear model for one gene and run a t-test in order to see whether the gene is differentially expressed or not.
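To make the structure of such a design matrix concrete, here is a minimal sketch in base R, using a hypothetical toy design with three patients and two tissues per patient (not the actual palmieri data):

```r
# Toy paired design: 3 patients, one non-inflamed (nI) and one inflamed (I)
# array each. Tissue levels are given explicitly so "nI" comes first.
tissue  <- factor(rep(c("nI", "I"), times = 3), levels = c("nI", "I"))
patient <- factor(rep(c("p1", "p2", "p3"), each = 2))

# ~ 0 + tissue drops the intercept, giving one column per tissue;
# patient p1 serves as the baseline and gets no column of its own.
design <- model.matrix(~ 0 + tissue + patient)
design
```

Each row contains a 1 for the tissue the array came from and, for patients other than the baseline p1, a 1 for the patient-specific blocking variable; all entries are 0 or 1, just as described above.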
Analysis of differential expression based on a single gene
For linear model fitting and subsequent testing for differential expression by t-test, we pick the gene with the PROBEID 8164535. It has the gene symbol CRAT and will be named as such in the following code.

Before fitting the linear model, we have a look at the expression intensities of this gene for each patient in non-inflamed and inflamed tissue, respectively. We see that overall, this gene is expressed less in inflamed tissue (Figure 10). We also see that the absolute expression intensities vary greatly between patients. However, we have already taken care of this problem by introducing blocking factors based on the individuals, which allows us to compare the tissues within each individual, as represented by the single lines.
Figure 10.
Visualization of expression changes.
If we had not blocked for individuals, the linear model would treat them interchangeably, and a graphical depiction would only include a single line. Since the individuals have very different baseline expression levels, this would lead to a very high variance of the estimated fold changes.

We now compute the variable coefficients by fitting a linear model. We get a vector crat_coef with one entry for each variable. In order to obtain the expression values fitted by the model, we multiply the design matrix with this vector of coefficients crat_coef. Recall that for every row in the design matrix (i.e. every patient sample), only the variables with a 1 in the design matrix are taken into account when calculating the fitted expression value. This means that as output of the multiplication, we get a vector crat_fitted whose entries are the sums of the relevant variable coefficients for each sample. For example, the fitted value for patient sample 2114_I.CEL is 6.9703: it is the sum of the respective activated variable coefficients “nI” (7.1917) and “i_CD2114” (-0.2215).

Let’s visualize the difference between non-inflamed and inflamed tissue again after fitting. Note that the difference of the fitted expression values between inflamed and non-inflamed samples of one patient is the same for all patients and is determined by the difference between the variable coefficients of I (6.7667) and nI (7.1917), which is -0.425 (Figure 11).
Figure 11.
Expression changes for the CRAT gene.
This is the case because the same blocking variable is activated by the design matrix for both samples from a single patient, leading to a comparison within patients only. These blocking variables correct the fitted tissue-specific expression values towards the expression levels of the individual patients. Therefore, the final estimate is like an average of all the within-individual differences.

The “difference” between non-inflamed and inflamed tissue of -0.425 is actually a log2 fold change, as our expression data is on the log2 scale. -0.425 therefore is our log2 fold change for the CRAT gene.

In order to test whether the gene is differentially expressed or not, a t-test with the null hypothesis that there is no difference in the expression between non-inflamed and inflamed tissue is carried out. Our blocking design is conceptually similar to a paired t-test, for which the statistic is given by:

t = d̄ / (s_d / √n)

where d̄ is the mean of the within-patient differences in expression values, s_d is their standard deviation, and n is the number of patients. The paired t-test computes the variance s_d² from the paired differences. This is lower than the variance of a standard t-test, and thus the paired t-test has higher power as long as the expression values for the same individual are correlated (see e.g. the article on Wikipedia). We thus have improved the power of the ordinary t-test by reducing the variance via blocking on individuals.

We now conduct the t-test on the linear model in order to find out whether the difference between non-inflamed and inflamed tissue differs significantly from 0. We get a low p-value close to 0 and thus can conclude that the CRAT gene is differentially expressed between non-inflamed and inflamed tissue. Note that the p-value isn’t exactly the same as the one obtained below when analyzing the differential expression of all genes. This is due to the variance moderation performed by limma.
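The equivalence between the blocked linear model and the paired t-test can be checked on simulated data (a sketch with made-up numbers, not the actual CRAT measurements):

```r
set.seed(1)
n        <- 15                                # number of patients
baseline <- rnorm(n, mean = 7, sd = 1)        # patient-specific baselines
nI <- baseline + rnorm(n, sd = 0.2)           # non-inflamed arrays
I  <- baseline - 0.4 + rnorm(n, sd = 0.2)     # inflamed arrays, true log2 FC of -0.4

expr    <- c(nI, I)
tissue  <- factor(rep(c("nI", "I"), each = n), levels = c("nI", "I"))
patient <- factor(rep(paste0("p", seq_len(n)), times = 2))

fit <- lm(expr ~ tissue + patient)            # blocking on patients

# The tissue coefficient equals the mean within-patient difference,
# and its t-statistic equals the paired t-test statistic.
coef(fit)["tissueI"]
t.test(I, nI, paired = TRUE)$statistic
```

This makes the conceptual link above explicit: in a balanced paired design, the blocked linear model reproduces the paired t-test exactly.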
Contrasts and hypotheses tests
We now fit the linear model for all genes and define appropriate contrasts to test hypotheses of interest. We want to compare the inflamed to the non-inflamed tissue. Thus, we create a contrast matrix consisting of only one contrast, “I-nI”: limma’s function makeContrasts creates this matrix from a symbolic description of the contrast of interest.

We now fit a linear model to our data and apply the contrasts.fit() function to it in order to find genes with significant differential expression between non-inflamed and inflamed tissue. We applied the empirical Bayes variance moderation method to the model via the eBayes() function, which computes moderated t-statistics. In microarray analysis, the number of arrays is often quite small, and thus variance estimation is difficult. Using a combination of the per-gene variance and a prior variance, we can improve the variance estimate, hence the term “moderation”. “Empirical Bayes” means that the prior is estimated from the data. The result of the eBayes() step is that the individual variances are shrunken towards the prior value.
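Conceptually, this shrinkage can be sketched as a weighted average of a prior variance and the per-gene variance (illustrative numbers only; limma estimates the prior variance s0² and its degrees of freedom d0 from the data):

```r
set.seed(2)
d_g  <- 13                              # residual degrees of freedom per gene
s2_g <- rchisq(1000, df = d_g) / d_g    # raw per-gene variance estimates
d_0  <- 4                               # hypothetical prior degrees of freedom
s2_0 <- 1.2                             # hypothetical prior variance

# Moderated (posterior) variance: a weighted mix of prior and per-gene value
s2_post <- (d_0 * s2_0 + d_g * s2_g) / (d_0 + d_g)
```

Every moderated variance lies between the raw per-gene estimate and the prior value, i.e. the individual variances are shrunken towards the prior, which stabilizes the denominator of the t-statistic when there are few arrays.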
Extracting results
Finally, we extract the number of differentially expressed genes. Results can be extracted by use of the
topTable function. We extract the results for both Crohn’s disease and ulcerative colitis, and the results are sorted by their absolute
t-statistics. As a diagnostic check, we also plot the p-value histogram (
Figure 12 and
Figure 13): We expect a uniform distribution for the p-values that correspond to true null hypotheses, while a peak near zero shows an enrichment for low p-values corresponding to differentially expressed (DE) genes.
Figure 12.
Histogram of the p–values for Crohn’s disease.
Figure 13.
Histogram of the p–values for ulcerative colitis.
Note that if the p-value distribution for a dataset is very different from the ones shown in Figure 12 and Figure 13, this might lead to quality loss in the subsequent analysis. Reasons for a divergent p-value distribution might be batch effects or a lack of consideration of other blocking factors in the design matrix. Thus, if the p-value distribution is not as expected, try to include possible blocking factors and batches and rerun the analysis. If this does not help, empirical Bayes / null estimation methods for multiple testing are useful. A good starting point to learn about these methods is the article on false discovery rate estimation by Korbinian Strimmer [15] and chapters 1–6 of Efron’s book on Large-Scale Inference [16], as well as the blog post “How to interpret a p-value histogram” by David Robinson [17].
Multiple testing, FDR, and comparison with results from the original paper
In the original paper, a p-value of 0.001 was used as a significance cutoff. Using this, we get 947 genes identified as differentially expressed for UC. However, it is impossible to determine a precise bound on the number of false positive genes in this list. All that we can say using p-values is that we expect at most 21033 (total number of tests) * 0.001 = 21.033 false positive genes in our list. Therefore, by choosing a p-value cutoff of 0.001, as much as 21.033/947 ≈ 2.22% of our genes identified as differentially expressed might be false positives.

Thus, we see that “raw” p-values are very “liberal” when looking at many tests simultaneously. We therefore need error rates adapted to the multiple testing situation. By far the most popular one in molecular biology is the false discovery rate, or FDR for short. It is the expected percentage of false positives among all positives. On the other hand, we can see a clear peak in the p-value histogram (Figure 13), caused by the differentially expressed genes, so we expect the actual FDR of our list to be lower. The FDR at a given cutoff is given by the “adjusted” p-value in the results table: the adjusted p-value for a raw p-value of 0.001 in the table is 0.0222, which matches the simple estimate computed above. So although this is not recommended in general, we also use a p-value cutoff of 0.001 in the following, in order to be able to compare our workflow results to the paper results.

The paper results can be downloaded as Excel files from http://links.lww.com/IBD/A795 and should be saved as an .xlsx file named palmieri_DE_res.xlsx in your working directory. Note that while the paper uses p-value cutoffs, it also reports the corresponding FDRs (just as we did for the UC data here). For a p-value cutoff of 0.001, the corresponding FDRs are 0.05 in Crohn’s disease and 0.02 in ulcerative colitis. There are four tables in total, giving the lists of up- and downregulated genes in CD and UC, respectively. We calculate the overlap between our results and the ones from the paper as the ratio of the genes that were found in both analyses to the genes that were found in the paper. We also calculate the total number of differentially expressed genes that we find in our workflow analysis.

We find 575 (CD) and 947 (UC) differentially expressed genes (“DE-genes”). In the paper, 298 (CD) and 520 (UC) DE-genes were found for the two diseases at the same cutoff. The higher number of DE-genes identified in our analysis is probably due to the increased power gained by blocking on the individuals and the moderated variance estimation that limma performs. We see that we get a moderate overlap of 0.6443 for CD and 0.6731 for UC, showing that both analyses lead to somewhat comparable results.
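The relation between a raw p-value cutoff and the FDR can be illustrated with base R’s p.adjust (a sketch on simulated p-values, not the actual result tables):

```r
set.seed(3)
m <- 21033                                    # total number of tests
p <- c(rbeta(1000, 1, 50), runif(m - 1000))   # 1000 simulated "DE" genes + nulls

k   <- sum(p <= 0.001)                        # genes passing the raw cutoff
fdr <- p.adjust(p, method = "BH")             # Benjamini-Hochberg adjusted p-values

m * 0.001 / k                                 # naive bound: expected FP / list size
max(fdr[p <= 0.001])                          # BH-FDR at the same cutoff
```

The BH-adjusted p-value at the cutoff is bounded by m * 0.001 / k, formalizing the back-of-the-envelope calculation above.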
Visualization of DE analysis results - volcano plot
For a visualization of the differentially expressed genes, we create a volcano plot, which is commonly used to summarize the results of a differential expression analysis in a single figure. For a better overview, we only show the gene symbols of genes with a log2 fold change greater than 1 in absolute value, which we define in the volcano_names object. The highlight option of the volcanoplot function is set to 100 and thus only labels the 100 genes with the lowest p-values.

We can now do a little research on the biological function of genes that show a high fold change, for example the gene with the symbol S100A8 on the right-hand side of the plot (Figure 14). If we search for this gene symbol on genecards.org, we find that it encodes a protein that forms a pro-inflammatory complex in association with another protein.
Figure 14.
Volcano plot of the DE-genes.
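A generic volcano plot of this kind can be sketched in base R (simulated effect sizes and p-values; the workflow itself uses limma’s volcanoplot function):

```r
set.seed(4)
log_fc <- rnorm(500, sd = 0.8)                # simulated log2 fold changes
p_val  <- 10^-(abs(log_fc) * 2 + rexp(500))   # smaller p-values for larger effects

plot(log_fc, -log10(p_val),
     pch = 20, xlab = "log2 fold change", ylab = "-log10(p-value)")

# Label only genes with |log2 FC| > 1, mimicking the volcano_names selection
lab <- abs(log_fc) > 1
text(log_fc[lab], -log10(p_val)[lab],
     labels = paste0("gene_", which(lab)), pos = 3, cex = 0.6)
```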
Gene ontology (GO) based enrichment analysis
As discussed above, it is recommended to use an FDR cutoff in differential expression analysis rather than a p-value cutoff, since this way you control an explicitly defined error rate, and the results are easier to interpret and to compare. For the following enrichment analysis, we create tables with differentially expressed genes for CD and UC, respectively, and choose an FDR cutoff of 10%. Here, we focus on the CD subset of the data.

We can now try to characterize the identified differentially expressed genes in more detail by performing a GO enrichment analysis. Essentially, the gene ontology (http://www.geneontology.org/) is a hierarchically organized collection of functional gene sets [18–20].
Matching the background set of genes
The function genefinder from the genefilter package [21] will be used to find a background set of genes that are similar in expression to the differentially expressed genes. We then check whether the background has roughly the same distribution of average expression strength as the foreground. We do this in order not to select a biased background, since the gene set testing is performed by a simple Fisher test on a 2x2 table. Note that this approach is very similar to commonly used web tools like GOrilla [22].

For every differentially expressed gene, we try to find genes with similar expression with genefinder. The genefinder function returns a list with two elements for each gene: one with the indices of the background genes found and one with the distances to the DE-genes. We have to extract the PROBEIDs, which correspond to the indices. We do that by using the sapply function, which gives us a single matrix with the DE-genes as column names and the PROBEIDs of the corresponding background genes in the cells below. We then create a vector back_genes containing all background gene PROBEIDs.

In order to eliminate foreground genes, i.e. DE-genes, from the back_genes set, we use the setdiff function. It returns all elements of the first argument (back_genes) that are not part of the second argument (DE_genes_CD). With the intersect function, we verify that we were successful: it should return an empty result, as there shouldn’t be any overlap left between back_genes and DE_genes_CD.

We create a multidensity plot with mean expression on the x-axis and curves for all genes, foreground genes and background genes, respectively (Figure 15). We want to see whether the background genes show a distribution similar to the foreground genes, so that the background is not biased for the gene enrichment analysis:
Figure 15.
Selecting a background set of genes for the gene ontology analysis.
When comparing the “background gene” curve to the “foreground gene” curve, we see a similar shape, indicating a sensible background matching (Figure 15). Note that the right-shift of the “foreground gene” curve relative to the “background gene” curve indicates that DE-genes are generally very highly expressed, so that it wasn’t possible to find background genes with an exactly equal overall expression distribution. The “all gene” curve has the leftmost maximum; this can be explained by a high number of lowly expressed genes in all samples and shows that background matching is sensible in order to avoid biases.

For the actual testing of which GO gene sets are enriched in inflamed tissue, we use the topGO package, which implements a nice interface to Fisher testing and also has additional algorithms taking the GO structure into account, e.g. by only reporting the most specific gene set in the hierarchy [23]. The GO has three top ontologies: cellular component (CC), biological process (BP), and molecular function (MF). For illustrative purposes, we limit ourselves to the BP category here.
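The two building blocks used here — the background-set bookkeeping and the 2x2 Fisher test — can be sketched in base R (hypothetical gene IDs and counts, not the palmieri results):

```r
# Background bookkeeping: remove foreground (DE) genes from the candidates
DE_genes   <- c("g1", "g2", "g3")
back_genes <- c("g2", "g4", "g5", "g3", "g6", "g7")
back_genes <- setdiff(back_genes, DE_genes)   # drop any DE genes
length(intersect(back_genes, DE_genes))       # no overlap left

# 2x2 Fisher test for one GO category:
#              in category   not in category
#   DE genes        30             70
#   background      10            190
tab <- matrix(c(30, 10, 70, 190), nrow = 2)
fisher.test(tab, alternative = "greater")$p.value
```

A small one-sided p-value here indicates that the DE-genes are over-represented in the category relative to the matched background.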
Running topGO
topGO requires a topGOdata object containing the necessary information for the analysis. We follow the steps described in the topGO vignette: first, we create a named vector all_genes with all genes to be analyzed, i.e. DE-genes and background genes. The following steps were carried out:

We created an in_universe vector by using the %in% matching operator. We want to know which elements of gene_IDs are also contained in DE_genes_CD or back_genes, as the latter two form the gene universe we use for enrichment analysis. We obtained a vector in_universe of the same length as gene_IDs that has the entry TRUE when the corresponding gene in gene_IDs is found in DE_genes_CD or back_genes, and FALSE otherwise. We did the same for our DE-genes and call this vector in_selection.

We then created the all_genes vector: first, we selected all the elements of in_selection that are TRUE in in_universe by applying all_genes <- in_selection[in_universe]. Then, we converted the elements in all_genes from TRUE and FALSE to 1 and 0 by converting the vector to an integer vector. This way, each element of the vector is a 0 if the corresponding gene is a background gene and a 1 if it is a DE-gene. We also converted the vector to a factor and named the vector elements with the corresponding gene IDs.

We now initialize the topGOdata object, using the GO annotations contained in the annotation data base for the chip we are using. The nodeSize parameter specifies the minimum size of a GO category we want to use: here, categories with less than 10 genes are not included in the testing. Now the tests can be run.
topGO offers a wide range of options; for details, see the paper [23] or the package vignette.

We run two common tests: an ordinary Fisher test for every GO category, and the “elim” algorithm, which tries to incorporate the hierarchical structure of the GO and to “decorrelate” it in order to report the most specific significant term in the hierarchy. The algorithm starts processing the nodes / GO categories at the most specific (bottommost) level and then iteratively moves to less specific levels. If a node is scored as significant, all of its genes are marked as removed in all ancestor nodes. This way, the “elim” algorithm aims at finding the most specific node for every gene. The test uses a 0.01 p-value cutoff by default.

We can now inspect the results. We look at the top 100 GO categories according to the “Fisher elim” algorithm. The function GenTable produces a table of significant GO categories, and the function printGenes gives the genes annotated to them; the significant ones are denoted with a “2” in the “raw p-value” column, the non-significant ones with a “1”. We therefore select raw p-value == 2. Note that we do not get the actual p-values here because our all_genes vector doesn’t contain this information; it only tells us whether a gene is differentially expressed or not.
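The construction of the all_genes vector described above can be sketched with made-up identifiers (the real vector is built from the chip’s PROBEIDs):

```r
gene_IDs    <- paste0("probe_", 1:8)                 # all genes on a toy chip
DE_genes_CD <- c("probe_2", "probe_5")               # foreground
back_genes  <- c("probe_1", "probe_4", "probe_7")    # matched background

in_universe  <- gene_IDs %in% c(DE_genes_CD, back_genes)
in_selection <- gene_IDs %in% DE_genes_CD

all_genes <- in_selection[in_universe]     # restrict to the gene universe
all_genes <- factor(as.integer(all_genes)) # TRUE/FALSE -> 1/0, then factor
names(all_genes) <- gene_IDs[in_universe]  # name entries by gene ID
all_genes
```

The resulting factor has one entry per gene in the universe: “1” for DE-genes, “0” for background genes, which is exactly the input topGO expects.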
Visualization of the GO-analysis results
A graph of the results can also be produced. Here we visualize the three most significant nodes according to the Fisher elim algorithm in the context of the GO hierarchy. We can see that indeed GO categories related to inflammation, signalling and immune response come up as significant (Figure 16).

Gene set enrichment analysis has been a field of very extensive research in bioinformatics. For additional approaches, see the topGO vignette and the references therein, and also those in the GeneSetEnrichment view.
Figure 16.
Significantly enriched GO nodes in the GO hierarchy.
A pathway enrichment analysis using reactome
The ReactomePA package offers the possibility to test enrichment of specific pathways using the free, open-source, curated and peer-reviewed Reactome pathway database [24, 25]. The package requires Entrez identifiers, so we convert our PROBEIDs (transcript cluster identifiers) to Entrez identifiers using the function mapIds from the AnnotationDbi package. This creates a named vector that maps the PROBEIDs to the Entrez IDs, with the PROBEIDs as names and the Entrez IDs as vector elements.

We can now run the enrichment analysis, which performs a statistical test based on the hypergeometric distribution; it is the same as a one-sided Fisher test, which topGO calls “Fisher-classic”. Details can be found in the vignette of the ReactomePA package [26]. Note that we trimmed pathway names to 20 characters.
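The equivalence between the hypergeometric test and the one-sided Fisher test mentioned above can be verified directly in base R (arbitrary illustrative counts):

```r
N <- 1000   # genes in the universe
K <- 50     # genes in the pathway
n <- 100    # DE genes drawn
x <- 12     # overlap: DE genes that are in the pathway

# P(X >= x) when drawing n genes from an urn with K pathway genes
p_hyper <- phyper(x - 1, K, N - K, n, lower.tail = FALSE)

# The same probability as a one-sided Fisher test on the 2x2 table
tab <- matrix(c(x, K - x, n - x, N - K - n + x), nrow = 2)
p_fisher <- fisher.test(tab, alternative = "greater")$p.value

c(p_hyper, p_fisher)   # identical up to numerical precision
```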
Visualizing the reactome based analysis results
The top pathways can be displayed as a bar chart that displays all categories with a p-value below the specified cutoff (Figure 17).
Figure 17.
Enriched Reactome pathways and their p–values as a bar chart.
The “enrichment map” from the enrichplot package displays the results of the enrichment analysis as a graph, where the color represents the p-value of the pathway and the edge thickness (that is, the line connecting two pathways) is proportional to the number of overlapping genes between two pathways. Again, the graph in Figure 18 shows pathways related to signalling and immune response.
Figure 18.
Enriched Reactome pathways enrichment results as a graph.
The clusterProfiler package [27] can also perform these analyses using downloaded KEGG data. Furthermore, the EnrichmentBrowser package [28] additionally offers network-based enrichment analysis of individual pathways. This allows the mapping of the expression data at hand to known regulatory interactions.
Session information
As the last part of this document, we call the function sessionInfo, which reports the version numbers of R and all the packages used in this session. It is good practice to always keep such a record, as it will help to track down what has happened in case an R script ceases to work or gives different results because the functions have been changed in a newer version of one of your packages. By including it at the bottom of a script, your reports will become more reproducible. The session information should also always be included in any posts to the Bioconductor support site, along with all code used in the analysis.
Data and software availability
This article is based on an
R markdown file (MA-Workflow.Rmd) which is available as Dataset 1 (Dataset 1. R markdown document to reproduce the results obtained in the article,
10.5256/f1000research.8967.d208076)
[29] and is currently available via the development version (3.8) of
Bioconductor. This will become the release version in October 2018. The .Rmd file allows the reader to reproduce the analysis results obtained in this article. All data analyzed are downloaded from ArrayExpress.The authors have done a nice job of responding to the reviewers' comments, and this second version of the manuscript is greatly improved.I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.The second version of this article is much improved from the first. The explanations are simplified, and much of what I considered to be extraneous has been removed. I only have a few minor points that could be addressed.It's not necessary (ever) to use install.packages() or the remotes package directly, for any Bioconductor package or really any package. Installing the maEndToEnd package can be accomplished by simply usingbiocLite("b-klaus/maEndToEnd")There is some complexity here, as you may need to install some of the dependencies for the maEndToEnd package, but that is a consequence of getting the package from GitHub rather than a failing of biocLite(). 
End users will have to install the dependencies regardless, so something likedeps <- strsplit(packageDescription("maEndToEnd")$Depends,",\\n")[[1]][-1]deps <- deps[!deps %in% row.names(installed.packages())]biocLite(deps)would do the trick.Making the RLE plot could be simplified by using oligo::basicPLM rather than oligo::rmaRLE(basicPLM(raw_data, normalize = FALSE))While it is true that RMA summarization for Exon ST arrays can be carried out at the 'full' or 'extended' level, that is not true for Gene ST arrays, which can only be summarized at the 'core' and 'probeset' level, so I would remove that from the section titled 'RMA calibration of the data' .Under the linear models section, the resource you call 'genomics class' is actually the Biomedical Data Science course and is authored by Mike Love and Rafael Irizarry (not Michael Irizzary).I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.Klaus illustrates an analysis workflow of Affymetrix microarrays for an experimental design with paired samples from two tissues in two separate diseases. The workflow covers steps including downloading and loading raw CEL files into R/Bioconductor, pre-processing and normalization, differential analysis via limma, and functional/pathway enrichment analysis, with detailed R scripts throughout. The workflow is a nice complement to resources that are already available on similar topics (notably the Limma user's guide) as it unites into a single document topics that have been discussed elsewhere in diverse forums.Major remarks:Although the normalization/linear model/contrast sections are technically correct, I find them to be overly disruptive to the presentation of the analysis. In particular, I would suggest moving the normalization section to an appendix. 
For the linear models section, presentation of ordinary least squares estimators, matrix notation, distributional assumptions of residuals, etc. seems outside of the scope of this work (or could potentially be presented in an appendix as well). I think that it would be more helpful if, rather than using a separate illustrative example based on the
toycars data, the author described in detail: 1) the design matrix for the Palmieri example; 2) a linear model for a single gene from the Palmieri data; 3) a linear model fit independently for each gene, sharing information across all genes (limma), with a brief discussion about the advantage of such an approach; 4) a simplified discussion of various contrasts relevant for this particular study.I was surprised to see that differential analysis conclusions are based on the raw p-values rather than those adjusted for multiple testing.I have two additional suggestions that could be of great practical interest to many readers:I think it would be nice to have a discussion (or at least some references) discussing what can be done if hypotheses are NOT met (e.g., the raw p-values are not uniformly distributed between 0 and 1 for genes under H0). I have in mind the author's recent discussion about using fdrtool to estimate the variance of the null model in the context of differential analyses via DESeq2 (http://www-huber.embl.de/users/klaus/Teaching/DESeq2Predoc2014.html).In addition to the fixed patient effect model presented here, I think it would be interesting to present an alternative strategy possible with
limma for such a design: estimating correlation within patient blocks using duplicateCorrelation(), see for example section 17.3.6 in the
limma User's Guide. Note that I am not suggesting a detailed comparison or presentation of mixed models (that is of course well beyond the scope of this work!), but it may be useful to discuss the code/results for such an approach.Minor remarks:I would suggest specifying in the abstract that the study "...compares paired inflamed and non-inflamed colon tissue...", as this was not immediately clear otherwise. In addition, on page 2/paragraph 1, the author writes that the original paper studied "differences between patients suffering UC or CD", which makes it sound like the disease comparison was of interest, when in fact it was the intra-patient tissue differences that were studied. It may in fact be helpful to present the experimental design in greater detail at the start of the paper; it was not clear to me until page 23 that the design actually involved paired samples from individuals.Page 12, the author says "In order to infer a cutoff from the data, we inspect the histogram of the median–intensities. We visually fit a central normal distribution given by 0.5 · N(5.1, 1.18) to the probe–wise medians". From the R code, it appears that the distribution fit was N(emp_mu = 5.3, emp_sd = 1.19), where emp_mu and emp_sd were estimated from the median expression values -- this is unclear from the expression "visually fit" in the text description. Also unclear to me is why prop_cental was set to be 0.5?The phrase "from the XXXX" is often used when referencing R packages, rather than "from XXXX" or "from the XXX package".The comparison to the original paper results (page 26) seems unnecessary.I found it a bit distracting that none of the figures are explicitly referred to by number in the text (e.g., "In Figure 1, ...").There are occasionally very verbose outputs included in the text (e.g. head(pData(raw_data))) that produce multiple pages of output that are not particularly useful.There are a few English typos throughout, e.g. 
"inflammed" should be "inflamed", "constrains" should be "constraints", "transcriptct clusters" should be "transcript clusters", etc.Since a very large number of R packages are loaded for the analysis on page 2, it would likely be helpful to either reduce the number of packages needed to be the strict minimum necessary, or to group packages by theme/functionality via comments, for example:General Bioconductor packageslibrary(Biobase)library(oligoClasses)Annotation and data packageslibrary(pd.hugene.1.0.st.v1)Quality control and pre-processing packageslibrary(oligo)Analysis and statistics packageslibrary(limma)library(mvtnorm)Plotting and color options packageslibrary(gplots)library(ggplot2)Formatting/documentation packageslibrary(knitr)library(BiocStyle)library(dplyr)I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.Klaus illustrates an analysis workflow of Affymetrix microarrays for an experimental design with paired samples from two tissues in two separate diseases. The workflow covers steps including downloading and loading raw CEL files into R/Bioconductor, pre-processing and normalization, differential analysis via limma, and functional/pathway enrichment analysis, with detailed R scripts throughout. The workflow is a nice complement to resources that are already available on similar topics (notably the Limma user's guide) as it unites into a single document topics that have been discussed elsewhere in diverse forums.#Major remarks:# remark 1Although the normalization/linear model/contrast sections are technically correct, I find them to be overly disruptive to the presentation of the analysis. In particular, I would suggest moving the normalization section to an appendix. For the linear models section, presentation of ordinary least squares estimators, matrix notation, distributional assumptions of residuals, etc. 
seems outside of the scope of this work (or could potentially be presented in an appendix as well). I think that it would be more helpful if, rather than using a separate illustrative example based on the toycars data, the author described in detail:

1) the design matrix for the Palmieri example;
2) a linear model for a single gene from the Palmieri data;
3) a linear model fit independently for each gene, sharing information across all genes (limma), with a brief discussion about the advantage of such an approach;
4) a simplified discussion of various contrasts relevant for this particular study.

Answer: The linear model sections not directly related to the dataset at hand were removed from the workflow. In general, we have tried to tailor the revised workflow to beginners in R and Bioconductor. Therefore, rather than discussing elaborate tools/techniques in detail, we try to point to the relevant literature where appropriate. This is helpful for readers who need to deviate from our workflow for their own data, and it maintains a straightforward workflow design.

Concerning the single points mentioned in the comment above:

1) The design matrix for the Palmieri example is now explained in detail.
2) In the subsequent part of the workflow, the reviewer's advice to implement a linear model for a single gene from the Palmieri data was adopted by fitting a linear model to the "CRAT" gene and testing it for differential expression.
3) The advantage of sharing information across all genes is discussed briefly when introducing the "eBayes" method in the "Contrasts and hypotheses tests" part.
4) In this workflow, we restrict the contrast analysis to the contrast between non-inflamed and inflamed tissue, as it is the one that is also analyzed in the original paper, and do not include additional contrasts in order to keep the workflow concise.

## remark 2

I was surprised to see that differential analysis conclusions are based on the raw p-values rather than those adjusted for multiple testing.

Answer: This is indeed a possible cause of confusion. In the workflow, parts of the analysis are based on raw p-values in order to make the results comparable to the original paper. However, we now mention the caveats of this approach, introduce FDRs explicitly in a "hands-on" manner, and caution against the use of raw p-values in practice. For the subsequent enrichment analyses, DE genes are identified using an FDR cutoff of 10%.

I have two additional suggestions that could be of great practical interest to many readers:

## remark 3

I think it would be nice to have a discussion (or at least some references) discussing what can be done if hypotheses are NOT met (e.g., the raw p-values are not uniformly distributed between 0 and 1 for genes under H0). I have in mind the author's recent discussion about using fdrtool to estimate the variance of the null model in the context of differential analyses via DESeq2 (http://www-huber.embl.de/users/klaus/Teaching/DESeq2Predoc2014.html).

Answer: As mentioned above, and in concordance with the requests of Jim, we tried to avoid introducing any additional (advanced) methods in order not to confuse the novice R/Bioconductor user. Nevertheless, for workflow users encountering the problem of unusual p-value distributions, we have included references to this phenomenon and its implications by referring to the article on false discovery rate estimation by Korbinian Strimmer and chapters 1-6 of Efron's book on Large-Scale Inference.

## remark 4

In addition to the fixed patient effect model presented here, I think it would be interesting to present an alternative strategy possible with limma for such a design: estimating correlation within patient blocks using duplicateCorrelation(), see for example section 17.3.6 in the limma User's Guide.
Note that I am not suggesting a detailed comparison or presentation of mixed models (that is of course well beyond the scope of this work!), but it may be useful to discuss the code/results for such an approach.

Answer: We acknowledge the advantages, in certain applications, of a mixed model using duplicateCorrelation() compared to the fixed patient effect model used in this workflow. Given that the workflow is tailored towards inexperienced users, we consider the fixed patient effect model more intuitively understandable. However, we now note explicitly that such an analysis could be performed.

# Minor remarks

## remark 5

I would suggest specifying in the abstract that the study "...compares paired inflamed and non-inflamed colon tissue...", as this was not immediately clear otherwise. In addition, on page 2/paragraph 1, the author writes that the original paper studied "differences between patients suffering UC or CD", which makes it sound like the disease comparison was of interest, when in fact it was the intra-patient tissue differences that were studied. It may in fact be helpful to present the experimental design in greater detail at the start of the paper; it was not clear to me until page 23 that the design actually involved paired samples from individuals.

Answer: The experimental design is now described in greater detail in the abstract and introduction. Also, the sentence stating that the original paper studied "differences between patients suffering from UC or CD" was removed.

## remark 6

Page 12, the author says "In order to infer a cutoff from the data, we inspect the histogram of the median-intensities. We visually fit a central normal distribution given by 0.5 · N(5.1, 1.18) to the probe-wise medians". From the R code, it appears that the distribution fit was N(emp_mu = 5.3, emp_sd = 1.19), where emp_mu and emp_sd were estimated from the median expression values -- this is unclear from the expression "visually fit" in the text description. Also unclear to me is why prop_cental was set to 0.5?

Answer: In an attempt to make low-intensity filtering more intuitive, we removed the fitting of a normal distribution to the histogram of the probe-wise medians. Instead, we now visually set a vertical cutoff line in the histogram and filter out genes whose intensity falls below the cutoff in at least as many samples as the smallest experimental group contains.

## additional minor remarks

The phrase "from the XXXX" is often used when referencing R packages, rather than "from XXXX" or "from the XXX package".

Answer: Thank you, the respective sentences were corrected accordingly.

The comparison to the original paper results (page 26) seems unnecessary.

Answer: The comparison to the original paper serves as a "proof of principle" for the implemented workflow (to show that what we are doing makes sense). Additionally, we wanted to show that it is relatively straightforward to re-analyze publicly available microarray data using R and Bioconductor.

I found it a bit distracting that none of the figures are explicitly referred to by number in the text (e.g., "In Figure 1, ...").

Answer: Thank you, the respective parts were changed accordingly.

There are occasionally very verbose outputs included in the text (e.g. head(pData(raw_data))) that produce multiple pages of output that are not particularly useful.

Answer: The output was shortened where appropriate. The pData call in particular was retained, as we feel that it is interesting for readers to see what kind of information comes with a data set from ArrayExpress.

There are a few English typos throughout, e.g. "inflammed" should be "inflamed", "constrains" should be "constraints", "transcriptct clusters" should be "transcript clusters", etc.

Answer: Thanks, we tried to perform thorough spell-checking in the revised version.

Since a very large number of R packages are loaded for the analysis on page 2, it would likely be helpful to either reduce the number of packages to the strict minimum necessary, or to group packages by theme/functionality via comments, for example:

General Bioconductor packages
library(Biobase)
library(oligoClasses)

Annotation and data packages
library(pd.hugene.1.0.st.v1)

Quality control and pre-processing packages
library(oligo)

Analysis and statistics packages
library(limma)
library(mvtnorm)

Plotting and color options packages
library(gplots)
library(ggplot2)

Formatting/documentation packages
library(knitr)
library(BiocStyle)
library(dplyr)

Answer: Thank you for this very useful suggestion; we implemented it in the revised version. The package-import statements at the beginning of the workflow are now grouped by topic.

This manuscript is intended to take the reader through a complete analysis of Affymetrix Gene ST arrays, based on a set of arrays downloaded from ArrayExpress. The author covers each step from quality control of the raw data all the way to making comparisons using linear models and testing and visualizing pathways or gene sets.

While this manuscript is technically correct (e.g., the code does what the author claims, the explanatory text is valid), I am not sure it is as useful as it could be for its intended audience. In other words, this manuscript is intended to provide an inexperienced reader (inexperienced in either R/Bioconductor or statistics or both) with a road map they can follow to learn how to analyze microarray data.
However, both the code and the statistical explanations are far too complex to be useful for such a person.

As an example, the author explains in mathematical terms what the background correction and summarization steps are intended to accomplish. While this is an important step in the analysis, this could instead be explained in heuristic terms that would be far more approachable for a less statistically savvy audience, while still conveying the general idea.

The sections on linear modeling and design matrices are similarly impenetrable for non-statisticians. The limma User's Guide has dozens of examples of model matrices, with clear interpretations of the model coefficients. Yet as the author notes, this is probably the number one question on the Bioconductor support site. Rather than re-explaining something that most people clearly don't understand, it would be much more helpful to focus on a single model matrix, and provide a clear, heuristic explanation. The easiest to understand is the most basic orthogonal model that computes the mean of each group, followed by a contrast matrix to make comparisons of interest.

The overview on linear models is well beyond the scope of this manuscript, and should be excised. The same is true for the section on testing general linear hypotheses. Ideally, an analyst using these tools would understand what they are doing from a statistical perspective, but it's difficult enough for a novice user to comprehend what the code is doing without trying to also understand the statistics.

Similarly, the code is more complex than necessary. If the goal is to teach novice users about Bioconductor packages, then the code should be restricted as much as possible to those packages and base R. While ggplot2 style graphics, magrittr style function piping and dplyr two-table verbs may be useful for more advanced R users, in this context they are an added distraction.

An example of overly-complex code is the filtering step. There are any number of ways to filter out genes that are arguably not expressed. The example is a very sophisticated way to perform this task, but a novice who is just learning doesn't require sophistication, they require something they can understand. Choosing a cutoff based on the distribution of probesets across each array, and then filtering all genes where fewer than 14 samples exceed this cutoff, is not very sophisticated, but it is easy to understand, and would only require a few lines of code.

The learning curve for R and Bioconductor is steep, and making this manuscript both simpler in terms of the code, and more focused by excluding most of the statistics and any example analyses that do not involve the microarray data, would make it more approachable for inexperienced users.

In the download step, getAE() is used to download the data, but then the SDRF file is downloaded directly. This is confusing, as getAE() has already downloaded that file. Is there a particular reason for the extra step?

To test for model matrices that are not full rank, it's easier to use either nonEstimable() or is.fullrank() from the limma package.

As noted above, this manuscript is technically accurate, and the code does exactly what the author claims. But it is too ambitious for the intended audience. Most people who could benefit from this manuscript are not statistically savvy, and the statistical sections will simply confuse them. In addition, it is not likely that many potential readers will be familiar with packages from the 'Hadleyverse', and incorporating those packages in the manuscript makes the code more difficult to understand. Paring the manuscript down to very simple heuristic statistical explanations, and limiting the code to functions from base R and the various Bioconductor packages being illustrated, would make the manuscript more useful for its intended audience.

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.

# Major comments
## comment 1

While this manuscript is technically correct (e.g., the code does what the author claims, the explanatory text is valid), I am not sure it is as useful as it could be for its intended audience. In other words, this manuscript is intended to provide an inexperienced reader (inexperienced in either R/Bioconductor or statistics or both) with a road map they can follow to learn how to analyze microarray data. However, both the code and the statistical explanations are far too complex to be useful for such a person.

As an example, the author explains in mathematical terms what the background correction and summarization steps are intended to accomplish. While this is an important step in the analysis, this could instead be explained in heuristic terms that would be far more approachable for a less statistically savvy audience, while still conveying the general idea.

Answer: As suggested, the general style of the workflow was changed substantially in order to guide inexperienced readers through the analysis of microarray data. Code is explained in greater detail, and all technical parts not directly related to the data at hand have either been removed or improved. However, we have decided to keep the mathematical explanation of background correction and summarization, as we feel that the (rather simple) formulas are easier to understand than text.
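For reference, the kind of formula at issue here is the standard convolution model behind RMA background correction, in which an observed probe intensity is treated as the sum of an exponentially distributed true signal and a normally distributed background (the notation below is the usual one from the RMA literature, not necessarily the exact notation used in the workflow):

```latex
O = S + B, \qquad S \sim \operatorname{Exp}(\alpha), \qquad
B \sim \mathcal{N}(\mu, \sigma^2),
```

with the background-corrected intensity taken to be the conditional expectation $\mathrm{E}[S \mid O = o]$.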
## comment 2

The section on linear modeling and design matrices is similarly impenetrable for non-statisticians. The limma User's Guide has dozens of examples of model matrices, with clear interpretations of the model coefficients. Yet as the author notes, this is probably the number one question on the Bioconductor support site. Rather than re-explaining something that most people clearly don't understand, it would be much more helpful to focus on a single model matrix, and provide a clear, heuristic explanation. The easiest to understand is the most basic orthogonal model that computes the mean of each group, followed by a contrast matrix to make comparisons of interest.

The overview on linear models is well beyond the scope of this manuscript, and should be excised. The same is true for the section on testing general linear hypotheses. Ideally, an analyst using these tools would understand what they are doing from a statistical perspective, but it's difficult enough for a novice user to comprehend what the code is doing without trying to also understand the statistics.

Answer: We gladly adopted this point of criticism and eliminated the rather technical part in which the theory behind design matrices and linear modelling is explained. Instead, we focused on explaining the design matrix we have used for the dataset at hand in easy-to-follow terms, and present its use in the context of a single linear model for a single gene.
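To make the single-gene idea concrete, a paired design with a patient blocking factor can be sketched as follows; this is a minimal illustration on simulated toy data, and the factor levels and object names are placeholders, not the workflow's actual code:

```r
# Minimal sketch of a paired design and a single-gene fit (simulated toy
# data; the workflow's actual object and level names may differ).

# 4 patients, one non-inflamed (nI) and one inflamed (I) sample each
patient <- factor(rep(paste0("P", 1:4), each = 2))
tissue  <- factor(rep(c("nI", "I"), times = 4), levels = c("nI", "I"))

# blocking on patient absorbs between-patient variability; the "tissueI"
# coefficient then estimates the inflamed vs non-inflamed difference
design <- model.matrix(~ patient + tissue)

# fit for a single (simulated) gene
expr <- rnorm(8, mean = 7)
fit  <- lm(expr ~ patient + tissue)
coef(fit)["tissueI"]   # estimated tissue effect for this one gene
```

limma then fits this same design to every gene at once and borrows strength across genes via eBayes(), which is the step where the paired structure pays off.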
## comment 3

Similarly, the code is more complex than necessary. If the goal is to teach novice users about Bioconductor packages, then the code should be restricted as much as possible to those packages and base R. While ggplot2 style graphics, magrittr style function piping and dplyr two-table verbs may be useful for more advanced R users, in this context they are an added distraction.

An example of overly-complex code is the filtering step. There are any number of ways to filter out genes that are arguably not expressed. The example is a very sophisticated way to perform this task, but a novice who is just learning doesn't require sophistication, they require something they can understand. Choosing a cutoff based on the distribution of probesets across each array, and then filtering all genes where fewer than 14 samples exceed this cutoff, is not very sophisticated, but it is easy to understand, and would only require a few lines of code.

Answer: While not all of the code from packages like ggplot2, magrittr and dplyr was removed, an effort was made to explain the code in more detail in order to give the inexperienced user the possibility to understand what it does. In particular, we think that ggplot2 is by now very commonly used and is probably becoming a de facto standard plotting package. In the low-intensity filtering step, the fitting of a normal distribution to the histogram was eliminated for the sake of clarity. Instead, we now visually set a vertical cutoff line in the histogram and filter out genes whose intensity falls below the cutoff in at least as many samples as the smallest experimental group contains.
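The simplified filter described in this answer can be sketched in a few lines; here `eset`, the cutoff of 4 and the group size of 14 are illustrative assumptions, not the workflow's actual values:

```r
# Sketch of the simplified low-intensity filter on a normalized
# ExpressionSet called eset (cutoff and group size are placeholders)
library(Biobase)

medians <- Biobase::rowMedians(exprs(eset))  # probe-wise median log2 intensity
hist(medians, breaks = 100)
abline(v = 4, col = "coral")                 # visually chosen cutoff

# keep a probe if it exceeds the cutoff in at least as many samples
# as the smallest experimental group contains
no_of_samples <- 14
keep <- apply(exprs(eset), 1, function(x) sum(x > 4) >= no_of_samples)
eset_filtered <- eset[keep, ]
```

This is exactly the "easy to understand, few lines of code" approach the reviewer asks for: one visual cutoff, one counting rule, no distribution fitting.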
## comment 4

The learning curve for R and Bioconductor is steep, and making this manuscript both simpler in terms of the code, and more focused by excluding most of the statistics and any example analyses that do not involve the microarray data, would make it more approachable for inexperienced users.

Answer: Thank you for this useful comment. We have made a substantial effort to tailor the revised version to beginners in R/Bioconductor and/or statistics. The revised article contains a more focused workflow with more in-depth explanations of the code.
# Minor comments

In the download step, getAE() is used to download the data, but then the SDRF file is downloaded directly. This is confusing, as getAE() has already downloaded that file. Is there a particular reason for the extra step?

Answer: There is no particular reason for this step; the code has been changed in order to import the SDRF file from the already downloaded data.

To test for model matrices that are not full rank, it's easier to use either nonEstimable() or is.fullrank() from the limma package.

Answer: This section was removed, as it is quite technical and is really more of a "sanity check" than a necessity.
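For readers who nevertheless want such a check in their own scripts, the two limma helpers the reviewer mentions can be used as follows (a small self-contained example with made-up groups):

```r
# Checking a design matrix for estimability with limma's helpers
library(limma)

group  <- factor(c("A", "A", "B", "B"))
design <- model.matrix(~ 0 + group)

is.fullrank(design)    # TRUE when all coefficients are estimable
nonEstimable(design)   # NULL for a full-rank design, otherwise the
                       # names of the non-estimable coefficients
```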