Cong Fang1, Hangfeng He1, Qi Long2, Weijie J. Su3. 1. Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA 19104. 2. Department of Biostatistics, Epidemiology, and Informatics, University of Pennsylvania, Philadelphia, PA 19104. 3. Department of Statistics and Data Science, University of Pennsylvania, Philadelphia, PA 19104. Email: suw@wharton.upenn.edu.
Abstract
In this paper, we introduce the Layer-Peeled Model, a nonconvex, yet analytically tractable, optimization program, in a quest to better understand deep neural networks that are trained for a sufficiently long time. As the name suggests, this model is derived by isolating the topmost layer from the remainder of the neural network, followed by imposing certain constraints separately on the two parts of the network. We demonstrate that the Layer-Peeled Model, albeit simple, inherits many characteristics of well-trained neural networks, thereby offering an effective tool for explaining and predicting common empirical patterns of deep-learning training. First, when working on class-balanced datasets, we prove that any solution to this model forms a simplex equiangular tight frame, which, in part, explains the recently discovered phenomenon of neural collapse [V. Papyan, X. Y. Han, D. L. Donoho, Proc. Natl. Acad. Sci. U.S.A. 117, 24652-24663 (2020)]. More importantly, when moving to the imbalanced case, our analysis of the Layer-Peeled Model reveals a hitherto-unknown phenomenon that we term Minority Collapse, which fundamentally limits the performance of deep-learning models on the minority classes. In addition, we use the Layer-Peeled Model to gain insights into how to mitigate Minority Collapse. Interestingly, this phenomenon is first predicted by the Layer-Peeled Model before being confirmed by our computational experiments.
In the past decade, deep learning has achieved remarkable performance across a range of scientific and engineering domains (1–3). Interestingly, these impressive accomplishments were mostly achieved by heuristics and tricks, though often plausible, without much principled guidance from a theoretical perspective. On the flip side, however, this reality suggests the great potential a theory could have for advancing the development of deep-learning methodologies in the coming decade.

Unfortunately, it is not easy to develop a theoretical foundation for deep learning. Perhaps the most difficult hurdle lies in the nonconvexity of the optimization problem for training neural networks, which, loosely speaking, stems from the interaction between different layers of neural networks. To be more precise, consider a neural network for K-class classification (in logits), which in its simplest form reads

$$f(x; W_{\text{full}}) = W_L \sigma\big(W_{L-1} \sigma(\cdots \sigma(W_1 x + b_1) \cdots) + b_{L-1}\big) + b_L.$$

Here, $W_{\text{full}} := \{W_1, \ldots, W_L\}$ denotes the weights of the L layers, $\{b_1, \ldots, b_L\}$ denotes the biases, and $\sigma(\cdot)$ is a nonlinear activation function such as the rectified linear unit (ReLU). Owing to the complex and nonlinear interaction between the L layers, when applying stochastic gradient descent to the optimization problem

$$\min_{W_{\text{full}}} \; \frac{1}{N} \sum_{k=1}^{K} \sum_{i=1}^{n_k} \mathcal{L}\big(f(x_{k,i}; W_{\text{full}}), y_k\big) + \frac{\lambda}{2} \|W_{\text{full}}\|^2$$

with a loss function $\mathcal{L}$ for training the neural network, it becomes very difficult to pinpoint how a given layer influences the output. Above, $x_{k,1}, \ldots, x_{k,n_k}$ denote the $n_k$ training examples in the k-th class, with label $y_k$; $N = n_1 + \cdots + n_K$ is the total number of training examples; $\lambda > 0$ is the weight decay parameter; and throughout the paper $\|\cdot\|$ is the $\ell_2$ norm (the Frobenius norm when applied to matrices). Worse, this difficulty in analyzing deep-learning models is compounded by an ever-growing number of layers.

Therefore, any attempt to develop a tractable and comprehensive theory for demystifying deep learning would presumably first need to simplify the interaction between a large number of layers. Following this intuition, in this paper, we introduce the following optimization program as a surrogate model for the training program above, with the goal of unveiling quantitative patterns of deep neural networks:

$$\min_{W, H} \; \frac{1}{N} \sum_{k=1}^{K} \sum_{i=1}^{n_k} \mathcal{L}(W h_{k,i}, y_k) \quad \text{s.t.} \quad \frac{1}{K} \sum_{k=1}^{K} \|w_k\|^2 \le E_W, \quad \frac{1}{N} \sum_{k=1}^{K} \sum_{i=1}^{n_k} \|h_{k,i}\|^2 \le E_H,$$

where $W := [w_1, \ldots, w_K]^\top$ is, as in the network above, comprised of the K linear classifiers in the last layer, $H := [h_{k,i} : 1 \le k \le K, 1 \le i \le n_k]$ corresponds to the p-dimensional last-layer activations/features of all N training examples, and $E_W$ and $E_H$ are two positive scalars. Note that the bias terms are omitted for simplicity. Although still nonconvex, this optimization program is presumably much more amenable to analysis than the original one, as the interaction now is only between two layers.

In relating the surrogate to the original training program, a first simple observation is that $f(x_{k,i}; W_{\text{full}})$ is replaced by $W h_{k,i}$. Put differently, the black-box nature of the last-layer features is now modeled by a simple decision variable $h_{k,i}$ for each training example, with an overall constraint on their norms. Intuitively speaking, this simplification is done by peeling off the topmost layer from the neural network. Thus, we call the optimization program above the 1-Layer-Peeled Model, or simply the Layer-Peeled Model.

At a high level, the Layer-Peeled Model takes a top-down approach to the analysis of deep neural networks. As illustrated in Fig. 1, the essence of the modeling strategy is to break down the neural network from top to bottom, specifically singling out the topmost layer and modeling all bottom layers collectively as a single variable. In fact, the top-down perspective that we took in the development of the Layer-Peeled Model was inspired by a recent breakthrough made by Papyan, Han, and Donoho (4), who discovered a mathematically elegant and pervasive phenomenon termed neural collapse in deep-learning training.
This top-down approach was also taken in refs. (5–9) to investigate various aspects of deep-learning models.
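To fix ideas, the following minimal NumPy sketch (our illustration; none of this code comes from the paper, and all names are ours) evaluates the Layer-Peeled Model's cross-entropy risk and checks the feasibility of its two norm constraints for a candidate pair (W, H).

```python
import numpy as np

def lpm_objective(W, H, labels, E_W, E_H):
    """Cross-entropy risk of the Layer-Peeled Model for classifiers W (K x p),
    features H (p x N), and integer labels of length N."""
    logits = W @ H                                  # K x N matrix of logits W h_{k,i}
    logits = logits - logits.max(axis=0)            # stabilize the softmax
    log_probs = logits - np.log(np.exp(logits).sum(axis=0))
    risk = -log_probs[labels, np.arange(H.shape[1])].mean()
    # The two constraints: (1/K) sum_k ||w_k||^2 <= E_W and (1/N) sum ||h||^2 <= E_H.
    feasible = ((W**2).sum(axis=1).mean() <= E_W) and ((H**2).sum(axis=0).mean() <= E_H)
    return risk, feasible
```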
Fig. 1.
Illustration of Layer-Peeled Models. B represents the 2-Layer-Peeled Model, which is discussed in Section 6. For each panel, we preserve the details of the white (top) box, whereas the gray (bottom) box is modeled by a simple decision variable for every training example. (A) The 1-Layer-Peeled Model. (B) The 2-Layer-Peeled Model.
Two Applications
Despite its plausibility, the ultimate test of the Layer-Peeled Model lies in its ability to faithfully approximate deep-learning models by explaining empirical observations and, better yet, predicting new phenomena. In what follows, we provide convincing evidence that the Layer-Peeled Model is up to this task by presenting two findings. To be concrete, we remark that the results below are concerned with well-trained deep-learning models, which correspond to, in rough terms, (near) optimal solutions of the training program introduced above.
Balanced data
Roughly speaking, neural collapse (4) refers to the emergence of certain geometric patterns of the last-layer features $h_{k,i}$ and the last-layer classifiers $w_k$, when the neural network for balanced classification problems is well-trained in the sense that it achieves not only zero misclassification error, but also negligible cross-entropy loss. Specifically, the authors observed the following properties in their massive experiments: The last-layer features from the same class tend to be very close to their class mean; these K class means centered at the global mean have the same length and form the maximally possible equal-sized angles between any pair; moreover, the last-layer classifiers become dual to the class means in the sense that they are equal to each other for each class up to a scaling factor. See a more precise description in Section B.

While it seems hopeless to rigorously prove neural collapse for multiple-layer neural networks at the moment, we alternatively seek to show that this phenomenon emerges in the surrogate model. More precisely, when the classes have equal size ($n_k = n$ for all k), is it true that any global minimizer of the Layer-Peeled Model exhibits neural collapse? The following result answers this question in the affirmative:

Neural collapse occurs in the Layer-Peeled Model.

A formal statement of this result and a detailed discussion are given in Section 3. This result applies to a family of loss functions, particularly including the cross-entropy loss and the contrastive loss (see, e.g., ref. (10)). As an immediate implication, this result provides evidence of the Layer-Peeled Model's ability to characterize well-trained deep-learning models.
Imbalanced data
While a surrogate model would be satisfactory if it explains some already-observed phenomenon, we set a higher standard for the model, asking whether it can predict a new common empirical pattern. Encouragingly, the Layer-Peeled Model happens to meet this standard. Specifically, we consider training deep-learning models on imbalanced datasets, where some classes contain many more training examples than others. Despite the pervasiveness of imbalanced classification in many practical applications (11), the literature remains scarce on its impact on the trained neural networks from a theoretical standpoint. Here, we provide mathematical insights into this problem by using the Layer-Peeled Model. In the following result, we consider optimal solutions to the Layer-Peeled Model on a dataset with two different class sizes: The first $K_A$ majority classes each contain $n_A$ training examples, and the remaining $K_B = K - K_A$ minority classes each contain $n_B$ examples ($n_A > n_B$). We call $R := n_A/n_B$ the imbalance ratio.

In the Layer-Peeled Model, the last-layer classifiers corresponding to the minority classes, namely $w_{K_A+1}, \ldots, w_K$, collapse to a single vector when R is sufficiently large.

This result is elaborated on in Section 4. The derivation involves some elements to tackle the nonconvexity of the Layer-Peeled Model and the asymmetry due to the imbalance in class sizes. In slightly more detail, we identify a phase transition as the imbalance ratio R increases: When R is below a threshold, the minority classes are distinguishable in terms of their last-layer classifiers; when R is above the threshold, they become indistinguishable. While this phenomenon is merely predicted by the simple Layer-Peeled Model, it appears in our computational experiments on deep neural networks. More surprisingly, our prediction of the phase transition point is in excellent agreement with the experiments, as shown in Fig. 2.
Fig. 2.
Minority Collapse predicted by the Layer-Peeled Model (LPM; in dotted lines) and empirically observed in deep learning (DL; in solid lines) on imbalanced datasets with $K_A = 7$ majority classes. The y axis denotes the average cosine of the angles between any pair of the minority classifiers for both LPM and DL. The datasets we use are subsets of the CIFAR10 dataset (12), and the size of the majority classes is fixed to 5,000. The experiments use VGG13 (13) as the deep-learning architecture, with different values of the weight decay (wd). The prediction is especially accurate in capturing the phase transition point where the cosine becomes 1 or, equivalently, the minority classifiers become parallel to each other. More details can be found in Section C.
This phenomenon, which we refer to as Minority Collapse, reveals the fundamental difficulty in using deep learning for classification when the dataset is severely imbalanced, even in terms of optimization, not to mention generalization. This is not a priori evident given that neural networks have a large approximation capacity (see, e.g., ref. (14)). Importantly, Minority Collapse emerges at a finite value of the imbalance ratio rather than at infinity. Moreover, even below the phase transition point of this ratio, we find that the angles between any pair of the minority classifiers are already smaller than those between the majority classifiers, both theoretically and empirically.
Related Work
There is a venerable line of work attempting to gain insights into deep learning from a theoretical point of view (15–29). See also the reviews (30–33) and references therein. Within this body of work, the discovery of neural collapse by ref. (4) is particularly noticeable for its mathematically elegant and convincing insights. In brief, ref. (4) observed the following four properties of the last-layer features and classifiers in deep-learning training on balanced datasets:

(NC1) Variability collapse: The within-class variation of the last-layer features becomes 0, which means that these features collapse to their class means.

(NC2) The class means centered at their global mean collapse to the vertices of a simplex equiangular tight frame (ETF) up to scaling.

(NC3) Up to scaling, the last-layer classifiers each collapse to the corresponding class means.

(NC4) The network's decision collapses to simply choosing the class with the closest Euclidean distance between its class mean and the activations of the test example.

Now we give the formal definition of an ETF (4, 34).

Definition 1. A K-simplex ETF is a collection of points in $\mathbb{R}^p$ specified by the columns of the matrix

$$M = \sqrt{\frac{K}{K-1}}\, P \left(I_K - \frac{1}{K} \mathbf{1}_K \mathbf{1}_K^\top\right),$$

where $I_K$ is the $K \times K$ identity matrix, $\mathbf{1}_K$ is the ones vector, and $P \in \mathbb{R}^{p \times K}$ ($p \ge K$) is a partial orthogonal matrix such that $P^\top P = I_K$.

A common setup of the experiments for validating neural collapse is the use of the cross-entropy loss with $\ell_2$ regularization, which corresponds to weight decay in stochastic gradient descent. Based on convincing arguments and numerical evidence, ref. (4) demonstrated that the symmetry and stability of neural collapse improve deep-learning training in terms of generalization, robustness, and interpretability. Notably, these improvements occur with the benign overfitting phenomenon (35–39) during the terminal phase of training, when the trained model interpolates the in-sample training data.

In passing, we remark that concurrent works (40–43) produced neural collapse using different surrogate models. In slightly more detail, refs. (40–42) obtained their models by peeling off the topmost layer. The difference, however, is that refs. (41) and (42) considered models that impose a norm constraint for each class, as opposed to the overall constraint employed in the Layer-Peeled Model. Moreover, ref. (40) analyzed gradient flow with an unconstrained features model using the squared loss instead of the cross-entropy loss. The work in ref. (43) provided an insightful perspective for the analysis of neural networks using convex duality. Relying on a convex formulation that is in the same spirit as our semidefinite programming relaxation, the authors of ref. (43) observed neural collapse in their ReLU-based model by leveraging strong duality under certain conditions.
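To make Definition 1 concrete, here is a short NumPy sketch (our illustration, not code from the paper) that constructs a K-simplex ETF and verifies its defining geometry: unit-length columns whose pairwise cosines all equal $-1/(K-1)$.

```python
import numpy as np

def simplex_etf(K, p, seed=0):
    """Return a p x K matrix whose columns form a K-simplex ETF (Definition 1)."""
    rng = np.random.default_rng(seed)
    # Partial orthogonal matrix P (p x K) with P^T P = I_K, via a reduced QR decomposition.
    P, _ = np.linalg.qr(rng.standard_normal((p, K)))
    return np.sqrt(K / (K - 1)) * P @ (np.eye(K) - np.ones((K, K)) / K)

M = simplex_etf(K=4, p=16)
# Gram matrix: ones on the diagonal, -1/(K-1) = -1/3 off the diagonal.
print(np.round(M.T @ M, 4))
```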
Derivation
In this section, we heuristically derive the Layer-Peeled Model as an analytical surrogate for well-trained neural networks. Although our derivation lacks rigor, the goal is to reduce the complexity of the original training program while roughly preserving its structure. Notably, the penalty $\frac{\lambda}{2}\|W_{\text{full}}\|^2$ corresponds to weight decay used in training deep-learning models, which is necessary for preventing this optimization program from attaining its minimum at infinity when $\mathcal{L}$ is the cross-entropy loss. For simplicity, we omit the biases in the neural network $f(x; W_{\text{full}})$.

Taking a top-down standpoint, our modeling strategy starts by singling out the weights $W_L$ of the topmost layer and rewriting the training program as

$$\min_{W_L, W_{-L}} \; \frac{1}{N} \sum_{k=1}^{K} \sum_{i=1}^{n_k} \mathcal{L}\big(W_L h(x_{k,i}; W_{-L}), y_k\big) + \frac{\lambda}{2} \|W_L\|^2 + \frac{\lambda}{2} \|W_{-L}\|^2,$$

where $h(\cdot\,; W_{-L})$ is the last-layer feature function, so that $f(x; W_{\text{full}}) = W_L h(x; W_{-L})$, and $W_{-L} := \{W_1, \ldots, W_{L-1}\}$ denotes the weights from all layers but the last layer. From the Lagrangian dual viewpoint, a minimum of the optimization program above is also an optimal solution to

$$\min_{W_L, W_{-L}} \; \frac{1}{N} \sum_{k=1}^{K} \sum_{i=1}^{n_k} \mathcal{L}\big(W_L h(x_{k,i}; W_{-L}), y_k\big) \quad \text{s.t.} \quad \|W_L\|^2 \le C_1, \; \|W_{-L}\|^2 \le C_2$$

for some positive numbers $C_1$ and $C_2$. To clear up any confusion, note that, due to its nonconvexity, the training program may admit multiple global minima, and each in general corresponds to different values of $(C_1, C_2)$. Next, we can equivalently write the program above as

$$\min_{W_L, H} \; \frac{1}{N} \sum_{k=1}^{K} \sum_{i=1}^{n_k} \mathcal{L}(W_L h_{k,i}, y_k) \quad \text{s.t.} \quad \|W_L\|^2 \le C_1, \; H \in \mathcal{H}(C_2),$$

where $H = [h_{k,i} : 1 \le k \le K, 1 \le i \le n_k]$ denotes a decision variable, and the set $\mathcal{H}(C_2)$ is defined as $\mathcal{H}(C_2) := \{[h(x_{k,i}; W_{-L})] : \|W_{-L}\|^2 \le C_2\}$.

To simplify this program, we make the ansatz that the range of the features under the constraint $\|W_{-L}\|^2 \le C_2$ is approximately an ellipse in the sense that

$$\mathcal{H}(C_2) \approx \left\{H : \frac{1}{N} \sum_{k=1}^{K} \sum_{i=1}^{n_k} \|h_{k,i}\|^2 \le E_H\right\}$$

for some $E_H > 0$. Loosely speaking, this ansatz asserts that $h$ should be regarded as a variable in an $\ell_2$ space. To shed light on the rationale behind the ansatz, note that $h$ intuitively lives in the dual space of $W_L$ in view of the appearance of the product $W_L h_{k,i}$ in the objective. Furthermore, $W_L$ is in an $\ell_2$ space owing to the constraint on it. Last, note that $\ell_2$ spaces are self-dual.

Inserting this approximation into the program above, we obtain the following optimization program, which we call the Layer-Peeled Model:

$$\min_{W, H} \; \frac{1}{N} \sum_{k=1}^{K} \sum_{i=1}^{n_k} \mathcal{L}(W h_{k,i}, y_k) \quad \text{s.t.} \quad \frac{1}{K} \sum_{k=1}^{K} \|w_k\|^2 \le E_W, \; \frac{1}{N} \sum_{k=1}^{K} \sum_{i=1}^{n_k} \|h_{k,i}\|^2 \le E_H.$$

For simplicity, above and henceforth we write $W := W_L = [w_1, \ldots, w_K]^\top$ for the last-layer classifiers/weights and set the threshold $E_W := C_1/K$.

This optimization program is nonconvex but, as we will show soon, is generally mathematically tractable for analysis. On the surface, the Layer-Peeled Model has no dependence on the data $\{x_{k,i}\}$, which, however, is not the correct picture, since the dependence has been implicitly incorporated into the threshold $E_H$.

In passing, we remark that neural collapse does not emerge if the second constraint of the Layer-Peeled Model uses the $\ell_q$ norm for any $q \ne 2$ (strictly speaking, $\|\cdot\|_q$ is not a norm when q < 1) in place of the $\ell_2$ norm. This fact in turn justifies in part the ansatz above. This result is formally stated in Proposition 2 in Section 6.
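Because the Layer-Peeled Model involves only the two variables W and H, it can be attacked directly with projected gradient descent; projection onto each constraint set is a simple rescaling. The sketch below (our illustration; the paper does not prescribe this algorithm, and the hyperparameters are arbitrary) heuristically minimizes the model with the cross-entropy loss in the balanced case.

```python
import numpy as np

def project(A, budget, axis):
    """Rescale A so the average squared norm along `axis` is at most `budget`."""
    avg = (A**2).sum(axis=axis).mean()
    return A if avg <= budget else A * np.sqrt(budget / avg)

def solve_lpm(K=4, p=16, n=32, E_W=1.0, E_H=5.0, steps=20000, lr=0.5, seed=0):
    rng = np.random.default_rng(seed)
    N = K * n
    labels = np.repeat(np.arange(K), n)
    Y = np.eye(K)[:, labels]                      # one-hot targets, K x N
    W = 0.1 * rng.standard_normal((K, p))
    H = 0.1 * rng.standard_normal((p, N))
    for _ in range(steps):
        Z = W @ H
        Z = Z - Z.max(axis=0)
        S = np.exp(Z); S /= S.sum(axis=0)         # softmax probabilities
        G = (S - Y) / N                           # gradient of the mean loss w.r.t. logits
        W, H = W - lr * (G @ H.T), H - lr * (W.T @ G)
        W = project(W, E_W, axis=1)               # (1/K) sum_k ||w_k||^2 <= E_W
        H = project(H, E_H, axis=0)               # (1/N) sum_i ||h_i||^2 <= E_H
    return W, H, labels
```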
Layer-Peeled Model for Explaining Neural Collapse
In this section, we consider training deep neural networks on a balanced dataset, that is, $n_k = n$ for all classes $1 \le k \le K$. Our main finding is that the Layer-Peeled Model displays the neural collapse phenomenon, just as in deep-learning training (4). The proofs are all deferred to the appendix. Throughout this section, we assume $p \ge K$ unless otherwise specified. This assumption is satisfied in many popular network architectures, where p is usually tens or hundreds of times larger than K.
Cross-Entropy Loss
The cross-entropy loss is perhaps the most popular loss used in training deep-learning models for classification tasks. This loss function takes the form

$$\mathcal{L}(z, y_k) = -\log\left(\frac{\exp(z_k)}{\sum_{k'=1}^{K} \exp(z_{k'})}\right),$$

where $z_k$ denotes the k-th entry of the logit $z$. Recall that $y_k$ is the label of the k-th class, and the logit is set to $z = W h_{k,i}$ in the Layer-Peeled Model. In contrast to complex deep neural networks, which are often considered a black box, the Layer-Peeled Model is much easier to deal with. As an exemplary use case, the following result shows that any minimizer of the Layer-Peeled Model with the cross-entropy loss admits an almost closed-form expression.

Theorem 1. In the balanced case, any global minimizer $(W^\star, H^\star)$ of the Layer-Peeled Model with the cross-entropy loss obeys

$$h_{k,i}^\star = \sqrt{E_H}\, m_k^\star, \qquad w_k^\star = \sqrt{E_W}\, m_k^\star$$

for all $1 \le i \le n$ and $1 \le k \le K$, where the matrix $M^\star = [m_1^\star, \ldots, m_K^\star]$ forms a K-simplex ETF specified in Definition 1.

Note that the minimizers are equivalent to each other up to rotation. This is because of the rotational invariance of simplex ETFs (see the partial orthogonal matrix P in Definition 1). This theorem demonstrates the highly symmetric geometry of the last-layer features and weights of the Layer-Peeled Model, which is precisely the phenomenon of neural collapse. Explicitly, the display above says that all within-class (last-layer) features are the same: $h_{k,i}^\star = h_{k,i'}^\star$ for all $1 \le i < i' \le n$; next, it also says that the K class-mean features together exhibit a K-simplex ETF up to scaling, from which we immediately conclude that

$$\cos \angle\big(h_{k,i}^\star, h_{k',i'}^\star\big) = -\frac{1}{K-1}$$

for any $k \ne k'$ by Definition 1; in addition, the display also reveals the precise duality between the last-layer classifiers and features. Taken together, these facts indicate that the minimizer satisfies exactly (NC1)–(NC3). Last, Property (NC4) is also satisfied by recognizing that, for any given last-layer feature $h$, the predicted class is $\arg\max_{k'} \langle w_{k'}^\star, h \rangle$, where $\langle \cdot, \cdot \rangle$ denotes the inner product of two vectors. Note that, since the classifiers $w_{k'}^\star$ have equal lengths, the prediction satisfies

$$\arg\max_{k'} \langle w_{k'}^\star, h \rangle = \arg\min_{k'} \|h - w_{k'}^\star\|.$$

Conversely, the presence of neural collapse in the Layer-Peeled Model offers evidence of the effectiveness of our model as a tool for analyzing neural networks. To be complete, we remark that other models were very recently proposed to justify the neural collapse phenomenon (40–42) (see also ref. (44)).
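As an illustrative check of Theorem 1 (again our code, not the paper's), the helper below computes diagnostics for (NC1)-(NC3) from any candidate solution, such as one returned by the solve_lpm sketch above: the within-class feature variation should vanish, the average cosine of the centered class means should approach $-1/(K-1)$, and each classifier should align with its class mean.

```python
import numpy as np

def nc_metrics(W, H, labels):
    """Diagnostics for (NC1)-(NC3) given classifiers W (K x p) and features H (p x N)."""
    K = W.shape[0]
    means = np.stack([H[:, labels == k].mean(axis=1) for k in range(K)], axis=1)
    # (NC1) Average within-class feature variation.
    within = np.mean([np.var(H[:, labels == k], axis=1).sum() for k in range(K)])
    # (NC2) Average pairwise cosine of the centered class means.
    Mc = means - means.mean(axis=1, keepdims=True)
    Mc = Mc / np.linalg.norm(Mc, axis=0)
    C = Mc.T @ Mc
    avg_cos = (C.sum() - K) / (K * (K - 1))
    # (NC3) Average cosine between each classifier and its class mean.
    align = np.mean([W[k] @ means[:, k] /
                     (np.linalg.norm(W[k]) * np.linalg.norm(means[:, k]))
                     for k in range(K)])
    return within, avg_cos, align

# Example: W, H, labels = solve_lpm(); print(nc_metrics(W, H, labels))
```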
Extensions to Other Loss Functions
In the modern practice of deep learning, various loss functions are employed to take into account the problem characteristics. Here, we show that the Layer-Peeled Model continues to exhibit the phenomenon of neural collapse for some popular loss functions.
Contrastive loss
Contrastive losses have been extensively used recently in both supervised and unsupervised deep learning (10, 45–47). These losses pull similar training examples together in their embedding space while pushing apart dissimilar examples. Here, we consider the supervised contrastive loss (48), which (in the balanced case) is defined through the last-layer features by introducing

$$\mathcal{L}_{\text{contrast}}(h_{k,i}; H) = \frac{-1}{n-1} \sum_{i' \ne i} \log \frac{\exp\big(\langle h_{k,i}, h_{k,i'} \rangle / \tau\big)}{\sum_{(k',i') \ne (k,i)} \exp\big(\langle h_{k,i}, h_{k',i'} \rangle / \tau\big)},$$

where $\tau > 0$ is a temperature parameter. Note that this loss function uses the label information implicitly. As the loss does not involve the last-layer classifiers explicitly, the Layer-Peeled Model in this case takes the form

$$\min_{H} \; \frac{1}{N} \sum_{k=1}^{K} \sum_{i=1}^{n} \mathcal{L}_{\text{contrast}}(h_{k,i}; H) \quad \text{s.t.} \quad \frac{1}{N} \sum_{k=1}^{K} \sum_{i=1}^{n} \|h_{k,i}\|^2 \le E_H.$$

We show that this Layer-Peeled Model also exhibits neural collapse in its last-layer features, even though the label information is not explicitly explored in the loss.

Theorem 3. Any global minimizer $H^\star$ of the program above satisfies

$$h_{k,i}^\star = \sqrt{E_H}\, m_k^\star$$

for all $1 \le i \le n$ and $1 \le k \le K$, where $[m_1^\star, \ldots, m_K^\star]$ forms a K-simplex ETF.

Theorem 3 shows that the contrastive loss in the associated Layer-Peeled Model does a perfect job in pulling together training examples from the same class. Moreover, as seen from the denominator in the loss above, minimizing this loss would intuitively render the between-class inner products of last-layer features as small as possible, thereby pushing the features to form the vertices of a K-simplex ETF up to scaling.
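For concreteness, a direct NumPy implementation of the supervised contrastive risk above might look as follows (our sketch; the variable names and the temperature default are ours, and each class is assumed to contain at least two examples).

```python
import numpy as np

def sup_contrastive_risk(H, labels, tau=0.5):
    """Average supervised contrastive loss of the features H (p x N)."""
    N = H.shape[1]
    sim = (H.T @ H) / tau                        # pairwise inner products over tau
    np.fill_diagonal(sim, -np.inf)               # exclude (k', i') = (k, i)
    log_den = np.logaddexp.reduce(sim, axis=1)   # log of the denominator
    pos = labels[:, None] == labels[None, :]     # same-class indicator
    np.fill_diagonal(pos, False)
    losses = [-(sim[i, pos[i]] - log_den[i]).mean() for i in range(N)]
    return float(np.mean(losses))
```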
Softmax-based loss
The cross-entropy loss can be thought of as a softmax-based loss. To see this, define the softmax transform as

$$\mathrm{softmax}(z)_k = \frac{\exp(z_k)}{\sum_{k'=1}^{K} \exp(z_{k'})}$$

for $z \in \mathbb{R}^K$. Let $g_1$ be any nonincreasing convex function and $g_2$ be any nondecreasing convex function, both defined on (0, 1). We consider a softmax-based loss function that takes the form

$$\mathcal{L}(z, y_k) = g_1\big(\mathrm{softmax}(z)_k\big) + \sum_{k' \ne k} g_2\big(\mathrm{softmax}(z)_{k'}\big).$$

Here, $\mathrm{softmax}(z)_k$ denotes the k-th element of $\mathrm{softmax}(z)$. Taking $g_1(t) = -\log t$ and $g_2 \equiv 0$, we recover the cross-entropy loss. Other choices of $g_1$ and $g_2$ can be implemented in most deep-learning libraries, such as PyTorch (49). We have the following theorem regarding the softmax-based loss functions in the balanced case.

Theorem 4. Assume the product $E_W E_H$ is sufficiently large. For any loss function of the form above, the pair $(W^\star, H^\star)$ given by Theorem 1 is a global minimizer of the Layer-Peeled Model. Moreover, if $g_1$ or $g_2$ is strictly monotone, then any global minimizer must be given by this form.

In other words, neural collapse continues to emerge with softmax-based losses under mild regularity conditions. The first part of this theorem does not preclude the possibility that the Layer-Peeled Model admits solutions other than the neural collapse configuration. When applied to the cross-entropy loss, it is worth pointing out that this theorem is a weak version of Theorem 1, albeit more general. Regarding the first assumption in Theorem 4, note that $E_W$ and $E_H$ would be arbitrarily large if the weight decay λ in the training program is sufficiently small, thereby meeting the assumption concerning $E_W E_H$ in this theorem.

We remark that Theorem 4 does not require the convexity of the loss $\mathcal{L}$ in the logits. To circumvent the hurdle of nonconvexity, our proof in the appendix presents several elements. In passing, we leave the experimental confirmation of neural collapse with these loss functions for future work.
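The general form above is easy to instantiate; the toy sketch below (ours) evaluates a softmax-based loss for one logit vector and recovers the cross-entropy loss with $g_1(t) = -\log t$ and $g_2 \equiv 0$.

```python
import numpy as np

def softmax_based_loss(z, k, g1, g2):
    """g1(softmax(z)_k) + sum over k' != k of g2(softmax(z)_{k'})."""
    z = z - z.max()                               # stabilize the softmax
    p = np.exp(z) / np.exp(z).sum()
    return g1(p[k]) + sum(g2(p[j]) for j in range(len(p)) if j != k)

z = np.array([2.0, -1.0, 0.5])
ce = softmax_based_loss(z, 0, lambda t: -np.log(t), lambda t: 0.0)  # cross entropy
```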
Layer-Peeled Model for Predicting Minority Collapse
Deep-learning models are often trained on datasets where there is a disproportionate ratio of observations in each class (50–52). For example, in the Places2 challenge dataset (53), the number of images in its majority scene categories is about eight times that in its minority classes. Another example is the Ontonotes dataset for part-of-speech tagging (54), where the number of words in its majority classes can be more than 100 times that in its minority classes. While, empirically, the imbalance in class sizes often leads to inferior model performance of deep learning (see, e.g., ref. (11)), there remains a lack of a solid theoretical footing for understanding its effect, perhaps due to the complex details of deep-learning training.

In this section, we use the Layer-Peeled Model to seek a fine-grained characterization of how class imbalance impacts neural networks that are trained for a sufficiently long time. In particular, neural collapse no longer emerges in the presence of class imbalance (see numerical evidence in the appendix). Instead, our analysis predicts a phenomenon we term Minority Collapse, which fundamentally limits the performance of deep learning, especially on the minority classes, both theoretically and empirically. All omitted proofs are relegated to the appendix.
Technique: Convex Relaxation
When it comes to imbalanced datasets, the Layer-Peeled Model no longer admits a simple expression for its minimizers as in the balanced case, due to the lack of symmetry between classes. This fact results in, among other things, an added burden on numerically computing the solutions of the Layer-Peeled Model.

To overcome this difficulty, we introduce a convex optimization program as a relaxation of the nonconvex Layer-Peeled Model, relying on the well-known result for relaxing a quadratically constrained quadratic program as a semidefinite program (see, e.g., ref. (55)). To begin with, defining $\mu_k := \frac{1}{n_k} \sum_{i=1}^{n_k} h_{k,i}$ as the feature mean of the k-th class, we introduce a decision variable $X \in \mathbb{R}^{2K \times 2K}$ serving as the Gram matrix of the vectors $w_1, \ldots, w_K, \mu_1, \ldots, \mu_K$. By definition, X is positive semidefinite and satisfies

$$\frac{1}{K} \sum_{k=1}^{K} X_{k,k} \le E_W$$

and

$$\frac{1}{N} \sum_{k=1}^{K} n_k X_{K+k, K+k} = \frac{1}{N} \sum_{k=1}^{K} n_k \|\mu_k\|^2 \le \frac{1}{N} \sum_{k=1}^{K} \sum_{i=1}^{n_k} \|h_{k,i}\|^2 \le E_H,$$

where the first inequality follows from the Cauchy–Schwarz inequality. Thus, we consider the following semidefinite programming problem:

$$\min_{X \succeq 0} \; \frac{1}{N} \sum_{k=1}^{K} n_k \mathcal{L}\big(z_k(X), y_k\big) \quad \text{s.t.} \quad \frac{1}{K} \sum_{k=1}^{K} X_{k,k} \le E_W, \; \frac{1}{N} \sum_{k=1}^{K} n_k X_{K+k, K+k} \le E_H, \qquad [15]$$

where $z_k(X) := (X_{1, K+k}, \ldots, X_{K, K+k})^\top$ collects the logits $W \mu_k$. Lemma 1 below relates the solutions of [15] to those of the Layer-Peeled Model.

Lemma 1. Assume $p \ge 2K$ and the loss function $\mathcal{L}$ is convex in its first argument. Let $X^\star$ be a minimizer of the convex program [15]. Define $(W^\star, H^\star)$ as

$$[W^{\star\top}, \mu_1^\star, \ldots, \mu_K^\star] = P (X^\star)^{1/2}, \qquad h_{k,i}^\star = \mu_k^\star \text{ for all } i, k,$$

where $(X^\star)^{1/2}$ denotes the positive square root of $X^\star$ and $P \in \mathbb{R}^{p \times 2K}$ is any partial orthogonal matrix such that $P^\top P = I_{2K}$. Then, $(W^\star, H^\star)$ is a minimizer of the Layer-Peeled Model. Moreover, if all minimizers $X^\star$ of [15] satisfy $\frac{1}{K}\sum_{k=1}^{K} X_{k,k}^\star = E_W$, then all the solutions of the Layer-Peeled Model are of the form above.

This lemma in effect says that the relaxation does not lead to any loss of information when we study the Layer-Peeled Model through a convex program, thereby offering a computationally efficient tool for gaining insights into the terminal phase of training deep neural networks on imbalanced datasets. An appealing feature is that the size of the program [15] is independent of the number of training examples. Besides, this lemma predicts that, even in the imbalanced case, the last-layer features collapse to their class means under mild conditions. Therefore, Property (NC1) is satisfied (see more discussion about the condition in the appendix).

The assumption of the convexity of $\mathcal{L}$ in the first argument is satisfied by a large class of loss functions. The condition that the first K diagonal elements of any $X^\star$ make the associated constraint saturated is also not restrictive. For example, we prove in the appendix that this condition is satisfied for the cross-entropy loss. We also remark that [15] is not the unique convex relaxation. An alternative is to relax the Layer-Peeled Model via a nuclear norm-constrained convex program (56, 57) (see more details in the appendix).
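The program [15] is small (its dimension is 2K, independent of N) and fits in a few lines of CVXPY. The sketch below (our illustration; the paper does not supply code, and lpm_sdp and its arguments are our names) instantiates it with the cross-entropy loss.

```python
import numpy as np
import cvxpy as cp

def lpm_sdp(ns, E_W, E_H):
    """Convex relaxation [15] with the cross-entropy loss; ns[k] is the size of
    class k. Returns the optimal 2K x 2K Gram matrix of (w_1..w_K, mu_1..mu_K)."""
    K, N = len(ns), sum(ns)
    X = cp.Variable((2 * K, 2 * K), PSD=True)
    risk = 0
    for k in range(K):
        z = X[:K, K + k]                          # logits W mu_k read off from X
        risk += (ns[k] / N) * (cp.log_sum_exp(z) - z[k])
    constraints = [
        cp.sum(cp.diag(X)[:K]) / K <= E_W,        # (1/K) sum_k ||w_k||^2 <= E_W
        cp.sum(cp.multiply(np.array(ns), cp.diag(X)[K:])) / N <= E_H,
    ]
    cp.Problem(cp.Minimize(risk), constraints).solve()
    return X.value
```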
Minority Collapse
With the technique of convex relaxation in place, we now numerically solve the Layer-Peeled Model on imbalanced datasets, with the goal of identifying possible nontrivial patterns. As a worthwhile starting point, we consider a dataset that has $K_A$ majority classes, each containing $n_A$ training examples, and $K_B = K - K_A$ minority classes, each containing $n_B$ training examples. That is, assume $n_1 = \cdots = n_{K_A} = n_A$ and $n_{K_A+1} = \cdots = n_K = n_B$, with $n_A > n_B$. For convenience, call $R := n_A/n_B$ the imbalance ratio. Note that the case R = 1 reduces to the balanced setting.

An important question is to understand how the $K_B$ last-layer minority classifiers behave as the imbalance ratio R increases, as this is directly related to the model performance on the minority classes. To address this question, we show the average cosine of the angles between any pair of the $K_B$ minority classifiers in Fig. 3, obtained by solving the simple convex program [15]. This figure reveals a two-phase behavior of the minority classifiers as R increases:
Fig. 3.
The average cosine of the angles between any pair of the minority classifiers solved from the Layer-Peeled Model. The average cosine reaches 1 once R is above some threshold. The total number of classes is fixed to 10. The gray dash-dotted line indicates the phase transition point $R_0$. The between-majority-class angles can still be large, even when Minority Collapse emerges. Notably, our simulation suggests that the minority classifiers exhibit an equiangular frame, and so do the majority classifiers. (A) $E_H$ = 5. (B) $E_H$ = 10.
1) When $R \le R_0$ for some threshold $R_0 > 0$, the average between-minority-class angle becomes smaller as R increases.

2) Once $R > R_0$, the average between-minority-class angle becomes zero and, in addition, the minority classifiers have about the same length. This implies that all the minority classifiers collapse to a single vector.

Above, the phase transition point $R_0$ depends on the class sizes and the thresholds $E_W$ and $E_H$. This value becomes smaller when the thresholds or the number of minority classes $K_B$ are smaller, while fixing the other parameters (see more numerical examples in the appendix).

We refer to the phenomenon that appears in the second phase as Minority Collapse. While it can be expected that the minority classifiers become closer to each other as the level of imbalance increases, surprisingly, these classifiers become completely indistinguishable once R hits a finite value. Once Minority Collapse takes place, the neural network would predict equal probabilities for all the minority classes, regardless of the input. As such, its predictive ability is by no means better than a coin toss when conditioned on the minority classes. This situation would only get worse in the presence of adversarial perturbations. This phenomenon is especially detrimental when the minority classes are more frequent in the application domains than in the training data. Even outside the regime of Minority Collapse, the classification might still be unreliable if the imbalance ratio is large, as the softmax predictions for the minority classes can be close to each other.

To put the observations in Fig. 3 on a firm footing, we prove in the theorem below that Minority Collapse indeed emerges in the Layer-Peeled Model as R tends to infinity.

Assume $p \ge 2K$ and $R = n_A/n_B$, and fix $K_A$ and $K_B$. Let $(W^\star, H^\star)$ be any global minimizer of the Layer-Peeled Model with the cross-entropy loss. As $R \to \infty$, we have

$$\lim \big(w_k^\star - w_{k'}^\star\big) = 0_p \quad \text{for all } K_A < k < k' \le K.$$

To intuitively see why Minority Collapse occurs, first note that the majority classes become the predominant part of the risk function as the level of imbalance increases. The minimization of the objective, therefore, places too much emphasis on the majority classifiers, encouraging the between-majority-class angles to grow and meanwhile shrinking the between-minority-class angles to zero. As an aside, an interesting question for future work is to prove that $w_k^\star$ and $w_{k'}^\star$ are exactly equal for sufficiently large R.
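Fig. 3 can be reproduced in spirit by sweeping the imbalance ratio with the SDP sketch given earlier and reading the classifier cosines off the top-left block of X (our illustration; the parameter values are arbitrary).

```python
import numpy as np

# Sweep R and record the average between-minority-class cosine, reusing lpm_sdp
# from the earlier sketch. K_A majority classes of size n_A; K_B minority classes.
K_A, K_B, n_A, E_W, E_H = 5, 5, 1000, 1.0, 5.0
for R in [1, 2, 5, 10, 100, 1000]:
    ns = [n_A] * K_A + [max(n_A // R, 1)] * K_B
    X = lpm_sdp(ns, E_W, E_H)
    G = X[:K_A + K_B, :K_A + K_B]               # Gram matrix of the classifiers w_k
    norms = np.sqrt(np.diag(G))
    C = G / np.outer(norms, norms)
    minority = C[K_A:, K_A:]
    avg_cos = (minority.sum() - K_B) / (K_B * (K_B - 1))
    print(R, round(float(avg_cos), 3))
```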
Experiments
At the moment, Minority Collapse is merely a prediction of the Layer-Peeled Model. An immediate question thus is: Does this phenomenon really occur in real-world neural networks? At first glance, it does not necessarily have to be the case, since the Layer-Peeled Model is a dramatic simplification of deep neural networks.

To address this question, we resort to computational experiments. Explicitly, we consider training two network architectures, VGG and ResNet (58), on the FashionMNIST (59) and CIFAR10 datasets and, in particular, replace the dropout layers in VGG with batch normalization (60). As both datasets have 10 classes, we use three combinations of $(K_A, K_B)$ to split the data into majority classes and minority classes. In the case of FashionMNIST (CIFAR10), we let the $K_A$ majority classes each contain all the 6,000 (5,000) training examples from the corresponding class of FashionMNIST (CIFAR10), and the $K_B$ minority classes each have $6{,}000/R$ ($5{,}000/R$) examples randomly sampled from the corresponding class. The rest of the experiment setup is basically the same as in ref. (4). In detail, we use the cross-entropy loss and stochastic gradient descent with momentum 0.9 and weight decay. The networks are trained for 350 epochs with a batch size of 128. The initial learning rate is annealed by a factor of 10 at 1/3 and 2/3 of the 350 epochs. The only difference from ref. (4) is that we simply set the learning rate to 0.1 instead of sweeping over 25 learning rates between 0.0001 and 0.25. This is because the test performance of our trained models is already comparable with their best reported test accuracy. Detailed training and test performance are displayed in the appendix.

The results of the experiments above are displayed in Fig. 4. This figure clearly indicates that the angles between the minority classifiers collapse to zero as soon as R is large enough. Moreover, the numerical examination in Table 1 shows that the norm of the classifier is nearly constant across the minority classes. Taken together, these two pieces clearly give evidence for the emergence of Minority Collapse in these neural networks, thereby further demonstrating the effectiveness of our Layer-Peeled Model. Besides, Fig. 4 also shows that the issue of Minority Collapse is compounded when there are more majority classes, which is consistent with Fig. 3.
Fig. 4.
Occurrence of Minority Collapse in deep neural networks. Each curve denotes the average between-minority-class cosine. We fix $K_A + K_B = 10$. In particular, B shares the same setting with Fig. 2 in Section 1, where the LPM-based predictions are given by choosing $E_W$ and $E_H$ such that the two constraints in the Layer-Peeled Model become active for the weights of the trained networks. For ResNet18, Minority Collapse also occurs as long as R is sufficiently large. Specifically, the average cosine would hit 1 for $K_A = 7$ when R = 5,000 on CIFAR10 and when R = 3,000 on FashionMNIST. (A) VGG11 on FashionMNIST. (B) VGG13 on CIFAR10. (C) ResNet18 on FashionMNIST. (D) ResNet18 on CIFAR10.
Table 1. Variability of the lengths of the minority classifiers when R = 1,000

Dataset         Network architecture    K_A = 3      K_A = 5      K_A = 7
FashionMNIST    VGG11                   2.7×10⁻⁵     4.4×10⁻⁸     6.0×10⁻⁸
FashionMNIST    ResNet18                1.4×10⁻⁵     5.0×10⁻⁸     6.3×10⁻⁸
CIFAR10         VGG13                   1.4×10⁻⁴     9.0×10⁻⁷     5.2×10⁻⁸
CIFAR10         ResNet18                5.4×10⁻⁵     3.5×10⁻⁷     5.4×10⁻⁸

Each number is the norm variation, defined as the SD of the lengths of the $K_B$ minority classifiers divided by their average. The results indicate that the classifiers of the minority classes have almost the same length.
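For reference, the two statistics reported in Fig. 4 and Table 1 are computed from the last-layer weight matrix alone; a sketch of the computation (ours, with hypothetical argument names) is below.

```python
import numpy as np

def minority_collapse_stats(W_last, K_A):
    """Average between-minority-class cosine and the norm variation of Table 1,
    given the K x p last-layer weight matrix with minority classes listed last."""
    W_B = W_last[K_A:]                             # minority classifiers
    norms = np.linalg.norm(W_B, axis=1)
    C = (W_B @ W_B.T) / np.outer(norms, norms)
    K_B = W_B.shape[0]
    avg_cos = (C.sum() - K_B) / (K_B * (K_B - 1))
    norm_variation = norms.std() / norms.mean()    # SD of the lengths over their mean
    return avg_cos, norm_variation
```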
Next, in order to get a handle on how Minority Collapse impacts the test accuracy, we plot the results of another numerical study in Fig. 5. The setting is the same as in Fig. 4, except that now we randomly sample six or five examples per class for the minority classes, depending on whether the dataset is FashionMNIST or CIFAR10. The results show that the performance of the trained model deteriorates on the test data when the imbalance ratio R = 1,000, when Minority Collapse has occurred or is about to occur. This is by no means intuitive a priori, as the test evaluation is restricted to the minority classes and a large value of R only leads to more training data in the majority classes without affecting the minority classes at all.
Fig. 5.
Comparison of the test accuracy on the minority classes between R = 1 and R = 1,000. We fix $K_A + K_B = 10$ and use $n_B = 6$ ($n_B = 5$) training examples from each minority class and $n_A = 6{,}000$ ($n_A = 5{,}000$) training examples from each majority class in FashionMNIST (CIFAR10). Note that when R = 1,000, the test accuracy on the minority classes can be lower than 10% because the trained neural networks misclassify many examples in the minority classes as some majority classes. (A) VGG11 on FashionMNIST. (B) VGG13 on CIFAR10. (C) ResNet18 on FashionMNIST. (D) ResNet18 on CIFAR10.
It is worthwhile to mention that the emergence of Minority Collapse would prevent the model from achieving zero training error. This is because its prediction is uniform over the minority classes and, therefore, the "argmax" rule does not give the correct label for a training example from a minority class. As such, the occurrence of Minority Collapse is a departure from the terminal phase of deep-learning training. While this fact seems to contradict conventional wisdom on the approximation power of deep learning, it is important to note that the constraints in the Layer-Peeled Model or, equivalently, weight decay in neural networks limit the expressive power of deep-learning models. Besides, it is equally important to recognize that the training error, which mostly occurs in the minority classes, is actually very small when Minority Collapse emerges, since the minority examples only account for a small portion of the entire training set. In this spirit, the aforementioned departure is not as significant as it appears at first glance, since the training error is generally, if not always, not exactly zero (see, e.g., ref. (4)). From an optimization point of view, a careful examination indicates that Minority Collapse can be attributed to the two constraints in the Layer-Peeled Model or the $\ell_2$ regularization in the training program. For example, Fig. 2 shows that Minority Collapse occurs earlier with a larger value of λ. However, this issue does not disappear by simply setting a small penalty coefficient λ, as the imbalance ratio can be arbitrarily large.
How to Mitigate Minority Collapse?
In this section, we further exploit the use of the Layer-Peeled Model in an attempt to lessen the detrimental effect of Minority Collapse. Instead of aiming to develop a full set of methodologies to overcome this issue, which is beyond the scope of the paper, our aim is to evaluate some simple techniques used for imbalanced datasets.

Among many approaches to handling class imbalance in deep learning (see the review in ref. (11)), perhaps the most popular one is to oversample training examples from the minority classes (61–64). In its simplest form, this sampling scheme retains all majority training examples while duplicating each training example from the minority classes w times, where the oversampling rate w is a positive integer. Oversampling in effect transforms the original problem into the minimization of a new program, obtained by replacing the risk term in the training program with

$$\frac{1}{\tilde N} \sum_{k=1}^{K} \sum_{i=1}^{n_k} w_{(k)}\, \mathcal{L}\big(f(x_{k,i}; W_{\text{full}}), y_k\big),$$

while keeping the penalty term $\frac{\lambda}{2}\|W_{\text{full}}\|^2$. Here, $w_{(k)} := 1$ for $k \le K_A$ and $w_{(k)} := w$ for $k > K_A$, and $\tilde N := K_A n_A + w K_B n_B$. Note that oversampling is closely related to weight adjusting (see more discussion in the appendix).

A close look at the display above suggests that the neural network obtained by minimizing this program might behave as if it were trained on a (larger) dataset with $n_A$ and $w n_B$ examples in each majority class and minority class, respectively. To formalize this intuition, as earlier, we start by considering the Layer-Peeled Model in the case of oversampling:

$$\min_{W, H} \; \frac{1}{\tilde N} \sum_{k=1}^{K} \sum_{i=1}^{n_k} w_{(k)}\, \mathcal{L}(W h_{k,i}, y_k) \quad \text{s.t.} \quad \frac{1}{K} \sum_{k=1}^{K} \|w_k\|^2 \le E_W, \; \frac{1}{\tilde N} \sum_{k=1}^{K} \sum_{i=1}^{n_k} w_{(k)} \|h_{k,i}\|^2 \le E_H,$$

where $\tilde N = K_A n_A + w K_B n_B$. The following result confirms our intuition that oversampling indeed boosts the size of the minority classes for the Layer-Peeled Model.

Proposition 1. Assume $p \ge 2K$ and the loss function $\mathcal{L}$ is convex in the first argument. Let $X^\star$ be any minimizer of the convex program [15] with $n_k = n_A$ for the majority classes and $n_k = w n_B$ for the minority classes. Define $(W^\star, H^\star)$ as

$$[W^{\star\top}, \mu_1^\star, \ldots, \mu_K^\star] = P (X^\star)^{1/2}, \qquad h_{k,i}^\star = \mu_k^\star \text{ for all } i, k,$$

where $P \in \mathbb{R}^{p \times 2K}$ is any partial orthogonal matrix such that $P^\top P = I_{2K}$. Then, $(W^\star, H^\star)$ is a global minimizer of the oversampling-adjusted Layer-Peeled Model. Moreover, if all minimizers $X^\star$ satisfy the saturation condition of Lemma 1, then all the solutions of the oversampling-adjusted Layer-Peeled Model are of this form.

Together with Lemma 1, Proposition 1 shows that the number of training examples in each minority class is now in effect $w n_B$ instead of $n_B$ in the Layer-Peeled Model. In the special case $w n_B = n_A$, the results show that all the angles are equal between any given pair of the last-layer classifiers, no matter whether they fall in the majority or minority classes.

We turn to Fig. 6 for an illustration of the effects of oversampling on real-world deep-learning models, using the same experimental setup as in Fig. 5. From Fig. 6, we see that the angles between pairs of the minority classifiers become larger as the oversampling rate w increases. Consequently, the issue of Minority Collapse becomes less detrimental in terms of training accuracy as w increases. This again corroborates the predictive ability of the Layer-Peeled Model.
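In practice, the oversampling-adjusted risk can be realized simply by changing how minibatches are drawn. A minimal PyTorch sketch is below (ours; `dataset`, `labels`, and `majority_classes` are placeholders for the reader's data): each minority example is drawn w times as often as a majority example.

```python
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

def oversampled_loader(dataset, labels, majority_classes, w, batch_size=128):
    """Sample each minority example w times as often as a majority example,
    mimicking the oversampling-adjusted risk above."""
    weights = torch.tensor([1.0 if int(y) in majority_classes else float(w)
                            for y in labels])
    sampler = WeightedRandomSampler(weights, num_samples=len(weights),
                                    replacement=True)
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler)
```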
Fig. 6.
Effect of oversampling when the imbalance ratio is R = 1,000. Each plot shows the average cosine of the between-minority-class angles. The results indicate that increasing the oversampling rate would enlarge the between-minority-class angles. (A) VGG11 on FashionMNIST. (B) VGG13 on CIFAR10. (C) ResNet18 on FashionMNIST. (D) ResNet18 on CIFAR10.
Next, we refer to Table 2 for the effect on the test performance. The results clearly demonstrate the improvement in test accuracy from oversampling with certain choices of the oversampling rate. The improvement is noticeable on both the minority classes and all classes.
Table 2. Test accuracy (%) on FashionMNIST when R = 1,000

                           VGG11                       ResNet18
                           K_A = 3  K_A = 5  K_A = 7   K_A = 3  K_A = 5  K_A = 7
Original (minority)         15.29    20.30    17.00     30.66    34.26     5.53
Oversampling (minority)     41.13    57.22    30.50     37.86    53.46     8.13
Improvement (minority)      25.84    36.92    13.50      7.20    19.20     2.60
Original (overall)          40.10    57.61    69.09     50.88    64.89    66.13
Oversampling (overall)      58.25    76.17    73.37     55.91    74.56    67.10
Improvement (overall)       18.15    18.56     4.28      5.03     9.67     0.97

For example, "Original (minority)" means that the test accuracy is evaluated only on the minority classes and oversampling is not used. When oversampling is used, we report the best test accuracy among four oversampling rates: 1, 10, 100, and 1,000. The best test accuracy is never achieved at the largest rate, indicating that oversampling with a large w would impair the test performance.
The results in Table 2, however, reveal an issue with addressing Minority Collapse by oversampling: a very large oversampling rate w can mitigate Minority Collapse and yet degrade the test performance. How can we efficiently select an oversampling rate for optimal test performance? More broadly, Minority Collapse does not seem likely to be fully resolved by sampling-based approaches alone, and the doors are wide open for future investigation.
Discussion
In this paper, we have developed the Layer-Peeled Model as a simple, yet effective, modeling strategy toward understanding well-trained deep neural networks. The derivation of this model follows a top-down strategy by isolating the last layer from the remaining layers. Owing to the analytical and numerical tractability of the Layer-Peeled Model, we provide some explanation of a recently observed phenomenon called neural collapse in deep neural networks trained on balanced datasets (4). Moving to imbalanced datasets, an analysis of this model suggests that the last-layer classifiers corresponding to the minority classes would collapse to a single vector once the imbalance level is above a certain threshold. This phenomenon, which we refer to as Minority Collapse, occurs consistently in our computational experiments.

The efficacy of the Layer-Peeled Model in analyzing well-trained deep-learning models implies that the ansatz in the Derivation section, a crucial step in the derivation of this model, is at least a useful approximation. Moreover, this ansatz can be further justified, in an indirect manner, by the following result, which, together with Theorem 1, shows that the $\ell_2$ norm suggested by the ansatz happens to be the only choice among all the $\ell_q$ norms that is consistent with empirical observations. Its proof is given in the appendix.

Proposition 2. Assume $p \ge K$ and $n_1 = \cdots = n_K = n$. For any $q \in (0, \infty)$ with $q \ne 2$, consider the optimization problem

$$\min_{W, H} \; \frac{1}{N} \sum_{k=1}^{K} \sum_{i=1}^{n} \mathcal{L}(W h_{k,i}, y_k) \quad \text{s.t.} \quad \frac{1}{K} \sum_{k=1}^{K} \|w_k\|^2 \le E_W, \; \frac{1}{N} \sum_{k=1}^{K} \sum_{i=1}^{n} \|h_{k,i}\|_q^2 \le E_H,$$

where $\mathcal{L}$ is the cross-entropy loss. Then, any global minimizer of this program does not satisfy $h_{k,i}^\star = C\, m_k$ and $w_k^\star = C'\, m_k$ for a K-simplex ETF $[m_1, \ldots, m_K]$ and any positive numbers C and $C'$. That is, neural collapse does not emerge in this model.

While the Layer-Peeled Model has demonstrated its noticeable effectiveness, it requires future investigation for consolidation and extension. First, an analysis of the gap between the Layer-Peeled Model and well-trained deep-learning models would be a welcome advance. For example, how does the gap depend on the neural network architectures? How should one take into account the sparsity of the last-layer features when using the ReLU activation function? From a different angle, a possible extension is to retain multiple layers following the top-down viewpoint. Explicitly, letting m be the number of the top layers we wish to retain in the model, we can represent the prediction of the neural network as $f(x; W_{\text{full}}) = g(h(x; W_{1:L-m}); W_{L-m+1:L})$ by letting $h$ and $g$ be the first L − m layers and the last m layers, respectively. Consider the m-Layer-Peeled Model:

$$\min_{W_{L-m+1:L},\, H} \; \frac{1}{N} \sum_{k=1}^{K} \sum_{i=1}^{n_k} \mathcal{L}\big(g(h_{k,i}; W_{L-m+1:L}), y_k\big) \quad \text{s.t.} \quad \|W_{L-m+1:L}\|^2 \le E_1, \; \frac{1}{N} \sum_{k=1}^{K} \sum_{i=1}^{n_k} \|h_{k,i}\|^2 \le E_2.$$

The two constraints might be modified to take into account the network architectures. An immediate question is whether this model with $m \ge 2$ is capable of capturing new patterns of deep-learning training.

From a practical standpoint, the Layer-Peeled Model together with its convex relaxation [15] offers an analytical and computationally efficient technique to identify and mitigate bias induced by class imbalance. An interesting question is to extend Minority Collapse from the case of two-valued class sizes to general imbalanced datasets. Next, as suggested by our findings in Section 5, how should we choose loss functions in order to mitigate Minority Collapse (64)? Last, a possible use case of the Layer-Peeled Model is to design more efficient sampling schemes that take into account fairness considerations (65–67).

Broadly speaking, insights can be gained not only from the Layer-Peeled Model, but also from its modeling strategy. The details of empirical deep-learning models, though formidable, can often be simplified by rendering a certain part of the network modular. When the interest is in the top few layers, for example, this paper clearly demonstrates the benefits of taking a top-down strategy for modeling neural networks, especially in consolidating our understanding of previous results and in discovering new patterns. Owing to its mathematical convenience, the Layer-Peeled Model shall open the door for future research extending these benefits.