
The Conditional Entropy Bottleneck.

Ian Fischer

Abstract

Much of the field of Machine Learning exhibits a prominent set of failure modes, including vulnerability to adversarial examples, poor out-of-distribution (OoD) detection, miscalibration, and willingness to memorize random labelings of datasets. We characterize these as failures of robust generalization, which extends the traditional measure of generalization as accuracy or related metrics on a held-out set. We hypothesize that these failures to robustly generalize are due to the learning systems retaining too much information about the training data. To test this hypothesis, we propose the Minimum Necessary Information (MNI) criterion for evaluating the quality of a model. In order to train models that perform well with respect to the MNI criterion, we present a new objective function, the Conditional Entropy Bottleneck (CEB), which is closely related to the Information Bottleneck (IB). We experimentally test our hypothesis by comparing the performance of CEB models with deterministic models and Variational Information Bottleneck (VIB) models on a variety of different datasets and robustness challenges. We find strong empirical evidence supporting our hypothesis that MNI models improve on these problems of robust generalization.

Keywords:  information bottleneck; information theory; machine learning

Year:  2020        PMID: 33286768      PMCID: PMC7597329          DOI: 10.3390/e22090999

Source DB:  PubMed          Journal:  Entropy (Basel)        ISSN: 1099-4300            Impact factor:   2.524


1. Introduction

Despite excellent progress in classical generalization (e.g., accuracy on a held-out set), the field of Machine Learning continues to struggle with the following issues:

Vulnerability to adversarial examples. Most machine-learned systems are vulnerable to adversarial examples. Many defenses have been proposed, but few have demonstrated robustness against a powerful, general-purpose adversary; many proposed defenses are ad-hoc and fail in the presence of a concerted attacker [1,2].

Poor out-of-distribution detection. Most models do a poor job of signaling that they have received data substantially different from the data they were trained on. Even generative models can report that an entirely different dataset has higher likelihood than the dataset they were trained on [3]. Ideally, a trained model would give less confident predictions for data far from the training distribution (as well as for adversarial examples). Barring that, there would be a clear, principled statistic that could be extracted from the model to tell whether the model should have made a low-confidence prediction. Many different approaches to providing such a statistic have been proposed [4,5,6,7,8,9], but most seem to do poorly on what humans intuitively view as obviously different data.

Miscalibrated predictions. Related to the issues above, classifiers tend to be overconfident in their predictions [4]. Miscalibration reduces confidence that a model's output is fair and trustworthy.

Overfitting to the training data. Zhang et al. [10] demonstrated that classifiers can memorize fixed random labelings of training data, which means that it is possible to learn a classifier with a perfect inability to generalize. This critical observation makes it clear that a fundamental test of generalization is that the model should fail to learn when given what we call information-free datasets.

We consider these to be problems of robust generalization, which we define and discuss in Section 2.1.
In this work, we hypothesize that these problems of robust generalization all have a common cause: models retain too much information about the training data. We formalize this by introducing the Minimum Necessary Information (MNI) criterion for evaluating a learned representation (Section 2.2). We then introduce an objective function that directly optimizes the MNI, the Conditional Entropy Bottleneck (CEB) (Section 2.3), and compare it with the closely-related Information Bottleneck (IB) objective [11] in Section 2.5. In Section 2.6, we describe practical ways to optimize CEB in a variety of settings. Finally, we give empirical evidence for the following claims:

Better classification accuracy. MNI models can achieve higher accuracy on classification tasks than models that capture either more or less information than the minimum necessary information (Section 3.1.1 and Section 3.1.6).

Improved robustness to adversarial examples. Retaining excessive information about the training data results in vulnerability to a variety of whitebox and transfer adversarial examples. MNI models are substantially more robust to these attacks (Section 3.1.2 and Section 3.1.6).

Strong out-of-distribution detection. The CEB objective provides a useful metric for out-of-distribution (OoD) detection, and CEB models can detect OoD examples as well as or better than non-MNI models (Section 3.1.3).

Better calibration. MNI models are better calibrated than non-MNI models (Section 3.1.4).

No memorization of information-free datasets. MNI models fail to learn in information-free settings, which we view as a minimum bar for demonstrating robust generalization (Section 3.1.5).

2. Materials and Methods

2.1. Robust Generalization

In classical generalization, we are interested in a model's performance on held-out data for some task of interest, such as classification accuracy. In robust generalization, we want: (RG1) to maintain the model's performance in the classical generalization setting; (RG2) to ensure the model's performance in the presence of an adversary unknown at training time; and (RG3) to detect adversarial and non-adversarial data that strongly differ from the training distribution. Adversarial training approaches considered in the literature so far [12,13,14] violate (RG1), as they typically result in substantial decreases in accuracy. Similarly, provable robustness approaches (e.g., Cohen et al. [15], Wong et al. [16]) provide guarantees for a particular adversary known at training time, also at a cost to test accuracy. To our knowledge, neither approach provides any mechanism to satisfy (RG3). On the other hand, approaches for detecting adversarial and non-adversarial out-of-distribution (OoD) examples [4,5,6,7,8,9] are either known to be vulnerable to adversarial attack [1,2], or do not demonstrate robustness against unknown adversaries, both of which violate (RG2). Training on information-free datasets [10] provides an additional way to check whether a learning system is compatible with (RG1), as memorization of such datasets necessarily results in maximally poor performance on any test set. Model calibration is not obviously a necessary condition for robust generalization, but if a model is well-calibrated on a held-out set, its confidence may provide some signal for distinguishing OoD examples, so we mention it as a relevant metric for (RG3).

To our knowledge, the only works to date that have demonstrated progress on robust generalization for modern machine learning datasets are the Variational Information Bottleneck [17,18] (VIB) and Information Dropout [19]. Alemi et al. [17] presented preliminary results that VIB improves adversarial robustness on image classification tasks while maintaining high classification accuracy ((RG1) and (RG2)). Alemi et al. [18] showed that VIB models provide a useful signal, the rate R, for detecting OoD examples ((RG3)). Achille and Soatto [19] also showed preliminary results on adversarial robustness and demonstrated failure to train on information-free datasets. In this work, we do not claim to "solve" robust generalization, but we do show notable improvement on all three conditions simply by changing the training objective. This evidence supports our core hypothesis that problems of robust generalization are caused in part by retaining too much information about the training data.

2.2. The Minimum Necessary Information

We define the Minimum Necessary Information (MNI) criterion for a learned representation in three parts:

Information. We would like a representation Z that captures useful information about a dataset (X, Y). Entropy is the unique measure of information [20], so the criterion prefers information-theoretic approaches. (We assume familiarity with the mutual information and its relationships to entropy and conditional entropy: [21] (p. 20).)

Necessity. The semantic value of information is given by a task, which is specified by the set of variables in the dataset. Here we will assume that the task of interest is to predict Y given X, as in any supervised learning dataset. The information we capture in our representation Z must be necessary to solve this task. As a variable X may have redundant information that is useful for predicting Y, a representation Z that captures the necessary information may not be minimal or unique (the MNI criterion does not require uniqueness of Z).

Minimality. Given all representations that can solve the task, we require one that retains the smallest amount of information about the task: min I(X;Z).

Necessity can be defined as I(Y;Z) ≥ I(X;Y). Any less information than that would prevent Z from solving the task of predicting Y from X. Minimality can be defined as I(X;Z) ≤ I(X;Y). Any more information than that would result in Z capturing information from X that is either redundant or irrelevant for predicting Y. Since the information captured by Z is constrained from above and below, we have the following necessary and sufficient conditions for perfectly achieving the Minimum Necessary Information, which we call the MNI Point:

I(X;Y) = I(X;Z) = I(Y;Z)    (1)

The MNI point defines a unique point in the information plane. The geometry of the information plane can be seen in Figure 1. The MNI criterion does not make any Markov assumptions on the models or algorithms that learn the representations. However, the algorithms we discuss here all rely on the standard Markov chain Z ← X ↔ Y. See Fischer [22] for an example of an objective that does not rely on a Markov chain during training.
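As a toy illustration of these conditions (a hypothetical example, not from the paper), the following sketch computes the three mutual informations for a small discrete dataset where the task is to predict the parity of X. A parity representation sits exactly at the MNI point, while a representation that keeps all of X captures a redundant extra bit:

```python
# Toy check of the MNI criterion (hypothetical example): X uniform on
# {0,1,2,3}, Y = X mod 2. The MNI point requires I(X;Y) = I(X;Z) = I(Y;Z).
from collections import Counter
from math import log2

def mutual_information(pairs):
    """I(A;B) in bits, estimated from a list of equally likely (a, b) samples."""
    n = len(pairs)
    pab = Counter(pairs)
    pa = Counter(a for a, _ in pairs)
    pb = Counter(b for _, b in pairs)
    return sum((c / n) * log2((c / n) / ((pa[a] / n) * (pb[b] / n)))
               for (a, b), c in pab.items())

xs = [0, 1, 2, 3]
ys = [x % 2 for x in xs]          # the task: predict parity
z_mni = [x % 2 for x in xs]       # representation at the MNI point
z_full = xs                       # representation keeping all of X

i_xy = mutual_information(list(zip(xs, ys)))           # 1.0 bit
i_xz = mutual_information(list(zip(xs, z_mni)))        # 1.0 bit
i_yz = mutual_information(list(zip(ys, z_mni)))        # 1.0 bit
i_xz_full = mutual_information(list(zip(xs, z_full)))  # 2.0 bits: 1 bit redundant
print(i_xy, i_xz, i_yz, i_xz_full)
```

Here z_mni satisfies Equation (1), while z_full violates minimality: I(X;Z) exceeds I(X;Y).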
Figure 1

Geometry of the feasible regions in the information plane (I(X;Z) versus I(Y;Z)) for any algorithm, with key points and edges labeled. The edges bound the feasible region for an (X, Y) pair where H(Y) < H(X), which would generally be the case in an image classification task, for example. The dashed lines bound the feasible regions when the underlying model depends on a Markov chain Z ← X ↔ Y. The H(Y) and I(X;Y) lines are the upper bound for I(Y;Z). The H(X) and I(X;Y) lines are the right bound for I(X;Z). The labeled points correspond to the best possible Maximum Likelihood Estimates (MLE) for the corresponding Markov chain models. The point (H(X), H(Y)) corresponds to the maximum information Z could ever capture about X and Y. The Minimum Necessary Information (MNI) point is (I(X;Y), I(X;Y)). As I(X;Z) increases beyond I(X;Y), Z captures more information that is either redundant or irrelevant with respect to predicting Y. Similarly, any variation in Y that remains once we know X is just noise as far as the task is concerned. The MNI point is the unique point that has no redundant or irrelevant information from X, and everything but the noise from Y.

A closely related concept to Necessity is called sufficiency by Achille and Soatto [19] and other authors. We avoid the term due to potential confusion with minimum sufficient statistics, which maintain the mutual information between a model and the data it generates [21] (p. 35). The primary difference between necessity and sufficiency is the reliance on the Markov constraint to define sufficiency. Ref. [19] also does not identify the MNI point as an idealized target, instead defining the optimization problem: minimize I(X;Z) subject to Z being sufficient for Y. In general, it may not be possible to satisfy Equation (1). As discussed in Anantharam et al. [23,24,25], for any given dataset (X, Y), there is some maximum value of I(Y;Z) for any possible representation Z: I(Y;Z) ≤ I(X;Y) ≤ min(H(X), H(Y)), with equality only when the relationship between X and Y is a deterministic map. Training datasets are often deterministic in one direction or the other; e.g., common image datasets map each distinct image to a single label. Thus, in practice, we can often get very close to the MNI on the training set given a sufficiently powerful model.

MNI and Robust Generalization

To satisfy (RG1) (classical generalization), a model must have high I(Y;Z) on the test dataset. Shamir et al. [26] show a generalization bound on the order of O(2^I(X;Z)/√N), where I(X;Z) indicates the training-set information and N is the size of the training set. More recently, Bassily et al. [27] gave a similar result in a PAC setting. Both results indicate that models that are compressed on the training data should generalize better to similar test data. Less clear is how an MNI model might improve on (RG2) (adversarial robustness). In this work, we treat it as a hypothesis that we investigate empirically rather than theoretically. The intuition behind the hypothesis can be described in terms of the robust and non-robust features of Ilyas et al. [28]: non-robust features in X should be compressed as much as possible when we learn Z, whereas robust features should be retained as much as is necessary. If Equation (1) is satisfied, Z must have "scaled" the importance of the features in X according to their importance for predicting Y. Consequently, an attacker that tries to take advantage of a non-robust feature will have to change it much more in order to confuse the model, possibly exceeding the constraints of the attack before it succeeds. For (RG3) (detection), the MNI criterion does not directly apply, as detection will be a property of specific modeling choices. However, if the model provides an accurate per-example estimate of the rate for a particular (x, z) pair, Alemi et al. [18] suggest that it can be a valuable signal for OoD detection.

2.3. The Conditional Entropy Bottleneck

We would like to learn a representation Z of X that will be useful for predicting Y. We can represent this problem setting with the Markov chain Z ← X ↔ Y. We would like Z to satisfy Equation (1). Given the conditional independence Z ⫫ Y | X in our Markov chain, I(Y;Z) ≤ I(X;Y) by the data processing inequality. Thus, maximizing I(Y;Z) is consistent with the MNI criterion. However, maximizing I(Y;Z) does not clearly impose a constraint that targets I(X;Z) = I(X;Y), as I(X;Z) may grow as large as H(X). Instead, we can notice the following identities at the MNI point:

I(X;Z) = I(Y;Z) ⟺ I(X;Z) − I(Y;Z) = 0 ⟺ I(X;Z|Y) = 0

The conditional mutual information I(X;Z|Y) is always non-negative, so learning a compressed representation Z of X is equivalent to minimizing I(X;Z|Y). Using our Markov chain and the chain rule of mutual information [21]:

I(X;Z) = I(Y;Z) + I(X;Z|Y)    (4)

This leads us to the general Conditional Entropy Bottleneck:

CEB ≡ min_Z γ I(X;Z|Y) − I(Y;Z) = min_Z γ (H(Z|Y) − H(Z|X)) − (H(Y) − H(Y|Z))    (5)

In the last form, we can optionally drop H(Y) because it is constant with respect to Z. Here, any γ > 0 is valid, but for deterministic datasets (Section 2.2), γ = 1 will achieve the MNI for a sufficiently powerful model. Further, we should expect γ = 1 to yield consistent models and other values of γ not to: since I(Y;Z) shows up in two forms in the objective (once inside I(X;Z|Y) = I(X;Z) − I(Y;Z), and once directly), weighing them differently forces the optimization procedure to count bits of I(Y;Z) in two different ways, potentially leading to a situation where the two accountings disagree at convergence. Given knowledge of those four entropies, we can define a consistency metric for Z that measures this disagreement.
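The chain-rule identity above can be checked numerically on a toy joint distribution (a hypothetical sketch; make_joint, mi, and cmi_xz_given_y are illustrative helpers, not the paper's code):

```python
# Numerical check (toy, hypothetical) of the chain-rule identity behind CEB:
# for the Markov chain Z <- X <-> Y, I(X;Z) = I(Y;Z) + I(X;Z|Y).
from math import log2

def make_joint(f):
    # X uniform on {0,1,2,3}, Y = X // 2, Z = f(X): the joint p(x, y, z).
    return {(x, x // 2, f(x)): 0.25 for x in range(4)}

def mi(joint, i, j):
    """Mutual information (bits) between coordinates i and j of a joint dict."""
    pij, pi, pj = {}, {}, {}
    for k, p in joint.items():
        pij[(k[i], k[j])] = pij.get((k[i], k[j]), 0) + p
        pi[k[i]] = pi.get(k[i], 0) + p
        pj[k[j]] = pj.get(k[j], 0) + p
    return sum(p * log2(p / (pi[a] * pj[b])) for (a, b), p in pij.items())

def cmi_xz_given_y(joint):
    """I(X;Z|Y) = sum over y of p(y) * I(X;Z | Y=y)."""
    total = 0.0
    for y in {k[1] for k in joint}:
        py = sum(p for k, p in joint.items() if k[1] == y)
        cond = {k: p / py for k, p in joint.items() if k[1] == y}
        total += py * mi(cond, 0, 2)
    return total

joint = make_joint(lambda x: x)  # Z keeps all of X
i_xz, i_yz, i_xz_y = mi(joint, 0, 2), mi(joint, 1, 2), cmi_xz_given_y(joint)
print(i_xz, i_yz, i_xz_y)  # chain rule: i_xz == i_yz + i_xz_y
```

With Z = X, the residual term I(X;Z|Y) = 1 bit is exactly the information CEB would compress away.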

2.4. Variational Bound on CEB

We will variationally upper bound the first term of Equation (5) and lower bound the second term using three distributions: e(z|x), the encoder, which defines the joint distribution we will use for sampling, p(x, y) e(z|x); b(z|y), the backward encoder, an approximation of p(z|y); and c(y|z), the classifier, an approximation of p(y|z) (the name is arbitrary, as Y may not be labels). All of e(z|x), b(z|y), and c(y|z) may have learned parameters, just like the encoder and decoder of a VAE [29], or the encoder, classifier, and marginal in VIB. In the following, we write expectations ⟨·⟩; they are always with respect to the joint distribution, here p(x, y) e(z|x). For the first term of Equation (5), note that H(Z|X) = −⟨log e(z|x)⟩ under our sampling distribution, while H(Z|Y) ≤ −⟨log b(z|y)⟩, since the cross entropy upper bounds the entropy:

γ I(X;Z|Y) = γ (H(Z|Y) − H(Z|X)) ≤ γ ⟨log e(z|x) − log b(z|y)⟩

For the second term of Equation (5):

I(Y;Z) = H(Y) − H(Y|Z) ≥ H(Y) + ⟨log c(y|z)⟩

where the constant H(Y) may be dropped. These variational bounds give us a tractable objective function for amortized inference, the Variational Conditional Entropy Bottleneck (VCEB):

VCEB ≡ min over e, b, c of ⟨γ (log e(z|x) − log b(z|y)) − log c(y|z)⟩

There are a number of other ways to optimize Equation (5). We describe a few of them in Section 2.6 and Appendix B and Appendix C.
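To make the variational objective concrete, here is a minimal single-sample sketch of the VCEB loss with diagonal-Gaussian encoders and a softmax classifier. The hand-set means and logits are hypothetical stand-ins for the outputs of the encoder, backward encoder, and classifier networks; this is an illustrative sketch, not the paper's implementation:

```python
# Minimal one-sample sketch (hypothetical) of the VCEB loss:
#   gamma * (log e(z|x) - log b(z|y)) - log c(y|z)
# with diagonal-Gaussian e(z|x) and b(z|y) and a softmax classifier c(y|z).
import math, random

def gaussian_logpdf(z, mu, sigma):
    return sum(-0.5 * math.log(2 * math.pi * s * s) - (zi - m) ** 2 / (2 * s * s)
               for zi, m, s in zip(z, mu, sigma))

def ceb_loss(x_mu, y_mu, logits, gamma=1.0, y=0):
    """One-sample estimate of gamma*(log e(z|x) - log b(z|y)) - log c(y|z)."""
    sigma = [1.0] * len(x_mu)
    z = [m + s * random.gauss(0, 1) for m, s in zip(x_mu, sigma)]  # reparameterize
    log_e = gaussian_logpdf(z, x_mu, sigma)
    log_b = gaussian_logpdf(z, y_mu, sigma)
    mx = max(logits)  # log-softmax for the classifier term log c(y|z)
    log_c = logits[y] - mx - math.log(sum(math.exp(l - mx) for l in logits))
    return gamma * (log_e - log_b) - log_c

random.seed(0)
# When the backward encoder matches the encoder exactly, the residual-
# information term vanishes and only the classification term remains.
loss = ceb_loss(x_mu=[0.5, -1.0], y_mu=[0.5, -1.0], logits=[2.0, 0.0], y=0)
print(loss)
```

In a real model the means and logits would be produced by networks and the loss averaged over a minibatch; this sketch only shows the per-sample arithmetic of the bound.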

2.5. Comparison to the Information Bottleneck

The Information Bottleneck (IB) [11] learns a representation Z of X subject to a soft constraint:

IB ≡ min_Z β I(X;Z) − I(Y;Z)    (16)

where β controls the strength of the constraint. As β → 0, IB recovers the standard cross-entropy loss. In Figure 2 we show information diagrams comparing which regions IB and CEB maximize and minimize. See Yeung [30] for a theoretical explanation of information diagrams. CEB avoids trying to both minimize and maximize the central region at the same time. In Figure 3 we show the feasible regions for CEB and IB, labeling the MNI point on both. CEB's rectification of the information plane means that we can always measure in absolute terms how much more we could compress our representation at the same predictive performance: I(X;Z|Y) = I(X;Z) − I(Y;Z). For IB, it is not possible to tell a priori how far we are from optimal compression.
Figure 2

Information diagrams showing how IB and CEB maximize and minimize different regions. Some regions are inaccessible to the objectives due to the Markov chain Z ← X ↔ Y; some regions are maximized by the objective (I(Y;Z) in both cases); and some regions are minimized by the objective. IB minimizes the intersection between Z and both X and Y. CEB only minimizes the intersection between Z and X conditioned on Y.

Figure 3

Geometry of the feasible regions for IB and CEB, with all points labeled. CEB rectifies IB's parallelogram by subtracting I(Y;Z) at every point. Everything outside of the black lines is unattainable by any model on any dataset. Compare the IB feasible region to the dashed region in Figure 1.

From Equations (4), (5) and (16), it is clear that CEB and IB are equivalent for β = γ/(γ + 1). To simplify comparison of the two objectives, we can parameterize both with a single parameter ρ:

γ = e^(−ρ)        β = γ/(γ + 1) = 1/(1 + e^ρ)

Under this parameterization, for deterministic datasets, sufficiently powerful models will target the MNI point at ρ = 0. As ρ increases, more information is captured by the model. ρ < 0 may capture less than the MNI. ρ > 0 may capture more than the MNI.
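One consistent way to implement this mapping is sketched below (assuming γ = e^(−ρ) and the equivalence β = γ/(γ + 1) derived from the objectives; the paper's exact convention for ρ should be checked against its appendix):

```python
# Sketch of the rho parameterization relating the CEB weight gamma and the
# equivalent IB weight beta. Assumes gamma = exp(-rho), beta = gamma/(gamma+1);
# these formulas are a reconstruction, not verbatim from the paper.
import math

def gamma_from_rho(rho):
    return math.exp(-rho)

def beta_from_rho(rho):
    g = gamma_from_rho(rho)
    return g / (g + 1.0)

# rho = 0 targets the MNI point: gamma = 1, beta = 1/2.
print(gamma_from_rho(0.0), beta_from_rho(0.0))
# Increasing rho weakens compression, so the model captures more information.
print(gamma_from_rho(2.0) < gamma_from_rho(0.0))
```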

Amortized IB

As described in Tishby et al. [11], IB is a tabular method, so it is not usable for amortized inference. The tabular optimization procedure used for IB applies trivially to CEB, just by setting β = γ/(γ + 1). Two recent works have extended IB for amortized inference. Achille and Soatto [19] present InfoDropout, which uses IB to motivate a variation on Dropout [31]. Alemi et al. [17] present the Variational Information Bottleneck (VIB):

VIB ≡ min over e, m, c of ⟨β (log e(z|x) − log m(z)) − log c(y|z)⟩

Instead of the backward encoder, VIB has a marginal posterior, m(z), which is a variational approximation to p(z) = ⟨e(z|x)⟩. Following Alemi et al. [32], we define the Rate (R):

R ≡ ⟨log e(z|x) − log m(z)⟩ ≥ I(X;Z)

We similarly define the Residual Information (Re):

Re ≡ ⟨log e(z|x) − log b(z|y)⟩ ≥ I(X;Z|Y)

During optimization, observing R does not tell us how tightly we are adhering to the MNI. However, observing Re tells us exactly how many bits we are from the MNI point, assuming that our current classifier is optimal. For convenience, define CEB1 as the variational CEB objective at γ = 1, and likewise VIB1 for VIB at β = 1. We can compare variational CEB with VIB by taking their difference at γ = β = 1:

CEB1 − VIB1 = ⟨log m(z) − log b(z|y)⟩

Solving for b when that difference is 0 gives b(z|y) = m(z). Since the optimal m(z) is the marginalization of the true p(z|y) over p(y), at convergence we must have:

m(z) = ∑_y p(y) b(z|y)

This solution may be difficult to find, as m(z) only gets information about y indirectly, through e(z|x). For otherwise equivalent models, we may therefore expect m(z) to converge to a looser variational approximation than b(z|y). Since VIB optimizes an upper bound on I(X;Z), VIB will report R converging to I(X;Y), but the representation may capture less than the MNI. In contrast, if Re converges to 0, the variational tightness of b(z|y) to the optimal p(z|y) depends only on the tightness of c(y|z) to the optimal p(y|z).

2.6. Model Variants

We introduce some variants on the basic variational CEB classification model that we will use in Section 3.1.6.

2.6.1. Bidirectional CEB

We can learn a shared representation Z that can be used to predict both X and Y with the following bidirectional CEB model: Z_X ← X ↔ Y → Z_Y. This corresponds to the following joint: p(x, y) e(z_x|x) b(z_y|y). The main CEB objective can then be applied in both directions:

min I(X;Z_X|Y) − I(Y;Z_X) + I(Y;Z_Y|X) − I(X;Z_Y)

For the two latent representations to be useful, we want them to be consistent with each other (minimally, they must have the same parametric form). Fortunately, that consistency is trivial to encourage by making the natural variational substitutions: b(z|y) as the backward encoder for Z_X, and e(z|x) as the backward encoder for Z_Y. This gives bidirectional variational CEB:

min ⟨log e(z|x) − log b(z|y) − log c(y|z)⟩ + ⟨log b(z|y) − log e(z|x) − log d(x|z)⟩

where the first expectation samples z ∼ e(z|x), the second samples z ∼ b(z|y), and d(x|z) is a decoder distribution. At convergence, we learn a unified Z that is consistent with both e(z|x) and b(z|y), permitting generation of either output given either input in the trained model, in the same spirit as Vedantam et al. [33], but without needing to train a joint encoder over x and y.

2.6.2. Consistent Classifier

We can reuse the backwards encoder as a classifier: c(y|z) ∝ b(z|y) p(y). We refer to this as the Consistent Classifier:

c(y|z) ≡ b(z|y) p(y) / ∑_y′ b(z|y′) p(y′)

If the labels are uniformly distributed, the p(y) factor can be dropped; otherwise, it suffices to use the empirical p(y). Using the consistent classifier for classification problems results in a model that only needs parameters for the two encoders, e(z|x) and b(z|y). This classifier differs from the more common maximum a posteriori (MAP) classifier because b(z|y) is not the sampling distribution of either Z or Y.
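A minimal sketch of the consistent classifier with diagonal-Gaussian backward encoders follows; the class means, prior, and test point are hypothetical, chosen only to illustrate the softmax over log b(z|y) + log p(y):

```python
# Sketch of the Consistent Classifier (hypothetical toy parameters): class
# probabilities are a softmax of log b(z|y) + log p(y) over the classes.
import math

def gaussian_logpdf(z, mu, sigma=1.0):
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (zi - m) ** 2 / (2 * sigma ** 2) for zi, m in zip(z, mu))

def consistent_classifier(z, class_means, log_prior):
    scores = [gaussian_logpdf(z, mu) + lp for mu, lp in zip(class_means, log_prior)]
    mx = max(scores)
    exps = [math.exp(s - mx) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Two classes whose backward encoders center at (-1,-1) and (+1,+1), uniform prior.
means = [[-1.0, -1.0], [1.0, 1.0]]
probs = consistent_classifier([0.9, 1.1], means, [math.log(0.5)] * 2)
print(probs)  # z lies near class 1's mean, so probs[1] > probs[0]
```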

2.6.3. CatGen Decoder

We can further generalize the idea of the consistent classifier to arbitrary prediction tasks by relaxing the requirement that we perfectly marginalize Y in the softmax. Instead, we can marginalize Y over any minibatch of size K we see at training time, under an assumption of a uniform distribution over the training examples we sampled:

c(y_i|z) ≡ b(z|y_i) / ∑ over k = 1..K of b(z|y_k)

We can immediately see that this definition of c(y|z) gives a valid distribution, as it is just a softmax over the minibatch. That means it can be directly used in the original objective without violating the variational bound. We call this decoder CatGen, for Categorical Generative Model, because it can trivially "generate" Y: the softmax defines a categorical distribution over the batch; sampling from it gives indices of the y_k that most closely correspond to z. Maximizing I(Y;Z) in this manner is a universal task, in that it can be applied to any paired data (X, Y). This includes images and labels – the CatGen model may be used in place of both c(y|z) and d(x|z) in the bidirectional CEB model (using e(z|x_k) in the softmax for the X direction). This avoids a common concern when dealing with multivariate predictions: if predicting X is disproportionately harder than predicting Y, it can be difficult to balance the model [33,34]. For CatGen models, predicting X is never any harder than predicting Y, since in both cases we are just trying to choose the correct example out of K possibilities. It turns out that CatGen is mathematically equivalent to Contrastive Predictive Coding (CPC) [35] after an offset of log K. We can see this using the proof from Poole et al. [36], substituting b(z|y) for the CPC critic. The advantage of the CatGen approach over CPC in the CEB setting is that we have already parameterized the forward and backward encoders to compute Re, so we don't need to introduce any new parameters when using CatGen to maximize the I(Y;Z) term. As with CPC, the CatGen bound is constrained by log K, but when targeting the MNI, it is more likely that we can train with log K ≥ I(X;Y). This is trivially the case for the datasets we explore here, where I(X;Y) ≤ log 10 ≈ 2.3 nats. It is also practical for larger datasets like ImageNet, where models are routinely trained with batch sizes in the thousands (e.g., Goyal et al. [37]), and I(X;Y) ≤ log 1000 ≈ 6.9 nats.
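The minibatch softmax and its log K cap can be sketched as follows (hypothetical toy scores standing in for log b(z|y_k); this is an illustration of the bound's arithmetic, not the paper's training code):

```python
# CatGen sketch (toy scores, hypothetical): within a minibatch of K pairs,
# c(y_i|z) is a softmax of log b(z|y_k) over the K batch labels, so the
# resulting lower bound on I(Y;Z) is capped at log K, as with CPC/InfoNCE.
import math

def catgen_logprob(log_b_row, i):
    """log c(y_i|z) where log_b_row[k] = log b(z|y_k) for the K batch labels."""
    mx = max(log_b_row)
    return log_b_row[i] - mx - math.log(sum(math.exp(v - mx) for v in log_b_row))

K = 4
# Perfectly separable toy scores: each z matches its own y strongly.
log_b = [[10.0 if i == k else 0.0 for k in range(K)] for i in range(K)]
bound = sum(catgen_logprob(log_b[i], i) for i in range(K)) / K + math.log(K)
print(bound)  # approaches the cap log K = log 4 when pairs are perfectly matched
```

Because log c(y_i|z) ≤ 0 for a softmax, the bound can never exceed log K, which is why a batch with log K ≥ I(X;Y) is needed to reach the MNI target.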

3. Results

We evaluate deterministic, VIB, and CEB models on Fashion MNIST [38] and CIFAR10 [39]. Our experiments focus on comparing the performance of otherwise identical models when we change only the objective function and vary ρ. Thus, we are interested in relative differences in performance that can be directly attributed to the difference in objective and ρ. These experiments cover the three aspects of Robust Generalization (Section 2.1): (RG1) (classical generalization) in Section 3.1 and Section 3.1.6; (RG2) (adversarial robustness) in Section 3.1 and Section 3.1.6; and (RG3) (detection) in Section 3.1.

3.1. (RG1), (RG2), and (RG3): Fashion MNIST

Fashion MNIST [38] is an interesting dataset in that it is visually complex and challenging, but small enough to train in a reasonable amount of time. We trained 60 different models on Fashion MNIST, four each of the following 15 types: a deterministic model (Determ); seven VIB models (VIB−1, ..., VIB5); and seven CEB models (CEB−1, ..., CEB5). Subscripts indicate ρ. All 60 models share the same inference architecture and are trained with otherwise identical hyperparameters. See Appendix A for details.

3.1.1. (RG1): Accuracy and Compression

In Figure 4 we see that both VIB and CEB have improved accuracy over the deterministic baseline, consistent with compressed representations generalizing better. Also, CEB outperforms VIB at every ρ, which we can attribute to the tighter variational bound given by minimizing Re rather than R. In the case of a simple classification problem with a uniform distribution over classes in the training set (like Fashion MNIST), we can directly compute I(X;Y) = H(Y) = log C, where C is the number of classes. In order to compare the relative complexity of the learned representations for the VIB and CEB models, in the second panel of Figure 4 we show the maximum rate lower bound seen during training, computed using the encoder's minibatch marginal for both VIB and CEB. This lower bound on I(X;Z) is the "InfoNCE with a tractable encoder" bound from Poole et al. [36]. The two sets of models show nearly the same maximum rate at each value of ρ. Both models converge to exactly log 10 ≈ 2.3 nats at ρ = 0, as predicted by the derivation of CEB.
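The minibatch-marginal rate bound can be sketched as follows (hypothetical toy Gaussian encoder; an illustration of the computation, not the experiment code): with a tractable encoder e(z|x), the marginal is replaced by the minibatch mixture (1/K) ∑_j e(z|x_j), giving an InfoNCE-style lower bound on I(X;Z) that is capped at log K.

```python
# Sketch (toy, hypothetical) of the "InfoNCE with a tractable encoder" rate
# lower bound: average over i of log e(z_i|x_i) - log((1/K) sum_j e(z_i|x_j)).
import math, random

def gauss_logpdf(z, mu, sigma=1.0):
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (z - mu) ** 2 / (2 * sigma ** 2)

def minibatch_rate_bound(mus, sigma=1.0, seed=0):
    rng = random.Random(seed)
    K = len(mus)
    zs = [mu + sigma * rng.gauss(0, 1) for mu in mus]  # z_i sampled from e(z|x_i)
    total = 0.0
    for i in range(K):
        log_joint = gauss_logpdf(zs[i], mus[i], sigma)
        mix = [gauss_logpdf(zs[i], mu, sigma) for mu in mus]
        mx = max(mix)
        log_marg = mx + math.log(sum(math.exp(v - mx) for v in mix) / K)
        total += log_joint - log_marg
    return total / K

# Well-separated encoder means: the bound approaches its log K cap.
bound = minibatch_rate_bound([0.0, 100.0, 200.0, 300.0])
print(bound, math.log(4))
```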
Figure 4

Test accuracy, maximum rate lower bound seen during training, and robustness to targeted PGD L2 and L∞ attacks for CEB, VIB, and deterministic models trained on Fashion MNIST. At every ρ the CEB models outperform the VIB models on both accuracy and robustness, while having essentially identical maximum rates. None of these models is adversarially trained.

3.1.2. (RG2): Adversarial Robustness

The bottom two panels of Figure 4 show robustness to targeted Projected Gradient Descent (PGD) L2 and L∞ attacks [14]. All of the attacks target the trouser class of Fashion MNIST, as that is the most distinctive class. Targeting a less distinctive class, such as one of the shirt classes, would conflate the difficulty of classifying the different shirts with the robustness of the model to adversaries. To measure robustness to the targeted attacks, we count the number of predictions that changed from a correct prediction on the clean image to an incorrect prediction of the target class on the adversarial image, and divide by the original number of correct predictions. Consistent with testing (RG2), these adversaries are completely unknown to the models at training time – none of these models sees any adversarial examples during training. CEB again outperforms VIB at every ρ, and the deterministic baseline at all but the least-compressed setting (ρ = 5). We also see for both models that as ρ decreases, the robustness to both attacks increases, indicating that more compressed models are more robust. Consistent with the MNI hypothesis, at ρ = 0 we end up with CEB models that have hit exactly 2.3 nats for the rate lower bound, have maintained high accuracy, and have strong robustness to both attacks. Moving to ρ = −1 gives only a small improvement in robustness, at the cost of a large decrease in accuracy.
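The robustness metric described above can be sketched directly (hypothetical toy predictions; targeted_attack_success is an illustrative helper, not the evaluation code):

```python
# Sketch of the targeted-attack success metric: among examples the model got
# right on clean inputs, the fraction flipped to the attacker's target class.
def targeted_attack_success(clean_pred, adv_pred, labels, target):
    correct = [i for i, (p, y) in enumerate(zip(clean_pred, labels)) if p == y]
    flipped = [i for i in correct if adv_pred[i] == target and labels[i] != target]
    return len(flipped) / len(correct)

labels = [0, 1, 2, 1]
clean  = [0, 1, 2, 0]   # the last prediction was already wrong on the clean image
adv    = [1, 1, 1, 1]   # the attacker targets class 1
rate = targeted_attack_success(clean, adv, labels, target=1)
print(rate)  # 2 of the 3 originally-correct predictions flipped to class 1
```

Note that examples whose true label is already the target class are excluded, since "flipping" them to the target would not be an incorrect prediction.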

3.1.3. (RG3): Out-of-Distribution Detection

We compare the ability of Determ, CEB0, VIB0, and VIB4 to detect four different out-of-distribution (OoD) datasets. U(0,1) is uniform noise in the image domain. MNIST uses the MNIST test set. Vertical Flip is the most challenging, using vertically flipped Fashion MNIST test images, as originally proposed in Alemi et al. [18]. CW is the Carlini–Wagner L2 attack [40] at the default settings found in Papernot et al. [41], and additionally includes the adversarial attack success rate against each model. We use two different metrics for thresholding, proposed in Alemi et al. [18]: H, the classifier entropy, and R, the rate, defined in Section 2.5. These two threshold scores are used with the standard suite of scoring rules [42]: False Positive Rate at 95% True Positive Rate (FPR @ 95% TPR), Area Under the ROC Curve (AUROC), and Area Under the Precision–Recall Curve (AUPR). Table 1 shows that using R to detect OoD examples can be much more effective than using classifier-based approaches. The deterministic baseline model is far weaker at detection using H than either of the high-performing stochastic models (CEB0 and VIB4). Those models both saturate detection performance, providing reliable signals for all four OoD datasets. However, as VIB0 demonstrates, simply having R available as a signal does not guarantee good detection. As we saw above, the VIB0 models had noticeably worse classification performance, indicating that they had not achieved the MNI point: I(Y;Z) < I(X;Y) for those models. These results indicate that for detection, violating the MNI criterion by having I(X;Z) > I(X;Y) may not be harmful, but violating the criterion in the opposite direction is harmful.
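The thresholding procedure behind FPR @ 95% TPR can be sketched as follows (toy scores; fpr_at_tpr is an illustrative helper using a simple quantile threshold, whereas a full evaluation would sweep the ROC curve):

```python
# Sketch of thresholding a score such as the rate R for OoD detection (toy,
# hypothetical scores): in-distribution examples are treated as positives;
# FPR @ 95% TPR measures how many OoD examples also pass the threshold.
def fpr_at_tpr(in_scores, ood_scores, tpr=0.95):
    # Lower score = more in-distribution; threshold at the tpr-quantile.
    s = sorted(in_scores)
    idx = min(len(s) - 1, int(tpr * len(s)))
    thresh = s[idx]
    return sum(o <= thresh for o in ood_scores) / len(ood_scores)

in_scores = list(range(100))            # toy rates for in-distribution examples
ood_scores = [50 + i for i in range(100)]  # mostly larger rates for OoD examples
print(fpr_at_tpr(in_scores, ood_scores))
```

A well-separated detector (like the saturated rows in Table 1) would drive this number to 0.0; fully overlapping score distributions would drive it toward the TPR itself.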
Table 1

Results for out-of-distribution detection (OoD). Thrsh. is the threshold score used: H is the entropy of the classifier; R is the rate. Determ cannot compute R, so only H is shown. For VIB and CEB models, H is always inferior to R, similar to findings in Alemi et al. [18], so we omit it. Adv. Success is attack success of the CW adversary (bottom four rows). Arrows denote whether higher or lower scores are better. Bold indicates the best score in that column for that OoD dataset.

OoD            Model    Thrsh.   FPR @ 95% TPR ↓   AUROC ↑   AUPR In ↑   Adv. Success ↓
U(0,1)         Determ   H        35.8               93.5      97.1       N/A
U(0,1)         VIB4     R         0.0              100.0     100.0       N/A
U(0,1)         VIB0     R        80.6               57.1      51.4       N/A
U(0,1)         CEB0     R         0.0              100.0     100.0       N/A
MNIST          Determ   H        59.0               88.4      90.0       N/A
MNIST          VIB4     R         0.0              100.0     100.0       N/A
MNIST          VIB0     R        12.3               66.7      91.1       N/A
MNIST          CEB0     R         0.1               94.4      99.9       N/A
Vertical Flip  Determ   H        66.8               88.6      90.2       N/A
Vertical Flip  VIB4     R         0.0              100.0     100.0       N/A
Vertical Flip  VIB0     R        17.3               52.7      91.3       N/A
Vertical Flip  CEB0     R         0.0               90.7     100.0       N/A
CW             Determ   H        15.4               90.7      86.0       100.0%
CW             VIB4     R         0.0              100.0     100.0        55.2%
CW             VIB0     R         0.0               98.7     100.0        35.8%
CW             CEB0     R         0.0               99.7     100.0        35.8%

3.1.4. (RG3): Calibration

A well-calibrated model is correct half of the time that it gives a confidence of 50% for its prediction. In Figure 5, we show calibration plots at various points during training for four models. Calibration curves help analyze whether models are underconfident or overconfident. Each point in the plots corresponds to a 5% confidence bin, and accuracy is averaged within each bin. All four networks move from under- to overconfidence during training. However, CEB0 and VIB0 end up only slightly overconfident, while ρ = 2 is already sufficient to make VIB2 and CEB2 (not shown) nearly as overconfident as the deterministic model.
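The binned computation behind these plots can be sketched as follows (toy predictions; calibration_bins is an illustrative helper, not the plotting code):

```python
# Sketch of the binned calibration computation: predictions are grouped into
# 5%-wide confidence bins and mean accuracy per bin is compared to mean
# confidence. Toy, hypothetical data below.
def calibration_bins(confidences, corrects, width=0.05):
    bins = {}
    for conf, ok in zip(confidences, corrects):
        b = min(int(conf / width), int(1 / width) - 1)
        bins.setdefault(b, []).append((conf, ok))
    out = {}
    for b, items in bins.items():
        mean_conf = sum(c for c, _ in items) / len(items)
        acc = sum(o for _, o in items) / len(items)
        out[b] = (mean_conf, acc)  # a point on the calibration curve
    return out

# A perfectly calibrated toy model: 80% confident, correct 4 times out of 5.
bins = calibration_bins([0.8] * 5, [1, 1, 1, 1, 0])
print(bins)
```

Points where accuracy falls below mean confidence correspond to the overconfident region below the diagonal in Figure 5.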
Figure 5

Calibration plots with 90% confidence intervals for four of the models after 2000 steps, 20,000 steps, and 40,000 steps (left, center, and right of each trio): (a) is CEB0; (b) is VIB0; (c) is VIB2; (d) is Determ. Perfect calibration corresponds to the dashed diagonal lines. Underconfidence occurs when the points are above the diagonal; overconfidence when they are below it. The models are still nearly perfectly calibrated at 20,000 steps, but by the end of training, even at ρ = 2 the VIB model is almost as overconfident as Determ.

3.1.5. (RG1): Overfitting Experiments

We replicate the basic experiment from Zhang et al. [10] by using the images from Fashion MNIST, but replacing the training labels with fixed random labels. This dataset is information-free because I(X;Y) = 0. We use that dataset to train multiple deterministic models, as well as CEB and VIB models at ρ from 0 through 7. We find that the CEB and VIB models never learn at any of these ρ values, even after 100 epochs of training, but the deterministic models always learn: after about 40 epochs of training, they begin to memorize the random labels, indicating severe overfitting and a perfect failure to generalize.
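The sense in which such a dataset is information-free can be illustrated directly (a hypothetical toy construction, not the experiment's data pipeline): when labels are assigned independently of the inputs, the generating joint factorizes as p(x, y) = p(x)p(y) and carries zero mutual information, so any above-chance training accuracy must be memorization.

```python
# Toy illustration: an input-independent labeling has I(X;Y) = 0 under the
# generating distribution p(x, y) = p(x) p(y).
from math import log2

def mutual_information(joint):
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0) + p
        py[y] = py.get(y, 0) + p
    return sum(p * log2(p / (px[x] * py[y])) for (x, y), p in joint.items() if p > 0)

n_inputs, n_classes = 8, 4
# Labels drawn uniformly, independent of the input: a product distribution.
joint = {(x, y): 1 / (n_inputs * n_classes)
         for x in range(n_inputs) for y in range(n_classes)}
print(mutual_information(joint))  # 0.0
```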

3.1.6. (RG1) and (RG2): CIFAR10 Experiments

For CIFAR10 [39] we trained the largest Wide ResNet [43] we could fit on a single GPU with a batch size of 250. This was a 62 × 7 model trained using AutoAugment [44]. We trained 3 CatGen CEB models each of CEB0 and CEB5, and then selected the two models with the highest test accuracy for the adversarial robustness experiments. We evaluated the CatGen models using the consistent classifier, since CatGen models only train e(z|x) and b(z|y). CEB0 reached 97.51% accuracy. This result is better than the 28 × 10 Wide ResNet from AutoAugment by 0.19 percentage points, although it is still worse than the Shake-Drop model from that paper. We additionally tested the model on the CIFAR-10.1 test set [45], reaching an accuracy of 93.6%. This is a gap of only 3.9 percentage points, which is better than all of the results reported in that paper, and substantially better than the Wide ResNet results (but still inferior to the Shake-Drop AutoAugment results). The CEB5 model reached 97.06% accuracy on the normal test set and 91.9% on the CIFAR-10.1 test set, showing that increased ρ gave substantially worse generalization. To test the robustness of these models, we swept ε for both PGD attacks (Figure 6). The CEB0 model not only has substantially higher accuracy than the adversarially-trained Wide ResNet from Madry et al. [14] (Madry), it also beats the Madry model on both the L2 and the L∞ attacks at almost all values of ε. We also show that this model is even more robust to two transfer attacks, where we used the CEB5 model and the Madry model to generate PGD attacks, and then tested them on the CEB0 model. This result indicates that these models are not doing "gradient masking", a failure mode of some attempts at adversarial defense [2], since these are black-box attacks that do not rely on taking gradients through the target model.
Figure 6

Left: Accuracy on untargeted L∞ attacks at different values of ε for all 10,000 CIFAR10 test set examples. CEB0 is the model with the highest accuracy (97.51%) trained at ρ = 0. CEB5 is the model with the highest accuracy (97.06%) trained at ρ = 5. Madry is the best adversarially-trained model from Madry et al. [14], with 87.3% accuracy (values provided by Aleksander Madry). CEB5 ⇒ CEB0 denotes transfer attacks from the CEB5 model to the CEB0 model; Madry ⇒ CEB0 denotes transfer attacks from the Madry model to the CEB0 model. Madry was trained with 7 steps of PGD at ε = 8 (grey dashed line). Chance is 10% (grey dotted line). Right: Accuracy on untargeted L2 attacks at different values of ε. All values are collected at 7 steps of PGD. CEB0 outperforms Madry everywhere except in one region of ε. Madry appears to have overfit to L∞, given its poor performance on L2 attacks relative to either CEB model. None of the CEB models are adversarially trained.
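The PGD attacks swept in Figure 6 can be sketched as follows. This is a minimal illustration on a toy binary logistic model whose input gradient is analytic, so no autodiff framework is needed; the experiments in the paper attack deep networks, and the function and parameter names here are our own.

```python
import numpy as np

def pgd_linf(x, y, w, b, eps, step, num_steps, rng=None):
    """Untargeted L-infinity PGD against a binary logistic model.

    The model is p(y=1|x) = sigmoid(w.x + b). Each step ascends the
    cross-entropy loss along the sign of its input gradient, then
    projects the perturbed input back into the eps-ball around the
    clean input x, as in Madry et al. [14].
    """
    if rng is None:
        rng = np.random.default_rng(0)
    x_adv = x + rng.uniform(-eps, eps, size=x.shape)  # random start
    for _ in range(num_steps):
        logit = w @ x_adv + b
        p = 1.0 / (1.0 + np.exp(-logit))
        grad = (p - y) * w                # d(cross-entropy)/dx
        x_adv = x_adv + step * np.sign(grad)
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project to eps-ball
    return x_adv

# 7 steps of PGD, matching the evaluation setting described above.
x = np.array([0.2, -0.4, 0.7])
x_adv = pgd_linf(x, y=1.0, w=np.array([1.0, -2.0, 0.5]), b=0.1,
                 eps=0.1, step=0.03, num_steps=7)
```

A transfer (black-box) attack, as in the CEB5 ⇒ CEB0 and Madry ⇒ CEB0 curves, simply runs this loop against one model's gradients and evaluates the resulting x_adv on a different target model.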

4. Conclusions

We have presented the Conditional Entropy Bottleneck (CEB), motivated by the Minimum Necessary Information (MNI) criterion and the hypothesis that failures of robust generalization are due in part to learned models retaining too much information about the training data. We have shown empirically that simply switching to CEB can substantially improve robust generalization, including (RG1) higher accuracy, (RG2) better adversarial robustness, and (RG3) stronger OoD detection. We believe that the MNI criterion and CEB offer a promising path forward for many tasks in machine learning by permitting fast amortized inference in an easy-to-implement framework that improves robust generalization.