Title: Provably Learning Diverse Features in Multi-View Data with Midpoint Mixup

URL Source: https://arxiv.org/html/2210.13512


License: CC BY 4.0

arXiv:2210.13512v4 [cs.LG] 04 Nov 2024

Authors: Muthu Chidambaram, Xiang Wang, Chenwei Wu, Rong Ge

Abstract

Mixup is a data augmentation technique that relies on training using random convex combinations of data points and their labels. In recent years, Mixup has become a standard primitive used in the training of state-of-the-art image classification models due to its demonstrated benefits over empirical risk minimization with regards to generalization and robustness. In this work, we try to explain some of this success from a feature learning perspective. We focus our attention on classification problems in which each class may have multiple associated features (or views) that can be used to predict the class correctly. Our main theoretical results demonstrate that, for a non-trivial class of data distributions with two features per class, training a 2-layer convolutional network using empirical risk minimization can lead to learning only one feature for almost all classes while training with a specific instantiation of Mixup succeeds in learning both features for every class. We also show empirically that these theoretical insights extend to the practical settings of image benchmarks modified to have multiple features.

Keywords: Mixup, Feature Learning, Ensemble, Deep Learning Theory, Optimization

1 Introduction

Data augmentation techniques have been a mainstay in the training of state-of-the-art models for a wide array of tasks - particularly in the field of computer vision - due to their ability to artificially inflate dataset size and encourage model robustness to various transformations of the data.

One such technique that has achieved widespread use is Mixup (Zhang et al., 2018), which constructs new data points as convex combinations of pairs of data points and their labels from the original dataset. Mixup has been shown to empirically improve generalization and robustness when compared to standard training over different model architectures, tasks, and domains (Liang et al., 2018; He et al., 2019; Thulasidasan et al., 2019; Lamb et al., 2019; Arazo et al., 2019; Guo, 2020; Verma et al., 2021b; Wang et al., 2021). It has also found applications to distributed private learning (Huang et al., 2021), learning fair models (Chuang & Mroueh, 2021), semi-supervised learning (Berthelot et al., 2019b; Sohn et al., 2020; Berthelot et al., 2019a), self-supervised (specifically contrastive) learning (Verma et al., 2021a; Lee et al., 2020; Kalantidis et al., 2020), and multi-modal learning (So et al., 2022).

The success of Mixup has instigated several works attempting to theoretically characterize its potential benefits and drawbacks (Guo et al., 2019; Carratino et al., 2020; Zhang et al., 2020, 2021; Chidambaram et al., 2021). These works have focused mainly on analyzing, at a high-level, the beneficial (or detrimental) behaviors encouraged by the Mixup-version of the original empirical loss for a given task.

As such, none of these previous works (to the best of our knowledge) have provided an algorithmic analysis of Mixup training in the context of non-linear models (i.e. neural networks), which is the main use case of Mixup. In this paper, we begin this line of work by theoretically separating the full training dynamics of Mixup (with a specific set of hyperparameters) from empirical risk minimization (ERM) for a 2-layer convolutional network (CNN) architecture on a class of data distributions exhibiting a multi-view nature. This multi-view property essentially requires (assuming classification data) that each class in the data is well-correlated with multiple features present in the data.

Our analysis is heavily motivated by the recent work of Allen-Zhu & Li (2021), which showed that this kind of multi-view data can provide a fruitful setting for theoretically understanding the benefits of ensembles and knowledge distillation in the training of deep learning models. We show that Mixup can, perhaps surprisingly, capture some of the key benefits of ensembles explained by Allen-Zhu & Li (2021) despite only being used to train a single model.

1.1 Main Contributions

Our main results (Theorem 4.6 and Theorem 4.7) give a clear separation between Mixup training and regular training. They show that for data with two different features (or views) per class, ERM learns only one feature with high probability, while Mixup training can guarantee the learning of both features with high probability. Our analysis focuses on Midpoint Mixup, in which training is done on the midpoints of data points and their labels. While this seems extreme, we motivate this choice by proving several nice properties of Midpoint Mixup and giving intuitions on why it favors learning all features in the data in Section 3.1.

In particular, Section 3.1 highlights the main ideas behind why this multi-view learning is possible in the relatively simple to understand setting of linearly separable data. We prove in this section that the Midpoint Mixup gradient descent dynamics can push towards learning all features in the data (for our notion of multi-view data) so long as there are dependencies between the features. Furthermore, we show that models that have learned all features can achieve arbitrarily small pointwise loss on Midpoint-Mixup-augmented data points, and that this property is unique to Midpoint Mixup.

In Section 4.2, we show that the ideas developed for the linearly separable case can be extended to a noisy, non-linearly-separable class of data distributions with two features per class. We prove in our main results that for such distributions, minimizing the empirical cross-entropy using gradient descent can lead to learning only one of the features in the data (Theorem 4.6) while minimizing the Midpoint Mixup cross-entropy succeeds in learning both features (Theorem 4.7). While our theory in this section focuses on the case of two features/views per class to be consistent with Allen-Zhu & Li (2021), our techniques can readily be extended to more general multi-view data distributions.

Last but not least, we show in Section 5 that our theory extends to practice by training models on image classification benchmarks that are modified to have additional spurious features correlated with the true class labels. We find in our experiments that Midpoint Mixup outperforms ERM, and performs comparably to the previously used Mixup settings in Zhang et al. (2018). A primary goal of this section is to illustrate that Midpoint Mixup is not just a toy theoretical setting, but rather one that can be of practical interest.

1.2 Related Work

Mixup. The idea of training on midpoints (or approximate midpoints) is not new; both Guo (2021) and Chidambaram et al. (2021) empirically study settings resembling what we consider in this paper, but they do not develop theory for this kind of training (beyond an information-theoretic result in the latter case). As mentioned earlier, there are also several theoretical works analyzing the Mixup formulation and its variants (Carratino et al., 2020; Zhang et al., 2020, 2021; Chidambaram et al., 2021; Park et al., 2022), but none of these works contain optimization results (which are the focus of this work). Additionally, we note that there are many Mixup-like data augmentation techniques and training formulations that are not (immediately) within the scope of the theory developed in this paper. For example, CutMix (Yun et al., 2019), Manifold Mixup (Verma et al., 2019), Puzzle Mix (Kim et al., 2020), SaliencyMix (Uddin et al., 2020), Co-Mixup (Kim et al., 2021), AutoMix (Liu et al., 2021), and Noisy Feature Mixup (Lim et al., 2021) are all such variations.

Data Augmentation. Our work is also influenced by the existing large body of work theoretically analyzing the benefits of data augmentation (Bishop, 1995; Dao et al., 2019; Wu et al., 2020; Hanin & Sun, 2021; Rajput et al., 2019; Yang et al., 2022; Wang et al., 2022; Chen et al., 2020; Mei et al., 2021). The most relevant such work to ours is the recent work of Shen et al. (2022), which also studies the impact of data augmentation on the learning dynamics of a 2-layer network in a setting motivated by that of Allen-Zhu & Li (2021). However, Midpoint Mixup differs significantly from the data augmentation scheme considered in Shen et al. (2022), and consequently our results and setting are also of a different nature (we stick much more closely to the setting of Allen-Zhu & Li (2021)). As such, our work can be viewed as a parallel thread to that of Shen et al. (2022).

2 Background on Mixup

We will introduce Mixup in the context of $k$-class classification, although the definitions below easily extend to regression. As a notational convenience, we will use $[k]$ to indicate $\{1, 2, \ldots, k\}$.

Recall that, given a finite dataset $\mathcal{X} \subset \mathbb{R}^d \times [k]$ with $|\mathcal{X}| = N$, we can define the empirical cross-entropy loss $J(g, \mathcal{X})$ of a model $g : \mathbb{R}^d \to \mathbb{R}^k$ as:

$$J(g, \mathcal{X}) = -\frac{1}{N} \sum_{i \in [N]} \log \phi_{y_i}(g(x_i)), \quad \text{where} \quad \phi_y(g(x)) = \frac{\exp(g_y(x))}{\sum_{s \in [k]} \exp(g_s(x))}, \tag{2.1}$$

with $\phi$ being the standard softmax function and the notation $g_y, \phi_y$ indicating the $y$-th coordinate functions of $g$ and $\phi$ respectively. Now let us fix a distribution $\mathcal{D}_\lambda$ whose support is contained in $[0, 1]$ and introduce the notation $z_{i,j}(\lambda) = \lambda x_i + (1 - \lambda) x_j$ (using $z_{i,j}$ when $\lambda$ is clear from context), where $(x_i, y_i), (x_j, y_j) \in \mathcal{X}$. Then we may define the Mixup cross-entropy $J_M(g, \mathcal{X}, \mathcal{D}_\lambda)$ as:

$$\ell(\lambda, i, j) = \lambda \log \phi_{y_i}(g(z_{i,j})) + (1 - \lambda) \log \phi_{y_j}(g(z_{i,j})),$$

$$J_M(g, \mathcal{X}, \mathcal{D}_\lambda) = -\frac{1}{N^2} \sum_{i \in [N]} \sum_{j \in [N]} \mathbb{E}_{\lambda \sim \mathcal{D}_\lambda}\left[\ell(\lambda, i, j)\right]. \tag{2.2}$$

We mention a minor difference between Equation 2.2 and the original formulation of Zhang et al. (2018). Zhang et al. (2018) consider the expectation term in Equation 2.2 over $N$ randomly sampled pairs of points from the original dataset $\mathcal{X}$, whereas we explicitly consider mixing all $N^2$ possible pairs of points. This is, however, just to make various parts of our analysis easier to follow - one could also sample $N$ mixed points uniformly, and the analysis would still carry through with an additional high-probability qualifier (the important aspect is the proportion with which different mixed points show up; i.e. mixing across classes versus mixing within a class).
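As a concrete reference point, the all-pairs objective of Equation 2.2 can be sketched as follows. This is a minimal NumPy illustration, not code from the paper: the model `g` is an arbitrary callable producing logits, and the expectation over $\lambda \sim \mathcal{D}_\lambda$ is approximated by a single draw per pair.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mixup_loss(g, X, y, lam_sampler):
    """One-sample Monte Carlo estimate of J_M over all N^2 pairs (Eq. 2.2),
    drawing a single lambda ~ D_lambda for each pair (i, j)."""
    N = X.shape[0]
    total = 0.0
    for i in range(N):
        for j in range(N):
            lam = lam_sampler()
            z = lam * X[i] + (1.0 - lam) * X[j]
            p = softmax(g(z))
            total += lam * np.log(p[y[i]]) + (1.0 - lam) * np.log(p[y[j]])
    return -total / N**2
```

As a sanity check on the formula, taking $\mathcal{D}_\lambda$ to be a point mass at $\lambda = 1$ makes every mixed point $z_{i,j} = x_i$, and the loss collapses to the empirical cross-entropy $J(g, \mathcal{X})$.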

3 Motivating Midpoint Mixup: The Linear Regime

As can be seen from Equation 2.2, the Mixup cross-entropy $J_M(g, \mathcal{X}, \mathcal{D}_\lambda)$ depends heavily on the choice of mixing distribution $\mathcal{D}_\lambda$. Zhang et al. (2018) took $\mathcal{D}_\lambda$ to be $\mathrm{Beta}(\alpha, \alpha)$ with $\alpha$ being a hyperparameter. In this work, we will specifically be interested in the case of $\alpha \to \infty$, for which the distribution $\mathcal{D}_\lambda$ takes the value $1/2$ with probability 1. We refer to this special case as Midpoint Mixup, and note that it can also be viewed as a case of the Pairwise Label Smoothing strategy introduced by Guo (2021). We will write the Midpoint Mixup loss as $J_{MM}(g, \mathcal{X})$ (here $z_{i,j} = (x_i + x_j)/2$ and there is no $\mathcal{D}_\lambda$ dependence, as the mixing is deterministic):

$$\ell(i, j) = \log \phi_{y_i}(g(z_{i,j})) + \log \phi_{y_j}(g(z_{i,j})),$$

$$J_{MM}(g, \mathcal{X}) = -\frac{1}{2N^2} \sum_{i \in [N]} \sum_{j \in [N]} \ell(i, j). \tag{3.1}$$

We focus on this version of Mixup for the following key reasons.

Equal Feature Learning. Firstly, we will show that $J_{MM}(g, \mathcal{X})$ exhibits the nice property that its global minimizers correspond to models in which all of the features in the data are learned equally (in a sense to be made precise in Section 3.1).

Pointwise Optimality. We show that for Midpoint Mixup, it is possible to learn a classifier (with equal feature learning) that achieves arbitrarily small loss for every Midpoint-Mixup-augmented point. We will also show that this is not possible for $J_M(g, \mathcal{X}, \mathcal{D}_\lambda)$ when $\mathcal{D}_\lambda$ is any other non-trivial distribution (i.e. non-point-mass distribution).

Cleaner Optimization Analysis. Additionally, from a technical perspective, the Midpoint Mixup loss lends itself to a simpler optimization analysis due to the fact that the structure of its gradients does not change with each optimization iteration (we do not need to sample new mixing proportions at each optimization step). Indeed, we see that Equation 3.1 circumvents the expectation with respect to $\lambda$ that arose in $J_M(g, \mathcal{X}, \mathcal{D}_\lambda)$.

Empirically Viable. While we are not trying to claim that Midpoint Mixup is a superior alternative to standard Mixup settings considered in the literature, we will show in Section 5 that it can still significantly outperform empirical risk minimization in practice, and in fact performs quite closely to known good settings of Mixup.
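To make Equation 3.1 fully concrete, a minimal NumPy sketch of the Midpoint Mixup loss follows (again, `g` is an arbitrary logit-producing callable and is an illustrative choice, not the paper's architecture):

```python
import numpy as np

def softmax(v):
    # Numerically stable softmax for a single logit vector.
    e = np.exp(v - v.max())
    return e / e.sum()

def midpoint_mixup_loss(g, X, y):
    """Deterministic J_MM of Eq. 3.1: every pair (i, j) is mixed at
    lambda = 1/2, and both labels y_i and y_j receive weight 1/2."""
    N = X.shape[0]
    total = 0.0
    for i in range(N):
        for j in range(N):
            z = 0.5 * (X[i] + X[j])
            p = softmax(g(z))
            total += np.log(p[y[i]]) + np.log(p[y[j]])
    return -total / (2 * N**2)
```

Note that when a point is mixed with itself ($i = j$), the summand reduces to twice the standard cross-entropy on that point, so for a single-point dataset $J_{MM}$ coincides with $J$.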

3.1 Midpoint Mixup with Linear Models on Linearly Separable Data

To make clear what we mean by feature learning, we first turn our attention to the simple setting of learning linear models $g_y(x) = \langle w_y, x \rangle$ (i.e. one weight vector associated per class) on linearly separable data, as this setting will serve as a foundation for our main results. Namely, we consider $k$-class classification with a dataset $\mathcal{X}$ of $N$ labeled data points generated according to the following data distribution (with $N$ sufficiently large).

Definition 3.1 (Simple Multi-View Setting). For each class $y \in [k]$, let $v_{y,1}, v_{y,2} \in \mathbb{R}^d$ be orthonormal unit vectors also satisfying $v_{y,\ell} \perp v_{s,\ell'}$ when $y \neq s$ for any $\ell, \ell' \in [2]$. Each point $(x, y) \sim \mathcal{D}$ is then generated by sampling $y \in [k]$ uniformly and constructing $x$ as:

$$\beta_y \sim \mathrm{Uni}([0.1, 0.9]), \quad x = \beta_y v_{y,1} + (1 - \beta_y) v_{y,2}. \tag{3.2}$$

Definition 3.1 is multi-view in the following sense: for any class $y$, it suffices (from an accuracy perspective) to learn a model $g$ that has a significant correlation with either the feature vector $v_{y,1}$ or $v_{y,2}$. In this context, one can think of feature learning as corresponding to how positively correlated the weight $w_y$ is with each of the same-class feature vectors $v_{y,1}$ and $v_{y,2}$ (we provide a more rigorous definition in our main results).

If one now considers the empirical cross-entropy loss $J(g, \mathcal{X})$, it is straightforward to see that it is possible to achieve the global minimum of $J(g, \mathcal{X})$ by just considering models $g$ in which we take $\langle w_y, v_{y,1} \rangle \to \infty$ for every class $y$. This means we can minimize the usual cross-entropy loss without learning both features for each class in $\mathcal{X}$.

However, this is not the case for Midpoint Mixup. Indeed, we show below that a necessary (with extremely high probability) and sufficient condition for a linear model $g$ to minimize $J_{MM}$ (when taking its scaling to $\infty$) is that it has equal correlation with both features for every class (sufficiency relies also on having weaker correlations with other class features). In what follows, we use $\inf J_{MM}(h, \mathcal{X})$ to indicate the global minimum of $J_{MM}$ over all functions $h : \mathbb{R}^d \to \mathbb{R}^k$ (i.e. this is the smallest achievable loss). Full proofs of all of the following results can be found in Section C of the Appendix.

Lemma 3.2 (Midpoint Mixup Optimal Direction). A linear model $g$ satisfies

$$\lim_{\gamma \to \infty} J_{MM}(\gamma g, \mathcal{X}) = \inf J_{MM}(h, \mathcal{X}) \tag{3.3}$$

if $g$ has the property that for every class $y$ we have $\langle w_y, v_{y,\ell_1} \rangle = \langle w_s, v_{s,\ell_2} \rangle > 0$ and $\langle w_y, v_{s,\ell_2} \rangle = 0$ for every $s \neq y$ and $\ell_1, \ell_2 \in [2]$. Furthermore, with probability $1 - \exp(-\Theta(N))$ (over the randomness of $\mathcal{X}$), the condition $\langle w_y, v_{y,\ell_1} \rangle = \langle w_s, v_{s,\ell_2} \rangle$ is necessary for $g$ to satisfy Equation 3.3.

Proof Sketch. The idea is that if $g$ has equal correlation with both features for every class, its predictions will be constant on the original data points, due to the fact that the coefficients for both features in each data point are mirrored as per Equation 3.2. With the condition $\langle w_y, v_{s,\ell} \rangle = 0$ (this can be weakened significantly), this implies the softmax output of $g$ on the Midpoint Mixup points will be exactly $1/2$ for each of the classes being mixed in the scaling limit (and $0$ for all other classes), which is optimal.
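The proof sketch above can be verified numerically on a toy instance. Assuming standard basis features and the equal-correlation weights $w_y = v_{y,1} + v_{y,2}$ (an illustrative choice satisfying the conditions of Lemma 3.2), the softmax output on a cross-class midpoint approaches $1/2$ on each mixed class as the scale $\gamma$ grows:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

k, d = 4, 8
V = np.eye(d).reshape(k, 2, d)      # V[y, l] = v_{y, l+1}, orthonormal basis features
W = V[:, 0] + V[:, 1]               # w_y with equal same-class feature correlations

def point(y, beta):
    # A data point from Definition 3.1 with mirrored coefficients.
    return beta * V[y, 0] + (1 - beta) * V[y, 1]

z = 0.5 * (point(0, 0.3) + point(1, 0.8))  # midpoint of a class-0 / class-1 pair
gamma = 50.0                                # large scaling factor
p = softmax(gamma * (W @ z))                # -> approximately [0.5, 0.5, 0, 0]
```

The key step is that $\langle w_y, x \rangle = \beta_y + (1 - \beta_y) = 1$ for every same-class point regardless of $\beta_y$, so both mixed classes receive identical logits on the midpoint.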

Note that Lemma 3.2 implies two properties mentioned earlier for Midpoint Mixup: Equal Feature Learning and Pointwise Optimality. Furthermore, if we consider $J_M(g, \mathcal{X}, \mathcal{D}_\lambda)$ for any other non-point-mass distribution, we can prove that the analogue of Lemma 3.2 does not hold (because Pointwise Optimality would be impossible).

Proposition 3.3. For any distribution $\mathcal{D}_\lambda$ that is not a point mass on $0$, $1$, or $1/2$, and any linear model $g$ satisfying the conditions of Lemma 3.2, we have that with probability $1 - \exp(-\Theta(N))$ (over the randomness of $\mathcal{X}$) there exists an $\epsilon_0 > 0$ depending only on $\mathcal{D}_\lambda$ such that:

$$J_M(g, \mathcal{X}, \mathcal{D}_\lambda) \geq \inf J_M(h, \mathcal{X}, \mathcal{D}_\lambda) + \epsilon_0. \tag{3.4}$$

Proof Sketch. In the case of general mixing distributions, we cannot achieve the Mixup-optimal behavior of $\phi_{y_i}(g(z_{i,j}(\lambda))) = \lambda$ for every $\lambda$ if the outputs $g_y$ are constant on the original data points.

Lemma 3.2 outlines the key theoretical benefit of Midpoint Mixup - namely, that its global minimizers exist within the class of models that we consider, and such minimizers learn all features in the data equally. Although Lemma 3.2 is stated in the context of linear models, the result naturally carries over to the two-layer neural networks we define in the next section. That being said, Proposition 3.3 is not intended to disqualify the possibility that the minimizer of $J_M(g, \mathcal{X}, \mathcal{D}_\lambda)$ restricted to a specific model class is a model in which all features are learned near-equally (in fact, we expect this to be the case for any reasonable $\mathcal{D}_\lambda$). Rather, Proposition 3.3 is intended to motivate the study of Midpoint Mixup as a particularly interesting choice of the mixing distribution $\mathcal{D}_\lambda$.

We now proceed one step further from the above results and show that the feature learning benefit of Midpoint Mixup manifests itself even in the optimization process (when using gradient-based methods). We show that, if a significant separation between feature correlations exists, the Midpoint Mixup gradients correct the separation. For simplicity, we suppose WLOG that $\langle w_y, v_{y,1} \rangle > \langle w_y, v_{y,2} \rangle$. Now letting $\Delta_y = \langle w_y, v_{y,1} - v_{y,2} \rangle$ and using the notation $\nabla_{w_y}$ for $\frac{\partial}{\partial w_y}$, we can prove:

Proposition 3.4 (Mixup Gradient Lower Bound). Let $y$ be any class such that $\Delta_y \geq \log k$, and suppose that both $\langle w_y, v_{y,1} \rangle \geq 0$ and the cross-class orthogonality condition $\langle w_s, v_{u,\ell} \rangle = 0$ hold for all $s \neq u$ and $\ell \in [2]$. Then we have with high probability that:

$$\langle -\nabla_{w_y} J_{MM}(g, \mathcal{X}), v_{y,2} \rangle \geq \Theta\left(\frac{1}{k^2}\right). \tag{3.5}$$

Proof Sketch. The key idea is to analyze the gradient correlation with the direction $v_{y,1} - v_{y,2}$ via a concentration of measure argument. We show that either this correlation is significantly negative under the stated conditions (which will imply Equation 3.5), or that the gradient correlation with $v_{y,2}$ is already large.

Proposition 3.4 shows that, assuming nonnegativity of within-class correlations and an orthogonality condition across classes (which we will show to be approximately true in our main results), the feature correlation that is lagging behind for any class $y$ will receive a significant gradient when optimizing the Midpoint Mixup loss. On the other hand, we can also prove that this need not be the case for empirical risk minimization:

Proposition 3.5 (ERM Gradient Upper Bound). For every $y \in [k]$, assuming the same conditions as in Proposition 3.4, if $\Delta_y \geq C \log k$ for any $C > 0$, then with high probability we have that:

$$\langle -\nabla_{w_y} J(g, \mathcal{X}), v_{y,2} \rangle \leq O\left(\frac{1}{k^{0.1 C - 1}}\right). \tag{3.6}$$

Proof Sketch. This follows directly from the form of the gradient for $J(g, \mathcal{X})$ and the fact that there is a constant lower bound on the weight associated with each feature in every data point, as per Definition 3.1.

While Proposition 3.5 demonstrates that training using ERM can possibly fail to learn both features associated with a class due to increasingly small gradients, one can verify that this does not naturally occur in the optimization dynamics of linear models on linearly separable data of the type in Definition 3.1 (see for example, the related result in Chidambaram et al. (2021)). On the other hand, if we move away from linearly separable data and linear models to more realistic settings, the situation described above does indeed show up, which motivates our main results.

4 Analyzing Midpoint Mixup Training Dynamics on General Multi-View Data

For our main results, we now consider a data distribution and class of models that are meant to more closely mimic practical situations.

4.1 General Multi-View Data Setup

We adopt a slightly simplified version of the setting of Allen-Zhu & Li (2021). We still consider the problem of $k$-class classification on a dataset $\mathcal{X}$ of $N$ labeled data points, but our data points are now represented as ordered tuples $x = (x^{(1)}, \ldots, x^{(P)})$ of $P$ input patches $x^{(i)}$ with each $x^{(i)} \in \mathbb{R}^d$ (so $\mathcal{X} \subset \mathbb{R}^{Pd} \times [k]$).

As was the case in Definition 3.1 and in Allen-Zhu & Li (2021), we assume that the data is multi-view in that each class $y$ is associated with 2 orthonormal feature vectors $v_{y,1}$ and $v_{y,2}$, and we once again consider $N$ and $k$ to be sufficiently large. As mentioned in Allen-Zhu & Li (2021), we could alternatively consider the number of classes $k$ to be fixed (i.e. binary classification) and the number of associated features to be large, and our theory would still translate. We now precisely define the data generating distribution $\mathcal{D}$ that we will focus on for the remainder of the paper.

Definition 4.1 (General Multi-View Data Distribution). Identically to Definition 3.1, each class $y$ is associated with two orthonormal feature vectors, after which each point $(x, y) \sim \mathcal{D}$ is generated as:

1. Sample a label $y$ uniformly from $[k]$.

2. Designate via any method two disjoint subsets $P_{y,1}(x), P_{y,2}(x) \subset [P]$ with $|P_{y,1}(x)| = |P_{y,2}(x)| = C_P$ for a universal constant $C_P$, and additionally choose via any method a bijection $\varphi : P_{y,1}(x) \to P_{y,2}(x)$. We then generate the signal patches of $x$ in corresponding pairs $x^{(p)} = \beta_{y,p} v_{y,1}$ and $x^{(\varphi(p))} = (\delta_2 - \beta_{y,p}) v_{y,2} = \beta_{y,\varphi(p)} v_{y,2}$ for every $p \in P_{y,1}(x)$, with the $\beta_{y,p}$ chosen according to a symmetric distribution (allowed to vary per class $y$) supported on $[\delta_1, \delta_2 - \delta_1]$ satisfying the anti-concentration property that $\beta_{y,p}$ takes values in a subset of its support whose Lebesgue measure is $O(1/\log k)$ with probability $o(1)$.

3. Fix, via any method, $Q$ distinct classes $s_1, s_2, \ldots, s_Q \in [k] \setminus \{y\}$ with $Q = \Theta(1)$. The remaining $[P] \setminus (P_{y,1}(x) \cup P_{y,2}(x))$ patches not considered above are the feature noise patches of $x$, and are defined to be $x^{(p)} = \sum_{j \in [Q]} \sum_{\ell \in [2]} \gamma_{j,\ell} v_{s_j,\ell}$, where the $\gamma_{j,\ell} \in [\delta_3, \delta_4]$ can be arbitrary.

Note that there are parts of the data-generating process that we leave underspecified, as our results will work for any choice. Henceforth, we use $\mathcal{X}$ to refer to a dataset consisting of $N$ i.i.d. draws from the distribution $\mathcal{D}$. Our data distribution represents a very low signal-to-noise ratio (SNR) setting in which the true signal for a class exists only in a constant ($2 C_P$) number of patches, while the rest of the patches contain low-magnitude noise in the form of other class features.
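A sampler for Definition 4.1 can be sketched as follows. Everything the definition leaves underspecified (the feature vectors, the signal-patch subsets, the bijection $\varphi$, the $\beta$ distribution, the noise coefficients $\gamma_{j,\ell}$) is filled in here with simple concrete choices; the theory allows any such choice.

```python
import numpy as np

def sample_general_multiview(k, P, C_P, Q, d1, d2, d3, d4, rng=None):
    """Draw one (x, y) from the distribution of Definition 4.1, with
    basis-vector features, the first 2*C_P patches as signal patches,
    phi(p) = C_P + p, and uniform beta / gamma coefficients."""
    rng = np.random.default_rng(rng)
    dim = 2 * k                               # smallest dim fitting 2k orthonormal features
    V = np.eye(dim).reshape(k, 2, dim)        # V[y, l] = v_{y, l+1}
    y = int(rng.integers(k))
    x = np.zeros((P, dim))
    # Signal patches: mirrored coefficient pairs beta and (d2 - beta).
    for p in range(C_P):
        beta = rng.uniform(d1, d2 - d1)
        x[p] = beta * V[y, 0]
        x[C_P + p] = (d2 - beta) * V[y, 1]
    # Feature noise patches: Q other classes' features at small magnitude.
    others = rng.choice([s for s in range(k) if s != y], size=Q, replace=False)
    for p in range(2 * C_P, P):
        gammas = rng.uniform(d3, d4, size=(Q, 2))
        x[p] = sum(gammas[j, l] * V[others[j], l]
                   for j in range(Q) for l in range(2))
    return x, y, V
```

With $\delta_3, \delta_4 = \Theta(k^{-1.5})$ and $P = \Theta(k^2)$ noise patches, the aggregate noise mass dominates the constant-magnitude signal, which is exactly the low-SNR regime described above.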

We focus on the case of learning the data distribution $\mathcal{D}$ with the same two-layer CNN-like architecture used in Allen-Zhu & Li (2021). We recall that this architecture relies on the following polynomially-smoothed ReLU activation, which we refer to as $\widetilde{\mathrm{ReLU}}$:

$$\widetilde{\mathrm{ReLU}}(x) = \begin{cases} 0 & \text{if } x \leq 0, \\ \dfrac{x^\alpha}{\alpha \rho^{\alpha - 1}} & \text{if } x \in [0, \rho], \\ x - \left(1 - \dfrac{1}{\alpha}\right)\rho & \text{if } x \geq \rho. \end{cases}$$

The polynomial part of this activation function will be very useful for us in suppressing the feature noise in $\mathcal{D}$. Our full network architecture, which consists of $m$ hidden neurons, can then be specified as follows.

Definition 4.2 (2-Layer Network). We denote our network by $g : \mathbb{R}^{Pd} \to \mathbb{R}^k$. For each $y \in [k]$, we define $g_y$ as follows.

$$g_y(x) = \sum_{r \in [m]} \sum_{p \in [P]} \widetilde{\mathrm{ReLU}}\left(\langle w_{y,r}, x^{(p)} \rangle\right). \tag{4.1}$$

We will use $w_{y,r}^{(0)}$ to refer to the weights of the network $g$ at initialization (and $w_{y,r}^{(t)}$ after $t$ steps of gradient descent), and similarly $g_t$ to refer to the model after $t$ iterations of gradient descent. We consider the standard choice of Xavier initialization, which, in our setting, corresponds to $w_{y,r}^{(0)} \sim \mathcal{N}\left(0, \frac{1}{d} I_d\right)$.
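The activation and the forward pass of Equation 4.1 can be sketched directly. This is an illustrative NumPy rendering, not the paper's code; in particular, the choice $\rho = 1/k$ below is one concrete instance of the $\rho = \Theta(1/k)$ assumption.

```python
import numpy as np

def smoothed_relu(x, rho, alpha=8):
    """Polynomially-smoothed ReLU: 0 for x <= 0, x^alpha / (alpha * rho^(alpha-1))
    on [0, rho], and linear (shifted to match continuously) for x >= rho."""
    x = np.asarray(x, dtype=float)
    poly = x**alpha / (alpha * rho**(alpha - 1))
    lin = x - (1 - 1 / alpha) * rho
    return np.where(x <= 0, 0.0, np.where(x <= rho, poly, lin))

def network_logits(W, x):
    """g_y(x) of Eq. 4.1. W has shape (k, m, dim); x has shape (P, dim).
    Sums smoothed-ReLU patch correlations over neurons r and patches p."""
    k = W.shape[0]
    rho = 1.0 / k                            # rho = Theta(1/k), one concrete choice
    corr = np.einsum('ymd,pd->ymp', W, x)    # <w_{y,r}, x^{(p)}> for all y, r, p
    return smoothed_relu(corr, rho).sum(axis=(1, 2))
```

The two branches meet continuously at $x = \rho$, where both evaluate to $\rho / \alpha$; it is this flat polynomial region that suppresses the $\Theta(k^{-1.5})$-magnitude feature noise correlations.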

For model training, we focus on full-batch gradient descent with a fixed learning rate $\eta$ applied to $J(g, \mathcal{X})$ and $J_{MM}(g, \mathcal{X})$. Once again using the notation $\nabla_{w_{y,r}^{(t)}}$ for $\frac{\partial}{\partial w_{y,r}^{(t)}}$, the updates to the weights of the network $g$ are thus of the form:

$$w_{y,r}^{(t+1)} = w_{y,r}^{(t)} - \eta \nabla_{w_{y,r}^{(t)}} J_{MM}(g, \mathcal{X}). \tag{4.2}$$

In defining our data distribution and model above, we have introduced several hyperparameters. Throughout our results, we make the following assumptions about these hyperparameters.

Assumption 4.3 (Choice of Hyperparameters). We assume that:

$$d = \Omega(k^{20}), \quad P = \Theta(k^2), \quad C_P = \Theta(1), \quad m = \Theta(k),$$

$$\delta_1, \delta_2 = \Theta(1), \quad \delta_3, \delta_4 = \Theta(k^{-1.5}), \quad \rho = \Theta(1/k), \quad \alpha = 8.$$

Discussion of Hyperparameter Choices. We make concrete choices of hyperparameters above for the sake of calculations (and we stress that these are not close to the tightest possible choices), but only the relationships between them are important. Namely, we need $d$ to be a significantly larger polynomial in $k$ than $P$; we need $\delta_3, \delta_4 = o(1)$ but large enough that $P \delta_3 \gg \delta_2$ (to avoid learnability by linear models, as shown below); we need $\alpha$ sufficiently large so that the network can suppress the low-magnitude feature noise; and we need $\delta_1, \delta_2 = \Theta(1)$ so that the signal feature coefficients significantly outweigh the noise feature coefficients.

To convince the reader that our choice of model is not needlessly complicated given the setting, we prove the following result showing that there exist realizations of the distribution π’Ÿ on which linear classifiers cannot achieve perfect accuracy.

Proposition 4.4. There exists a $\mathcal{D}$ satisfying all of the conditions of Definition 4.1 and Assumption 4.3 such that with probability at least $1 - k^2 \exp(-\Theta(N/k^2))$, for any classifier $h : \mathbb{R}^{Pd} \to \mathbb{R}^k$ of the form $h_y(x) = \sum_{p \in [P]} \langle w_y, x^{(p)} \rangle$ and any $\mathcal{X}$ consisting of $N$ i.i.d. draws from $\mathcal{D}$, there exists a point $(x, y) \in \mathcal{X}$ and a class $s \neq y$ such that $h_s(x) \geq h_y(x)$.

Proof Sketch. The idea, as was originally pointed out by Allen-Zhu & Li (2021), is that there are $\Theta(k^2)$ feature noise patches with coefficients of order $\Theta(k^{-1.5})$. Thus, because the features are orthogonal, these noise patches can influence the classification by a term of order $\Theta(\sqrt{k})$ away from the direction of the true signal.

The full proof can be found in Section C of the Appendix.

4.2 Main Results

Having established the setting for our main results, we now concretely define the notion of feature learning in our context.

Definition 4.5 (Feature Learning). Let $(x, y) \sim \mathcal{D}$. We say that feature $v_{y,\ell}$ is learned by $g$ if $\operatorname{argmax}_s g_s(x') = y$, where $x'$ is $x$ with all instances of feature $v_{y,3-\ell}$ replaced by the all-zero vector.

Our definition of feature learning corresponds to whether the model $g$ is able to correctly classify data points in the presence of only a single signal feature instead of both (generalizing the notion of weight-feature correlation to nonlinear models). By analyzing the gradient descent dynamics of $g$ for the empirical cross-entropy $J$, we can then show the following.
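Definition 4.5 lends itself to a direct test: zero out the patches carrying the other feature and check whether the predicted class survives. A minimal sketch, assuming a `predict` callable mapping a patch array to a class index and the basis-vector features used in the earlier sketches (0-indexed `ell` here, versus $\ell \in [2]$ in the paper):

```python
import numpy as np

def feature_learned(predict, x_patches, y, V, ell):
    """Test Definition 4.5: replace every patch carrying the other
    feature v_{y, 3-ell} with the zero vector, then check whether the
    model still predicts class y on the masked point x'."""
    other = V[y, 1 - ell]                    # v_{y, 3-ell} (ell is 0-indexed)
    x_prime = x_patches.copy()
    for p in range(x_prime.shape[0]):
        if abs(x_prime[p] @ other) > 1e-12:  # patch contains the other feature
            x_prime[p] = 0.0
    return predict(x_prime) == y
```

In the data model of Definition 4.1 only the $C_P$ signal patches for $v_{y,3-\ell}$ correlate with that direction (feature noise patches use other classes' features), so exactly those patches get masked.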

Theorem 4.6. For $k$ and $N$ sufficiently large and the settings stated in Assumption 4.3, we have that the following hold with probability at least $1 - O(1/k)$ after running gradient descent with a step size $\eta = O(1/\mathrm{poly}(k))$ for $O(\mathrm{poly}(k)/\eta)$ iterations on $J(g, \mathcal{X})$ (for sufficiently large polynomials in $k$):

1. (Training accuracy is perfect): For all $(x_i, y_i) \in \mathcal{X}$, we have $\operatorname{argmax}_s (g_t)_s(x_i) = y_i$.

2. (Only one feature is learned): For $(1 - o(1))k$ classes, there exists exactly one feature that is learned in the sense of Definition 4.5 by the model $g_t$.

Furthermore, the above remains true for all $t \leq O(\mathrm{poly}(k))$ for any polynomial in $k$.

Proof Sketch. The proof is in spirit very similar to Theorem 1 in Allen-Zhu & Li (2021), and relies on many of the tools therein. The main idea is that, with high probability, there exists a separation between the class $y$ weight correlations with the features $v_{y,1}$ and $v_{y,2}$ at initialization. This separation is then amplified throughout training due to the polynomial part of $\widetilde{\mathrm{ReLU}}$. Once one feature correlation becomes large enough, the gradient updates to the class $y$ weights rapidly decrease, leading to the remaining feature not being learned.

Theorem 4.6 shows that only one feature is learned (in our sense) for the vast majority of classes. As mentioned, our proof is quite similar to that of Allen-Zhu & Li (2021), but due to simplifications in our setting (no added Gaussian noise, for example) and some different ideas, the proof is much shorter - we hope this makes some of the machinery from Allen-Zhu & Li (2021) accessible to a wider audience.

The reason we prove Theorem 4.6 is in fact to highlight the contrast provided by the analogous result for Midpoint Mixup.

Theorem 4.7.

For π‘˜ and 𝑁 sufficiently large and the settings stated in Assumptions 4.3, we have that the following hold with probability 1 βˆ’ 𝑂 ⁒ ( 1 / π‘˜ ) after running gradient descent with a step size πœ‚

𝑂 ⁒ ( 1 / poly ⁒ ( π‘˜ ) ) for 𝑂 ⁒ ( poly ⁒ ( π‘˜ ) / πœ‚ ) iterations on 𝐽 𝑀 ⁒ 𝑀 ⁒ ( 𝑔 , 𝒳 ) (for sufficiently large polynomials in π‘˜ ):

1. (Training accuracy is perfect): For all (x_i, y_i) ∈ 𝒳, we have argmax_s g_s(x_i) = y_i.

2. (Both features are learned): For each class y ∈ [k], both v_{y,1} and v_{y,2} are learned in the sense of Definition 4.5 by the model g.

Furthermore, the above remains true for all t = O(poly(k)) for any polynomial in k.

Proof Sketch. The core idea of the proof relies on techniques similar to those of Proposition 3.4, but the nonlinear part of the ReLU~ activation introduces a few additional difficulties, since the gradients in the nonlinear part are much smaller than those in the linear part of ReLU~. Nevertheless, we show that even these smaller gradients are sufficient for the feature correlation that is lagging behind to catch up in polynomial time.

The full proofs of Theorems 4.6 and 4.7 can be found in Section B of the Appendix.

Remark 4.8.

Theorems 4.6 and 4.7 show a separation between ERM and Midpoint Mixup with respect to feature learning, as we have defined. They are not results regarding the test accuracy of the trained models on the distribution π’Ÿ ; even learning only a single feature per class is sufficient for perfect test accuracy on π’Ÿ . The significance (and our desired interpretation) of these results is that, when the training distribution π’Ÿ has some additional spurious features when compared to the testing distribution, ERM can potentially fail to learn the true signal features whereas Midpoint Mixup will likely learn all features (including the true signal). One may also interpret the results as generalization that is robust to distributional shift; the test distribution in this case has dropped some features present in the training distribution.

5 Experiments

The goal of the results of Sections 3 and 4 was to provide theory (from a feature learning and optimization perspective) for why Mixup has enjoyed success over ERM in many practical settings. The intuition is that, for image classification tasks, one could reasonably expect images from the same class to be generated from a shared set of latent features (much like our data distribution in Definition 4.1), in which case it may be possible to achieve perfect training accuracy by learning a strict subset of these features when doing empirical risk minimization. On the other hand, based on our ideas, we would expect Mixup to learn all such latent features associated with each class (assuming some dependency between them), and thus potentially generalize better.

A direct empirical verification of this phenomenon on image datasets is tricky (and a possible avenue for future work) due to the fact that one would need to clearly define a notion of latent features with respect to the images being considered, which is outside the scope of this work. Instead, we take for granted that such features exist, and attempt to verify whether Mixup is able to learn the "true" features associated with each class better than ERM when spurious features are added.

For our experimental setup, we consider training ResNet-18 (He et al., 2015) on versions of Fashion MNIST (FMNIST) (Xiao et al., 2017), CIFAR-10, and CIFAR-100 (Krizhevsky, 2009) in which every training data point is transformed such that a randomly sampled training point from a different (but randomly fixed) class is concatenated (along the channels dimension) to the original point (visualized in Figure 1). Additionally, to introduce a dependency structure akin to what we have in Definitions 3.1 and 4.1, we sample γ ∼ Uni([0, 1]) and scale the first part of the training point (the true image) by γ while scaling the concatenated part by 1 − γ during training.

Figure 1: Visualization of data modification in CIFAR-10.

If we work under the intuition that images from each class are generated by relatively different latent features, then this modification process corresponds to adding patches of (fixed) spurious features to each class that have a dependency (from the scaling factor 𝛾 ) on the original features of the data. We leave the test data for each dataset unmodified, except for the concatenation of an all-zeros vector of the same shape to each point so that the shape of the test data matches that of the training data (in effect, this penalizes models that learned only the spurious features we concatenated in the training data). This zeroing out of the additional channels is also intended to replicate Definition 4.5 in our experimental setup.
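As a concrete sketch of this modification process, the per-example transforms can be written as follows. This is an illustration of ours in NumPy rather than our actual PyTorch pipeline, and the function names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_train_example(x, x_spur, rng):
    # x: true image; x_spur: image from the (randomly fixed) other class.
    # gamma ~ Uni([0, 1]) couples the scalings of the two halves.
    gamma = rng.uniform()
    return np.concatenate([gamma * x, (1 - gamma) * x_spur], axis=0)

def make_test_example(x):
    # Test images stay unmodified; all-zero channels are concatenated so
    # test shapes match training shapes (penalizing models that rely on
    # the spurious half).
    return np.concatenate([x, np.zeros_like(x)], axis=0)

x = rng.normal(size=(3, 32, 32))       # a CIFAR-sized image
x_spur = rng.normal(size=(3, 32, 32))  # paired other-class image
train_ex = make_train_example(x, x_spur, rng)
test_ex = make_test_example(x)
```

Both transforms produce 6-channel inputs, so the same network can be trained and evaluated without architectural changes.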

While we consider the above setup to be intuitive and resemble our theoretical setting, it is fair to ask why we chose this setup compared to the many possible alternatives. Firstly, we found that using synthetic spurious features (i.e. random orthogonal vectors scaled to have the same norm as the images) as opposed to images from different classes was far too noisy (training error went to 0 immediately); the test errors on each dataset degraded to near-random levels, so it was difficult to make comparisons. Additionally, we found the same to be true if we considered adding spurious features as opposed to concatenating them.

Figure 2: Test error comparison between Uniform Mixup (green), Midpoint Mixup (orange), and ERM (blue) on (a) FMNIST, (b) CIFAR-10, and (c) CIFAR-100. Each curve represents the average of 5 model runs (over the randomness of the data augmentations and model initializations), while the surrounding area represents 1 standard deviation.

For each of our image classification tasks, we train models using Mixup with 𝒟_λ = Beta(1, 1) (the choice used in Zhang et al. (2018) for CIFAR, which we refer to as Uniform Mixup), Midpoint Mixup, and ERM. Our implementation is in PyTorch (Paszke et al., 2019) and uses the ResNet implementation of Kuang Liu, released under an MIT license. All models were trained for 100 epochs with a batch size of 750, which was the largest feasible size on our compute setup of a single P100 GPU (we use a large batch size to approximate the full batch gradient descent aspect of our theory). For optimization, we use Adam (Kingma & Ba, 2015) with the default hyperparameters of β_1 = 0.9, β_2 = 0.999 and a learning rate of 0.001. We did a modest amount of hyperparameter tuning in preliminary experiments, where we compared Adam and SGD with different log-spaced learning rates in the range [0.001, 0.1], and found that Adam with the default hyperparameters almost always worked the best. We report our results for each dataset in Table 1, and accompanying test error plots are shown in Figure 2. Code for our experiments is available at: https://github.com/2014mchidamb/midpoint-mixup-multi-view-icml.
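The standard per-batch random mixing referenced above can be sketched as follows (a simplified NumPy illustration of ours, not our exact training code):

```python
import numpy as np

rng = np.random.default_rng(0)

def midpoint_mixup_batch(x, y_onehot, rng):
    """Pair each example in the batch with a random partner and average
    both inputs and label vectors (lambda = 1/2). Uniform Mixup would
    instead draw lambda ~ Beta(1, 1) per pair."""
    perm = rng.permutation(len(x))
    x_mix = 0.5 * x + 0.5 * x[perm]
    y_mix = 0.5 * y_onehot + 0.5 * y_onehot[perm]
    return x_mix, y_mix

x = rng.normal(size=(8, 3, 32, 32))          # a small toy batch
y = np.eye(10)[rng.integers(0, 10, size=8)]  # one-hot labels
x_mix, y_mix = midpoint_mixup_batch(x, y, rng)
```

Random per-batch pairing is what makes training feasible: it avoids the N^2 mixed points that an exhaustive Midpoint Mixup objective would require.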

Table 1: Final test errors on unmodified test data (mean over 5 runs) along with 1 standard deviation range for Uniform Mixup, Midpoint Mixup, and ERM.

| Model | FMNIST | CIFAR-10 | CIFAR-100 |
| --- | --- | --- | --- |
| Uniform Mixup | 9.66 | 18.52 ± 1 | 53.42 ± 1 |
| Midpoint Mixup | 14.84 ± 1 | 22.29 ± 2 | 53.61 ± 2 |
| ERM | 16.55 ± 2 | 27.77 ± 2 | 69.28 ± 2 |

From Table 1 we see that Uniform Mixup performs the best in all cases, and that Midpoint Mixup tracks the performance of Uniform Mixup reasonably closely. We stress that the ordering of model performance is unsurprising; a truly fair comparison with Midpoint Mixup would require training on all N^2 possible mixed points, which is infeasible in our compute setup (we opt to randomly mix points per batch, as is standard). Our experiments are intended to show that Midpoint Mixup still non-trivially captures the benefits of Mixup in an empirical setting that is far from the asymptotic regime of our theory, while Mixup using standard hyperparameter settings significantly outperforms ERM in the presence of spurious features. A final observation worth making is that Midpoint Mixup performs significantly better than ERM when moving from the 10-class settings of FMNIST and CIFAR-10 to the 100-class setting of CIFAR-100, which is in line with what our theory predicts (a larger number of classes more closely approximates our setting).

6 Conclusion

To summarize, the main contributions of our work have been theoretical motivation for an extreme case of Mixup training (Midpoint Mixup), as well as an optimization analysis separating the learning dynamics of a 2-layer convolutional network trained using Midpoint Mixup and empirical risk minimization. Our results show that, for a class of data distributions satisfying the property that there are multiple, dependent features correlated with each class in the data, Midpoint Mixup can outperform ERM (both theoretically and empirically) in learning these features.

In proving these results, we have also shown along the way that Midpoint Mixup can be a useful theoretical lens in the study of Mixup, and exhibits several nice properties that may be of independent interest. Our results also raise a number of questions for future work. Namely, our work has only scratched the surface of exploring empirical properties of Midpoint Mixup, and there is much to be done in trying to understand how Midpoint Mixup is able to train so quickly in non-asymptotic settings. Additionally, it is natural to consider how our techniques for analyzing Midpoint Mixup can be extended to optimization analyses for Mixup using more general distributions, or even modified versions of Mixup.

Finally, we observe that, due to the mostly theoretical nature of this work, we do not anticipate any direct misuses of our findings or negative broader impacts.

Acknowledgements

During this project, Rong Ge, Muthu Chidambaram, Xiang Wang, and Chenwei Wu were supported in part by NSF Award DMS-2031849, CCF-1704656, CCF-1845171 (CAREER), CCF-1934964 (Tripods), a Sloan Research Fellowship, and a Google Faculty Research Award.

References

Allen-Zhu, Z. and Li, Y. Towards understanding ensemble, knowledge distillation and self-distillation in deep learning, 2021.
Arazo, E., Ortego, D., Albert, P., O'Connor, N. E., and McGuinness, K. Unsupervised label noise modeling and loss correction. arXiv preprint arXiv:1904.11238, 2019.
Berthelot, D., Carlini, N., Cubuk, E. D., Kurakin, A., Sohn, K., Zhang, H., and Raffel, C. Remixmatch: Semi-supervised learning with distribution alignment and augmentation anchoring. arXiv preprint arXiv:1911.09785, 2019a.
Berthelot, D., Carlini, N., Goodfellow, I., Papernot, N., Oliver, A., and Raffel, C. A. Mixmatch: A holistic approach to semi-supervised learning. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019b. URL https://proceedings.neurips.cc/paper/2019/file/1cd138d0499a68f4bb72bee04bbec2d7-Paper.pdf.
Bishop, C. M. Training with noise is equivalent to Tikhonov regularization. Neural Computation, 7(1):108–116, 1995.
Carratino, L., Cissé, M., Jenatton, R., and Vert, J.-P. On mixup regularization, 2020.
Chen, S., Dobriban, E., and Lee, J. A group-theoretic framework for data augmentation. Advances in Neural Information Processing Systems, 33:21321–21333, 2020.
Chernozhukov, V., Chetverikov, D., and Kato, K. Comparison and anti-concentration bounds for maxima of Gaussian random vectors, 2014.
Chidambaram, M., Wang, X., Hu, Y., Wu, C., and Ge, R. Towards understanding the data dependency of mixup-style training. CoRR, abs/2110.07647, 2021. URL https://arxiv.org/abs/2110.07647.
Chuang, C. and Mroueh, Y. Fair mixup: Fairness via interpolation. CoRR, abs/2103.06503, 2021. URL https://arxiv.org/abs/2103.06503.
Dao, T., Gu, A., Ratner, A., Smith, V., De Sa, C., and Ré, C. A kernel theory of modern data augmentation. In International Conference on Machine Learning, pp. 1528–1537. PMLR, 2019.
Guo, H. Nonlinear mixup: Out-of-manifold data augmentation for text classification. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, pp. 4044–4051. AAAI Press, 2020. URL https://aaai.org/ojs/index.php/AAAI/article/view/5822.
Guo, H. Midpoint regularization: From high uncertainty training to conservative classification, 2021.
Guo, H., Mao, Y., and Zhang, R. Mixup as locally linear out-of-manifold regularization. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pp. 3714–3722, 2019.
Hanin, B. and Sun, Y. How data augmentation affects optimization for linear regression. Advances in Neural Information Processing Systems, 34, 2021.
He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition, 2015.
He, T., Zhang, Z., Zhang, H., Zhang, Z., Xie, J., and Li, M. Bag of tricks for image classification with convolutional neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
Huang, Y., Song, Z., Li, K., and Arora, S. Instahide: Instance-hiding schemes for private distributed learning, 2021.
Kalantidis, Y., Sariyildiz, M. B., Pion, N., Weinzaepfel, P., and Larlus, D. Hard negative mixing for contrastive learning. Advances in Neural Information Processing Systems, 33:21798–21809, 2020.
Kim, J., Choo, W., Jeong, H., and Song, H. O. Co-mixup: Saliency guided joint mixup with supermodular diversity. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=gvxJzw8kW4b.
Kim, J.-H., Choo, W., and Song, H. O. Puzzle mix: Exploiting saliency and local statistics for optimal mixup. In International Conference on Machine Learning, pp. 5275–5285. PMLR, 2020.
Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015. URL http://arxiv.org/abs/1412.6980.
Krizhevsky, A. Learning multiple layers of features from tiny images. Technical report, 2009.
Lamb, A., Verma, V., Kannala, J., and Bengio, Y. Interpolated adversarial training: Achieving robust neural networks without sacrificing too much accuracy. In Proceedings of the 12th ACM Workshop on Artificial Intelligence and Security, pp. 95–103, 2019.
Lee, K., Zhu, Y., Sohn, K., Li, C., Shin, J., and Lee, H. i-mix: A strategy for regularizing contrastive representation learning. CoRR, abs/2010.08887, 2020. URL https://arxiv.org/abs/2010.08887.
Liang, D., Yang, F., Zhang, T., and Yang, P. Understanding mixup training methods. IEEE Access, 6:58774–58783, 2018.
Lim, S. H., Erichson, N. B., Utrera, F., Xu, W., and Mahoney, M. W. Noisy feature mixup, 2021. URL https://arxiv.org/abs/2110.02180.
Liu, Z., Li, S., Wu, D., Chen, Z., Wu, L., Guo, J., and Li, S. Z. Automix: Unveiling the power of mixup. CoRR, abs/2103.13027, 2021. URL https://arxiv.org/abs/2103.13027.
Mei, S., Misiakiewicz, T., and Montanari, A. Learning with invariances in random features and kernel models. In Conference on Learning Theory, pp. 3351–3418. PMLR, 2021.
Park, C., Yun, S., and Chun, S. A unified analysis of mixed sample data augmentation: A loss function perspective. arXiv preprint arXiv:2208.09913, 2022.
Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Kopf, A., Yang, E., DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., Bai, J., and Chintala, S. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, volume 32, pp. 8026–8037. Curran Associates, Inc., 2019.
Rajput, S., Feng, Z., Charles, Z., Loh, P.-L., and Papailiopoulos, D. Does data augmentation lead to positive margin? In International Conference on Machine Learning, pp. 5321–5330. PMLR, 2019.
Shen, R., Bubeck, S., and Gunasekar, S. Data augmentation as feature manipulation: A story of desert cows and grass cows, 2022. URL https://arxiv.org/abs/2203.01572.
So, J., Oh, C., Shin, M., and Song, K. Multi-modal mixup for robust fine-tuning. arXiv preprint arXiv:2203.03897, 2022.
Sohn, K., Berthelot, D., Carlini, N., Zhang, Z., Zhang, H., Raffel, C. A., Cubuk, E. D., Kurakin, A., and Li, C.-L. Fixmatch: Simplifying semi-supervised learning with consistency and confidence. In Advances in Neural Information Processing Systems, volume 33, pp. 596–608. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/06964dce9addb1c5cb5d6e3d9838f733-Paper.pdf.
Thulasidasan, S., Chennupati, G., Bilmes, J. A., Bhattacharya, T., and Michalak, S. On mixup training: Improved calibration and predictive uncertainty for deep neural networks. Advances in Neural Information Processing Systems, 32:13888–13899, 2019.
Uddin, A. F. M. S., Monira, M. S., Shin, W., Chung, T., and Bae, S. Saliencymix: A saliency guided data augmentation strategy for better regularization. CoRR, abs/2006.01791, 2020. URL https://arxiv.org/abs/2006.01791.
Verma, V., Lamb, A., Beckham, C., Najafi, A., Mitliagkas, I., Lopez-Paz, D., and Bengio, Y. Manifold mixup: Better representations by interpolating hidden states. In International Conference on Machine Learning, pp. 6438–6447. PMLR, 2019.
Verma, V., Luong, M.-T., Kawaguchi, K., Pham, H., and Le, Q. V. Towards domain-agnostic contrastive learning, 2021a.
Verma, V., Qu, M., Kawaguchi, K., Lamb, A., Bengio, Y., Kannala, J., and Tang, J. Graphmix: Improved training of GNNs for semi-supervised learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pp. 10024–10032, 2021b.
Vershynin, R. High-Dimensional Probability: An Introduction with Applications in Data Science. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, 2018. ISBN 9781108415194. URL https://books.google.com/books?id=NDdqDwAAQBAJ.
Wang, R., Walters, R., and Yu, R. Data augmentation vs. equivariant networks: A theory of generalization on dynamics forecasting. arXiv preprint arXiv:2206.09450, 2022.
Wang, Y., Wang, W., Liang, Y., Cai, Y., and Hooi, B. Mixup for node and graph classification. In Proceedings of the Web Conference 2021, pp. 3663–3674, 2021.
Wu, S., Zhang, H., Valiant, G., and Ré, C. On the generalization effects of linear transformations in data augmentation. In International Conference on Machine Learning, pp. 10410–10420. PMLR, 2020.
Xiao, H., Rasul, K., and Vollgraf, R. Fashion-MNIST: A novel image dataset for benchmarking machine learning algorithms, 2017. URL https://arxiv.org/abs/1708.07747.
Yang, S., Dong, Y., Ward, R., Dhillon, I. S., Sanghavi, S., and Lei, Q. Sample efficiency of data augmentation consistency regularization. arXiv preprint arXiv:2202.12230, 2022.
Yun, S., Han, D., Oh, S. J., Chun, S., Choe, J., and Yoo, Y. Cutmix: Regularization strategy to train strong classifiers with localizable features. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6023–6032, 2019.
Zhang, H., Cisse, M., Dauphin, Y. N., and Lopez-Paz, D. mixup: Beyond empirical risk minimization. In International Conference on Learning Representations, 2018.
Zhang, L., Deng, Z., Kawaguchi, K., Ghorbani, A., and Zou, J. How does mixup help with robustness and generalization?, 2020.
Zhang, L., Deng, Z., Kawaguchi, K., and Zou, J. When and how mixup improves calibration, 2021.

Appendix A Supporting Lemmas and Calculations

In this section we collect several technical lemmas and computations that will be necessary for the proofs of our main results.

A.1 Gaussian Concentration and Anti-concentration Results

The following are well-known concentration results for Gaussian random variables; we include proofs for the convenience of the reader.

Proposition A.1.

Let X_i ∼ 𝒩(0, σ_i^2) with i ∈ [m] and let σ = max_i σ_i. Then,

E[max_i X_i] ≤ σ√(2 log m).

Proof.

Let Z = max_i X_i. Then by Jensen's inequality and the MGF of 𝒩(0, σ_i^2), we have:

exp(t E[Z]) ≤ E[exp(t Z)] = E[exp(t max_i X_i)] ≤ E[Σ_i exp(t X_i)] = Σ_i exp(t^2 σ_i^2 / 2) ≤ m exp(t^2 σ^2 / 2)

⟹ E[Z] ≤ (log m)/t + t σ^2 / 2.

Minimizing the RHS yields t = √(2 log m)/σ, from which the result follows. ∎
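The bound of Proposition A.1 is easy to sanity-check by Monte Carlo in the i.i.d. case (a quick numerical check of ours, not part of the proof; the sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
m, sigma, trials = 200, 1.0, 2000

# Empirical E[max_i X_i] for i.i.d. N(0, sigma^2), versus the bound
# sigma * sqrt(2 log m) from Proposition A.1.
maxima = rng.normal(0.0, sigma, size=(trials, m)).max(axis=1)
bound = sigma * np.sqrt(2 * np.log(m))
print(maxima.mean(), bound)  # the empirical mean sits below the bound
```

The empirical mean is close to, but strictly below, σ√(2 log m), consistent with the bound being tight up to lower-order terms.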

Proposition A.2.

Let X_i be as in Proposition A.1. Then,

P(max_i X_i ≥ t + σ√(2 log m)) ≤ exp(−t^2/(2σ^2)).

Proof.

We simply union bound and use the fact that P(X_i ≥ t) ≤ exp(−t^2/(2σ^2)) (Chernoff bound for zero-mean Gaussians) to get:

P(max_i X_i ≥ t + σ√(2 log m)) ≤ Σ_i P(X_i ≥ t + σ√(2 log m)) ≤ m exp(−(t + σ√(2 log m))^2/(2σ^2)) ≤ exp(−t^2/(2σ^2)). ∎

Proposition A.3.

Let X_1, X_2, …, X_m be i.i.d. Gaussian variables with mean 0 and variance σ^2. Then we have that:

P(max_i X_i > Θ(σ√(log(m/log(1/δ))))) = 1 − Θ(δ).

Proof.

We recall that:

P(X_i > x) = Θ((σ/x) exp(−x^2/(2σ^2))).

A proof of this fact can be found in (Vershynin, 2018). We additionally have that:

P(max_i X_i > x) = 1 − (1 − P(X_i > x))^m.

So from the previous asymptotic characterization of P(X_i > x), we have that choosing x = Θ(σ√(log(m/log(1/δ)))) gives P(X_i > x) = Θ(log(1/δ)/m), from which the result follows. ∎

We will also need a recent anti-concentration result due to Chernozhukov et al. (2014), which we restate below.

Proposition A.4 (Theorem 3 (i) of Chernozhukov et al. (2014)).

Let X_i ∼ 𝒩(0, σ^2) for i ∈ [m] with σ^2 > 0. Defining a_m = E[max_i X_i/σ], we then have for every ε ≥ 0:

sup_{x∈ℝ} P(|max_i X_i − x| ≤ ε) ≤ 4ε(1 + a_m)/σ.

Corollary A.5.

Applying Proposition A.1, we have sup_{x∈ℝ} P(|max_i X_i − x| ≤ ε) ≤ 4ε(1 + √(2 log m))/σ.

A.2 Gradient Calculations

Here we collect the gradient calculations used in the proofs of the main results. We recall that we use ∇_{w_{y,r}^{(t)}} to indicate ∂/∂w_{y,r}^{(t)}, and z_{i,j} = (x_i + x_j)/2. Additionally, we will omit parentheses after ReLU~ when function application is clear.

Calculation A.6.

For any ( π‘₯ 𝑖 , 𝑦 𝑖 ) ∈ 𝒳 :

βˆ‡ 𝑀 𝑦 𝑖 , π‘Ÿ ( 𝑑 ) 𝑔 𝑑 𝑦 𝑖 ⁒ ( π‘₯ 𝑖 )

βˆ‘ 𝑝 ∈ [ 𝑃 ] ReLU ~ β€² ⁒ ⟨ 𝑀 𝑦 𝑖 , π‘Ÿ , π‘₯ 𝑖 ( 𝑝 ) ⟩ ⁒ π‘₯ 𝑖 ( 𝑝 ) .
Proof.
βˆ‡ 𝑀 𝑦 𝑖 , π‘Ÿ ( 𝑑 ) 𝑔 𝑑 𝑦 ⁒ ( π‘₯ 𝑖 )

βˆ‚ βˆ‚ 𝑀 𝑦 𝑖 , π‘Ÿ ( 𝑑 ) ⁑ βˆ‘ 𝑒 ∈ [ π‘š ] βˆ‘ 𝑝 ∈ [ 𝑃 ] ReLU ~ ⁒ ⟨ 𝑀 𝑦 𝑖 , 𝑒 ( 𝑑 ) , π‘₯ 𝑖 ( 𝑝 ) ⟩

βˆ‘ 𝑝 ∈ [ 𝑃 ] ReLU ~ β€² ⁒ ⟨ 𝑀 𝑦 𝑖 , π‘Ÿ ( 𝑑 ) , π‘₯ 𝑖 ( 𝑝 ) ⟩ ⁒ π‘₯ 𝑖 ( 𝑝 ) .

∎

Calculation A.7.

For any ( π‘₯ 𝑖 , 𝑦 𝑖 ) ∈ 𝒳 , if max 𝑒 ∈ [ π‘˜ ] ⁑ ⟨ 𝑀 𝑦 𝑖 , π‘Ÿ ( 𝑑 ) , 𝑣 𝑒 , β„“ ⟩ < 𝜌 / ( 𝛿 2 βˆ’ 𝛿 1 ) and 𝑠 β‰  𝑦 𝑖 , then:

⟨ βˆ‡ 𝑀 𝑦 𝑖 , π‘Ÿ ( 𝑑 ) 𝑔 𝑑 𝑦 𝑖 ⁒ ( π‘₯ 𝑖 ) , 𝑣 𝑦 𝑖 , β„“ ⟩

βˆ‘ 𝑝 ∈ 𝑃 𝑦 𝑖 , β„“ ⁒ ( π‘₯ 𝑖 ) 𝛽 𝑖 , 𝑝 𝛼 ⁒ ⟨ 𝑀 𝑦 𝑖 , π‘Ÿ ( 𝑑 ) , 𝑣 𝑦 𝑖 , β„“ ⟩ 𝛼 βˆ’ 1 𝜌 𝛼 βˆ’ 1 ,

⟨ βˆ‡ 𝑀 𝑦 𝑖 , π‘Ÿ ( 𝑑 ) 𝑔 𝑑 𝑦 𝑖 ⁒ ( π‘₯ 𝑖 ) , 𝑣 𝑠 , β„“ ⟩

≀ Θ ⁒ ( 𝑃 𝛿 4 𝛼 max 𝑒 β‰  𝑦 ⟨ 𝑀 𝑦 𝑖 , π‘Ÿ ( 𝑑 ) , 𝑣 𝑒 , β„“ ⟩ 𝛼 βˆ’ 1 𝜌 𝛼 βˆ’ 1 ) .

Proof.

When max_{u∈[k]} ⟨w_{y_i,r}^{(t)}, v_{u,ℓ}⟩ < ρ/(δ_2 − δ_1), we are in the polynomial part of ReLU~ for every patch in x_i, since max_{p∈P_{y_i,ℓ}(x_i)} ⟨w_{y_i,r}^{(t)}, x_i^{(p)}⟩ < ρ given that β_{i,p} ≤ δ_2 − δ_1. The first line then follows from Calculation A.6 and the fact that all of the feature vectors are orthonormal (so only those patches that contain the feature v_{y_i,ℓ} are relevant). The second line follows from the fact that there are at most P − 2C_P feature noise patches containing the vector v_{s,ℓ}, and in each of these patches there are only a constant number of feature vectors (which we do not constrain). ∎

Calculation A.8.

For any ( π‘₯ 𝑖 , 𝑦 𝑖 ) ∈ 𝒳 , if ⟨ 𝑀 𝑦 𝑖 , π‘Ÿ ( 𝑑 ) , 𝑣 𝑦 𝑖 , β„“ ⟩ β‰₯ 𝜌 / 𝛿 1 , then:

⟨ βˆ‡ 𝑀 𝑦 𝑖 , π‘Ÿ ( 𝑑 ) 𝑔 𝑑 𝑦 𝑖 ⁒ ( π‘₯ 𝑖 ) , 𝑣 𝑦 𝑖 , β„“ ⟩

βˆ‘ 𝑝 ∈ 𝑃 𝑦 𝑖 , β„“ ⁒ ( π‘₯ 𝑖 ) 𝛽 𝑖 , 𝑝 .

Proof.

When ⟨w_{y_i,r}^{(t)}, v_{y_i,ℓ}⟩ ≥ ρ/δ_1, we necessarily have min_{p∈P_{y_i,ℓ}(x_i)} ⟨w_{y_i,r}^{(t)}, x_i^{(p)}⟩ ≥ ρ since β_{i,p} ≥ δ_1, and then the result again follows from Calculation A.6 and the fact that ReLU~′ = 1 in the linear regime. ∎

Calculation A.9 (ERM Gradient).
βˆ‡ 𝑀 𝑦 , π‘Ÿ ( 𝑑 ) 𝐽 ⁒ ( 𝑔 𝑑 , 𝒳 )

βˆ’ 1 𝑁 ⁒ βˆ‘ 𝑖 ∈ [ 𝑁 ] ( πŸ™ 𝑦 𝑖

𝑦 βˆ’ πœ™ 𝑦 ⁒ ( 𝑔 ⁒ ( π‘₯ 𝑖 ) ) ) ⁒ βˆ‡ 𝑀 𝑦 , π‘Ÿ ( 𝑑 ) 𝑔 𝑑 𝑦 ⁒ ( π‘₯ 𝑖 ) .

Proof.

First let us observe that:

log φ_{y_i}(g_t(x_i)) = g_t^{y_i}(x_i) − log Σ_s exp(g_t^s(x_i))

⟹ ∂ log φ_{y_i}(g_t(x_i))/∂w_{y,r} = 𝟙_{y_i = y} ∇_{w_{y,r}^{(t)}} g_t^y(x_i) − φ_y(g(x_i)) ∇_{w_{y,r}^{(t)}} g_t^y(x_i).

Summing (and negating) the above over all points x_i gives the result. ∎

Calculation A.10 (Midpoint Mixup Gradient).
βˆ‡ 𝑀 𝑦 , π‘Ÿ ( 𝑑 ) 𝐽 𝑀 ⁒ 𝑀 ⁒ ( 𝑔 𝑑 , 𝒳 )

βˆ’ 1 2 ⁒ 𝑁 2 ⁒ βˆ‘ 𝑖 ∈ [ 𝑁 ] βˆ‘ 𝑗 ∈ [ 𝑁 ] ( πŸ™ 𝑦 𝑖

𝑦 + πŸ™ 𝑦 𝑗

𝑦 βˆ’ 2 ⁒ πœ™ 𝑦 ⁒ ( 𝑔 𝑑 ⁒ ( 𝑧 𝑖 , 𝑗 ) ) ) ⁒ βˆ‡ 𝑀 𝑦 , π‘Ÿ ( 𝑑 ) 𝑔 𝑑 𝑦 ⁒ ( 𝑧 𝑖 , 𝑗 ) .

Proof.

Follows from applying Calculation A.9 to each part of the summation in J_MM(g, 𝒳). ∎

Appendix B Proofs of Main Results

This section contains the proofs of the main results in this paper. We have opted to present the proofs in a linear fashion, inlining several claims and their proofs along the way, as we find this to be more readable than the alternative. The proofs of inlined claims are ended with the ■ symbol, while the proofs of the overarching results are ended with the □ symbol.

For convenience, we recall the assumptions (as they were stated in the main body) that are used in these results; see Assumption 4.3.

B.1 Proof of Theorem 4.6

See Theorem 4.6.

Proof.

We break the proof into two parts. In part one, we prove that (with high probability) each class output 𝑔 𝑑 𝑦 becomes large (but not too large) on data points belonging to class 𝑦 and stays small on other data points, which consequently allows us to obtain perfect training accuracy at the end (thereby proving the first half of the theorem). In part two, we show that (again with high probability) the max correlations with features 𝑣 𝑦 , 1 and 𝑣 𝑦 , 2 for a class 𝑦 have a separation at initialization that gets amplified over the course of training, and due to this separation one of the feature correlations becomes essentially irrelevant, which will be used to prove the second half of the theorem.

Part I.

In this part, we show that the network output g_t^{y_i}(x_i) reaches and remains Θ(log k) while g_t^s(x_i) = o(1) for all t = O(poly(k)) and s ≠ y_i. These two facts together allow us to control the 1 − φ_{y_i}(g_t(x_i)) terms that show up throughout our analysis (see Calculation A.9), while also being sufficient for showing that we get perfect training accuracy. The intuition behind these results is that, when g_t^{y_i}(x_i) = c log k, we have exp(g_t^{y_i}(x_i)) = k^c, so the 1 − φ_{y_i}(g_t(x_i)) terms in the gradient updates quickly become small and g_t^{y_i} stops growing. Throughout this part of the proof and the next, we will use the following notation (some of which has been introduced previously) to simplify the presentation.

N_y = { i : i ∈ [N] and y_i = y },
P_{y,ℓ}(x_i) = { p : p ∈ [P] and ⟨x_i^{(p)}, v_{y,ℓ}⟩ > 0 },
B_{y,ℓ}^{(t)} = { r : r ∈ [m] and ⟨w_{y,r}^{(t)}, v_{y,ℓ}⟩ ≥ ρ/δ_1 }.   (B.1)

Here, N_y represents the indices corresponding to class y points, P_{y,ℓ}(x_i) (as used in Definition 4.1) represents the patch support of the feature v_{y,ℓ} in x_i (recall the features are orthonormal), and B_{y,ℓ}^{(t)} represents the set of class y weights that have achieved a big enough correlation with the feature v_{y,ℓ} to necessarily be in the linear regime of ReLU~ on all class y points at iteration t.

Prior to beginning our analysis of the network outputs 𝑔 𝑑 𝑦 , we first prove a claim that will serve as the setting for the rest of the proof. In what follows, and also throughout the rest of this proof and the proof of Theorem 4.7, we will abuse asymptotic notation in inequalities. When we consider a quantity to be bounded by an asymptotic term, we mean there exists some universal constant for which the inequality is true for sufficiently large choices of the parameters involved, so that the negation of the inequality is well-defined (i.e. we use the same constant in the negation).

Claim B.1.

With probability 1 − O(1/k), all of the following are (simultaneously) true for every class y ∈ [k]:

• |N_y| = Θ(N/k)

• max_{s∈[k], r∈[m], ℓ∈[2]} ⟨w_{s,r}^{(0)}, v_{y,ℓ}⟩ = O(√(log k)/√d)

• ∀ℓ ∈ [2], max_{r∈[m]} ⟨w_{y,r}^{(0)}, v_{y,ℓ}⟩ = Ω(1/√d)

Proof of Claim B.1.

We prove each part of the claim in order, starting with showing that |N_y| = Θ(N/k) with the desired probability for every y. To see this, we note that the joint distribution of the |N_y| is multinomial with uniform probability 1/k. Now by a Chernoff bound, for each y ∈ [k] we have that |N_y| = Θ(N/k) with probability at least 1 − exp(−Θ(N/k)). Taking a union bound over all y gives that this holds simultaneously for every y with probability at least 1 − k exp(−Θ(N/k)).

The fact that for every y we have max_{s∈[k], r∈[m], ℓ∈[2]} ⟨w_{s,r}^{(0)}, v_{y,ℓ}⟩ = O(√(log k)/√d) with probability 1 − O(1/k) follows from Proposition A.2. Namely, using Proposition A.2 with t = 2σ√(2 log m) (here σ = 1/√d by our choice of initialization) yields that max_r ⟨w_{s,r}^{(0)}, v_{y,ℓ}⟩ ≥ 3√(2 log k)/√d with probability bounded above by 1/k^3 for any s, y. Taking a union bound over s, y then gives the result. The final fact follows by near-identical logic but using Proposition A.3 (note that the correlations ⟨w_{s,r}^{(0)}, v_{y,ℓ}⟩ are i.i.d. 𝒩(0, 1/d) due to the fact that the features are orthonormal and the weights themselves are i.i.d.). ∎
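The first and last parts of Claim B.1 are easy to spot-check numerically (a quick Monte Carlo with illustrative parameter values of our own choosing):

```python
import numpy as np

rng = np.random.default_rng(0)
N, k, d, m = 50_000, 100, 10_000, 100

# |N_y| = Theta(N/k): multinomial class counts concentrate around N/k.
counts = rng.multinomial(N, [1.0 / k] * k)
print(counts.min(), counts.max(), N // k)

# Max initial weight-feature correlation: k*m*2 i.i.d. N(0, 1/d) values,
# so the max is O(sqrt(log k)/sqrt(d)) from above and Omega(1/sqrt(d))
# from below, with high probability.
corrs = rng.normal(0.0, 1.0 / np.sqrt(d), size=k * m * 2)
print(corrs.max(), 3 * np.sqrt(2 * np.log(corrs.size)) / np.sqrt(d))
```

Both checks confirm the constant-factor concentration the claim relies on for typical parameter scales.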

In everything that follows, we will always assume the conditions of Claim B.1 unless otherwise stated. We begin by proving a result concerning the size of softmax outputs πœ™ 𝑦 ⁒ ( 𝑔 𝑑 ⁒ ( π‘₯ ) ) that we will repeatedly use throughout the rest of the proof.

Claim B.2.

Consider $i \in N_y$ and suppose that both $\max_{s \in [k], r \in [m], \ell \in [2]} \langle w_{s,r}^{(t)}, v_{s,\ell} \rangle = O(\log k)$ and $\max_{s \neq y, r \in [m], \ell \in [2]} \langle w_{s,r}^{(t)}, v_{y,\ell} \rangle = O(\sqrt{\log(k)/d})$ hold true. If we have $g_t^y(x_i) \ge a\log k$ for some $a \in [0, \infty)$, then:

$$1 - \phi_y(g_t(x_i)) = \begin{cases} O(1/k^{a-1}) & \text{if } a \ge 1 \\ \Theta(1) & \text{otherwise.} \end{cases}$$

Proof of Claim B.2.

By assumption, all of the weight-feature correlations are $O(\log k)$ at time $t$. Furthermore, for $s \neq y$, all of the off-diagonal correlations $\langle w_{s,r}^{(t)}, v_{y,\ell} \rangle$ are $O(\sqrt{\log(k)/d})$. This implies that (using $\delta_4 = \Theta(k^{-1.5})$, $\rho = \Theta(1/k)$, $P = \Theta(k^2)$, $m = \Theta(k)$, and $\alpha \ge 8$):

$$g_t^s(x_i) \le O\left(mP\delta_4^\alpha \max_{u \neq y} \frac{\langle w_{s,r}^{(t)}, v_{u,\ell} \rangle^\alpha}{\rho^{\alpha-1}}\right) \le O\left(\frac{k^{2+\alpha}\log(k)^\alpha}{k^{1.5\alpha}}\right) = O\left(\frac{\log(k)^\alpha}{k^2}\right) \implies \exp(g_t^s(x_i)) \le 1 + O\left(\frac{\log(k)^\alpha}{k^2}\right). \quad \text{(B.2)}$$

Above we disregarded the constant number ($2C_P$) of very low order correlations $\langle w_{s,r}^{(t)}, v_{y,\ell} \rangle$ and used the inequality $\exp(x) \le 1 + x + x^2$ for $x \le 1$. Now by the assumption that $g_t^y(x_i) \ge a\log k$, we have $\exp(g_t^y(x_i)) \ge k^a$, so:

$$1 - \phi_y(g_t(x_i)) \le 1 - \frac{k^a}{k^a + (k-1) + o(1)} = \frac{k - 1 + o(1)}{k^a + (k-1) + o(1)}. \quad \text{(B.3)}$$

From which the result follows. ∎

Corollary B.3.

Under the same conditions as Claim B.2, for $s \neq y$, we have:

$$\phi_s(g_t(x_i)) = O\left(\frac{1}{k^{\max(a,1)}}\right).$$

Proof of Corollary B.3.

Follows from Equations B.2 and B.3. ∎
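These softmax tail bounds can be checked directly. The sketch below is an illustrative computation (not part of the proof), assuming the idealized setting where the true class logit is exactly $a\log k$ and every other logit is $\approx 0$:

```python
import math

# Numerical sanity check for Claim B.2 / Corollary B.3 (illustrative only):
# with logit a*log(k) for the true class and ~0 elsewhere, the softmax gap
# 1 - phi_y is O(1/k^(a-1)) and each off-class weight is O(1/k^max(a,1)).
k = 1000
for a in (1.0, 1.5, 2.0, 3.0):
    num = math.exp(a * math.log(k))      # = k**a, the true-class term
    den = num + (k - 1)                  # other classes contribute ~1 each
    gap = 1 - num / den                  # 1 - phi_y(g_t(x_i))
    assert gap <= 1.0 / k ** (a - 1)     # Claim B.2 with constant 1
    assert (1 / den) <= 1.0 / k ** max(a, 1.0)  # Corollary B.3
```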

With these softmax bounds in hand, we now show that the "diagonal" correlations $\langle w_{y,r}^{(t)}, v_{y,\ell} \rangle$ grow much more quickly than the "off-diagonal" correlations $\langle w_{y,r}^{(t)}, v_{s,\ell} \rangle$ (where $s \neq y$). This will allow us to satisfy the conditions of Claim B.2 throughout training.

Claim B.4.

Consider an arbitrary $y \in [k]$. Let $A \le \rho/(\delta_2 - \delta_1)$ and let $T_A$ denote the first iteration at which $\max_{r \in [m], \ell \in [2]} \langle w_{y,r}^{(T_A)}, v_{y,\ell} \rangle \ge A$. Then we must have both that $T_A = O(\mathrm{poly}(k))$ and that $\max_{r \in [m], s \neq y, \ell \in [2]} \langle w_{y,r}^{(T_A)}, v_{s,\ell} \rangle = O(\sqrt{\log(k)/d})$.

Proof of Claim B.4.

Firstly, all weight-feature correlations are $o(\rho)$ at initialization (see Claim B.1). Now for $s \neq y$ and $\langle w_{y,r}^{(0)}, v_{s,\ell} \rangle > 0$, we have for every $t$ at which $\langle w_{y,r}^{(t)}, v_{s,\ell} \rangle < \rho/(\delta_2 - \delta_1)$ that (using Calculations A.7 and A.9):

$$\begin{aligned} \langle -\eta\nabla_{w_{y,r}^{(t)}} J(g,\mathcal{X}), v_{s,\ell} \rangle &\le -\frac{\eta}{N}\sum_{i \in N_s} \phi_y(g_t(x_i)) \sum_{p \in P_{s,\ell}(x_i)} \beta_{i,p}^\alpha \frac{\langle w_{y,r}^{(t)}, v_{s,\ell} \rangle^{\alpha-1}}{\rho^{\alpha-1}} \\ &\quad + \frac{\eta}{N}\sum_{i \in N_y} (1 - \phi_y(g_t(x_i))) \,\Theta\left(P\delta_4^\alpha \max_{u \neq y,\, \ell' \in [2]} \frac{\langle w_{y,r}^{(t)}, v_{u,\ell'} \rangle^{\alpha-1}}{\rho^{\alpha-1}}\right) \\ &\quad - \frac{\eta}{N}\sum_{i \notin N_y \cup N_s} \phi_y(g_t(x_i)) \,\Theta\left(P\delta_3^\alpha \min_{u \neq y,\, \ell' \in [2]} \frac{\langle w_{y,r}^{(t)}, v_{u,\ell'} \rangle^{\alpha-1}}{\rho^{\alpha-1}}\right) \\ &\le \frac{\eta}{N}\sum_{i \in N_y} (1 - \phi_y(g_t(x_i))) \,\Theta\left(P\delta_4^\alpha \max_{u \neq y,\, \ell' \in [2]} \frac{\langle w_{y,r}^{(t)}, v_{u,\ell'} \rangle^{\alpha-1}}{\rho^{\alpha-1}}\right). \end{aligned} \quad \text{(B.4)}$$

Similarly, for $\langle w_{y,r}^{(0)}, v_{y,\ell} \rangle > 0$, we have for every $t$ at which $\langle w_{y,r}^{(t)}, v_{y,\ell} \rangle < \rho/(\delta_2 - \delta_1)$ that:

$$\begin{aligned} \langle -\eta\nabla_{w_{y,r}^{(t)}} J(g,\mathcal{X}), v_{y,\ell} \rangle &\ge \frac{\eta}{N}\sum_{i \in N_y} (1 - \phi_y(g_t(x_i))) \sum_{p \in P_{y,\ell}(x_i)} \beta_{i,p}^\alpha \frac{\langle w_{y,r}^{(t)}, v_{y,\ell} \rangle^{\alpha-1}}{\rho^{\alpha-1}} \\ &\quad - \frac{\eta}{N}\sum_{i \notin N_y} \phi_y(g_t(x_i)) \,\Theta\left(P\delta_4^\alpha \max_{u \neq y,\, \ell' \in [2]} \frac{\langle w_{y,r}^{(t)}, v_{u,\ell'} \rangle^{\alpha-1}}{\rho^{\alpha-1}}\right). \end{aligned} \quad \text{(B.5)}$$

From Equation B.5, Claim B.2, and Corollary B.3 we get that for $t \le T_A$:

$$\langle -\eta\nabla_{w_{y,r}^{(t)}} J(g,\mathcal{X}), v_{y,\ell} \rangle \ge \Theta\left(\frac{\eta\,\langle w_{y,r}^{(t)}, v_{y,\ell} \rangle^{\alpha-1}}{k\rho^{\alpha-1}}\right), \quad \text{(B.6)}$$

where above we also used the fact that $|N_y| = \Theta(N/k)$. On the other hand, also using Claim B.2 and Corollary B.3, we have that for all $t$ for which $\langle w_{y,r}^{(t)}, v_{s,\ell} \rangle < \rho/(\delta_2 - \delta_1)$:

$$\langle -\eta\nabla_{w_{y,r}^{(t)}} J(g,\mathcal{X}), v_{s,\ell} \rangle \le \Theta\left(\eta P\delta_4^\alpha \max_{u \neq y,\, \ell' \in [2]} \frac{\langle w_{y,r}^{(t)}, v_{u,\ell'} \rangle^{\alpha-1}}{\rho^{\alpha-1}}\right). \quad \text{(B.7)}$$

Now suppose that $\langle w_{y,r}^{(0)}, v_{s,\ell} \rangle$ is the maximum off-diagonal correlation at initialization. Then using Equation B.7, we can lower bound the number of iterations $T$ it takes for $\langle w_{y,r}^{(t)}, v_{s,\ell} \rangle$ to grow by a fixed constant factor $C$ from initialization:

$$T \cdot \Theta\left(\eta P\delta_4^\alpha C^{\alpha-1} \frac{\langle w_{y,r}^{(0)}, v_{s,\ell} \rangle^{\alpha-1}}{\rho^{\alpha-1}}\right) \ge (C-1)\langle w_{y,r}^{(0)}, v_{s,\ell} \rangle \implies T \ge \Theta\left(\frac{\rho^{\alpha-1}}{\eta P\delta_4^\alpha \langle w_{y,r}^{(0)}, v_{s,\ell} \rangle^{\alpha-2}}\right) = \Theta\left(\frac{k^{1.5\alpha-2}\rho^{\alpha-1}d^{\alpha/2-1}}{\eta}\right). \quad \text{(B.8)}$$

As there exists at least one $\langle w_{y,r}^{(0)}, v_{y,\ell} \rangle = \Omega(1/\sqrt{d})$, it immediately follows from comparing to Equation B.6 and recalling that $\alpha \ge 8$ in Assumption 4.3 that $T \gg T_A$, and that $T_A = O(\mathrm{poly}(k))$, so the claim is proved. ∎
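The growth-rate gap between Equations B.6 and B.7 can also be seen numerically. The sketch below is purely illustrative (toy values $k = 10$, $\alpha = 8$, $\rho = 1/k$, $\delta_4 = k^{-1.5}$, $P = k^2$, initialization scale $1/\sqrt{d}$ with $d = 10^4$, and all hidden $\Theta(\cdot)$ constants set to 1): the diagonal correlation grows by a factor of five while the off-diagonal correlation is essentially frozen.

```python
# Illustrative simulation of the update rates in Equations B.6 and B.7
# (toy constants, Theta(.) factors set to 1; not part of the proof).
k, alpha, d, eta = 10, 8, 10_000, 1.0
rho = 1 / k
P, delta4 = k ** 2, k ** -1.5
u = v = d ** -0.5  # diagonal and off-diagonal correlations at init

diag_rate = eta / (k * rho ** (alpha - 1))                  # Equation B.6
off_rate = eta * P * delta4 ** alpha / rho ** (alpha - 1)   # Equation B.7

steps = 0
while u < 5 * d ** -0.5 and steps < 250_000:
    u += diag_rate * u ** (alpha - 1)
    v += off_rate * v ** (alpha - 1)
    steps += 1

assert u >= 5 * d ** -0.5      # diagonal correlation grew 5x ...
assert v < 1.001 * d ** -0.5   # ... while the off-diagonal barely moved
```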

Having established strong control over the off-diagonal correlations, we are now ready to prove the first half of the main result of this part of the proof: that $g_t^y(x_i)$ reaches $\Omega(\log k)$ for all $i \in N_y$ in $O(\mathrm{poly}(k))$ iterations. In proving this, it will help us to have some control over the network outputs $g_t^y$ across different points $x_i$ and $x_j$ at the later stages of training, which we take care of below.

Claim B.5.

For every $y \in [k]$ and all $t$ such that $\max_{i \in N_y} g_t^y(x_i) \ge \log k$ and $\max_{r \in [m], s \neq y, \ell \in [2]} \langle w_{y,r}^{(t)}, v_{s,\ell} \rangle = O(\sqrt{\log(k)/d})$, we have $\min_{i \in N_y} g_t^y(x_i) = \Omega(\max_{i \in N_y} g_t^y(x_i))$.

Proof of Claim B.5.

Let $j = \mathrm{argmax}_{i \in N_y}\, g_t^y(x_i)$. Since $g_t^y(x_j) \ge \log k$, we necessarily have that $B_{y,\ell}(t)$ is non-empty for at least one $\ell \in [2]$ (since $m\rho = \Theta(1)$). Only those weights $w_{y,r}^{(t)}$ with $r \in B_{y,\ell}(t)$ for some $\ell \in [2]$ are asymptotically relevant (as any weights not considered can only contribute an $O(1)$ term), and we can write:

$$g_t^y(x_j) \le \sum_{\ell \in [2]} \left(\sum_{p \in P_{y,\ell}(x_j)} \beta_{j,p}\right) \sum_{r \in B_{y,\ell}(t)} \langle w_{y,r}^{(t)}, v_{y,\ell} \rangle + o(\log k).$$

For any other $i \in N_y$, we have that $\beta_{i,p} \ge \delta_1\beta_{j,p}/(\delta_2 - \delta_1)$, from which the result follows. ∎

Now we may show:

Claim B.6.

For each $y \in [k]$, let $T_y$ denote the first iteration such that $\max_{i \in N_y} g_{T_y}^y(x_i) \ge \log k$. Then $T_y = O(\mathrm{poly}(k))$ and $\max_{r \in [m], s \neq y, \ell \in [2]} \langle w_{y,r}^{(T_y)}, v_{s,\ell} \rangle = O(\sqrt{\log(k)/d})$. Furthermore, $\min_{i \in N_y} g_t^y(x_i) = \Omega(\log k)$ for all $t \ge T_y$.

Proof of Claim B.6.

Applying Claim B.4 to an arbitrary $y \in [k]$ yields the existence of a correlation $\langle w_{y,r^*}^{(t)}, v_{y,\ell^*} \rangle \ge \rho/(\delta_2 - \delta_1)$. Reusing the logic of Claim B.4, but this time replacing $\langle w_{y,r^*}^{(t)}, v_{y,\ell^*} \rangle$ in Equation B.6 with $\rho/(\delta_2 - \delta_1)$, shows that in $O(\mathrm{poly}(k))$ additional iterations we have $\langle w_{y,r^*}^{(t)}, v_{y,\ell^*} \rangle \ge \rho/\delta_1$ (implying $r^* \in B_{y,\ell^*}(t)$) while the off-diagonal correlations still remain within a constant factor of initialization.

Now we may lower bound the update to $\langle w_{y,r^*}^{(t)}, v_{y,\ell^*} \rangle$ as (using Calculation A.8):

$$\begin{aligned} \langle -\eta\nabla_{w_{y,r^*}^{(t)}} J(g,\mathcal{X}), v_{y,\ell^*} \rangle &\ge \frac{\eta}{N}\sum_{i \in N_y} (1 - \phi_y(g_t(x_i))) \sum_{p \in P_{y,\ell^*}(x_i)} \beta_{i,p} \\ &\quad - \frac{\eta}{N}\sum_{i \notin N_y} \phi_y(g_t(x_i)) \,\Theta\left(P\delta_4^\alpha \max_{u \neq y,\, \ell' \in [2]} \frac{\langle w_{y,r^*}^{(t)}, v_{u,\ell'} \rangle^{\alpha-1}}{\rho^{\alpha-1}}\right). \end{aligned} \quad \text{(B.9)}$$

So long as $\max_{i \in N_y} g_t^y(x_i) < \log k$ (which is necessarily still the case at this point, as again $m\rho = \Theta(1)$), we have by the logic of Claim B.2 that we can simplify Equation B.9 to:

$$\langle -\eta\nabla_{w_{y,r^*}^{(t)}} J(g,\mathcal{X}), v_{y,\ell^*} \rangle \ge \Theta(\eta/k), \quad \text{(B.10)}$$

where again we used the fact that $|N_y| = \Theta(N/k)$. Now we can upper bound $T_y$ by the number of iterations it takes $\langle w_{y,r^*}^{(t)}, v_{y,\ell^*} \rangle$ to grow to $\log(k)/\delta_1$. From Equation B.10, we clearly have that $T_y = O(\mathrm{poly}(k))$ for some polynomial in $k$. Furthermore, comparing to Equation B.7, we necessarily still have $\max_{r \in [m], s \neq y, \ell \in [2]} \langle w_{y,r}^{(T_y)}, v_{s,\ell} \rangle = O(\sqrt{\log(k)/d})$. Finally, as the update in Equation B.9 is positive at $T_y$ (and the absolute value of a gradient update is $o(1)$), it follows that $\min_{i \in N_y} g_t^y(x_i) = \Omega(\log k)$ for all $t \ge T_y$ by Claim B.5. ∎

The final remaining task is to show that $g_t^y(x_i) = O(\log k)$ and $g_t^s(x_i) = o(1)$ for all $t = O(\mathrm{poly}(k))$ and $i \in N_y$, for every $y \in [k]$.

Claim B.7.

For all $t = O(k^C)$ for any universal constant $C$, and for every $y \in [k]$ and $s \neq y$, we have that $g_t^y(x_i) = O(\log k)$ and $\max_{r \in [m], s \neq y, \ell \in [2]} \langle w_{y,r}^{(t)}, v_{s,\ell} \rangle = O(\sqrt{\log(k)/d})$ for all $i \in N_y$.

Proof of Claim B.7.

Let us again consider any class $y \in [k]$ and $t \ge T_y$. The idea is to show that $\max_{i \in N_y} 1 - \phi_y(g_t(x_i))$ decreases rapidly as $\min_{i \in N_y} g_t^y(x_i)$ grows to successive levels $a\log k$ for $a \ge 1$.

Firstly, following Equation B.9, we can form the following upper bound for the gradient updates to $r \in B_{y,\ell}(t)$:

$$\langle -\eta\nabla_{w_{y,r}^{(t)}} J(g,\mathcal{X}), v_{y,\ell} \rangle \le \frac{\eta}{N}\sum_{i \in N_y} (1 - \phi_y(g_t(x_i))) \sum_{p \in P_{y,\ell}(x_i)} \beta_{i,p} \le \left(1 - \min_{i \in N_y}\phi_y(g_t(x_i))\right)\Theta(\eta/k). \quad \text{(B.11)}$$

From Equation B.11 it follows that it takes at least $\Theta(k\log(k)/(m\eta))$ iterations from $T_y$ for $g_t^y(x_i)$ to reach $2\log k$ (since the correlations must grow by at least $\log(k)/m$). Now let $T_a$ denote the number of iterations it takes for $\min_{i \in N_y} g_t^y(x_i)$ to cross $a\log k$ after crossing $(a-1)\log k$ for the first time. For $a \ge 3$, we necessarily have that $T_a = \Omega(kT_{a-1})$ by Claim B.2 and Equation B.11.

Let us now further define $T_f$ to be the first iteration at which $\max_{i \in N_y} g_{T_f}^y(x_i) \ge f(k)\log k$ for some $f(k) = \omega(1)$. By Claim B.5, at this point $\min_{i \in N_y} g_{T_f}^y(x_i) = \Omega(f(k)\log k)$. However, we have from the above discussion that:

$$T_f \ge \Omega(\mathrm{poly}(k)) + \sum_{a=0}^{f(k)-3}\Omega\left(\frac{k^a\log k}{\eta}\right) \ge \Omega\left(\frac{\log k\left(k^{f(k)-2}-1\right)}{\eta(k-1)}\right) \ge \omega(\mathrm{poly}(k)). \quad \text{(B.12)}$$

So $\max_{i \in N_y} g_t^y(x_i) = O(\log k)$ for all $t = O(\mathrm{poly}(k))$. An identical analysis also works for the off-diagonal correlations $\langle w_{y,r}^{(t)}, v_{s,\ell} \rangle$, but forming an upper bound using Equation B.4, so we are done. ∎
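The geometric growth of the level-crossing times $T_a$ can be seen in a toy simulation. The sketch below is illustrative only (assumed values $k = 50$, $\eta = 1$, all $\Theta(\cdot)$ constants dropped, a single aggregate logit $g$ updated by $\eta(1 - \phi_y(g))$ per step against $k - 1$ competing classes at logit $\approx 0$):

```python
import math

# Toy dynamics behind the proof of Claim B.7: each successive log(k) level
# takes roughly k times longer to cross than the previous one.
k, eta = 50, 1.0
logk = math.log(k)
g, t = logk, 0
crossings = []  # steps needed to cross 2*logk, 3*logk, 4*logk
for level in (2 * logk, 3 * logk, 4 * logk):
    start = t
    while g < level:
        phi = math.exp(g) / (math.exp(g) + (k - 1))  # softmax weight phi_y
        g += eta * (1 - phi)                          # Equation B.11 rate
        t += 1
    crossings.append(t - start)

t1, t2, t3 = crossings
assert t2 > (k / 4) * t1  # T_a = Omega(k * T_{a-1})
assert t3 > (k / 4) * t2
```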

We get the following two corollaries as straightforward consequences of Claim B.7.

Corollary B.8 (Perfect Training Accuracy).

There exists a universal constant $C$ such that $\mathrm{argmax}_s\, g_t^s(x_i) = y_i$ for every $(x_i, y_i) \in \mathcal{X}$ for all $t \ge k^C$ with $t = O(\mathrm{poly}(k))$.

Corollary B.9 (Softmax Control).

For all $y \in [k]$ and any $t = O(\mathrm{poly}(k))$ for any polynomial in $k$, we have $\max_{i \in N_y} \sum_{s \neq y} \exp(g_t^s(x_i)) \le k + o(1)$.

Corollary B.8 finishes this part of the proof.

Part II.

For the next part of the proof, we characterize the separation between $\max_{r \in [m]} \langle w_{y,r}^{(0)}, v_{y,1} \rangle$ and $\max_{r \in [m]} \langle w_{y,r}^{(0)}, v_{y,2} \rangle$, and show that this separation (when it is significant enough) gets amplified over the course of training. To show this, we will rely largely on the techniques found in (Allen-Zhu & Li, 2021), and finish in a near-identical manner to the proof of Claim B.7.

As with Part I, we first introduce some notation that we will use throughout this part of the proof:

$$S_{y,\ell} = \frac{1}{N}\sum_{i \in N_y}\sum_{p \in P_{y,\ell}(x_i)} \beta_{i,p}^\alpha, \qquad \Lambda_{y,\ell}^{(t)} = \max_{r \in [m]} \langle w_{y,r}^{(t)}, v_{y,\ell} \rangle.$$

Here, $S_{y,\ell}$ represents the data-dependent quantities that show up in the gradient updates to the correlations during the phase of training in which the correlations are in the polynomial part of $\widetilde{\mathrm{ReLU}}$, while $\Lambda_{y,\ell}^{(t)}$ represents the maximum class-$y$ correlation with feature $v_{y,\ell}$ at time $t$.

Now we can prove essentially the same result as Proposition B.2 in (Allen-Zhu & Li, 2021), which quantifies the separation between $\Lambda_{y,1}^{(0)}$ and $\Lambda_{y,2}^{(0)}$ after taking into account $S_{y,1}$ and $S_{y,2}$.

Claim B.10 (Feature Separation at Initialization).

For each class $y$, with probability $1 - O(1/\log k)$ we have that either:

$$\Lambda_{y,1}^{(0)} \ge \left(\frac{S_{y,2}}{S_{y,1}}\right)^{\frac{1}{\alpha-2}}\left(1 + \Theta\left(\frac{1}{\log^2 k}\right)\right)\Lambda_{y,2}^{(0)} \quad \text{or} \quad \Lambda_{y,2}^{(0)} \ge \left(\frac{S_{y,1}}{S_{y,2}}\right)^{\frac{1}{\alpha-2}}\left(1 + \Theta\left(\frac{1}{\log^2 k}\right)\right)\Lambda_{y,1}^{(0)}.$$

Proof of Claim B.10.

Suppose WLOG that $S_{y,1} \ge S_{y,2}$. If neither of the inequalities in the claim holds, then we have that:

$$\Lambda_{y,1}^{(0)} \in \left(\frac{S_{y,2}}{S_{y,1}}\right)^{\frac{1}{\alpha-2}}\left(1 \pm \Theta\left(\frac{1}{\log^2 k}\right)\right)\Lambda_{y,2}^{(0)},$$

which follows from the fact that, for a constant $A$, we have:

$$\frac{1}{1 + \frac{A}{\log^2 k}} \ge 1 - \frac{A}{\log^2 k}.$$

Now we recall that $\Lambda_{y,1}^{(0)}$ and $\Lambda_{y,2}^{(0)}$ are both maxima over i.i.d. $\mathcal{N}(0, 1/d)$ variables (again, since the feature vectors are orthonormal), so we can apply Corollary A.5 (Gaussian anti-concentration) to $\Lambda_{y,1}^{(0)}$ while taking $\epsilon = (S_{y,2}/S_{y,1})^{\frac{1}{\alpha-2}}\,\Theta(1/\log^2 k)\,\Lambda_{y,2}^{(0)}$ and $x = (S_{y,2}/S_{y,1})^{\frac{1}{\alpha-2}}\,\Lambda_{y,2}^{(0)}$. It is crucial to note that we can only do this because $\Lambda_{y,2}^{(0)}$ is independent of $\Lambda_{y,1}^{(0)}$, and both take values over all of $\mathbb{R}$. From this we get that, with probability $1 - 1/m$:

$$P\left(\Lambda_{y,1}^{(0)} \in (S_{y,2}/S_{y,1})^{\frac{1}{\alpha-2}}\Lambda_{y,2}^{(0)} \pm \epsilon\right) \le \frac{4\epsilon(1 + \sqrt{2\log m})}{\sigma} = O\left(\frac{\sigma\sqrt{\log m}}{\log^2 k}\right)\Theta\left(\frac{\sqrt{\log m}}{\sigma}\right) = O\left(\frac{1}{\log k}\right),$$

where we used the fact that $m = \Theta(k)$ and Proposition A.2 to characterize $\Lambda_{y,2}^{(0)}$ (also noting that $S_{y,2}/S_{y,1}$ is $\Theta(1)$). Thus, neither of the inequalities holds with probability $O(1/\log k)$, so we have the desired result. ∎

We can use the separation from Claim B.10 to show that, in the initial stages of training, the maximally correlated weight/feature pair grows out of the polynomial region of $\widetilde{\mathrm{ReLU}}$ and becomes large much faster than the correlations with the other feature for the same class. For $y \in [k]$, let $\ell^*$ be such that $\Lambda_{y,\ell^*}^{(0)}$ is the left-hand side of the satisfied inequality from Claim B.10. Additionally, let $r^* = \mathrm{argmax}_r\, \langle w_{y,r}^{(0)}, v_{y,\ell^*} \rangle$, i.e. the strongest weight/feature correlation pair at initialization. We will show that when $\langle w_{y,r^*}^{(t)}, v_{y,\ell^*} \rangle$ becomes $\Omega(\rho)$, the other correlations remain small. In order to do so, we need a useful lemma from (Allen-Zhu & Li, 2021) that we restate below.

Lemma B.11 (Lemma C.19 from (Allen-Zhu & Li, 2021)).

Let $q \ge 3$ be a constant and $x_0, y_0 = o(1)$. Let $\{x_t, y_t\}_{t \ge 0}$ be two positive sequences updated as:

- $x_{t+1} \ge x_t + \eta C_t x_t^{q-1}$ for some $C_t = \Theta(1)$, and
- $y_{t+1} \le y_t + \eta S C_t y_t^{q-1}$ for some constant $S = \Theta(1)$,

where $\eta = O(1/\mathrm{poly}(k))$ for a sufficiently large polynomial in $k$. Suppose $x_0 \ge y_0 S^{\frac{1}{q-2}}(1 + \Theta(\frac{1}{\mathrm{polylog}(k)}))$. For every $A = O(1)$, letting $T_x$ be the first iteration such that $x_t \ge A$, we must have that $y_{T_x} = O(y_0\,\mathrm{polylog}(k))$.
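This tensor-power style dynamic is easy to verify numerically. The sketch below is illustrative only (assumed values $q = 3$, $S = 1$, $\eta = 10^{-3}$, $A = 0.5$, and a 10% initial gap standing in for the $1 + \Theta(1/\mathrm{polylog}(k))$ factor): both sequences follow the same power-law update, but the small initial gap lets $x_t$ blow up to $A$ while $y_t$ grows by only a modest constant factor.

```python
# Illustrative run of the sequences in Lemma B.11 with q = 3, S = 1
# (updates taken with equality; toy constants, not part of the proof).
q, S, eta, A = 3, 1.0, 1e-3, 0.5
x0 = 0.01
y0 = x0 / 1.1  # x starts 10% ahead of y
x, y = x0, y0

while x < A:
    x, y = x + eta * x ** (q - 1), y + eta * S * y ** (q - 1)

assert y < 12 * y0            # y stayed within a small factor of y0 ...
assert x / y > 4 * (x0 / y0)  # ... so the initial gap was amplified
```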

To apply Lemma B.11 in our setting, we first prove the following claim.

Claim B.12.

For a class $y \in [k]$, we define the following two sequences:

$$a_{y,t} = \left(\frac{S_{y,\ell^*}}{\rho^{\alpha-1}}\right)^{\frac{1}{\alpha-2}}\langle w_{y,r^*}^{(t)}, v_{y,\ell^*} \rangle \quad \text{and} \quad b_{y,t} = \left(\frac{S_{y,3-\ell^*}}{\rho^{\alpha-1}}\right)^{\frac{1}{\alpha-2}}\langle w_{y,r}^{(t)}, v_{y,3-\ell^*} \rangle,$$

where the $r$ in the definition of $b_{y,t}$ is arbitrary. Then with probability $1 - O(1/\log k)$ there exist $C_t, S = \Theta(1)$ such that for all $t$ for which $\langle w_{y,r^*}^{(t)}, v_{y,\ell^*} \rangle < \rho/(\delta_2 - \delta_1)$:

$$a_{y,t+1} \ge a_{y,t} + \eta C_t a_{y,t}^{\alpha-1}, \qquad b_{y,t+1} \le b_{y,t} + \eta S C_t b_{y,t}^{\alpha-1}.$$

Additionally (with the same probability), we have that $a_{y,0} \ge S^{\frac{1}{\alpha-2}}(1 + \Theta(\frac{1}{\mathrm{polylog}(k)}))\,b_{y,0}$.

Proof of Claim B.12.

The update to $\langle w_{y,r^*}^{(t)}, v_{y,\ell^*} \rangle$ in this regime can be bounded as follows (using Corollary B.9 and recalling Equation B.5):

$$\begin{aligned} \langle -\eta\nabla_{w_{y,r^*}^{(t)}} J(g,\mathcal{X}), v_{y,\ell^*} \rangle &\ge \frac{\eta}{N}\sum_{i \in N_y} (1 - \phi_y(g_t(x_i))) \sum_{p \in P_{y,\ell^*}(x_i)} \beta_{i,p}^\alpha \frac{\langle w_{y,r^*}^{(t)}, v_{y,\ell^*} \rangle^{\alpha-1}}{\rho^{\alpha-1}} \\ &\quad - \frac{\eta}{N}\sum_{i \notin N_y} \phi_y(g_t(x_i)) \,\Theta\left(P\delta_4^\alpha \max_{u \neq y,\, \ell' \in [2]} \frac{\langle w_{y,r^*}^{(t)}, v_{u,\ell'} \rangle^{\alpha-1}}{\rho^{\alpha-1}}\right) \\ &\ge \eta\left(1 - \Theta\left(\frac{1}{k}\right)\right)S_{y,\ell^*}\frac{\langle w_{y,r^*}^{(t)}, v_{y,\ell^*} \rangle^{\alpha-1}}{\rho^{\alpha-1}}. \end{aligned} \quad \text{(B.13)}$$

Similarly, we have (noting also that $\langle w_{y,r}^{(t)}, v_{y,3-\ell^*} \rangle < \rho/(\delta_2 - \delta_1)$):

$$\langle -\eta\nabla_{w_{y,r}^{(t)}} J(g,\mathcal{X}), v_{y,3-\ell^*} \rangle \le \eta S_{y,3-\ell^*}\frac{\langle w_{y,r}^{(t)}, v_{y,3-\ell^*} \rangle^{\alpha-1}}{\rho^{\alpha-1}}. \quad \text{(B.14)}$$

Multiplying the above inequalities by $(S_{y,\ell^*}/\rho^{\alpha-1})^{\frac{1}{\alpha-2}}$ and $(S_{y,3-\ell^*}/\rho^{\alpha-1})^{\frac{1}{\alpha-2}}$ respectively, we see that $a_{y,t}$ and $b_{y,t}$ satisfy the inequalities in the claim with $C_t = 1 - \Theta(\frac{1}{k})$ and $S = (S_{y,3-\ell^*}/S_{y,\ell^*})(1 + \Theta(\frac{1}{k}))$. Now by Claim B.10 we have:

$$a_{y,0} \ge \left(\frac{S_{y,3-\ell^*}}{S_{y,\ell^*}}\right)^{\frac{1}{\alpha-2}}\left(1 + \Theta\left(\frac{1}{\log^2 k}\right)\right)b_{y,0} \ge S^{\frac{1}{\alpha-2}}\left(1 + \Theta\left(\frac{1}{\mathrm{polylog}(k)}\right)\right)b_{y,0},$$

so we are done. ∎

Now by the fact that $|N_y| = \Theta(N/k)$, we have $S_{y,1}, S_{y,2} = \Theta(1/k) = O(\rho)$, which implies that $(S_{y,\ell^*}/\rho^{\alpha-1})^{1/(\alpha-2)} = O(1/\rho)$. From this we get that while $a_{y,t} < C/(\delta_2 - \delta_1)$ for some appropriately chosen constant $C$, we have $\langle w_{y,r^*}^{(t)}, v_{y,\ell^*} \rangle < \rho/(\delta_2 - \delta_1)$.

Since Claim B.12 holds in this regime, we can apply Lemma B.11 with $A = C/(\delta_2 - \delta_1)$, which gives us that when $a_{y,t} \ge C/(\delta_2 - \delta_1)$, we have $b_{y,t} = O(b_{y,0}\,\mathrm{polylog}(k))$. From this we obtain that when $\langle w_{y,r^*}^{(t)}, v_{y,\ell^*} \rangle \ge \rho/(\delta_2 - \delta_1)$, the correlation $\langle w_{y,r}^{(t)}, v_{y,3-\ell^*} \rangle$ is still within a $\mathrm{polylog}(k)$ factor of $\langle w_{y,r}^{(0)}, v_{y,3-\ell^*} \rangle$ for any $r$.

Now from the same logic as the proof of Claim B.7, we can show that this separation remains throughout training.

Claim B.13.

For any class $y \in [k]$, with probability $1 - O(1/\log k)$, we have that $\max_{r \in [m]} \langle w_{y,r}^{(t)}, v_{y,3-\ell^*} \rangle = O(\mathrm{polylog}(k)\max_{r \in [m]} \langle w_{y,r}^{(0)}, v_{y,3-\ell^*} \rangle)$ for all $t = O(\mathrm{poly}(k))$, for any polynomial in $k$.

Proof of Claim B.13.

It follows from the same logic as in the proof of Claim B.6 that at the first iteration $t$ for which we have $\min_{i \in N_y} g_t^y(x_i) \ge \log k$, the correlation $\langle w_{y,r}^{(t)}, v_{y,3-\ell^*} \rangle$ is still within some $\mathrm{polylog}(k)$ factor of initialization (here $\langle w_{y,r}^{(t)}, v_{y,3-\ell^*} \rangle$ can be viewed in the same way as an off-diagonal correlation from the proof of Claim B.6). The rest of the proof then follows from logic identical to that of Claim B.7; namely, we can show that for $\langle w_{y,r}^{(t)}, v_{y,3-\ell^*} \rangle$ to grow by more than a $\mathrm{polylog}(k)$ factor we need $\omega(\mathrm{poly}(k))$ training iterations. ∎

From Claim B.13 along with Claim B.7, it follows that with probability $1 - O(1/\log k)$, for any class $y$ (after polynomially many training iterations) we have:

$$\begin{aligned} g_t^y(x_i') &= O\left(\frac{m\,\mathrm{polylog}(k)}{\sqrt{d}}\right) = O\left(\frac{k\,\mathrm{polylog}(k)}{\sqrt{d}}\right), \\ g_t^s(x_i') &= \Omega\left(\frac{P\log(k)^\alpha}{\rho^{\alpha-1}k^{2.5\alpha}}\right) = \omega\left(\frac{k\,\mathrm{polylog}(k)}{\sqrt{d}}\right) \quad \text{for } s \neq y, \text{ if } \exists\, P_{s,\ell}(x_i) \neq \emptyset, \end{aligned} \quad \text{(B.15)}$$

where $x_i'$ is any point $x_i$ with $i \in N_y$ modified so that all instances of feature $v_{y,\ell^*}$ are replaced by 0, and the second line above follows from the fact that by Claim B.7 we must have $\langle w_{s,r}^{(t)}, v_{s,\ell} \rangle = \Omega(\log(k)/k)$ for at least some $r, \ell$ for every $s$ (and $t = \Theta(k^{20})$). This proves that feature $v_{y,3-\ell^*}$ is not learned in the sense of Definition 4.5.

Using Claim B.13 for each class, we have by a Chernoff bound that, with probability at least $1 - o(1/k)$, only a single feature is learned for $(1 - o(1))k$ of the classes, which proves the theorem. ∎

B.2 Proof of Theorem 4.7

See 4.7

Proof.

As in the proof of Theorem 4.6, we break the proof into two parts. The first part mirrors most of the structure of Part I of the proof of Theorem 4.6, in that we analyze the off-diagonal correlations and also show that the network outputs $g_t^y$ can grow to (and remain) $\Omega(\log k)$ as training progresses. However, we do not show that the outputs stay $O(\log k)$ in Part I (as we did in the ERM case), as there are additional subtleties in the Midpoint Mixup analysis that require different techniques, which we find are more easily introduced separately.

The second part of the proof differs significantly from Part II of the proof of Theorem 4.6, as our goal is now to show that any large separation between the weight-feature correlations for each class is corrected over the course of training. At a high level, we show this by proving a gradient correlation lower bound that depends only on the magnitude of the separation between correlations and the variance of the feature coefficients in the data distribution, after which we can conclude that any feature lagging behind will catch up in polynomially many training iterations. We then use the techniques from the gradient lower bound analysis to prove that the network outputs $g_t^y$ stay $O(\log k)$ throughout training, which wraps up the proof.

Part I. We first recall that $z_{i,j} = (x_i + x_j)/2$, and we refer to such $z_{i,j}$ as "mixed points". In this part of the proof, we show that $g_t^{y_i}(z_{i,j})$ crosses $\log k$ on at least one mixed point $z_{i,j}$ in polynomially many iterations (after which the network outputs remain $\Omega(\log k)$). As before, this requires getting a handle on the off-diagonal correlations $\langle w_{y,r}^{(t)}, v_{s,\ell} \rangle$ (with $s \neq y$).

Throughout the proof, we will continue to rely on the notation introduced in Equation B.1 in the proof of Theorem 4.6. However, we make one slight modification to the definition of $B_{y,\ell}(t)$ for the Mixup case (so as to be able to handle mixed points), which is as follows:

$$B_{y,\ell}(t) = \{r : r \in [m] \text{ and } \langle w_{y,r}^{(t)}, v_{y,\ell} \rangle \ge 2\rho/\delta_1\}. \quad \text{(B.16)}$$

We again start by proving a claim that will constitute our setting for the rest of this proof. We reiterate that when we write inequalities with respect to asymptotic notation, we are implicitly making some fixed choice of bounding function, so that negations of such statements are well-defined.

Claim B.14.

With probability $1 - O(1/k)$, all of the following are (simultaneously) true for every class $y \in [k]$:

- $|N_y| = \Theta(N/k)$;
- $\max_{s \in [k], r \in [m], \ell \in [2]} \langle w_{s,r}^{(0)}, v_{y,\ell} \rangle = O(\sqrt{\log k / d})$;
- $\forall \ell \in [2]$, $\max_{r \in [m]} \langle w_{y,r}^{(0)}, v_{y,\ell} \rangle = \Omega(1/\sqrt{d})$;
- for $\Omega(k)$ tuples $(s, \ell) \in [k] \times [2]$ we have $\langle w_{y,r}^{(0)}, v_{s,\ell} \rangle > 0$.

Proof of Claim B.14.

The first three items in the claim are exactly the same as in Claim B.1, and the last item is true because the correlations $\langle w_{y,r}^{(0)}, v_{s,\ell} \rangle$ are mean-zero Gaussians. ∎

Once again, in everything that follows, we will always assume the conditions of Claim B.14 unless otherwise stated. We now translate Claim B.2 to the Midpoint Mixup setting.

Claim B.15.

Consider $i \in N_y$ and $j \in N_s$ for $s \neq y$, and suppose that $\max_{u \notin \{y,s\}} g_t^u(z_{i,j}) = O(\log(k)/k)$ holds true. If we have $g_t^y(z_{i,j}) = a\log k$ and $g_t^s(z_{i,j}) = b\log k$ for $a, b = O(1)$, then:

$$1 - 2\phi_y(g_t(z_{i,j})) = \begin{cases} -\Omega(1) & \text{if } a > 1,\ a - b = \Omega(1) \\ \pm O(1) & \text{if } a > 1,\ a - b = \pm o(1) \\ \Theta(1) & \text{otherwise,} \end{cases}$$

where in the second case above the sign of $1 - 2\phi_y(g_t(z_{i,j}))$ depends on the sign of $a - b$.

Proof of Claim B.15.

In comparison to Claim B.2, the Midpoint Mixup case is slightly more involved in that $g_t^s(z_{i,j})$ can be quite large due to the $x_j$ part of $z_{i,j}$. As a result, we directly assume some control over the different class outputs on the mixed points (which we will later prove to hold throughout training). By assumption, we have for $u \neq y, s$:

$$g_t^u(z_{i,j}) = O(\log(k)/k) \implies \exp(g_t^u(z_{i,j})) \le 1 + O(\log(k)/k), \quad \text{(B.17)}$$

where above we used the inequality $\exp(x) \le 1 + x + x^2$ for $x \in [0,1]$. Now by the assumptions that $g_t^y(z_{i,j}) = a\log k$ and $g_t^s(z_{i,j}) = b\log k$, we have:

$$1 - 2\phi_y(g_t(z_{i,j})) \le 1 - \frac{2k^a}{k^a + k^b + (k-2) + o(k)} = \frac{k^b - k^a + (k-2) + o(k)}{k^a + k^b + (k-2) + o(k)}. \quad \text{(B.18)}$$

From which the result follows. ∎
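The case analysis in Claim B.15 can be sanity-checked numerically. The sketch below is illustrative only (assumed $k = 1000$, with the $k - 2$ background classes given logit exactly 0), evaluating $1 - 2\phi_y$ at a few $(a, b)$ pairs:

```python
import math

# Numerical illustration of Claim B.15 (toy values, not part of the proof):
# with logits a*log(k) and b*log(k) for classes y and s and ~0 elsewhere,
# the quantity 1 - 2*phi_y flips sign depending on a and a - b.
k = 1000

def one_minus_two_phi(a, b):
    ey, es = k ** a, k ** b
    return 1 - 2 * ey / (ey + es + (k - 2))

assert one_minus_two_phi(2.0, 1.0) < -0.5       # a > 1, a - b = Omega(1)
assert abs(one_minus_two_phi(2.0, 2.0)) < 0.01  # a > 1, a - b = o(1)
assert one_minus_two_phi(0.5, 0.5) > 0.5        # a <= 1: Theta(1), positive
```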

Corollary B.16.

Under the same conditions as Claim B.15, for $u \neq y, s$ we have:

$$1 - 2\phi_u(g_t(z_{i,j})) = \Theta(1).$$

Proof of Corollary B.16.

Follows from Equations B.17 and B.18. ∎

We observe that Claim B.15 and Corollary B.16 are less precise than Claim B.2, largely because there is now a dependence on the gap between the class $y$ and class $s$ network outputs, as opposed to just the class $y$ network output. We are now again ready to compare the growth of the diagonal correlations $\langle w_{y,r}^{(t)}, v_{y,\ell} \rangle$ with the off-diagonal correlations $\langle w_{y,r}^{(t)}, v_{s,\ell} \rangle$. However, this is not as straightforward as it was in the ERM setting: the issue is that the off-diagonal correlations can actually grow significantly, due to the fact that the features $v_{y,\ell}$ can show up when mixing points in class $y$ with class $s$.

Claim B.17.

Fix an arbitrary class $y \in [k]$. Let $A \in [\Omega(\rho), \rho/(\delta_2 - \delta_1)]$ and let $T_A$ be the first iteration at which $\max_{r \in [m], \ell \in [2]} \langle w_{y,r}^{(T_A)}, v_{y,\ell} \rangle \ge A$; we must have both that $T_A = O(\mathrm{poly}(k))$ and that, for every $s \neq y$ and $\ell \in [2]$:

$$\langle w_{y,r}^{(T_A)}, v_{s,\ell} \rangle = O\left(\max_{\ell' \in [2]} \langle w_{y,r}^{(T_A)}, v_{y,\ell'} \rangle / k\right).$$

Additionally, for all $s, \ell$ with $\langle w_{y,r}^{(0)}, v_{s,\ell} \rangle > 0$, we have that:

$$\langle w_{y,r}^{(T_A)}, v_{s,\ell} \rangle = \Omega\left(\frac{\langle w_{y,r}^{(0)}, v_{s,\ell} \rangle}{\mathrm{polylog}(k)}\right).$$

Proof of Claim B.17.

By our setting, we must have that there exists a diagonal correlation ⟨ 𝑀 𝑦 , π‘Ÿ βˆ— ( 0 ) , 𝑣 𝑦 , β„“ βˆ— ⟩

Ξ© ⁒ ( 1 / 𝑑 ) , which we will focus our attention on. Using Calculation A.10 and the ideas from Calculation A.7, we can lower bound the update to ⟨ 𝑀 𝑦 , π‘Ÿ βˆ— ( 𝑑 ) , 𝑣 𝑦 , β„“ βˆ— ⟩ from initialization up to 𝑇 𝐴 as:

⟨ βˆ’ πœ‚ ⁒ βˆ‡ 𝑀 𝑦 , π‘Ÿ βˆ— ( 𝑑 ) 𝐽 𝑀 ⁒ 𝑀 ⁒ ( 𝑔 , 𝒳 ) , 𝑣 𝑦 , β„“ βˆ— ⟩

β‰₯ πœ‚ 𝑁 2 ⁒ βˆ‘ 𝑖 ∈ 𝑁 𝑦 βˆ‘ 𝑗 βˆ‰ 𝑁 𝑦 ( 1 βˆ’ 2 ⁒ πœ™ 𝑦 ⁒ ( ( 𝑔 𝑑 ⁒ ( 𝑧 𝑖 , 𝑗 ) ) ) ) ⁒ Θ ⁒ ( ⟨ 𝑀 𝑦 , π‘Ÿ βˆ— ( 𝑑 ) , 𝑣 𝑦 , β„“ βˆ— ⟩ 𝛼 βˆ’ 1 𝜌 𝛼 βˆ’ 1 )

  • πœ‚ 𝑁 2 ⁒ βˆ‘ 𝑖 ∈ 𝑁 𝑦 βˆ‘ 𝑗 ∈ 𝑁 𝑦 ( 1 βˆ’ πœ™ 𝑦 ⁒ ( ( 𝑔 𝑑 ⁒ ( 𝑧 𝑖 , 𝑗 ) ) ) ) ⁒ Θ ⁒ ( ⟨ 𝑀 𝑦 , π‘Ÿ βˆ— ( 𝑑 ) , 𝑣 𝑦 , β„“ βˆ— ⟩ 𝛼 βˆ’ 1 𝜌 𝛼 βˆ’ 1 )

βˆ’ πœ‚ 𝑁 2 ⁒ βˆ‘ 𝑖 βˆ‰ 𝑁 𝑦 βˆ‘ 𝑗 βˆ‰ 𝑁 𝑦 πœ™ 𝑦 ⁒ ( 𝑔 𝑑 ⁒ ( 𝑧 𝑖 , 𝑗 ) ) ⁒ Θ ⁒ ( 𝑃 𝜎 4 𝛼 max 𝑒 ∈ [ π‘˜ ] , π‘ž ∈ [ 2 ] ⟨ 𝑀 𝑦 , π‘Ÿ βˆ— ( 𝑑 ) , 𝑣 𝑒 , π‘ž ⟩ 𝛼 βˆ’ 1 𝜌 𝛼 βˆ’ 1 )

(B.19)

Above we made use of the fact that, for 𝑖 ∈ 𝑁 𝑦 and 𝑗 βˆ‰ 𝑁 𝑦 , we have ⟨ 𝑀 𝑦 , π‘Ÿ βˆ— ( 𝑑 ) , 𝑧 𝑖 , 𝑗 ( 𝑝 ) ⟩ β‰₯ ⟨ 𝑀 𝑦 , π‘Ÿ βˆ— ( 𝑑 ) , 𝑣 𝑦 , β„“ βˆ— ⟩ / 2 for at least Θ ⁒ ( | 𝑁 𝑦 | ⁒ 𝑁 ) mixed points since the correlation ⟨ 𝑀 𝑦 , π‘Ÿ βˆ— ( 𝑑 ) , 𝑣 𝑒 , π‘ž ⟩ is positive for Ξ© ⁒ ( π‘˜ ) tuples ( 𝑒 , π‘ž ) ∈ [ π‘˜ ] Γ— [ 2 ] (under Setting B.14). We can similarly upper bound the update to ⟨ 𝑀 𝑦 , π‘Ÿ ( 𝑑 ) , 𝑣 𝑠 , β„“ ⟩ for an arbitrary π‘Ÿ ∈ [ π‘š ] as:

⟨ βˆ’ πœ‚ ⁒ βˆ‡ 𝑀 𝑦 , π‘Ÿ ( 𝑑 ) 𝐽 𝑀 ⁒ 𝑀 ⁒ ( 𝑔 , 𝒳 ) , 𝑣 𝑠 , β„“ ⟩

≀ βˆ’ πœ‚ 𝑁 2 ⁒ βˆ‘ 𝑖 βˆ‰ 𝑁 𝑦 βˆ‘ 𝑗 ∈ 𝑁 𝑠 πœ™ 𝑦 ⁒ ( 𝑔 𝑑 ⁒ ( 𝑧 𝑖 , 𝑗 ) ) ⁒ Θ ⁒ ( ⟨ 𝑀 𝑦 , π‘Ÿ ( 𝑑 ) , 𝑣 𝑠 , β„“ ⟩ 𝛼 βˆ’ 1 𝜌 𝛼 βˆ’ 1 )

βˆ’ πœ‚ 𝑁 2 ⁒ βˆ‘ 𝑖 βˆ‰ 𝑁 𝑦 βˆͺ 𝑁 𝑠 βˆ‘ 𝑗 βˆ‰ 𝑁 𝑦 βˆͺ 𝑁 𝑠 πœ™ 𝑦 ⁒ ( 𝑔 𝑑 ⁒ ( 𝑧 𝑖 , 𝑗 ) ) ⁒ Θ ⁒ ( 𝑃 𝛿 3 𝛼 min 𝑒 ∈ [ π‘˜ ] , π‘ž ∈ [ 2 ] ⟨ 𝑀 𝑦 , π‘Ÿ ( 𝑑 ) , 𝑣 𝑒 , π‘ž ⟩ 𝛼 βˆ’ 1 𝜌 𝛼 βˆ’ 1 )

  • πœ‚ 𝑁 2 ⁒ βˆ‘ 𝑖 ∈ 𝑁 𝑦 βˆ‘ 𝑗 ∈ 𝑁 𝑠 ( 1 βˆ’ 2 ⁒ πœ™ 𝑦 ⁒ ( 𝑔 𝑑 ⁒ ( 𝑧 𝑖 , 𝑗 ) ) ) ⁒ Θ ⁒ ( max π‘ž ∈ [ 2 ] ⟨ 𝑀 𝑦 , π‘Ÿ ( 𝑑 ) , 𝑣 𝑠 , β„“

  • 𝑣 𝑦 , π‘ž ⟩ 𝛼 βˆ’ 1 𝜌 𝛼 βˆ’ 1 )

  • πœ‚ 𝑁 2 ⁒ βˆ‘ 𝑖 ∈ 𝑁 𝑦 βˆ‘ 𝑗 βˆ‰ 𝑁 𝑦 βˆͺ 𝑁 𝑠 ( 1 βˆ’ 2 ⁒ πœ™ 𝑦 ⁒ ( 𝑔 𝑑 ⁒ ( 𝑧 𝑖 , 𝑗 ) ) ) ⁒ Θ ⁒ ( 𝛿 4 max 𝑒 ∈ [ π‘˜ ] , π‘ž ∈ [ 2 ] ⟨ 𝑀 𝑦 , π‘Ÿ ( 𝑑 ) , 𝑣 𝑒 , π‘ž ⟩ 𝛼 βˆ’ 1 𝜌 𝛼 βˆ’ 1 )

  • πœ‚ 𝑁 2 ⁒ βˆ‘ 𝑖 ∈ 𝑁 𝑦 βˆ‘ 𝑗 ∈ 𝑁 𝑦 ( 1 βˆ’ πœ™ 𝑦 ⁒ ( 𝑔 𝑑 ⁒ ( 𝑧 𝑖 , 𝑗 ) ) ) ⁒ Θ ⁒ ( 𝑃 𝛿 4 𝛼 max 𝑒 β‰  𝑦 , π‘ž ∈ [ 2 ] ⟨ 𝑀 𝑦 , π‘Ÿ ( 𝑑 ) , 𝑣 𝑒 , π‘ž ⟩ 𝛼 βˆ’ 1 𝜌 𝛼 βˆ’ 1 ) .

(B.20)

As the above may be rather difficult to parse at first glance, let us take a moment to unpack the individual terms on the RHS. The first two terms are a precise splitting of the $-2\phi_y(g_t)$ term from Calculation A.10; namely, the case where we mix with the class $s$ allows for constant size coefficients on the feature $v_{s,\ell}$, while the other cases only allow for $v_{s,\ell}$ to show up in the feature noise patches. The next three terms consider all cases of mixing with the class $y$. The first of these terms considers the case of mixing class $y$ with class $s$, in which case it is possible to have patches in $z_{i,j}$ that have both $v_{s,\ell}$ and $v_{y,\ell^*}$ with constant coefficients. The next term considers mixing class $y$ with a class that is neither $y$ nor $s$, in which case the feature $v_{s,\ell}$ can only show up when mixing with a feature noise patch, so we suffer a factor of at least $\delta_4 = \Theta(1/k^{1.5})$ (note we do not suffer a $\delta_4^{\alpha}$ factor, as $v_{y,\ell^*}$ can still be in $z_{i,j}$) from the $\langle z_{i,j}, v_{s,\ell} \rangle$ part of the gradient. Finally, the last term considers mixing within class $y$.

The first of the three positive terms on the RHS of Equation B.20 presents the main problem: the fact that the diagonal correlations can show up non-trivially in the off-diagonal correlation gradient means the gradients can be much larger than in the ERM case. However, the key is that there are only $\Theta(N^2/k^2)$ mixed points between classes $y$ and $s$. Thus, once more using the fact that $|N_u| = \Theta(N/k)$ for every $u \in [k]$, the other conditions in our setting, and Claim B.15, we obtain that for all $t \leq T_A$:

⟨ βˆ’ πœ‚ ⁒ βˆ‡ 𝑀 𝑦 , π‘Ÿ βˆ— ( 𝑑 ) 𝐽 𝑀 ⁒ 𝑀 ⁒ ( 𝑔 , 𝒳 ) , 𝑣 𝑦 , β„“ βˆ— ⟩

β‰₯ Θ ⁒ ( πœ‚ ⁒ ⟨ 𝑀 𝑦 , π‘Ÿ βˆ— ( 𝑑 ) , 𝑣 𝑦 , β„“ βˆ— ⟩ 𝛼 βˆ’ 1 π‘˜ ⁒ 𝜌 𝛼 βˆ’ 1 )

(B.21)

⟨ βˆ’ πœ‚ ⁒ βˆ‡ 𝑀 𝑦 , π‘Ÿ ( 𝑑 ) 𝐽 𝑀 ⁒ 𝑀 ⁒ ( 𝑔 , 𝒳 ) , 𝑣 𝑠 , β„“ ⟩

≀ Θ ⁒ ( πœ‚ max 𝑒 ∈ { 𝑠 , 𝑦 } , π‘ž ∈ [ 2 ] ⟨ 𝑀 𝑦 , π‘Ÿ ( 𝑑 ) , 𝑣 𝑒 , π‘ž ⟩ 𝛼 βˆ’ 1 π‘˜ 2 ⁒ 𝜌 𝛼 βˆ’ 1 )

(B.22)

Crucially, we have that Equation B.22 is a $\Theta(1/k)$ factor smaller than Equation B.21. Recalling that all correlations are $O(\log(k)/d)$ at initialization, we see that the difference in the updates in Equations B.21 and B.22 is at least of the same order as Equation B.21. Thus, in $O(\mathrm{poly}(k))$ iterations, it follows that $\langle w_{y,r^*}^{(t)}, v_{y,\ell^*} \rangle > \langle w_{y,r}^{(t)}, v_{s,\ell} \rangle$ (this necessarily occurs for a $t < T_A$ by definition of $A$ and comparison to the bounds above), after which it follows from Equations B.21 and B.22 that $\langle w_{y,r}^{(T_A)}, v_{s,\ell} \rangle = O(\langle w_{y,r^*}^{(T_A)}, v_{y,\ell^*} \rangle / k)$ (and clearly $T_A = O(\mathrm{poly}(k))$). This proves the first part of the claim.
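The two-timescale behavior above (diagonal updates of order $\eta \langle \cdot \rangle^{\alpha-1}/(k\rho^{\alpha-1})$ against off-diagonal updates a further $1/k$ factor smaller) can be illustrated with a toy recursion. This is only a numerical sketch with made-up constants, not the paper's actual training dynamics:

```python
# Toy sketch of Equations B.21/B.22: a "diagonal" correlation a gets updates
# ~ eta * a^(alpha-1) / k, while an "off-diagonal" correlation b gets updates
# ~ eta * b^(alpha-1) / k^2. All constants here are illustrative.
def run(k=50, alpha=3, eta=0.5, init=0.05, max_steps=200000):
    a = b = init
    for _ in range(max_steps):
        a += eta * a ** (alpha - 1) / k
        b += eta * b ** (alpha - 1) / k ** 2
        if a >= 1.0:  # stop once the diagonal correlation is large
            break
    return a, b

a, b = run()
print(a, b)
```

In this toy run the off-diagonal value stays near its initialization by the time the diagonal value reaches a constant, mirroring the $O(\langle w_{y,r^*}^{(T_A)}, v_{y,\ell^*}\rangle/k)$ bound at time $T_A$.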

It remains to show that the off-diagonal correlations also do not decrease by too much, as if they were to become negative that would potentially cause problems in Equation B.19 due to $\widetilde{\mathrm{ReLU}}'$ becoming 0. Using Equation B.20, we can form the following lower bound on the update to $\langle w_{y,r}^{(t)}, v_{s,\ell} \rangle$:

$$
\langle -\eta \nabla_{w_{y,r}^{(t)}} J_{MM}(g, \mathcal{X}), v_{s,\ell} \rangle \geq -\Theta\!\left(\frac{\eta \, \langle w_{y,r}^{(t)}, v_{s,\ell} \rangle^{\alpha-1}}{k^2 \rho^{\alpha-1}}\right).
$$

(B.23)

Now let 𝑇 denote the number of iterations starting from initialization that it takes ⟨ 𝑀 𝑦 , π‘Ÿ ( 𝑑 ) , 𝑣 𝑠 , β„“ ⟩ to decrease to ⟨ 𝑀 𝑦 , π‘Ÿ ( 0 ) , 𝑣 𝑠 , β„“ ⟩ / polylog ⁒ ( π‘˜ ) for some fixed polylog ⁒ ( π‘˜ ) factor. Then it follows from Equation B.21 that in 𝑇 iterations ⟨ 𝑀 𝑦 , π‘Ÿ βˆ— ( 𝑑 ) , 𝑣 𝑦 , β„“ βˆ— ⟩ has increased by at least a π‘˜ / polylog ⁒ ( π‘˜ ) factor. As a result, we have that at 𝑇 𝐴 the correlation ⟨ 𝑀 𝑦 , π‘Ÿ ( 𝑑 ) , 𝑣 𝑠 , β„“ ⟩ has decreased by at most a polylog ⁒ ( π‘˜ ) 𝐢 factor for some universal constant 𝐢 , proving the claim. ∎

Corollary B.18.

For any class 𝑦 ∈ [ π‘˜ ] , and any 𝑑 β‰₯ 𝑇 𝐴 (for any 𝑇 𝐴 satisfying the definition in Claim B.17), we have for any 𝑠 β‰  𝑦 and β„“ ∈ [ 2 ] :

⟨ 𝑀 𝑦 , π‘Ÿ ( 𝑑 ) , 𝑣 𝑠 , β„“ ⟩

𝑂 ⁒ ( max β„“ β€² ∈ [ 2 ] ⁑ ⟨ 𝑀 𝑦 , π‘Ÿ ( 𝑑 ) , 𝑣 𝑦 , β„“ β€² ⟩ / π‘˜ )

Additionally, for all 𝑠 , β„“ with ⟨ 𝑀 𝑦 , π‘Ÿ ( 0 ) , 𝑣 𝑠 , β„“ ⟩

0 , we have that ⟨ 𝑀 𝑦 , π‘Ÿ ( 𝑑 ) , 𝑣 𝑠 , β„“ ⟩

0 .

Proof of Corollary B.18.

The $O(1/k)$ factor separation between the updates to diagonal and off-diagonal correlations shown in Equations B.21 and B.22 continues to hold once we pass into the linear regime of $\widetilde{\mathrm{ReLU}}$. Furthermore, the logic used to prove the lower bound for positive correlations in Claim B.17 easily extends to showing that the correlations remain positive throughout training. ∎

As noted above, the bound on the off-diagonal correlations obtained in Claim B.17 and Corollary B.18 is much weaker than what it was in Claim B.4, which is why we weakened the assumptions in Claim B.15. We now prove the Midpoint Mixup analogues to Claims B.5, B.6, and B.7.

Claim B.19.

Consider $y \in [k]$ and $t$ such that $\max_{i \in N_y, j \in [N]} g_t^y(z_{i,j}) = \Theta(\log k)$. Then $\max_{i \in N_y, j \in [N]} g_t^y(z_{i,j}) = \Theta\!\left(\min_{i \in N_y, j \in [N]} g_t^y(z_{i,j})\right)$.

Proof of Claim B.19.

For any $t$ satisfying the conditions of the claim, we necessarily have that Corollary B.18 holds. As a result, we have:

$$
\sum_{r \in [m]} \sum_{\ell \in [2]} \langle w_{y,r}^{(t)}, v_{y,\ell} \rangle = O(\log k) \implies \sum_{r \in [m]} \sum_{\ell \in [2]} \langle w_{y,r}^{(t)}, v_{s,\ell} \rangle = O(\log(k)/k).
$$

Thus, we may disregard the off-diagonal correlations in considering the class $y$ output on $z_{i,j}$ (i.e. we do not need to worry about the $x_j$ part of $z_{i,j}$), and the rest is identical to Claim B.5. ∎

Claim B.20.

For each $y \in [k]$, let $T_y$ denote the first iteration at which $\max_{i \in N_y, j \notin N_y} g_{T_y}^y(z_{i,j}) \geq \log(k-1)$. Then we have that $T_y = O(\mathrm{poly}(k))$ (for a sufficiently large polynomial in $k$) and that $\min_{i \in N_y, j \in [N]} g_t^y(z_{i,j}) = \Omega(\log k)$ for all $t \geq T_y$.

Proof of Claim B.20.

As in the proof of Claim B.6, applying Claim B.17 to any class $y$ yields the existence of a correlation $\langle w_{y,r^*}^{(t)}, v_{y,\ell^*} \rangle$ and a $t = O(\mathrm{poly}(k))$ such that $\langle w_{y,r^*}^{(t)}, v_{y,\ell^*} \rangle = \rho/(\delta_2 - \delta_1)$. And again, reusing the logic of Claim B.17 but replacing $\langle w_{y,r^*}^{(t)}, v_{y,\ell^*} \rangle$ in Equation B.21 with $\rho/(\delta_2 - \delta_1)$ yields that in an additional $O(\mathrm{poly}(k))$ iterations we have $\langle w_{y,r^*}^{(t)}, v_{y,\ell^*} \rangle = 2\rho/\delta_1$ (implying that $w_{y,r^*}^{(t)}$ has reached the linear regime of $\widetilde{\mathrm{ReLU}}$ on effectively all mixed points), while the off-diagonal correlations continue to lag behind by a $O(1/k)$ factor.

At this point we may lower bound the update to $\langle w_{y,r^*}^{(t)}, v_{y,\ell^*} \rangle$ as:

$$
\begin{aligned}
\langle -\eta \nabla_{w_{y,r^*}^{(t)}} J_{MM}(g, \mathcal{X}), v_{y,\ell^*} \rangle
\geq{} & \frac{\eta}{N^2} \sum_{i \in N_y} \sum_{j \notin N_y} \Theta\!\left(1 - 2\phi_y(g_t(z_{i,j}))\right) \\
& + \frac{\eta}{N^2} \sum_{i \in N_y} \sum_{j \in N_y} \Theta\!\left(1 - \phi_y(g_t(z_{i,j}))\right) \\
& - \frac{\eta}{N^2} \sum_{i \notin N_y} \sum_{j \notin N_y} \phi_y(g_t(z_{i,j})) \, \Theta\!\left(P \delta_4^{\alpha} \max_{u \in [k], q \in [2]} \frac{\langle w_{y,r^*}^{(t)}, v_{u,q} \rangle^{\alpha-1}}{\rho^{\alpha-1}}\right).
\end{aligned}
$$

(B.24)

Using Claim B.15, we have that so long as $\max_{i \in N_y, j \notin N_y} g_t^y(z_{i,j}) < \log(k-1)$, we get (using $|N_u| = \Theta(N/k)$ for all $u \in [k]$):

$$
\langle -\eta \nabla_{w_{y,r^*}^{(t)}} J_{MM}(g, \mathcal{X}), v_{y,\ell^*} \rangle \geq \Theta(\eta/k).
$$

(B.25)

This also implies, by the logic of Claim B.17, that the off-diagonal correlations $\langle w_{y,r}^{(t)}, v_{s,\ell} \rangle$ have updates that can be upper bounded as:

$$
\langle -\eta \nabla_{w_{y,r}^{(t)}} J_{MM}(g, \mathcal{X}), v_{s,\ell} \rangle \leq \Theta(\eta/k^2).
$$

(B.26)

Comparing Equations B.25 and B.26, we have that $g_t^y(z_{i,j}) \geq \log(k-1)$ (and, consequently, $g_t^y(x_i) \geq \log(k-1)$) for at least one mixed point $z_{i,j}$ with $i \in N_y$ in $O(\mathrm{poly}(k))$ iterations, while the off-diagonal correlations are $O(\log(k)/k)$. This also implies that $\min_{i \in N_y, j \in [N]} g_t^y(z_{i,j}) = \Omega(\log k)$ by Claim B.19. Finally, since Equation B.25 is positive, the class $y$ network outputs remain $\Omega(\log k)$ for $t \geq T_y$ (as again we cannot decrease below $\log(k-1)$ by more than $o(1)$, since the gradients are $o(1)$). ∎

Part II. Having analyzed the growth of diagonal and off-diagonal correlations in the initial stages of training, we now shift gears to focusing on the gaps between the correlations for each class. The key idea is that 𝐽 𝑀 ⁒ 𝑀 will push the correlations for the features 𝑣 𝑦 , 1 and 𝑣 𝑦 , 2 closer together throughout training (so long as they are sufficiently separated), for every class 𝑦 .

In order to prove this, we will rely on analyzing an expectation form of the gradient for 𝐽 𝑀 ⁒ 𝑀 . As the expressions involved in this analysis can become cumbersome quite quickly, we will first introduce a slew of new notation to make the presentation of the results a bit easier.

Firstly, in everything that follows, we assume 𝑣 𝑦 , 1 to be the better correlated feature at time 𝑑 for every class 𝑦 ∈ [ π‘˜ ] in the following sense:

βˆ‘ π‘Ÿ ∈ 𝐡 𝑦 , 1 ( 𝑑 ) ⟨ 𝑀 𝑦 , π‘Ÿ ( 𝑑 ) , 𝑣 𝑦 , 1 ⟩ β‰₯ βˆ‘ π‘Ÿ ∈ 𝐡 𝑦 , 2 ( 𝑑 ) ⟨ 𝑀 𝑦 , π‘Ÿ ( 𝑑 ) , 𝑣 𝑦 , 2 ⟩ .

(B.27)

Here the sets $B_{y,\ell}^{(t)}$ are as defined in Equation B.16. As the quantities in Equation B.27 will be repeatedly referenced, we define:

$$
C_{y,\ell}^{(t)} \triangleq \sum_{r \in B_{y,\ell}^{(t)}} \langle w_{y,r}^{(t)}, v_{y,\ell} \rangle
$$

(B.28)

$$
\Delta_y^{(t)} \triangleq C_{y,1}^{(t)} - C_{y,2}^{(t)}.
$$

(B.29)

Now for the aforementioned expectation analysis we introduce several relevant random variables. We use 𝛽 𝑦 , 𝑝 (for every 𝑦 ∈ [ π‘˜ ] ) to denote a random variable following the distribution of the signal coefficients for class 𝑦 from Definition 4.1 and we further use 𝛽 𝑦 to denote a random variable representing the sum of 𝐢 𝑃 i.i.d. 𝛽 𝑦 , 𝑝 . Similarly, we use 𝑧 𝑦 , 𝑠 to denote the average/midpoint of two random variables following the distributions of class 𝑦 and class 𝑠 points respectively. Finally, we define 𝐴 1 ⁒ ( 𝛽 𝑠 , 𝛽 𝑦 ) and 𝐴 2 ⁒ ( 𝛽 𝑠 , 𝛽 𝑦 ) as:

$$
A_1(\beta_s, \beta_y) \triangleq 1 - 2\phi_y(g_t(z_{y,s})),
$$

(B.30)

$$
A_2(\beta_s, \beta_y) \triangleq 1 - \phi_y(g_t(z_{y,s})).
$$

(B.31)

In context, this notation will imply that $A_1(\beta_y, \beta_s) = 1 - 2\phi_s(g_t(z_{y,s}))$ (i.e. swapping the order of arguments changes which coordinate of the softmax is being considered). Note that the way we have defined $A_1$ and $A_2$ in Equations B.30 and B.31 is a slight abuse of notation; the RHS of each equation is a function of the random variables $\beta_{u,p}$ for each $u \in [k]$. However, only the linear terms involving classes $y$ and $s$ will be asymptotically relevant, and so we write $A_1(\beta_s, \beta_y)$ and $A_2(\beta_s, \beta_y)$ to emphasize this.

Now we will first prove an upper bound on the difference of gradient correlations in the linear regime, and then use these ideas to prove that correlations in the polynomial part of $\widetilde{\mathrm{ReLU}}$ will still receive a significant gradient. After we have done that, we will revisit this next claim to show that the separation between feature correlations continues to decrease even after they reach the linear regime.

Claim B.21.

Suppose that $\max_{s \in [k]} C_{s,1}^{(t)} = O(\log k)$. Then for any class $y \in [k]$ and any $r_1 \in B_{y,1}^{(t)}$ and $r_2 \in B_{y,2}^{(t)}$, we let:

$$
\Psi(r_1, r_2) \triangleq \langle -\nabla_{w_{y,r_1}^{(t)}} J_{MM}(g, \mathcal{X}), v_{y,1} \rangle - \langle -\nabla_{w_{y,r_2}^{(t)}} J_{MM}(g, \mathcal{X}), v_{y,2} \rangle.
$$

(B.32)

We then have that:

$$
\begin{aligned}
\Psi(r_1, r_2) \leq{} & \Theta\!\left(\frac{1}{k^2}\right) \sum_{s \neq y} \mathbb{E}_{\beta_s, \beta_y}\!\left[A_1(\beta_s, \beta_y)(\beta_y - C_P \delta_2/2)\right] \\
& + \Theta\!\left(\frac{1}{k^2}\right) \mathbb{E}_{\beta_y}\!\left[A_2(\beta_y, \beta_y)(\beta_y - C_P \delta_2/2)\right] \\
& + O\!\left(P \delta_4^{\alpha} (\log k)^{\alpha-1}\right).
\end{aligned}
$$

(B.33)

Proof of Claim B.21.

Using the logic from Equation B.24, as well as the fact that $r_1 \in B_{y,1}^{(t)}$ and $r_2 \in B_{y,2}^{(t)}$ (i.e. we are considering weights in the linear regime of $\widetilde{\mathrm{ReLU}}$ for each feature), we get:

$$
\begin{aligned}
\Psi(r_1, r_2) \leq{} & \frac{1}{N^2} \sum_{i \in N_y} \sum_{j \notin N_y} \left(1 - 2\phi_y(g_t(z_{i,j}))\right) \left(\sum_{p \in P_{y,1}(x_i)} \beta_{i,p} - C_P \delta_2/2\right) \\
& + \frac{1}{N^2} \sum_{i \in N_y} \sum_{j \in N_y} \left(1 - \phi_y(g_t(z_{i,j}))\right) \left(\sum_{p \in P_{y,1}(x_i)} \beta_{i,p} - C_P \delta_2/2\right) \\
& + O\!\left(P \delta_4^{\alpha} (\log k)^{\alpha-1}\right).
\end{aligned}
$$

(B.34)

Now since we took 𝑁 sufficiently large in Assumption 4.3, by concentration for bounded random variables we can replace the expressions on the RHS above with their expected values, as the deviation will be within 𝑂 ⁒ ( 𝑃 ⁒ 𝛿 4 𝛼 ⁒ ( log ⁑ π‘˜ ) 𝛼 βˆ’ 1 ) (with probability 1 βˆ’ 𝑂 ⁒ ( 1 / π‘˜ ) , consistent with our setting).

The expectations will be over all of the random variables 𝛽 𝑒 , 𝑝 for 𝑒 ∈ [ π‘˜ ] , not just the summation random variables 𝛽 𝑦 and 𝛽 𝑠 . However, we observe that for the mixed point random variable 𝑧 𝑦 , 𝑠 , the 𝛽 𝑒 , 𝑝 for 𝑒 β‰  𝑦 , 𝑠 can only show up in the feature noise patches of 𝑧 𝑦 , 𝑠 . By the same logic as the calculation controlling the feature noise contribution to the gradient above (once again, the relevant calculation is in Equation B.24), the contribution from these terms to the expectations will be within the last asymptotic term of Equation B.34. Thus, similar to Equations B.30 and B.31, we again abuse notation and write the expectations in Equation B.33 with respect to only the variables 𝛽 𝑦 and 𝛽 𝑠 to indicate this.

∎

We will now show that $\mathbb{E}_{\beta_s, \beta_y}[A_1(\beta_s, \beta_y)(\beta_y - C_P \delta_2/2)]$ is significantly negative so long as the separation between feature correlations $\Delta_y^{(t)}$ is sufficiently large. Once again, to simplify notation even further, we will use $\tilde{\beta}_y = \beta_y - C_P \delta_2/2$ and use $\mathbb{P}(\tilde{\beta}_y)$ to refer to its associated probability measure.

Claim B.22.

Suppose that $\max_{u \in [k]} C_{u,1}^{(t)} = O(\log k)$. Let $y$ be any class such that $\Delta_y^{(t)} \geq \log k - o(1)$, and for each $s \in [k]$ with $s \neq y$ let:

$$
\Lambda_{y,s} \triangleq \sum_{u \neq y} \exp(g_t^u(z_{y,s})).
$$

(B.35)

Now so long as there exists at least one class $s \neq y$ such that the set $U_{y,s}$ of all $(\tilde{\beta}_s, \tilde{\beta}_y) \in [0, C_P(\delta_2/2 - \delta_1)] \times [-C_P(\delta_2/2 - \delta_1), C_P(\delta_2/2 - \delta_1)]$ satisfying $\Lambda_{y,s} \leq \tilde{\beta}_y \exp(\tilde{\beta}_y \Delta_y^{(t)}/2)$ has measure $(\mathbb{P}(\tilde{\beta}_s) \times \mathbb{P}(\tilde{\beta}_y))(U_{y,s}) = \Theta(1)$ (note that by Equation B.35, $\tilde{\beta}_s$ factors into $\Lambda_{y,s}$), we have:

$$
\mathbb{E}_{\beta_s, \beta_y}\!\left[A_1(\beta_s, \beta_y)(\beta_y - C_P \delta_2/2)\right] = -\Theta(1).
$$

(B.36)

Proof of Claim B.22.

We begin by first showing that the expectation on the LHS of Equation B.36 is negative. Indeed, this is almost immediate from the fact that $\tilde{\beta}_y$ is a symmetric, mean zero random variable: we need only show that $A_1$ is monotonically decreasing in $\beta_y$.

From the definition of $A_1$, we observe that it suffices to show that $g_t^y(z_{y,s})$ is monotonically increasing in $\beta_y$. However, this is straightforward to see from the assumption that $\Delta_y^{(t)} \geq \log k - o(1)$, as this implies that an $\epsilon$ increase in $\beta_y$ leads to a $O(\epsilon \log k) - O(\epsilon)$ increase in $g_t^y$, since the feature noise and the weights that are in the polynomial part of $\widetilde{\mathrm{ReLU}}$ can contribute at most $O(1)$ by the logic of Claim B.19.
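Concretely, in the centered form used below, $A_1$ behaves like $(\Lambda - e^{\tilde{\beta}\Delta/2})/(\Lambda + e^{\tilde{\beta}\Delta/2})$, which is strictly decreasing in $\tilde{\beta}$. A quick numerical check with arbitrary toy values for $\Lambda$ and $\Delta$ (illustration only):

```python
import math

def A1(beta_tilde, lam=10.0, delta=8.0):
    # Centered form of A1 = 1 - 2*phi_y; lam and delta are toy stand-ins
    # for Lambda_{y,s} and Delta_y^(t).
    e = math.exp(beta_tilde * delta / 2)
    return (lam - e) / (lam + e)

vals = [A1(b / 10) for b in range(-10, 11)]
assert all(x > y for x, y in zip(vals, vals[1:]))  # strictly decreasing
```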

Now we need only show that the expectation is sufficiently negative. To do this, we will restrict our attention to a class 𝑠 β‰  𝑦 such that the condition on the set π‘ˆ 𝑦 , 𝑠 from the claim is satisfied. We will also rely on the following facts, which will allow us to write things purely in terms of 𝐢 𝑦 , β„“ ( 𝑑 ) and 𝐢 𝑠 , β„“ ( 𝑑 ) (i.e. disregarding the weights that are not in the linear regime):

$$
g_t^y(z_{y,s}) \in \left[\frac{\beta_y C_{y,1}^{(t)} + (C_P \delta_2 - \beta_y) C_{y,2}^{(t)}}{2},\; \frac{\beta_y C_{y,1}^{(t)} + (C_P \delta_2 - \beta_y) C_{y,2}^{(t)}}{2} + O(1)\right],
$$

(B.37)

$$
g_t^s(z_{y,s}) \in \left[\frac{\beta_s C_{s,1}^{(t)} + (C_P \delta_2 - \beta_s) C_{s,2}^{(t)}}{2},\; \frac{\beta_s C_{s,1}^{(t)} + (C_P \delta_2 - \beta_s) C_{s,2}^{(t)}}{2} + O(1)\right],
$$

(B.38)

$$
g_t^u(z_{y,s}) = O\!\left(\frac{\log k}{k}\right) \quad \text{for } u \neq y, s.
$$

(B.39)

These follow from Claim B.19 and Corollary B.18 (alongside the assumption that $\max_{u \in [k]} C_{u,1}^{(t)} = O(\log k)$), respectively. Now we perform the substitution $g_t^u \leftarrow g_t^u - C_P \delta_2 (C_{y,1}^{(t)} + C_{y,2}^{(t)})/4$ for all $u \in [k]$, as this can be done without changing the value of $\phi_y(g_t(z_{y,s}))$. Under this transformation (which is just centering) we have (using Equation B.37):

$$
g_t^y(z_{y,s}) \in \left[(\beta_y - C_P \delta_2/2) \, \Delta_y^{(t)}/2,\; (\beta_y - C_P \delta_2/2) \, \Delta_y^{(t)}/2 + O(1)\right].
$$

(B.40)

This isolates the correlation gap term $\Delta_y^{(t)}$. Now we have:

$$
\begin{aligned}
& \mathbb{E}_{\beta_s, \beta_y}\!\left[A_1(\beta_s, \beta_y)(\beta_y - C_P \delta_2/2)\right] \\
&\quad = \int_{C_P \delta_1}^{C_P(\delta_2 - \delta_1)} \int_{C_P \delta_1}^{C_P(\delta_2 - \delta_1)} \frac{\Lambda_{y,s}^{(t)} - \exp(g_t^y(z_{y,s}))}{\Lambda_{y,s}^{(t)} + \exp(g_t^y(z_{y,s}))} \, (\beta_y - C_P \delta_2/2) \, d\mathbb{P}(\beta_y) \, d\mathbb{P}(\beta_s) \\
&\quad \leq \int_{C_P(\delta_1 - \delta_2/2)}^{C_P(\delta_2/2 - \delta_1)} \int_{C_P(\delta_1 - \delta_2/2)}^{0} \frac{\Lambda_{y,s}^{(t)} - \exp(\tilde{\beta}_y \Delta_y^{(t)}/2 + O(1))}{\Lambda_{y,s}^{(t)} + \exp(\tilde{\beta}_y \Delta_y^{(t)}/2 + O(1))} \, \tilde{\beta}_y \, d\mathbb{P}(\tilde{\beta}_y) \, d\mathbb{P}(\tilde{\beta}_s) \\
&\qquad + \int_{C_P(\delta_1 - \delta_2/2)}^{C_P(\delta_2/2 - \delta_1)} \int_{0}^{C_P(\delta_2/2 - \delta_1)} \frac{\Lambda_{y,s}^{(t)} - \exp(\tilde{\beta}_y \Delta_y^{(t)}/2)}{\Lambda_{y,s}^{(t)} + \exp(\tilde{\beta}_y \Delta_y^{(t)}/2)} \, \tilde{\beta}_y \, d\mathbb{P}(\tilde{\beta}_y) \, d\mathbb{P}(\tilde{\beta}_s).
\end{aligned}
$$

(B.41)

Above we used Equation B.40 to get the upper bound in the last step. We next focus on bounding the inner integral in Equation B.41. Using the symmetry of $\tilde{\beta}_y$, we have that:

$$
\begin{aligned}
& \mathbb{E}_{\beta_y}\!\left[A_1(\beta_s, \beta_y)(\beta_y - C_P \delta_2/2)\right] \\
&\quad \leq -\int_{0}^{C_P(\delta_2/2 - \delta_1)} \left(\frac{\Lambda_{y,s}^{(t)} - \exp(-\tilde{\beta}_y \Delta_y^{(t)}/2 + O(1))}{\Lambda_{y,s}^{(t)} + \exp(-\tilde{\beta}_y \Delta_y^{(t)}/2 + O(1))} - \frac{\Lambda_{y,s}^{(t)} - \exp(\tilde{\beta}_y \Delta_y^{(t)}/2)}{\Lambda_{y,s}^{(t)} + \exp(\tilde{\beta}_y \Delta_y^{(t)}/2)}\right) \tilde{\beta}_y \, d\mathbb{P}(\tilde{\beta}_y) \\
&\quad = -\int_{0}^{C_P(\delta_2/2 - \delta_1)} \frac{2 \Lambda_{y,s}^{(t)} \left(\exp(\tilde{\beta}_y \Delta_y^{(t)}/2) - \exp(-\tilde{\beta}_y \Delta_y^{(t)}/2 + O(1))\right)}{(\Lambda_{y,s}^{(t)})^2 + \Lambda_{y,s}^{(t)}\left(\exp(\tilde{\beta}_y \Delta_y^{(t)}/2) + \exp(-\tilde{\beta}_y \Delta_y^{(t)}/2 + O(1))\right) + \exp(O(1))} \, \tilde{\beta}_y \, d\mathbb{P}(\tilde{\beta}_y).
\end{aligned}
$$

(B.42)

The tricky aspect of Equation B.42 is the $O(1)$ term in the exponential, which can lead to a positive contribution (via a negative integrand) when $\tilde{\beta}_y$ is close to 0. However, we can safely restrict the bounds of integration in Equation B.42 to a region $[\varrho, C_P(\delta_2/2 - \delta_1)]$ for $\varrho = \Theta(1/\log k)$ (with an appropriately large constant), as in such a region the integrand is guaranteed to be positive since $\Delta_y^{(t)} \geq \log k - o(1)$. From Definition 4.1, this restriction costs us only a $o(1/\log k)$ term in the integrand, which is asymptotically irrelevant.

Indeed, it follows that:

$$
\begin{aligned}
\mathbb{E}_{\beta_s, \beta_y}\!\left[A_1(\beta_s, \beta_y)(\beta_y - C_P \delta_2/2)\right]
&\leq -\int_{C_P(\delta_1 - \delta_2/2)}^{C_P(\delta_2/2 - \delta_1)} \int_{\varrho}^{C_P(\delta_2/2 - \delta_1)} \Theta\!\left(\frac{\tilde{\beta}_y \Lambda_{y,s}^{(t)} \exp(\tilde{\beta}_y \Delta_y^{(t)}/2)}{(\Lambda_{y,s}^{(t)})^2 + \Lambda_{y,s}^{(t)} \exp(\tilde{\beta}_y \Delta_y^{(t)}/2)}\right) d\mathbb{P}(\tilde{\beta}_y) \, d\mathbb{P}(\tilde{\beta}_s) \\
&\leq -\int_{U_{y,s}} \Theta\!\left(\frac{\tilde{\beta}_y \Lambda_{y,s}^{(t)} \exp(\tilde{\beta}_y \Delta_y^{(t)}/2)}{(\Lambda_{y,s}^{(t)})^2 + \Lambda_{y,s}^{(t)} \exp(\tilde{\beta}_y \Delta_y^{(t)}/2)}\right) d\left(\mathbb{P}(\tilde{\beta}_s) \times \mathbb{P}(\tilde{\beta}_y)\right) \\
&= -\Theta(1).
\end{aligned}
$$

(B.43)

Here the last line follows from the assumption in the claim on the set $U_{y,s}$, and we are done. ∎
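The mechanism of the proof, a decreasing $A_1$ paired with a symmetric mean-zero $\tilde{\beta}_y$, forces the expectation to be strictly negative. A Monte Carlo sanity check, using toy values for $\Lambda$ and $\Delta_y^{(t)}$ and a uniform $\tilde{\beta}$ (not the paper's actual signal distribution):

```python
import math
import random

def A1(b, lam=10.0, delta=8.0):
    # toy centered A1 = (Lambda - e^{b*Delta/2}) / (Lambda + e^{b*Delta/2})
    e = math.exp(b * delta / 2)
    return (lam - e) / (lam + e)

random.seed(0)
samples = [random.uniform(-1.0, 1.0) for _ in range(100000)]  # symmetric, mean zero
expectation = sum(A1(b) * b for b in samples) / len(samples)
assert expectation < -0.1  # strictly negative, as in Equation B.36
```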

We proved Claim B.22 in terms of $\beta_y$ (the sum of the individual $\beta_{y,p}$) to keep notation manageable (it avoids $C_P$ iterated integrals) and to more closely mirror the proof of Proposition 3.4. However, what we will really use for our remaining analysis is the following corollary, which gives the same result as Claim B.22 but for each of the individual terms $\beta_{y,p}$. Below we use $\sum_{i=1}^{C_P} \beta_{y,i}$ to make explicit the dependence between the sum and each individual random variable $\beta_{y,p}$ (so as to not mislead one into thinking of them as independent random variables).

Corollary B.23.

Under the same conditions as Claim B.22, we have for every $p \in [C_P]$:

$$
\mathbb{E}_{\beta_s, \beta_{y,1}, \ldots, \beta_{y,C_P}}\!\left[A_1\!\left(\beta_s, \sum_{i=1}^{C_P} \beta_{y,i}\right)(\beta_{y,p} - \delta_2/2)\right] = -\Theta(1).
$$

(B.44)

Proof of Corollary B.23.

The proof follows identically to that of Claim B.22 (effectively the only change in the computations is that $\tilde{\beta}_y$ becomes $\tilde{\beta}_{y,p}$), as the functions $A_1$ and $A_2$ satisfy the same monotonicity property in each of the i.i.d. $\beta_{y,p}$ (and there are only $C_P = \Theta(1)$ many). One could also have seen this simply from the symmetry of the $\beta_{y,p}$ in Equation B.44; indeed, we expect Equation B.44 to differ only by a factor $1/C_P$ from Equation B.36. ∎

Now we can show using Corollary B.23 that there is a significant gradient component towards correcting the separation between the feature correlations even when the second feature correlation is in the polynomial part of ReLU ~ (which is where it got stuck for a significant number of classes in the ERM proof).

Claim B.24.

Suppose that $\max_{u \in [k]} C_{u,1}^{(t)} = O(\log k)$. Let $y$ be any class such that $\Delta_y^{(t)} \geq \log k - o(1)$. Then for any $r_2 \notin B_{y,2}^{(t)}$ satisfying $\langle w_{y,r_2}^{(0)}, v_{y,2} \rangle \geq \tau > 0$, so long as there exists an $r_1 \in B_{y,1}^{(t)}$ satisfying:

$$
\langle -\nabla_{w_{y,r_1}^{(t)}} J_{MM}(g, \mathcal{X}), v_{y,1} \rangle \geq 0,
$$

(B.45)

we have:

$$
\langle -\nabla_{w_{y,r_2}^{(t)}} J_{MM}(g, \mathcal{X}), v_{y,2} \rangle \geq \Theta\!\left(\frac{\tau}{\rho^{\alpha-1} k^2}\right).
$$

(B.46)

(B.46) Proof of Claim B.24.

By near-identical logic to the steps leading to Equation B.33, and using Equation B.21, we obtain (following the same notation as before):

$$
\langle -\nabla_{w_{y,r_2}^{(t)}} J_{MM}(g, \mathcal{X}), v_{y,2} \rangle \geq \Theta\!\left(\frac{\tau}{\rho^{\alpha} k^2}\right) \sum_{s \neq y} \mathbb{E}_{\beta_s, \beta_{y,1}, \ldots, \beta_{y,C_P}}\!\left[A_1\!\left(\beta_s, \sum_{p=1}^{C_P} \beta_{y,p}\right) \sum_{p=1}^{C_P} (\delta_2 - \beta_{y,p})^{\alpha}\right].
$$

(B.47)

Now we break the rest of the proof into two cases, according to whether the condition on the set $U_{y,s}$ from Claim B.22 holds for some $s \neq y$ or not. In the former case, using Corollary B.23 and our assumption in the statement of the claim that Equation B.45 holds, we get (from linearity of expectation):

$$
\Theta\!\left(\frac{1}{k^2}\right) \sum_{s \neq y} \mathbb{E}_{\beta_s, \beta_{y,1}, \ldots, \beta_{y,C_P}}\!\left[A_1\!\left(\beta_s, \sum_{p=1}^{C_P} \beta_{y,p}\right)(\delta_2 - \beta_{y,p})\right] \geq \Theta\!\left(\frac{1}{k^2}\right).
$$

(B.48)

Now, observing that $\mathrm{Cov}\!\left(A_1\!\left(\beta_s, \sum_{p=1}^{C_P} \beta_{y,p}\right)(\delta_2 - \beta_{y,p}),\, (\delta_2 - \beta_{y,p})^{\alpha-1}\right) \geq 0$ for every $p$, we obtain:

$$
\langle -\nabla_{w_{y,r_2}^{(t)}} J_{MM}(g, \mathcal{X}), v_{y,2} \rangle \geq \Theta\!\left(\frac{\tau}{\rho^{\alpha} k^2}\right) \sum_{s \neq y} \sum_{p=1}^{C_P} \mathbb{E}_{\beta_s, \beta_{y,1}, \ldots, \beta_{y,C_P}}\!\left[A_1\!\left(\beta_s, \sum_{p'=1}^{C_P} \beta_{y,p'}\right)(\delta_2 - \beta_{y,p})\right] \mathbb{E}_{\beta_{y,p}}\!\left[(\delta_2 - \beta_{y,p})^{\alpha-1}\right].
$$

(B.49)

The result then follows in this case from Equation B.48 and the fact that $\mathbb{E}_{\beta_{y,p}}[(\delta_2 - \beta_{y,p})^{\alpha-1}]$ is a data-distribution-dependent constant.

For the second case, we have that for every class $s \neq y$, $(\mathbb{P}(\tilde{\beta}_s) \times \mathbb{P}(\tilde{\beta}_y))(U_{y,s}) = o(1)$. However, by comparing to Equation B.41, we see that this must directly imply that Equation B.47 satisfies the lower bound in Equation B.46, so we are done (intuitively: the class $y$ is lagging behind all of the other classes, so the coefficient in the gradient correlation with $v_{y,2}$ cannot be small). ∎

Having proved Claim B.24, it remains to prove that both Equation B.45 and $\max_{y \in [k]} C_{y,1}^{(t)} = O(\log k)$ hold throughout training, as after doing so we can conclude that the second feature correlation will escape the polynomial part of $\widetilde{\mathrm{ReLU}}$ and become sufficiently large in polynomially many training steps.

Claim B.25.

For any $y \in [k]$, $\ell \in [2]$, and $r \in [m]$, we have that:

$$
\langle -\nabla_{w_{y,r}^{(t+1)}} J_{MM}(g, \mathcal{X}), v_{y,\ell} \rangle \geq 0.99 \, \langle -\nabla_{w_{y,r}^{(t)}} J_{MM}(g, \mathcal{X}), v_{y,\ell} \rangle,
$$

so long as $\sum_{s \neq y} \exp(g_{t+1}^s(z_{i,j})) \geq \sum_{s \neq y} \exp(g_t^s(z_{i,j}))$ for all mixed points $z_{i,j}$ with $i \in N_y$.

Proof of Claim B.25.

We proceed by brute force; namely, as long as $\eta$ is sufficiently small, we can prove that the gradient for $J_{MM}$ does not decrease too much between successive iterations. As notation is going to become cumbersome quite quickly, we will use the following place-holder for the gradient correlation at time $t$ (and analogously at $t+1$):

$$
G_t \triangleq \langle -\nabla_{w_{y,r}^{(t)}} J_{MM}(g, \mathcal{X}), v_{y,\ell} \rangle.
$$

We will now prove the result assuming $r \in B_{y,\ell}^{(t)}$, as the case where $r \notin B_{y,\ell}^{(t)}$ is strictly better (we will have the upper bound shown below with additional $o(1)$ factors). We have (compare to Equation B.24):

$$
\begin{aligned}
G_t - G_{t+1} \leq{} & \frac{1}{N^2} \sum_{i \in N_y} \sum_{j \notin N_y} \Theta\!\left(\phi_y(g_{t+1}(z_{i,j})) - \phi_y(g_t(z_{i,j}))\right) \\
& + \frac{1}{N^2} \sum_{i \in N_y} \sum_{j \in N_y} \Theta\!\left(\phi_y(g_{t+1}(z_{i,j})) - \phi_y(g_t(z_{i,j}))\right) \\
& + \frac{1}{N^2} \sum_{i \notin N_y} \sum_{j \notin N_y} \left(\phi_y(g_{t+1}(z_{i,j})) - \phi_y(g_t(z_{i,j}))\right) \Theta\!\left(P \delta_4^{\alpha} \max_{u \in [k], q \in [2]} \frac{\langle w_{y,r^*}^{(t)}, v_{u,q} \rangle^{\alpha-1}}{\rho^{\alpha-1}}\right).
\end{aligned}
$$

(B.50)

Let us now focus on the $\phi_y(g_{t+1}(z_{i,j})) - \phi_y(g_t(z_{i,j}))$ terms present in Equation B.50 above. We will just consider the first case above (mixing between class $y$ and a non-$y$ class), as the other analyses follow similarly. Furthermore, we will omit the $z_{i,j}$ in what follows (in the interest of brevity) and simply write $g_{t+1}^y$. Additionally, similar to Claim B.22, we will use the notation $\Lambda_t = \sum_{s \neq y} \exp(g_t^s)$.

Now by the assumption in the statement of the claim, we have that $\Lambda_{t+1} \geq \Lambda_t$, and since $m = \Theta(k)$ (the number of weights per class), we have that $g_{t+1}^y \leq g_t^y + \Theta(k \eta G_t)$ (all of the updates for weights in the linear regime are identical and strictly larger than the updates for those in the polynomial regime). Thus,

πœ™ 𝑦 ⁒ ( 𝑔 𝑑 + 1 ) βˆ’ πœ™ 𝑦 ⁒ ( 𝑔 𝑑 )
≀ exp ⁑ ( 𝑔 𝑑 𝑦 + Θ ⁒ ( π‘˜ ⁒ πœ‚ ⁒ 𝐺 𝑑 ) ) Ξ› 𝑑 + exp ⁑ ( 𝑔 𝑑 𝑦 + Θ ⁒ ( π‘˜ ⁒ πœ‚ ⁒ 𝐺 𝑑 ) ) βˆ’ exp ⁑ ( 𝑔 𝑑 𝑦 ) Ξ› 𝑑 + exp ⁑ ( 𝑔 𝑑 𝑦 )

Ξ› 𝑑 ⁒ exp ⁑ ( 𝑔 𝑑 𝑦 ) ⁒ ( exp ⁑ ( Θ ⁒ ( π‘˜ ⁒ πœ‚ ⁒ 𝐺 𝑑 ) ) βˆ’ 1 ) Ξ› 𝑑 2 + Ξ› 𝑑 ⁒ exp ⁑ ( 𝑔 𝑑 𝑦 ) ⁒ ( exp ⁑ ( Θ ⁒ ( π‘˜ ⁒ πœ‚ ⁒ 𝐺 𝑑 ) ) + 1 ) + exp ⁑ ( 2 ⁒ 𝑔 𝑑 𝑦 + Θ ⁒ ( π‘˜ ⁒ πœ‚ ⁒ 𝐺 𝑑 ) )

≀ Θ ⁒ ( π‘˜ ⁒ πœ‚ ⁒ 𝐺 𝑑 + π‘˜ 2 ⁒ πœ‚ 2 ⁒ 𝐺 𝑑 2 )

Θ ⁒ ( π‘˜ ⁒ πœ‚ ⁒ 𝐺 𝑑 ) .

(B.51)

In the last line we again used the inequality $\exp(x) \leq 1 + x + x^2$ for $x \in [0,1]$. Plugging Equation B.51 into Equation B.50 (after similar calculations for the other two pieces of Equation B.50) yields the result (since again, $\eta = O(1/\mathrm{poly}(k))$ is suitably small). ∎
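The key step in Equation B.51 is that perturbing a single logit by a small amount changes the corresponding softmax coordinate by at most that amount. A small numerical sketch with arbitrary toy logits (the $\Theta(k \eta G_t)$ perturbation is abstracted into a single $\epsilon$):

```python
import math

def softmax_coord(logits, idx):
    m = max(logits)  # stabilize the exponentials
    exps = [math.exp(x - m) for x in logits]
    return exps[idx] / sum(exps)

logits = [2.0, 0.5, 0.1, -1.0]
for eps in (0.1, 0.01, 0.001):
    bumped = [logits[0] + eps] + logits[1:]
    delta = softmax_coord(bumped, 0) - softmax_coord(logits, 0)
    assert 0 < delta <= eps  # the output moves, but by at most the logit change
```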

Corollary B.26.

For any $y \in [k]$, $\ell \in [2]$, $r \in [m]$, and $t$, we have that:

$$
\langle -\nabla_{w_{y,r}^{(0)}} J_{MM}(g, \mathcal{X}), v_{y,\ell} \rangle \geq 0 \implies \langle -\nabla_{w_{y,r}^{(t)}} J_{MM}(g, \mathcal{X}), v_{y,\ell} \rangle \geq 0.
$$

Proof of Corollary B.26.

For every class $y$, we have $\sum_{s \neq y} \exp(g_{t+1}^s(z_{i,j})) \geq \sum_{s \neq y} \exp(g_t^s(z_{i,j}))$ for all mixed points $z_{i,j}$ with $i \in N_y$ for $t = 0$ (see the proof of Claim B.17), and the corollary then follows from an induction argument and Claim B.25. ∎

With Corollary B.26 in hand, we may prove that $\max_{y \in [k]} C_{y,1}^{(t)} = O(\log k)$ holds throughout training.

Claim B.27.

For all $t = O(\mathrm{poly}(k))$, for any polynomial in $k$, we have that $\max_{y \in [k]} C_{y,1}^{(t)} = O(\log k)$.

Proof of Claim B.27.

The idea is to consider the sum of gradient correlations across classes, and show that the cross-class mixing term in this sum becomes smaller (this is our only concern: we already know the same-class mixing term will become smaller by the logic of Claim B.6).

As in the previous claims in this section, we will proceed with an expectation analysis. We will focus on the weights 𝑀 𝑦 , π‘Ÿ that are in the linear regime for the feature 𝑣 𝑦 , 1 for each class 𝑦 , as these are the only relevant weights for 𝐢 𝑦 , 1 ( 𝑑 ) . Additionally, instead of considering the sum of gradient correlations over all 𝑀 𝑦 , π‘Ÿ with π‘Ÿ ∈ 𝐡 𝑦 , 1 ( 𝑑 ) , it will suffice for our purposes to just consider the sum of gradient correlations over classes while using an arbitrary weight 𝑀 𝑦 , π‘Ÿ in the linear regime. Thus, we will abuse notation slightly and use 𝑀 𝑦 , π‘Ÿ to indicate such a weight for each class 𝑦 for the remainder of the proof of this claim (note that we do not mean to imply by this that weight π‘Ÿ is in the linear regime for every class simultaneously, but rather that there exists some π‘Ÿ for every class that is in the linear regime).

Now in the same vein as Equation B.33 (referring again to Equation B.24), we have that:

$$
\begin{aligned}
\sum_{y \in [k]} \langle -\nabla_{w_{y,r}^{(t)}} J_{MM}(g, \mathcal{X}), v_{y,1} \rangle
\leq{} & \Theta\!\left(\frac{1}{k^2}\right) \sum_{y=1}^{k} \sum_{s=y+1}^{k} \left(\mathbb{E}_{\beta_s, \beta_y}\!\left[A_1(\beta_s, \beta_y) \beta_y\right] + \mathbb{E}_{\beta_y, \beta_s}\!\left[A_1(\beta_y, \beta_s) \beta_s\right]\right) \\
& + \Theta\!\left(\frac{1}{k^2}\right) \sum_{y=1}^{k} \mathbb{E}_{\beta_y}\!\left[A_2(\beta_y, \beta_y) \beta_y\right] - O\!\left(P \delta_4^{\alpha} / \mathrm{poly}(k)\right).
\end{aligned}
$$

(B.52)

And we recall that 𝑁 is sufficiently large so that the deviations from the expectations above are negligible compared to the subtracted term. We have carefully paired the expectations in the leading term of Equation B.52 so as to make use of the following fact:

$$
A_1(\beta_s, \beta_y) = -A_1(\beta_y, \beta_s) + 2 \, \frac{\sum_{u \in [k] \setminus \{y,s\}} \exp(g_t^u(z_{y,s}))}{\sum_{u \in [k]} \exp(g_t^u(z_{y,s}))}.
$$

(B.53)

The second term on the RHS of Equation B.53 is of course $o(P \delta_4^{\alpha} / \mathrm{poly}(k))$ so long as $g_t^s(z_{y,s})$ and/or $g_t^y(z_{y,s})$ are greater than $C \log k$ for a large enough constant $C$, so we obtain:

$$
\begin{aligned}
\sum_{y \in [k]} \langle -\nabla_{w_{y,r}^{(t)}} J_{MM}(g, \mathcal{X}), v_{y,1} \rangle
\leq{} & \Theta\!\left(\frac{1}{k^2}\right) \sum_{y=1}^{k} \sum_{s=y+1}^{k} \mathbb{E}_{\beta_s, \beta_y}\!\left[A_1(\beta_s, \beta_y)(\beta_y - \beta_s)\right] \\
& + \Theta\!\left(\frac{1}{k^2}\right) \sum_{y=1}^{k} \mathbb{E}_{\beta_y}\!\left[A_2(\beta_y, \beta_y) \beta_y\right] - O\!\left(P \delta_4^{\alpha} / \mathrm{poly}(k)\right).
\end{aligned}
$$

(B.54)

Now again by the logic of Claim B.22 we have that $\mathrm{Cov}(A_1(\beta_s, \beta_y), \beta_y - \beta_s) < 0$, so it follows that:

$$
\sum_{y \in [k]} \langle -\nabla_{w_{y,r}^{(t)}} J_{MM}(g, \mathcal{X}), v_{y,1} \rangle
\leq \Theta\!\left(\frac{1}{k^2}\right) \sum_{y=1}^{k} \mathbb{E}_{\beta_y}\!\left[A_2(\beta_y, \beta_y) \beta_y\right] - O\!\left(P \delta_4^{\alpha} / \mathrm{poly}(k)\right).
$$

(B.55)

If $g_t^y(z_{y,s}) \geq C \log k$ for a sufficiently large constant $C$, then the RHS above would be negative, which contradicts Corollary B.26, proving the claim. ∎
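As a sanity check, the softmax identity underlying the pairing in Equation B.53 (namely that $(1 - 2\phi_y) + (1 - 2\phi_s)$ equals twice the softmax mass on the classes other than $y$ and $s$) can be verified numerically for arbitrary toy logits:

```python
import math

def phi(logits, idx):
    exps = [math.exp(x) for x in logits]
    return exps[idx] / sum(exps)

logits = [1.3, -0.2, 0.7, 2.1, -1.5]  # arbitrary toy values of g_t^u(z_{y,s})
y, s = 0, 3
lhs = (1 - 2 * phi(logits, y)) + (1 - 2 * phi(logits, s))
rest = sum(phi(logits, u) for u in range(len(logits)) if u not in (y, s))
assert abs(lhs - 2 * rest) < 1e-12
```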

We have now wrapped up all of the pieces necessary to prove Theorem 4.7. Indeed, we can now show that for every class the correlation with both features becomes large over the course of training.

Claim B.28.

For every class $y \in [k]$, in $O(\mathrm{poly}(k))$ iterations (for a sufficiently large polynomial in $k$) we have that both $C_{y,1}^{(t)} = \Omega(\log k)$ and $C_{y,2}^{(t)} = \Omega(\log k)$.

Proof of Claim B.28.

Claim B.20 guarantees $C_{y,1}^{(t)} = \Omega(\log k)$ in polynomially many iterations. If at this point $C_{y,2}^{(t)} = \Omega(\log k)$, we are done. If this is not the case, but we have $B_{y,2}^{(t)} \neq \emptyset$, then we are done by Claim B.21, Corollary B.26, and Claim B.27. On the other hand, if $B_{y,2}^{(t)} = \emptyset$, then we have by Claim B.24 that in polynomially many iterations (as we can take $\tau = \Theta(1/d)$ by our setting) $B_{y,2}^{(t)} \neq \emptyset$, after which we have reverted back to the previous case and we are still done. ∎

Now we may conclude the overall proof by observing that Claims B.20 and B.27 in tandem imply that we achieve (and maintain) perfect training accuracy in polynomially many iterations, while Claim B.28 implies that both features are learned in the sense of Definition 4.5. ∎

Appendix C Proofs of Auxiliary Results

C.1 Proofs of Lemma 3.2 and Proposition 3.3

Lemma 3.2 (restated).

Proof.

We first prove sufficiency. If $g$ satisfies the conditions in the Lemma, then we have for any data point $(x_i, y_i)$ that:

$$g_{y_i}(x_i) = \big\langle w_{y_i},\, \beta_{y,i}\, v_{y_i,1} + (1 - \beta_{y,i})\, v_{y_i,2} \big\rangle = \langle w_{y_i}, v_{y_i,1} \rangle = C, \qquad \text{(C.1)}$$

for some universal constant $C$. We also have that $g_s(x_i) = 0$ for any $s \neq y_i$ (by the cross-class orthogonality condition), from which we then get:

πœ™ 𝑦 ⁒ ( 𝛾 ⁒ 𝑔 ⁒ ( 𝑧 𝑖 , 𝑗 ) )

πœ™ 𝑦 ⁒ ( 𝛾 ⁒ 𝑔 𝑦 𝑗 ⁒ ( 𝑧 𝑖 , 𝑗 ) )

exp ⁑ ( 𝛾 ⁒ 𝐢 / 2 ) 𝑂 ⁒ ( π‘˜ ) + 2 ⁒ exp ⁑ ( 𝛾 ⁒ 𝐢 / 2 ) ,

(C.2)

for any mixed point $z_{i,j}$ with $y_i \neq y_j$. Equation C.2 tends to $1/2$ as $\gamma \to \infty$, and one can easily check that this is the globally optimal prediction for the classes $y_i$ and $y_j$ on the Mixup point $z_{i,j}$ (for any such mixed point). Similarly, if $z_{i,j}$ is a mixed point with $y_i = y_j$, then Equation C.2 reduces to the ERM case, and we obtain the optimal prediction of 1 for the correct class in the limit.

On the other hand, if there exists a pair of classes $(y, s)$ with $s \neq y$ and $\ell_1, \ell_2 \in [2]$ such that $\langle w_y, v_{y,\ell_1} \rangle \neq \langle w_s, v_{s,\ell_2} \rangle$, then with probability $1 - \exp(-\Theta(N))$ there exists a mixed point $z_{i,j}$ in $\mathcal{X}$ (where $y_i = y$, $y_j = s$, and $y \neq s$) such that $g_y(z_{i,j}) \neq g_s(z_{i,j})$, and hence $\lim_{\gamma \to \infty} \phi_y(\gamma g(z_{i,j})) \neq 1/2$, so we cannot achieve the infimum of the Midpoint Mixup loss. ∎
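The limiting behavior in Equation C.2 can be sanity-checked numerically. The sketch below is purely illustrative (the choices $k = 10$ and $C = 1$ are arbitrary, not from the paper): on a midpoint of two distinct classes, both mixed classes receive logit $\gamma C / 2$ and the remaining classes logit 0, and the softmax output tends to $1/2$ as $\gamma \to \infty$.

```python
import math

def softmax_coord(logits, i):
    """Probability assigned to coordinate i by the softmax."""
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    return exps[i] / sum(exps)

def mixed_prediction(gamma, C, k):
    """Softmax output for class y_i on a midpoint z_{i,j}: the two mixed
    classes have logit gamma*C/2 each, the other k-2 classes have logit 0."""
    logits = [gamma * C / 2, gamma * C / 2] + [0.0] * (k - 2)
    return softmax_coord(logits, 0)

for gamma in (1.0, 10.0, 100.0):
    print(gamma, mixed_prediction(gamma, C=1.0, k=10))
# the printed probabilities approach 1/2 as gamma grows
```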

Proposition 3.3 (restated).

Proof.

Firstly, we observe (just from properties of the cross-entropy):

$$\inf_h J_M(h, \mathcal{X}, \mathcal{D}_\lambda) \ge -\mathbb{E}_{\lambda \sim \mathcal{D}_\lambda}\big[\lambda \log \lambda + (1 - \lambda) \log(1 - \lambda)\big]. \qquad \text{(C.3)}$$

Now suppose a model $g$ satisfies the conditions of Lemma 3.2. We recall that this implies $g_{y_i}(x_i) = g_{y_j}(x_j) = C \ge 0$ for a constant $C$ and every pair $(x_i, y_i)$ and $(x_j, y_j)$, as in the proof of Lemma 3.2.

With probability at least $1 - \exp(-\Theta(N))$, there exists a pair of points $(x_i, y_i)$ and $(x_j, y_j)$ in $\mathcal{X}$ with $y_i \neq y_j$. The Mixup loss restricted to this pair (for which we use the notation $J_M(g, z_{i,j}, \mathcal{D}_\lambda)$) is then:

$$J_M(g, z_{i,j}, \mathcal{D}_\lambda) = -\mathbb{E}_{\lambda \sim \mathcal{D}_\lambda}\big[\lambda \log \phi_{y_i}(g(z_{i,j})) + (1 - \lambda) \log \phi_{y_j}(g(z_{i,j}))\big]. \qquad \text{(C.4)}$$

Furthermore, we have that:

πœ™ 𝑦 𝑖 ⁒ ( 𝑔 ⁒ ( 𝑧 𝑖 , 𝑗 ⁒ ( πœ† ) ) )

exp ⁑ ( πœ† ⁒ 𝐢 ) 𝑂 ⁒ ( π‘˜ ) + exp ⁑ ( πœ† ⁒ 𝐢 ) + exp ⁑ ( ( 1 βˆ’ πœ† ) ⁒ 𝐢 ) .

(C.5)

From Equations C.4 and C.5 we can see that, since $\mathcal{D}_\lambda$ is supported on more than just $0$, $1$, and $1/2$, we have $J_M(g, z_{i,j}, \mathcal{D}_\lambda) \to \infty$ as $C \to \infty$ (Equation C.5 implies that in the limit $\phi_{y_i}(g(z_{i,j}(\lambda)))$ can only take the values $0$, $1$, or $1/2$). Thus it suffices to restrict our attention to $C \in [0, M]$ for some $M \in \mathbb{R}$ depending only on $\mathcal{D}_\lambda$, noting that the loss $J_M(g, z_{i,j}, \mathcal{D}_\lambda)$ is finite for any $C$ in this interval.

However, $J_M(g, z_{i,j}, \mathcal{D}_\lambda) - \inf_h J_M(h, z_{i,j}, \mathcal{D}_\lambda) > 0$ for all $C \in [0, M]$ due to Equation C.3 (observe from the conditions of the Lemma that $\inf_h J_M(h, z_{i,j}, \mathcal{D}_\lambda) = \inf_h J_M(h, \mathcal{X}, \mathcal{D}_\lambda)$). Since this difference is a continuous function of $C$ over a compact set, it must attain a minimum greater than 0, and we may choose $\epsilon_0$ to be this minimum (rescaled by a factor of $\Omega(1/N^2)$), thereby finishing the proof. ∎
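The entropy lower bound of Equation C.3 and the blow-up of the pair loss as $C \to \infty$ can both be checked numerically on the pair loss of Equations C.4 and C.5. The sketch below uses illustrative stand-ins (a uniform surrogate for $\mathcal{D}_\lambda$ on $[0.05, 0.95]$, $k = 10$, and $O(k)$ taken as exactly $k - 2$); none of these choices come from the paper.

```python
import math
import random

def pair_mixup_loss(C, k, lambdas):
    """Cross-entropy on a mixed pair z_{i,j}(lambda), following (C.4)-(C.5):
    the two mixed classes have logits lambda*C and (1-lambda)*C, rest 0."""
    total = 0.0
    for lam in lambdas:
        denom = (k - 2) + math.exp(lam * C) + math.exp((1 - lam) * C)
        p_i = math.exp(lam * C) / denom
        p_j = math.exp((1 - lam) * C) / denom
        total -= lam * math.log(p_i) + (1 - lam) * math.log(p_j)
    return total / len(lambdas)

def entropy_bound(lambdas):
    """Monte Carlo estimate of -E[lam*log(lam) + (1-lam)*log(1-lam)]."""
    return sum(-(l * math.log(l) + (1 - l) * math.log(1 - l))
               for l in lambdas) / len(lambdas)

rng = random.Random(0)
lambdas = [rng.uniform(0.05, 0.95) for _ in range(2000)]
bound = entropy_bound(lambdas)
for C in (0.0, 1.0, 5.0, 50.0):
    # the pair loss always sits above the entropy bound, and grows with C
    assert pair_mixup_loss(C, k=10, lambdas=lambdas) >= bound
```

The inequality holds for every $C$ because the two softmax outputs sum to at most 1, so the cross-entropy against the target $(\lambda, 1 - \lambda)$ is at least its entropy.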

C.2 Proofs of Propositions 3.4 and 3.5

Proposition 3.4 (restated).

Proof.

The idea of the proof is to analyze the gradient correlation with $v_{y,1} - v_{y,2}$, and either show that it is significantly negative or, in the case where it is not, show that the gradient correlation with $v_{y,2}$ is still significant. Firstly, using the cross-class orthogonality assumption and Calculation A.10, we can compute:

$$\big\langle -\nabla_{w_y} J_{MM}(g, \mathcal{X}),\, v_{y,1} - v_{y,2} \big\rangle = \frac{1}{N^2} \sum_{i \in N_y} \sum_{j \notin N_y} \big(1 - 2\phi_y(g(z_{i,j}))\big)\Big(\beta_i - \frac{1}{2}\Big) + \frac{1}{N^2} \sum_{i \in N_y} \sum_{j \in N_y} \big(1 - \phi_y(g(z_{i,j}))\big)\Big(\beta_i - \frac{1}{2}\Big). \qquad \text{(C.6)}$$

Here we use $N_y$ to denote the indices corresponding to class-$y$ data points (as in the proofs of the main results). Now, using concentration of measure for bounded random variables and the fact that $N$ is sufficiently large, we have from Equation C.6 that with high probability (with $\mathrm{poly}(k)$ denoting a very large polynomial in $k$):

$$\big\langle -\nabla_{w_y} J_{MM}(g, \mathcal{X}),\, v_{y,1} - v_{y,2} \big\rangle \le \Theta\Big(\frac{1}{k^2}\Big) \sum_{s \neq y} \mathbb{E}_{\beta_s, \beta_y}\big[A_1(\beta_s, \beta_y)(\beta_y - 1/2)\big] + \Theta\Big(\frac{1}{k^2}\Big)\, \mathbb{E}_{\beta_y}\big[A_2(\beta_y)(\beta_y - 1/2)\big] + O(1/\mathrm{poly}(k)), \qquad \text{(C.7)}$$

where we define the mixed point $z_{y,s}$ and the functions $A_1$ and $A_2$ as:

$$z_{y,s} \triangleq \frac{1}{2}\big(\beta_y v_{y,1} + (1 - \beta_y) v_{y,2} + \beta_s v_{s,1} + (1 - \beta_s) v_{s,2}\big),$$

$$A_1(\beta_s, \beta_y) \triangleq 1 - 2\phi_y(g(z_{y,s})), \qquad \text{(C.8)}$$

$$A_2(\beta_s, \beta_y) \triangleq 1 - \phi_y(g(z_{y,s})). \qquad \text{(C.9)}$$

Here $z_{y,s}$ is a random variable denoting the midpoint of a class-$y$ point and a class-$s$ point (distributed according to Definition 4.1). Note that Equations C.8 and C.9 are not abuses of notation: the functions $A_1$ and $A_2$ depend only on the random variables $\beta_s$ and $\beta_y$, since we can ignore the cross-class correlations due to orthogonality.

Let us immediately observe that the first two terms (the expectation terms) in Equation C.7 are bounded above by 0. This is because $\beta_y - 1/2$ is a symmetric, centered random variable while the functions $A_1$ and $A_2$ are monotonically decreasing in $\beta_y$, so each expectation is a negative covariance. We will focus on showing that the first term is significantly negative, as that will be sufficient for our purposes.
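This sign observation can be spot-checked by Monte Carlo. In the sketch below, $h$ is a stand-in for $A_1$ with the competing-class mass $\Lambda_s$ frozen at an arbitrary constant; all numeric choices are illustrative only.

```python
import math
import random

def h(beta, lam=3.0, delta=5.0):
    """Monotonically decreasing stand-in for A_1: (L - e^x)/(L + e^x),
    with x = (beta - 1/2) * delta / 2 and L frozen at a constant."""
    x = (beta - 0.5) * delta / 2
    return (lam - math.exp(x)) / (lam + math.exp(x))

rng = random.Random(1)
# beta uniform on [0.1, 0.9] is symmetric and centered about 1/2
samples = [rng.uniform(0.1, 0.9) for _ in range(200_000)]
estimate = sum(h(b) * (b - 0.5) for b in samples) / len(samples)
# decreasing h paired with a centered symmetric variable: negative covariance
assert estimate < 0
```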

Now we perform the following transformation (essentially a centering):

$$g_u \leftarrow g_u - \frac{\langle w_y, v_{y,1} \rangle + \langle w_y, v_{y,2} \rangle}{4} \qquad \text{(C.10)}$$

for all $u \in [k]$, noting that this does not change the value of the softmax outputs. Under this transformation we have that $g_y(z_{y,s}) = (\beta_y - 1/2)\, \Delta_y / 2$, which isolates the gap term $\Delta_y$.
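The centering identity can be verified directly. The sketch below uses arbitrary illustrative values for the two weight-feature correlations (not values from the paper):

```python
# Numeric check of the centering in (C.10): for arbitrary correlations
# a1 = <w_y, v_{y,1}> and a2 = <w_y, v_{y,2}> (illustrative values), the
# centered class-y output on a midpoint equals (beta - 1/2) * Delta_y / 2.
a1, a2 = 2.3, 0.7
delta_y = a1 - a2  # the gap term Delta_y
for beta in (0.1, 0.37, 0.5, 0.73, 0.9):
    # cross-class inner products vanish by orthogonality, so only the
    # class-y features contribute to g_y at the midpoint z_{y,s}
    g_y = 0.5 * (beta * a1 + (1 - beta) * a2)
    g_y_centered = g_y - (a1 + a2) / 4
    assert abs(g_y_centered - (beta - 0.5) * delta_y / 2) < 1e-12
```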

For further convenience, let us write $\Lambda_s \triangleq \sum_{u \neq y} \exp(g_u(z_{y,s}))$, and observe that $\Lambda_s$ depends only on $\beta_s$ due to orthogonality. Using the change of variables $\tilde{\beta}_y = \beta_y - 1/2$ and $\tilde{\beta}_s = \beta_s - 1/2$, we can then compute the expectation in the first term of Equation C.7 as:

$$\mathbb{E}_{\beta_s, \beta_y}\big[A_1(\beta_s, \beta_y)(\beta_y - 1/2)\big] = \frac{1}{0.64} \int_{0.1}^{0.9}\! \int_{0.1}^{0.9} \frac{\Lambda_s - \exp(g_y(z_{y,s}))}{\Lambda_s + \exp(g_y(z_{y,s}))}\, (\beta_y - 1/2)\, d\beta_y\, d\beta_s = \frac{1}{0.64} \int_{-0.4}^{0.4}\! \int_{-0.4}^{0.4} \frac{\Lambda_s - \exp(\tilde{\beta}_y \Delta_y / 2)}{\Lambda_s + \exp(\tilde{\beta}_y \Delta_y / 2)}\, \tilde{\beta}_y\, d\tilde{\beta}_y\, d\tilde{\beta}_s. \qquad \text{(C.11)}$$

We will focus on the inner integral in Equation C.11. Using the symmetry of $\tilde{\beta}_y$, we have that:

$$\mathbb{E}_{\beta_y}\big[A_1(\beta_s, \beta_y)(\beta_y - 1/2)\big] = \frac{1}{0.8} \int_{-0.4}^{0.4} \frac{\Lambda_s - \exp(\tilde{\beta}_y \Delta_y / 2)}{\Lambda_s + \exp(\tilde{\beta}_y \Delta_y / 2)}\, \tilde{\beta}_y\, d\tilde{\beta}_y = -\frac{1}{0.8} \int_{0}^{0.4} \left( \frac{\Lambda_s - \exp(-\tilde{\beta}_y \Delta_y / 2)}{\Lambda_s + \exp(-\tilde{\beta}_y \Delta_y / 2)} - \frac{\Lambda_s - \exp(\tilde{\beta}_y \Delta_y / 2)}{\Lambda_s + \exp(\tilde{\beta}_y \Delta_y / 2)} \right) \tilde{\beta}_y\, d\tilde{\beta}_y = -\frac{1}{0.8} \int_{0}^{0.4} \frac{2\Lambda_s \big(\exp(\tilde{\beta}_y \Delta_y / 2) - \exp(-\tilde{\beta}_y \Delta_y / 2)\big)}{\Lambda_s^2 + \Lambda_s \big(\exp(\tilde{\beta}_y \Delta_y / 2) + \exp(-\tilde{\beta}_y \Delta_y / 2)\big) + 1}\, \tilde{\beta}_y\, d\tilde{\beta}_y. \qquad \text{(C.12)}$$

Since $\Delta_y \ge \log k$, the $\exp(-\tilde{\beta}_y \Delta_y / 2)$ terms contribute nothing asymptotically, so we get:

$$\mathbb{E}_{\beta_s, \beta_y}\big[A_1(\beta_s, \beta_y)(\beta_y - 1/2)\big] = -\int_{-0.4}^{0.4} \int_{0}^{0.4} \Theta\!\left( \frac{\Lambda_s\, \tilde{\beta}_y \exp(\tilde{\beta}_y \Delta_y / 2)}{\Lambda_s^2 + \Lambda_s \exp(\tilde{\beta}_y \Delta_y / 2)} \right) d\tilde{\beta}_y\, d\tilde{\beta}_s. \qquad \text{(C.13)}$$
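The algebraic symmetrization step in Equation C.12, which combines $f(-x)$ and $f(x)$ for $f(x) = (\Lambda_s - e^x)/(\Lambda_s + e^x)$, can be verified numerically; the values of $\Lambda_s$ and $x$ below are arbitrary test points:

```python
import math

def f(lam, x):
    """The centered A_1 integrand: (L - e^x)/(L + e^x)."""
    return (lam - math.exp(x)) / (lam + math.exp(x))

# check f(-x) - f(x) == 2*L*(e^x - e^{-x}) / (L^2 + L*(e^x + e^{-x}) + 1),
# i.e. the combined fraction appearing in the last line of (C.12)
for lam in (0.5, 3.0, 100.0):
    for x in (0.01, 0.7, 2.0):
        lhs = f(lam, -x) - f(lam, x)
        ex, emx = math.exp(x), math.exp(-x)
        rhs = 2 * lam * (ex - emx) / (lam**2 + lam * (ex + emx) + 1)
        assert abs(lhs - rhs) < 1e-9
```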

Now we consider two cases. First, suppose that there exists an $s$ for which the set $U_s \subset [-0.4, 0.4] \times [0, 0.4]$, consisting of all $(\tilde{\beta}_s, \tilde{\beta}_y)$ such that $\Lambda_s \le \exp(\tilde{\beta}_y \Delta_y / 2) + \kappa$ for some sufficiently small universal constant $\kappa$, has non-zero constant measure. Then it follows that the magnitude of Equation C.13 is $\Theta(1)$, and we have the desired result from Equation C.7.

On the other hand, if this is not the case, then for every $s \neq y$ we have that $U_s^c \cap \big([-0.4, 0.4] \times [0, 0.4]\big)$ has constant measure, and from Equation C.11 we have that $\mathbb{E}_{\beta_s, \beta_y}\big[A_1(\beta_s, \beta_y)(1 - \beta_y)\big] = \Theta(1)$, which directly allows us to conclude that $\big\langle -\nabla_{w_y} J_{MM}(g, \mathcal{X}),\, v_{y,2} \big\rangle = \Omega(1/k^2)$. ∎

Proposition 3.5 (restated).

Proof.

From the facts that $\langle w_y, v_{y,2} \rangle \ge 0$ and $\Delta_y \ge C \log k$, we have that $g_y(x_i) \ge \beta_{y,i}\, C \log k$ for every $i \in N_y$ (where $\beta_{y,i}$ denotes the coefficient of $v_{y,1}$ in $x_i$). Since $\beta_{y,i} \in [0.1, 0.9]$, the result follows immediately from the logic of Claim B.2 and Calculation A.9. ∎

C.3 Proof of Proposition 4.4

Proposition 4.4 (restated).

Proof.

For the hyperparameters, we can choose $\delta_1 = \delta_2 - \delta_1 = 1$ and $\delta_3 = \delta_4 = k^{-1.5}$ while remaining consistent with Assumption 4.3. For the distribution $\mathcal{D}$: for each point $(x_i, y_i)$, we sample a class $s \in [k] \setminus \{y_i\}$ uniformly and take it to be the single class used in the feature noise patches for $x_i$. This clearly falls within the scope of Definition 4.1.

Now, among $N$ i.i.d. samples from $\mathcal{D}$ as specified above, with probability at least $1 - k^2 \exp(-\Theta(N/k^2))$ (by a Chernoff bound, as in Claim B.1), a sample with each possible pair $y, s \in [k]$ of signal and noise classes exists. Suppose now that there exist classes $y, u \in [k]$ such that $u \notin \mathrm{argmax}_{s \in [k]} \langle w_s, v_{y,1} + v_{y,2} \rangle$. This necessarily implies that $u \neq \mathrm{argmax}_{s \in [k]}\, g_s(x)$ for all points $x$ with label $u$ but having feature noise class $y$ (since the sum of the feature noise is of order $\Theta(k)$), which gives the result.

On the other hand, if no such class pair $y, u$ exists, then we are also done, as that implies all of the weight-feature correlations are the same. ∎
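The coverage event behind the Chernoff bound above can be illustrated with a small simulation. The parameters below ($k = 8$, $N = 40k^2$, uniform labels) are illustrative choices, not values from the paper; the point is simply that $N = \Theta(k^2 \log k)$ samples comfortably produce every (label, noise-class) pair.

```python
import random

def all_pairs_covered(N, k, rng):
    """Draw N samples: a uniform label y and a uniform feature-noise
    class s from [k] \\ {y}; report whether every ordered pair (y, s)
    with s != y occurs at least once."""
    seen = set()
    for _ in range(N):
        y = rng.randrange(k)
        s = rng.randrange(k - 1)
        if s >= y:
            s += 1  # shift so that s is uniform over [k] \ {y}
        seen.add((y, s))
    return len(seen) == k * (k - 1)

rng = random.Random(0)
k = 8
# ~40 k^2 draws: each of the k(k-1) pairs has probability 1/(k(k-1)),
# so the chance of missing any pair is vanishingly small
assert all_pairs_covered(N=40 * k * k, k=k, rng=rng)
```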
