1 Introduction
Statistical estimators are typically random, since they depend on a random training set; their statistical guarantees are typically stated in terms of the expected loss between estimated and true parameters [31, 12, 51, 26]. A bound on the expected loss, however, might not be sufficient in higher-stakes settings, such as autonomous driving and risk-laden health care, among others, since the deviation of the estimator from its expected behavior could be large. In such settings, we might instead prefer a bound on the loss that holds with high probability. Such high-probability bounds are, however, often stated only under strong assumptions (e.g., sub-Gaussianity or boundedness) on the tails of the underlying distributions [25, 24, 45, 18]; conditions which often do not hold in real-world settings. There has also been a burgeoning line of recent work that relaxes these assumptions and allows for heavy-tailed underlying distributions [3, 28, 35], but the resulting algorithms are often not only complex, but are also specifically batch learning algorithms that require the entire dataset, which limits their scalability. For instance, many popular polynomial-time algorithms for heavy-tailed mean estimation [9, 42, 5, 43, 6, 32] and heavy-tailed linear regression [28, 46, 44, 38] need to store the entire dataset. On the other hand, most successful practical modern learning algorithms are iterative, lightweight, and access data in a "streaming" fashion. As a consequence, we focus on designing and analyzing iterative statistical estimators that use only constant storage in each step. To summarize, motivated by practical considerations, we have three desiderata: (1) allowing for heavy-tailed underlying distributions (weak modeling assumptions); (2) high-probability bounds on the loss between estimated and true parameters, instead of bounds only on its expectation (strong statistical guarantees); and (3) estimators that access data in a streaming fashion while using only constant storage (scalable, simple algorithms).
A useful alternative viewpoint on the statistical estimation problem above is that of stochastic optimization: we have access to the optimization objective (which in the statistical estimation case is simply the population risk of the estimator) only via samples of the objective function or its higher-order derivatives (typically just the gradient). Here again, most of the literature on stochastic optimization provides bounds in expectation [26, 51, 22], or places strong assumptions on the tails of the stochastic function derivatives, such as bounded distributions [24, 45] and sub-Gaussian distributions [25, 34]. Figure 1 shows that even for the simple stochastic optimization task of mean estimation, the deviation of stochastic gradient descent (SGD) is much worse for heavy-tailed distributions than for sub-Gaussian ones. Therefore, bounds on expected behavior, or strong assumptions on the tails of the stochastic noise distribution, are no longer sufficient.
While there has been a line of work on heavy-tailed stochastic optimization, these methods require nontrivial storage complexity or batch sizes, making them unsuitable for streaming settings or large-scale problems [16, 37]. Specifically, existing works require a large batch size to obtain an approximate solution under heavy-tailed noise [44, 7, 19, 39] (see Section B in the Appendix for further discussion). In other words, to achieve the typical convergence rate on the squared error, they would need a batch size of nearly the entire dataset.
Therefore, we investigate the following question:
Can we develop a stochastic optimization method that satisfies our three desiderata?
Our answer is that a simple algorithm suffices: stochastic gradient descent with clipping (clipped-SGD). In particular, we first prove a high-probability bound for clipped-SGD under heavy-tailed noise, with a decaying step-size sequence, for strongly convex objectives. By using a decaying step size, we improve the analysis of Gorbunov et al. [19] and develop the first robust stochastic optimization algorithm in a fully streaming setting, i.e., with constant batch size. We then consider corollaries for statistical estimation where the optimization objective is the population risk of the estimator, and derive the first streaming robust mean estimation and linear regression algorithms that satisfy all three desiderata above.
We summarize our contributions as follows:

We prove the first high-probability bounds for clipped-SGD with a decaying step size for strongly convex and smooth objectives, without sub-Gaussian assumptions on the stochastic gradients, and with constant batch sizes. To the best of our knowledge, this is the first stochastic optimization algorithm that uses a constant batch size in this setting. See Section 1.1 and Table 1 for more details and comparisons. A critical ingredient is a nuanced condition on the stochastic gradient noise.

We show that our proposed stochastic optimization algorithm can be used for a broad class of statistical estimation problems. As corollaries of this framework, we present a new class of robust heavy-tailed estimators for streaming mean estimation and linear regression.

Lastly, we conduct synthetic experiments to corroborate our theoretical and methodological developments. We show that clipped-SGD not only outperforms SGD and a number of baselines in average performance but also has a well-controlled tail performance.
1.1 Related Work
Batch heavy-tailed mean estimation.
In the batch setting, Hopkins [27] proposed a robust mean estimator that can be computed in polynomial time and that matches the error guarantees achieved by the empirical mean on Gaussian data. Following this work, efficient algorithms with improved asymptotic runtimes were given in [10, 6, 5, 32, 8]. Related works [42, 38] provide more practical and computationally efficient algorithms with slightly suboptimal rates. We note that these estimators are targeted to the batch learning setting, and in particular need to store all datapoints. Some exceptions are [38, 8], which have reduced storage requirements; however, these works rely on the median-of-means framework and are not easy to extend to the streaming setting. For more details on these recent algorithms, see the recent survey [35].
Heavy-tailed stochastic optimization.
A line of work in stochastic convex optimization has proposed bounds that achieve sub-Gaussian concentration around their mean (a common step towards providing sharp high-probability bounds), while only assuming that the variance of the stochastic gradients is bounded (i.e., allowing for heavy-tailed stochastic gradients).
Davis et al. [7] proposed proxBoost, which is based on robust distance estimation and proximal operators. Prasad et al. [44] utilized the geometric median-of-means to robustly estimate gradients in each minibatch. Gorbunov et al. [19] and Nazin et al. [39] proposed clipped-SSTM and RSMD, respectively, based on truncation of stochastic gradients within stochastic mirror/gradient descent. Zhang et al. [51] analyzed the convergence of clipped-SGD in expectation, but focus on a different noise regime where the distribution of stochastic gradients has bounded moments only of some lower order. However, all the above works [7, 44, 19, 39] have an unfavorable dependency on the batch size to reach the typical convergence rate on the squared error. We note that our bound is comparable to the above approaches while using a constant batch size. See Section B in the Appendix for more details.
Heavy-tailed regression.
For the setting where the regression noise is heavy-tailed with bounded variance, Huber's estimator is known to achieve sub-Gaussian-style rates [14, 47]. For the case where both the covariates and the noise are heavy-tailed, several recent works have proposed computationally efficient estimators that achieve sub-Gaussian-style error rates, based on the median-of-means framework [28, 38] and thresholding techniques [46]. However, as noted before, computing all of these estimators [28, 38, 46] requires storing the entire dataset.
2 Background and Problem formulation
In this paper, we consider the following statistical estimation/stochastic optimization setting: we assume that there is some class of functions parameterized by a parameter lying in a convex subset of Euclidean space; some random vector with a fixed underlying distribution; and a loss function which takes a parameter and a sample and outputs the loss of that parameter at that sample. In this setting, we want to recover the true parameter, defined as the minimizer of the population risk function:
(1) 
We assume that the population risk is differentiable and convex, and further impose two regularity conditions on it: there exist positive constants such that
(2) 
holds for all parameters in the domain. These constants are called the strong-convexity and smoothness parameters of the population risk.
To solve the minimization problem defined in Eq. (1), we assume that we can access the stochastic gradient of the population risk at any point, given a sample. We note that this is an unbiased gradient estimator.
Our goal in this work is to develop robust statistical methods under heavy-tailed distributions, where only very low-order moments may be finite, e.g., the Student's t distribution or the Pareto distribution. In this work, we assume only that the stochastic gradient distribution has a bounded second moment. Formally, at any point in the domain, we assume that the variance of the stochastic gradient is bounded as
(3) 
In other words, the variance of the gradient noise is bounded by the sum of a uniform constant and a position-dependent term, which allows the variance to be large when the current parameter is far from the true parameter. We note that this is a more general assumption than those of prior works, which assumed that the variance is uniformly bounded. Our condition is more nuanced and can be weaker: when our condition holds, the corresponding uniform bound on the variance may still need to be large, whereas a uniform variance bound can always be cast into our form by taking the position-dependent term to be zero. We will show that this more nuanced assumption is essential to obtain tight bounds for linear regression problems.
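With notation we introduce here for concreteness (writing $\theta^*$ for the true parameter, $\nabla \mathcal{L}(\theta; Z)$ for the stochastic gradient at a sample $Z$, $\mathcal{R}$ for the population risk, and $\alpha, \beta$ as our own labels for the two constants), one natural way to write such a position-dependent second-moment condition is:

```latex
\mathbb{E}_{Z}\Big[ \big\| \nabla \mathcal{L}(\theta; Z) - \nabla \mathcal{R}(\theta) \big\|_2^2 \Big]
\;\le\; \alpha + \beta \, \big\| \theta - \theta^{*} \big\|_2^2
\qquad \text{for all } \theta \in \Theta .
```

Here $\alpha$ plays the role of the uniform variance constant and $\beta$ scales the position-dependent part; a uniformly bounded variance corresponds to $\beta = 0$.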
We next provide some running examples to instantiate the above:

Mean estimation: Given observations drawn from a distribution with a bounded covariance matrix, the minimizer of the following squared loss is the mean of the distribution:
(4) In this case, the resulting stochastic gradients satisfy the assumption in Eq. (3).

Linear regression: Given covariate-response pairs, where the covariates and responses are linked by a linear model whose true parameter we want to estimate, with additive noise drawn from a zero-mean distribution, and where, under the underlying distribution, the covariates have a non-singular covariance matrix, we consider the squared loss:
(5) The true parameter is the minimizer of the corresponding population risk. We also note that the population risk satisfies the regularity assumption in Eq. (2), and the stochastic gradients satisfy the noise assumption in Eq. (3). Note that if we had to uniformly bound the variance of the gradients, as in previous stochastic optimization work, that bound would need to scale with the worst-case distance to the true parameter over the domain, which would yield much looser bounds.
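As a concrete illustration of the two running examples above, the per-sample stochastic gradients of the squared losses can be sketched as follows (a minimal sketch; the function names and the one-half factor on the losses are our own conventions):

```python
import numpy as np

def mean_grad(theta, x):
    """Stochastic gradient of 0.5 * ||theta - x||^2 for mean estimation;
    its expectation over x is theta minus the true mean."""
    return theta - x

def linreg_grad(theta, x, y):
    """Stochastic gradient of 0.5 * (y - <x, theta>)^2 for linear regression;
    unbiased for the population-risk gradient under the linear model."""
    return -(y - x @ theta) * x
```

Both estimators are unbiased for the corresponding population-risk gradients, which is all the streaming algorithm requires from the data.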
3 Main results
In this section, we introduce our clipped stochastic gradient descent algorithm. We begin by formally defining clipped stochastic gradients. For a given clipping parameter, the clipped stochastic gradient is
(6) 
where the clipping rescales the stochastic gradient so that its Euclidean norm never exceeds the clipping parameter. The overall algorithm is summarized in Algorithm 1, where we use the (Euclidean) projection onto the domain to keep the iterates feasible.
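In code, the clipping operator and a single projected clipped-SGD step can be sketched as follows (an illustrative sketch; the function names and the identity-projection default are our own, and Algorithm 1 additionally prescribes the step-size and clipping schedules):

```python
import numpy as np

def clip(g, lam):
    """Rescale g so that its Euclidean norm is at most lam."""
    norm = np.linalg.norm(g)
    return g if norm <= lam else (lam / norm) * g

def clipped_sgd_step(theta, g, step_size, lam, project=lambda t: t):
    """One clipped-SGD update followed by projection onto the domain."""
    return project(theta - step_size * clip(g, lam))
```

Note that clipping leaves small gradients untouched and only shrinks those whose norm exceeds the clipping level, which is what introduces the bias analyzed in Section 5.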
Next, we state our main convergence result for clipped-SGD in Theorem 1 (streaming heavy-tailed stochastic optimization). Suppose that the population risk satisfies the regularity conditions in Eq. (2) and that the stochastic gradient noise satisfies the condition in Eq. (3).
Given the samples, Algorithm 1, initialized at a given point with the step-size, delay, and clipping-level choices
(7) 
where the scaling constant can be chosen by the user, returns an estimate such that, with probability at least the prescribed confidence level, we have
(8) 
We provide a proof sketch explaining the values we choose for these hyperparameters in Section 5, and give a complete proof in Section E in the Appendix.
Remarks:
a) This theorem says that with a properly chosen clipping level, clipped-SGD enjoys sub-Gaussian-style concentration around the true minimizer (alternatively, its high-probability bound scales only logarithmically in the confidence parameter). The first term in the error bound is related to the initialization error, and the second term is governed by the stochastic gradient noise. These two terms have different convergence rates: at early iterations, when the initialization error is large, the first term dominates but decreases quickly; at later iterations, the second term dominates and decreases at the usual statistical rate of convergence.
b) Note that a decaying step size of this form is a common choice for optimizing strongly convex functions [24, 45]. The only difference is that we add a delay parameter to "slow down" the learning rate and to stabilize the training process. The delay parameter depends on the position-dependent variance term and the condition number. From a theoretical viewpoint, the delay parameter ensures that the true gradient is within the clipping region with high probability, which in turn allows us to control the variance and the bias incurred by clipped gradients. Moreover, it controls the position-dependent variance term, especially during the initial iterations when the error (and hence the variance of the stochastic gradients) is large.
c) We choose the clipping level to balance the variance and the bias of the clipped gradients. Roughly speaking, the bias is inversely proportional to the clipping level (Lemma 5). Since the error converges at a known rate, this in turn suggests choosing a clipping level that decays in step with the error.
We also provide an error bound and sample complexity for the case where, as in prior work, the variance of the stochastic gradients is assumed to be uniformly bounded (as before, we assume that the population loss is strongly convex and smooth). We have the following corollary:
Under the same assumptions and with the same hyperparameters as in Theorem 1, with probability at least the prescribed confidence level, we have the following error bound:
(9) 
where the leading quantity is the initialization error. In other words, to achieve a target accuracy with the prescribed probability, we need a correspondingly large number of samples. With our general results in place, we now turn our attention to deriving some important consequences for mean estimation and linear regression.
4 Consequences for Heavy-tailed Parameter Estimation
In this section, we investigate the consequences of Theorem 1 for statistical estimation in the presence of heavy-tailed noise. We plug the respective loss functions, together with the terms capturing the underlying stochastic gradient distribution, into Theorem 1 to obtain high-probability bounds for the respective statistical estimators.
4.1 Heavy-tailed Mean Estimation
We assume that the distribution has a bounded covariance matrix. Then clipped-SGD for mean estimation has the following guarantee (streaming heavy-tailed mean estimation). Given samples from the distribution and a confidence level, Algorithm 1, instantiated with the squared loss, an initial point, and
where the scaling constant can be chosen by the user, returns an estimate such that, with probability at least the prescribed confidence level, we have
(10) 
Remarks:
The proposed mean estimator matches the error bound of the well-known geometric-median-of-means estimator [38]. This guarantee is still suboptimal compared to the optimal sub-Gaussian rate [35]. Existing polynomial-time algorithms with optimal performance are designed for the batch setting and either require storing the entire dataset [6, 5, 36, 32, 11] or have nontrivial storage complexity [8]. On the other hand, we argue that trading off some statistical accuracy for a large saving in memory and computation is favorable in practice.
4.2 Heavy-tailed Linear Regression
We consider the linear regression model described in Eq. (5). Assume that the covariates have bounded low-order moments and a non-singular covariance matrix with bounded operator norm, and that the noise also has bounded low-order moments. We track the minimum and maximum eigenvalues of the covariance matrix. More formally, we say a random vector has bounded moments if there exists a constant such that, for every unit vector, we have
(11) 
(Streaming heavy-tailed regression) Given samples and a confidence level, Algorithm 1, instantiated with the loss function in Eq. (5), an initial point, and
where the scaling constant can be chosen by the user and the moment constant is the one in Eq. (11), returns an estimate such that, with probability at least the prescribed confidence level, we have
(12) 
5 Sketch of proof
In this section, we provide an overview of the arguments that constitute the proof of Theorem 1 and explain our theoretical contribution. The full details of the proof can be found in Section E in the Appendix. Our proof improves upon the previous analysis of clipped-SGD with constant learning rate and batch size [19], and upon high-probability bounds for SGD with decaying step size in the sub-Gaussian setting [25]. Our analysis consists of three steps: (i) expansion of the update rule; (ii) selection of the clipping level; (iii) concentration of bounded martingale sequences.
Notations:
Recall the decaying step-size schedule from Theorem 1. We write the noise at each step as the difference between the stochastic gradient and the true gradient, and let the filtration be the sigma-algebra generated by the first steps of clipped-SGD. We note that clipping introduces bias, so the noise is no longer zero-mean; we therefore decompose the noise term into a bias term and a zero-mean variance term, i.e.
(13) 
(i) Expansion of the update rule:
We start with the following lemma, which is standard in the analysis of SGD for strongly convex functions. It can be obtained by unrolling the update rule and using properties of strongly convex and smooth functions. Under the conditions in Theorem 1, for any iteration, we have
(ii) Selection of the clipping level:
Now, to upper bound the noise terms, we need to choose the clipping level properly to balance the variance term and the bias term. Specifically, we use the inequalities of Gorbunov et al. [19], which provide upper bounds for the magnitude and variance of these noise terms. (Lemma F.5, [19]) For any iteration, we have
(14) 
Moreover, assume that the variance of the stochastic gradients is bounded and that the norm of the true gradient is at most a constant fraction of the clipping level. Then we have
(15) 
This lemma gives us the dependencies between the variance, the bias, and the clipping level: a larger clipping level leads to a smaller bias, but the magnitude of the variance term is larger, while the variance of the variance term remains constant. These three inequalities are also essential for applying concentration inequalities for martingales. However, we highlight the condition needed for these inequalities to hold: the true gradient must lie within the clipping region, up to a constant factor. This condition is necessary since, without it, we could not upper bound the bias and variance terms. Therefore, the clipping level must be chosen very precisely. Below we informally describe how we choose it.
We note that the error should converge at the standard rate for strongly convex functions with a decaying step size [45, 25]. To ensure that the first noise term is bounded at the same rate, each summand should be controlled accordingly, which motivates choosing a clipping level that decays in proportion to the error rate, by Eq. (15). Also, the detailed proof in Section E shows that the delay parameter ensures the gradient-norm condition holds with high probability and that the position-dependent noise is controlled.
(iii) Concentration of bounded martingale sequences:
A significant technical step of our analysis is the use of the following Freedman’s inequality.
(Freedman's inequality [15]) Let the increments form a martingale difference sequence with a uniform bound on each step, and let the predictable variation be the running sum of conditional variances. Then, for every deviation threshold and every bound on the predictable variation, the probability that some partial sum exceeds the threshold while the predictable variation stays below its bound is exponentially small.
Freedman's inequality says that if we know the boundedness and the variance of a martingale difference sequence, then its partial sums exhibit exponential concentration around their expected value, uniformly over all prefixes of the sequence.
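For reference, the classical form of Freedman's inequality, stated here in standard textbook notation (which may differ from the constants used in our proof): for a martingale difference sequence $X_1, \ldots, X_n$ with $X_i \le R$ almost surely and conditional-variance process $V_k = \sum_{i=1}^{k} \mathbb{E}[X_i^2 \mid \mathcal{F}_{i-1}]$,

```latex
\Pr\left[\, \exists\, k \le n : \ \sum_{i=1}^{k} X_i \ge a \ \text{ and } \ V_k \le b \,\right]
\;\le\; \exp\!\left( \frac{-a^{2}}{2\,(b + R a / 3)} \right)
\qquad \text{for all } a, b > 0 .
```

The key feature exploited in our analysis is that the bound holds simultaneously for all prefixes $k \le n$.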
Now we turn our attention to the variance part of the first noise term. It is the sum of a martingale difference sequence, since each variance term has zero mean conditioned on the past. Note that Lemma 5 has given us upper bounds on the magnitude and variance of these terms. However, the main technical difficulty is that each summand involves the errors of past iterates.
Our solution is the use of Freedman's inequality, which gives us loose control over all past error terms with high probability. In contrast, the recurrence technique used in past works [21, 20, 19] uses increasing clipping levels and invokes Bernstein's inequality (which only provides an upper bound for the entire sequence) repeatedly in order to control every past error term. As a result, it incurs an extra factor in the bound, since it imposes too strong a control over the past error terms.
Finally, we describe at a high level why clipped-SGD allows a constant batch size. Previous works on stochastic optimization with strongly convex and smooth objectives use a constant step size throughout the training process [21, 44, 7, 39]. However, to ensure constant progress at each iteration, they must use an exponentially growing batch size to reduce the variance of the gradients. In our approach, by contrast, we explicitly control the variance by using a decaying learning rate and clipping the gradients. Therefore, we are able to provide a careful analysis of the resulting bounded martingale sequences.
6 Experiments
To corroborate our theoretical findings, we conduct experiments on mean estimation and linear regression to study the performance of clipped-SGD. Another experiment on logistic regression is presented in Section D.3 in the Appendix.
Methods.
We compare clipped-SGD with vanilla SGD, which applies the stochastic gradient update without a clipping function. For linear regression, we also compare clipped-SGD with Lasso regression [40], Ridge regression, and Huber regression [47, 29]. All methods use the same step size at each step. To simulate heavy-tailed samples, we draw from a scaled standardized Pareto distribution whose tail parameter determines which moments exist; the smaller the tail parameter, the more heavy-tailed the distribution. Due to space constraints, we defer other results with different setups to the Appendix.
Choice of Hyperparameter.
Note that in Theorem 1 the clipping level depends on the initialization error, which is not known in advance. Moreover, we found that the suggested value of the clipping level has too large a constant factor and substantially decreases the convergence rate, especially for small sample sizes. However, standard hyperparameter selection techniques such as cross-validation and holdout validation require storing the entire validation set, and so are not suitable for streaming settings.
Consequently, we use a sequential validation method [30], where we do "training" and "evaluation" on the last portion of the data. Formally, given candidate solutions trained from the stream of samples, we track the estimated parameter for each candidate at each iteration. We then choose the candidate that minimizes the empirical mean of the risk over the last portion of the samples, i.e.
(16) 
That is, over the last portion of the iterations, we calculate the mean of the risk induced by each candidate on each sample, evaluated before that sample is used to update the parameter. In our experiments, we use this procedure to tune the delay parameter, the clipping level, and the regularization factors for Lasso, Ridge, and Huber regression.
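A minimal sketch of this sequential ("prequential") validation scheme; the interface, the default 10 percent evaluation tail, and the squared-loss risk in the test below are our own illustrative choices:

```python
import numpy as np

def sequential_validation(candidates, samples, risk, tail_frac=0.1):
    """Prequential model selection for streaming estimators.

    candidates: objects with .theta (current estimate) and
                .update(sample) (one streaming training step).
    risk:       risk(theta, sample) -> float, evaluated BEFORE the
                sample is used for the update (so it is held out).
    Returns the index of the candidate with the lowest average risk
    over the last tail_frac fraction of the stream.
    """
    n = len(samples)
    start = int((1.0 - tail_frac) * n)   # evaluation begins here
    scores = np.zeros(len(candidates))
    for t, z in enumerate(samples):
        for i, cand in enumerate(candidates):
            if t >= start:
                scores[i] += risk(cand.theta, z)   # evaluate, then train
            cand.update(z)
    return int(np.argmin(scores))
```

Because each sample is scored before it is used for training, no validation set needs to be stored, keeping the procedure compatible with the constant-storage streaming setting.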
Metric.
For any estimator, we use the loss between the estimated and true parameters as our primary metric. To measure the tail performance of estimators (not just their expected loss), we also report a high quantile of the loss, i.e., the bound on the loss that we wish to hold with the prescribed probability. This can also be viewed as a percentile of the loss (e.g., a 5 percent failure probability corresponds to the 95th percentile of the loss).
6.1 Synthetic Experiments: Mean estimation
Setup.
We obtain samples from a scaled standardized Pareto distribution. We initialize the mean parameter estimate away from the true mean and fix the step-size schedule for clipped-SGD and vanilla SGD. We note that in this setting, some simple algebra shows that the vanilla SGD iterate is exactly the empirical mean of the samples seen so far. Each metric is reported over 50000 trials. For reference, we also compare our approach with coordinate-wise/geometric median-of-means (MoM), which require more than constant space.
Results.
Figure 2 shows the performance of all algorithms. In the first panel, we can see that the expected error of clipped-SGD converges as our theory indicates. Also, as shown in the second panel, the 99.9 percent quantile loss of vanilla SGD is over 10 times worse than its expected error, while the tail performance of clipped-SGD is similar to its expected performance. Finally, we note that although coordinate-wise/geometric median-of-means use more than constant space, they are still worse than our clipped-SGD algorithm.
In the third panel, we can see that clipped-SGD performs well across different clipping levels. When the clipping level is too small, the effective steps are too small and the final error is high; when the clipping level is very large, the performance of clipped-SGD approaches that of vanilla SGD.
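The mean-estimation experiment above can be sketched in a self-contained toy form (the tail parameter, sample size, and clipping level below are our own choices, not the paper's exact configuration):

```python
import numpy as np

def standardized_pareto(rng, n, d, tail=2.5):
    """Heavy-tailed samples with zero mean and unit variance per coordinate.
    numpy's rng.pareto draws from a Lomax(tail) distribution, which has
    mean 1/(tail-1) and variance tail/((tail-1)**2 * (tail-2)) for tail > 2."""
    raw = rng.pareto(tail, size=(n, d))
    mean = 1.0 / (tail - 1.0)
    std = np.sqrt(tail / ((tail - 1.0) ** 2 * (tail - 2.0)))
    return (raw - mean) / std

def clipped_sgd_mean(samples, lam):
    """Streaming mean estimation with clipped-SGD and step size 1/t.
    With lam = np.inf this reduces exactly to the running empirical mean."""
    theta = np.zeros(samples.shape[1])
    for t, x in enumerate(samples, start=1):
        g = theta - x                        # gradient of 0.5 * ||theta - x||^2
        norm = np.linalg.norm(g)
        if norm > lam:
            g *= lam / norm                  # clip the gradient norm to lam
        theta -= g / t                       # decaying step size
    return theta

rng = np.random.default_rng(0)
X = standardized_pareto(rng, n=2000, d=5)
est = clipped_sgd_mean(X, lam=5.0)           # true mean is the zero vector
```

With an infinite clipping level the iterate coincides exactly with the empirical mean, which is the sense in which vanilla SGD reduces to the sample mean in this setting.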
6.2 Synthetic Experiments: Linear regression
Setup
We generate covariates from a scaled standardized Pareto distribution. The true regression parameter and the initial parameter estimate are fixed in advance. The response is generated according to the linear model, where the noise is sampled from a scaled standardized Pareto distribution with zero mean. Each metric is reported over 50000 trials.
Results
We note that in this experiment our hyperparameter selection technique picks the clipping level and delay parameter automatically. Figures 3(a) and 3(b) show that clipped-SGD performs the best among all baselines in terms of average error and quantile errors across different probability levels. Also, the quantile loss scales logarithmically in the inverse confidence parameter, as Corollary 4.2 indicates. In Figure 3(c), we plot the quantile loss against different clipping levels; it shows a trend similar to that of mean estimation.
Next, in Figure 3(d), we plot the averaged convergence curves for different delay parameters. This shows that a sufficiently large delay parameter is necessary to prevent divergence. This phenomenon can also be observed for the different baseline models. Figures 3(e) and 3(f) show that clipped-SGD performs the best across different noise levels and different dimensions.
7 Discussion
In this paper, we provide a streaming algorithm for statistical estimation under heavy-tailed distributions. In particular, we close a gap in the theory of clipped stochastic gradient descent under heavy-tailed noise. We show that clipped-SGD can be used not only for parameter estimation tasks, such as mean estimation and linear regression, but also for more general stochastic optimization problems with heavy-tailed noise. There are several avenues for future work, including a better understanding of clipped-SGD under different distributions, such as those with higher-order bounded moments or symmetric distributions, where the clipping technique incurs less bias. Finally, it would also be of interest to extend our results to different robustness settings such as Huber's contamination model [29], where a constant fraction of the observed samples are arbitrary outliers.
References
 [1] (2016) Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC conference on computer and communications security, pp. 308–318. Cited by: Appendix B.
 [2] (2014) Convex optimization: algorithms and complexity. arXiv preprint arXiv:1405.4980. Cited by: Appendix E.
 [3] (2012) Challenging the empirical mean and empirical variance: a deviation study. In Annales de l’IHP Probabilités et statistiques, Vol. 48, pp. 1148–1185. Cited by: §1.

 [4] (2020) Understanding gradient clipping in private SGD: a geometric perspective. Advances in Neural Information Processing Systems 33. Cited by: Appendix B.
 [5] (2020) High-dimensional robust mean estimation via gradient descent. In International Conference on Machine Learning, pp. 1768–1778. Cited by: §1.1, §1, §4.1.
 [6] (2019) Fast mean estimation with subgaussian rates. In Conference on Learning Theory, pp. 786–806. Cited by: §1.1, §1, §4.1.
 [7] (2021) From low probability to high confidence in stochastic convex optimization. Journal of Machine Learning Research 22 (49), pp. 1–38. Cited by: Table 1, §1.1, §1, §5.
 [8] (2019) Robust subgaussian estimation of a mean vector in nearly linear time. arXiv preprint arXiv:1906.03058. Cited by: §1.1, §4.1.
 [9] (2017) Being robust (in high dimensions) can be practical. In International Conference on Machine Learning, pp. 999–1008. Cited by: §1.
 [10] (2020) Outlier robust mean estimation with subgaussian rates via stability. arXiv preprint arXiv:2007.15618. Cited by: §1.1.

 [11] (2019) Quantum entropy scoring for fast robust mean estimation and improved outlier detection. arXiv preprint arXiv:1906.11366. Cited by: §4.1.
 [12] (2011) Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research 12 (7). Cited by: §1.
 [13] (2016) Stochastic intermediate gradient method for convex problems with stochastic inexact oracle. Journal of Optimization Theory and Applications 171 (1), pp. 121–145. Cited by: Table 1.
 [14] (2017) Estimation of high dimensional mean regression in the absence of symmetry and light tail assumptions. Journal of the Royal Statistical Society. Series B, Statistical methodology 79 (1), pp. 247. Cited by: §1.1.
 [15] (1975) On tail probabilities for martingales. the Annals of Probability, pp. 100–118. Cited by: §5.
 [16] (2015) Robust regression for largescale neuroimaging studies. Neuroimage 111, pp. 431–441. Cited by: §1.
 [17] (2021) On proximal policy optimization’s heavytailed gradients. arXiv preprint arXiv:2102.10264. Cited by: Appendix B.
 [18] (2013) Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization, ii: shrinking procedures and optimal algorithms. SIAM Journal on Optimization 23 (4), pp. 2061–2089. Cited by: Table 1, §1.
 [19] (2020) Stochastic optimization with heavy-tailed noise via accelerated gradient clipping. arXiv preprint arXiv:2005.10785. Cited by: Table 1, Appendix E, §1.1, §1, §1, §5, §5, §5.
 [20] (2019) Optimal decentralized distributed algorithms for stochastic convex optimization. arXiv preprint arXiv:1911.07363. Cited by: §5.
 [21] (2018) An accelerated method for derivativefree smooth stochastic convex optimization. arXiv preprint arXiv:1802.09022. Cited by: §5, §5.
 [22] (2019) SGD: general analysis and improved rates. In International Conference on Machine Learning, pp. 5200–5209. Cited by: §1.
 [23] (2017) Improved training of wasserstein gans. arXiv preprint arXiv:1704.00028. Cited by: Appendix B.
 [24] (2019) Tight analyses for nonsmooth stochastic gradient descent. In Conference on Learning Theory, pp. 1579–1613. Cited by: §1, §1, §3.
 [25] (2019) Simple and optimal highprobability bounds for stronglyconvex stochastic gradient descent. arXiv preprint arXiv:1909.00843. Cited by: Appendix B, §1, §1, §5, §5.
 [26] (2014) Beyond the regret minimization barrier: optimal algorithms for stochastic stronglyconvex optimization. The Journal of Machine Learning Research 15 (1), pp. 2489–2512. Cited by: §1, §1.
 [27] (2020) Mean estimation with subgaussian rates in polynomial time. Annals of Statistics 48 (2), pp. 1193–1213. Cited by: §1.1.
 [28] (2016) Loss minimization and parameter estimation with heavy tails. The Journal of Machine Learning Research 17 (1), pp. 543–582. Cited by: §1.1, §1.
 [29] (1973) Robust regression: asymptotics, conjectures and monte carlo. Annals of statistics 1 (5), pp. 799–821. Cited by: §6, §7.
 [30] (2018) Forecasting: principles and practice. OTexts. Cited by: §6.
 [31] (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §1.
 [32] (2020) A fast spectral algorithm for mean estimation with subgaussian rates. In Conference on Learning Theory, pp. 2598–2612. Cited by: §1.1, §1, §4.1.

 [33] (2020) Gradient descent with early stopping is provably robust to label noise for overparameterized neural networks. In International Conference on Artificial Intelligence and Statistics, pp. 4313–4324. Cited by: Appendix B.
 [34] (2019) On the convergence of stochastic gradient descent with adaptive stepsizes. In The 22nd International Conference on Artificial Intelligence and Statistics, pp. 983–992. Cited by: §1.
 [35] (2019) Mean estimation and regression under heavy-tailed distributions: a survey. Foundations of Computational Mathematics 19 (5), pp. 1145–1190. Cited by: Table 2, §1.1, §1, §4.1.
 [36] (2019) Robust multivariate mean estimation: the optimality of trimmed mean. arXiv preprint arXiv:1907.11391. Cited by: §4.1.
 [37] (2013) Ad click prediction: a view from the trenches. In Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 1222–1230. Cited by: §1.
 [38] (2015) Geometric median and robust estimation in banach spaces. Bernoulli 21 (4), pp. 2308–2335. Cited by: §1.1, §1.1, §1, §4.1.
 [39] (2019) Algorithms of robust stochastic optimization based on mirror descent method. Automation and Remote Control 80 (9), pp. 1607–1627. Cited by: Table 1, §1.1, §1, §5.
 [40] (2012) Robust lasso with missing and grossly corrupted observations. IEEE transactions on information theory 59 (4), pp. 2036–2058. Cited by: §6.

 [41] (2012) Understanding the exploding gradient problem. CoRR abs/1211.5063. Cited by: Appendix B.
 [42] (2019) A unified approach to robust mean estimation. arXiv preprint arXiv:1907.00927. Cited by: §1.1, §1.
 [43] (2020) A robust univariate mean estimator is all you need. In International Conference on Artificial Intelligence and Statistics, pp. 4034–4044. Cited by: §1.
 [44] (2018) Robust estimation via robust gradient estimation. arXiv preprint arXiv:1802.06485. Cited by: Table 1, §F.3, §1.1, §1, §1, §5.
 [45] (2011) Making gradient descent optimal for strongly convex stochastic optimization. arXiv preprint arXiv:1109.5647. Cited by: Appendix B, §1, §1, §3, §5.
 [46] (2019) Adaptive hard thresholding for near-optimal consistent robust regression. In Conference on Learning Theory, pp. 2892–2897. Cited by: §1.1, §1.
 [47] (2020) Adaptive huber regression. Journal of the American Statistical Association 115 (529), pp. 254–265. Cited by: §1.1, §6.
 [48] (1999) A general class of exponential inequalities for martingales and ratios. Annals of probability 27 (1), pp. 537–564. Cited by: Appendix E.

 [49] (2017) Scaling SGD batch size to 32K for ImageNet training. arXiv preprint arXiv:1708.03888. Cited by: Appendix B.
 [50] (2019) Why gradient clipping accelerates training: a theoretical justification for adaptivity. arXiv preprint arXiv:1905.11881. Cited by: Appendix B.

 [51] (2019) Why are adaptive methods good for attention models? arXiv preprint arXiv:1912.03194. Cited by: §1.1, §1, §1.
Appendix A Organization
The appendices contain additional technical content and are organized as follows. In Appendix B, we provide additional details on related work, including a detailed comparison with previous work on stochastic optimization. In Appendix C, we detail the hyperparameters used for the experiments in Section 6. In Appendix D, we present supplementary experimental results for different setups of heavy-tailed mean estimation and linear regression; additionally, we show a synthetic experiment on logistic regression. Finally, in Appendices E and F, we give the proofs of Theorem 1 in Section 3 and of the corollaries in Section 4.
Appendix B Related work: additional details
Heavy-tailed stochastic optimization.
In this section, we present a detailed comparison with existing results. In Table 1, we compare existing high-probability bounds for stochastic optimization with strongly convex and smooth objectives.
Since our work focuses on the large-scale setting (where we must access data in a streaming fashion), we assume that the number of samples is large, so that the initialization error term is small. In this setting, the error is dominated by the stochastic noise term. Ignoring differences in logarithmic factors, all methods for heavy-tailed noise in Table 1 achieve rates comparable to those of algorithms derived under the sub-Gaussian noise assumption.
However, we can see that all of the existing methods except ours require a growing batch size. Their batch sizes are not constant because they use a constant step size throughout training: although this yields a linear convergence rate on the initialization error, they must use an exponentially growing batch size to reduce the variance induced by gradient noise. Moreover, in the large-scale setting, where the noise term dominates, linear convergence on the initial error is not important. We instead choose a decaying step size, which is widely used in stochastic optimization for strongly convex objectives [45, 25]. Our proposed clipped-SGD can therefore attain its convergence rate while using a constant batch size.
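To make the constant-storage, constant-batch-size point concrete, here is a minimal sketch of clipped SGD with a decaying step size. The function names, the 1/t schedule, and the clipping threshold below are our own illustrative choices, not the paper's exact specification:

```python
import numpy as np

def clipped_sgd(grad_oracle, x0, n_steps, clip_level, step_fn):
    """Sketch of clipped SGD with a decaying step size.

    At each step t, draw one stochastic gradient, clip its L2 norm to
    `clip_level`, and take a step of size `step_fn(t)` (e.g. 1/t).
    Only the current iterate is stored, so memory is constant per step.
    """
    x = np.asarray(x0, dtype=float)
    for t in range(1, n_steps + 1):
        g = grad_oracle(x)                  # single-sample stochastic gradient
        norm = np.linalg.norm(g)
        if norm > clip_level:               # clip-by-norm: rescale, keep direction
            g = g * (clip_level / norm)
        x = x - step_fn(t) * g              # decaying step, e.g. step_fn = lambda t: 1.0 / t
    return x
```

For instance, for streaming mean estimation of a heavy-tailed distribution one can pass `grad_oracle = lambda x: x - next(sample_iter)`, which consumes one sample per step and uses constant storage.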
Finally, our analysis improves the dependence on the confidence level: it incurs no extra logarithmic factors and no additional polynomial dependence on the confidence parameter. Although our bound has a worse dependence on the condition number, we argue that this extra factor arises because our bound is stated for the squared parameter error rather than for the difference between objective values. As a result, we believe the dependence of our bounds on the condition number can be improved by modifying the corresponding lemma in Appendix E.
Gradient clipping.
Gradient clipping is a well-known optimization technique for both convex and non-convex optimization problems [23, 49, 41]. It has been shown to accelerate neural network training [50], to stabilize policy gradient algorithms [17], and to enable the design of differentially private optimization algorithms [4, 1]. Gradient clipping has also been shown to be robust to label noise [33].
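As a concrete reference point, the clip-by-global-norm variant common in neural network training rescales all gradient tensors jointly so that their combined L2 norm never exceeds a threshold. The helper below is a hypothetical NumPy sketch of this operation, not code from any of the cited works:

```python
import numpy as np

def clip_by_global_norm(grads, max_norm):
    """Rescale a list of gradient arrays so their joint L2 norm is at most max_norm.

    Returns the (possibly rescaled) gradients and the pre-clipping global norm.
    A single scale factor is applied to every array, preserving the update direction.
    """
    total = np.sqrt(sum(float(np.sum(g * g)) for g in grads))
    scale = min(1.0, max_norm / (total + 1e-12))  # small epsilon guards against division by zero
    return [g * scale for g in grads], total
```

Applying one shared scale factor (rather than clipping each tensor independently) is what keeps the clipped update parallel to the original gradient.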