
Worst-case convergence analysis of relatively inexact gradient descent on smooth convex functions

(2025)

Paper Information
arXiv ID: 2506.17145

Abstract

We consider the classical gradient descent algorithm with constant stepsizes, where some error is introduced in the computation of each gradient. More specifically, we assume some relative bound on the inexactness, in the sense that the norm of the difference between the true gradient and its approximate value is bounded by a certain fraction of the gradient norm. This paper presents a worst-case convergence analysis of this so-called relatively inexact gradient descent on smooth convex functions, using the Performance Estimation Problem (PEP) framework. We first derive the exact worst-case behavior of the method after one step. Then we study the case of several steps and provide computable upper and lower bounds using the PEP framework. Finally, we discuss the optimal choice of constant stepsize according to the obtained worst-case convergence rates.
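
Concretely, the inexactness model described in the abstract can be written as follows; the symbols below (stepsize gamma, accuracy epsilon, approximate direction d_k) are our own gloss, not necessarily the paper's notation:

```latex
% Relatively inexact gradient descent (notation is our own gloss of the abstract):
% gamma > 0 is the constant stepsize and epsilon >= 0 the relative accuracy
% of the gradient oracle.
\[
  x_{k+1} = x_k - \gamma \, d_k,
  \qquad \text{where } \|d_k - \nabla f(x_k)\| \le \varepsilon \, \|\nabla f(x_k)\|.
\]
```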

Summary

This paper presents a worst-case convergence analysis of relatively inexact gradient descent on smooth convex functions, targeting situations where exact gradient computations are impractical or costly. Using the Performance Estimation Problem (PEP) framework, the authors derive the exact worst-case behavior of the method after one step, then provide computable upper and lower bounds on the worst-case convergence rate after several steps. The analysis yields the optimal choice of constant stepsize as a function of the inexactness level, and the relative inexactness model is compared against other models of gradient inexactness. A key contribution is extending tight PEP-based analyses of inexact gradient descent beyond the smooth strongly convex setting to the smooth convex setting, and covering multiple iterations rather than a single step.
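
As a concrete illustration of the algorithm being analyzed, here is a minimal NumPy sketch of relatively inexact gradient descent on a smooth convex quadratic; the test function, the perturbation direction, and all parameter values are illustrative choices, not taken from the paper:

```python
import numpy as np

def relatively_inexact_gd(grad, x0, gamma, epsilon, n_steps, rng):
    """Gradient descent where each true gradient g is replaced by some d
    with ||d - g|| <= epsilon * ||g|| (the relative inexactness model)."""
    x = x0.copy()
    for _ in range(n_steps):
        g = grad(x)
        # Draw a random direction and scale it so the perturbation has
        # norm exactly epsilon * ||g|| (the largest allowed error).
        noise = rng.standard_normal(x.shape)
        noise *= epsilon * np.linalg.norm(g) / max(np.linalg.norm(noise), 1e-12)
        x = x - gamma * (g + noise)
    return x

# Illustrative smooth convex quadratic f(x) = 0.5 * x @ A @ x, with L = 1.
rng = np.random.default_rng(0)
A = np.diag([1.0, 0.1])
x = relatively_inexact_gd(grad=lambda z: A @ z, x0=np.array([3.0, 3.0]),
                          gamma=1.0, epsilon=0.3, n_steps=50, rng=rng)
print("f(x_N) - f* =", 0.5 * x @ A @ x)   # f* = 0 at x* = 0
```

Random perturbations like this only exercise the oracle model; the paper's worst-case bounds correspond to an adversarial choice of the perturbation at every step.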

Methods

This paper employs the following methods:

  • Gradient Descent
  • Performance Estimation Problem (PEP), illustrated in the sketch below
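
A minimal numerical PEP sketch for this setting, assuming the open-source PEPit package (https://github.com/PerformanceEstimation/PEPit). The call to inexact_gradient_step with notion='relative' follows PEPit's documented inexact-gradient primitive, but exact signatures may differ across versions; this is an illustration, not the authors' code.

```python
from PEPit import PEP
from PEPit.functions import SmoothConvexFunction
from PEPit.primitive_steps import inexact_gradient_step

L, gamma, epsilon, n_steps = 1.0, 1.0, 0.3, 5

problem = PEP()
f = problem.declare_function(SmoothConvexFunction, L=L)
xs = f.stationary_point()                            # a minimizer x*
x0 = problem.set_initial_point()
problem.set_initial_condition((x0 - xs) ** 2 <= 1)   # ||x0 - x*||^2 <= 1

x = x0
for _ in range(n_steps):
    # d satisfies ||d - grad f(x)|| <= epsilon * ||grad f(x)||.
    x, d, fx = inexact_gradient_step(x, f, gamma=gamma, epsilon=epsilon,
                                     notion='relative')
problem.set_performance_metric(f(x) - f(xs))

# Solving the underlying SDP gives a numerical worst-case value of f(x_N) - f*.
print(problem.solve())
```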

Models Used

  • None specified

Datasets

The following datasets were used in this research:

  • None specified

Evaluation Metrics

  • None specified

Results

  • Tight convergence guarantees for inexact gradient descent
  • Optimal choice of constant stepsize based on the inexactness level (see the sweep sketch below)
  • Characterization of convergence rates across distinct stepsize regimes
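
To see how an optimal constant stepsize could be selected numerically from such worst-case bounds (same PEPit assumptions and caveats as in the Methods sketch; the grid and parameter values are arbitrary):

```python
import numpy as np
from PEPit import PEP
from PEPit.functions import SmoothConvexFunction
from PEPit.primitive_steps import inexact_gradient_step

def worst_case_bound(gamma, L=1.0, epsilon=0.3, n_steps=5):
    """Numerical worst case of f(x_N) - f* for relatively inexact GD,
    computed with PEPit (illustrative sketch, not the authors' code)."""
    problem = PEP()
    f = problem.declare_function(SmoothConvexFunction, L=L)
    xs = f.stationary_point()
    x0 = problem.set_initial_point()
    problem.set_initial_condition((x0 - xs) ** 2 <= 1)
    x = x0
    for _ in range(n_steps):
        x, _, _ = inexact_gradient_step(x, f, gamma=gamma, epsilon=epsilon,
                                        notion='relative')
    problem.set_performance_metric(f(x) - f(xs))
    return problem.solve(verbose=0)

# Grid-search the constant stepsize minimizing the numerical worst-case bound.
gammas = np.linspace(0.2, 1.8, 9)
best_gamma = min(gammas, key=worst_case_bound)
print("best constant stepsize on this grid:", best_gamma)
```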

Limitations

The authors identified the following limitations:

  • Difficulty in deriving explicit worst-case instances for the intermediate regime
  • Analytical characterization of optimal stepsize is complex

Technical Requirements

  • Number of GPUs: None specified
  • GPU Type: None specified
  • Compute Requirements: None specified
