arXiv:2408.17145v2 Announce Type: replace-cross Abstract: The classical line search for learning-rate (LR) tuning in the stochastic gradient descent (SGD) algorithm can tame the convergence slowdown caused by data-sampling noise. In a federated setting, where client heterogeneity introduces an additional slowdown in global convergence, line search can be suitably adapted. In this work, we show that a stochastic variant of line search tames the slowdown due to heterogeneity in federated optimization, in addition to that due to client-local gradient noise. To this end, we introduce the Federated Stochastic Line Search (FedSLS) algorithm and show that it achieves deterministic rates in expectation. Specifically, FedSLS offers linear convergence for strongly convex objectives even with partial client participation. Recently, extrapolation of the server's LR has shown promise for improved empirical performance in federated learning. To benefit from extrapolation, we extend FedSLS to Federated Extrapolated Stochastic Line Search (FedExpSLS) and prove its convergence. Our extensive empirical results show that the proposed methods perform on par with or better than popular federated learning algorithms across many convex and non-convex problems.
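To make the line-search idea concrete, below is a minimal Python sketch of a mini-batch (stochastic) Armijo line search for the SGD step size, wrapped in a FedAvg-style communication round with an optional server-side LR that can be set above 1 to mimic extrapolation. This is an illustrative reconstruction under stated assumptions, not the paper's FedSLS/FedExpSLS implementation; the function names, client data format, and default constants are hypothetical.

```python
import numpy as np


def stochastic_armijo_step(w, loss_fn, grad_fn, batch,
                           lr_init=1.0, c=0.5, beta=0.7, max_backtracks=20):
    """One SGD step whose LR is chosen by a mini-batch Armijo line search.

    Backtracks the learning rate until the batch loss satisfies the
    sufficient-decrease condition
        f_B(w - lr * g) <= f_B(w) - c * lr * ||g||^2,
    where f_B and g are the loss and gradient on the current mini-batch.
    """
    g = grad_fn(w, batch)
    f0 = loss_fn(w, batch)
    g_norm_sq = float(np.dot(g, g))
    lr = lr_init
    for _ in range(max_backtracks):
        if loss_fn(w - lr * g, batch) <= f0 - c * lr * g_norm_sq:
            break
        lr *= beta  # shrink the step size and retest the Armijo condition
    return w - lr * g, lr


def federated_round(w_global, clients, local_steps=5, server_lr=1.0):
    """One FedAvg-style round (illustrative, not the paper's algorithm).

    Each participating client runs local SGD with the stochastic Armijo
    line search; the server averages the client updates and applies a
    server LR (values > 1 act as a simple extrapolation of the update).
    Each client is assumed to be a (loss_fn, grad_fn, batches) triple.
    """
    deltas = []
    for loss_fn, grad_fn, batches in clients:
        w = w_global.copy()
        for batch in batches[:local_steps]:
            w, _ = stochastic_armijo_step(w, loss_fn, grad_fn, batch)
        deltas.append(w - w_global)
    avg_delta = np.mean(deltas, axis=0)
    return w_global + server_lr * avg_delta
```

In this sketch, the line search replaces a hand-tuned client LR: each local step pays a few extra loss evaluations on the current mini-batch in exchange for a step size that adapts to the client's local curvature and noise, which is the mechanism the abstract credits with taming both gradient noise and client heterogeneity.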
