Paper ID | L.3.1
Paper Title | Communication Efficient Distributed Approximate Newton Method
Authors | Avishek Ghosh, University of California, Berkeley, United States; Raj Kumar Maity, Arya Mazumdar, University of Massachusetts, Amherst, United States; Kannan Ramchandran, University of California, Berkeley, United States
Session | L.3: Distributed Learning Performance Analysis
Presentation | Lecture
Track | Statistics and Learning Theory
Abstract | In this paper, we develop a communication-efficient second-order distributed Newton-type algorithm. For communication efficiency, we consider a generic class of $\delta$-approximate compressors (Karimireddy et al., 2019), which includes \emph{sign}-based compression and top-$k$ sparsification. We present three settings in which compression can be employed and provide rates of convergence for smooth objectives. We show that, in the regime where $\delta$ is constant, our theoretical convergence rate matches that of \emph{DINGO} (Crane and Roosta, 2019), a state-of-the-art distributed second-order algorithm; in this regime, compression therefore comes for free. The full paper can be found at: https://tinyurl.com/ujnpt4c.
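For intuition, a compressor $C$ is $\delta$-approximate (Karimireddy et al., 2019) if $\|C(x) - x\|^2 \le (1-\delta)\|x\|^2$ for all $x$. The sketch below is a minimal illustration of the two members of this class named in the abstract, top-$k$ sparsification and scaled sign compression; the function names and NumPy implementation are ours for illustration, not the authors' code.

```python
import numpy as np

def top_k(x: np.ndarray, k: int) -> np.ndarray:
    """Keep the k largest-magnitude entries of x, zero out the rest.
    This is a delta-approximate compressor with delta = k / len(x)."""
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]  # indices of k largest |x_i|
    out[idx] = x[idx]
    return out

def scaled_sign(x: np.ndarray) -> np.ndarray:
    """Scaled sign compression: transmit sign(x) plus one scalar.
    This is delta-approximate with delta = ||x||_1^2 / (d * ||x||_2^2)."""
    d = x.size
    return (np.linalg.norm(x, 1) / d) * np.sign(x)

# Check the compressor property ||C(x) - x||^2 <= (1 - delta) ||x||^2
# on a random vector; the printed ratios should be at most 1.
rng = np.random.default_rng(0)
x = rng.standard_normal(1000)

k = 100
err_topk = np.linalg.norm(top_k(x, k) - x) ** 2
print(err_topk / ((1 - k / x.size) * np.linalg.norm(x) ** 2))

delta_sign = np.linalg.norm(x, 1) ** 2 / (x.size * np.linalg.norm(x) ** 2)
err_sign = np.linalg.norm(scaled_sign(x) - x) ** 2
print(err_sign / ((1 - delta_sign) * np.linalg.norm(x) ** 2))
```

For scaled sign the bound holds with equality, since $\|C(x)-x\|^2 = \|x\|^2 - \|x\|_1^2/d$; for top-$k$ it is strict whenever the entry magnitudes are not all equal.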