Researchers from Boston University and Google Research have introduced a new online learning algorithm that achieves near-optimal regret without requiring bounds on the gradient norms. The result appears in the 2024 paper "Fully Unconstrained Online Learning" by A. Cutkosky and Z. Mhammedi. Separately, a 2024 paper by S. Duffield, K. Donatella, J. Chiu, P. Klett, and D. Simpson of Normal Computing presents scalable Bayesian learning with their posteriors library, highlighting benefits such as improved generalization, online learning, and uncertainty decomposition.
[LG] Scalable Bayesian Learning with posteriors S Duffield, K Donatella, J Chiu, P Klett, D Simpson [Normal Computing] (2024) https://t.co/fJniJUfXiR - Bayesian learning provides benefits like improved generalization, online learning, and uncertainty decomposition compared to… https://t.co/WzBAGJ4Buq
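The post above truncates the abstract, so as one concrete illustration of what "scalable Bayesian learning" can mean in practice, the sketch below runs stochastic gradient Langevin dynamics (SGLD), a standard minibatch-friendly posterior sampler, over the weights of a toy linear model. The model, data, and hyperparameters are hypothetical stand-ins; nothing here is taken from the paper or from the posteriors library's API.

```python
# Illustrative sketch only: SGLD over the weights of a small regression model.
# Model, data, and step size are hypothetical; this is not the posteriors API.
import torch

torch.manual_seed(0)
X = torch.randn(256, 3)                                   # toy features
y = X @ torch.tensor([1.0, -2.0, 0.5]) + 0.1 * torch.randn(256)

w = torch.zeros(3, requires_grad=True)    # parameters we want a posterior over
step = 1e-3
n_data, batch = X.shape[0], 32
samples = []

for t in range(2000):
    idx = torch.randint(0, n_data, (batch,))              # minibatch keeps each step cheap
    # Unnormalized log posterior: Gaussian likelihood + standard normal prior,
    # with the likelihood rescaled so the minibatch estimates the full-data sum.
    resid = y[idx] - X[idx] @ w
    log_lik = -0.5 * (n_data / batch) * (resid ** 2).sum()
    log_prior = -0.5 * (w ** 2).sum()
    log_post = log_lik + log_prior

    grad, = torch.autograd.grad(log_post, w)
    with torch.no_grad():
        # SGLD update: gradient ascent on the log posterior plus injected Gaussian noise.
        w += 0.5 * step * grad + (step ** 0.5) * torch.randn_like(w)
    if t > 1000 and t % 10 == 0:                          # keep samples after burn-in
        samples.append(w.detach().clone())

posterior_mean = torch.stack(samples).mean(0)
posterior_std = torch.stack(samples).std(0)               # parameter uncertainty from samples
print(posterior_mean, posterior_std)
```

The per-parameter standard deviations at the end are one simple example of the uncertainty information that Bayesian treatments of model weights expose, which point estimates do not.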
Is In-Context Learning in Large Language Models Bayesian? A Martingale Perspective. https://t.co/vSlSfUrYzj
Scalable Bayesian Learning with posteriors. https://t.co/UYbbflIYM9
[LG] Fully Unconstrained Online Learning A Cutkosky, Z Mhammedi [Boston University & Google Research] (2024) https://t.co/3WvcF1T2yi - The paper provides a new online learning algorithm that obtains near-optimal regret without requiring bounds on the gradient norms or the… https://t.co/J0j6k3pnIW
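The abstract is cut off above, so to make the regret objective concrete, here is a minimal sketch of standard online gradient descent on linear losses, with the regret against a fixed comparator tracked explicitly. This textbook baseline needs a step size tuned to a known gradient-norm bound, which is exactly the kind of prior knowledge the unconstrained setting removes; the algorithm below is not the one from the paper, and the losses and comparator are hypothetical.

```python
# Minimal online-learning sketch: online gradient descent on linear losses,
# tracking cumulative regret against a fixed comparator u. Textbook baseline,
# not the paper's algorithm; losses and the comparator are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
T, d = 1000, 5
u = np.ones(d)                  # fixed comparator we measure regret against

w = np.zeros(d)
eta = 1.0 / np.sqrt(T)          # step size presumes bounded gradients -- the
                                # assumption unconstrained methods aim to drop
regret = 0.0

for t in range(T):
    g = rng.normal(size=d)      # linear loss <g, .> revealed at round t
    regret += g @ w - g @ u     # learner's loss minus comparator's loss
    w = w - eta * g             # online gradient descent update

print(f"cumulative regret after {T} rounds: {regret:.2f}")
```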