Haihao (Sean) Lu
Assistant Professor, MIT
Verified email at mit.edu - Homepage
Title
Cited by
Year
Relatively smooth convex optimization by first-order methods, and applications
H Lu, RM Freund, Y Nesterov
SIAM Journal on Optimization 28 (1), 333-354, 2018
386 · 2018
The best of many worlds: Dual mirror descent for online allocation problems
SR Balseiro, H Lu, V Mirrokni
Operations Research 71 (1), 101-119, 2023
149* · 2023
Depth creates no bad local minima
H Lu, K Kawaguchi
arXiv preprint arXiv:1702.08580, 2017
125 · 2017
“Relative continuity” for non-Lipschitz nonsmooth convex optimization using stochastic (or deterministic) mirror descent
H Lu
INFORMS Journal on Optimization 1 (4), 288-303, 2019
79 · 2019
Ordered SGD: A new stochastic optimization framework for empirical risk minimization
K Kawaguchi, H Lu
International Conference on Artificial Intelligence and Statistics, 669-679, 2020
69 · 2020
Practical large-scale linear programming using primal-dual hybrid gradient
D Applegate, M Díaz, O Hinder, H Lu, M Lubin, B O'Donoghue, W Schudy
Advances in Neural Information Processing Systems 34, 20243-20257, 2021
67 · 2021
Regularized online allocation problems: Fairness and beyond
S Balseiro, H Lu, V Mirrokni
International Conference on Machine Learning, 630-639, 2021
49 · 2021
Faster first-order primal-dual methods for linear programming using restarts and sharpness
D Applegate, O Hinder, H Lu, M Lubin
Mathematical Programming 201 (1), 133-184, 2023
45 · 2023
Accelerating gradient boosting machines
H Lu, SP Karimireddy, N Ponomareva, V Mirrokni
International Conference on Artificial Intelligence and Statistics, 516-526, 2020
44 · 2020
The landscape of the proximal point method for nonconvex–nonconcave minimax optimization
B Grimmer, H Lu, P Worah, V Mirrokni
Mathematical Programming 201 (1), 373-407, 2023
41* · 2023
Randomized gradient boosting machine
H Lu, R Mazumder
SIAM Journal on Optimization 30 (4), 2780-2808, 2020
38 · 2020
Accelerating greedy coordinate descent methods
H Lu, R Freund, V Mirrokni
International Conference on Machine Learning, 3257-3266, 2018
36 · 2018
New computational guarantees for solving convex optimization problems with first order methods, via a function growth condition measure
RM Freund, H Lu
Mathematical Programming 170, 445-477, 2018
35 · 2018
An O(s^r)-resolution ODE framework for discrete-time optimization algorithms and applications to the linear convergence of minimax problems
H Lu
Mathematical Programming 194, 1061-1112, 2022
34* · 2022
Generalized stochastic frank–wolfe algorithm with stochastic “substitute” gradient for structured convex optimization
H Lu, RM Freund
Mathematical Programming 187 (1), 317-349, 2021
34 · 2021
Approximate leave-one-out for fast parameter tuning in high dimensions
S Wang, W Zhou, H Lu, A Maleki, V Mirrokni
International Conference on Machine Learning, 5228-5237, 2018
30 · 2018
Infeasibility detection with primal-dual hybrid gradient for large-scale linear programming
D Applegate, M Díaz, H Lu, M Lubin
SIAM Journal on Optimization 34 (1), 459-484, 2024
24 · 2024
Approximate leave-one-out for high-dimensional non-differentiable learning problems
S Wang, W Zhou, A Maleki, H Lu, V Mirrokni
arXiv preprint arXiv:1810.02716, 2018
20 · 2018
On the linear convergence of extragradient methods for nonconvex–nonconcave minimax problems
S Hajizadeh, H Lu, B Grimmer
INFORMS Journal on Optimization 6 (1), 19-31, 2024
9 · 2024
cuPDLP.jl: A GPU implementation of restarted primal-dual hybrid gradient for linear programming in Julia
H Lu, J Yang
arXiv preprint arXiv:2311.12180, 2023
9 · 2023
Articles 1–20