Mart van Baalen
Machine Learning Researcher, Qualcomm AI Research
Verified email at qti.qualcomm.com
Title · Cited by · Year
Data-free quantization through weight equalization and bias correction
M Nagel, M Baalen, T Blankevoort, M Welling
Proceedings of the IEEE/CVF International Conference on Computer Vision …, 2019
445 · 2019
Up or down? adaptive rounding for post-training quantization
M Nagel, RA Amjad, M Van Baalen, C Louizos, T Blankevoort
International Conference on Machine Learning, 7197-7206, 2020
300 · 2020
A white paper on neural network quantization
M Nagel, M Fournarakis, RA Amjad, Y Bondarenko, M Van Baalen, ...
arXiv preprint arXiv:2106.08295, 2021
230 · 2021
Bayesian bits: Unifying quantization and pruning
M Van Baalen, C Louizos, M Nagel, RA Amjad, Y Wang, T Blankevoort, ...
Advances in neural information processing systems 33, 5741-5752, 2020
88 · 2020
Gradient Regularization for Quantization Robustness
M Alizadeh, A Behboodi, M van Baalen, C Louizos, T Blankevoort, ...
arXiv preprint arXiv:2002.07520, 2020
55 · 2020
Fp8 quantization: The power of the exponent
A Kuzmin, M Van Baalen, Y Ren, M Nagel, J Peters, T Blankevoort
Advances in Neural Information Processing Systems 35, 14651-14662, 2022
22 · 2022
A white paper on neural network quantization
M Nagel, M Fournarakis, RA Amjad, Y Bondarenko, M van Baalen, ...
arXiv preprint arXiv:2106.08295, 2021
22 · 2021
Deep matrix factorization for recommendation
M van Baalen
Master's Thesis, Univ. of Amsterdam, Sep 30, 2016
19 · 2016
Cyclical pruning for sparse neural networks
S Srinivas, A Kuzmin, M Nagel, M van Baalen, A Skliar, T Blankevoort
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022
9 · 2022
Simulated quantization, real power savings
M van Baalen, B Kahne, E Mahurin, A Kuzmin, A Skliar, M Nagel, ...
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022
7 · 2022
Up or down? Adaptive rounding for post-training quantization
M Nagel, RA Amjad, M Van Baalen, C Louizos, T Blankevoort
CoRR abs/2004.10568, 2020
7 · 2020
FP8 versus INT8 for efficient deep learning inference
M van Baalen, A Kuzmin, SS Nair, Y Ren, E Mahurin, C Patel, ...
arXiv preprint arXiv:2303.17951, 2023
4 · 2023
A Practical Mixed Precision Algorithm for Post-Training Quantization
NP Pandey, M Nagel, M van Baalen, Y Huang, C Patel, T Blankevoort
arXiv preprint arXiv:2302.05397, 2023
4 · 2023
Quantized sparse weight decomposition for neural network compression
A Kuzmin, M van Baalen, M Nagel, A Behboodi
arXiv preprint arXiv:2207.11048, 2022
2 · 2022
Pruning vs Quantization: Which is Better?
A Kuzmin, M Nagel, M Van Baalen, A Behboodi, T Blankevoort
arXiv preprint arXiv:2307.02973, 2023
1 · 2023
A Practical Mixed Precision Algorithm for Post-Training Quantization
NP Pandey, M Nagel, M van Baalen, Y Huang, C Patel, ...
arXiv e-prints, arXiv:2302.05397, 2023
2023
QBitOpt: Fast and Accurate Bitwidth Reallocation during Training
J Peters, M Fournarakis, M Nagel, M van Baalen, T Blankevoort
Proceedings of the IEEE/CVF International Conference on Computer Vision …, 2023
2023
Quantizing Neural Networks for Low-Power Computer Vision
M Fournarakis, M Nagel, RA Amjad, Y Bondarenko, M van Baalen, ...
Low-Power Computer Vision, 235-272, 2022
2022
Quantized sparse PCA for neural network weight compression
A Kuzmin, M Van Baalen, M Nagel, A Behboodi
2021
Articles 1–19