Ghada Sokar
Google DeepMind
Verified email at google.com
Title
Cited by
Year
SpaceNet: Make Free Space For Continual Learning
G Sokar, DC Mocanu, M Pechenizkiy
Elsevier Neurocomputing Journal, 2020
68 · 2020
Deep Ensembling with No Overhead for either Training or Testing: The All-Round Blessings of Dynamic Sparsity
S Liu, T Chen, Z Atashgahi, X Chen, G Sokar, E Mocanu, M Pechenizkiy, ...
ICLR 2022. The Tenth International Conference on Learning Representations, 2021
44 · 2021
Topological Insights in Sparse Neural Networks
S Liu, T Van der Lee, A Yaman, Z Atashgahi, D Ferraro, G Sokar, ...
ECML PKDD 2020, 2020
33* · 2020
The Dormant Neuron Phenomenon in Deep Reinforcement Learning
G Sokar, R Agarwal, PS Castro, U Evci
ICML 2023, International Conference on Machine Learning (Oral), 2023
31 · 2023
Dynamic Sparse Training for Deep Reinforcement Learning
G Sokar, E Mocanu, DC Mocanu, M Pechenizkiy, P Stone
IJCAI-ECAI 2022. The 31st International Joint Conference on Artificial …, 2021
31 · 2021
Quick and robust feature selection: the strength of energy-efficient sparse training for autoencoders
Z Atashgahi, G Sokar, T van der Lee, E Mocanu, DC Mocanu, R Veldhuis, ...
Machine Learning, 1-38, 2022
20 · 2022
A generic OCR using deep siamese convolution neural networks
G Sokar, EE Hemayed, M Rehan
2018 IEEE 9th Annual Information Technology, Electronics and Mobile …, 2018
20 · 2018
Self-Attention Meta-Learner for Continual Learning
G Sokar, DC Mocanu, M Pechenizkiy
AAMAS 2021. 20th International Conference on Autonomous Agents and …, 2021
14 · 2021
Learning Invariant Representation for Continual Learning
G Sokar, DC Mocanu, M Pechenizkiy
AAAI Workshop on Meta-Learning for Computer Vision (AAAI-2021), 2020
14 · 2020
Where to Pay Attention in Sparse Training for Feature Selection?
G Sokar, Z Atashgahi, M Pechenizkiy, DC Mocanu
NeurIPS 2022, 36th Annual Conference on Neural Information Processing Systems, 2022
12 · 2022
Avoiding Forgetting and Allowing Forward Transfer in Continual Learning via Sparse Networks
G Sokar, DC Mocanu, M Pechenizkiy
ECML PKDD 2022, 2021
9* · 2021
Automatic Noise Filtering with Dynamic Sparse Training in Deep Reinforcement Learning
B Grooten, G Sokar, S Dohare, E Mocanu, ME Taylor, M Pechenizkiy, ...
AAMAS 2023. 22nd International Conference on Autonomous Agents and …, 2023
8 · 2023
FreeTickets: Accurate, Robust and Efficient Deep Ensemble by Training with Dynamic Sparsity
S Liu, T Chen, Z Atashgahi, X Chen, G Sokar, E Mocanu, M Pechenizkiy, ...
ICLR 2022. The Tenth International Conference on Learning Representations, 2021
3 · 2021
Mixtures of Experts Unlock Parameter Scaling for Deep RL
J Obando-Ceron, G Sokar, T Willi, C Lyle, J Farebrother, J Foerster, ...
arXiv preprint arXiv:2402.08609, 2024
2 · 2024
Continual Learning with Dynamic Sparse Training: Exploring Algorithms for Effective Model Updates
MO Yildirim, EC Gok, G Sokar, DC Mocanu, J Vanschoren
Conference on Parsimony and Learning, 94-107, 2024
2 · 2024
Dynamic Sparse Training for Deep Reinforcement Learning (Poster)
GAZN Sokar, E Mocanu, DC Mocanu, M Pechenizkiy, P Stone
Sparsity in Neural Networks: Advancing Understanding and Practice 2021, 2021
2 · 2021
Continual Lifelong Learning for Intelligent Agents
G Sokar
IJCAI 2021. International Joint Conferences on Artificial Intelligence (IJCAI), 2021
1 · 2021
Supervised Feature Selection via Ensemble Gradient Information from Sparse Neural Networks
K Liu, Z Atashgahi, G Sokar, M Pechenizkiy, DC Mocanu
International Conference on Artificial Intelligence and Statistics, 3952-3960, 2024
2024
Learning Continually Under Changing Data Distributions
G Sokar
2023
Articles 1–20