Ghada Sokar
Title
Cited by
Year
SpaceNet: Make Free Space For Continual Learning
G Sokar, DC Mocanu, M Pechenizkiy
Neurocomputing (Elsevier), 2020
76 · 2020
The Dormant Neuron Phenomenon in Deep Reinforcement Learning
G Sokar, R Agarwal, PS Castro, U Evci
ICML 2023, International Conference on Machine Learning (Oral), 2023
53 · 2023
Deep Ensembling with No Overhead for either Training or Testing: The All-Round Blessings of Dynamic Sparsity
S Liu, T Chen, Z Atashgahi, X Chen, G Sokar, E Mocanu, M Pechenizkiy, ...
ICLR 2022. The Tenth International Conference on Learning Representations, 2021
52 · 2021
Dynamic Sparse Training for Deep Reinforcement Learning
G Sokar, E Mocanu, DC Mocanu, M Pechenizkiy, P Stone
IJCAI-ECAI 2022. The 31st International Joint Conference on Artificial …, 2021
41 · 2021
Topological Insights in Sparse Neural Networks
S Liu, T Van der Lee, A Yaman, Z Atashgahi, D Ferraro, G Sokar, ...
ECML PKDD 2020, 2020
35* · 2020
Quick and robust feature selection: the strength of energy-efficient sparse training for autoencoders
Z Atashgahi, G Sokar, T van der Lee, E Mocanu, DC Mocanu, R Veldhuis, ...
Machine Learning, 1-38, 2022
22 · 2022
A generic OCR using deep siamese convolution neural networks
G Sokar, EE Hemayed, M Rehan
2018 IEEE 9th Annual Information Technology, Electronics and Mobile …, 2018
20 · 2018
Where to Pay Attention in Sparse Training for Feature Selection?
G Sokar, Z Atashgahi, M Pechenizkiy, DC Mocanu
NeurIPS 2022, 36th Annual Conference on Neural Information Processing Systems, 2022
17 · 2022
Self-Attention Meta-Learner for Continual Learning
G Sokar, DC Mocanu, M Pechenizkiy
AAMAS 2021. 20th International Conference on Autonomous Agents and …, 2021
16 · 2021
Learning Invariant Representation for Continual Learning
G Sokar, DC Mocanu, M Pechenizkiy
AAAI Workshop on Meta-Learning for Computer Vision (AAAI-2021), 2020
15 · 2020
Avoiding Forgetting and Allowing Forward Transfer in Continual Learning via Sparse Networks
G Sokar, DC Mocanu, M Pechenizkiy
ECML PKDD 2022, 2021
12* · 2021
Automatic Noise Filtering with Dynamic Sparse Training in Deep Reinforcement Learning
B Grooten, G Sokar, S Dohare, E Mocanu, ME Taylor, M Pechenizkiy, ...
AAMAS 2023. 22nd International Conference on Autonomous Agents and …, 2023
11 · 2023
Mixtures of Experts Unlock Parameter Scaling for Deep RL
J Obando-Ceron, G Sokar, T Willi, C Lyle, J Farebrother, J Foerster, ...
arXiv preprint arXiv:2402.08609, 2024
8 · 2024
Continual learning with dynamic sparse training: Exploring algorithms for effective model updates
MO Yildirim, EC Gok, G Sokar, DC Mocanu, J Vanschoren
Conference on Parsimony and Learning, 94-107, 2024
3 · 2024
Dynamic sparse training for deep reinforcement learning (poster)
GAZN Sokar, E Mocanu, DC Mocanu, M Pechenizkiy, P Stone
Sparsity in Neural Networks: Advancing Understanding and Practice 2021, 2021
3 · 2021
FreeTickets: Accurate, Robust and Efficient Deep Ensemble by Training with Dynamic Sparsity
S Liu, T Chen, Z Atashgahi, X Chen, G Sokar, E Mocanu, M Pechenizkiy, ...
ICLR 2022. The Tenth International Conference on Learning Representations, 2021
3 · 2021
Mixtures of Experts Unlock Parameter Scaling for Deep RL
JSO Ceron, G Sokar, T Willi, C Lyle, J Farebrother, JN Foerster, ...
Forty-first International Conference on Machine Learning, 2024
2 · 2024
Supervised Feature Selection via Ensemble Gradient Information from Sparse Neural Networks
K Liu, Z Atashgahi, G Sokar, M Pechenizkiy, DC Mocanu
International Conference on Artificial Intelligence and Statistics, 3952-3960, 2024
1 · 2024
Continual Lifelong Learning for Intelligent Agents
G Sokar
IJCAI 2021. International Joint Conferences on Artificial Intelligence (IJCAI), 2021
1 · 2021