Niloofar Mireshghallah
Other names: Fatemeh Mireshghallah
Postdoctoral scholar, University of Washington
Verified email at cs.washington.edu - Homepage
Title | Cited by | Year
Privacy in deep learning: A survey
F Mireshghallah, M Taram, P Vepakomma, A Singh, R Raskar, ...
arXiv preprint arXiv:2004.12254, 2020
Cited by 128, 2020
What does it mean for a language model to preserve privacy?
H Brown, K Lee, F Mireshghallah, R Shokri, F Tramèr
Proceedings of the 2022 ACM Conference on Fairness, Accountability, and …, 2022
Cited by 113, 2022
Shredder: Learning noise distributions to protect inference privacy
F Mireshghallah, M Taram, P Ramrakhyani, A Jalali, D Tullsen, ...
Proceedings of the Twenty-Fifth International Conference on Architectural …, 2020
Cited by 99*, 2020
ReLeQ: an automatic reinforcement learning approach for deep quantization of neural networks
A Elthakeb, P Pilligundla, FS Mireshghallah, A Yazdanbakhsh, S Gao, ...
NeurIPS ML for Systems workshop, 2018, 2019
Cited by 94*, 2019
Neither private nor fair: Impact of data imbalance on utility and fairness in differential privacy
T Farrand, F Mireshghallah, S Singh, A Trask
Proceedings of the 2020 workshop on privacy-preserving machine learning in …, 2020
Cited by 88, 2020
Quantifying privacy risks of masked language models using membership inference attacks
F Mireshghallah, K Goyal, A Uniyal, T Berg-Kirkpatrick, R Shokri
arXiv preprint arXiv:2203.03929, 2022
Cited by 73, 2022
ReLeQ: A Reinforcement Learning Approach for Automatic Deep Quantization of Neural Networks
AT Elthakeb, P Pilligundla, F Mireshghallah, A Yazdanbakhsh, ...
IEEE Micro 40 (5), 37-45, 2020
Cited by 61*, 2020
Benchmarking differential privacy and federated learning for bert models
P Basu, TS Roy, R Naidu, Z Muftuoglu, S Singh, F Mireshghallah
arXiv preprint arXiv:2106.13973, 2021
Cited by 53, 2021
An empirical analysis of memorization in fine-tuned autoregressive language models
F Mireshghallah, A Uniyal, T Wang, DK Evans, T Berg-Kirkpatrick
Proceedings of the 2022 Conference on Empirical Methods in Natural Language …, 2022
Cited by 52*, 2022
Not all features are equal: Discovering essential features for preserving prediction privacy
F Mireshghallah, M Taram, A Jalali, ATT Elthakeb, D Tullsen, ...
Proceedings of the Web Conference 2021, 669-680, 2021
Cited by 51*, 2021
Mix and match: Learning-free controllable text generation using energy language models
F Mireshghallah, K Goyal, T Berg-Kirkpatrick
arXiv preprint arXiv:2203.13299, 2022
Cited by 50, 2022
Flute: A scalable, extensible framework for high-performance federated learning simulations
D Dimitriadis, Mirian Hipolito Garcia, Andre Manoel, Daniel Madrigal Diaz, Fatemehsadat ...
arXiv preprint arXiv:2203.13789, 2022
Cited by 41*, 2022
Membership inference attacks against language models via neighbourhood comparison
J Mattern, F Mireshghallah, Z Jin, B Schölkopf, M Sachan, ...
arXiv preprint arXiv:2305.18462, 2023
Cited by 39, 2023
DP-SGD vs PATE: which has less disparate impact on model accuracy?
A Uniyal, R Naidu, S Kotti, S Singh, PJ Kenfack, F Mireshghallah, A Trask
arXiv preprint arXiv:2106.12576, 2021
Cited by 31, 2021
Smaller language models are better black-box machine-generated text detectors
F Mireshghallah, J Mattern, S Gao, R Shokri, T Berg-Kirkpatrick
arXiv preprint arXiv:2305.09859, 2023
Cited by 28, 2023
UserIdentifier: implicit user representations for simple and effective personalized sentiment analysis
F Mireshghallah, V Shrivastava, M Shokouhi, T Berg-Kirkpatrick, R Sim, ...
arXiv preprint arXiv:2110.00135, 2021
Cited by 28, 2021
Privacy Regularization: Joint Privacy-Utility Optimization in Language Models
F Mireshghallah, HA Inan, M Hasegawa, V Rühle, T Berg-Kirkpatrick, ...
Proceedings of the 2021 Conference of the North American Chapter of the …, 2021
Cited by 27, 2021
Energy-efficient permanent fault tolerance in hard real-time systems
FS Mireshghallah, M Bakhshalipour, M Sadrosadati, H Sarbazi-Azad
IEEE Transactions on Computers 68 (10), 1539-1545, 2019
Cited by 20, 2019
U-Noise: Learnable Noise Masks for Interpretable Image Segmentation
T Koker, F Mireshghallah, T Titcombe, G Kaissis
arXiv preprint arXiv:2101.05791, 2021
Cited by 17, 2021
Can LLMs keep a secret? Testing privacy implications of language models via contextual integrity theory
N Mireshghallah, H Kim, X Zhou, Y Tsvetkov, M Sap, R Shokri, Y Choi
arXiv preprint arXiv:2310.17884, 2023
Cited by 13, 2023
Articles 1–20