Nina Grgić-Hlača
A Unified Approach to Quantifying Algorithmic Unfairness: Measuring Individual & Group Unfairness via Inequality Indices
T Speicher, H Heidari, N Grgić-Hlača, KP Gummadi, A Singla, A Weller, ...
Proceedings of the 24th ACM SIGKDD International Conference on Knowledge …, 2018
Human Perceptions of Fairness in Algorithmic Decision Making: A Case Study of Criminal Risk Prediction
N Grgić-Hlača, EM Redmiles, KP Gummadi, A Weller
Proceedings of the 2018 WWW Conference, 2018
Beyond Distributive Fairness in Algorithmic Decision Making: Feature Selection for Procedurally Fair Learning
N Grgić-Hlača, MB Zafar, KP Gummadi, A Weller
Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence …, 2018
The Case for Process Fairness in Learning: Feature Selection for Fair Decision Making
N Grgić-Hlača, MB Zafar, KP Gummadi, A Weller
NIPS Symposium on Machine Learning and the Law, 2016
Human Decision Making with Machine Assistance: An Experiment on Bailing and Jailing
N Grgić-Hlača, C Engel, KP Gummadi
CSCW 2019, 2019
An Empirical Study on Learning Fairness Metrics for COMPAS Data with Human Supervision
H Wang, N Grgić-Hlača, P Lahoti, KP Gummadi, A Weller
arXiv preprint arXiv:1910.10255, 2019
Human-Centered Approaches to Fair and Responsible AI
MK Lee, N Grgić-Hlača, MC Tschantz, R Binns, A Weller, M Carney, ...
Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing …, 2020
On Fairness, Diversity and Randomness in Algorithmic Decision Making
N Grgić-Hlača, MB Zafar, KP Gummadi, A Weller
arXiv preprint arXiv:1706.10208, 2017
Dimensions of diversity in human perceptions of algorithmic fairness
N Grgić-Hlača, G Lima, A Weller, EM Redmiles
arXiv preprint arXiv:2005.00808, 2020
Human Perceptions on Moral Responsibility of AI: A Case Study in AI-Assisted Bail Decision-Making
G Lima, N Grgić-Hlača, M Cha
Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems …, 2021
“Look! It’s a Computer Program! It’s an Algorithm! It’s AI!”: Does Terminology Affect Human Perceptions and Evaluations of Algorithmic Decision-Making Systems?
M Langer, T Hunsicker, T Feldkamp, CJ König, N Grgić-Hlača
Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems …, 2022
Machine Advice with a Warning about Machine Limitations: Experimentally Testing the Solution Mandated by the Wisconsin Supreme Court
C Engel, N Grgić-Hlača
Journal of Legal Analysis 13 (1), 284-340, 2021
The Conflict Between Explainable and Accountable Decision-Making Algorithms
G Lima, N Grgić-Hlača, JK Jeong, M Cha
2022 ACM Conference on Fairness, Accountability, and Transparency, 2103-2113, 2022
Taking Advice from (Dis)Similar Machines: The Impact of Human-Machine Similarity on Machine-Assisted Decision-Making
N Grgić-Hlača, C Castelluccia, KP Gummadi
Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 10 …, 2022
Automatic Item Generation for Elementary Logic Quizzes via Markov Logic Networks
D Lauc, N Grgić-Hlača, S Skansi
User-Centered Design Strategies for Massive Open Online Courses (MOOCs), 177-186, 2016
Exercises and Solutions-Logic I
D Lauc, N Grgić-Hlača
Ibis grafika, 2013