Miro reverby

Most proposed algorithmic fairness techniques require access to demographic data in order to make performance comparisons and standardizations across groups; however, this data is largely unavailable in practice, hindering the widespread adoption of algorithmic fairness. Through this paper, we consider calls to collect more data on demographics to enable algorithmic fairness, and we challenge the notion that discrimination can be overcome with smart enough technical methods and sufficient data.
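
To make that dependence concrete, the sketch below computes per-group true positive rates, one of the group comparisons such techniques rely on. It is an illustrative example, not code from any system discussed here; the toy data and the function name are hypothetical. The point is that the demographic `group` column it requires is exactly the data that is so often unavailable.

```python
# Hypothetical illustration: a group-wise performance comparison of the
# kind most algorithmic fairness techniques depend on. Nothing here can
# be computed without a demographic (sensitive-attribute) column.
from collections import defaultdict

def true_positive_rate_by_group(y_true, y_pred, group):
    """Return the true positive rate for each demographic group."""
    positives = defaultdict(int)  # actual positives per group
    hits = defaultdict(int)       # correctly predicted positives per group
    for truth, pred, g in zip(y_true, y_pred, group):
        if truth == 1:
            positives[g] += 1
            if pred == 1:
                hits[g] += 1
    return {g: hits[g] / positives[g] for g in positives}

# Toy data; the `group` labels are the demographic data at issue.
y_true = [1, 0, 1, 1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 0, 0]
group  = ["a", "a", "a", "b", "b", "b", "a", "b"]

rates = true_positive_rate_by_group(y_true, y_pred, group)
gap = max(rates.values()) - min(rates.values())  # equal-opportunity-style gap
print(rates, gap)
```

Without the group labels, the disparity simply cannot be measured, which is the gap that calls for collecting more demographic data aim to close.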

We show how these techniques largely ignore broader questions of data governance and systemic oppression when categorizing individuals for the purpose of fairer algorithmic processing. In this work, we explore under what conditions demographic data should be collected and used to enable algorithmic fairness methods by characterizing a range of social risks to individuals and communities. For the risks to individuals, we consider the unique privacy risks of sensitive attributes, the possible harms of miscategorization and misrepresentation, and the use of sensitive data beyond data subjects' expectations. Looking more broadly, the risks to entire groups and communities include the expansion of surveillance infrastructure in the name of fairness, misrepresenting and mischaracterizing what it means to be part of a demographic group, and ceding the ability to define what constitutes biased or unfair treatment.

Towards this end, we assess privacy-focused methods of data collection and use, as well as participatory data governance structures, as proposals for more responsibly collecting demographic data. We argue that, by confronting these questions before and during the collection of demographic data, algorithmic fairness methods are more likely to actually mitigate harmful treatment disparities without reinforcing systems of oppression.
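
As one hypothetical illustration of what a privacy-focused collection method can look like (the mechanisms actually assessed in the work may differ), the sketch below uses classical randomized response: each respondent reports their true group only with some probability, which gives individuals plausible deniability while still letting aggregate group shares be estimated. The function names and parameters are assumptions made for this example.

```python
# Hypothetical sketch of randomized response for a demographic question.
# A respondent reports their true group with probability p, otherwise a
# uniformly random group; aggregate shares are then de-biased.
import random

def randomized_response(true_group, groups, p=0.75):
    """Report the true group with probability p, else a random group."""
    return true_group if random.random() < p else random.choice(groups)

def estimate_group_shares(reports, groups, p=0.75):
    """Invert the noise to estimate each group's true population share."""
    n = len(reports)
    uniform = 1.0 / len(groups)
    shares = {}
    for g in groups:
        observed = sum(1 for r in reports if r == g) / n
        # observed ≈ p * true_share + (1 - p) * uniform, so solve for true_share
        shares[g] = (observed - (1 - p) * uniform) / p
    return shares

groups = ["a", "b", "c"]
population = ["a"] * 600 + ["b"] * 300 + ["c"] * 100   # hypothetical true makeup
reports = [randomized_response(t, groups) for t in population]
print(estimate_group_shares(reports, groups))           # roughly {'a': 0.6, 'b': 0.3, 'c': 0.1}
```

The trade-off is the one the paragraph above gestures at: individual reports become noisy and deniable, while useful aggregate estimates remain recoverable.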