AI-infused hiring programs have drawn scrutiny, most notably over whether they end up reproducing biases present in the data they’re trained on.

  • Odusei@lemmy.worldOP
    1 year ago

I figure you’d audit it by examining the results; if bias isn’t detectable in the results, then I’d argue that’s at the very least still better than the human-based systems we’ve been relying on until now.

      • Odusei@lemmy.worldOP
        1 year ago

        When the demographics of the output are roughly equivalent to the demographics of the input. If ten men and fifty women apply, and eight men and two women are hired, that is worth investigating.
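The check described above can be sketched in a few lines: compare each group’s hire rate and flag any group whose rate falls far below the best-treated group’s. This is a minimal sketch, not a full audit; the function names, the threshold, and the use of the four-fifths heuristic (a screening rule from US employment-discrimination practice) are my assumptions, and the numbers are the commenter’s hypothetical.

```python
def selection_rates(applied, hired):
    """Hire rate per group: hired[g] / applied[g]."""
    return {g: hired[g] / applied[g] for g in applied}

def flag_disparity(applied, hired, threshold=0.8):
    """Flag any group whose selection rate is below `threshold` times
    the highest group's rate (the 'four-fifths rule' heuristic).
    Returns {group: True if flagged}."""
    rates = selection_rates(applied, hired)
    top = max(rates.values())
    return {g: r / top < threshold for g, r in rates.items()}

# The commenter's hypothetical: 10 men and 50 women apply,
# 8 men and 2 women are hired.
applied = {"men": 10, "women": 50}
hired = {"men": 8, "women": 2}

print(selection_rates(applied, hired))  # men: 0.8, women: 0.04
print(flag_disparity(applied, hired))   # women flagged for investigation
```

On these numbers the women’s hire rate (4%) is 5% of the men’s (80%), far under the 80% heuristic, so the disparity is flagged exactly as the comment suggests it should be.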

      • BraveSirZaphod@kbin.social
        1 year ago

        Not inherently, but things can be tested.

        If you have a bunch of otherwise identical résumés, with the only difference being the racial connotation of the name, and the AI gives significantly different results, there’s an identifiable problem.
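The résumé test described above is a counterfactual audit: hold everything fixed, vary only the name, and measure how much the score moves. A minimal sketch follows; `audit_name_swap`, the tolerance, the template, and the names are all my illustrative assumptions (the name pairing echoes the style of published résumé audit studies), and `toy_model` is a deliberately broken stand-in so the audit has something to catch — a real audit would call the hiring model under test.

```python
def audit_name_swap(score_resume, resume_template, names, tolerance=0.05):
    """Score identical résumés that differ only in the candidate name.
    A score spread larger than `tolerance` indicates the model is
    keying on the name itself."""
    scores = {name: score_resume(resume_template.format(name=name))
              for name in names}
    spread = max(scores.values()) - min(scores.values())
    return scores, spread > tolerance

# Toy stand-in model that (wrongly) keys on the length of the name
# line, purely to demonstrate the audit firing.
def toy_model(resume_text):
    first_line = resume_text.splitlines()[0]
    return 1.0 - 0.01 * len(first_line)

template = "{name}\n10 years of experience in logistics\nB.A., State University"
scores, biased = audit_name_swap(
    toy_model, template, ["Emily Walsh", "Lakisha Washington"])
print(biased)  # True: otherwise identical résumés get different scores
```

Because everything but the name is held constant, any significant score gap is attributable to the name alone — which is exactly the “identifiable problem” the comment describes.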