A few months ago, I wrote about how artificial intelligence was being introduced in the workplace.

At the ABA Annual Labor & Employment Conference last week, a whole panel discussion was devoted to the legal ramifications of using artificial intelligence — particularly in hiring decisions.

The speakers talked about the EEOC guidance that I discussed as well. But beyond that, they highlighted that the Federal Trade Commission has also been getting involved and has indicated its intent to more closely scrutinize AI tools that may adversely impact certain job candidates (and employees).

In fact, on August 11, 2022, the FTC issued its “Advance Notice of Proposed Rulemaking on commercial surveillance and lax data security practices, seeking public comment on whether new trade regulation rules are needed to protect people’s privacy and information,” according to the speakers. The notice asks a series of questions on the use of AI tools.

For employers, this means that the law and regulations regarding artificial intelligence are far from settled. Indeed, there is no good definition of what is (or is not) the “artificial intelligence” that is the subject of this concern.

The speakers highlighted one area that employers might want to think about when considering AI. AI works best when it is given a dataset to learn from. So if a company wanted to identify its best-performing employees and asked an AI tool to develop criteria that best matched those employees, the underlying dataset could be skewed inappropriately. For example, suppose the dataset was made up primarily of Ivy League-educated white males.

It is then much more likely that these same types of people would become the “preferred” job candidates, even though an employer cannot otherwise discriminate on the basis of gender or race. This could have a disparate impact on certain classes of candidates, including those with disabilities.
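To make the mechanics concrete, here is a minimal sketch in Python (using scikit-learn) of how that skew plays out. Everything here is hypothetical: the features, the data, and the “top performer” labels are invented for illustration, not drawn from any real screening tool.

```python
# A minimal, hypothetical sketch of the dataset-skew problem described above.
# The data, features, and labels are invented purely for illustration.
from sklearn.linear_model import LogisticRegression

# Each record is [attended_ivy_league, years_of_experience]; the label marks
# whether the employee was flagged as a "top performer" in the training set.
X = [
    [1, 5], [1, 7], [1, 4], [1, 6],   # flagged performers, all Ivy-educated
    [0, 6], [0, 8], [0, 5], [0, 7],   # comparable experience, not flagged
]
y = [1, 1, 1, 1, 0, 0, 0, 0]

model = LogisticRegression().fit(X, y)

# The model now scores candidates largely on the Ivy League feature, because
# that trait, not performance, is what separates the two groups above.
print(model.predict_proba([[0, 9]])[0][1])  # experienced non-Ivy: low score
print(model.predict_proba([[1, 3]])[0][1])  # less experienced Ivy: high score
```

Because the flagged and unflagged groups differ mainly in one trait, the model latches onto that trait as the “criterion”: exactly the kind of proxy for protected characteristics the speakers warned about.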

Employers should tread cautiously, no matter what the vendor may say.