A featured session at the recent TAtechNorthAmerica Digital conference was the Point-Counterpoint Discussion on “The Ethics of Artificial Intelligence in Talent Acquisition.” One of the discussants – Aaron Crews, the Chief Data Analytics Officer at Littler Mendelson, P.C. – is interviewed here.
Peter Weddle, CEO of TAtech (PW): Greetings, Aaron. We’re delighted that you could join us for this virtual interview. I see from your LinkedIn profile that you’re both an attorney, having served as Senior Associate General Counsel and Head of eDiscovery at Walmart, and a technology expert, with a stint as General Counsel and VP of Strategy at Text IQ. That would seem to give you a unique vantage point on the questions of ethics and AI in talent acquisition, so let’s get right to our questions.
PW: Can you give us a brief history of how the topic of ethical AI has evolved in general and, of course, with regard to recruiting?
Aaron Crews (AC): Initially, the conversation around ethical AI in recruiting was limited to whether it was possible to build machines that reliably improve the recruiting process.
That question has largely been answered, and the answer is mostly yes. Obviously, some technologies work better than others, and some are particularly dubious—like using facial recognition to indicate whether an interviewee is being truthful. But, generally speaking, we have proven that technology can improve the recruiting process by allowing recruiters and hiring managers to rapidly sift through large numbers of resumes to identify and concentrate on individuals who are “a likely match” for a given job.

There are vastly differing opinions regarding what “likely match” means. For instance, is a “likely match” someone with 100% of the required job skills? 80%? Do you consider the applicant’s temperament or cultural fit in the organization, and, if so, how are you capturing and quantifying that? There are numerous variations here, all arguing for a different definition of “likely match,” and the answer is often organizationally and use-case dependent.

There are lots of these “does it work” kinds of questions, and they are really the first-order ethics conversation. If a technology doesn’t work reliably, the conversation is over. But, in general, the “does it work” questions have been answered in the affirmative.
More recently, the conversation has shifted from general efficacy to “transparency and bias.” The idea here is that technologies that implicate the kinds of basic needs that sit at the base of Maslow’s pyramid—things like the ability to get a job so you can purchase water, food, and shelter, credit to buy homes or transportation, sentencing guidelines or parole from incarceration, etc.—should be explainable to those affected by their output. For instance, it is generally insufficient to respond to the question “Why wasn’t I hired for that position I applied for?” with “We don’t know, the computer said no.”

So the current ethics conversation is partly centered on what kinds of technology are appropriate to specific use cases. For instance, with my Netflix account, I don’t really need to understand why it keeps recommending that I watch My Little Pony, but I should be able to explain what data elements a hiring algorithm used to generate an output, and how those data elements were weighted, so I can understand why I was ranked number 22 out of 100 applicants for a given employment opportunity instead of number 2. This ethics conversation regarding transparency is also a risk conversation: if you are an organization deploying AI-driven tools for recruiting and hiring, you need to be able to explain how the machine works if you get sued for something related to that use.
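The kind of explainability Crews describes—knowing which data elements an algorithm used and how they were weighted—can be illustrated with a minimal sketch of a linearly weighted ranking. The features, weights, and candidates below are hypothetical, not taken from any real hiring product:

```python
# Minimal sketch of an explainable, linearly weighted candidate ranking.
# All feature names, weights, and candidate values are hypothetical.

FEATURE_WEIGHTS = {
    "skills_match": 0.5,      # fraction of required skills present (0-1)
    "years_experience": 0.3,  # experience, normalized to 0-1
    "assessment_score": 0.2,  # test score, normalized to 0-1
}

def score(candidate: dict) -> float:
    """Weighted sum of normalized feature values."""
    return sum(FEATURE_WEIGHTS[f] * candidate[f] for f in FEATURE_WEIGHTS)

def explain(candidate: dict) -> dict:
    """Per-feature contribution to the total score, so an applicant can be
    told exactly which factors drove their ranking."""
    return {f: round(FEATURE_WEIGHTS[f] * candidate[f], 3) for f in FEATURE_WEIGHTS}

applicants = {
    "A": {"skills_match": 0.9, "years_experience": 0.8, "assessment_score": 0.7},
    "B": {"skills_match": 0.6, "years_experience": 0.4, "assessment_score": 0.9},
}
ranked = sorted(applicants, key=lambda name: score(applicants[name]), reverse=True)
```

Because every contribution is an explicit weight times a known input, the `explain` output answers “why was I ranked where I was?”—the property an opaque neural ranker cannot offer.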
The conversation around transparency leads directly into conversations about bias. Because algorithms are “garbage in/garbage out” functions, the quality of their insights and decisions is entirely the product of the data used to train them (both initially and over time). Unbalanced or incomplete training sets can lead to impermissibly biased outputs. Similarly, the inclusion of data points that provide little information gain but carry a significant risk of bias is also problematic. Like the conversation around the appropriate level of transparency, the conversation about what data is appropriate for training an algorithm is use-case specific, but being discerning and intentional about this on the front end can produce algorithms that function in more ethical ways.
Interestingly, the conversation about ethics in AI is now transitioning to “human in the loop”: what is the appropriate level of automation and machine action on a given output, versus having a human review the output and make decisions informed by it? In my personal opinion, this is the space where the ethical rubber meets the road, because it subsumes and then goes beyond the issues of efficacy, transparency, and bias.
PW: Why are we talking about ethics in AI when we don’t apply such a standard to any other talent acquisition technology?
AC: That is a really good question, and a bit complicated. Here is my best guess:
1) I think it is partly because of popular culture’s treatment of AI over the years, and some of the marketing hype around these technologies. There is a fear of, or belief in, a “super intelligent AI” that people often refer to as “the Singularity.” Many people believe this Singularity is around the proverbial corner. As a result, they believe we need ethics for AI generally in order to ensure our new robot overlords don’t destroy or enslave us. For the record, I’m in the camp that says self-aware super intelligent (or even regularly intelligent) AI is exceedingly unlikely. At the end of the day, these are algorithms running statistical analysis over large data sets, nothing more. Thus, this technology is neither artificial nor intelligent in the commonly used sense of those terms. This push for a more general “ethics for AI” then bleeds into specific domains, like talent acquisition technology.
2) Because of AI’s treatment in popular culture, the press, and the marketing of some of these products, people have a tendency to defer to the “decisions” these machines “make.” As a result, people can be less critical or probing of the outputs of these technologies than they otherwise might be if the recommendation came from a more traditional source, like another person. If people uncritically accept and implement the outputs of these machines, then an ethics canon for how to build and use them is necessary. That is particularly true because it has been proven that impermissible bias can be built into these algorithms or develop over time through use (if the algorithm continually “learns” from data that is fed into it), and that bias can then be “systematized” via implementation.
3) With humans, we can ask for the reasons for, or the thought process behind, decisions. With opaque technologies, like artificial neural networks, that is a much more difficult process—sometimes it is impossible. So, another reason for the ethics-for-AI-in-recruiting conversation may be familiarity bias, the phenomenon where people opt for more familiar options even though these may produce less favorable outcomes than the alternatives. People making these decisions are our familiar starting point, so we don’t consider that there is an ethics conversation to be had there. Because AI is new, we think we need to put more significant guardrails on it. If you think about it, that is kind of odd, since transparent, explainable technology leaves a trail to a decision that can be investigated and understood. It is much more difficult to crack open a person’s head and see what they were actually thinking when they made a decision.
PW: How do you see the ethics issue in talent acquisition evolving over the next 12-24 months?
AC: I think it is going to focus on the three issues above—transparency/explainability, bias detection and mitigation, and human in the loop. I think we are reaching a general consensus on three points: that opaque technologies are probably inappropriate for use in this space (or at least carry a lot of risk); that understood bias is a problem, but one that can be overcome with well-thought-out training data and routinized disparate impact testing (bias we don’t yet understand or recognize is an entirely different issue—how do you know what you don’t know?); and that humans should be reviewing the output of these systems and using it as an additional source of information in the selection process, as opposed to just going with the person the machine ranks as #1.
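The routinized disparate impact testing Crews mentions is often operationalized in U.S. practice with the four-fifths (80%) rule from the EEOC’s Uniform Guidelines: a selection rate for any group below 80% of the highest group’s rate is generally treated as evidence of adverse impact. A minimal sketch of that check (the group names and counts here are hypothetical):

```python
# Minimal four-fifths (80%) rule check for disparate impact.
# Group names and applicant counts are hypothetical illustration data.

def selection_rates(outcomes: dict) -> dict:
    """outcomes maps group -> (number selected, total applicants)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_check(outcomes: dict, threshold: float = 0.8) -> dict:
    """Return True for each group whose selection rate is at least
    `threshold` times the highest group's rate; False flags possible
    adverse impact under the four-fifths rule."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: (r / top) >= threshold for g, r in rates.items()}

outcomes = {
    "group_x": (50, 100),  # 50% selection rate (highest)
    "group_y": (30, 100),  # 30% rate -> ratio 0.6, below 0.8, flagged
}
result = four_fifths_check(outcomes)
```

A flagged group is not proof of impermissible bias on its own, but it is the trigger for the kind of investigation and remediation discussed later in the interview.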
PW: What questions should an employer ask to address the ethics issue when evaluating an AI-based recruitment product?
AC: There are a number of questions that may be appropriate based on the technology and the use case in play, but here are four to get people started:
1. Is the technology transparent and can end users understand its “reasoning”?
2. What was the data that was used to train the system and what sort of bias analysis/testing has been done on the underlying algorithm?
3. Is there a mechanism for routinely engaging in disparate impact testing to look for bias, and what is the process for addressing any impermissible bias that is identified?
4. At the end of the process, is a given technology producing an output that my employees are analyzing and incorporating into their decision process or is the ultimate action completely automated?
PW: Can the evaluation be done without the expertise of an outside expert – say a lawyer or scholar of ethics?
AC: It is entirely possible to have internal teams with the capability to go through this analysis in-house. That said, I always recommend running this stuff by outside counsel who understand these issues. If nothing else, they can help curb the tendency toward groupthink and clanning, and provide a fresh set of eyes to look for potential ethics and risk potholes.
PW: Under current law, who’s at fault if an employer uses an AI-based system that produces inappropriate / unethical results – the employer, the company that built the system or someone else?
AC: Under current law, liability is generally going to sit with the employer who used the flawed system or used a good system in a flawed way. If the employer included an indemnity clause in the contract with the AI vendor, they may be able to shift the risk and the damages associated with a bad result.
PW: What steps should an employer take if it discovers that its AI-based system is producing inappropriate / unethical results?
AC: The employer should generally take the system offline long enough to figure out what is driving the issue and whether it can be corrected. This may mean retraining the system, reweighting the data elements the model is analyzing, adding or removing elements driving the problematic result, or other steps. Once you understand what is driving the issue and what can be done to correct it, you can make an informed decision about when and whether you bring the system back into use. I recommend bringing lawyers who understand these issues into the loop as quickly as possible so you fully understand the risk profile associated with the problem.
PW: You’ve given us a lot to think about. Thanks very much! Any final words for our readers?
AC: These technologies can provide significant benefits in terms of expanding the available talent pool for organizations, helping to ensure an organization has a diverse group of employees, and helping to hire at the scale and at the speed a business needs. That said, there is a lot of upfront work necessary to ensure that a given technology is right for its proposed application. Spending time at the beginning to vet, pilot, and test before going “all in” on a specific tech can improve the odds of success.
The recording of the Point-Counter Point discussion on “The Ethics of Artificial Intelligence in Talent Acquisition” as well as recordings of all of the sessions at both TAtechNorthAmerica Digital and TAtechEurope Digital, are now available here. For information on upcoming TAtech conferences, click here.