Artificial Intelligence: Getting Ethical with the Digital Transformation

The HR Technology Digital Transformation is happening, and has been for a while now. But I have to say the following quote from HR Exchange Network Advisory Board member Jan van der Hoop sums it up pretty well:

“I’m sensing it feels to me like the [HR] community as a whole is pretty much a deer in the headlights. They know there’s changes happening. They know they have to get jiggy with those changes, and they’re really not sure where to begin.”

Speaking as someone who is still fairly new to the space, I feel very much the same way.

Let me give you a quick background. I’ve spent the last 11 years of my professional life as a journalist. With the exception of a seven-month stint at my hometown radio station, all of that time was in television news. When I made the jump from journalism to Human Resources… my head started spinning… and it continues to this day, albeit not as fast as it used to spin.

The more I delved into my research, the more information flowed past my desk and through my laptop. It is rather dizzying, this concept of Human Resources and technology. I love the topics involved… even the ones I don’t quite understand. And as a Xennial (half Gen Xer and half Millennial) I love technology and its adoption.

There is one area, however, of great concern to me.

As a father, I am very keen on the ethical use of new technologies. I’m constantly looking at the possible ways my children, even at their young ages now, will use and interact with technology. What I wasn’t prepared for was how the thought processes I currently use to evaluate technology as a parent would inform my evaluation of technology professionally in the HR space.

Quite frankly, folks get really excited about technology… especially artificial intelligence. And they should. This stuff is awesome. I’m a huge science fiction fan as well… and seeing my favorite show, Star Trek, come to life is "fascinating" (Thanks, Mr. Spock). But setting that to the side, there are some real ethical questions that need to be answered.

While speaking with Jan, I asked him about specific concerns around the ethical efficacy of AI. He started by giving me an example.

“Let me give you a practical example of something that backfired and then let’s maybe dissect that because there was a story that was reported about six months ago about Admiral Insurance, so they are the Welsh version of Geico Insurance. A couple minutes online will save you 15% or whatever the claim is. Admiral Insurance got the idea that they could do a better job of quoting insurance if they really understood the risk factors of the people who were applying, so they did some research, created some very elaborate algorithms that were designed to comb through an applicant’s social media posts, so it would read through what they posted on Twitter and Facebook, and based on that content, would infer their behavioral traits, and as they inferred their behavioral traits, of course, that would carry with it a certain risk calculation.

It happened in the background and it happened so quickly that they were able to turn around a very custom, very accurate, they thought, quote for insurance based on somebody’s risk profile, and when it came to light, actually, so part of the application, the online insurance application, asked people for their Facebook account, their Twitter account, what have you. When it came out, when it became public, what it was these guys were launching, and how it worked, there was a huge public backlash to the extent that as far as I know, Admiral actually shut down that whole initiative and went back to conventional underwriting practices.

So, it’s not hard to imagine the same sort of thing happening inside an organizational context with the organization making hiring decisions based on social media posts, making promotional decisions based on other inferred information about their employees. It might or might not be accurate.”

This whole situation presented three problems: it created issues with perception, with trust, and with organizational risk.


Let’s face it. HR professionals are dealing with people, plain and simple. Those professionals are the stewards of personnel information, and that information is both a valuable asset and a precious commodity. Precious though it may be, some of it isn’t usable and thus can be termed "sloppy."

So, you have two forms of data, organized and sloppy. With which do you think AI works best? If you picked sloppy, and I know you didn’t, DO NOT PASS GO. DO NOT COLLECT $200.

If you feed sloppy data into AI, you’re going to get something that is dangerous at best. Equally dangerous is letting that organized, good data fall into the wrong hands.
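To see why sloppy data is so dangerous, consider a minimal sketch (with hypothetical, made-up records) of the garbage-in, garbage-out problem: the same head-count question gives very different answers depending on whether the personnel data was cleaned first.

```python
# A hypothetical illustration of "sloppy" vs. organized data:
# the same department appears under several inconsistent labels,
# so any analysis that trusts the raw labels gets the wrong picture.

from collections import Counter

# Sloppy records: one department spelled three different ways,
# another spelled two ways (these records are invented for illustration).
sloppy = ["HR", "hr ", "Human Resources", "Engineering", "engineering"]

def count_departments(records):
    """Naive head-count by department, trusting labels as-is."""
    return Counter(records)

def normalize(label):
    """Collapse obvious variants into one canonical label."""
    cleaned = label.strip().lower()
    return "hr" if cleaned in ("hr", "human resources") else cleaned

raw_counts = count_departments(sloppy)
clean_counts = count_departments(normalize(r) for r in sloppy)

# The raw data appears to contain five departments of one person each;
# the cleaned data correctly shows two departments.
print(len(raw_counts), len(clean_counts))  # prints: 5 2
```

Any algorithm, AI-driven or not, fed the raw version would "learn" from five phantom departments. Cleaning the labels first is the unglamorous step that makes the downstream analysis trustworthy.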

Alright… so how do you deal with this knowledge? Where to begin?

Van der Hoop says there are four things you need to consider when employing AI.

  1. Check the optics of the decision. Is it going to upset employees and customers collectively? Would you be embarrassed to share what you’re doing?  “That, to me, is the first level gut check, and an organization that’s thinking about going up this path might actually want to do some focus groups internally or externally, depending what path they’re thinking of taking to get some feedback from people about how it would make them feel if they knew the organization was using this data in this way,” van der Hoop said.
  2. Make sure you get your ethics straight.
  3. Make sure you have the right infrastructure and responsible stewards of the information.
  4. Invest in good data.

Normally, the pieces I author are more academic in nature. I take a concept and really try to break it down into manageable pieces. For me, I need to understand how individual pieces work. That then informs the whole.

I think technology in the HR space, especially AI, is much the same. All of these pieces are fantastic individually, but when you try to pair technologies, you can quickly get into the realm of dysfunction. HR professionals and C-Suite leaders have to be smart about what they’re employing and how.

So what is the moral of the story? I’ll leave that to Jan van der Hoop.

“Nothing is going to reduce the need for the human touch and human contact and human caring that creates the social fabric that keeps an organization going.”