HR Technology Conversations: Jan van der Hoop Part I

As more and more technology is created for Human Resources, there is much to be said about its efficacy and the ethical standards by which its use is judged. I sat down with HR Exchange Network advisory board member Jan van der Hoop to discuss these topics. Below is Part I of our conversation.

Transcript:

Mason:

Okay. Let’s go ahead and kick it off here. First question, can you talk about the HR digital transformation that is currently underway?

Jan:

It’s all over the map, Mason. It’s like trying to get a grasp of a cloud of ether. I think there’s a lot of new stuff coming from a lot of different directions. Lots of vendors crowding the marketplace with different offerings, and I’m not sure that the HR community can really hear through all the noise what they’re really all about.

Mason:

That makes a lot of sense because I will tell you, as someone that’s new to the space, there are days that I can’t make heads or tails of what I’m reading or what I’m looking at. There’s just so much, and it’s like you click on one link, it takes you to another link, to another link, and it seems like there’s no end in sight.

Jan:

Yeah, it’s baffling, and so it’s interesting. I spent, I would say accidentally spent 15 years of my career in corporate HR, having started in operations. I thought it was going to be a six-month assignment in my rotation, and ended up doing it for quite a stretch of time in a variety of organizations, and I’m still very much connected with the HR community. I would say by and large, the communities that I’m tied into here are hearing a lot of noise about AI and a lot of noise about technology and a lot of noise about how the world is changing around them, and mixed in with that are all the messages about the millennials and the kids that are following them and how they’re so different. I’m sensing, it feels to me like the community as a whole is pretty much deer in the headlights. They know there are changes happening. They know they have to get jiggy with those changes, and they’re really not sure where to begin.

Mason:

If you were in a position to walk into a company that’s starting to approach this transformation, what would be one of the first things you’d say to them in terms of where to begin?

Jan:

Wow. I would say start small. Understand the current state. Understand the forces that are driving change around them, and be very selective about the one or two things that they take on as internal initiatives to begin to shift how they do things. To take on too much too quickly is a recipe for disaster in any business in any function. It’s not just about HR, but I think there’s a few things. The worst possible thing an organization could do is take a big lurch forward and say all of a sudden, “We need to get into machine learning and AI and apply that to our people as well as to our business,” because I don’t think most organizations are ready for it. I don’t think they’re ready for it in terms of really thinking through the ethical boundaries, those lines that they’re not willing to cross either as it relates to how they treat their customers or their employees. I don’t think they’ve really even entertained much thought around the trade-offs.

It’s like autonomous vehicles. You have the ethical question everybody’s talking about now is if I’m driving an autonomous vehicle and the car’s in charge, and there’s a mom and baby crossing the street in front of me, who lives? Does the car drive me into a telephone pole and kill me, or does it save the mom and baby? Or take them out? Right? So that metaphor applies equally well in organizations I think. There are always those lesser evil compromises that need to be thought through and today are thought through by humans, we hope. Maybe not as thoughtfully as they should be, but at least they present themselves as somebody either decides to deal with it or they don’t. The more we begin to rely on artificial intelligence, I think the more the ethical logic needs to be built into the software, and I don’t think a lot of people are giving it that depth of thought.

Mason:

You’re the first person who’s actually really mentioned the ethical side of AI. I’d like to explore that a little bit. As AI is out there and starting to be used, what are your specific concerns about it in terms of ethics, and how can it be used properly, to its best ability?

Jan:

Well, I think there’s a bunch … Let me give you a practical example of something that backfired and then let’s maybe dissect that because there was a story that was reported about six months ago about Admiral Insurance, so they are the Welsh version of Geico Insurance. A couple minutes online will save you 15% or whatever the claim is. Admiral Insurance got the idea that they could do a better job of quoting insurance if they really understood the risk factors of the people who were applying, so they did some research, created some very elaborate algorithms that were designed to comb through an applicant’s social media posts, so it would read through what they posted on Twitter and Facebook, and based on that content, would infer their behavioral traits, and as they inferred their behavioral traits, of course, that would carry with it a certain risk calculation.

It happened in the background and it happened so quickly that they were able to turn around a very custom, very accurate, they thought, quote for insurance based on somebody’s risk profile, and when it came to light, actually, so part of the application, the online insurance application, asked people for their Facebook account, their Twitter account, what have you. When it came out, when it became public, what it was these guys were launching, and how it worked, there was a huge public backlash to the extent that as far as I know, Admiral actually shut down that whole initiative and went back to conventional underwriting practices.

So it’s not hard to imagine the same sort of thing happening inside an organizational context with the organization making hiring decisions based on social media posts, making promotional decisions based on other inferred information about their employees. It might or might not be accurate. So there’s a perception issue. There’s a public trust issue. There’s a reputational risk to the organization of going a little bit wild with how they use this information, and I think before anyone really wants to go there, not only do they need to consider the optics, Mason, of whether they should go there or not, they need to make sure that they’ve got the right infrastructure set up in the organization for any kind of data gathering and work in developing AI. First of all, most organizations do not have the quality of data that they think they do, so whether it’s in marketing or in product development or in manufacturing or in HR, it’s always going to be a case of garbage in, garbage out.

If you don’t have good quality information, if you’ve got sloppy data, then getting AI to do work and machine learning on a foundation of sloppy data is going to be dangerous at best. I think the other piece is the algorithms and the things that go into the black box really need to be … They need to be treated like the valuable asset that they are, but they’re also … It’s a dangerous asset, placed into the wrong hands, and so it’s got to be safeguarded. It’s got to be managed and inventoried judiciously, and the organization, I think, has to have people in control of that information who are good stewards of those assets and are very deliberate about where they get used and how they get used and where they don’t get used, and most organizations, I think, haven’t given that degree of thought yet for those responsibilities.

Mason:

So if we were to put down, let’s say as I’m writing my article, if I were to put down the five top things to know about ethical AI use, or top three, top four, however many. What would that list be?

Jan:

I think the first would be around doing a gut check around the optics. Is it something that’s going to … If it came out that you were doing it, would it piss off your customer base or would it piss off your employee base? Would you be embarrassed to share with them what you were doing in a town hall? That, to me, is the first level gut check, and an organization that’s thinking about going up this path might actually want to do some focus groups internally or externally, depending what path they’re thinking of taking to get some feedback from people about how it would make them feel if they knew the organization was using this data in this way, because I can tell you it cost Admiral millions in not only lost revenue but reputational damage.

I think the second is making sure you get your ethics straight, which is something that we’ve already spoken about. I think the third is make sure you’ve got the right infrastructure and the right people responsible for that good stewardship and management of the asset, and that the algorithms that you’re using and everything that goes into that black box is inventoried and itemized and it doesn’t get used without the proper oversight and planning, and the third or fourth, I’ve lost count, probably the fourth, is just investing the time up front to make sure that you’re actually doing this work on the basis of good quality data.

Mason:

So let’s move on from AI and move into VR, AR, and robotics. How do you think those are fitting into the HR space right now?

Jan:

I’m not seeing it there a whole lot, honestly. I’m hearing more and more, obviously, about bots that are helping in the selection process or they’re helping in the outbound communication process. I’m hearing an awful lot about people applying machine learning to databases of resumes and people to infer things, right? So let’s take the resumes of the people that are in the organization that are good performers that we wish we had more of. Let’s throw them into a black box and see what information that can turn out, and then let’s take those learnings around what it is about those people we wish we had more of, and apply it to a database of 200,000 resumes and see who we should be talking to, and that’s about as elaborate as I’m seeing at this stage. Obviously, IBM Watson is on the scene with a whole bunch of different algorithms that are based on personality and other things. I’m not hearing as much about AR and VR, and maybe I’m just not hooked into the right channels, but I’m not seeing a whole lot of that.
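The resume-mining workflow Jan describes can be sketched in miniature. The example below is a toy illustration, not anything Jan or IBM Watson actually uses: it builds a word-frequency profile from the resumes of hypothetical top performers, then ranks an invented candidate pool by cosine similarity to that profile. Every name and resume string here is made up, and a real system would use far richer features, while carrying exactly the bias and data-quality risks discussed elsewhere in this conversation.

```python
# Toy sketch of "learn from top performers' resumes, then score a candidate
# pool against that profile." All data below is invented for illustration.
from collections import Counter
import math

def tokenize(text):
    """Crude word tokenizer: lowercase, drop commas, split on whitespace."""
    return text.lower().replace(",", " ").split()

def cosine(a, b):
    """Cosine similarity between two word-count vectors (Counters)."""
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Resumes of people the organization wishes it had more of (hypothetical)
top_performer_resumes = [
    "led cross-functional teams, improved retention, coached new hires",
    "built analytics dashboards, mentored junior staff, drove process change",
]

# The candidate pool to score (hypothetical)
candidates = {
    "candidate_a": "coached teams and improved onboarding retention metrics",
    "candidate_b": "ten years of forklift certification and warehouse logistics",
}

# The "black box": one aggregate word profile of what good looks like
profile = Counter(w for r in top_performer_resumes for w in tokenize(r))

# Rank candidates by similarity to the top-performer profile
ranking = sorted(
    ((name, cosine(profile, Counter(tokenize(text))))
     for name, text in candidates.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, score in ranking:
    print(f"{name}: {score:.2f}")
```

Even this toy version shows the hazard Jan raises: the ranking is only as good as the profile it learned from, so if the "top performer" set reflects a biased hiring history, the model faithfully reproduces that bias at scale.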

Mason:

Gotcha. Since that’s not something that you’re hearing a lot about, let’s move on to people analytics. What are the most important things or trends that HR professionals need to know about people analytics, and how are they using them?

Jan:

I think across the board, so this is personal opinion and observation over 30 years. I think across the board, HR has got an opportunity to up its game as it applies to analytics, period. I think as a profession, as a specialty, we’ve allowed ourselves to not be as directly tied in and connected to the business and how the business makes money as we need to be, and that’s a very broad generalization, and a dangerous one to make, Mason, so I want you to use that one carefully, but most often, when I sit down in front of an HR professional in an organization and ask them about the P&L, I get the eyes glazing over, and I’ve had people tell me, “I’m not responsible for the P&L,” which should be an alarm bell ringing in the CEO’s office.

I think the years to come are going to see more HR professionals rising to the challenge by really understanding an organization’s financials and by finding ways to deliberately influence those financials in a productive and positive way through the initiatives that they undertake in HR, so basic numeracy, basic finance, I think are skillsets that are going to be in high demand and are going to be rewarded for people in HR. I also think the HR function itself, I’m going to sound a little bit like Dave Ulrich here, but we need to learn to measure the things that matter. When you look at the typical metrics in an HR function, they’re measuring number of days in a process.

They’re measuring number of dollars that it costs to hire or costs to advertise or what have you, but there’s very little attachment to any kind of quality metrics in most HR organizations, so yeah, we filled the requisition in 27 days, but did we hire the right person, and did that person actually stay longer than expected and perform better than expected? Is there something that we did differently in attracting that person that we need to apply to future hires? There’s very little continuous improvement. There’s very little feedback loop, and there’s very little accountability for quality in my experience.

So there’s all sorts of basic analytics available. Talk about psychometrics for just a moment. I won’t make it about us, but when you look at psychometrics in general, there have been very good, high quality, reasonably priced assessment products in the marketplace for 30 years, and they’ve just continued to get better and stronger. You look at SHRM surveys of their membership and the statistics around what people know and understand of assessments is very low, and yet these tools have been around forever, but the basic understanding isn’t there. The discipline to use them productively isn’t there. I think there’s a rising trend. Now more organizations are starting to use them for selection, which is great, but if they’re using the right tool, the data they collect through the application and the selection process, if they’re using a good normative assessment, that data is valid for the life cycle of the employee, so they could and should be using it in onboarding.

They could and should be using it with the hiring manager and the team around them so that they understand one another better. They can onboard and [inaudible 00:18:46] productivity more quickly so the manager understands how best to coach the individual and bring them up to excellence as quickly as possible. They could be using the information for career planning purposes and for succession planning purposes, and yet, I don’t … I was going to say I don’t know whether it’s ignorance or laziness. That’s harsh and judgmental. I think there’s lots of good information available, even in organizations that are doing the very basics around psychometrics, that ends up in a drawer after the person’s hired and just doesn’t get applied, so I think as it relates to analytics and HR, there’s a ton of low-hanging fruit. They don’t need to go out and buy an IBM Watson system. They don’t need to go out and spend lavish money on new computers and new databases and all the rest of that. I think it’s not hard for an HR professional to start collecting relevant information very economically, and start making better decisions with it.

Mason:

So you’re essentially saying that once data is collected, it can continuously be used. It’s not a one-time deal. They can use it, for instance, in predictive analytics, or for decision-making based on the analysis of the information that they’ve already gotten.

Jan:

Yeah.

Mason:

Do you think there’s a need on the part of organizations to continue to gather that information about particular employees, either through maybe quarterly reviews or just maybe a couple of times a year, doing a survey of the workforce to see where things are in terms of what they should and maybe shouldn’t be doing?

Jan:

Yeah. I think the answer is yes to both, in the right context. To put out a quarterly or monthly pulse survey for the sake of putting out the pulse survey and ticking a box on the list of initiatives that I was assigned this year, I think is a dreadful mistake. I think before putting out surveys, before asking people for information, before doing reviews, I think it’s critical to sit down and really tease apart, “Okay, what information is it that we’re looking to gather, and how will we use that information for the benefit of the individual and for the organization?” And if there aren’t good answers to both of those questions, then there’s no value to collecting the data.

Mason:

It’s just data collection for data collection’s sake.

Jan:

Yeah. It’s to fill a hard drive. A waste of everybody’s time.

Mason:

Do you think in terms of, and I only ask this in terms of your generations, do you think for instance, millennials versus gen X, do you think they’re like, “Sure, I’ll answer a survey,” I mean, they’ll open up a Facebook survey and answer survey after survey after survey to find out what Harry Potter house they fit into. Do you think that’s something they want to do? I know, right? It’s crazy but it’s true. Do you think that’s something that they want to do as part of their professional career as well, versus someone that fits into the generation X where it’s like, “I don’t want people knowing all that about me”?

Jan:

You know what? I think there are a lot of distinctions about the generations that are accurate, and I think it’s a little bit overblown. I think the piece that we lose sight of, if we’re talking about cross-generational management and communication for just a moment, fundamentally, they’re people. They’re human beings who benefit from the occasional touch from a manager who reaches out to say, “Hey, what do you think? I need some help here. How can I support you?” As we all did at the start of our career. We were every bit as odd to the generation of managers who greeted us when we entered the workforce as these guys are to us, and with any luck, you and I both, early in our career, ran into a manager or two who gave a shit and took the time to get to know us, took the time to get to understand our interests and what gives us energy and what we enjoy doing and what we’re naturally good at, and found ways to play us to those things more than they played us to the things that we were just never going to be great at.

These gen Xers and the millennials and the guys that are going to follow them aren’t any different. They’ll answer a question if they’re asked and if we’re not asking them questions about how they’re feeling and how we can support them and what are their dreams, and a whole litany of other questions, we’re going to lose them if we’re not asking those questions, and we’re going to lose them even faster if we ask the questions and we ignore the answer or we just don’t listen. It doesn’t mean we have to agree. It doesn’t mean we need to change the world to conform with what they’re asking for, but if we’re not listening, then we’re missing the boat.

Mason:

Let’s see here. Yeah. So much for that 30-minute deadline.

Jan:

We’re way off topic. So I think the moral of that is AI and machine learning and data, yes, there’s a whole lot we could be doing better and nothing, nothing is going to reduce the need for human touch and human contact and human caring that creates the social fabric that keeps an organization going.

Mason:

That’s something that even though I’m new to the space, that’s something that I’ve tried to write in the things that I’m pursuing in the articles and the research that I’m doing, is that all of these things are great but you can’t throw the baby out with the bathwater. You can’t disconnect the humanity of this, and it seems to me that HR professionals and companies and organizations that are pursuing technology solutions, at the center of it, they still have to remember that the information that they’re gathering or the analytics that they’re processing, or the AI that they’re using, the reality is that humans are going to be interacting with this stuff, and the things that you do with it really impact human individuals. It’s not like, “Look, we have 36 people that can use both left and right hand. They’re ambidextrous.” That’s useless in the grand scheme of things unless you’re going to use it in a way that benefits the employment of the company.

Jan:

Well, and if you’re not using the insights to improve the quality of the human connections, you’re missing the boat. Whether it’s your employees or your customer. A good assessment is going to help you figure out how to connect more quickly with another person and more powerfully. If you know how you’re wired and you know how they’re wired, you know how you need to flex to reach them more effectively. Having a database full of psychometric information about 100,000 employees is useless unless you use it productively.

Mason:

I don’t want to ask you for organizations using analytics, what are the things I need to ask, because that’s different per each organization. You can’t sit down and say, “Okay, medical organizations should be asking these types of questions,” and so I don’t think you can really do that, so we’ll skip past that in general, but in terms of, and I’ve heard some people say that this word “engagement” is overburdened and overused, but in terms of employees interacting with one another, regardless of what level, where do you think technology fits into that? Is it more of a social type situation? Or is it just communicating amongst each other about, for instance, I’m free this day, or I need to switch this day, can you switch this day, type stuff?

Jan:

I think it’s all of the above, and I think the same comments about technology on a peer basis apply as what we said about other aspects of this. Technology can take the friction out of getting work done efficiently, absolutely, so being able to share calendars in Outlook would be a really rudimentary example of that. If it saves us from having to send three emails and frustrating each other because we’ve already got a full inbox, then great. Magnificent. The friction’s gone. Would it be helpful for an individual to understand how best to communicate with their boss or how best to communicate with their coworkers by giving them relevant information about how they’re wired? Absolutely, and I don’t see a lot of organizations doing that today.

Some are, but they’re not necessarily getting all the mileage out of it that they could, because they usually share that information and debrief it in an afternoon workshop where everybody’s off-site at a conference or something, and it doesn’t always get reinforced terribly well once everybody’s back at the shop, so it’s the basic disciplines around reinforcement that often water down the investments in that kind of learning and development.

Mason:

So it’s definitely a “what you put into it is what you’re going to get out of it” type situation?

Jan:

Yeah, and how you reinforce it, and how you bring it to life, and how you make it part of the organizational culture and language and just how we get things done, which requires leadership.

Mason:

How do you think technology is impacting the culture of organizations? Is it just from, let’s say, the analytics section, or is it from maybe a wellness program where everybody’s wearing Fitbits and challenging each other, or is it-

Jan:

Yeah, I’ve seen and read little bits of all those things. I’d say broad strokes, and forgive me, when my phone rings, it’s the CEO of an organization that we’re working with, so I’m going to have to drop you fast. I’ve bought five minutes from him, but he’s going to call. I think broad strokes, what I see most in organizations is managers … What’s the word I’m looking for? They’re almost abdicating their responsibility to be leaders of people in favor of what they think is the efficiency the technology is going to bring them, and it’s shortsighted. They’ll buy efficiency today, but if it weakens the relationship, then they’re going to pay for it in three months when the person leaves for a job across the road for an extra 5,000 bucks, so the risk with technology is it’s always the next shiny thing that we think is going to remove the pain and make things easier, and in embracing it, we forget the importance of the personal connection.
