Episode Overview
Justine Whitaker, an experienced people leader, joins Luke to discuss the strategic imperative of AI, urging leaders and employees alike to be “crew, not passengers” in its adoption. Having studied AI through a human-centred design lens, Whitaker highlights that the most powerful applications are those that break down organisational silos, such as aggregating customer and people data to demonstrate real business impact and ROI. The core challenge lies not in the technology itself, but in managing the human element: overcoming biases like algorithmic aversion (being less forgiving of machine errors) and preventing human dependency that leads to a lack of critical thinking. She argues that to build trust, organisations must ensure a positive first experience and apply AI to solve high-value, substantial problems.
For the future, Whitaker envisions a radical evolution of the Chief People Officer (CPO) role, suggesting a hybridisation with the CTO/CIO function to leverage shared, integrated, real-time data for predictive workforce planning. This shift would free the CPO from administrative tasks (automation) to focus on strategic augmentation. For any individual looking to thrive, the most crucial action is adopting a non-negotiable 15-minute daily habit of exploring and experimenting with AI, a simple discipline that future-proofs one’s career against an accelerating pace of change.
Read the Full Transcript
Luke Fisher: I think the reality of the level of integration we’re going to have with AI, you know, we need to switch our thinking to being crew, not passengers. And I really like this analogy, that we’ve got to ask the questions. We’ve got to make our providers accountable so we don’t just sit back and let it kind of wash over us. You know, we are not a passenger in this.
Hello and welcome back to Culture In Action. I’m Luke Fisher, CEO of Mo. My guest today is Justine Whitaker, an experienced people leader, culture strategist, and someone who’s been diving deep into how AI is transforming the talent space. After a couple of months off, in this episode, we explore what happens when humans and technology truly collaborate, not just coexist. Justine shares how she’s been studying AI through the lens of human-centred design and why being crew, not passengers, on this journey matters for the future of work, and how some small daily habits like 15 minutes of learning can future-proof your career. If you’ve wondered how AI can make work more human, this conversation’s definitely for you.
So let’s get started with a relatively broad question. I’d like to say that you’ve had the last few months off since your last role, but it doesn’t seem like that’s quite true. I’d love to hear what you’ve been pulled into, and how you’ve been exploring AI in the talent space now that you’ve had a little more time on your hands.
Justine Whitaker: Well, look, I took the opportunity, given that, whilst there’s a lot we don’t know, the one thing we do know for sure is that AI is going to be omnipresent in whatever happens in the future. Whether you think it’s overhyped or under-hyped, there are no two ways about it: AI is going to be with us, whether you see it through the lens of augmentation or of automation. So I took the opportunity to get myself as equipped as possible. That includes some of my own practice, you know, reading and following people online who are the trailblazers, and, most recently, studying more formally through Wharton. I was interested in them because they’re looking at AI innovation and human-centred design. So a mixture of my own curiosity and some formal learning.
Luke Fisher: Okay. Very cool. And then was there a particular moment for you as you started getting into it? I imagine the first day or two where you’re like, I’m intrigued, let’s have a look at AI. And then you get into it and then you’re like, ah, okay, we need to fundamentally rethink in HR how we think about AI. Is there one of those moments, or not?
Justine Whitaker: Yeah, so I think one of them was a sort of anthropomorphic moment where I was playing around on ChatGPT and then it made a suggestion as to what I should do next, which almost made it feel like the AI was clairvoyant. You know, it had anticipated what I was going to need to do next really, really accurately. So that certainly was a moment. And then I think the other one is when I’ve been working just with a classic good old LLM and have asked some questions, becoming more aware of my blind spots. So I would ask it, you know, “What do you see that I have missed?” or “What have you learned about me in this session that I may not have picked up on?” And I think all of those moments really helped me shape my relationship and some of my philosophical points of view on what kind of relationship I want to have with inorganic intelligence and how it will complement my organic intelligence in the future.
AI and Breaking Organisational Silos
Luke Fisher: Yeah. Very, very interesting. And then of the applications that you’ve either seen or tested yourself, is there one in which you’ve gone, “Wow, okay, this can make a really meaningful difference to my life,” as an individual or as a CPO, as you’ve spent a lot of your time?
Justine Whitaker: Yeah. So I think the things that have had the most impact on me are those applications of AI which break the silos down. Classically, particularly in the people world, we stay in our lane, and I think that’s a mistake that hurts us. The tools and applications I’ve been most impressed by don’t stay in their lane; they deliberately cross it. A very good example: I came across an excellent application which brings together, in real time, data, information, and insights from your CX, your customer-facing world, and from your PX, your people-experience world. It doesn’t just give you an insight; it actually creates a measure of ROI on the actions you’ve taken, so it gives you a business impact. That’s the stuff that impresses me: being able to aggregate information, insight, or sentiment from silos that have previously been mutually exclusive. I think that’s where it gets really exciting. We can traverse those lanes and bring a more multidisciplinary approach to solving some increasingly complicated problems.
Luke Fisher: Very interesting. And that, I guess, begs the question: are we overlooking any areas of the employee experience where AI could be delivering pretty substantial value at the moment?
Justine Whitaker: I mean, I think there’s a lot that it can do; I’m not seeing gaps yet. We definitely get very, very consumed with what our relationship with AI should look like, you know, whether it will replace us, whether it will augment us. The piece we do need to think really hard about is the management of the risk, the management of the reputation, and how we get over some of these blockers around trust, such that it actually becomes possible to collaborate and integrate with whatever form your AI takes, keep the human in the loop, and optimise our own potential as well as the potential inherent in this phenomenal technology.
So I guess my answer, Luke, is less about what is not being done in terms of application of AI, and more about the way we are talking and thinking about it. I think there are some big questions we can solve, but they are related more to the philosophical, the ethical, and, you know, the risk management around utilisation of the tech. And you might go, “Well, why don’t you leave that to the big tech companies, the Eric Schmidts of this world?” But given the reality of the level of integration we’re going to have with AI, we need to switch our thinking to being crew, not passengers. And I really like this analogy: we’ve got to ask the questions, we’ve got to make our providers accountable, so we don’t just sit back and let it wash over us. We are not passengers in this. There is an opportunity for you to be crew and influence it by asking really good, searching questions to drive accountability, transparency, robust ethical and governance frameworks, et cetera. So that’s where my focus is going, rather than on gaps in the application of AI.
Luke Fisher: Okay. That’s why I’ve loved spending so much time with you, to be honest. As you can see, maybe not even crew, but you’re trying to pilot the ship in many cases and really understand the direction of travel. So, it’s very, very cool.
The Challenge of Trust and Dependency
Justine Whitaker: Do you see, Luke? I mean, you must have a view on that sort of ethical, existential, reputational risk, and how much…
Luke Fisher: So, I think the biggest fear that I have is that people are going to fall into autopilot. It gives you the opportunity to become lazy, because it’s smart, it learns you, and it understands the decisions you would make. You get to a point where you lack original thought: you might have new context or a new nuance of information that you’ve been able to interpret and the AI hasn’t, which in your gut would cause you to make a different decision, but you’ve become so reliant that you just follow what’s returned to you.
And I think that’s the bit which causes me the most worry, and I’ve seen it in some of my own behaviours. When I feel like I’ve got cognitive overload (I make a lot of decisions in my job, and I’m sure you do in yours), it gets to the point where you go, “Okay, I can formulate roughly this, but how about I just have AI write me the core structure so I can do 80% of the work without having to think about it, and then refine.” That’s a much easier job. It’s small things like that. As we become more and more heavily dependent, I think a lot of the risk sits there, in dependency, because great products fall into our lives and become a critical part of them, and use goes from conscious to subconscious. At that point, interventions from the AI that pique your awareness of your own decisions could be quite interesting, just to keep people on their toes. So yeah, that would be the one for me. What do you think?
Justine Whitaker: The other thing I think is super interesting is the relationship we build. I mentioned anthropomorphising it, actually seeing a machine as having an identity and a personality, and I caught myself saying “please” and “thank you” and “I really appreciate that,” which I think is really interesting. On one hand you go, “Well, look, I’m kind of treating it like an inorganic colleague.” On the other hand, you go, “You’re mad. It’s a machine. It’s not going to have empathy, and processing all my pleases and thank-yous means extra trips to the cloud and through a big data centre, with a real cost attached, when I add that level of politeness in.” But I do it, because I look at the AI almost as a team member, and there’s a reciprocity: I think if I’m pleasant to it, it might be pleasant back and work more accurately for me. So I think this whole relationship we have with it, in whatever forms it takes, is going to be a super interesting area to watch and explore.
Luke Fisher: It’s funny, this reminds me of a conversation I had with one of my team: “I’m polite to it because when the machines take over, they’ll remember me.” Which brings me to my next question, about trust and the adoption of technology, and how often trusting something is a big blocker to the outcome, which is mass efficiency. Whilst away last week at the UKG event, I heard them talk about reversing the productivity equation. Normally most firms look for the same level of investment with increased output, but AI is probably the first time we can reduce effort and increase output. So the potential outcome is really significant, but human adoption is still required for now. How are you seeing trust in AI tools being built successfully by organisations that are adopting?
Human Bias and Building Trust in AI
Justine Whitaker: So I think the interesting lens on this is looking at it through the lens of human biases and heuristics. So regardless of whether it’s digital technology or AI technology or automation technology, you know, we always have a risk aversion and we favour the status quo. So the reality of it is before we even start, regardless of what the nature of the change is, we favour the devil we know rather than the devil we don’t. And we know from a psychological perspective, even if something is going to be advantageous to us, if it requires change, we are likely to just kind of want to lean back from it. So I think that that is very relevant and very ever-present regardless of the nature of the change.
I think there’s also a really interesting point when we think about what’s called algorithmic aversion, which means that in simple terms, we are more willing to let a human off for a mistake than we are a machine. So if we are working together, Luke, you make a mistake in an important spreadsheet, I’m probably going to go, “Oh, he’s only human, but I’m sure he tried really hard.” My AI makes a mistake, a hallucination, I get quite ratty with it, and it doesn’t take me much to lose my trust. And some of that is because a lot of the intelligence is in a black box that I don’t understand, which means I can’t control. So I think those biases are ever-present and make it difficult for us to ingest any new technology. I don’t think they’re new to AI.
If you said, “What would I do to build trust?”, I’d be thinking about a couple of things. One is the first impression. Again, we know that’s very indelible, so think about the anchoring bias: was your first experience a good one or not? It’s really, really hard to re-engineer a bad first experience. Super important, and I don’t think we think about that enough. The other thing to think about is exactly what problem we’re trying to solve, and again, work backwards. You’re looking for those pressure points, the areas that are really difficult, that you want to solve for, so you can show what can be done rather than just fiddling around. So those are the two things I’d be thinking about: first, how do you manage that first impression, that first experience with the interface? And secondly, how are you solving a problem that really matters, that grabs people’s attention and reduces friction, dedicating the technology to solving something real?
Luke Fisher: Yeah, yeah. It’s interesting. Your answer gives me two considerations. One is about how you build trust. The other is about how often I hear people say, “AI is a wonderful solution seeking a problem, or a clear use case.” I think people often don’t think carefully enough to express the problem they’re trying to solve, or to identify the problem that generates the most value for them. Everybody’s situation is unique, but AI’s capability is so broad that it can be relevant to all, so you end up in a position where you don’t get the “aha” moment. In product, we talk about the “aha” moment as the point where you go from value promised to value delivered; the point at which that happens is your aha moment.
So all of these tools are trying to take you to a point where you go, “I came to do this job, and I got it done.” And AI has the opportunity to do that exceptionally quickly, compared to something like an HR system, which, as you’ve experienced, can take an age: nine months to implement and nine months to get value, so you’re waiting 18 months. AI has a wonderful opportunity to create value for you almost instantly, which ties into the trust point. I always think about trust quite simply: predictability-based trust and vulnerability-based trust are the two primary ways in which we build it.
We show up the same, which becomes predictable, or we’re vulnerable, exposing ourselves to show “I’m open to you,” so the other person doesn’t feel exposed, and you start to develop relationships off the back of that. I think we’re particularly harsh on AI when it comes to predictability-based trust, because our bias is that it’s going to catch us out somewhere; sci-fi movies have said forever that the machines are going to take over, so it’s just in there somewhere. I wonder, from a design standpoint (and if anyone listening knows, message me on LinkedIn), how much the vulnerability consideration has been baked into AI interfaces. You talked about the work you’re doing at Wharton on human-centred design, and I’ve never heard AI say that it got something wrong, or seen it say, “That’s probably not one of my strongest areas, but here’s how I would start to build some rationale around it.” You hear humans do that all the time, and it pulls you in.
Justine Whitaker: And I think, Luke, that comes back to my remark about crew, not passengers, because when I’m working with my LLM, for example, I will say, “Give me a confidence rating on how certain you are that this is correct,” particularly where it’s important. “Give me your source. Validate your source.” There’s a point at which you can be a passenger and go, “Look, I really hope it’s going to get this right for me.” And there’s a point where you can be crew and interrogate it: trust it, but validate or verify. And you can switch between LLMs and have one run off against another.
I often do that, you know, put one data set into one LLM and another into a second, and see what you get. So, to your earlier point, still utilise your critical thinking and ask, “Does this feel right?” And if it doesn’t feel right, or you don’t know what to do, you can always ask the AI, “What do I do next? What would a leading strategist in this area do? What would a risk-averse person do? What would a Simon Sinek, a Barack Obama, whatever your persona of choice, do?” But I do completely agree with you. I think trust is a really interesting one, and we have a choice as to whether we become crew or remain passengers who go, “It’s a black box. I’m out of my depth.”
The Evolving Role of the CPO
Luke Fisher: Yeah, yeah, yeah, it’s very interesting. Let’s talk just a little bit about hype versus reality. I don’t get through a podcast without it coming up, I don’t go to an event without somebody mentioning it, and I don’t think we’ve seen an RFP for a while now that doesn’t ask, “What are you doing in terms of AI?” I’d love to get your thoughts. When you look at the role of the Chief People Officer and say, “Here are all of my jobs to be done,” where’s the hype sitting, and what deserves more attention, given the breadth of a modern CPO?
Justine Whitaker: Yeah, it might not surprise you, Luke, that I would go in an oblique direction with the question. I would first be looking at the role of the CPO itself, and in an ideal, futuristic world I would be looking to see if we could create something that is a hybridisation of your CPO and your CTO or CIO, your chief technology or chief information officer. I really think that nobody does themselves any service by staying in their lane and their silo, and the future really does belong to those who have enough humility, and let’s be frank, enough common sense, to actually go, “If we are thinking about strategic workforce planning, that is not the realm of just the CPO.” That is very much the realm of the CTO and the CPO, and maybe the COO as well, maybe even the chief risk officer. So I think one of the under-examined areas is how we think about the structure of the people function and its vertically integrated elements, particularly technology, and get over this question of “Well, who owns it?” The reality is we all own it. We all have a stake in AI, and I think you could capture that in the way you structure your organisational functions: not necessarily for the workforce you’ve got now, but for the workforce and the way of working you want in the future.
Luke Fisher: Okay, interesting. And if I just bring it a little bit closer to home, if you are now speaking to a CPO, taking all of what you’ve learned and your great career that you’ve had, and you say, “AI in your role, in your world,” sounds like from what you’ve said, “is more like a collaborator than a tool.” How did you get to that point?
Justine Whitaker: Yeah. So, if I think about my relationship with AI and what that would mean for the way I would structure my function and my own role, there are pieces it’s great at because it will automate them: the mind-numbing meeting transcripts and all that. AI can do that; good luck to it, it does it better than I ever would. Let it automate what it is absolutely brilliant at. Then, on the other side, I think about how it would augment rather than replace me. What does it do well? What do I do well? And then how do we work, with a human-in-the-loop kind of ethos, to get the best out of the tech and the best out of the human and the capabilities we bring that the tech doesn’t have yet, and probably never will to the level that people do. So I feel really optimistic about the future. The reality is, if it’s under-hyped, then we are well prepared, but it is entirely possible that this whole thing will go faster than anyone can possibly predict. So I don’t think there’s any harm in actively thinking about this and exploring it. A very good friend of mine, Zoe Walters, talks about spending at least 15 minutes religiously every day exploring, thinking, playing, experimenting.
Future-Proofing with Small Daily Habits
Luke Fisher: Yeah, yeah, yeah. I wonder, this might be a tough question to answer, but let’s go with like a two-year window. What do you reckon the day-to-day, week-to-week of a Chief People Officer looks like?
Justine Whitaker: I would love to see Chief People Officers working in a much more multidisciplinary way: spending more time with finance and tech and risk and customer, and working on shared, integrated, real-time data sets. If AI and the tech can deliver us integrated, real-time data sets, that also gives us more time for pivoting, for sentiment analysis, and for the face-to-face. So yeah, I think good CPOs will have really good data. They’ll also have a great ability to link quantitative and qualitative data and extrapolate the trends, so they can look back historically but also work predictively and anticipate what’s going to happen next. I also think it’s a great opportunity to give our people more and more autonomy, which I think the tech will enable, and more and more mastery, which the tech will also give us a great opportunity to build. So I would love to see the CPO role change quite materially, so we can leverage the automation and spend more time augmenting our intelligence to do the more predictive work that is ultimately good, you know, for our customers, for our stakeholders, for our shareholders, and of course for all of our people.
Luke Fisher: Before I come to the employee, because you raise a really good point about empowering them and providing them with autonomy, I want to run an exercise with you relating to your AI companion in this two-year picture. I’d love you to complete the sentence, “What if AI did [X] for the CPO?” What do you think you’d then be spending your time on?
Justine Whitaker: What if AI did? I mean, at the moment AI doesn’t pick up my sentiment. It doesn’t pick up my galvanic skin response or my heart beating fast or any of those kinds of things, and it would be great to integrate wearables with the tech so that I could give it another data set to help it help me. Now, I know that’s a big leap, but I think it could be really, really valuable on lots of fronts. Where I find the tech helps me is in really challenging my thinking: “What have I missed? Is there a different perspective? How do we make this more strategic? What do I need to do to learn more?” If the tech can strengthen that, that would be absolutely wonderful. And to get my hands on as much high-quality data and information as possible would be fantastic; I would use it to scenario-plan, run multiple possible futures, and then plan and strategise. It would help me reverse-engineer and prepare more robustly for a future that nobody can anticipate. So if it could help me do that, absolutely, that would be a thrill. I think it would be a real game-changer for what we do in our field at the moment.
Luke Fisher: Fascinating. So I was in Las Vegas last week for UKG Aspire. They had two things that are directly related to what you’ve just said. One is the integration of wearables so that you can start to understand heart rate and how that intersects against your workforce management data. So if somebody’s starting to see a stress spike, for example, you might want to recommend breaks. They’re tracking that via wearables, integrating it with the workforce planning data, and then making recommendations to support you. Basically, “Hey, this person’s showing signs of stress. Why don’t you look to encourage an earlier break? Here’s how you can rework your day,” et cetera.
Justine Whitaker: Brilliant.
Luke Fisher: And then the second one was all around predictive analytics and recommended actions. Based on multiple kind of “if this, then that” scenarios, and the ability to bridge shared context across the different business areas, the system recommends certain actions to be completed by the manager. That manager empowerment piece was a massive step in the right direction, and it’s really funny that you’ve basically listed the same two of the four things they talked about in their “what if” exercise last week. So, cool.
Justine Whitaker: Great, Luke, and I’m delighted to hear there’s a level of alignment. And if we can get to a point where we have enough humility and enough openness to learn and unlearn, and begin to think about things differently, I think it can be really good for all of us in terms of some of those insights. But no one said it’s going to be easy, but nothing is or was, and it’s not going to get any slower. It’s only going to get faster. That’s another kind of immutable truth. It’s just going to get faster. So, anything we can do to help ourselves adapt, to learn, have a better quality of life, to be thinking about the things that matter, to be spending time with the people that matter to us, doing the work that really matters, meaningful and purposeful, is a good thing.
Luke Fisher: Agreed. Last couple of questions before we jump into our lightning round. We touched a little bit on how AI is dominating most boardrooms, with a lot of senior people trying to understand it, but have you got any great use cases of the average employee or manager being well empowered using AI? Or is the adoption not there yet?
Justine Whitaker: Yeah, so one of the things I’ve seen very recently, which I thought was fabulous: if you’ve ever created a skills infrastructure or skills framework, or even tried to customise a universal one, like the SFIA framework, the Skills Framework for the Information Age, it can take a year to customise. And by the time you’ve done that, you need a lie-down, and all you’ve got is a fancy list, you know? Then you’ve actually got to go through and map what you’ve got, and there are always issues around validation of skills: breadth, depth, and veracity. I came across a tool at the end of last week that can do all of that pretty much instantaneously. You can upload your role profiles, it maps them to a universal framework, which you can then customise; it pushes back revised role profiles and revised clusters of skills to make new jobs, and then runs a variety of risk analyses on it. It can do that full analysis in probably a little under five minutes. It took me, in my job with a great team, using an internationally established framework, a bit over 12 months. So I think there are some remarkable things out there.
Luke Fisher: Very, very cool. And if most of what we’ve talked about is transformation over the next couple of years, I’m a big fan of marginal changes and how you can start to get traction pretty quickly. If someone’s listening or watching and saying, “Okay, I’m in, but I need to start small,” then, given the time you’ve spent getting your teeth stuck into this, what initiatives would you start on from Monday to begin shifting either their prospects or their culture with AI?
Justine Whitaker: Yeah, I would definitely carve out a minimum of 15 minutes and make it non-negotiable. I am absolutely, completely convinced you cannot not do it; this is the best thing you can do to future-proof yourself. So: 15 minutes, non-negotiable. Then there are a number of different directions you can go. There are always good thought leaders on LinkedIn you can follow. You can ask ChatGPT or Claude, or whichever assistant you’re working with, “How do I learn about this?” I’ve taught my ChatGPT to give me an AI prompt every day to help improve my prompt engineering, so you can ask the AI to help you create a learning programme. If you’re a more formal learner, there are really great free programmes available from OpenAI or Google, for example, so look at what’s free and available. The other thing I’ve found really useful is to find people who are already doing it and ask them what they use and what they’re playing with: that kind of vicarious learning. So I think we can learn from others, and the biggest piece of advice I could give is, regardless of what you do, do something.
Luke Fisher: Agreed. Completely agree. A perfect answer. I think the only definitive advice is: don’t do nothing, do something related to AI every day without fail.