In many organisations, culture is talked about like weather: something that just happens. It drifts in, it shifts, it becomes something you either tolerate or complain about. But as Justine Whitaker puts it in our Culture in Action podcast conversation, “You get the culture that you choose to build.”
Culture is not a backdrop. It’s not a side project. And it certainly isn’t something that can be fixed with a one-off initiative. It’s the ongoing, everyday way an organisation behaves, makes decisions, solves problems, and treats its people. And the organisations that thrive are the ones that take that seriously.
In this conversation, Justine Whitaker pulls back the curtain on what actually builds culture in practice, not just in theory, offering a clear, strategic playbook for People leaders navigating the AI transformation.
The Imperative: Adopt the ‘Crew, Not Passengers’ Mindset
Justine’s core philosophy for navigating the AI era is one of active ownership. She argues that passive acceptance of technology is a significant risk, especially for people leaders. The speed and scope of AI integration require professionals to stop being mere passengers and instead become crew who demand accountability and steer the conversation.
Shifting the Relationship with Inorganic Intelligence
The ‘crew’ mindset goes beyond simply using the tools; it’s about defining a philosophical and ethical relationship with “inorganic intelligence.” Justine highlights that AI is going to be “omnipresent” and its pace of change “will only get faster.” This reality necessitates proactive engagement rather than waiting for the technology to be perfectly governed.
Define Your Role: Are you sitting back, hoping the technology works, or are you actively engaging with it? Justine stresses that every professional must choose their stance, because the cost of inaction is too high.
Demand Accountability: Being “crew” means actively interrogating AI providers and the systems themselves to ensure accountability, transparency, and robust ethical and governance frameworks.
Counter Autopilot: Luke noted the danger of humans falling into autopilot and losing original thought due to over-reliance on smart AI. Justine’s message counters this by stressing the need for continued critical thinking and verification. Allowing AI to automate the structure of work must not lead to the automation of critical judgement.
AI’s Highest Value: Breaking Organisational Silos
For the People function, Justine sees AI’s most transformative application as its ability to demolish traditional silos and aggregate disparate data sources.
Linking People Strategy to Financial Outcomes
Historically, the People function has struggled to directly quantify its value to the P&L (Profit and Loss). AI changes this by enabling the measurement of Return on Investment (ROI) in real-time.
Integrated Insight: She was most impressed by applications that combine data from traditionally separate silos, such as Customer Experience (CX) and People Experience (PX).
Demonstrate Business Impact: This cross-functional data synthesis allows the People function to move beyond being a cost centre. By linking employee sentiment or training effectiveness to customer satisfaction scores and revenue data, AI proves the tool’s utility and enables a level of strategic justification previously unattainable. It also fosters a multidisciplinary approach to solving complex business problems.
“The things that have had the most impact on me are those applications of AI which break the silos down.”
The Trust Challenge: Overcoming Psychological Biases
Building trust is the single biggest psychological barrier to mass adoption, and Justine emphasises that this is a psychological challenge, not a purely technical one.
Understanding Algorithmic Aversion
The human response to technology mistakes is governed by algorithmic aversion.
The Bias: Humans are more forgiving of human error because we acknowledge intent, effort, and the capacity for fatigue. When an AI “hallucinates” or makes a mistake, trust is lost quickly because the underlying process is often perceived as a “black box” that users cannot control or influence.
“We are more willing to let a human off for a mistake than we are a machine.”
To counter the “black box” and foster a trusting relationship (the ‘crew’ approach), users must treat AI as a colleague whose work requires verification. Justine advises actively interrogating the AI:
- Asking for a confidence rating on its certainty.
- Demanding the source of its information and requesting validation.
- Utilising multiple LLMs to cross-validate answers and challenge initial assumptions.
This intentional practice keeps the user’s critical thinking engaged, maintaining the human-in-the-loop and avoiding the pitfall of blind dependency. Crucially, adoption should begin with “a problem that really matters”—a high-value pressure point—so that an immediate, undeniable moment of value (an ‘aha’ moment) generates initial trust.
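The verification habits above can be sketched in code. This is a minimal, illustrative Python sketch, not anything from the conversation: the “models” are hypothetical stand-in callables (a real workflow would call actual LLM APIs), and the threshold and escalation rule are assumptions for illustration.

```python
# Sketch of the "treat AI as a colleague whose work requires verification" habit:
# query several models, ask each for a self-reported confidence rating, and
# escalate to a human whenever they disagree or any model is unsure.
# The models here are hypothetical stand-ins, not a real LLM API.

def cross_validate(question, models, min_confidence=0.7):
    """Collect (answer, confidence) pairs and decide whether a human must review."""
    results = [model(question) for model in models]
    answers = {answer for answer, _ in results}
    confident = all(conf >= min_confidence for _, conf in results)
    # Human-in-the-loop rule: disagreement or low confidence triggers review.
    needs_review = len(answers) > 1 or not confident
    return results, needs_review

# Stand-in "models" returning (answer, self-reported confidence).
model_a = lambda q: ("42", 0.9)
model_b = lambda q: ("42", 0.6)  # agrees, but with low confidence

results, needs_review = cross_validate("What is the answer?", [model_a, model_b])
print(needs_review)  # prints True: low confidence keeps the human in the loop
```

The point of the sketch is the escalation rule, not the plumbing: agreement alone is not enough, and the human only steps back when the sources both agree and are confident.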
The Future of the People Function: Augmentation and Hybridisation
Justine predicts a material change in the leadership structure and day-to-day work of the People function within the next two years, driven by AI’s capabilities.
She argues that the Chief People Officer (CPO) role must hybridise with the CTO or CIO function. Strategic workforce planning, risk management, and predictive capability are no longer independent domains.
“I would love to see Chief People Officers working in a much more multidisciplinary way… [with] shared, integrated, real-time data sets.”
This unified leadership structure, which acknowledges that “we all own it,” allows the organisation to leverage integrated data for complex decision-making, such as predicting skill gaps or forecasting market changes.
While AI can automate routine, “mind-numbing” tasks (like meeting transcripts), the CPO’s time will be freed up for augmentation—using AI to enhance their intelligence for higher-level work.
Focus on Predictive Work: The CPO will spend more time on multidisciplinary work, leveraging integrated real-time data to link quantitative and qualitative data, run scenario planning, and engage in predictive strategising. This helps them “prepare more robustly for a future that nobody can anticipate.”
For individuals feeling the pressure of accelerating change, Justine offers a simple, powerful, and practical personal strategy.
The 15-Minute Rule
Dedicate a minimum of 15 minutes every day, without fail, to exploring, experimenting, and playing with AI. This is a non-negotiable step for future-proofing your career.
“I would definitely carve out a minimum of 15 minutes and make it non-negotiable… This is the best thing you can do to future-proof yourself.”
How to Use the Time: This small, continuous habit ensures that every employee builds practical literacy. If you don’t know where to start, Justine suggests asking the AI (e.g., ChatGPT) itself to create a learning program for you or provide a daily prompt to improve your own prompt engineering skills. You can also engage in vicarious learning by asking colleagues what tools they are using and how.
The Agility Advantage: The power of this technology is highlighted by the fact that creating a complete, customised skills framework—a task that historically took a large team over 12 months—can now be completed by an AI tool in under five minutes. The future belongs to those who can leverage this speed, which starts with that daily 15-minute commitment.
The Ultimate Human Advantage
As AI becomes smarter, human qualities become more valuable. Justine advises actively working on emotional intelligence and relational intelligence—innately human strengths that AI struggles to emulate.
“I really am working on my emotional intelligence really actively… that is an innately human strength that I think any AI or even any super intelligence is going to really struggle to emulate.”
By doubling down on human connection, diversity, and empathy, leaders can ensure that as technology augments what we do, the fundamental human element remains the guiding principle of how and why we work. By leading as ‘crew’—active, accountable, and open to continuous learning—we ensure that technology serves our people strategy, and not the other way around.

